In philosophy, absolute theory (or absolutism) usually refers to a theory based on concepts (such as the concept of space) that exist independently of other concepts and objects. The absolute point of view was advocated in physics by Isaac Newton. It is one of the traditional views of space along with relational theory and the Kantian theory.
== Overview ==
According to the absolute theory of space, space is a homogeneous structure that exists independently of other things. The Newtonian arguments for this theory, particularly those concerned with the ontological status of space and time, tied absolute space and absolute time to the existence of God. The universe was held to be finite in extent and to have begun in time. Additionally, space was taken to exist prior to the body or matter that occupies it, with the universe, as a finite object, situated within it.
The theory was also promoted by Newton's followers including Samuel Clarke and Roger Cotes during the 17th and 18th centuries.
== Related theories ==
An absolute theory is the opposite of a relational theory. Gottfried Wilhelm Leibniz, the main proponent of relational theory, argued that there is no absolute space or time. He maintained that space is neither independent of the matter that occupies it nor a container for it, explaining that physical objects and forces are ordered spatially and that space is merely a system of relations. According to the relational theory, without objects there is no space.
Martin Heidegger's theory of space also opposes the absolute theory, with the criticism that it is founded on a metaphysical dichotomy between a separated subject and object. He maintained that this dualism keeps the absolute theory from explaining the true nature of space.
== References == | Wikipedia/Absolute_theory |
Relationalism is any theoretical position that gives importance to the relational nature of things. For relationalism, things exist and function only as relational entities.
== Relationalism (philosophical theory) ==
Relationalism, in the broadest sense, applies to any system of thought that gives importance to the relational nature of reality. In its narrower and more philosophically restricted sense, as propounded by the Indian philosopher Joseph Kaipayil and others, relationalism refers to the theory of reality that interprets the existence, nature, and meaning of things in terms of their relationality or relatedness. In the relationalist view, things are neither self-standing entities nor vague events but relational particulars. Particulars are inherently relational, as they are ontologically open to other particulars in their constitution and action. Relational particulars, in the relationalist view, are the ultimate constituents of reality.
== Relationalism (theory of space and time) ==
In discussions about space and time, the name relationalism (or relationism) refers to Leibniz's relationist notion of space and time as against Newton's substantivalist views. According to Newton’s substantivalism, space and time are entities in their own right, existing independently of things. Leibniz's relationism, on the other hand, describes space and time as systems of relations that exist between objects.
More generally, in physics and philosophy, a relational theory is a framework to understand reality or a physical system in such a way that the positions and other properties of objects are only meaningful relative to other objects. In a relational spacetime theory, space does not exist unless there are objects in it; nor does time exist without events. The relational view proposes that space is contained in objects and that an object represents within itself relationships to other objects. Space can be defined through the relations among the objects that it contains, considering how those relations vary over time. This is an alternative to an absolute theory, in which space exists independently of any objects that can be immersed in it.
The relational point of view was advocated in physics by Gottfried Wilhelm Leibniz and Ernst Mach (in his Mach's principle). It was rejected by Isaac Newton in his successful description of classical physics. Although Albert Einstein was impressed by Mach's principle, he did not fully incorporate it into his general theory of relativity. Several attempts have been made to formulate a full Machian theory, but most physicists think that none have so far succeeded. For example, see Brans–Dicke theory.
Relational quantum mechanics and a relational approach to quantum physics have been independently developed, in analogy with Einstein's special relativity of space and time. Relationist physicists such as John Baez and Carlo Rovelli have criticised the leading unified theory of gravity and quantum mechanics, string theory, for retaining absolute space. Some prefer a developing theory of gravity, loop quantum gravity, for its 'backgroundlessness'.
== Relationalism (colour theory) ==
Relationalism in colour theory, as defended by Jonathan Cohen and others, means the view that colours of an object are constituted partly in terms of relations with the perceiver. An anti-relationalist view about colour, on the other hand, would insist that colours are intrinsic, perceiver-independent properties of objects.
== Relationalism (sociological theory) ==
In relational sociology, relationalism is often contrasted with substantivalism. While substantivalism (also called substantialism) tends to view individuals as self-subsistent entities capable of social interaction, relationalism emphasizes human social practices and the individual's transactional contexts and reciprocal relations. Georg Simmel was methodologically a relationalist, because he was more interested in the interactions among individuals than in the substantial qualities of individuals.
== References ==
== External links ==
"Time/What Does Science Require of Time?". Internet Encyclopaedia of Philosophy. | Wikipedia/Relational_theory |
In philosophy, theophysics is an approach to cosmology that attempts to reconcile physical cosmology and religious cosmology. It is related to physicotheology, the difference between them being that the aim of physicotheology is to derive theology from physics, whereas that of theophysics is to unify physics and theology.
== Usage ==
Paul Richard Blum (2002) uses the term in a critique of physicotheology, i.e. the view that arguments for the existence of God can be derived from the existence of the physical world (e.g. the "argument from design"). Theophysics would be the opposite approach, i.e. an approach to the material world informed by the knowledge that it is created by God.
Richard H. Popkin (1990) applies the term to the "spiritual physics" of Cambridge Platonist Henry More and his pupil and collaborator Lady Anne Conway, who enthusiastically accepted the new science, but rejected the various forms of materialist mechanism proposed by Descartes, Hobbes and Spinoza to buttress it, as these, More and Conway argued, were incapable of explaining productive causality. Instead, More and Conway offered what Popkin calls "a genuine important alternative to modern mechanistic thought", "a thoroughly scientific view with a metaphysics of spirits to make everything operate". Materialist mechanism triumphed, however, and today their spiritual cosmology, as Popkin notes, "looks very odd indeed".
The term has been applied by some philosophers to the system of Emanuel Swedenborg. William Denovan (1889) wrote in Mind: "The highest stage of his revelation might be denominated Theophysics, or the science of Divine purpose in creation." R. M. Wenley (1910) referred to Swedenborg as "the Swedish theophysicist".
Pierre Laberge (1972) observes that Kant's famous critique of physicotheology in the Critique of Pure Reason (1781; second edition 1787) has tended to obscure the fact that in his early work, General History of Nature and Theory of the Heavens (1755), Kant defended a physicotheology that at the time was startlingly original, but that succeeded only to the extent that it concealed what Laberge terms a theophysics ("ce que nous appellerons une théophysique").
Theophysics is a fundamental concept in the thought of Raimon Panikkar, who wrote in Ontonomía de la ciencia (1961) that he was looking for "a theological vision of Science that is not a Metaphysics, but a Theophysics.... It is not a matter of a Physics 'of God', but rather of the 'God of the Physical'; of God the creator of the world... not the world as autonomous being, independent and disconnected from God, but rather ontonomically linked to Him". As a vision of "Science as theology", it became central to Panikkar's "cosmotheandric" view of reality.
Frank J. Tipler's Omega Point theory (1994), which identifies concepts from physical cosmology with theistic concepts, is sometimes referred to by the term, although not by Tipler himself. Tipler was an atheist when he wrote The Anthropic Cosmological Principle (1986, co-authored with John D. Barrow, whose many popular books seldom mention theology) and The Physics of Immortality (1994), but a Christian when he wrote The Physics of Christianity (2007). In 1989, Wolfhart Pannenberg, a liberal theologian in the continental Protestant tradition, welcomed Tipler's work on cosmology as raising "the prospect of a rapprochement between physics and theology in the area of eschatology". In subsequent essays, while not concurring with all the details of Tipler's discussion, Pannenberg has defended the theology of the Omega Point.
== See also ==
Anthropic principle
Fine-tuned universe
List of science and religion scholars
Multiverse
Natural theology
Omega Point
Tipler's Omega Point
Ultimate fate of the universe
Zygon: Journal of Religion and Science
== References ==
== Further reading ==
John D. Barrow and Frank J. Tipler, Foreword by John A. Wheeler, 1986. The Anthropic Cosmological Principle. Oxford University Press. ISBN 0-19-851949-4.
William Lane Craig and Quentin Smith, 1993. Theism, Atheism, and Big Bang Cosmology. Oxford Univ. Press.
William Dembski, 1998. The Design Inference. Cambridge Univ. Press.
David Deutsch, 1997. The Fabric of Reality. New York: Allen Lane. ISBN 0-7139-9061-9. Extracts from Chapter 14: "The Ends of the Universe," with additional comments by Frank J. Tipler.
Arthur Eddington, 1930. Why I Believe in God: Science and Religion, as a Scientist Sees It.
George Ellis and Nancey Murphy, 1996. On the Moral Nature of the Universe: Theology, Cosmology, and Ethics. Augsburg Fortress Publishers. ISBN 0-8006-2983-3
Henry Margenau, 1992. Cosmos, Bios, Theos: Scientists Reflect on Science, God, and the Origins of the Universe, Life, and Homo sapiens. Open Court.
E. A. Milne, 1952. Modern Cosmology and the Christian Idea of God. Oxford Univ. Press.
Arthur Peacocke, 1979. Creation and the World of Science.
John Polkinghorne, 1994. The Faith of a Physicist. Princeton Univ. Press.
---------, 1998. Science and Theology. ISBN 0-281-05176-3.
---------, 2000. Faith, Science and Understanding. Yale University Press. ISBN 0-300-08372-6; ISBN 978-0-300-09128-1.
Lawrence Poole, 2003. SELF-Empowerment. IQ Press. ISBN 2-922417-45-X.
Saunders, Nicholas, 2002. Divine Action and Modern Science. Cambridge Univ. Press.
Russell Stannard, 1999. The God Experiment. Faber. The 1987–88 Gifford lectures.
Richard Swinburne, 2004 (1979). The Existence of God.
Frank J. Tipler, 1994. The Physics of Immortality: Modern Cosmology, God and the Resurrection of the Dead. Doubleday. ISBN 978-0385467995.
--------, 2007. The Physics of Christianity. Doubleday. ISBN 0-385-51424-7.
Charles Hard Townes, 1966, "The Convergence of Science and Religion," Think.
Simon Sam Gutierrez, 1991, The Solomon Formula insaecula saeculorum: A Theophysical Find, TXu000559229
== External links ==
Theophysics. A website mainly about Tipler's Omega Point Theory, with links to short nontechnical articles mostly by Tipler, but also some by Deutsch and Pannenberg.
entertheophysics. A website containing the 12 principles of Theophysics as explained by the author, training consultant and conference speaker Lawrence Poole. Poole also relates several applications of Theophysics, including a "unified field formula". | Wikipedia/Theophysics |
Physics Physique Физика, also known as various punctuations of Physics, Physique, Fizika, and as Physics for short, was a scientific journal published from 1964 through 1968. Founded by Philip Warren Anderson and Bernd T. Matthias, who were inspired by wide-circulation literary magazines like Harper's, the journal's original goal was to print papers of interest to scientists in all branches of physics. It is best known for publishing John Stewart Bell's paper on the result now known as Bell's theorem. Failing to attract sufficient interest as an unspecialized journal, Physics Physique Физика soon focused on solid-state physics before folding altogether in 1968. The four volumes of this journal were eventually made freely available online by the American Physical Society.
Bell chose to publish his theorem in this journal because it did not require page charges, and at the time it in fact paid the authors who published there. Because the journal did not provide free reprints of articles for the authors to distribute, however, Bell had to spend the money he received to buy copies that he could send to other physicists. While the articles printed in the journal themselves listed the publication's name simply as Physics, the covers carried the trilingual version Physics Physique Физика to reflect that it would print articles in English, French and Russian. In 1967, the unusual title caught the attention of John Clauser, who then discovered Bell's paper and began to consider how to perform a Bell test experiment in the laboratory. Clauser and Stuart Freedman would go on to perform a Bell test experiment in 1972.
== Selected publications ==
The following are among the most highly cited articles published in the journal during its four-year time span.
Susskind, Leonard; Glogower, Jonathan (1964-07-01). "Quantum mechanical phase and time operator". Physics Physique Fizika. 1 (1): 49–61. doi:10.1103/PhysicsPhysiqueFizika.1.49.
Gell-Mann, Murray (1964-07-01). "The symmetry group of vector and axial vector currents". Physics Physique Fizika. 1 (1): 63–75. doi:10.1103/PhysicsPhysiqueFizika.1.63.
Bell, J. S. (1964-11-01). "On the Einstein Podolsky Rosen paradox". Physics Physique Fizika. 1 (3): 195–200. doi:10.1103/PhysicsPhysiqueFizika.1.195.
Kadanoff, Leo P. (1966-06-01). "Scaling laws for Ising models near Tc". Physics Physique Fizika. 2 (6): 263–272. doi:10.1103/PhysicsPhysiqueFizika.2.263.
Fisher, Michael E. (1967-10-01). "The theory of condensation and the critical point". Physics Physique Fizika. 3 (5): 255–283. doi:10.1103/PhysicsPhysiqueFizika.3.255.
== See also ==
Epistemological Letters
== References ==
== External links ==
Full text of Physics Physique физика from the American Physical Society | Wikipedia/Physics_Physique_Физика |
Vision science is the scientific study of visual perception. Researchers in vision science can be called vision scientists, especially if their research spans some of the science's many disciplines.
Vision science encompasses all studies of vision, such as how human and non-human organisms process visual information, how conscious visual perception works in humans, how to exploit visual perception for effective communication, and how artificial systems can do the same tasks. Vision science overlaps with or encompasses disciplines such as ophthalmology and optometry, neuroscience(s), psychology (particularly sensation and perception psychology, cognitive psychology, linguistics, biopsychology, psychophysics, and neuropsychology), physics (particularly optics), ethology, and computer science (particularly computer vision, artificial intelligence, and computer graphics), as well as other engineering related areas such as data visualization, user interface design, and human factors and ergonomics. Below is a list of pertinent journals and international conferences.
== Journals ==
Scientific journals exclusively or predominantly concerned with vision science include:
Acta Ophthalmologica
American Journal of Ophthalmology
Annual Review of Vision Science
Attention, Perception, & Psychophysics (previously Perception & Psychophysics)
British Journal of Ophthalmology
Clinical & Experimental Ophthalmology
Current Opinion in Ophthalmology
European Journal of Ophthalmology
Experimental Eye Research
Eye
Graefe's Archive for Clinical and Experimental Ophthalmology
Investigative Ophthalmology & Visual Science (IOVS)
JAMA Ophthalmology
Journal of Glaucoma
Journal of Neuro-Ophthalmology
Journal of Ophthalmology
Journal of the Optical Society of America
Journal of Vision
Translational Vision Science & Technology (TVST)
Ophthalmic and Physiological Optics (OPO)
Ophthalmology
Optometry and Vision Science
Perception and i-Perception
Progress in Retinal and Eye Research
Seeing and Perceiving and Spatial Vision
Survey of Ophthalmology
Vision Research (including Clinical Vision Sciences)
Optica
== Conferences ==
Association for Research in Vision and Ophthalmology (ARVO)
American Academy of Ophthalmology (AAO) Annual Meeting
American Academy of Optometry (AAOpt) Annual Meeting
European Conference on Visual Perception (ECVP)
Annual Meeting of the Vision Sciences Society (VSS)
Asia Pacific Conference on Vision (APVC)
British Congress of Optometry and Vision Science (BCOVS)
Indian Contact Lens Education Program (ICLEP) Annual Meeting
International Myopia Conference, International Myopia Institute
World Congress of Optometry (WCO)
IVI International Optometry Conference
== See also ==
== References ==
Palmer, S.E. (1999). Vision Science: Photons to Phenomenology. MIT Press. ISBN 978-0-262-16183-1.
== External links ==
VisionScience – an Internet resource for research in human and animal vision.
Visiome Platform – digital research resource archive for vision science by the Neuroinformatics Japan Center
European Conference on Visual Perception – an international scientific conference on vision science | Wikipedia/Vision_science |
Computational neuroscience (also known as theoretical neuroscience or mathematical neuroscience) is a branch of neuroscience which employs mathematics, computer science, theoretical analysis and abstractions of the brain to understand the principles that govern the development, structure, physiology and cognitive abilities of the nervous system.
Computational neuroscience employs computational simulations to validate and solve mathematical models, and so can be seen as a sub-field of theoretical neuroscience; however, the two terms are often used synonymously. The term mathematical neuroscience is also sometimes used to stress the quantitative nature of the field.
Computational neuroscience focuses on the description of biologically plausible neurons (and neural systems) and their physiology and dynamics, and it is therefore not directly concerned with biologically unrealistic models used in connectionism, control theory, cybernetics, quantitative psychology, machine learning, artificial neural networks, artificial intelligence and computational learning theory; although mutual inspiration exists and there is sometimes no strict boundary between the fields, with the degree of model abstraction in computational neuroscience depending on the research scope and the granularity at which biological entities are analyzed.
Models in theoretical neuroscience are aimed at capturing the essential features of the biological system at multiple spatial-temporal scales, from membrane currents and chemical coupling, through network oscillations, columnar and topographic architecture, and nuclei, all the way up to psychological faculties like memory, learning and behavior. These computational models frame hypotheses that can be directly tested by biological or psychological experiments.
== History ==
The term 'computational neuroscience' was introduced by Eric L. Schwartz, who organized a conference, held in 1985 in Carmel, California, at the request of the Systems Development Foundation to provide a summary of the current status of a field which until that point was referred to by a variety of names, such as neural modeling, brain theory and neural networks. The proceedings of this definitional meeting were published in 1990 as the book Computational Neuroscience. The first of the annual open international meetings focused on Computational Neuroscience was organized by James M. Bower and John Miller in San Francisco, California in 1989. The first graduate educational program in computational neuroscience was organized as the Computational and Neural Systems Ph.D. program at the California Institute of Technology in 1985.
The early historical roots of the field can be traced to the work of people including Louis Lapicque, Hodgkin & Huxley, Hubel and Wiesel, and David Marr. Lapicque introduced the integrate and fire model of the neuron in a seminal article published in 1907, a model still popular in artificial neural network studies because of its simplicity.
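To illustrate the simplicity that keeps the integrate-and-fire model in use, here is a minimal leaky integrate-and-fire sketch in Python; the parameter values and units are illustrative choices, not taken from Lapicque's paper or any particular study.

```python
import numpy as np

def simulate_lif(i_ext, dt=0.1, tau_m=10.0, v_rest=-65.0, v_reset=-65.0,
                 v_thresh=-50.0, r_m=10.0):
    """Leaky integrate-and-fire: dV/dt = (-(V - v_rest) + r_m * I) / tau_m.
    A spike is emitted and V is reset whenever V crosses v_thresh.
    Units here are ms, mV, megaohm, nA (illustrative)."""
    v = v_rest
    trace, spike_times = [], []
    for step, i_t in enumerate(i_ext):
        v += dt * (-(v - v_rest) + r_m * i_t) / tau_m   # Euler integration step
        if v >= v_thresh:
            spike_times.append(step * dt)               # record spike time
            v = v_reset                                 # reset after the spike
        trace.append(v)
    return np.array(trace), spike_times

# A constant 2 nA input for 100 ms drives regular spiking.
trace, spikes = simulate_lif(np.full(1000, 2.0))
print(f"{len(spikes)} spikes in 100 ms")
```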
About 40 years later, Hodgkin and Huxley developed the voltage clamp and created the first biophysical model of the action potential. Hubel and Wiesel discovered that neurons in the primary visual cortex, the first cortical area to process information coming from the retina, have oriented receptive fields and are organized in columns. David Marr's work focused on the interactions between neurons, suggesting computational approaches to the study of how functional groups of neurons within the hippocampus and neocortex interact, store, process, and transmit information. Computational modeling of biophysically realistic neurons and dendrites began with the work of Wilfrid Rall, with the first multicompartmental model using cable theory.
== Major topics ==
Research in computational neuroscience can be roughly categorized into several lines of inquiry. Most computational neuroscientists collaborate closely with experimentalists in analyzing novel data and synthesizing new models of biological phenomena.
=== Single-neuron modeling ===
Even a single neuron has complex biophysical characteristics and can perform computations. Hodgkin and Huxley's original model employed only two voltage-sensitive currents (voltage-sensitive ion channels are glycoprotein molecules which extend through the lipid bilayer, allowing ions to traverse the axolemma under certain conditions): the fast-acting sodium current and the inward-rectifying potassium current. Though successful in predicting the timing and qualitative features of the action potential, it nevertheless failed to predict a number of important features such as adaptation and shunting. Scientists now believe that there is a wide variety of voltage-sensitive currents, and the implications of the differing dynamics, modulations, and sensitivities of these currents are an important topic of computational neuroscience.
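In its standard form, the Hodgkin–Huxley membrane equation balances a capacitive current against the sodium, potassium and leak currents (the gating variables m, h and n follow their own first-order, voltage-dependent kinetics):

$$C_m \frac{dV}{dt} = -\bar{g}_{\mathrm{Na}}\, m^3 h\, (V - E_{\mathrm{Na}}) - \bar{g}_{\mathrm{K}}\, n^4\, (V - E_{\mathrm{K}}) - g_L\, (V - E_L) + I_{\mathrm{ext}}.$$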
The computational functions of complex dendrites are also under intense investigation. There is a large body of literature regarding how different currents interact with geometric properties of neurons.
There are many software packages, such as GENESIS and NEURON, that allow rapid and systematic in silico modeling of realistic neurons. Blue Brain, a project founded by Henry Markram from the École Polytechnique Fédérale de Lausanne, aims to construct a biophysically detailed simulation of a cortical column on the Blue Gene supercomputer.
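As a rough illustration of the kind of in-silico experiment such simulators support, the sketch below uses NEURON's Python interface to current-clamp a single soma with built-in Hodgkin–Huxley channels; the geometry and stimulus values are arbitrary examples, and API details may differ slightly between NEURON versions.

```python
from neuron import h
h.load_file("stdrun.hoc")                    # NEURON's standard run system

soma = h.Section(name="soma")                # one cylindrical compartment
soma.L = soma.diam = 20                      # length and diameter in microns
soma.insert("hh")                            # built-in Hodgkin-Huxley mechanism

stim = h.IClamp(soma(0.5))                   # current clamp at the midpoint
stim.delay, stim.dur, stim.amp = 5, 1, 0.5   # ms, ms, nA

v = h.Vector().record(soma(0.5)._ref_v)      # membrane potential trace
t = h.Vector().record(h._ref_t)              # time trace

h.finitialize(-65)                           # initialize to -65 mV
h.continuerun(40)                            # simulate 40 ms
print(f"peak membrane potential: {max(v):.1f} mV")
```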
Modeling the richness of biophysical properties on the single-neuron scale can supply mechanisms that serve as the building blocks for network dynamics. However, detailed neuron descriptions are computationally expensive and this computing cost can limit the pursuit of realistic network investigations, where many neurons need to be simulated. As a result, researchers that study large neural circuits typically represent each neuron and synapse with an artificially simple model, ignoring much of the biological detail. Hence there is a drive to produce simplified neuron models that can retain significant biological fidelity at a low computational overhead. Algorithms have been developed to produce faithful, faster running, simplified surrogate neuron models from computationally expensive, detailed neuron models.
=== Modeling Neuron-glia interactions ===
Glial cells participate significantly in the regulation of neuronal activity at both the cellular and the network level. Modeling this interaction helps clarify the potassium cycle, which is important for maintaining homeostasis and preventing epileptic seizures. Modeling also reveals the role of glial protrusions, which can in some cases penetrate the synaptic cleft, interfere with synaptic transmission, and thus control synaptic communication.
=== Development, axonal patterning, and guidance ===
Computational neuroscience aims to address a wide array of questions, including: How do axons and dendrites form during development? How do axons know where to target and how to reach these targets? How do neurons migrate to the proper position in the central and peripheral systems? How do synapses form? We know from molecular biology that distinct parts of the nervous system release distinct chemical cues, from growth factors to hormones that modulate and influence the growth and development of functional connections between neurons.
Theoretical investigations into the formation and patterning of synaptic connection and morphology are still nascent. One hypothesis that has recently garnered some attention is the minimal wiring hypothesis, which postulates that the formation of axons and dendrites effectively minimizes resource allocation while maintaining maximal information storage.
=== Sensory processing ===
Early models of sensory processing within a theoretical framework are credited to Horace Barlow. Somewhat similar to the minimal wiring hypothesis described in the preceding section, Barlow understood the processing of the early sensory systems to be a form of efficient coding, where the neurons encoded information in a way that minimized the number of spikes. Experimental and computational work have since supported this hypothesis in one form or another. For the example of visual processing, efficient coding is manifested in the forms of efficient spatial coding, color coding, temporal/motion coding, stereo coding, and combinations of them.
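One common formalization of efficient spatial coding is decorrelation, or whitening, of correlated inputs. The toy sketch below whitens synthetic correlated "pixel" responses; it is meant only to illustrate the idea of removing redundancy, not to model any specific neural circuit or dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_pixels = 5000, 16
# Synthetic stand-in for natural-image statistics: neighbouring "pixels" are correlated.
base = rng.normal(size=(n_samples, n_pixels))
signals = base + 0.8 * np.roll(base, 1, axis=1)

# Whitening (decorrelating) transform from the eigendecomposition of the covariance.
cov = np.cov(signals, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
whitener = eigvecs @ np.diag(1.0 / np.sqrt(eigvals)) @ eigvecs.T
whitened = signals @ whitener

# After whitening, responses are approximately uncorrelated with unit variance.
print(np.round(np.cov(whitened, rowvar=False), 2))
```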
Further along the visual pathway, even the efficiently coded visual information is too much for the capacity of the information bottleneck, the visual attentional bottleneck. A subsequent theory, the V1 Saliency Hypothesis (V1SH), has been developed to account for the exogenous attentional selection of a fraction of visual input for further processing, guided by a bottom-up saliency map in the primary visual cortex.
Current research in sensory processing is divided among a biophysical modeling of different subsystems and a more theoretical modeling of perception. Current models of perception have suggested that the brain performs some form of Bayesian inference and integration of different sensory information in generating our perception of the physical world.
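A textbook instance of such Bayesian integration is the reliability-weighted combination of two noisy cues with Gaussian likelihoods, where each cue is weighted by its inverse variance. The sketch below is a generic illustration and is not tied to any particular experiment.

```python
def combine_gaussian_cues(mu_a, var_a, mu_b, var_b):
    """Inverse-variance (reliability-weighted) fusion of two Gaussian cues."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)   # weight on cue A
    mu = w_a * mu_a + (1.0 - w_a) * mu_b                # combined mean
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)             # combined (reduced) variance
    return mu, var

# Example: a precise visual cue (variance 1) and a noisier haptic cue (variance 4)
# about the same quantity; the fused estimate sits closer to the visual cue.
print(combine_gaussian_cues(10.0, 1.0, 14.0, 4.0))      # -> (10.8, 0.8)
```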
=== Motor control ===
Many models of the way the brain controls movement have been developed. This includes models of processing in the brain such as the cerebellum's role in error correction, skill learning in motor cortex and the basal ganglia, and the control of the vestibulo-ocular reflex. This also includes many normative models, such as those of the Bayesian or optimal control flavor, which are built on the idea that the brain efficiently solves its problems.
=== Memory and synaptic plasticity ===
Earlier models of memory are primarily based on the postulates of Hebbian learning. Biologically relevant models such as the Hopfield net have been developed to address the properties of the associative (also known as "content-addressable") style of memory that occurs in biological systems. These attempts focus primarily on the formation of medium- and long-term memory, which is thought to localize in the hippocampus.
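A minimal sketch of a Hopfield-style associative memory, using Hebbian (outer-product) storage of binary patterns and iterative recall from a corrupted cue; the network size and patterns here are arbitrary illustrations.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product rule; patterns are rows of +/-1 values."""
    n = patterns.shape[1]
    w = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(w, 0)                              # no self-connections
    return w

def recall(w, state, steps=10):
    """Synchronous sign updates; converges to a stored pattern if the cue is close."""
    for _ in range(steps):
        state = np.where(w @ state >= 0, 1, -1)
    return state

rng = np.random.default_rng(1)
patterns = np.where(rng.random((3, 50)) > 0.5, 1, -1)   # three random 50-unit patterns
w = train_hopfield(patterns)

cue = patterns[0].copy()
cue[:10] *= -1                                          # corrupt the first 10 bits
print(np.array_equal(recall(w, cue), patterns[0]))      # usually True
```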
One of the major problems in neurophysiological memory is how it is maintained and changed through multiple time scales. Unstable synapses are easy to train but also prone to stochastic disruption. Stable synapses forget less easily, but they are also harder to consolidate. It is likely that computational tools will contribute greatly to our understanding of how synapses function and change in relation to external stimulus in the coming decades.
=== Behaviors of networks ===
Biological neurons are connected to each other in a complex, recurrent fashion. These connections are, unlike most artificial neural networks, sparse and usually specific. It is not known how information is transmitted through such sparsely connected networks, although specific areas of the brain, such as the visual cortex, are understood in some detail. It is also unknown what the computational functions of these specific connectivity patterns are, if any.
The interactions of neurons in a small network can often be reduced to simple models such as the Ising model. The statistical mechanics of such simple systems are well-characterized theoretically. Some recent evidence suggests that dynamics of arbitrary neuronal networks can be reduced to pairwise interactions. It is not known, however, whether such descriptive dynamics impart any important computational function. With the emergence of two-photon microscopy and calcium imaging, we now have powerful experimental methods with which to test the new theories regarding neuronal networks.
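In such pairwise (maximum-entropy, Ising-like) descriptions, the probability of a binary population activity pattern $\sigma = (\sigma_1, \dots, \sigma_N)$ with $\sigma_i \in \{-1, +1\}$ takes the schematic form

$$P(\sigma) = \frac{1}{Z} \exp\!\left( \sum_i h_i \sigma_i + \sum_{i<j} J_{ij}\, \sigma_i \sigma_j \right),$$

where the $h_i$ capture individual firing biases, the $J_{ij}$ capture pairwise couplings, and $Z$ is a normalizing constant.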
In some cases the complex interactions between inhibitory and excitatory neurons can be simplified using mean-field theory, which gives rise to the population model of neural networks. While many neurotheorists prefer such models with reduced complexity, others argue that uncovering structural-functional relations depends on including as much neuronal and network structure as possible. Models of this type are typically built in large simulation platforms like GENESIS or NEURON. There have been some attempts to provide unified methods that bridge and integrate these levels of complexity.
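A classic example of such a mean-field population model is the Wilson–Cowan system; in one common simplified form, the mean activities $E$ and $I$ of coupled excitatory and inhibitory populations evolve as

$$\tau_E \frac{dE}{dt} = -E + f\!\left(w_{EE}E - w_{EI}I + I_E\right), \qquad \tau_I \frac{dI}{dt} = -I + f\!\left(w_{IE}E - w_{II}I + I_I\right),$$

where $f$ is a sigmoidal gain function, the $w$ terms are coupling strengths, and $I_E$, $I_I$ are external inputs.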
=== Visual attention, identification, and categorization ===
Visual attention can be described as a set of mechanisms that limit some processing to a subset of incoming stimuli. Attentional mechanisms shape what we see and what we can act upon. They allow for concurrent selection of some (preferably, relevant) information and inhibition of other information. In order to have a more concrete specification of the mechanism underlying visual attention and the binding of features, a number of computational models have been proposed aiming to explain psychophysical findings. In general, all models postulate the existence of a saliency or priority map for registering the potentially interesting areas of the retinal input, and a gating mechanism for reducing the amount of incoming visual information, so that the limited computational resources of the brain can handle it.
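The saliency-map-plus-gating idea can be sketched in a few lines: several feature maps are normalized and combined into a single priority map, and a winner-take-all step admits only the most salient location for further processing. This toy example is a generic illustration, not an implementation of any specific published model.

```python
import numpy as np

rng = np.random.default_rng(2)
height, width = 32, 32
# Stand-ins for feature maps (e.g., local contrast, colour opponency, orientation energy).
feature_maps = [rng.random((height, width)) for _ in range(3)]
feature_maps[1][20, 12] = 3.0            # one conspicuous location in one feature dimension

# Normalize each map, then combine them; here the combination is a simple maximum.
normalized = [(m - m.mean()) / m.std() for m in feature_maps]
saliency = np.maximum.reduce(normalized)

# Winner-take-all gating: only the most salient location is passed on.
winner = np.unravel_index(np.argmax(saliency), saliency.shape)
print("attended location:", winner)      # -> (20, 12)
```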
An example theory that is being extensively tested behaviorally and physiologically is the V1 Saliency Hypothesis, according to which a bottom-up saliency map is created in the primary visual cortex to guide attention exogenously. Computational neuroscience provides a mathematical framework for studying the mechanisms involved in brain function and enables the simulation and prediction of neuropsychological syndromes.
=== Cognition, discrimination, and learning ===
Computational modeling of higher cognitive functions has only recently begun. Experimental data comes primarily from single-unit recording in primates. The frontal lobe and parietal lobe function as integrators of information from multiple sensory modalities. There are some tentative ideas regarding how simple mutually inhibitory functional circuits in these areas may carry out biologically relevant computation.
The brain seems to be able to discriminate and adapt particularly well in certain contexts. For instance, human beings seem to have an enormous capacity for memorizing and recognizing faces. One of the key goals of computational neuroscience is to dissect how biological systems carry out these complex computations efficiently and potentially replicate these processes in building intelligent machines.
The brain's large-scale organizational principles are illuminated by many fields, including biology, psychology, and clinical practice. Integrative neuroscience attempts to consolidate these observations through unified descriptive models and databases of behavioral measures and recordings. These are the bases for some quantitative modeling of large-scale brain activity.
The Computational Representational Understanding of Mind (CRUM) is another attempt at modeling human cognition through simulated processes like acquired rule-based systems in decision making and the manipulation of visual representations in decision making.
=== Consciousness ===
One of the ultimate goals of psychology/neuroscience is to be able to explain the everyday experience of conscious life. Francis Crick, Giulio Tononi and Christof Koch made some attempts to formulate consistent frameworks for future work in neural correlates of consciousness (NCC), though much of the work in this field remains speculative.
=== Computational clinical neuroscience ===
Computational clinical neuroscience is a field that brings together experts in neuroscience, neurology, psychiatry, decision sciences and computational modeling to quantitatively define and investigate problems in neurological and psychiatric diseases, and to train scientists and clinicians that wish to apply these models to diagnosis and treatment.
=== Predictive computational neuroscience ===
Predictive computational neuroscience is a recent field that combines signal processing, neuroscience, clinical data and machine learning to predict brain states during coma or anesthesia. For example, it is possible to anticipate deep brain states using the EEG signal. These states can be used to anticipate the hypnotic concentration to administer to the patient.
=== Computational Psychiatry ===
Computational psychiatry is an emerging field that brings together experts in machine learning, neuroscience, neurology, psychiatry, and psychology to provide an understanding of psychiatric disorders.
== Technology ==
=== Neuromorphic computing ===
A neuromorphic computer/chip is any device that uses physical artificial neurons (made from silicon) to do computations (see: neuromorphic computing, physical neural network). One of the advantages of using a physical model computer such as this is that it takes the computational load off the processor, in the sense that the structural and some of the functional elements don't have to be programmed, since they are implemented in hardware. In recent times, neuromorphic technology has been used to build supercomputers which are used in international neuroscience collaborations. Examples include the Human Brain Project SpiNNaker supercomputer and the BrainScaleS computer.
== References ==
== Bibliography ==
Chklovskii DB (2004). "Synaptic connectivity and neuronal morphology: two sides of the same coin". Neuron. 43 (5): 609–17. doi:10.1016/j.neuron.2004.08.012. PMID 15339643. S2CID 16217065.
Sejnowski, Terrence J.; Churchland, Patricia Smith (1992). The computational brain. Cambridge, Mass: MIT Press. ISBN 978-0-262-03188-2.
Gerstner, W.; Kistler, W.; Naud, R.; Paninski, L. (2014). Neuronal Dynamics. Cambridge, UK: Cambridge University Press. ISBN 9781107447615.
Dayan P.; Abbott, L. F. (2001). Theoretical neuroscience: computational and mathematical modeling of neural systems. Cambridge, Mass: MIT Press. ISBN 978-0-262-04199-7.
Eliasmith, Chris; Anderson, Charles H. (2003). Neural engineering: Representation, computation, and dynamics in neurobiological systems. Cambridge, Mass: MIT Press. ISBN 978-0-262-05071-5.
Hodgkin AL, Huxley AF (28 August 1952). "A quantitative description of membrane current and its application to conduction and excitation in nerve". J. Physiol. 117 (4): 500–44. doi:10.1113/jphysiol.1952.sp004764. PMC 1392413. PMID 12991237.
William Bialek; Rieke, Fred; David Warland; Rob de Ruyter van Steveninck (1999). Spikes: exploring the neural code. Cambridge, Mass: MIT. ISBN 978-0-262-68108-7.
Schutter, Erik de (2001). Computational neuroscience: realistic modeling for experimentalists. Boca Raton: CRC. ISBN 978-0-8493-2068-2.
Sejnowski, Terrence J.; Hemmen, J. L. van (2006). 23 problems in systems neuroscience. Oxford [Oxfordshire]: Oxford University Press. ISBN 978-0-19-514822-0.
Michael A. Arbib; Shun-ichi Amari; Prudence H. Arbib (2002). The Handbook of Brain Theory and Neural Networks. Cambridge, Massachusetts: The MIT Press. ISBN 978-0-262-01197-6.
Zhaoping, Li (2014). Understanding vision: theory, models, and data. Oxford, UK: Oxford University Press. ISBN 978-0199564668.
== See also ==
=== Software ===
BRIAN, a Python based simulator
Budapest Reference Connectome, web based 3D visualization tool to browse connections in the human brain
Emergent, neural simulation software.
GENESIS, a general neural simulation system.
NEST is a simulator for spiking neural network models that focuses on the dynamics, size and structure of neural systems rather than on the exact morphology of individual neurons.
== External links ==
=== Journals ===
Journal of Mathematical Neuroscience
Journal of Computational Neuroscience
Neural Computation
Cognitive Neurodynamics
Frontiers in Computational Neuroscience
PLoS Computational Biology
Frontiers in Neuroinformatics
=== Conferences ===
Computational and Systems Neuroscience (COSYNE) – a computational neuroscience meeting with a systems neuroscience focus.
Annual Computational Neuroscience Meeting (CNS) – a yearly computational neuroscience meeting.
Neural Information Processing Systems (NIPS) – a leading annual conference covering mostly machine learning.
Cognitive Computational Neuroscience (CCN) – a computational neuroscience meeting focusing on computational models capable of cognitive tasks.
International Conference on Cognitive Neurodynamics (ICCN) – a yearly conference.
UK Mathematical Neurosciences Meeting – a yearly conference, focused on mathematical aspects.
Bernstein Conference on Computational Neuroscience (BCCN) – a yearly computational neuroscience conference.
AREADNE Conferences – a biennial meeting that includes theoretical and experimental results.
=== Websites ===
Encyclopedia of Computational Neuroscience, part of Scholarpedia, an online expert curated encyclopedia on computational neuroscience and dynamical systems | Wikipedia/Mathematical_neuroscience |
Terror management theory (TMT) is both a social and evolutionary psychology theory originally proposed by Jeff Greenberg, Sheldon Solomon, and Tom Pyszczynski and codified in their book The Worm at the Core: On the Role of Death in Life (2015). It proposes that a basic psychological conflict results from having a self-preservation instinct while realizing that death is inevitable and to some extent unpredictable. This conflict produces terror, which is managed through escapism and cultural beliefs that counter biological reality with more significant and enduring forms of meaning and value—basically countering the personal insignificance represented by death with the significance provided by symbolic culture.
The most obvious examples of cultural values that assuage death anxiety are those that purport to offer literal immortality (e.g. belief in the afterlife through religion). However, TMT also argues that other cultural values – including those that are seemingly unrelated to death – offer symbolic immortality. For example, values of national identity, posterity, cultural perspectives on sex, and human superiority over animals have been linked to calming death concerns. In many cases these values are thought to offer symbolic immortality, by either a) providing the sense that one is part of something greater that will ultimately outlive the individual (e.g. country, lineage, species), or b) making one's symbolic identity superior to biological nature (i.e. one is a personality, which makes one more than a glob of cells).
Because cultural values influence what is meaningful, they are foundational for self-esteem. TMT describes self-esteem as being the personal, subjective measure of how well an individual is living up to their cultural values.
Terror management theory was developed by social psychologists Greenberg, Solomon, and Pyszczynski. However, the idea of TMT originated from anthropologist Ernest Becker's 1973 Pulitzer Prize-winning work of nonfiction The Denial of Death. Becker argues that most human action is taken to ignore or avoid the inevitability of death. The terror of absolute annihilation creates such a profound – albeit subconscious – anxiety in people that they spend their lives attempting to make sense of it. On large scales, societies build symbols: laws, religious meanings, cultures, and belief systems to explain the significance of life, define what makes certain characteristics, skills, and talents extraordinary, reward those whom they find to exemplify certain attributes, and punish or kill others who do not adhere to their cultural worldview. Adherence to these created "symbols" aids in relieving stresses associated with the reality of mortality. On an individual level, self-esteem provides a buffer against death-related anxiety.
== Background ==
In the 1st century CE, Statius in his Thebaid suggested that "fear first made gods in the world".
Cultural anthropologist Ernest Becker asserted in his 1973 book The Denial of Death that humans, as intelligent animals, are able to grasp the inevitability of death. They therefore spend their lives building and believing in cultural elements that illustrate how to make themselves stand out as individuals and to give their lives significance and meaning. Death creates an anxiety in humans; it strikes at unexpected and random moments, and its nature is essentially unknowable, causing people to spend most of their time and energy to explain, forestall, and avoid it.
Becker expounded upon the previous writings of Sigmund Freud, Søren Kierkegaard, Norman O. Brown, and Otto Rank. According to clinical psychiatrist Morton Levitt, Becker replaces the Freudian preoccupation with sexuality with the fear of death as the primary motivation in human behavior.
People desire to think of themselves as beings of value and worth with a feeling of permanence, a concept in psychology known as self-esteem. This feeling counters the cognitive dissonance created by an individual's realization that they may be no more important than any other living thing. Becker refers to high self-esteem as heroism: the problem of heroics is the central one of human life; it goes deeper into human nature than anything else because it is based on organismic narcissism and on the child's need for self-esteem as the condition for his life. Society itself is a codified hero system, which means that society everywhere is a living myth of the significance of human life, a defiant creation of meaning.
The rationale behind decisions regarding one's own health can be explored through a terror-management model. A 2008 research article in Psychological Review proposes a three-part model for understanding how awareness of death can ironically subvert health-promoting behaviors by redirecting one's focus towards behaviors that build self-esteem instead:
Proposition 1 suggests that conscious thoughts about death can instigate health-oriented responses aimed at removing death-related thoughts from current focal attention. Proposition 2 suggests that the unconscious resonance of death-related cognition promotes self-oriented defenses directed toward maintaining, not one's health, but a sense of meaning and self-esteem. The last proposition suggests that confrontations with the physical body may undermine symbolic defenses and thus present a previously unrecognized barrier to health promotion activities.
=== Evolutionary backdrop ===
Terror-management theorists regard TMT as compatible with the theory of evolution: Valid fears of dangerous things have an adaptive function that helped facilitate the survival of our ancestors' genes. However, generalized existential anxiety resulting from the clash between a desire for life and awareness of the inevitability of death is neither adaptive nor selected for. TMT views existential anxiety as an unfortunate byproduct of these two highly adaptive human proclivities rather than as an adaptation that the evolutionary process selected for its advantages. Just as human bipedalism confers advantages as well as disadvantages, death anxiety is an inevitable part of our intelligence and awareness of dangers.
Anxiety in response to the inevitability of death threatened to undermine adaptive functioning and therefore needed amelioration. TMT posits that humankind used the same intellectual capacities that gave rise to this problem to fashion cultural beliefs and values that provided protection against this potential anxiety. TMT suggests these cultural beliefs (even unpleasant and frightening ones, such as ritual human sacrifice) manage potential death-anxiety in a way that promotes beliefs and behaviors which facilitated the functioning and survival of the collective.
Hunter-gatherers used their emerging cognitive abilities to facilitate solving practical problems, such as basic needs for nutrition, mating, and tool-making. As these abilities evolved, an explicit awareness of death also emerged. But once this awareness materialized, the potential for terror that it created put pressure on emerging conceptions of reality. Any conceptual formation that was to be widely accepted by the group needed to provide a means of managing this terror.
Originally, morality evolved to facilitate co-existence within groups. Together with language, morality served pragmatic functions that extended survival. The struggle to deny the finality of death co-opted and changed the function of these cultural inventions. For example, Neanderthals might have begun burying their dead as a means of avoiding unpleasant odors, disease-infested parasites, or dangerous scavengers. But during the Upper Paleolithic era, these pragmatic burial practices appear to have become imbued with layers of ritual performance and supernatural beliefs, suggested by the elaborate decoration of bodies with thousands of beads or other markers. Food and other necessities were also included within the burial chamber, indicating the potential for a belief system that included life after death. Many human cultures today treat funerals primarily as cultural events, viewed through the lens of morality and language, with little thought given to the utilitarian origins of burying the dead.
Evolutionary history also indicates that "the costs of ignoring threats have outweighed the costs of ignoring opportunities for self-development." This reinforces the concept that abstract needs for individual and group self-esteem may continue to be selected for by evolution, even when they sometimes confer risks to physical health and well-being.
== Self-esteem ==
Self-esteem lies at the heart of TMT and is a fundamental aspect of its core paradigms. TMT fundamentally seeks to elucidate the causes and consequences of a need for self-esteem. Theoretically, it draws heavily from Ernest Becker's conceptions of culture and self-esteem. TMT not only attempts to explain the concept of self-esteem, it also tries to explain why we need self-esteem. One explanation is that self-esteem is used as a coping mechanism for anxiety. It helps people control their sense of terror and nullify the realization that humans are just animals trying to manage the world around them. According to TMT, self-esteem is a sense of personal value that is created by beliefs in the validity of one's cultural worldview, and the belief that one is living up to the cultural standards created by that worldview.
Critically, Hewstone et al. (2002) have questioned the causal direction between self-esteem and death anxiety, evaluating whether one's self-esteem comes from their desire to reduce their death anxiety, or if death anxiety arises from a lack of self-esteem. In other words, an individual's suppression of death anxiety may arise from their overall need to increase their self-esteem in a positive manner.
Research has demonstrated that self-esteem can play an important role in physical health. In some cases, people may be so concerned with their physical appearance and boosting their self-esteem that they ignore problems or concerns with their own physical health. Arndt et al. (2009) conducted three studies to examine how peer perceptions and social acceptance of smokers contribute to their quitting, as well as whether, and why, these people continue smoking for extrinsic reasons, even when faced with thoughts of death and anti-smoking prompts. Tanning and exercising were also examined in the researchers' studies. The studies found that people are influenced by the situations around them. Specifically, Arndt et al. (2009) found, in terms of self-esteem and health, that participants who saw someone exercising were more likely to increase their own intentions to exercise. In addition, the researchers found in the second study that how participants reacted to an anti-smoking commercial was affected by their motivation for smoking and the situation they were in. For instance, people who smoked for extrinsic reasons and were previously prompted with death reminders were more likely to be compelled by the anti-smoking message.
=== Self-esteem as anxiety buffer ===
An individual's level of self-consciousness can affect their views on life and death. To a point, increasing self-consciousness is adaptive in that it helps one recognize and avoid danger. However, research has demonstrated that there may be diminishing returns from this phenomenon. Individuals with higher levels of self-consciousness sometimes have increased death cognition, and a more negative outlook on life, than those with reduced self-consciousness.
Conversely, self-esteem can work in the opposite manner. Research has confirmed that individuals with higher self-esteem, particularly in regard to their behavior, have a more positive attitude towards their life. Specifically, death cognition in the form of anti-smoking warnings was not effective for smokers and, in fact, increased their already positive attitudes towards the behavior. The reasons behind individuals' optimistic attitudes towards smoking after mortality was made salient indicate that people use positivity as a buffer against anxiety. Continuing to hold certain beliefs even after they are shown to be flawed creates cognitive dissonance regarding current information and past behavior, and the way to alleviate this is to simply reject the new information. Therefore, anxiety buffers such as self-esteem allow individuals to cope with their fears more easily. Death cognition may in fact cause negative reinforcement that leads people to further engage in dangerous behaviors (smoking in this instance), because accepting the new information would lead to a loss of self-esteem, increasing vulnerability and awareness of mortality.
== Mortality salience ==
The mortality salience hypothesis (MS) states that if indeed one's cultural worldview, or one's self-esteem, serves a death-denying function, then threatening these constructs should produce defenses aimed at restoring psychological equanimity (i.e., returning the individual to a state of feeling invulnerable). In the MS paradigm, these "threats" are simply experiential reminders of one's own death. This can, and has, taken many different forms in a variety of study paradigms (e.g., asking participants to write about their own death; conducting the experiment near funeral homes or cemeteries; having participants watch graphic depictions of death, etc.). Like the other TMT hypotheses, the literature supporting the MS hypothesis is vast and diverse. For a meta-analysis of MS research, see Burke et al. (2010).
Experimentally, the MS hypothesis has been tested in close to 200 empirical articles. After participants in an experiment are asked to write about their own death (vs. a neutral, non-death control topic, such as dental pain), and then following a brief delay (distal, worldview/self-esteem defenses work the best after a delay; see Greenberg et al. (1994) for a discussion), the participants' defenses are measured. In one early TMT study assessing the MS hypothesis, Greenberg et al. (1990) had Christian participants evaluate other Christian and Jewish students that were similar demographically, but differed in their religious affiliation. After being reminded of their death (experimental MS induction), Christian participants evaluated fellow Christians more positively, and Jewish participants more negatively, relative to the control condition. Conversely, bolstering self-esteem in these scenarios leads to less worldview defense and derogation of dissimilar others.
Mortality salience has an influence on individuals and their decisions regarding their health. Cox et al. (2009) discuss mortality salience in terms of suntanning. Specifically, the researchers found that participants who were prompted with the idea that pale was more socially attractive, along with mortality reminders, tended to lean towards decisions that resulted in more protective measures from the sun. The participants were placed in two different conditions; one group of participants were given an article relating to the fear of death, while the control group received an article unrelated to death, dealing with the fear of public speaking. Additionally, they gave one group an article pertaining to the message that "bronze is beautiful", one relating to the idea that "pale is pretty", and one neutral article that did not speak of tan or pale skin tones. Finally, after introducing a delay activity, the researchers gave the participants a five-item questionnaire asking them about their future sun-tanning behaviors. The study illustrated that when tan skin was associated with attractiveness, mortality salience positively affected people's intentions to suntan; however, when pale skin was associated with attractiveness, people's intentions to tan decreased.
=== Mortality and self-esteem on health risks ===
Studies have shown that mortality and self-esteem are important factors in terror management theory. Jessop et al. (2008) study this relationship in four studies that all examine how people react when they are given information about risks, specifically the mortality related to the risks of driving. More specifically, the researchers were exploring how participants acted in terms of self-esteem, and its impact on how mortality-related health-risk information would be received. Overall, Jessop et al. (2008) found that even when mortality is prominent, people who engage in certain behaviors to improve their self-esteem have a greater chance of continuing with these activities. Mortality and self-esteem are both factors that influence people's behaviors and decision-making regarding their health. Furthermore, individuals who are involved in behaviors and possess motivation to enhance their self-worth are less likely to be affected by the importance placed on health risks, in terms of mortality.
Self-esteem is important when mortality is made salient. It can allow people a coping mechanism, one that can cushion individuals' fears; and thus, impacting one's attitudes towards a given behavior. Individuals who have higher levels of self-esteem regarding their behavior(s) are less likely to have their attitudes, and thus their behaviors changed regardless of mortality salience or death messages. People will use their self-esteem to hide behind their fears of dying. In terms of smoking behaviors, people with higher smoking-based self-esteem are less susceptible to anti-smoking messages that relate to death; therefore, mortality salience and death warnings afford them with an even more positive outlook on their behavior, or in this instance their smoking.
In the Hansen et al. (2010) experiment the researchers manipulated mortality salience. In the experiment, Hansen et al. (2010) examined smokers' attitudes towards the behavior of smoking. Actual warning labels were utilized to create mortality salience in this specific experiment. The researchers first gave participants a questionnaire to measure their smoking-based self-esteem. Following the questionnaire, participants were randomly assigned to two different conditions; the first were given anti-smoking warning labels about death, and the second, control group were exposed to anti-smoking warning labels not dealing with death. Before the participants' attitudes towards smoking were measured, the researchers introduced an unrelated question to provide a delay. Further research has demonstrated that delays allow mortality salience to emerge because thoughts of death become non-conscious. Finally, participants were asked questions regarding their intended future smoking behavior. One weakness in how the study was conducted, however, was that the final questionnaire addressed opinions and behavioral questions, as opposed to the participants' level of persuasion regarding the different anti-smoking warning labels.
=== Social influences ===
Many people are more motivated by social pressures than by health risks. For younger people in particular, mortality salience is stronger in eliciting behavior change when it draws attention to the immediate loss of social status or position, rather than to a loss, such as death, that one cannot imagine and that feels far off. However, there are many different factors to take into consideration, such as how strongly an individual feels about a decision, his or her level of self-esteem, and the situation surrounding the individual. With regard to people's smoking behaviors in particular, self-esteem and mortality salience have different effects on individuals' decisions. In terms of the longevity of their smoking decisions, individuals' smoking habits have been shown to be affected, in the short term, when they are exposed to mortality salience that interrelates with their own self-esteem. Moreover, people who viewed social exclusion prompts were more likely to quit smoking in the long run than those who were simply shown the health effects of smoking. More specifically, it was demonstrated that individuals with high levels of self-esteem were more likely to quit smoking following the social pressure messages rather than the health risk messages. In this instance, terror management, and specifically mortality salience, shows how people are more motivated by the social pressures and consequences in their environment than by consequences relating to their health. This is mostly seen in young adult smokers with higher smoking-based self-esteem who are not thinking of their future health and the less immediate effects of smoking on their health.
== Death thought accessibility ==
Another paradigm that TMT researchers use to get at unconscious concerns about death is what is known as the death thought accessibility (DTA) hypothesis. Essentially, the DTA hypothesis states that if individuals are motivated to avoid cognitions about death, and they avoid these cognitions by espousing a worldview or by buffering their self-esteem, then when threatened, an individual should possess more death-related cognitions (e.g., thoughts about death, and death-related stimuli) than they would when not threatened.
The DTA hypothesis has its origins in work by Greenberg et al. (1994) as an extension of their earlier terror management hypotheses (i.e., the anxiety buffer hypothesis and the mortality salience hypothesis). The researchers reasoned that if, as indicated by Wegner's research on thought suppression (1994; 1997), thoughts that are purposely suppressed from conscious awareness are often brought back with ease, then, following a delay, death-related cognitions should be more accessible to consciousness in people who suppressed them than in (a) those who kept the death thoughts in consciousness the whole time, and (b) those who suppressed the death thoughts but were not provided a delay. That is precisely what they found. However, other psychologists have failed to replicate these findings.
In these initial studies (i.e., Greenberg et al. (1994); Arndt et al. (1997)), and in numerous subsequent DTA studies, the main measure of DTA is a word-fragment task, in which participants can complete word fragments in distinctly death-related ways (e.g., coff_ _ as coffin, not coffee) or in non-death-related ways (e.g., sk_ _l as skill, not skull). If death thoughts are indeed more available to consciousness, then it stands to reason that the word fragments should be completed in ways that are semantically related to death.
=== Importance of the Death Thought Accessibility hypothesis ===
The introduction of this hypothesis has refined TMT, and led to new avenues of research that formerly could not be assessed due to the lack of an empirically validated way of measuring death-related cognitions. Also, the differentiation between proximal (conscious, near, and threat-focused) and distal (unconscious, distant, symbolic) defenses that have been derived from DTA studies have been extremely important in understanding how people deal with their terror.
It is important to note how the DTA paradigm subtly alters and expands TMT as a motivational theory. Instead of solely manipulating mortality salience and witnessing its effects (e.g., nationalism, increased prejudice, risky sexual behavior, etc.), the DTA paradigm allows a measure of the death-related cognitions that result from various affronts to the self. Examples include threats to self-esteem and to one's worldview; the DTA paradigm can therefore assess the role of death thoughts in self-esteem and worldview defenses. Furthermore, the DTA hypothesis lends support to TMT in that it corroborates its central hypothesis that death is uniquely problematic for human beings, that its effects are fundamentally different from those of meaning threats (i.e., Heine et al., 2006), and that it is death itself, rather than the uncertainty and lack of control associated with death, that produces these effects; Fritsche et al. (2008) explore this idea.
Since its inception, the DTA hypothesis has rapidly gained ground in TMT investigations and, as of 2009, had been employed in over 60 published papers, with a total of more than 90 empirical studies.
=== Death anxiety on health promotion ===
How people respond to their fears and anxiety of death is investigated in TMT. Moreover, Taubman-Ben-Ari and Noy (2010) examine the idea that a person's level of self-awareness and self-consciousness should be considered in relation to their responses to their anxiety and death cognitions. The more an individual is presented with their death or death cognitions in general, the more fear and anxiety one may have; therefore, to combat said anxiety one may implement anxiety buffers.
Due to changes in people's lifestyles towards more unhealthy behaviors, the current leading causes of death, cancer and heart disease, are closely related to individuals' unhealthy behaviors (though this generalization certainly cannot be applied to every case). Age and death anxiety are both factors that should be considered within terror management theory in relation to health-promoting behaviors. Age undoubtedly plays some role in people's health-promoting behaviors; however, an actual age-related effect on death anxiety and health-promoting behaviors has yet to be demonstrated. Research has shown, though, that young adults who were prompted with death-related scenarios produced more health-promoting behaviors than participants in their sixties. In addition, death anxiety has been found to affect young adults' health-promotion behaviors.
== Terror management health model ==
The terror management health model (TMHM) explores the role that death plays on one's health and behavior. Goldenberg and Arndt (2008) state that the TMHM proposes the idea that death, despite its threatening nature, is in fact instrumental and purposeful in the conditioning of one's behavior towards the direction of a longer life.
According to Goldenberg and Arndt (2008), certain health behaviors such as breast self-exams (BSEs) can consciously activate and facilitate people's thoughts of death, especially their own death. While death can be instrumental for individuals, in some cases, when breast self-exams activate people's death thoughts, an obstacle can present itself in terms of health promotion because of the experience of fear and threat. Abel and Kruger (2009) have suggested that the stress caused by increased awareness of mortality when celebrating one's birthday might explain the birthday effect, where mortality rates seem to spike around a person's birthday.
On the other hand, death and thoughts of death can serve as a way of empowering the self, not as threats. Researchers, Cooper et al. (2011) explored TMHM in terms of empowerment, specifically using BSEs under two conditions; when death thoughts were prompted, and when thoughts of death were non-conscious. According to TMHM, people's health decisions, when death thoughts are not conscious, should be based on their motivations to act appropriately, in terms of the self and identity. Cooper et al. (2011) found that when mortality and death thoughts were primed, women reported more empowerment feelings than those who were not prompted before performing a BSE.
Additionally, TMHM suggests that mortality awareness and self-esteem are important factors in individuals' decision making and behaviors relating to their health. TMHM explores how people will engage in behaviors, whether positive or negative, even with the heightened awareness of mortality, in the attempt to conform to society's expectations and improve their self-esteem. The TMHM is useful in understanding what motivates individuals regarding their health decisions and behaviors.
In terms of smoking behaviors and attitudes, the impact of warnings with death messages depends on:
The individuals' level of smoking-based self-esteem
The warnings' actual degree of death information
== Emotion ==
People with low self-esteem have more negative emotions when reminded of death compared to people with high self-esteem. A possible explanation posits that these individuals lack the very defenses that TMT argues protect people from mortality concerns (e.g., solid worldviews). In contrast, positive mood states are not impacted by death thoughts for people of low or high self-esteem.
== Leadership ==
It has been suggested that culture provides meaning, organization, and a coherent world-view that diminishes the psychological terror caused by the knowledge of eventual death. The terror management theory can help to explain why a leader's popularity can grow substantially during times of crisis. When a follower's mortality is made prominent they will tend to show a strong preference for iconic leaders. An example of this occurred when George W. Bush's approval rating jumped almost 50 percent following the September 11 attacks in the United States. As Forsyth (2009) posits, this tragedy made U.S. citizens aware of their mortality, and Bush provided an antidote to these existential concerns by promising to bring justice to the terrorist group responsible for the attacks.
Researchers Cohen et al. (2004), in their particular study on TMT, tested the preferences for different types of leaders, while reminding people of their mortality. Three different candidates were presented to participants. The three leaders were of three different types: task-oriented (emphasized setting goals, strategic planning, and structure), relationship-oriented (emphasized compassion, trust, and confidence in others), and charismatic. The participants were then placed in one of two conditions: mortality salient or control group. In the former condition the participants were asked to describe the emotions surrounding their own death, as well as the physical act of the death itself, whereas the control group were asked similar questions about an upcoming exam. The results of the study were that the charismatic leader was favored more, and the relationship-oriented leader was favored less, in the mortality-salient condition. Further research has shown that mortality salient individuals also prefer leaders who are members of the same group, as well as men rather than women (Hoyt et al. 2010). This has links to social role theory.
== Religion ==
TMT posits that religion was created as a means for humans to cope with their own mortality. Supporting this, arguments in favor of life after death, and simply being religious, reduce the effects of mortality salience on worldview defense. Thoughts of death have also been found to increase religious beliefs. At an implicit, subconscious level, this is the case even for people who claim to be nonreligious.
== Mental health ==
Some researchers have argued that death anxiety may play a central role in numerous mental health conditions. To test whether death anxiety causes a particular mental illness, TMT researchers use a mortality salience experiment, and examine whether reminding participants of death leads to increased prevalence of behaviors associated with that mental illness. Such studies have shown that reminders of death lead to increases in compulsive handwashing in obsessive-compulsive disorder, avoidance in spider phobias and social anxiety, and anxious behaviors in other disorders, including panic disorder and health anxiety, suggesting the role of death anxiety in these conditions according to TMT researchers.
== Criticisms ==
Criticisms of terror management theory have been based on several lines of arguments:
Suppression of fear and anxiety is implausible from an evolutionary point of view.
The observed psychological responses to terrifying cues are better explained by coalitional psychology and theories of collective defense.
The responses can be explained as fear of uncertainty and the unknown.
The responses can be explained as a search for the meaning of life and mortality.
The experimental results are difficult to replicate.
These arguments are discussed in the following sections.
=== Evolutionary argument ===
Anxiety and fear are psychological responses that have evolved because they help us avoid danger. A mechanism to suppress anxiety and fear, as postulated by TMT, is unlikely to have evolved because it would reduce the chances of survival.
It is argued that TMT relies on misguided assumptions about evolved human nature originating from psychoanalytic theory.
Proponents of TMT argue that the cultural self-esteem that counters death anxiety is either a spandrel or exaptation created as a byproduct of the human survival instinct being impinged upon by the awareness of death brought about by increased intelligence.
Proponents further respond that it is not responses to immediate danger that are suppressed, but rather reactions to existential reminders of mortality. They posit a "dual defense model" whereby "proximal" and "distal" defenses deal with threats differently, with the former doing so more "pragmatically" due to greater conscious awareness, and the latter more symbolically due to unconscious thought recession.
Critics argue that the observed responses are not only evoked by cues of essential mortality, but more generally by cues of danger or insecurity.
=== Coalitional psychology and collective defense as alternative explanations ===
TMT posits that people respond to cues about mortality by strengthening shared worldviews. Critics believe that such a worldview defense is better explained by coalitional psychology. People confronted with danger tend to build shared worldviews and a pro-normative orientation in order to garner social support and to build coalitions and alliances.
Proponents of TMT argue that the coalitional psychology theory is a black box explanation that 1) cannot account for the fact that virtually all cultures have a supernatural dimension; 2) does not explain why cultural worldview defense is symbolic, involving allegiance to both specific and general systems of abstract meaning unrelated to specific threats, rather than focused on the specific adaptive threats it supposedly evolved to deal with; and, 3) dismisses TMT's dual process account of the underlying processes that generate MS effects without providing an alternative of any kind or attempting to account for the data relevant to this aspect of the TMT analysis.
The coalitional theory is supported by a large statistical study finding that conservatism, traditionalism, and other responses represented by TMT theory are connected with collective danger, while individual danger has very different and often opposite effects.
The observed connection with collective danger supports the coalitional theory, while contradicting CP's interpretation of TMT, which is understood as explicitly dealing with individual danger only.
TMT theorists however, have explained how CP dismisses TMT's dual process shown in lab studies whereby proximal and distal defenses deal with threats differently; with the former doing so more "pragmatically" due to greater conscious awareness, and the latter more symbolically due to unconscious thought recession. This would account for the study's distinction between individual and collective danger — with the former being more proximal and the latter more distal. Unlike TMT, CP does not view national, political and religious coalitions as imagined communities that represent primarily cultural worldviews (distal defenses).
Similarly, another study has found that the response of system justification postulated by TMT theorists is increased by salience of terrorism, not by salience of individual mortality.
Earlier experimental findings can be explained by the fact that individual danger and collective danger are seriously confounded.
The findings that the observed responses are connected with collective danger rather than individual danger was predicted by regality theory. This finding is in agreement with authoritarianism theory, realistic group conflict theory, and Ronald Inglehart's theory of modernization, but not in agreement with CP's interpretation of terror management theory, which omits its distal/proximal dual defense model.
=== Prevalence of death ===
Since findings on mortality salience and worldview defense were first published, other researchers have claimed that the effects may have been obtained due to reasons other than death itself, such as anxiety, fear, or other aversive stimuli such as pain. The experimental manipulations in TMT research are likely to elicit a mixture of different types of negative emotions, including fear, anxiety, sadness, and anger.
Other studies have found effects similar to those produced by mortality salience – for example, thinking about difficult personal choices to be made, being made to respond to open-ended questions regarding uncertainty, thinking about being robbed, thinking about being socially isolated, and being told that one's life lacks meaning. While these cases exist, thoughts of death have since been compared to various aversive experimental controls, such as (but not limited to) thinking about: failure, writing a critical exam, public speaking before a considerable audience, being excluded, paralysis, dental pain, intense physical pain, etc.
With regard to the studies that found similar effects, TMT theorists have argued that in the previously mentioned studies where death was not the topic thought about, the topics in question would quite easily be related to death in an individual's mind due to a "linguistic or experiential connection with mortality" (p. 332). For example, being robbed invokes thoughts of violence and of being unsafe in one's own home – many people have died trying to protect their property and family. A second possible explanation for these results involves the death-thought accessibility hypothesis: these threats somehow sabotage crucial anxiety-buffering aspects of an individual's worldview or self-esteem, which increases their death-thought accessibility. For example, one study found increased death-thought accessibility in response to thoughts of antagonistic relations with attachment figures. However, this makes it difficult or impossible to isolate the effect of mortality salience.
While many TMT theorists claim that affective responses to mortality salience are suppressed and pushed out of consciousness, later studies contradict this and show that affective responses are indeed observable.
=== Meaning maintenance model ===
The meaning maintenance model (MMM) was initially introduced as a comprehensive motivational theory that claimed to subsume TMT, with alternative explanations for TMT findings. Essentially, it posits that people automatically give meaning to things, and when those meanings are somehow disrupted, it causes anxiety. In response, people concentrate on "meaning maintenance to reestablish their sense of symbolic unity" and that such "meaning maintenance often involves the compensatory reaffirmation of alternative meaning structures". These meanings, among other things, should "provide a basis for prediction and control of our...environments, help [one] to cope with tragedy and trauma...and the symbolic cheating of death via adherence to the enduring values that these cultures provide".
While TMT regards the search for meaning as a defense mechanism, meaning management theory regards the quest for meaning as a primary motive because we are meaning-seeking and meaning-making creatures living in a world of meanings. When people are exposed to mortality salience, both TMT and meaning management theory would predict an increase in pro-culture and pro-esteem activities, but for very different reasons. The latter theory replaces death denial with death acceptance.
TMT theorists argue that meaning management theory cannot describe why different sets of meaning are preferred by different people, and that different types of meaning have different psychological functions.
TMT theorists argue that unless something is an important element of a person's anxiety-buffering worldview or self-esteem, it will not require broad meaning maintenance.
TMT theorists believe that meaning management theory cannot accurately claim to be an alternative to TMT because it does not seem to be able to explain the current breadth of TMT evidence.
=== Offensive defensiveness ===
Some theorists have argued that it is not the idea of death and nonexistence that is unsettling to people, but the fact that uncertainty is involved.
For example, these researchers posited that people defend themselves by altering their fear responses from uncertainty to an enthusiasm approach. Other researchers argue for distinguishing fear of death from fear of dying and, therein, posit that ultimately the fear of death has more to do with some other fear (e.g., fear of pain) or reflects uncertainty avoidance or fear of the unknown.
TMT theorists agree that uncertainty can be disconcerting in some cases and it may even result in defense responses, but note that they believe the inescapability of death and the possibility of its finality regarding one's existence is most unsettling. They also note that people actually seek out some types of uncertainty, and that being uncertain is not always very unpleasant. In contrast, there is substantial evidence that, all things being equal, uncertainty and the unknown represent fundamental fears and are only experienced as pleasant when there is sufficient contextual certainty. For example, a surprise involves uncertainty, but is only perceived as pleasant if there is sufficient certainty that the surprise will be pleasant.
Though TMT theorists acknowledge that many responses to mortality salience involve greater approaches (zealousness) towards important worldviews, they also note examples of mortality salience which resulted in the opposite, which offensive defensiveness cannot account for: when negative features of a group to which participants belong were made salient, people actively distanced themselves from that group under mortality salience.
=== Replication failure ===
In addition to the criticisms from alternative theoretical perspectives, a large-scale attempt by Many Labs 4 to replicate published findings failed to replicate the mortality salience effect on worldview defense under any condition. The test is a multi-lab replication of Study 1 of Greenberg et al. (1994). Psychologists in 21 labs across the U.S. re-executed the original experiment among a total of 2,200 participants. In response to the Many Labs 4 paper, Tom Pyszczynski (one of the founding psychologists of TMT), criticized the study for insufficient sample sizes, failure to follow the advice of researchers, and deviation from a preregistered protocol.
== Popularity ==
Psychologist Yoel Inbar summarized the popularity of the theory:
I can not explain to people who were not around during this time - which I would say was roughly 2004 to 2008 - how much everything at the time was about terror management theory. You would go to SPSP and it seemed like half of the posters were about terror management theory. It was just everywhere. There is just an explosion of terror management theory stuff. And then it sort of receded. And now you barely see it. Which is also kind of weird. We were obsessed with this for a period of 3-5 years, then we moved on to other things.
== See also ==
Anxiety buffer disruption theory – Application of terror management theory
Cognitive dissonance – Mental phenomenon of holding contradictory beliefs
Death anxiety – Anxiety caused by thoughts of death
Flight from Death – a documentary film based on Ernest Becker's work and terror management theory
Mortality salience – Awareness about death
Necrophobia – Fear of dead organisms
Memento mori – Artistic or symbolic reminder of the inevitability of death
Chamber of Reflection – Initiation ritual in freemasonry
Protection motivation theory
== References ==
== Bibliography ==
Becker, Ernest (1973). The Denial of Death, The Free Press. ISBN 0-02-902380-7
Pyszczynski, Thomas; Solomon, Sheldon; Greenberg, Jeff (2003). In the Wake of 9/11: The Psychology of Terror, American Psychological Association. ISBN 1-55798-954-0
Solomon, Sheldon, Greenberg, J. & Pyszczynski, T. (1991) "A terror management theory of social behavior: The psychological functions of esteem and cultural worldviews", in M. P. Zanna (Ed.) Advances in Experimental Social Psychology, Volume 24, Academic Press, pp. 93–159. ISBN 0-12-015224-X
== Further reading ==
Curtis, V.; Biran, A. (2001). "Dirt, disgust, and disease: Is hygiene in our genes?". Perspectives in Biology and Medicine. 44 (1): 17–31. CiteSeerX 10.1.1.324.760. doi:10.1353/pbm.2001.0001. PMID 11253302. S2CID 15675303.
Darwin, C. (1998) [1872]. The expression of the emotions in man and animals (3rd ed.). London: Harper Collins.
Florian, V.; Mikulincer, M. (1997). "Fear of death and the judgment of social transgressions: a multidimensional test of terror". Journal of Personality and Social Psychology. 73 (2): 369–80. doi:10.1037/0022-3514.73.2.369. ISSN 0022-3514. PMID 9248054.
Goldenberg, J.L.; Pyszczynski, T.; Greenberg, J.; Solomon, S.; Kluck, B.; Cornwell, R. (2001). "I am not an animal: Mortality salience, disgust, and the denial of human creatureliness". Journal of Experimental Psychology. 130 (3): 427–435. doi:10.1037/0096-3445.130.3.427. PMID 11561918.
Greenberg, J.; Pyszczynski, T.; Solomon, S.; Rosenblatt, A.; Veeder, M.; Kirkland, S. (1990). "Evidence for terror management theory. II: The effects of mortality salience on reactions to those who threaten or bolster the cultural worldview" (Fee required). Journal of Personality and Social Psychology. 58 (2): 308–318. CiteSeerX 10.1.1.454.2378. doi:10.1037/0022-3514.58.2.308. ISSN 0022-3514. 13817, 35400000600727.0100 (INIST-CNRS). Retrieved 2007-07-27.
Greenberg, J.; Solomon, S.; Pyszczynski, T. (1997). "Terror management theory of self-esteem and cultural worldviews: Empirical assessments and conceptual refinements". Advances in Experimental Social Psychology. 29 (S 61): 139. doi:10.1016/s0065-2601(08)60016-7.
Hansen, J; Winzeler, S; Topolinski, S (2010). "When death makes you smoke: a terror management perspective on the effectiveness of cigarette on-pack warnings". Journal of Experimental Social Psychology. 46: 226–228. doi:10.1016/j.jesp.2009.09.007.
Hirschberger, G.; Florian, V.; Mikulincer, M. (2003). "Striving for romantic intimacy following partner complaint or partner criticism: A terror management perspective". Journal of Social and Personal Relationships. 20 (5): 675–687. doi:10.1177/02654075030205006. S2CID 144657212.
Judis, J.B. (August 27, 2007). "Death grip: How political psychology explains Bush's ghastly success". New Republic.
Lazarus, R.S. (1991). Emotion and adaptation. New York: Oxford University Press. ISBN 978-0-19-506994-5.
Mikulincer, M.; Florian, V.; Hirschberger, G. (2003). "The existential function of close relationships: Introducing death into the science of love". Personality and Social Psychology Review. 7 (1): 20–40. doi:10.1207/S15327957PSPR0701_2. PMID 12584055. S2CID 11600574.
Pyszczynski, T.; Greenberg, J.; Solomon, S. (1997). "Why do we need what we need? A terror management perspective on the roots of human social motivation". Psychological Inquiry. 8 (1): 1–20. doi:10.1207/s15327965pli0801_1.
Pyszczynski, T.; Greenberg, J.; Solomon, S. (1999). "A dual process model of defense against conscious and unconscious death-related thoughts: An extension of terror management theory". Psychological Review. 106 (4): 835–845. doi:10.1037/0033-295X.106.4.835. PMID 10560330. S2CID 2655060.
Rosenblatt, A.; Greenberg, J.; Solomon, S.; Pyszczynski, T.; Lyon, D. (1989). "Evidence for terror management theory: I. The effects of mortality salience on reactions to those who violate or uphold cultural values". Journal of Personality and Social Psychology. 57 (4): 681–90. CiteSeerX 10.1.1.457.5862. doi:10.1037/0022-3514.57.4.681. ISSN 0022-3514. PMID 2795438.
Royzman, E.B.; Sabini, J. (2001). "Something it takes to be an emotion: The interesting case of disgust". Journal for the Theory of Social Behaviour. 31 (1): 29–59. doi:10.1111/1468-5914.00145.
Shehryar, O.; Hunt, D.M. (2005). "A terror management perspective on the persuasiveness of fear appeals". Journal of Consumer Psychology. 15 (4): 275–287. doi:10.1207/s15327663jcp1504_2. S2CID 18866874.
Simon, L.; Arndt, J.; Greenberg, J.; Pyszczynski, T.; Solomon, S. (1998). "Terror management and meaning: Evidence that the opportunity to defend the worldview in response to mortality salience increases the meaningfulness of life in the mildly depressed". Journal of Personality. 66 (3): 359–382. doi:10.1111/1467-6494.00016. hdl:10150/187250. PMID 9615422.
Simon, L.; Greenberg, J.; Harmon-Jones, E.; Solomon, S.; Pyszczynski, T.; Arndt, J.; Abend, T. (1997). "Terror management and Cognitive-Experiential Self-Theory: Evidence that terror management occurs in the experiential system". Journal of Personality and Social Psychology. 72 (5): 1132–1146. doi:10.1037/0022-3514.72.5.1132. PMID 9150588.
Greenberg, J.; Koole, S. L.; Pyszczynski, T. (2004). Handbook of experimental existential psychology. Guilford Press. ISBN 978-1-59385-040-1.
Cohen, Florette; Solomon, Sheldon; Maxfield, Molly; Pyszczynski, Tom; Greenberg, Jeff (2004). "Fatal Attraction". Psychological Science. 15 (12). SAGE Publications: 846–851. doi:10.1111/j.0956-7976.2004.00765.x. ISSN 0956-7976. PMID 15563330. S2CID 16787928.
Van Tilburg, W. A. P.; Igou, E. R (2011). "On the meaningfulness of existence: When life salience boosts adherence to worldviews". European Journal of Social Psychology (Submitted manuscript). 41 (6): 740–750. doi:10.1002/ejsp.819. hdl:10344/5416. S2CID 142993102.
Gutierrez, C. (2006). "Consumer attraction to luxury brand products: Social affiliation in terror management theory".
Discusses TMT at length
Griffin, R. (2007). Fascism & Modernism. New York: Palgrave Macmillan. ISBN 978-1-4039-8783-9.
TMT and self-esteem
Schmeichel, B.J.; Gailliot, M.T.; Filardo, E.A.; McGregor, I.; Gitter, S.; Baumeister, R.F. (2009). "Terror management theory and self esteem revisited: The roles of implicit and explicit self-esteem in mortality salience effects". Journal of Personality and Social Psychology. 96 (5): 1077–1087. doi:10.1037/a0015091. PMID 19379037. S2CID 13740871.
== External links == | Wikipedia/Terror_management_theory |
Affective science is the scientific study of emotion or affect. This includes the study of emotion elicitation, emotional experience and the recognition of emotions in others. Of particular relevance are the nature of feeling, mood, emotionally-driven behaviour, decision-making, attention and self-regulation, as well as the underlying physiology and neuroscience of the emotions.
== Discussion ==
An increasing interest in emotion can be seen in the behavioral, biological and social sciences. Research over the last two decades suggests that many phenomena, ranging from individual cognitive processing to social and collective behavior, cannot be understood without taking into account affective determinants (i.e. motives, attitudes, moods, and emotions). Just as the cognitive revolution of the 1960s spawned the cognitive sciences and linked the disciplines studying cognitive functioning from different vantage points, the emerging field of affective science seeks to bring together the disciplines which study the biological, psychological, and social dimensions of affect. In particular affective science includes psychology, affective neuroscience, sociology, psychiatry, anthropology, ethology, archaeology, economics, criminology, law, political science, history, geography, education and linguistics. Research is also informed by contemporary philosophical analysis and artistic explorations of emotions. Emotions developed in human history cause organisms to react to environmental stimuli and challenges.
The major challenge for this interdisciplinary domain is to integrate research focusing on the same phenomenon, emotion and similar affective processes, starting from different perspectives, theoretical backgrounds, and levels of analysis. As a result, one of the first challenges of affective science is to reach consensus on the definition of emotions. Discussion is ongoing as to whether emotions are primarily bodily responses or whether cognitive processing is central. Controversy also concerns the most effective ways to measure emotions and conceptualise how one emotion differs from another. Examples of this include the dimensional models of Russell and others, Plutchik's wheel of emotions, and the general distinction between basic and complex emotions.
Recent philosophical inquiry questions whether positive experiences have phenomenological properties that are true opposites of suffering. While intense pleasure or joy might contrast sharply with pain, introspective examination reveals no distinct “anti-suffering” quality equivalent to suffering’s phenomenological presence. This challenges assumptions about affective duality and suggests that positive affect may not mirror negative affect in a simple binary. For instance, just as silence is not an “anti-sound” but rather an absence of sound, positive experience may be better understood as the absence of suffering rather than its dual opposite in a phenomenological sense.
== Measuring emotions ==
Whether the scientific method is at all suited to the study of the subjective aspect of emotion, feelings, is a question for philosophy of science and epistemology. In practice, the use of self-report (i.e. questionnaires) has been widely adopted by researchers. Additionally, web-based research is being used to conduct large-scale studies on, for example, the components of happiness. (www.authentichappiness.com is a website run by the University of Pennsylvania where questionnaires are routinely completed by thousands of people all over the world, based on well-being criteria devised in the book 'Flourish' by Martin Seligman.) Seligman nevertheless notes in the book the poor reliability of this method, as responses often depend entirely on how the individual is feeling at the time, as opposed to questionnaires that test for more long-standing personal features that contribute to well-being, such as meaning in life. Alongside this, researchers also use functional magnetic resonance imaging, electroencephalography and physiological measures of skin conductance, muscle tension and hormone secretion. This hybrid approach should allow researchers to gradually pinpoint the affective phenomenon. There are also a few commercial systems available that claim to measure emotions, for instance using automated video analysis or skin conductance (Affectiva).
=== Affective display ===
A common way to measure the emotions of others is via their emotional expressions. These include facial expression, vocal expression and bodily posture. Much work has also gone into developing computer programmes that code expressive behaviour and can be used to read the subject's emotion more reliably. The model used for facial expression is the Facial Action Coding System or 'FACS'. An influential figure in the development of this system was Paul Ekman. For criticism, see the conceptual-act model of emotion.
These behavioural sources can be contrasted with language descriptive of emotions. In both respects one may observe the way that affective display differs from culture to culture.
== Stanford ==
The Stanford University Psychology Department has an Affective Science area. It emphasizes basic research on emotion, culture, and psychopathology using a broad range of experimental, psychophysiological, neural, and genetic methods to test theory about psychological mechanisms underlying human behavior. Topics include longevity, culture and emotion, reward processing, depression, social anxiety, risk for psychopathology, and emotion expression, suppression, and dysregulation.
== See also ==
Emotion
Music therapy
Psychology
Affective neuroscience
Affective computing
Affect (psychology)
== Notes and references == | Wikipedia/Affective_science |
Two-alternative forced choice (2AFC) is a method for measuring the sensitivity of a person or animal to some particular sensory input, stimulus, through that observer's pattern of choices and response times to two versions of the sensory input. For example, to determine a person's sensitivity to dim light, the observer would be presented with a series of trials in which a dim light was randomly either in the top or bottom of the display. After each trial, the observer responds "top" or "bottom". The observer is not allowed to say "I do not know", or "I am not sure", or "I did not see anything". In that sense the observer's choice is forced between the two alternatives.
Both options can be presented concurrently (as in the above example) or sequentially in two intervals (also known as two-interval forced choice, 2IFC). For example, to determine sensitivity to a dim light in a two-interval forced choice procedure, an observer could be presented with series of trials comprising two sub-trials (intervals) in which the dim light is presented randomly in the first or the second interval. After each trial, the observer responds only "first" or "second".
The term 2AFC is sometimes used to describe a task in which an observer is presented with a single stimulus and must choose between one of two alternatives. For example, in a lexical decision task a participant observes a string of characters and must respond whether the string is a "word" or "non-word". Another example is the random dot kinetogram task, in which a participant must decide whether a group of moving dots are predominantly moving "left" or "right". The results of these tasks, sometimes called yes-no tasks, are much more likely to be affected by various response biases than 2AFC tasks. For example, with extremely dim lights, a person might respond, completely truthfully, "no" (i.e., "I did not see any light") on every trial, whereas the results of a 2AFC task will show the person can reliably determine the location (top or bottom) of the same, extremely dim light.
2AFC is a method of psychophysics developed by Gustav Theodor Fechner.
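The contrast between bias-prone yes-no responding and forced-choice responding can be made concrete with a short simulation under an equal-variance signal detection model. The sketch below is illustrative only: the signal strength, the conservative yes-no criterion, and the trial counts are assumed values, not figures from any particular experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 10_000
signal_strength = 0.5   # internal effect of the dim light (illustrative assumption)
criterion = 2.0         # very conservative yes/no criterion: "only say yes when certain"

# Yes/no task: one interval per trial, the dim light present on half of the trials.
signal_present = rng.random(n_trials) < 0.5
internal = rng.normal(0.0, 1.0, n_trials) + signal_strength * signal_present
hit_rate = (internal > criterion)[signal_present].mean()   # low: the observer rarely says "yes"

# 2AFC task: the same dim light appears randomly in one of two locations (or intervals).
signal_in_first = rng.random(n_trials) < 0.5
first = rng.normal(0.0, 1.0, n_trials) + signal_strength * signal_in_first
second = rng.normal(0.0, 1.0, n_trials) + signal_strength * (~signal_in_first)
accuracy = ((first > second) == signal_in_first).mean()    # reliably above chance (0.5)

print(f"yes/no hit rate: {hit_rate:.2f}   2AFC accuracy: {accuracy:.2f}")
```

With these assumed values the simulated observer almost never reports seeing the light in the yes-no task, yet localizes it well above chance in the 2AFC task, which is the sense in which 2AFC is said to be less sensitive to response bias.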
== Behavioural experiments ==
There are various manipulations in the design of the task, engineered to test specific behavioral dynamics of choice. In one well-known attention paradigm examining attentional shifts, the Posner cueing task, a 2AFC design is used to present two stimuli representing two given locations. In this design an arrow cues which stimulus (location) to attend to. The person then has to make a response between the two stimuli (locations) when prompted. In animals, the 2AFC task has been used to test reinforcement probability learning, for example choices made by pigeons after reinforcement of trials. A 2AFC task has also been designed to test decision making and the interaction of reward and probability learning in monkeys.
Monkeys were trained to look at a center stimulus and were then presented with two salient stimuli side by side. A response can then be made in the form of a saccade to the left or to the right stimulus. A juice reward is then administered after each response. The amount of juice reward is then varied to modulate choice.
In a different application, the 2AFC is designed to test discrimination of motion perception. The random dot motion coherence task introduces a random dot kinetogram, with a percentage of net coherent motion distributed across the random dots.
The percentage of dots moving together in a given direction determines the coherence of motion towards the direction. In most experiments, the participant must make a choice response between two directions of motion (e.g. up or down), usually indicated by a motor response such as a saccade or pressing a button.
=== Biases in decision making ===
It is possible to introduce biases in decision making in the 2AFC task. For example, if one stimulus occurs with more frequency than the other, then the frequency of exposure to the stimuli may influence the participant's beliefs about the probability of the occurrence of the alternatives. Introducing biases in the 2AFC task is used to modulate decision making and examine the underlying processes.
== Models of decision making ==
The 2AFC task has yielded consistent behavioral results on decision-making, which have led to the development of theoretical and computational models of the dynamics and outcomes of decision-making.
=== Normal distribution model ===
Suppose the two stimuli $x_{1}$ and $x_{2}$ in the 2AFC task are random variables from two different categories $a$ and $b$, and the task is to decide which was which. A common model is to assume that the stimuli came from normal distributions $N(\mu_{a},\sigma_{a})$ and $N(\mu_{b},\sigma_{b})$. Under this normal model, the optimal decision strategy (of the ideal observer) is to decide which of two bivariate normal distributions is more likely to produce the tuple $(x_{1},x_{2})$: the joint distributions of $a$ and $b$, or of $b$ and $a$.
The probability of error with this ideal decision strategy is given by the generalized chi-square distribution:
$$p(e)=p\left(\tilde{\chi}^{2}_{\boldsymbol{w},\boldsymbol{k},\boldsymbol{\lambda},0,0}<0\right),$$

where

$$\boldsymbol{w}=\begin{bmatrix}\sigma_{a}^{2} & -\sigma_{b}^{2}\end{bmatrix},\quad \boldsymbol{k}=\begin{bmatrix}1 & 1\end{bmatrix},\quad \boldsymbol{\lambda}=\frac{\mu_{a}-\mu_{b}}{\sigma_{a}^{2}-\sigma_{b}^{2}}\begin{bmatrix}\sigma_{a}^{2} & \sigma_{b}^{2}\end{bmatrix}.$$
This model can also be extended to cases where each of the two stimuli is itself a multivariate normal vector, and to situations where the two categories have different prior probabilities, or the decisions are biased due to different values attached to the possible outcomes.
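As a rough numerical check on the ideal-observer rule described above, the Monte Carlo sketch below draws stimulus pairs from the two normal categories and applies the likelihood comparison directly; its error rate can then be compared against the closed-form generalized chi-square expression quoted earlier. The means, standard deviations and trial count are illustrative assumptions, not values from any particular study.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
mu_a, sigma_a = 1.0, 1.0    # category a parameters (illustrative assumptions)
mu_b, sigma_b = 0.0, 1.5    # category b parameters (illustrative assumptions)
n_trials = 100_000

# Each trial presents one draw from category a and one from category b, in that order.
x1 = rng.normal(mu_a, sigma_a, n_trials)   # actually from a
x2 = rng.normal(mu_b, sigma_b, n_trials)   # actually from b

def log_lik_a_first(u, v):
    """Log-likelihood that u came from category a and v from category b."""
    return norm.logpdf(u, mu_a, sigma_a) + norm.logpdf(v, mu_b, sigma_b)

# Ideal observer: pick whichever assignment of categories to stimuli is more likely.
correct = log_lik_a_first(x1, x2) > log_lik_a_first(x2, x1)
print(f"Monte Carlo error probability: {1 - correct.mean():.3f}")
```

Because the decision reduces to comparing two log-likelihoods, the same comparison extends naturally to unequal prior probabilities by adding the log prior odds before thresholding.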
=== Drift-diffusion model ===
There are typically three assumptions made by computational models using the 2AFC: (i) evidence favoring each alternative is integrated over time; (ii) the process is subject to random fluctuations; and (iii) the decision is made when sufficient evidence has accumulated favoring one alternative over the other.
It is typically assumed that the difference in evidence favoring each alternative is the quantity tracked over time and that which ultimately informs the decision; however, evidence for different alternatives could be tracked separately.
The drift-diffusion model (DDM) is a well-defined model that has been proposed to implement an optimal decision policy for 2AFC. It is the continuous analog of a random walk model.
The DDM assumes that in a 2AFC task, the subject is accumulating evidence for one or other of the alternatives at each time step, and integrating that evidence until a decision threshold is reached. As the sensory input which constitutes the evidence is noisy, the accumulation to the threshold is stochastic rather than deterministic – this gives rise to the directed random-walk-like behavior.
The DDM has been shown to describe accuracy and reaction times in human data for 2AFC tasks.
==== Formal model ====
The accumulation of evidence in the DDM is governed according to the following formula:
$$dx = A\,dt + c\,dW,\qquad x(0)=0$$

At time zero, the evidence accumulated, $x$, is set equal to zero. At each time step, some evidence, $A$, is accumulated for one of the two possibilities in the 2AFC. $A$ is positive if the correct response is represented by the upper threshold, and negative if the lower. In addition, a noise term, $c\,dW$, is added to represent noise in the input. On average, the noise will integrate to zero. The extended DDM allows for the drift $A$ and the starting value $x(0)$ to be selected from separate distributions – this provides a better fit to experimental data for both accuracy and reaction times.
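A minimal Euler–Maruyama simulation of this accumulation process shows how both accuracy and response times emerge from the same mechanism. The drift, noise, threshold and time-step values below are illustrative assumptions, not fitted estimates.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_ddm(drift=0.3, noise=1.0, threshold=1.0, dt=0.001, max_t=5.0):
    """Simulate one 2AFC trial under the drift-diffusion model.

    Returns (choice, decision_time); choice is +1 when the upper threshold is
    reached (the correct response when drift > 0) and -1 otherwise. If the
    deadline max_t is reached first, the sign of the accumulated evidence decides.
    All parameter values are illustrative assumptions.
    """
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()  # Euler–Maruyama step
        t += dt
    return (1 if x > 0 else -1), t

trials = [simulate_ddm() for _ in range(2000)]
choices = np.array([c for c, _ in trials])
rts = np.array([t for _, t in trials])
print(f"accuracy: {(choices == 1).mean():.2f}   mean decision time: {rts.mean():.2f} s")
```

Raising the threshold in this sketch trades speed for accuracy, which is the qualitative speed–accuracy behavior the DDM is used to capture.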
=== Other models ===
==== Ornstein–Uhlenbeck model ====
The Ornstein–Uhlenbeck model extends the DDM by adding another term, $\lambda$, to the accumulation that is dependent on the current accumulation of evidence – this has the net effect of increasing the rate of accumulation towards the initially preferred option.

$$dx = (\lambda x + A)\,dt + c\,dW$$
==== Race model ====
In the race model, evidence for each alternative is accumulated separately, and a decision made either when one of the accumulators reaches a predetermined threshold, or when a decision is forced and then the decision associated with the accumulator with the highest evidence is chosen. This can be represented formally by:
$$dy_{1} = I_{1}\,dt + c\,dW_{1},\qquad dy_{2} = I_{2}\,dt + c\,dW_{2},\qquad y_{1}(0)=y_{2}(0)=0$$
The race model is not mathematically reducible to the DDM, and hence cannot be used to implement an optimal decision procedure.
==== Mutual inhibition model ====
The mutual inhibition model also uses two accumulators to model the accumulation of evidence, as with the race model. In this model the two accumulators have an inhibitory effect on each other, so as evidence is accumulated in one, it dampens the accumulation of evidence in the other. In addition, leaky accumulators are used, so that over time evidence accumulated decays – this helps to prevent runaway accumulation towards one alternative based on a short run of evidence in one direction. Formally, this can be shown as:
$$\begin{aligned}dy_{1} &= (-k y_{1} - w y_{2} + I_{1})\,dt + c\,dW_{1}\\ dy_{2} &= (-k y_{2} - w y_{1} + I_{2})\,dt + c\,dW_{2}\end{aligned},\qquad y_{1}(0)=y_{2}(0)=0$$

where $k$ is a shared decay rate of the accumulators, and $w$ is the rate of mutual inhibition.
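A short simulation sketch of this two-accumulator dynamic is given below. The input, leak, inhibition, noise and threshold values are illustrative assumptions; setting $k = w = 0$ removes leak and inhibition and recovers the independent race model described above.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_trial(I1=1.1, I2=1.0, k=0.2, w=0.3, c=0.3,
                   threshold=1.0, dt=0.001, max_t=5.0):
    """One trial of the mutual inhibition (leaky, competing accumulator) model.

    All parameter values are illustrative assumptions, not fitted estimates.
    Returns (choice, decision_time) with choice 1 or 2.
    """
    y1 = y2 = 0.0
    t = 0.0
    while t < max_t:
        dW1, dW2 = rng.normal(size=2) * np.sqrt(dt)
        dy1 = (-k * y1 - w * y2 + I1) * dt + c * dW1
        dy2 = (-k * y2 - w * y1 + I2) * dt + c * dW2
        y1, y2, t = y1 + dy1, y2 + dy2, t + dt
        if y1 >= threshold or y2 >= threshold:
            break
    # If the deadline is reached, the decision is forced to the larger accumulator.
    return (1 if y1 >= y2 else 2), t

choices = np.array([simulate_trial()[0] for _ in range(1000)])
print(f"proportion choosing the higher-input alternative: {(choices == 1).mean():.2f}")
```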
==== Feedforward inhibition model ====
The feedforward inhibition model is similar to the mutual inhibition model, but instead of being inhibited by the current value of the other accumulator, each accumulator is inhibited by a fraction of the input to the other. It can be formally stated thus:
$$\begin{aligned}dy_{1} &= I_{1}\,dt + c\,dW_{1} - u\,(I_{2}\,dt + c\,dW_{2})\\ dy_{2} &= I_{2}\,dt + c\,dW_{2} - u\,(I_{1}\,dt + c\,dW_{1})\end{aligned},\qquad y_{1}(0)=y_{2}(0)=0$$

where $u$ is the fraction of accumulator input that inhibits the alternate accumulator.
==== Pooled inhibition model ====
Wang suggested the pooled inhibition model, where a third, decaying accumulator is driven by accumulation in both of the accumulators used for decision making, and in addition to the decay used in the mutual inhibition model, each of the decision driving accumulators self-reinforce based on their current value. It can be formally stated thus:
$$\begin{aligned}dy_{1} &= (-k y_{1} - w y_{3} + v y_{1} + I_{1})\,dt + c\,dW_{1}\\ dy_{2} &= (-k y_{2} - w y_{3} + v y_{2} + I_{2})\,dt + c\,dW_{2}\\ dy_{3} &= (-k_{\text{inh}} y_{3} + w'(y_{1} + y_{2}))\,dt\end{aligned}$$

The third accumulator has an independent decay coefficient, $k_{\text{inh}}$, and increases based on the current values of the other two accumulators, at a rate modulated by $w'$.
== Neural correlates of decision making ==
=== Brain areas ===
In the parietal lobe, the firing rate of lateral intraparietal cortex (LIP) neurons in monkeys predicted the choice response for direction of motion, suggesting that this area is involved in decision making in the 2AFC task.
Neural data recorded from LIP neurons in rhesus monkeys support the DDM, as the firing rates of the direction-selective neuronal populations sensitive to the two directions used in the 2AFC task increase at stimulus onset, and average activity in the neuronal populations is biased in the direction of the correct response. In addition, it appears that a fixed threshold of neuronal spiking rate is used as the decision boundary for each 2AFC task.
== See also ==
Choice modelling
Choice set
Julian Rotter
== References == | Wikipedia/Two-alternative_forced_choice |
A psychometric function is an inferential psychometric model applied in detection and discrimination tasks. It models the relationship between a given feature of a physical stimulus, e.g. velocity, duration, brightness, weight etc., and forced-choice responses of a human or animal test subject. The psychometric function therefore is a specific application of the generalized linear model (GLM) to psychophysical data. The probability of response is related to a linear combination of predictors by means of a sigmoid link function (e.g. probit, logit, etc.).
== Design ==
Depending on the number of choices, psychophysical experimental paradigms are classified as simple forced choice (also known as the yes-no task), two-alternative forced choice (2AFC), and n-alternative forced choice. The number of alternatives in the experiment determines the lower asymptote of the function.
== Example ==
A common example is visual acuity testing with an eye chart. The person sees symbols of different sizes (the size is the relevant physical stimulus parameter) and has to decide which symbol it is. Usually, there is one line on the chart where a subject can identify some, but not all, symbols. This is equal to the transition range of the psychometric function and the sensory threshold corresponds to visual acuity. (Strictly speaking, a typical optometric measurement does not exactly yield the sensory threshold due to biases in the standard procedure.)
== Plotting ==
Two different types of psychometric plots are in common use:
Plot the percentage of correct responses (or a similar value) displayed on the y-axis and the physical parameter on the x-axis. If the stimulus parameter is very far towards one end of its possible range, the person will always be able to respond correctly. Towards the other end of the range, the person never perceives the stimulus properly and therefore the probability of correct responses is at chance level. In between, there is a transition range where the subject has an above-chance rate of correct responses, but does not always respond correctly. The inflection point of the sigmoid function or the point at which the function reaches the middle between the chance level and 100% is usually taken as sensory threshold.
Plot the proportion of "yes" responses on the y-axis, and therefore create a sigmoidal shape covering the range [0, 1], rather than merely [0.5, 1]. This moves from a subject being certain that the stimulus was not of the particular type requested to certainty that it was.
The second way of plotting psychometric functions is often preferable, as it is more easily amenable to principled quantitative analysis using tools such as probit analysis (fitting of cumulative Gaussian distributions). However, it also has important drawbacks. First, the threshold estimate is based only on p(yes), namely on "hits" in signal detection theory terminology. Second, and consequently, it is not bias-free or criterion-free. Third, the threshold is identified with p(yes) = .5, which is just a conventional and arbitrary choice.
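The probit-style analysis mentioned above can be sketched in a few lines: a cumulative Gaussian is fitted by maximum likelihood to the proportion of "yes" responses at each stimulus level, and the fitted mean is read off as the threshold (the level where p(yes) = 0.5). The stimulus levels and response counts below are invented purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical yes/no detection data (invented for illustration):
# stimulus levels, number of trials at each level, and number of "yes" responses.
level   = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
n_trial = np.array([40, 40, 40, 40, 40, 40])
n_yes   = np.array([2, 6, 15, 27, 35, 39])

def neg_log_likelihood(params):
    """Binomial negative log-likelihood of a cumulative-Gaussian psychometric function."""
    mu, sigma = params[0], abs(params[1])        # mu = threshold, sigma = spread (kept positive)
    p = norm.cdf(level, loc=mu, scale=sigma)     # predicted p(yes) at each stimulus level
    p = np.clip(p, 1e-6, 1 - 1e-6)               # guard against log(0)
    return -np.sum(n_yes * np.log(p) + (n_trial - n_yes) * np.log(1 - p))

fit = minimize(neg_log_likelihood, x0=[3.5, 1.0], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], abs(fit.x[1])
print(f"threshold (level where p(yes) = 0.5): {mu_hat:.2f}   slope parameter: {sigma_hat:.2f}")
```

For a forced-choice experiment the same fit would include a fixed lower asymptote at the chance level (e.g. 0.5 for 2AFC), consistent with the dependence on the number of alternatives noted earlier.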
== References ==
Wichmann, Felix A., and Frank Jäkel. "Methods in psychophysics." Stevens' Handbook of Experimental Psychology and Cognitive Neuroscience 5 (2018): 1-42. | Wikipedia/Psychometric_function |
Psychotherapy (also psychological therapy, talk therapy, or talking therapy) is the use of psychological methods, particularly when based on regular personal interaction, to help a person change behavior, increase happiness, and overcome problems. Psychotherapy aims to improve an individual's well-being and mental health, to resolve or mitigate troublesome behaviors, beliefs, compulsions, thoughts, or emotions, and to improve relationships and social skills. Numerous types of psychotherapy have been designed either for individual adults, families, or children and adolescents. Some types of psychotherapy are considered evidence-based for treating diagnosed mental disorders; other types have been criticized as pseudoscience.
There are hundreds of psychotherapy techniques, some being minor variations; others are based on very different conceptions of psychology. Most approaches involve one-to-one sessions, between the client and therapist, but some are conducted with groups, including couples and families.
Psychotherapists may be mental health professionals such as psychiatrists, psychologists, mental health nurses, clinical social workers, marriage and family therapists, or licensed professional counselors. Psychotherapists may also come from a variety of other backgrounds, and depending on the jurisdiction may be legally regulated, voluntarily regulated or unregulated (and the term itself may be protected or not).
Psychotherapy has shown general efficacy across a range of conditions, although its effectiveness varies by individual and condition. While large-scale reviews support its benefits, debates continue over the best methods for evaluating outcomes, including the use of randomized controlled trials versus individualized approaches. A 2022 umbrella review of 102 meta-analyses found that effect sizes for both psychotherapies and medications were generally small, leading researchers to recommend a paradigm shift in mental health research. Although many forms of therapy differ in technique, they often produce similar outcomes, leading to theories that common factors—such as the therapeutic relationship—are key drivers of effectiveness. Challenges include high dropout rates, limited understanding of mechanisms of change, potential adverse effects, and concerns about therapist adherence to treatment fidelity. Critics have raised questions about psychotherapy’s scientific basis, cultural assumptions, and power dynamics, while others argue it is underutilized compared to pharmacological treatments.
== Definitions ==
The term psychotherapy is derived from Ancient Greek psyche (ψυχή meaning "breath; spirit; soul") and therapeia (θεραπεία "healing; medical treatment"). The Oxford English Dictionary defines it as "The treatment of disorders of the mind or personality by psychological means...", however, in earlier use, it denoted the treatment of disease through hypnotic suggestion. Psychotherapy is often dubbed as a "talking therapy" or "talk therapy", particularly for a general audience, though not all forms of psychotherapy rely on verbal communication. Children or adults who do not engage in verbal communication (or not in the usual way) are not excluded from psychotherapy; indeed some types are designed for such cases.
The American Psychological Association adopted a resolution on the effectiveness of psychotherapy in 2012 based on a definition developed by American psychologist John C. Norcross: "Psychotherapy is the informed and intentional application of clinical methods and interpersonal stances derived from established psychological principles for the purpose of assisting people to modify their behaviors, cognitions, emotions, and/or other personal characteristics in directions that the participants deem desirable". Influential editions of a work by psychiatrist Jerome Frank defined psychotherapy as a healing relationship using socially authorized methods in a series of contacts primarily involving words, acts and rituals—which Frank regarded as forms of persuasion and rhetoric. Historically, psychotherapy has sometimes meant "interpretative" (i.e. Freudian) methods, namely psychoanalysis, in contrast with other methods to treat psychiatric disorders such as behavior modification.
Some definitions of counseling overlap with psychotherapy (particularly in non-directive client-centered approaches), or counseling may refer to guidance for everyday problems in specific areas, typically for shorter durations and with a less medical or "professional" focus. Somatotherapy refers to treatment by physical or bodily means, as with injuries and illnesses, and sociotherapy to the use of a person's social environment to effect therapeutic change. Psychotherapy may address spirituality as a significant part of someone's mental or psychological life, and some forms are derived from spiritual philosophies, but practices based on treating the spiritual as a separate dimension are not necessarily considered traditional or 'legitimate' forms of psychotherapy.
== Delivery ==
Psychotherapy may be delivered in person (one-on-one, or with couples, families, or groups) or via telephone counseling or online counseling (see also § Telepsychotherapy). There have also been developments in computer-assisted therapy, such as virtual reality therapy for behavioral exposure, multimedia programs to teach cognitive techniques, and handheld devices for improved monitoring or putting ideas into practice (see also § Computer-supported).
Most forms of psychotherapy use spoken conversation. Some also use various other forms of communication such as the written word, artwork, drama, narrative story or music. Psychotherapy with children and their parents often involves play, dramatization (i.e. role-play), and drawing, with a co-constructed narrative from these non-verbal and displaced modes of interacting.
== Regulation ==
Psychotherapists traditionally may be mental health professionals like psychologists and psychiatrists; professionals from other backgrounds (family therapists, social workers, nurses, etc.) who have trained in a specific psychotherapy; or (in some cases) academically or scientifically trained professionals. In addition to the training, many countries require psychotherapists to register with a professional body in order to be permitted to offer services.
Psychiatrists are trained first as physicians and, as such, may prescribe medication; their specialist psychiatric training begins after medical school, in psychiatric residencies, and their specialty is the treatment of mental disorders or forms of mental illness. Clinical psychologists have specialist doctoral degrees in psychology with clinical and research components. Other clinical practitioners, such as social workers, mental health counselors, pastoral counselors, and nurses with a specialization in mental health, also often conduct psychotherapy. Many of the wide variety of psychotherapy training programs and institutional settings are multi-professional. In most countries, psychotherapy training is completed at a postgraduate level, often at a master's (or doctoral) level, over four years, with significant clinical supervision and clinical placements. Mental health professionals who choose to specialize in psychotherapeutic work also require a program of continuing professional education after basic professional training.
A listing of the extensive professional competencies of a European psychotherapist was developed by the European Association for Psychotherapy (EAP) in 2013.
As sensitive and deeply personal topics are often discussed during psychotherapy, therapists are expected, and usually legally bound, to respect client or patient confidentiality. The critical importance of client confidentiality—and the limited circumstances in which it may need to be broken for the protection of clients or others—is enshrined in the regulatory psychotherapeutic organizations' codes of ethical practice. Examples of when it is typically accepted to break confidentiality include when the therapist has knowledge that a child or elder is being physically abused; when there is a direct, clear and imminent threat of serious physical harm to self or to a specific individual. In some countries psychotherapists are required by law to be mandated reporters.
=== Europe ===
As of 2015, there was still considerable variation among European countries in the regulation and delivery of psychotherapy. Several countries have no regulation of the practice or no protection of the title. Some have a system of voluntary registration, with independent professional organizations, while other countries attempt to restrict the practice of psychotherapy to 'mental health professionals' (psychologists and psychiatrists) with state-certified training. The titles that are protected also vary. The European Association for Psychotherapy (EAP) established the 1990 Strasbourg Declaration on Psychotherapy, which is dedicated to establishing an independent profession of psychotherapy in Europe, with pan-European standards. The EAP has already made significant contacts with the European Union and European Commission towards this end.
Given that the European Union has a primary policy about the free movement of labor within Europe, European legislation can overrule national regulations that are, in essence, forms of restrictive practices.
In Germany, the practice of psychotherapy for adults is restricted to qualified psychologists and physicians (including psychiatrists) who have completed several years of specialist practical training and certification in psychotherapy. As psychoanalysis, psychodynamic therapy, cognitive behavioral therapy, and systemic therapy meet the requirements of German health insurance companies, mental health professionals regularly opt for one of these four specializations in their postgraduate training. For psychologists, this includes three years of full-time practical training (4,200 hours), encompassing a year-long internship at an accredited psychiatric institution, six months of clinical work at an outpatient facility, 600 hours of supervised psychotherapy in an outpatient setting, and at least 600 hours of theoretical seminars. Social workers may complete the specialist training for child and teenage clients. Similarly in Italy, the practice of psychotherapy is restricted to graduates in psychology or medicine who have completed four years of recognised specialist training. Sweden has a similar restriction on the title "psychotherapist", which may only be used by professionals who have gone through a post-graduate training in psychotherapy and then applied for a licence, issued by the National Board of Health and Welfare.
Legislation in France restricts the use of the title "psychotherapist" to professionals on the National Register of Psychotherapists, which requires a training in clinical psychopathology and a period of internship which is only open to physicians or titulars of a master's degree in psychology or psychoanalysis.
Austria and Switzerland (2011) have laws that recognize multi-disciplinary functional approaches.
In the United Kingdom, the government and the Health and Care Professions Council considered mandatory legal registration but decided that it was best left to professional bodies to regulate themselves, so the Professional Standards Authority for Health and Social Care (PSA) launched an Accredited Voluntary Registers scheme. Counseling and psychotherapy are not protected titles in the United Kingdom. Counsellors and psychotherapists who have trained and qualified to a certain standard (usually a level 4 diploma) can apply to be members of the professional bodies that are listed on the PSA Accredited Registers.
=== United States ===
In some states, counselors or therapists must be licensed to use certain words and titles on self-identification or advertising. In some other states, the restrictions on practice are more closely associated with the charging of fees. Licensing and regulation are performed by various states. Presentation of practice as licensed, but without such a license, is generally illegal. Without a license, for example, a practitioner cannot bill insurance companies. Information about state licensure of psychologists is provided by the American Psychological Association.
In addition to state laws, the American Psychological Association requires its members to adhere to its published Ethical Principles of Psychologists and Code of Conduct. The American Board of Professional Psychology examines and certifies "psychologists who demonstrate competence in approved specialty areas in professional psychology".
=== Canada ===
Regulation of psychotherapy is in the jurisdiction of, and varies among, the provinces and territories.
In Quebec, psychotherapy is a regulated activity which is restricted to psychologists, medical doctors, and holders of a psychotherapy permit issued by the Ordre des psychologues du Québec, the Quebec order of psychologists. Members of certain specified professions, including social workers, couple and family therapists, occupational therapists, guidance counsellors, criminologists, sexologists, psychoeducators, and registered nurses may obtain a psychotherapy permit by completing certain educational and practice requirements; their professional oversight is provided by their own professional orders. Some other professionals who were practising psychotherapy before the current system came into force continue to hold psychotherapy permits alone.
On 1 July 2019, Ontario's Missing Persons Act came into effect, with the purpose of giving police more power to investigate missing persons. It allows police to require (as opposed to permit) health professionals, including psychotherapists, to share otherwise confidential documents about their client, if there is reason to believe their client is missing. Some have expressed concern that this legislation undermines psychotherapy confidentiality and could be abused maliciously by police, while others have praised the act for how it respects privacy and includes checks and balances.
== History ==
Psychotherapy can be said to have been practiced through the ages, as medics, philosophers, spiritual practitioners and people in general used psychological methods to heal others.
In the Western tradition, by the 19th century, a moral treatment movement (then meaning morale or mental) developed based on non-invasive non-restraint therapeutic methods. Another influential movement was started by Franz Mesmer (1734–1815) and his student Armand-Marie-Jacques de Chastenet, Marquis of Puységur (1751–1825). Called Mesmerism or animal magnetism, it would have a strong influence on the rise of dynamic psychology and psychiatry as well as theories about hypnosis. In 1853, Walter Cooper Dendy introduced the term "psycho-therapeia" regarding how physicians might influence the mental states of patients and thus their bodily ailments, for example by creating opposing emotions to promote mental balance. Daniel Hack Tuke cited the term and wrote about "psycho-therapeutics" in 1872 in his book Illustrations of the Influence of the Mind upon the Body in Health and Disease, in which he also proposed making a science of animal magnetism. Hippolyte Bernheim and colleagues in the "Nancy School" developed the concept of "psychotherapy" in the sense of using the mind to heal the body through hypnotism, yet further. Charles Lloyd Tuckey's 1889 work, Psycho-therapeutics, or Treatment by Hypnotism and Suggestion popularized the work of the Nancy School in English. Also in 1889 a clinic used the word in its title for the first time, when Frederik van Eeden and Albert Willem van Renterghem in Amsterdam renamed theirs "Clinique de Psycho-thérapeutique Suggestive" after visiting Nancy. During this time, travelling stage hypnosis became popular, and such activities added to the scientific controversies around the use of hypnosis in medicine. Also in 1892, at the second congress of experimental psychology, van Eeden attempted to take the credit for the term psychotherapy and to distance the term from hypnosis. In 1896, the German journal Zeitschrift für Hypnotismus, Suggestionstherapie, Suggestionslehre und verwandte psychologische Forschungen changed its name to Zeitschrift für Hypnotismus, Psychotherapie sowie andere psychophysiologische und psychopathologische Forschungen, which is probably the first journal to use the term. Thus psychotherapy initially meant "the treatment of disease by psychic or hypnotic influence, or by suggestion".
Sigmund Freud visited the Nancy School and his early neurological practice involved the use of hypnotism. However following the work of his mentor Josef Breuer—in particular a case where symptoms appeared partially resolved by what the patient, Bertha Pappenheim, dubbed a "talking cure"—Freud began focusing on conditions that appeared to have psychological causes originating in childhood experiences and the unconscious mind. He went on to develop techniques such as free association, dream interpretation, transference and analysis of the id, ego and superego. His popular reputation as the father of psychotherapy was established by his use of the distinct term "psychoanalysis", tied to an overarching system of theories and methods, and by the effective work of his followers in rewriting history. Many theorists, including Alfred Adler, Carl Jung, Karen Horney, Anna Freud, Otto Rank, Erik Erikson, Melanie Klein and Heinz Kohut, built upon Freud's fundamental ideas and often developed their own systems of psychotherapy. These were all later categorized as psychodynamic, meaning anything that involved the psyche's conscious/unconscious influence on external relationships and the self. Sessions tended to number into the hundreds over several years.
Behaviorism developed in the 1920s, and behavior modification as a therapy became popularized in the 1950s and 1960s. Notable contributors were Joseph Wolpe in South Africa, M.B. Shapiro and Hans Eysenck in Britain, and John B. Watson and B.F. Skinner in the United States. Behavioral therapy approaches relied on principles of operant conditioning, classical conditioning and social learning theory to bring about therapeutic change in observable symptoms. The approach became commonly used for phobias, as well as other disorders.
Some therapeutic approaches developed out of the European school of existential philosophy. Concerned mainly with the individual's ability to develop and preserve a sense of meaning and purpose throughout life, major contributors to the field in the United States (e.g., Irvin Yalom, Rollo May) and in Europe (Viktor Frankl, Ludwig Binswanger, Medard Boss, R. D. Laing, Emmy van Deurzen) attempted to create therapies sensitive to common "life crises" springing from the essential bleakness of human self-awareness, previously accessible only through the complex writings of existential philosophers (e.g., Søren Kierkegaard, Jean-Paul Sartre, Gabriel Marcel, Martin Heidegger, Friedrich Nietzsche). The uniqueness of the patient-therapist relationship thus also forms a vehicle for therapeutic inquiry. A related body of thought in psychotherapy started in the 1950s with Carl Rogers. Based also on the works of Abraham Maslow and his hierarchy of human needs, Rogers brought person-centered psychotherapy into mainstream focus. The primary requirement was that the client receive three core "conditions" from the counselor or therapist: unconditional positive regard, sometimes described as "prizing" the client's humanity; congruence (authenticity, genuineness, transparency); and empathic understanding. This type of interaction was thought to enable clients to fully experience and express themselves, and thus develop according to their innate potential. Others developed the approach, like Fritz and Laura Perls in the creation of Gestalt therapy, as well as Marshall Rosenberg, founder of Nonviolent Communication, and Eric Berne, founder of transactional analysis. Later these fields of psychotherapy would become what is known as humanistic psychotherapy today. Self-help groups and books became widespread.
During the 1950s, Albert Ellis originated rational emotive behavior therapy (REBT). Independently a few years later, psychiatrist Aaron T. Beck developed a form of psychotherapy known as cognitive therapy. Both of these included relatively short, structured and present-focused techniques aimed at identifying and changing a person's beliefs, appraisals and reaction-patterns, by contrast with the more long-lasting insight-based approach of psychodynamic or humanistic therapies. Beck's approach used primarily the socratic method, and links have been drawn between ancient stoic philosophy and these cognitive therapies.
Cognitive and behavioral therapy approaches were increasingly combined and grouped under the umbrella term cognitive behavioral therapy (CBT) in the 1970s. Many approaches within CBT are oriented towards active/directive yet collaborative empiricism (a form of reality-testing), and assessing and modifying core beliefs and dysfunctional schemas. These approaches gained widespread acceptance as a primary treatment for numerous disorders. A "third wave" of cognitive and behavioral therapies developed, including acceptance and commitment therapy and dialectical behavior therapy, which expanded the concepts to other disorders and/or added novel components and mindfulness exercises. However the "third wave" concept has been criticized as not essentially different from other therapies and having roots in earlier ones as well. Counseling methods developed include solution-focused therapy and systemic coaching.
Postmodern psychotherapies such as narrative therapy and coherence therapy do not impose definitions of mental health and illness, but rather see the goal of therapy as something constructed by the client and therapist in a social context. Systemic therapy also developed, which focuses on family and group dynamics—and transpersonal psychology, which focuses on the spiritual facet of human experience. Other orientations developed in the last three decades include feminist therapy, brief therapy, somatic psychology, expressive therapy, applied positive psychology and the human givens approach. A survey of over 2,500 US therapists in 2006 revealed the most utilized models of therapy and the ten most influential therapists of the previous quarter-century.
== Types ==
There are hundreds of psychotherapy approaches or schools of thought. By 1980 there were more than 250; by 1996 more than 450; and at the start of the 21st century there were over a thousand different named psychotherapies—some being minor variations while others are based on very different conceptions of psychology, ethics (how to live) or technique. In practice therapy is often not of one pure type but draws from a number of perspectives and schools—known as an integrative or eclectic approach. The importance of the therapeutic relationship, also known as therapeutic alliance, between client and therapist is often regarded as crucial to psychotherapy. Common factors theory addresses this and other core aspects thought to be responsible for effective psychotherapy.
Sigmund Freud (1856–1939), a Viennese neurologist who studied with Jean-Martin Charcot in 1885, is often considered the father of modern psychotherapy. His methods included analyzing his patients' dreams in search of important hidden insights into their unconscious minds. Other major elements of his methods, which changed throughout the years, included identification of childhood sexuality, the role of anxiety as a manifestation of inner conflict, the differentiation of parts of the psyche (id, ego, superego), transference and countertransference (the patient's projections onto the therapist, and the therapist's emotional responses to that). Some of his concepts were too broad to be amenable to empirical testing and invalidation, and he was critiqued for this by Jaspers. Numerous major figures elaborated and refined Freud's therapeutic techniques, including Melanie Klein, Donald Winnicott, and others. Since the 1960s, however, the use of Freudian-based analysis for the treatment of mental disorders has declined substantially. Different types of psychotherapy have been created along with the advent of clinical trials to test them scientifically. These incorporate cognitive treatments (after Beck), behavioral treatments (after Skinner and Wolpe), and additional time-limited and focused forms such as interpersonal psychotherapy. For problems of youth and for schizophrenia, forms of family treatment hold value. Some of the ideas arising from therapy are now pervasive, and some are part of the tool set of ordinary clinical practice. They are not simply treatments; they also help in understanding complex behavior.
Therapy may address specific forms of diagnosable mental illness, or everyday problems in managing or maintaining interpersonal relationships or meeting personal goals. A course of therapy may happen before, during or after pharmacotherapy (e.g. taking psychiatric medication).
Psychotherapies are categorized in several different ways. A distinction can be made between those based on a medical model and those based on a humanistic model. In the medical model, the client is seen as unwell and the therapist employs their skill to help the client back to health. The extensive use in the United States of the DSM-IV, the Diagnostic and Statistical Manual of Mental Disorders, is an example of a medically exclusive model. The humanistic or non-medical model, in contrast, strives to depathologise the human condition. The therapist attempts to create a relational environment conducive to experiential learning and to help build the client's confidence in their own natural processes, resulting in a deeper understanding of themselves. The therapist may see themselves as a facilitator or helper.
Another distinction is between individual one-to-one therapy sessions, and group psychotherapy, including couples therapy and family therapy.
Therapies are sometimes classified according to their duration; a small number of sessions over a few weeks or months may be classified as brief therapy (or short-term therapy), while others, where regular sessions take place for years, may be classified as long-term therapy.
Some practitioners distinguish between more "uncovering" (or "depth") approaches and more "supportive" psychotherapy. Uncovering psychotherapy emphasizes facilitating the client's insight into the roots of their difficulties. The best-known example is classical psychoanalysis. Supportive psychotherapy by contrast stresses strengthening the client's coping mechanisms and often providing encouragement and advice, as well as reality-testing and limit-setting where necessary. Depending on the client's issues and situation, a more supportive or more uncovering approach may be optimal.
=== Humanistic ===
These psychotherapies, also known as "experiential", are based on humanistic psychology and emerged in reaction to both behaviorism and psychoanalysis, being dubbed the "third force". They are primarily concerned with the human development and needs of the individual, with an emphasis on subjective meaning, a rejection of determinism, and a concern for positive growth rather than pathology. Some posit an inherent human capacity to maximize potential, "the self-actualizing tendency"; the task of therapy is to create a relational environment where this tendency might flourish. Humanistic psychology can, in turn, be rooted in existentialism—the belief that human beings can only find meaning by creating it. This is the goal of existential therapy. Existential therapy is in turn philosophically associated with phenomenology.
Person-centered therapy, also known as client-centered, focuses on the therapist showing openness, empathy and "unconditional positive regard", to help clients express and develop their own self.
Humanistic Psychodrama (HPD) is based on the image of the human being in humanistic psychology, so all of its rules and methods follow the axioms of humanistic psychology. HPD sees itself as development-oriented psychotherapy and has completely moved away from the psychoanalytic catharsis theory.
Self-awareness and self-realization are essential aspects of the therapeutic process. Subjective experiences, feelings, and thoughts are the starting point for a change or reorientation in experience and behavior in the direction of greater self-acceptance and satisfaction. Dealing with the individual's biography is closely related to the sociometry of the group.
Gestalt therapy, originally called "concentration therapy", is an existential/experiential form that facilitates awareness in the various contexts of life, by moving from talking about relatively remote situations to action and direct current experience. Derived from various influences, including an overhaul of psychoanalysis, it stands on top of essentially four load-bearing theoretical walls: phenomenological method, dialogical relationship, field-theoretical strategies, and experimental freedom.
A briefer form of humanistic therapy is the human givens approach, introduced in 1998–99. It is a solution-focused intervention based on identifying emotional needs—such as for security, autonomy and social connection—and using various educational and psychological methods to help people meet those needs more fully or appropriately.
=== Insight-oriented ===
Insight-oriented psychotherapies focus on revealing or interpreting unconscious processes. Most commonly referring to psychodynamic therapy, of which psychoanalysis is the oldest and most intensive form, these applications of depth psychology encourage the verbalization of all the patient's thoughts, including free associations, fantasies, and dreams, from which the analyst formulates the nature of the past and present unconscious conflicts which are causing the patient's symptoms and character problems.
There are six main schools of psychoanalysis, which all influenced psychodynamic theory: Freudian, ego psychology, object relations theory, self psychology, interpersonal psychoanalysis, and relational psychoanalysis. Techniques for analytic group therapy have also developed.
=== Cognitive-behavioral ===
Behavior therapies use behavioral techniques, including applied behavior analysis (also known as behavior modification), to change maladaptive patterns of behavior to improve emotional responses, cognitions, and interactions with others. Functional analytic psychotherapy is one form of this approach. By nature, behavioral therapies are empirical (data-driven), contextual (focused on the environment and context), functional (interested in the effect or consequence a behavior ultimately has), probabilistic (viewing behavior as statistically predictable), monistic (rejecting mind-body dualism and treating the person as a unit), and relational (analyzing bidirectional interactions).
Cognitive therapy focuses directly on changing the thoughts, in order to improve the emotions and behaviors.
Cognitive behavioral therapy attempts to combine the above two approaches, focused on the construction and reconstruction of people's cognitions, emotions and behaviors. Generally in CBT, the therapist, through a wide array of modalities, helps clients assess, recognize and deal with problematic and dysfunctional ways of thinking, emoting and behaving.
The concept of "third wave" psychotherapies reflects an influence of Eastern philosophy in clinical psychology, incorporating principles such as meditation into interventions such as mindfulness-based cognitive therapy, acceptance and commitment therapy, and dialectical behavior therapy for borderline personality disorder.
Interpersonal psychotherapy (IPT) is a relatively brief form of psychotherapy (deriving from both CBT and psychodynamic approaches) that has been increasingly studied and endorsed by guidelines for some conditions. It focuses on the links between mood and social circumstances, helping to build social skills and social support. It aims to foster adaptation to current interpersonal roles and situations.
Exposure and response prevention (ERP) is primarily deployed by therapists in the treatment of OCD. The American Psychiatric Association (APA) states that CBT drawing primarily on behavioral techniques (such as ERP) has the "strongest evidence base" among psychosocial interventions. By confronting feared scenarios (i.e., exposure) and refraining from performing rituals (i.e., response prevention), patients may gradually feel less distress when confronting feared stimuli, while also feeling less inclination to use rituals to relieve that distress. Typically, ERP is delivered in a "hierarchical fashion", meaning patients confront increasingly anxiety-provoking stimuli as they progress through a course of treatment.
Other types include reality therapy/choice theory, multimodal therapy, and therapies for specific disorders including PTSD therapies such as cognitive processing therapy, substance abuse therapies such as relapse prevention and contingency management; and co-occurring disorders therapies such as Seeking Safety.
=== Systemic ===
Systemic therapy seeks to address people not just individually, as is often the focus of other forms of therapy, but in relationship, dealing with the interactions of groups, their patterns and dynamics (includes family therapy and marriage counseling). Community psychology is a type of systemic psychology.
The term group therapy was first used around 1920 by Jacob L. Moreno, whose main contribution was the development of psychodrama, in which groups were used as both cast and audience for the exploration of individual problems by reenactment under the direction of the leader. The more analytic and exploratory use of groups in both hospital and out-patient settings was pioneered by a few European psychoanalysts who emigrated to the US, such as Paul Schilder, who treated severely neurotic and mildly psychotic out-patients in small groups at Bellevue Hospital, New York. The power of groups was most influentially demonstrated in Britain during the Second World War, when several psychoanalysts and psychiatrists proved the value of group methods for officer selection in the War Office Selection Boards. A chance to run an Army psychiatric unit on group lines was then given to several of these pioneers, notably Wilfred Bion and Rickman, followed by S. H. Foulkes, Main, and Bridger. The Northfield Hospital in Birmingham gave its name to what came to be called the two "Northfield Experiments", which provided the impetus for the development since the war of both social therapy, that is, the therapeutic community movement, and the use of small groups for the treatment of neurotic and personality disorders. Today group therapy is used in clinical settings and in private practice settings.
=== Expressive ===
Expressive psychotherapy is a form of therapy that utilizes artistic expression (via improvisational, compositional, re-creative, and receptive experiences) as its core means of treating clients. Expressive psychotherapists use the different disciplines of the creative arts as therapeutic interventions. This includes modalities such as dance therapy, drama therapy, art therapy, music therapy, and writing therapy, among others. It may also include techniques such as affect labeling. Expressive psychotherapists believe that often the most effective way of treating a client is through the expression of imagination in creative work, and through integrating and processing the issues that are raised in the act.
=== Postmodernist ===
Also known as post-structuralist or constructivist. Narrative therapy gives attention to each person's "dominant story" through therapeutic conversations, which also may involve exploring unhelpful ideas and how they came to prominence. Possible social and cultural influences may be explored if the client deems it helpful. Coherence therapy posits multiple levels of mental constructs that create symptoms as a way to strive for self-protection or self-realization. Feminist therapy does not accept that there is one single or correct way of looking at reality and therefore is considered a postmodernist approach.
=== Other ===
Transpersonal psychology addresses the client in the context of a spiritual understanding of consciousness. Positive psychotherapy (PPT) (since 1968) is a method in the field of humanistic and psychodynamic psychotherapy and is based on a positive image of humans, with a health-promoting, resource-oriented and conflict-centered approach.
Hypnotherapy is undertaken while a subject is in a state of hypnosis. Hypnotherapy is often applied in order to modify a subject's behavior, emotional content, and attitudes, as well as a wide range of conditions including: dysfunctional habits, anxiety, stress-related illness, pain management, and personal development.
Psychedelic therapy refers to therapeutic practices involving psychedelic drugs, such as LSD, psilocybin, DMT, and MDMA. In psychedelic therapy, in contrast to conventional psychiatric medication taken by the patient regularly or as needed, patients generally remain in an extended psychotherapy session during the acute psychedelic activity, with additional sessions both before and after in order to help integrate experiences with the psychedelics. Psychedelic therapy has been compared with the shamanic healing rituals of indigenous peoples. Researchers have identified two main differences: the first is the shamanic belief that multiple realities exist and can be explored through altered states of consciousness, and the second is the belief that spirits encountered in dreams and visions are real. The charitable initiative Founders Pledge has written a research report on cost-effective giving opportunities for funding psychedelic-assisted mental health treatments.
Body psychotherapy, part of the field of somatic psychology, focuses on the link between the mind and the body and tries to access deeper levels of the psyche through greater awareness of the physical body and emotions. There are various body-oriented approaches, such as Reichian (Wilhelm Reich) character-analytic vegetotherapy and orgonomy; neo-Reichian bioenergetic analysis; somatic experiencing; integrative body psychotherapy; Ron Kurtz's Hakomi psychotherapy; sensorimotor psychotherapy; Biosynthesis psychotherapy; and Biodynamic psychotherapy. These approaches are not to be confused with body work or body-therapies that seek to improve primarily physical health through direct work (touch and manipulation) on the body, rather than through directly psychological methods.
Some non-Western indigenous therapies have been developed. In African countries this includes harmony restoration therapy, meseron therapy and systemic therapies based on the Ubuntu philosophy.
Integrative psychotherapy is an attempt to combine ideas and strategies from more than one theoretical approach. These approaches include mixing core beliefs and combining proven techniques. Forms of integrative psychotherapy include multimodal therapy, the transtheoretical model, cyclical psychodynamics, systematic treatment selection, cognitive analytic therapy, internal family systems model, multitheoretical psychotherapy and conceptual interaction. In practice, most experienced psychotherapists develop their own integrative approach over time.
=== Child ===
Psychotherapy needs to be adapted to meet the developmental needs of children. Depending on age, it is generally held to be one part of an effective strategy to help the needs of a child within the family setting. Child psychotherapy training programs necessarily include courses in human development. Since children often do not have the ability to articulate thoughts and feelings, psychotherapists will use a variety of media such as musical instruments, sand and toys, crayons, paint, clay, puppets, bibliocounseling (books), or board games. The use of play therapy is often rooted in psychodynamic theory, but other approaches also exist.
In addition to therapy for the child, and sometimes instead of it, children may benefit if their parents work with a therapist, take parenting classes, attend grief counseling, or take other action to resolve stressful situations that affect the child. Parent management training is a highly effective form of psychotherapy that teaches parents skills to reduce their child's behavior problems.
In many cases one psychotherapist works with the caretaker of the child while a colleague works with the child. Accordingly, contemporary thinking on working with the younger age group has leaned towards working with parent and child simultaneously, as well as individually as needed.
== Computer-supported ==
Research on computer-supported and computer-based interventions has increased significantly over the course of the last two decades. The following applications frequently have been investigated:
Virtual reality: VR is a computer-generated scenario that simulates experience. The immersive environment, used for simulated exposure, can be similar to the real world or it can be fantastical, creating a new experience.
Computer-based interventions (or online interventions or internet interventions): These interventions can be described as interactive self-help. They usually entail a combination of text, audio or video elements.
Computer-supported therapy (or blended therapy): Classical psychotherapy is supported by means of online or software application elements. The feasibility of such interventions has been investigated for individual and group therapy.
=== Telepsychotherapy ===
== Effects ==
=== Efficacy ===
There is considerable controversy about whether, or when, psychotherapy efficacy is best evaluated by randomized controlled trials or more individualized idiographic methods.
One issue with trials is what to use as a placebo treatment group or non-treatment control group. Often, this group includes patients on a waiting list, or those receiving some kind of regular non-specific contact or support. Researchers must consider how best to match the use of inert tablets or sham treatments in placebo-controlled studies in pharmaceutical trials, and the issue remains open to differing interpretations, assumptions, and terminology. Another issue is the attempt to standardize and manualize therapies and link them to specific symptoms of diagnostic categories, making them more amenable to research. Some report that this may reduce efficacy or gloss over individual needs. Fonagy and Roth's opinion is that the benefits of the evidence-based approach outweigh the difficulties.
There are several formal frameworks for evaluating whether a psychotherapist is a good fit for a patient. One example is the Scarsdale Psychotherapy Self-Evaluation (SPSE). However, some scales, such as the SPSE, elicit information specific to certain schools of psychotherapy alone (e.g. the superego).
Many psychotherapists believe that the nuances of psychotherapy cannot be captured by questionnaire-style observation, and prefer to rely on their own clinical experiences and conceptual arguments to support the type of treatment they practice. Nevertheless, psychodynamic therapists have increasingly come to believe that evidence-based approaches are appropriate to their methods and assumptions, and have accepted the challenge to implement them in their work.
A pioneer in investigating the results of different psychological therapies was the psychologist Hans Eysenck, who argued that psychotherapy does not produce any improvement in patients and held that behavior therapy was the only effective one. However, it was later revealed that Eysenck (who died in 1997) had falsified data in his studies on this subject, fabricating data that attributed scarcely believable achievements to behavior therapy. Fourteen of his papers were retracted by journals in 2020, and journals issued 64 statements of concern about publications by him. Rod Buchanan, a biographer of Eysenck, has argued that 87 publications by Eysenck should be retracted.
The response rate to psychotherapy varies; no reliable changes due to psychotherapy can be found in up to 33% of patients.
==== Comparison with other treatments ====
Large-scale international reviews of scientific studies have concluded that psychotherapy is effective for numerous conditions. A 2022 umbrella review of 102 meta-analyses found that most effect sizes reported for both psychotherapies and pharmacotherapies, compared to treatment-as-usual or placebo, were small for most disorders and treatments, and concluded that a "paradigm shift in research" was needed to advance the field and improve treatment strategies for mental disorders.
One line of research consistently found that supposedly different forms of psychotherapy show similar effectiveness. According to the 2008 edition of The Handbook of Counseling Psychology: "Meta-analyses of psychotherapy studies have consistently demonstrated that there are no substantial differences in outcomes among treatments". The handbook stated that "little evidence suggests that any one treatment consistently outperforms any other for any specific psychological disorders". This is sometimes called the Dodo bird verdict, after a scene in Alice in Wonderland in which every competitor in a race is declared a winner and all are given prizes.
Further analyses seek to identify the factors that the psychotherapies have in common that seem to account for this, known as common factors theory; for example the quality of the therapeutic relationship, interpretation of problem, and the confrontation of painful emotions.
Outcome studies have been critiqued for being too removed from real-world practice in that they use carefully selected therapists who have been extensively trained and monitored, and patients who may be non-representative of typical patients by virtue of strict inclusionary/exclusionary criteria. Such concerns impact the replication of research results and the ability to generalize from them to practicing therapists.
However, specific therapies have been tested for use with specific disorders, and regulatory organizations in both the UK and US make recommendations for different conditions.
The Helsinki Psychotherapy Study was one of several large long-term clinical trials of psychotherapies. Anxious and depressed patients in two short-term therapies (solution-focused and brief psychodynamic) improved faster, but five years of long-term psychotherapy and psychoanalysis gave greater benefits. Several patient and therapist factors appear to predict suitability for different psychotherapies.
Meta-analyses have established that cognitive behavioural therapy (CBT) and psychodynamic psychotherapy are equally effective in treating depression.
A 2014 meta-analysis of over 11,000 patients found that interpersonal psychotherapy (IPT) is of comparable effectiveness to CBT for depression but is inferior to the latter for eating disorders. For children and adolescents, interpersonal psychotherapy and CBT are the best methods, according to a 2014 meta-analysis of almost 4,000 patients.
=== Adverse effects ===
Research on adverse effects of psychotherapy has been limited, yet worsening of symptoms may be expected to occur in 3% to 15% of patients, with variability across patient and therapist characteristics. Potential problems include deterioration of symptoms or developing new symptoms, strains in other relationships, social stigma, and therapy dependence. Some techniques or therapists may carry more risks than others, and some client characteristics may make them more vulnerable. Side-effects from properly conducted therapy should be distinguished from harms caused by malpractice.
=== Adherence ===
Patient adherence to a course of psychotherapy—continuing to attend sessions or complete tasks—is a major issue.
The dropout level—early termination—ranges from around 30% to 60%, depending partly on how it is defined. The range is lower for research settings for various reasons, such as the selection of clients and how they are inducted. Early termination is associated on average with various demographic and clinical characteristics of clients, therapists and treatment interactions. The high level of dropout has raised some criticism about the relevance and efficacy of psychotherapy.
Most psychologists use between-session tasks in their general therapy work, and cognitive behavioral therapies in particular use and see them as an "active ingredient". It is not clear how often clients do not complete them, but it is thought to be a pervasive phenomenon.
From the other side, the adherence of therapists to therapy protocols and techniques—known as "treatment integrity" or "fidelity"—has also been studied, with complex mixed results. In general, however, it is a hallmark of evidence-based psychotherapy to use fidelity monitoring as part of therapy outcome trials and ongoing quality assurance in clinical implementation.
=== Mechanisms of change ===
It is not yet understood how psychotherapies can succeed in treating mental illnesses. Different therapeutic approaches may be associated with particular theories about what needs to change in a person for a successful therapeutic outcome.
In general, processes of emotional arousal and memory have long been held to play an important role. One theory combining these aspects proposes that permanent change occurs to the extent that the neuropsychological mechanism of memory reconsolidation is triggered and is able to incorporate new emotional experiences.
=== General critiques ===
Some critics are skeptical of the healing power of psychotherapeutic relationships. Some dismiss psychotherapy altogether in the sense of a scientific discipline requiring professional practitioners, instead favoring either nonprofessional help or biomedical treatments. Others have pointed out ways in which the values and techniques of therapists can be harmful as well as helpful to clients (or indirectly to other people in a client's life).
Many resources available to a person experiencing emotional distress—the friendly support of friends, peers, family members, clergy contacts, personal reading, healthy exercise, research, and independent coping—all present considerable value. Critics note that humans have been dealing with crises, navigating severe social problems and finding solutions to life problems long before the advent of psychotherapy.
On the other hand, some argue psychotherapy is under-utilized and under-researched by contemporary psychiatry despite offering more promise than stagnant medication development. In 2015, the US National Institute of Mental Health allocated only 5.4% of its budget to new clinical trials of psychotherapies (medication trials are largely funded by pharmaceutical companies), despite plentiful evidence they can work and that patients are more likely to prefer them.
Further critiques have emerged from feminist, constructionist and discourse-analytical sources. Key to these is the issue of power. In this regard there is a concern that clients are persuaded—both inside and outside the consulting room—to understand themselves and their difficulties in ways that are consistent with therapeutic ideas. This means that alternative ideas (e.g., feminist, economic, spiritual) are sometimes implicitly undermined. Critics suggest that we idealize the situation when we think of therapy only as a helping relationship—arguing instead that it is fundamentally a political practice, in that some cultural ideas and practices are supported while others are undermined or disqualified, and that while it is seldom intended, the therapist–client relationship always participates in society's power relations and political dynamics. A noted academic who espoused this criticism was Michel Foucault.
== See also ==
Improving Access to Psychological Therapies – United Kingdom initiative to improve access to psychological therapies
List of psychotherapy journals
Physical therapy – Profession that helps a disabled person function in everyday life
Psychosomatic medicine – Interdisciplinary medical field exploring various influences on bodily processes
== References ==
== Further reading ==
Bartlett, Steven J. (1987). When You Don't Know Where to Turn: A Self-diagnosing Guide to Counseling and Therapy. Contemporary Books. ISBN 9780809248292.
Bloch, Sidney (2006). Introduction to the Psychotherapies (4th ed.). Oxford University Press. ISBN 0198520921.
Carter, Robert T., ed. (2005). Handbook of Racial-Cultural Psychology and Counseling. OCLC 54905669. Two volumes.
Corey, Gerald (2015). Theory and Practice of Counseling and Psychotherapy (10th ed.). Cengage Learning. ISBN 9781305263727.
Cozolino, Louis (2017). The Neuroscience of Psychotherapy: Healing the Social Brain (3rd ed.). National Geographic Books. ISBN 9780393712643.
DeBord, Kurt A.; Fischer, Ann R.; Bieschke, Kathleen J.; Perez, Ruperto M., eds. (2017). Handbook of Sexual Orientation and Gender Diversity in Counseling and Psychotherapy. American Psychological Association. ISBN 9781433823060.
Foschi, Renato; Innamorati, Marco (2023). A Critical History of Psychotherapy. Routledge, Taylor & Francis. ISBN 9781032364025. Two volumes.
Hofmann, Stefan G., ed. (2017). International Perspectives on Psychotherapy. Springer. ISBN 9783319561936.
Jongsma, Arthur E.; Peterson, L. Mark; Bruce, Timothy J. (2021). The Complete Adult Psychotherapy Treatment Planner (6th ed.). John Wiley & Sons. ISBN 978-1118067864.
McAuliffe, Garrett J., ed. (2021). Culturally Alert Counseling: A Comprehensive Introduction (3rd ed.). SAGE Publications. ISBN 9781483378213.
Prochaska, James O.; Norcross, John C. (2018). Systems of Psychotherapy: A Transtheoretical Analysis (9th ed.). Oxford University Press. ISBN 9780190880415.
Rastogi, Mudita; Wieling, Elizabeth, eds. (2005). Voices of Color: First-Person Accounts of Ethnic Minority Therapists. ISBN 0761928901.
Slavney, Phillip R. (2005). Psychotherapy: An Introduction for Psychiatry Residents and Other Mental Health Trainees. JHU Press. ISBN 0801880963.
Wampold, Bruce E. (2019). The Basics of Psychotherapy: An Introduction to Theory and Practice (2nd ed.). American Psychological Association. ISBN 9781433830198. | Wikipedia/Psychotherapy |
Methods of detecting exoplanets usually rely on indirect strategies – that is, they do not directly image the planet but deduce its existence from another signal. Any planet is an extremely faint light source compared to its parent star. For example, a star like the Sun is about a billion times as bright as the reflected light from any of the planets orbiting it. In addition to the intrinsic difficulty of detecting such a faint light source, the glare from the parent star washes it out. For those reasons, very few of the exoplanets reported as of June 2025 have been observed directly, being resolved from their host star.
== Established detection methods ==
The following methods have proven successful at least once for discovering a new planet or detecting an already discovered planet:
=== Radial velocity ===
A star with a planet will move in its own small orbit in response to the planet's gravity. This leads to variations in the speed with which the star moves toward or away from Earth, i.e. the variations are in the radial velocity of the star with respect to Earth. The radial velocity can be deduced from the displacement in the parent star's spectral lines due to the Doppler effect. The radial-velocity method measures these variations in order to confirm the presence of the planet using the binary mass function.
The speed of the star around the system's center of mass is much smaller than that of the planet, because the radius of its orbit around the center of mass is so small. (For example, the Sun moves by about 13 m/s due to Jupiter, but only about 9 cm/s due to Earth). However, velocity variations down to 3 m/s or even somewhat less can be detected with modern spectrometers, such as the HARPS (High Accuracy Radial Velocity Planet Searcher) spectrometer at the ESO 3.6 meter telescope in La Silla Observatory, Chile, the HIRES spectrometer at the Keck telescopes or EXPRES at the Lowell Discovery Telescope.
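The quoted reflex velocities can be reproduced, at least roughly, from the standard radial-velocity semi-amplitude formula. The sketch below is not from the source; it assumes circular, edge-on orbits (e = 0, sin i = 1) and uses rounded constants, so the printed numbers are approximate.

```python
# Reflex radial-velocity semi-amplitude of a star due to one planet, assuming a
# circular, edge-on orbit: K = (2*pi*G/P)^(1/3) * m_p / (M_star + m_p)^(2/3).
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
YEAR = 3.156e7       # s

def semi_amplitude(m_planet_kg, period_years, m_star_kg=M_SUN, e=0.0, sin_i=1.0):
    P = period_years * YEAR
    return ((2 * math.pi * G / P) ** (1 / 3)
            * m_planet_kg * sin_i
            / (m_star_kg + m_planet_kg) ** (2 / 3)
            / math.sqrt(1 - e ** 2))

print(f"Jupiter's pull on the Sun: {semi_amplitude(1.898e27, 11.86):.1f} m/s")      # ~12.5 m/s
print(f"Earth's pull on the Sun:   {semi_amplitude(5.97e24, 1.0) * 100:.1f} cm/s")  # ~9 cm/s
```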
An especially simple and inexpensive method for measuring radial velocity is "externally dispersed interferometry".
Until around 2012, the radial-velocity method (also known as Doppler spectroscopy) was by far the most productive technique used by planet hunters. (After 2012, the transit method from the Kepler space telescope overtook it in number.) The radial velocity signal is distance independent, but requires high signal-to-noise ratio spectra to achieve high precision, and so is generally used only for relatively nearby stars, out to about 160 light-years from Earth, to find lower-mass planets. It is also not possible to simultaneously observe many target stars at a time with a single telescope. Planets of Jovian mass can be detectable around stars up to a few thousand light years away. This method easily finds massive planets that are close to stars. Modern spectrographs can also easily detect Jupiter-mass planets orbiting 10 astronomical units away from the parent star, but detection of those planets requires many years of observation. Earth-mass planets are currently detectable only in very small orbits around low-mass stars, e.g. Proxima b.
It is easier to detect planets around low-mass stars, for two reasons. First, these stars are more strongly affected by the gravitational tug of their planets. Second, low-mass main-sequence stars generally rotate relatively slowly; fast rotation makes spectral-line data less clear, because half of the star quickly rotates away from the observer's viewpoint while the other half approaches. Detecting planets around more massive stars is easier if the star has left the main sequence, because leaving the main sequence slows down the star's rotation.
Sometimes Doppler spectrography produces false signals, especially in multi-planet and multi-star systems. Magnetic fields and certain types of stellar activity can also give false signals. When the host star has multiple planets, false signals can also arise from having insufficient data, so that multiple solutions can fit the data, as stars are not generally observed continuously. Some of the false signals can be eliminated by analyzing the stability of the planetary system, conducting photometry analysis on the host star and knowing its rotation period and stellar activity cycle periods.
Planets with orbits highly inclined to the line of sight from Earth produce smaller visible wobbles, and are thus more difficult to detect. One of the advantages of the radial velocity method is that the eccentricity of the planet's orbit can be measured directly. One of its main disadvantages is that it can only estimate a planet's minimum mass, M_true sin i. The posterior distribution of the inclination angle i depends on the true mass distribution of the planets. However, when there are multiple planets in the system that orbit relatively close to each other and have sufficient mass, orbital stability analysis allows one to constrain the maximum mass of these planets. The radial-velocity method can be used to confirm findings made by the transit method. When both methods are used in combination, the planet's true mass can be estimated.
Although radial velocity of the star only gives a planet's minimum mass, if the planet's spectral lines can be distinguished from the star's spectral lines, then the radial velocity of the planet itself can be found, and this gives the inclination of the planet's orbit. This enables measurement of the planet's actual mass, rules out false positives, and provides data about the composition of the planet. The main issue is that such detection is possible only if the planet orbits around a relatively bright star and if the planet reflects or emits a lot of light.
=== Transit photometry ===
==== Technique, advantages, and disadvantages ====
While the radial velocity method provides information about a planet's mass, the photometric method can determine the planet's radius. If a planet crosses (transits) in front of its parent star's disk, then the observed visual brightness of the star drops by a small amount, depending on the relative sizes of the star and the planet. For example, in the case of HD 209458, the star dims by 1.7%. However, most transit signals are considerably smaller; for example, an Earth-size planet transiting a Sun-like star produces a dimming of only 80 parts per million (0.008 percent).
A theoretical transiting-exoplanet light-curve model predicts the following characteristics of an observed planetary system: transit depth (δ), transit duration (T), ingress/egress duration (τ), and the period of the exoplanet (P). These observed quantities rest on several simplifying assumptions: for convenience in the calculations, the planet and star are taken to be spherical, the stellar disk uniform, and the orbit circular. The observed parameters of the light curve also change with the path the planet takes across the stellar disk. The transit depth (δ) is the decrease in the normalized flux of the star during a transit; it reflects the radius of the exoplanet relative to the radius of the star, so for a star of a given size, a planet with a larger radius produces a deeper transit and a planet with a smaller radius a shallower one. The transit duration (T) is the length of time the planet spends crossing the stellar disk; it depends on how fast the planet is moving in its orbit as it transits the star. The ingress/egress duration (τ) is the time the planet takes to fully cover the star's limb at the start of the transit (ingress) and to fully uncover it at the end (egress). If the planet crosses the full diameter of the star, the ingress/egress duration is shortest, because its path across the limb is most direct; for chords farther from the diameter, the ingress/egress duration lengthens because the planet spends longer only partially overlapping the star. From these observable parameters, a number of physical parameters (semi-major axis, stellar mass, stellar radius, planet radius, eccentricity, and inclination) can be calculated. Combined with radial-velocity measurements of the star, the mass of the planet can also be determined.
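A minimal numerical sketch of the two simplest of these quantities follows, assuming a dark planet on a circular, centrally crossing orbit across a uniform stellar disk; the function names and values are illustrative only.

```python
import math

R_SUN = 6.957e8       # m
R_EARTH = 6.371e6     # m
R_JUP = 7.149e7       # m
AU = 1.496e11         # m

def transit_depth(r_planet, r_star):
    """Fractional dimming, delta ~ (Rp/R*)^2, for a dark planet on a uniform stellar disk."""
    return (r_planet / r_star) ** 2

def central_transit_duration(period, a, r_star):
    """Approximate duration of a central transit on a circular orbit (same units as period);
    neglects the planet's own radius and any impact parameter."""
    return period / math.pi * math.asin(r_star / a)

# Earth transiting the Sun: depth ~8.4e-5 (~84 parts per million), duration ~13 hours.
print(transit_depth(R_EARTH, R_SUN))
print(central_transit_duration(365.25 * 24, 1.0 * AU, R_SUN))   # hours
# A Jupiter-sized planet produces a ~1% dip.
print(transit_depth(R_JUP, R_SUN))
```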
This method has two major disadvantages. First, planetary transits are observable only when the planet's orbit happens to be perfectly aligned from the astronomers' vantage point. The probability of a planetary orbital plane being directly on the line-of-sight to a star is the ratio of the diameter of the star to the diameter of the orbit (in small stars, the radius of the planet is also an important factor). About 10% of planets with small orbits have such an alignment, and the fraction decreases for planets with larger orbits. For a planet orbiting a Sun-sized star at 1 AU, the probability of a random alignment producing a transit is 0.47%. Therefore, the method cannot guarantee that any particular star is not a host to planets. However, by scanning large areas of the sky containing thousands or even hundreds of thousands of stars at once, transit surveys can find more extrasolar planets than the radial-velocity method. Several surveys have taken that approach, such as the ground-based MEarth Project, SuperWASP, KELT, and HATNet, as well as the space-based COROT, Kepler and TESS missions. The transit method has also the advantage of detecting planets around stars that are located a few thousand light years away. The most distant planets detected by Sagittarius Window Eclipsing Extrasolar Planet Search are located near the galactic center. However, reliable follow-up observations of these stars are nearly impossible with current technology.
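As a rough numerical check of the geometric alignment probability quoted above (a sketch assuming a circular, randomly oriented orbit; values are illustrative):

```python
R_SUN = 6.957e8   # m
AU = 1.496e11     # m

def transit_probability(r_star, a):
    """Geometric probability that a randomly oriented circular orbit of radius a transits."""
    return r_star / a

# A Sun-sized star with a planet at 1 AU: ~0.47 %, as quoted above.
print(100 * transit_probability(R_SUN, 1.0 * AU))
# A hot Jupiter at 0.05 AU transits roughly 10 % of the time.
print(100 * transit_probability(R_SUN, 0.05 * AU))
```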
The second disadvantage of this method is a high rate of false detections. A 2012 study found that the rate of false positives for transits observed by the Kepler mission could be as high as 40% in single-planet systems. For this reason, a star with a single transit detection requires additional confirmation, typically from the radial-velocity method or orbital brightness modulation method. The radial velocity method is especially necessary for Jupiter-sized or larger planets, as objects of that size encompass not only planets, but also brown dwarfs and even small stars. As the false positive rate is very low in stars with two or more planet candidates, such detections often can be validated without extensive follow-up observations. Some can also be confirmed through the transit timing variation method.
Many points of light in the sky have brightness variations that may appear as transiting planets by flux measurements. False positives in the transit photometry method arise in three common forms: blended eclipsing binary systems, grazing eclipsing binary systems, and transits by planet-sized stars. Eclipsing binary systems usually produce deep eclipses that distinguish them from exoplanet transits, since planets are usually smaller than about 2 RJ, but eclipses are shallower for blended or grazing eclipsing binary systems.
Blended eclipsing binary systems consist of a normal eclipsing binary blended with a third (usually brighter) star along the same line of sight, usually at a different distance. The constant light of the third star dilutes the measured eclipse depth, so the light-curve may resemble that for a transiting exoplanet. In these cases, the target most often contains a large main sequence primary with a small main sequence secondary or a giant star with a main sequence secondary.
Grazing eclipsing binary systems are systems in which one object just barely grazes the limb of the other. In these cases, the maximum transit depth of the light curve will not be proportional to the ratio of the squares of the radii of the two stars, but will instead depend solely on the small fraction of the primary that is blocked by the secondary. The small measured dip in flux can mimic that of an exoplanet transit. Some of the false positive cases of this category can be easily found if the eclipsing binary system has a circular orbit, with the two companions having different masses. Due to the cyclic nature of the orbit, there would be two eclipsing events, one of the primary occulting the secondary and vice versa. If the two stars have significantly different masses, and thus different radii and luminosities, then these two eclipses would have different depths. This repetition of a shallow and deep transit event can easily be detected and thus allow the system to be recognized as a grazing eclipsing binary system. However, if the two stellar companions are approximately the same mass, then these two eclipses would be indistinguishable, thus making it impossible to demonstrate that a grazing eclipsing binary system is being observed using only the transit photometry measurements.
Finally, there are two types of stars that are approximately the same size as gas giant planets: white dwarfs and brown dwarfs. This is because gas giant planets, white dwarfs, and brown dwarfs are all supported by degenerate electron pressure. The light curve does not discriminate between masses, as it depends only on the size of the transiting object. When possible, radial velocity measurements are used to verify that the transiting or eclipsing body is of planetary mass, meaning less than 13 MJ. Transit timing variations can also determine MP. Doppler tomography with a known radial velocity orbit can obtain the minimum MP and the projected spin-orbit alignment.
Red giant branch stars pose another problem for planet detection: while planets around these stars are much more likely to transit because of the larger stellar size, the transit signals are hard to separate from the star's own brightness variations, since red giants pulsate in brightness with periods of a few hours to days. This is especially notable with subgiants. In addition, these stars are much more luminous, so transiting planets block a much smaller percentage of their light. In contrast, planets can completely occult a very small star such as a neutron star or white dwarf, an event which would be easily detectable from Earth. However, due to the small star sizes, the chance of a planet aligning with such a stellar remnant is extremely small.
The main advantage of the transit method is that the size of the planet can be determined from the light curve. When combined with the radial-velocity method (which determines the planet's mass), one can determine the density of the planet, and hence learn something about the planet's physical structure. The planets that have been studied by both methods are by far the best-characterized of all known exoplanets.
The transit method also makes it possible to study the atmosphere of the transiting planet. When the planet transits the star, light from the star passes through the upper atmosphere of the planet. By studying the high-resolution stellar spectrum carefully, one can detect elements present in the planet's atmosphere. A planetary atmosphere, and the planet for that matter, could also be detected by measuring the polarization of the starlight as it passes through or is reflected off the planet's atmosphere.
Additionally, the secondary eclipse (when the planet is blocked by its star) allows direct measurement of the planet's radiation and helps to constrain the planet's orbital eccentricity without needing the presence of other planets. If the star's photometric intensity during the secondary eclipse is subtracted from its intensity before or after, only the signal caused by the planet remains. It is then possible to measure the planet's temperature and even to detect possible signs of cloud formations on it. In March 2005, two groups of scientists carried out measurements using this technique with the Spitzer Space Telescope. The two teams, from the Harvard-Smithsonian Center for Astrophysics, led by David Charbonneau, and the Goddard Space Flight Center, led by L. D. Deming, studied the planets TrES-1 and HD 209458b respectively. The measurements revealed the planets' temperatures: 1,060 K (790°C) for TrES-1 and about 1,130 K (860 °C) for HD 209458b. In addition, the hot Neptune Gliese 436 b is known to enter secondary eclipse. However, some transiting planets orbit such that they do not enter secondary eclipse relative to Earth; HD 17156 b is over 90% likely to be one of the latter.
==== History ====
The first exoplanet for which transits were observed was HD 209458 b, which had been discovered using the radial velocity technique. These transits were observed in 1999 by two teams led by David Charbonneau and Gregory W. Henry. The first exoplanet to be discovered with the transit method was OGLE-TR-56b in 2002 by the OGLE project.
A French Space Agency mission, CoRoT, began in 2006 to search for planetary transits from orbit, where the absence of atmospheric scintillation allows improved accuracy. This mission was designed to be able to detect planets "a few times to several times larger than Earth" and performed "better than expected", with two exoplanet discoveries (both of the "hot Jupiter" type) as of early 2008. In June 2013, CoRoT's exoplanet count was 32 with several still to be confirmed. The satellite unexpectedly stopped transmitting data in November 2012 (after its mission had twice been extended), and was retired in June 2013.
In March 2009, NASA mission Kepler was launched to scan a large number of stars in the constellation Cygnus with a measurement precision expected to detect and characterize Earth-sized planets. The NASA Kepler Mission uses the transit method to scan a hundred thousand stars for planets. It was hoped that by the end of its mission of 3.5 years, the satellite would have collected enough data to reveal planets even smaller than Earth. By scanning a hundred thousand stars simultaneously, it was not only able to detect Earth-sized planets, it was able to collect statistics on the numbers of such planets around Sun-like stars.
On 2 February 2011, the Kepler team released a list of 1,235 extrasolar planet candidates, including 54 that may be in the habitable zone. On 5 December 2011, the Kepler team announced that they had discovered 2,326 planetary candidates, of which 207 are similar in size to Earth, 680 are super-Earth-size, 1,181 are Neptune-size, 203 are Jupiter-size and 55 are larger than Jupiter. Compared to the February 2011 figures, the number of Earth-size and super-Earth-size planets increased by 200% and 140% respectively. Moreover, 48 planet candidates were found in the habitable zones of surveyed stars, marking a decrease from the February figure; this was due to the more stringent criteria in use in the December data. By June 2013, the number of planet candidates was increased to 3,278 and some confirmed planets were smaller than Earth, some even Mars-sized (such as Kepler-62c) and one even smaller than Mercury (Kepler-37b).
The Transiting Exoplanet Survey Satellite launched in April 2018.
=== Reflection and emission modulations ===
Short-period planets in close orbits around their stars will undergo reflected light variations because, like the Moon, they go through phases from full to new and back again. In addition, as these planets receive a lot of starlight, it heats them, making thermal emissions potentially detectable. Since telescopes cannot resolve the planet from the star, they see only the combined light, and the brightness of the host star seems to change over each orbit in a periodic manner. Although the effect is small (the photometric precision required is about the same as needed to detect an Earth-sized planet in transit across a solar-type star), Jupiter-sized planets with an orbital period of a few days are detectable by space telescopes such as the Kepler Space Observatory. As with the transit method, it is easier to detect large planets orbiting close to their parent star than other planets, as these planets catch more light from their parent star. When a planet has a high albedo and is situated around a relatively luminous star, its light variations are easier to detect in visible light, while darker planets or planets around low-temperature stars are more easily detectable with infrared light with this method. In the long run, this method may find the most planets that will be discovered by that mission because the reflected light variation with orbital phase is largely independent of orbital inclination and does not require the planet to pass in front of the disk of the star. It still cannot detect planets with circular face-on orbits from Earth's viewpoint, as the amount of reflected light does not change during the orbit.
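A rough sketch of the expected amplitude of the reflected-light signal, assuming a simple scaling of the form A_g (Rp/a)^2 at full phase (the albedo and separation values below are illustrative only):

```python
R_JUP = 7.149e7   # m
AU = 1.496e11     # m

def reflected_light_amplitude(albedo, r_planet, a):
    """Approximate peak planet-to-star flux ratio from reflected light, ~ A_g * (Rp/a)^2."""
    return albedo * (r_planet / a) ** 2

# A hot Jupiter at 0.05 AU with geometric albedo 0.1 gives roughly 1e-5
# (about 10 parts per million), hence the need for transit-level photometric precision.
print(reflected_light_amplitude(0.1, R_JUP, 0.05 * AU))
```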
The phase function of the giant planet is also a function of its thermal properties and atmosphere, if any. Therefore, the phase curve may constrain other planet properties, such as the size distribution of atmospheric particles. When a planet is found transiting and its size is known, the phase variations curve helps calculate or constrain the planet's albedo. It is more difficult with very hot planets as the glow of the planet can interfere when trying to calculate albedo. In theory, albedo can also be found in non-transiting planets when observing the light variations with multiple wavelengths. This allows scientists to find the size of the planet even if the planet is not transiting the star.
The first-ever direct detection of the spectrum of visible light reflected from an exoplanet was made in 2015 by an international team of astronomers. The astronomers studied light from 51 Pegasi b – the first exoplanet discovered orbiting a main-sequence star (a Sunlike star), using the High Accuracy Radial velocity Planet Searcher (HARPS) instrument at the European Southern Observatory's La Silla Observatory in Chile.
Both CoRoT and Kepler have measured the reflected light from planets. However, these planets were already known since they transit their host star. The first planets discovered by this method are Kepler-70b and Kepler-70c, found by Kepler.
=== Relativistic beaming ===
A separate novel method to detect exoplanets from light variations uses relativistic beaming of the observed flux from the star due to its motion. It is also known as Doppler beaming or Doppler boosting. The method was first proposed by Abraham Loeb and Scott Gaudi in 2003. As the planet tugs the star with its gravitation, the density of photons and therefore the apparent brightness of the star changes from observer's viewpoint. Like the radial velocity method, it can be used to determine the orbital eccentricity and the minimum mass of the planet. With this method, it is easier to detect massive planets close to their stars as these factors increase the star's motion. Unlike the radial velocity method, it does not require an accurate spectrum of a star, and therefore can be used more easily to find planets around fast-rotating stars and more distant stars.
One of the biggest disadvantages of this method is that the light variation effect is very small. A Jovian-mass planet orbiting 0.025 AU away from a Sun-like star is barely detectable even when the orbit is edge-on. This is not an ideal method for discovering new planets, as the amount of emitted and reflected starlight from the planet is usually much larger than light variations due to relativistic beaming. This method is still useful, however, as it allows for measurement of the planet's mass without the need for follow-up data collection from radial velocity observations.
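As an order-of-magnitude sketch of why the effect is so small: the fractional photometric amplitude from Doppler beaming is of order a few times the star's radial velocity divided by the speed of light (the exact prefactor depends on the stellar spectrum and the observing bandpass; a value near 4 is often used, and is assumed below).

```python
import math

G = 6.674e-11
C = 2.998e8           # speed of light, m/s
M_SUN = 1.989e30
M_JUP = 1.898e27
AU = 1.496e11

def orbital_period(a, m_star=M_SUN):
    """Keplerian orbital period (s) for a circular orbit of semi-major axis a (m)."""
    return 2 * math.pi * math.sqrt(a ** 3 / (G * m_star))

def beaming_amplitude(m_planet, a, m_star=M_SUN):
    """Approximate fractional photometric amplitude ~ 4 K / c from Doppler beaming."""
    p = orbital_period(a, m_star)
    k = (2 * math.pi * G / p) ** (1 / 3) * m_planet / (m_star + m_planet) ** (2 / 3)
    return 4 * k / C

# A Jupiter-mass planet at 0.025 AU: a few parts per million, hence "barely detectable".
print(beaming_amplitude(M_JUP, 0.025 * AU))
```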
The first discovery of a planet using this method (Kepler-76b) was announced in 2013.
=== Ellipsoidal variations ===
Massive planets can cause slight tidal distortions to their host stars. When a star has a slightly ellipsoidal shape, its apparent brightness varies depending on whether the oblate part of the star is facing the observer. As with the relativistic beaming method, this helps to determine the minimum mass of the planet, and its sensitivity depends on the planet's orbital inclination. The extent of the effect on a star's apparent brightness can be much larger than with the relativistic beaming method, but the brightness cycle repeats twice as fast. In addition, the planet distorts the shape of the star more if it has a low semi-major axis to stellar radius ratio and the density of the star is low. This makes this method suitable for finding planets around stars that have left the main sequence.
=== Pulsar timing ===
A pulsar is a neutron star: the small, ultradense remnant of a star that has exploded as a supernova. Pulsars emit radio waves extremely regularly as they rotate. Because the intrinsic rotation of a pulsar is so regular, slight anomalies in the timing of its observed radio pulses can be used to track the pulsar's motion. Like an ordinary star, a pulsar will move in its own small orbit if it has a planet. Calculations based on pulse-timing observations can then reveal the parameters of that orbit.
This method was not originally designed for the detection of planets, but is so sensitive that it is capable of detecting planets far smaller than any other method can, down to less than a tenth the mass of Earth. It is also capable of detecting mutual gravitational perturbations between the various members of a planetary system, thereby revealing further information about those planets and their orbital parameters. In addition, it can easily detect planets which are relatively far away from the pulsar.
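The sensitivity can be illustrated with a simple light-travel-time estimate: the pulsar's own orbit about the system barycentre delays and advances the observed pulses by that orbit's size divided by the speed of light. The sketch below uses illustrative values loosely based on the PSR B1257+12 system.

```python
M_SUN = 1.989e30      # kg
M_EARTH = 5.972e24    # kg
AU = 1.496e11         # m
C = 2.998e8           # m/s

def timing_residual_amplitude(m_planet, a, m_pulsar=1.4 * M_SUN, sin_i=1.0):
    """Light-travel-time semi-amplitude (s) of pulse arrival times caused by one planet."""
    a_pulsar = a * m_planet / (m_pulsar + m_planet)   # pulsar's orbit about the barycentre
    return a_pulsar * sin_i / C

# A ~4 Earth-mass planet at 0.36 AU shifts pulse arrival times by about a millisecond,
# easily measurable against the microsecond-level regularity of a millisecond pulsar.
print(timing_residual_amplitude(4 * M_EARTH, 0.36 * AU))
```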
There are two main drawbacks to the pulsar timing method: pulsars are relatively rare, and special circumstances are required for a planet to form around a pulsar. Therefore, it is unlikely that a large number of planets will be found this way. Additionally, life would likely not survive on planets orbiting pulsars due to the high intensity of ambient radiation.
In 1992, Aleksander Wolszczan and Dale Frail used this method to discover planets around the pulsar PSR 1257+12. Their discovery was confirmed by 1994, making it the first confirmation of planets outside the Solar System.
=== Variable star timing ===
Like pulsars, some other types of pulsating variable stars are regular enough that radial velocity could be determined purely photometrically from the Doppler shift of the pulsation frequency, without needing spectroscopy. This method is not as sensitive as the pulsar timing variation method, due to the periodic activity being longer and less regular. The ease of detecting planets around a variable star depends on the pulsation period of the star, the regularity of pulsations, the mass of the planet, and its distance from the host star.
The first success with this method came in 2007, when V391 Pegasi b was discovered around a pulsating subdwarf star.
=== Transit timing ===
The transit timing variation method considers whether transits occur with strict periodicity, or if there is a variation. When multiple transiting planets are detected, they can often be confirmed with the transit timing variation method. This is useful in planetary systems far from the Sun, where radial velocity methods cannot detect them due to the low signal-to-noise ratio. If a planet has been detected by the transit method, then variations in the timing of the transit provide an extremely sensitive method of detecting additional non-transiting planets in the system with masses comparable to Earth's. It is easier to detect transit-timing variations if planets have relatively close orbits, and when at least one of the planets is more massive, causing the orbital period of a less massive planet to be more perturbed.
The main drawback of the transit timing method is that usually not much can be learnt about the planet itself. Transit timing variation can help to determine the maximum mass of a planet. In most cases, it can confirm if an object has a planetary mass, but it does not put narrow constraints on its mass. There are exceptions though, as planets in the Kepler-36 and Kepler-88 systems orbit close enough to accurately determine their masses.
The first significant detection of a non-transiting planet using TTV was carried out with NASA's Kepler space telescope. The transiting planet Kepler-19b shows TTV with an amplitude of five minutes and a period of about 300 days, indicating the presence of a second planet, Kepler-19c, which has a period which is a near-rational multiple of the period of the transiting planet.
In circumbinary planets, variations of transit timing are mainly caused by the orbital motion of the stars, instead of gravitational perturbations by other planets. These variations make it harder to detect these planets through automated methods. However, it makes these planets easy to confirm once they are detected.
=== Transit duration variation ===
"Duration variation" refers to changes in how long the transit takes. Duration variations may be caused by an exomoon, apsidal precession for eccentric planets due to another planet in the same system, or general relativity.
When a circumbinary planet is found through the transit method, it can be easily confirmed with the transit duration variation method. In close binary systems, the stars significantly alter the motion of the companion, meaning that any transiting planet has significant variation in transit duration. The first such confirmation came from Kepler-16b.
=== Eclipsing binary minima timing ===
When a binary star system is aligned such that – from the Earth's point of view – the stars pass in front of each other in their orbits, the system is called an "eclipsing binary" star system. The time of minimum light, when the star with the brighter surface is at least partially obscured by the disc of the other star, is called the primary eclipse; approximately half an orbit later, the secondary eclipse occurs when the star with the brighter surface obscures some portion of the other star. These times of minimum light, or central eclipses, constitute a time stamp on the system, much like the pulses from a pulsar (except that rather than a flash, they are a dip in brightness). If there is a planet in circumbinary orbit around the binary stars, the stars will be offset around a binary-planet center of mass. As the stars in the binary are displaced back and forth by the planet, the times of the eclipse minima will vary. The periodicity of this offset may be the most reliable way to detect extrasolar planets around close binary systems. With this method, planets are more easily detectable if they are more massive, orbit relatively closely around the system, and if the stars have low masses.
The eclipse timing method allows the detection of planets further away from the host star than the transit method. However, signals around cataclysmic variable stars hinting at planets tend to match with unstable orbits. In 2011, Kepler-16b became the first planet to be definitely characterized via eclipsing binary timing variations.
=== Gravitational microlensing ===
Gravitational microlensing occurs when the gravitational field of a star acts like a lens, magnifying the light of a distant background star. This effect occurs only when the two stars are almost exactly aligned. Lensing events are brief, lasting for weeks or days, as the two stars and Earth are all moving relative to each other. More than a thousand such events have been observed over the past ten years.
If the foreground lensing star has a planet, then that planet's own gravitational field can make a detectable contribution to the lensing effect. Since that requires a highly improbable alignment, a very large number of distant stars must be continuously monitored in order to detect planetary microlensing contributions at a reasonable rate. This method is most fruitful for planets between Earth and the center of the galaxy, as the galactic center provides a large number of background stars.
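The characteristic scale of a microlensing event is set by the Einstein radius of the lens, which also explains why the method is most sensitive to planets a few astronomical units from their star (see below); the lens mass and distances in this sketch are illustrative only.

```python
import math

G = 6.674e-11
C = 2.998e8
M_SUN = 1.989e30
KPC = 3.086e19        # m
AU = 1.496e11         # m

def einstein_radius(m_lens, d_lens, d_source):
    """Angular Einstein radius (radians) of a point-mass lens of mass m_lens (kg)."""
    return math.sqrt(4 * G * m_lens / C ** 2
                     * (d_source - d_lens) / (d_lens * d_source))

# A 0.3 solar-mass lens star halfway to the galactic bulge:
theta_e = einstein_radius(0.3 * M_SUN, 4 * KPC, 8 * KPC)
print(theta_e * 206265e3)          # ~0.5 milliarcseconds
print(theta_e * 4 * KPC / AU)      # Einstein ring radius ~2 AU at the lens distance,
                                   # so sensitivity peaks for planets at a few AU
```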
In 1991, astronomers Shude Mao and Bohdan Paczyński proposed using gravitational microlensing to look for binary companions to stars, and their proposal was refined by Andy Gould and Abraham Loeb in 1992 as a method to detect exoplanets. Successes with the method date back to 2002, when a group of Polish astronomers (Andrzej Udalski, Marcin Kubiak and Michał Szymański from Warsaw, and Bohdan Paczyński) during project OGLE (the Optical Gravitational Lensing Experiment) developed a workable technique. During one month, they found several possible planets, though limitations in the observations prevented clear confirmation. Since then, several confirmed extrasolar planets have been detected using microlensing. This was the first method capable of detecting planets of Earth-like mass around ordinary main-sequence stars.
Unlike most other methods, which have detection bias towards planets with small (or for resolved imaging, large) orbits, the microlensing method is most sensitive to detecting planets around 1-10 astronomical units away from Sun-like stars.
A notable disadvantage of the method is that the lensing cannot be repeated, because the chance alignment never occurs again. Also, the detected planets will tend to be several kiloparsecs away, so follow-up observations with other methods are usually impossible. In addition, the only physical characteristic that can be determined by microlensing is the mass of the planet, within loose constraints. Orbital properties also tend to be unclear, as the only orbital characteristic that can be directly determined is its current semi-major axis from the parent star, which can be misleading if the planet follows an eccentric orbit. When the planet is far away from its star, it spends only a tiny portion of its orbit in a state where it is detectable with this method, so the orbital period of the planet cannot be easily determined. It is also easier to detect planets around low-mass stars, as the gravitational microlensing effect increases with the planet-to-star mass ratio.
The main advantages of the gravitational microlensing method are that it can detect low-mass planets (in principle down to Mars mass with future space projects such as the Nancy Grace Roman Space Telescope); it can detect planets in wide orbits comparable to Saturn and Uranus, which have orbital periods too long for the radial velocity or transit methods; and it can detect planets around very distant stars. When enough background stars can be observed with enough accuracy, then the method should eventually reveal how common Earth-like planets are in the galaxy.
Observations are usually performed using networks of robotic telescopes. In addition to the European Research Council-funded OGLE, the Microlensing Observations in Astrophysics (MOA) group is working to perfect this approach.
The PLANET (Probing Lensing Anomalies NETwork)/RoboNet project is even more ambitious. It allows nearly continuous round-the-clock coverage by a world-spanning telescope network, providing the opportunity to pick up microlensing contributions from planets with masses as low as Earth's. This strategy was successful in detecting the first low-mass planet on a wide orbit, designated OGLE-2005-BLG-390Lb.
The NASA Nancy Grace Roman Space Telescope scheduled for launch in 2027 includes a microlensing planet survey as one of its three core projects.
=== Direct imaging ===
Planets are extremely faint light sources compared to stars, and what little light comes from them tends to be lost in the glare from their parent star. So in general, it is very difficult to detect and resolve them directly from their host star. Planets orbiting far enough from stars to be resolved reflect very little starlight, so planets are detected through their thermal emission instead. It is easier to obtain images when the planetary system is relatively near to the Solar System, and when the planet is especially large (considerably larger than Jupiter), widely separated from its parent star, and hot so that it emits intense infrared radiation; images have then been made in the infrared, where the planet is brighter than it is at visible wavelengths. Coronagraphs are used to block light from the star, while leaving the planet visible. Direct imaging of an Earth-like exoplanet requires extreme optothermal stability. During the accretion phase of planetary formation, the star-planet contrast may be even better in H alpha than it is in infrared – an H alpha survey is currently underway.
Direct imaging can give only loose constraints on the planet's mass, which is derived from the age of the star and the temperature of the planet. Mass estimates can vary considerably, as planets can form several million years after the star has formed; the cooler the planet is, the lower its mass needs to be. In some cases it is possible to give reasonable constraints on the radius of a planet based on the planet's temperature, its apparent brightness, and its distance from Earth. Because the light of an imaged planet is spatially resolved from that of its star, its spectrum does not have to be disentangled from the stellar spectrum, which eases determination of the planet's chemical composition.
Sometimes observations at multiple wavelengths are needed to rule out the planet being a brown dwarf. Direct imaging can be used to accurately measure the planet's orbit around the star. Unlike the majority of other methods, direct imaging works better with planets with face-on orbits rather than edge-on orbits, as a planet in a face-on orbit is observable during the entirety of the planet's orbit, while planets with edge-on orbits are most easily observable during their period of largest apparent separation from the parent star.
The planets detected through direct imaging currently fall into two categories. The first consists of planets found around stars more massive than the Sun that are young enough that their protoplanetary disks have only recently evolved into debris disks. The second category consists of possible sub-brown dwarfs found around very dim stars, or brown dwarfs which are at least 100 AU away from their parent stars.
Planetary-mass objects not gravitationally bound to a star are found through direct imaging as well.
==== Early discoveries ====
In 2004, a group of astronomers used the European Southern Observatory's Very Large Telescope array in Chile to produce an image of 2M1207b, a companion to the brown dwarf 2M1207. In the following year, the planetary status of the companion was confirmed. The planet is estimated to be several times more massive than Jupiter, and to have an orbital radius greater than 40 AU.
On 13 November 2008 it was published that the Hubble Space Telescope directly observed an exoplanet orbiting Fomalhaut, with a mass no more than 3 MJ. Both systems around Fomalhaut and HR 8799 published that day are surrounded by disks not unlike the Kuiper belt.
On the same day, 13 November 2008, the first multiplanet system was published, first seen in images of October 2007, using telescopes at both the Keck Observatory and Gemini Observatory. Three planets were directly observed orbiting HR 8799, whose masses are approximately ten, ten, and seven times that of Jupiter.
On 21 November 2008, three days after acceptance of a letter to the editor published online on 11 December 2008, it was announced that analysis of images dating back to 2003, revealed a planet orbiting Beta Pictoris.
A companion of the star 1RXS J160929.1−210524, first imaged in April 2008 at a separation of 330 AU, was announced on 8 September 2008 and published on 6 November 2008; in 2010 it was confirmed not to be a chance alignment. It has not yet been confirmed whether the mass of the companion is above or below the deuterium-burning limit.
In 2012, it was announced that a "Super-Jupiter" planet with a mass about 12.8 MJ orbiting Kappa Andromedae was directly imaged using the Subaru Telescope in Hawaii. It orbits its parent star at a distance of about 55 AU, or nearly twice the distance of Neptune from the sun.
Other possible exoplanets to have been directly imaged include GQ Lupi b, DH Tauri b, AB Pictoris b, CHXR 73 b and SCR 1845 b. As of 2006, none have been confirmed as planets; instead, they might themselves be small brown dwarfs.
==== Imaging instruments ====
Several planet-imaging-capable instruments are installed on large ground-based telescopes, such as Gemini Planet Imager, VLT-SPHERE, the Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) instrument, or Palomar Project 1640. In space, there are currently no dedicated exoplanet imaging instruments. Although the James Webb Space Telescope does have some exoplanet imaging capabilities, it has not specifically been designed and optimised for that purpose. The Nancy Grace Roman Space Telescope will be the first space observatory to include a dedicated exoplanet imaging instrument. This instrument is designed by JPL as a demonstrator for a future large observatory in space that will have the imaging of Earth-like exoplanets as one of its primary science goals. Concepts such as the LUVOIR or the HabEx have been proposed.
In 2010, a team from NASA's Jet Propulsion Laboratory demonstrated that a vortex coronagraph could enable small scopes to directly image planets. They did this by imaging the previously imaged HR 8799 planets, using just a 1.5 meter-wide portion of the Hale Telescope.
Another promising approach is nulling interferometry.
It has also been proposed that space-telescopes that focus light using zone plates instead of mirrors would provide higher-contrast imaging, and be cheaper to launch into space due to being able to fold up the lightweight foil zone plate. Another possibility would be to use a large occulter in space designed to block the light of nearby stars in order to observe their orbiting planets, such as the New Worlds Mission.
==== Data Reduction Techniques ====
Post-processing of observational data to enhance the signal of off-axis bodies (i.e. exoplanets) can be accomplished in a variety of ways. All methods rely on some diversity in the data between the central star and any exoplanet companions: this diversity can originate from differences in the spectrum, the angular position, the orbital motion, the polarisation, or the coherence of the light. The most popular technique is Angular Differential Imaging (ADI), in which exposures are acquired at different parallactic angles and the sky is allowed to rotate around the observed central star. The exposures are averaged, each exposure is subtracted by that average, and the residuals are then (de-)rotated so that the faint planetary signal stacks in one place, as in the sketch below.
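A minimal sketch of this classical ADI procedure follows; it uses the median rather than the mean as the reference stellar image (a common robust variant), and the derotation sign convention and parameter names are illustrative rather than those of any specific pipeline.

```python
import numpy as np
from scipy.ndimage import rotate

def simple_adi(frames, parallactic_angles):
    """Minimal classical ADI: subtract a reference stellar image, derotate, and stack.

    frames: array of shape (n_frames, ny, nx) taken in pupil-stabilised mode.
    parallactic_angles: sky rotation (degrees) of each frame.
    """
    frames = np.asarray(frames, dtype=float)
    psf_model = np.median(frames, axis=0)          # quasi-static stellar halo
    residuals = frames - psf_model                 # star removed, planet smeared over angles
    derotated = [rotate(res, -angle, reshape=False, order=1)
                 for res, angle in zip(residuals, parallactic_angles)]
    return np.median(derotated, axis=0)            # planet signal stacks, residual speckles average down
```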
Spectral Differential Imaging (SDI) performs an analogous procedure, but for radial changes in brightness (as a function of wavelength) instead of angular changes.
Combinations of the two are possible (ASDI, SADI, or Combined Differential Imaging "CODI").
=== Polarimetry ===
Light given off by a star is un-polarized, i.e. the direction of oscillation of the light wave is random. However, when the light is reflected off the atmosphere of a planet, the light waves interact with the molecules in the atmosphere and become polarized.
The polarized fraction of the combined light of the planet and star is tiny (about one part in a million), but these measurements can in principle be made with very high sensitivity, as polarimetry is not limited by the stability of the Earth's atmosphere. Another main advantage is that polarimetry allows the composition of the planet's atmosphere to be determined. The main disadvantage is that planets without atmospheres cannot be detected this way. Larger planets and planets with higher albedo are easier to detect through polarimetry, as they reflect more light.
Astronomical devices used for polarimetry, called polarimeters, are capable of detecting polarized light and rejecting unpolarized beams. Groups such as ZIMPOL/CHEOPS and PlanetPol are currently using polarimeters to search for extrasolar planets. The first successful detection of an extrasolar planet using this method came in 2008, when HD 189733 b, a planet discovered three years earlier, was detected using polarimetry. However, no new planets have yet been discovered using this method.
=== Astrometry ===
This method consists of precisely measuring a star's position in the sky, and observing how that position changes over time. Originally, this was done visually, with hand-written records. By the end of the 19th century, this method used photographic plates, greatly improving the accuracy of the measurements as well as creating a data archive. If a star has a planet, then the gravitational influence of the planet will cause the star itself to move in a tiny circular or elliptical orbit. Effectively, star and planet each orbit around their mutual centre of mass (barycenter), as explained by solutions to the two-body problem. Since the star is much more massive, its orbit will be much smaller. Frequently, the mutual centre of mass will lie within the radius of the larger body. Consequently, it is easier to find planets around low-mass stars, especially brown dwarfs.
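The size of the wobble follows from the barycentre argument above: the star's angular displacement scales with the planet-to-star mass ratio and the orbit size, and falls off with distance. The sketch below is illustrative only.

```python
M_SUN = 1.989e30
M_JUP = 1.898e27
M_EARTH = 5.972e24

def astrometric_wobble_mas(m_planet, m_star, a_au, distance_pc):
    """Angular semi-amplitude (milliarcseconds) of the star's wobble about the barycentre."""
    return 1000.0 * (m_planet / m_star) * a_au / distance_pc

# Jupiter seen from 10 parsecs displaces the Sun by ~0.5 mas over its ~12-year orbit;
# the Earth produces only ~0.0003 mas, far below current astrometric precision.
print(astrometric_wobble_mas(M_JUP, M_SUN, 5.2, 10.0))
print(astrometric_wobble_mas(M_EARTH, M_SUN, 1.0, 10.0))
```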
Astrometry is the oldest search method for extrasolar planets, and was originally popular because of its success in characterizing astrometric binary star systems. It dates back at least to statements made by William Herschel in the late 18th century. He claimed that an unseen companion was affecting the position of the star he cataloged as 70 Ophiuchi. The first known formal astrometric calculation for an extrasolar planet was made by William Stephen Jacob in 1855 for this star. Similar calculations were repeated by others for another half-century until finally refuted in the early 20th century.
For two centuries, claims circulated of unseen companions in orbit around nearby stars, all reportedly found using this method, culminating in the prominent 1996 announcement by George Gatewood of multiple planets orbiting the nearby star Lalande 21185. None of these claims survived scrutiny by other astronomers, and the technique fell into disrepute. Unfortunately, changes in stellar position are so small, and atmospheric and systematic distortions so large, that even the best ground-based telescopes cannot produce precise enough measurements. All claims of planetary companions of less than 0.1 solar mass made before 1996 using this method are likely spurious. In 2002, the Hubble Space Telescope did succeed in using astrometry to characterize a previously discovered planet around the star Gliese 876.
The space-based observatory Gaia, launched in 2013, is expected to find thousands of planets via astrometry, but prior to the launch of Gaia, no planet detected by astrometry had been confirmed. SIM PlanetQuest was a US project (cancelled in 2010) that would have had similar exoplanet finding capabilities to Gaia.
One potential advantage of the astrometric method is that it is most sensitive to planets with large orbits. This makes it complementary to other methods that are most sensitive to planets with small orbits. However, very long observation times will be required — years, and possibly decades, as planets far enough from their star to allow detection via astrometry also take a long time to complete an orbit. Planets orbiting around one of the stars in binary systems are more easily detectable, as they cause perturbations in the orbits of stars themselves. However, with this method, follow-up observations are needed to determine which star the planet orbits around.
In 2009, the discovery of VB 10b by astrometry was announced. This planetary object, orbiting the low mass red dwarf star VB 10, was reported to have a mass seven times that of Jupiter. If confirmed, this would have been the first exoplanet discovered by astrometry, of the many that have been claimed through the years. However, recent independent radial velocity studies rule out the existence of the claimed planet.
In 2010, six binary stars were astrometrically measured. One of the star systems, called HD 176051, was found with "high confidence" to have a planet.
In 2018, a study comparing observations from the Gaia spacecraft to Hipparcos data for the Beta Pictoris system was able to measure the mass of Beta Pictoris b, constraining it to 11±2 Jupiter masses. This is in good agreement with previous mass estimations of roughly 13 Jupiter masses.
In 2019, data from the Gaia spacecraft and its predecessor Hipparcos was complemented with HARPS data enabling a better description of ε Indi Ab as the second-closest Jupiter-like exoplanet with a mass of 3 Jupiters on a slightly eccentric orbit with an orbital period of 45 years.
As of 2022, especially thanks to Gaia, the combination of radial velocity and astrometry has been used to detect and characterize numerous Jovian planets, including the nearest Jupiter analogues ε Eridani b and ε Indi Ab. In addition, radio astrometry using the VLBA has been used to discover planets in orbit around TVLM 513-46546 and EQ Pegasi A.
=== X-ray eclipse ===
In September 2020, the detection of a candidate planet orbiting the high-mass X-ray binary M51-ULS-1 in the Whirlpool Galaxy was announced. The planet was detected by eclipses of the X-ray source, which consists of a stellar remnant (either a neutron star or a black hole) and a massive star, likely a B-type supergiant. This is the only method capable of detecting a planet in another galaxy.
=== Disc kinematics ===
Planets in formation can be detected by the signatures they produce in their natal protoplanetary disks. The velocities of the gas in a protoplanetary disk can be observed, and their morphology can reveal the presence of planets. Planets perturb the gas velocities by imprinting strong variations from Keplerian motion. This method is now referred to as "disk kinematics." Notable examples of protoplanetary disks around young stars with signatures of embedded planets include HD 97048, HD 163296 and HD 100546.
== Other possible methods ==
=== Flare and variability echo detection ===
Non-periodic variability events, such as flares, can produce extremely faint echoes in the light curve if they reflect off an exoplanet or other scattering medium in the star system. More recently, motivated by advances in instrumentation and signal processing technologies, echoes from exoplanets are predicted to be recoverable from high-cadence photometric and spectroscopic measurements of active star systems, such as M dwarfs. These echoes are theoretically observable in all orbital inclinations.
=== Transit imaging ===
An optical/infrared interferometer array (e.g., the proposed 16-element interferometer array of the Big Fringe Telescope) does not collect as much light as a single telescope of equivalent size, but has the resolution of a single telescope the size of the array. For bright stars, this resolving power could be used to image a star's surface during a transit event and observe the shadow of the planet transiting. This could provide a direct measurement of the planet's angular radius and, via parallax, its actual radius. This is more accurate than radius estimates based on transit photometry, which are dependent on stellar radius estimates which in turn depend on models of star characteristics. Imaging also provides more accurate determination of the inclination than photometry does.
=== Magnetospheric (auroral) radio emissions ===
Auroral radio emissions from exoplanet magnetospheres could be detected with radio telescopes. The emission may be caused by the exoplanet's magnetic field interacting with a stellar wind, adjacent plasma sources (such as Jupiter's volcanic moon Io travelling through its magnetosphere) or the interaction of the magnetic field with the interstellar medium. Although several discoveries have been claimed, thus far, none have been verified. The most sensitive searches for direct radio emissions from exoplanet magnetic fields, or from exoplanet magnetic fields interacting with those from their host stars, have been conducted with the Arecibo radio telescope.
In addition to allowing for a study of exoplanet magnetic fields, radio emissions may be used to measure the interior rotation rate of an exoplanet.
=== Optical interferometry ===
In March 2019, ESO astronomers, employing the GRAVITY instrument on their Very Large Telescope Interferometer (VLTI), announced the first direct detection of an exoplanet, HR 8799 e, using optical interferometry.
=== Modified interferometry ===
By looking at the wiggles of an interferogram using a Fourier-Transform-Spectrometer, enhanced sensitivity could be obtained in order to detect faint signals from Earth-like planets.
=== Detection of dust trapping around Lagrangian points ===
Identification of dust clumps along a protoplanetary disk demonstrates trace accumulation around Lagrangian points. From the detection of this dust, it can be inferred that a planet exists that has created those accumulations.
=== Gravitational waves ===
The Laser Interferometer Space Antenna (LISA) for observing gravitational waves is expected to detect the presence of large planets and brown dwarfs orbiting white dwarf binaries. The number of such detections in the Milky Way is estimated to range from 17 in a pessimistic scenario to more than 2000 in an optimistic scenario, and even extragalactic detections in the Magellanic Clouds might be possible, far beyond the current capabilities of other detection methods.
== Detection of extrasolar asteroids and debris disks ==
=== Circumstellar disks ===
Disks of space dust (debris disks) surround many stars. The dust can be detected because it absorbs ordinary starlight and re-emits it as infrared radiation. Even if the dust particles have a total mass well less than that of Earth, they can still have a large enough total surface area that they outshine their parent star in infrared wavelengths.
The Hubble Space Telescope is capable of observing dust disks with its NICMOS (Near Infrared Camera and Multi-Object Spectrometer) instrument. Even better images have since been taken by the Spitzer Space Telescope and by the European Space Agency's Herschel Space Observatory, which can see far deeper into infrared wavelengths than the Hubble can. Dust disks have now been found around more than 15% of nearby sunlike stars.
The dust is thought to be generated by collisions among comets and asteroids. Radiation pressure from the star will push the dust particles away into interstellar space over a relatively short timescale. Therefore, the detection of dust indicates continual replenishment by new collisions, and provides strong indirect evidence of the presence of small bodies like comets and asteroids that orbit the parent star. For example, the dust disk around the star Tau Ceti indicates that that star has a population of objects analogous to our own Solar System's Kuiper Belt, but at least ten times thicker.
More speculatively, features in dust disks sometimes suggest the presence of full-sized planets. Some disks have a central cavity, meaning that they are really ring-shaped. The central cavity may be caused by a planet "clearing out" the dust inside its orbit. Other disks contain clumps that may be caused by the gravitational influence of a planet. Both these kinds of features are present in the dust disk around Epsilon Eridani, hinting at the presence of a planet with an orbital radius of around 40 AU (in addition to the inner planet detected through the radial-velocity method). These kinds of planet-disk interactions can be modeled numerically using collisional grooming techniques.
=== Contamination of stellar atmospheres ===
Spectral analysis of white dwarfs' atmospheres often finds contamination by heavier elements like magnesium and calcium. These elements cannot originate from the star's core, and it is probable that the contamination comes from asteroids that got too close (within the Roche limit) to these stars through gravitational interaction with larger planets and were torn apart by the star's tidal forces. Up to 50% of young white dwarfs may be contaminated in this manner.
Additionally, the dust responsible for the atmospheric pollution may be detected by infrared radiation if it exists in sufficient quantity, similar to the detection of debris discs around main sequence stars. Data from the Spitzer Space Telescope suggests that 1-3% of white dwarfs possess detectable circumstellar dust.
In 2015, minor planets were discovered transiting the white dwarf WD 1145+017. This material orbits with a period of around 4.5 hours, and the shapes of the transit light curves suggest that the larger bodies are disintegrating, contributing to the contamination in the white dwarf's atmosphere.
== Space telescopes ==
Most confirmed extrasolar planets have been found using space-based telescopes (as of January 2015). Many of the detection methods can work more effectively with space-based telescopes that avoid atmospheric haze and turbulence. COROT (2007-2012) and Kepler were space missions dedicated to searching for extrasolar planets using transits. COROT discovered about 30 new exoplanets.
Kepler (2009-2013) and K2 (2013- ) have discovered over 2000 verified exoplanets. Hubble Space Telescope and MOST have also found or confirmed a few planets. The infrared Spitzer Space Telescope has been used to detect transits of extrasolar planets, as well as occultations of the planets by their host star and phase curves.
The Gaia mission, launched in December 2013, will use astrometry to determine the true masses of 1000 nearby exoplanets.
TESS (launched in 2018), CHEOPS (launched in 2019) and PLATO (planned for 2026) use or will use the transit method.
== Primary and secondary detection ==
== Verification and falsification methods ==
Verification by multiplicity
Transit color signature
Doppler tomography
Dynamical stability testing
Distinguishing between planets and stellar activity
Transit offset
== Characterization methods ==
Transmission spectroscopy
Emission spectroscopy, phase-resolved
Speckle imaging / Lucky imaging to detect companion stars that the planets could be orbiting instead of the primary star, which would alter planet parameters that are derived from stellar parameters.
Photoeccentric Effect
Rossiter–McLaughlin effect
== See also ==
List of exoplanets
Exomoon
== References ==
== External links ==
NASA's PlanetQuest
Lunine, Jonathan I.; MacIntosh, Bruce; Peale, Stanton (2009). "The detection and characterization of exoplanets". Physics Today. 62 (5): 46. Bibcode:2009PhT....62e..46L. doi:10.1063/1.3141941. S2CID 12379824.
Transiting exoplanet light curves
Hardy, Liam. "Exoplanet Transit". Deep Space Videos. Brady Haran.
The Radial Velocity Equation in the Search for Exoplanets ( The Doppler Spectroscopy or Wobble Method ) Archived 2 December 2021 at the Wayback Machine
Sackett, Penny (2010). "Microlensing exoplanets". Scholarpedia. 5 (1): 3991. Bibcode:2010SchpJ...5.3991S. doi:10.4249/scholarpedia.3991.
Alhejress, Omaymah A. (1 December 2023). "Extraplanetary system under modified gravity". Europhysics Letters. 144 (5): 59001. Bibcode:2023EL....14459001A. doi:10.1209/0295-5075/ad152d. | Wikipedia/Methods_of_detecting_exoplanets |
Geomathematics (also: mathematical geosciences, mathematical geology, mathematical geophysics) is the application of mathematical methods to solve problems in geosciences, including geology and geophysics, and particularly geodynamics and seismology.
== Applications ==
=== Geophysical fluid dynamics ===
Geophysical fluid dynamics develops the theory of fluid dynamics for the atmosphere, ocean and Earth's interior. Applications include geodynamics and the theory of the geodynamo.
=== Geophysical inverse theory ===
Geophysical inverse theory is concerned with analyzing geophysical data to get model parameters. It is concerned with the question: What can be known about the Earth's interior from measurements on the surface? Generally there are limits on what can be known even in the ideal limit of exact data.
The goal of inverse theory is to determine the spatial distribution of some variable (for example, density or seismic wave velocity). The distribution determines the values of an observable at the surface (for example, gravitational acceleration for density). There must be a forward model predicting the surface observations given the distribution of this variable.
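A minimal sketch of this idea is a damped (Tikhonov-regularized) least-squares inversion of a linear forward model d = Gm; the forward operator, damping value and "density" numbers below are purely synthetic illustrations, not a production geophysical code.

```python
import numpy as np

def damped_least_squares(G, d, damping=0.1):
    """Estimate model parameters m from data d = G m + noise using Tikhonov regularisation."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + damping ** 2 * np.eye(n), G.T @ d)

# Toy example: recover a two-parameter "density" model from 20 noisy surface readings.
rng = np.random.default_rng(0)
G = rng.normal(size=(20, 2))                  # hypothetical forward operator
m_true = np.array([2.7, 3.3])                 # e.g. densities in g/cm^3
d = G @ m_true + 0.05 * rng.normal(size=20)   # synthetic observations
print(damped_least_squares(G, d))             # close to m_true; damping trades bias for stability
```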
Applications include geomagnetism, magnetotellurics and seismology.
=== Fractals and complexity ===
Many geophysical data sets have spectra that follow a power law, meaning that the frequency of an observed magnitude varies as some power of the magnitude. An example is the distribution of earthquake magnitudes; small earthquakes are far more common than large earthquakes. This is often an indicator that the data sets have an underlying fractal geometry. Fractal sets have a number of common features, including structure at many scales, irregularity, and self-similarity (they can be split into parts that look much like the whole). The manner in which these sets can be divided determines the Hausdorff dimension of the set, which is generally different from the more familiar topological dimension. Fractal phenomena are associated with chaos, self-organized criticality and turbulence. Fractal Models in the Earth Sciences by Gabor Korvin was one of the earlier books on the application of fractals in the Earth sciences.
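As an illustrative sketch of working with such power-law data, the standard continuous maximum-likelihood estimator recovers the exponent of a power-law size distribution; the synthetic "earthquake" sizes and the chosen exponent below are illustrative only.

```python
import numpy as np

def power_law_exponent(sizes, x_min):
    """Maximum-likelihood estimate of alpha for a density p(x) ~ x^(-alpha), x >= x_min."""
    x = np.asarray(sizes, dtype=float)
    x = x[x >= x_min]
    return 1.0 + x.size / np.sum(np.log(x / x_min))

# Synthetic event sizes drawn from a power law with alpha = 2 (inverse-CDF sampling):
rng = np.random.default_rng(1)
x_min = 1.0
u = rng.random(10000)
sizes = x_min * (1.0 - u) ** (-1.0 / (2.0 - 1.0))
print(power_law_exponent(sizes, x_min))   # ~2.0: small events vastly outnumber large ones
```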
=== Data assimilation ===
Data assimilation combines numerical models of geophysical systems with observations that may be irregular in space and time. Many of the applications involve geophysical fluid dynamics. Fluid dynamic models are governed by a set of partial differential equations. For these equations to make good predictions, accurate initial conditions are needed. However, often the initial conditions are not very well known. Data assimilation methods allow the models to incorporate later observations to improve the initial conditions. Data assimilation plays an increasingly important role in weather forecasting.
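The core of many assimilation schemes is an analysis (update) step that blends a model forecast with new observations, weighted by their respective error covariances; the sketch below shows the standard Kalman-type update with toy matrices chosen purely for illustration.

```python
import numpy as np

def analysis_step(x_forecast, P, y, H, R):
    """One Kalman-type update: blend a model forecast with observations.

    x_forecast : forecast state, P : forecast error covariance,
    y : observations, H : observation operator, R : observation error covariance.
    """
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_analysis = x_forecast + K @ (y - H @ x_forecast)
    P_analysis = (np.eye(P.shape[0]) - K @ H) @ P
    return x_analysis, P_analysis

# Toy example: correct a two-variable forecast with one direct observation of variable 0.
x_f = np.array([1.0, 2.0])
P = np.array([[0.5, 0.1], [0.1, 0.4]])
H = np.array([[1.0, 0.0]])
x_a, P_a = analysis_step(x_f, P, np.array([1.3]), H, np.array([[0.2]]))
print(x_a)   # nudged towards the observation; variable 1 is adjusted via the covariance
```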
=== Geophysical statistics ===
Some statistical problems come under the heading of mathematical geophysics, including model validation and quantifying uncertainty.
=== Terrestrial Tomography ===
An important research area that utilises inverse methods is seismic tomography, a technique for imaging the subsurface of the Earth using seismic waves. Traditionally, seismic waves produced by earthquakes or anthropogenic seismic sources (e.g., explosives, marine air guns) have been used.
=== Crystallography ===
Crystallography is one of the traditional areas of geology that use mathematics. Crystallographers make use of linear algebra through the metrical matrix. The metrical matrix is built from the basis vectors of the unit cell and is used to find the volume of a unit cell, d-spacings, the angle between two planes, the angle between atoms, and bond lengths. Miller indices are also helpful in applying the metrical matrix. Bragg's equation is useful, for example when using an electron microscope, for relating diffraction angles, wavelength, and the d-spacings within a sample.
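A short Python sketch of these calculations, using an assumed orthorhombic unit cell and an assumed wavelength purely for illustration: the metrical matrix gives the cell volume and the d-spacing of a plane with given Miller indices, and Bragg's equation then gives the corresponding diffraction angle.

```python
import numpy as np

# Metrical matrix of a unit cell with edge lengths a, b, c (angstroms)
# and inter-axial angles alpha, beta, gamma (degrees).
def metric_matrix(a, b, c, alpha, beta, gamma):
    al, be, ga = np.radians([alpha, beta, gamma])
    return np.array([
        [a * a,              a * b * np.cos(ga), a * c * np.cos(be)],
        [a * b * np.cos(ga), b * b,              b * c * np.cos(al)],
        [a * c * np.cos(be), b * c * np.cos(al), c * c],
    ])

G = metric_matrix(4.9, 5.4, 6.1, 90, 90, 90)      # assumed orthorhombic cell
volume = np.sqrt(np.linalg.det(G))                # unit-cell volume (cubic angstroms)

hkl = np.array([1, 1, 0])                         # Miller indices of a lattice plane
d = 1.0 / np.sqrt(hkl @ np.linalg.inv(G) @ hkl)   # d-spacing of that plane

# Bragg's equation n*lambda = 2*d*sin(theta), here with n = 1 and an assumed wavelength.
wavelength = 1.5406                                # angstroms (Cu K-alpha, for illustration)
theta = np.degrees(np.arcsin(wavelength / (2.0 * d)))
print(volume, d, theta)
```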
=== Geophysics ===
Geophysics is one of the most mathematically intensive disciplines of Earth science. Its many applications include gravity, magnetic, seismic, electric, electromagnetic, resistivity, radioactivity, induced polarization, and well-logging methods. Gravity and magnetic methods share similar characteristics because both measure small spatial variations in a potential field; in the gravity case, the field depends on the density of the rocks in the area. Although similar, gravity fields tend to be more uniform and smooth than magnetic fields. Gravity is often used for oil exploration; seismic methods can also be used, but they are often significantly more expensive. Seismic methods are nevertheless used more than most other geophysical techniques because of their ability to penetrate, their resolution, and their accuracy.
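A small forward-modelling example in the same spirit: a Python sketch using the standard textbook formula for the vertical gravity anomaly of a buried sphere. The radius, density contrast, and depth below are assumed values, not data from any survey.

```python
import numpy as np

G_CONST = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

# Vertical gravity anomaly along a profile over a buried sphere:
# g_z(x) = G * (4/3 pi R^3 drho) * z / (x^2 + z^2)^(3/2), converted to mGal.
def sphere_anomaly_mgal(x, radius=50.0, drho=500.0, depth=100.0):
    mass_excess = (4.0 / 3.0) * np.pi * radius**3 * drho
    gz = G_CONST * mass_excess * depth / (x**2 + depth**2) ** 1.5   # m/s^2
    return gz * 1e5                                                 # 1 m/s^2 = 1e5 mGal

x = np.linspace(-300.0, 300.0, 7)   # horizontal distances from the sphere's centre (m)
print(list(zip(x, sphere_anomaly_mgal(x))))
```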
=== Geomorphology ===
Many applications of mathematics in geomorphology are related to water. For soils, relationships such as Darcy's law, Stokes' law, and porosity are used.
Darcy's law describes how fluid flows through a uniform, saturated soil or other porous medium. This type of work falls under hydrogeology.
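A minimal Python sketch of Darcy's law with assumed aquifer values (the conductivity, head drop, and cross-sectional area are illustrative only):

```python
# Darcy's law: specific discharge q = -K * dh/dl through a uniform saturated medium,
# and volumetric flow Q = q * A through a cross-section of area A.
def darcy_flow(K, dh, dl, area):
    q = -K * (dh / dl)    # specific discharge (m/s); flow runs down the head gradient
    return q * area       # volumetric flow rate (m^3/s)

# Assumed sandy aquifer: K = 1e-4 m/s, head falls 2 m over 100 m, 50 m^2 cross-section.
print(darcy_flow(K=1e-4, dh=-2.0, dl=100.0, area=50.0))
```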
Stokes' law gives how quickly particles of different sizes settle out of a fluid. It is used in pipette analysis of soils to find the percentages of sand, silt, and clay. A potential source of error is that it assumes perfectly spherical particles, which do not occur in nature.
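A Python sketch of the Stokes settling velocity, with assumed quartz-grain and water properties; as noted above, the spherical-particle assumption is one source of error in practice.

```python
# Stokes' law settling velocity for a small sphere:
# v = 2 * (rho_particle - rho_fluid) * g * r^2 / (9 * mu).
def stokes_velocity(radius, rho_particle=2650.0, rho_fluid=1000.0,
                    mu=1.0e-3, g=9.81):
    return 2.0 * (rho_particle - rho_fluid) * g * radius**2 / (9.0 * mu)   # m/s

# Assumed quartz grains in water: a 20-micron silt particle vs a 2-micron clay particle.
print(stokes_velocity(10e-6), stokes_velocity(1e-6))
```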
Stream power is used to find the ability of a river to incise into its bed. It can indicate where a river is likely to change course, or be used to assess the damage done when a river system loses its stream sediments (for example, downstream of a dam).
Differential equations can be used in multiple areas of geomorphology, including the exponential growth equation, the distribution of sedimentary rocks, the diffusion of gas through rocks, and crenulation cleavages.
=== Glaciology ===
Mathematics in glaciology consists of theoretical, experimental, and modeling work. It usually covers glaciers, sea ice, water flow, and the land under the glacier.
Polycrystalline ice deforms more slowly than single-crystal ice, because the stress falls on basal planes that are already blocked by other ice crystals. Its elastic behaviour can be modeled with Hooke's law using the Lamé constants. Generally, the linear elasticity constants are averaged over one dimension of space to simplify the equations while still maintaining accuracy.
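A small Python sketch of this stress–strain relationship (isotropic Hooke's law written with the Lamé constants); the constants and strain tensor below are illustrative values, not measured properties of polycrystalline ice.

```python
import numpy as np

# Isotropic Hooke's law with Lame constants lambda and mu:
# sigma = lambda * trace(epsilon) * I + 2 * mu * epsilon.
def hookes_law(strain, lam, mu):
    return lam * np.trace(strain) * np.eye(3) + 2.0 * mu * strain

lam, mu = 6.5e9, 3.5e9                   # assumed Lame constants, in Pa
strain = np.diag([1e-4, 0.0, -3e-5])     # small illustrative elastic strain tensor
print(hookes_law(strain, lam, mu))       # resulting stress tensor, in Pa
```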
Viscoelastic polycrystalline ice is considered to experience low stress, usually below one bar. This type of ice system is where one would test for creep or vibrations from tension on the ice. One of the more important equations in this area of study is the relaxation function, a stress–strain relationship independent of time. This area is usually applied to transportation on, or building onto, floating ice.
The shallow-ice approximation is useful for glaciers of variable thickness under small stresses and with variable velocity. One of the main goals of the mathematical work is to predict the stress and velocity, which can be affected by changes in the properties of the ice and in temperature. This is an area in which the basal shear-stress formula can be used.
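A minimal Python sketch of the basal shear-stress (driving-stress) formula used in the shallow-ice approximation, with an assumed ice thickness and surface slope:

```python
import numpy as np

# Driving (basal shear) stress in the shallow-ice approximation:
# tau_b = rho_ice * g * H * sin(alpha), with thickness H and surface slope alpha.
def basal_shear_stress(thickness_m, slope_deg, rho_ice=917.0, g=9.81):
    return rho_ice * g * thickness_m * np.sin(np.radians(slope_deg))   # Pa

# Assumed glacier: 200 m thick with a 3-degree surface slope (~0.9 bar).
print(basal_shear_stress(200.0, 3.0) / 1e5, "bar")
```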
== Academic journals ==
International Journal on Geomathematics
Mathematical Geosciences
== See also ==
Geocomputation
Geoinformatics
International Association for Mathematical Geosciences (IAMG)
== References ==
=== Works cited ===
Parker, Robert L. (1994). Geophysical Inverse Theory. Princeton University Press. ISBN 0-691-03634-9.
Pedlosky, Joseph (2005). Geophysical Fluid Dynamics. Society for Industrial and Applied Mathematics. ISBN 0-89871-572-5.
Tarantola, Albert (1987). Inverse Problem Theory and Methods for Model Parameter Estimation. Springer-Verlag. ISBN 0-387-96387-1.
Turcotte, Donald L. (1997). Fractals and Chaos in Geology and Geophysics. Cambridge University Press. ISBN 0-521-56164-7.
Wang, Bin; Zou, Xiaolei; Zhu, Jiang (2000). "Data assimilation and its applications". Proceedings of the National Academy of Sciences of the United States of America. 97 (21): 11143–11144. Bibcode:2000PNAS...9711143W. doi:10.1073/pnas.97.21.11143. PMC 34050. PMID 11027322.
== Further reading ==
Agterberg, Frits (2014). Geomathematics : theoretical foundations, applications and future developments. Cham: Springer. ISBN 978-3-319-06874-9. OCLC 885024357.
Merriam, Daniel F. (February 1982). "Development, significance, and influence of geomathematics: Observations of one geologist". Mathematical Geology. 14 (1). doi:10.1007/BF01037443.
Freeden, W (2010). Handbook of geomathematics. Berlin London: Springer. ISBN 978-3-642-01546-5. OCLC 676700046.
Bonham-Carter, Graeme; Cheng, Qiuming, eds. (2008). Progress in Geomathematics. Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-540-69496-0. ISBN 978-3-540-69495-3. | Wikipedia/Mathematical_geophysics |
Planets in binary star systems may be candidates for supporting extraterrestrial life. Habitability of binary star systems is determined by many factors from a variety of sources. Estimates often suggest that 50% or more of all star systems are binary systems. This may be partly due to sample bias, as massive and bright stars tend to be in binaries and are most easily observed and catalogued; a more precise analysis has suggested that the more common fainter stars are usually single, and that up to two thirds of all stellar systems are therefore solitary.
The separation between stars in a binary may range from less than one astronomical unit (au, the "average" Earth-to-Sun distance) to several hundred au. In the latter case, the gravitational effect of the companion on a planet orbiting an otherwise suitable star will be negligible, and its habitability potential will not be disrupted unless the orbit is highly eccentric. In reality, some orbital ranges are impossible for dynamical reasons (the planet would be expelled from its orbit relatively quickly, being either ejected from the system altogether or transferred to a more inner or outer orbital range), whilst other orbits present serious challenges for eventual biospheres because of likely extreme variations in surface temperature during different parts of the orbit. If the separation between the stars is close to the planet's orbital distance, a stable orbit may be impossible.
Planets that orbit just one star in a binary pair are said to have "S-type" orbits, whereas those that orbit around both stars have "P-type" or "circumbinary" orbits. It is estimated that 50–60% of binary stars are capable of supporting habitable terrestrial planets within stable orbital ranges.
== Non-circumbinary planet (S-Type) ==
In non-circumbinary planets, if a planet's distance to its primary exceeds about one fifth of the closest approach of the other star, orbital stability is not guaranteed. Whether planets might form in binaries at all had long been unclear, given that gravitational forces might interfere with planet formation. Theoretical work by Alan Boss at the Carnegie Institution has shown that gas giants can form around stars in binary systems much as they do around solitary stars.
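The one-fifth rule of thumb above can be expressed as a small Python sketch; the binary separation and eccentricity below are assumed, Alpha-Centauri-like values, and the real stability boundary also depends on the mass ratio and other details.

```python
# Rule of thumb for S-type orbits: stability is only assured while the planet's
# distance from its primary stays below about one fifth of the companion's
# closest approach.
def max_stable_s_type(a_binary_au, eccentricity, fraction=0.2):
    closest_approach = a_binary_au * (1.0 - eccentricity)   # periastron of the binary
    return fraction * closest_approach                       # au

# Assumed Alpha-Centauri-like binary: 23 au mean separation, e ~ 0.52.
print(max_stable_s_type(23.0, 0.52))   # roughly 2 au
```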
Studies of Alpha Centauri, the nearest star system to the Sun, suggested that binaries need not be discounted in the search for habitable planets. Alpha Centauri A and B have an 11 au separation at closest approach (23 au mean), and both have stable habitable zones. A study of long-term orbital stability for simulated planets within the system shows that planets within approximately three au of either star may remain stable (i.e., their semi-major axes deviate by less than 5%). The habitable zone for Alpha Centauri A extends, conservatively estimated, from 1.37 to 1.76 au and that of Alpha Centauri B from 0.77 to 1.14 au, well within the stable region in both cases.
== Circumbinary planet (P-Type) ==
The minimum stable star-to-circumbinary-planet separation is about 2–4 times the binary star separation, or orbital period about 3–8 times the binary period. The innermost planets in all the Kepler circumbinary systems have been found orbiting close to this radius. The planets have semi-major axes that lie between 1.09 and 1.46 times this critical radius. The reason could be that migration might become inefficient near the critical radius, leaving planets just outside this radius.
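The corresponding rule of thumb for circumbinary orbits can be sketched the same way (in Python, with an assumed binary separation; the factors simply restate the 2–4 times figure quoted above):

```python
# Rule of thumb for P-type orbits: the innermost stable circumbinary orbit lies
# at roughly 2-4 times the binary separation (about 3-8 times the binary period).
def p_type_inner_limit(a_binary_au, factor_low=2.0, factor_high=4.0):
    return factor_low * a_binary_au, factor_high * a_binary_au   # au

# Assumed close binary with a 0.1 au separation, typical of Kepler circumbinary hosts.
print(p_type_inner_limit(0.1))
```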
For example, Kepler-47c is a gas giant in the circumbinary habitable zone of the Kepler-47 system.
If Earth-like planets form in or migrate into the circumbinary habitable zone, they would be capable of sustaining liquid water on their surface in spite of the dynamical and radiative interaction with the binary stars.
The limits of stability for S-type and P-type orbits within binary as well as trinary stellar systems have been established as a function of the orbital characteristics of the stars, for both prograde and retrograde motions of stars and planets.
== See also ==
Astrobiology
Circumstellar habitable zone
Habitability of yellow dwarf systems
Planetary habitability
Circumbinary planet
== References == | Wikipedia/Habitability_of_binary_star_systems |
Planetary oceanography, also called astro-oceanography or exo-oceanography, is the study of oceans on planets and moons other than Earth. Unlike other planetary sciences like astrobiology, astrochemistry, and planetary geology, it only began after the discovery of underground oceans in Saturn's moon Titan and Jupiter's moon Europa. This field remains speculative until further missions reach the oceans beneath the rock or ice layer of the moons. There are many theories about oceans or even ocean worlds of celestial bodies in the Solar System, from oceans made of liquid carbon with floating diamonds in Neptune to a gigantic ocean of liquid hydrogen that may exist underneath Jupiter's surface.
Early in their geologic histories, Mars and Venus are theorized to have had large water oceans. The Mars ocean hypothesis suggests that nearly a third of the surface of Mars was once covered by water, and a runaway greenhouse effect may have boiled away the global ocean of Venus. Compounds such as salts and ammonia, when dissolved in water, will lower water's freezing point, so that water might exist in large quantities in extraterrestrial environments as brine, or convecting ice. Unconfirmed oceans are speculated to exist beneath the surfaces of many dwarf planets and natural satellites; notably, the ocean of the moon Europa is estimated to have over twice the water volume of Earth's. The Solar System's giant planets are also thought to have liquid atmospheric layers of yet-to-be-confirmed compositions. Oceans may also exist on exoplanets and exomoons, including surface oceans of liquid water within a circumstellar habitable zone. Ocean planets are a hypothetical type of planet with a surface completely covered with liquid.
Extraterrestrial oceans may be composed of water, or other elements and compounds. The only confirmed large, stable bodies of extraterrestrial surface liquids are the lakes of Titan, which are made of hydrocarbons instead of water. However, there is strong evidence for the existence of subsurface water oceans elsewhere in the Solar System. The best-established candidates for subsurface water oceans in the Solar System are Jupiter's moons Europa, Ganymede, and Callisto, and Saturn's moons Enceladus and Titan.
Although Earth is the only known planet with large stable bodies of liquid water on its surface, and the only such planet in the Solar System, other celestial bodies are thought to have large oceans. In June 2020, NASA scientists reported that it is likely that exoplanets with oceans may be common in the Milky Way galaxy, based on mathematical modeling studies.
The inner structure of gas giants remains poorly understood. Scientists suspect that, under extreme pressure, hydrogen would act as a supercritical fluid, hence the likelihood of oceans of liquid hydrogen deep in the interior of gas giants like Jupiter. Oceans of liquid carbon have been hypothesized to exist on ice giants, notably Neptune and Uranus. Magma oceans exist during periods of accretion on any planet and some natural satellites when the planet or natural satellite is completely or partly molten.
== Extraterrestrial oceans ==
=== Planets ===
The gas giants, Jupiter and Saturn, are thought to lack surfaces and instead have a stratum of liquid hydrogen; however their planetary geology is not well understood. The possibility of the ice giants Uranus and Neptune having hot, highly compressed, supercritical water under their thick atmospheres has been hypothesised. Although their composition is still not fully understood, a 2006 study by Wiktorowicz and Ingersoll ruled out the possibility of such a water "ocean" existing on Neptune, though oceans of metallic liquid carbon are possible.
The Mars ocean hypothesis suggests that nearly a third of the surface of Mars was once covered by water, though the water on Mars is no longer oceanic (much of it residing in the ice caps). The possibility continues to be studied along with reasons for their apparent disappearance. Some astronomers now propose that Venus may have had liquid water and perhaps oceans for over 2 billion years.
=== Natural satellites ===
A global layer of liquid water thick enough to decouple the crust from the mantle is thought to be present on the natural satellites Titan, Europa, Enceladus, Ganymede, and Triton; and, with less certainty, in Callisto, Mimas, Miranda, and Ariel. A magma ocean is thought to be present on Io. Geysers or fumaroles have been found on Saturn's moon Enceladus, possibly originating from an ocean about 10 kilometers (6 mi) beneath the surface ice shell. Other icy moons may also have internal oceans, or may once have had internal oceans that have now frozen.
Large bodies of liquid hydrocarbons are thought to be present on the surface of Titan, although they are not large enough to be considered oceans and are sometimes referred to as lakes or seas. The Cassini–Huygens space mission initially discovered only what appeared to be dry lakebeds and empty river channels, suggesting that Titan had lost what surface liquids it might have had. Later flybys of Titan provided radar and infrared images that showed a series of hydrocarbon lakes in the colder polar regions. Titan is thought to have a subsurface liquid-water ocean under the ice in addition to the hydrocarbon mix that forms atop its outer crust.
=== Dwarf planets and trans-Neptunian objects ===
Ceres appears to be differentiated into a rocky core and icy mantle and may harbour a liquid-water ocean under its surface.
Not enough is known of the larger trans-Neptunian objects to determine whether they are differentiated bodies capable of supporting oceans, although models of radioactive decay suggest that Pluto, Eris, Sedna, and Orcus have oceans beneath solid icy crusts approximately 100 to 180 kilometers (60 to 110 mi) thick. In June 2020, astronomers reported evidence that the dwarf planet Pluto may have had a subsurface ocean, and consequently may have been habitable, when it was first formed.
=== Extrasolar ===
Some planets and natural satellites outside the Solar System are likely to have oceans, including possible water ocean planets similar to Earth in the habitable zone or "liquid-water belt". However, the detection of oceans, even with spectroscopic methods, is likely to be extremely difficult and inconclusive.
Theoretical models have been used to predict with high probability that GJ 1214 b, detected by transit, is composed of an exotic form of ice VII making up 75% of its mass, making it an ocean planet.
Other possible candidates are merely speculative, based on their mass and position in the habitable zone, and little is actually known of their composition. Some scientists speculate Kepler-22b may be an "ocean-like" planet. Models have been proposed for Gliese 581 d that could include surface oceans. Gliese 436 b is speculated to have an ocean of "hot ice". Exomoons orbiting planets, particularly gas giants within their parent star's habitable zone, may theoretically have surface oceans.
Terrestrial planets will acquire water during their accretion, some of which will be buried in the magma ocean, but most of which will go into a steam atmosphere; when the atmosphere cools, it will collapse onto the surface, forming an ocean. There will also be outgassing of water from the mantle as the magma solidifies; this will happen even for planets with a low percentage of their mass composed of water, so "super-Earth exoplanets may be expected to commonly produce water oceans within tens to hundreds of millions of years of their last major accretionary impact."
== Non-water surface liquids ==
Oceans, seas, lakes and other bodies of liquids can be composed of liquids other than water, for example the hydrocarbon lakes on Titan. The possibility of seas of nitrogen on Triton was also considered but ruled out. There is evidence that the icy surfaces of the moons Ganymede, Callisto, Europa, Titan and Enceladus are shells floating on oceans of very dense liquid water or water–ammonia solution.
Extrasolar terrestrial planets that are extremely close to their parent star will be tidally locked and so one half of the planet will be a magma ocean. It is also possible that terrestrial planets had magma oceans at some point during their formation as a result of giant impacts. Hot Neptunes close to their star could lose their atmospheres via hydrodynamic escape, leaving behind their cores with various liquids on the surface. Where there are suitable temperatures and pressures, volatile chemicals that might exist as liquids in abundant quantities on planets (thalassogens) include ammonia, argon, carbon disulfide, ethane, hydrazine, hydrogen, hydrogen cyanide, hydrogen sulfide, methane, neon, nitrogen, nitric oxide, phosphine, silane, sulfuric acid, and water.
Supercritical fluids, although not liquids, do share various properties with liquids. Underneath the thick atmospheres of the planets Uranus and Neptune, it is expected that these planets are composed of oceans of hot high-density fluid mixtures of water, ammonia and other volatiles. The gaseous outer layers of Jupiter and Saturn transition smoothly into oceans of supercritical hydrogen. The atmosphere of Venus is 96.5% carbon dioxide, and is a supercritical fluid at the surface.
== See also ==
Extraterrestrial liquid water
Lava planet
List of largest lakes and seas in the Solar System
Magma ocean
Ocean world
== References == | Wikipedia/Planetary_oceanography |
In astrophysics, accretion is the accumulation of particles into a massive object by gravitationally attracting more matter, typically gaseous matter, into an accretion disk. Most astronomical objects, such as galaxies, stars, and planets, are formed by accretion processes.
== Overview ==
The accretion model that Earth and the other terrestrial planets formed from meteoric material was proposed in 1944 by Otto Schmidt, followed by the protoplanet theory of William McCrea (1960) and finally the capture theory of Michael Woolfson. In 1978, Andrew Prentice resurrected the initial Laplacian ideas about planet formation and developed the modern Laplacian theory. None of these models proved completely successful, and many of the proposed theories were descriptive.
The 1944 accretion model by Otto Schmidt was further developed in a quantitative way in 1969 by Viktor Safronov. He calculated, in detail, the different stages of terrestrial planet formation. Since then, the model has been further developed using intensive numerical simulations to study planetesimal accumulation. It is now accepted that stars form by the gravitational collapse of interstellar gas. Prior to collapse, this gas is mostly in the form of molecular clouds, such as the Orion Nebula. As the cloud collapses, losing potential energy, it heats up, gaining kinetic energy, and the conservation of angular momentum ensures that the cloud forms a flattened disk—the accretion disk.
== Accretion of galaxies ==
A few hundred thousand years after the Big Bang, the Universe cooled to the point where atoms could form. As the Universe continued to expand and cool, the atoms lost enough kinetic energy, and dark matter coalesced sufficiently, to form protogalaxies. As further accretion occurred, galaxies formed. Indirect evidence is widespread. Galaxies grow through mergers and smooth gas accretion. Accretion also occurs inside galaxies, forming stars.
== Accretion of stars ==
Stars are thought to form inside giant clouds of cold molecular hydrogen: giant molecular clouds of roughly 300,000 M☉ and 65 light-years (20 pc) in diameter. Over millions of years, giant molecular clouds are prone to collapse and fragmentation. These fragments then form small, dense cores, which in turn collapse into stars. The cores range in mass from a fraction to several times that of the Sun and are called protostellar (protosolar) nebulae. They possess diameters of 2,000–20,000 astronomical units (0.01–0.1 pc) and a particle number density of roughly 10,000 to 100,000/cm3 (160,000 to 1,600,000/cu in). Compare this with the particle number density of air at sea level, about 2.8×10¹⁹/cm3 (4.6×10²⁰/cu in).
The initial collapse of a solar-mass protostellar nebula takes around 100,000 years. Every nebula begins with a certain amount of angular momentum. Gas in the central part of the nebula, with relatively low angular momentum, undergoes fast compression and forms a hot hydrostatic (non-contracting) core containing a small fraction of the mass of the original nebula. This core forms the seed of what will become a star. As the collapse continues, conservation of angular momentum dictates that the rotation of the infalling envelope accelerates, which eventually forms a disk.
As the infall of material from the disk continues, the envelope eventually becomes thin and transparent and the young stellar object (YSO) becomes observable, initially in far-infrared light and later in the visible. Around this time the protostar begins to fuse deuterium. If the protostar is sufficiently massive (above about 80 Jupiter masses), hydrogen fusion follows. Otherwise, if its mass is too low, the object becomes a brown dwarf. This birth of a new star occurs approximately 100,000 years after the collapse begins. Objects at this stage are known as Class I protostars, which are also called young T Tauri stars, evolved protostars, or young stellar objects. By this time, the forming star has already accreted much of its mass; the total mass of the disk and remaining envelope does not exceed 10–20% of the mass of the central YSO.
At the next stage, the envelope completely disappears, having been gathered up by the disk, and the protostar becomes a classical T Tauri star. T Tauri stars fall into two classes: classical T Tauri stars have accretion disks and continue to accrete hot gas, which manifests itself as strong emission lines in their spectrum, while weakly lined T Tauri stars do not possess accretion disks. Classical T Tauri stars evolve into weakly lined T Tauri stars after about 1 million years. The mass of the disk around a classical T Tauri star is about 1–3% of the stellar mass, and it is accreted at a rate of 10⁻⁷ to 10⁻⁹ M☉ per year. A pair of bipolar jets is usually present as well. The accretion explains all the peculiar properties of classical T Tauri stars: strong flux in the emission lines (up to 100% of the intrinsic luminosity of the star), magnetic activity, photometric variability, and jets. The emission lines actually form as the accreted gas hits the "surface" of the star, which happens around its magnetic poles. The jets are byproducts of accretion: they carry away excess angular momentum. The classical T Tauri stage lasts about 10 million years (there are only a few examples of so-called Peter Pan disks, where accretion persists for much longer periods, sometimes more than 40 million years). The disk eventually disappears due to accretion onto the central star, planet formation, ejection by jets, and photoevaporation by ultraviolet radiation from the central star and nearby stars. As a result, the young star becomes a weakly lined T Tauri star, which, over hundreds of millions of years, evolves into an ordinary Sun-like star, depending on its initial mass.
== Accretion of planets ==
Self-accretion of cosmic dust accelerates the growth of the particles into boulder-sized planetesimals. The more massive planetesimals accrete some smaller ones, while others shatter in collisions. Accretion disks are common around smaller stars, stellar remnants in a close binary, or black holes surrounded by material (such as those at the centers of galaxies). Some dynamics in the disk, such as dynamical friction, are necessary to allow orbiting gas to lose angular momentum and fall onto the central massive object. Occasionally, this can result in stellar surface fusion (see Bondi accretion).
In the formation of terrestrial planets or planetary cores, several stages can be considered. First, when gas and dust grains collide, they agglomerate by microphysical processes like van der Waals forces and electromagnetic forces, forming micrometer-sized particles. During this stage, accumulation mechanisms are largely non-gravitational in nature. However, planetesimal formation in the centimeter-to-meter range is not well understood, and no convincing explanation is offered as to why such grains would accumulate rather than simply rebound. In particular, it is still not clear how these objects grow to become 0.1–1 km (0.06–0.6 mi) sized planetesimals; this problem is known as the "meter size barrier": as dust particles grow by coagulation, they acquire increasingly large relative velocities with respect to other particles in their vicinity, as well as a systematic inward drift velocity, which leads to destructive collisions and thereby limits the growth of the aggregates to some maximum size. Ward (1996) suggests that when slow-moving grains collide, the very low, yet non-zero, gravity of the colliding grains impedes their escape. Grain fragmentation is also thought to play an important role in replenishing small grains, keeping the disk thick, and maintaining a relatively high abundance of solids of all sizes.
A number of mechanisms have been proposed for crossing the 'meter-sized' barrier. Local concentrations of pebbles may form, which then gravitationally collapse into planetesimals the size of large asteroids. These concentrations can occur passively due to the structure of the gas disk, for example, between eddies, at pressure bumps, at the edge of a gap created by a giant planet, or at the boundaries of turbulent regions of the disk. Or, the particles may take an active role in their concentration via a feedback mechanism referred to as a streaming instability. In a streaming instability the interaction between the solids and the gas in the protoplanetary disk results in the growth of local concentrations, as new particles accumulate in the wake of small concentrations, causing them to grow into massive filaments. Alternatively, if the grains that form due to the agglomeration of dust are highly porous their growth may continue until they become large enough to collapse due to their own gravity. The low density of these objects allows them to remain strongly coupled with the gas, thereby avoiding high velocity collisions which could result in their erosion or fragmentation.
Grains eventually stick together to form mountain-size (or larger) bodies called planetesimals. Collisions and gravitational interactions between planetesimals combine to produce Moon-size planetary embryos (protoplanets) over roughly 0.1–1 million years. Finally, the planetary embryos collide to form planets over 10–100 million years. The planetesimals are massive enough that their mutual gravitational interactions must be taken into account when computing their evolution. Growth is aided by orbital decay of smaller bodies due to gas drag, which prevents them from being stranded between orbits of the embryos. Further collisions and accumulation lead to terrestrial planets or the cores of giant planets.
If the planetesimals formed via the gravitational collapse of local concentrations of pebbles, their growth into planetary embryos and the cores of giant planets is dominated by the further accretions of pebbles. Pebble accretion is aided by the gas drag felt by objects as they accelerate toward a massive body. Gas drag slows the pebbles below the escape velocity of the massive body causing them to spiral toward and to be accreted by it. Pebble accretion may accelerate the formation of planets by a factor of 1000 compared to the accretion of planetesimals, allowing giant planets to form before the dissipation of the gas disk. However, core growth via pebble accretion appears incompatible with the final masses and compositions of Uranus and Neptune. Direct calculations indicate that, in a typical protoplanetary disk, the formation time of a giant planet via pebble accretion is comparable to the formation times resulting from planetesimal accretion.
The formation of terrestrial planets differs from that of giant gas planets, also called Jovian planets. The particles that make up the terrestrial planets are made from metal and rock that condensed in the inner Solar System. However, Jovian planets began as large, icy planetesimals, which then captured hydrogen and helium gas from the solar nebula. Differentiation between these two classes of planetesimals arise due to the frost line of the solar nebula.
== Accretion of asteroids ==
Meteorites contain a record of accretion and impacts during all stages of asteroid origin and evolution; however, the mechanism of asteroid accretion and growth is not well understood. Evidence suggests the main growth of asteroids can result from gas-assisted accretion of chondrules, which are millimeter-sized spherules that form as molten (or partially molten) droplets in space before being accreted to their parent asteroids. In the inner Solar System, chondrules appear to have been crucial for initiating accretion. The tiny mass of asteroids may be partly due to inefficient chondrule formation beyond 2 AU, or less-efficient delivery of chondrules from near the protostar. Also, impacts controlled the formation and destruction of asteroids, and are thought to be a major factor in their geological evolution.
Chondrules, metal grains, and other components likely formed in the solar nebula. These accreted together to form parent asteroids. Some of these bodies subsequently melted, forming metallic cores and olivine-rich mantles; others were aqueously altered. After the asteroids had cooled, they were eroded by impacts for 4.5 billion years, or disrupted.
For accretion to occur, impact velocities must be less than about twice the escape velocity, which is about 140 m/s (460 ft/s) for a 100 km (60 mi) radius asteroid. Simple models for accretion in the asteroid belt generally assume micrometer-sized dust grains sticking together and settling to the midplane of the nebula to form a dense layer of dust, which, because of gravitational forces, was converted into a disk of kilometer-sized planetesimals. But, several arguments suggest that asteroids may not have accreted this way.
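The quoted escape velocity follows from v_esc = sqrt(2GM/r), with the mass taken from an assumed uniform bulk density; a Python sketch (the density is an assumption, and different values shift the result somewhat):

```python
import numpy as np

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

# Escape velocity of a spherical asteroid of given radius and assumed bulk density.
def escape_velocity(radius_m, density=3000.0):
    mass = (4.0 / 3.0) * np.pi * radius_m**3 * density
    return np.sqrt(2.0 * G * mass / radius_m)   # m/s

# For a 100 km radius body this gives roughly 130 m/s, of the same order
# as the ~140 m/s figure quoted above.
print(escape_velocity(100e3))
```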
== Accretion of comets ==
Comets, or their precursors, formed in the outer Solar System, possibly millions of years before planet formation. How and when comets formed is debated, with distinct implications for Solar System formation, dynamics, and geology. Three-dimensional computer simulations indicate the major structural features observed on cometary nuclei can be explained by pairwise low velocity accretion of weak cometesimals. The currently favored formation mechanism is that of the nebular hypothesis, which states that comets are probably a remnant of the original planetesimal "building blocks" from which the planets grew.
Astronomers think that comets originate in both the Oort cloud and the scattered disk. The scattered disk was created when Neptune migrated outward into the proto-Kuiper belt, which at the time was much closer to the Sun, and left in its wake a population of dynamically stable objects that could never be affected by its orbit (the Kuiper belt proper), and a population whose perihelia are close enough that Neptune can still disturb them as it travels around the Sun (the scattered disk). Because the scattered disk is dynamically active and the Kuiper belt relatively dynamically stable, the scattered disk is now seen as the most likely point of origin for periodic comets. The classic Oort cloud theory states that the Oort cloud, a sphere measuring about 50,000 AU (0.24 pc) in radius, formed at the same time as the solar nebula and occasionally releases comets into the inner Solar System as a giant planet or star passes nearby and causes gravitational disruptions. Examples of such comet clouds may already have been seen in the Helix Nebula.
The Rosetta mission to comet 67P/Churyumov–Gerasimenko determined in 2015 that when the Sun's heat penetrates the surface, it triggers evaporation (sublimation) of buried ice. While some of the resulting water vapour may escape from the nucleus, 80% of it recondenses in layers beneath the surface. This observation implies that the thin ice-rich layers exposed close to the surface may be a consequence of cometary activity and evolution, and that global layering does not necessarily occur early in the comet's formation history. While most scientists thought that all the evidence indicated that the structure of nuclei of comets is processed rubble piles of smaller ice planetesimals of a previous generation, the Rosetta mission confirmed the idea that comets are "rubble piles" of disparate material. Comets appear to have formed as ~100-km bodies, then overwhelmingly ground/recontacted into their present states.
== See also ==
== References == | Wikipedia/Accretion_(astrophysics) |
Annual Review of Earth and Planetary Sciences is an annual peer-reviewed scientific journal published by Annual Reviews, which broadly covers Earth and planetary sciences, including geology, atmospheric sciences, climate, geophysics, environmental science, geological hazards, geodynamics, planet formation, and solar system origins. The co-editors are Katherine H. Freeman (Pennsylvania State University) and Raymond Jeanloz (University of California, Berkeley). As of 2024, Journal Citation Reports gives the journal a 2023 impact factor of 11.3. As of 2023, it is being published as open access, under the Subscribe to Open model.
== History ==
The Annual Review of Earth and Planetary Sciences was first published in 1973 by the nonprofit publisher Annual Reviews. The goal of the editorial committee was to produce critical review articles that condensed a large volume of research into a final product usable by students, specialists, and non-specialists. In the late 1990s it began publishing materials electronically. Format changes in 2006 included a simplification of the formatting, inclusion of definitions in the margins, more color, and expansion of supplementary materials, such as videos, in light of increasing access via the internet.
The size of individual volumes has grown over time: the volumes published in 2000, 2007, 2012, and 2013 were each noted at time of publication to be the largest-ever volume produced by the journal by page count or number of articles. As of 2020, it was published both in print and electronically. Some of its articles are available online in advance of the volume's publication date.
== Scope and indexing ==
It defines its scope as covering significant developments in the field of planetary science, encompassing earth science. Specific subdisciplines included are climatology, environmental science, and the history of life. Each volume begins with a prefatory chapter of the biography or autobiography of a notable scientist within the field. As of 2024, Journal Citation Reports lists the journal's impact factor as 11.3, ranking it third of 253 journals in the category "Geosciences, Multidisciplinary" and sixth of 84 in "Astronomy and Astrophysics". It is abstracted and indexed in Scopus, Science Citation Index Expanded, CAB Abstracts, INSPEC, and Academic Search, among others.
== Editorial processes ==
The Annual Review of Earth and Planetary Sciences is helmed by the editor or the co-editors. The editor is assisted by the editorial committee, which includes associate editors, regular members, and occasionally guest editors. Guest members participate at the invitation of the editor, and serve terms of one year. All other members of the editorial committee are appointed by the Annual Reviews board of directors and serve five-year terms. The editorial committee determines which topics should be included in each volume and solicits reviews from qualified authors. Unsolicited manuscripts are not accepted. Peer review of accepted manuscripts is undertaken by the editorial committee.
=== Editors of volumes ===
Dates indicate publication years in which someone was credited as a lead editor or co-editor of a journal volume. The planning process for a volume begins well before the volume appears, so appointment to the position of lead editor generally occurred prior to the first year shown here. An editor who has retired or died may be credited as a lead editor of a volume that they helped to plan, even if it is published after their retirement or death.
Fred A. Donath (1973–1980)
George Wetherill (1981–1996)
Raymond Jeanloz (1997–2013)
Jeanloz and Katherine H. Freeman (2014–present)
=== Current editorial committee ===
As of 2022, the editorial committee consists of the co-editors and the following members:
== See also ==
List of scientific journals in earth and atmospheric sciences
== References == | Wikipedia/Annual_Review_of_Earth_and_Planetary_Sciences |
The Asteroid Terrestrial-impact Last Alert System (ATLAS) is a robotic astronomical survey and early warning system optimized for detecting smaller near-Earth objects a few weeks to days before they impact Earth.
Funded by NASA, and developed and operated by the University of Hawaii's Institute for Astronomy, the system currently has four 0.5-meter telescopes. Two are located 160 km apart in the Hawaiian islands, at the Haleakala (ATLAS–HKO, observatory code T05) and Mauna Loa (ATLAS–MLO, observatory code T08) observatories; one is located at the Sutherland Observatory (ATLAS–SAAO, observatory code M22) in South Africa; and one is at the El Sauce Observatory in Rio Hurtado, Chile (ATLAS–CHL, observatory code W68). The newest, at Teide Observatory (ATLAS-Teide), was commissioned in February 2025, but as of May 2025 does not show results on the ATLAS web page.
ATLAS began observations in 2015 with one telescope at Haleakala, and a two-Hawaii-telescopes version became operational in 2017. The project then obtained NASA funding for two additional telescopes in the Southern hemisphere, which became operational in early 2022. Each telescope surveys one quarter of the whole observable sky four times per clear night, and the addition of the two southern telescopes improved ATLAS's four-fold coverage of the observable sky from every two clear nights to nightly, as well as filled its previous blind spot in the far southern sky.
== Context ==
Major astronomical impact events have significantly shaped Earth's history, having been implicated in the formation of the Earth–Moon system, the origin of water on Earth, the evolutionary history of life, and several mass extinctions. Notable prehistoric impact events include the Chicxulub impact by a 10 kilometer asteroid 66 million years ago, believed to be the cause of the Cretaceous–Paleogene extinction event which eliminated all non-avian dinosaurs and three-quarters of the plant and animal species on Earth. The 37-million-year-old asteroid impact that excavated the Mistastin crater generated temperatures exceeding 2,370 °C, the highest known to have naturally occurred on the surface of the Earth.
Throughout recorded history, hundreds of Earth impacts (and meteor air bursts) have been reported, with some very small fraction causing deaths, injuries, property damage, or other significant localised consequences. Stony asteroids with a diameter of 4 meters (13 ft) enter Earth's atmosphere approximately once per year. Asteroids with a diameter of 7 meters enter the atmosphere about every 5 years, with as much kinetic energy as the atomic bomb dropped on Hiroshima (approximately 16 kilotons of TNT). Their air burst dissipates about one third of that kinetic energy, or 5 kilotons. These relatively small asteroids ordinarily explode in the upper atmosphere and most or all of their solids are vaporized. Asteroids with a diameter of 20 m (66 ft) strike Earth approximately twice every century.
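The kinetic-energy comparison above can be reproduced to order of magnitude with KE = ½mv²; the density and entry speed in this Python sketch are assumed typical values, not measurements of any particular object.

```python
import numpy as np

TNT_KILOTON = 4.184e12   # joules per kiloton of TNT

# Order-of-magnitude kinetic energy of a small stony impactor, KE = 0.5 * m * v^2.
def impact_energy_kt(diameter_m, density=3000.0, speed_m_s=17e3):
    mass = density * (4.0 / 3.0) * np.pi * (diameter_m / 2.0) ** 3
    return 0.5 * mass * speed_m_s**2 / TNT_KILOTON

# A 7 m stony asteroid works out to a few tens of kilotons of TNT, the same
# order of magnitude as the Hiroshima comparison quoted above.
print(impact_energy_kt(7.0))
```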
One of the best-known impacts in historical times is the 50 meter 1908 Tunguska event, which most likely caused no injuries but which leveled several thousand square kilometers of forest in a very sparsely populated part of Siberia, Russia. A similar impact over a more populous region would have caused locally catastrophic damage. The 2013 Chelyabinsk meteor event is the only known impact in historical times to have resulted in a large number of injuries, with the potential exception of the possibly highly deadly but poorly documented 1490 Qingyang event in China. The approximately 20 meter Chelyabinsk meteor is the largest recorded object to have impacted a continent of the Earth since the Tunguska event.
Future impacts are bound to occur, with much higher odds for smaller regionally damaging asteroids than for larger globally damaging ones. The 2018 final book of physicist Stephen Hawking, Brief Answers to the Big Questions, considers a large asteroid collision the biggest threat to our planet. In April 2018, the B612 Foundation reported "It's a 100 per cent certainty we'll be hit [by a devastating asteroid], but we're not 100 per cent sure when." In June 2018, the US National Science and Technology Council warned that America is unprepared for an asteroid impact event, and has developed and released the "National Near-Earth Object Preparedness Strategy Action Plan" to better prepare.
Larger asteroids are bright enough to be detected while far from the Earth, and their orbits can therefore be very precisely determined many years in advance of any close approach. Thanks largely to Spaceguard cataloging initiated by a 2005 mandate of the United States Congress to NASA, the inventory of the approximately one thousand Near Earth Objects with diameters above 1 kilometer was for instance 97% complete in 2017. The slowly improving completeness for 140 meter objects is estimated to be around 40%, and the planned NEO Surveyor NASA mission is expected to identify almost all of them by 2040. Any impact by one of these known asteroids would be predicted decades to centuries in advance, long enough to consider deflecting them away from Earth. None of them will impact Earth for at least the next century, and we are therefore largely safe from globally civilisation-ending kilometer-size impacts for at least the mid-term future. Regionally catastrophic impacts by asteroids a few hundred meters across, if any is to occur, cannot be excluded at this point in time, but should become predictable by 2040 thanks to NEO Surveyor.
Sub-140m impacting asteroids would not cause large scale damage but are still locally catastrophic. They are much more common and they can, by contrast to larger ones, only be detected when they come very close to the Earth. In most cases this only happens during their final approach. Those impacts therefore will always need a constant watch and they typically cannot be identified earlier than a few weeks in advance, far too late for interception. According to expert testimony in the United States Congress in 2013, NASA would at that time have required at least five years of preparation before a mission to intercept an asteroid could be launched. This preparation time could be much reduced by pre-planning a ready to launch mission, but the post-launch years needed to first meet the asteroid and then to slowly deflect it by at least the diameter of the Earth would be extremely hard to compress.
== Naming ==
The Last Alert part of the ATLAS name acknowledges that the system will find smaller asteroids years too late for potential deflection but would provide the days or weeks of warning needed to evacuate and otherwise prepare a target area. According to ATLAS project lead John Tonry, "that's enough time to evacuate the area of people, take measures to protect buildings and other infrastructure, and be alert to a tsunami danger generated by ocean impacts". Most of the more than 1 billion rubles (approximately US$33 million at the time) in damage, and most of the 1,500 injuries, caused by the 17 m Chelyabinsk meteor impact in 2013 were from window glass broken by its shock wave. With even a few hours' advance warning, those losses and injuries could have been much reduced by actions as simple as propping all windows open before the impact and staying away from them.
== Overview ==
The ATLAS project was developed at the University of Hawaii with US$5 million initial funding from NASA, and its first element was deployed on Haleakala in 2015. This first telescope became fully operational at the end of 2015, and the second one on Mauna Loa in March 2017. Replacement of the initially substandard Schmidt corrector plates of both telescopes in June 2017 brought their image quality closer to its nominal 2 pixels (3.8") width and consequently improved their sensitivity by one magnitude. In August 2018, the project obtained US$3.8 million of additional NASA funding to install two telescopes in the Southern hemisphere. One is now hosted by the South African Astronomical Observatory and the other at the El Sauce Observatory in Chile. Both started operating in early 2022. This geographical expansion of ATLAS provides visibility of the far Southern sky, more continuous coverage, better resilience to bad weather, and additional information on the asteroid orbit from the parallax effect. The full ATLAS concept consists of eight telescopes, spread over the globe for 24h/24h coverage of the full night sky.
As long as their radiant is not too close to the Sun, the automated system provides a one-week warning for a 45 metres (150 ft) diameter asteroid, and a three-week warning for a 120 m (390 ft) one. By comparison, the February 2013 Chelyabinsk meteor impact was from an object estimated at 17 m (60 ft) diameter. Its arrival direction happened to be close to the Sun and it therefore was in the blind spot of any Earth-based visible light warning system. A similar object arriving from a dark direction would now be detected by ATLAS a few days in advance.
As a by-product of its main design goal, ATLAS can identify any moderately bright variable or moving object in the night sky. It therefore also looks for variable stars, supernovae, dwarf planets, comets, and non-impacting asteroids.
== Design and operation ==
The full ATLAS concept consists of eight 50-centimeter diameter f/2 Wright-Schmidt telescopes, spread over the globe for full-night-sky and 24h/24h coverage, each fitted with a 110-megapixel CCD array camera. The current system consists of four such telescopes: ATLAS1 and ATLAS2 operate 160 km apart on the Haleakala and Mauna Loa volcanoes in the Hawaiian Islands, ATLAS3 is at the South African Astronomical Observatory, and ATLAS4 is in Chile. These telescopes are notable for their large 7.4° field of view (about 15 times the diameter of the full moon), of which their 10,500 × 10,500 pixel CCD camera images the central 5.4° × 5.4°. This system can image the whole night sky visible from a single location with about 1,000 separate telescope pointings. At 30 seconds per exposure plus 10 seconds for simultaneously reading out the camera and repointing the telescope, each ATLAS unit can therefore scan the whole visible sky a little over once each night, with a median completeness limit at apparent magnitude 19.
Since the mission of ATLAS is to identify moving objects, each telescope actually observes one quarter of the sky four times in a night at approximately 15-minute intervals. In perfect conditions, the four telescopes together can therefore observe the full night sky every night, but bad weather at one or the other site, occasional technical problems, and even the odd volcanic eruption of Mauna Loa all reduce the effective coverage rate. The four nightly exposures by a telescope make it possible to automatically link multiple observations of an asteroid into a very preliminary orbit, with some robustness to the loss of one observation to overlap between the asteroid and a bright star. Astronomers can then predict the asteroid's approximate position on subsequent nights for follow-up.
Apparent magnitude 19 is classified as "respectably but not extremely faint": it is approximately 100,000 times too faint to be seen with the naked eye from a very dark location, equivalent to the light of a match flame in New York viewed from San Francisco. ATLAS therefore scans the visible sky in much less depth, but much more quickly, than larger surveying telescope arrays such as the University of Hawaii's Pan-STARRS. Pan-STARRS goes approximately 100 times deeper, but needs weeks instead of a quarter of a night to scan the whole sky just once. This makes ATLAS better suited to finding small asteroids, which can only be seen during the few days when they brighten dramatically as they pass very close to the Earth.
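The "100,000 times too faint" comparison follows directly from the logarithmic magnitude scale; a minimal Python sketch, assuming a naked-eye limit near magnitude 6.5 at a very dark site:

```python
# A magnitude difference dm corresponds to a brightness ratio of 10**(0.4 * dm).
def flux_ratio(m_faint, m_bright):
    return 10.0 ** (0.4 * (m_faint - m_bright))

# Magnitude 19 versus an assumed naked-eye limit of 6.5: about 1e5 times fainter.
print(flux_ratio(19.0, 6.5))
```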
NASA's Near Earth Observation Program initially provided a US$5 million grant, with $3.5 million covering the first three years of design, construction and software development, and the balance of the grant to fund the systems operation for two years following its entry into full operational service in late 2015. Further NASA grants funded continued operation of ATLAS and the construction of the two Southern telescopes.
== Discoveries ==
2024 YR4, an asteroid between 53 and 67 metres in diameter.
SN 2018cow, a relatively bright supernova on 2018-06-16.
2018 AH, on 2018-01-02 the largest asteroid to pass so close to Earth since 1971.
A106fgF, a 2–5 m asteroid which either passed extremely close or impacted Earth on 2018-01-22.
2018 RC, near earth asteroid on 2018-09-03 (notable because it was discovered more than a day prior to closest approach on 2018-09-09).
A10bMLz, unknown space debris, so-called "empty trash bag object" on 2019-01-25.
2019 MO, an approximately 4 m asteroid which impacted the Caribbean Sea South of Puerto Rico in June 2019
C/2019 Y4 (ATLAS), comet
2020 VT4, a 5–10 m object which passed closer to Earth than any other known near-miss asteroid
Photographed ejecta from NASA's DART impact on asteroid Dimorphos
AT2022aedm explosion in an elliptical host galaxy
Many different comets, listed here: Comet ATLAS (disambiguation)
== See also ==
== References ==
== External links ==
Official website
ATLAS: The Asteroid Terrestrial-impact Last Alert System
Planetary Defense Conference 2017, (starts at 1h 10m 29s) on YouTube, Japan, 15 May 2017
asteroid List and Updates | Wikipedia/Asteroid_Terrestrial-impact_Last_Alert_System |
Extraterrestrial material refers to natural objects now on Earth that originated in outer space. Such materials include cosmic dust and meteorites, as well as samples brought to Earth by sample-return missions from the Moon, asteroids, and comets, and solar wind particles.
Extraterrestrial materials are of value to science as they preserve the primitive composition of the gas and dust from which the Sun and the Solar System formed.
== Categories ==
Extraterrestrial material for study on Earth can be classified into a few broad categories, namely:
Meteorites too large to vaporize on atmospheric entry but small enough to leave fragments lying on the ground, among which are included likely specimens from the asteroid and Kuiper belts as well as from the Moon and from Mars.
Moon rocks brought to Earth by robotic and crewed lunar missions.
Cosmic dust collected on Earth, in the Earth's stratosphere, and in low Earth orbit which likely include particles from the present day interplanetary dust cloud, as well as from comets.
Specimens collected by sample-return missions from comets, asteroids, solar wind, which include "stardust particles" from the present-day interstellar medium.
Presolar grains (extracted from meteorites and interplanetary dust particles) that predate the formation of the Solar System. These are the most pristine and valuable samples.
=== Collected on Earth ===
Examples of extraterrestrial material collected on Earth include cosmic dust and meteorites. Some of the meteorites found on Earth originated on other Solar System bodies, such as lunar meteorites from the Moon, Martian meteorites, and the HED meteorites from Vesta. Another example is the Japanese Tanpopo mission that collected dust from low Earth orbit.
In 2019, researchers found interstellar dust in Antarctica which they relate to the Local Interstellar Cloud. The detection of interstellar dust in Antarctica was done by the measurement of the radionuclides Fe-60 and Mn-53 by highly sensitive accelerator mass spectrometry, where Fe-60 is the clear signature for a recent-supernova origin.
=== Sample-return missions ===
To date, samples of Moon rock have been collected by robotic and crewed missions. The comet Wild 2 (Stardust mission) and the asteroid Itokawa (Hayabusa mission) have each been visited by robotic spacecraft that returned samples to Earth, and samples of the solar wind were also returned by the robotic Genesis mission.
Similar sample-return missions include OSIRIS-REx to asteroid Bennu, Hayabusa2 to asteroid Ryugu, and Tianwen-2 to asteroid 469219 Kamoʻoalewa. Several sample-return missions are planned for the Moon, Mars, and Mars' moons (see: Sample-return mission#List of missions).
Material obtained from sample-return missions is considered pristine and uncontaminated, and its curation and study must take place at specialized facilities where the samples are protected from Earthly contamination and from contact with the atmosphere. These facilities are specially designed to preserve both the sample integrity and protect the Earth from potential biological contamination. Restricted bodies include planets or moons suspected to have either past or present environments habitable to microscopic life, and therefore must be treated as extremely biohazardous.
== Lines of study ==
Samples analyzed on Earth can be matched against findings of remote sensing, for more insight into the processes that formed the Solar System.
=== Elemental and isotopic abundances ===
Present day elemental abundances are superimposed on an (evolving) galactic-average set of elemental abundances that was inherited by the Solar System, along with some atoms from local nucleosynthesis sources, at the time of the Sun's formation. Knowledge of these average planetary system elemental abundances is serving as a tool for tracking chemical and physical processes involved in the formation of planets, and the evolution of their surfaces.
Isotopic abundances provide important clues to the origin, transformation and geologic age of the material being analyzed.
Extraterrestrial materials also carry information on a wide range of nuclear processes. These include for example: (i) the decay of now-extinct radionuclides from supernova byproducts introduced into Solar System materials shortly before the collapse of our solar nebula, and (ii) the products of stellar and explosive nucleosynthesis found in almost undiluted form in presolar grains. The latter are providing astronomers with information on exotic environments from the early Milky Way galaxy.
Noble gases are particularly useful because they are chemically inert, because many of them have more than one isotope on which to carry the signatures of nuclear processes, and because they are relatively easy to extract from solid materials by simple heating. As a result, they play a pivotal role in the study of extraterrestrial materials.
=== Nuclear spallation effects ===
Materials subject to bombardment by sufficiently energetic particles, such as those found in cosmic rays, experience the transmutation of atoms of one kind into another. These spallation effects can alter the trace-element isotopic composition of specimens in ways that allow researchers to deduce the nature of their exposure in space.
These techniques have been used, for example, to look for (and determine the date of) events in the pre-Earth history of a meteorite's parent body (like a major collision) that drastically altered the space exposure of the material in that meteorite. The Murchison meteorite, for instance, landed in Australia in 1969, but its parent body apparently underwent a collision event about 800,000 years ago which broke it into meter-sized pieces.
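As a minimal illustration of the exposure-dating idea (not drawn from any particular study), the concentration of a stable spallation-produced nuclide can be divided by an assumed production rate to yield an exposure age; the concentration and production rate below are hypothetical placeholder values.

    # Minimal cosmic-ray exposure age estimate from a stable cosmogenic nuclide.
    # All numbers are hypothetical placeholders, not measurements of any meteorite.
    measured_concentration = 3.0e8   # atoms of a spallation product per gram
    production_rate = 4.0e2          # atoms per gram per year while exposed in space

    exposure_age_years = measured_concentration / production_rate
    print(f"Cosmic-ray exposure age ~ {exposure_age_years / 1e6:.2f} million years")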
=== Astrobiology ===
Astrobiology is an interdisciplinary scientific field concerned with the origins, early evolution, distribution, and future of life in the universe. It involves investigating the presence of organic compounds on comets, asteroids, Mars, and the moons of the giant planets. Several sample-return missions to asteroids and comets with a key interest in astrobiology are currently in development. More samples from asteroids, comets, and moons could help determine whether life formed on other astronomical bodies, and whether it could have been carried to Earth by meteorites or comets, a process termed panspermia.
The abundant organic compounds in primitive meteorites and interplanetary dust particles are thought to originate largely in the interstellar medium. However, this material may have been further processed in the protoplanetary disk and has been altered to varying extents in the asteroidal parent bodies.
Cosmic dust contains complex organic compounds (amorphous organic solids with a mixed aromatic-aliphatic structure) that can be created naturally by stars and radiation. These compounds, in the presence of water and other habitable factors, are thought to have produced and spontaneously assembled the building blocks of life.
=== Origin of water on Earth ===
The origin of water on Earth is the subject of a significant body of research in the fields of planetary science, astronomy, and astrobiology. Isotopic ratios provide a unique "chemical fingerprint" that is used to compare Earth's water with reservoirs elsewhere in the Solar System. One such isotopic ratio, that of deuterium to hydrogen (D/H), is particularly useful in the search for the origin of water on Earth. However, when and how that water was delivered to Earth is the subject of ongoing research.
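As a rough illustration of how the D/H fingerprint is used, reservoirs can be compared by normalizing their D/H ratios to that of Earth's ocean water; the numbers in the sketch below are rounded, approximate literature values quoted only for demonstration.

    # Compare approximate D/H ratios with Earth's ocean water (VSMOW) as reference.
    # Values are rounded, approximate literature figures used only for illustration.
    d_to_h = {
        "Earth ocean water (VSMOW)": 1.56e-4,
        "protosolar hydrogen": 2.1e-5,
        "comet 67P/Churyumov-Gerasimenko": 5.3e-4,
    }

    reference = d_to_h["Earth ocean water (VSMOW)"]
    for reservoir, ratio in d_to_h.items():
        print(f"{reservoir}: D/H = {ratio:.2e} ({ratio / reference:.2f} x ocean water)")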
== See also ==
Cosmochemistry
Extraterrestrial sample curation
Glossary of meteoritics
Interplanetary dust cloud
List of Martian meteorites
Mars sample-return mission
== References ==
== External links ==
Planetary Science Research Discoveries Educational journal with articles about extraterrestrial materials. | Wikipedia/Extraterrestrial_materials |
F-type main-sequence stars are thought to be the hottest and most massive stars capable of hosting a planet with extraterrestrial life. Compared to cooler main-sequence stars of the G, K, and M types, F stars have shorter lifetimes and higher levels of ultraviolet radiation, which can hinder the development of life. Stars hotter than F stars have even shorter lifetimes and higher UV incidence, which likely make the development of life impossible.
== Overview ==
One study of planets and moons orbiting stars from F5 to F9.5 concluded that exoplanets (or moons of exoplanets) orbiting in the habitable zones of F-type stars would receive excessive UV damage compared to the Earth. If half a billion years is assumed as the time it took for life to evolve, then the hottest spectral type whose stars could host life-bearing planets would be around A0. However, it took life on Earth some 3 billion years to establish complexity, which probably rules out all the A-type main-sequence stars. F0 stars may therefore be the hottest stars that live long enough to allow for the development of complex life. Lifetime concerns aside, life on the primordial Earth likely started in a deep underwater environment anyway, where the water column shields organisms from UV. It is even possible that more UV could jump-start the genesis and evolution of life, allowing it to arise within the shorter main-sequence lifetimes of hotter stars. In addition, hotter stars have wider habitable zones (2.0–3.7 AU for an F0 star and 1.1–2.2 AU for an F8 star, as opposed to 0.8–1.7 AU for the Solar System), which would be another advantage of looking for habitable planets around F-type stars. If UV does prove to be the main problem, then according to Sato et al. (2014) a planet orbiting at the Early Mars limit around an F8 star would actually be better off than Earth, receiving only 95% of the Earth's UV irradiation, and atmospheric attenuation would reduce the UV irradiation of even a Venus-like planet (in terms of stellar flux) around an F0 star to less than a quarter of Earth's. The best case would be an Earth-like planet with atmospheric attenuation at the Early Mars limit around an F8-type star, where the UV irradiance would be only 3.7% of Earth's.
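The wider habitable zones quoted above follow, to first order, from an inverse-square scaling of stellar flux. The sketch below rescales the Solar System limits (0.8–1.7 AU) by an assumed F0 luminosity of about 6 times the Sun's; because it ignores the spectral dependence of the habitable-zone limits, it only roughly reproduces the 2.0–3.7 AU range given above.

    import math

    # Scale habitable-zone edges by the square root of stellar luminosity.
    # The 6 L_sun value assumed for an F0 star is approximate.
    hz_inner_sun, hz_outer_sun = 0.8, 1.7   # AU, the Solar System limits quoted above
    luminosity_f0 = 6.0                     # in solar luminosities (assumed)

    scale = math.sqrt(luminosity_f0)
    print(f"F0 habitable zone ~ {hz_inner_sun * scale:.1f}-{hz_outer_sun * scale:.1f} AU")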
== Evolutionary changes ==
The greatest variation in UV irradiance occurs for a planet orbiting an F0 star with >1.5 solar masses (M☉), as opposed to an F8 or F9 star with ≤1.2 M☉. The most dangerous phase in a star's life for orbiting habitable planets would be the first 500 million years. In some cases, such as a planet at the outer edge of the habitable zone around an old F5 or F8 star, the planet can receive less UV than Earth does.
== Frequency ==
According to the Kepler data, M-type stars appeared to host more Earth-sized planets than larger, Sun-like stars (where the term is broad, meaning any FGK star). However, in recent years the Gaia space telescope has exposed flaws in the Kepler data, making it apparent that Earth-sized planets around red dwarfs are no more common than around FGK stars. As a result, the habitability of F-type stars is not impaired by the overall frequency of Earth-sized planets around them. However, the same data suggest that Earth-sized planets should be extremely uncommon (<0.1%) in the habitable zones of their stars. So, instead of exoplanets, some studies focus mainly on exomoons orbiting Jupiter-like planets that fall within the habitable zone. Alternatively, a NASA study using the same telescope found that up to 50% of stars with temperatures between 4,300 K (K6) and 7,300 K (F0) had habitable planets, a number that increased to 75% when the optimistic habitable zone was used.
The habitability of F-type systems may be impaired, though, by the fact that they make up only 3% of the stars in the Milky Way, compared to 6–8% for G-types, 12–13% for K-types, and ~70% for red dwarfs. Further study is required to draw decisive conclusions about the frequency of habitable planets around F-type stars.
== Examples ==
As of 2023, there are no confirmed potentially habitable exoplanets around F-type stars, but some unconfirmed Kepler candidates may be potentially habitable, including KOI-7040.01, KOI-6676.01, KOI-5202.01, and KOI-5236.01. Upsilon Andromedae has a Jupiter-like planet in the habitable zone and could therefore have habitable exomoons. HD 10647 also has such a planet, which has a mass of >0.94 Jupiter masses and orbits at the outer frontier of the habitable zone.
=== Upsilon Andromedae system ===
Upsilon Andromedae is an F8-type star. It has three confirmed Jovian planets, and Upsilon Andromedae d orbits in the star's extended habitable zone with an orbital period of 1267 days. It orbits near the outer edge, at 2.5 AU, and has a minimum mass of 4.6 Jupiter masses. The habitability potential therefore lies in possible Earth-like exomoons rather than in the planet itself. It was the first multiple-planet system to be found around a main-sequence star (and consequently around an F star) and has been shown to be dynamically stable in all scenarios.
== See also ==
Habitability of red dwarf systems
Habitability of K-type main-sequence star systems
Habitability of yellow dwarf systems
Earth analog
List of Kepler exoplanet candidates in the habitable zone
List of potentially habitable exoplanets
== References == | Wikipedia/Habitability_of_F-type_main-sequence_star_systems |
Comparative planetary science or comparative planetology is a branch of space science and planetary science in which different natural processes and systems are studied by their effects and phenomena on and between multiple bodies. The planetary processes in question include geology, hydrology, atmospheric physics, and interactions such as impact cratering, space weathering, and magnetospheric physics in the solar wind, and possibly biology, via astrobiology.
Comparison of multiple bodies assists the researcher, if for no other reason than the Earth is far more accessible than any other body. Those distant bodies may then be evaluated in the context of processes already characterized on Earth. Conversely, other bodies (including extrasolar ones) may provide additional examples, edge cases, and counterexamples to earthbound processes; without a greater context, studying these phenomena in relation to Earth alone may result in low sample sizes and observational biases.
== Background ==
The term "comparative planetology" was coined by George Gamow, who reasoned that to fully understand our own planet, we must study others. Poldervaart focused on the Moon, stating "An adequate picture of this original planet and its development to the present earth is of great significance, is in fact the ultimate goal of geology as the science leading to knowledge and understanding of earth's history."
== Geology, geochemistry, and geophysics ==
All terrestrial planets (and some satellites, such as the Moon) are essentially composed of silicates wrapped around iron cores. The large outer Solar System moons and Pluto have more ice, and less rock and metal, but still undergo analogous processes.
=== Volcanism ===
Volcanism on Earth is largely lava-based. Other terrestrial planets display volcanic features assumed to be lava-based, evaluated in the context of analogues readily studied on Earth. For example, Jupiter's moon Io displays extant volcanism, including lava flows. These flows were initially inferred to be composed mostly of various forms of molten elemental sulfur, based on analysis of imaging done by the Voyager probes. However, Earth-based infrared studies done in the 1980s and 1990s caused the consensus to shift in favor of a primarily silicate-based model, with sulfur playing a secondary role.
Much of the surface of Mars is composed of various basalts considered analogous to Hawaiian basalts, as judged by their spectra and in situ chemical analyses (including of Martian meteorites). Mercury and Earth's Moon similarly feature large areas of basalts, formed by ancient volcanic processes. Surfaces in the Martian polar regions show polygonal morphologies, also seen on Earth.
In addition to basalt flows, Venus is home to a large number of pancake dome volcanoes created by highly viscous silica-rich lava flows. These domes lack a known Earth analogue. They do bear some morphological resemblance to terrestrial rhyolite-dacite lava domes, although the pancake domes are much flatter and uniformly round in nature.
Certain regions further out in the Solar System exhibit cryovolcanism, a process not seen anywhere on Earth. Cryovolcanism is studied through laboratory experiments, conceptual and numerical modeling, and by cross-comparison to other examples in the field. Examples of bodies with cryovolcanic features include comets, some asteroids and Centaurs, Mars, Europa, Enceladus, Triton, and possibly Titan, Ceres, Pluto, and Eris.
The trace dopants of Europa's ice are currently postulated to contain sulfur. This is being evaluated via a Canadian sulfate spring as an analogue, in preparation for future Europa probes.
Small bodies such as comets, some asteroid types, and dust grains, on the other hand, serve as counterexamples. Assumed to have experienced little or no heating, these materials may contain (or be) samples representing the early Solar System, which have since been erased from Earth or any other large body.
Some extrasolar planets are covered entirely in lava oceans, and some are tidally locked planets, whose star-facing hemisphere is entirely lava.
=== Cratering ===
The craters observed on the Moon were once assumed to be volcanic. Earth, by comparison, did not show a similar crater count, nor a high frequency of large meteor events, which would be expected as two nearby bodies should experience similar impact rates. Eventually this volcanism model was overturned, as numerous Earth craters (demonstrated by, e.g., shatter cones, shocked quartz and other impactites, and possibly spall) were found, after having been eroded over geologic time. Craters formed by larger and larger ordnance also served as models. The Moon, on the other hand, has no atmosphere or hydrosphere, and could thus accumulate and preserve impact craters over billions of years despite a low impact rate at any one time. In addition, more searches by more groups with better equipment highlighted the great number of asteroids, presumed to have been even more numerous in earlier Solar System periods.
As on Earth, a low crater count on other bodies indicates young surfaces. This is particularly credible if nearby regions or bodies show heavier cratering. Young surfaces, in turn, indicate atmospheric, tectonic or volcanic, or hydrological processing on large bodies and comets, or dust redistribution or a relatively recent formation on asteroids (i.e., splitting from a parent body).
Examination of the cratering record on multiple bodies, at multiple areas in the Solar System, points to a Late Heavy Bombardment, which in turn gives evidence of the Solar System's early history. However, the Late Heavy Bombardment as currently proposed has some issues and is not completely accepted.
One model for Mercury's exceptionally high density compared to other terrestrial planets is the stripping off of a significant amount of crust and/or mantle from extremely heavy bombardment.
=== Differentiation ===
As a large body, Earth can efficiently retain its internal heat (from its initial formation plus the decay of its radioisotopes) over the long timescale of the Solar System. It thus retains a molten core and has differentiated: dense materials have sunk to the core, while light materials have floated to form a crust.
Other bodies, by comparison, may or may not have differentiated, based on their formation history, radioisotope content, further energy input via bombardment, distance from the Sun, size, etc. Studying bodies of various sizes and distances from the Sun provides examples and places constraints on the differentiation process. Differentiation itself is evaluated indirectly, by the mineralogy of a body's surface, versus its expected bulk density and mineralogy, or via shape effects due to slight variations in gravity. Differentiation may also be measured directly, by the higher-order terms of a body's gravity field as measured by a flyby or gravitational assist, and in some cases by librations.
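One widely used diagnostic connected to these gravity and libration measurements is the normalized moment of inertia, C/MR²: a uniform sphere gives 0.4, and values well below 0.4 indicate mass concentrated toward a dense core. The comparison below uses approximate, commonly quoted figures purely as an illustration.

    # Normalized moment of inertia C/(M R^2) as a rough differentiation diagnostic.
    # 0.4 corresponds to a uniform-density sphere; lower values imply a dense core.
    # The planetary values below are approximate, commonly quoted figures.
    moment_of_inertia_factor = {
        "uniform sphere": 0.400,
        "Earth": 0.331,   # strongly differentiated, large iron core
        "Moon": 0.393,    # nearly uniform interior, small core
    }

    for body, value in moment_of_inertia_factor.items():
        print(f"{body}: C/MR^2 = {value:.3f}")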
Edge cases include Vesta and some of the larger moons, which show differentiation but are assumed to have since fully solidified. The question of whether Earth's Moon has solidified, or retains some molten layers, has not been definitively answered. Additionally, differentiation processes are expected to vary along a continuum. Bodies may be composed of lighter and heavier rocks and metals, a high water ice and volatiles content (with less mechanical strength) in cooler regions of the Solar System, or primarily ices with a low rock/metal content even farther from the Sun. This continuum is thought to record the varying chemistries of the early Solar System, with refractories surviving in warm regions, and volatiles driven outward by the young Sun.
The cores of planets are inaccessible, studied indirectly by seismometry, gravimetry, and in some cases magnetometry. However, iron and stony-iron meteorites are likely fragments from the cores of parent bodies which have partially or completely differentiated, then shattered. These meteorites are thus the only means of directly examining deep-interior materials and their processes.
Gas giant planets represent another form of differentiation, with multiple fluid layers by density. Some distinguish further between true gas giants, and ice giants further from the Sun.
=== Tectonics ===
In turn, a molten core may allow plate tectonics, of which Earth shows major features. Mars, a smaller body than Earth, shows no current tectonic activity, nor mountain ridges from geologically recent activity. This is assumed to be due to an interior that has cooled faster than the Earth's (see geomagnetism below). An edge case may be Venus, which does not appear to have extant tectonics; it likely had tectonic activity earlier in its history but has since lost it. It is possible that tectonic activity on Venus may still be sufficient to restart after a long era of accumulation.
Io, despite its high level of volcanism, does not show any tectonic activity, possibly due to sulfur-based magmas with higher temperatures, or simply higher volumetric fluxes. Meanwhile, Vesta's fossae may be considered a form of tectonics, despite that body's small size and cool temperatures.
Europa is a key demonstration of outer-planet tectonics. Its surface shows movement of ice blocks or rafts, strike-slip faults, and possibly diapirs. The question of extant tectonics is far less certain, possibly having been replaced by local cryomagmatism. Ganymede and Triton may contain tectonically or cryovolcanically resurfaced areas, and Miranda's irregular terrains may be tectonic.
Earthquakes are well-studied on Earth, as multiple seismometers or large arrays can be used to derive quake waveforms in multiple dimensions. The Moon is the only other body to successfully receive a seismometer array; "marsquakes" and the Mars interior are based on simple models and Earth-derived assumptions. Venus has received negligible seismometry.
Gas giants may in turn show different forms of heat transfer and mixing. Furthermore, gas giants show different heat effects by size and distance to the Sun. Uranus shows a net negative heat budget to space, but the others (including Neptune, farther out) are net positive.
=== Geomagnetism ===
Two terrestrial planets (Earth and Mercury) display magnetospheres, and thus have molten metal layers. Similarly, all four gas giants have magnetospheres, which indicate layers of conductive fluids. Ganymede also shows a weak magnetosphere, taken as evidence of a subsurface layer of salt water, while the volume around Rhea shows symmetrical effects which may be rings or a magnetic phenomenon. Of these, Earth's magnetosphere is by far the most accessible, including from the surface. It is therefore the most studied, and extraterrestrial magnetospheres are examined in light of prior Earth studies.
Still, differences exist between magnetospheres, pointing to areas needing further research. Jupiter's magnetosphere is stronger than the other gas giants, while Earth's is stronger than Mercury's. Mercury and Uranus have offset magnetospheres, which have no satisfactory explanation yet. Uranus' tipped axis causes its magnetotail to corkscrew behind the planet, with no known analogue. Future Uranian studies may show new magnetospheric phenomena.
Mars shows remnants of an earlier, planetary-scale magnetic field, with stripes as on Earth. This is taken as evidence that the planet had a molten metal core in its prior history, allowing both a magnetosphere and tectonic activity (as on Earth). Both of these have since dissipated. Earth's Moon shows localized magnetic fields, indicating some process other than a large, molten metal core. This may be the source of lunar swirls, not seen on Earth.
=== Geochemistry ===
Apart from their distance to the Sun, different bodies show chemical variations indicating their formation and history. Neptune is denser than Uranus, taken as one piece of evidence that the two may have switched places in the early Solar System. Comets show both high volatile content, and grains containing refractory materials. This also indicates some mixing of materials through the Solar System when those comets formed. Mercury's inventory of materials by volatility is being used to evaluate different models for its formation and/or subsequent modification.
Isotopic abundances indicate processes over the history of the Solar System. To an extent, all bodies formed from the presolar nebula. Various subsequent processes then alter elemental and isotopic ratios. The gas giants in particular have enough gravity to retain primary atmospheres, taken largely from the presolar nebula, as opposed to the later outgassing and reactions of secondary atmospheres. Differences in gas giant atmospheres compared to solar abundances then indicate some process in that planet's history. Meanwhile, gases at small planets such as Venus and Mars have isotopic differences indicating atmospheric escape processes.
The various modifications of surface minerals, or space weathering, is used to evaluate meteorite and asteroid types and ages. Rocks and metals shielded by atmospheres (particularly thick ones), or other minerals, experience less weathering and fewer implantation chemistries and cosmic ray tracks. Asteroids are currently graded by their spectra, indicating surface properties and mineralogies. Some asteroids appear to have less space weathering, by various processes including a relatively recent formation date or a "freshening" event. As Earth's minerals are well shielded, space weathering is studied via extraterrestrial bodies, and preferably multiple examples.
Kuiper Belt Objects display very weathered or in some cases very fresh surfaces. As the long distances result in low spatial and spectral resolutions, KBO surface chemistries are currently evaluated via analogous moons and asteroids closer to Earth.
== Aeronomy and atmospheric physics ==
Earth's atmosphere is far thicker than that of Mars, while far thinner than that of Venus. In turn, the envelopes of gas giants are a different class entirely, and show their own gradations. Meanwhile, smaller bodies show tenuous atmospheres ("surface-bound exospheres"), with the exception of Titan and arguably Triton. Comets vary between negligible atmospheres in the outer Solar System and active comas millions of miles across at perihelion. Exoplanets may in turn possess atmospheric properties both known and unknown in the Solar System.
=== Aeronomy ===
Atmospheric escape is largely a thermal process. The atmosphere a body can retain therefore varies from the warmer inner Solar System, to the cooler outer regions. Different bodies in different Solar System regions provide analogous or contrasting examples. The atmosphere of Titan is considered analogous to an early, colder Earth; the atmosphere of Pluto is considered analogous to an enormous comet.
The presence or absence of a magnetic field affects an upper atmosphere, and in turn the overall atmosphere. Impacts of solar wind particles create chemical reactions and ionic species, which may in turn affect magnetospheric phenomena. Earth serves as a counterexample to Venus and Mars, which have no planetary magnetospheres, and to Mercury, with a magnetosphere but negligible atmosphere.
Jupiter's moon Io produces sulfur emissions, which feed a cloud of sulfur and some sodium around that planet. Similarly, Earth's Moon has trace sodium emissions and a far weaker tail. Mercury also has a trace sodium atmosphere.
Jupiter itself is assumed to share some characteristics with extrasolar "super-Jupiters" and brown dwarfs.
=== Seasons ===
Uranus, tipped on its side, is postulated to have seasonal effects far stronger than on Earth. Similarly, Mars is postulated to have varied its axial tilt over eons, and to a far greater extent than on Earth. This is hypothesized to have dramatically altered not only seasons but climates on Mars, for which some evidence has been observed. Venus has negligible tilt, eliminating seasons, and a slow, retrograde rotation, causing different diurnal effects than on Earth and Mars.
=== Clouds and haze layers ===
From Earth, a planetwide cloud layer is the dominant feature of Venus in the visible spectrum; this is also true of Titan. Venus' cloud layer is composed of sulfuric acid droplets, while Titan's is a mixture of organics.
The gas giant planets display clouds or belts of various compositions, including ammonia and methane.
=== Circulation and winds ===
Venus and Titan, and to a lesser extent Earth, are super-rotators: the atmosphere turns about the planet faster than the surface beneath. While these atmospheres share physical processes, they exhibit diverse characteristics.
Hadley cells, first postulated and confirmed on Earth, are seen in different forms in other atmospheres. Earth has Hadley cells north and south of its equator, leading to additional cells by latitude. Mars' Hadley circulation is offset from its equator. Titan, a far smaller body, likely has one enormous cell, flipping polarity from northerly to southerly with its seasons.
The bands of Jupiter are thought to be numerous Hadley-like cells by latitude.
=== Storms and cyclonic activity ===
The large storms seen on the gas giants are considered analogous to Earth cyclones. However, as expected, this is an imperfect analogy, given the large differences in size, temperature, and composition between Earth and the gas giants, and even among the gas giants themselves.
Polar vortices have been observed on Venus and Saturn. In turn, Earth's thinner atmosphere shows weaker polar vorticity and effects.
=== Lightning and aurorae ===
Both lightning and aurorae have been observed on other bodies after extensive study at Earth. Lightning has been detected on Venus, and may be a sign of active volcanism on that planet, as volcanic lightning is known on Earth. Aurorae have been observed on Jupiter and its moon Ganymede.
=== Comparative climatology ===
An understanding of the evolutionary histories and current states of the climates of Venus and Mars is directly relevant to studies of the past, present, and future climates of Earth.
== Hydrology ==
A growing number of bodies display relict or current hydrological modification. Earth, the "ocean planet," is the prime example. Other bodies display lesser modifications, indicating their similarities and differences. This may be defined to include fluids other than water, such as light hydrocarbons on Titan, or possibly supercritical carbon dioxide on Mars, which do not persist in Earth conditions. Ancient lava flows in turn may be considered a form of hydrological modification, which may be confounded with other fluids. Io currently has lava calderas and lakes. Fluid modification may have occurred on bodies as small as Vesta; hydration in general has been observed.
If fluids include groundwater and vapor, the list of bodies with hydrological modification includes Earth, Mars, and Enceladus, to a lesser extent comets and some asteroids, likely Europa and Triton, and possibly Ceres, Titan, and Pluto. Venus may have had hydrology in its early history, which would since have been erased.
Fluid modification and mineral deposition on Mars, as observed by the MER and MSL rovers, are studied in light of Earth features and minerals. Minerals observed from orbiters and landers indicate formation in aqueous conditions; morphologies indicate fluid action and deposition.
Extant Mars hydrology includes brief, seasonal flows on slopes; however, most Martian water is frozen into its polar caps and subsurface, as indicated by ground penetrating radars and pedestal craters. Antifreeze mixtures such as salts, peroxides, and perchlorates may allow fluid flow at Martian temperatures.
Analogues of Mars landforms on Earth include Siberian and Hawaiian valleys, Greenland slopes, the Columbia Plateau, and various playas. Analogues for human expeditions (e.g. geology and hydrology fieldwork) include Devon Island, Canada, Antarctica, Utah, the Euro-Mars project, and Arkaroola, South Australia.
The Moon, on the other hand, is a natural laboratory for regolith processes and weathering on anhydrous airless bodies: modification and alteration by meteoroid and micrometeoroid impacts, the implantation of solar and interstellar charged particles, radiation damage, spallation, exposure to ultraviolet radiation, and so on. Knowledge of the processes that create and modify the lunar regolith is essential to understanding the compositional and structural attributes of other airless planet and asteroid regoliths.
Other possibilities include extrasolar planets completely covered by oceans, which would lack some Earthly processes.
== Dynamics ==
Earth, alone among terrestrial planets, possesses a large moon. This is thought to confer stability to Earth's axial tilt, and thus seasons and climates. The closest analogue is the Pluto-Charon system, though its axial tilt is completely different. Both the Moon and Charon are hypothesized to have formed via giant impacts.
Giant impacts are hypothesized to account for both the tilt of Uranus, and the retrograde rotation of Venus. Giant impacts are also candidates for the Mars ocean hypothesis, and the high density of Mercury.
Most giant planets (except Neptune) have retinues of moons, rings, ring shepherds, and moon Trojans analogous to mini-solar systems. These systems are postulated to have accreted from analogous gas clouds, and possibly with analogous migrations during their formation periods. The Cassini mission was defended on the grounds that Saturn system dynamics would contribute to studies of Solar System dynamics and formation.
Studies of ring systems inform us of many-body dynamics. This is applicable to the asteroid and Kuiper belts, and to the early Solar System, which had more objects, dust, and gas. It is relevant to the magnetospherics of those bodies. It is also relevant to the dynamics of the Milky Way galaxy and others. In turn, though the Saturnian system is readily studied (by Cassini, ground telescopes, and space telescopes), the simpler and lower-mass ring systems of the other giants make them somewhat easier to explain. The Jupiter ring system is perhaps more completely understood at present than any of the other three.
Asteroid families and gaps indicate their local dynamics. They are in turn indicative of the Kuiper Belt, and its hypothesized Kuiper cliff. The Hildas and Jupiter Trojans are then relevant to the Neptune Trojans and Plutinos, Twotinos, etc.
Neptune's relative lack of a moon system reflects its formation and dynamics. The migration of Triton explains the ejection or destruction of competing moons, analogous to Hot Jupiters (also found in sparse systems) and, on a smaller scale, to the Grand Tack hypothesis for Jupiter itself.
The planets are considered to have formed by accretion of larger and larger particles, into asteroids and planetesimals, and into today's bodies. Vesta and Ceres are hypothesized to be the only surviving examples of planetesimals, and thus samples of the formative period of the Solar System.
Transits of Mercury and Venus have been observed as analogues of extrasolar transits. As Mercury and Venus transits are far closer and thus appear "deeper," they can be studied in far finer detail. Similarly, analogues to the Solar System's asteroid and Kuiper belts have been observed around other star systems, though in far less detail.
== Astrobiology ==
Earth is the only body known to contain life; this results in geologic and atmospheric life signatures apart from the organisms themselves. Methane observed on Mars has been postulated but cannot be definitively ascribed as a biosignature. Multiple processes of non-biological methane generation are seen on Earth as well.
The detection of biomarkers or biosignatures on other worlds is an active area of research. Although oxygen and/or ozone are generally considered strong signs of life, these too have alternate, non-biological explanations.
The Galileo mission, while performing a gravity assist flyby of Earth, treated the planet as an extraterrestrial one, in a test of life detection techniques. Conversely, the Deep Impact mission's High Resolution Imager, intended for examining comets starting from great distances, could be repurposed for exoplanet observations in its EPOXI extended mission.
In turn, the detection of life entails identifying the processes that favor or prevent life. This occurs primarily via study of Earth life and Earth processes, though this is in effect a sample size of one. Care must be taken to avoid observation and selection biases. Astrobiologists consider alternative chemistries for life, and study on Earth extremophile organisms that expand the potential definitions of habitable worlds.
== See also ==
Europlanet
List of Mars analogs
Lunar Crater National Natural Landmark
Terrestrial Analogue Sites
== Bibliography ==
Murray, B. Earthlike Planets (1981) W. H. Freeman and Company ISBN 0-7167-1148-6
Consolmagno, G.; Schaefer, M. (1994). Worlds Apart: A Textbook In Planetary Sciences. Prentice Hall. ISBN 978-0-13-964131-2.
Cattermole, P. (1995). Earth And Other Planets. Oxford University Press. ISBN 978-0-19-521138-2.
Petersen, C.; Beatty, K.; Chaikin, A. (1999). The New Solar System, 4th Edition. Cambridge University Press. ISBN 9780521645874.
K. Condie (2005). Earth as an Evolving Planetary System. Elsevier. ISBN 978-0-12-088392-9.
C. Cockell (2007). Space on Earth. Macmillan. ISBN 978-0-230-00752-9.
J. Bennett; et al. (2012). The Cosmic Perspective, 7th Edition. Addison-Wesley. ISBN 9780321841063.
== References ==
== External links ==
NASA Astrobiology
Astrobiology Magazine- Comparative Planetology
Laboratory for Comparative Planetology, Vernadsky Institute | Wikipedia/Comparative_planetary_science |
A terrestrial planet, tellurian planet, telluric planet, or rocky planet, is a planet that is composed primarily of silicate rocks or metals. Within the Solar System, the terrestrial planets accepted by the IAU are the inner planets closest to the Sun: Mercury, Venus, Earth and Mars. Among astronomers who use the geophysical definition of a planet, two or three planetary-mass satellites – Earth's Moon, Io, and sometimes Europa – may also be considered terrestrial planets. The large rocky asteroids Pallas and Vesta are sometimes included as well, albeit rarely. The terms "terrestrial planet" and "telluric planet" are derived from Latin words for Earth (Terra and Tellus), as these planets are, in terms of structure, Earth-like. Terrestrial planets are generally studied by geologists, astronomers, and geophysicists.
Terrestrial planets have a solid planetary surface, making them substantially different from larger gaseous planets, which are composed mostly of some combination of hydrogen, helium, and water existing in various physical states.
== Structure ==
All terrestrial planets in the Solar System have the same basic structure: a central metallic core (mostly iron) with a surrounding silicate mantle.
The large rocky asteroid 4 Vesta has a similar structure, as possibly does the smaller asteroid 21 Lutetia. Another rocky asteroid, 2 Pallas, is about the same size as Vesta but significantly less dense; it appears never to have differentiated into a core and mantle. The Earth's Moon and Jupiter's moon Io have structures similar to terrestrial planets, but the Earth's Moon has a much smaller iron core. Another Jovian moon, Europa, has a similar density but has a significant ice layer on the surface; for this reason, it is sometimes considered an icy planet instead.
Terrestrial planets can have surface structures such as canyons, craters, mountains, volcanoes, and others, depending on the presence at any time of an erosive liquid or tectonic activity or both.
Terrestrial planets have secondary atmospheres, generated by volcanic out-gassing or from comet impact debris. This contrasts with the outer, giant planets, whose atmospheres are primary; primary atmospheres were captured directly from the original solar nebula.
== Terrestrial planets within the Solar System ==
The Solar System has four terrestrial planets under the dynamical definition: Mercury, Venus, Earth and Mars. The Earth's Moon as well as Jupiter's moons Io and Europa would also count geophysically, as well as perhaps the large protoplanet-asteroids Pallas and Vesta (though those are borderline cases). Among these bodies, only the Earth has an active surface hydrosphere. Europa is believed to have an active hydrosphere under its ice layer.
During the formation of the Solar System, there were many terrestrial planetesimals and proto-planets, but most merged with or were ejected by the four terrestrial planets, leaving only Pallas and Vesta to survive more or less intact. These two were likely both dwarf planets in the past, but have been battered out of equilibrium shapes by impacts. Some other protoplanets began to accrete and differentiate but suffered catastrophic collisions that left only a metallic or rocky core, like 16 Psyche or 8 Flora respectively. Many S-type and M-type asteroids may be such fragments.
The other round bodies from the asteroid belt outward are geophysically icy planets. They are similar to terrestrial planets in that they have a solid surface, but are composed of ice and rock rather than of rock and metal. These include the dwarf planets, such as Ceres, Pluto and Eris, which are found today only in the regions beyond the formation snow line where water ice was stable under direct sunlight in the early Solar System. It also includes the other round moons, which are ice-rock (e.g. Ganymede, Callisto, Titan, and Triton) or even almost pure (at least 99%) ice (Tethys and Iapetus). Some of these bodies are known to have subsurface hydrospheres (Ganymede, Callisto, Enceladus, and Titan), like Europa, and it is also possible for some others (e.g. Ceres, Mimas, Dione, Miranda, Ariel, Triton, and Pluto). Titan even has surface bodies of liquid, albeit liquid methane rather than water. Jupiter's Ganymede, though icy, does have a metallic core like the Moon, Io, Europa, and the terrestrial planets.
The name Terran world has been suggested to define all solid worlds (bodies assuming a rounded shape), without regard to their composition. It would thus include both terrestrial and icy planets.
=== Density trends ===
The uncompressed density of a terrestrial planet is the average density its materials would have at zero pressure. A greater uncompressed density indicates a greater metal content. Uncompressed density differs from the true average density (also often called "bulk" density) because compression within planet cores increases their density; the average density depends on planet size, temperature distribution, and material stiffness as well as composition.
Calculations to estimate uncompressed density inherently require a model of the planet's structure. Where there have been landers or multiple orbiting spacecraft, these models are constrained by seismological data and also moment of inertia data derived from the spacecraft's orbits. Where such data is not available, uncertainties are inevitably higher.
The uncompressed densities of the rounded terrestrial bodies directly orbiting the Sun trend towards lower values as the distance from the Sun increases, consistent with the temperature gradient that would have existed within the primordial solar nebula. The Galilean satellites show a similar trend going outwards from Jupiter; however, no such trend is observable for the icy satellites of Saturn or Uranus. The icy worlds typically have densities less than 2 g·cm−3. Eris is significantly denser (2.43±0.05 g·cm−3), and may be mostly rocky with some surface ice, like Europa. It is unknown whether extrasolar terrestrial planets in general will follow such a trend.
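The distinction between bulk and uncompressed density can be illustrated with Earth: dividing the planet's mass by its volume gives the familiar bulk value near 5.5 g·cm−3, whereas its uncompressed density is usually quoted at roughly 4.0–4.4 g·cm−3. The short calculation below (using rounded values for mass and mean radius) produces only the bulk figure; the uncompressed value requires an interior model, as noted above.

    import math

    # Bulk (average) density of Earth from rounded mass and mean radius values.
    mass_kg = 5.972e24
    radius_m = 6.371e6

    volume_m3 = (4.0 / 3.0) * math.pi * radius_m ** 3
    bulk_density_g_cm3 = (mass_kg / volume_m3) / 1000.0
    print(f"Earth bulk density ~ {bulk_density_g_cm3:.2f} g/cm^3")   # about 5.5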
The data in the tables below are mostly taken from a list of gravitationally rounded objects of the Solar System and planetary-mass moon. All distances from the Sun are averages.
== Extrasolar terrestrial planets ==
Most of the planets discovered outside the Solar System are giant planets, because they are more easily detectable. But since 2005, hundreds of potentially terrestrial extrasolar planets have also been found, with several being confirmed as terrestrial. Most of these are super-Earths, i.e. planets with masses between Earth's and Neptune's; super-Earths may be gas planets or terrestrial, depending on their mass and other parameters.
During the early 1990s, the first extrasolar planets were discovered orbiting the pulsar PSR B1257+12, with masses of 0.02, 4.3, and 3.9 times that of Earth, by pulsar timing.
When 51 Pegasi b, the first planet found around a star still undergoing fusion, was discovered, many astronomers assumed it to be a gigantic terrestrial, because it was assumed no gas giant could exist as close to its star (0.052 AU) as 51 Pegasi b did. It was later found to be a gas giant.
In 2005, the first planets orbiting a main-sequence star and which showed signs of being terrestrial planets were found: Gliese 876 d and OGLE-2005-BLG-390Lb. Gliese 876 d orbits the red dwarf Gliese 876, 15 light years from Earth, and has a mass seven to nine times that of Earth and an orbital period of just two Earth days. OGLE-2005-BLG-390Lb has about 5.5 times the mass of Earth and orbits a star about 21,000 light-years away in the constellation Scorpius.
From 2007 to 2010, three (possibly four) potential terrestrial planets were found orbiting within the Gliese 581 planetary system. The smallest, Gliese 581e, is only about 1.9 Earth masses, but orbits very close to the star. Two others, Gliese 581c and the disputed Gliese 581d, are more-massive super-Earths orbiting in or close to the habitable zone of the star, so they could potentially be habitable, with Earth-like temperatures.
Another possibly terrestrial planet, HD 85512 b, was discovered in 2011; it has at least 3.6 times the mass of Earth.
The radii and compositions of all these planets are unknown.
The first confirmed terrestrial exoplanet, Kepler-10b, was found in 2011 by the Kepler space telescope, specifically designed to discover Earth-size planets around other stars using the transit method.
In the same year, the Kepler space telescope mission team released a list of 1235 extrasolar planet candidates, including six that are "Earth-size" or "super-Earth-size" (i.e. they have a radius less than twice that of the Earth) and in the habitable zone of their star.
Since then, Kepler has discovered hundreds of planets ranging from Moon-sized to super-Earths, with many more candidates in this size range (see image).
In 2016, statistical modeling of the relationship between a planet's mass and radius using a broken power law suggested that the transition point between rocky, terrestrial worlds and mini-Neptunes without a defined surface lies very close to the masses of Earth and Venus, implying that rocky worlds much larger than our own are in fact quite rare. This led some to advocate retiring the term "super-Earth" as scientifically misleading. Since 2016 the catalog of known exoplanets has grown significantly, and several refinements of the mass-radius model have been published. As of 2024, the expected transition point between rocky and intermediate-mass planets sits at roughly 4.4 Earth masses and roughly 1.6 Earth radii.
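A broken power law of the kind described above can be written as R ∝ M^a below the break and R ∝ M^b above it, with the two branches meeting at the transition mass. The sketch below uses the roughly 4.4 Earth-mass break quoted in the text together with illustrative exponents; the exponents are assumptions chosen for demonstration, not fitted values from any particular study.

    # Illustrative broken power-law mass-radius relation (Earth units).
    # The 4.4 Earth-mass break follows the transition quoted above;
    # the exponents are assumed values used only for demonstration.
    M_BREAK = 4.4
    A_ROCKY, B_VOLATILE = 0.27, 0.6

    def radius(mass: float) -> float:
        """Planet radius in Earth radii for a given mass in Earth masses."""
        if mass <= M_BREAK:
            return mass ** A_ROCKY
        # Continue smoothly from the rocky branch above the break.
        return (M_BREAK ** A_ROCKY) * (mass / M_BREAK) ** B_VOLATILE

    for m in (1.0, 4.4, 10.0):
        print(f"M = {m:4.1f} Earth masses -> R ~ {radius(m):.2f} Earth radii")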
In September 2020, astronomers using microlensing techniques reported the detection, for the first time, of an Earth-mass rogue planet (named OGLE-2016-BLG-1928) unbounded by any star, and free-floating in the Milky Way galaxy.
=== List of terrestrial exoplanets ===
The following exoplanets have a density of at least 5 g/cm3 and a mass below Neptune's and are thus very likely terrestrial:
Kepler-10b, Kepler-20b, Kepler-36b, Kepler-48d, Kepler 68c, Kepler-78b, Kepler-89b, Kepler-93b, Kepler-97b, Kepler-99b, Kepler-100b, Kepler-101c, Kepler-102b, Kepler-102d, Kepler-113b, Kepler-131b, Kepler-131c, Kepler-138c, Kepler-406b, Kepler-406c, Kepler-409b.
=== Frequency ===
In 2013, astronomers reported, based on Kepler space mission data, that there could be as many as 40 billion Earth- and super-Earth-sized planets orbiting in the habitable zones of Sun-like stars and red dwarfs within the Milky Way. Eleven billion of these estimated planets may be orbiting Sun-like stars. The nearest such planet may be 12 light-years away, according to the scientists. However, this does not give estimates for the number of extrasolar terrestrial planets, because there are planets as small as Earth that have been shown to be gas planets (see Kepler-138d).
Estimates show that about 80% of potentially habitable worlds are covered by land, and about 20% are ocean planets. Planets with ratios more like that of Earth, which is about 30% land and 70% ocean, make up only about 1% of these worlds.
== Types ==
Several possible classifications for solid planets have been proposed.
Silicate planet
A solid planet like Venus, Earth, or Mars, made primarily of a silicon-based rocky mantle with a metallic (iron) core.
Carbon planet (also called "diamond planet")
A theoretical class of planets, composed of a metal core surrounded by primarily carbon-based minerals. They may be considered a type of terrestrial planet if the metal content dominates. The Solar System contains no carbon planets but does have carbonaceous asteroids, such as Ceres and Hygiea. It is unknown if Ceres has a rocky or metallic core.
Iron planet
A theoretical type of solid planet that consists almost entirely of iron and therefore has a greater density and a smaller radius than other solid planets of comparable mass. Mercury in the Solar System has a metallic core equal to 60–70% of its planetary mass, and is sometimes called an iron planet, though its surface is made of silicates and is iron-poor. Iron planets are thought to form in the high-temperature regions close to a star, like Mercury, and if the protoplanetary disk is rich in iron.
Icy planet
A type of solid planet with an icy surface of volatiles. In the Solar System, most planetary-mass moons (such as Titan, Triton, and Enceladus) and many dwarf planets (such as Pluto and Eris) have such a composition. Europa is sometimes considered an icy planet due to its surface ice, but its higher density indicates that its interior is mostly rocky. Such planets can have internal saltwater oceans and cryovolcanoes erupting liquid water (i.e. an internal hydrosphere, like Europa or Enceladus); they can have an atmosphere and hydrosphere made from methane or nitrogen (like Titan). A metallic core is possible, as exists on Ganymede.
Coreless planet
A theoretical type of solid planet that consists of silicate rock but has no metallic core, i.e. the opposite of an iron planet. Although the Solar System contains no coreless planets, chondrite asteroids and meteorites are common in the Solar System. Ceres and Pallas have mineral compositions similar to carbonaceous chondrites, though Pallas is significantly less hydrated. Coreless planets are thought to form farther from the star where volatile oxidizing material is more common.
== See also ==
Chthonian planet
Earth analog
List of potentially habitable exoplanets
Planetary habitability
Venus zone
List of gravitationally rounded objects of the Solar System
== References == | Wikipedia/Terrestrial_planet |
Planetary and Space Science (P&SS), published 15 times per year, is a peer-reviewed scientific journal established in 1959. It publishes original research articles along with short communications (letters). Its main topic is Solar System processes, a subject that encompasses multiple areas of the natural sciences; these processes are studied through observations made at ground-based facilities or on board space platforms, as well as through numerical simulations. The editor-in-chief is Maria Cristina De Sanctis (National Institute of Astrophysics, Rome, Italy). It is published by Elsevier.
== Scope ==
Research in the planetary and space sciences spans many disciplines, which is reflected in the scope of the journal.
=== Basic science ===
Celestial mechanics is part of these studies, as this science includes understanding the dynamical evolution of the Solar System and relativistic effects, among other areas of analysis.
Cosmochemistry is also part of the published research in this journal. Cosmochemistry, in this instance, includes all aspects of the initial physical and chemical formation of the Solar System, along with its subsequent evolution through these physical and chemical processes.
=== The planets ===
The research extends to the terrestrial planets and their satellites. This involves the physics of their interiors, the geology of the planet or satellite surface, surface morphology, and the study of their tectonics, mineralogy, and dating. Observation of the outer planets and their satellites includes the study of their formation and evolution. This observation and study involves remote sensing at all wavelengths and in situ measurements.
Planet formation and planet evolution is of interest when gathering and interpreting data for planetary atmospheres. Atmospheric circulation, meteorology, and boundary layers are also part of the original published research. Understanding is gained through remote sensing and laboratory simulation.
The study of planets also includes magnetospheres and ionospheres. The origin of their respective magnetic fields, magnetospheric plasma and radiation belts is also of interest. Included in this area is the interaction of magnetospheres and ionospheres with the Sun, solar wind, and their natural satellites.
=== Other studies ===
Research that involves the small bodies of the Solar System is also published. Small bodies include dust, ring particles, asteroids, comets, and the zodiacal light. This research also describes their interaction with solar radiation and the solar wind.
Beyond the Solar System, extrasolar-system studies are also considered a field of interest for this journal. This includes the detection of exoplanets, as well as determining whether given exoplanets or exosystems can be detected. The formation and evolution of these planets and systems are also of interest.
History of planetary and space research is also part of the journal's scope.
== Abstracting and indexing ==
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2020 impact factor of 2.03.
== References ==
== External links ==
Official website | Wikipedia/Planetary_and_Space_Science |
The Catalog of Nearby Habitable Systems (HabCat) is a catalogue of star systems which conceivably have habitable planets. The list was developed by scientists Jill Tarter and Margaret Turnbull under the auspices of Project Phoenix, a part of SETI.
The list was based upon the Hipparcos Catalogue (which has 118,218 stars) by filtering on a wide range of star system features. The current list contains 17,129 "HabStars".
== External links ==
Target Selection for SETI: 1. A Catalog of Nearby Habitable Stellar Systems, Turnbull, Tarter, submitted 31 Oct 2002 (last accessed 19 Jan 2010)
Target selection for SETI. II. Tycho-2 dwarfs, old open clusters, and the nearest 100 stars Archived 2008-08-21 at the Wayback Machine, by Turnbull and Tarter, (last accessed 19 Jan 2010)
HabStars Archived 2003-10-04 at the Wayback Machine - an article on the NASA website | Wikipedia/Catalog_of_Nearby_Habitable_Systems |
The habitability of neutron star systems is the potential of planets and moons orbiting a neutron star to provide suitable habitats for life. Of the roughly 3000 neutron stars known, only a handful have sub-stellar companions. The most famous of these are the low-mass planets around the millisecond pulsar PSR B1257+12.
Habitability is conventionally defined by the equilibrium temperature of a planet, which is a function of the amount of incoming radiation; a planet is defined as "habitable" if liquid water can exist on its surface, although even planets with little external energy can harbour underground life. Pulsars do not emit large quantities of radiation, given their small size; the habitable zone can easily end up lying so close to the star that tidal effects destroy the planets. Additionally, it is often unclear how much radiation a given pulsar emits and how much of it can actually reach a hypothetical planet's surface; of the known pulsar planets, only those of PSR B1257+12 are close to the habitable zone, and as of 2015, no known pulsar planet is likely to be habitable.
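For reference, the equilibrium temperature mentioned above follows from balancing absorbed and re-radiated power. For a planet with Bond albedo A at distance d from a source of bolometric luminosity L, a standard form is

    T_\mathrm{eq} = \left( \frac{L \, (1 - A)}{16 \pi \sigma d^{2}} \right)^{1/4}

where σ is the Stefan–Boltzmann constant. For a pulsar, much of the relevant luminosity arrives as X-rays and high-energy particles rather than visible light (as discussed below), which complicates the usual definition of the habitable zone.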
A habitable planet orbiting a neutron star must be between one and 10 times the mass of the Earth. If the planet were lighter, its atmosphere would be lost. Its atmosphere must also be thick enough to convert the intense X-ray radiation from the neutron star into heat on its surface allowing it to have a temperature suitable for life.
A sufficiently strong planetary magnetic field, forming a magnetosphere, would protect the planet from the strong stellar winds from the neutron star. This could preserve the planet's atmosphere for several billion years. Such a planet could have liquid water on its surface.
A Dutch research team published an article on the subject in the journal Astronomy & Astrophysics in December 2017.
== See also ==
Habitability of red dwarf systems
Habitability of K-type main-sequence star systems
Habitability of natural satellites
Neutron stars in fiction
== References == | Wikipedia/Habitability_of_neutron_star_systems |
The theorized habitability of red dwarf systems is determined by a large number of factors. Modern evidence suggests that planets in red dwarf systems are unlikely to be habitable, due to high probability of tidal locking, likely lack of atmospheres, and the high stellar variation many such planets would experience. However, the sheer number and longevity of red dwarfs could likely provide ample opportunity to realize any small possibility of habitability.
Current arguments concerning the habitability of red dwarf systems are unresolved, and the area remains an open question of study in the fields of climate modeling and the evolution of life on Earth. Observational data and statistical arguments suggest that red dwarf systems are uninhabitable for indeterminate reasons. On the other hand, 3D climate models favor habitability and wider habitable zones for slow rotating and tidally locked planets.
A major impediment to the development of life in red dwarf systems is the intense tidal heating caused by the eccentric orbits of planets around their host stars. Other tidal effects reduce the probability of life around red dwarfs, such as the lack of planetary axial tilts and the extreme temperature differences created by one side of planet permanently facing the star and the other perpetually turned away. Still, a planetary atmosphere may redistribute the heat, making temperatures more uniform. However, it is important to bear in mind that many red dwarfs are flare stars, and their flare events could greatly reduce the habitability of their satellites by eroding their atmosphere (though a planetary magnetic field could protect from these flares). Non-tidal factors further reduce the prospects for life in red-dwarf systems, such as spectral energy distributions shifted toward the infrared side of the spectrum relative to the Sun and small circumstellar habitable zones due to low light output.
There are, however, a few factors that could increase the likelihood of life on red dwarf planets. Intense cloud formation on the star-facing side of a tidally locked planet can likely reduce overall thermal flux and equilibrium temperature differences between the two sides of the planet. In addition, the sheer number of red dwarfs statistically increases the probability that there might exist habitable planets orbiting some of them. Red dwarfs account for about 85% of stars in the Milky Way and constitute the vast majority of stars in spiral and elliptical galaxies. There are expected to be tens of billions of super-Earth planets in the habitable zones of red dwarf stars in the Milky Way. Investigating the habitability of red dwarf star systems could help determine the frequency of life in the universe and aid scientific understanding of the evolution of life.
== Background ==
Red dwarfs are the smallest, coolest, and most common type of star. Estimates of their abundance range from 70% of stars in spiral galaxies to more than 90% of all stars in elliptical galaxies, an often quoted median figure being 72–76% of the stars in the Milky Way (known since the 1990s from radio telescopic observation to be a barred spiral). Red dwarfs are usually defined as being of spectral type M, although some definitions are wider (including also some or all K-type stars). Given their low energy output, red dwarfs are almost never visible to the naked eye from Earth: the closest red dwarf to the Sun, Proxima Centauri, is far too faint to be seen without a telescope. The brightest red dwarf in Earth's night sky, Lacaille 8760 (magnitude +6.7), is visible to the naked eye only under ideal viewing conditions.
== Longevity and ubiquity ==
Red dwarfs’ greatest advantage as candidate stars for life is their longevity. It took 4.5 billion years for intelligent life to evolve on Earth, and life as we know it will see suitable conditions for 1 to 2.3 billion years more. Red dwarfs, by contrast, could live for trillions of years, as their nuclear reactions are far slower than those of larger stars, meaning that life would have longer to evolve and survive.
While the likelihood of finding a planet in the habitable zone around any specific red dwarf is slight, the total amount of habitable zone around all red dwarfs combined is equal to the total amount around Sun-like stars, given their ubiquity. Furthermore, this total amount of habitable zone will last longer, because red dwarf stars live for hundreds of billions of years or even longer on the main sequence, potentially allowing for the evolution of microbial or intelligent life in the future.
== Luminosity and spectral composition ==
For years, astronomers have been pessimistic about red dwarfs as potential candidates for hosting life. The low masses of red dwarfs (from roughly 0.08 to 0.60 solar masses (M☉)) cause their nuclear fusion reactions to proceed exceedingly slowly, giving them low luminosities ranging from 10% to just 0.0125% that of the Sun. Consequently, any planet orbiting a red dwarf would need a small semi-major axis in order to maintain an Earth-like surface temperature, from 0.268 astronomical units (AU) for a relatively luminous red dwarf like Lacaille 8760 to 0.032 AU for a smaller star like Proxima Centauri. Such a world would have a year lasting just 3 to 150 Earth days.
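As a rough sketch of where such distances and year lengths come from (not a calculation from the article's sources): a planet receives Earth-like insolation at a distance that scales with the square root of the stellar luminosity, and its year then follows from Kepler's third law. The luminosities and masses below are assumed, approximate values for the two stars named above.

```python
import math

def earthlike_distance_au(luminosity_solar):
    """Distance (AU) at which a planet receives Earth's insolation, assuming flux ~ L / r^2."""
    return math.sqrt(luminosity_solar)

def orbital_period_days(a_au, stellar_mass_solar):
    """Kepler's third law with P in years, a in AU, M in solar masses: P^2 = a^3 / M."""
    return 365.25 * math.sqrt(a_au ** 3 / stellar_mass_solar)

# Assumed, approximate values for the stars mentioned in the text.
for name, luminosity, mass in [("Lacaille 8760", 0.072, 0.60),
                               ("Proxima Centauri", 0.0017, 0.12)]:
    a = earthlike_distance_au(luminosity)
    print(f"{name}: a = {a:.3f} AU, year = {orbital_period_days(a, mass):.0f} Earth days")
```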
Photosynthesis on such a planet would be difficult, as much of the low luminosity falls under the lower energy infrared and red part of the electromagnetic spectrum, and would therefore require additional photons to achieve excitation potentials. Potential plants would likely adapt to a much wider spectrum (and as such appear black in visible light). However, further research, including a consideration of the amount of photosynthetically active radiation, has suggested that tidally locked planets in red dwarf systems might at least be habitable for higher plants. Additionally, some bacteria, such as purple bacteria, have pigments such as bacteriochlorophyll which absorb infrared light, making at least hotter red dwarfs potentially suitable for photosynthetic life.
In addition, because water strongly absorbs red and infrared light, less energy would be available for aquatic life on red dwarf planets. However, a similar effect of preferential absorption by water ice would increase its temperature relative to an equivalent amount of radiation from a Sun-like star, thereby extending the habitable zone of red dwarfs outward.
The evolution of red dwarf stars may also inhibit habitability. Because red dwarfs have an extended pre-main-sequence phase, the region that will eventually become the habitable zone remains, for around 1 billion years, at distances where water would exist as vapor rather than liquid. Thus, terrestrial planets in the eventual habitable zone, if provided with abundant surface water in their formation, would have been subject to a runaway greenhouse effect for several hundred million years. During such an early runaway greenhouse phase, photolysis of water vapor would allow hydrogen to escape to space and the loss of several Earth oceans of water, leaving a thick abiotic oxygen atmosphere. Nevertheless, photolysis could be at least slowed down with a sufficient ozone layer.
Since the lifespan of red dwarf stars exceeds the age of the known universe, the further evolution of red dwarfs is known only by theory and simulations. According to computer simulations, a red dwarf becomes a blue dwarf after exhausting its hydrogen supply. As this kind of star is more luminous than the previous red dwarf, planets orbiting it that were frozen during the former stage could be thawed during the several billions of years this evolutionary stage lasts (5 billion years, for example, for a 0.16 M☉ star), giving life an opportunity to appear and evolve.
== Tidal effects ==
For planets to retain significant amounts of water in the habitable zone of ultra-cool dwarfs, a planet must orbit very near to the star. At these close orbital distances, tidal locking to the host star is likely. Tidal locking makes the planet rotate on its axis once every revolution around the star. As a result, one side of the planet would eternally face the star and another side would perpetually face away, creating great extremes of temperature.
For many years, it was believed that life on such planets would be limited to a ring-like region known as the terminator, where the star would always appear on or close to the horizon. It was also believed that efficient heat transfer between the sides of the planet necessitates atmospheric circulation of an atmosphere so thick as to disallow photosynthesis. Due to differential heating, it was argued, a tidally locked planet would experience fierce winds with permanent torrential rain at the point directly facing the local star, the sub-solar point. In the opinion of one author this makes complex life improbable. Plant life would have to adapt to the constant gale, for example by anchoring securely into the soil and sprouting long flexible leaves that do not snap. Animals would rely on infrared vision, as signaling by calls or scents would be difficult over the din of the planet-wide gale. Underwater life would, however, be protected from fierce winds and flares, and vast blooms of black photosynthetic plankton and algae could support the sea life.
In contrast to the previously bleak picture for life, 1997 studies by NASA's Ames Research Center showed that a planet's atmosphere (assuming it included the greenhouse gases CO2 and H2O) need only be 100 millibar, or 10% of Earth's atmosphere, for the star's heat to be effectively carried to the night side, a pressure thin enough not to preclude photosynthesis. Subsequent research has shown that seawater, too, could effectively circulate without freezing solid if the ocean basins were deep enough to allow free flow beneath the night side's ice cap. Additionally, a 2010 study concluded that Earth-like water worlds tidally locked to their stars would still have temperatures above 240 K (−33 °C) on the night side. Climate models constructed in 2013 indicate that cloud formation on tidally locked planets would minimize the temperature difference between the day and the night side, greatly improving habitability prospects for red dwarf planets.
The existence of a permanent day side and night side is not the only potential setback for life around red dwarfs. Tidal heating experienced by planets in the habitable zone of red dwarfs less than 30% of the mass of the Sun may cause them to be "baked out" and become "tidal Venuses." The eccentricity of over 150 planets found orbiting M dwarfs was measured, and it was found that two-thirds of these exoplanets are exposed to extreme tidal forces, rendering them uninhabitable due to the intense heat generated by tidal heating.
Combined with the other impediments to red dwarf habitability, this may make the probability of many red dwarfs hosting life as we know it very low compared to other star types. There may not even be enough water for habitable planets around many red dwarfs; what little water found on these planets, in particular Earth-sized ones, may be located on the cold night side of the planet. In contrast to the predictions of earlier studies on tidal Venuses, though, this "trapped water" may help to stave off runaway greenhouse effects and improve the habitability of red dwarf systems.
Note, however, that how quickly tidal locking occurs can depend upon a planet's oceans and even its atmosphere, which may mean that tidal locking fails to happen even after many billions of years. Additionally, tidal locking is not the only possible end state of tidal damping. Mercury, for example, has had sufficient time to tidally lock, but is in a 3:2 spin-orbit resonance due to its eccentric orbit.
== Variability ==
Red dwarfs are far more volatile than their larger, more stable cousins. Often, they are covered in starspots that can dim their emitted light by up to 40% for months at a time. At other times, red dwarfs emit gigantic flares that can double their brightness in a matter of minutes. Indeed, as more and more red dwarfs have been scrutinized for variability, more of them have been classified as flare stars to some degree or other. Such variation in brightness could be very damaging for life. Recent 3D climate models simulate flare events by altering the stellar flux received by any given planet. One study found that, should a tidally locked planet possess a sufficient atmosphere, cloud coverage and albedo increase monotonically with stellar flux, increasing the resilience of the planet to variations in radiation. Such protection may not be enough, however, since flares also produce torrents of charged particles that could strip off sizable portions of the planet's atmosphere. Scientists who support the Rare Earth hypothesis doubt that red dwarfs could support life amid strong flaring. Tidal locking would probably result in a relatively low planetary magnetic moment. Active red dwarfs that emit coronal mass ejections (CMEs) would bow back the magnetosphere until it contacted the planetary atmosphere. As a result, the atmosphere would undergo strong erosion, possibly leaving the planet uninhabitable.
However, it was found that red dwarfs have a much lower CME rate than expected from their rotation or flare activity, and large CMEs occur rarely. This suggests that atmospheric erosion is caused mainly by radiation rather than CMEs.
Otherwise, it has been suggested that if the planet had a magnetic field, it would deflect the particles from the atmosphere (even the slow rotation of a tidally locked M-dwarf planet, which spins once for every orbit around its star, would be enough to generate a magnetic field as long as part of the planet's interior remained molten). However, to protect against flares of the observed magnitude, this magnetic field would have to be much stronger than Earth's (10–1000 G compared to Earth's ~0.5 G), which such a slowly rotating planet is unlikely to generate.
Mathematical models additionally conclude that, even under the highest attainable dynamo-generated magnetic field strengths, exoplanets with masses similar to that of Earth lose a significant fraction of their atmospheres by the erosion of the exobase's atmosphere by CME bursts and XUV emissions (even those Earth-like planets closer than 0.8 AU, affecting also G and K stars, are prone to losing their atmospheres). Atmospheric erosion could likely trigger the depletion of water oceans as well. Planets shrouded by a thick haze of hydrocarbons, such as the ones on primordial Earth or Saturn's moon Titan might still survive the flares, as floating droplets of hydrocarbon are particularly efficient at absorbing ultraviolet radiation.
Measurements rule out the presence of substantial atmospheres on two exoplanets orbiting a red dwarf: TRAPPIST-1b and TRAPPIST-1c. The two planets are bare rocks or have very thin atmospheres. The rest of the TRAPPIST-1 planets, all of which, with the exceptions of TRAPPIST-1h and possibly TRAPPIST-1d, are in the habitable zone, are unlikely to have atmospheres, but their existence is not entirely ruled out. Other potentially habitable planets orbiting red dwarfs, such as LHS 1140b or K2-18b, likely have atmospheres.
Another way that life could initially protect itself from radiation would be remaining underwater until the star had passed through its early flare stage, assuming the planet could retain enough of an atmosphere to sustain liquid oceans. Once life reached land, the low amount of UV produced by a quiet red dwarf means that life could thrive without an ozone layer, and thus never need to produce oxygen.
=== Flare activity ===
For a planet around a red dwarf star to support life, it would require a rapidly rotating magnetic field to protect it from the flares. A tidally locked planet rotates slowly, and so may not be able to produce a geodynamo at its core. The violent flaring period of a red dwarf's life cycle is estimated to last for only about the first 1.2 billion years of its existence. If a planet forms far away from a red dwarf so as to avoid atmospheric erosion, and then migrates into the star's habitable zone after this turbulent initial period, it is possible for life to develop. However, observations of the 7-to-12-billion-year-old Barnard's Star show that even old red dwarfs can have significant flare activity. Barnard's Star was long assumed to have little activity, but in 1998 astronomers observed an intense stellar flare, showing that it is a flare star.
It has been found that the largest flares happen at high latitudes near the stellar poles; if an exoplanet's orbit is aligned with the stellar rotation (as is the case with the planets of the Solar System), then it is less affected by the flares than previously thought.
== Methane habitable zone ==
If methane-based life is possible (similar to the hypothetical life on Titan), there would be a second habitable zone further out from the star corresponding to the region where methane is liquid. Titan's atmosphere is transparent to red and infrared light, so more of the light from red dwarfs would be expected to reach the surface of a Titan-like planet. This zone would lie at 2.573 astronomical units (AU) for Lacaille 8760, to 0.379 AU for Proxima Centauri.
== Frequency of Earth-sized worlds around ultra-cool dwarfs ==
A study of archival Spitzer data provides a first estimate of how frequently Earth-sized worlds occur around ultra-cool dwarf stars: 30–45%. A computer simulation finds that planets that form around stars with a mass similar to TRAPPIST-1 (c. 0.084 M⊙) most likely have sizes similar to the Earth's.
== In fiction ==
Ark: In Stephen Baxter's Ark, after planet Earth is completely submerged by the oceans, a small group of humans embarks on an interstellar journey, eventually making it to a planet named Earth III. The planet is cold and tidally locked, and the plant life is black (in order to better absorb the light from the red dwarf).
Draco Tavern: In Larry Niven's Draco Tavern stories, the highly advanced Chirpsithra aliens evolved on a tide-locked oxygen world around a red dwarf. However, no detail is given beyond that it was about 1 terrestrial mass, a little colder, and used red dwarf sunlight.
Nemesis: Isaac Asimov avoids the tidal effect issues of the red dwarf Nemesis by making the habitable "planet" a satellite of a gas giant which is tidally locked to the star.
Star Maker: In Olaf Stapledon's 1937 science fiction novel Star Maker, one of the many alien civilizations in the Milky Way he describes is located in the terminator zone of a tidally locked planet of a red dwarf system. This planet is inhabited by intelligent plants that look like carrots with arms, legs, and a head, which "sleep" part of the time by inserting themselves in soil on plots of land and absorbing sunlight through photosynthesis, and which are awake part of the time, emerging from their plots of soil as locomoting beings who participate in all the complex activities of a modern industrial civilization. Stapledon also describes how life evolved on this planet.
Superman: Superman's home, Krypton, was in orbit around a red star called Rao which in some stories is described as being a red dwarf, although it is more often referred to as a red giant.
Ready Jet Go!: In the children's show Ready Jet Go!, Carrot, Celery and Jet are a family of aliens known as Bortronians who come from Bortron 7, a planet of the fictional red dwarf Bortron. They discovered Earth and the Sun when they picked up a "primitive" radio signal (Episode: "How We Found Your Sun"). They also gave a description of the planets in the Bortronian solar system in a song in the movie Ready Jet Go!: Back to Bortron 7.
Aurelia: This planet, featured in the speculative documentary Extraterrestrial (also known as Alien Worlds), illustrates what scientists theorize alien life could be like on a planet orbiting a red dwarf star.
== See also ==
Acaryochloris marina
Astrobiology
Circumstellar habitable zone
Gliese 581g
Habitability of F-type main-sequence star systems
Habitability of K-type main-sequence star systems
Habitability of neutron star systems
Habitability of yellow dwarf systems
Kepler-186f
Planetary habitability
Search for extraterrestrial intelligence (SETI)
Learning materials from Wikiversity:
How Life could Evolve in a Red Dwarf Star System
== Notes ==
== References ==
== Further reading ==
Stevenson, David S. (2013). Under a Crimson Sun: Prospects for Life in a Red Dwarf System. New York, NY: Springer. ISBN 978-1461481324.
== External links ==
"Red Dwarf Stars Probably Not Friendly for Earth 2.0". Seeker. 26 May 2015. | Wikipedia/Habitability_of_red_dwarf_systems |
Earth and Planetary Science Letters (EPSL) is a bimonthly peer-reviewed scientific journal covering research on physical, chemical and mechanical processes of the Earth and other planets, including extrasolar ones. Topics covered range from deep planetary interiors to atmospheres. The journal was established in 1966 and is published by Elsevier. The co-editors-in-chief are Tristan Horner, Yemane Asmerom, Jean-Philippe Avouac, James Badro, Huiming Bao, Rosemary Hickey-Vargas, Andrew Jacobson, Carolina Lithgow-Bertelloni, Olivier Mousis, Chiara Maria Petrone, Fang-Zhen Teng, Hans Thybo, Alexander Webb.
== Abstracting and indexing ==
The journal is abstracted and indexed in major bibliographic databases.
According to the Journal Citation Reports, the journal has a 2023 impact factor of 4.8.
== References ==
== External links ==
Official website | Wikipedia/Earth_and_Planetary_Science_Letters |
Habitability of yellow dwarf systems defines the suitability for life of exoplanets belonging to yellow dwarf stars. These systems are the object of study among the scientific community because they are considered the most suitable for harboring living organisms, together with those belonging to K-type stars.
Yellow dwarfs comprise the G-type stars of the main sequence, with masses between 0.9 and 1.1 M☉ and surface temperatures between 5000 and 6000 K, like the Sun. They are the third most common in the Milky Way Galaxy and the only ones in which the habitable zone coincides completely with the ultraviolet habitable zone.
Since the habitable zone is farther away in more massive and luminous stars, the separation between the main star and the inner edge of this region is greater in yellow dwarfs than in red and orange dwarfs. Therefore, planets located in this zone of G-type stars are safe from the intense stellar emissions that occur after their formation and are not as affected by the gravitational influence of their star as those belonging to smaller stellar bodies. Thus, all planets in the habitable zone of such stars exceed the tidal locking limit and their rotation is therefore not synchronized with their orbit.
The Earth, orbiting a yellow dwarf, represents the only known example of planetary habitability. For this reason, a main goal in the field of exoplanetology is to find an Earth analog planet that matches Earth's main characteristics, such as size, average temperature and location around a star similar to the Sun. However, technological limitations make it difficult to find these objects, due to either the infrequency of their transits or their small radial-velocity semi-amplitudes, both of which are consequences of the distance (semi-major axis) that separates them from their stars.
== Characteristics ==
Yellow dwarf stars correspond to the G-class stars of the main sequence, with a mass between 0.9 and 1.1 M☉, and surface temperatures between 5000 and 6000 K. Since the Sun itself is a yellow dwarf, of type G2V, these types of stars are also known as solar analogs. They rank third among the most common main-sequence stars, after red and orange dwarfs, making up about 10% of the stars in the Milky Way. They remain on the main sequence for approximately 10 billion years. After the Sun, the closest G-type star to the Earth is Alpha Centauri A, 4.4 light-years away and belonging to a multiple star system.
All stars go through a phase of intense activity after their formation due to their rotation, which is much faster at the beginning of their lives. The duration of this period varies according to the mass of the object: the least massive stars can remain in this state for up to 3 billion years, compared to 500 million for G-type stars. Studies by the team of Edward Guinan, an astrophysicist at Villanova University, reveal that the Sun rotated ten times faster in its early days. Since the rotation speed of a star affects its magnetic field, the Sun's X-ray and UV emissions were hundreds of times more intense than they are today.
The extension of this phase in red dwarfs, as well as the probable tidal locking of their potentially habitable planets with respect to them, could wipe out the magnetic field of these planets, resulting in the loss of almost all their atmosphere and water to space by interaction with the stellar wind. In contrast, the semi-major axis of planetary objects belonging to the habitable zone of G-type stars is wide enough to allow planetary rotation. In addition, the duration of the period of intense stellar activity is too short to eliminate a significant part of the atmosphere on planets with masses similar to or greater than that of the Earth, which have a gravity and magnetosphere capable of counteracting the effects of stellar winds.
== Habitable area ==
The habitable zone around yellow dwarfs varies according to their size and luminosity, although the inner boundary is usually at 0.84 AU and the outer one at 1.67 AU for a G2V-class dwarf like the Sun. For a G5V-class star with a radius of 0.95 R☉ (smaller than the Sun), the habitable zone would correspond to the region located between 0.8 and 1.58 AU with respect to the star. For a G0V star (larger than the Sun), it would be located at a distance of between 1 and 2 AU from the stellar body. In orbits smaller than the inner boundary of the habitable zone, a process of water evaporation, hydrogen separation by photolysis and loss of hydrogen to space by hydrodynamic escape would be triggered. Beyond the outer limit of the habitable zone, temperatures would be low enough to allow CO2 condensation, which would lead to an increase in albedo and a feedback reduction of the greenhouse effect until a permanent global glaciation occurred.
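A minimal sketch of how such boundaries scale, assuming they simply move with the square root of the stellar luminosity (a common first-order approximation that ignores spectrum-dependent effects; the luminosities used for the G0V and G5V examples are assumed values):

```python
import math

# Solar habitable-zone boundaries quoted in the text above (G2V baseline).
INNER_AU_SUN = 0.84
OUTER_AU_SUN = 1.67

def habitable_zone_au(luminosity_solar):
    """Scale the solar boundaries by sqrt(L / L_sun)."""
    scale = math.sqrt(luminosity_solar)
    return INNER_AU_SUN * scale, OUTER_AU_SUN * scale

for name, luminosity in [("G0V (assumed ~1.35 L_sun)", 1.35),
                         ("G5V (assumed ~0.79 L_sun)", 0.79)]:
    inner, outer = habitable_zone_au(luminosity)
    print(f"{name}: {inner:.2f} to {outer:.2f} AU")
```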
The size of the habitable zone is directly proportional to the mass and luminosity of its star, so the larger the star, the larger the habitable zone and the farther from its surface. Red dwarfs, the smallest of the main sequence, have a very small habitable zone close to them, which subjects any potentially habitable planets in the system to the effects of their star, including probable tidal locking. Even in a small yellow dwarf like Tau Ceti, of type G8.5V, the locking limit is at 0.4237 AU versus the 0.522 AU that marks the inner boundary of the habitable zone, so any planetary object orbiting a G-class star in this region will far exceed the locking limit, and will have day-night cycles like Earth.
In yellow dwarfs, this region coincides entirely with the ultraviolet habitable zone. That zone is bounded by an inner limit, inside which exposure to ultraviolet radiation would be too damaging for DNA, and an outer limit beyond which ultraviolet levels fall below the minimum needed for living things to carry out their biogenic processes. In the Solar System, this region is located between 0.71 and 1.9 AU with respect to the Sun, compared to the 0.84–1.67 AU that mark the extremes of the habitable zone.
== Life potential ==
Given the length of the main sequence in G-type stars, the levels of ultraviolet radiation in their habitable zone, the semi-major axis of the inner boundary of this region and the distance to their tidal locking limit, among other factors, yellow dwarfs are considered to be the most hospitable to life next to K-type stars.
One goal in exoplanetary research is to find an object that has the main characteristics of our planet, such as radius, mass, temperature, atmospheric composition and belonging to a star similar to the Sun. In theory, these Earth analogs should have comparable habitability conditions that would allow the proliferation of extraterrestrial life.
Based on the serious problems for planetary habitability presented by red dwarf systems and stellar bodies of type F or higher, the only stars that might offer a bearable scenario for life would be those of type K and G. Solar analogs used to be considered as the most likely candidates to host a solar-like planetary system, and as the best positioned to support carbon-based life forms and liquid water oceans. Subsequent studies, such as "Superhabitable Worlds" by René Heller and John Armstrong, establish that orange dwarfs may be more suitable for life than G-type dwarfs, and host hypothetical superhabitable planets.
However, yellow dwarfs still represent the only stellar type for which there is evidence of their suitability for life. Moreover, while in other types of stars the habitable zone does not coincide entirely with the ultraviolet habitable zone, in G-class stars the habitable zone lies entirely within the limits of the latter. Finally, yellow dwarfs have a much shorter initial phase of intense stellar activity than K-type and M-type stars, which allows planets belonging to solar analogs to preserve their primordial atmospheres more easily and to maintain them for much of the main sequence.
== Discoveries ==
Most of the exoplanets discovered have been detected by the Kepler space telescope, which uses the transit method to find planets around other systems. This procedure analyzes the brightness of stars to detect dips that indicate the passage of a planetary object in front of them from the perspective of the observatory. It is the method that has been most successful in exoplanetary research, together with the radial velocity method, which consists of analyzing the wobbles caused in stars by the gravitational effects of the planets orbiting them. The use of these procedures with the limitations of current telescopes makes it difficult to find objects with orbits as large as Earth's or larger, which generates a bias in favor of planets with a short semi-major axis. As a consequence, most of the exoplanets detected are either excessively hot or belong to low-mass stars, whose habitable zone is close to them, so any object orbiting in this region will have a significantly shorter year than the Earth.
Planetary bodies belonging to the habitable zone of yellow dwarfs, such as Kepler-22b, 82 G. Eridani d or Earth, take hundreds of days to complete an orbit around their star. The higher luminosity of these stars, the scarcity of transits and the semi-major axis of their planets located in the habitable zone reduce the probabilities of detecting this class of objects and considerably increase the number of false positives, as in the cases of KOI-5123.01 and KOI-5927.01. The ground-based and orbital observatories projected for the next ten years may increase the discoveries of Earth analogs in yellow dwarf systems.
=== Kepler-452b ===
Kepler-452b lies 1400 light-years from Earth, in the Cygnus constellation. Its radius of about 1.6 R⊕ places it right on the boundary separating telluric planets from mini-Neptunes established by the team of Courtney Dressing, a researcher at the Harvard-Smithsonian Center for Astrophysics (CfA). If the planet's density is similar to Earth's, its mass will be about 5 M⊕ and its gravity twice as great. Its host star, Kepler-452, is a G2V-type yellow dwarf like the Sun, with an estimated age of 6 billion years (6 Ga) versus the Solar System's 4.5 Ga.
Its star is slightly more massive than the Sun, at 1.04 M☉, so even though the planet completes an orbit every 385 days (versus 365 days for the Earth), it is warmer than the Earth. If it has a similar albedo and atmospheric composition, its average surface temperature will be around 29 °C.
According to Jon Jenkins of NASA's Ames Research Center, it is not known whether Kepler-452b is a terrestrial planet, an ocean world or a mini-Neptune. If it is an Earth-like telluric object, it is likely to have a higher concentration of clouds and intense volcanic activity, and to be on the verge of a runaway greenhouse effect similar to that of Venus due to the constant increase in the luminosity of its star, after having remained in its star's habitable zone throughout the main sequence. Doug Caldwell, a SETI Institute scientist and member of the Kepler mission, estimates that Kepler-452b may be undergoing the same process that the Earth will undergo in a billion years.
=== Tau Ceti e ===
Tau Ceti e orbits a G8.5V-type star in the constellation Cetus, 12 light-years from Earth. It has a radius of 1.59 R⊕ and a mass of 4.29 M⊕, so like Kepler-452b it lies at the boundary between terrestrial and gaseous planets. With an orbital period of only 168 days, its temperature, assuming an Earth-like atmospheric composition and albedo, would be about 50 °C.
The planet is located just at the inner edge of the habitable zone and receives about 60% more light than Earth. Its size may also imply a higher concentration of gases in its atmosphere, making it a super-Venus type object. Otherwise, it could be the first thermoplanet discovered.
=== Kepler-22b ===
Kepler-22b is at a distance of 600 light-years, in the Cygnus constellation. It completes one orbit around its G5V-type star every 290 days. Its radius is 2.35 R⊕ and its estimated mass, for an Earth-like density, would be 20.36 M⊕. If the planet's atmosphere and albedo were similar to Earth's, its surface temperature would be around 22 °C.
It was the first exoplanet found by the Kepler telescope in the habitable zone of its star. Because of its size, considering the limit established by Courtney Dressing's team, the probability that it is a mini-Neptune is very high.
=== 82 G. Eridani d ===
In October 2024, the existence of a temperate 6.4-Earth mass planet orbiting the G-type star HD 20794 (at 19.7 light-years away) was confirmed. This planet orbits partially exterior to the circumstellar habitable zone.
== See also ==
Astrobiology
Circumstellar habitable zone
Earth analog
Superhabitable planet
Habitability of natural satellites
Habitability of red dwarf systems
Habitability of K-type main-sequence star systems
Habitability of F-type main-sequence star systems
List of potentially habitable exoplanets
== References ==
== Bibliography ==
Heller, René; Armstrong, John (2014). "Superhabitable Worlds". Astrobiology. 14 (1): 50–66. arXiv:1401.2392. Bibcode:2014AsBio..14...50H. doi:10.1089/ast.2013.1088. PMID 24380533. S2CID 1824897.
Kasting, James F.; Whitmire, Daniel P.; Reynolds, Ray T. (1993). "Habitable Zones around main Sequence Stars". Icarus. 1 (101): 101–128. Bibcode:1993Icar..101..108K. doi:10.1006/icar.1993.1010. PMID 11536936.
Perryman, Michael (2011). The Exoplanet Handbook. Cambridge University Press. ISBN 978-0-521-76559-6.
Ulmschneider, Peter (2006). Intelligent Life in the Universe: Principles and Requirements Behind Its Emergence (Advances in Astrobiology and Biogeophysics). Springer. ISBN 978-3540328360. | Wikipedia/Habitability_of_yellow_dwarf_systems |
K-type main-sequence stars, also known as orange dwarfs, may be candidates for supporting extraterrestrial life. These stars are known as "Goldilocks stars" because they emit enough radiation outside the ultraviolet range to provide a temperature that allows liquid water to exist on the surface of a planet; they also remain stable on the main sequence longer than the Sun by burning their hydrogen more slowly, allowing more time for life to form on a planet around a K-type main-sequence star. The habitable zone of these stars, extending from roughly 0.1–0.4 AU (inner edge) to 0.3–1.3 AU (outer edge) depending on the size of the star, is often far enough from the star for planets not to be tidally locked to it, and for stellar flare activity to be low enough not to be lethal to life. In comparison, red dwarf stars have too much stellar activity and quickly tidally lock the planets in their habitable zones, making them less suitable for life. The odds of complex life arising may be better on planets around K-type main-sequence stars than around Sun-like stars, given the suitable temperature and extra time available for it to evolve. Some planets around K-type main-sequence stars are potential candidates for extraterrestrial life.
== Habitable zone ==
A K-type star's habitable zone ranges from approximately 0.1–0.4 AU (inner edge) to 0.3–1.3 AU (outer edge) from the star, depending on its size. Here, exoplanets will receive only a relatively small amount of ultraviolet radiation, especially so towards the outer edge. This is favorable for supporting life, as it means that there is enough radiated energy to allow liquid water to exist on the surface, but not so much, especially ionizing radiation, as to destroy life.
The habitable zone is also very stable, lasting for most of the K-type main-sequence star's main sequence phase and with little instability of luminosity during that phase.
== Radiation hazard ==
Although K-type stars have a lower total UV output, their planets must orbit much nearer to the star to have habitable temperatures, which offsets or reverses any advantage of that lower output. There is also growing evidence that K-type dwarf stars emit dangerously high levels of X-rays and far-ultraviolet (FUV) radiation for considerably longer into their early main-sequence phase than do either heavier G-type stars or lighter early M-type dwarf stars. This prolonged radiation saturation period may sterilise Earth-like planets orbiting inside the habitable zones of K-type dwarf stars, destroy their atmospheres, or at least delay the emergence of life on them.
== Potentially habitable planets ==
The super-Earth HD 40307 g around the K2.5V star HD 40307 orbits in the circumstellar habitable zone (CHZ), although it has a moderately eccentric orbit (e = 0.22). There may be many more such planets, and the Kepler space telescope (now retired) was one of the main sources of information on these exoplanets. Kepler-62 and Kepler-442 are examples of Kepler discoveries of systems consisting of a K-type dwarf with potentially habitable planets orbiting it.
HD 85512 b was originally thought to be a super-Earth with habitability potential orbiting a K-type main-sequence star, but it is now considered to be a false positive detection, an artifact caused by stellar rotation.
== See also ==
Astrobiology
Circumstellar habitable zone
Habitability of F-type main-sequence star systems
Habitability of neutron star systems
Habitability of red dwarf systems
Habitability of yellow dwarf systems
Planetary habitability
== References == | Wikipedia/Habitability_of_K-type_main-sequence_star_systems |
Optical resolution describes the ability of an imaging system to resolve detail, in the object that is being imaged.
An imaging system may have many individual components, including one or more lenses, and/or recording and display components. Each of these contributes (given suitable design, and adequate alignment) to the optical resolution of the system; the environment in which the imaging is done often is a further important factor.
== Lateral resolution ==
Resolution depends on the distance between two distinguishable radiating points. The sections below describe the theoretical estimates of resolution, but the real values may differ. The results below are based on mathematical models of Airy discs, which assume an adequate level of contrast. In low-contrast systems, the resolution may be much lower than predicted by the theory outlined below. Real optical systems are complex, and practical difficulties often increase the distance between distinguishable point sources.
The resolution of a system is based on the minimum distance r at which the points can be distinguished as individuals. Several standards are used to determine, quantitatively, whether or not the points can be distinguished. One of the methods specifies that, on the line between the center of one point and the next, the contrast between the maximum and minimum intensity be at least 26% lower than the maximum. This corresponds to the overlap of one Airy disk on the first dark ring in the other. This standard for separation is also known as the Rayleigh criterion. In symbols, the distance is defined as follows:
{\displaystyle r={\frac {1.22\lambda }{2n\sin {\theta }}}={\frac {0.61\lambda }{\mathrm {NA} }}}
where
r is the minimum distance between resolvable points, in the same units as λ is specified,
λ is the wavelength of light (the emission wavelength, in the case of fluorescence),
n is the index of refraction of the media surrounding the radiating points,
θ is the half angle of the pencil of light that enters the objective, and
NA is the numerical aperture.
This formula is suitable for confocal microscopy, but is also used in traditional microscopy. In confocal laser-scanned microscopes, the full-width half-maximum (FWHM) of the point spread function is often used to avoid the difficulty of measuring the Airy disc. This, combined with the rastered illumination pattern, results in better resolution, but it is still proportional to the Rayleigh-based formula given above.
{\displaystyle r={\frac {0.4\lambda }{\mathrm {NA} }}}
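A small numerical sketch applying these two expressions; the wavelength and numerical aperture below are assumed example values:

```python
def rayleigh_resolution(wavelength_nm, numerical_aperture):
    """Rayleigh criterion: r = 0.61 * lambda / NA (result in the same units as the wavelength)."""
    return 0.61 * wavelength_nm / numerical_aperture

def confocal_fwhm_resolution(wavelength_nm, numerical_aperture):
    """Confocal FWHM-based estimate: r = 0.4 * lambda / NA."""
    return 0.4 * wavelength_nm / numerical_aperture

# Example: 520 nm emission with a 1.4-NA oil-immersion objective (assumed values).
print(rayleigh_resolution(520, 1.4))       # about 227 nm
print(confocal_fwhm_resolution(520, 1.4))  # about 149 nm
```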
Also common in the microscopy literature is a formula for resolution that treats the above-mentioned concerns about contrast differently. The resolution predicted by this formula is proportional to the Rayleigh-based formula, differing by about 20%. For estimating theoretical resolution, it may be adequate.
{\displaystyle r={\frac {\lambda }{2n\sin {\theta }}}={\frac {\lambda }{2\mathrm {NA} }}}
When a condenser is used to illuminate the sample, the shape of the pencil of light emanating from the condenser must also be included.
{\displaystyle r={\frac {1.22\lambda }{\mathrm {NA} _{\text{obj}}+\mathrm {NA} _{\text{cond}}}}}
In a properly configured microscope, {\displaystyle \mathrm {NA} _{\text{obj}}+\mathrm {NA} _{\text{cond}}=2\mathrm {NA} _{\text{obj}}}.
The above estimates of resolution are specific to the case of two identical, very small samples that radiate incoherently in all directions. Other considerations must be taken into account if the sources radiate at different levels of intensity, are coherent, large, or radiate in non-uniform patterns.
== Lens resolution ==
The ability of a lens to resolve detail is usually determined by the quality of the lens, but is ultimately limited by diffraction. Light coming from a point source in the object diffracts through the lens aperture such that it forms a diffraction pattern in the image, which has a central spot and surrounding bright rings, separated by dark nulls; this pattern is known as an Airy pattern, and the central bright lobe as an Airy disk. The angular radius of the Airy disk (measured from the center to the first null) is given by:
{\displaystyle \theta =1.22{\frac {\lambda }{D}}}
where
θ is the angular resolution in radians,
λ is the wavelength of light in meters,
and D is the diameter of the lens aperture in meters.
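For illustration, a short calculation of the Airy disk angular radius for an assumed wavelength and aperture diameter:

```python
import math

def airy_angular_radius(wavelength_m, aperture_diameter_m):
    """Angular radius of the Airy disk to the first null: theta = 1.22 * lambda / D (radians)."""
    return 1.22 * wavelength_m / aperture_diameter_m

# Example: 550 nm light through a 100 mm diameter aperture (assumed values).
theta_rad = airy_angular_radius(550e-9, 0.100)
print(math.degrees(theta_rad) * 3600)  # about 1.4 arcseconds
```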
Two adjacent points in the object give rise to two diffraction patterns. If the angular separation of the two points is significantly less than the Airy disk angular radius, then the two points cannot be resolved in the image, but if their angular separation is much greater than this, distinct images of the two points are formed and they can therefore be resolved. Rayleigh defined the somewhat arbitrary "Rayleigh criterion" that two points whose angular separation is equal to the Airy disk radius to first null can be considered to be resolved. It can be seen that the greater the diameter of the lens or its aperture, the greater the resolution. Astronomical telescopes have increasingly large lenses so they can 'see' ever finer detail in the stars.
Only the very highest quality lenses have diffraction-limited resolution, however, and normally the quality of the lens limits its ability to resolve detail. This ability is expressed by the Optical Transfer Function which describes the spatial (angular) variation of the light signal as a function of spatial (angular) frequency. When the image is projected onto a flat plane, such as photographic film or a solid state detector, spatial frequency is the preferred domain, but when the image is referred to the lens alone, angular frequency is preferred. OTF may be broken down into the magnitude and phase components as follows:
{\displaystyle \mathbf {OTF(\xi ,\eta )} =\mathbf {MTF(\xi ,\eta )} \cdot \mathbf {PTF(\xi ,\eta )} }
where
{\displaystyle \mathbf {MTF(\xi ,\eta )} =|\mathbf {OTF(\xi ,\eta )} |}
{\displaystyle \mathbf {PTF(\xi ,\eta )} =e^{-i2\cdot \pi \cdot \lambda (\xi ,\eta )}}
and {\displaystyle (\xi ,\eta )} are the spatial frequencies in the x- and y-plane, respectively.
The OTF accounts for aberration, which the limiting frequency expression above does not. The magnitude is known as the Modulation Transfer Function (MTF) and the phase portion is known as the Phase Transfer Function (PTF).
In imaging systems, the phase component is typically not captured by the sensor. Thus, the important measure with respect to imaging systems is the MTF.
Phase is critically important to adaptive optics and holographic systems.
== Sensor resolution (spatial) ==
Some optical sensors are designed to detect spatial differences in electromagnetic energy. These include photographic film, solid-state devices (CCD, CMOS sensors, and infrared detectors like PtSi and InSb), tube detectors (vidicon, plumbicon, and photomultiplier tubes used in night-vision devices), scanning detectors (mainly used for IR), pyroelectric detectors, and microbolometer detectors. The ability of such a detector to resolve those differences depends mostly on the size of the detecting elements.
Spatial resolution is typically expressed in line pairs per millimeter (lppmm), lines (of resolution, mostly for analog video), contrast vs. cycles/mm, or MTF (the modulus of OTF). The MTF may be found by taking the two-dimensional Fourier transform of the spatial sampling function. Smaller pixels result in wider MTF curves and thus better detection of higher frequency energy.
This is analogous to taking the Fourier transform of a signal sampling function; as in that case, the dominant factor is the sampling period, which is analogous to the size of the picture element (pixel).
Other factors include pixel noise, pixel cross-talk, substrate penetration, and fill factor.
A common problem among non-technicians is the use of the number of pixels on the detector to describe the resolution. If all sensors were the same size, this would be acceptable. Since they are not, the use of the number of pixels can be misleading. For example, a 2-megapixel camera of 20-micrometre-square pixels will have worse resolution than a 1-megapixel camera with 8-micrometre pixels, all else being equal.
For resolution measurement, film manufacturers typically publish a plot of Response (%) vs. Spatial Frequency (cycles per millimeter). The plot is derived experimentally. Solid state sensor and camera manufacturers normally publish specifications from which the user may derive a theoretical MTF according to the procedure outlined below. A few may also publish MTF curves, while others (especially intensifier manufacturers) will publish the response (%) at the Nyquist frequency, or, alternatively, publish the frequency at which the response is 50%.
To find a theoretical MTF curve for a sensor, it is necessary to know three characteristics of the sensor: the active sensing area, the area comprising the sensing area and the interconnection and support structures ("real estate"), and the total number of those areas (the pixel count). The total pixel count is almost always given. Sometimes the overall sensor dimensions are given, from which the real estate area can be calculated. Whether the real estate area is given or derived, if the active pixel area is not given, it may be derived from the real estate area and the fill factor, where fill factor is the ratio of the active area to the dedicated real estate area.
{\displaystyle \mathrm {FF} ={\frac {a\cdot b}{c\cdot d}}}
where
the active area of the pixel has dimensions a×b
the pixel real estate has dimensions c×d
In Gaskill's notation, the sensing area is a 2D comb(x, y) function of the distance between pixels (the pitch), convolved with a 2D rect(x, y) function of the active area of the pixel, bounded by a 2D rect(x, y) function of the overall sensor dimension. The Fourier transform of this is a comb(ξ, η) function governed by the distance between pixels, convolved with a sinc(ξ, η) function governed by the number of pixels, and multiplied by the sinc(ξ, η) function corresponding to the active area. That last function serves as an overall envelope to the MTF function; so long as the number of pixels is much greater than one, then the active area size dominates the MTF.
Sampling function:
{\displaystyle \mathbf {S} (x,y)=\left[\operatorname {comb} \left({\frac {x}{c}},{\frac {y}{d}}\right)*\operatorname {rect} \left({\frac {x}{a}},{\frac {y}{b}}\right)\right]\cdot \operatorname {rect} \left({\frac {x}{M\cdot c}},{\frac {y}{N\cdot d}}\right)}
where the sensor has M×N pixels
{\displaystyle {\begin{aligned}\mathbf {MTF_{sensor}} (\xi ,\eta )&={\mathcal {FF}}(\mathbf {S} (x,y))\\&=[\operatorname {sinc} ((M\cdot c)\cdot \xi ,(N\cdot d)\cdot \eta )*\operatorname {comb} (c\cdot \xi ,d\cdot \eta )]\cdot \operatorname {sinc} (a\cdot \xi ,b\cdot \eta )\end{aligned}}}
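As a hedged numerical sketch of the dominant term in this expression: when the pixel count M×N is large, the MTF is governed mainly by the sinc envelope set by the active pixel area (shown here as a one-dimensional cut along x). The pixel dimensions below are assumed example values.

```python
import numpy as np

def fill_factor(a, b, c, d):
    """FF = (a*b)/(c*d): active pixel area over the pixel 'real estate' area."""
    return (a * b) / (c * d)

def sensor_mtf_envelope(xi_cyc_per_mm, active_width_mm):
    """Dominant sensor MTF term for a large pixel array: |sinc(a * xi)|.
    np.sinc(x) computes sin(pi*x)/(pi*x), matching the rect-aperture transform."""
    return np.abs(np.sinc(active_width_mm * np.asarray(xi_cyc_per_mm)))

# Example: 5 micrometre pixel pitch with a 4 micrometre active width (assumed values).
xi = np.array([0.0, 50.0, 100.0, 150.0, 200.0])   # spatial frequency, cycles/mm
print(fill_factor(0.004, 0.004, 0.005, 0.005))    # fill factor 0.64
print(sensor_mtf_envelope(xi, 0.004))             # falls toward the first null at 250 cycles/mm
```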
== Sensor resolution (temporal) ==
An imaging system running at 24 frames per second is essentially a discrete sampling system that samples a 2D area. The same limitations described by Nyquist apply to this system as to any signal sampling system.
All sensors have a characteristic time response. Film is limited at both the short-exposure and long-exposure extremes by reciprocity breakdown; these limits are typically held to be anything longer than 1 second and shorter than 1/10,000 second. Furthermore, film requires a mechanical system to advance it through the exposure mechanism, or a moving optical system to expose it. These limit the speed at which successive frames may be exposed.
CCD and CMOS are the modern preferences for video sensors. CCD is speed-limited by the rate at which the charge can be moved from one site to another. CMOS has the advantage of having individually addressable cells, and this has led to its advantage in the high speed photography industry.
Vidicons, Plumbicons, and image intensifiers have specific applications. The speed at which they can be sampled depends upon the decay rate of the phosphor used. For example, the P46 phosphor has a decay time of less than 2 microseconds, while the P43 decay time is on the order of 2-3 milliseconds. The P43 is therefore unusable at frame rates above 1000 frames per second (frame/s). See § External links for links to phosphor information.
Pyroelectric detectors respond to changes in temperature. Therefore, a static scene will not be detected, so they require choppers. They also have a decay time, so the pyroelectric system temporal response will be a bandpass, while the other detectors discussed will be a lowpass.
If objects within the scene are in motion relative to the imaging system, the resulting motion blur will result in lower spatial resolution. Short integration times will minimize the blur, but integration times are limited by sensor sensitivity. Furthermore, motion between frames in motion pictures will impact digital movie compression schemes (e.g. MPEG-1, MPEG-2). Finally, there are sampling schemes that require real or apparent motion inside the camera (scanning mirrors, rolling shutters) that may result in incorrect rendering of image motion. Therefore, sensor sensitivity and other time-related factors will have a direct impact on spatial resolution.
== Analog bandwidth effect on resolution ==
The spatial resolution of digital systems (e.g. HDTV and VGA) is fixed independently of the analog bandwidth because each pixel is digitized, transmitted, and stored as a discrete value. Digital cameras, recorders, and displays must be selected so that the resolution is identical from camera to display. However, in analog systems, the resolution of the camera, recorder, cabling, amplifiers, transmitters, receivers, and display may all be independent, and the overall system resolution is governed by the bandwidth of the lowest performing component.
In analog systems, each horizontal line is transmitted as a high-frequency analog signal. Each picture element (pixel) is therefore converted to an analog electrical value (voltage), and changes in values between pixels therefore become changes in voltage. The transmission standards require that the sampling be done in a fixed time (outlined below), so more pixels per line becomes a requirement for more voltage changes per unit time, i.e. higher frequency. Since such signals are typically band-limited by cables, amplifiers, recorders, transmitters, and receivers, the band-limitation on the analog signal acts as an effective low-pass filter on the spatial resolution. The difference in resolutions between VHS (240 discernible lines per scanline), Betamax (280 lines), and the newer ED Beta format (500 lines) is explained primarily by the difference in the recording bandwidth.
In the NTSC transmission standard, each field contains 262.5 lines, and 59.94 fields are transmitted every second. Each line must therefore take 63 microseconds, 10.7 of which are for reset to the next line. Thus, the retrace rate is 15.734 kHz. For the picture to appear to have approximately the same horizontal and vertical resolution (see Kell factor), it should be able to display 228 cycles per line, requiring a bandwidth of 4.28 MHz. If the line (sensor) width is known, this may be converted directly into cycles per millimeter, the unit of spatial resolution.
B/G/I/K television system signals (usually used with PAL colour encoding) transmit frames less often (50 Hz), but the frame contains more lines and is wider, so bandwidth requirements are similar.
Note that a "discernible line" forms one half of a cycle (a cycle requires a dark and a light line), so "228 cycles" and "456 lines" are equivalent measures.
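The arithmetic above can be checked directly; the numbers below are the NTSC values quoted in the text.

```python
lines_per_field = 262.5
fields_per_second = 59.94
retrace_us = 10.7            # per-line reset time from the text

line_rate_hz = lines_per_field * fields_per_second   # about 15,734 lines per second
line_time_us = 1e6 / line_rate_hz                    # about 63.6 microseconds per line
active_line_us = line_time_us - retrace_us           # about 52.9 microseconds of active time

cycles_per_line = 228        # for roughly equal horizontal and vertical resolution
bandwidth_mhz = cycles_per_line / active_line_us      # about 4.3 MHz

print(round(line_rate_hz), round(line_time_us, 1), round(bandwidth_mhz, 2))
```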
== System resolution ==
There are two methods by which to determine "system resolution" (in the sense that omits the eye, or other final reception of the optical information). The first is to perform a series of two-dimensional convolutions, first with the image and the lens, and then, with that procedure's result and a sensor (and so on through all of the components of the system). Not only is this computationally expensive, but normally it also requires repetition of the process, for each additional object that is to be imaged.
{\displaystyle {\begin{aligned}\mathbf {Image(x,y)} ={}&\mathbf {Object(x,y)*PSF_{atmosphere}(x,y)*} \\&\mathbf {PSF_{lens}(x,y)*PSF_{sensor}(x,y)*} \\&\mathbf {PSF_{transmission}(x,y)*PSF_{display}(x,y)} \end{aligned}}}
The other method is to transform each of the components of the system into the spatial frequency domain, and then to multiply the 2-D results. A system response may be determined without reference to an object. Although this method is considerably more difficult to comprehend conceptually, it becomes easier to use computationally, especially when different design iterations or imaged objects are to be tested.
The transformation to be used is the Fourier transform.
{\displaystyle {\begin{aligned}\mathbf {MTF_{sys}(\xi ,\eta )} ={}&\mathbf {MTF_{atmosphere}(\xi ,\eta )\cdot MTF_{lens}(\xi ,\eta )\cdot } \\&\mathbf {MTF_{sensor}(\xi ,\eta )\cdot MTF_{transmission}(\xi ,\eta )\cdot } \\&\mathbf {MTF_{display}(\xi ,\eta )} \end{aligned}}}
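A minimal sketch of this frequency-domain approach: the system MTF is the pointwise product of the component MTFs, all sampled on the same spatial-frequency grid. The component curves used here are assumed example shapes, not measured data.

```python
import numpy as np

def system_mtf(*component_mtfs):
    """Multiply component MTF curves (all sampled at the same spatial frequencies)."""
    result = np.ones_like(component_mtfs[0])
    for mtf in component_mtfs:
        result = result * mtf
    return result

xi = np.linspace(0.0, 100.0, 6)            # spatial frequency, cycles/mm
mtf_lens = np.exp(-xi / 80.0)               # assumed lens roll-off
mtf_sensor = np.abs(np.sinc(xi / 250.0))    # assumed sensor envelope
mtf_display = np.exp(-xi / 150.0)           # assumed display roll-off
print(system_mtf(mtf_lens, mtf_sensor, mtf_display))
```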
== Ocular resolution ==
The human eye is a limiting feature of many systems, when the goal of the system is to present data to humans for processing.
For example, in a security or air traffic control function, the display and work station must be constructed so that average humans can detect problems and direct corrective measures. Other examples are when a human is using eyes to carry out a critical task such as flying (piloting by visual reference), driving a vehicle, and so forth.
The best visual acuity of the human eye at its optical centre (the fovea) is less than 1 arc minute per line pair, reducing rapidly away from the fovea.
The human brain requires more than just a line pair to understand what the eye is imaging. Johnson's criteria defines the number of line pairs of ocular resolution, or sensor resolution, needed to recognize or identify an item.
== Atmospheric resolution ==
Systems looking through long atmospheric paths may be limited by turbulence. A key measure of the quality of atmospheric turbulence is the seeing diameter, also known as Fried's seeing diameter. A path which is temporally coherent is known as an isoplanatic patch.
Large apertures may suffer from aperture averaging, the result of several paths being integrated into one image.
Turbulence scales with wavelength at approximately a 6/5 power. Thus, seeing is better at infrared wavelengths than at visible wavelengths.
Short exposures suffer from turbulence less than longer exposures due to the "inner" and "outer" scale turbulence; short is considered to be much less than 10 ms for visible imaging (typically, anything less than 2 ms). Inner scale turbulence arises due to the eddies in the turbulent flow, while outer scale turbulence arises from large air mass flow. These masses typically move slowly, and so are reduced by decreasing the integration period.
A system limited only by the quality of the optics is said to be diffraction-limited. However, since atmospheric turbulence is normally the limiting factor for visible systems looking through long atmospheric paths, most systems are turbulence-limited. Corrections can be made by using adaptive optics or post-processing techniques.
{\displaystyle \operatorname {MTF} _{s}(\nu )=e^{-3.44\cdot (\lambda f\nu /r_{0})^{5/3}\cdot [1-b\cdot (\lambda f\nu /D)^{1/3}]}}
where
ν
{\displaystyle \nu }
is the spatial frequency
λ
{\displaystyle \lambda }
is the wavelength
f is the focal length
D is the aperture diameter
b is a constant (1 for far-field propagation)
and
r
0
{\displaystyle r_{0}}
is Fried's seeing diameter
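A direct numerical evaluation of this expression helps build intuition for how Fried's seeing diameter r0 limits resolution. The Python sketch below implements the formula as written; the wavelength, focal length, aperture, and r0 values are illustrative assumptions rather than measured data.

```python
import numpy as np

def atmospheric_mtf(nu, wavelength, focal_length, r0, D, b=1.0):
    """Long-exposure atmospheric MTF as given above.

    nu: spatial frequency in the focal plane [cycles/m]
    wavelength, focal_length, r0 (Fried's seeing diameter), D (aperture) in metres
    b: constant (1 for far-field propagation)
    """
    x = wavelength * focal_length * nu
    return np.exp(-3.44 * (x / r0) ** (5.0 / 3.0) * (1.0 - b * (x / D) ** (1.0 / 3.0)))

# Assumed example: 500 nm light, 1 m focal length, 10 cm seeing, 0.5 m aperture.
nu = np.linspace(0.0, 2.0e5, 5)   # cycles per metre in the focal plane
print(atmospheric_mtf(nu, 500e-9, 1.0, 0.10, 0.5))
```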
== Measuring optical resolution ==
A variety of measurement systems are available, and use may depend upon the system being tested.
Typical test charts for Contrast Transfer Function (CTF) consist of repeated bar patterns (see Discussion below). The limiting resolution is measured by determining the smallest group of bars, both vertically and horizontally, for which the correct number of bars can be seen. By calculating the contrast between the black and white areas at several different frequencies, however, points of the CTF can be determined with the contrast equation.
{\displaystyle {\text{contrast}}={\frac {C_{\max }-C_{\min }}{C_{\max }+C_{\min }}}}
where
Cmax is the normalized value of the maximum (for example, the voltage or grey value of the white area)
Cmin is the normalized value of the minimum (for example, the voltage or grey value of the black area)
When the system can no longer resolve the bars, the black and white areas have the same value, so the contrast is 0. At very low spatial frequencies, Cmax = 1 and Cmin = 0, so the modulation is 1. Some modulation may be seen above the limiting resolution; such patterns may be aliased and phase-reversed.
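A minimal sketch of this contrast calculation in Python follows; the grey values are made-up readings, not measurements from any particular chart.

```python
def contrast(c_max, c_min):
    """Modulation contrast from normalized maximum and minimum values."""
    return (c_max - c_min) / (c_max + c_min)

# Hypothetical normalized grey values over a bar pattern at three increasing
# spatial frequencies: contrast falls toward zero as the bars blur together.
measurements = [(0.95, 0.05), (0.70, 0.30), (0.52, 0.48)]
for c_max, c_min in measurements:
    print(f"Cmax={c_max:.2f}  Cmin={c_min:.2f}  contrast={contrast(c_max, c_min):.2f}")
```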
When using other methods, including the interferogram, sinusoid, and the edge in the ISO 12233 target, it is possible to compute the entire MTF curve. The response to the edge is similar to a step response, and the Fourier Transform of the first difference of the step response yields the MTF.
=== Interferogram ===
An interferogram created between two coherent light sources may be used for at least two resolution-related purposes. The first is to determine the quality of a lens system (see LUPI), and the second is to project a pattern onto a sensor (especially photographic film) to measure resolution.
=== NBS 1010a/ ISO #2 target ===
This 5 bar resolution test chart is often used for evaluation of microfilm systems and scanners. It is convenient for a 1:1 range (typically covering 1-18 cycles/mm) and is marked directly in cycles/mm. Details can be found in ISO-3334.
=== USAF 1951 target ===
The USAF 1951 resolution test target consists of a pattern of 3 bar targets, often covering a range of 0.25 to 228 cycles/mm. Each group consists of six elements. The group is designated by a group number (-2, -1, 0, 1, 2, etc.), which is the power to which 2 should be raised to obtain the spatial frequency of the first element (e.g., group -2 is 0.25 line pairs per millimeter). Each element is the 6th root of 2 smaller than the preceding element in the group (e.g., element 1 is 2^0, element 2 is 2^(-1/6), element 3 is 2^(-1/3), etc.). By reading off the group and element number of the first element which cannot be resolved, the limiting resolution may be determined by inspection. The complex numbering system and the need for a look-up chart can be avoided by using an improved but not standardized layout chart, which labels the bars and spaces directly in cycles/mm using the OCR-A extended font.
{\displaystyle Resolution=2^{{group}+{\frac {element-1}{6}}}}
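The formula lends itself to a one-line computation. The Python sketch below converts a group and element number into spatial frequency; the group 2, element 3 values are chosen purely for illustration, and the group -2, element 1 case is included as a check against the 0.25 lp/mm figure quoted above.

```python
def usaf_frequency(group, element):
    """Spatial frequency in line pairs per mm for a USAF 1951 group/element."""
    return 2.0 ** (group + (element - 1) / 6.0)

print(f"Group  2, element 3: {usaf_frequency(2, 3):.2f} lp/mm")
print(f"Group -2, element 1: {usaf_frequency(-2, 1):.2f} lp/mm")  # 0.25 lp/mm
```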
=== NBS 1952 target ===
The NBS 1952 target is a 3 bar pattern (long bars). The spatial frequency is printed alongside each triple bar set, so the limiting resolution may be determined by inspection. This frequency is normally only as marked after the chart has been reduced in size (typically 25 times). The original application called for placing the chart at a distance 26 times the focal length of the imaging lens used. The bars above and to the left are in sequence, separated by approximately the square root of two (12, 17, 24, etc.), while the bars below and to the left have the same separation but a different starting point (14, 20, 28, etc.)
=== EIA 1956 video resolution target ===
The EIA 1956 resolution chart was specifically designed to be used with television systems. The gradually expanding lines near the center are marked with periodic indications of the corresponding spatial frequency. The limiting resolution may be determined by inspection. The most important measure is the limiting horizontal resolution, since the vertical resolution is typically determined by the applicable video standard (I/B/G/K/NTSC/NTSC-J).
=== IEEE Std 208-1995 target ===
The IEEE 208-1995 resolution target is similar to the EIA target. Resolution is measured in horizontal and vertical TV lines.
=== ISO 12233 target ===
The ISO 12233 target was developed for digital camera applications, since modern digital camera spatial resolution may exceed the limitations of the older targets. It includes several knife-edge targets for the purpose of computing MTF by Fourier transform. They are offset from the vertical by 5 degrees so that the edges will be sampled in many different phases, which allow estimation of the spatial frequency response beyond the Nyquist frequency of the sampling.
=== Random test patterns ===
The idea is analogous to the use of a white noise pattern in acoustics to determine system frequency response.
=== Monotonically increasing sinusoid patterns ===
The interferogram used to measure film resolution can be synthesized on personal computers and used to generate a pattern for measuring optical resolution. See especially Kodak MTF curves.
=== Multiburst ===
A multiburst signal is an electronic waveform used to test analog transmission, recording, and display systems. The test pattern consists of several short periods of specific frequencies. The contrast of each may be measured by inspection and recorded, giving a plot of attenuation vs. frequency. The NTSC3.58 multiburst pattern consists of 500 kHz, 1 MHz, 2 MHz, 3 MHz, and 3.58 MHz blocks. 3.58 MHz is important because it is the chrominance frequency for NTSC video.
=== Discussion ===
When using a bar target, the resulting measure is the contrast transfer function (CTF) and not the MTF. The difference arises from the subharmonics of the square waves and can be easily computed.
== See also ==
Angular resolution
Display resolution
Image resolution, in computing
Minimum resolvable contrast
Siemens star, a pattern used for resolution testing
Spatial resolution
Superlens
Superresolution
== References ==
Gaskill, Jack D. (1978), Linear Systems, Fourier Transforms, and Optics, Wiley-Interscience. ISBN 0-471-29288-5
Goodman, Joseph W. (2004), Introduction to Fourier Optics (Third Edition), Roberts & Company Publishers. ISBN 0-9747077-2-4
Fried, David L. (1966), "Optical resolution through a randomly inhomogeneous medium for very long and very short exposures.", J. Opt. Soc. Amer. 56:1372-9
Robin, Michael, and Poulin, Michael (2000), Digital Television Fundamentals (2nd edition), McGraw-Hill Professional. ISBN 0-07-135581-2
Smith, Warren J. (2000), Modern Optical Engineering (Third Edition), McGraw-Hill Professional. ISBN 0-07-136360-2
Accetta, J. S. and Shumaker, D. L. (1993), The Infrared and Electro-optical Systems Handbook, SPIE/ERIM. ISBN 0-8194-1072-1
Roggemann, Michael and Welsh, Byron (1996), Imaging Through Turbulence, CRC Press. ISBN 0-8493-3787-9
Tatarski, V. I. (1961), Wave Propagation in a Turbulent Medium, McGraw-Hill, NY
== External links ==
Norman Koren's website - includes several downloadable test patterns
UC Santa Cruz Prof. Claire Max's lectures and notes from Astronomy 289C, Adaptive Optics
George Ou's re-creation of the EIA 1956 chart from a high-resolution scan
Do Sensors “Outresolve” Lenses? - on lens and sensor resolution interaction | Wikipedia/Optical_resolution |
A general circulation model (GCM) is a type of climate model. It employs a mathematical model of the general circulation of a planetary atmosphere or ocean. It uses the Navier–Stokes equations on a rotating sphere with thermodynamic terms for various energy sources (radiation, latent heat). These equations are the basis for computer programs used to simulate the Earth's atmosphere or oceans. Atmospheric and oceanic GCMs (AGCM and OGCM) are key components along with sea ice and land-surface components.
GCMs and global climate models are used for weather forecasting, understanding the climate, and forecasting climate change.
Atmospheric GCMs (AGCMs) model the atmosphere and impose sea surface temperatures as boundary conditions. Coupled atmosphere-ocean GCMs (AOGCMs, e.g. HadCM3, EdGCM, GFDL CM2.X, ARPEGE-Climat) combine the two models. The first general circulation climate model that combined both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory. AOGCMs represent the pinnacle of complexity in climate models and internalise as many processes as possible. However, they are still under development and uncertainties remain. They may be coupled to models of other processes, such as the carbon cycle, so as to better model feedback effects. Such integrated multi-system models are sometimes referred to as either "earth system models" or "global climate models."
Versions designed for decade to century time scale climate applications were created by Syukuro Manabe and Kirk Bryan at the Geophysical Fluid Dynamics Laboratory (GFDL) in Princeton, New Jersey. These models are based on the integration of a variety of fluid dynamical, chemical and sometimes biological equations.
== Terminology ==
The acronym GCM originally stood for General Circulation Model. Recently, a second meaning came into use, namely Global Climate Model. While these do not refer to the same thing, General Circulation Models are typically the tools used for modeling climate, and hence the two terms are sometimes used interchangeably. However, the term "global climate model" is ambiguous and may refer to an integrated framework that incorporates multiple components including a general circulation model, or may refer to the general class of climate models that use a variety of means to represent the climate mathematically.
== Atmospheric and oceanic models ==
Atmospheric (AGCMs) and oceanic GCMs (OGCMs) can be coupled to form an atmosphere-ocean coupled general circulation model (CGCM or AOGCM). With the addition of submodels such as a sea ice model or a model for evapotranspiration over land, AOGCMs become the basis for a full climate model.
== Structure ==
General Circulation Models (GCMs) discretise the equations for fluid motion and energy transfer and integrate these over time. Unlike simpler models, GCMs divide the atmosphere and/or oceans into grids of discrete "cells", which represent computational units. Processes internal to a cell—such as convection—that occur on scales too small to be resolved directly are parameterised at the cell level, while other functions govern the interface between cells; simpler models instead rely on mixing assumptions.
Three-dimensional (more properly four-dimensional) GCMs apply discrete equations for fluid motion and integrate these forward in time. They contain parameterisations for processes such as convection that occur on scales too small to be resolved directly.
A simple general circulation model (SGCM) consists of a dynamic core that relates properties such as temperature to others such as pressure and velocity. Examples are programs that solve the primitive equations, given energy input and energy dissipation in the form of scale-dependent friction, so that atmospheric waves with the highest wavenumbers are most attenuated. Such models may be used to study atmospheric processes, but are not suitable for climate projections.
Atmospheric GCMs (AGCMs) model the atmosphere (and typically contain a land-surface model as well) using imposed sea surface temperatures (SSTs). They may include atmospheric chemistry.
AGCMs consist of a dynamical core that integrates the equations of fluid motion, typically for:
surface pressure
horizontal components of velocity in layers
temperature and water vapor in layers
radiation, split into solar/short wave and terrestrial/infrared/long wave
parameters for:
convection
land surface processes
albedo
hydrology
cloud cover
A GCM contains prognostic equations that are a function of time (typically winds, temperature, moisture, and surface pressure) together with diagnostic equations that are evaluated from them for a specific time period. As an example, pressure at any height can be diagnosed by applying the hydrostatic equation to the predicted surface pressure and the predicted values of temperature between the surface and the height of interest. Pressure is used to compute the pressure gradient force in the time-dependent equation for the winds.
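As a toy illustration of such a diagnostic step, the Python sketch below applies the hydrostatic relation to diagnose pressure at a height above the surface. The isothermal-layer assumption and the surface values are illustrative only, not output from any real GCM.

```python
import math

G = 9.81       # gravitational acceleration, m s^-2
R_D = 287.0    # gas constant for dry air, J kg^-1 K^-1

def pressure_at_height(p_surface, temperature, height):
    """Hydrostatic pressure at 'height' above the surface for an isothermal layer.

    Integrating dp/dz = -p g / (R_d T) with constant T gives an exponential
    decay with scale height H = R_d T / g.
    """
    scale_height = R_D * temperature / G
    return p_surface * math.exp(-height / scale_height)

# Assumed example: 1000 hPa surface pressure, 280 K layer, diagnose at 1500 m.
p = pressure_at_height(1000e2, 280.0, 1500.0)
print(f"Diagnosed pressure at 1500 m: {p / 100:.1f} hPa")
```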
OGCMs model the ocean (with fluxes from the atmosphere imposed) and may contain a sea ice model. For example, the standard resolution of HadOM3 is 1.25 degrees in latitude and longitude, with 20 vertical levels, leading to approximately 1,500,000 variables.
AOGCMs (e.g. HadCM3, GFDL CM2.X) combine the two submodels. They remove the need to specify fluxes across the interface of the ocean surface. These models are the basis for model predictions of future climate, such as are discussed by the IPCC. AOGCMs internalise as many processes as possible. They have been used to provide predictions at a regional scale. While the simpler models are generally susceptible to analysis and their results are easier to understand, AOGCMs may be nearly as hard to analyse as the climate itself.
=== Grid ===
The fluid equations for AGCMs are made discrete using either the finite difference method or the spectral method. For finite differences, a grid is imposed on the atmosphere. The simplest grid uses constant angular grid spacing (i.e., a latitude/longitude grid). However, non-rectangular grids (e.g., icosahedral) and grids of variable resolution are more often used. The LMDz model can be arranged to give high resolution over any given section of the planet. HadGEM1 (and other ocean models) use an ocean grid with higher resolution in the tropics to help resolve processes believed to be important for the El Niño Southern Oscillation (ENSO). Spectral models generally use a Gaussian grid, because of the mathematics of transformation between spectral and grid-point space. Typical AGCM resolutions are between 1 and 5 degrees in latitude or longitude: HadCM3, for example, uses 3.75 degrees in longitude and 2.5 degrees in latitude, giving a grid of 96 by 73 points (96 x 72 for some variables), and has 19 vertical levels. This results in approximately 500,000 "basic" variables, since each grid point has four variables (u, v, T, Q), though a full count would give more (clouds; soil levels). HadGEM1 uses a grid of 1.875 degrees in longitude and 1.25 in latitude in the atmosphere; HiGEM, a high-resolution variant, uses 1.25 x 0.83 degrees respectively. These resolutions are lower than is typically used for weather forecasting. Ocean resolutions tend to be higher; for example, HadCM3 has 6 ocean grid points per atmospheric grid point in the horizontal.
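The variable counts quoted above follow from simple arithmetic on the grid dimensions. The Python sketch below reproduces the rough HadCM3 estimate, counting only the four "basic" variables per grid point as the text does; the pole-to-pole point count is an assumption about how the grid is laid out.

```python
def basic_variable_count(dlon_deg, dlat_deg, levels, vars_per_point=4):
    """Rough count of 'basic' prognostic variables for a latitude/longitude grid."""
    n_lon = round(360 / dlon_deg)        # points around a latitude circle
    n_lat = round(180 / dlat_deg) + 1    # points from pole to pole, inclusive
    return n_lon * n_lat * levels * vars_per_point

# HadCM3 atmosphere: 3.75 x 2.5 degrees, 19 levels, four variables (u, v, T, Q).
print(basic_variable_count(3.75, 2.5, 19))   # roughly 5 x 10^5
```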
For a standard finite difference model, uniform gridlines converge towards the poles. This would lead to computational instabilities (see CFL condition) and so the model variables must be filtered along lines of latitude close to the poles. Ocean models suffer from this problem too, unless a rotated grid is used in which the North Pole is shifted onto a nearby landmass. Spectral models do not suffer from this problem. Some experiments use geodesic grids and icosahedral grids, which (being more uniform) do not have pole-problems. Another approach to solving the grid spacing problem is to deform a Cartesian cube such that it covers the surface of a sphere.
=== Flux buffering ===
Some early versions of AOGCMs required an ad hoc process of "flux correction" to achieve a stable climate. This resulted from separately prepared ocean and atmospheric models that each used an implicit flux from the other component different than that component could produce. Such a model failed to match observations. However, if the fluxes were 'corrected', the factors that led to these unrealistic fluxes might be unrecognised, which could affect model sensitivity. As a result, the vast majority of models used in the current round of IPCC reports do not use them. The model improvements that now make flux corrections unnecessary include improved ocean physics, improved resolution in both atmosphere and ocean, and more physically consistent coupling between the atmosphere and ocean submodels. Improved models now maintain stable, multi-century simulations of surface climate that are considered to be of sufficient quality to allow their use for climate projections.
=== Convection ===
Moist convection releases latent heat and is important to the Earth's energy budget. Convection occurs on too small a scale to be resolved by climate models, and hence it must be handled via parameters. This has been done since the 1950s. Akio Arakawa did much of the early work, and variants of his scheme are still used, although a variety of different schemes are now in use. Clouds are also typically handled with a parameterisation, for a similar lack of resolvable scale. Limited understanding of clouds has limited the success of this strategy, but not because of any inherent shortcoming of the method.
=== Software ===
Most models include software to diagnose a wide range of variables for comparison with observations or study of atmospheric processes. An example is the 2-metre temperature, which is the standard height for near-surface observations of air temperature. This temperature is not directly predicted from the model but is deduced from surface and lowest-model-layer temperatures. Other software is used for creating plots and animations.
== Projections ==
Coupled AOGCMs use transient climate simulations to project/predict climate changes under various scenarios. These can be idealised scenarios (most commonly, CO2 emissions increasing at 1%/yr) or based on recent history (usually the "IS92a" or more recently the SRES scenarios). Which scenarios are most realistic remains uncertain.
The 2001 IPCC Third Assessment Report Figure 9.3 shows the global mean response of 19 different coupled models to an idealised experiment in which emissions increased at 1% per year. Figure 9.5 shows the response of a smaller number of models to more recent trends. For the 7 climate models shown there, the temperature change to 2100 varies from 2 to 4.5 °C with a median of about 3 °C.
Future scenarios do not include unknown events – for example, volcanic eruptions or changes in solar forcing. These effects are believed to be small in comparison to greenhouse gas (GHG) forcing in the long term, but large volcanic eruptions, for example, can exert a substantial temporary cooling effect.
Human GHG emissions are a model input, although it is possible to include an economic/technological submodel to provide these as well. Atmospheric GHG levels are usually supplied as an input, though it is possible to include a carbon cycle model that reflects vegetation and oceanic processes to calculate such levels.
=== Emissions scenarios ===
For the six SRES marker scenarios, IPCC (2007:7–8) gave a "best estimate" of global mean temperature increase (2090–2099 relative to the period 1980–1999) of 1.8 °C to 4.0 °C. Over the same time period, the "likely" range (greater than 66% probability, based on expert judgement) for these scenarios was for a global mean temperature increase of 1.1 to 6.4 °C.
In 2008 a study made climate projections using several emission scenarios. In a scenario where global emissions start to decrease by 2010 and then decline at a sustained rate of 3% per year, the likely global average temperature increase was predicted to be 1.7 °C above pre-industrial levels by 2050, rising to around 2 °C by 2100. In a projection designed to simulate a future where no efforts are made to reduce global emissions, the likely rise in global average temperature was predicted to be 5.5 °C by 2100. A rise as high as 7 °C was thought possible, although less likely.
Another no-reduction scenario resulted in a median warming over land (2090–99 relative to the period 1980–99) of 5.1 °C. Under the same emissions scenario but with a different model, the predicted median warming was 4.1 °C.
=== Model accuracy ===
AOGCMs internalise as many processes as are sufficiently understood. However, they are still under development and significant uncertainties remain. They may be coupled to models of other processes in Earth system models, such as the carbon cycle, so as to better model feedback. Most recent simulations show "plausible" agreement with the measured temperature anomalies over the past 150 years, when driven by observed changes in greenhouse gases and aerosols. Agreement improves by including both natural and anthropogenic forcings.
Imperfect models may nevertheless produce useful results. GCMs are capable of reproducing the general features of the observed global temperature over the past century.
A debate over how to reconcile climate model predictions that upper-air (tropospheric) warming should be greater than surface warming with observations, some of which appeared to show otherwise, was resolved in favour of the models following data revisions.
Cloud effects are a significant area of uncertainty in climate models. Clouds have competing effects on climate. They cool the surface by reflecting sunlight into space; they warm it by increasing the amount of infrared radiation transmitted from the atmosphere to the surface. In the 2001 IPCC report possible changes in cloud cover were highlighted as a major uncertainty in predicting climate.
Climate researchers around the world use climate models to understand the climate system. Thousands of papers have been published about model-based studies. Part of this research is to improve the models.
In 2000, a comparison between measurements and dozens of GCM simulations of ENSO-driven tropical precipitation, water vapor, temperature, and outgoing longwave radiation found similarity between measurements and simulation of most factors. However, the simulated change in precipitation was about one-fourth less than what was observed. Errors in simulated precipitation imply errors in other processes, such as errors in the evaporation rate that provides moisture to create precipitation. The other possibility is that the satellite-based measurements are in error. Either indicates progress is required in order to monitor and predict such changes.
The precise magnitude of future changes in climate is still uncertain; for the end of the 21st century (2071 to 2100), under SRES scenario A2, the projected change in global average SAT from AOGCMs relative to 1961–1990 is +3.0 °C (5.4 °F), with a range of +1.3 to +4.5 °C (+2.3 to +8.1 °F).
The IPCC's Fifth Assessment Report asserted "very high confidence that models reproduce the general features of the global-scale annual mean surface temperature increase over the historical period". However, the report also observed that the rate of warming over the period 1998–2012 was lower than that predicted by 111 out of 114 Coupled Model Intercomparison Project climate models.
== Relation to weather forecasting ==
The global climate models used for climate projections are similar in structure to (and often share computer code with) numerical models for weather prediction, but are nonetheless logically distinct.
Most weather forecasting is done on the basis of interpreting numerical model results. Since forecasts are typically a few days or a week and sea surface temperatures change relatively slowly, such models do not usually contain an ocean model but rely on imposed SSTs. They also require accurate initial conditions to begin the forecast – typically these are taken from the output of a previous forecast, blended with observations. Weather predictions are required at higher temporal resolutions than climate projections, often sub-hourly compared to monthly or yearly averages for climate. However, because weather forecasts only cover around 10 days, the models can also be run at higher vertical and horizontal resolutions than climate models. Currently the ECMWF runs at 9 km (5.6 mi) resolution as opposed to the 100-to-200 km (62-to-124 mi) scale used by typical climate model runs. Often local models are run using global model results for boundary conditions, to achieve higher local resolution: for example, the Met Office runs a mesoscale model with an 11 km (6.8 mi) resolution covering the UK, and various agencies in the US employ models such as the NGM and NAM models. Like most global numerical weather prediction models such as the GFS, global climate models are often spectral models instead of grid models. Spectral models are often used for global models because some computations in modeling can be performed faster, thus reducing run times.
== Computations ==
Climate models use quantitative methods to simulate the interactions of the atmosphere, oceans, land surface and ice.
All climate models take account of incoming energy as short wave electromagnetic radiation, chiefly visible and short-wave (near) infrared, as well as outgoing energy as long wave (far) infrared electromagnetic radiation from the earth. Any imbalance results in a change in temperature.
The most talked-about models of recent years relate temperature to emissions of greenhouse gases. These models project an upward trend in the surface temperature record, as well as a more rapid increase in temperature at higher altitudes.
Three-dimensional (more properly, four-dimensional, since time is also considered) GCMs discretise the equations for fluid motion and energy transfer and integrate these over time. They also contain parameterisations for processes such as convection that occur on scales too small to be resolved directly.
Atmospheric GCMs (AGCMs) model the atmosphere and impose sea surface temperatures as boundary conditions. Coupled atmosphere-ocean GCMs (AOGCMs, e.g. HadCM3, EdGCM, GFDL CM2.X, ARPEGE-Climat) combine the two models.
Models range in complexity:
A simple radiant heat transfer model treats the earth as a single point and averages outgoing energy
This can be expanded vertically (radiative-convective models), or horizontally
Finally, (coupled) atmosphere–ocean–sea ice global climate models discretise and solve the full equations for mass and energy transfer and radiant exchange.
Box models treat flows across and within ocean basins.
Other submodels can be interlinked, such as land use, allowing researchers to predict the interaction between climate and ecosystems.
== Comparison with other climate models ==
=== Earth-system models of intermediate complexity (EMICs) ===
The Climber-3 model uses a 2.5-dimensional statistical-dynamical model with 7.5° × 22.5° resolution and time step of 1/2 a day. An oceanic submodel is MOM-3 (Modular Ocean Model) with a 3.75° × 3.75° grid and 24 vertical levels.
=== Radiative-convective models (RCM) ===
One-dimensional, radiative-convective models were used to verify basic climate assumptions in the 1980s and 1990s.
=== Earth system models ===
GCMs can form part of Earth system models, e.g. by coupling ice sheet models for the dynamics of the Greenland and Antarctic ice sheets, and one or more chemical transport models (CTMs) for species important to climate. Thus a carbon chemistry transport model may allow a GCM to better predict anthropogenic changes in carbon dioxide concentrations. In addition, this approach allows accounting for inter-system feedback: e.g. chemistry-climate models allow the effects of climate change on the ozone hole to be studied.
== History ==
In 1956, Norman Phillips developed a mathematical model that could realistically depict monthly and seasonal patterns in the troposphere. It became the first successful climate model. Following Phillips's work, several groups began working to create GCMs. The first to combine both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory. By the early 1980s, the United States' National Center for Atmospheric Research had developed the Community Atmosphere Model; this model has been continuously refined. In 1996, efforts began to model soil and vegetation types. Later the Hadley Centre for Climate Prediction and Research's HadCM3 model coupled ocean-atmosphere elements. The role of gravity waves was added in the mid-1980s. Gravity waves are required to simulate regional and global scale circulations accurately.
== See also ==
Atmospheric Model Intercomparison Project (AMIP)
Atmospheric Radiation Measurement (ARM) (in the US)
Earth Simulator
Global Environmental Multiscale Model
Ice-sheet model
Intermediate General Circulation Model
NCAR
Prognostic variable
Charney Report
== References ==
IPCC AR4 SYR (2007), Core Writing Team; Pachauri, R.K; Reisinger, A. (eds.), Climate Change 2007: Synthesis Report (SYR), Contribution of Working Groups I, II and III to the Fourth Assessment Report (AR4) of the Intergovernmental Panel on Climate Change, Geneva, Switzerland: IPCC, ISBN 978-92-9169-122-7{{citation}}: CS1 maint: numeric names: authors list (link).
== Further reading ==
Ian Roulstone & John Norbury (2013). Invisible in the Storm: the role of mathematics in understanding weather. Princeton University Press. ISBN 978-0691152721.
== External links ==
IPCC AR5, Evaluation of Climate Models
"High Resolution Climate Modeling". – with media including videos, animations, podcasts and transcripts on climate models
"Flexible Modeling System (FMS)". Geophysical Fluid Dynamics Laboratory. – GFDL's Flexible Modeling System containing code for the climate models
Program for climate model diagnosis and intercomparison (PCMDI/CMIP)
National Operational Model Archive and Distribution System (NOMADS) Archived 30 January 2016 at the Wayback Machine
Hadley Centre for Climate Prediction and Research – model info
NCAR/UCAR Community Climate System Model (CESM)
Climate prediction, community modeling
NASA/GISS, primary research GCM model
EDGCM/NASA: Educational Global Climate Modeling Archived 23 March 2015 at the Wayback Machine
NOAA/GFDL Archived 4 March 2016 at the Wayback Machine
MAOAM: Martian Atmosphere Observation and Modeling / MPI & MIPT | Wikipedia/Global_climate_model |
Meteoritics & Planetary Science is a monthly peer-reviewed scientific journal published by Wiley-Blackwell on behalf of the Meteoritical Society. It specialises in the fields of meteoritics and planetary science.
The journal was established as Meteoritics in 1953, adopting its current name when the scope was broadened in 1996. Since January 1, 2003, the editor-in-chief has been A. J. Timothy Jull (Arizona Accelerator Mass Spectrometry Laboratory).
== History ==
The journal was established in 1953 as the successor of the Notes and Contributions that were published on behalf of the Meteoritical Society in Popular Astronomy, from 1933 to 1951. Initially titled Meteoritics, with the 1996 January issue the journal became Meteoritics and Planetary Science.
== Scope ==
Coverage encompasses planets, natural satellites, interplanetary dust, the interstellar medium, lunar samples, meteors, meteorites, asteroids, comets, craters, and tektites, and draws on multiple disciplines, including astronomy, astrophysics, physics, geophysics, chemistry, isotope geochemistry, mineralogy, Earth science, geology, and biology.
The journal publishes original research papers, invited reviews, editorials, and book reviews.
== Abstracting and indexing ==
Meteoritics & Planetary Science is indexed and abstracted in:
Current Contents/Physical, Chemical & Earth Sciences
GEOBASE/Geographical & Geological Abstracts
Meteorological & Geoastrophysical Abstracts
Science Citation Index
Scopus
According to the Journal Citation Reports, the journal has a 2019 impact factor of 2.863, ranking it 37th out of 85 journals in the category "Geochemistry & Geophysics".
== References ==
== External links ==
Official website
Meteoritics & Planetary Science (2002-2009) at The University of Arizona Institutional Repository | Wikipedia/Meteoritics_and_Planetary_Science |
The curation of extraterrestrial samples (astromaterials) obtained by sample-return missions takes place at facilities specially designed both to preserve sample integrity and to protect the Earth. Astromaterials are classified as either non-restricted or restricted, depending on the nature of the Solar System body. Non-restricted samples include those from the Moon, asteroids, comets, solar particles and space dust. Restricted bodies include planets or moons suspected to have past or present environments habitable to microscopic life, whose samples must therefore be treated as extremely biohazardous.
== Overview ==
Spacecraft instruments are subject to mass and power constraints, in addition to the limitations imposed by the extreme environment of outer space on the sensitive science instruments, so bringing extraterrestrial material to Earth is desired for extensive scientific analyses. For the purpose of planetary protection, astromaterial samples brought to Earth by sample-return missions must be received and curated in a specially-designed and equipped biocontainment facility that must also double as a cleanroom to preserve the science value of the samples.
Samples brought from non-restricted bodies such as the Moon, asteroids, comets, solar particles and space dust, are processed at specialized facilities rated Biosafety level-3 (BSL-3). Samples brought to Earth from a planet or moon suspected to have either past or present habitable environments to microscopic life would make it a Category V body, and must be curated at facilities rated Biosafety level-4 (BSL-4), as agreed in the Article IX of the Outer Space Treaty. However, the existing BSL-4 facilities in the world do not have the complex requirements to ensure the preservation and protection of Earth and the sample simultaneously. While existing BSL-4 facilities deal primarily with fairly well-known organisms, a BSL-4 facility focused on extraterrestrial samples must pre-plan the systems carefully while being mindful that there will be unforeseen issues during sample evaluation and curation that will require independent thinking and solutions. A challenge is that, while it is relatively easy to simply contain the samples once returned to Earth, researchers will want to take a portion and perform analyses. During all these handling procedures, the samples would need to be protected from Earthly contamination and from contact with the atmosphere.
== Non-restricted materials ==
As of 2019, only the Japanese space agency JAXA and the United States space agency NASA operate BSL-3 laboratories in the world exclusively dedicated to the curation of samples from non-restricted bodies. The key feature of JAXA's curation facility, the Extraterrestrial Sample Curation Center, is the ability to observe, take out a portion and preserve a precious return-sample without being exposed to the atmosphere and other contaminants.
The Luna Soviet missions samples are studied and stored at the Vernadsky Institute of Geochemistry and Analytical Chemistry at the Russian Academy of Sciences.
== Restricted materials ==
Return-samples obtained from a Category V body, must be curated at facilities rated Biosafety level-4 (BSL-4). Because the existing BSL-4 facilities in the world do not have the complex requirements to ensure the preservation and protection of Earth and the sample simultaneously, there are currently at least two proposals to build a BSL-4 facility dedicated to curation of restricted (potentially biohazard) extraterrestrial materials.
The first is the European Sample Curation Facility (ESCF), proposed to be built in Vienna, would curate non-restricted samples as well as BSL-4 biocontainment of restricted material obtained from Category V bodies such as Mars, Europa, Enceladus, etc.
The other proposal is by NASA and is tentatively known as the Mars Sample-Return Receiving Facility (MSRRF). At least three different designs were submitted in 2009. If funded, this American facility would be expected to take 7 to 10 years from design to completion, and an additional two years is recommended for the staff to become proficient and accustomed to the facilities. NASA is also assessing a 2017 proposal to build a mobile and modular BSL-4 facility to secure a sample return capsule at the landing site to conduct preliminary biohazard analyses. After completion of biohazard testing, decisions could be made to sterilize the sample or transport all or portions to a permanent quarantine storage facility anywhere in the world.
The systems of such facilities must be able to contain unknown biohazards, as the sizes of any putative alien microorganisms or infectious agents are unknown. Ideally it should filter particles of 0.01 μm or larger, and release of a particle 0.05 μm or larger is unacceptable under any circumstance. The reason for this extremely small size limit of 0.01 μm is for consideration of gene transfer agents (GTAs) which are virus-like particles that are produced by some microorganisms that package random segments of DNA capable of horizontal gene transfer. These randomly incorporate segments of the host genome and can transfer them to other evolutionarily distant hosts, and do that without killing the new host. In this way many archaea and bacteria can swap DNA with each other. This raises the possibility that Martian life, if it has a common origin with Earth life in the distant past, could swap DNA with Earth microorganisms in the same way. Another reason for the 0.01 μm limit is because of the discovery of ultramicrobacteria as small as 0.2 μm across.
Robotic advocates consider that humans represent a significant source of contamination for the samples, and that a BSL-4 facility with robotic systems is the best way forward.
== See also ==
== References == | Wikipedia/Extraterrestrial_sample_curation |
Orbital mechanics or astrodynamics is the application of ballistics and celestial mechanics to rockets, satellites, and other spacecraft. The motion of these objects is usually calculated from Newton's laws of motion and the law of universal gravitation. Astrodynamics is a core discipline within space-mission design and control.
Celestial mechanics treats more broadly the orbital dynamics of systems under the influence of gravity, including both spacecraft and natural astronomical bodies such as star systems, planets, moons, and comets. Orbital mechanics focuses on spacecraft trajectories, including orbital maneuvers, orbital plane changes, and interplanetary transfers, and is used by mission planners to predict the results of propulsive maneuvers.
General relativity is a more exact theory than Newton's laws for calculating orbits, and it is sometimes necessary to use it for greater accuracy or in high-gravity situations (e.g. orbits near the Sun).
== History ==
Until the rise of space travel in the twentieth century, there was little distinction between orbital and celestial mechanics. At the time of Sputnik, the field was termed 'space dynamics'. The fundamental techniques, such as those used to solve the Keplerian problem (determining position as a function of time), are therefore the same in both fields. Furthermore, the history of the fields is almost entirely shared.
Johannes Kepler was the first to successfully model planetary orbits to a high degree of accuracy, publishing his laws in 1609. Isaac Newton published more general laws of celestial motion in the first edition of Philosophiæ Naturalis Principia Mathematica (1687), which gave a method for finding the orbit of a body following a parabolic path from three observations. This was used by Edmund Halley to establish the orbits of various comets, including that which bears his name. Newton's method of successive approximation was formalised into an analytic method by Leonhard Euler in 1744, whose work was in turn generalised to elliptical and hyperbolic orbits by Johann Lambert in 1761–1777.
Another milestone in orbit determination was Carl Friedrich Gauss's assistance in the "recovery" of the dwarf planet Ceres in 1801. Gauss's method was able to use just three observations (in the form of pairs of right ascension and declination), to find the six orbital elements that completely describe an orbit. The theory of orbit determination has subsequently been developed to the point where today it is applied in GPS receivers as well as the tracking and cataloguing of newly observed minor planets. Modern orbit determination and prediction are used to operate all types of satellites and space probes, as it is necessary to know their future positions to a high degree of accuracy.
Astrodynamics was developed by astronomer Samuel Herrick beginning in the 1930s. He consulted the rocket scientist Robert Goddard and was encouraged to continue his work on space navigation techniques, as Goddard believed they would be needed in the future. Numerical techniques of astrodynamics were coupled with new powerful computers in the 1960s, and humans were ready to travel to the Moon and return.
== Practical techniques ==
=== Rules of thumb ===
The following rules of thumb are useful for situations approximated by classical mechanics under the standard assumptions of astrodynamics outlined below. The specific example discussed is of a satellite orbiting a planet, but the rules of thumb could also apply to other situations, such as orbits of small bodies around a star such as the Sun.
Kepler's laws of planetary motion:
Orbits are elliptical, with the heavier body at one focus of the ellipse. A special case of this is a circular orbit (a circle is a special case of ellipse) with the planet at the center.
A line drawn from the planet to the satellite sweeps out equal areas in equal times no matter which portion of the orbit is measured.
The square of a satellite's orbital period is proportional to the cube of its average distance from the planet.
Without applying force (such as firing a rocket engine), the period and shape of the satellite's orbit will not change.
A satellite in a low orbit (or a low part of an elliptical orbit) moves more quickly with respect to the surface of the planet than a satellite in a higher orbit (or a high part of an elliptical orbit), due to the stronger gravitational attraction closer to the planet.
If thrust is applied at only one point in the satellite's orbit, it will return to that same point on each subsequent orbit, though the rest of its path will change. Thus one cannot move from one circular orbit to another with only one brief application of thrust.
From a circular orbit, thrust applied in a direction opposite to the satellite's motion changes the orbit to an elliptical one; the satellite will descend and reach the lowest orbital point (the periapse) at 180 degrees away from the firing point; then it will ascend back. The period of the resultant orbit will be less than that of the original circular orbit. Thrust applied in the direction of the satellite's motion creates an elliptical orbit with its highest point (apoapse) 180 degrees away from the firing point. The period of the resultant orbit will be longer than that of the original circular orbit.
The consequences of the rules of orbital mechanics are sometimes counter-intuitive. For example, if two spacecraft are in the same circular orbit and wish to dock, the trailing craft cannot simply fire its engines to accelerate towards the leading craft. This will change the shape of its orbit, causing it to gain altitude and slow down relative to the leading craft, thus moving away from the target. The space rendezvous before docking normally takes multiple precisely calculated engine firings over multiple orbital periods, requiring hours or even days to complete.
To the extent that the standard assumptions of astrodynamics do not hold, actual trajectories will vary from those calculated. For example, simple atmospheric drag is another complicating factor for objects in low Earth orbit.
These rules of thumb are decidedly inaccurate when describing two or more bodies of similar mass, such as a binary star system (see n-body problem). Celestial mechanics uses more general rules applicable to a wider variety of situations. Kepler's laws of planetary motion, which can be mathematically derived from Newton's laws, hold strictly only in describing the motion of two gravitating bodies in the absence of non-gravitational forces; they also describe parabolic and hyperbolic trajectories. In the close proximity of large objects like stars the differences between classical mechanics and general relativity also become important.
== Laws of astrodynamics ==
The fundamental laws of astrodynamics are Newton's law of universal gravitation and Newton's laws of motion, while the fundamental mathematical tool is differential calculus.
In a Newtonian framework, the laws governing orbits and trajectories are in principle time-symmetric.
Standard assumptions in astrodynamics include non-interference from outside bodies, negligible mass for one of the bodies, and negligible other forces (such as from the solar wind, atmospheric drag, etc.). More accurate calculations can be made without these simplifying assumptions, but they are more complicated. The increased accuracy often does not make enough of a difference in the calculation to be worthwhile.
Kepler's laws of planetary motion may be derived from Newton's laws, when it is assumed that the orbiting body is subject only to the gravitational force of the central attractor. When an engine thrust or propulsive force is present, Newton's laws still apply, but Kepler's laws are invalidated. When the thrust stops, the resulting orbit will be different but will once again be described by Kepler's laws which have been set out above. The three laws are:
The orbit of every planet is an ellipse with the Sun at one of the foci.
A line joining a planet and the Sun sweeps out equal areas during equal intervals of time.
The squares of the orbital periods of planets are directly proportional to the cubes of the semi-major axis of the orbits.
=== Escape velocity ===
The formula for an escape velocity is derived as follows. The specific energy (energy per unit mass) of any space vehicle is composed of two components, the specific potential energy and the specific kinetic energy. The specific potential energy associated with a planet of mass M is given by
{\displaystyle \epsilon _{p}=-{\frac {GM}{r}}\,}
where G is the gravitational constant and r is the distance between the two bodies;
while the specific kinetic energy of an object is given by
{\displaystyle \epsilon _{k}={\frac {v^{2}}{2}}\,}
where v is its velocity;
and so the total specific orbital energy is
{\displaystyle \epsilon =\epsilon _{k}+\epsilon _{p}={\frac {v^{2}}{2}}-{\frac {GM}{r}}\,}
Since energy is conserved, ε cannot depend on the distance r from the center of the central body to the space vehicle in question, i.e. v must vary with r to keep the specific orbital energy constant. Therefore, the object can reach infinite r only if this quantity is nonnegative, which implies
{\displaystyle v\geq {\sqrt {\frac {2GM}{r}}}.}
The escape velocity from the Earth's surface is about 11 km/s, but that is insufficient to send the body an infinite distance because of the gravitational pull of the Sun. To escape the Solar System from a location at a distance from the Sun equal to the distance Sun–Earth, but not close to the Earth, requires around 42 km/s velocity, but there will be "partial credit" for the Earth's orbital velocity for spacecraft launched from Earth, if their further acceleration (due to the propulsion system) carries them in the same direction as Earth travels in its orbit.
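A short numerical check of the escape-velocity expression reproduces the figures of about 11 km/s and 42 km/s mentioned above. The Earth and Sun parameters in this Python sketch are standard textbook values rather than anything quoted in this article.

```python
import math

G = 6.6743e-11   # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity(M, r):
    """Escape velocity from distance r of a body of mass M."""
    return math.sqrt(2 * G * M / r)

# Escape from the Earth's surface.
v_earth = escape_velocity(5.972e24, 6.371e6)
print(f"Escape velocity from Earth's surface: {v_earth / 1e3:.1f} km/s")  # ~11.2

# Escape from the Solar System starting at 1 AU from the Sun.
v_sun = escape_velocity(1.989e30, 1.496e11)
print(f"Escape velocity at 1 AU from the Sun:  {v_sun / 1e3:.1f} km/s")   # ~42.1
```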
=== Formulae for free orbits ===
Orbits are conic sections, so the formula for the distance of a body for a given angle corresponds to the formula for that curve in polar coordinates, which is:
{\displaystyle r={\frac {p}{1+e\cos \theta }}}
{\displaystyle \mu =G(m_{1}+m_{2})\,}
{\displaystyle p=h^{2}/\mu \,}
μ is called the gravitational parameter. m1 and m2 are the masses of objects 1 and 2, and h is the specific angular momentum of object 2 with respect to object 1. The parameter θ is known as the true anomaly, p is the semi-latus rectum, while e is the orbital eccentricity, all obtainable from the various forms of the six independent orbital elements.
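The orbit equation is easy to explore numerically. In the Python sketch below, the semi-latus rectum and eccentricity are arbitrary illustration values; it tabulates the radius at a few true anomalies for an elliptical case.

```python
import math

def orbit_radius(p, e, theta):
    """Radius of a conic-section orbit at true anomaly theta (radians)."""
    return p / (1 + e * math.cos(theta))

# Illustrative ellipse: semi-latus rectum p = 10,000 km, eccentricity e = 0.3.
p, e = 10_000.0, 0.3
for deg in (0, 90, 180):
    r = orbit_radius(p, e, math.radians(deg))
    print(f"theta = {deg:3d} deg -> r = {r:8.1f} km")
# theta = 0 gives the periapsis radius p/(1+e); theta = 180 gives apoapsis p/(1-e).
```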
=== Circular orbits ===
All bounded orbits where the gravity of a central body dominates are elliptical in nature. A special case of this is the circular orbit, which is an ellipse of zero eccentricity. The formula for the velocity of a body in a circular orbit at distance r from the center of gravity of mass M can be derived as follows:
Centrifugal acceleration matches the acceleration due to gravity.
So,
{\displaystyle {\frac {v^{2}}{r}}={\frac {GM}{r^{2}}}}
Therefore,
{\displaystyle \ v={\sqrt {{\frac {GM}{r}}\ }}}
where G is the gravitational constant, equal to 6.6743 × 10−11 m3/(kg·s2).
To properly use this formula, the units must be consistent; for example, M must be in kilograms, and r must be in meters. The answer will be in meters per second.
The quantity GM is often termed the standard gravitational parameter, which has a different value for every planet or moon in the Solar System.
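Working with GM directly avoids juggling G and M separately. The Python sketch below evaluates the circular-orbit speed formula; Earth's GM and the 400 km altitude are illustrative assumptions.

```python
import math

GM_EARTH = 3.986e14   # Earth's standard gravitational parameter, m^3 s^-2
R_EARTH = 6.371e6     # mean Earth radius, m

def circular_speed(gm, r):
    """Speed of a circular orbit of radius r about a body with parameter GM."""
    return math.sqrt(gm / r)

# Circular orbit at an assumed 400 km altitude.
r = R_EARTH + 400e3
print(f"Circular orbital speed: {circular_speed(GM_EARTH, r) / 1e3:.2f} km/s")  # ~7.7
```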
Once the circular orbital velocity is known, the escape velocity is easily found by multiplying by √2:
{\displaystyle \ v={\sqrt {2}}{\sqrt {{\frac {GM}{r}}\ }}={\sqrt {{\frac {2GM}{r}}\ }}.}
To escape from gravity, the kinetic energy must at least match the negative potential energy. Therefore,
{\displaystyle {\frac {1}{2}}mv^{2}={\frac {GMm}{r}}}
{\displaystyle v={\sqrt {{\frac {2GM}{r}}\ }}.}
=== Elliptical orbits ===
If 0 < e < 1, then the denominator of the equation of free orbits varies with the true anomaly θ, but remains positive, never becoming zero. Therefore, the relative position vector remains bounded, having its smallest magnitude at the periapsis radius rp, which is given by:
{\displaystyle r_{p}={\frac {p}{1+e}}}
The maximum value of r is reached when θ = 180°. This point is called the apoapsis, and its radial coordinate, denoted ra, is
{\displaystyle r_{a}={\frac {p}{1-e}}}
Let 2a be the distance measured along the apse line from periapsis P to apoapsis A:
{\displaystyle 2a=r_{p}+r_{a}}
Substituting the equations above, we get:
{\displaystyle a={\frac {p}{1-e^{2}}}}
a is the semimajor axis of the ellipse. Solving for p, and substituting the result in the conic section curve formula above, we get:
{\displaystyle r={\frac {a(1-e^{2})}{1+e\cos \theta }}}
==== Orbital period ====
Under standard assumptions the orbital period (T) of a body traveling along an elliptic orbit can be computed as:
{\displaystyle T=2\pi {\sqrt {a^{3} \over {\mu }}}}
where:
μ is the standard gravitational parameter,
a is the length of the semi-major axis.
Conclusions:
The orbital period is equal to that for a circular orbit with the orbit radius equal to the semi-major axis (a),
For a given semi-major axis the orbital period does not depend on the eccentricity (see also: Kepler's third law).
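A quick evaluation of the period formula in Python shows the familiar roughly 90 minute low orbit and the roughly 24 hour geosynchronous orbit; the semi-major axes are illustrative choices and the Earth GM value is a standard constant.

```python
import math

GM_EARTH = 3.986e14   # m^3 s^-2

def orbital_period(a, gm=GM_EARTH):
    """Orbital period in seconds for an elliptical orbit with semi-major axis a."""
    return 2 * math.pi * math.sqrt(a ** 3 / gm)

for label, a in [("low Earth orbit (a = 6,771 km)", 6.771e6),
                 ("geosynchronous orbit (a = 42,164 km)", 42.164e6)]:
    print(f"{label}: {orbital_period(a) / 3600:.2f} h")
```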
==== Velocity ====
Under standard assumptions the orbital speed (v) of a body traveling along an elliptic orbit can be computed from the Vis-viva equation as:
{\displaystyle v={\sqrt {\mu \left({2 \over {r}}-{1 \over {a}}\right)}}}
where:
μ is the standard gravitational parameter,
r is the distance between the orbiting bodies,
a is the length of the semi-major axis.
The velocity equation for a hyperbolic trajectory is
{\displaystyle v={\sqrt {\mu \left({2 \over {r}}+\left\vert {1 \over {a}}\right\vert \right)}}}
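A minimal sketch of the vis-viva relation in Python is shown below; the orbit geometry is chosen purely for illustration, with Earth's GM as a standard constant. Note that the elliptic form with a negative semi-major axis reduces to the hyperbolic form quoted above.

```python
import math

GM_EARTH = 3.986e14   # m^3 s^-2

def vis_viva(r, a, gm=GM_EARTH):
    """Orbital speed at distance r for semi-major axis a (a < 0 for hyperbolas)."""
    return math.sqrt(gm * (2.0 / r - 1.0 / a))

# Illustrative ellipse: periapsis radius 6,671 km, apoapsis radius 42,164 km.
r_p, r_a = 6.671e6, 42.164e6
a = (r_p + r_a) / 2
print(f"speed at periapsis: {vis_viva(r_p, a) / 1e3:.2f} km/s")
print(f"speed at apoapsis:  {vis_viva(r_a, a) / 1e3:.2f} km/s")

# Hyperbolic example: same periapsis, semi-major axis of -20,000 km.
print(f"hyperbolic speed at periapsis: {vis_viva(r_p, -20.0e6) / 1e3:.2f} km/s")
```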
==== Energy ====
Under standard assumptions, the specific orbital energy (ε) of an elliptic orbit is negative and the orbital energy conservation equation (the Vis-viva equation) for this orbit can take the form:
{\displaystyle {v^{2} \over {2}}-{\mu \over {r}}=-{\mu \over {2a}}=\epsilon <0}
where:
v is the speed of the orbiting body,
r is the distance of the orbiting body from the center of mass of the central body,
a is the semi-major axis,
μ is the standard gravitational parameter.
Conclusions:
For a given semi-major axis the specific orbital energy is independent of the eccentricity.
Using the virial theorem we find:
the time-average of the specific potential energy is equal to 2ε,
the time-average of r⁻¹ is a⁻¹,
the time-average of the specific kinetic energy is equal to −ε.
=== Parabolic orbits ===
If the eccentricity equals 1, then the orbit equation becomes:
{\displaystyle r={{h^{2}} \over {\mu }}{{1} \over {1+\cos \theta }}}
where:
r is the radial distance of the orbiting body from the mass center of the central body,
h is the specific angular momentum of the orbiting body,
θ is the true anomaly of the orbiting body,
μ is the standard gravitational parameter.
As the true anomaly θ approaches 180°, the denominator approaches zero, so that r tends towards infinity. Hence, the energy of the trajectory for which e = 1 is zero, and is given by:
{\displaystyle \epsilon ={v^{2} \over 2}-{\mu \over {r}}=0}
where v is the speed of the orbiting body.
In other words, the speed anywhere on a parabolic path is:
{\displaystyle v={\sqrt {2\mu \over {r}}}}
=== Hyperbolic orbits ===
If e > 1, the orbit formula,
{\displaystyle r={{h^{2}} \over {\mu }}{{1} \over {1+e\cos \theta }}}
describes the geometry of the hyperbolic orbit. The system consists of two symmetric curves. The orbiting body occupies one of them; the other one is its empty mathematical image. Clearly, the denominator of the equation above goes to zero when cos θ = −1/e. We denote this value of true anomaly
{\displaystyle \theta _{\infty }=\cos ^{-1}\left(-{\frac {1}{e}}\right)}
since the radial distance approaches infinity as the true anomaly approaches θ∞, which is known as the true anomaly of the asymptote. Observe that θ∞ lies between 90° and 180°. From the trigonometric identity {\displaystyle \sin ^{2}\theta +\cos ^{2}\theta =1} it follows that:
{\displaystyle \sin \theta _{\infty }={\frac {1}{e}}{\sqrt {e^{2}-1}}}
==== Energy ====
Under standard assumptions, the specific orbital energy (ε) of a hyperbolic trajectory is greater than zero and the orbital energy conservation equation for this kind of trajectory takes the form:
{\displaystyle \epsilon ={v^{2} \over 2}-{\mu \over {r}}={\mu \over {-2a}}}
where:
v is the orbital velocity of the orbiting body,
r is the radial distance of the orbiting body from the central body,
a is the negative semi-major axis of the orbit's hyperbola,
μ is the standard gravitational parameter.
==== Hyperbolic excess velocity ====
Under standard assumptions the body traveling along a hyperbolic trajectory will attain at r = ∞ an orbital velocity called hyperbolic excess velocity (v∞) that can be computed as:
{\displaystyle v_{\infty }={\sqrt {\mu \over {-a}}}\,\!}
where:
μ is the standard gravitational parameter,
a is the negative semi-major axis of the orbit's hyperbola.
The hyperbolic excess velocity is related to the specific orbital energy or characteristic energy by
{\displaystyle 2\epsilon =C_{3}=v_{\infty }^{2}\,\!}
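A minimal sketch evaluating these two relations for an assumed departure hyperbola follows; the semi-major axis and Earth's gravitational parameter are example values only.

```python
import math

MU_EARTH = 3.986004418e5  # km^3/s^2 (assumed value)

def hyperbolic_excess_velocity(a_km: float, mu: float = MU_EARTH) -> float:
    """v_inf = sqrt(mu / -a) for a hyperbolic orbit, where a is negative."""
    if a_km >= 0:
        raise ValueError("a hyperbolic orbit has a negative semi-major axis")
    return math.sqrt(mu / -a_km)

# Example with an assumed departure hyperbola, a = -20000 km
a = -20000.0
v_inf = hyperbolic_excess_velocity(a)
c3 = v_inf ** 2  # characteristic energy, km^2/s^2 (equal to 2*epsilon)
print(v_inf, c3)
```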
== Calculating trajectories ==
=== Kepler's equation ===
One approach to calculating orbits (mainly used historically) is to use Kepler's equation:
{\displaystyle M=E-\epsilon \cdot \sin E}
where M is the mean anomaly, E is the eccentric anomaly, and ϵ is the eccentricity.
With Kepler's formula, finding the time-of-flight to reach an angle (true anomaly) of θ from periapsis is broken into two steps, as sketched in the example below:
Compute the eccentric anomaly E from true anomaly θ
Compute the time-of-flight t from the eccentric anomaly E
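A minimal sketch of both steps is given below; it assumes an elliptical orbit (0 ≤ e < 1) and uses the standard two-body relations tan(E/2) = sqrt((1-e)/(1+e)) * tan(θ/2) and mean motion n = sqrt(μ/a³), which are not derived in the text above. All sample values are arbitrary.

```python
import math

def true_to_eccentric_anomaly(theta: float, e: float) -> float:
    """Step 1: eccentric anomaly E from true anomaly theta (radians), 0 <= e < 1.
    Uses the standard relation tan(E/2) = sqrt((1-e)/(1+e)) * tan(theta/2)."""
    return 2.0 * math.atan2(math.sqrt(1.0 - e) * math.sin(theta / 2.0),
                            math.sqrt(1.0 + e) * math.cos(theta / 2.0))

def time_from_periapsis(theta: float, e: float, a_km: float, mu: float) -> float:
    """Step 2: time-of-flight (seconds) from periapsis to true anomaly theta."""
    E = true_to_eccentric_anomaly(theta, e)
    M = E - e * math.sin(E)        # Kepler's equation
    n = math.sqrt(mu / a_km ** 3)  # mean motion, rad/s
    return M / n

# Example with assumed values: e = 0.1, a = 10000 km, Earth's gravitational parameter
print(time_from_periapsis(math.radians(90.0), 0.1, 10000.0, 3.986004418e5))
```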
Finding the eccentric anomaly at a given time (the inverse problem) is more difficult. Kepler's equation is transcendental in E, meaning it cannot be solved for E algebraically. Kepler's equation can be solved for E analytically by inversion.
A solution of Kepler's equation, valid for all real values of ϵ, is:
{\displaystyle E={\begin{cases}\displaystyle \sum _{n=1}^{\infty }{\frac {M^{\frac {n}{3}}}{n!}}\lim _{\theta \to 0}\left({\frac {\mathrm {d} ^{\,n-1}}{\mathrm {d} \theta ^{\,n-1}}}\left[\left({\frac {\theta }{\sqrt[{3}]{\theta -\sin(\theta )}}}\right)^{n}\right]\right),&\epsilon =1\\\displaystyle \sum _{n=1}^{\infty }{\frac {M^{n}}{n!}}\lim _{\theta \to 0}\left({\frac {\mathrm {d} ^{\,n-1}}{\mathrm {d} \theta ^{\,n-1}}}\left[\left({\frac {\theta }{\theta -\epsilon \cdot \sin(\theta )}}\right)^{n}\right]\right),&\epsilon \neq 1\end{cases}}}
Evaluating this yields:
{\displaystyle E={\begin{cases}\displaystyle x+{\frac {1}{60}}x^{3}+{\frac {1}{1400}}x^{5}+{\frac {1}{25200}}x^{7}+{\frac {43}{17248000}}x^{9}+{\frac {1213}{7207200000}}x^{11}+{\frac {151439}{12713500800000}}x^{13}\cdots \ |\ x=(6M)^{\frac {1}{3}},&\epsilon =1\\\\\displaystyle {\frac {1}{1-\epsilon }}M-{\frac {\epsilon }{(1-\epsilon )^{4}}}{\frac {M^{3}}{3!}}+{\frac {(9\epsilon ^{2}+\epsilon )}{(1-\epsilon )^{7}}}{\frac {M^{5}}{5!}}-{\frac {(225\epsilon ^{3}+54\epsilon ^{2}+\epsilon )}{(1-\epsilon )^{10}}}{\frac {M^{7}}{7!}}+{\frac {(11025\epsilon ^{4}+4131\epsilon ^{3}+243\epsilon ^{2}+\epsilon )}{(1-\epsilon )^{13}}}{\frac {M^{9}}{9!}}\cdots ,&\epsilon \neq 1\end{cases}}}
Alternatively, Kepler's equation can be solved numerically. First one must guess a value of E and solve for time-of-flight; then adjust E as necessary to bring the computed time-of-flight closer to the desired value until the required precision is achieved. Usually, Newton's method is used to achieve relatively fast convergence.
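In practice the Newton iteration is usually applied directly to Kepler's equation M = E - ϵ sin E rather than to the time-of-flight itself; the sketch below is one such minimal implementation, with an arbitrary starting guess, tolerance, and sample values.

```python
import math

def solve_keplers_equation(M: float, e: float, tol: float = 1e-12, max_iter: int = 50) -> float:
    """Solve M = E - e*sin(E) for the eccentric anomaly E (radians) with Newton's method.
    Valid for elliptical orbits (0 <= e < 1); tolerance and iteration cap are arbitrary."""
    E = M if e < 0.8 else math.pi  # common starting guess
    for _ in range(max_iter):
        f = E - e * math.sin(E) - M      # residual of Kepler's equation
        f_prime = 1.0 - e * math.cos(E)  # derivative with respect to E
        delta = f / f_prime
        E -= delta
        if abs(delta) < tol:
            break
    return E

# Example with assumed values
print(solve_keplers_equation(M=1.0, e=0.3))
```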
The main difficulty with this approach is that it can take prohibitively long to converge for the extreme elliptical orbits. For near-parabolic orbits, eccentricity ϵ is nearly 1, and substituting e = 1 into the formula for mean anomaly, E − sin E, we find ourselves subtracting two nearly-equal values, and accuracy suffers. For near-circular orbits, it is hard to find the periapsis in the first place (and truly circular orbits have no periapsis at all). Furthermore, the equation was derived on the assumption of an elliptical orbit, and so it does not hold for parabolic or hyperbolic orbits. These difficulties are what led to the development of the universal variable formulation, described below.
=== Conic orbits ===
For simple procedures, such as computing the delta-v for coplanar transfer ellipses, traditional approaches are fairly effective. Others, such as time-of-flight, are far more complicated, especially for near-circular and hyperbolic orbits.
=== The patched conic approximation ===
The Hohmann transfer orbit alone is a poor approximation for interplanetary trajectories because it neglects the planets' own gravity. Planetary gravity dominates the behavior of the spacecraft in the vicinity of a planet and in most cases Hohmann severely overestimates delta-v, and produces highly inaccurate prescriptions for burn timings. A relatively simple way to get a first-order approximation of delta-v is based on the 'Patched Conic Approximation' technique. One must choose the one dominant gravitating body in each region of space through which the trajectory will pass, and model only that body's effects in that region. For instance, on a trajectory from the Earth to Mars, one would begin by considering only the Earth's gravity until the trajectory reaches a distance where the Earth's gravity no longer dominates that of the Sun. The spacecraft would be given escape velocity to send it on its way to interplanetary space. Next, one would consider only the Sun's gravity until the trajectory reaches the neighborhood of Mars. During this stage, the transfer orbit model is appropriate. Finally, only Mars's gravity is considered during the final portion of the trajectory where Mars's gravity dominates the spacecraft's behavior. The spacecraft would approach Mars on a hyperbolic orbit, and a final retrograde burn would slow the spacecraft enough to be captured by Mars. Friedrich Zander was one of the first to apply the patched-conics approach for astrodynamics purposes, when proposing the use of intermediary bodies' gravity for interplanetary travel, in what is known today as a gravity assist.
The size of the "neighborhoods" (or spheres of influence) varies with radius r_SOI:
{\displaystyle r_{SOI}=a_{p}\left({\frac {m_{p}}{m_{s}}}\right)^{2/5}}
where
a_p is the semimajor axis of the planet's orbit relative to the Sun;
m_p and m_s are the masses of the planet and Sun, respectively.
This simplification is sufficient to compute rough estimates of fuel requirements, and rough time-of-flight estimates, but it is not generally accurate enough to guide a spacecraft to its destination. For that, numerical methods are required.
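As an illustration, the sphere-of-influence formula is straightforward to evaluate; the sketch below uses assumed values for the Earth-Sun system and reproduces the commonly quoted figure of roughly 0.9 million km.

```python
def sphere_of_influence_radius(a_p_km: float, m_p: float, m_s: float) -> float:
    """Approximate sphere-of-influence radius: r_SOI = a_p * (m_p / m_s)**(2/5)."""
    return a_p_km * (m_p / m_s) ** 0.4

# Example with assumed values for the Earth-Sun system
A_EARTH = 1.496e8   # km, semimajor axis of Earth's orbit
M_EARTH = 5.972e24  # kg
M_SUN = 1.989e30    # kg
print(sphere_of_influence_radius(A_EARTH, M_EARTH, M_SUN))  # roughly 9.2e5 km
```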
=== The universal variable formulation ===
To address computational shortcomings of traditional approaches for solving the 2-body problem, the universal variable formulation was developed. It works equally well for the circular, elliptical, parabolic, and hyperbolic cases, the differential equations converging well when integrated for any orbit. It also generalizes well to problems incorporating perturbation theory.
=== Perturbations ===
The universal variable formulation works well with the variation of parameters technique, except now, instead of the six Keplerian orbital elements, we use a different set of orbital elements: namely, the satellite's initial position and velocity vectors x₀ and v₀ at a given epoch t = 0. In a two-body simulation, these elements are sufficient to compute the satellite's position and velocity at any time in the future, using the universal variable formulation. Conversely, at any moment in the satellite's orbit, we can measure its position and velocity, and then use the universal variable approach to determine what its initial position and velocity would have been at the epoch. In perfect two-body motion, these orbital elements would be invariant (just like the Keplerian elements would be).
However, perturbations cause the orbital elements to change over time. Hence, the position element is written as x₀(t) and the velocity element as v₀(t), indicating that they vary with time. The technique to compute the effect of perturbations becomes one of finding expressions, either exact or approximate, for the functions x₀(t) and v₀(t).
The following are some effects which make real orbits differ from the simple models based on a spherical Earth. Most of them can be handled on short timescales (perhaps less than a few thousand orbits) by perturbation theory because they are small relative to the corresponding two-body effects.
Equatorial bulges cause precession of the node and the perigee
Tesseral harmonics of the gravity field introduce additional perturbations
Lunar and solar gravity perturbations alter the orbits
Atmospheric drag reduces the semi-major axis unless make-up thrust is used
Over very long timescales (perhaps millions of orbits), even small perturbations can dominate, and the behavior can become chaotic. On the other hand, the various perturbations can be orchestrated by clever astrodynamicists to assist with orbit maintenance tasks, such as station-keeping, ground track maintenance or adjustment, or phasing of perigee to cover selected targets at low altitude.
== Orbital maneuver ==
In spaceflight, an orbital maneuver is the use of propulsion systems to change the orbit of a spacecraft. For spacecraft far from Earth—for example those in orbits around the Sun—an orbital maneuver is called a deep-space maneuver (DSM).
=== Orbital transfer ===
Transfer orbits are usually elliptical orbits that allow spacecraft to move from one (usually substantially circular) orbit to another. Usually they require a burn at the start, a burn at the end, and sometimes one or more burns in the middle.
The Hohmann transfer orbit requires a minimal delta-v; a sketch of the two-burn computation is given after this list.
A bi-elliptic transfer can require less energy than the Hohmann transfer, if the ratio of orbits is 11.94 or greater, but comes at the cost of increased trip time over the Hohmann transfer.
Faster transfers may use any orbit that intersects both the original and destination orbits, at the cost of higher delta-v.
Using low thrust engines (such as electrical propulsion), if the initial orbit is supersynchronous to the final desired circular orbit then the optimal transfer orbit is achieved by thrusting continuously in the direction of the velocity at apogee. This method however takes much longer due to the low thrust.
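The Hohmann transfer mentioned at the top of this list can be quantified with the standard two-burn formulas for coplanar circular orbits; the sketch below uses assumed values for an Earth orbit-raising example, and the total of about 3.9 km/s is only illustrative.

```python
import math

def hohmann_delta_v(r1_km: float, r2_km: float, mu: float) -> tuple:
    """Delta-v of the two burns of a Hohmann transfer between coplanar circular orbits (r2 > r1)."""
    # Burn 1: leave the circular orbit at r1 onto the transfer ellipse
    dv1 = math.sqrt(mu / r1_km) * (math.sqrt(2.0 * r2_km / (r1_km + r2_km)) - 1.0)
    # Burn 2: circularize at r2 from the transfer ellipse
    dv2 = math.sqrt(mu / r2_km) * (1.0 - math.sqrt(2.0 * r1_km / (r1_km + r2_km)))
    return dv1, dv2, dv1 + dv2

# Example with assumed values: LEO (~6678 km) to geostationary radius (~42164 km) around Earth
MU_EARTH = 3.986004418e5  # km^3/s^2
print(hohmann_delta_v(6678.0, 42164.0, MU_EARTH))  # roughly (2.42, 1.47, 3.89) km/s
```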
For the case of orbital transfer between non-coplanar orbits, the change-of-plane thrust must be made at the point where the orbital planes intersect (the "node"). As the objective is to change the direction of the velocity vector by an angle equal to the angle between the planes, almost all of this thrust should be made when the spacecraft is at the node near the apoapse, when the magnitude of the velocity vector is at its lowest. However, a small fraction of the orbital inclination change can be made at the node near the periapse, by slightly angling the transfer orbit injection thrust in the direction of the desired inclination change. This works because the cosine of a small angle is very nearly one, resulting in the small plane change being effectively "free" despite the high velocity of the spacecraft near periapse, as the Oberth Effect due to the increased, slightly angled thrust exceeds the cost of the thrust in the orbit-normal axis.
=== Gravity assist and the Oberth effect ===
In a gravity assist, a spacecraft swings by a planet and leaves in a different direction, at a different speed. This is useful to speed or slow a spacecraft instead of carrying more fuel.
This maneuver can be approximated by an elastic collision at large distances, though the flyby does not involve any physical contact. Due to Newton's third law (equal and opposite reaction), any momentum gained by a spacecraft must be lost by the planet, or vice versa. However, because the planet is much, much more massive than the spacecraft, the effect on the planet's orbit is negligible.
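As a rough illustration of the elastic-collision analogy, the sketch below treats an idealized one-dimensional, head-on flyby; real gravity assists are three-dimensional and depend on the flyby geometry, so the numbers are purely illustrative.

```python
def head_on_flyby_exit_speed(v_spacecraft: float, u_planet: float) -> float:
    """Idealized 1-D 'elastic collision' gravity assist.
    In the planet's frame the spacecraft's speed is unchanged and its direction reversed,
    so in the Sun's frame the exit velocity is 2*u_planet - v_spacecraft.
    Velocities are signed, positive along the planet's direction of motion."""
    return 2.0 * u_planet - v_spacecraft

# Example with assumed values: spacecraft at -10 km/s approaching a planet moving at +13 km/s
print(head_on_flyby_exit_speed(-10.0, 13.0))  # 36.0 km/s in the Sun's frame
```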
The Oberth effect can be employed, particularly during a gravity assist operation. The effect is that a propulsion system delivers more useful energy when used at high speed, and hence course changes are best made when close to a gravitating body; this can multiply the effective delta-v.
=== Interplanetary Transport Network and fuzzy orbits ===
It is now possible to use computers to search for routes using the nonlinearities in the gravity of the planets and moons of the Solar System. For example, it is possible to plot an orbit from high Earth orbit to Mars, passing close to one of the Earth's trojan points. Collectively referred to as the Interplanetary Transport Network, these highly perturbative, even chaotic, orbital trajectories in principle need no fuel beyond that needed to reach the Lagrange point (in practice keeping to the trajectory requires some course corrections). The biggest problem with them is they can be exceedingly slow, taking many years. In addition launch windows can be very far apart.
They have, however, been employed on projects such as Genesis. This spacecraft visited the Earth-Sun L1 point and returned using very little propellant.
== See also ==
Celestial mechanics
Chaos theory
Kepler orbit
Lagrange point
Mechanical engineering
N-body problem
Roche limit
Spacecraft propulsion
Universal variable formulation
== References ==
== Further reading ==
Lynnane George. Introduction to Orbital Mechanics.
Sellers, Jerry J.; Astore, William J.; Giffen, Robert B.; Larson, Wiley J. (2004). Kirkpatrick, Douglas H. (ed.). Understanding Space: An Introduction to Astronautics (2 ed.). McGraw Hill. p. 228. ISBN 0-07-242468-0.
"Air University Space Primer, Chapter 8 - Orbital Mechanics" (PDF). USAF. Archived from the original (PDF) on 2013-02-14. Retrieved 2007-10-13.
Bate, R.R.; Mueller, D.D.; White, J.E. (1971). Fundamentals of Astrodynamics. Dover Publications, New York. ISBN 978-0-486-60061-1.
Vallado, D. A. (2001). Fundamentals of Astrodynamics and Applications (2nd ed.). Springer. ISBN 978-0-7923-6903-5.
Battin, R.H. (1999). An Introduction to the Mathematics and Methods of Astrodynamics. American Institute of Aeronautics & Ast, Washington, D.C. ISBN 978-1-56347-342-5.
Chobotov, V.A., ed. (2002). Orbital Mechanics (3rd ed.). American Institute of Aeronautics & Ast, Washington, D.C. ISBN 978-1-56347-537-5.
Herrick, S. (1971). Astrodynamics: Orbit Determination, Space Navigation, Celestial Mechanics, Volume 1. Van Nostrand Reinhold, London. ISBN 978-0-442-03370-5.
Herrick, S. (1972). Astrodynamics: Orbit Correction, Perturbation Theory, Integration, Volume 2. Van Nostrand Reinhold, London. ISBN 978-0-442-03371-2.
Kaplan, M.H. (1976). Modern Spacecraft Dynamics and Controls. Wiley, New York. ISBN 978-0-471-45703-9.
Tom Logsdon (1997). Orbital Mechanics. Wiley-Interscience, New York. ISBN 978-0-471-14636-0.
John E. Prussing & Bruce A. Conway (1993). Orbital Mechanics. Oxford University Press, New York. ISBN 978-0-19-507834-3.
M.J. Sidi (2000). Spacecraft Dynamics and Control. Cambridge University Press, New York. ISBN 978-0-521-78780-2.
W.E. Wiesel (1996). Spaceflight Dynamics (2nd ed.). McGraw-Hill, New York. ISBN 978-0-07-070110-6.
J.P. Vinti (1998). Orbital and Celestial Mechanics. American Institute of Aeronautics & Ast, Reston, Virginia. ISBN 978-1-56347-256-5.
P. Gurfil (2006). Modern Astrodynamics. Butterworth-Heinemann. ISBN 978-0-12-373562-1.
== External links ==
ORBITAL MECHANICS (Rocket and Space Technology)
Java Astrodynamics Toolkit
Astrodynamics-based Space Traffic and Event Knowledge Graph | Wikipedia/Orbital_mechanics |
The Division for Planetary Sciences (DPS) is a division within the American Astronomical Society (AAS) devoted to Solar System research.
It was founded in 1968. The first organizing committee members were: Edward Anders, Lewis Branscomb, Joseph W. Chamberlain, Richard M. Goody, John S. Hall, Arvidas Kliore, Michael B. McElroy, Tobias Owen, Gordon Pettengill, Carl Sagan, and Harlan James Smith. As of 2009, it is the largest special-interest division within the AAS. As of Oct 2010, membership totaled approximately 1415 planetary scientists and astronomers, including about 20% residing outside the U.S.
DPS sponsors six prizes. The Kuiper Prize honors outstanding contributions to the field of planetary science. The Urey Prize recognizes outstanding achievement in planetary research by a young scientist. The Masursky Award acknowledges outstanding service to planetary science and exploration. The Carl Sagan Medal recognizes and honors outstanding communication by an active planetary scientist to the general public. The Jonathan Eberhart Planetary Sciences Journalism Award is a prize that recognizes and stimulates distinguished popular writing on planetary sciences. The Claudia J. Alexander Prize recognizes outstanding achievement in planetary research by a mid-career scientist.
DPS has held meetings annually since 1970.
The official journal of the DPS is Icarus.
== See also ==
List of astronomical societies
== References == | Wikipedia/Division_for_Planetary_Sciences |
The discovery of extrasolar Earth-sized planets has encouraged research into their potential for habitability. One of the generally agreed requirements for a life-sustaining planet is a mobile, fractured lithosphere cyclically recycled into a vigorously convecting mantle, in a process commonly known as plate tectonics. Plate tectonics provide a means of geochemical regulation of atmospheric particulates, as well as removal of carbon from the atmosphere. This prevents a “runaway greenhouse” effect that can result in inhospitable surface temperatures and vaporization of liquid surface water. Planetary scientists have not reached a consensus on whether Earth-like exoplanets have plate tectonics, but it is widely thought that the likelihood of plate tectonics on an Earth-like exoplanet is a function of planetary radius, initial temperature upon coalescence, insolation, and presence or absence of liquid-phase surface water.
== Potential exoplanet geodynamic regimes ==
In order to characterize the geodynamic regime of an Earth-like exoplanet, the basic assumption is made that such a planet is Earth-like or “rocky”. This implies a three-layer stratigraphy of (from center to surface) a partially molten iron core, a silicate mantle that convects over geologic timescales, and a relatively cold, brittle silicate lithosphere. Within these parameters, the geodynamic regime at a given time point in the planet's history is likely to fall within one of three categories:
=== Plate tectonics ===
The mantle of a planet with plate tectonics has driving forces that exceed the yield strength of the brittle lithosphere, causing the lithosphere to fracture into plates that move relative to each other. A critical element of the plate tectonic system is that these lithospheric plates become negatively buoyant at some point in their evolution, sinking into the mantle. The surface mass deficit is balanced by new plate being formed elsewhere through upwelling mantle plumes. Plate tectonics is an efficient method of heat transfer from the interior of the planet to the surface. Earth is the only planet on which plate tectonics is known to occur, although evidence has been presented for Jupiter's moon Europa undergoing a form of plate tectonics analogous to Earth's.
=== Stagnant lid ===
A stagnant lid regime occurs when mantle driving forces do not exceed the lithospheric yield strength, resulting in a single, continuous rigid plate overlying the mantle. Stagnant lids only develop when the viscosity contrast between the surface and planetary interior exceeds about four orders of magnitude.
=== Episodic tectonics ===
Episodic tectonics is a general term for a geodynamic regime that possesses aspects of both plate tectonics and stagnant lid dynamics. Planets with episodic tectonic regimes will have immobile surface lids for geologically long spans of time, until a shift in equilibrium conditions is precipitated by either weakening lithosphere or increasing mantle driving forces. When this occurs, the shift to plate tectonics is usually catastrophic in nature and can involve resurfacing of the entire planet. After such a resurfacing event (or period of resurfacing events), stagnant lid equilibrium conditions are regained, resulting in a quiescent, immobile lid.
== Methods of predicting exoplanet geodynamic regimes ==
Exoplanets have been directly observed and remotely sensed, but due to their great distance and proximity to obscuring energy sources (the stars they orbit), there is little concrete knowledge of their composition and geodynamic regime. Therefore, the majority of information and conjectures made about them come from alternative sources.
=== Solar System analogues ===
All the rocky planets in the Solar System except Earth are generally believed to be in the stagnant lid geodynamic regime. Mars and particularly Venus have evidence of prior resurfacing events, but appear to be tectonically quiescent today. Geodynamic inferences about Solar System planets have been extrapolated to exoplanets in order to constrain what kind of geodynamic regimes can be expected given a set of physical criteria such as planetary radius, presence of surface water, and insolation. In particular, the planet Venus has been intensely studied due to its general physical similarities to Earth yet its completely different geodynamic regime. Proposed explanations include a lack of surface water, the lack of a magnetic geodynamo, or large-scale evacuation of interior heat shortly after planetary coalescence.
Another source of insight within the Solar System is the history of the planet Earth, which may have had several episodes of stagnant lid geodynamics during its history. These stagnant-lid periods were not necessarily planet-wide; when supercontinents such as Gondwanaland existed, their presence may have shut off plate motion over large expanses of the Earth's surface until mantle heat buildup underneath the superplate was sufficient to break them apart.
=== Observation of exoplanets ===
Indirect and direct observation methods such as radial velocity and coronagraphs can give envelope estimates of exoplanet parameters such as mass, planetary radius, and orbital radius/eccentricity. Since distance from the host star and planetary size are generally believed to influence exoplanet geodynamic regime, inferences can be drawn from such information. For example, an exoplanet close enough to its host star to be tidally locked may have drastically different "dark" and "light" side temperatures and correspondingly bipolar geodynamic regimes (see insolation section below).
Spectroscopy has been used to characterize extrasolar gas giants, but has not yet been used on rocky exoplanets. However, numerical modeling has demonstrated that spectroscopy could detect atmospheric sulfur dioxide levels as low as 1 ppm; presence of sulfur dioxide at this concentration may be indicative of a planet without surface water and with volcanism 1500–80000 times higher than Earth.
=== Numerical modeling ===
Since real data on exoplanets is currently limited, a large amount of the dialogue regarding rocky exoplanet tectonics has been driven by the results of numerical modeling studies. In such models, different planetary physical parameters are manipulated (e.g. mantle viscosity, core-mantle boundary temperature, insolation, “wetness” or hydration of subducting lithosphere) and the resultant impact on the geodynamic regime is reported. Due to computational limitations, the large number of variables that control planet geodynamics in real life cannot all be accounted for; models therefore ignore certain parameters believed to be less important and emphasize others to try to isolate disproportionately important driving factors. Some of these parameters include:
==== Scaling parameters ====
Early models of rocky exoplanets scaled different factors (namely mantle viscosity, lithospheric yield strength, and planetary size) up and down to predict the geodynamic regime of an exoplanet with given parameters. Two scaling studies of exoplanet size published in 2007 came to fundamentally different conclusions: O'Neill and Lenardic (2007) showed that a planet of 1.1 Earth mass would have Earth-like lithospheric yield stress but reduced mantle driving stresses, resulting in a stagnant lid regime. Conversely, Valencia et al. (2007) concluded the increase in mantle velocity (driving force) is large compared to the gravitationally-forced increase of plate viscosity as planets increase beyond one Earth mass, increasing the likelihood of plate tectonics with planet size.
==== Viscoelastic-plastic rheology ====
Most models simulate lithospheric plates with a viscoelastic-plastic rheology. In this simulation, plates deform viscoelastically up to a threshold level of stress, at which point they deform in a plastic manner. The lithospheric yield stress is a function of pressure, stress, composition, but temperature has a disproportionate effect on it. Therefore, changes to the lithospheric temperature, whether from external sources (insolation) or internal (mantle heating) will increase or decrease the likelihood of plate tectonics in viscoelastic-plastic models. Models with different modes of mantle heating (heat originating from the core-mantle boundary versus in-situ mantle heating) can produce dramatically different geodynamic regimes.
==== Time-dependent versus quasi-steady states ====
For computational purposes, early exoplanet mantle convection models assumed the planet was in a quasi-steady state, that is, the heat input from the core-mantle boundary or internal mantle heating remained constant throughout the model run. Later studies such as that of Noack and Breuer (2014) show that this assumption may have important implications, resulting in a gradual increase of the temperature differential between the core and mantle. A planet modeled with realistic decrease of internal heating throughout time had a lower likelihood of entering a plate tectonic regime compared to the quasi-steady state model.
==== Damage theory ====
A flaw of viscoelastic-plastic models of exoplanet geodynamics is that, in order for plate tectonics to be initiated, unrealistically low yield stress values are required. Additionally, plates in viscoelastic-plastic models have no deformation memory, i.e. as soon as the stress on a lithospheric plate drops below its yield stress it returns to its pre-deformation strength. This stands in contrast to Earth-based observations, which show that plates preferentially break along preexisting areas of deformation.
Damage theory attempts to address this model flaw by simulating voids created in areas of strain, representing the mechanical pulverization of coarse grains of rock into finer grains. In such models, damage is balanced by “healing”, or the temperature and pressure-driven dynamic recrystallization of smaller grains into larger ones. If the reduction of grain size (damage) is intensely localized in a stagnant lid, an incipient crack in the mantle can turn into a full-blown rift, initiating plate tectonics. Conversely, a high surface temperature will have more efficient lithospheric healing, which is another potential explanation for why Venus has a stagnant lid and Earth does not.
== Potential determining factors for Earth-like exoplanet geodynamic regimes ==
=== Initial temperature ===
For rocky exoplanets larger than Earth, the initial interior temperature after planetary coalescence may be an important controlling factor of surface motion. Noack and Breuer (2014) demonstrated that a core-mantle boundary initial temperature of 6100 K would likely form a stagnant lid, while a planet of the same dimensions with an initial core-mantle boundary 2000 K hotter will likely eventually evolve plate tectonics. This effect is diminished on planets smaller than Earth, because their smaller planetary interiors efficiently redistribute heat, reducing core-mantle heat gradients that drive mantle convection.
=== Insolation ===
External sources of planetary heat (namely, radiation from a planet's host star) can have drastic effects on geodynamic regime. With all other variables held constant, an Earth-sized exoplanet with a surface temperature of 273 K will evolve over its geological lifetime from a plate tectonic regime, to episodic periods of plate tectonics interspersed with stagnant lid geodynamics, to a terminal stagnant lid phase as interior heat is exhausted. Meanwhile, a "hot" planet (759 K surface temperature) under the same initial conditions will evolve from an amorphous surface (due to lithospheric yield stress being constantly exceeded) to a stagnant lid as interior heat is exhausted, with no plate tectonics observed.
Planets closer than 0.5 astronomical units from their star are likely to be tidally locked; these planets are expected to have drastically different temperature regimes on their "day" and "night" sides. When this scenario is modeled, the day side displays mobile lid convection with diffuse surface deformation flowing toward the night side, while the night side has a plate tectonic regime of downwelling plates and a deep mantle return flow in the direction of the night side. A temperature contrast of 400 K between day and night sides is required to create such a stable system.
=== Presence of surface water ===
While early modeling studies emphasized the size of a given exoplanet as a critical factor of geodynamic regime, later studies showed that the influence of size may be small to the point of irrelevance compared to the presence of surface water. For plate tectonics to be a sustained, rather than episodic process, the friction coefficient at the upper boundary layer (the mantle-lithosphere interface) must be below a critical value; while some models arrive at a critically low friction coefficient via increased upper boundary layer temperature (and subsequent decreased viscosity), Korenaga (2010) demonstrates high pore fluid content can lower the coefficient of friction below the critical value as well.
== Implications of exoplanet geodynamic regime ==
A planet in a stagnant lid regime has a much lower likelihood of being habitable than one with active surface recycling. The outgassing of mantle-derived carbon and sulfur that occurs along plate margins is critical for producing and maintaining an atmosphere, which insulates a planet from solar radiation and wind. The same atmosphere also regulates surface temperature, providing a clement condition for biological activity. It is for these reasons the search for exoplanets will be steered largely towards finding ones with a plate tectonic geodynamic regime, since they are better candidates for human habitation.
== References == | Wikipedia/Geodynamics_of_terrestrial_exoplanets |
Selenography is the study of the surface and physical features of the Moon (also known as geography of the Moon, or selenodesy). Like geography and areography, selenography is a subdiscipline within the field of planetary science. Historically, the principal concern of selenographists was the mapping and naming of the lunar terrane, identifying maria, craters, mountain ranges, and other various features. This task was largely finished when high resolution images of the near and far sides of the Moon were obtained by orbiting spacecraft during the early space era. Nevertheless, some regions of the Moon remain poorly imaged (especially near the poles) and the exact locations of many features (like crater depths) are uncertain by several kilometers. Today, selenography is considered to be a subdiscipline of selenology, which itself is most often referred to as simply "lunar science."
== History ==
The word "selenography" is derived from the Greek words Σελήνη (Selene, meaning Moon) and γράφω (graphō, meaning to write).
The idea that the Moon is not perfectly smooth dates to at least c. 450 BC, when Democritus asserted that the Moon's "lofty mountains and hollow valleys" were the cause of its markings. However, not until the end of the 15th century AD did serious selenography begin. Around AD 1603, William Gilbert made the first lunar drawing based on naked-eye observation. Others soon followed, and when the telescope was invented, initial drawings of poor accuracy were made, but soon thereafter improved in tandem with optics. In the early 18th century, the librations of the Moon were measured, which revealed that more than half of the lunar surface was visible to observers on Earth. In 1750, Johann Meyer produced the first reliable set of lunar coordinates that permitted astronomers to locate lunar features.
Lunar mapping became systematic in 1779 when Johann Schröter began meticulous observation and measurement of lunar topography. In 1834 Johann Heinrich von Mädler published the first large cartograph (map) of the Moon, comprising 4 sheets, and he subsequently published The Universal Selenography. All lunar measurement was based on direct observation until March 1840, when J.W. Draper, using a 5-inch reflector, produced a daguerreotype of the Moon and thus introduced photography to astronomy. At first, the images were of very poor quality, but as with the telescope 200 years earlier, their quality rapidly improved. By 1890 lunar photography had become a recognized subdiscipline of astronomy.
== Lunar photography ==
The 20th century witnessed more advances in selenography. In 1959, the Soviet spacecraft Luna 3 transmitted the first photographs of the far side of the Moon, giving the first view of it in history. The United States launched the Ranger spacecraft between 1961 and 1965 to photograph the lunar surface until the instant they impacted it, the Lunar Orbiters between 1966 and 1967 to photograph the Moon from orbit, and the Surveyors between 1966 and 1968 to photograph and softly land on the lunar surface. The Soviet Lunokhods 1 (1970) and 2 (1973) traversed almost 50 km of the lunar surface, making detailed photographs of the lunar surface. The Clementine spacecraft obtained the first nearly global cartograph (map) of the lunar topography, and also multispectral images. Successive missions transmitted photographs of increasing resolution.
== Lunar topography ==
The Moon has been measured by the methods of laser altimetry and stereo image analysis, including data obtained during several missions. The most visible topographical feature is the giant far-side South Pole-Aitken basin, which possesses the lowest elevations of the Moon. The highest elevations are found just to the northeast of this basin, and it has been suggested that this area might represent thick ejecta deposits that were emplaced during an oblique South Pole-Aitken basin impact event. Other large impact basins, such as the maria Imbrium, Serenitatis, Crisium, Smythii, and Orientale, also possess regionally low elevations and elevated rims.
Another distinguishing feature of the Moon's shape is that the elevations are on average about 1.9 km higher on the far side than the near side. If it is assumed that the crust is in isostatic equilibrium, and that the density of the crust is everywhere the same, then the higher elevations would be associated with a thicker crust. Using gravity, topography and seismic data, the crust is thought to be on average about 50 ± 15 km thick, with the far-side crust being on average thicker than the near side by about 15 km.
== Lunar cartography and toponymy ==
The oldest known illustration of the Moon was found in a passage grave in Knowth, County Meath, Ireland. The tomb was carbon dated to 3330–2790 BC. Leonardo da Vinci made and annotated some sketches of the Moon in c. 1500. William Gilbert made a drawing of the Moon in which he denominated a dozen surface features in the late 16th century; it was published posthumously in De Mondo Nostro Sublunari Philosophia Nova. After the invention of the telescope, Thomas Harriot (1609), Galileo Galilei (1609), and Christoph Scheiner (1614) made drawings also.
Denominations of the surface features of the Moon, based on telescopic observation, were made by Michael van Langren in 1645. Many of his denominations were distinctly Catholic, denominating craters in honor of Catholic royalty and capes and promontories in honor of Catholic saints. The lunar maria were denominated in Latin for terrestrial seas and oceans. Minor craters were denominated in honor of astronomers, mathematicians, and other famous scholars.
In 1647, Johannes Hevelius produced the rival work Selenographia, which was the first lunar atlas. Hevelius ignored the nomenclature of Van Langren and instead denominated the lunar topography according to terrestrial features, such that the names of lunar features corresponded to the toponyms of their geographical terrestrial counterparts, especially as the latter were denominated by the ancient Roman and Greek civilizations. This work of Hevelius influenced his contemporary European astronomers, and the Selenographia was the standard reference on selenography for over a century.
Giambattista Riccioli, SJ, a Catholic priest and scholar who lived in northern Italy authored the present scheme of Latin lunar nomenclature. His Almagestum novum was published in 1651 as summary of then current astronomical thinking and recent developments. In particular he outlined the arguments in favor of and against various cosmological models, both heliocentric and geocentric. Almagestum Novum contained scientific reference matter based on contemporary knowledge, and contemporary educators across Europe widely used it. Although this handbook of astronomy has long since been superseded, its system of lunar nomenclature is used even today.
The lunar illustrations in the Almagestum novum were drawn by a fellow Jesuit educator named Francesco Grimaldi, SJ. The nomenclature was based on a subdivision of the visible lunar surface into octants that were numbered in Roman style from I to VIII. Octant I referenced the northwest section and subsequent octants proceeded clockwise in alignment with compass directions. Thus Octant VI was to the south and included Clavius and Tycho Craters.
The Latin nomenclature had two components: the first denominated the broad features of terrae (lands) and maria (seas) and the second denominated the craters. Riccioli authored lunar toponyms derived from the names of various conditions, including climatic ones, whose causes were historically attributed to the Moon. Thus there were the seas of crises ("Mare Crisium"), serenity ("Mare Serenitatis"), and fertility ("Mare Fecunditatis"). There were also the seas of rain ("Mare Imbrium"), clouds ("Mare Nubium"), and cold ("Mare Frigoris"). The topographical features between the maria were comparably denominated, but were opposite the toponyms of the maria. Thus there were the lands of sterility ("Terra Sterilitatis"), heat ("Terra Caloris"), and life ("Terra Vitae"). However, these names for the highland regions were supplanted on later cartographs (maps). See List of features on the Moon for a complete list.
Many of the craters were denominated topically pursuant to the octant in which they were located. Craters in Octants I, II, and III were primarily denominated based on names from ancient Greece, such as Plato, Atlas, and Archimedes. Toward the middle in Octants IV, V, and VI craters were denominated based on names from the ancient Roman Empire, such as Julius Caesar, Tacitus, and Taruntius. Toward the southern half of the lunar cartograph (map) craters were denominated in honor of scholars, writers, and philosophers of medieval Europe and Arabic regions. The outer extremes of Octants V, VI, and VII, and all of Octant VIII were denominated in honor of contemporaries of Giambattista Riccioli. Features of Octant VIII were also denominated in honor of Copernicus, Kepler, and Galileo. These persons were "banished" to it far from the "ancients," as a gesture to the Catholic Church. Many craters around the Mare Nectaris were denominated in honor of Catholic saints pursuant to the nomenclature of Van Langren. All of them were, however, connected in some mode with astronomy. Later cartographs (maps) removed the "St." from their toponyms.
The lunar nomenclature of Giambattista Riccioli was widely used after the publication of his Almagestum Novum, and many of its toponyms are presently used. The system was scientifically inclusive and was considered eloquent and poetic in style, and therefore it appealed widely to his contemporaries. It was also readily extensible with new toponyms for additional features. Thus it replaced the nomenclature of Van Langren and Hevelius.
Later astronomers and lunar cartographers augmented the nomenclature with additional toponyms. The most notable among these contributors was Johann H. Schröter, who published a very detailed cartograph (map) of the Moon in 1791 titled the Selenotopografisches Fragmenten. Schröter's adoption of Riccioli's nomenclature perpetuated it as the universally standard lunar nomenclature. A vote of the International Astronomical Union (IAU) in 1935 established the lunar nomenclature of Riccioli, which included 600 lunar toponyms, as universally official and doctrinal.
The IAU later expanded and updated the lunar nomenclature in the 1960s, but new toponyms were limited to toponyms honoring deceased scientists. After Soviet spacecraft photographed the far side of the Moon, many of the newly discovered features were denominated in honor of Soviet scientists and engineers. The IAU assigned all subsequent new lunar toponyms. Some craters were denominated in honor of space explorers.
=== Satellite craters ===
Johann H. Mädler authored the nomenclature for satellite craters. The subsidiary craters surrounding a major crater were identified by a letter. These subsidiary craters were usually smaller than the crater with which they were associated, with some exceptions. The craters could be assigned letters "A" through "Z," with "I" omitted. Because the great majority of the toponyms of craters were masculine, the major craters were generically denominated "patronymic" craters.
The assignment of the letters to satellite craters was originally somewhat haphazard. Letters were typically assigned to craters in order of significance rather than location. Precedence depended on the angle of illumination from the Sun at the time of the telescopic observation, which could change during the lunar day. In many cases the assignments were seemingly random. In a number of cases the satellite crater was located closer to a major crater with which it was not associated. To identify the patronymic crater, Mädler placed the identifying letter to the side of the midpoint of the feature that was closest to the associated major crater. This also had the advantage of permitting omission of the toponyms of the major craters from the cartographs (maps) when their subsidiary features were labelled.
Over time, lunar observers assigned many of the satellite craters an eponym. The International Astronomical Union (IAU) assumed authority to denominate lunar features in 1919. The commission for denominating these features formally adopted the convention of using capital Roman letters to identify craters and valleys.
When suitable maps of the far side of the Moon became available by 1966, Ewen Whitaker denominated satellite features based on the angle of their location relative to the major crater with which they were associated. A satellite crater located due north of the major crater was identified as "Z". The full 360° circle around the major crater was then subdivided evenly into 24 parts, like a 24-hour clock. Each "hour" angle, running clockwise, was assigned a letter, beginning with "A" at 1 o'clock. The letters "I" and "O" were omitted, resulting in only 24 letters. Thus a crater due south of its major crater was identified as "M".
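The lettering scheme just described can be written as a small lookup; the sketch below assumes bearings measured clockwise from due north of the parent crater and is only an illustration of the convention, not an official IAU tool.

```python
# Letters A-Z with "I" and "O" omitted: 24 letters, one per "hour" of the clock face.
WHITAKER_LETTERS = "ABCDEFGHJKLMNPQRSTUVWXYZ"

def satellite_crater_letter(bearing_deg: float) -> str:
    """Letter for a satellite crater at the given bearing (degrees clockwise from due north).
    Due north (hour 24) maps to "Z"; 1 o'clock (15 degrees) maps to "A"; due south to "M"."""
    hour = round((bearing_deg % 360.0) / 15.0)  # nearest of the 24 clock positions
    if hour == 0:
        hour = 24
    return WHITAKER_LETTERS[hour - 1]

print(satellite_crater_letter(0))    # Z (due north)
print(satellite_crater_letter(180))  # M (due south)
```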
== Reference elevation ==
The Moon lacks any mean sea level that could be used as a vertical datum.
The USGS digital elevation model (DEM) based on data from the Lunar Orbiter Laser Altimeter (LOLA), an instrument on NASA's Lunar Reconnaissance Orbiter (LRO), uses the nominal lunar radius of 1,737.4 km (1,079.6 mi) as its reference.
The selenoid (the geoid for the Moon) has been measured gravimetrically by the GRAIL twin satellites.
== Historical lunar maps ==
The following historically notable lunar maps and atlases are arranged in chronological order by publication date.
Michael van Langren, engraved map, 1645.
Johannes Hevelius, Selenographia, 1647.
Giovanni Battista Riccioli and Francesco Maria Grimaldi, Almagestum novum, 1651.
Giovanni Domenico Cassini, engraved map, 1679 (reprinted in 1787).
Tobias Mayer, engraved map, 1749, published in 1775.
Johann Hieronymus Schröter, Selenotopografisches Fragmenten, 1st volume 1791, 2nd volume 1802.
John Russell, engraved images, 1805.
Wilhelm Lohrmann, Topographie der sichtbaren Mondoberflaeche, Leipzig, 1824.
Wilhelm Beer and Johann Heinrich Mädler, Mappa Selenographica totam Lunae hemisphaeram visibilem complectens, Berlin, 1834-36.
Edmund Neison, The Moon, London, 1876.
Julius Schmidt, Charte der Gebirge des Mondes, Berlin, 1878.
Thomas Gwyn Elger, The Moon, London, 1895.
Johann Krieger, Mond-Atlas, 1898. Two additional volumes were published posthumously in 1912 by the Vienna Academy of Sciences.
Walter Goodacre, Map of the Moon, London, 1910.
Mary A. Blagg and Karl Müller, Named Lunar Formations, 2 volumes, London, 1935.
Philipp Fauth, Unser Mond, Bremen, 1936.
Hugh P. Wilkins, 300-inch Moon map, 1951.
Gerard Kuiper et al., Photographic Lunar Atlas, Chicago, 1960.
Ewen A. Whitaker et al., Rectified Lunar Atlas, Tucson, 1963.
Hermann Fauth and Philipp Fauth (posthumously), Mondatlas, 1964.
Gerard Kuiper et al., System of Lunar Craters, 1966.
Yu I. Efremov et al., Atlas Obratnoi Storony Luny, Moscow, 1967–1975.
NASA, Lunar Topographic Orthophotomaps, 1978.
Antonín Rükl, Atlas of the Moon, 2004.
== Galleries ==
== See also ==
Gravitation of the Moon
Google Moon
Grazing lunar occultation
Planetary nomenclature
Selenographic coordinate system
List of maria on the Moon
List of craters on the Moon
List of mountains on the Moon
List of valleys on the Moon
== References ==
=== Citations ===
=== Bibliography ===
Scott L. Montgomery (1999). The Moon and Western Imagination. University of Arizona Press. ISBN 0-8165-1711-8.
Ewen A. Whitaker (1999). Mapping and Naming the Moon: A History of Lunar Cartography and Nomenclature. Cambridge University Press. ISBN 0-521-62248-4.
William P. Sheehan; Thomas A. Dobbins (2001). Epic Moon: A history of lunar exploration in the age of the telescope. Willmann-Bell.
== External links ==
NASA Catalogue of Lunar Nomenclature (1982), Leif E. Andersson and Ewen A. Whitaker
The Galileo Project: The Moon
Observing the Moon: The Modern Astronomer's Guide
Lunar control networks (USGS)
The Rise And Fall of Lunar Observing Archived 2017-08-09 at the Wayback Machine, Kevin S. Jung
Consolidated Lunar Atlas
Virtual exhibition about the topography of the Moon on the digital library of Paris Observatory | Wikipedia/Selenography |
A stratigraphic column is a representation used in geology and its subfield of stratigraphy to describe the vertical location of rock units in a particular area. A typical stratigraphic column shows a sequence of sedimentary rocks, with the oldest rocks on the bottom and the youngest on top.
In areas that are more geologically complex, such as those that contain intrusive rocks, faults, and/or metamorphism, stratigraphic columns can still indicate the relative locations of these units with respect to one another. However, in these cases, the stratigraphic column must either be a structural column, in which the units are stacked with respect to how they are observed in the field to have been moved by the faults, or a time column, in which the units are stacked in the order in which they were formed.
Stratigraphy is a branch of geology that concerns the order and relative position of geologic strata and their relationship to the geologic time scale. The relative time sequencing requires the analysis of the order and position of layers of archaeological remains and the structure of a particular set of strata.
The columns can include igneous and metamorphic rocks; however, sedimentary rocks are important geologically because of the classical laws of geology and how they relate to the accumulation of sediments and the formation of sedimentary environments. Lithology is the study of the bedrock that occurs at a specific location. The strata may contain fossils, which aid in determining how old they are and inform geologists' understanding of sequence and timing. Geologists group together similar lithologies and call these larger sedimentary sequences formations. There are rules on how formations are named, related to where they are located and what rock type(s) are present. All sedimentary formations shall receive distinctive designations. The most desirable names are binomial, the first part being geographic and the other lithologic. If the formation consists of a single rock type, it may be called, for example, the "Lyons Sandstone" or the "Benton Shale." When there are several different lithologies within the formation, a more general terminology is used, such as the "Morrison Formation," which contains siltstone, sandstone, and limestone. “For regional studies, geologists will study the stratigraphy of as many separate areas as they can, prepare a stratigraphic column for each, and combine them in an attempt to understand the regional geologic history of the area”.
== Laws and principles of geology ==
Principle of Uniformitarianism: defined in the authoritative Glossary of Geology as "the fundamental principle or doctrine that geologic processes and natural laws now operating to modify the Earth's crust have acted in the same regular manner and with essentially the same intensity throughout geologic time, and that past geologic events can be explained by phenomena and forces observable today; the classical concept that 'the present is the key to the past'.".
Law of Original Horizontality: sedimentary rocks are always deposited as horizontal, or nearly horizontal, strata, although these may be disturbed by later earth movements. This law was proposed by Nicolaus Steno in the mid-17th century.
Law of Superposition: general law upon which all geologic chronology is based: In any sequence of layered rocks, sedimentary or extrusive volcanic, that has not been overturned, the youngest stratum is at the top and the oldest at the base; i.e., each bed is younger than the bed beneath, but older than the bed above it. The law was stated by Steno in 1669.
Cross-cutting relationships: cross-cutting relationships is a principle of geology that states that the geologic feature which cuts another is the younger of the two features. It is a relative dating technique used commonly by geologists.
There are two main processes that are relevant to sedimentary strata formation: tectonic forces which build mountains and the surface, and erosional processes that transport the sediments to lower energy environments where they are then deposited. These processes result in large piles of accumulated sediments whenever there is a change in the depositional environment. The sedimentary particles are deposited dependent on the net energy in the transportation vector, typically water when dealing with sediment clasts.
“Brief descriptions of the units may be lettered to the right of the column, as in the figure, or the column may be accompanied by an explanation consisting of a small box for each lithologic symbol and for the other symbols alongside the column. Columns are constructed from the stratigraphic base upward and should be plotted first in pencil in order to insure spaces for gaps at faults and unconformities. Sections that are thicker than the height of the plate can be broken into two or more segments, with the stratigraphic base at the lower left and the top at the upper right. Bedding and unit boundaries are drawn horizontally, except in detailed sections or generalized sections of distinctly nontabular deposits, as some gravels and volcanic units”.
The following elements of a stratigraphic column are essential and are generally keyed to the figure:
title, indicating topic, general location, and whether the section is single (measured in one coherent course), composite (pieced from two or more section segments), averaged, or generalized;
name(s) of geologist(s) and date of the survey;
method of measurement;
graphic scale;
map or description of locality;
major chronostratigraphic units, if known;
lesser chronostratigraphic units, if known;
names and boundaries of rock units;
graphic column composed of standard lithologic patterns;
unconformities;
faults, with thickness of tectonic gaps, if known;
covered intervals, as measured,
positions of key beds; and
positions of important samples, with number and perhaps data. Other kinds of information may be included also.
This recorded information gives geologists a description of what rocks are in a cliff or underground. This description allows a better understanding of the entire geology of that area and can be used to decide whether there is potential for oil or natural gas in these rocks. “The differences between rock unit types and fossils observed within the rock determine how these rocks are grouped for diagramming purposes. The column displays what types of rocks these units are composed of in two ways. The unit name itself reveals to geologists the rock type and displays the relative thickness of the rock units”.
== References ==
== Further reading ==
Neuendorf, Klaus KE. Glossary of geology. Springer Science & Business Media, 2005.
Survey, Geological, and Director George Oils Smith. (n.d.): n. pag. United States Department of Interior Geologic Survey. USGS, 1931. Web. 7 May 2017. | Wikipedia/Stratigraphic_column |
Planetary oceanography, also called astro-oceanography or exo-oceanography, is the study of oceans on planets and moons other than Earth. Unlike other planetary sciences like astrobiology, astrochemistry, and planetary geology, it only began after the discovery of underground oceans in Saturn's moon Titan and Jupiter's moon Europa. This field remains speculative until further missions reach the oceans beneath the rock or ice layer of the moons. There are many theories about oceans or even ocean worlds of celestial bodies in the Solar System, from oceans made of liquid carbon with floating diamonds in Neptune to a gigantic ocean of liquid hydrogen that may exist underneath Jupiter's surface.
Early in their geologic histories, Mars and Venus are theorized to have had large water oceans. The Mars ocean hypothesis suggests that nearly a third of the surface of Mars was once covered by water, and a runaway greenhouse effect may have boiled away the global ocean of Venus. Compounds such as salts and ammonia, when dissolved in water, will lower water's freezing point, so that water might exist in large quantities in extraterrestrial environments as brine, or convecting ice. Unconfirmed oceans are speculated to exist beneath the surfaces of many dwarf planets and natural satellites; notably, the ocean of the moon Europa is estimated to have over twice the water volume of Earth's. The Solar System's giant planets are also thought to have liquid atmospheric layers of yet-to-be-confirmed compositions. Oceans may also exist on exoplanets and exomoons, including surface oceans of liquid water within a circumstellar habitable zone. Ocean planets are a hypothetical type of planet with a surface completely covered with liquid.
Extraterrestrial oceans may be composed of water, or other elements and compounds. The only confirmed large, stable bodies of extraterrestrial surface liquids are the lakes of Titan, which are made of hydrocarbons instead of water. However, there is strong evidence for the existence of subsurface water oceans elsewhere in the Solar System. The best-established candidates for subsurface water oceans in the Solar System are Jupiter's moons Europa, Ganymede, and Callisto, and Saturn's moons Enceladus and Titan.
Although Earth is the only known planet with large stable bodies of liquid water on its surface, and the only such planet in the Solar System, other celestial bodies are thought to have large oceans. In June 2020, NASA scientists reported that it is likely that exoplanets with oceans may be common in the Milky Way galaxy, based on mathematical modeling studies.
The inner structure of gas giants remains poorly understood. Scientists suspect that, under extreme pressure, hydrogen would act as a supercritical fluid, hence the likelihood of oceans of liquid hydrogen deep in the interior of gas giants like Jupiter. Oceans of liquid carbon have been hypothesized to exist on ice giants, notably Neptune and Uranus. Magma oceans exist during periods of accretion on any planet and some natural satellites when the planet or natural satellite is completely or partly molten.
== Extraterrestrial oceans ==
=== Planets ===
The gas giants, Jupiter and Saturn, are thought to lack surfaces and instead have a stratum of liquid hydrogen; however their planetary geology is not well understood. The possibility of the ice giants Uranus and Neptune having hot, highly compressed, supercritical water under their thick atmospheres has been hypothesised. Although their composition is still not fully understood, a 2006 study by Wiktorowicz and Ingersoll ruled out the possibility of such a water "ocean" existing on Neptune, though oceans of metallic liquid carbon are possible.
The Mars ocean hypothesis suggests that nearly a third of the surface of Mars was once covered by water, though the water on Mars is no longer oceanic (much of it residing in the ice caps). The possibility continues to be studied along with reasons for their apparent disappearance. Some astronomers now propose that Venus may have had liquid water and perhaps oceans for over 2 billion years.
=== Natural satellites ===
A global layer of liquid water thick enough to decouple the crust from the mantle is thought to be present on the natural satellites Titan, Europa, Enceladus, Ganymede, and Triton; and, with less certainty, in Callisto, Mimas, Miranda, and Ariel. A magma ocean is thought to be present on Io. Geysers or fumaroles have been found on Saturn's moon Enceladus, possibly originating from an ocean about 10 kilometers (6 mi) beneath the surface ice shell. Other icy moons may also have internal oceans, or may once have had internal oceans that have now frozen.
Large bodies of liquid hydrocarbons are thought to be present on the surface of Titan, although they are not large enough to be considered oceans and are sometimes referred to as lakes or seas. The Cassini–Huygens space mission initially discovered only what appeared to be dry lakebeds and empty river channels, suggesting that Titan had lost what surface liquids it might have had. Later flybys of Titan provided radar and infrared images that showed a series of hydrocarbon lakes in the colder polar regions. Titan is thought to have a subsurface liquid-water ocean under the ice in addition to the hydrocarbon mix that forms atop its outer crust.
=== Dwarf planets and trans-Neptunian objects ===
Ceres appears to be differentiated into a rocky core and icy mantle and may harbour a liquid-water ocean under its surface.
Not enough is known of the larger trans-Neptunian objects to determine whether they are differentiated bodies capable of supporting oceans, although models of radioactive decay suggest that Pluto, Eris, Sedna, and Orcus have oceans beneath solid icy crusts approximately 100 to 180 kilometers (60 to 110 mi) thick. In June 2020, astronomers reported evidence that the dwarf planet Pluto may have had a subsurface ocean, and consequently may have been habitable, when it was first formed.
=== Extrasolar ===
Some planets and natural satellites outside the Solar System are likely to have oceans, including possible water ocean planets similar to Earth in the habitable zone or "liquid-water belt". The detection of oceans, even through the spectroscopy method, however is likely extremely difficult and inconclusive.
Theoretical models have been used to predict with high probability that GJ 1214 b, detected by transit, is composed of an exotic form of ice VII making up 75% of its mass, which would make it an ocean planet.
Other candidates are merely speculative, suggested on the basis of their mass and position in the habitable zone, and little is actually known of their composition. Some scientists speculate Kepler-22b may be an "ocean-like" planet. Models have been proposed for Gliese 581 d that could include surface oceans. Gliese 436 b is speculated to have an ocean of "hot ice". Exomoons orbiting planets, particularly gas giants within their parent star's habitable zone, may theoretically have surface oceans.
Terrestrial planets will acquire water during their accretion, some of which will be buried in the magma ocean but most of it will go into a steam atmosphere, and when the atmosphere cools it will collapse on to the surface forming an ocean. There will also be outgassing of water from the mantle as the magma solidifies—this will happen even for planets with a low percentage of their mass composed of water, so "super-Earth exoplanets may be expected to commonly produce water oceans within tens to hundreds of millions of years of their last major accretionary impact."
== Non-water surface liquids ==
Oceans, seas, lakes and other bodies of liquids can be composed of liquids other than water, for example the hydrocarbon lakes on Titan. The possibility of seas of nitrogen on Triton was also considered but ruled out. There is evidence that the icy surfaces of the moons Ganymede, Callisto, Europa, Titan and Enceladus are shells floating on oceans of very dense liquid water or water–ammonia solution.
Extrasolar terrestrial planets that are extremely close to their parent star will be tidally locked and so one half of the planet will be a magma ocean. It is also possible that terrestrial planets had magma oceans at some point during their formation as a result of giant impacts. Hot Neptunes close to their star could lose their atmospheres via hydrodynamic escape, leaving behind their cores with various liquids on the surface. Where there are suitable temperatures and pressures, volatile chemicals that might exist as liquids in abundant quantities on planets (thalassogens) include ammonia, argon, carbon disulfide, ethane, hydrazine, hydrogen, hydrogen cyanide, hydrogen sulfide, methane, neon, nitrogen, nitric oxide, phosphine, silane, sulfuric acid, and water.
Supercritical fluids, although not liquids, do share various properties with liquids. Underneath the thick atmospheres of the planets Uranus and Neptune, it is expected that these planets are composed of oceans of hot high-density fluid mixtures of water, ammonia and other volatiles. The gaseous outer layers of Jupiter and Saturn transition smoothly into oceans of supercritical hydrogen. The atmosphere of Venus is 96.5% carbon dioxide, and is a supercritical fluid at the surface.
== See also ==
Extraterrestrial liquid water
Lava planet
List of largest lakes and seas in the Solar System
Magma ocean
Ocean world
== References == | Wikipedia/Astrooceanography |
Modern portfolio theory (MPT), or mean-variance analysis, is a mathematical framework for assembling a portfolio of assets such that the expected return is maximized for a given level of risk. It is a formalization and extension of diversification in investing, the idea that owning different kinds of financial assets is less risky than owning only one type. Its key insight is that an asset's risk and return should not be assessed by itself, but by how it contributes to a portfolio's overall risk and return. The variance of return (or its transformation, the standard deviation) is used as a measure of risk, because it is tractable when assets are combined into portfolios. Often, the historical variance and covariance of returns is used as a proxy for the forward-looking versions of these quantities, but other, more sophisticated methods are available.
Economist Harry Markowitz introduced MPT in a 1952 paper, for which he was later awarded a Nobel Memorial Prize in Economic Sciences; see Markowitz model.
In 1940, Bruno de Finetti published the mean-variance analysis method, in the context of proportional reinsurance, under a stronger assumption. The paper was obscure and only became known to economists of the English-speaking world in 2006.
== Mathematical model ==
=== Risk and expected return ===
MPT assumes that investors are risk averse, meaning that given two portfolios that offer the same expected return, investors will prefer the less risky one. Thus, an investor will take on increased risk only if compensated by higher expected returns. Conversely, an investor who wants higher expected returns must accept more risk. The exact trade-off will not be the same for all investors. Different investors will evaluate the trade-off differently based on individual risk aversion characteristics. The implication is that a rational investor will not invest in a portfolio if a second portfolio exists with a more favorable risk vs expected return profile — i.e., if for that level of risk an alternative portfolio exists that has better expected returns.
Under the model:
Portfolio return is the proportion-weighted combination of the constituent assets' returns.
Portfolio return volatility {\displaystyle \sigma _{p}} is a function of the correlations ρij of the component assets, for all asset pairs (i, j). The volatility gives insight into the risk which is associated with the investment. The higher the volatility, the higher the risk.
In general:
Expected return: {\displaystyle \operatorname {E} (R_{p})=\sum _{i}w_{i}\operatorname {E} (R_{i})} where {\displaystyle R_{p}} is the return on the portfolio, {\displaystyle R_{i}} is the return on asset i and {\displaystyle w_{i}} is the weighting of component asset {\displaystyle i} (that is, the proportion of asset "i" in the portfolio, so that {\displaystyle \sum _{i}w_{i}=1}).
Portfolio return variance: {\displaystyle \sigma _{p}^{2}=\sum _{i}w_{i}^{2}\sigma _{i}^{2}+\sum _{i}\sum _{j\neq i}w_{i}w_{j}\sigma _{i}\sigma _{j}\rho _{ij}}, where {\displaystyle \sigma _{i}} is the (sample) standard deviation of the periodic returns on an asset i, and {\displaystyle \rho _{ij}} is the correlation coefficient between the returns on assets i and j. Alternatively the expression can be written as: {\displaystyle \sigma _{p}^{2}=\sum _{i}\sum _{j}w_{i}w_{j}\sigma _{i}\sigma _{j}\rho _{ij}}, where {\displaystyle \rho _{ij}=1} for {\displaystyle i=j}, or {\displaystyle \sigma _{p}^{2}=\sum _{i}\sum _{j}w_{i}w_{j}\sigma _{ij}}, where {\displaystyle \sigma _{ij}=\sigma _{i}\sigma _{j}\rho _{ij}} is the (sample) covariance of the periodic returns on the two assets, alternatively denoted {\displaystyle \sigma (i,j)}, {\displaystyle {\text{cov}}_{ij}} or {\displaystyle {\text{cov}}(i,j)}.
Portfolio return volatility (standard deviation): {\displaystyle \sigma _{p}={\sqrt {\sigma _{p}^{2}}}}
For a two-asset portfolio:
Portfolio expected return: {\displaystyle \operatorname {E} (R_{p})=w_{A}\operatorname {E} (R_{A})+w_{B}\operatorname {E} (R_{B})=w_{A}\operatorname {E} (R_{A})+(1-w_{A})\operatorname {E} (R_{B}).}
Portfolio variance: {\displaystyle \sigma _{p}^{2}=w_{A}^{2}\sigma _{A}^{2}+w_{B}^{2}\sigma _{B}^{2}+2w_{A}w_{B}\sigma _{A}\sigma _{B}\rho _{AB}}
For a three-asset portfolio:
Portfolio expected return: {\displaystyle \operatorname {E} (R_{p})=w_{A}\operatorname {E} (R_{A})+w_{B}\operatorname {E} (R_{B})+w_{C}\operatorname {E} (R_{C})}
Portfolio variance: {\displaystyle \sigma _{p}^{2}=w_{A}^{2}\sigma _{A}^{2}+w_{B}^{2}\sigma _{B}^{2}+w_{C}^{2}\sigma _{C}^{2}+2w_{A}w_{B}\sigma _{A}\sigma _{B}\rho _{AB}+2w_{A}w_{C}\sigma _{A}\sigma _{C}\rho _{AC}+2w_{B}w_{C}\sigma _{B}\sigma _{C}\rho _{BC}}
The algebra can be much simplified by expressing the quantities involved in matrix notation. Arrange the returns of N risky assets in an {\displaystyle N\times 1} vector {\displaystyle R}, where the first element is the return of the first asset, the second element of the second asset, and so on. Arrange their expected returns in a column vector {\displaystyle \mu }, and their variances and covariances in a covariance matrix {\displaystyle \Sigma }. Consider a portfolio of risky assets whose weights in each of the N risky assets is given by the corresponding element of the weight vector {\displaystyle w}. Then:
Portfolio expected return: {\displaystyle w'\mu } and
Portfolio variance: {\displaystyle w'\Sigma w}
For the case where there is investment in a riskfree asset with return {\displaystyle R_{f}}, the weights of the weight vector do not sum to 1, and the portfolio expected return becomes {\displaystyle w'\mu +(1-w'1)R_{f}}. The expression for the portfolio variance is unchanged.
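As a rough illustration of the matrix expressions above, the following sketch computes w'μ and w'Σw for a three-asset portfolio in Python with NumPy; the expected-return vector, covariance matrix and weights are made-up numbers used only as assumptions for the example.

```python
import numpy as np

# Illustrative (assumed) inputs for three risky assets
mu = np.array([0.08, 0.10, 0.12])          # expected returns
Sigma = np.array([[0.04, 0.006, 0.004],    # covariance matrix of returns
                  [0.006, 0.09, 0.012],
                  [0.004, 0.012, 0.16]])
w = np.array([0.5, 0.3, 0.2])              # portfolio weights, summing to 1

port_return = w @ mu                       # w' mu
port_variance = w @ Sigma @ w              # w' Sigma w
port_volatility = np.sqrt(port_variance)
print(port_return, port_variance, port_volatility)
```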
=== Diversification ===
An investor can reduce portfolio risk (especially {\displaystyle \sigma _{p}}) simply by holding combinations of instruments that are not perfectly positively correlated (correlation coefficient {\displaystyle -1\leq \rho _{ij}<1}). In other words, investors can reduce their exposure to individual asset risk by holding a diversified portfolio of assets. Diversification may allow for the same portfolio expected return with reduced risk.
If all the asset pairs have correlations of 0 — they are perfectly uncorrelated — the portfolio's return variance is the sum over all assets of the square of the fraction held in the asset times the asset's return variance (and the portfolio standard deviation is the square root of this sum).
If all the asset pairs have correlations of 1 — they are perfectly positively correlated — then the portfolio return's standard deviation is the sum of the asset returns' standard deviations weighted by the fractions held in the portfolio. For given portfolio weights and given standard deviations of asset returns, the case of all correlations being 1 gives the highest possible standard deviation of portfolio return.
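A minimal numerical check of the uncorrelated case described above: with N equally weighted assets of identical variance and zero pairwise correlation, the portfolio variance should equal the single-asset variance divided by N. The variance value below is an assumption chosen purely for illustration.

```python
import numpy as np

sigma2 = 0.04                        # assumed variance of each asset's return
for n in (1, 2, 5, 10, 50):
    w = np.full(n, 1.0 / n)          # equal weights
    Sigma = np.eye(n) * sigma2       # zero correlations off the diagonal
    print(n, w @ Sigma @ w)          # equals sigma2 / n
```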
=== Efficient frontier with no risk-free asset ===
The MPT is a mean-variance theory: it compares the expected (mean) return of a portfolio with the standard deviation of the same portfolio. Expected return is plotted on the vertical axis and the standard deviation (volatility) on the horizontal axis. Volatility is described by standard deviation and serves as a measure of risk. The return - standard deviation space is sometimes called the space of 'expected return vs risk'. Every possible combination of risky assets can be plotted in this risk-expected return space, and the collection of all such possible portfolios defines a region in this space. The left boundary of this region is hyperbolic, and the upper part of the hyperbolic boundary is the efficient frontier in the absence of a risk-free asset (sometimes called "the Markowitz bullet"). Combinations along this upper edge represent portfolios (including no holdings of the risk-free asset) for which there is the lowest risk for a given level of expected return. Equivalently, a portfolio lying on the efficient frontier represents the combination offering the best possible expected return for a given risk level. The tangent to the upper part of the hyperbolic boundary is the capital allocation line (CAL).
Matrices are preferred for calculations of the efficient frontier.
In matrix form, for a given "risk tolerance" {\displaystyle q\in [0,\infty )}, the efficient frontier is found by minimizing the following expression: {\displaystyle w^{T}\Sigma w-qR^{T}w} where
{\displaystyle w\in \mathbb {R} ^{N}} is a vector of portfolio weights and {\displaystyle \sum _{i=1}^{N}w_{i}=1.} (The weights can be negative);
{\displaystyle \Sigma \in \mathbb {R} ^{N\times N}} is the covariance matrix for the returns on the assets in the portfolio;
{\displaystyle q\geq 0} is a "risk tolerance" factor, where 0 results in the portfolio with minimal risk and {\displaystyle \infty } results in the portfolio infinitely far out on the frontier with both expected return and risk unbounded; and
{\displaystyle R\in \mathbb {R} ^{N}} is a vector of expected returns.
{\displaystyle w^{T}\Sigma w\in \mathbb {R} } is the variance of portfolio return.
{\displaystyle R^{T}w\in \mathbb {R} } is the expected return on the portfolio.
The above optimization finds the point on the frontier at which the inverse of the slope of the frontier would be q if portfolio return variance instead of standard deviation were plotted horizontally. The frontier in its entirety is parametric on q.
Harry Markowitz developed a specific procedure for solving the above problem, called the critical line algorithm, that can handle additional linear constraints, upper and lower bounds on assets, and which is proved to work with a semi-positive definite covariance matrix. Examples of implementation of the critical line algorithm exist in Visual Basic for Applications, in JavaScript and in a few other languages.
Also, many software packages, including MATLAB, Microsoft Excel, Mathematica and R, provide generic optimization routines so that using these for solving the above problem is possible, with potential caveats (poor numerical accuracy, requirement of positive definiteness of the covariance matrix...).
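As one possible sketch of using a generic optimizer for the risk-tolerance formulation above, the following minimizes w'Σw − qR'w subject to the weights summing to one (short positions allowed), using Python with SciPy rather than the packages listed above; the return vector, covariance matrix and q are assumptions chosen for illustration, and the caveats about numerical accuracy and positive definiteness still apply.

```python
import numpy as np
from scipy.optimize import minimize

R = np.array([0.08, 0.10, 0.12])            # assumed expected returns
Sigma = np.array([[0.04, 0.006, 0.004],     # assumed covariance matrix
                  [0.006, 0.09, 0.012],
                  [0.004, 0.012, 0.16]])
q = 0.5                                     # risk-tolerance parameter

objective = lambda w: w @ Sigma @ w - q * R @ w
constraint = {"type": "eq", "fun": lambda w: np.sum(w) - 1.0}

result = minimize(objective, x0=np.full(3, 1 / 3), constraints=[constraint])
print(result.x)                             # one frontier portfolio for this q
```

Sweeping q over a grid of values traces out the whole frontier.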
An alternative approach to specifying the efficient frontier is to do so parametrically on the expected portfolio return {\displaystyle R^{T}w.} This version of the problem requires that we minimize {\displaystyle w^{T}\Sigma w} subject to {\displaystyle R^{T}w=\mu } and {\displaystyle \sum _{i=1}^{N}w_{i}=1} for parameter {\displaystyle \mu }. This problem is easily solved using a Lagrange multiplier which leads to the following linear system of equations:
{\displaystyle {\begin{bmatrix}2\Sigma &-R&-{\bf {1}}\\R^{T}&0&0\\{\bf {1}}^{T}&0&0\end{bmatrix}}{\begin{bmatrix}w\\\lambda _{1}\\\lambda _{2}\end{bmatrix}}={\begin{bmatrix}0\\\mu \\1\end{bmatrix}}}
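The linear system above can be assembled and solved directly. The sketch below does so with NumPy for an assumed covariance matrix, expected-return vector and target return μ; the first N entries of the solution are the minimum-variance weights for that target, and the last two are the Lagrange multipliers.

```python
import numpy as np

R = np.array([0.08, 0.10, 0.12])             # assumed expected returns
Sigma = np.array([[0.04, 0.006, 0.004],      # assumed covariance matrix
                  [0.006, 0.09, 0.012],
                  [0.004, 0.012, 0.16]])
mu_target = 0.10                             # desired portfolio expected return
n = len(R)
ones = np.ones(n)

# Block matrix [[2Σ, -R, -1], [R', 0, 0], [1', 0, 0]]
A = np.zeros((n + 2, n + 2))
A[:n, :n] = 2 * Sigma
A[:n, n] = -R
A[:n, n + 1] = -ones
A[n, :n] = R
A[n + 1, :n] = ones

b = np.concatenate([np.zeros(n), [mu_target, 1.0]])
solution = np.linalg.solve(A, b)
w = solution[:n]                             # efficient weights for this μ
print(w)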
=== Two mutual fund theorem ===
One key result of the above analysis is the two mutual fund theorem. This theorem states that any portfolio on the efficient frontier can be generated by holding a combination of any two given portfolios on the frontier; the latter two given portfolios are the "mutual funds" in the theorem's name. So in the absence of a risk-free asset, an investor can achieve any desired efficient portfolio even if all that is accessible is a pair of efficient mutual funds. If the location of the desired portfolio on the frontier is between the locations of the two mutual funds, both mutual funds will be held in positive quantities. If the desired portfolio is outside the range spanned by the two mutual funds, then one of the mutual funds must be sold short (held in negative quantity) while the size of the investment in the other mutual fund must be greater than the amount available for investment (the excess being funded by the borrowing from the other fund).
=== Risk-free asset and the capital allocation line ===
The risk-free asset is the (hypothetical) asset that pays a risk-free rate. In practice, short-term government securities (such as US treasury bills) are used as a risk-free asset, because they pay a fixed rate of interest and have exceptionally low default risk. The risk-free asset has zero variance in returns if held to maturity (hence is risk-free); it is also uncorrelated with any other asset (by definition, since its variance is zero). As a result, when it is combined with any other asset or portfolio of assets, the change in return is linearly related to the change in risk as the proportions in the combination vary.
When a risk-free asset is introduced, the half-line shown in the figure is the new efficient frontier. It is tangent to the hyperbola at the pure risky portfolio with the highest Sharpe ratio. Its vertical intercept represents a portfolio with 100% of holdings in the risk-free asset; the tangency with the hyperbola represents a portfolio with no risk-free holdings and 100% of assets held in the portfolio occurring at the tangency point; points between those points are portfolios containing positive amounts of both the risky tangency portfolio and the risk-free asset; and points on the half-line beyond the tangency point are portfolios involving negative holdings of the risk-free asset and an amount invested in the tangency portfolio equal to more than 100% of the investor's initial capital. This efficient half-line is called the capital allocation line (CAL), and its formula can be shown to be
{\displaystyle E(R_{C})=R_{F}+\sigma _{C}{\frac {E(R_{P})-R_{F}}{\sigma _{P}}}.}
In this formula P is the sub-portfolio of risky assets at the tangency with the Markowitz bullet, F is the risk-free asset, and C is a combination of portfolios P and F.
By the diagram, the introduction of the risk-free asset as a possible component of the portfolio has improved the range of risk-expected return combinations available, because everywhere except at the tangency portfolio the half-line gives a higher expected return than the hyperbola does at every possible risk level. The fact that all points on the linear efficient locus can be achieved by a combination of holdings of the risk-free asset and the tangency portfolio is known as the one mutual fund theorem, where the mutual fund referred to is the tangency portfolio.
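One common way to locate the tangency portfolio numerically, offered here only as a sketch and not stated in the text above, is to form Σ⁻¹(μ − R_F·1) and rescale it so the weights sum to one, which maximizes the Sharpe ratio under the assumptions of this section (invertible covariance matrix, risk-free rate below the global-MVP return). All numbers below are illustrative assumptions.

```python
import numpy as np

mu = np.array([0.08, 0.10, 0.12])            # assumed expected returns
Sigma = np.array([[0.04, 0.006, 0.004],      # assumed covariance matrix
                  [0.006, 0.09, 0.012],
                  [0.004, 0.012, 0.16]])
r_f = 0.03                                   # assumed risk-free rate

excess = mu - r_f
raw = np.linalg.solve(Sigma, excess)         # Σ^{-1} (μ - R_F · 1)
w_tan = raw / raw.sum()                      # tangency (max Sharpe ratio) portfolio

sharpe = (w_tan @ mu - r_f) / np.sqrt(w_tan @ Sigma @ w_tan)
print(w_tan, sharpe)
```

Any point on the CAL can then be obtained by mixing this tangency portfolio with the risk-free asset.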
=== Geometric intuition ===
The efficient frontier can be pictured as a problem in quadratic curves. On the market, we have the assets {\displaystyle R_{1},R_{2},\dots ,R_{n}}. We have some funds, and a portfolio is a way to divide our funds into the assets. Each portfolio can be represented as a vector {\displaystyle w_{1},w_{2},\dots ,w_{n}}, such that {\displaystyle \sum _{i}w_{i}=1}, and we hold the assets according to {\displaystyle w^{T}R=\sum _{i}w_{i}R_{i}}.
==== Markowitz bullet ====
Since we wish to maximize expected return while minimizing the standard deviation of the return, we are to solve a quadratic optimization problem:
{\displaystyle {\begin{cases}E[w^{T}R]=\mu \\\min \sigma ^{2}=Var[w^{T}R]\\\sum _{i}w_{i}=1\end{cases}}}
Portfolios are points in the Euclidean space {\displaystyle \mathbb {R} ^{n}}. The third equation states that the portfolio should fall on a plane defined by {\displaystyle \sum _{i}w_{i}=1}. The first equation states that the portfolio should fall on a plane defined by {\displaystyle w^{T}E[R]=\mu }. The second condition states that the portfolio should fall on the contour surface for {\displaystyle \sum _{ij}w_{i}\rho _{ij}w_{j}} that is as close to the origin as possible. Since the equation is quadratic, each such contour surface is an ellipsoid (assuming that the covariance matrix {\displaystyle \rho _{ij}} is invertible). Therefore, we can solve the quadratic optimization graphically by drawing ellipsoidal contours on the plane {\displaystyle \sum _{i}w_{i}=1}, then intersect the contours with the plane {\displaystyle \{w:w^{T}E[R]=\mu {\text{ and }}\sum _{i}w_{i}=1\}}. As the ellipsoidal contours shrink, eventually one of them would become exactly tangent to the plane, before the contours become completely disjoint from the plane. The tangent point is the optimal portfolio at this level of expected return.
As we vary {\displaystyle \mu }, the tangent point varies as well, but it always falls on a single line (this is the two mutual funds theorem).
Let the line be parameterized as {\displaystyle \{w+w't:t\in \mathbb {R} \}}. We find that along the line,
{\displaystyle {\begin{cases}\mu &=(w'^{T}E[R])t+w^{T}E[R]\\\sigma ^{2}&=(w'^{T}\rho w')t^{2}+2(w^{T}\rho w')t+(w^{T}\rho w)\end{cases}}}
giving a hyperbola in the {\displaystyle (\sigma ,\mu )} plane. The hyperbola has two branches, symmetric with respect to the {\displaystyle \mu } axis. However, only the branch with {\displaystyle \sigma >0} is meaningful. By symmetry, the two asymptotes of the hyperbola intersect at a point {\displaystyle \mu _{MVP}} on the {\displaystyle \mu } axis. This value {\displaystyle \mu _{MVP}} is also the height of the leftmost point of the hyperbola, and can be interpreted as the expected return of the global minimum-variance portfolio (global MVP).
==== Tangency portfolio ====
The tangency portfolio exists if and only if {\displaystyle \mu _{RF}<\mu _{MVP}}.
In particular, if the risk-free return is greater than or equal to {\displaystyle \mu _{MVP}}, then the tangent portfolio does not exist. The capital market line (CML) becomes parallel to the upper asymptote line of the hyperbola. Points on the CML become impossible to achieve, though they can be approached from below.
It is usually assumed that the risk-free return is less than the return of the global MVP, in order that the tangency portfolio exists. However, even in this case, as {\displaystyle \mu _{RF}} approaches {\displaystyle \mu _{MVP}} from below, the tangency portfolio diverges to a portfolio with infinite return and variance. Since there are only finitely many assets in the market, such a portfolio must be shorting some assets heavily while longing some other assets heavily. In practice, such a tangency portfolio would be impossible to achieve, because one cannot short an asset too much due to short sale constraints, and also because of price impact, that is, longing a large amount of an asset would push up its price, breaking the assumption that the asset prices do not depend on the portfolio.
==== Non-invertible covariance matrix ====
If the covariance matrix is not invertible, then there exists some nonzero vector {\displaystyle v}, such that {\displaystyle v^{T}R} is a random variable with zero variance—that is, it is not random at all.
Suppose {\displaystyle \sum _{i}v_{i}=0} and {\displaystyle v^{T}R=0}, then that means one of the assets can be exactly replicated using the other assets at the same price and the same return. Therefore, there is never a reason to buy that asset, and we can remove it from the market.
Suppose {\displaystyle \sum _{i}v_{i}=0} and {\displaystyle v^{T}R\neq 0}, then that means there is free money, breaking the no arbitrage assumption.
Suppose {\displaystyle \sum _{i}v_{i}\neq 0}, then we can scale the vector to {\displaystyle \sum _{i}v_{i}=1}. This means that we have constructed a risk-free asset with return {\displaystyle v^{T}R}. We can remove each such asset from the market, constructing one risk-free asset for each such asset removed. By the no arbitrage assumption, all their return rates are equal. For the assets that still remain in the market, their covariance matrix is invertible.
== Asset pricing ==
The above analysis describes optimal behavior of an individual investor. Asset pricing theory builds on this analysis, allowing MPT to derive the required expected return for a correctly priced asset in this context.
Intuitively (in a perfect market with rational investors), if a security was expensive relative to others - i.e. too much risk for the price - demand would fall and its price would drop correspondingly; if cheap, demand and price would increase likewise.
This would continue until all such adjustments had ceased - a state of "market equilibrium".
In this equilibrium, relative supplies will equal relative demands:
given the relationship of price with supply and demand, since the risk-to-reward ratio is "identical" across all securities, proportions of each security in any fully-diversified portfolio would correspondingly be the same as in the overall market.
More formally, then, since everyone holds the risky assets in identical proportions to each other — namely in the proportions given by the tangency portfolio — in market equilibrium the risky assets' prices, and therefore their expected returns, will adjust so that the ratios in the tangency portfolio are the same as the ratios in which the risky assets are supplied to the market.
The result for expected return then follows, as below.
=== Systematic risk and specific risk ===
Specific risk is the risk associated with individual assets - within a portfolio these risks can be reduced through diversification (specific risks "cancel out"). Specific risk is also called diversifiable, unique, unsystematic, or idiosyncratic risk. Systematic risk (a.k.a. portfolio risk or market risk) refers to the risk common to all securities—except for selling short as noted below, systematic risk cannot be diversified away (within one market). Within the market portfolio, asset specific risk will be diversified away to the extent possible. Systematic risk is therefore equated with the risk (standard deviation) of the market portfolio.
Since a security will be purchased only if it improves the risk-expected return characteristics of the market portfolio, the relevant measure of the risk of a security is the risk it adds to the market portfolio, and not its risk in isolation.
In this context, the volatility of the asset, and its correlation with the market portfolio, are historically observed and are therefore given. (There are several approaches to asset pricing that attempt to price assets by modelling the stochastic properties of the moments of assets' returns - these are broadly referred to as conditional asset pricing models.)
Systematic risks within one market can be managed through a strategy of using both long and short positions within one portfolio, creating a "market neutral" portfolio. Market neutral portfolios, therefore, will be uncorrelated with broader market indices.
=== Capital asset pricing model ===
The asset return depends on the amount paid for the asset today. The price paid must ensure that the market portfolio's risk / return characteristics improve when the asset is added to it. The CAPM is a model that derives the theoretical required expected return (i.e., discount rate) for an asset in a market, given the risk-free rate available to investors and the risk of the market as a whole. The CAPM is usually expressed:
{\displaystyle \operatorname {E} (R_{i})=R_{f}+\beta _{i}(\operatorname {E} (R_{m})-R_{f})}
β, Beta, is the measure of asset sensitivity to a movement in the overall market; Beta is usually found via regression on historical data. Betas exceeding one signify more than average "riskiness" in the sense of the asset's contribution to overall portfolio risk; betas below one indicate a lower than average risk contribution.
{\displaystyle (\operatorname {E} (R_{m})-R_{f})} is the market premium, the expected excess return of the market portfolio over the risk-free rate.
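A one-line numerical illustration of the CAPM expression, with assumed values for the risk-free rate, beta and the expected market return:

```python
r_f, beta_i, e_r_m = 0.03, 1.2, 0.08          # assumed inputs
e_r_i = r_f + beta_i * (e_r_m - r_f)          # E(R_i) = R_f + β_i (E(R_m) − R_f)
print(e_r_i)                                   # 0.09
```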
A derivation is as follows:
(1) The incremental impact on risk and expected return when an additional risky asset, a, is added to the market portfolio, m, follows from the formulae for a two-asset portfolio. These results are used to derive the asset-appropriate discount rate.
Updated portfolio risk = {\displaystyle (w_{m}^{2}\sigma _{m}^{2}+[w_{a}^{2}\sigma _{a}^{2}+2w_{m}w_{a}\rho _{am}\sigma _{a}\sigma _{m}])}
Hence, risk added to portfolio = {\displaystyle [w_{a}^{2}\sigma _{a}^{2}+2w_{m}w_{a}\rho _{am}\sigma _{a}\sigma _{m}]}
but since the weight of the asset will be very low relative to the overall market, {\displaystyle w_{a}^{2}\approx 0}
i.e. additional risk = {\displaystyle [2w_{m}w_{a}\rho _{am}\sigma _{a}\sigma _{m}]}
Updated expected return = {\displaystyle (w_{m}\operatorname {E} (R_{m})+[w_{a}\operatorname {E} (R_{a})])}
Hence additional expected return = {\displaystyle [w_{a}\operatorname {E} (R_{a})]}
(2) If an asset, a, is correctly priced, the improvement for an investor in her risk-to-expected return ratio achieved by adding it to the market portfolio, m, will at least (in equilibrium, exactly) match the gains of spending that money on an increased stake in the market portfolio. The assumption is that the investor will purchase the asset with funds borrowed at the risk-free rate, {\displaystyle R_{f}}; this is rational if {\displaystyle \operatorname {E} (R_{a})>R_{f}}.
Thus: {\displaystyle [w_{a}(\operatorname {E} (R_{a})-R_{f})]/[2w_{m}w_{a}\rho _{am}\sigma _{a}\sigma _{m}]=[w_{a}(\operatorname {E} (R_{m})-R_{f})]/[2w_{m}w_{a}\sigma _{m}\sigma _{m}]}
i.e.: {\displaystyle [\operatorname {E} (R_{a})]=R_{f}+[\operatorname {E} (R_{m})-R_{f}]*[\rho _{am}\sigma _{a}\sigma _{m}]/[\sigma _{m}\sigma _{m}]}
i.e.: {\displaystyle [\operatorname {E} (R_{a})]=R_{f}+[\operatorname {E} (R_{m})-R_{f}]*[\sigma _{am}]/[\sigma _{mm}]}
(since {\displaystyle \rho _{XY}=\sigma _{XY}/(\sigma _{X}\sigma _{Y})})
{\displaystyle [\sigma _{am}]/[\sigma _{mm}]} is the "beta", {\displaystyle \beta }: the covariance between the asset's return and the market's return divided by the variance of the market return, i.e. the sensitivity of the asset price to movement in the market portfolio's value (see also Beta (finance) § Adding an asset to the market portfolio).
This equation can be estimated statistically using the following regression equation:
{\displaystyle \mathrm {SCL} :R_{i,t}-R_{f}=\alpha _{i}+\beta _{i}\,(R_{M,t}-R_{f})+\epsilon _{i,t}}
where αi is called the asset's alpha, βi is the asset's beta coefficient and SCL is the security characteristic line.
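Alpha and beta can be estimated from historical excess returns by ordinary least squares, as in the security characteristic line above. The sketch below runs the regression with NumPy's least-squares routine on simulated data, so the sample size, the noise level and the "true" parameters are all assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
mkt_excess = rng.normal(0.0005, 0.01, 500)     # simulated market excess returns
true_alpha, true_beta = 0.0001, 1.3            # assumed "true" parameters
asset_excess = true_alpha + true_beta * mkt_excess + rng.normal(0, 0.005, 500)

# Regress asset excess returns on market excess returns (the SCL)
X = np.column_stack([np.ones_like(mkt_excess), mkt_excess])
alpha_hat, beta_hat = np.linalg.lstsq(X, asset_excess, rcond=None)[0]
print(alpha_hat, beta_hat)                     # estimates of α_i and β_i
```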
Once an asset's expected return, {\displaystyle E(R_{i})}, is calculated using CAPM, the future cash flows of the asset can be discounted to their present value using this rate to establish the correct price for the asset. A riskier stock will have a higher beta and will be discounted at a higher rate; less sensitive stocks will have lower betas and be discounted at a lower rate. In theory, an asset is correctly priced when its observed price is the same as its value calculated using the CAPM-derived discount rate. If the observed price is higher than the valuation, then the asset is overvalued; it is undervalued if the price is too low.
== Criticisms ==
Despite its theoretical importance, critics of MPT question whether it is an ideal investment tool, because its model of financial markets does not match the real world in many ways.
The risk, return, and correlation measures used by MPT are based on expected values, which means that they are statistical statements about the future (the expected value of returns is explicit in the above equations, and implicit in the definitions of variance and covariance). Such measures often cannot capture the true statistical features of risk and return, which often follow highly skewed distributions (e.g. the log-normal distribution) and can give rise not only to reduced volatility but also to inflated growth of return. In practice, investors must substitute predictions based on historical measurements of asset return and volatility for these values in the equations. Very often such expected values fail to take account of new circumstances that did not exist when the historical data was generated. An optimal approach to capturing trends, which differs from Markowitz optimization by utilizing invariance properties, is also derived from physics. Instead of transforming the normalized expectations using the inverse of the correlation matrix, the invariant portfolio employs the inverse of the square root of the correlation matrix. The optimization problem is solved under the assumption that expected values are uncertain and correlated. The Markowitz solution corresponds only to the case where the correlation between expected returns is similar to the correlation between returns.
More fundamentally, investors are stuck with estimating key parameters from past market data because MPT attempts to model risk in terms of the likelihood of losses, but says nothing about why those losses might occur. The risk measurements used are probabilistic in nature, not structural. This is a major difference as compared to many engineering approaches to risk management.
Options theory and MPT have at least one important conceptual difference from the probabilistic risk assessment done by nuclear power [plants]. A PRA is what economists would call a structural model. The components of a system and their relationships are modeled in Monte Carlo simulations. If valve X fails, it causes a loss of back pressure on pump Y, causing a drop in flow to vessel Z, and so on.
But in the Black–Scholes equation and MPT, there is no attempt to explain an underlying structure to price changes. Various outcomes are simply given probabilities. And, unlike the PRA, if there is no history of a particular system-level event like a liquidity crisis, there is no way to compute the odds of it. If nuclear engineers ran risk management this way, they would never be able to compute the odds of a meltdown at a particular plant until several similar events occurred in the same reactor design.
Mathematical risk measurements are also useful only to the degree that they reflect investors' true concerns—there is no point minimizing a variable that nobody cares about in practice. In particular, variance is a symmetric measure that counts abnormally high returns as just as risky as abnormally low returns. The psychological phenomenon of loss aversion is the idea that investors are more concerned about losses than gains, meaning that our intuitive concept of risk is fundamentally asymmetric in nature. Other risk measures (like coherent risk measures) might therefore better reflect investors' true preferences.
Modern portfolio theory has also been criticized because it assumes that returns follow a Gaussian distribution. Already in the 1960s, Benoit Mandelbrot and Eugene Fama showed the inadequacy of this assumption and proposed the use of more general stable distributions instead. Stefan Mittnik and Svetlozar Rachev presented strategies for deriving optimal portfolios in such settings. More recently, Nassim Nicholas Taleb has also criticized modern portfolio theory on this ground, writing:After the stock market crash (in 1987), they rewarded two theoreticians, Harry Markowitz and William Sharpe, who built beautifully Platonic models on a Gaussian base, contributing to what is called Modern Portfolio Theory. Simply, if you remove their Gaussian assumptions and treat prices as scalable, you are left with hot air. The Nobel Committee could have tested the Sharpe and Markowitz models—they work like quack remedies sold on the Internet—but nobody in Stockholm seems to have thought about it.
Contrarian investors and value investors typically do not subscribe to Modern Portfolio Theory. One objection is that the MPT relies on the efficient-market hypothesis and uses fluctuations in share price as a substitute for risk. Sir John Templeton believed in diversification as a concept, but also felt the theoretical foundations of MPT were questionable, and concluded (as described by a biographer): "the notion that building portfolios on the basis of unreliable and irrelevant statistical inputs, such as historical volatility, was doomed to failure."
A few studies have argued that "naive diversification", splitting capital equally among available investment options, might have advantages over MPT in some situations.
When applied to certain universes of assets, the Markowitz model has been identified by academics to be inadequate due to its susceptibility to model instability which may arise, for example, among a universe of highly correlated assets.
== Extensions ==
Since MPT's introduction in 1952, many attempts have been made to improve the model, especially by using more realistic assumptions.
Post-modern portfolio theory extends MPT by adopting non-normally distributed, asymmetric, and fat-tailed measures of risk. This helps with some of these problems, but not others.
Black–Litterman model optimization is an extension of unconstrained Markowitz optimization that incorporates relative and absolute 'views' on the risk and return inputs.
The model is also extended by assuming that expected returns are uncertain, and the correlation matrix in this case can differ from the correlation matrix between returns.
== Connection with rational choice theory ==
Modern portfolio theory is inconsistent with the main axioms of rational choice theory, most notably with the monotonicity axiom, which states that if investing in portfolio X will, with probability one, return more money than investing in portfolio Y, then a rational investor should prefer X to Y. In contrast, modern portfolio theory is based on a different axiom, called variance aversion, and may recommend investing in Y on the basis that it has lower variance. Maccheroni et al. described a choice theory which is the closest possible to modern portfolio theory, while satisfying the monotonicity axiom. Alternatively, mean-deviation analysis is a rational choice theory resulting from replacing variance by an appropriate deviation risk measure.
== Other applications ==
In the 1970s, concepts from MPT found their way into the field of regional science. In a series of seminal works, Michael Conroy modeled the labor force in the economy using portfolio-theoretic methods to examine growth and variability in the labor force. This was followed by a long literature on the relationship between economic growth and volatility.
More recently, modern portfolio theory has been used to model the self-concept in social psychology. When the self attributes comprising the self-concept constitute a well-diversified portfolio, then psychological outcomes at the level of the individual such as mood and self-esteem should be more stable than when the self-concept is undiversified. This prediction has been confirmed in studies involving human subjects.
Recently, modern portfolio theory has been applied to modelling the uncertainty and correlation between documents in information retrieval. Given a query, the aim is to maximize the overall relevance of a ranked list of documents and at the same time minimize the overall uncertainty of the ranked list.
=== Project portfolios and other "non-financial" assets ===
Some experts apply MPT to portfolios of projects and other assets besides financial instruments. When MPT is applied outside of traditional financial portfolios, some distinctions between the different types of portfolios must be considered.
The assets in financial portfolios are, for practical purposes, continuously divisible while portfolios of projects are "lumpy". For example, while we can compute that the optimal portfolio position for 3 stocks is, say, 44%, 35%, 21%, the optimal position for a project portfolio may not allow us to simply change the amount spent on a project. Projects might be all or nothing or, at least, have logical units that cannot be separated. A portfolio optimization method would have to take the discrete nature of projects into account.
The assets of financial portfolios are liquid; they can be assessed or re-assessed at any point in time. But opportunities for launching new projects may be limited and may occur in limited windows of time. Projects that have already been initiated cannot be abandoned without the loss of the sunk costs (i.e., there is little or no recovery/salvage value of a half-complete project).
Neither of these necessarily eliminates the possibility of using MPT on such portfolios. They simply indicate the need to run the optimization with an additional set of mathematically expressed constraints that would not normally apply to financial portfolios.
Furthermore, some of the simplest elements of Modern Portfolio Theory are applicable to virtually any kind of portfolio. The concept of capturing the risk tolerance of an investor by documenting how much risk is acceptable for a given return may be applied to a variety of decision analysis problems. MPT uses historical variance as a measure of risk, but portfolios of assets like major projects do not have a well-defined "historical variance". In this case, the MPT investment boundary can be expressed in more general terms like "chance of an ROI less than cost of capital" or "chance of losing more than half of the investment". When risk is put in terms of uncertainty about forecasts and possible losses then the concept is transferable to various types of investment.
== See also ==
Outline of finance § Portfolio theory
Beta (finance)
Bias ratio (finance)
Black–Litterman model
Financial economics § Uncertainty
Financial risk management § Investment management
Intertemporal portfolio choice
Investment theory
Kelly criterion
Marginal conditional stochastic dominance
Markowitz model
Mutual fund separation theorem
Omega ratio
Post-modern portfolio theory
Sortino ratio
Treynor ratio
Two-moment decision models
Universal portfolio algorithm
== References ==
== Further reading ==
Lintner, John (1965). "The Valuation of Risk Assets and the Selection of Risky Investments in Stock Portfolios and Capital Budgets". The Review of Economics and Statistics. 47 (1): 13–39. doi:10.2307/1924119. JSTOR 1924119.
Sharpe, William F. (1964). "Capital asset prices: A theory of market equilibrium under conditions of risk". Journal of Finance. 19 (3): 425–442. doi:10.2307/2977928. hdl:10.1111/j.1540-6261.1964.tb02865.x. JSTOR 2977928.
Tobin, James (1958). "Liquidity preference as behavior towards risk" (PDF). The Review of Economic Studies. 25 (2): 65–86. doi:10.2307/2296205. JSTOR 2296205.
== External links ==
Macro-Investment Analysis, Prof. William F. Sharpe, Stanford University
Portfolio Optimization, Prof. Stephen P. Boyd, Stanford University
"New Approaches for Portfolio Optimization: Parting with the Bell Curve" — Interview with Prof. Svetlozar Rachev and Prof. Stefan Mittnik
"Bruno de Finetti and Mean-Variance Portfolio Selection Article by Mark Rubinstein on Bruno de Finetti's discovery and comments by Markowitz. | Wikipedia/Modern_portfolio_theory |
Physics of financial markets is a non-orthodox economics discipline that studies financial markets as physical systems. It seeks to understand the nature of financial processes and phenomena by employing the scientific method and avoiding beliefs, unverifiable assumptions and immeasurable notions that are not uncommon in economic disciplines.
Physics of financial markets addresses issues such as theory of price formation, price dynamics, market ergodicity, collective phenomena, market self-action, and market instabilities.
Physics of financial markets should not be confused with mathematical finance, which is concerned only with descriptive mathematical modeling of financial instruments, without seeking to understand the nature of the underlying processes.
== See also ==
Econophysics
Social physics
Quantum economics
Thermoeconomics
Quantum finance
Kinetic exchange models of markets
Brownian model of financial markets
Ergodicity economics
== References == | Wikipedia/Physics_of_financial_markets |
The gravity model of international trade in international economics is a model that, in its traditional form, predicts bilateral trade flows based on the economic sizes and distance between two units. Research shows that there is "overwhelming evidence that trade tends to fall with distance."
The model was first introduced by Walter Isard in 1954, who elaborated the concept of "income potential" within the framework of international economics, building upon John Quincy Stewart's earlier idea of demographic gravitation, which had been introduced in 1941. Similarly, Stewart's work on population potential from 1947 had a significant impact on Chauncy Harris, who, in 1954, proposed the economic concept of market potential.
The basic model for trade between two countries (i and j) takes the form of
{\displaystyle F_{ij}=G\cdot {\frac {M_{i}M_{j}}{D_{ij}}}.}
In this formula G is a constant, F stands for trade flow, D stands for the distance and M stands for the economic dimensions of the countries that are being measured. The equation can be changed into a linear form for the purpose of econometric analyses by employing logarithms. The model has been used by economists to analyse the determinants of bilateral trade flows such as common borders, common languages, common legal systems, common currencies, common colonial legacies, and it has been used to test the effectiveness of trade agreements and organizations such as the North American Free Trade Agreement (NAFTA) and the World Trade Organization (WTO) (Head and Mayer 2014). The model has also been used in international relations to evaluate the impact of treaties and alliances on trade (Head and Mayer).
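As a toy numerical illustration of the basic equation (all figures are assumptions, not actual trade data): with G = 1, two economies of size 2.0 and 3.0 separated by a distance of 1,000 units have a predicted flow proportional to 2·3/1000.

```python
def gravity_flow(g, m_i, m_j, d_ij):
    """Predicted bilateral trade flow from the basic gravity equation."""
    return g * m_i * m_j / d_ij

print(gravity_flow(g=1.0, m_i=2.0, m_j=3.0, d_ij=1000.0))   # 0.006
```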
The model has also been applied to other bilateral flow data (also known as "dyadic" data) such as migration, traffic, remittances and foreign direct investment.
== Theoretical justifications and research ==
The model has been an empirical success in that it accurately predicts trade flows between countries for many goods and services, but for a long time some scholars believed that there was no theoretical justification for the gravity equation. However, a gravity relationship can arise in almost any trade model that includes trade costs that increase with distance.
The gravity model estimates the pattern of international trade. While the model’s basic form consists of factors that have more to do with geography and spatiality, the gravity model has been used to test hypotheses rooted in purer economic theories of trade as well. One such theory predicts that trade will be based on relative factor abundances. One of the common relative factor abundance models is the Heckscher–Ohlin model. Those countries with a relative abundance of one factor would be expected to produce goods that require a relatively large amount of that factor in their production. While a generally accepted theory of trade, many economists in the Chicago School believed that the Heckscher–Ohlin model alone was sufficient to describe all trade, while Bertil Ohlin himself argued that in fact the world is more complicated. Investigations into real-world trading patterns have produced a number of results that do not match the expectations of comparative advantage theories. Notably, a study by Wassily Leontief found that the United States, the most capital-endowed country in the world, actually exports more in labor-intensive industries. Comparative advantage in factor endowments would suggest the opposite would occur. Other theories of trade and explanations for this relationship were proposed in order to explain the discrepancy between Leontief’s empirical findings and economic theory. The problem has become known as the Leontief paradox.
An alternative theory, first proposed by Staffan Linder, predicts that patterns of trade will be determined by the aggregated preferences for goods within countries. Those countries with similar preferences would be expected to develop similar industries. With continued similar demand, these countries would continue to trade back and forth in differentiated but similar goods since both demand and produce similar products. For instance, both Germany and the United States are industrialized countries with a high preference for automobiles. Both countries have automobile industries, and both trade cars. The empirical validity of the Linder hypothesis is somewhat unclear. Several studies have found a significant impact of the Linder effect, but others have had weaker results. Studies that do not support Linder have only counted countries that actually trade; they do not input zero values for the dyads where trade could happen but does not. This has been cited as a possible explanation for their findings. Also, Linder never presented a formal model for his theory, so different studies have tested his hypothesis in different ways.
Elhanan Helpman and Paul Krugman asserted that the theory behind comparative advantage does not predict the relationships in the gravity model. Using the gravity model, countries with similar levels of income have been shown to trade more. Helpman and Krugman see this as evidence that these countries are trading in differentiated goods because of their similarities. This casts some doubt about the impact Heckscher–Ohlin has on the real world. Jeffrey Frankel sees the Helpman–Krugman setup here as distinct from Linder’s proposal. However, he does say Helpman–Krugman is different from the usual interpretation of Linder, but, since Linder made no clear model, the association between the two should not be completely discounted. Alan Deardorff adds the possibility, that, while not immediately apparent, the basic gravity model can be derived from Heckscher–Ohlin as well as the Linder and Helpman–Krugman hypotheses. Deardorff concludes that, considering how many models can be tied to the gravity model equation, it is not useful for evaluating the empirical validity of theories.
Bridging economic theory with empirical tests, James Anderson and Jeffrey Bergstrand develop econometric models, grounded in the theories of differentiated goods, which measure the gains from trade liberalizations and the magnitude of the border barriers on trade (see Home bias in trade puzzle). A recent synthesis of empirical research using the gravity equations, however, shows that the effect of border barriers on trade is relatively modest.
Adding to the problem of bridging economic theory with empirical results, some economists have pointed to the possibility of intra-industry trade not as the result of differentiated goods, but because of “reciprocal dumping.” In these models, the countries involved are said to have imperfect competition and segmented markets in homogeneous goods, which leads to intra-industry trade as firms in imperfect competition seek to expand their markets to other countries and trade goods that are not differentiated yet for which they do not have a comparative advantage, since there is no specialization. This model of trade is consistent with the gravity model as it would predict that trade depends on country size.
The reciprocal dumping model has held up to some empirical testing, suggesting that the specialization and differentiated goods models for the gravity equation might not fully explain the gravity equation. Feenstra, Markusen, and Rose (2001) provided evidence for reciprocal dumping by assessing the home market effect in separate gravity equations for differentiated and homogeneous goods. The home market effect showed a relationship in the gravity estimation for differentiated goods, but showed the inverse relationship for homogeneous goods. The authors show that this result matches the theoretical predictions of reciprocal dumping playing a role in homogeneous markets.
Past research using the gravity model has also sought to evaluate the impact of various variables in addition to the basic gravity equation. Among these, price level and exchange rate variables have been shown to have a relationship in the gravity model that accounts for a significant amount of the variance not explained by the basic gravity equation. According to empirical results on price level, the effect of price level varies according to the relationship being examined. For instance, if exports are being examined, a relatively high price level on the part of the importer would be expected to increase trade with that country. A non-linear system of equations is used by Anderson and van Wincoop (2003) to account for the endogenous change in these price terms from trade liberalization. A simpler method is to use a first order log-linearization of this system of equations (Baier and Bergstrand (2009)), or exporter-country-year and importer-country-year dummy variables. For counterfactual analysis, however, one would still need to account for the change in world prices.
== Econometric estimation of gravity equations ==
Since the gravity model for trade does not hold exactly, in econometric applications it is customary to specify
{\displaystyle F_{ij}=G{\frac {M_{i}^{\beta _{1}}M_{j}^{\beta _{2}}}{D_{ij}^{\beta _{3}}}}\eta _{ij}}
where F_ij represents the volume of trade from country i to country j, M_i and M_j typically represent the GDPs of countries i and j, D_ij denotes the distance between the two countries, and η_ij represents an error term with expectation equal to 1.
The traditional approach to estimating this equation consists in taking logs of both sides, leading to a log-log model of the form (note: the constant G becomes part of β_0):
{\displaystyle \ln(F_{ij})=\beta _{0}+\beta _{1}\ln(M_{i})+\beta _{2}\ln(M_{j})-\beta _{3}\ln(D_{ij})+\varepsilon _{ij}.}
However, this approach has two major problems. First, it obviously cannot be used when there are observations for which F_ij is equal to zero. Second, Santos Silva and Tenreyro (2006) argued that estimating the log-linearized equation by least squares (OLS) can lead to significant biases if the researcher believes the true model to be nonlinear in its parameters. As an alternative, these authors have suggested that the model should be estimated in its multiplicative form, i.e.,
{\displaystyle F_{ij}=\exp[\beta _{0}+\beta _{1}\ln(M_{i})+\beta _{2}\ln(M_{j})-\beta _{3}\ln(D_{ij})]\eta _{ij},}
using a Poisson pseudo-maximum likelihood (PPML) estimator based on the Poisson model usually used for count data. As shown by Santos Silva and Tenreyro (2006), PPML estimates of common gravity variables can be different from their OLS counterparts. In particular, they found that the trade-reducing effects of distance were smaller and that the effects of colonial ties were statistically insignificant.
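As a concrete illustration of the two estimation strategies discussed above, the following minimal sketch fits the log-linearized equation by OLS and the multiplicative form by PPML using the statsmodels library. The data are synthetic and the column names (trade, gdp_i, gdp_j, dist) are assumptions introduced for illustration; this is not a replication of any of the studies cited here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic country-pair data (purely illustrative)
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "gdp_i": rng.lognormal(10, 1, n),
    "gdp_j": rng.lognormal(10, 1, n),
    "dist": rng.lognormal(7, 0.5, n),
})
mu = df.gdp_i * df.gdp_j / df.dist
df["trade"] = rng.poisson(50 * mu / mu.mean())   # counts; small pairs may have zero trade

# 1) Traditional log-log OLS: zero flows must be dropped
ols = smf.ols("np.log(trade) ~ np.log(gdp_i) + np.log(gdp_j) + np.log(dist)",
              data=df[df.trade > 0]).fit()

# 2) Poisson pseudo-maximum likelihood (PPML) on the multiplicative form, zeros included
ppml = smf.glm("trade ~ np.log(gdp_i) + np.log(gdp_j) + np.log(dist)",
               data=df, family=sm.families.Poisson()).fit()

print(ols.params)
print(ppml.params)
```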
Though PPML does allow the inclusion of observations where F_ij = 0, it is not necessarily a perfect solution to the "zeroes" problem. Martin and Pham (2008) argued that using PPML on gravity severely biases estimates when zero trade flows are frequent and reflect non-random selection. However, their results were challenged by Santos Silva and Tenreyro (2011), who argued that the simulation results of Martin and Pham (2008) are based on misspecified models and showed that the PPML estimator performs well even when the proportion of zeros is very large. The latter argument assumes that the number of trading firms can be generated via a count data model, with zero trade flows in the data reflecting the probability that no firms engage in trade. This idea was formalized further by Eaton, Kortum, and Sotelo (2012), who advocated for using the bilateral expenditure share as the dependent variable in place of the level of bilateral trade flows.
In applied work, the gravity model is often extended by including variables to account for language relationships, tariffs, contiguity, access to sea, colonial history, and exchange rate regimes. Yet the estimation of structural gravity, based on Anderson and van Wincoop (2003), requires the inclusion of importer and exporter fixed effects, thus limiting the gravity analysis to bilateral trade costs (Baldwin and Taglioni 2007). Aside from OLS and PPML, other methods for gravity estimation include Gamma Pseudo-maximum Likelihood and the "tetrads" method of Head, Mayer, and Ries (2010). The latter involves first transforming the dependent variable in order to cancel out any country-specific factors. This provides another way of focusing only on bilateral trade costs.
== See also ==
Gravity model of migration
Internationalization
Radiation law for human mobility
== Further reading ==
Anderson, James E. (2011). "The Gravity Model". Annual Review of Economics. 3: 133–160.
== Notes ==
== References ==
== External links ==
=== Information ===
Gravity Portal at the United States International Trade Commission
World Bank presentation on the gravity model
Global multi-market simulation using the World Bank's World Integrated Trade Solution (Global Tariff Cuts and Trade Simulator)
A page on the implementation of the Poisson pseudo-maximum likelihood estimator
=== Data ===
World Bank's Trade and Production Database
Resources for data on trade, including the gravity model Archived 2010-08-23 at the Wayback Machine
UNESCAP dataset for gravity model | Wikipedia/Gravity_model_of_trade |
Neoclassical economics is an approach to economics in which the production, consumption, and valuation (pricing) of goods and services are observed as driven by the supply and demand model. According to this line of thought, the value of a good or service is determined through a hypothetical maximization of utility by income-constrained individuals and of profits by firms facing production costs and employing available information and factors of production. This approach has often been justified by appealing to rational choice theory.
Neoclassical economics is the dominant approach to microeconomics and, together with Keynesian economics, formed the neoclassical synthesis which dominated mainstream economics as "neo-Keynesian economics" from the 1950s onward.
== Classification ==
The term was originally introduced by Thorstein Veblen in his 1900 article "Preconceptions of Economic Science", in which he related marginalists in the tradition of Alfred Marshall et al. to those in the Austrian School.
No attempt will here be made even to pass a verdict on the relative claims of the recognized two or three main "schools" of theory, beyond the somewhat obvious finding that, for the purpose in hand, the so-called Austrian school is scarcely distinguishable from the neo-classical, unless it be in the different distribution of emphasis. The divergence between the modernized classical views, on the one hand, and the historical and Marxist schools, on the other hand, is wider, so much so, indeed, as to bar out a consideration of the postulates of the latter under the same head of inquiry with the former.
It was later used by John Hicks, George Stigler, and others to include the work of Carl Menger, William Stanley Jevons, Léon Walras, John Bates Clark, and many others. Today it is usually used to refer to mainstream economics, although it has also been used as an umbrella term encompassing a number of other schools of thought, notably excluding institutional economics, various historical schools of economics, and Marxian economics, in addition to various other heterodox approaches to economics.
Neoclassical economics is characterized by several assumptions common to many schools of economic thought. There is not a complete agreement on what is meant by neoclassical economics, and the result is a wide range of neoclassical approaches to various problem areas and domains—ranging from neoclassical theories of labor to neoclassical theories of demographic changes.
== Theory ==
=== Assumptions and objectives ===
E. Roy Weintraub expressed the view that neoclassical economics rests on three assumptions, although certain branches of neoclassical theory may have different approaches:
People have rational preferences between outcomes that can be identified and associated with values.
Individuals maximize utility and firms maximize profits.
People act independently on the basis of full and relevant information.
From these three assumptions, neoclassical economists have built a structure to understand the allocation of scarce resources among alternative ends—in fact, understanding such allocation is often considered the definition of economics to neoclassical theorists. Here is how William Stanley Jevons presented "the problem of Economics".
Given, a certain population, with various needs and powers of production, in possession of certain lands and other sources of material: required, the mode of employing their labor which will maximize the utility of their produce.
From the basic assumptions of neoclassical economics comes a wide range of theories about various areas of economic activity. For example, profit maximization lies behind the neoclassical theory of the firm, while the derivation of demand curves leads to an understanding of consumer goods, and the supply curve allows an analysis of the factors of production. Utility maximization is the source for the neoclassical theory of consumption, the derivation of demand curves for consumer goods, and the derivation of labor supply curves and reservation demand.
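As a minimal illustration of how a demand curve can be derived from utility maximization, the sketch below maximizes an assumed Cobb–Douglas (log) utility subject to a budget constraint using sympy. The functional form, symbols, and parameter value are illustrative assumptions, not part of the source.

```python
import sympy as sp

x, y, px, py, m, lam = sp.symbols("x y p_x p_y m lam", positive=True)
alpha = sp.Rational(1, 2)                         # assumed Cobb-Douglas weight
U = alpha * sp.log(x) + (1 - alpha) * sp.log(y)   # utility of the two goods
L = U + lam * (m - px * x - py * y)               # Lagrangian with the budget constraint

foc = [sp.diff(L, v) for v in (x, y, lam)]        # first-order conditions
sol = sp.solve(foc, [x, y, lam], dict=True)[0]
print(sol[x], sol[y])                             # demands: x* = m/(2*p_x), y* = m/(2*p_y)
```

Here the demand for each good falls with its own price and rises with income, which is the kind of demand relationship the text refers to.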
=== Supply and demand model ===
Market analysis is typically the neoclassical answer to price questions, such as why does an apple cost less than an automobile, why does the performance of work command a wage, or how to account for interest as a reward for saving. An important device of neoclassical market analysis is the graph presenting supply and demand curves. The curves reflect the behavior of individual buyers and individual sellers. Buyers and sellers interact with each other in and through these markets, and their interactions determine the market prices of anything they buy and sell. In the following graph, the specific price of the commodity being bought/sold is represented by P*.
In reaching agreed outcomes of their interactions, the market behaviors of buyers and sellers are driven by their preferences (= wants, utilities, tastes, choices) and productive abilities (= technologies, resources). This creates a complex relationship between buyers and sellers. Thus, the geometrical analytics of supply and demand is only a simplified way to describe and explore their interaction.
Market supply and demand are aggregated across firms and individuals. Their interactions determine equilibrium output and price. The market supply and demand for each factor of production is derived analogously to those for market final output to determine equilibrium income and the income distribution. Factor demand incorporates the marginal productivity relationship of that factor in the output market.
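A minimal numerical sketch of the market-clearing logic described above, with assumed linear demand and supply schedules (the parameter values are illustrative only):

```python
def demand(p):        # Qd = 100 - 2p (assumed)
    return 100 - 2 * p

def supply(p):        # Qs = 10 + 4p (assumed)
    return 10 + 4 * p

# Setting demand(p) = supply(p): 100 - 2p = 10 + 4p, so P* = 15 and Q* = 70
p_star = (100 - 10) / (2 + 4)
print(p_star, demand(p_star), supply(p_star))
```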
Neoclassical economics emphasizes equilibria, which are the solutions of agent maximization problems. Regularities in economies are explained by methodological individualism, the position that economic phenomena can be explained by aggregating over the behavior of agents. The emphasis is on microeconomics. Institutions, which might be considered as prior to and conditioning individual behavior, are de-emphasized. Economic subjectivism accompanies these emphases. See also general equilibrium.
=== Utility theory of value ===
Neoclassical economics uses the utility theory of value, which states that the value of a good is determined by the marginal utility experienced by the user. This is one of the main distinguishing factors between neoclassical economics and other earlier economic theories, such as Classical and Marxian, which use the labor theory of value that value is determined by the labor required for production.
The partial definition of the neoclassical theory of value states that the value of an object of market exchange is determined by human interaction between the preferences and productive abilities of individuals. This is one of the most important neoclassical hypotheses. However, the neoclassical theory also asks what exactly is causing the supply and demand behaviors of buyers and sellers, and how exactly the preferences and productive abilities of people determine the market prices. Therefore, the neoclassical theory of value is a theory of these forces: the preferences and productive abilities of humans. They are the final causal determinants of the behavior of supply and demand and therefore of value. According to neoclassical economics, individual preferences and productive abilities are the essential forces that generate all other economic events (demands, supplies, and prices).
=== Market failure and externalities ===
Despite favoring markets to organize economic activity, neoclassical theory acknowledges that markets do not always produce the socially desirable outcome due to the presence of externalities. Externalities are considered a form of market failure. Neoclassical economists vary in terms of the significance they ascribe to externalities in market outcomes.
=== Pareto criterion ===
In a market with a very large number of participants and under appropriate conditions, for each good, there will be a unique price that allows all welfare–improving transactions to take place. This price is determined by the actions of the individuals pursuing their preferences. If these prices are flexible, meaning that all parties are able to pursue transactions at any rates they find mutually beneficial, they will, under appropriate assumptions, tend to settle at price levels that allow for all welfare–improving transactions. Under these assumptions, free-market processes yield an optimum of social welfare. This type of group welfare is called the Pareto optimum (criterion) after its discoverer Vilfredo Pareto.
Wolff and Resnick (2012) describe the Pareto optimality in another way. According to them, the term "Pareto optimal point" signifies the equality of consumption and production, which indicates that the demand (as a ratio of marginal utilities) and supply (as a ratio of marginal costs) sides of an economy are in balance with each other. The Pareto optimum point also signifies that society has fully realized its potential output.
Normative judgments in neoclassical economics are shaped by the Pareto criterion. As a result, many neoclassical economists favor a relatively laissez-faire approach to government intervention in markets, since it is very difficult to make a change where no one will be worse off. However, many less conservative neoclassical economists instead use the compensation principle, which says that an intervention is good if the total gains are larger than the total losses, even if losers are not compensated in practice.
=== International trade ===
Neoclassical economics favors free trade according to David Ricardo's theory of comparative advantage. This idea holds that free trade between two countries is mutually beneficial because it allows the greatest total consumption in both countries.
== Origins ==
Classical economics, developed in the 18th and 19th centuries, included a value theory and distribution theory. The value of a product was thought to depend on the costs involved in producing that product. The explanation of costs in classical economics was simultaneously an explanation of distribution. A landlord received rent, workers received wages, and a capitalist tenant farmer received profits on their investment. This classic approach included the work of Adam Smith and David Ricardo.
However, some economists gradually began emphasizing the perceived value of a good to the consumer. They proposed a theory that the value of a product was to be explained with differences in utility (usefulness) to the consumer. (In England, economists tended to conceptualize utility in keeping with the utilitarianism of Jeremy Bentham and later of John Stuart Mill.)
The third step from political economy to economics was the introduction of marginalism and the proposition that economic actors made decisions based on margins. For example, a person decides to buy a second sandwich based on how full he or she is after the first one, and a firm hires a new employee based on the expected increase in profits the employee will bring. This differs from the aggregate decision-making of classical political economy in that it explains how vital goods such as water can be cheap, while luxuries can be expensive.
=== Marginal revolution ===
The change in economic theory from classical to neoclassical economics has been called the "marginal revolution", although it has been argued that the process was slower than the term suggests. It is frequently dated from William Stanley Jevons's Theory of Political Economy (1871), Carl Menger's Principles of Economics (1871), and Léon Walras's Elements of Pure Economics (1874–1877). Historians of economics and economists have debated:
Whether utility or marginalism was more essential to this revolution (whether the noun or the adjective in the phrase "marginal utility" is more important)
Whether there was a revolutionary change of thought or merely a gradual development and change of emphasis from their predecessors
Whether grouping these economists together disguises differences more important than their similarities.
In particular, Jevons saw his economics as an application and development of Jeremy Bentham's utilitarianism and never had a fully developed general equilibrium theory. Menger did not embrace this hedonic conception, explained diminishing marginal utility in terms of subjective prioritization of possible uses, and emphasized disequilibrium and the discrete; further, Menger had an objection to the use of mathematics in economics, while the other two modeled their theories after 19th-century mechanics. Jevons built on the hedonic conception of Bentham or of Mill, while Walras was more interested in the interaction of markets than in explaining the individual psyche.
Alfred Marshall's textbook, Principles of Economics (1890), was the dominant textbook in England a generation later. Marshall's influence extended elsewhere; Italians would compliment Maffeo Pantaleoni by calling him the "Marshall of Italy". Marshall thought classical economics attempted to explain prices by the cost of production. He asserted that earlier marginalists went too far in correcting this imbalance by overemphasizing utility and demand. Marshall thought that "We might as reasonably dispute whether it is the upper or the under blade of a pair of scissors that cuts a piece of paper, as to whether the value is governed by utility or cost of production".
Marshall explained price by the intersection of supply and demand curves. The introduction of different market "periods" was an important innovation of Marshall's:
Market period. The goods produced for sale on the market are taken as given data, e.g. in a fish market. Prices quickly adjust to clear markets.
Short period. Industrial capacity is taken as given. The level of output, the level of employment, the inputs of raw materials, and prices fluctuate to equate marginal cost and marginal revenue, where profits are maximized. Economic rents exist in short period equilibrium for fixed factors, and the rate of profit is not equated across sectors.
Long period. The stock of capital goods, such as factories and machines, is not taken as given. Profit-maximizing equilibria determine both industrial capacity and the level at which it is operated.
Very long period. Technology, population trends, habits, and customs are not taken as given but allowed to vary in very long period models.
Marshall took supply and demand as stable functions and extended supply and demand explanations of prices to all runs. He argued supply was easier to vary in longer runs, and thus became a more important determinant of price in the very long run.
=== Cambridge and Lausanne school ===
The Cambridge and Lausanne schools of economics form the basis of neoclassical economics. Until the 1930s, the evolution of neoclassical economics was determined by the Cambridge school and was based on the marginal equilibrium theory. At the beginning of the 1930s, the Lausanne general equilibrium theory became the general basis of neoclassical economics, and the marginal equilibrium theory came to be understood as its simplification.
The thinking of the Cambridge school continued in the footsteps of classical political economy and its traditions but was based on the new approach that originated from the marginalist revolution. Its founder was Alfred Marshall, and among the main representatives were Arthur Cecil Pigou, Ralph George Hawtrey and Dennis Holme Robertson. Pigou worked on the theory of welfare economics and the quantity theory of money. Hawtrey and Robertson developed the Cambridge cash balance approach to the theory of money and influenced trade cycle theory. Until the 1930s, John Maynard Keynes was also influencing the theoretical concepts of the Cambridge school. The key characteristic of the Cambridge school was its instrumental approach to the economy – the role of the theoretical economist is first to define theoretical instruments of economic analysis and only then to apply them to real economic problems.
The main representatives of the Lausanne school of economic thought were Léon Walras, Vilfredo Pareto and Enrico Barone. The school became famous for developing the general equilibrium theory. In the contemporary economy, the general equilibrium theory is the methodologic basis of mainstream economics in the form of New classical macroeconomics and New Keynesian macroeconomics.
== Evolution ==
The evolution of neoclassical economics can be divided into three phases. The first phase (a pre-Keynesian phase) is dated between the initial forming of neoclassical economics (the second half of the nineteenth century) and the arrival of Keynesian economics in the 1930s. The second phase is dated between 1940 and the mid-1970s. During this era, Keynesian economics dominated the world economy, but neoclassical economics did not cease to exist. It continued to develop its microeconomic theory and began creating its own macroeconomic theory. The development of the neoclassical macroeconomic theory was based on the development of the quantity theory of money and the theory of distribution. One of the products of the second phase was the Neoclassical synthesis, representing a special combination of neoclassical microeconomics and Keynesian macroeconomics. The third phase began in the 1970s. During this era, Keynesian economics was in crisis, which encouraged the creation of new neoclassical lines of thought such as Monetarism and New classical macroeconomics. Despite the diverse focus and approach of these theories, they are all based on the theoretical and methodological principles of traditional neoclassical economics.
An important change in neoclassical economics occurred around 1933. Joan Robinson and Edward H. Chamberlin, with the nearly simultaneous publication of their respective books, The Economics of Imperfect Competition (1933) and The Theory of Monopolistic Competition (1933), introduced models of imperfect competition. Theories of market forms and industrial organization grew out of this work. They also emphasized certain tools, such as the marginal revenue curve. In her book, Robinson formalized a type of limited competition. The conclusions of her work for welfare economics were worrying: they implied that the market mechanism operates in such a way that workers are not paid the full value of their marginal productivity of labor and that the principle of consumer sovereignty is also impaired. This theory heavily influenced the antitrust policies of many Western countries in the 1940s and 1950s.
Joan Robinson's work on imperfect competition, at least, was a response to certain problems of Marshallian partial equilibrium theory highlighted by Piero Sraffa. Anglo-American economists also responded to these problems by turning towards general equilibrium theory, developed on the European continent by Walras and Vilfredo Pareto. J. R. Hicks's Value and Capital (1939) was influential in introducing his English-speaking colleagues to these traditions. He, in turn, was influenced by the Austrian School economist Friedrich Hayek's move to the London School of Economics, where Hicks then studied.
These developments were accompanied by the introduction of new tools, such as indifference curves and the theory of ordinal utility. The level of mathematical sophistication of neoclassical economics increased. Paul Samuelson's Foundations of Economic Analysis (1947) contributed to this increase in mathematical modeling.
The interwar period in American economics has been argued to have been pluralistic, with neoclassical economics and institutionalism competing for allegiance. Frank Knight, an early Chicago school economist attempted to combine both schools. But this increase in mathematics was accompanied by greater dominance of neoclassical economics in Anglo-American universities after World War II. Some argue that outside political interventions, such as McCarthyism, and internal ideological bullying played an important role in this rise to dominance.
Hicks's book Value and Capital had two main parts. The second, which was arguably not immediately influential, presented a model of temporary equilibrium. Hicks was influenced directly by Hayek's notion of intertemporal coordination, and his work was paralleled by earlier work by Lindahl. This was part of an abandonment of disaggregated long-run models. This trend probably reached its culmination with the Arrow–Debreu model of intertemporal equilibrium. The Arrow–Debreu model has canonical presentations in Gérard Debreu's Theory of Value (1959) and in Arrow and Hahn's General Competitive Analysis (1971).
=== Neoclassical synthesis ===
Many of these developments were against the backdrop of improvements in both econometrics, that is the ability to measure prices and changes in goods and services, as well as their aggregate quantities, and in the creation of macroeconomics, or the study of whole economies. The attempt to combine neo-classical microeconomics and Keynesian macroeconomics would lead to the neoclassical synthesis which was the dominant paradigm of economic reasoning in English-speaking countries from the 1950s till the 1970s. Hicks and Samuelson were for example instrumental in mainstreaming Keynesian economics.
The dominance of Keynesian economics was upset by its inability to explain the economic crises of the 1970s; neoclassical economics then emerged distinctly in macroeconomics as the new classical school, which sought to explain macroeconomic phenomena using neoclassical microeconomics. It and its contemporary, New Keynesian economics, contributed to the new neoclassical synthesis of the 1990s, which informs much of mainstream macroeconomics today.
=== Cambridge capital controversy ===
Problems exist with making the neoclassical general equilibrium theory compatible with an economy that develops over time and includes capital goods. This was explored in a major debate in the 1960s—the "Cambridge capital controversy"—about the validity of neoclassical economics, with an emphasis on economic growth, capital, aggregate theory, and the marginal productivity theory of distribution. There were also internal attempts by neoclassical economists to extend the Arrow–Debreu model to disequilibrium investigations of stability and uniqueness. However, a result known as the Sonnenschein–Mantel–Debreu theorem suggests that the assumptions that must be made to ensure that equilibrium is stable and unique are quite restrictive.
== Criticisms ==
Although the neoclassical approach is dominant in economics, the field of economics includes others, such as Marxist, behavioral, Schumpeterian, developmentalist, Austrian, post-Keynesian, Humanistic economics, real-world economics and institutionalist schools. All of these schools differ with the neoclassical school and each other, and incorporate various criticisms of the neoclassical economics. Not all criticism comes from other schools: some prominent economists such as Nobel Prize recipient and former chief economist of the World Bank Joseph Stiglitz are vocally critical of mainstream neoclassical economics.
=== Methodology and mathematical models ===
Some see mathematical models used in contemporary research in mainstream economics as having transcended neoclassical economics, while others disagree. Mathematical models also include those in game theory, linear programming, and econometrics. Critics of neoclassical economics are divided into those who think that highly mathematical method is inherently wrong and those who think that mathematical method is useful even if neoclassical economics has other problems.
Critics such as Tony Lawson contend that neoclassical economics' reliance on functional relations is inadequate for social phenomena in which knowledge of one variable does not reliably predict another. The different factors affecting economic outcomes cannot be experimentally isolated from one another in a laboratory; therefore the explanatory and predictive power of mathematical economic analysis is limited. Lawson proposes an alternative approach called the contrast explanation which he says is better suited for determining causes of events in social sciences. More broadly, critics of economics as a science vary, with some believing that all mathematical economics is problematic or even pseudoscience and others believing it is still useful but has less certainty and higher risk of methodology problems than in other fields.
Milton Friedman, one of the most prominent and influential neoclassical economists of the 20th century, responded to criticisms that assumptions in economic models were often unrealistic by saying that theories should be judged by their ability to predict events rather than by the supposed realism of their assumptions. He even claimed that, on the contrary, the most significant theories typically rest on highly unrealistic assumptions. He argued that a theory's ability to theoretically explain reality is irrelevant compared with its ability to empirically predict reality, regardless of how that prediction is obtained.
=== Objectivity and pluralism ===
Neoclassical economics is often criticized for having a normative bias despite sometimes claiming to be "value-free". Such critics argue an ideological side of neoclassical economics, generally to argue that students should be taught more than one economic theory and that economics departments should be more pluralistic.
=== Rational behavior assumptions ===
One of the most widely criticized aspects of neoclassical economics is its set of assumptions about human behavior and rationality. The "economic man", or a hypothetical human who acts according to neoclassical assumptions, does not necessarily behave the same way as humans do in reality. The economist and critic of capitalism Thorstein Veblen claimed that neoclassical economics assumes a person to be "a lightning calculator of pleasures and pains, who oscillates like a homogeneous globule of desire of happiness under the impulse of stimuli that shift about the area, but leave him intact."
Veblen's characterization references a number of commonly criticized rationality assumptions: that people make decisions using a rigid utilitarian framework, have perfect information available about their options, have perfect information processing ability allowing them to immediately calculate utility for all possible options, and are independent decision-makers whose choices are unaffected by their surroundings or by other people. While Veblen is from the Institutional school, the Behavioral school of economics is focused on studying the mechanisms of human decision-making and how they differ from neoclassical assumptions of rationality. Altruistic or empathy-based behavior is another form of "non-rational" decision making studied by behavioral economists, which differs from the neoclassical assumption that people only act in self-interest. Behavioral economists account for how psychological, neurological, and even emotional factors significantly affect economic perceptions and behaviors.
Rational choice theory need not be problematic, according to a paper by the economist Gary Becker published in 1962 in the Journal of Political Economy, "Irrational Behavior and Economic Theory". According to Becker, the paper demonstrates "how the important theorems of modern economics result from a general principle which not only includes rational behavior and survivor arguments as special cases, but also much irrational behavior." The specific theorems shown in the paper to follow from a broad range of irrational behavior, as well as rational behavior, by market participants are that market demand curves are downward sloping or "negatively inclined", and that if an industry were transformed from a competitive industry into a completely monopolistic cartel with profits always maximized, then output per firm under the cartel would decrease compared with its equilibrium level when the industry was competitive.
This paper was largely based on the 1950 paper "Uncertainty, Evolution, and Economic Theory" by Armen Alchian. Alchian's paper sets out a justification for supply analysis (the representative firm and the way neoclassical economists analyze firm behavior in markets) that does not rely on the assumption of rational consumption, on rational behavior by the decision makers in those firms, or on any other type of foresighted or goal-directed behavior by them. Becker's subsequent 1962 paper provides an independent justification for neoclassical market demand analysis. The two papers offer separate justifications for the use of neoclassical methodology in supply and demand analysis without relying on assumptions otherwise criticized as implausible.
=== Methodological individualism ===
Neoclassical economics offers an approach to studying the economic behavior of homo economicus. This theory is based on methodological individualism and adopts an atomistic approach to social phenomena, according to which the social atoms are the individuals and their actions. According to this doctrine, individuals are independent of social phenomena, but the opposite is not true. Individuals' actions can explain macro-scale behavior; social collections are nothing more than aggregates, and they do not add anything to their components. Although methodological individualism does not negate complex social phenomena such as institutions or behavioral rules, it argues that any explanation should be based on the characteristics of those institutions' constituent components. This is a reductionist approach, based on which it is believed that the characteristics of the social system are derived from individuals' preferences and their actions.
A critique of this approach is that individuals' preferences and interests are not fixed; structures contextualize individuals' preferences. According to social constructivists, systems are co-constituted alongside the actors, and ideas within the system define actors' identities, their interests, and thus their behavior. In this regard, actors in various circumstances (exposed to different impressions and experiences) will construct their interests and preferences differently, both relative to each other and over time. Given the individualistic foundation of the economic theory, critics argue that the theory should take into account the structural contexts of individual action.
=== Inequality ===
Neoclassical economics is often criticized as promoting policies that increase inequality and as failing to recognise the impact of inequality on economic outcomes. In the case of the former claim, neoclassical economics is often used for analysis in support of policies reducing economic inequality—in particular through determining the diminishing marginal utility of income, whereby poorer individuals gain greater net benefits from a given increase in income than comparable richer individuals, but more generally by being the primary means by which the impact on inequality of any given policy is assessed. In the case of the latter claim, neoclassical economics is the prevailing lens through which the relationship between inequality and economic outcomes is studied.
=== Ethics of markets ===
Neoclassical economics tends to promote commodification and privatization of goods due to its principle that market exchange generally results in the most effective allocation of goods. For example, some economists support markets for human organs, on the basis that it increases supply of life-saving organs and benefits willing donors financially. However, there are arguments in moral philosophy that use of markets for certain goods is inherently unethical. Political philosopher Michael Sandel summarizes that market exchanges have two ethical problems: coercion and corruption. Coercion happens because market participation may not be as free as proponents often claim: people often participate in markets because it is the only way to survive, which is not truly voluntary. Corruption describes how commodification of a good can inherently degrade its value.
== See also ==
Marginalism
Market economy
Microeconomics
Static equilibrium (economics)
== References ==
== External links ==
Weintraub, E. Roy (2002). "Neoclassical Economics". In David R. Henderson (ed.). Concise Encyclopedia of Economics (1st ed.). Library of Economics and Liberty. Archived from the original on February 11, 2021. Retrieved September 26, 2010. OCLC 317650570, 50016270, 163149563
Neoclassical Economics, William King, Drexel University | Wikipedia/Neoclassical_economic_theory |
The Bianconi–Barabási model is a model in network science that explains the growth of complex evolving networks. This model can explain that nodes with different characteristics acquire links at different rates. It predicts that a node's growth depends on its fitness and can calculate the degree distribution. The Bianconi–Barabási model is named after its inventors Ginestra Bianconi and Albert-László Barabási. This model is a variant of the Barabási–Albert model. The model can be mapped to a Bose gas and this mapping can predict a topological phase transition between a "rich-get-richer" phase and a "winner-takes-all" phase.
== Concepts ==
The Barabási–Albert (BA) model uses two concepts: growth and preferential attachment. Here, growth indicates the increase in the number of nodes in the network with time, and preferential attachment means that more connected nodes receive more links. The Bianconi–Barabási model, on top of these two concepts, uses another new concept called the fitness. This model makes use of an analogy with evolutionary models. It assigns an intrinsic fitness value to each node, which embodies all the properties other than the degree. The higher the fitness, the higher the probability of attracting new edges. Fitness can be defined as the ability to attract new links – "a quantitative measure of a node's ability to stay in front of the competition".
While the Barabási–Albert (BA) model explains the "first mover advantage" phenomenon, the Bianconi–Barabási model explains how latecomers also can win. In a network where fitness is an attribute, a node with higher fitness will acquire links at a higher rate than less fit nodes. This model explains that age is not the best predictor of a node's success, rather latecomers also have the chance to attract links to become a hub.
The Bianconi–Barabási model can reproduce the degree correlations of the Internet Autonomous Systems. This model can also show condensation phase transitions in the evolution of complex network.
The BB model can predict the topological properties of Internet.
== Algorithm ==
The fitness network begins with a fixed number of interconnected nodes. They have different fitness, which can be described with a fitness parameter η_j, which is chosen from a fitness distribution ρ(η).
=== Growth ===
The assumption here is that a node’s fitness is independent of time, and is fixed.
A new node j with m links and a fitness η_j is added with each time-step.
=== Preferential attachment ===
The probability Π_i that a new node connects to an existing node i in the network depends on the number of edges, k_i, and on the fitness η_i of node i, such that
{\displaystyle \Pi _{i}={\frac {\eta _{i}k_{i}}{\sum _{j}\eta _{j}k_{j}}}.}
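The growth and preferential-attachment rules above can be simulated directly. The sketch below is a minimal Python implementation that only tracks degrees, assuming a uniform fitness distribution and small illustrative parameters; it is not the original authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def bianconi_barabasi(n_nodes=5000, m=2):
    """Grow a fitness network; return node fitnesses and final degrees."""
    eta = rng.uniform(0.0, 1.0, n_nodes)       # assumed fitness distribution rho(eta)
    degree = np.zeros(n_nodes, dtype=int)
    core = m + 1                                # start from a fully connected core
    degree[:core] = m
    for new in range(core, n_nodes):
        w = eta[:new] * degree[:new]            # attachment weights eta_i * k_i
        targets = rng.choice(new, size=m, replace=False, p=w / w.sum())
        degree[targets] += 1
        degree[new] = m
    return eta, degree

eta, degree = bianconi_barabasi()
print("degree of fittest node:", degree[np.argmax(eta)], " max degree:", degree.max())
```

In such runs a late node with high fitness can overtake much older nodes, which is the behavior the model is meant to capture.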
Each node’s evolution with time can be predicted using the continuum theory. If each new node arrives with m links, then the degree of node i changes at the rate:
{\displaystyle {\frac {\partial k_{i}}{\partial t}}=m{\frac {\eta _{i}k_{i}}{\sum _{j}\eta _{j}k_{j}}}}
Assuming that the evolution of k_i follows a power law with a fitness exponent β(η_i),
{\displaystyle k(t,t_{i},\eta _{i})=m\left({\frac {t}{t_{i}}}\right)^{\beta (\eta _{i})},}
where t_i is the time at which node i was added to the network.
Here,
{\displaystyle \beta (\eta )={\frac {\eta }{C}}{\text{ and }}C=\int \rho (\eta ){\frac {\eta }{1-\beta (\eta )}}\,d\eta .}
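For a concrete case, the self-consistency condition above can be solved numerically. With β(η) = η/C, the equation C = ∫ ρ(η) η/(1 − β(η)) dη is equivalent to 1 = ∫ ρ(η) η/(C − η) dη. The sketch below solves this for an assumed uniform fitness distribution on [0, 1]; the root-bracketing interval is a numerical choice, not part of the model.

```python
from scipy.integrate import quad
from scipy.optimize import brentq

def residual(C):
    # 1 = ∫_0^1 rho(eta) * eta / (C - eta) d eta, with rho(eta) = 1 on [0, 1]
    val, _ = quad(lambda eta: eta / (C - eta), 0.0, 1.0)
    return val - 1.0

C_star = brentq(residual, 1.05, 10.0)
print(C_star)   # roughly 1.255, so the largest growth exponent is beta_max = 1/C_star ≈ 0.8
```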
== Properties ==
=== Equal fitnesses ===
If all fitnesses are equal in a fitness network, the Bianconi–Barabási model reduces to the Barabási–Albert model; when the degree is not considered, the model reduces to the fitness model (network theory).
When fitnesses are equal, the probability Π_i that the new node is connected to node i, where k_i is the degree of node i, is
{\displaystyle \Pi _{i}={\frac {k_{i}}{\sum _{j}k_{j}}}.}
=== Degree distribution ===
The degree distribution of the Bianconi–Barabási model depends on the fitness distribution ρ(η). Two scenarios can occur, depending on that distribution. If the fitness distribution has a finite domain, then the degree distribution has a power law, just as in the BA model. If instead the fitness distribution has an infinite domain, then the node with the highest fitness value attracts a large number of nodes and a winner-takes-all scenario appears.
=== Measuring node fitnesses from empirical network data ===
There are various statistical methods to measure node fitnesses η_i in the Bianconi–Barabási model from real-world network data. From the measurement, one can investigate the fitness distribution ρ(η) or compare the Bianconi–Barabási model with various competing network models in that particular network.
=== Variations of the Bianconi–Barabási model ===
The Bianconi–Barabási model has been extended to weighted networks displaying linear and superlinear scaling of the strength with the degree of the nodes as observed in real network data. This weighted model can lead to condensation of the weights of the network when few links acquire a finite fraction of the weight of the entire network.
Recently it has been shown that the Bianconi–Barabási model can be interpreted as a limit case of the model for emergent hyperbolic network geometry called Network Geometry with Flavor.
The Bianconi–Barabási model can be also modified to study static networks where the number of nodes is fixed.
== Bose–Einstein condensation ==
Bose–Einstein condensation in networks is a phase transition observed in complex networks that can be described by the Bianconi–Barabási model. This phase transition predicts a "winner-takes-all" phenomena in complex networks and can be mathematically mapped to the mathematical model explaining Bose–Einstein condensation in physics.
=== Background ===
In physics, a Bose–Einstein condensate is a state of matter that occurs in certain gases at very low temperatures. Any elementary particle, atom, or molecule, can be classified as one of two types: a boson or a fermion. For example, an electron is a fermion, while a photon or a helium atom is a boson. In quantum mechanics, the energy of a (bound) particle is limited to a set of discrete values, called energy levels. An important characteristic of a fermion is that it obeys the Pauli exclusion principle, which states that no two fermions may occupy the same state. Bosons, on the other hand, do not obey the exclusion principle, and any number can exist in the same state. As a result, at very low energies (or temperatures), a great majority of the bosons in a Bose gas can be crowded into the lowest energy state, creating a Bose–Einstein condensate.
Bose and Einstein have established that the statistical properties of a Bose gas are governed by the Bose–Einstein statistics. In Bose–Einstein statistics, any number of identical bosons can be in the same state. In particular, given an energy state ε, the number of non-interacting bosons in thermal equilibrium at temperature T = 1/β is given by the Bose occupation number
{\displaystyle n(\varepsilon )={\frac {1}{e^{\beta (\varepsilon -\mu )}-1}}}
where the constant μ is determined by an equation describing the conservation of the number of particles
{\displaystyle N=\int d\varepsilon \,g(\varepsilon )\,n(\varepsilon )}
with g(ε) being the density of states of the system.
This last equation may lack a solution at low enough temperatures when g(ε) → 0 for ε → 0. In this case a critical temperature Tc is found such that for T < Tc the system is in a Bose-Einstein condensed phase and a finite fraction of the bosons are in the ground state.
The density of states g(ε) depends on the dimensionality of the space. In particular
{\displaystyle g(\varepsilon )\sim \varepsilon ^{\frac {d-2}{2}}}
therefore g(ε) → 0 for ε → 0 only in dimensions d > 2. Therefore, a Bose–Einstein condensation of an ideal Bose gas can only occur for dimensions d > 2.
=== The concept ===
The evolution of many complex systems, including the World Wide Web, business, and citation networks, is encoded in the dynamic web describing the interactions between the system’s constituents. The evolution of these networks is captured by the Bianconi–Barabási model, which includes two main characteristics of growing networks: their constant growth by the addition of new nodes and links and the heterogeneous ability of each node to acquire new links described by the node fitness. Therefore, the model is also known as the fitness model.
Despite their irreversible and nonequilibrium nature, these networks follow the Bose statistics and can be mapped to a Bose gas.
In this mapping, each node is mapped to an energy state determined by its fitness and each new link attached to a given node is mapped to a Bose particle occupying the corresponding energy state. This mapping predicts that the Bianconi–Barabási model can undergo a topological phase transition in correspondence to the Bose–Einstein condensation of the Bose gas. This phase transition is therefore called Bose-Einstein condensation in complex networks.
Consequently, addressing the dynamical properties of these nonequilibrium systems within the framework of equilibrium quantum gases predicts that the “first-mover-advantage,” “fit-get-rich (FGR),” and “winner-takes-all” phenomena observed in competitive systems are thermodynamically distinct phases of the underlying evolving networks.
=== The mathematical mapping of the network evolution to the Bose gas ===
Starting from the Bianconi–Barabási model, the mapping of a Bose gas to a network can be done by assigning an energy ε_i to each node, determined by its fitness through the relation
{\displaystyle \varepsilon _{i}=-{\frac {1}{\beta }}\ln {\eta _{i}}}
where β = 1/T. In particular, when β = 0 all the nodes have equal fitness, while when β ≫ 1 nodes with different "energy" have very different fitness. We assume that the network evolves through a modified preferential attachment mechanism. At each time step a new node i with energy ε_i drawn from a probability distribution p(ε) enters the network and attaches a new link to a node j chosen with probability:
{\displaystyle \Pi _{j}={\frac {e^{-\beta \varepsilon _{j}}k_{j}}{\sum _{r}e^{-\beta \varepsilon _{r}}k_{r}}}.}
In the mapping to a Bose gas, we assign to every new link linked by preferential attachment to node j a particle in the energy state εj.
The continuum theory predicts that the rate at which links accumulate on node i with "energy" εi is given by
{\displaystyle {\frac {\partial k_{i}(\varepsilon _{i},t,t_{i})}{\partial t}}=m{\frac {e^{-\beta \varepsilon _{i}}k_{i}(\varepsilon _{i},t,t_{i})}{Z_{t}}}}
where k_i(ε_i, t, t_i) indicates the number of links attached to node i, which was added to the network at time step t_i, and Z_t is the partition function, defined as:
{\displaystyle Z_{t}=\sum _{i}e^{-\beta \varepsilon _{i}}k_{i}(\varepsilon _{i},t,t_{i}).}
The solution of this differential equation is:
{\displaystyle k_{i}(\varepsilon _{i},t,t_{i})=m\left({\frac {t}{t_{i}}}\right)^{f(\varepsilon _{i})}}
where the dynamic exponent f(ε) satisfies
{\displaystyle f(\varepsilon )=e^{-\beta (\varepsilon -\mu )}}
and μ plays the role of the chemical potential, satisfying the equation
{\displaystyle \int d\varepsilon \,p(\varepsilon ){\frac {1}{e^{\beta (\varepsilon -\mu )}-1}}=1}
where p(ε) is the probability that a node has "energy" ε and "fitness" η = e^{−βε}. In the limit t → ∞, the occupation number, giving the number of links linked to nodes with "energy" ε, follows the familiar Bose statistics
{\displaystyle n(\varepsilon )={\frac {1}{e^{\beta (\varepsilon -\mu )}-1}}.}
The definition of the constant μ in the network models is surprisingly similar to the definition of the chemical potential in a Bose gas. In particular, for probabilities p(ε) such that p(ε) → 0 for ε → 0, at a high enough value of β we have a condensation phase transition in the network model. When this occurs, one node, the one with the highest fitness, acquires a finite fraction of all the links. The Bose–Einstein condensation in complex networks is, therefore, a topological phase transition after which the network has a star-like dominant structure.
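The criterion stated above can be checked numerically: condensation sets in when the equation for μ no longer has a solution, that is, when the integral evaluated as μ approaches the lowest energy from below falls under 1. The sketch below does this for an assumed energy distribution p(ε) = 2ε on [0, 1], which vanishes as ε → 0, and locates the critical value of β; the distribution and the bracketing interval are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def p(eps):
    return 2.0 * eps          # assumed energy distribution, p(eps) -> 0 as eps -> 0

def max_occupation(beta):
    """Largest value of the occupation integral, reached as mu -> 0^- (the lowest energy)."""
    val, _ = quad(lambda e: p(e) / np.expm1(beta * e), 0.0, 1.0)
    return val

# The equation for mu has a solution only while max_occupation(beta) >= 1;
# the critical inverse temperature solves max_occupation(beta) = 1.
beta_c = brentq(lambda b: max_occupation(b) - 1.0, 0.5, 5.0)
print(f"Bose-Einstein condensation for beta > {beta_c:.3f}")
```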
=== Bose–Einstein phase transition in complex networks ===
The mapping of a Bose gas predicts the existence of two distinct phases as a function of the energy distribution. In the fit-get-rich phase, describing the case of uniform fitness, the fitter nodes acquire edges at a higher rate than older but less fit nodes. In the end the fittest node will have the most edges, but the richest node is not the absolute winner, since its share of the edges (i.e. the ratio of its edges to the total number of edges in the system) reduces to zero in the limit of large system sizes (Fig.2(b)). The unexpected outcome of this mapping is the possibility of Bose–Einstein condensation for T < TBE, when the fittest node acquires a finite fraction of the edges and maintains this share of edges over time (Fig.2(c)).
A representative fitness distribution ρ(η) that leads to condensation is given by
{\displaystyle \rho (\eta )=(\lambda +1)(1-\eta )^{\lambda },}
where λ = 1.
However, the existence of the Bose–Einstein condensation or the fit-get-rich phase does not depend on the temperature or β of the system but depends only on the functional form of the fitness distribution ρ(η) of the system. In the end, β falls out of all topologically important quantities. In fact, it can be shown that Bose–Einstein condensation exists in the fitness model even without mapping to a Bose gas. A similar gelation can be seen in models with superlinear preferential attachment; however, it is not clear whether this is an accident or whether a deeper connection lies between this and the fitness model.
== See also ==
Barabási–Albert model
== References ==
== External links ==
Networks: A Very Short Introduction
Advance Network Dynamics | Wikipedia/Bose–Einstein_condensation_(network_theory) |
The Sznajd model or United we stand, divided we fall (USDF) model is a sociophysics model introduced in 2000 to gain fundamental understanding about opinion dynamics. The Sznajd model implements a phenomenon called social validation and thus extends the Ising spin model. In simple words, the model states:
Social validation: If two people share the same opinion, their neighbors will start to agree with them.
Discord destroys: If a block of adjacent persons disagree, their neighbors start to argue with them.
== Mathematical formulation ==
For simplicity, one assumes that each individual i has an opinion S_i which might be Boolean (S_i = −1 for no, S_i = 1 for yes) in its simplest formulation, which means that each individual either agrees or disagrees with a given question.
In the original 1D formulation, each individual has exactly two neighbors, just like beads on a bracelet. At each time step a pair of individuals S_i and S_{i+1} is chosen at random to change their nearest neighbors' opinions (or: Ising spins) S_{i−1} and S_{i+2} according to two dynamical rules:
If S_i = S_{i+1}, then S_{i−1} = S_i and S_{i+2} = S_i. This models social validation: if two people share the same opinion, their neighbors will change their opinion.
If S_i = −S_{i+1}, then S_{i−1} = S_{i+1} and S_{i+2} = S_i. Intuitively: if the given pair of people disagrees, each of their neighbors adopts the opinion of the farther member of the pair.
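The two update rules can be implemented in a few lines. Below is a minimal sketch of the original 1D dynamics on a ring; the system size, number of update steps, and random seed are assumed values, and this is not the original authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)   # assumed seed for reproducibility

def sznajd_1d(n=100, steps=200_000):
    """Original 1D Sznajd dynamics on a ring of n Ising-like opinions (+1 / -1)."""
    s = rng.choice([-1, 1], size=n)
    for _ in range(steps):
        i = rng.integers(n)
        j = (i + 1) % n                      # the chosen pair (S_i, S_{i+1})
        left, right = (i - 1) % n, (i + 2) % n
        if s[i] == s[j]:                     # social validation
            s[left] = s[i]
            s[right] = s[i]
        else:                                # disagreement within the pair
            s[left] = s[j]
            s[right] = s[i]
        if abs(int(s.sum())) == n:           # full consensus reached
            break
    return s

final = sznajd_1d()
print("magnetization:", final.mean())
```

Depending on the initial condition, such runs end either in full consensus or near the alternating stalemate discussed in the findings below.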
=== Findings for the original formulations ===
In a closed (1 dimensional) community, two steady states are always reached, namely complete consensus (which is called ferromagnetic state in physics) or stalemate (the antiferromagnetic state).
Furthermore, Monte Carlo simulations showed that these simple rules lead to complicated dynamics, in particular to a power law in the decision time distribution with an exponent of -1.5.
=== Modifications ===
The final (antiferromagnetic) state of alternating all-on and all-off is unrealistic to represent the behavior of a community. It would mean that the complete population uniformly changes their opinion from one time step to the next. For this reason an alternative dynamical rule was proposed. One possibility is that two spins S_i and S_{i+1} change their nearest neighbors according to the two following rules:
Social validation remains unchanged: if S_i = S_{i+1}, then S_{i−1} = S_i and S_{i+2} = S_i.
If S_i = −S_{i+1}, then S_{i−1} = S_i and S_{i+2} = S_{i+1}.
== Relevance ==
In recent years, statistical physics has been accepted as a modeling framework for phenomena outside traditional physics. Fields such as econophysics and sociophysics have formed, and many quantitative analysts in finance are physicists. The Ising model in statistical physics was a very important step in the history of studying collective (critical) phenomena. The Sznajd model is a simple yet important variation of the prototypical Ising system.
In 2007, Katarzyna Sznajd-Weron was recognized with the Young Scientist Award for Socio- and Econophysics of the Deutsche Physikalische Gesellschaft (German Physical Society) for an outstanding original contribution using physical methods to develop a better understanding of socio-economic problems.
=== Applications ===
The Sznajd model belongs to the class of binary-state dynamics on networks, also referred to as Boolean networks. This class of systems includes the Ising model, the voter model and the q-voter model, the Bass diffusion model, threshold models, and others.
The Sznajd model can be applied to various fields:
The finance interpretation considers the spin state S_i = 1 as a bullish trader placing orders, whereas S_i = 0 would correspond to a trader who is bearish and places sell orders.
== References ==
== External links ==
Katarzyna Sznajd-Weron currently works at the Wrocław University of Technology performing research on interdisciplinary applications of statistical physics, complex systems, critical phenomena, sociophysics and agent-based modeling. | Wikipedia/Sznajd_model |
In probability theory and mathematical physics, a random matrix is a matrix-valued random variable—that is, a matrix in which some or all of its entries are sampled randomly from a probability distribution. Random matrix theory (RMT) is the study of properties of random matrices, often as they become large. RMT provides techniques like mean-field theory, diagrammatic methods, the cavity method, or the replica method to compute quantities like traces, spectral densities, or scalar products between eigenvectors. Many physical phenomena, such as the spectrum of nuclei of heavy atoms, the thermal conductivity of a lattice, or the emergence of quantum chaos, can be modeled mathematically as problems concerning large, random matrices.
== Applications ==
=== Physics ===
In nuclear physics, random matrices were introduced by Eugene Wigner to model the nuclei of heavy atoms. Wigner postulated that the spacings between the lines in the spectrum of a heavy atom nucleus should resemble the spacings between the eigenvalues of a random matrix, and should depend only on the symmetry class of the underlying evolution. In solid-state physics, random matrices model the behaviour of large disordered Hamiltonians in the mean-field approximation.
In quantum chaos, the Bohigas–Giannoni–Schmit (BGS) conjecture asserts that the spectral statistics of quantum systems whose classical counterparts exhibit chaotic behaviour are described by random matrix theory.
In quantum optics, transformations described by random unitary matrices are crucial for demonstrating the advantage of quantum over classical computation (see, e.g., the boson sampling model). Moreover, such random unitary transformations can be directly implemented in an optical circuit, by mapping their parameters to optical circuit components (that is beam splitters and phase shifters).
Random matrix theory has also found applications to the chiral Dirac operator in quantum chromodynamics, quantum gravity in two dimensions, mesoscopic physics, spin-transfer torque, the fractional quantum Hall effect, Anderson localization, quantum dots, and superconductors.
=== Mathematical statistics and numerical analysis ===
In multivariate statistics, random matrices were introduced by John Wishart, who sought to estimate covariance matrices of large samples. Chernoff-, Bernstein-, and Hoeffding-type inequalities can typically be strengthened when applied to the maximal eigenvalue (i.e. the eigenvalue of largest magnitude) of a finite sum of random Hermitian matrices. Random matrix theory is used to study the spectral properties of random matrices—such as sample covariance matrices—which is of particular interest in high-dimensional statistics. Random matrix theory also saw applications in neural networks and deep learning, with recent work utilizing random matrices to show that hyper-parameter tunings can be cheaply transferred between large neural networks without the need for re-training.
In numerical analysis, random matrices have been used since the work of John von Neumann and Herman Goldstine to describe computation errors in operations such as matrix multiplication. Although random entries are traditional "generic" inputs to an algorithm, the concentration of measure associated with random matrix distributions implies that random matrices will not test large portions of an algorithm's input space.
=== Number theory ===
In number theory, the distribution of zeros of the Riemann zeta function (and other L-functions) is modeled by the distribution of eigenvalues of certain random matrices. The connection was first discovered by Hugh Montgomery and Freeman Dyson. It is connected to the Hilbert–Pólya conjecture.
=== Free probability ===
The relation of free probability with random matrices is a key reason for the wide use of free probability in other subjects. Voiculescu introduced the concept of freeness around 1983 in an operator algebraic context; at the beginning there was no relation at all with random matrices. This connection was only revealed later in 1991 by Voiculescu; he was motivated by the fact that the limit distribution which he found in his free central limit theorem had appeared before in Wigner's semi-circle law in the random matrix context.
=== Computational neuroscience ===
In the field of computational neuroscience, random matrices are increasingly used to model the network of synaptic connections between neurons in the brain. Dynamical models of neuronal networks with random connectivity matrix were shown to exhibit a phase transition to chaos when the variance of the synaptic weights crosses a critical value, at the limit of infinite system size. Results on random matrices have also shown that the dynamics of random-matrix models are insensitive to mean connection strength. Instead, the stability of fluctuations depends on connection strength variation and time to synchrony depends on network topology.
In the analysis of massive data such as fMRI, random matrix theory has been applied to perform dimension reduction. When applying an algorithm such as PCA, it is important to be able to select the number of significant components. The criteria for selecting components can be multiple (based on explained variance, Kaiser's method, eigenvalue, etc.). In this context, random matrix theory is represented by the Marchenko–Pastur distribution, which gives the theoretical upper and lower limits of the eigenvalues of the covariance matrix of a purely random variable. The distribution obtained in this way serves as a null hypothesis that allows one to find the eigenvalues (and their eigenvectors) that deviate from the theoretical random range. The components thus excluded become the reduced dimensional space (see examples in fMRI).
=== Optimal control ===
In optimal control theory, the evolution of n state variables through time depends at any time on their own values and on the values of k control variables. With linear evolution, matrices of coefficients appear in the state equation (equation of evolution). In some problems the values of the parameters in these matrices are not known with certainty, in which case there are random matrices in the state equation and the problem is known as one of stochastic control.: ch. 13 A key result in the case of linear-quadratic control with stochastic matrices is that the certainty equivalence principle does not apply: while in the absence of multiplier uncertainty (that is, with only additive uncertainty) the optimal policy with a quadratic loss function coincides with what would be decided if the uncertainty were ignored, the optimal policy may differ if the state equation contains random coefficients.
=== Computational mechanics ===
In computational mechanics, epistemic uncertainties underlying the lack of knowledge about the physics of the modeled system give rise to mathematical operators associated with the computational model, which are deficient in a certain sense. Such operators lack certain properties linked to unmodeled physics. When such operators are discretized to perform computational simulations, their accuracy is limited by the missing physics. To compensate for this deficiency of the mathematical operator, it is not enough to make the model parameters random, it is necessary to consider a mathematical operator that is random and can thus generate families of computational models in the hope that one of these captures the missing physics. Random matrices have been used in this sense, with applications in vibroacoustics, wave propagations, materials science, fluid mechanics, heat transfer, etc.
=== Engineering ===
Random matrix theory can be applied to the electrical and communications engineering research efforts to study, model and develop Massive Multiple-Input Multiple-Output (MIMO) radio systems.
== History ==
Random matrix theory first gained attention beyond mathematics literature in the context of nuclear physics. Experiments by Enrico Fermi and others demonstrated evidence that individual nucleons cannot be approximated to move independently, leading Niels Bohr to formulate the idea of a compound nucleus. Because there was no knowledge of direct nucleon-nucleon interactions, Eugene Wigner and Leonard Eisenbud approximated that the nuclear Hamiltonian could be modeled as a random matrix. For larger atoms, the distribution of the energy eigenvalues of the Hamiltonian could be computed in order to approximate scattering cross sections by invoking the Wishart distribution.
== Gaussian ensembles ==
The most-commonly studied random matrix distributions are the Gaussian ensembles: GOE, GUE and GSE. They are often denoted by their Dyson index, β = 1 for GOE, β = 2 for GUE, and β = 4 for GSE. This index counts the number of real components per matrix element.
=== Definitions ===
The Gaussian unitary ensemble GUE(n) is described by the Gaussian measure with density
(1/Z_GUE(n)) e^{−(n/2) tr H²}
on the space of n × n Hermitian matrices H = (H_ij)_{i,j=1}^n. Here Z_GUE(n) = 2^{n/2} (π/n)^{n²/2} is a normalization constant, chosen so that the integral of the density is equal to one. The term unitary refers to the fact that the distribution is invariant under unitary conjugation. The Gaussian unitary ensemble models Hamiltonians lacking time-reversal symmetry.
The Gaussian orthogonal ensemble GOE(n) is described by the Gaussian measure with density
(1/Z_GOE(n)) e^{−(n/4) tr H²}
on the space of n × n real symmetric matrices H = (H_ij)_{i,j=1}^n. Its distribution is invariant under orthogonal conjugation, and it models Hamiltonians with time-reversal symmetry. Equivalently, it is generated by H = (G + Gᵀ)/√(2n), where G is an n × n matrix with i.i.d. samples from the standard normal distribution.
The Gaussian symplectic ensemble GSE(n) is described by the Gaussian measure with density
(1/Z_GSE(n)) e^{−n tr H²}
on the space of n × n Hermitian quaternionic matrices, e.g. symmetric square matrices composed of quaternions, H = (H_ij)_{i,j=1}^n. Its distribution is invariant under conjugation by the symplectic group, and it models Hamiltonians with time-reversal symmetry but no rotational symmetry.
=== Point correlation functions ===
The ensembles as defined here have Gaussian-distributed matrix elements with mean ⟨H_ij⟩ = 0 and two-point correlations given by
⟨H_ij H*_mn⟩ = ⟨H_ij H_nm⟩ = (1/n) δ_im δ_jn + ((2 − β)/(nβ)) δ_in δ_jm,
from which all higher correlations follow by Isserlis' theorem.
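These covariances are easy to check numerically. The following is a small sketch (assuming the H = (G + Gᵀ)/√(2n) construction of the GOE given above); the matrix size and trial count are arbitrary choices:

```python
import numpy as np

n, trials = 40, 10000
rng = np.random.default_rng(2)
h01 = np.empty(trials)   # an off-diagonal element H_{01}
h00 = np.empty(trials)   # a diagonal element H_{00}
for t in range(trials):
    g = rng.standard_normal((n, n))
    h = (g + g.T) / np.sqrt(2 * n)      # GOE sample
    h01[t], h00[t] = h[0, 1], h[0, 0]

# GOE (beta = 1): Var[H_ij] = 1/n off the diagonal and 2/n on the diagonal
print("off-diagonal variance:", h01.var(), "expected", 1 / n)
print("diagonal variance    :", h00.var(), "expected", 2 / n)
```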
=== Moment generating functions ===
The moment generating function for the GOE is
E[e^{tr(VH)}] = e^{(1/(4N)) ‖V + Vᵀ‖_F²},
where ‖·‖_F is the Frobenius norm.
=== Spectral density ===
The joint probability density for the eigenvalues λ_1, λ_2, ..., λ_n of GUE/GOE/GSE is given by
(1)   p(λ_1, …, λ_n) = (1/Z_{β,n}) e^{−(β/4) Σ_k λ_k²} ∏_{i<j} |λ_i − λ_j|^β,
where Z_{β,n} is a normalization constant which can be explicitly computed, see Selberg integral. In the case of GUE (β = 2), formula (1) describes a determinantal point process. Eigenvalues repel, as the joint probability density has a zero (of β-th order) for coinciding eigenvalues λ_j = λ_i, and Z_{2,n} = (2π)^{n/2} ∏_{k=1}^{n} k!.
More succinctly, the density is
(1/Z_{β,n}) e^{−(β/4) ‖λ‖₂²} |Δ_n(λ)|^β,
where Δ_n is the Vandermonde determinant.
The distributions of the largest eigenvalue for GOE and GUE are explicitly solvable. They converge to the Tracy–Widom distribution after appropriate shifting and scaling.
=== Convergence to Wigner semicircular distribution ===
The spectrum, divided by √(Nσ²), converges in distribution to the semicircular distribution on the interval [−2, +2]:
ρ(x) = (1/(2π)) √(4 − x²).
Here σ² is the variance of the off-diagonal entries; the variance of the on-diagonal entries does not matter.
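A quick numerical illustration (a sketch, with arbitrary matrix size and bin count) samples a GOE matrix via H = (G + Gᵀ)/√(2n) and compares its eigenvalue histogram with the semicircular density:

```python
import numpy as np

def goe(n, rng):
    """Sample an n x n GOE matrix as H = (G + G^T) / sqrt(2 n)."""
    g = rng.standard_normal((n, n))
    return (g + g.T) / np.sqrt(2 * n)

def semicircle(x):
    """Wigner semicircular density on [-2, 2]."""
    return np.sqrt(np.maximum(4 - x**2, 0.0)) / (2 * np.pi)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    eig = np.linalg.eigvalsh(goe(2000, rng))
    hist, edges = np.histogram(eig, bins=50, range=(-2.2, 2.2), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # maximum deviation between the empirical and limiting densities
    print("max |hist - semicircle| =", np.abs(hist - semicircle(centers)).max())
```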
=== Distribution of level spacings ===
From the ordered sequence of eigenvalues λ_1 < … < λ_n < λ_{n+1} < …, one defines the normalized spacings s = (λ_{n+1} − λ_n)/⟨s⟩, where ⟨s⟩ = ⟨λ_{n+1} − λ_n⟩ is the mean spacing. The probability distribution of spacings is approximately given by
p_1(s) = (π/2) s e^{−(π/4) s²}
for the orthogonal ensemble GOE (β = 1),
p_2(s) = (32/π²) s² e^{−(4/π) s²}
for the unitary ensemble GUE (β = 2), and
p_4(s) = (2^18/(3^6 π³)) s⁴ e^{−(64/(9π)) s²}
for the symplectic ensemble GSE (β = 4).
The numerical constants are such that p_β(s) is normalized,
∫_0^∞ p_β(s) ds = 1,
and the mean spacing is
∫_0^∞ s p_β(s) ds = 1,
for β = 1, 2, 4.
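As a rough numerical check (a sketch that uses a crude local unfolding by the mean bulk spacing rather than a careful unfolding), one can compare empirical GUE spacings with the β = 2 expression above; the matrix size and number of samples are arbitrary:

```python
import numpy as np

def gue(n, rng):
    """Sample an n x n GUE matrix."""
    g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (g + g.conj().T) / (2 * np.sqrt(n))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    spacings = []
    for _ in range(200):
        eig = np.sort(np.linalg.eigvalsh(gue(200, rng)))
        bulk = eig[50:150]                 # stay away from the spectral edges
        s = np.diff(bulk)
        spacings.extend(s / s.mean())      # crude local unfolding
    s = np.array(spacings)

    # compare P(s < 0.5) with the beta = 2 spacing distribution
    grid = np.linspace(0.0, 0.5, 1000)
    p2 = (32 / np.pi**2) * grid**2 * np.exp(-4 * grid**2 / np.pi)
    print("empirical P(s < 0.5):", (s < 0.5).mean())
    print("p_2 prediction      :", (p2 * (grid[1] - grid[0])).sum())
```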
== Generalizations ==
Wigner matrices are random Hermitian matrices H_n = (H_n(i, j))_{i,j=1}^n such that the entries {H_n(i, j), 1 ≤ i ≤ j ≤ n} above the main diagonal are independent random variables with zero mean and identical second moments.
The Gaussian ensembles can be extended to β ≠ 1, 2, 4 using the Dumitriu–Edelman tridiagonal ensemble.
Invariant matrix ensembles are random Hermitian matrices with density on the space of real symmetric / Hermitian / quaternionic Hermitian matrices of the form
(1/Z_n) e^{−n tr V(H)},
where the function V is called the potential.
The Gaussian ensembles are the only common special cases of these two classes of random matrices. This is a consequence of a theorem by Porter and Rosenzweig.
== Spectral theory of random matrices ==
The spectral theory of random matrices studies the distribution of the eigenvalues as the size of the matrix goes to infinity.
=== Empirical spectral measure ===
The empirical spectral measure μ_H of H is defined by
μ_H(A) = (1/n) #{eigenvalues of H in A} = N_{1_A, H},   A ⊂ ℝ,
or, more succinctly, if λ_1, …, λ_n are the eigenvalues of H,
μ_H(dλ) = (1/n) Σ_i δ_{λ_i}(dλ).
Usually, the limit of μ_H is a deterministic measure; this is a particular case of self-averaging. The cumulative distribution function of the limiting measure is called the integrated density of states and is denoted N(λ). If the integrated density of states is differentiable, its derivative is called the density of states and is denoted ρ(λ).
=== Types of convergence ===
Given a matrix ensemble, we say that its spectral measures converge weakly to ρ if and only if, for any measurable set A, the ensemble average converges:
lim_{n→∞} E_H[μ_H(A)] = ρ(A).
Convergence weakly almost surely: if we sample H_1, H_2, H_3, … independently from the ensemble, then with probability 1,
lim_{n→∞} μ_{H_n}(A) = ρ(A)
for any measurable set A.
In another sense, weak almost sure convergence means that we sample H_1, H_2, H_3, …, not independently, but by "growing" them (as a stochastic process); then with probability 1,
lim_{n→∞} μ_{H_n}(A) = ρ(A)
for any measurable set A.
For example, we can "grow" a sequence of matrices from the Gaussian ensemble as follows:
Sample a doubly infinite array of standard normal random variables {G_{i,j}}_{i,j = 1, 2, 3, …}.
Define each H_n = (G_n + G_nᵀ)/√(2n), where G_n is the matrix made of the entries {G_{i,j}}_{i,j = 1, 2, …, n}.
Note that generic matrix ensembles do not allow us to grow in this way, but most of the common ones, such as the three Gaussian ensembles, do.
=== Global regime ===
In the global regime, one is interested in the distribution of linear statistics of the form
N_{f,H} = n^{−1} tr f(H).
The limit of the empirical spectral measure for Wigner matrices was described by Eugene Wigner; see Wigner semicircle distribution and Wigner surmise. As far as sample covariance matrices are concerned, a theory was developed by Marčenko and Pastur.
The limit of the empirical spectral measure of invariant matrix ensembles is described by a certain integral equation which arises from potential theory.
==== Fluctuations ====
For the linear statistics N_{f,H} = n^{−1} Σ_j f(λ_j), one is also interested in the fluctuations about ∫ f(λ) dN(λ). For many classes of random matrices, a central limit theorem of the form
(N_{f,H} − ∫ f(λ) dN(λ)) / σ_{f,n}  →  N(0, 1)   (convergence in distribution)
is known.
==== The variational problem for the unitary ensembles ====
Consider the measure
dμ_N(λ) = (1/Z̃_N) e^{−H_N(λ)} dλ,   H_N(λ) = −Σ_{j≠k} ln|λ_j − λ_k| + N Σ_{j=1}^N Q(λ_j),
where Q is the potential of the ensemble, and let ν be the empirical spectral measure.
We can rewrite H_N(λ) in terms of ν as
H_N(λ) = N² [ −∬_{x≠y} ln|x − y| dν(x) dν(y) + ∫ Q(x) dν(x) ],
so the probability measure is now of the form
dμ_N(λ) = (1/Z̃_N) e^{−N² I_Q(ν)} dλ,
where I_Q(ν) is the functional inside the square brackets above.
Let now
M_1(ℝ) = {ν : ν ≥ 0, ∫_ℝ dν = 1}
be the space of one-dimensional probability measures, and consider the minimizer
E_Q = inf_{ν ∈ M_1(ℝ)} [ −∬_{x≠y} ln|x − y| dν(x) dν(y) + ∫ Q(x) dν(x) ].
For E_Q there exists a unique equilibrium measure ν_Q, obtained through the Euler–Lagrange variational conditions for some real constant l:
2 ∫_ℝ log|x − y| dν(y) − Q(x) = l,   x ∈ J,
2 ∫_ℝ log|x − y| dν(y) − Q(x) ≤ l,   x ∈ ℝ \ J,
where J = ⋃_{j=1}^q [a_j, b_j] is the support of the measure. Define
q(x) = −(Q′(x)/2)² + ∫ (Q′(x) − Q′(y))/(x − y) dν_Q(y).
The equilibrium measure ν_Q then has the following Radon–Nikodym density:
dν_Q(x)/dx = (1/π) √(q(x)).
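As an illustrative check (a standard special case, not spelled out above), take the quadratic potential Q(x) = x². Then Q′(x) = 2x, so (Q′(x)/2)² = x², and the integral term equals ∫ 2 dν_Q(y) = 2; hence q(x) = 2 − x², and the equilibrium measure has density dν_Q(x)/dx = (1/π)√(2 − x²) on the support J = [−√2, √2], i.e. a semicircle law, in agreement with the Wigner semicircular distribution discussed above.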
=== Mesoscopic regime ===
The typical statement of the Wigner semicircular law is equivalent to the following: for each fixed interval [λ_0 − Δλ, λ_0 + Δλ] centered at a point λ_0, as N, the dimension of the Gaussian ensemble, increases, the proportion of the eigenvalues falling within the interval converges to ∫_{[λ_0−Δλ, λ_0+Δλ]} ρ(t) dt, where ρ(t) is the density of the semicircular distribution.
If Δλ is allowed to decrease as N increases, then we obtain strictly stronger theorems, named "local laws" or statements about the "mesoscopic regime".
The mesoscopic regime is intermediate between the local and the global regime. In the mesoscopic regime, one is interested in the limit distribution of eigenvalues in a set that shrinks to zero, but slowly enough that the number of eigenvalues inside tends to infinity.
For example, the Ginibre ensemble has a mesoscopic law: for any sequence of shrinking disks inside the unit disk, if the disks have area A_n = O(n^{−1+ε}), the conditional distribution of the spectrum inside the disks also converges to a uniform distribution. That is, if we cut out the shrinking disks along with the spectrum falling inside them, and then scale the disks up to unit area, we see the spectra converging to a flat distribution in the disks.
=== Local regime ===
In the local regime, one is interested in the limit distribution of eigenvalues in a set that shrinks so fast that the number of eigenvalues remains O(1).
Typically this means the study of spacings between eigenvalues, and, more generally, in the joint distribution of eigenvalues in an interval of length of order 1/n. One distinguishes between bulk statistics, pertaining to intervals inside the support of the limiting spectral measure, and edge statistics, pertaining to intervals near the boundary of the support.
==== Bulk statistics ====
Formally, fix λ_0 in the interior of the support of N(λ). Then consider the point process
Ξ(λ_0) = Σ_j δ( · − n ρ(λ_0) (λ_j − λ_0) ),
where λ_j are the eigenvalues of the random matrix.
The point process Ξ(λ_0) captures the statistical properties of the eigenvalues in the vicinity of λ_0. For the Gaussian ensembles, the limit of Ξ(λ_0) is known; thus, for GUE it is a determinantal point process with the kernel
K(x, y) = sin(π(x − y)) / (π(x − y))
(the sine kernel).
The universality principle postulates that the limit of Ξ(λ_0) as n → ∞ should depend only on the symmetry class of the random matrix (and neither on the specific model of random matrices nor on λ_0). Rigorous proofs of universality are known for invariant matrix ensembles and Wigner matrices.
==== Edge statistics ====
One example of edge statistics is the Tracy–Widom distribution.
As another example, consider the Ginibre ensemble, which can be real or complex. The real Ginibre ensemble has i.i.d. standard Gaussian entries N(0, 1), and the complex Ginibre ensemble has i.i.d. standard complex Gaussian entries N(0, 1/2) + i N(0, 1/2).
Now let G_n be sampled from the real or complex ensemble, and let ρ(G_n) be the absolute value of its maximal eigenvalue:
ρ(G_n) := max_j |λ_j|.
We then have a theorem for the edge statistics that refines the circular law of the Ginibre ensemble. In words, the circular law says that the spectrum of (1/√n) G_n almost surely falls uniformly on the unit disk, and the edge statistics theorem states that the radius of the almost-unit-disk is about 1 − √(γ_n/(4n)) and fluctuates on a scale of 1/√(4nγ_n), according to the Gumbel law.
== Correlation functions ==
The joint probability density of the eigenvalues of n × n random Hermitian matrices M ∈ H^{n×n}, with partition functions of the form
Z_n = ∫_{M ∈ H^{n×n}} dμ_0(M) e^{tr(V(M))},
where V(x) := Σ_{j=1}^∞ v_j x^j and dμ_0(M) is the standard Lebesgue measure on the space H^{n×n} of Hermitian n × n matrices, is given by
p_{n,V}(x_1, …, x_n) = (1/Z_{n,V}) ∏_{i<j} (x_i − x_j)² e^{−Σ_i V(x_i)}.
The k-point correlation functions (or marginal distributions) are defined as
R^{(k)}_{n,V}(x_1, …, x_k) = (n!/(n − k)!) ∫_ℝ dx_{k+1} ⋯ ∫_ℝ dx_n p_{n,V}(x_1, x_2, …, x_n),
which are symmetric functions of their variables.
In particular, the one-point correlation function, or density of states, is
R^{(1)}_{n,V}(x_1) = n ∫_ℝ dx_2 ⋯ ∫_ℝ dx_n p_{n,V}(x_1, x_2, …, x_n).
Its integral over a Borel set B ⊂ ℝ gives the expected number of eigenvalues contained in B:
∫_B R^{(1)}_{n,V}(x) dx = E(#{eigenvalues in B}).
The following result expresses these correlation functions as determinants of the matrices formed from evaluating the appropriate integral kernel at the pairs (x_i, x_j) of points appearing within the correlator.
Theorem (Dyson–Mehta). For any k, 1 ≤ k ≤ n, the k-point correlation function R^{(k)}_{n,V} can be written as a determinant
R^{(k)}_{n,V}(x_1, x_2, …, x_k) = det_{1≤i,j≤k}( K_{n,V}(x_i, x_j) ),
where K_{n,V}(x, y) is the n-th Christoffel–Darboux kernel
K_{n,V}(x, y) := Σ_{k=0}^{n−1} ψ_k(x) ψ_k(y)
associated to V, written in terms of the quasipolynomials
ψ_k(x) = (1/√h_k) p_k(x) e^{−V(x)/2},
where {p_k(x)}_{k∈ℕ} is a complete sequence of monic polynomials, of the degrees indicated, satisfying the orthogonality conditions
∫_ℝ ψ_j(x) ψ_k(x) dx = δ_jk.
== Other classes of random matrices ==
=== Wishart matrices ===
Wishart matrices are n × n random matrices of the form H = X X*, where X is an n × m random matrix (m ≥ n) with independent entries, and X* is its conjugate transpose. In the important special case considered by Wishart, the entries of X are identically distributed Gaussian random variables (either real or complex).
The limit of the empirical spectral measure of Wishart matrices was found by Vladimir Marchenko and Leonid Pastur.
=== Random unitary matrices ===
=== Non-Hermitian random matrices ===
== Selected bibliography ==
=== Books ===
Mehta, M.L. (2004). Random Matrices. Amsterdam: Elsevier/Academic Press. ISBN 0-12-088409-7.
Anderson, G.W.; Guionnet, A.; Zeitouni, O. (2010). An introduction to random matrices. Cambridge: Cambridge University Press. ISBN 978-0-521-19452-5.
Akemann, G.; Baik, J.; Di Francesco, P. (2011). The Oxford Handbook of Random Matrix Theory. Oxford: Oxford University Press. ISBN 978-0-19-957400-1.
Potters, Marc; Bouchaud, Jean-Philippe (2020-11-30). A First Course in Random Matrix Theory: for Physicists, Engineers and Data Scientists. Cambridge University Press. doi:10.1017/9781108768900. ISBN 978-1-108-76890-0.
=== Survey articles ===
Edelman, A.; Rao, N.R (2005). "Random matrix theory". Acta Numerica. 14: 233–297. Bibcode:2005AcNum..14..233E. doi:10.1017/S0962492904000236. S2CID 16038147.
Pastur, L.A. (1973). "Spectra of random self-adjoint operators". Russ. Math. Surv. 28 (1): 1–67. Bibcode:1973RuMaS..28....1P. doi:10.1070/RM1973v028n01ABEH001396. S2CID 250796916.
Diaconis, Persi (2003). "Patterns in eigenvalues: the 70th Josiah Willard Gibbs lecture". Bulletin of the American Mathematical Society. New Series. 40 (2): 155–178. doi:10.1090/S0273-0979-03-00975-3. MR 1962294.
Diaconis, Persi (2005). "What is ... a random matrix?". Notices of the American Mathematical Society. 52 (11): 1348–1349. ISSN 0002-9920. MR 2183871.
Eynard, Bertrand; Kimura, Taro; Ribault, Sylvain (2015-10-15). "Random matrices". arXiv:1510.04430v2 [math-ph].
Beenakker, Carlo (1997). "Random-matrix theory of quantum transport". Reviews of Modern Physics. 69: 731. arXiv:cond-mat/9612179. doi:10.1103/RevModPhys.69.731.
=== Historic works ===
Wigner, E. (1955). "Characteristic vectors of bordered matrices with infinite dimensions". Annals of Mathematics. 62 (3): 548–564. doi:10.2307/1970079. JSTOR 1970079.
Wishart, J. (1928). "Generalized product moment distribution in samples". Biometrika. 20A (1–2): 32–52. doi:10.1093/biomet/20a.1-2.32.
von Neumann, J.; Goldstine, H.H. (1947). "Numerical inverting of matrices of high order". Bull. Amer. Math. Soc. 53 (11): 1021–1099. doi:10.1090/S0002-9904-1947-08909-6.
== References ==
== External links ==
Fyodorov, Y. (2011). "Random matrix theory". Scholarpedia. 6 (3): 9886. Bibcode:2011SchpJ...6.9886F. doi:10.4249/scholarpedia.9886.
Weisstein, E. W. "Random Matrix". Wolfram MathWorld. | Wikipedia/Random_matrix_theory |
The kinetic theory of gases is a simple classical model of the thermodynamic behavior of gases. Its introduction allowed many principal concepts of thermodynamics to be established. It treats a gas as composed of numerous particles, too small to be seen with a microscope, in constant, random motion. These particles are now known to be the atoms or molecules of the gas. The kinetic theory of gases uses their collisions with each other and with the walls of their container to explain the relationship between the macroscopic properties of gases, such as volume, pressure, and temperature, as well as transport properties such as viscosity, thermal conductivity and mass diffusivity.
The basic version of the model describes an ideal gas. It treats the collisions as perfectly elastic and as the only interaction between the particles, which are additionally assumed to be much smaller than their average distance apart.
Due to the time reversibility of microscopic dynamics (microscopic reversibility), the kinetic theory is also connected to the principle of detailed balance, in terms of the fluctuation-dissipation theorem (for Brownian motion) and the Onsager reciprocal relations.
The theory was historically significant as the first explicit exercise of the ideas of statistical mechanics.
== History ==
=== Kinetic theory of matter ===
==== Antiquity ====
In about 50 BCE, the Roman philosopher Lucretius proposed that apparently static macroscopic bodies were composed on a small scale of rapidly moving atoms all bouncing off each other. This Epicurean atomistic point of view was rarely considered in the subsequent centuries, when Aristotelian ideas were dominant.
==== Modern era ====
===== "Heat is motion" =====
One of the first and boldest statements on the relationship between motion of particles and heat was by the English philosopher Francis Bacon in 1620. "It must not be thought that heat generates motion, or motion heat (though in some respects this be true), but that the very essence of heat ... is motion and nothing else." "not a ... motion of the whole, but of the small particles of the body." In 1623, in The Assayer, Galileo Galilei, in turn, argued that heat, pressure, smell and other phenomena perceived by our senses are apparent properties only, caused by the movement of particles, which is a real phenomenon.
In 1665, in Micrographia, the English polymath Robert Hooke repeated Bacon's assertion, and in 1675, his colleague, Anglo-Irish scientist Robert Boyle noted that a hammer's "impulse" is transformed into the motion of a nail's constituent particles, and that this type of motion is what heat consists of. Boyle also believed that all macroscopic properties, including color, taste and elasticity, are caused by and ultimately consist of nothing but the arrangement and motion of indivisible particles of matter. In a lecture of 1681, Hooke asserted a direct relationship between the temperature of an object and the speed of its internal particles. "Heat ... is nothing but the internal Motion of the Particles of [a] Body; and the hotter a Body is, the more violently are the Particles moved." In a manuscript published 1720, the English philosopher John Locke made a very similar statement: "What in our sensation is heat, in the object is nothing but motion." Locke too talked about the motion of the internal particles of the object, which he referred to as its "insensible parts".
In his 1744 paper Meditations on the Cause of Heat and Cold, Russian polymath Mikhail Lomonosov made a relatable appeal to everyday experience to gain acceptance of the microscopic and kinetic nature of matter and heat:Movement should not be denied based on the fact it is not seen. Who would deny that the leaves of trees move when rustled by a wind, despite it being unobservable from large distances? Just as in this case motion remains hidden due to perspective, it remains hidden in warm bodies due to the extremely small sizes of the moving particles. In both cases, the viewing angle is so small that neither the object nor their movement can be seen.Lomonosov also insisted that movement of particles is necessary for the processes of dissolution, extraction and diffusion, providing as examples the dissolution and diffusion of salts by the action of water particles on the "molecules of salt", the dissolution of metals in mercury, and the extraction of plant pigments by alcohol.
Also the transfer of heat was explained by the motion of particles. Around 1760, Scottish physicist and chemist Joseph Black wrote: "Many have supposed that heat is a tremulous ... motion of the particles of matter, which ... motion they imagined to be communicated from one body to another."
=== Kinetic theory of gases ===
In 1738 Daniel Bernoulli published Hydrodynamica, which laid the basis for the kinetic theory of gases. In this work, Bernoulli posited the argument, that gases consist of great numbers of molecules moving in all directions, that their impact on a surface causes the pressure of the gas, and that their average kinetic energy determines the temperature of the gas. The theory was not immediately accepted, in part because conservation of energy had not yet been established, and it was not obvious to physicists how the collisions between molecules could be perfectly elastic.: 36–37
Pioneers of the kinetic theory, whose work was also largely neglected by their contemporaries, were Mikhail Lomonosov (1747), Georges-Louis Le Sage (ca. 1780, published 1818), John Herapath (1816) and John James Waterston (1843), who connected their research with the development of mechanical explanations of gravitation.
In 1856 August Krönig created a simple gas-kinetic model, which only considered the translational motion of the particles. In 1857 Rudolf Clausius developed a similar, but more sophisticated version of the theory, which included translational and, contrary to Krönig, also rotational and vibrational molecular motions. In this same work he introduced the concept of mean free path of a particle. In 1859, after reading a paper about the diffusion of molecules by Clausius, Scottish physicist James Clerk Maxwell formulated the Maxwell distribution of molecular velocities, which gave the proportion of molecules having a certain velocity in a specific range. This was the first-ever statistical law in physics. Maxwell also gave the first mechanical argument that molecular collisions entail an equalization of temperatures and hence a tendency towards equilibrium. In his 1873 thirteen page article 'Molecules', Maxwell states: "we are told that an 'atom' is a material point, invested and surrounded by 'potential forces' and that when 'flying molecules' strike against a solid body in constant succession it causes what is called pressure of air and other gases."
In 1871, Ludwig Boltzmann generalized Maxwell's achievement and formulated the Maxwell–Boltzmann distribution. The logarithmic connection between entropy and probability was also first stated by Boltzmann.
At the beginning of the 20th century, atoms were considered by many physicists to be purely hypothetical constructs, rather than real objects. An important turning point was Albert Einstein's (1905) and Marian Smoluchowski's (1906) papers on Brownian motion, which succeeded in making certain accurate quantitative predictions based on the kinetic theory.
Following the development of the Boltzmann equation, a framework for its use in developing transport equations was developed independently by David Enskog and Sydney Chapman in 1917 and 1916. The framework provided a route to prediction of the transport properties of dilute gases, and became known as Chapman–Enskog theory. The framework was gradually expanded throughout the following century, eventually becoming a route to prediction of transport properties in real, dense gases.
== Assumptions ==
The application of kinetic theory to ideal gases makes the following assumptions:
The gas consists of very small particles. This smallness of their size is such that the sum of the volume of the individual gas molecules is negligible compared to the volume of the container of the gas. This is equivalent to stating that the average distance separating the gas particles is large compared to their size, and that the elapsed time during a collision between particles and the container's wall is negligible when compared to the time between successive collisions.
The number of particles is so large that a statistical treatment of the problem is well justified. This assumption is sometimes referred to as the thermodynamic limit.
The rapidly moving particles constantly collide among themselves and with the walls of the container, and all these collisions are perfectly elastic.
Interactions (i.e. collisions) between particles are strictly binary and uncorrelated, meaning that there are no three-body (or higher) interactions, and the particles have no memory.
Except during collisions, the interactions among molecules are negligible. They exert no other forces on one another.
Thus, the dynamics of particle motion can be treated classically, and the equations of motion are time-reversible.
As a simplifying assumption, the particles are usually assumed to have the same mass as one another; however, the theory can be generalized to a mass distribution, with each mass type contributing to the gas properties independently of one another in agreement with Dalton's law of partial pressures. Many of the model's predictions are the same whether or not collisions between particles are included, so they are often neglected as a simplifying assumption in derivations (see below).
More modern developments, such as the revised Enskog theory and the extended Bhatnagar–Gross–Krook model, relax one or more of the above assumptions. These can accurately describe the properties of dense gases, and gases with internal degrees of freedom, because they include the volume of the particles as well as contributions from intermolecular and intramolecular forces as well as quantized molecular rotations, quantum rotational-vibrational symmetry effects, and electronic excitation. While theories relaxing the assumptions that the gas particles occupy negligible volume and that collisions are strictly elastic have been successful, it has been shown that relaxing the requirement of interactions being binary and uncorrelated will eventually lead to divergent results.
== Equilibrium properties ==
=== Pressure and kinetic energy ===
In the kinetic theory of gases, the pressure is assumed to be equal to the force (per unit area) exerted by the individual gas atoms or molecules hitting and rebounding from the gas container's surface.
Consider a gas particle traveling at velocity v_i along the î-direction in an enclosed volume with characteristic length L_i, cross-sectional area A_i, and volume V = A_i L_i. The gas particle encounters a boundary after characteristic time
t = L_i / v_i.
The momentum of the gas particle can then be described as
p_i = m v_i = m L_i / t.
We combine the above with Newton's second law, which states that the force experienced by a particle is related to the time rate of change of its momentum, so that
F_i = dp_i/dt = m L_i / t² = m v_i² / L_i.
Now consider a large number N of gas particles with random orientation in a three-dimensional volume. Because the orientation is random, the average particle speed v in every direction is identical:
v_x² = v_y² = v_z².
Further, assume that the volume is symmetrical about its three dimensions î, ĵ, k̂, such that
V = V_i = V_j = V_k,   F = F_i = F_j = F_k,   A_i = A_j = A_k.
The total surface area on which the gas particles act is therefore
A = 3 A_i.
The pressure exerted by the collisions of the N gas particles with the surface can then be found by adding the force contribution of every particle and dividing by the interior surface area of the volume,
P = N F̄ / A = N L F / V,
⇒ P V = N L F = (N/3) m v².
The total translational kinetic energy K_t of the gas is defined as
K_t = (N/2) m v²,
providing the result
P V = (2/3) K_t.
This is an important, non-trivial result of the kinetic theory because it relates pressure, a macroscopic property, to the translational kinetic energy of the molecules, which is a microscopic property.
The mass density of a gas ρ is expressed through the total mass of the gas particles and the volume of the gas: ρ = N m / V. Taking this into account, the pressure is equal to
P = ρ v² / 3.
The relativistic expression for this formula is
P = (2 ρ c² / 3) ( (1 − v̄²/c²)^{−1/2} − 1 ),
where c is the speed of light. In the limit of small speeds, the expression reduces to
P ≈ ρ v̄² / 3.
=== Temperature and kinetic energy ===
Rewriting the above result for the pressure as P V = (1/3) N m v², we may combine it with the ideal gas law
(1)   P V = N k_B T,
where k_B is the Boltzmann constant and T is the absolute temperature defined by the ideal gas law, to obtain
k_B T = (1/3) m v²,
which leads to a simplified expression of the average translational kinetic energy per molecule,
(1/2) m v² = (3/2) k_B T.
The translational kinetic energy of the system is N times that of a molecule, namely K_t = (1/2) N m v². The temperature T is related to the translational kinetic energy by the description above, resulting in
(2)   T = (1/3) m v² / k_B,
which becomes
(3)   K_t = (3/2) N k_B T.
Equation (3) is one important result of the kinetic theory: the average molecular kinetic energy is proportional to the ideal gas law's absolute temperature.
From equations (1) and (3), we have
(4)   P V = (2/3) K_t.
Thus, the product of pressure and volume per mole is proportional to the average translational molecular kinetic energy.
Equations (1) and (4) are called the "classical results"; they can also be derived from statistical mechanics.
The equipartition theorem requires that kinetic energy is partitioned equally between all kinetic degrees of freedom, D. A monatomic gas is axially symmetric about each spatial axis, so that D = 3 comprising translational motion along each axis. A diatomic gas is axially symmetric about only one axis, so that D = 5, comprising translational motion along three axes and rotational motion along two axes. A polyatomic gas, like water, is not radially symmetric about any axis, resulting in D = 6, comprising 3 translational and 3 rotational degrees of freedom.
Because the equipartition theorem requires that kinetic energy is partitioned equally, the total kinetic energy is
K = D K_t = (D/2) N m v².
Thus, the energy added to the system per gas particle kinetic degree of freedom is
K / (N D) = (1/2) k_B T.
Therefore, the kinetic energy per kelvin of one mole of monatomic ideal gas (D = 3) is
K = (D/2) k_B N_A = (3/2) R,
where N_A is the Avogadro constant and R is the ideal gas constant.
Thus, the ratio of the kinetic energy to the absolute temperature of an ideal monatomic gas can be calculated easily:
per mole: 12.47 J/K
per molecule: 20.7 yJ/K = 129 μeV/K
At standard temperature (273.15 K), the kinetic energy can also be obtained:
per mole: 3406 J
per molecule: 5.65 zJ = 35.2 meV.
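These figures can be reproduced directly from the SI values of the constants (a quick numerical check, assuming the exact defined values of k_B and N_A):

```python
# Quick check of the quoted monatomic-gas (D = 3) energy figures.
k_B = 1.380649e-23      # J/K (exact, SI definition)
N_A = 6.02214076e23     # 1/mol (exact, SI definition)
R = k_B * N_A           # ideal gas constant, J/(mol K)
T = 273.15              # standard temperature, K

print("per mole, per kelvin    :", 1.5 * R, "J/K")        # ~12.47 J/K
print("per molecule, per kelvin:", 1.5 * k_B, "J/K")       # ~2.07e-23 J/K = 20.7 yJ/K
print("per mole at 273.15 K    :", 1.5 * R * T, "J")       # ~3406 J
print("per molecule at 273.15 K:", 1.5 * k_B * T, "J")     # ~5.66e-21 J = 5.66 zJ
```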
At higher temperatures (typically thousands of kelvins), vibrational modes become active to provide additional degrees of freedom, creating a temperature-dependence on D and the total molecular energy. Quantum statistical mechanics is needed to accurately compute these contributions.
=== Collisions with container wall ===
For an ideal gas in equilibrium, the rate of collisions with the container wall and velocity distribution of particles hitting the container wall can be calculated based on naive kinetic theory, and the results can be used for analyzing effusive flow rates, which is useful in applications such as the gaseous diffusion method for isotope separation.
Assume that in the container the number density (number per unit volume) is n = N/V and that the particles obey Maxwell's velocity distribution:
f_Maxwell(v_x, v_y, v_z) dv_x dv_y dv_z = (m/(2π k_B T))^{3/2} e^{−m v²/(2 k_B T)} dv_x dv_y dv_z.
Then for a small area dA on the container wall, a particle with speed v at angle θ from the normal of the area dA will collide with the area within time interval dt if it is within the distance v dt of the area dA. Therefore, all the particles with speed v at angle θ from the normal that can reach area dA within time interval dt are contained in a tilted pipe with a height of v cos(θ) dt and a volume of v cos(θ) dA dt.
The total number of particles that reach area dA within time interval dt also depends on the velocity distribution; all in all, it calculates to
n v cos(θ) dA dt × (m/(2π k_B T))^{3/2} e^{−m v²/(2 k_B T)} (v² sin(θ) dv dθ dφ).
Integrating this over all appropriate velocities, within the constraints v > 0, 0 < θ < π/2, 0 < φ < 2π, yields the number of atomic or molecular collisions with a wall of a container per unit area per unit time:
J_collision = [ ∫_0^{π/2} cos(θ) sin(θ) dθ / ∫_0^π sin(θ) dθ ] × n v̄ = (1/4) n v̄ = (n/4) √(8 k_B T/(π m)).
This quantity is also known as the "impingement rate" in vacuum physics. Note that to calculate the average speed v̄ of the Maxwell velocity distribution, one has to integrate over v > 0, 0 < θ < π, 0 < φ < 2π.
The momentum transfer to the container wall from particles hitting the area dA with speed v at angle θ from the normal, in time interval dt, is
[2 m v cos(θ)] × n v cos(θ) dA dt × (m/(2π k_B T))^{3/2} e^{−m v²/(2 k_B T)} (v² sin(θ) dv dθ dφ).
Integrating this over all appropriate velocities, within the constraints v > 0, 0 < θ < π/2, 0 < φ < 2π, yields the pressure (consistent with the ideal gas law):
P = [ 2 ∫_0^{π/2} cos²(θ) sin(θ) dθ / ∫_0^π sin(θ) dθ ] × n m v_rms² = (1/3) n m v_rms² = (2/3) n ⟨E_kin⟩ = n k_B T.
If this small area A is punched to become a small hole, the effusive flow rate will be
Φ_effusion = J_collision A = n A √(k_B T/(2π m)).
Combined with the ideal gas law, this yields
Φ_effusion = P A / √(2π m k_B T).
The above expression is consistent with Graham's law.
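As a numerical illustration (a sketch; the gas, hole area, pressure and temperature are assumed values, not taken from the text), the effusive flow of nitrogen through a 1 μm² hole at atmospheric pressure and 300 K:

```python
import math

k_B = 1.380649e-23                        # J/K
m_N2 = 28.0134e-3 / 6.02214076e23         # mass of one N2 molecule, kg

def effusion_rate(P, T, A, m):
    """Molecules per second escaping through a small hole of area A."""
    return P * A / math.sqrt(2 * math.pi * m * k_B * T)

if __name__ == "__main__":
    rate = effusion_rate(P=101325.0, T=300.0, A=1e-12, m=m_N2)  # 1 um^2 hole
    print(f"{rate:.3e} molecules per second")                   # on the order of 1e15
```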
To calculate the velocity distribution of particles hitting this small area, we must take into account that all the particles with (v, θ, φ) that hit the area dA within the time interval dt are contained in the tilted pipe with a height of v cos(θ) dt and a volume of v cos(θ) dA dt; therefore, compared to the Maxwell distribution, the velocity distribution will have an extra factor of v cos θ:
f(v, θ, φ) dv dθ dφ = λ v cos θ (m/(2π k_B T))^{3/2} e^{−m v²/(2 k_B T)} (v² sin θ dv dθ dφ)
with the constraints v > 0, 0 < θ < π/2, 0 < φ < 2π. The constant λ can be determined from the normalization condition ∫ f(v, θ, φ) dv dθ dφ = 1 to be λ = 4/v̄, so that overall
f(v, θ, φ) dv dθ dφ = (1/(2π)) (m/(k_B T))² e^{−m v²/(2 k_B T)} (v³ sin θ cos θ dv dθ dφ);   v > 0, 0 < θ < π/2, 0 < φ < 2π.
=== Speed of molecules ===
From the kinetic energy formula it can be shown that
{\displaystyle v_{\text{p}}={\sqrt {2\cdot {\frac {k_{\mathrm {B} }T}{m}}}},}
{\displaystyle {\bar {v}}={\frac {2}{\sqrt {\pi }}}v_{\text{p}}={\sqrt {{\frac {8}{\pi }}\cdot {\frac {k_{\mathrm {B} }T}{m}}}},}
{\displaystyle v_{\text{rms}}={\sqrt {\frac {3}{2}}}v_{\text{p}}={\sqrt {3\cdot {\frac {k_{\mathrm {B} }T}{m}}}},}
where v is in m/s, T is in kelvin, and m is the mass of one molecule of gas in kg. The most probable (or mode) speed {\displaystyle v_{\text{p}}} is 81.6% of the root-mean-square speed {\displaystyle v_{\text{rms}}}, and the mean (arithmetic mean, or average) speed {\displaystyle {\bar {v}}} is 92.1% of the rms speed (isotropic distribution of speeds).
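A minimal Python sketch, assuming nitrogen at 300 K as an example gas, evaluates the three characteristic speeds from the formulas above and reproduces the quoted ratios:

from math import sqrt, pi

k_B = 1.380649e-23          # J/K
T = 300.0                   # assumed temperature, K
m = 4.65e-26                # assumed mass of one N2 molecule, kg

v_p   = sqrt(2.0 * k_B * T / m)          # most probable speed, ~422 m/s
v_bar = sqrt(8.0 * k_B * T / (pi * m))   # mean speed, ~476 m/s
v_rms = sqrt(3.0 * k_B * T / m)          # root-mean-square speed, ~517 m/s
print(v_p / v_rms, v_bar / v_rms)        # about 0.816 and 0.921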
See:
Average
Root-mean-square speed
Arithmetic mean
Mean
Mode (statistics)
=== Mean free path ===
In the kinetic theory of gases, the mean free path is the average distance a molecule travels before it makes its first collision with another molecule. Let {\displaystyle \sigma } be the collision cross section of one molecule colliding with another. As in the previous section, the number density {\displaystyle n} is defined as the number of molecules per (extensive) volume, or {\displaystyle n=N/V}. The collision cross section per volume, or collision cross section density, is {\displaystyle n\sigma }, and it is related to the mean free path {\displaystyle \ell } by
{\displaystyle \ell ={\frac {1}{n\sigma {\sqrt {2}}}}}
Notice that the unit of the collision cross section per volume {\displaystyle n\sigma } is the reciprocal of length.
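A minimal Python sketch, assuming nitrogen at room temperature and atmospheric pressure with an example kinetic diameter of 3.7 Å, gives a mean free path of roughly 70 nm:

from math import sqrt, pi

k_B = 1.380649e-23          # J/K
T = 300.0                   # assumed temperature, K
P = 1.0e5                   # assumed pressure, Pa
d = 3.7e-10                 # assumed kinetic diameter of N2, m

n = P / (k_B * T)                        # number density from the ideal gas law
sigma = pi * d ** 2                      # collision cross section
mfp = 1.0 / (sqrt(2.0) * n * sigma)      # mean free path
print(mfp)                               # about 7e-8 m (tens of nanometres)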
== Transport properties ==
The kinetic theory of gases deals not only with gases in thermodynamic equilibrium, but also, very importantly, with gases not in thermodynamic equilibrium. This means using kinetic theory to consider what are known as "transport properties", such as viscosity, thermal conductivity, mass diffusivity and thermal diffusion.
In its most basic form, kinetic gas theory is only applicable to dilute gases. The extension of kinetic gas theory to dense gas mixtures, Revised Enskog Theory, was developed in 1983-1987 by E. G. D. Cohen, J. M. Kincaid and M. López de Haro, building on work by H. van Beijeren and M. H. Ernst.
=== Viscosity and kinetic momentum ===
In books on elementary kinetic theory one can find results for dilute gas modeling that are used in many fields. Derivation of the kinetic model for shear viscosity usually starts by considering a Couette flow where two parallel plates are separated by a gas layer. The upper plate is moving at a constant velocity to the right due to a force F. The lower plate is stationary, and an equal and opposite force must therefore be acting on it to keep it at rest. The molecules in the gas layer have a forward velocity component {\displaystyle u} which increases uniformly with distance {\displaystyle y} above the lower plate. The non-equilibrium flow is superimposed on a Maxwell–Boltzmann equilibrium distribution of molecular motions.
Inside a dilute gas in a Couette flow setup, let {\displaystyle u_{0}} be the forward velocity of the gas at a horizontal flat layer (labeled as {\displaystyle y=0}); {\displaystyle u_{0}} is along the horizontal direction. The number of molecules arriving at the area {\displaystyle dA} on one side of the gas layer, with speed {\displaystyle v} at angle {\displaystyle \theta } from the normal, in time interval {\displaystyle dt} is
{\displaystyle nv\cos({\theta })\,dA\,dt\times \left({\frac {m}{2\pi k_{\mathrm {B} }T}}\right)^{3/2}\,e^{-{\frac {mv^{2}}{2k_{\mathrm {B} }T}}}(v^{2}\sin {\theta }\,dv\,d\theta \,d\phi )}
These molecules made their last collision at {\displaystyle y=\pm \ell \cos \theta }, where {\displaystyle \ell } is the mean free path. Each molecule will contribute a forward momentum of
{\displaystyle p_{x}^{\pm }=m\left(u_{0}\pm \ell \cos \theta {\frac {du}{dy}}\right),}
where the plus sign applies to molecules from above, and the minus sign to molecules from below. Note that the forward velocity gradient {\displaystyle du/dy} can be considered to be constant over a distance of one mean free path.
Integrating over all appropriate velocities within the constraint {\displaystyle v>0}, {\textstyle 0<\theta <{\frac {\pi }{2}}}, {\displaystyle 0<\phi <2\pi } yields the forward momentum transfer per unit time per unit area (also known as shear stress):
{\displaystyle \tau ^{\pm }={\frac {1}{4}}{\bar {v}}n\cdot m\left(u_{0}\pm {\frac {2}{3}}\ell {\frac {du}{dy}}\right)}
The net rate of momentum per unit area that is transported across the imaginary surface is thus
{\displaystyle \tau =\tau ^{+}-\tau ^{-}={\frac {1}{3}}{\bar {v}}nm\cdot \ell {\frac {du}{dy}}}
Combining the above kinetic equation with Newton's law of viscosity
{\displaystyle \tau =\eta {\frac {du}{dy}}}
gives the equation for shear viscosity, which is usually denoted {\displaystyle \eta _{0}} when it is a dilute gas:
{\displaystyle \eta _{0}={\frac {1}{3}}{\bar {v}}nm\ell }
Combining this equation with the equation for mean free path gives
{\displaystyle \eta _{0}={\frac {1}{3{\sqrt {2}}}}{\frac {m{\bar {v}}}{\sigma }}}
The Maxwell–Boltzmann distribution gives the average (equilibrium) molecular speed as
{\displaystyle {\bar {v}}={\frac {2}{\sqrt {\pi }}}v_{\text{p}}=2{\sqrt {{\frac {2}{\pi }}{\frac {k_{\mathrm {B} }T}{m}}}}}
where {\displaystyle v_{\text{p}}} is the most probable speed. We note that
{\displaystyle k_{\text{B}}N_{\text{A}}=R\quad {\text{and}}\quad M=mN_{\text{A}}}
and insert the velocity in the viscosity equation above. This gives the well-known equation (with {\displaystyle \sigma } subsequently estimated below) for the shear viscosity of dilute gases:
{\displaystyle \eta _{0}={\frac {2}{3{\sqrt {\pi }}}}\cdot {\frac {\sqrt {mk_{\mathrm {B} }T}}{\sigma }}={\frac {2}{3{\sqrt {\pi }}}}\cdot {\frac {\sqrt {MRT}}{\sigma N_{\text{A}}}}}
where {\displaystyle M} is the molar mass. The equation above presupposes that the gas density is low (i.e. the pressure is low). This implies that the transport of momentum through the gas due to the translational motion of molecules is much larger than the transport due to momentum being transferred between molecules during collisions. The transfer of momentum between molecules is explicitly accounted for in Revised Enskog theory, which relaxes the requirement of a gas being dilute. The viscosity equation further presupposes that there is only one type of gas molecule, and that the gas molecules are perfectly elastic, hard-core particles of spherical shape. This assumption of elastic, hard-core spherical molecules, like billiard balls, implies that the collision cross section of one molecule can be estimated by
{\displaystyle \sigma =\pi \left(2r\right)^{2}=\pi d^{2}}
The radius {\displaystyle r} is called the collision cross section radius or kinetic radius, and the diameter {\displaystyle d} is called the collision cross section diameter or kinetic diameter of a molecule in a monomolecular gas. There is no simple general relation between the collision cross section and the hard core size of the (fairly spherical) molecule. The relation depends on the shape of the potential energy of the molecule. For a real spherical molecule (i.e. a noble gas atom or a reasonably spherical molecule) the interaction potential is more like the Lennard-Jones potential or Morse potential, which have a negative part that attracts the other molecule from distances longer than the hard core radius. The radius for zero Lennard-Jones potential may then be used as a rough estimate for the kinetic radius. However, using this estimate will typically lead to an erroneous temperature dependency of the viscosity. For such interaction potentials, significantly more accurate results are obtained by numerical evaluation of the required collision integrals.
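A minimal Python sketch, using assumed example values for nitrogen (kinetic diameter 3.7 Å, 300 K), evaluates the dilute-gas viscosity formula; the simple hard-sphere estimate comes out near 1.2e-5 Pa·s, the right order of magnitude but below the measured value of about 1.8e-5 Pa·s:

from math import sqrt, pi

k_B = 1.380649e-23          # J/K
T = 300.0                   # assumed temperature, K
m = 4.65e-26                # assumed mass of one N2 molecule, kg
d = 3.7e-10                 # assumed kinetic diameter, m

sigma = pi * d ** 2
eta_0 = (2.0 / (3.0 * sqrt(pi))) * sqrt(m * k_B * T) / sigma
print(eta_0)                # about 1.2e-5 Pa*s; measured N2 viscosity is about 1.8e-5 Pa*s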
The expression for viscosity obtained from Revised Enskog Theory reduces to the above expression in the limit of infinite dilution, and can be written as
{\displaystyle \eta =(1+\alpha _{\eta })\eta _{0}+\eta _{c}}
where {\displaystyle \alpha _{\eta }} is a term that tends to zero in the limit of infinite dilution, which accounts for excluded volume, and {\displaystyle \eta _{c}} is a term accounting for the transfer of momentum over a non-zero distance between particles during a collision.
=== Thermal conductivity and heat flux ===
Following a similar logic as above, one can derive the kinetic model for thermal conductivity of a dilute gas:
Consider two parallel plates separated by a gas layer. Both plates have uniform temperatures, and are so massive compared to the gas layer that they can be treated as thermal reservoirs. The upper plate has a higher temperature than the lower plate. The molecules in the gas layer have a molecular kinetic energy {\displaystyle \varepsilon } which increases uniformly with distance {\displaystyle y} above the lower plate. The non-equilibrium energy flow is superimposed on a Maxwell–Boltzmann equilibrium distribution of molecular motions.
Let {\displaystyle \varepsilon _{0}} be the molecular kinetic energy of the gas at an imaginary horizontal surface inside the gas layer. The number of molecules arriving at an area {\displaystyle dA} on one side of the gas layer, with speed {\displaystyle v} at angle {\displaystyle \theta } from the normal, in time interval {\displaystyle dt} is
{\displaystyle nv\cos(\theta )\,dA\,dt\times \left({\frac {m}{2\pi k_{\mathrm {B} }T}}\right)^{3/2}e^{-{\frac {mv^{2}}{2k_{\mathrm {B} }T}}}(v^{2}\sin(\theta )\,dv\,d\theta \,d\phi )}
These molecules made their last collision at a distance {\displaystyle \ell \cos \theta } above and below the gas layer, and each will contribute a molecular kinetic energy of
{\displaystyle \varepsilon ^{\pm }=\left(\varepsilon _{0}\pm mc_{v}\ell \cos \theta \,{\frac {dT}{dy}}\right),}
where {\displaystyle c_{v}} is the specific heat capacity. Again, the plus sign applies to molecules from above, and the minus sign to molecules from below. Note that the temperature gradient {\displaystyle dT/dy} can be considered to be constant over a distance of one mean free path.
Integrating over all appropriate velocities within the constraint {\displaystyle v>0}, {\textstyle 0<\theta <{\frac {\pi }{2}}}, {\displaystyle 0<\phi <2\pi } yields the energy transfer per unit time per unit area (also known as heat flux):
{\displaystyle q_{y}^{\pm }=-{\frac {1}{4}}{\bar {v}}n\cdot \left(\varepsilon _{0}\pm {\frac {2}{3}}mc_{v}\ell {\frac {dT}{dy}}\right)}
Note that the energy transfer from above is in the {\displaystyle -y} direction, and therefore the overall minus sign in the equation. The net heat flux across the imaginary surface is thus
{\displaystyle q=q_{y}^{+}-q_{y}^{-}=-{\frac {1}{3}}{\bar {v}}nmc_{v}\ell \,{\frac {dT}{dy}}}
Combining the above kinetic equation with Fourier's law
{\displaystyle q=-\kappa \,{\frac {dT}{dy}}}
gives the equation for thermal conductivity, which is usually denoted {\displaystyle \kappa _{0}} when it is a dilute gas:
{\displaystyle \kappa _{0}={\frac {1}{3}}{\bar {v}}nmc_{v}\ell }
Similarly to viscosity, Revised Enskog theory yields an expression for thermal conductivity that reduces to the above expression in the limit of infinite dilution, and which can be written as
{\displaystyle \kappa =\alpha _{\kappa }\kappa _{0}+\kappa _{c}}
where {\displaystyle \alpha _{\kappa }} is a term that tends to unity in the limit of infinite dilution, accounting for excluded volume, and {\displaystyle \kappa _{c}} is a term accounting for the transfer of energy across a non-zero distance between particles during a collision.
=== Diffusion coefficient and diffusion flux ===
Following a similar logic as above, one can derive the kinetic model for mass diffusivity of a dilute gas:
Consider a steady diffusion between two regions of the same gas with perfectly flat and parallel boundaries separated by a layer of the same gas. Both regions have uniform number densities, but the upper region has a higher number density than the lower region. In the steady state, the number density at any point is constant (that is, independent of time). However, the number density {\displaystyle n} in the layer increases uniformly with distance {\displaystyle y} above the lower plate. The non-equilibrium molecular flow is superimposed on a Maxwell–Boltzmann equilibrium distribution of molecular motions.
Let {\displaystyle n_{0}} be the number density of the gas at an imaginary horizontal surface inside the layer. The number of molecules arriving at an area {\displaystyle dA} on one side of the gas layer, with speed {\displaystyle v} at angle {\displaystyle \theta } from the normal, in time interval {\displaystyle dt} is
{\displaystyle nv\cos(\theta )\,dA\,dt\times \left({\frac {m}{2\pi k_{\mathrm {B} }T}}\right)^{3/2}e^{-{\frac {mv^{2}}{2k_{\mathrm {B} }T}}}(v^{2}\sin(\theta )\,dv\,d\theta \,d\phi )}
These molecules made their last collision at a distance {\displaystyle \ell \cos \theta } above and below the gas layer, where the local number density is
{\displaystyle n^{\pm }=\left(n_{0}\pm \ell \cos \theta \,{\frac {dn}{dy}}\right)}
Again, the plus sign applies to molecules from above, and the minus sign to molecules from below. Note that the number density gradient {\displaystyle dn/dy} can be considered to be constant over a distance of one mean free path.
Integrating over all appropriate velocities within the constraint {\displaystyle v>0}, {\textstyle 0<\theta <{\frac {\pi }{2}}}, {\displaystyle 0<\phi <2\pi } yields the molecular transfer per unit time per unit area (also known as diffusion flux):
{\displaystyle J_{y}^{\pm }=-{\frac {1}{4}}{\bar {v}}\cdot \left(n_{0}\pm {\frac {2}{3}}\ell \,{\frac {dn}{dy}}\right)}
Note that the molecular transfer from above is in the {\displaystyle -y} direction, and therefore the overall minus sign in the equation. The net diffusion flux across the imaginary surface is thus
{\displaystyle J=J_{y}^{+}-J_{y}^{-}=-{\frac {1}{3}}{\bar {v}}\ell {\frac {dn}{dy}}}
Combining the above kinetic equation with Fick's first law of diffusion
{\displaystyle J=-D{\frac {dn}{dy}}}
gives the equation for mass diffusivity, which is usually denoted {\displaystyle D_{0}} when it is a dilute gas:
{\displaystyle D_{0}={\frac {1}{3}}{\bar {v}}\ell }
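A minimal Python sketch, reusing the assumed nitrogen-like example values from the viscosity estimate above, evaluates the dilute-gas diffusivity and gives roughly 1e-5 m^2/s:

from math import sqrt, pi

k_B = 1.380649e-23          # J/K
T = 300.0                   # assumed temperature, K
P = 1.0e5                   # assumed pressure, Pa
m = 4.65e-26                # assumed mass of one molecule, kg
d = 3.7e-10                 # assumed kinetic diameter, m

n = P / (k_B * T)
v_bar = sqrt(8.0 * k_B * T / (pi * m))
mfp = 1.0 / (sqrt(2.0) * n * pi * d ** 2)
D_0 = v_bar * mfp / 3.0
print(D_0)                  # roughly 1e-5 m^2/s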
The corresponding expression obtained from Revised Enskog Theory may be written as
{\displaystyle D=\alpha _{D}D_{0}}
where {\displaystyle \alpha _{D}} is a factor that tends to unity in the limit of infinite dilution, which accounts for excluded volume and the variation of chemical potentials with density.
== Detailed balance ==
=== Fluctuation and dissipation ===
The kinetic theory of gases entails that due to the microscopic reversibility of the gas particles' detailed dynamics, the system must obey the principle of detailed balance. Specifically, the fluctuation-dissipation theorem applies to the Brownian motion (or diffusion) and the drag force, which leads to the Einstein–Smoluchowski equation:
{\displaystyle D=\mu \,k_{\text{B}}T,}
where
D is the mass diffusivity;
μ is the "mobility", or the ratio of the particle's terminal drift velocity to an applied force, μ = vd/F;
kB is the Boltzmann constant;
T is the absolute temperature.
Note that the mobility μ = vd/F can be calculated based on the viscosity of the gas; therefore, the Einstein–Smoluchowski equation also provides a relation between the mass diffusivity and the viscosity of the gas.
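A minimal Python sketch illustrates the relation for an assumed micron-sized sphere suspended in air, with the mobility estimated from Stokes drag; the particle radius and gas viscosity are example values:

from math import pi

k_B = 1.380649e-23          # J/K
T = 300.0                   # assumed temperature, K
eta = 1.8e-5                # assumed gas (air) viscosity, Pa*s
r = 0.5e-6                  # assumed particle radius, m

mu = 1.0 / (6.0 * pi * eta * r)    # mobility from Stokes drag, (m/s)/N
D = mu * k_B * T                   # Einstein-Smoluchowski relation
print(D)                           # about 2e-11 m^2/s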
=== Onsager reciprocal relations ===
The mathematical similarities between the expressions for shear viscosity, thermal conductivity and diffusion coefficient of the ideal (dilute) gas are not a coincidence; they are a direct result of the Onsager reciprocal relations (i.e. the detailed balance of the reversible dynamics of the particles), when applied to the convection (matter flow due to temperature gradient, and heat flow due to pressure gradient) and advection (matter flow due to the velocity of particles, and momentum transfer due to pressure gradient) of the ideal (dilute) gas.
== See also ==
Bogoliubov-Born-Green-Kirkwood-Yvon hierarchy of equations
Boltzmann equation
Chapman–Enskog theory
Collision theory
Critical temperature
Gas laws
Heat
Interatomic potential
Magnetohydrodynamics
Maxwell–Boltzmann distribution
Mixmaster universe
Thermodynamics
Vicsek model
Vlasov equation
== References ==
=== Citations ===
=== Sources cited ===
== Further reading ==
Sydney Chapman and Thomas George Cowling (1939/1970), The Mathematical Theory of Non-uniform Gases: An Account of the Kinetic Theory of Viscosity, Thermal Conduction and Diffusion in Gases, (first edition 1939, second edition 1952), third edition 1970 prepared in co-operation with D. Burnett, Cambridge University Press, London
Joseph Oakland Hirschfelder, Charles Francis Curtiss, and Robert Byron Bird (1964), Molecular Theory of Gases and Liquids, revised edition (Wiley-Interscience), ISBN 978-0471400653
Richard Lawrence Liboff (2003), Kinetic Theory: Classical, Quantum, and Relativistic Descriptions, third edition (Springer), ISBN 978-0-387-21775-8
Behnam Rahimi and Henning Struchtrup (2016), "Macroscopic and kinetic modelling of rarefied polyatomic gases", Journal of Fluid Mechanics, 806, 437–505, DOI 10.1017/jfm.2016.604
== External links ==
PHYSICAL CHEMISTRY – Gases
Early Theories of Gases
Thermodynamics Archived 2017-02-28 at the Wayback Machine - a chapter from an online textbook
Temperature and Pressure of an Ideal Gas: The Equation of State on Project PHYSNET.
Introduction to the kinetic molecular theory of gases, from The Upper Canada District School Board
Java animation illustrating the kinetic theory from University of Arkansas
Flowchart linking together kinetic theory concepts, from HyperPhysics
Interactive Java Applets allowing high school students to experiment and discover how various factors affect rates of chemical reactions.
https://www.youtube.com/watch?v=47bF13o8pb8&list=UUXrJjdDeqLgGjJbP1sMnH8A A demonstration apparatus for the thermal agitation in gases. | Wikipedia/Kinetic_theory_of_gas |
Laser science or laser physics is a branch of optics that describes the theory and practice of lasers.
Laser science is principally concerned with quantum electronics, laser construction, optical cavity design, the physics of producing a population inversion in laser media, and the temporal evolution of the light field in the laser. It is also concerned with the physics of laser beam propagation, particularly the physics of Gaussian beams, with laser applications, and with associated fields such as nonlinear optics and quantum optics.
== History ==
Laser science predates the invention of the laser itself. Albert Einstein created the foundations for the laser and maser in 1917, via a paper in which he re-derived Max Planck’s law of radiation using a formalism based on probability coefficients (Einstein coefficients) for the absorption, spontaneous emission, and stimulated emission of electromagnetic radiation. The existence of stimulated emission was confirmed in 1928 by Rudolf W. Ladenburg. In 1939, Valentin A. Fabrikant made the earliest laser proposal. He specified the conditions required for light amplification using stimulated emission. In 1947, Willis E. Lamb and R. C. Retherford found apparent stimulated emission in hydrogen spectra and effected the first demonstration of stimulated emission; in 1950, Alfred Kastler (Nobel Prize for Physics 1966) proposed the method of optical pumping, experimentally confirmed, two years later, by Brossel, Kastler, and Winter.
The theoretical principles describing the operation of a microwave laser (a maser) were first described by Nikolay Basov and Alexander Prokhorov at the All-Union Conference on Radio Spectroscopy in May 1952. The first maser was built by Charles H. Townes, James P. Gordon, and Herbert J. Zeiger in 1953. Townes, Basov and Prokhorov were awarded the Nobel Prize in Physics in 1964 for their research in the field of stimulated emission. Arthur Ashkin, Gérard Mourou, and Donna Strickland were awarded the Nobel Prize in Physics in 2018 for groundbreaking inventions in the field of laser physics.
The first working laser (a pulsed ruby laser) was demonstrated on May 16, 1960, by Theodore Maiman at the Hughes Research Laboratories.
== See also ==
Laser acronyms
List of laser types
== References ==
== External links ==
A very detailed tutorial on lasers | Wikipedia/Laser_science |
In quantum field theory, a ghost, ghost field, ghost particle, or gauge ghost refers to an unphysical state in a gauge theory. These ghosts are introduced to maintain gauge invariance in theories where the number of local field components exceeds the number of physical degrees of freedom. Ghosts ensure mathematical consistency in gauge theories.
If a given theory is made self-consistent by the introduction of ghosts, these states are labeled "good". Good ghosts are virtual particles that are introduced for regularization, like Faddeev–Popov ghosts. Otherwise, "bad" ghosts admit undesired non-virtual states in a theory, like Pauli–Villars ghosts, which introduce particles with negative kinetic energy.
An example of the need for ghost fields is the photon, which is usually described by a four-component vector potential Aμ, even though light has only two allowed polarizations in the vacuum. To remove the unphysical degrees of freedom, it is necessary to enforce some restrictions; one way to do this reduction is to introduce a ghost field into the theory. While it is not always necessary to add ghosts to quantize the electromagnetic field, ghost fields are strictly needed to consistently and rigorously quantize non-Abelian Yang–Mills theory, as is done with BRST quantization.
A field with a negative ghost number (the number of ghosts excitations in the field) is called an anti-ghost.
== Good ghosts ==
Good ghosts are virtual particles that are introduced to maintain mathematical consistency in a gauge theory, and they often serve as a tool for regularization.
A popular example is the Faddeev–Popov ghosts, which arise in the quantization of non-abelian gauge theories. These ghosts assist in the elimination of unphysical degrees of freedom and preserve gauge invariance.
=== Faddeev–Popov ghosts ===
Faddeev–Popov ghosts are extraneous anticommuting fields which are introduced to maintain the consistency of the path integral formulation in non-abelian gauge theories, such as the ones describing the strong force.
The need for them can be illustrated as follows: a naive description of a particle's motion may contain more variables than necessary, many of which do not correspond to anything real or observable. The same situation occurs in gauge theories because of their symmetry properties. To remove these unphysical variables, the physicists Ludvig Faddeev and Victor Popov introduced the Faddeev–Popov ghosts, which act like virtual erasers, eliminating the contributions of the unphysical variables and ensuring that only the physical ones remain, thereby preserving gauge invariance.
They are named after Ludvig Faddeev and Victor Popov.
=== Goldstone bosons ===
Goldstone bosons are sometimes referred to as ghosts, mainly when speaking about the vanishing bosons of the spontaneous symmetry breaking of the electroweak symmetry through the Higgs mechanism. These good ghosts are artifacts of gauge fixing. The longitudinal polarization components of the W and Z bosons correspond to the Goldstone bosons of the spontaneously broken part of the electroweak symmetry SU(2)⊗U(1), which, however, are not observable. Because this symmetry is gauged, the three would-be Goldstone bosons, or ghosts, are "eaten" by the three gauge bosons (W± and Z) corresponding to the three broken generators; this gives these three gauge bosons a mass, and the associated necessary third polarization degree of freedom.
== Bad ghosts ==
"Bad ghosts" represent another, more general meaning of the word "ghost" in theoretical physics: states of negative norm, or fields with the wrong sign of the kinetic term, such as Pauli–Villars ghosts, whose existence allows the probabilities to be negative thus violating unitarity.
=== Ghost condensate ===
A ghost condensate is a speculative proposal in which a ghost, an excitation of a field with a wrong sign of the kinetic term, acquires a vacuum expectation value. This phenomenon breaks Lorentz invariance spontaneously. Around the new vacuum state, all excitations have a positive norm, and therefore the probabilities are positive definite.
We have a real scalar field {\displaystyle \phi } with the following action
{\displaystyle S=\int d^{4}x\left[aX^{2}-bX\right]}
where a and b are positive constants and
{\displaystyle X\ {\stackrel {\mathrm {def} }{=}}\ {\frac {1}{2}}\partial ^{\mu }\phi \partial _{\mu }\phi }
The theories of ghost condensate predict specific non-Gaussianities of the cosmic microwave background. These theories have been proposed by Nima Arkani-Hamed, Markus Luty, and others.
Unfortunately, this theory allows for superluminal propagation of information in some cases and has no lower bound on its energy. The model does not admit a Hamiltonian formulation (the Legendre transform is multi-valued because the momentum function is not convex) and is acausal. Quantizing this theory leads to problems.
=== Landau ghost ===
The Landau pole is sometimes referred to as the Landau ghost. Named after Lev Landau, this ghost is an inconsistency in the renormalization procedure in which there is no asymptotic freedom at large energy scales.
== See also ==
No-ghost theorem, related to bad ghosts
BRST quantization, scheme to deal with ghosts
Quantum scar (sometimes called ghosts)
Phantom energy
== References ==
== External links ==
Copeland, Ed; Padilla, Antonio (26 October 2011). Haran, Brady (ed.). Ghost Particles (video). Sixty Symbols. University of Nottingham. | Wikipedia/Ghost_(physics) |
In experimental physics, researchers have proposed non-extensive self-consistent thermodynamic theory to describe phenomena observed in the Large Hadron Collider (LHC). This theory investigates a fireball model of high-energy particle collisions using Tsallis non-extensive thermodynamics. Fireballs lead to the bootstrap idea, or self-consistency principle, just as in the Boltzmann statistics used by Rolf Hagedorn. Assuming that the distribution function varies due to a possible change of symmetry, Abdel Nasser Tawfik applied the non-extensive concepts to high-energy particle production.
The motivation to use the non-extensive statistics from Tsallis comes from the results obtained by Bediaga et al. They showed that with the substitution of the Boltzmann factor in Hagedorn's theory by the q-exponential function, it was possible to recover good agreement between calculation and experiment, even at energies as high as those achieved at the LHC, with q>1.
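As a sketch of this substitution, the following minimal Python function implements the q-exponential that replaces the Boltzmann factor; the chosen values of E/T and q are arbitrary examples, and for q greater than 1 the factor develops the heavier, power-like tail mentioned above:

from math import exp

def q_exp(x, q):
    # q-exponential; reduces to the ordinary exponential as q -> 1
    if abs(q - 1.0) < 1e-12:
        return exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

E_over_T = 5.0
print(q_exp(-E_over_T, 1.0))    # ordinary Boltzmann factor, ~6.7e-3
print(q_exp(-E_over_T, 1.1))    # q = 1.1: heavier, power-like tail, ~1.7e-2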
== Non-extensive entropy for ideal quantum gas ==
The starting point of the theory is entropy for a non-extensive quantum gas of bosons and fermions, as proposed by Conroy, Miller and Plastino, which is given by
{\displaystyle S_{q}=S_{q}^{FD}+S_{q}^{BE}}
where {\displaystyle S_{q}^{FD}} is the non-extensive version of the Fermi–Dirac entropy and {\displaystyle S_{q}^{BE}} is the non-extensive version of the Bose–Einstein entropy.
According to that group, and also to Clemens and Worku, the entropy just defined leads to occupation number formulas that reduce to Bediaga's. C. Beck shows the power-like tails present in the distributions found in high-energy physics experiments.
== Non-extensive partition function for ideal quantum gas ==
Using the entropy defined above, the resulting partition function is
{\displaystyle \ln[1+Z_{q}(V_{o},T)]={\frac {V_{o}}{2\pi ^{2}}}\sum _{n=1}^{\infty }{\frac {1}{n}}\int _{0}^{\infty }dm\int _{0}^{\infty }dp\,p^{2}\rho (n;m)[1+(q-1)\beta {\sqrt {p^{2}+m^{2}}}]^{-{\frac {nq}{(q-1)}}}\,.}
Since experiments have shown that {\displaystyle q>1}, this restriction is adopted.
Another way to write the non-extensive partition function for a fireball is
{\displaystyle Z_{q}(V_{o},T)=\int _{0}^{\infty }\sigma (E)[1+(q-1)\beta E]^{-{\frac {q}{(q-1)}}}dE\,,}
where {\displaystyle \sigma (E)} is the density of states of the fireballs.
== Self-consistency principle ==
Self-consistency implies that both forms of the partition function must be asymptotically equivalent and that the mass spectrum and the density of states must be related to each other by
{\displaystyle \log[\rho (m)]=\log[\sigma (E)]}
in the limit of {\displaystyle m,E} sufficiently large.
The self-consistency can be asymptotically achieved by choosing
{\displaystyle m^{3/2}\rho (m)={\frac {\gamma }{m}}{\big [}1+(q_{o}-1)\beta _{o}m{\big ]}^{\frac {1}{q_{o}-1}}={\frac {\gamma }{m}}[1+(q'_{o}-1)m]^{\frac {\beta _{o}}{q'_{o}-1}}}
and
{\displaystyle \sigma (E)=bE^{a}{\big [}1+(q'_{o}-1)E{\big ]}^{\frac {\beta _{o}}{q'_{o}-1}}\,,}
where {\displaystyle \gamma } is a constant and {\displaystyle q'_{o}-1=\beta _{o}(q_{o}-1)}. Here, {\displaystyle a,b,\gamma } are arbitrary constants. For {\displaystyle q'\rightarrow 1} the two expressions above approach the corresponding expressions in Hagedorn's theory.
== Main results ==
With the mass spectrum and density of states given above, the asymptotic form of the partition function is
{\displaystyle Z_{q}(V_{o},T)\rightarrow {\bigg (}{\frac {1}{\beta -\beta _{o}}}{\bigg )}^{\alpha }}
where
{\displaystyle \alpha ={\frac {\gamma V_{o}}{2\pi ^{2}\beta ^{3/2}}}\,,}
with
{\displaystyle a+1=\alpha ={\frac {\gamma V_{o}}{2\pi ^{2}\beta ^{3/2}}}\,.}
One immediate consequence of the expression for the partition function is the existence of a limiting temperature {\displaystyle T_{o}=1/\beta _{o}}. This result is equivalent to Hagedorn's result. With these results, it is expected that at sufficiently high energy, the fireball presents a constant temperature and constant entropic factor.
The connection between Hagedorn's theory and Tsallis statistics has been established through the concept of thermofractals, where it is shown that non-extensivity can emerge from a fractal structure. This result is interesting because Hagedorn's definition of a fireball characterizes it as a fractal.
== Experimental evidence ==
Experimental evidence of the existence of a limiting temperature and of a limiting entropic index can be found in J. Cleymans and collaborators, and by I. Sena and A. Deppman.
== See also ==
Self-consistency principle in high energy physics
== References == | Wikipedia/Non-extensive_self-consistent_thermodynamical_theory |
HyperPhysics is an educational website about physics topics.
The information architecture of the website is based on HyperCard, the platform on which the material was originally developed, and a thesaurus organization, with thousands of controlled links and usual trees organizing topics from general to specific. It also exploits concept maps to facilitate smooth navigation. HyperPhysics is hosted by Georgia State University and authored by Georgia State faculty member Dr. Rod Nave.
In the early 2000s, various teaching and education facilitators made use of HyperPhysics material through projects and organizations, and also publishers which use SciLinks.
== References ==
== External links ==
Official website
"HyperPhysics". A 'Google Custom Search' engine. Search utility for Georgia State University's HyperPhysics
Wild, Sarah (2023-02-07). "HyperPhysics, the popular online physics resource, turns 25". Physics Today. 2023 (4): 0207a. Bibcode:2023PhT..2023d.207.. doi:10.1063/PT.6.4.20230207a. S2CID 257170424. | Wikipedia/HyperPhysics |
A timeline of atomic and subatomic physics, including particle physics.
== Antiquity ==
6th - 2nd Century BCE Kanada (philosopher) proposes that anu is an indestructible particle of matter, an "atom"; anu is an abstraction and not observable.
430 BCE Democritus speculates about fundamental indivisible particles—calls them "atoms"
== The beginning of chemistry ==
1766 Henry Cavendish discovers and studies hydrogen
1778 Carl Scheele and Antoine Lavoisier discover that air is composed mostly of nitrogen and oxygen
1781 Joseph Priestley creates water by igniting hydrogen and oxygen
1800 William Nicholson and Anthony Carlisle use electrolysis to separate water into hydrogen and oxygen
1803 John Dalton introduces atomic ideas into chemistry and states that matter is composed of atoms of different weights
1805 (approximate time) Thomas Young conducts the double-slit experiment with light
1811 Amedeo Avogadro claims that equal volumes of gases should contain equal numbers of molecules
1815 William Prout hypothesizes that all matter is built up from hydrogen, adumbrating the proton;
1832 Michael Faraday states his laws of electrolysis
1838 Richard Laming hypothesized a subatomic particle carrying electric charge;
1839 Alexandre Edmond Becquerel discovered the photovoltaic effect
1858 Julius Plücker produced cathode rays;
1871 Dmitri Mendeleyev systematically examines the periodic table and predicts the existence of gallium, scandium, and germanium
1873 Johannes van der Waals introduces the idea of weak attractive forces between molecules
1874 George Johnstone Stoney hypothesizes a minimum unit of electric charge. In 1891, he coins the word electron for it;
1885 Johann Balmer finds a mathematical expression for observed hydrogen line wavelengths
1886 Eugen Goldstein produced anode rays;
1887 Heinrich Hertz discovers the photoelectric effect
1894 Lord Rayleigh and William Ramsay discover argon by spectroscopically analyzing the gas left over after nitrogen and oxygen are removed from air
1895 William Ramsay discovers terrestrial helium by spectroscopically analyzing gas produced by decaying uranium
1896 Antoine Henri Becquerel discovers the radioactivity of uranium
1896 Pieter Zeeman studies the splitting of sodium D lines when sodium is held in a flame between strong magnetic poles
1897 J. J. Thomson discovered the electron;
1897 Emil Wiechert, Walter Kaufmann and J.J. Thomson discover the electron
1898 Marie and Pierre Curie discovered the existence of the radioactive elements radium and polonium in their research of pitchblende
1898 William Ramsay and Morris Travers discover neon, and negatively charged beta particles
== The age of quantum mechanics ==
1887 Heinrich Rudolf Hertz discovers the photoelectric effect that will play a very important role in the development of the quantum theory with Einstein's explanation of this effect in terms of quanta of light
1896 Wilhelm Conrad Röntgen discovers the X-rays while studying electrons in plasma; scattering X-rays—that were considered as 'waves' of high-energy electromagnetic radiation—Arthur Compton will be able to demonstrate in 1922 the 'particle' aspect of electromagnetic radiation.
1899 Ernest Rutherford discovered the alpha and beta particles emitted by uranium;
1900 Johannes Rydberg refines the expression for observed hydrogen line wavelengths
1900 Max Planck states his quantum hypothesis and blackbody radiation law
1900 Paul Villard discovers gamma-rays while studying uranium decay
1902 Philipp Lenard observes that maximum photoelectron energies are independent of illuminating intensity but depend on frequency
1905 Albert Einstein explains the photoelectric effect
1906 Charles Barkla discovers that each element has a characteristic X-ray and that the degree of penetration of these X-rays is related to the atomic weight of the element
1908-1911 Jean Perrin proves the existence of atoms and molecules with experimental work to test Einstein's theoretical explanation of Brownian motion
1909 Ernest Rutherford and Thomas Royds demonstrate that alpha particles are doubly ionized helium atoms
1909 Hans Geiger and Ernest Marsden discover large angle deflections of alpha particles by thin metal foils
1911 Ernest Rutherford explains the Geiger–Marsden experiment by invoking a nuclear atom model and derives the Rutherford cross section
1911 Ștefan Procopiu measures the magnetic dipole moment of the electron
1912 Max von Laue suggests using crystal lattices to diffract X-rays
1912 Walter Friedrich and Paul Knipping diffract X-rays in zinc blende
1913 Henry Moseley shows that nuclear charge is the real basis for numbering the elements
1913 Johannes Stark demonstrates that strong electric fields will split the Balmer spectral line series of hydrogen
1913 Niels Bohr presents his quantum model of the atom
1913 Robert Millikan measures the fundamental unit of electric charge
1913 William Henry Bragg and William Lawrence Bragg work out the Bragg condition for strong X-ray reflection
1914 Ernest Rutherford suggests that the positively charged atomic nucleus contains protons
1914 James Franck and Gustav Hertz observe atomic excitation
1915 Arnold Sommerfeld develops a modified Bohr atomic model with elliptic orbits to explain relativistic fine structure
1916 Gilbert N. Lewis and Irving Langmuir formulate an electron shell model of chemical bonding
1917 Albert Einstein introduces the idea of stimulated radiation emission
1918 Ernest Rutherford notices that, when alpha particles were shot into nitrogen gas, his scintillation detectors showed the signatures of hydrogen nuclei.
1921 Alfred Landé introduces the Landé g-factor
1922 Arthur Compton studies X-ray photon scattering by electrons demonstrating the 'particle' aspect of electromagnetic radiation.
1922 Otto Stern and Walther Gerlach show "spin quantization"
1923 Lise Meitner discovers what is now referred to as the Auger process
1924 John Lennard-Jones proposes a semiempirical interatomic force law
1924 Louis de Broglie suggests that electrons may have wavelike properties in addition to their 'particle' properties; the wave–particle duality has been later extended to all fermions and bosons.
1924 Santiago Antúnez de Mayolo proposes a neutron.
1924 Satyendra Bose and Albert Einstein introduce Bose–Einstein statistics
1925 George Uhlenbeck and Samuel Goudsmit postulate electron spin
1925 Pierre Auger discovers the Auger process (2 years after Lise Meitner)
1925 Werner Heisenberg, Max Born, and Pascual Jordan formulate quantum matrix mechanics
1925 Wolfgang Pauli states the quantum exclusion principle for electrons
1926 Enrico Fermi discovers the spin–statistics connection, for particles that are now called 'fermions', such as the electron (of spin-1/2).
1926 Erwin Schrödinger proves that the wave and matrix formulations of quantum theory are mathematically equivalent
1926 Erwin Schrödinger states his nonrelativistic quantum wave equation and formulates quantum wave mechanics
1926 Gilbert N. Lewis introduces the term "photon", thought by him to be "the carrier of radiant energy."
1926 Oskar Klein and Walter Gordon state their relativistic quantum wave equation, now the Klein–Gordon equation
1926 Paul Dirac introduces Fermi–Dirac statistics
1927 Charles Drummond Ellis (along with James Chadwick and colleagues) finally establish clearly that the beta decay spectrum is in fact continuous and not discrete, posing a problem that will later be solved by theorizing (and later discovering) the existence of the neutrino.
1927 Clinton Davisson, Lester Germer, and George Paget Thomson confirm the wavelike nature of electrons
1927 Thomas and Fermi develop the Thomas–Fermi model
1927 Max Born interprets the probabilistic nature of wavefunctions
1927 Max Born and Robert Oppenheimer introduce the Born–Oppenheimer approximation
1927 Walter Heitler and Fritz London introduce the concepts of valence bond theory and apply it to the hydrogen molecule.
1927 Werner Heisenberg states the quantum uncertainty principle
1928 Chandrasekhara Raman studies optical photon scattering by electrons
1928 Charles G. Darwin and Walter Gordon solve the Dirac equation for a Coulomb potential
1928 Friedrich Hund and Robert S. Mulliken introduce the concept of molecular orbital
1928 Paul Dirac states the Dirac equation
1929 Nevill Mott derives the Mott cross section for the Coulomb scattering of relativistic electrons
1929 Oskar Klein discovers the Klein paradox
1929 Oskar Klein and Yoshio Nishina derive the Klein–Nishina cross section for high energy photon scattering by electrons
1930 Wolfgang Pauli postulated the neutrino to explain the energy spectrum of beta decays;
1930 Erwin Schrödinger predicts the zitterbewegung motion
1930 Fritz London explains van der Waals forces as due to the interacting fluctuating dipole moments between molecules
1930 Paul Dirac introduces electron hole theory
1931 Harold Urey discovers deuterium using evaporation concentration techniques and spectroscopy
1931 Irène Joliot-Curie and Frédéric Joliot observe but misinterpret neutron scattering in paraffin
1931 John Lennard-Jones proposes the Lennard-Jones interatomic potential
1931 Linus Pauling discovers resonance bonding and uses it to explain the high stability of symmetric planar molecules
1931 Paul Dirac shows that charge quantization can be explained if magnetic monopoles exist
1931 Wolfgang Pauli puts forth the neutrino hypothesis to explain the apparent violation of energy conservation in beta decay
1932 Carl D. Anderson discovers the positron
1932 James Chadwick discovers the neutron
1932 John Cockcroft and Ernest Walton split lithium and boron nuclei using proton bombardment
1932 Werner Heisenberg presents the proton–neutron model of the nucleus and uses it to explain isotopes
1933 Ernst Stueckelberg (1932), Lev Landau (1932), and Clarence Zener discover the Landau–Zener transition
1933 Max Delbrück suggests that quantum effects will cause photons to be scattered by an external electric field
1934 Enrico Fermi publishes a very successful model of beta decay in which neutrinos were produced.
1934 Enrico Fermi suggests bombarding uranium atoms with neutrons to make a 93 proton element
1934 Irène Joliot-Curie and Frédéric Joliot bombard aluminium atoms with alpha particles to create artificially radioactive phosphorus-30
1934 Leó Szilárd realizes that nuclear chain reactions may be possible
1934 Lev Landau tells Edward Teller that non-linear molecules may have vibrational modes which remove the degeneracy of an orbitally degenerate state (Jahn–Teller effect)
1934 Pavel Cherenkov reports that light is emitted by relativistic particles traveling in a nonscintillating liquid
1935 Albert Einstein, Boris Podolsky, and Nathan Rosen put forth the EPR paradox
1935 Henry Eyring develops the transition state theory
1935 Hideki Yukawa presents a theory of the nuclear force and predicts the scalar meson
1935 Niels Bohr presents his analysis of the EPR paradox
1936 Carl D. Anderson discovered the muon while he studied cosmic radiation;
1936 Alexandru Proca formulates the relativistic quantum field equations for a massive vector meson of spin-1 as a basis for nuclear forces
1936 Eugene Wigner develops the theory of neutron absorption by atomic nuclei
1936 Hermann Arthur Jahn and Edward Teller present their systematic study of the symmetry types for which the Jahn–Teller effect is expected
1937 Carl Anderson proves experimentally the existence of the pion predicted by Yukawa's theory.
1937 Hans Hellmann finds the Hellmann–Feynman theorem
1937 Seth Neddermeyer, Carl Anderson, J.C. Street, and E.C. Stevenson discover muons using cloud chamber measurements of cosmic rays
1939 Lise Meitner and Otto Robert Frisch determine that nuclear fission is taking place in the Hahn–Strassmann experiments
1939 Otto Hahn and Fritz Strassmann bombard uranium salts with thermal neutrons and discover barium among the reaction products
1939 Richard Feynman finds the Hellmann–Feynman theorem
1942 Enrico Fermi makes the first controlled nuclear chain reaction
1942 Ernst Stueckelberg introduces the propagator to positron theory and interprets positrons as negative energy electrons moving backwards through spacetime
== Quantum field theory ==
1947 George Dixon Rochester and Clifford Charles Butler discovered the kaon, the first strange particle;
1947 Cecil Powell, César Lattes, and Giuseppe Occhialini discover the pi meson by studying cosmic ray tracks
1947 Richard Feynman presents his propagator approach to quantum electrodynamics
1947 Willis Lamb and Robert Retherford measure the Lamb–Retherford shift
1948 Hendrik Casimir predicts a rudimentary attractive Casimir force on a parallel plate capacitor
1951 Martin Deutsch discovers positronium
1952 David Bohm propose his interpretation of quantum mechanics
1953 Robert Wilson observes Delbruck scattering of 1.33 MeV gamma-rays by the electric fields of lead nuclei
1953 Charles H. Townes, collaborating with J. P. Gordon, and H. J. Zeiger, builds the first ammonia maser
1954 Chen Ning Yang and Robert Mills investigate a theory of hadronic isospin by demanding local gauge invariance under isotopic spin space rotations, the first non-Abelian gauge theory
1955 Owen Chamberlain, Emilio Segrè, Clyde Wiegand, and Thomas Ypsilantis discover the antiproton
1955 and 1956 Murray Gell-Mann and Kazuhiko Nishijima independently derive the Gell-Mann–Nishijima formula, which relates the baryon number, the strangeness, and the isospin of hadrons to the charge, eventually leading to the systematic categorization of hadrons and, ultimately, the quark model of hadron composition.
1956 Clyde Cowan and Frederick Reines discovered the (electron) neutrino;
1956 Chen Ning Yang and Tsung Lee propose parity violation by the weak nuclear force
1956 Chien Shiung Wu discovers parity violation by the weak force in decaying cobalt
1956 Frederick Reines and Clyde Cowan detect antineutrino
1957 Bruno Pontecorvo postulated the flavor oscillation;
1957 Gerhart Luders proves the CPT theorem
1957 Richard Feynman, Murray Gell-Mann, Robert Marshak, and E.C.G. Sudarshan propose a vector/axial vector (VA) Lagrangian for weak interactions.
1958 Marcus Sparnaay experimentally confirms the Casimir effect
1959 Yakir Aharonov and David Bohm predict the Aharonov–Bohm effect
1960 R.G. Chambers experimentally confirms the Aharonov–Bohm effect
1961 Jeffrey Goldstone considers the breaking of global phase symmetry
1961 Murray Gell-Mann and Yuval Ne'eman discover the Eightfold Way patterns, the SU(3) group
1962 Leon Lederman shows that the electron neutrino is distinct from the muon neutrino
1963 Eugene Wigner discovers the fundamental roles played by quantum symmetries in atoms and molecules
== The formation and successes of the Standard Model ==
1963 Nicola Cabibbo develops the mathematical matrix by which the first two (and ultimately three) generations of quarks can be predicted.
1964 Murray Gell-Mann and George Zweig propose the quark/aces model
1964 François Englert, Robert Brout, Peter Higgs, Gerald Guralnik, C. R. Hagen, and Tom Kibble postulate that a fundamental quantum field, now called the Higgs field, permeates space and, by way of the Higgs mechanism, provides mass to all the elementary subatomic particles that interact with it. While the Higgs field is postulated to confer mass on quarks and leptons, it represents only a tiny portion of the masses of other subatomic particles, such as protons and neutrons. In these, gluons that bind quarks together confer most of the particle mass. The result is obtained independently by three groups: François Englert and Robert Brout; Peter Higgs, working from the ideas of Philip Anderson; and Gerald Guralnik, C. R. Hagen, and Tom Kibble.
1964 Murray Gell-Mann and George Zweig independently propose the quark model of hadrons, predicting the arbitrarily named up, down, and strange quarks. Gell-Mann is credited with coining the term quark, which he found in James Joyce's book Finnegans Wake.
1964 Sheldon Glashow and James Bjorken predict the existence of the charm quark. The addition is proposed because it allows for a better description of the weak interaction (the mechanism that allows quarks and other particles to decay), equalizes the number of known quarks with the number of known leptons, and implies a mass formula that correctly reproduced the masses of the known mesons.
1964 John Stewart Bell shows that all local hidden variable theories must satisfy Bell's inequality
1964 Peter Higgs considers the breaking of local phase symmetry
1964 Val Fitch and James Cronin observe CP violation by the weak force in the decay of K mesons
1967 Bruno Pontecorvo postulated neutrino oscillation;
1967 Steven Weinberg and Abdus Salam publish papers in which they describe Yang–Mills theory using the SU(2) X U(1) supersymmetry group, thereby yielding a mass for the W particle of the weak interaction via spontaneous symmetry breaking.
1967 Steven Weinberg puts forth his electroweak model of leptons
1968 Stanford University: Deep inelastic scattering experiments at the Stanford Linear Accelerator Center (SLAC) show that the proton contains much smaller, point-like objects and is therefore not an elementary particle. Physicists at the time are reluctant to identify these objects with quarks, instead calling them partons — a term coined by Richard Feynman. The objects that are observed at SLAC will later be identified as up and down quarks. Nevertheless, "parton" remains in use as a collective term for the constituents of hadrons (quarks, antiquarks, and gluons). The existence of the strange quark is indirectly validated by the SLAC's scattering experiments: not only is it a necessary component of Gell-Mann and Zweig's three-quark model, but it provides an explanation for the kaon (K) and pion (π) hadrons discovered in cosmic rays in 1947.
1969 John Clauser, Michael Horne, Abner Shimony and Richard Holt propose a polarization correlation test of Bell's inequality
1970 Sheldon Glashow, John Iliopoulos, and Luciano Maiani propose the charm quark
1971 Gerard 't Hooft shows that the Glashow-Salam-Weinberg electroweak model can be renormalized
1972 Stuart Freedman and John Clauser perform the first polarization correlation test of Bell's inequality
1973 Frank Anthony Wilczek discovers the asymptotic freedom of quarks in the theory of strong interactions; receives the Lorentz Medal in 2002, and the Nobel Prize in Physics in 2004 for his discovery and his subsequent contributions to quantum chromodynamics.
1973 Makoto Kobayashi and Toshihide Maskawa note that the experimental observation of CP violation can be explained if an additional pair of quarks exist. The two new quarks are eventually named top and bottom.
1973 David Politzer and Frank Anthony Wilczek propose the asymptotic freedom of quarks
1974 Burton Richter and Samuel Ting: Charm quarks are produced almost simultaneously by two teams in November 1974 (see November Revolution) — one at SLAC under Burton Richter, and one at Brookhaven National Laboratory under Samuel Ting. The charm quarks are observed bound with charm antiquarks in mesons. The two discovering parties independently assign the discovered meson two different symbols, J and ψ; thus, it becomes formally known as the J/ψ meson. The discovery finally convinces the physics community of the quark model's validity.
1974 Robert J. Buenker and Sigrid D. Peyerimhoff introduce the multireference configuration interaction method.
1975 Martin Perl discovers the tau lepton
1977 Leon Lederman observes the bottom quark with his team at Fermilab. This discovery is a strong indicator of the top quark's existence: without the top quark, the bottom quark would be without a partner that is required by the mathematics of the theory.
1977 Martin Lewis Perl discovered the tau lepton after a series of experiments;
1977 Steve Herb finds the upsilon resonance implying the existence of the beauty/bottom quark
1979 Gluon observed indirectly in three-jet events at DESY;
1982 Alain Aspect, J. Dalibard, and G. Roger perform a polarization correlation test of Bell's inequality that rules out conspiratorial polarizer communication
1983 Carlo Rubbia and Simon van der Meer discovered the W and Z bosons;
1983 Carlo Rubbia, Simon van der Meer, and the CERN UA-1 collaboration find the W and Z intermediate vector bosons
1989 The Z intermediate vector boson resonance width indicates three quark–lepton generations
1994 The CERN LEAR Crystal Barrel Experiment justifies the existence of glueballs (exotic meson).
1995 The top quark is finally observed by a team at Fermilab after an 18-year search. It has a mass much greater than had been previously expected — almost as great as a gold atom.
1995 The D0 and CDF experiments at the Fermilab Tevatron discover the top quark.
1998 – The Super-Kamiokande (Japan) detector facility reports experimental evidence for neutrino oscillations, implying that at least one neutrino has mass.
1999 Ahmed Zewail wins the Nobel prize in chemistry for his work on femtochemistry for atoms and molecules.
2000 scientists at Fermilab announce the first direct evidence for the tau neutrino, the third kind of neutrino in particle physics.
2000 CERN announced quark-gluon plasma, a new phase of matter.
2001 The Sudbury Neutrino Observatory (Canada) confirms the existence of neutrino oscillations. Lene Hau stops a beam of light completely in a Bose–Einstein condensate.
2005 the RHIC accelerator of Brookhaven National Laboratory generates a "perfect" fluid, perhaps the quark–gluon plasma.
2010 The Large Hadron Collider at CERN begins operation with the primary goal of searching for the Higgs boson.
2012 Higgs boson-like particle discovered at CERN's Large Hadron Collider (LHC).
2014 The LHCb experiment observes particles consistent with tetraquarks and pentaquarks
2014 The T2K and OPERA experiment observe the appearance of electron neutrinos and Tau neutrinos in a muon neutrino beam
== See also ==
Chronology of the universe
History of subatomic physics
History of quantum mechanics
History of quantum field theory
History of the molecule
History of thermodynamics
History of chemistry
Golden age of physics
Timeline of cosmological theories
Timeline of particle physics technology
== References ==
== External links ==
Alain Connes official website with downloadable papers.
Alain Connes's Standard Model.
A History of Quantum Mechanics Archived 2019-10-28 at the Wayback Machine
A Brief History of Quantum Mechanics | Wikipedia/Timeline_of_particle_physics |
Particle physics is the study of the interactions of elementary particles at high energies, whilst physical cosmology studies the universe as a single physical entity. The interface between these two fields is sometimes referred to as particle cosmology. Particle physics must be taken into account in cosmological models of the early universe, when the average energy density was very high. The processes of particle pair production, scattering and decay influence the cosmology.
As a rough approximation, a particle scattering or decay process is important at a particular cosmological epoch if its time scale is shorter than or similar to the time scale of the universe's expansion. The latter quantity is {\displaystyle {\frac {1}{H}}}, where {\displaystyle H} is the time-dependent Hubble parameter. This is roughly equal to the age of the universe at that time.
For example, the charged pion has a mean lifetime of about 26 nanoseconds. This means that particle physics processes involving pion decay can be neglected until roughly that much time has passed since the Big Bang.
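As a rough numerical illustration of this comparison (assuming a radiation-dominated early universe, for which the expansion timescale is 1/H = 2t; this assumption is an illustrative simplification, not something stated above):

```python
# Order-of-magnitude sketch: when does the universe's expansion timescale 1/H
# exceed the charged pion's mean lifetime?  Assumes a radiation-dominated
# early universe, for which H(t) = 1/(2t), so 1/H = 2t (an illustrative
# assumption made here, not a statement from the article).

tau_pion = 2.6e-8          # s, mean lifetime of the charged pion

def expansion_timescale(t):
    """1/H for a radiation-dominated universe of age t (seconds)."""
    return 2.0 * t

# Age at which the expansion timescale equals the pion lifetime:
t_equal = tau_pion / 2.0
print(f"1/H matches the pion lifetime at t ~ {t_equal:.1e} s after the Big Bang")

# Before roughly that time, pion decay is slower than the expansion and can be
# neglected; afterwards decays become important.
for t in (1e-9, 1e-8, 1e-7):
    ratio = tau_pion / expansion_timescale(t)
    regime = "decay negligible" if ratio > 1 else "decay important"
    print(f"t = {t:.0e} s: tau_pion / (1/H) = {ratio:.2f} -> {regime}")
```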
Cosmological observations of phenomena such as the cosmic microwave background and the cosmic abundance of elements, together with the predictions of the Standard Model of particle physics, place constraints on the physical conditions in the early universe. The success of the Standard Model at explaining these observations supports its validity under conditions beyond those which can be produced in a laboratory. Conversely, phenomena discovered through cosmological observations, such as dark matter and baryon asymmetry, suggest the presence of physics that goes beyond the Standard Model.
== Further reading ==
Allday, Jonathan (2002). Quarks, Leptons and the Big Bang (Second ed.). Taylor & Francis. ISBN 978-0-7503-0806-9.
Bergström, Lars & Goobar, Ariel (2004). Cosmology and Particle Astrophysics (2nd ed.). Springer Verlag. ISBN 3-540-43128-4.
Branco, G. C., Shafi, Q., & Silva-Marcos, J. I. (2001). Recent developments in particle physics and cosmology. Dordrecht: Kluwer Academic. ISBN 0-7923-7181-X
Collins, P. D. B. (2007). Particle physics and cosmology. New York: John Wiley & Sons. ISBN 0-471-12071-5
Kazakov, D. I., & Smadja, G. (2005). Particle physics and cosmology the interface. NATO science series, v. 188. Dordecht: Springer. ISBN 1-4020-3161-0
"Science and technology - Cosmology and particle physics - What can the matter B?". The Economist. Vol. 379, no. 8474. 2006. p. 94. OCLC 102695447.
== External links ==
Center for Particle Cosmology at the University of Pennsylvania | Wikipedia/Particle_physics_in_cosmology |
The Annual Review of Nuclear and Particle Science is a peer-reviewed academic journal that publishes review articles about nuclear and particle science. As of 2024, Journal Citation Reports lists the journal's 2023 impact factor as 9.1, ranking it second of 22 journal titles in the category "Physics, Nuclear" and third of 30 journal titles in the category "Physics, Particles and Fields". Beginning in 2020, the Annual Review of Nuclear and Particle Science is published open access under the Subscribe to Open (S2O) publishing model.
The journal was first created by the National Research Council's Committee on Nuclear Science, which partnered with Annual Reviews to produce the first volume in 1952. The initial title of the journal was Annual Review of Nuclear Science. Annual Reviews published all volumes independently beginning with Volume 3. In 1978, the journal's name was changed to Annual Review of Nuclear and Particle Science.
In its history, it has had eight editors, four of whom had tenures of 10 or more years: Emilio Segrè, John David Jackson, Chris Quigg, and Barry R. Holstein.
== History ==
In the early 1950s, the National Research Council's Committee on Nuclear Science announced its support for an annual volume of review articles that covered recent developments in the field of nuclear science. One of the key proponents of creating the journal was Alberto F. Thompson, who had previously helped establish Nuclear Science Abstracts in 1948. The Committee on Nuclear Science consulted the nonprofit publishing company Annual Reviews for advice, and Annual Reviews agreed to publish the initial and subsequent volumes. Members of the Committee acted as the editorial board for the first volume, which was published in December 1952. Published under the title Annual Review of Nuclear Science, it covered nuclear science developments in 1950. Beginning with Volume 2, James G. Beckerley was editor, with Martin D. Kamen, Donald F. Mastick, and Leonard I. Schiff as associate editors. From Volume 3 onward, Annual Reviews assumed all responsibility for the journal from the National Research Council.
In 1978, the journal's name was changed to the Annual Review of Nuclear and Particle Science. This name was judged to be more reflective of the journal's content, which also included particle physics. Under Annual Reviews's Subscribe to Open publishing model, it was announced that the 2020 volume of Annual Review of Nuclear and Particle Science would be published open access, a first for the journal. As of 2020, it was published both in print and electronically.
== Editorial processes ==
The Annual Review of Nuclear and Particle Science is helmed by the editor. The editor is assisted by the editorial committee, which includes associate editors, regular members, and occasionally guest editors. Guest members participate at the invitation of the editor, and serve terms of one year. All other members of the editorial committee are appointed by the Annual Reviews board of directors and serve five-year terms. The editorial committee determines which topics should be included in each volume and solicits reviews from qualified authors. Unsolicited manuscripts are not accepted. Peer review of accepted manuscripts is undertaken by the editorial committee.
=== Editors of volumes ===
Dates indicate publication years in which someone was credited as a lead editor or co-editor of a journal volume. The planning process for a volume begins well before the volume appears, so appointment to the position of lead editor generally occurred prior to the first year shown here. An editor who has retired or died may be credited as a lead editor of a volume that they helped to plan, even if it is published after their retirement or death.
James G. Beckerley 1953–1957
Emilio Segrè 1958–1977
John David Jackson 1978–1993
Chris Quigg 1994–2004
Boris Kayser Appointed 2004, credited 2005–2010
Barry R. Holstein Appointed 2009, credited 2011–2023
Wick C. Haxton, Michael E. Peskin Appointed 2023.
== References == | Wikipedia/Annual_Review_of_Nuclear_and_Particle_Science |
Vanishing-dimensions theory is a particle physics theory suggesting that systems with higher energy have a smaller number of dimensions.
For example, the theory implies that the Universe had fewer dimensions just after the Big Bang, when its energy was high, and that the number of dimensions may have increased as the system cooled, so the Universe may gain more dimensions with time. Originally there may have been only one spatial dimension and one time dimension; with only two dimensions, the Universe lacked gravitational degrees of freedom.
More generally, the theory ties smaller systems to a smaller number of dimensions and larger-scale systems to a larger number of dimensions, with the expansion of the universe suggested as the phenomenon driving the growth in the number of dimensions over time.
In 2011, Dejan Stojkovic from the University at Buffalo and Jonas Mureika from Loyola Marymount University described how the Laser Interferometer Space Antenna, a system intended to detect gravitational waves, could test the vanishing-dimensions theory by looking for a maximum frequency above which gravitational waves cannot be observed.
The vanishing-dimensions theory is also seen as an explanation of the cosmological constant problem: a fifth dimension would answer the question of the energy density required to maintain the constant.
== References ==
== Further reading == | Wikipedia/Vanishing_dimensions_theory |
A trion is a bound state of three charged particles. A negatively charged trion in crystals consists of two electrons and one hole, while a positively charged trion consists of two holes and one electron. The binding energy of a trion is largely determined by the exchange interaction between the two electrons (holes). The ground state of a negatively charged trion is a singlet (total spin of two electrons S=0). The triplet state (total spin of two electrons S=1) is unbound in the absence of an additional potential or sufficiently strong magnetic field.
Like excitons, trions can be created by optical excitation. An incident photon creates an exciton, and this exciton binds to an additional electron (hole), creating a trion. The binding time of the exciton to the extra electron is of the same order as the time of exciton formation. This is why trions are observed not only in the emission spectra, but also in the absorption and reflection spectra.
Trion states were predicted theoretically in 1958; they were first observed experimentally in 1993 in CdTe/Cd1−xZnxTe quantum wells by Ronald Cox and co-authors, and later in various other semiconductor structures. In recent years, trion states in quantum dots have been actively studied. There is experimental evidence of their existence in nanotubes, supported by theoretical studies.
Particularly interesting is the study of trions in atomically thin two-dimensional (2D) layers of transition metal dichalcogenides. In such materials, the interaction between the charge carriers is enhanced many times over due to the weakening of screening.
An important property of a trion is that its ground state is a singlet. As a result, in a sufficiently large magnetic field, when all the electrons appear spin-polarised, trions are born under the action of light of only one circular polarization. In this polarization, excitons with the appropriate angular momentum form singlet trion states. Light with the opposite circular polarization can only form triplet states of the trion.
In addition to the formation of bound states, the interaction of excitons with electrons can lead to the scattering of excitons by electrons. In a magnetic field, the electron spectrum becomes discrete, and the exciton states scattered by electrons manifest as the phenomenon of "exciton cyclotron resonance" (ExCR). In ExCR, an incident photon creates an exciton, which forces an additional electron to transfer between Landau levels. The reverse process is called "shake-up". In this case, the recombination of the trion is accompanied by the transition of an additional electron between Landau levels.
Since the energies of an exciton and a trion are close, they can form a coherent bound state in which a trion can "lose" an electron to become an exciton and an exciton can "capture" an electron to become a trion. If there is no time between the loss and capture of the electron for it to dissipate, a mixed state similar to an exciton-polariton is formed. Such states have been reliably observed in quantum wells and monolayers of dichalcogenides.
The exciton-electron interaction in the presence of a dense electron gas can lead to the formation of the so-called "Suris tetron". This is a state of four particles: an exciton, an electron and a hole in the Fermi Sea.
== References == | Wikipedia/Trion_(physics) |
In particle physics, a resonance is the peak located around a certain energy found in differential cross sections of scattering experiments. These peaks are associated with subatomic particles, which include a variety of bosons, quarks and hadrons (such as nucleons, delta baryons or upsilon mesons) and their excitations. In common usage, "resonance" only describes particles with very short lifetimes, mostly high-energy hadrons existing for 10−23 seconds or less. It is also used to describe particles in intermediate steps of a decay, so-called virtual particles.
The width of the resonance (Γ) is related to the mean lifetime (τ) of the particle (or its excited state) by the relation

Γ = ħ/τ,

where ħ = h/2π and h is the Planck constant.
Thus, the lifetime of a particle is inversely proportional to its resonance width. For example, the charged pion has the second-longest lifetime of any meson, at 2.6033×10−8 s. Therefore, its resonance width is very small, about 2.528×10−8 eV or about 6.11 MHz. Pions are generally not considered as "resonances". The charged rho meson has a very short lifetime, about 4.41×10−24 s. Correspondingly, its resonance width is very large, at 149.1 MeV or about 36 ZHz. This amounts to nearly one-fifth of the particle's rest mass.
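The quoted widths follow directly from Γ = ħ/τ; a minimal numerical check, using standard values of ħ and h and the lifetimes given above:

```python
# Verify the quoted resonance widths from Gamma = hbar / tau.
hbar_eVs = 6.582119569e-16      # reduced Planck constant in eV*s
h_eVs = 4.135667696e-15         # Planck constant in eV*s

def width_eV(lifetime_s):
    """Resonance width in eV for a particle of the given mean lifetime."""
    return hbar_eVs / lifetime_s

tau_pion = 2.6033e-8            # s, charged pion
tau_rho = 4.41e-24              # s, charged rho meson

for name, tau in (("charged pion", tau_pion), ("charged rho", tau_rho)):
    gamma = width_eV(tau)
    freq_Hz = gamma / h_eVs     # equivalent frequency, nu = Gamma / h
    print(f"{name}: Gamma = {gamma:.3e} eV  (~{freq_Hz:.2e} Hz)")

# Expected output (approximately): 2.53e-8 eV (~6.1e6 Hz) for the pion and
# 1.49e8 eV ~ 149 MeV (~3.6e22 Hz) for the rho, matching the values in the text.
```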
== See also ==
== References == | Wikipedia/Resonance_(particle_physics) |
The self-consistency principle was established by Rolf Hagedorn in 1965 to explain the thermodynamics of fireballs in high-energy physics collisions. A thermodynamical approach to high-energy collisions had first been proposed by E. Fermi.
== Partition function ==
The partition function of the fireballs can be written in two forms, one in terms of its density of states, σ(E), and the other in terms of its mass spectrum, ρ(m).
The self-consistency principle says that both forms must be asymptotically equivalent for energies or masses sufficiently high (asymptotic limit). Also, the density of states and the mass spectrum must be asymptotically equivalent in the sense of the weak constraint proposed by Hagedorn as
log[ρ(m)] = log[σ(E)].
These two conditions are known as the self-consistency principle or the bootstrap idea. After a long mathematical analysis, Hagedorn was able to prove that there do in fact exist functions ρ(m) and σ(E) satisfying the above conditions, resulting in

ρ(m) = a m^(−5/2) exp(β₀m) and σ(E) = b E^(α−1) exp(β₀E),

with a and α related by α = aV / (2πβ)^(3/5).
Then the asymptotic partition function is given by

Z_q(V₀, T) = (1/(β − β₀))^α − 1,

where a singularity is clearly observed for β → β₀. This singularity determines the limiting temperature T₀ = 1/β₀ in Hagedorn's theory, which is also known as the Hagedorn temperature.
Hagedorn was able not only to give a simple explanation for the thermodynamical aspect of high energy particle production, but also worked out a formula for the hadronic mass spectrum and predicted the limiting temperature for hot hadronic systems.
Some time later, this limiting temperature was shown by N. Cabibbo and G. Parisi to be related to a phase transition, characterized by the deconfinement of quarks at high energies. The mass spectrum was further analyzed by Steven Frautschi.
== Q-exponential function ==
The Hagedorn theory was able to describe correctly the experimental data from collisions with center-of-mass energies up to approximately 10 GeV, but it failed above this region. In 2000, I. Bediaga, E. M. F. Curado and J. M. de Miranda proposed a phenomenological generalization of Hagedorn's theory by replacing the exponential function that appears in the partition function with the q-exponential function from the Tsallis non-extensive statistics. With this modification the generalized theory was again able to describe the extended experimental data.
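The q-exponential referred to here is the Tsallis function exp_q(x) = [1 + (1 − q)x]^(1/(1−q)), which reduces to the ordinary exponential as q → 1. A minimal sketch (the value q = 1.1 below is only an illustrative choice, not a fitted parameter):

```python
# Tsallis q-exponential, exp_q(x) = [1 + (1 - q) x]^(1/(1-q)), which recovers
# the ordinary exponential in the limit q -> 1.  This is the function that
# replaces exp() in the generalized (non-extensive) partition function.
import math

def q_exp(x, q):
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

x = -2.0                     # e.g. a Boltzmann-like factor -E/T
for q in (1.0, 1.05, 1.1):   # q = 1.1 is an illustrative, not fitted, value
    print(f"q = {q}: exp_q({x}) = {q_exp(x, q):.4f}  (exp = {math.exp(x):.4f})")
# For q > 1 the q-exponential has a heavier tail than exp, which is what lets
# the generalized theory describe the extended high-energy data.
```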
In 2012, A. Deppman proposed a non-extensive self-consistent thermodynamical theory that combines the self-consistency principle and the non-extensive statistics. This theory yields as a result the same formula proposed by Bediaga et al., which describes the high-energy data correctly, as well as new formulas for the mass spectrum and density of states of the fireball. It also predicts a new limiting temperature and a limiting entropic index.
== References == | Wikipedia/Self-consistency_principle_in_high_energy_physics |
The Particle Physics Project Prioritization Panel (P5) is a scientific advisory panel tasked with recommending plans for U.S. investment in particle physics research over the next ten years, on the basis of various funding scenarios. The P5 is a temporary subcommittee of the High Energy Physics Advisory Panel (HEPAP), which serves the Department of Energy's Office of Science and the National Science Foundation. In 2014, the panel was chaired by Steven Ritz of the University of California, Santa Cruz. In 2023, the panel was chaired by Hitoshi Murayama of the University of California, Berkeley.
== 2014 report ==
In 2013, HEPAP was asked to convene a panel (the P5) to evaluate research priorities in the context of anticipated developments in the field globally in the next 20 years. Recommendations were to be made on the basis of three funding scenarios for high-energy physics:
A constant funding level for the next three years followed by an annual 2% increase, relative to the FY2013 budget
A constant funding level for the next three years followed by an annual 3% increase, relative to the proposed FY2014 budget
An unconstrained budget
=== Science drivers ===
In May 2014, the first P5 report since 2008 was released. The 2014 report identified five "science drivers"—goals intended to inform funding priorities—drawn from a year-long discussion within the particle physics community. These science drivers are:
Use of the Higgs boson as a tool for further inquiry
Investigation of the physics of neutrino mass
Investigation of the physics of dark matter
Investigation of the physics of dark energy and cosmic inflation
Exploration of new particles, interactions, and physics principles
=== Recommendations ===
In pursuit of the five science drivers, the 2014 report identified three "high priority large category" projects meriting significant investment in the FY2014–2023 period, regardless of the broader funding situation: the High Luminosity Large Hadron Collider (a proposed upgrade to the Large Hadron Collider located at CERN in Europe); the International Linear Collider (a proposed electron-positron collider, likely hosted in Japan); and the Long Baseline Neutrino Facility (an expansion of the proposed Long Baseline Neutrino Experiment (that was renamed the Deep Underground Neutrino Experiment), to be constructed at Fermilab in Illinois and at the Homestake Mine in South Dakota).
In addition to these large projects, the report identified numerous smaller projects with potential for near-term return on investment, including the Mu2e experiment, second- and third-generation dark matter experiments, particle-physics components of the Large Synoptic Survey Telescope (LSST), cosmic microwave background experiments, and a number of small neutrino experiments.
The report made several recommendations for significant shifts in priority, namely:
An increase in the proportion of the high-energy physics budget devoted to construction of new facilities, from 15% to 20%-25%
An expansion in scope of the Long Baseline Neutrino Experiment to a major international collaboration, with redirection of resources from other R&D projects to the development of higher powered proton beams for the neutrino facility
Increased funding for second-generation dark matter detection experiments
Increased funding of cosmic microwave background (CMB) research
The panel stressed that the most conservative of the funding scenarios considered would endanger the ability of the U.S. to host a major particle physics project while maintaining the necessary supporting elements.
=== Impact and outcomes since 2014 ===
A goal of the 2014 P5 exercise was to provide Congress with a science-justified roadmap for project funding. Five years later, in 2019, the Department of Energy Office of Science declared: "Congressional appropriations reflect strong support for P5. Language in appropriations reports have consistently recognized community’s efforts in creating and executing the P5 report strategy" and "P5 was wildly successful." From 2016 to 2020, the High Energy Physics (HEP) budget grew from less than $800 million to more than $1 billion.
However, members of the HEP community were concerned because the increased funding went primarily toward projects, while funding for core research and technology programs, which was also supported by P5, declined from $361 million to $316 million. In 2020, an assessment of progress of the P5-defined program produced by the High Energy Physics Advisory Panel (HEPAP) concluded: "While investments over the past 5 years have focused on project construction, it will be fundamentally important to balance the components of the HEP budget to continue successful execution of the P5 plan. Operations of the newly constructed experiments require full support to reap their scientific goals. The HEP research program also needs strong support to fully execute the plan, throughout the construction, operations, and data analysis phases of the experiments, and to lay a foundation for the future."
As of 2022, several of the "Large Projects" identified as priorities by the 2014 P5 had fallen considerably behind schedule or been affected by cost gaps, including:
The Deep Underground Neutrino Experiment has been descoped, with start-up delayed from 2027 to 2032.
The mu2e experiment was delayed from 2020 to 2026.
The PIP-II project start-up was delayed from 2020 to 2028.
The High Luminosity LHC contributions from Fermilab faced a $90M cost gap in 2021.
The International Linear Collider (ILC), proposed for construction in Japan, was "shelved".
== Prelude to the 2023 report ==
=== Issues ===
The P5 process occurred in spring 2023 and was informed by the outcomes of the 2021 Snowmass Process finalized in summer 2022. The Snowmass 2021 study identified two existential threats to the field that P5 must address:
That the field has entered a "nightmare scenario" because no unexpected physics signatures have been observed by experiments at the highest-energy accelerator, the Large Hadron Collider. As pointed out by many at the final Snowmass meeting, this gives little basis for the 2023 P5 to recommend new large projects.
That LBNF/DUNE (also called the Deep Underground Neutrino Experiment), the flagship project that came out of the 2014 P5, will be reevaluated due to spiraling costs and extended delays. The escalation has led to comparisons to the Superconducting Super Collider (SSC), a particle physics megaproject that was cancelled midway through construction in 1993 due to cost overruns, a debacle with enormous personal and scientific costs to the particle physicists involved.
Along with these major issues, P5 also faces a field that is less unified than in 2014, as was emphasized by the title of the Scientific American report on Snowmass 2021 outcomes: "Physicists Struggle to Unite around Future Plans."
Some members of the field have expressed that the pressure to project a unified opinion is stifling debate, with one physicist telling a reporter from Physics Today: "There are big issues people didn't discuss." Panel chair Hitoshi Murayama has expressed awareness of this problem, saying that "community buy-in is key" for the success of the P5 report.
=== Panel ===
The membership of the 2023 P5 was announced in December 2022, with Hitoshi Murayama of the University of California, Berkeley as head. See the official page.
Similar to 2014, the 2023 P5 members are all particle and accelerator physicists; no members specialize in project management. This places the committee in a good position to evaluate responses to the "nightmare scenario." However, this makes it difficult for the members to assess whether the information on cost and schedule provided to the committee has a sound basis. That lack of expertise may explain how the 2014 P5 failed to foresee the LBNF/DUNE cost-and-schedule crisis, and will make it difficult for the 2023 P5 to head off an "SSC scenario."
=== Tasking ===
Regina Rameika from the Department of Energy Office of Science summarized the P5 charge in a presentation to the High Energy Physics Advisory Panel on Dec. 8, 2022. The charge asked P5 to:
Update the 2014 P5 strategic plan, making recommendations for actions within a ten-year time-frame while considering a twenty-year context.
Re-evaluate the 2014 "science drivers" and recommended scientific projects, as well as make the scientific case for new initiatives.
Maintain a balance between large projects and small experiments. P5 does not recommend specific small experiments but was asked to comment on the scientific focus for that portfolio. The emphasis on P5 direction to small experiments was new compared to the 2014 P5 charge.
Address synergies within US programs and with the worldwide program.
The priority of projects is being considered within two funding scenarios from the Department of Energy (DOE) and the National Science Foundation (NSF). The first, which was described by physicists as "grim", envisions a 2% increase per year of the high energy physics budgets for DOE and NSF. The second assumes full funding from the 2022 CHIPS and Science Act and a 3% increase per year to DOE and NSF HEP. P5 is asked to consider operating costs, including the rising cost of energy to run accelerators.
=== Input from community meetings and town halls ===
Throughout 2023, P5 received input from the community through meetings that included invited talks and requested talks in a "town hall" format. Four meetings were held at national laboratories, and two virtual town halls were also held. The topics of the meetings covered physics goals across the range of topics defined by the Snowmass Study, as well as the balance of university- and laboratory-based research, opportunities for early-career scientists, and the need for public outreach.
=== Input from the International Benchmarking Report ===
In Autumn 2023, the P5 Panel received input from the HEPAP International Benchmarking Subpanel, headed by Fermilab scientists. This report is one in a series of evaluations of DOE-supported science in an international context. Differences between high energy physics and the rest of the physics community are apparent in the report. For example, the report notes that citations are a poor metric for measuring scientific impact. Two points made in the report are especially relevant to P5 considerations: 1) the US should prioritize being a "partner of choice", and 2) the US requires a range of project sizes and goals to maintain a healthy "scientific ecosystem".
The primary outcome of the benchmarking report was that "the U.S. is not always viewed as a reliable partner, largely due to unpredictable budgets and inadequate communication, and that shortcomings in domestic HEP programs are jeopardizing U.S. leadership." The report highlighted that the 1993 cancellation of the Superconducting Super Collider, the sudden 2008 termination of the B physics program at the Stanford Linear Accelerator Center, and the abrupt end of the Tevatron program at Fermilab, followed by the immediate dismantling of the accelerator, have caused the international community to lose confidence that the US will complete projects. Without addressing the DUNE project directly, this recommendation pointed to the potential negative impact on international cooperation if DUNE were abruptly curtailed by P5.
A second major recommendation of the benchmarking report focused on the need to maintain a program of projects at all scales, from small to large, and that are chosen to specifically enhance areas in which the US technology is lagging, such as in accelerator physics. This echoed calls from the community expressed in the P5 Town Halls.
== The 2023 report ==
In December 2023, the 2023 P5 report was released. The proposals contained therein were intended to help better understand some of the current concerns of particle physics, including challenges to the Standard Model, and involve studies primarily dealing with gravity, black holes, dark matter, dark energy, Higgs boson, muons, neutrinos, and more.
=== Science Drivers and Related Experimental Approaches ===
The 2023 P5 report identified three science drivers, each with two experimental approaches:
“Decipher the Quantum Realm” through “Elucidat[ing] the Mysteries of Neutrinos” and “Reveal[ing] the Secrets of the Higgs Boson.”
“Explore New Paradigms in Physics” though “Search[ing] for Direct Evidence of New Particles and Pursu[ing] Quantum Imprints of New Phenomena.”
“Illuminate the Hidden Universe” through “Determin[ing] the Nature of Dark Matter” and “Understand[ing] What Drives Cosmic Evolution.”
=== Specific Recommendations ===
The recommendations that followed the statement of goals reflected the recommendations heard during the Snowmass process and those of the International Benchmarking Panel, discussed above.
In particular, Recommendation 1 stated “As the highest priority independent of the budget scenarios, [funding agencies must] complete construction projects and support operations of ongoing experiments and research to enable maximum science.” This reflects concerns throughout the community of potential abrupt cancellations of ongoing particle physics projects, as flagged by the Benchmarking Panel.
The P5 report sought to control the narrative of the DUNE project, which has seen an explosion in cost between the 2014 and 2023 P5 reports and is now lagging behind the competing HyperKamiokande project that will turn on in 2027. P5 offered compromises on beam power for DUNE Phase I and reductions of the DUNE Phase II upgrades to keep the project funding on track to begin data-taking in 2031.
Despite the issues with DUNE, P5 recommended initiating work on a new megaproject called a muon collider. Accelerating and colliding muons for particle physics studies offers theoretical advantages over an electron-positron collider, but represents an untested and challenging new direction from a practical standpoint. The report states: "Although we do not know if a muon collider is ultimately feasible, the road toward it leads from current Fermilab strengths and capabilities to a series of proton beam improvements and neutrino beam facilities, each producing world-class science while performing critical R&D towards a muon collider. At the end of the path is an unparalleled global facility on US soil. This is our Muon Shot." The cost of a 10 TeV muon collider was not estimated in the report.
The report offered a new emphasis on cosmology and astrophysics as a branch of particle physics. P5 placed the $800M CMB-S4 experiment at the top of the list of new projects. The report also emphasized the importance of the planned expansion of the IceCube neutrino detector in Antarctica, recommending funding for this new project in any budget scenario.
In a recommendation with an unusual level of specifics regarding its implementation, P5 introduced a new program entitled “Advancing Science and Technology through Agile Experiments" (ASTAE). This responds to calls by the community to support “small” experiments, which particle physics defines as costing less than $50M in total. Unlike other programs, this recommendation called for $35M/year to be invested in ASTAE. This recommendation again reflected the concerns identified by the International Benchmarking Panel.
=== Initial Community Support for the P5 Report ===
The American Physical Society, Fermi National Accelerator Laboratory and SLAC Laboratory organized endorsements by the community of the P5 report.
As of January 15, the number of endorsers was 2602 US scientists.
Among the endorsers, 37% were tenured faculty level or laboratory scientists, 9% were at the untenured faculty or laboratory scientist level, 16% were postdoctoral fellows, 20% were graduate students, and the remainder were other categories.
The geographic distribution of the endorsements heavily favored Illinois, home of Fermilab, and California, home of SLAC.
=== Outcomes ===
Only six months after the release of the 2023 P5 report, the first and sixth priority new projects, CMB-S4 and IceCube-Gen2, faced major setbacks from a call by NSF to immediately address the urgent need to update the South Pole Station infrastructure. In response, NSF halted the installation of new projects until the end of the 2020s. Lack of near-term access to infrastructure at the pole led NSF and DOE to cancel the joint-agency CMB-S4 project, despite strong protest from the P5 leadership and appeals from the 500-person international team. The IceCube-Gen2 project, planned to begin installation in the late 2020s, may suffer delays due to the infrastructure renovations.
== References ==
== External links ==
Building for Discovery: Strategic Plan for U.S. Particle Physics in the Global Context: Report of the Particle Physics Project Prioritization Panel (P5) | Wikipedia/Particle_Physics_Project_Prioritization_Panel |
External beam radiation therapy (EBRT) is a form of radiotherapy that utilizes a high-energy collimated beam of ionizing radiation, from a source outside the body, to target and kill cancer cells. The radiotherapy beam is composed of particles, which are focussed in a particular direction of travel using collimators. Each radiotherapy beam consists of one type of particle intended for use in treatment, though most beams contain some contamination by other particle types.
Radiotherapy beams are classified by the particle they are intended to deliver, such as photons (as x-rays or gamma rays), electrons, and heavy ions; x-rays and electron beams are by far the most widely used sources for external beam radiotherapy. Orthovoltage ("superficial") X-rays are used for treating skin cancer and superficial structures. Megavoltage X-rays are used to treat deep-seated tumors (e.g. bladder, bowel, prostate, lung, or brain), whereas megavoltage electron beams are typically used to treat superficial lesions extending to a depth of approximately 5 cm. A small number of centers operate experimental and pilot programs employing beams of heavier particles, particularly protons, owing to the rapid decrease in absorbed dose beneath the depth of the target.
Teletherapy is the most common form of radiotherapy (radiation therapy). The patient sits or lies on a couch and an external source of ionizing radiation is pointed at a particular part of the body. In contrast to brachytherapy (sealed source radiotherapy) and unsealed source radiotherapy, in which the radiation source is inside the body, external beam radiotherapy directs the radiation at the tumor from outside the body.
== X-rays and gamma rays ==
Conventionally, the energy of diagnostic and therapeutic gamma- and X-rays is on the order of kiloelectronvolts (keV) or megaelectronvolts (MeV), and the energy of therapeutic electrons is on the order of megaelectronvolts. The beam is made up of a spectrum of energies: the maximum energy is approximately equal to the beam's maximum electric potential within a linear accelerator times the electron charge. For instance, a 1 megavolt beam will produce photons with a maximum energy around 1 MeV. In practice, the mean X-ray energy is about one-third of the maximum energy. Beam quality and hardness may be improved by X-ray filters, which improves the homogeneity of the X-ray spectrum.
Medically useful X-rays are produced when electrons are accelerated to energies at which either the photoelectric effect predominates (for diagnostic use, since the photoelectric effect's strong dependence on effective atomic number Z offers comparatively excellent contrast) or Compton scattering and pair production predominate (at energies above approximately 200 keV for the former and 1 MeV for the latter), as used for therapeutic X-ray beams. Some examples of X-ray energies used in medicine are:
Very low-energy superficial X-rays – 35 to 60 keV (mammography, which prioritizes soft-tissue contrast, uses very low-energy kV X-rays)
Superficial radiotherapy X-rays – 60 to 150 keV
Diagnostic X-rays – 20 to 150 keV (mammography to CT); this is the range of photon energies at which the photoelectric effect, which gives maximal soft-tissue contrast, predominates.
Orthovoltage X-rays – 200 to 500 keV
Supervoltage X-rays – 500 to 1000 keV
Megavoltage X-rays – 1 to 25 MeV (nominal energies above 15 MeV are unusual in clinical practice).
Megavoltage X-rays are by far most common in radiotherapy for the treatment of a wide range of cancers. Superficial and orthovoltage X-rays have application for the treatment of cancers at or close to the skin surface. Typically, higher-energy megavoltage X-rays are chosen when it is desirable to maximize "skin-sparing" (since the relative dose to the skin is lower for such high-energy beams).
Medically useful photon beams can also be derived from a radioactive source such as iridium-192, caesium-137, or cobalt-60. (Radium-226 has also been used as such a source in the past, though has been replaced in this capacity by less harmful radioisotopes.) Such photon beams, derived from radioactive decay, are approximately monochromatic, in contrast to the continuous bremsstrahlung spectrum from a linac. These decays include the emission of gamma rays, whose energy is isotope-specific and ranges between 300 keV and 1.5 MeV.
Superficial radiation therapy machines produce low energy x-rays in the same energy range as diagnostic x-ray machines, 20–150 keV, to treat skin conditions. Orthovoltage X-ray machines produce higher energy x-rays in the range 200–500 keV. Radiation from orthovoltage x-ray machines has been called "deep" due to its greater penetrating ability, allowing it to treat tumors at depths unreachable by lower-energy "superficial" radiation. Orthovoltage units have essentially the same design as diagnostic X-ray machines and are generally limited to photon energies less than 600 keV. X-rays with energies on the order of 1 MeV are generated in Linear accelerators ("linacs"). The first use of a linac for medical radiotherapy was in 1953. Commercially available medical linacs produce X-rays and electrons with an energy range from 4 MeV up to around 25 MeV. The X-rays themselves are produced by the rapid deceleration of electrons in a target material, typically a tungsten alloy, which produces an X-ray spectrum via bremsstrahlung radiation. The shape and intensity of the beam produced by a linac may be modified or collimated by a variety of means. Thus, conventional, conformal, intensity-modulated, tomographic, and stereotactic radiotherapy are all provided using specially-modified linear accelerators.
Cobalt units use radiation from cobalt-60, which emits two gamma rays at energies of 1.17 and 1.33 MeV, a dichromatic beam with an average energy of 1.25 MeV. The role of the cobalt unit has largely been replaced by the linear accelerator, which can generate higher energy radiation. Nonetheless, cobalt treatment still retains some applications, such as the Gamma Knife, since the machinery is relatively reliable and simple to maintain compared to the modern linear accelerator.
=== Sources and properties of X-rays ===
Bremsstrahlung X-rays are produced by bombarding energetic cathode rays (electrons) onto a target made of a material with high atomic number, such as tungsten. The target acts as a sort of transducer, converting part of the electrons' kinetic energy into energetic photons. Kilovoltage X-rays are typically produced using an X-ray tube, in which electrons travel through a vacuum from a hot cathode to a cold anode, which also acts as the target. However, it is impractical to produce megavoltage X-rays using this method; instead, a linear accelerator is most commonly used to produce X-rays of such energy. X-ray emission is more forward-directed at megavoltage energies and more laterally-directed at kilovoltage energies. Consequently, kilovoltage X-rays tend to be produced using a reflection-type target, in which the radiation is emitted back from the target's surface, while megavoltage X-rays tend to be produced with a transmission target in which the X-rays are emitted on the side opposite that of electron incidence. Reflection type targets exhibit the heel effect and can use a rotating anode to aid in heat dissipation.
Compton scattering is the dominant interaction between a megavoltage beam and the patient, while the photoelectric effect dominates at keV energies. Additionally, Compton scattering is much less dependent on atomic number than the photoelectric effect; while kilovoltage beams enhance the distinction between muscle and bone in medical imaging, megavoltage beams suppress that distinction to the advantage of teletherapy. Pair production and photoneutron production increase at higher energies, only becoming significant at energies on the order of 1 MeV.
X-ray energy in the keV range is described by the electrical voltage used to produce it. For instance, a 100 kVp beam is produced by a 100 kV voltage applied to an X-ray tube and will have a maximum photon energy of 100 keV. However, the beam's spectrum can be affected by other factors as well, such as the voltage waveform and external X-ray filtration. These factors are reflected in the beam's half-value layer (HVL), measured in air under conditions of "good geometry". A typical superficial X-ray beam quality might be written 100 kVp / 3 mm Al – "100 kilovolts applied to the X-ray tube with a measured half-value layer of 3 millimetres of aluminium". The half-value layer for orthovoltage beams is more typically measured using copper; a typical orthovoltage beam quality is 250 kVp / 2 mm Cu. For X-rays in the MeV range, an actual voltage of the same magnitude is not used in production of the beam: a nominal 6 MV beam is produced by electrons of roughly 6 MeV striking the target, so its photon spectrum extends up to about 6 MeV, and the energy of such a beam is instead generally characterized by measuring the ratio of the beam's intensity at varying depths in a medium.
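Since each half-value layer halves the beam intensity, a stated HVL translates directly into a transmission curve, I(x) = I₀·2^(−x/HVL), under narrow-beam conditions. A small sketch using the 3 mm Al figure above as an example (illustrative only):

```python
# Attenuation implied by a stated half-value layer (HVL): each HVL halves the
# beam intensity, so I(x) = I0 * 2**(-x / HVL) under narrow-beam ("good
# geometry") conditions.  Uses the 100 kVp / 3 mm Al quality quoted in the text.

hvl_mm_al = 3.0   # mm of aluminium, the quoted HVL for a superficial beam

def transmitted_fraction(thickness_mm, hvl_mm):
    return 2.0 ** (-thickness_mm / hvl_mm)

for t in (1.5, 3.0, 6.0, 9.0):
    frac = transmitted_fraction(t, hvl_mm_al)
    print(f"{t:4.1f} mm Al -> {frac*100:5.1f} % of the incident intensity")
# 3 mm transmits 50%, 6 mm transmits 25%, and so on; a harder (more filtered)
# beam has a larger HVL and therefore penetrates more material.
```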
Kilovoltage beams do not exhibit a build-up effect and thus deposit their maximum dose at the surface, i.e. dmax = 0 or D0 = 100%. Conversely, megavoltage beams do exhibit a build-up effect: they deposit their maximum dose at some depth below the surface, i.e. dmax > 0. The depth of dose maximum is governed by the range of the electrons liberated upstream during Compton scattering. At depths beyond dmax, the dose profile of all X-ray beams decreases roughly exponentially with depth. Actual values of dmax are influenced by various factors, but in general dmax increases with beam energy.
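A toy depth-dose model can make the build-up and exponential fall-off concrete; the parameter values below (dmax = 1.5 cm, effective attenuation 0.05 per cm) are illustrative placeholders rather than clinical beam data:

```python
# Toy percentage-depth-dose model for a megavoltage photon beam: dose builds up
# to a maximum at depth dmax and falls roughly exponentially beyond it.  The
# parameter values (dmax = 1.5 cm, mu = 0.05 /cm) are illustrative placeholders,
# not clinical beam data.
import math

d_max_cm = 1.5     # assumed depth of dose maximum
mu_per_cm = 0.05   # assumed effective attenuation coefficient beyond dmax

def relative_dose(depth_cm):
    if depth_cm <= 0:
        return 0.0
    if depth_cm < d_max_cm:
        # crude linear build-up toward the maximum (the real build-up is set by
        # the range of Compton-scattered electrons, as described in the text)
        return depth_cm / d_max_cm
    return math.exp(-mu_per_cm * (depth_cm - d_max_cm))

for d in (0.0, 0.5, 1.5, 5.0, 10.0, 20.0):
    print(f"depth {d:4.1f} cm: relative dose = {relative_dose(d)*100:5.1f} %")
# A kilovoltage beam, by contrast, would have its 100% dose at the surface.
```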
== Electrons ==
X-rays are generated by bombarding a high atomic number material with electrons. If the target is removed (and the beam current decreased), a high-energy electron beam is obtained. Electron beams are useful for treating superficial lesions, because the maximum dose deposition occurs near the surface and thereafter decreases rapidly with depth, sparing underlying tissue. Electron beams usually have nominal energies in the range of 4–20 MeV, corresponding to a treatment range of approximately 1–5 cm (in water-equivalent tissue). Energies above 18 MeV are rarely used. Although the X-ray target is removed in electron mode, the beam must be fanned out by sets of thin scattering foils in order to achieve flat and symmetric dose profiles in the treated tissue.
Many linear accelerators can produce both electrons and x-rays.
== Hadron therapy ==
Hadron therapy involves the therapeutic use of protons, neutrons, and heavier ions (fully ionized atomic nuclei). Of these, proton therapy is by far the most common, though still rare compared to other forms of external beam radiotherapy, since it requires large and expensive equipment. The gantry (the part that rotates around the patient) is a multi-story structure, and a proton therapy system can cost (as of 2009) up to US$150 million.
== Multi-leaf collimator ==
Modern linear accelerators are equipped with multileaf collimators (MLCs), which can move within the radiation field as the linac gantry rotates, blocking parts of the field as necessary according to the gantry position. This technology gives radiotherapy treatment planners great flexibility in shielding organs at risk (OARs), while ensuring that the prescribed dose is delivered to the target. A typical multi-leaf collimator consists of two banks of 40 to 160 leaves, each around 5–10 mm thick and several centimetres long in the other two dimensions. Each leaf in the MLC is aligned parallel to the radiation field and can be moved independently to block part of the field, adapting it to the shape of the tumor and thus minimizing the amount of healthy tissue exposed to radiation. On older linacs without MLCs, this must be accomplished manually using several hand-crafted blocks.
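In planning systems, conforming the field to the target amounts to choosing, for each leaf pair, left and right leaf positions that follow the projected target outline plus a margin. A simplified sketch of that idea (the target outline, leaf geometry, and margin are invented purely for illustration):

```python
# Simplified multi-leaf collimator (MLC) fitting: for each leaf pair, close the
# leaves down to the extent of the projected target in that leaf's row, plus a
# margin.  Geometry and target outline are invented purely for illustration.

leaf_width_mm = 10.0
margin_mm = 5.0

# Hypothetical projected target: (y of leaf row centre, x_min, x_max) in mm.
target_rows = [
    (-15.0, -12.0, 14.0),
    (-5.0,  -20.0, 22.0),
    (5.0,   -18.0, 25.0),
    (15.0,   -8.0, 10.0),
]

def fit_leaves(rows, margin):
    """Return (y, left_leaf_position, right_leaf_position) for each leaf pair."""
    positions = []
    for y, x_min, x_max in rows:
        positions.append((y, x_min - margin, x_max + margin))
    return positions

for y, left, right in fit_leaves(target_rows, margin_mm):
    print(f"leaf pair at y = {y:6.1f} mm: left leaf to {left:6.1f} mm, "
          f"right leaf to {right:6.1f} mm")
# Real planning systems also account for leaf-end curvature, transmission
# through and between leaves, and (for IMRT/VMAT) leaf motion during delivery.
```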
== Intensity modulated radiation therapy ==
Intensity modulated radiation therapy (IMRT) is an advanced radiotherapy technique used to minimize the amount of normal tissue irradiated in the treatment field. In some systems, this intensity modulation is achieved by moving the leaves in the MLC during the course of treatment, thereby delivering a radiation field with a non-uniform (i.e., modulated) intensity. Using IMRT, radiation oncologists are able to split the radiation beam into many beamlets and vary the intensity of each beamlet, which often makes it possible to further limit the amount of radiation received by healthy tissue near the tumor. Doctors have found that this sometimes allows them to safely give a higher dose of radiation to the tumor, potentially increasing the chance of successful treatment.
=== Volumetric modulated arc therapy ===
Volumetric modulated arc therapy (VMAT) is an extension of IMRT in which the linear accelerator rotates around the patient during delivery. This means that rather than radiation entering the patient at only a small number of fixed angles, it can enter at many angles. This can be beneficial for treatment sites in which the target volume is surrounded by a number of organs at risk, allowing directed treatment without exposing those organs to heightened radiation levels.
=== Flattening filter free ===
The intensity of the X-rays produced in a megavoltage linac is much higher in the centre of the beam compared to the edges. To offset this central peak, a flattening filter is used. A flattening filter is cone-shaped so as to compensate for the forward bias in the momentum of incident electrons (and is typically made from a metal such as tungsten); after an X-ray beam passes through the flattening filter, it has a more uniform profile. This simplifies treatment planning, though significantly reduces the intensity of the beam. With greater computing power and more efficient treatment planning algorithms, the need for simpler treatment planning techniques – such as "forward planning", in which the planner directly instructs the linac on how to deliver the prescribed treatment – is reduced. This has led to increased interest in flattening filter free (FFF) treatments.
FFF treatments have been found to have an increased maximum dose rate, allowing reduced treatment times and a reduction in the effect of patient motion on the delivery of the treatment. This makes FFF an area of particular interest for stereotactic treatments, where short delivery times limit patient movement, and for breast treatments, where there is the potential to reduce the impact of breathing motion.
== Image-guided radiation therapy ==
Image-guided radiation therapy (IGRT) augments radiotherapy with imaging to increase the accuracy and precision of target localization, thereby reducing the amount of healthy tissue in the treatment field. To allow patients to benefit from sophisticated treatment techniques as IMRT or hadron therapy, patient alignment accuracies with an error margin of at most 0.5 mm are desirable. Therefore, methods such as stereoscopic digital kilovoltage imaging-based patient position verification (PPVS), and alignment estimation based on in-situ cone-beam computed tomography (CT), enrich the range of modern IGRT approaches.
== See also ==
Brachytherapy
Cyberknife
Gamma Knife, a type of radiosurgery
Intraoperative electron radiation therapy
Intraoperative radiation therapy
Neutron capture therapy of cancer
Radiation therapy
Tomotherapy
== References ==
== General references ==
Radiotherapy physics in practice, edited by JR Williams and DI Thwaites, Oxford University Press UK (2nd edition 2000), ISBN 0-19-262878-X
Linear Particle Accelerator (Linac) Animation by Ionactive
http://www.myradiotherapy.com
Superficial radiation therapy
National Institute of Radiological Science (Japan) | Wikipedia/External_beam_radiotherapy |
Physics beyond the Standard Model (BSM) refers to the theoretical developments needed to explain the deficiencies of the Standard Model, such as the inability to explain the fundamental dimensionless physical constants of the standard model, the strong CP problem, neutrino oscillations, matter–antimatter asymmetry, and the nature of dark matter and dark energy. Another problem lies within the mathematical framework of the Standard Model itself: the Standard Model is inconsistent with that of general relativity, and one or both theories break down under certain conditions, such as spacetime singularities like the Big Bang and black hole event horizons.
Theories that lie beyond the Standard Model include various extensions of the standard model through supersymmetry, such as the Minimal Supersymmetric Standard Model (MSSM) and Next-to-Minimal Supersymmetric Standard Model (NMSSM), and entirely novel explanations, such as string theory, M-theory, and extra dimensions. As these theories tend to reproduce the entirety of current phenomena, the question of which theory is the right one, or at least the "best step" towards a Theory of Everything, can only be settled via experiments, and is one of the most active areas of research in both theoretical and experimental physics.
== Problems with the Standard Model ==
Despite being the most successful theory of particle physics to date, the Standard Model is not perfect. A large share of the published output of theoretical physicists consists of proposals for various forms of "Beyond the Standard Model" new physics that would modify the Standard Model in ways subtle enough to be consistent with existing data, yet address its imperfections materially enough to predict non-Standard Model outcomes of new experiments that can be proposed.
=== Phenomena not explained ===
The Standard Model is inherently an incomplete theory. There are fundamental physical phenomena in nature that the Standard Model does not adequately explain:
Dimensionless physical constants. The standard model does not explain the masses of the elementary particles (as fractions of the Planck mass), their mixing angles and phases, the coupling constants, the cosmological constant (multiplied with the Planck length), and the number of spatial dimensions.
Gravity. The standard model does not explain gravity. The approach of simply adding a graviton to the Standard Model does not recreate what is observed experimentally without other modifications, as yet undiscovered, to the Standard Model. Moreover, the Standard Model is widely considered to be incompatible with the most successful theory of gravity to date, general relativity.
Dark matter. Assuming that general relativity and Lambda CDM are true, cosmological observations tell us the standard model explains about 5% of the mass-energy present in the universe. About 26% should be dark matter (the remaining 69% being dark energy) which would behave just like other matter, but which only interacts weakly (if at all) with the Standard Model fields. Yet, the Standard Model does not supply any fundamental particles that are good dark matter candidates.
Dark energy. As mentioned, the remaining 69% of the universe's energy should consist of the so-called dark energy, a constant energy density for the vacuum. Attempts to explain dark energy in terms of vacuum energy of the standard model lead to a mismatch of 120 orders of magnitude.
Neutrino oscillations. According to the Standard Model, neutrinos do not oscillate. However, experiments and astronomical observations have shown that neutrino oscillation does occur. These are typically explained by postulating that neutrinos have mass. Neutrinos do not have mass in the Standard Model, and mass terms for the neutrinos can be added to the Standard Model by hand, but these lead to new theoretical problems. For example, the mass terms need to be extraordinarily small and it is not clear if the neutrino masses would arise in the same way that the masses of other fundamental particles do in the Standard Model. There are also other extensions of the Standard Model for neutrino oscillations which do not assume massive neutrinos, such as Lorentz-violating neutrino oscillations.
Matter–antimatter asymmetry. The universe is made out of mostly matter. However, the standard model predicts that matter and antimatter should have been created in (almost) equal amounts if the initial conditions of the universe did not involve disproportionate matter relative to antimatter. Yet, there is no mechanism in the Standard Model to sufficiently explain this asymmetry.
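For the neutrino-oscillation entry above, the standard two-flavour vacuum oscillation probability, P(ν_α → ν_β) = sin²(2θ)·sin²(1.27·Δm²·L/E) with Δm² in eV², L in km, and E in GeV, makes explicit that a non-zero mass-squared difference is required; the parameter values in the sketch below are round illustrative numbers, not fitted results:

```python
# Two-flavour vacuum oscillation probability:
#   P(nu_a -> nu_b) = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)
# with dm2 in eV^2, L in km, E in GeV.  If dm2 = 0 (massless or degenerate
# neutrinos, as in the unmodified Standard Model) the probability vanishes.
import math

def osc_prob(sin2_2theta, dm2_eV2, L_km, E_GeV):
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Round, illustrative "atmospheric-like" parameters (not fitted values):
sin2_2theta = 1.0
dm2 = 2.5e-3        # eV^2

for L, E in ((295.0, 0.6), (1300.0, 2.5), (10.0, 1.0)):
    p = osc_prob(sin2_2theta, dm2, L, E)
    print(f"L = {L:6.1f} km, E = {E:3.1f} GeV: P = {p:.3f}")
print("With dm2 = 0:", osc_prob(sin2_2theta, 0.0, 295.0, 0.6))
```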
==== Experimental results not explained ====
No experimental result is accepted as definitively contradicting the Standard Model at the 5 σ level, widely considered to be the threshold of a discovery in particle physics. Because every experiment contains some degree of statistical and systematic uncertainty, and the theoretical predictions themselves are also almost never calculated exactly and are subject to uncertainties in measurements of the fundamental constants of the Standard Model (some of which are tiny and others of which are substantial), it is to be expected that some of the hundreds of experimental tests of the Standard Model will deviate from it to some extent, even if there were no new physics to be discovered.
At any given moment there are several experimental results standing that significantly differ from a Standard Model-based prediction. In the past, many of these discrepancies have been found to be statistical flukes or experimental errors that vanish as more data has been collected, or when the same experiments were conducted more carefully. On the other hand, any physics beyond the Standard Model would necessarily first appear in experiments as a statistically significant difference between an experiment and the theoretical prediction. The task is to determine which is the case.
In each case, physicists seek to determine if a result is merely a statistical fluke or experimental error on the one hand, or a sign of new physics on the other. More statistically significant results cannot be mere statistical flukes but can still result from experimental error or inaccurate estimates of experimental precision. Frequently, experiments are tailored to be more sensitive to experimental results that would distinguish the Standard Model from theoretical alternatives.
Some of the most notable examples include the following:
B meson decay etc. – results from a BaBar experiment may suggest a surplus over Standard Model predictions of a type of particle decay ( B → D(*) τ− ντ ). In this, an electron and positron collide, resulting in a B meson and an antimatter B meson, which then decays into a D meson and a tau lepton as well as a tau antineutrino. While the level of certainty of the excess (3.4 σ in statistical jargon) is not enough to declare a break from the Standard Model, the results are a potential sign of something amiss and are likely to affect existing theories, including those attempting to deduce the properties of Higgs bosons. In 2015, LHCb reported observing a 2.1 σ excess in the same ratio of branching fractions. The Belle experiment also reported an excess. In 2017 a meta analysis of all available data reported a cumulative 5 σ deviation from SM.
Neutron lifetime puzzle – Free neutrons are not stable but decay after some time. There are currently two methods used to measure this lifetime ("bottle" versus "beam"), and they give values that do not agree within each other's error margins. The lifetime from the bottle method is currently τ_n = 877.75 s, about 10 seconds below the beam method value of τ_n = 887.7 s. This problem may be solved by taking into account neutron scattering, which decreases the lifetime of the involved neutrons. This error occurs in the bottle method and the effect depends on the shape of the bottle, so it might be a systematic error specific to the bottle method.
=== Theoretical predictions not observed ===
Observation at particle colliders of all of the fundamental particles predicted by the Standard Model has been confirmed. The Higgs boson is predicted by the Standard Model's explanation of the Higgs mechanism, which describes how the weak SU(2) gauge symmetry is broken and how fundamental particles obtain mass; it was the last particle predicted by the Standard Model to be observed. On July 4, 2012, CERN scientists using the Large Hadron Collider announced the discovery of a particle consistent with the Higgs boson, with a mass of about 126 GeV/c2. A Higgs boson was confirmed to exist on March 14, 2013, although efforts to confirm that it has all of the properties predicted by the Standard Model are ongoing.
A few hadrons (i.e. composite particles made of quarks) whose existence is predicted by the Standard Model, and which can be produced only at very high energies and at very low rates, have not yet been definitively observed, and "glueballs" (i.e. composite particles made of gluons) have also not yet been definitively observed. Some very rare particle decays predicted by the Standard Model have likewise not yet been definitively observed because insufficient data is available to make a statistically significant observation.
=== Unexplained relations ===
Koide formula – an unexplained empirical equation remarked upon by Yoshio Koide in 1981, and later by others. It relates the masses of the three charged leptons:
{\displaystyle Q={\frac {m_{e}+m_{\mu }+m_{\tau }}{{\big (}{\sqrt {m_{e}}}+{\sqrt {m_{\mu }}}+{\sqrt {m_{\tau }}}{\big )}^{2}}}=0.666661(7)\approx {\frac {2}{3}}}
The Standard Model does not predict lepton masses (they are free parameters of the theory). However, the value of the Koide formula being equal to 2/3 within the experimental errors of the measured lepton masses suggests the existence of a theory which is able to predict lepton masses.
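As a quick numerical illustration (added here with approximate, PDG-style lepton masses that are assumptions of this sketch rather than values quoted in the text), one can verify how close the ratio is to 2/3:

```python
# Check of the Koide ratio with approximate charged-lepton masses in MeV.
from math import sqrt

m_e, m_mu, m_tau = 0.511, 105.658, 1776.86  # MeV, approximate values

Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau)) ** 2
print(Q, abs(Q - 2 / 3))   # Q ~ 0.6666, within ~1e-4 of 2/3 for these inputs
```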
The CKM matrix, if interpreted as a rotation matrix in a 3-dimensional vector space, "rotates" a vector composed of square roots of down-type quark masses
{\displaystyle ({\sqrt {m_{d}}},{\sqrt {m_{s}}},{\sqrt {m_{b}}})} into a vector of square roots of up-type quark masses {\displaystyle ({\sqrt {m_{u}}},{\sqrt {m_{c}}},{\sqrt {m_{t}}})}, up to vector lengths, a result due to Kohzo Nishida.
The sum of squares of the Yukawa couplings of all Standard Model fermions is approximately 0.984, which is very close to 1. To put it another way, the sum of squares of fermion masses is very close to half of squared Higgs vacuum expectation value. This sum is dominated by the top quark.
The sum of squares of boson masses (that is, W, Z, and Higgs bosons) is also very close to half of squared Higgs vacuum expectation value, the ratio is approximately 1.004.
Consequently, the sum of squared masses of all Standard Model particles is very close to the squared Higgs vacuum expectation value, the ratio is approximately 0.994.
It is unclear if these empirical relationships represent any underlying physics; according to Koide, the rule he discovered "may be an accidental coincidence".
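A rough numerical check of the three ratios quoted above can be done with approximate particle masses; the values below (in GeV, with v ≈ 246.22 GeV for the Higgs vacuum expectation value) are assumptions of this sketch, not figures taken from the text.

```python
# Empirical mass sum rules: compare sums of squared masses with v^2/2 and v^2.
v = 246.22                                          # GeV, Higgs vacuum expectation value
m_top, m_W, m_Z, m_H = 172.7, 80.4, 91.2, 125.25    # GeV, approximate

half_v2 = v**2 / 2
fermions = m_top**2                                 # lighter fermions contribute negligibly
bosons = m_W**2 + m_Z**2 + m_H**2

print(fermions / half_v2)                           # ~0.98
print(bosons / half_v2)                             # ~1.00
print((fermions + bosons) / v**2)                   # ~0.99
```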
=== Theoretical problems ===
Some features of the standard model are added in an ad hoc way. These are not problems per se (i.e. the theory works fine with the ad hoc insertions), but they imply a lack of understanding. These contrived features have motivated theorists to look for more fundamental theories with fewer parameters. Some of the contrivances are:
Hierarchy problem – the standard model introduces particle masses through a process known as spontaneous symmetry breaking, caused by the Higgs field. Within the standard model, the mass of the Higgs particle receives very large quantum corrections due to the presence of virtual particles (mostly virtual top quarks). These corrections are much larger than the actual mass of the Higgs, which means that the bare mass parameter of the Higgs in the standard model must be fine-tuned in such a way that it almost completely cancels the quantum corrections. This level of fine-tuning is deemed unnatural by many theorists (a rough numerical illustration is sketched after this list).
Number of parameters – the standard model depends on 19 numerical parameters. Their values are known from experiment, but the origin of the values is unknown. Some theorists have tried to find relations between different parameters, for example between the masses of particles in different generations, or to calculate particle masses, such as in asymptotic safety scenarios.
Quantum triviality – suggests that it may not be possible to create a consistent quantum field theory involving elementary scalar Higgs particles. This is sometimes called the Landau pole problem. A possible solution is that the renormalized value could go to zero as the cut-off is removed, meaning that the bare value is completely screened by quantum fluctuations.
Strong CP problem – it can be argued theoretically that the standard model should contain a term in the strong interaction that breaks CP symmetry, causing slightly different interaction rates for matter vs. antimatter. Experimentally, however, no such violation has been found, implying that the coefficient of this term – if any – would be suspiciously close to zero.
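As a rough numerical illustration of the fine-tuning described in the hierarchy problem entry above, the sketch below uses the standard one-loop top-quark estimate δm_H² ≈ 3 y_t² Λ²/(8π²); the formula and numbers are textbook assumptions introduced here, not material from the original text.

```python
# Fine-tuning estimate: how small the Higgs mass is compared with its
# one-loop top-quark correction for a given cutoff scale.
from math import pi

y_t, m_H = 0.94, 125.0                    # approximate top Yukawa and Higgs mass (GeV)
for cutoff in (1e4, 1e16):                # GeV: ~10 TeV and a GUT-like scale
    delta_m2 = 3 * y_t**2 * cutoff**2 / (8 * pi**2)
    tuning = m_H**2 / delta_m2            # fraction to which the bare term must cancel
    print(f"Lambda = {cutoff:.0e} GeV: delta m_H^2 ~ {delta_m2:.1e} GeV^2, tuning ~ {tuning:.0e}")
```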
== Additional experimental results ==
Research using experimental data on the cosmological constant, LIGO noise, and pulsar timing suggests it is very unlikely that there are any new particles with masses much higher than those found in the standard model or reachable at the Large Hadron Collider. However, this research has also indicated that quantum gravity or perturbative quantum field theory will become strongly coupled before 1 PeV, leading to other new physics at the TeV scale.
== Grand unified theories ==
The standard model has three gauge symmetries; the colour SU(3), the weak isospin SU(2), and the weak hypercharge U(1) symmetry, corresponding to the three fundamental forces. Due to renormalization the coupling constants of each of these symmetries vary with the energy at which they are measured. Around 10^16 GeV these couplings become approximately equal. This has led to speculation that above this energy the three gauge symmetries of the standard model are unified in one single gauge symmetry with a simple gauge group, and just one coupling constant. Below this energy the symmetry is spontaneously broken to the standard model symmetries. Popular choices for the unifying group are the special unitary group in five dimensions SU(5) and the special orthogonal group in ten dimensions SO(10).
Theories that unify the standard model symmetries in this way are called Grand Unified Theories (or GUTs), and the energy scale at which the unified symmetry is broken is called the GUT scale. Generically, grand unified theories predict the creation of magnetic monopoles in the early universe, and instability of the proton. Neither of these have been observed, and this absence of observation puts limits on the possible GUTs.
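The approximate convergence of the couplings can be illustrated with the standard one-loop renormalization group running; the input values at the Z mass and the one-loop coefficients below are common textbook numbers, used here as assumptions for a sketch rather than as results from the text.

```python
# One-loop running of the inverse gauge couplings in the Standard Model,
# alpha_i^{-1}(mu) = alpha_i^{-1}(M_Z) - b_i/(2*pi) * ln(mu/M_Z).
from math import log, pi

M_Z = 91.19                        # GeV
alpha_inv_MZ = [59.0, 29.6, 8.5]   # GUT-normalized U(1), SU(2), SU(3) at M_Z (approx.)
b = [41 / 10, -19 / 6, -7]         # one-loop beta-function coefficients

def alpha_inv(i: int, mu: float) -> float:
    return alpha_inv_MZ[i] - b[i] / (2 * pi) * log(mu / M_Z)

for mu in (1e3, 1e10, 1e16):
    print(f"mu = {mu:.0e} GeV:", [round(alpha_inv(i, mu), 1) for i in range(3)])
# The three values approach each other at very high scales but do not meet
# exactly in the Standard Model alone.
```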
== Supersymmetry ==
Supersymmetry extends the Standard Model by adding another class of symmetries to the Lagrangian. These symmetries exchange fermionic particles with bosonic ones. Such a symmetry predicts the existence of supersymmetric particles, abbreviated as sparticles, which include the sleptons, squarks, neutralinos and charginos. Each particle in the Standard Model would have a superpartner whose spin differs by 1/2 from the ordinary particle. Due to the breaking of supersymmetry, the sparticles are much heavier than their ordinary counterparts; they are so heavy that existing particle colliders may not be powerful enough to produce them.
== Neutrinos ==
In the standard model, neutrinos cannot spontaneously change flavor. Measurements however indicated that neutrinos do spontaneously change flavor, in what is called neutrino oscillations.
Neutrino oscillations are usually explained using massive neutrinos. In the standard model, neutrinos have exactly zero mass, as the standard model only contains left-handed neutrinos. With no suitable right-handed partner, it is impossible to add a renormalizable mass term to the standard model.
These measurements only give the mass differences between the different flavours. The best constraint on the absolute mass of the neutrinos comes from precision measurements of tritium decay, providing an upper limit of 2 eV, which makes them at least five orders of magnitude lighter than the other particles in the standard model.
This necessitates an extension of the standard model, which not only needs to explain how neutrinos get their mass, but also why the mass is so small.
One approach to add masses to the neutrinos, the so-called seesaw mechanism, is to add right-handed neutrinos and have these couple to left-handed neutrinos with a Dirac mass term. The right-handed neutrinos have to be sterile, meaning that they do not participate in any of the standard model interactions. Because they have no charges, the right-handed neutrinos can act as their own anti-particles, and have a Majorana mass term. Like the other Dirac masses in the standard model, the neutrino Dirac mass is expected to be generated through the Higgs mechanism, and is therefore unpredictable. The standard model fermion masses differ by many orders of magnitude; the Dirac neutrino mass has at least the same uncertainty. On the other hand, the Majorana mass for the right-handed neutrinos does not arise from the Higgs mechanism, and is therefore expected to be tied to some energy scale of new physics beyond the standard model, for example the Planck scale.
Therefore, any process involving right-handed neutrinos will be suppressed at low energies. The correction due to these suppressed processes effectively gives the left-handed neutrinos a mass that is inversely proportional to the right-handed Majorana mass, a mechanism known as the see-saw.
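An order-of-magnitude sketch of this see-saw relation, m_ν ≈ m_D²/M_R, is given below; the Dirac mass and the right-handed Majorana scales are illustrative assumptions only.

```python
# See-saw estimate: a light neutrino mass from a heavy right-handed scale.
m_D = 100.0                          # GeV, Dirac mass of the order of other fermion masses
for M_R in (1e14, 1e15):             # GeV, assumed right-handed Majorana masses
    m_nu_eV = (m_D**2 / M_R) * 1e9   # convert GeV to eV
    print(f"M_R = {M_R:.0e} GeV -> m_nu ~ {m_nu_eV:.2g} eV")
# A heavier M_R pushes the left-handed neutrino mass down, as described above.
```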
The presence of heavy right-handed neutrinos thereby explains both the small mass of the left-handed neutrinos and the absence of the right-handed neutrinos in observations. However, due to the uncertainty in the Dirac neutrino masses, the right-handed neutrino masses can lie anywhere. For example, they could be as light as keV and be dark matter, they can have a mass in the LHC energy range and lead to observable lepton number violation, or they can be near the GUT scale, linking the right-handed neutrinos to the possibility of a grand unified theory.
The mass terms mix neutrinos of different generations. This mixing is parameterized by the PMNS matrix, which is the neutrino analogue of the CKM quark mixing matrix. Unlike the quark mixing, which is almost minimal, the mixing of the neutrinos appears to be almost maximal. This has led to various speculations of symmetries between the various generations that could explain the mixing patterns.
The mixing matrix could also contain several complex phases that break CP invariance, although there has been no experimental probe of these. These phases could potentially create a surplus of leptons over anti-leptons in the early universe, a process known as leptogenesis. This asymmetry could then at a later stage be converted in an excess of baryons over anti-baryons, and explain the matter-antimatter asymmetry in the universe.
The light neutrinos are disfavored as an explanation for the observation of dark matter, based on considerations of large-scale structure formation in the early universe. Simulations of structure formation show that they are too hot – that is, their kinetic energy is large compared to their mass – while formation of structures similar to the galaxies in our universe requires cold dark matter. The simulations show that neutrinos can at best explain a few percent of the missing mass in dark matter. However, the heavy, sterile, right-handed neutrinos are a possible candidate for a dark matter WIMP.
There are however other explanations for neutrino oscillations which do not necessarily require neutrinos to have masses, such as Lorentz-violating neutrino oscillations.
== Preon models ==
Several preon models have been proposed to address the unsolved problem of why there are three generations of quarks and leptons. Preon models generally postulate some additional new particles which are further postulated to be able to combine to form the quarks and leptons of the standard model. One of the earliest preon models was the Rishon model.
To date, no preon model is widely accepted or fully verified.
== Theories of everything ==
Theoretical physics continues to strive toward a theory of everything, a theory that fully explains and links together all known physical phenomena, and predicts the outcome of any experiment that could be carried out in principle.
In practical terms the immediate goal in this regard is to develop a theory which would unify the Standard Model with General Relativity in a theory of quantum gravity. Additional features, such as overcoming conceptual flaws in either theory or accurate prediction of particle masses, would be desired.
The challenges in putting together such a theory are not just conceptual; they include the experimental difficulty of reaching the very high energies needed to probe exotic realms.
Several notable attempts in this direction are supersymmetry, loop quantum gravity, and string theory.
=== Supersymmetry ===
=== Loop quantum gravity ===
Theories of quantum gravity such as loop quantum gravity and others are thought by some to be promising candidates to the mathematical unification of quantum field theory and general relativity, requiring less drastic changes to existing theories. However recent work places stringent limits on the putative effects of quantum gravity on the speed of light, and disfavours some current models of quantum gravity.
=== String theory ===
Extensions, revisions, replacements, and reorganizations of the Standard Model exist in an attempt to correct for these and other issues. String theory is one such reinvention, and many theoretical physicists think that such theories are the next theoretical step toward a true Theory of Everything.
Among the numerous variants of string theory, M-theory, whose mathematical existence was first proposed at a String Conference in 1995 by Edward Witten, is believed by many to be a proper "ToE" candidate, notably by physicists Brian Greene and Stephen Hawking. Though a full mathematical description is not yet known, solutions to the theory exist for specific cases. Recent works have also proposed alternate string models, some of which lack the various harder-to-test features of M-theory (e.g. the existence of Calabi–Yau manifolds, many extra dimensions, etc.) including works by well-published physicists such as Lisa Randall.
== See also ==
== Footnotes ==
== References ==
== Further reading ==
Lisa Randall (2005). Warped Passages: Unraveling the Mysteries of the Universe's Hidden Dimensions. HarperCollins. ISBN 978-0-06-053108-9.
== External resources ==
Standard Model Theory @ SLAC
Scientific American Apr 2006
LHC. Nature July 2007
Les Houches Conference, Summer 2005 | Wikipedia/Beyond_the_Standard_Model |
The Stanford Physics Information Retrieval System (SPIRES) is a database management system developed by Stanford University. It is used by universities, colleges and research institutions. The first website in North America was created to allow remote users to access its database.
== History ==
SPIRES was originally developed at the Stanford Linear Accelerator Center (SLAC) in 1969, from a design based on a 1967 information study of physicists at SLAC. The system was designed as a physics database management system (DBMS) to deal with high-energy-physics preprints. Written in PL/I, SPIRES ran on an IBM System/360.
In the early 1970s, an evaluation of this system resulted in the decision to implement a new system for use by faculty, staff and students at Stanford University. SPIRES was renamed the Stanford Public Information Retrieval System. The new development took place under a National Science Foundation grant headed by Edwin B. Parker, principal investigator. SPIRES joined forces with the BALLOTS project to create a bibliographic citation retrieval system and quickly evolved into a generalized information retrieval and data base management system that could meet the needs of a large and diverse computing community.
SPIRES was rewritten in PL360, a block structured programming language designed explicitly for System/360-compatible hardware. The primary authors were Thomas H. Martin, Dick Guertin and Bill Kiefer. John Schroeder was the manager of the SPIRES project during this early phase of development.
Eventually, BALLOTS split off from SPIRES and the Research Libraries Group adopted SPIRES as its data base engine while providing a graphical interface to its clients. Socrates was a library circulation management system rooted in SPIRES.
SPIRES became the primary database management system for Stanford University business and student services in the 1980s and 1990s. It was also adopted by about two dozen other universities, including installations using the Michigan Terminal System (MTS), and VM/CMS. These universities collaborated through annual meetings of the SPIRES Consortium.
In 2004, SPIRES was migrated off the mainframe onto Unix platforms by means of a System/360 emulator developed by Dick Guertin. The DBMS now runs on Unix, Linux or macOS and is available under the Mozilla Public License.
== SPIRES High Energy Physics database (SPIRES-HEP) ==
The SPIRES High Energy Physics database (SPIRES-HEP), installed at Stanford Linear Accelerator Center (SLAC) in the 1970s, became the first website in North America and the first database accessible through the World Wide Web in 1991. It has since expanded into a joint project of SLAC, Fermilab, and DESY, with mirrors hosted at those institutions as well as at the Institute for High Energy Physics (Russia), the University of Durham (UK), the Yukawa Institute for Theoretical Physics at Kyoto University (Japan), and the Indonesian Institute of Sciences LIPI (Indonesia). This project stores bibliographic information about the literature of the field of High Energy Physics and is an example of academic databases and search engines.
SPIRES is, as of 2012, being replaced by INSPIRE-HEP, a modern system based on Invenio software. INSPIRE is run by a collaboration of the physics labs at CERN, DESY, Fermilab and SLAC, and interacts closely with HEP publishers, arXiv.org, NASA's Astrophysics Data System, Particle Data Group, and other information resources.
== Operating platforms ==
SPIRES currently runs on Unix, Linux and macOS platforms. Its primary use today is for the world physics communities and for "legacy" data at Stanford University. SPIRES runs under emulation of the original ORVYL operating system. The emulators are written primarily in "C" compiled by 32-bit "gcc" or "g++" depending upon the architecture (ppc or i386). The SPIRES engine is less than one megabyte in size, but performs all the searching, maintenance, and formatting of databases. A 270k emulator runs a 973k SPIRES. In 2017, the emulators were adapted by Dick Guertin to become 64-bit programs dealing with 32-bit SPIRES.
== References ==
== External links ==
SPIRES software at Stanford ITS | Wikipedia/Stanford_Physics_Information_Retrieval_System |
In physics, phenomenology is the application of theoretical physics to experimental data by making quantitative predictions based upon known theories. It is related to the philosophical notion of the same name in that these predictions describe anticipated behaviors for the phenomena in reality. Phenomenology stands in contrast with experimentation in the scientific method, in which the goal of the experiment is to test a scientific hypothesis instead of making predictions.
Phenomenology is commonly applied to the field of particle physics, where it forms a bridge between the mathematical models of theoretical physics (such as quantum field theories and theories of the structure of space-time) and the results of the high-energy particle experiments. It is sometimes used in other fields such as in condensed matter physics and plasma physics, when there are no existing theories for the observed experimental data.
== Applications in particle physics ==
=== Standard Model consequences ===
Within the well-tested and generally accepted Standard Model, phenomenology is the calculating of detailed predictions for experiments, usually at high precision (e.g., including radiative corrections).
Examples include:
Next-to-leading order calculations of particle production rates and distributions.
Monte Carlo simulation studies of physics processes at colliders.
Extraction of parton distribution functions from data.
==== CKM matrix calculations ====
The CKM matrix is useful in these predictions:
Application of heavy quark effective field theory to extract CKM matrix elements.
Using lattice QCD to extract quark masses and CKM matrix elements from experiment.
=== Theoretical models ===
In Physics beyond the Standard Model, phenomenology addresses the experimental consequences of new models: how their new particles could be searched for, how the model parameters could be measured, and how the model could be distinguished from other, competing models.
==== Phenomenological analysis ====
Phenomenological analyses study the experimental consequences of adding the most general set of beyond-the-Standard-Model effects in a given sector of the Standard Model, usually parameterized in terms of anomalous couplings and higher-dimensional operators. In this case, the term "phenomenological" is being used more in its philosophy of science sense.
== See also ==
Effective theory
Phenomenological model
Phenomenological quantum gravity
== References ==
== External links ==
Papers on phenomenology are available on the hep-ph archive of the ArXiv.org e-print archive
List of topics on phenomenology from IPPP, the Institute for Particle Physics Phenomenology at University of Durham, UK
Collider Phenomenology: Basic knowledge and techniques, lectures by Tao Han
Pheno '08 Symposium on particle physics phenomenology, including slides from the talks linked from the symposium program. | Wikipedia/Particle_physics_phenomenology |
Astronomy & Astrophysics (A&A) is a monthly peer-reviewed scientific journal covering theoretical, observational, and instrumental astronomy and astrophysics. It is operated by an editorial team under the supervision of a board of directors representing 27 sponsoring countries plus a representative of the European Southern Observatory. The journal is published by EDP Sciences and the current editors-in-chief are Thierry Forveille and João Alves.
== History ==
=== Origins ===
Astronomy & Astrophysics was created as an answer to the publishing situation found in Europe in the 1960s. At that time, multiple journals were being published in several countries around the continent. These journals usually had a limited number of subscribers, and articles were written in languages other than English. They were less widely read than American and British journals and the research they reported had therefore less impact in the community.
Starting in 1963, conversations between astronomers from European countries assessed the need for a common astronomical journal. On 8 April 1968, leading astronomers from Belgium, Denmark, France, Germany, the Netherlands, and the Scandinavian countries met at Leiden University to prepare a possible merger of some of the principal existing journals. It was proposed that the new journal be called Astronomy and Astrophysics, A European Journal.
The main policy-making body of the new journal was to be the "Board of Directors", consisting of senior astronomers or government representatives of the sponsoring countries. The board appoints the editors-in-chief, who are responsible for the scientific contents of the journal. The European Southern Observatory was chosen as an additional body that acts on behalf of the board and handles the administrative, financial, and legal matters of the journal.
A second meeting held in July 1968 in Brussels cemented the agreement discussed in Leiden. Each nation established an annual monetary contribution and appointed its delegates for the board of directors. Also at this meeting, the first editors-in-chief were appointed: Stuart Pottasch and Jean-Louis Steinberg.
The next meeting took place in Paris on 11 October 1968 and is officially regarded as the first meeting of the board of directors. At this meeting, the first chairman of the board, Adriaan Blaauw, was appointed, and the contract with the publisher Springer Science+Business Media was formalized.
=== Early years ===
The first issue of A&A was published in January 1969, merging several national journals of individual European countries into one comprehensive publication. These journals, with their ISSN and date of first publication, are as follows:
Annales d'Astrophysique ISSN 0365-0499 (France), established in 1938
Bulletin of the Astronomical Institutes of the Netherlands ISSN 0365-8910 (Netherlands), established in 1921
Bulletin Astronomique ISSN 0245-9787 (France), established in 1884
Journal des Observateurs ISSN 0368-3389 (France), established in 1915
Zeitschrift für Astrophysik ISSN 0372-8331 (Germany), established in 1930
Arkiv för Astronomi (ISSN 0004-2048), established in 1948 in Sweden, was also incorporated in 1973. The publishing of Astronomy & Astrophysics was further extended in 1992 by the incorporation of Bulletin of the Astronomical Institutes of Czechoslovakia (ISSN 0004-6248), established in 1947.
There were only four issues of the journal in 1969, but it soon became a monthly publication and one of the four major generalist astronomical journals in the world. Initially, papers were submitted in English, French or German, but it soon became clear that, for a given author, the papers in English were cited twice as often as those in other languages.
In addition to regular research papers in several different fields of astrophysics, A&A featured Letters and Research Notes for short manuscripts on a significant result or idea. A Supplement Series for the journal was created in 1970 for publishing extensive tabular material and catalogs.
=== 21st century ===
The turn of the century brought important changes to the journal. In 2001, a new contract was signed with EDP Sciences, which replaced Springer as the publishing house. Special Issues featuring results of astronomical surveys and space missions such as XMM-Newton, Planck, Rosetta, and Gaia were introduced.
The editorial structure of the journal was profoundly changed in 2003 and 2005 to involve more countries in the editorial process and to better handle the increasing number of submissions. Precise criteria for publishing in Astronomy & Astrophysics were made explicit in 2004. English language editing was introduced in 2001 as a service to the diverse authorship of the journal. An extensive survey of authors conducted in 2007 showed widespread satisfaction with the new directions of the journal, although the use of structured abstracts proved more controversial.
The evolution of electronic publishing resulted in the discontinuation of the Supplement Series, which was incorporated into the main journal in 2001, and of the printed edition in 2016. The Research Notes section was also discontinued in 2016.
In 2023, A&A announced the introduction of links between articles and corresponding ESO datasets.
The journal editorial office is located at the Paris Observatory and is supervised by the managing editor. It handles over 2000 papers per year.
An archive of the published articles and related material is maintained by the Centre de données astronomiques de Strasbourg.
== Sponsoring countries ==
The original sponsoring countries were the four countries whose journals merged to form Astronomy & Astrophysics (France, Germany, the Netherlands and Sweden), together with Belgium, Denmark, Finland, and Norway. Norway later withdrew, but Austria, Greece, Italy, Spain, and Switzerland joined during the 1970s and 1980s. The Czech Republic, Estonia, Hungary, Poland, and Slovakia all joined as new members in the 1990s.
In 2001 the words "A European Journal" were removed from the front cover in recognition of the fact that the journal was becoming increasingly global in scope. Indeed, Argentina was admitted as an "observer" in 2002. In 2004 the board of directors decided that the journal "will henceforth consider applications for sponsoring membership from any country in the world with well-documented active and excellent astronomical research". Argentina became the first non-European country to gain full membership in 2005, followed by Brazil and Chile in 2006 (Brazil withdrew in 2016). Other European countries also joined during the 21st century: Portugal, Croatia, and Bulgaria during the 2010s, and Armenia, Lithuania, Norway, Serbia and Ukraine in the 2020s.
== Chairs of the Board of Directors ==
The following persons are or have been chairs of the Board of Directors:
2023 - : A. Kučinskas
2022-2023: W. J. Duschl
2016-2022: A. Moitinho
2014-2016: J. Lub
2011-2013: B. Nordstroem
2010: K.S. de Boer
2005-2009: G. Meynet
1999-2004: Aa. Sandqvist
1993-1998: A. Maeder
1979-1992: G. Contopoulos
1969-1978: A. Blaauw
== Editors-in-Chief ==
2012 - : Main Journal: Thierry Forveille, Letters: João Alves (replaced Malcolm Walmsley in 2013)
2006 - 2011: Main Journal: Claude Bertout; Letters: Malcolm Walmsley
2004 - 2005: Main Journal: Claude Bertout; Letters: Peter Schneider
1999 - 2003: Main Journal: Claude Bertout, Harm Habing; Letters: Peter Schneider
1996 - 1998: Main Journal: James Lequeux, Harm Habing; Letters: Peter Schneider (replaced Stuart Pottasch in 1997)
1988 - 1995: Main Journal: James Lequeux, Michael Grewing; Letters: Stuart Pottasch
1986 - 1988: Main Journal: Françoise Praderie, Michael Grewing; Letters:Stuart Pottasch
1983 - 1985: Main Journal: Catherine Cesarsky, Michael Grewing; Letters: Stuart Pottasch
1981 - 1982: Main Journal: James Lequeux, Michael Grewing; Letters: Stuart Pottasch
1979 - 1980: Main Journal: James Lequeux, Hans-Heinrich Voigt; Letters: Stuart Pottasch
1975 - 1978: Main Journal: Jean Heidmann, Hans-Heinrich Voigt; Letters: Stuart Pottasch (from 1976 on)
1973 - 1974: Jean Heidmann, Stuart Pottasch
1969 - 1972: Jean-Louis Steinberg, Stuart Pottasch
== Open access ==
Before 2022, the most recent issue of A&A was available free of charge for readers. Authors had the option to pay article processing charges (APC) for immediate and permanent open access. Furthermore, all Letters to the Editor and all articles published in Sections 12 to 15 were in free access at no cost to the authors. Articles in the other sections of the journal were made freely available 12 months after publication (delayed open-access), through the publisher's site and via the Astrophysics Data System.
Since the beginning of 2022, Astronomy & Astrophysics is published in full open access under the Subscribe to Open (S2O) model.
== Scientific Writing School ==
A&A organises Scientific Writing Schools aimed at postgraduate students and young researchers. The purpose of these schools is to teach young authors how to express their scientific results through adequate and efficient science writing. As of 2025, six of these schools were organised in Belgium (2008 and 2009), Hungary (2014), Chile (2016), China (2019), and Portugal (2025).
== Abstracting and indexing ==
This journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2022 impact factor of 6.5.
== See also ==
The Astronomical Journal
The Astrophysical Journal
Monthly Notices of the Royal Astronomical Society
== References ==
== External links ==
Official website | Wikipedia/Astronomy_and_Astrophysics |
The Dark Energy Survey (DES) is an astronomical survey designed to constrain the properties of dark energy. It uses images taken in the near-ultraviolet, visible, and near-infrared to measure the expansion of the universe using Type Ia supernovae, baryon acoustic oscillations, the number of galaxy clusters, and weak gravitational lensing. The collaboration is composed of research institutions and universities from the United States, Australia, Brazil, the United Kingdom, Germany, Spain, and Switzerland. The collaboration is divided into several scientific working groups. The director of DES is Josh Frieman.
The DES began by developing and building Dark Energy Camera (DECam), an instrument designed specifically for the survey. This camera has a wide field of view and high sensitivity, particularly in the red part of the visible spectrum and in the near infrared. Observations were performed with DECam mounted on the 4-meter Víctor M. Blanco Telescope, located at the Cerro Tololo Inter-American Observatory (CTIO) in Chile. Observing sessions ran from 2013 to 2019; as of 2021 the DES collaboration has published results from the first three years of the survey.
== DECam ==
DECam, short for the Dark Energy Camera, is a large camera built to replace the previous prime focus camera on the Victor M. Blanco Telescope. The camera consists of three major components: mechanics, optics, and CCDs.
=== Mechanics ===
The mechanics of the camera consists of a filter changer with an 8-filter capacity and shutter. There is also an optical barrel that supports 5 corrector lenses, the largest of which is 98 cm in diameter. These components are attached to the CCD focal plane which is cooled to 173 K (−148 °F; −100 °C) with liquid nitrogen in order to reduce thermal noise in the CCDs. The focal plane is also kept in an extremely low vacuum of 0.00013 pascals (1.3×10−9 atm) to prevent the formation of condensation on the sensors. The entire camera with lenses, filters, and CCDs weighs approximately 4 tons. When mounted at the prime focus it was supported with a hexapod system allowing for real time focal adjustment.
=== Optics ===
The camera is outfitted with u, g, r, i, z, and Y filters spanning roughly from 340–1070 nm, similar to those used in the Sloan Digital Sky Survey (SDSS). This allows DES to obtain photometric redshift measurements to z≈1. DECam also contains five lenses acting as corrector optics to extend the telescope's field of view to a diameter of 2.2°, one of the widest fields of view available for ground-based optical and infrared imaging. One significant difference between previous charge-coupled devices (CCD) at the Victor M. Blanco Telescope and DECam is the improved quantum efficiency in the red and near-infrared wavelengths.
=== CCDs ===
The scientific sensor array on DECam is an array of 62 2048×4096 pixel back-illuminated CCDs totaling 520 megapixels; an additional 12 2048×2048 pixel CCDs (50 Mpx) are used for guiding the telescope, monitoring focus, and alignment. The full DECam focal plane contains 570 megapixels. The CCDs for DECam use high resistivity silicon manufactured by Dalsa and LBNL with 15×15 micron pixels. By comparison, the OmniVision Technologies back-illuminated CCD that was used in the iPhone 4 has a 1.75×1.75 micron pixel with 5 megapixels. The larger pixels allow DECam to collect more light per pixel, improving low light sensitivity which is desirable for an astronomical instrument. DECam's CCDs also have a 250-micron crystal depth; this is significantly larger than most consumer CCDs. The additional crystal depth increases the path length travelled by entering photons. This, in turn, increases the probability of interaction and allows the CCDs to have an increased sensitivity to lower energy photons, extending the wavelength range to 1050 nm. Scientifically this is important because it allows one to look for objects at a higher redshift, increasing statistical power in the studies mentioned above. When placed in the telescope's focal plane each pixel has a width of 0.27″ on the sky, resulting in a total field of view of 3 square degrees.
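A back-of-the-envelope check (added here, not taken from the survey documentation) shows how the quoted pixel count and pixel scale lead to the roughly 3-square-degree field of view:

```python
# DECam science array: 62 CCDs of 2048 x 4096 pixels at 0.27 arcsec per pixel.
n_ccd, nx, ny = 62, 2048, 4096
pixel_scale_deg = 0.27 / 3600            # degrees per pixel

pixels = n_ccd * nx * ny
print(pixels / 1e6)                      # ~520 megapixels
print(pixels * pixel_scale_deg**2)       # ~2.9 square degrees
```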
== Survey ==
DES imaged 5,000 square degrees of the southern sky in a footprint that overlaps with the South Pole Telescope and Stripe 82 (in large part avoiding the Milky Way). The survey took 758 observing nights spread over six annual sessions between August and February to complete, covering the survey footprint ten times in five photometric bands (g, r, i, z, and Y). The survey reached a depth of 24th magnitude in the i band over the entire survey area. Longer exposure times and faster observing cadence were made in five smaller patches totaling 30 square degrees to search for supernovae.
First light was achieved on 12 September 2012; after a verification and testing period, scientific survey observations started in August 2013. The last observing session was completed on 9 January 2019.
=== Other surveys using DECam ===
After completion of the Dark Energy Survey, the Dark Energy Camera was used for other sky surveys:
Dark Energy Camera Legacy Survey (DECaLS) covers the sky below 32° declination, not including the Milky Way. This survey covers over 9000 square degrees.
The DESI Legacy Imaging Surveys (Legacy Surveys), as of data release 10, include DECaLS, BASS and MzLS. They also incorporate additional DECam data, which means that they cover almost the entire extragalactic southern sky, including parts of the Magellanic Clouds. The purpose of the Legacy Surveys is to find targets for the Dark Energy Spectroscopic Instrument.
Dark Energy Camera Plane Survey (DECaPS), covers the Milky Way in the southern sky.
== Observing ==
Each year from August through February, observers stay in dormitories on the mountain. During a weeklong period of work, observers sleep during the day and use the telescope and camera at night. Some DES members work at the telescope console to monitor operations while others monitor camera operations and data processing.
For the wide-area footprint observations, DES takes roughly two minutes for each new image: the exposures are typically 90 seconds long, with another 30 seconds for readout of the camera data and slewing to point the telescope at its next target. Despite the restrictions on each exposure, the team also needs to consider different sky conditions for the observations, such as moonlight and cloud cover.
In order to get better images, the DES team uses a computer algorithm called the "Observing Tactician" (ObsTac) to help with sequencing observations. It optimizes among different factors, such as the date and time, weather conditions, and the position of the moon. ObsTac automatically points the telescope in the best direction and selects the exposure, using the best light filter. It also decides whether to take a wide-area or time-domain survey image, depending on whether or not the exposure will also be used for supernova searches.
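The snippet below is a purely hypothetical sketch of the kind of scoring such a scheduler might perform; the field names, weights, and scoring rule are invented for illustration and do not describe the actual ObsTac algorithm.

```python
# Toy "next exposure" selection: prefer targets far from the Moon, at low
# airmass, and under clear skies.
def score(moon_separation_deg: float, airmass: float, cloud_fraction: float) -> float:
    s = min(moon_separation_deg, 90.0) / 90.0   # reward Moon avoidance
    s += 1.0 / airmass                          # reward high elevation (low airmass)
    s -= cloud_fraction                         # penalize clouds
    return s

candidates = {
    "wide_field_A": (75.0, 1.2, 0.1),           # (moon sep, airmass, clouds) - invented
    "supernova_patch_B": (40.0, 1.6, 0.0),
}
best = max(candidates, key=lambda name: score(*candidates[name]))
print(best)
```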
== Results ==
=== Cosmology ===
The DES collaboration has published several papers presenting its cosmology results, most of them based on the first-year and third-year data. The cosmology results were obtained with a multi-probe methodology that mainly combines galaxy-galaxy lensing, several weak lensing probes, cosmic shear, galaxy clustering, and the photometric data set.
For the first-year data collected by DES, the collaboration presented cosmological constraints from galaxy clustering and weak lensing, and from cosmic shear measurements. From galaxy clustering and weak lensing they obtained {\displaystyle S_{8}=\sigma _{8}(\Omega _{m}/0.3)^{0.5}=0.773_{-0.020}^{+0.026}} and {\displaystyle \Omega _{m}=0.267_{-0.017}^{+0.030}} for ΛCDM, and {\displaystyle S_{8}=0.782_{-0.024}^{+0.036}}, {\displaystyle \Omega _{m}=0.284_{-0.030}^{+0.033}} and {\displaystyle \omega =-0.82_{-0.20}^{+0.21}} at 68% confidence limits for ωCDM. From the most significant measurements of cosmic shear in a galaxy survey, the collaboration obtained {\displaystyle \sigma _{8}(\Omega _{m}/0.3)^{0.5}=0.782_{-0.027}^{+0.027}} at 68% confidence limits, and {\displaystyle \sigma _{8}(\Omega _{m}/0.3)^{0.5}=0.777_{-0.038}^{+0.036}} with {\displaystyle \omega =-0.95_{-0.36}^{+0.33}} when the dark energy equation of state is allowed to vary. Other cosmological analyses of the first-year data presented a derivation and validation of redshift distribution estimates and their uncertainties for the galaxies used as weak lensing sources. The DES team also published a paper summarizing the full first-year photometric data set for cosmology.
For the third-year data collected by DES, the cosmic shear constraint was updated to {\displaystyle \sigma _{8}(\Omega _{m}/0.3)^{0.5}=0.759_{-0.025}^{+0.023}} for the ΛCDM model. From the third-year galaxy clustering and weak lensing results, DES updated the cosmological constraints to {\displaystyle S_{8}=\sigma _{8}(\Omega _{m}/0.3)^{0.5}=0.776_{-0.017}^{+0.017}} and {\displaystyle \Omega _{m}=0.339_{-0.031}^{+0.032}} in ΛCDM at 68% confidence limits, and {\displaystyle S_{8}=0.775_{-0.024}^{+0.026}}, {\displaystyle \Omega _{m}=0.352_{-0.041}^{+0.035}} and {\displaystyle \omega =-0.98_{-0.20}^{+0.32}} in ωCDM at 68% confidence limits. Similarly, the DES team published its third-year photometric data set for cosmology, comprising nearly 5000 deg2 of grizY imaging in the south Galactic cap and including nearly 390 million objects, with depth reaching S/N ~ 10 for extended objects up to {\displaystyle i_{AB}} ~ 23.0 and top-of-the-atmosphere photometric uniformity < 3 mmag.
=== Weak lensing ===
Weak lensing was measured statistically by measuring the shear-shear correlation function, a two-point function, or its Fourier transform, the shear power spectrum. In April 2015, the Dark Energy Survey released mass maps using cosmic shear measurements of about 2 million galaxies from the science verification data taken between August 2012 and February 2013. In 2021 weak lensing was used to map the dark matter in a region of the southern-hemisphere sky; in 2022 it was combined with galaxy clustering data to give new cosmological constraints, and in 2023 it was combined with data from the Planck telescope and the South Pole Telescope to give further improved constraints.
Another important part of the weak lensing analysis is calibrating the redshifts of the source galaxies. In December 2020 and June 2021, the DES team published two papers on using weak lensing with calibrated source-galaxy redshifts in order to map the matter density field with gravitational lensing.
=== Gravitational waves ===
After LIGO detected the gravitational wave signal GW170817, DES made follow-up observations using DECam. With DECam's independent discovery of the optical source, the DES team established its association with GW170817 by showing that none of the 1500 other sources found within the event localization region could plausibly be associated with the event. The team monitored the source for over two weeks and provided the light curve data as a machine-readable file. From the observation data set, DES concluded that the optical counterpart identified near NGC 4993 is associated with GW170817. This discovery ushered in the era of multi-messenger astronomy with gravitational waves and demonstrated the power of DECam to identify the optical counterparts of gravitational-wave sources.
=== Dwarf galaxies ===
In March 2015, two teams released their discoveries of several new potential dwarf galaxy candidates found in Year 1 DES data. In August 2015, the Dark Energy Survey team announced the discovery of eight additional candidates in Year 2 DES data. The Dark Energy Survey team later found more dwarf galaxies. With these additional results, the team was able to study more properties of the detected dwarf galaxies, such as their chemical abundances, stellar population structure, and stellar kinematics and metallicities. In February 2019, the team also discovered a sixth star cluster in the Fornax dwarf spheroidal galaxy and a tidally disrupted ultra-faint dwarf galaxy.
=== Baryon acoustic oscillations ===
The signature of baryon acoustic oscillations (BAO) can be observed in the distribution of tracers of the matter density field and used to measure the expansion history of the Universe. BAO can also be measured using purely photometric data, though at lower significance. The DES observation sample consists of 7 million galaxies distributed over a footprint of 4100 deg2 with 0.6 < zphoto < 1.1 and a typical redshift uncertainty of 0.03(1+z). Combining the likelihoods derived from angular correlations and spherical harmonics, the team constrained the ratio of the comoving angular diameter distance at the effective redshift of the sample to the sound horizon scale at the drag epoch: {\displaystyle D_{M}(z_{\mathrm {eff} }=0.835)/r_{d}=18.92\pm 0.51}.
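The quoted ratio can be evaluated in a flat ΛCDM model as sketched below; the Hubble constant, matter density, and drag-epoch sound horizon are assumed fiducial values chosen for illustration, not the DES fit.

```python
# Comoving angular diameter distance over the sound horizon, D_M(z)/r_d,
# for a flat LambdaCDM cosmology (simple trapezoid-rule integration).
from math import sqrt

c = 299792.458            # km/s
H0, Om = 67.0, 0.32       # assumed fiducial values
r_d = 147.0               # Mpc, assumed sound horizon at the drag epoch

def comoving_distance(z: float, steps: int = 10000) -> float:
    dz = z / steps
    total = 0.0
    for i in range(steps + 1):
        zi = i * dz
        Hz = H0 * sqrt(Om * (1 + zi) ** 3 + (1 - Om))
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * dz * c / Hz
    return total          # Mpc (equals D_M in a flat universe)

print(comoving_distance(0.835) / r_d)   # ~20 for these assumed parameters
```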
=== Type Ia supernova observations ===
In May 2019, the Dark Energy Survey team published its first cosmology results using Type Ia supernovae, based on the DES-SN3YR sample. The team found Ωm = 0.331 ± 0.038 with a flat ΛCDM model, and Ωm = 0.321 ± 0.018 and w = −0.978 ± 0.059 with a flat wCDM model. Analyzing the same DES-SN3YR data, they also obtained a new value of the Hubble constant, {\displaystyle H_{0}=67.1\pm 1.3\,\mathrm {km\,s^{-1}\,Mpc^{-1}} }, in excellent agreement with the Hubble constant measured by the Planck satellite collaboration in 2018. In June 2019, a follow-up paper by the DES team discussed the systematic uncertainties and the validation of using supernovae to measure the cosmology results mentioned above. The team also published its photometric pipeline and light curve data in another paper the same month.
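As a toy illustration of how supernova distances constrain the expansion rate (with entirely invented low-redshift data points, not DES measurements), the Hubble constant follows from d_L ≈ cz/H0 at small redshift:

```python
# Low-redshift Hubble-law estimate of H0 from (redshift, luminosity distance) pairs.
c = 299792.458                                          # km/s
samples = [(0.02, 89.0), (0.04, 180.0), (0.06, 267.0)]  # (z, d_L in Mpc) - invented
estimates = [c * z / dL for z, dL in samples]
print(sum(estimates) / len(estimates))                  # ~67 km/s/Mpc for these points
```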
=== Minor planets ===
Several minor planets were discovered by DECam in the course of the Dark Energy Survey, including high-inclination trans-Neptunian objects (TNOs).
The MPC has assigned the IAU code W84 for DECam's observations of small Solar System bodies. As of October 2019, the MPC inconsistently credits the discovery of nine numbered minor planets, all of them trans-Neptunian objects, to either "DeCam" or "Dark Energy Survey". The list does not contain any unnumbered minor planets potentially discovered by DECam, as discovery credits are only given upon a body's numbering, which in turn depends on a sufficiently secure orbit determination.
== Gallery ==
== See also ==
Cosmic Evolution Survey
== References ==
== External links ==
Dark Energy Survey website
Dark Energy Survey Science Program (PDF)
Dark Energy Survey Data Management
Dark Energy Camera (DECam) Archived 2017-10-18 at the Wayback Machine
Biron, Lauren (4 October 2022). "15 spectacular photos from the Dark Energy Camera". symmetry magazine. | Wikipedia/Dark_Energy_Survey |
In physical chemistry, there are numerous quantities associated with chemical compounds and reactions; notably in terms of amounts of substance, activity or concentration of a substance, and the rate of reaction. This article uses SI units.
== Introduction ==
Theoretical chemistry requires quantities from core physics, such as time, volume, temperature, and pressure. But the highly quantitative nature of physical chemistry, in a more specialized way than core physics, uses molar amounts of substance rather than simply counting numbers; this leads to the specialized definitions in this article. Core physics itself rarely uses the mole, except in areas overlapping thermodynamics and chemistry.
== Notes on nomenclature ==
Entity refers to the type of particle/s in question, such as atoms, molecules, complexes, radicals, ions, electrons etc.
Conventionally for concentrations and activities, square brackets [ ] are used around the chemical molecular formula. For an arbitrary atom, generic letters in upright non-bold typeface such as A, B, R, X or Y etc. are often used.
No standard symbols are used for the following quantities, as specifically applied to a substance:
the mass of a substance m,
the number of moles of the substance n,
partial pressure of a gas in a gaseous mixture p (or P),
some form of energy of a substance (for chemistry enthalpy H is common),
entropy of a substance S
the electronegativity of an atom or chemical bond χ.
Usually the symbol for the quantity with a subscript of some reference to the quantity is used, or the quantity is written with the reference to the chemical in round brackets. For example, the mass of water might be written in subscripts as mH2O, mwater, maq, mw (if clear from context) etc., or simply as m(H2O). Another example could be the electronegativity of the fluorine-fluorine covalent bond, which might be written with subscripts χF-F, χFF or χF-F etc., or brackets χ(F-F), χ(FF) etc.
Neither is standard. For the purpose of this article, the nomenclature is as follows, closely (but not exactly) matching standard use.
For general equations with no specific reference to an entity, quantities are written as their symbols with an index to label the component of the mixture - i.e. qi. The labeling is arbitrary in initial choice, but once chosen fixed for the calculation.
If any reference to an actual entity (say hydrogen ions H+) or any entity at all (say X) is made, the quantity symbol q is followed by curved ( ) brackets enclosing the molecular formula of X, i.e. q(X), or for a component i of a mixture q(Xi). No confusion should arise with the notation for a mathematical function.
== Quantification ==
=== General basic quantities ===
=== General derived quantities ===
== Kinetics and equilibria ==
The defining formulae for the equilibrium constants Kc (all reactions) and Kp (gaseous reactions) apply to the general chemical reaction:
{\displaystyle {\ce {{\nu _{1}X1}+{\nu _{2}X2}+\cdots +\nu _{\mathit {r}}X_{\mathit {r}}<=>{\eta _{1}Y1}+{\eta _{2}Y2}+\cdots +\eta _{\mathit {p}}{Y}_{\mathit {p}}}}}
and the defining equation for the rate constant k applies to the simpler synthesis reaction (one product only):
{\displaystyle {\ce {{\nu _{1}X1}+{\nu _{2}X2}+\cdots +\nu _{\mathit {r}}X_{\mathit {r}}->\eta {Y}}}}
where:
i = dummy index labelling component i of the reactant mixture,
j = dummy index labelling component j of the product mixture,
Xi = component i of the reactant mixture,
Yj = component j of the product mixture,
r (as an index) = number of reactant components,
p (as an index) = number of product components,
νi = stoichiometry number for component i in the reactant mixture,
ηj = stoichiometry number for component j in the product mixture,
σi = order of reaction for component i in the reactant mixture.
The dummy indices on the substances X and Y label the components (arbitrary but fixed for calculation); they are not the numbers of each component molecules as in usual chemistry notation.
The units for the chemical constants are unusual since they can vary depending on the stoichiometry of the reaction, and the number of reactant and product components. The general units for equilibrium constants can be determined by usual methods of dimensional analysis. For the generality of the kinetics and equilibria units below, let the indices for the units be;
{\displaystyle S_{1}=\sum _{j=1}^{p}\eta _{j}-\sum _{i=1}^{r}\nu _{i}\,,\quad \,S_{2}=1-\sum _{i=1}^{r}\sigma _{i}\,.}
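As an illustration of how these exponents fix the units (using, as an assumption, the ammonia synthesis reaction N2 + 3 H2 ⇌ 2 NH3 with an assumed overall second-order rate law):

```python
# Unit exponents for the equilibrium constant Kc and the rate constant k:
# Kc carries (concentration)^S1 and k carries (concentration)^S2 per second.
nu = [1, 3]        # reactant stoichiometry numbers nu_i  (N2, H2)
eta = [2]          # product stoichiometry numbers eta_j  (NH3)
sigma = [1, 1]     # assumed orders of reaction sigma_i

S1 = sum(eta) - sum(nu)    # here -2
S2 = 1 - sum(sigma)        # here -1
print(f"Kc has units (mol m^-3)^{S1}")
print(f"k has units (mol m^-3)^{S2} s^-1")
```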
== Electrochemistry ==
Notation for half-reaction standard electrode potentials is as follows. The redox reaction is split into:
a reduction reaction:
{\displaystyle {\ce {B+ + e^- <=> B}}}
and an oxidation reaction:
{\displaystyle {\ce {A+ + e^- <=> A}}}
(written this way by convention). The electrode potentials for the half reactions are written as {\displaystyle E^{\ominus }\left({\ce {A+}}\vert {\ce {A}}\right)} and {\displaystyle E^{\ominus }\left({\ce {B+}}\vert {\ce {B}}\right)} respectively.
For the case of a metal-metal half electrode, letting M represent the metal and z be its valency, the half reaction takes the form of a reduction reaction:
{\displaystyle {\ce {{M^{+{\mathit {z}}}}+{\mathit {z}}e^{-}<=>M}}}
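Using this notation, the standard potential of a full cell is the difference of the two half-reaction reduction potentials, E⊖cell = E⊖(cathode) − E⊖(anode); the Zn/Cu values below are well-known textbook numbers used here only as an example.

```python
# Standard cell potential from tabulated standard reduction potentials (volts).
E_reduction = {
    "Cu^2+|Cu": +0.34,
    "Zn^2+|Zn": -0.76,
}
E_cell = E_reduction["Cu^2+|Cu"] - E_reduction["Zn^2+|Zn"]
print(E_cell)   # ~1.10 V for the Daniell cell
```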
== Quantum chemistry ==
== References ==
== Sources ==
Physical chemistry, P.W. Atkins, Oxford University Press, 1978, ISBN 0-19-855148-7
Chemistry, Matter and the Universe, R.E. Dickerson, I. Geis, W.A. Benjamin Inc. (USA), 1976, ISBN 0-8053-2369-4
Chemical thermodynamics, D.J.G. Ives, University Chemistry Series, Macdonald Technical and Scientific co. ISBN 0-356-03736-3.
Elements of Statistical Thermodynamics (2nd Edition), L.K. Nash, Principles of Chemistry, Addison-Wesley, 1974, ISBN 0-201-05229-6
Statistical Physics (2nd Edition), F. Mandl, Manchester Physics, John Wiley & Sons, 2008, ISBN 978-0-471-91533-1
== Further reading ==
Quanta: A handbook of concepts, P.W. Atkins, Oxford University Press, 1974, ISBN 0-19-855493-1
Molecular Quantum Mechanics Parts I and II: An Introduction to QUANTUM CHEMISTRY (Volume 1), P.W. Atkins, Oxford University Press, 1977, ISBN 0-19-855129-0
Thermodynamics, From Concepts to Applications (2nd Edition), A. Shavit, C. Gutfinger, CRC Press (Taylor and Francis Group, USA), 2009, ISBN 978-1-4200-7368-3
Properties of matter, B.H. Flowers, E. Mendoza, Manchester Physics Series, J. Wiley and Sons, 1970, ISBN 978-0-471-26498-9 | Wikipedia/Defining_equation_(physical_chemistry) |
In thermodynamics, the Massieu function (sometimes called Massieu–Gibbs function, Massieu potential, or Gibbs function, or characteristic (state) function in its original terminology), symbol Ψ (Psi), is defined by the following relation:
{\displaystyle \Psi =\Psi {\big (}X_{1},\dots ,X_{i},Y_{i+1},\dots ,Y_{r}{\big )}\,}
where for every system with degree of freedom r one may choose r variables, e.g.
{\displaystyle {\big (}X_{1},\dots ,X_{i},Y_{i+1},\dots ,Y_{r}{\big )},}
to define a coordinate system, where X and Y are extensive and intensive variables, respectively, and where at least one extensive variable must be within this set in order to define the size of the system. The (r + 1)-th variable, Ψ, is then called the Massieu function.
The Massieu function was introduced in the 1869 paper "On the Characteristic Functions of Various Fluids" by the French engineer François Massieu (1832-1896). The name "Gibbs function" honors the American physicist Willard Gibbs (1839-1903), who cited Massieu in his 1876 On the Equilibrium of Heterogeneous Substances. Massieu, as discussed in the first footnote to the abstract of Gibbs' Equilibrium, “appears to have been the first to solve the problem of representing all the properties of a body of invariable composition which are concerned in reversible processes by means of a single function.” Massieu's 1869 paper seems to be the source for the generalized mathematical conception of the energy of a system being equal to summations of the products of pairs of conjugate variables.
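A commonly quoted special case (stated here as an illustration under the usual conventions, not taken from the article) is the Massieu function obtained by Legendre-transforming the entropy with respect to the internal energy:

```latex
% Massieu function for the variable set (1/T, V, N):
\[
  \Psi\!\left(\tfrac{1}{T},V,N\right) \;=\; S-\frac{U}{T} \;=\; -\frac{F}{T},
  \qquad F = U - TS \ \text{(Helmholtz free energy)},
\]
% with differential
\[
  \mathrm{d}\Psi \;=\; -\,U\,\mathrm{d}\!\left(\tfrac{1}{T}\right)
  \;+\;\frac{p}{T}\,\mathrm{d}V \;-\;\frac{\mu}{T}\,\mathrm{d}N .
\]
```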
== References ==
== Further reading ==
Massieu, F. (1869). "Sur les fonctions caractéristiques des divers fluides et sur la théorie des vapeurs" [On the characteristic functions of various fluids and on the theory of vapors]. Comptes rendus (in French). 69: 858–862.
Massieu, François. (1876). Thermodynamique : Mémoire sur les fonctions caractéristiques des divers fluides et sur la théorie des vapeurs. 92 pgs. Académie des Sciences de L'Institut National de France. | Wikipedia/Massieu_function |
Extended irreversible thermodynamics is a branch of non-equilibrium thermodynamics that goes beyond the local equilibrium hypothesis of classical irreversible thermodynamics.
The space of state variables is enlarged by including the fluxes of mass, momentum and energy and eventually higher order fluxes.
The formalism is well suited for describing high-frequency processes and materials at small length scales.
== Overview ==
Over the last decades, many efforts have been made to generalize the classical laws of Fourier (heat conduction), Fick (matter diffusion), Newton (viscous flow) and Ohm (electrical transport). Indeed, modern technology strives towards miniaturized devices, high frequencies and strongly non-linear processes, requiring a new conceptual approach.
Several classes of theories have been developed with this objective and one of them, known under the heading of Extended Irreversible Thermodynamics (EIT) has raised a particular growing interest.
The paternity of EIT can be traced back to James Clerk Maxwell who in 1867 introduced time derivative terms in the constitutive equations of ideal gases.
== Basic concepts ==
The basic idea underlying EIT is to upgrade the non-equilibrium fluxes of internal energy, matter, momentum and electric charge to the status of independent variables.
The choice of the fluxes as variables finds its roots in Grad's thirteen-moment kinetic theory of gases, which therefore provides the natural basis for the development of EIT.
The main consequence of the selection of fluxes as state variables is that the constitutive equations of Fourier, Fick, Newton and Ohm are replaced by first-order time evolution equations including memory and non-local effects.
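For heat conduction, a commonly cited example of such an evolution equation (a standard result in the EIT literature, not spelled out in the text above) is the Maxwell–Cattaneo generalization of Fourier's law; combined with the energy balance it turns the parabolic heat equation into a hyperbolic one with finite signal speed:

```latex
% Maxwell–Cattaneo equation: the heat flux q relaxes towards the Fourier value
% with relaxation time tau instead of following the temperature gradient
% instantaneously.
\tau \,\frac{\partial \mathbf{q}}{\partial t} + \mathbf{q} = -\kappa \,\nabla T
% Combined with the energy balance  rho c \,\partial T/\partial t = -\nabla\cdot\mathbf{q},
% this yields a hyperbolic (telegrapher-type) equation with finite propagation
% speed  v = \sqrt{\kappa/(\rho c \tau)}, rather than the parabolic Fourier equation.
```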
The selection of the fluxes as variables is not a mere arbitrary act if it is recalled that in everyday life, fluxes may play a leading role as for instance in traffic control (flux of cars), economy (flux of money), and the World Wide Web (flux of information).
== An extension of classical irreversible thermodynamics ==
EIT can be viewed as the natural extension of Classical Irreversible Thermodynamics (CIT).
Mainly developed by the Belgian-Dutch school headed by I. Prigogine, working on a simple hypothesis of local thermodynamic equilibrium, CIT assumes the existence of field laws of the diffusion type. Mathematically, these are parabolic partial differential equations. They entail that a locally applied disturbance propagates at infinite velocity across the body. This contradicts both experimental evidence and the principle of causality. The latter requires that an effect comes after the application of its cause.
In EIT, the idea of local thermodynamic equilibrium is abandoned. In contrast with CIT, the field equations of EIT are hyperbolic, circumventing the paradox of signals moving at infinite velocity.
== Applications ==
The range of applications of EIT is not limited to situations near equilibrium but encompasses a wide variety of domains, including
memory effects (fast processes, polymers, superfluids),
non-local effects (micro- and nano-materials),
non-linear effects (high powers, shock waves).
However, the discussion is not closed. Several fundamental questions, such as the definition of a non-equilibrium entropy and temperature, the status of the second law of thermodynamics, and a unique choice of state variables, have received only partial answers and call for more definitive ones.
== References == | Wikipedia/Extended_irreversible_thermodynamics |
In thermodynamics, dissipation is the result of an irreversible process that affects a thermodynamic system. In a dissipative process, energy (internal, bulk flow kinetic, or system potential) transforms from an initial form to a final form, where the capacity of the final form to do thermodynamic work is less than that of the initial form. For example, transfer of energy as heat is dissipative because it is a transfer of energy other than by thermodynamic work or by transfer of matter, and spreads previously concentrated energy. Following the second law of thermodynamics, conduction and radiation of heat from one body to another produce entropy (reducing the capacity of the combination of the two bodies to do work), and the entropy of an isolated system never decreases.
In mechanical engineering, dissipation is the irreversible conversion of mechanical energy into thermal energy with an associated increase in entropy.
Processes with defined local temperature produce entropy at a certain rate. The entropy production rate times local temperature gives the dissipated power. Important examples of irreversible processes are: heat flow through a thermal resistance, fluid flow through a flow resistance, diffusion (mixing), chemical reactions, and electric current flow through an electrical resistance (Joule heating).
== Definition ==
Dissipative thermodynamic processes are essentially irreversible because they produce entropy. Planck regarded friction as the prime example of an irreversible thermodynamic process. In a process in which the temperature is locally continuously defined, the local density of rate of entropy production times local temperature gives the local density of dissipated power.
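As a minimal numerical sketch of the statement that dissipated power equals the entropy production rate times the local temperature, consider Joule heating in a resistor; the values below are illustrative only:

```python
# Illustrative values only: Joule heating in a resistor held at a well-defined
# local temperature T.  Dissipated power equals T times the entropy
# production rate.
I = 2.0      # current through the resistor, ampere
R = 10.0     # electrical resistance, ohm
T = 300.0    # local temperature of the resistor, kelvin

P_diss = I**2 * R      # dissipated (Joule) power, watt
S_dot = P_diss / T     # entropy production rate, watt per kelvin

print(f"dissipated power        : {P_diss:.1f} W")
print(f"entropy production rate : {S_dot:.4f} W/K")

# Consistency check: entropy production rate times local temperature
# recovers the dissipated power, as stated in the text.
assert abs(S_dot * T - P_diss) < 1e-9
```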
A particular occurrence of a dissipative process cannot be described by a single individual Hamiltonian formalism. A dissipative process requires a collection of admissible individual Hamiltonian descriptions, exactly which one describes the actual particular occurrence of the process of interest being unknown. This includes friction and hammering, and all similar forces that result in decoherency of energy, that is, conversion of coherent or directed energy flow into an undirected or more isotropic distribution of energy.
=== Energy ===
"The conversion of mechanical energy into heat is called energy dissipation." – François Roddier The term is also applied to the loss of energy due to generation of unwanted heat in electric and electronic circuits.
=== Computational physics ===
In computational physics, numerical dissipation (also known as "Numerical diffusion") refers to certain side-effects that may occur as a result of a numerical solution to a differential equation. When the pure advection equation, which is free of dissipation, is solved by a numerical approximation method, the energy of the initial wave may be reduced in a way analogous to a diffusional process. Such a method is said to contain 'dissipation'. In some cases, "artificial dissipation" is intentionally added to improve the numerical stability characteristics of the solution.
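A minimal sketch of numerical dissipation, assuming a first-order upwind discretization of the pure advection equation on a periodic grid (the grid size, CFL number and initial profile are arbitrary choices):

```python
import numpy as np

# First-order upwind discretization of the dissipation-free advection equation
#   du/dt + c du/dx = 0
# on a periodic grid.  The exact solution only translates the initial pulse;
# any loss of peak amplitude is numerical dissipation (numerical diffusion).
nx, c, cfl = 200, 1.0, 0.5            # grid points, advection speed, CFL number
dx = 1.0 / nx
dt = cfl * dx / c
x = np.arange(nx) * dx
u = np.exp(-200.0 * (x - 0.3) ** 2)   # initial Gaussian pulse

peak_initial = u.max()
for _ in range(200):                  # march forward in time
    u = u - c * dt / dx * (u - np.roll(u, 1))

print(f"peak amplitude: initial {peak_initial:.3f}, after advection {u.max():.3f}")
```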
=== Mathematics ===
A formal, mathematical definition of dissipation, as commonly used in the mathematical study of measure-preserving dynamical systems, is given in the article wandering set.
== Examples ==
=== In hydraulic engineering ===
Dissipation is the process of converting mechanical energy of downward-flowing water into thermal and acoustical energy. Various devices are designed in stream beds to reduce the kinetic energy of flowing waters to reduce their erosive potential on banks and river bottoms. Very often, these devices look like small waterfalls or cascades, where water flows vertically or over riprap to lose some of its kinetic energy.
=== Irreversible processes ===
Important examples of irreversible processes are:
Heat flow through a thermal resistance
Fluid flow through a flow resistance
Diffusion (mixing)
Chemical reactions
Electrical current flow through an electrical resistance (Joule heating).
=== Waves or oscillations ===
Waves or oscillations lose energy over time, typically from friction or turbulence. In many cases, the "lost" energy raises the temperature of the system. For example, a wave that loses amplitude is said to dissipate. The precise nature of the effects depends on the nature of the wave: an atmospheric wave, for instance, may dissipate close to the surface due to friction with the land mass, and at higher levels due to radiative cooling.
== History ==
The concept of dissipation was introduced in the field of thermodynamics by William Thomson (Lord Kelvin) in 1852. Lord Kelvin deduced that a subset of the above-mentioned irreversible dissipative processes will occur unless a process is governed by a "perfect thermodynamic engine". The processes that Lord Kelvin identified were friction, diffusion, conduction of heat and the absorption of light.
== See also ==
Entropy production
General equation of heat transfer
Flood control
Principle of maximum entropy
Two-dimensional gas
== References ==
=== General References ===
"Dissipative system, a system that loses energy in the course of its time evolution." Benenson, W.; Harris, J. W.; Stocker, H.; Lutz, H. (2002). "6.1.3". Handbook of Physics. New York, NY: Springer-Verlag. p. 219. ISBN 978-0-387-21632-4. | Wikipedia/Energy_dissipation |
Protein folding is the physical process by which a protein, after synthesis by a ribosome as a linear chain of amino acids, changes from an unstable random coil into a more ordered three-dimensional structure. This structure permits the protein to become biologically functional or active.
The folding of many proteins begins even during the translation of the polypeptide chain. The amino acids interact with each other to produce a well-defined three-dimensional structure, known as the protein's native state. This structure is determined by the amino-acid sequence or primary structure.
The correct three-dimensional structure is essential to function, although some parts of functional proteins may remain unfolded, indicating that protein dynamics are important. Failure to fold into a native structure generally produces inactive proteins, but in some instances, misfolded proteins have modified or toxic functionality. Several neurodegenerative and other diseases are believed to result from the accumulation of amyloid fibrils formed by misfolded proteins, the infectious varieties of which are known as prions. Many allergies are caused by the incorrect folding of some proteins because the immune system does not produce the antibodies for certain protein structures.
Denaturation of proteins is a process of transition from a folded to an unfolded state. It happens in cooking, burns, proteinopathies, and other contexts. Residual structure present, if any, in the supposedly unfolded state may form a folding initiation site and guide the subsequent folding reactions.
The duration of the folding process varies dramatically depending on the protein of interest. When studied outside the cell, the slowest folding proteins require many minutes or hours to fold, primarily due to proline isomerization, and must pass through a number of intermediate states, like checkpoints, before the process is complete. On the other hand, very small single-domain proteins with lengths of up to a hundred amino acids typically fold in a single step. Time scales of milliseconds are the norm, and the fastest known protein folding reactions are complete within a few microseconds. The folding time scale of a protein depends on its size, contact order, and circuit topology.
Understanding and simulating the protein folding process has been an important challenge for computational biology since the late 1960s.
== Process of protein folding ==
=== Primary structure ===
The primary structure of a protein, its linear amino-acid sequence, determines its native conformation. The specific amino acid residues and their position in the polypeptide chain are the determining factors for which portions of the protein fold closely together and form its three-dimensional conformation. The amino acid composition is not as important as the sequence. The essential fact of folding, however, remains that the amino acid sequence of each protein contains the information that specifies both the native structure and the pathway to attain that state. This is not to say that nearly identical amino acid sequences always fold similarly. Conformations differ based on environmental factors as well; similar proteins fold differently based on where they are found.
=== Secondary structure ===
Formation of a secondary structure is the first step in the folding process that a protein takes to assume its native structure. Characteristic of secondary structure are the structures known as alpha helices and beta sheets that fold rapidly because they are stabilized by intramolecular hydrogen bonds, as was first characterized by Linus Pauling. Formation of intramolecular hydrogen bonds provides another important contribution to protein stability. α-helices are formed by hydrogen bonding of the backbone to form a spiral shape. The β pleated sheet is a structure that forms with the backbone bending over itself to form the hydrogen bonds. The hydrogen bonds are between the amide hydrogen and carbonyl oxygen of the peptide bond. There exist anti-parallel β pleated sheets and parallel β pleated sheets; the hydrogen bonds are more stable in the anti-parallel β sheet because they form at the ideal 180-degree angle, in contrast with the slanted hydrogen bonds formed by parallel sheets.
=== Tertiary structure ===
The α-Helices and β-Sheets are commonly amphipathic, meaning they have a hydrophilic and a hydrophobic portion. This ability helps in forming tertiary structure of a protein in which folding occurs so that the hydrophilic sides are facing the aqueous environment surrounding the protein and the hydrophobic sides are facing the hydrophobic core of the protein. Secondary structure hierarchically gives way to tertiary structure formation. Once the protein's tertiary structure is formed and stabilized by the hydrophobic interactions, there may also be covalent bonding in the form of disulfide bridges formed between two cysteine residues. These non-covalent and covalent contacts take a specific topological arrangement in a native structure of a protein. Tertiary structure of a protein involves a single polypeptide chain; however, additional interactions of folded polypeptide chains give rise to quaternary structure formation.
=== Quaternary structure ===
Tertiary structure may give way to the formation of quaternary structure in some proteins, which usually involves the "assembly" or "coassembly" of subunits that have already folded; in other words, multiple polypeptide chains could interact to form a fully functional quaternary protein.
=== Driving forces of protein folding ===
Folding is a spontaneous process that is mainly guided by hydrophobic interactions, formation of intramolecular hydrogen bonds, van der Waals forces, and it is opposed by conformational entropy. The folding time scale of an isolated protein depends on its size, contact order, and circuit topology. Inside cells, the process of folding often begins co-translationally, so that the N-terminus of the protein begins to fold while the C-terminal portion of the protein is still being synthesized by the ribosome; however, a protein molecule may fold spontaneously during or after biosynthesis. While these macromolecules may be regarded as "folding themselves", the process also depends on the solvent (water or lipid bilayer), the concentration of salts, the pH, the temperature, the possible presence of cofactors and of molecular chaperones.
Proteins will have limitations on their folding abilities by the restricted bending angles or conformations that are possible. These allowable angles of protein folding are described with a two-dimensional plot known as the Ramachandran plot, depicted with psi and phi angles of allowable rotation.
==== Hydrophobic effect ====
Protein folding must be thermodynamically favorable within a cell in order for it to be a spontaneous reaction. Since it is known that protein folding is a spontaneous reaction, then it must assume a negative Gibbs free energy value. Gibbs free energy in protein folding is directly related to enthalpy and entropy. For a negative delta G to arise and for protein folding to become thermodynamically favorable, then either enthalpy, entropy, or both terms must be favorable.
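In compact form, the argument above is the standard free-energy criterion (general thermodynamics, not specific to any particular protein):

```latex
% Folding (unfolded -> native) is spontaneous at constant T and p when
\Delta G_{\text{fold}} \;=\; \Delta H \;-\; T\,\Delta S \;<\; 0
% i.e. when favourable enthalpy (contacts, hydrogen bonds) and/or a favourable
% entropy term (dominated by water released in the hydrophobic collapse)
% outweigh the loss of conformational entropy of the chain.
```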
Minimizing the number of hydrophobic side-chains exposed to water is an important driving force behind the folding process. The hydrophobic effect is the phenomenon in which the hydrophobic chains of a protein collapse into the core of the protein (away from the hydrophilic environment). In an aqueous environment, the water molecules tend to aggregate around the hydrophobic regions or side chains of the protein, creating water shells of ordered water molecules. An ordering of water molecules around a hydrophobic region increases order in a system and therefore contributes a negative change in entropy (less entropy in the system). The water molecules are fixed in these water cages which drives the hydrophobic collapse, or the inward folding of the hydrophobic groups. The hydrophobic collapse introduces entropy back to the system via the breaking of the water cages which frees the ordered water molecules. The multitude of hydrophobic groups interacting within the core of the globular folded protein contributes a significant amount to protein stability after folding, because of the vastly accumulated van der Waals forces (specifically London Dispersion forces). The hydrophobic effect exists as a driving force in thermodynamics only if there is the presence of an aqueous medium with an amphiphilic molecule containing a large hydrophobic region. The strength of hydrogen bonds depends on their environment; thus, H-bonds enveloped in a hydrophobic core contribute more than H-bonds exposed to the aqueous environment to the stability of the native state.
In proteins with globular folds, hydrophobic amino acids tend to be interspersed along the primary sequence, rather than randomly distributed or clustered together. However, proteins that have recently been born de novo, which tend to be intrinsically disordered, show the opposite pattern of hydrophobic amino acid clustering along the primary sequence.
==== Chaperones ====
Molecular chaperones are a class of proteins that aid in the correct folding of other proteins in vivo. Chaperones exist in all cellular compartments and interact with the polypeptide chain in order to allow the native three-dimensional conformation of the protein to form; however, chaperones themselves are not included in the final structure of the protein they are assisting in. Chaperones may assist in folding even when the nascent polypeptide is being synthesized by the ribosome. Molecular chaperones operate by binding to stabilize an otherwise unstable structure of a protein in its folding pathway, but chaperones do not contain the necessary information to know the correct native structure of the protein they are aiding; rather, chaperones work by preventing incorrect folding conformations. In this way, chaperones do not actually increase the rate of individual steps involved in the folding pathway toward the native structure; instead, they work by reducing possible unwanted aggregations of the polypeptide chain that might otherwise slow down the search for the proper intermediate and they provide a more efficient pathway for the polypeptide chain to assume the correct conformations. Chaperones are not to be confused with folding catalyst proteins, which catalyze chemical reactions responsible for slow steps in folding pathways. Examples of folding catalysts are protein disulfide isomerases and peptidyl-prolyl isomerases that may be involved in formation of disulfide bonds or interconversion between cis and trans stereoisomers of peptide group. Chaperones are shown to be critical in the process of protein folding in vivo because they provide the protein with the aid needed to assume its proper alignments and conformations efficiently enough to become "biologically relevant". This means that the polypeptide chain could theoretically fold into its native structure without the aid of chaperones, as demonstrated by protein folding experiments conducted in vitro; however, this process proves to be too inefficient or too slow to exist in biological systems; therefore, chaperones are necessary for protein folding in vivo. Along with its role in aiding native structure formation, chaperones are shown to be involved in various roles such as protein transport, degradation, and even allow denatured proteins exposed to certain external denaturant factors an opportunity to refold into their correct native structures.
A fully denatured protein lacks both tertiary and secondary structure, and exists as a so-called random coil. Under certain conditions some proteins can refold; however, in many cases, denaturation is irreversible. Cells sometimes protect their proteins against the denaturing influence of heat with enzymes known as heat shock proteins (a type of chaperone), which assist other proteins both in folding and in remaining folded. Heat shock proteins have been found in all species examined, from bacteria to humans, suggesting that they evolved very early and have an important function. Some proteins never fold in cells at all except with the assistance of chaperones which either isolate individual proteins so that their folding is not interrupted by interactions with other proteins or help to unfold misfolded proteins, allowing them to refold into the correct native structure. This function is crucial to prevent the risk of precipitation into insoluble amorphous aggregates. The external factors involved in protein denaturation or disruption of the native state include temperature, external fields (electric, magnetic), molecular crowding, and even the limitation of space (i.e. confinement), which can have a big influence on the folding of proteins. High concentrations of solutes, extremes of pH, mechanical forces, and the presence of chemical denaturants can contribute to protein denaturation, as well. These individual factors are categorized together as stresses. Chaperones are shown to exist in increasing concentrations during times of cellular stress and help the proper folding of emerging proteins as well as denatured or misfolded ones.
Under some conditions proteins will not fold into their biochemically functional forms. Temperatures above or below the range that cells tend to live in will cause thermally unstable proteins to unfold or denature (this is why boiling makes an egg white turn opaque). Protein thermal stability is far from constant, however; for example, hyperthermophilic bacteria have been found that grow at temperatures as high as 122 °C, which of course requires that their full complement of vital proteins and protein assemblies be stable at that temperature or above.
The bacterium E. coli is the host for bacteriophage T4, and the phage encoded gp31 protein (P17313) appears to be structurally and functionally homologous to E. coli chaperone protein GroES and able to substitute for it in the assembly of bacteriophage T4 virus particles during infection. Like GroES, gp31 forms a stable complex with GroEL chaperonin that is absolutely necessary for the folding and assembly in vivo of the bacteriophage T4 major capsid protein gp23.
=== Fold switching ===
Some proteins have multiple native structures, and change their fold based on some external factors. For example, the KaiB protein switches fold throughout the day, acting as a clock for cyanobacteria. It has been estimated that around 0.5–4% of PDB (Protein Data Bank) proteins switch folds.
== Protein misfolding and neurodegenerative disease ==
A protein is considered to be misfolded if it cannot achieve its normal native state. This can be due to mutations in the amino acid sequence or a disruption of the normal folding process by external factors. The misfolded protein typically contains β-sheets that are organized in a supramolecular arrangement known as a cross-β structure. These β-sheet-rich assemblies are very stable, very insoluble, and generally resistant to proteolysis. The structural stability of these fibrillar assemblies is caused by extensive interactions between the protein monomers, formed by backbone hydrogen bonds between their β-strands. The misfolding of proteins can trigger the further misfolding and accumulation of other proteins into aggregates or oligomers. The increased levels of aggregated proteins in the cell leads to formation of amyloid-like structures which can cause degenerative disorders and cell death. The amyloids are fibrillary structures that contain intermolecular hydrogen bonds which are highly insoluble and made from converted protein aggregates. Therefore, the proteasome pathway may not be efficient enough to degrade the misfolded proteins prior to aggregation. Misfolded proteins can interact with one another and form structured aggregates and gain toxicity through intermolecular interactions.
Aggregated proteins are associated with prion-related illnesses such as Creutzfeldt–Jakob disease, bovine spongiform encephalopathy (mad cow disease), amyloid-related illnesses such as Alzheimer's disease and familial amyloid cardiomyopathy or polyneuropathy, as well as intracellular aggregation diseases such as Huntington's and Parkinson's disease. These age onset degenerative diseases are associated with the aggregation of misfolded proteins into insoluble, extracellular aggregates and/or intracellular inclusions including cross-β amyloid fibrils. It is not completely clear whether the aggregates are the cause or merely a reflection of the loss of protein homeostasis, the balance between synthesis, folding, aggregation and protein turnover. Recently the European Medicines Agency approved the use of Tafamidis or Vyndaqel (a kinetic stabilizer of tetrameric transthyretin) for the treatment of transthyretin amyloid diseases. This suggests that the process of amyloid fibril formation (and not the fibrils themselves) causes the degeneration of post-mitotic tissue in human amyloid diseases. Misfolding and excessive degradation instead of folding and function leads to a number of proteopathy diseases such as antitrypsin-associated emphysema, cystic fibrosis and the lysosomal storage diseases, where loss of function is the origin of the disorder. While protein replacement therapy has historically been used to correct the latter disorders, an emerging approach is to use pharmaceutical chaperones to fold mutated proteins to render them functional.
== Experimental techniques for studying protein folding ==
While inferences about protein folding can be made through mutation studies, typically, experimental techniques for studying protein folding rely on the gradual unfolding or folding of proteins and observing conformational changes using standard non-crystallographic techniques.
=== X-ray crystallography ===
X-ray crystallography is one of the more efficient and important methods for attempting to decipher the three dimensional configuration of a folded protein. To be able to conduct X-ray crystallography, the protein under investigation must be located inside a crystal lattice. To place a protein inside a crystal lattice, one must have a suitable solvent for crystallization, obtain a pure protein at supersaturated levels in solution, and precipitate the crystals in solution. Once a protein is crystallized, X-ray beams can be concentrated through the crystal lattice which would diffract the beams or shoot them outwards in various directions. These exiting beams are correlated to the specific three-dimensional configuration of the protein enclosed within. The X-rays specifically interact with the electron clouds surrounding the individual atoms within the protein crystal lattice and produce a discernible diffraction pattern. Only by relating the electron density clouds with the amplitude of the X-rays can this pattern be read and lead to assumptions of the phases or phase angles involved that complicate this method. Without the relation established through a mathematical basis known as Fourier transform, the "phase problem" would render predicting the diffraction patterns very difficult. Emerging methods like multiple isomorphous replacement use the presence of a heavy metal ion to diffract the X-rays into a more predictable manner, reducing the number of variables involved and resolving the phase problem.
=== Fluorescence spectroscopy ===
Fluorescence spectroscopy is a highly sensitive method for studying the folding state of proteins. Three amino acids, phenylalanine (Phe), tyrosine (Tyr) and tryptophan (Trp), have intrinsic fluorescence properties, but only Tyr and Trp are used experimentally because their quantum yields are high enough to give good fluorescence signals. Both Trp and Tyr are excited by a wavelength of 280 nm, whereas only Trp is excited by a wavelength of 295 nm. Because of their aromatic character, Trp and Tyr residues are often found fully or partially buried in the hydrophobic core of proteins, at the interface between two protein domains, or at the interface between subunits of oligomeric proteins. In this apolar environment, they have high quantum yields and therefore high fluorescence intensities. Upon disruption of the protein's tertiary or quaternary structure, these side chains become more exposed to the hydrophilic environment of the solvent, and their quantum yields decrease, leading to low fluorescence intensities. For Trp residues, the wavelength of their maximal fluorescence emission also depends on their environment.
Fluorescence spectroscopy can be used to characterize the equilibrium unfolding of proteins by measuring the variation in the intensity of fluorescence emission or in the wavelength of maximal emission as functions of a denaturant value. The denaturant can be a chemical molecule (urea, guanidinium hydrochloride), temperature, pH, pressure, etc. The equilibrium between the different but discrete protein states, i.e. native state, intermediate states, unfolded state, depends on the denaturant value; therefore, the global fluorescence signal of their equilibrium mixture also depends on this value. One thus obtains a profile relating the global protein signal to the denaturant value. The profile of equilibrium unfolding may enable one to detect and identify intermediates of unfolding. General equations have been developed by Hugues Bedouelle to obtain the thermodynamic parameters that characterize the unfolding equilibria for homomeric or heteromeric proteins, up to trimers and potentially tetramers, from such profiles. Fluorescence spectroscopy can be combined with fast-mixing devices such as stopped flow, to measure protein folding kinetics, generate a chevron plot and derive a Phi value analysis.
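A minimal sketch of the kind of two-state analysis described above, assuming a linear dependence of the unfolding free energy on denaturant concentration; all parameter values are invented for illustration and are not taken from the cited work:

```python
import numpy as np

# Two-state unfolding model N <-> U with a linear free-energy dependence on
# denaturant:  dG_unfold(D) = dG_H2O - m * [D].  Parameter values are
# illustrative only.
R = 8.314e-3        # gas constant, kJ/(mol K)
T = 298.0           # temperature, K
dG_H2O = 20.0       # unfolding free energy in water, kJ/mol
m_value = 5.0       # denaturant dependence (m value), kJ/(mol M)

D = np.linspace(0.0, 8.0, 9)            # denaturant concentration, M
dG = dG_H2O - m_value * D               # unfolding free energy at each [D]
K = np.exp(-dG / (R * T))               # equilibrium constant [U]/[N]
f_unfolded = K / (1.0 + K)              # fraction of unfolded protein

for d, f in zip(D, f_unfolded):
    print(f"[D] = {d:3.1f} M   fraction unfolded = {f:.3f}")
# The midpoint of the transition lies where dG = 0, i.e. at [D] = dG_H2O / m.
```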
=== Circular dichroism ===
Circular dichroism is one of the most general and basic tools to study protein folding. Circular dichroism spectroscopy measures the absorption of circularly polarized light. In proteins, structures such as alpha helices and beta sheets are chiral, and thus absorb such light. The absorption of this light acts as a marker of the degree of foldedness of the protein ensemble. This technique has been used to measure equilibrium unfolding of the protein by measuring the change in this absorption as a function of denaturant concentration or temperature. A denaturant melt measures the free energy of unfolding as well as the protein's m value, or denaturant dependence. A temperature melt measures the denaturation temperature (Tm) of the protein. As for fluorescence spectroscopy, circular-dichroism spectroscopy can be combined with fast-mixing devices such as stopped flow to measure protein folding kinetics and to generate chevron plots.
=== Vibrational circular dichroism of proteins ===
The more recent developments of vibrational circular dichroism (VCD) techniques for proteins, currently involving Fourier transform (FT) instruments, provide powerful means for determining protein conformations in solution even for very large protein molecules. Such VCD studies of proteins can be combined with X-ray diffraction data for protein crystals, FT-IR data for protein solutions in heavy water (D2O), or quantum computations.
=== Protein nuclear magnetic resonance spectroscopy ===
Protein nuclear magnetic resonance (NMR) is able to collect protein structural data by inducing a magnetic field through samples of concentrated protein. In NMR, depending on the chemical environment, certain nuclei will absorb specific radio frequencies. Because protein structural changes operate on a time scale from ns to ms, NMR is especially equipped to study intermediate structures on timescales of ps to s. Some of the main techniques for studying protein structure and non-folding protein structural changes include COSY, TOCSY, HSQC, time relaxation (T1 & T2), and NOE. NOE is especially useful because magnetization transfer can be observed between spatially proximal hydrogens. Different NMR experiments have varying degrees of timescale sensitivity that are appropriate for different protein structural changes. NOE can pick up bond vibrations or side-chain rotations; however, it is not suited to capturing protein folding itself, which occurs on a longer timescale.
Because protein folding takes place at rates of about 50 to 3000 s−1, CPMG relaxation dispersion and chemical exchange saturation transfer have become some of the primary techniques for NMR analysis of folding. In addition, both techniques are used to uncover excited intermediate states in the protein folding landscape. To do this, CPMG relaxation dispersion takes advantage of the spin echo phenomenon. This technique exposes the target nuclei to a 90° pulse followed by one or more 180° pulses. As the nuclei refocus, a broad distribution indicates that the target nuclei are involved in an intermediate excited state. From relaxation dispersion plots, information can be extracted on the thermodynamics and kinetics of exchange between the excited and ground states. Saturation transfer measures changes in signal from the ground state as excited states become perturbed. It uses weak radio-frequency irradiation to saturate the excited state of a particular nucleus, which transfers its saturation to the ground state. This signal is amplified by decreasing the magnetization (and the signal) of the ground state.
The main limitations of NMR are that its resolution decreases for proteins larger than 25 kDa and that it is not as detailed as X-ray crystallography. Additionally, protein NMR analysis is quite difficult and can propose multiple solutions from the same NMR spectrum.
In a study focused on the folding of SOD1, a protein involved in amyotrophic lateral sclerosis, excited intermediates were studied with relaxation dispersion and saturation transfer. SOD1 had previously been tied to many disease-causing mutants which were assumed to be involved in protein aggregation, but the mechanism was still unknown. By using relaxation dispersion and saturation transfer experiments, many excited intermediate states undergoing misfolding were uncovered in the SOD1 mutants.
=== Dual-polarization interferometry ===
Dual polarisation interferometry is a surface-based technique for measuring the optical properties of molecular layers. When used to characterize protein folding, it measures the conformation by determining the overall size of a monolayer of the protein and its density in real time at sub-Angstrom resolution, although real-time measurement of the kinetics of protein folding is limited to processes that occur slower than ~10 Hz. Similar to circular dichroism, the stimulus for folding can be a denaturant or temperature.
=== Studies of folding with high time resolution ===
The study of protein folding has been greatly advanced in recent years by the development of fast, time-resolved techniques. Experimenters rapidly trigger the folding of a sample of unfolded protein and observe the resulting dynamics. Fast techniques in use include neutron scattering, ultrafast mixing of solutions, photochemical methods, and laser temperature jump spectroscopy. Among the many scientists who have contributed to the development of these techniques are Jeremy Cook, Heinrich Roder, Terry Oas, Harry Gray, Martin Gruebele, Brian Dyer, William Eaton, Sheena Radford, Chris Dobson, Alan Fersht, Bengt Nölting and Lars Konermann.
=== Proteolysis ===
Proteolysis is routinely used to probe the fraction unfolded under a wide range of solution conditions (e.g. fast parallel proteolysis (FASTpp)).
=== Single-molecule force spectroscopy ===
Single molecule techniques such as optical tweezers and AFM have been used to understand protein folding mechanisms of isolated proteins as well as proteins with chaperones. Optical tweezers have been used to stretch single protein molecules from their C- and N-termini and unfold them to allow study of the subsequent refolding. The technique allows one to measure folding rates at the single-molecule level; for example, optical tweezers have recently been applied to study folding and unfolding of proteins involved in blood coagulation. von Willebrand factor (vWF) is a protein with an essential role in the blood clot formation process. It was discovered – using single-molecule optical tweezers measurements – that calcium-bound vWF acts as a shear force sensor in the blood. Shear force leads to unfolding of the A2 domain of vWF, whose refolding rate is dramatically enhanced in the presence of calcium. Recently, it was also shown that the simple src SH3 domain accesses multiple unfolding pathways under force.
=== Biotin painting ===
Biotin painting enables condition-specific cellular snapshots of (un)folded proteins. Biotin 'painting' shows a bias towards predicted Intrinsically disordered proteins.
== Computational studies of protein folding ==
Computational studies of protein folding includes three main aspects related to the prediction of protein stability, kinetics, and structure. A 2013 review summarizes the available computational methods for protein folding.
=== Levinthal's paradox ===
In 1969, Cyrus Levinthal noted that, because of the very large number of degrees of freedom in an unfolded polypeptide chain, the molecule has an astronomical number of possible conformations. An estimate of 3^300 or 10^143 was made in one of his papers. Levinthal's paradox is a thought experiment based on the observation that if a protein were folded by sequential sampling of all possible conformations, it would take an astronomical amount of time to do so, even if the conformations were sampled at a rapid rate (on the nanosecond or picosecond scale). Based upon the observation that proteins fold much faster than this, Levinthal then proposed that a random conformational search does not occur, and the protein must, therefore, fold through a series of meta-stable intermediate states.
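A back-of-the-envelope restatement of Levinthal's estimate, using the numbers quoted above; the sampling rate of one conformation per picosecond is an assumed illustrative value:

```python
# Exhaustive conformational search with the estimate quoted above:
# 3 conformations per residue and 300 residues.  The sampling rate of one
# conformation per picosecond is an assumed illustrative value.
n_conformations = 3 ** 300
sampling_rate = 1e12            # conformations sampled per second
seconds_per_year = 3.15e7

time_years = n_conformations / sampling_rate / seconds_per_year
print(f"number of conformations ~ 10^{len(str(n_conformations)) - 1}")
print(f"exhaustive search time  ~ {time_years:.1e} years")
# Even at this rate the search time dwarfs the age of the universe, whereas
# real proteins fold on microsecond-to-second timescales.
```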
=== Energy landscape of protein folding ===
The configuration space of a protein during folding can be visualized as an energy landscape. According to Joseph Bryngelson and Peter Wolynes, proteins follow the principle of minimal frustration, meaning that naturally evolved proteins have optimized their folding energy landscapes, and that nature has chosen amino acid sequences so that the folded state of the protein is sufficiently stable. In addition, the acquisition of the folded state had to become a sufficiently fast process. Even though nature has reduced the level of frustration in proteins, some degree of it remains up to now as can be observed in the presence of local minima in the energy landscape of proteins.
A consequence of these evolutionarily selected sequences is that proteins are generally thought to have globally "funneled energy landscapes" (a term coined by José Onuchic) that are largely directed toward the native state. This "folding funnel" landscape allows the protein to fold to the native state through any of a large number of pathways and intermediates, rather than being restricted to a single mechanism. The theory is supported by both computational simulations of model proteins and experimental studies, and it has been used to improve methods for protein structure prediction and design. The description of protein folding by the leveling free-energy landscape is also consistent with the 2nd law of thermodynamics. Physically, thinking of landscapes in terms of visualizable potential or total energy surfaces simply with maxima, saddle points, minima, and funnels, rather like geographic landscapes, is perhaps a little misleading. The relevant description is really a high-dimensional phase space in which manifolds might take a variety of more complicated topological forms.
The unfolded polypeptide chain begins at the top of the funnel where it may assume the largest number of unfolded variations and is in its highest energy state. Energy landscapes such as these indicate that there are a large number of initial possibilities, but only a single native state is possible; however, it does not reveal the numerous folding pathways that are possible. A different molecule of the same exact protein may be able to follow marginally different folding pathways, seeking different lower energy intermediates, as long as the same native structure is reached. Different pathways may have different frequencies of utilization depending on the thermodynamic favorability of each pathway. This means that if one pathway is found to be more thermodynamically favorable than another, it is likely to be used more frequently in the pursuit of the native structure. As the protein begins to fold and assume its various conformations, it always seeks a more thermodynamically favorable structure than before and thus continues through the energy funnel. Formation of secondary structures is a strong indication of increased stability within the protein, and only one combination of secondary structures assumed by the polypeptide backbone will have the lowest energy and therefore be present in the native state of the protein. Among the first structures to form once the polypeptide begins to fold are alpha helices and beta turns, where alpha helices can form in as little as 100 nanoseconds and beta turns in 1 microsecond.
There exists a saddle point in the energy funnel landscape where the transition state for a particular protein is found. The transition state in the energy funnel diagram is the conformation that must be assumed by every molecule of that protein if the protein wishes to finally assume the native structure. No protein may assume the native structure without first passing through the transition state. The transition state can be referred to as a variant or premature form of the native state rather than just another intermediary step. The folding of the transition state is shown to be rate-determining, and even though it exists in a higher energy state than the native fold, it greatly resembles the native structure. Within the transition state, there exists a nucleus around which the protein is able to fold, formed by a process referred to as "nucleation condensation" where the structure begins to collapse onto the nucleus.
=== Modeling of protein folding ===
De novo or ab initio techniques for computational protein structure prediction can be used for simulating various aspects of protein folding. Molecular dynamics (MD) was used in simulations of protein folding and dynamics in silico. The first equilibrium folding simulations were done using an implicit solvent model and umbrella sampling. Because of computational cost, ab initio MD folding simulations with explicit water are limited to peptides and small proteins. MD simulations of larger proteins remain restricted to dynamics of the experimental structure or its high-temperature unfolding. Long-time folding processes (beyond about 1 millisecond), like folding of larger proteins (>150 residues), can be accessed using coarse-grained models.
Several large-scale computational projects, such as Rosetta@home, Folding@home and Foldit, target protein folding.
Long continuous-trajectory simulations have been performed on Anton, a massively parallel supercomputer designed and built around custom ASICs and interconnects by D. E. Shaw Research. The longest published result of a simulation performed using Anton as of 2011 was a 2.936 millisecond simulation of NTL9 at 355 K. Such simulations are currently able to unfold and refold small proteins (<150 amino acid residues) in equilibrium and predict how mutations affect folding kinetics and stability.
In 2020 a team of researchers that used AlphaFold, an artificial intelligence (AI) protein structure prediction program developed by DeepMind, placed first in CASP, a long-standing structure prediction contest. The team achieved a level of accuracy much higher than any other group. It scored above 90% for around two-thirds of the proteins in CASP's global distance test (GDT), a test that measures the degree of similarity between the structure predicted by a computational program and the empirical structure determined experimentally in a lab. A score of 100 is considered a complete match, within the distance cutoff used for calculating GDT.
AlphaFold's protein structure prediction results at CASP were described as "transformational" and "astounding". Some researchers noted that the accuracy is not high enough for a third of its predictions, and that it does not reveal the physical mechanism of protein folding for the protein folding problem to be considered solved. Nevertheless, it is considered a significant achievement in computational biology and great progress towards a decades-old grand challenge of biology, predicting the structure of proteins.
== See also ==
== References ==
== External links ==
Human Proteome Folding Project | Wikipedia/Protein_folding |
Entropy production (or generation) is the amount of entropy which is produced during a process, for instance a heat process; it can be used to evaluate the efficiency of the process.
== Short history ==
Entropy is produced in irreversible processes. The importance of avoiding irreversible processes (hence reducing the entropy production) was recognized as early as 1824 by Carnot. In 1865 Rudolf Clausius expanded his previous work from 1854 on the concept of "unkompensierte Verwandlungen" (uncompensated transformations), which, in our modern nomenclature, would be called the entropy production. In the same article in which he introduced the name entropy, Clausius gives the expression for the entropy production for a cyclical process in a closed system, which he denotes by N, in equation (71) which reads
{\displaystyle N=S-S_{0}-\int {\frac {dQ}{T}}.}
Here S is the entropy in the final state and S0 the entropy in the initial state; S0-S is the entropy difference for the backwards part of the process. The integral is to be taken from the initial state to the final state, giving the entropy difference for the forwards part of the process. From the context, it is clear that N = 0 if the process is reversible and N > 0 in case of an irreversible process.
== First and second law ==
The laws of thermodynamics apply to well-defined systems. Fig. 1 is a general representation of a thermodynamic system. We consider systems which, in general, are inhomogeneous. Heat and mass are transferred across the boundaries (nonadiabatic, open systems), and the boundaries are moving (usually through pistons). In our formulation we assume that heat and mass transfer and volume changes take place only separately at well-defined regions of the system boundary. The expressions given here are not the most general formulations of the first and second law; e.g., kinetic energy and potential energy terms are missing and exchange of matter by diffusion is excluded.
The rate of entropy production, denoted by {\displaystyle {\dot {S}}_{\text{i}}}, is a key element of the second law of thermodynamics for open inhomogeneous systems, which reads
{\displaystyle {\frac {\mathrm {d} S}{\mathrm {d} t}}=\sum _{k}{\frac {{\dot {Q}}_{k}}{T_{k}}}+\sum _{k}{\dot {S}}_{k}+\sum _{k}{\dot {S}}_{{\text{i}}k}{\text{ with }}{\dot {S}}_{{\text{i}}k}\geq 0.}
Here S is the entropy of the system; Tk is the temperature at which the heat enters the system at heat flow rate {\displaystyle {\dot {Q}}_{k}}; {\displaystyle {\dot {S}}_{k}={\dot {n}}_{k}S_{{\text{m}}k}={\dot {m}}_{k}s_{k}} represents the entropy flow into the system at position k, due to matter flowing into the system ({\displaystyle {\dot {n}}_{k},{\dot {m}}_{k}} are the molar flow rate and mass flow rate and Smk and sk are the molar entropy (i.e. entropy per unit amount of substance) and specific entropy (i.e. entropy per unit mass) of the matter, flowing into the system, respectively); {\displaystyle {\dot {S}}_{{\text{i}}k}} represents the entropy production rates due to internal processes. The subscript 'i' in {\displaystyle {\dot {S}}_{{\text{i}}k}} refers to the fact that the entropy is produced due to irreversible processes. The entropy-production rate of every process in nature is always positive or zero. This is an essential aspect of the second law.
The Σ's indicate the algebraic sum of the respective contributions if there are more heat flows, matter flows, and internal processes.
In order to demonstrate the impact of the second law, and the role of entropy production, it has to be combined with the first law which reads
{\displaystyle {\frac {\mathrm {d} U}{\mathrm {d} t}}=\sum _{k}{\dot {Q}}_{k}+\sum _{k}{\dot {H}}_{k}-\sum _{k}p_{k}{\frac {\mathrm {d} V_{k}}{\mathrm {d} t}}+P,}
with U the internal energy of the system; {\displaystyle {\dot {H}}_{k}={\dot {n}}_{k}H_{{\text{m}}k}={\dot {m}}_{k}h_{k}} the enthalpy flows into the system due to the matter that flows into the system (Hmk its molar enthalpy, hk the specific enthalpy (i.e. enthalpy per unit mass)), and dVk/dt are the rates of change of the volume of the system due to a moving boundary at position k while pk is the pressure behind that boundary; P represents all other forms of power application (such as electrical).
The first and second law have been formulated in terms of time derivatives of U and S rather than in terms of total differentials dU and dS where it is tacitly assumed that dt > 0. So, the formulation in terms of time derivatives is more elegant. An even bigger advantage of this formulation is, however, that it emphasizes that heat flow rate and power are the basic thermodynamic properties and that heat and work are derived quantities being the time integrals of the heat flow rate and the power respectively.
== Examples of irreversible processes ==
Entropy is produced in irreversible processes. Some important irreversible processes are:
heat flow through a thermal resistor
fluid flow through a flow resistance such as in the Joule expansion or the Joule–Thomson effect
heat transfer
Joule heating
friction between solid surfaces
fluid viscosity within a system.
The expression for the rate of entropy production in the first two cases will be derived in separate sections.
== Performance of heat engines and refrigerators ==
Most heat engines and refrigerators are closed cyclic machines. In the steady state the internal energy and the entropy of the machines after one cycle are the same as at the start of the cycle. Hence, on average, dU/dt = 0 and dS/dt = 0 since U and S are functions of state. Furthermore, they are closed systems ({\displaystyle {\dot {n}}=0}) and the volume is fixed (dV/dt = 0). This leads to a significant simplification of the first and second law:
{\displaystyle 0=\sum _{k}{\dot {Q}}_{k}+P}
and
{\displaystyle 0=\sum _{k}{\frac {{\dot {Q}}_{k}}{T_{k}}}+{\dot {S}}_{\text{i}}.}
The summation is over the (two) places where heat is added or removed.
=== Engines ===
For a heat engine (Fig. 2a) the first and second law take the form
{\displaystyle 0={\dot {Q}}_{\text{H}}-{\dot {Q}}_{\text{a}}-P}
and
{\displaystyle 0={\frac {{\dot {Q}}_{\text{H}}}{T_{\text{H}}}}-{\frac {{\dot {Q}}_{\text{a}}}{T_{\text{a}}}}+{\dot {S}}_{\text{i}}.}
Here {\displaystyle {\dot {Q}}_{\text{H}}} is the heat supplied at the high temperature TH, {\displaystyle {\dot {Q}}_{\text{a}}} is the heat removed at ambient temperature Ta, and P is the power delivered by the engine. Eliminating {\displaystyle {\dot {Q}}_{\text{a}}} gives
{\displaystyle P={\frac {T_{\text{H}}-T_{\text{a}}}{T_{\text{H}}}}{\dot {Q}}_{\text{H}}-T_{\text{a}}{\dot {S}}_{\text{i}}.}
The efficiency is defined by
{\displaystyle \eta ={\frac {P}{{\dot {Q}}_{\text{H}}}}.}
If {\displaystyle {\dot {S}}_{\text{i}}=0} the performance of the engine is at its maximum and the efficiency is equal to the Carnot efficiency
{\displaystyle \eta _{\text{C}}={\frac {T_{\text{H}}-T_{\text{a}}}{T_{\text{H}}}}.}
=== Refrigerators ===
For refrigerators (Fig. 2b) the first and second law read
{\displaystyle 0={\dot {Q}}_{\text{L}}-{\dot {Q}}_{\text{a}}+P}
and
{\displaystyle 0={\frac {{\dot {Q}}_{\text{L}}}{T_{\text{L}}}}-{\frac {{\dot {Q}}_{\text{a}}}{T_{\text{a}}}}+{\dot {S}}_{\text{i}}.}
Here P is the power supplied to produce the cooling power {\displaystyle {\dot {Q}}_{\text{L}}} at the low temperature TL. Eliminating {\displaystyle {\dot {Q}}_{\text{a}}} now gives
{\displaystyle {\dot {Q}}_{\text{L}}={\frac {T_{\text{L}}}{T_{\text{a}}-T_{\text{L}}}}(P-T_{\text{a}}{\dot {S}}_{\text{i}}).}
The coefficient of performance of refrigerators is defined by
{\displaystyle \xi ={\frac {{\dot {Q}}_{\text{L}}}{P}}.}
If {\displaystyle {\dot {S}}_{\text{i}}=0} the performance of the cooler is at its maximum. The COP is then given by the Carnot coefficient of performance
{\displaystyle \xi _{\text{C}}={\frac {T_{\text{L}}}{T_{\text{a}}-T_{\text{L}}}}.}
=== Power dissipation ===
In both cases we find a contribution {\displaystyle T_{\text{a}}{\dot {S}}_{\text{i}}} which reduces the system performance. This product of ambient temperature and the (average) entropy production rate {\displaystyle P_{\text{diss}}=T_{\text{a}}{\dot {S}}_{\text{i}}} is called the dissipated power.
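A small numerical illustration of these expressions for a heat engine; the temperatures, heat flow rate and entropy production rate below are arbitrary example values:

```python
# Illustrative numbers for a heat engine operating between T_H and ambient T_a,
# following the expressions derived above:
#   P = (1 - T_a/T_H) * Qdot_H - T_a * Sdot_i   and   P_diss = T_a * Sdot_i.
T_H = 600.0        # hot-reservoir temperature, K
T_a = 300.0        # ambient temperature, K
Qdot_H = 1000.0    # heat supplied per unit time, W
Sdot_i = 0.5       # entropy production rate, W/K

eta_carnot = (T_H - T_a) / T_H
P = eta_carnot * Qdot_H - T_a * Sdot_i     # power actually delivered, W
P_diss = T_a * Sdot_i                      # dissipated power, W

print(f"Carnot efficiency : {eta_carnot:.2f}")
print(f"Delivered power   : {P:.0f} W  (reversible limit: {eta_carnot * Qdot_H:.0f} W)")
print(f"Dissipated power  : {P_diss:.0f} W")
print(f"Actual efficiency : {P / Qdot_H:.2f}")
```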
== Equivalence with other formulations ==
It is interesting to investigate how the above mathematical formulation of the second law relates to other well-known formulations of the second law.
We first look at a heat engine, assuming that {\displaystyle {\dot {Q}}_{\text{a}}=0}. In other words: the heat flow rate {\displaystyle {\dot {Q}}_{\text{H}}} is completely converted into power. In this case the second law would reduce to
{\displaystyle 0={\frac {{\dot {Q}}_{\text{H}}}{T_{\text{H}}}}+{\dot {S}}_{\text{i}}.}
Since {\displaystyle {\dot {Q}}_{\text{H}}\geq 0} and {\displaystyle T_{\text{H}}>0} this would result in {\displaystyle {\dot {S}}_{\text{i}}\leq 0} which violates the condition that the entropy production is always positive. Hence: No process is possible in which the sole result is the absorption of heat from a reservoir and its complete conversion into work. This is the Kelvin statement of the second law.
Now look at the case of the refrigerator and assume that the input power is zero. In other words: heat is transported from a low temperature to a high temperature without doing work on the system. The first law with P = 0 would give
{\displaystyle {\dot {Q}}_{\text{L}}={\dot {Q}}_{\text{a}}}
and the second law then yields
{\displaystyle 0={\frac {{\dot {Q}}_{\text{L}}}{T_{\text{L}}}}-{\frac {{\dot {Q}}_{\text{L}}}{T_{\text{a}}}}+{\dot {S}}_{\text{i}}}
or
{\displaystyle {\dot {S}}_{\text{i}}={\dot {Q}}_{\text{L}}\left({\frac {1}{T_{\text{a}}}}-{\frac {1}{T_{\text{L}}}}\right).}
Since {\displaystyle {\dot {Q}}_{\text{L}}\geq 0} and {\displaystyle T_{\text{a}}>T_{\text{L}}} this would result in {\displaystyle {\dot {S}}_{\text{i}}\leq 0} which again violates the condition that the entropy production is always positive. Hence: No process is possible whose sole result is the transfer of heat from a body of lower temperature to a body of higher temperature. This is the Clausius statement of the second law.
== Expressions for the entropy production ==
=== Heat flow ===
In case of a heat flow rate {\displaystyle {\dot {Q}}} from T1 to T2 (with {\displaystyle T_{1}\geq T_{2}}) the rate of entropy production is given by
{\displaystyle {\dot {S}}_{\text{i}}={\dot {Q}}\left({\frac {1}{T_{2}}}-{\frac {1}{T_{1}}}\right).}
If the heat flow is in a bar with length L, cross-sectional area A, and thermal conductivity κ, and the temperature difference is small
{\displaystyle {\dot {Q}}=\kappa {\frac {A}{L}}(T_{1}-T_{2})}
the entropy production rate is
{\displaystyle {\dot {S}}_{\text{i}}=\kappa {\frac {A}{L}}{\frac {(T_{1}-T_{2})^{2}}{T_{1}T_{2}}}.}
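A numerical illustration of the two expressions above for heat conduction through a bar; the material properties and geometry are arbitrary example values (thermal conductivity roughly that of copper):

```python
# Illustrative values: heat conduction through a bar between two reservoirs.
kappa = 400.0            # thermal conductivity (roughly copper), W/(m K)
A = 1e-4                 # cross-sectional area, m^2
L = 0.5                  # length of the bar, m
T1, T2 = 310.0, 290.0    # end temperatures, K

Qdot = kappa * A / L * (T1 - T2)                      # heat flow rate, W
Sdot_i = kappa * A / L * (T1 - T2) ** 2 / (T1 * T2)   # entropy production, W/K

print(f"heat flow rate          : {Qdot:.2f} W")
print(f"entropy production rate : {Sdot_i * 1e3:.3f} mW/K")

# Same result follows from the general expression  Qdot * (1/T2 - 1/T1).
assert abs(Sdot_i - Qdot * (1 / T2 - 1 / T1)) < 1e-12
```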
=== Flow of mass ===
In case of a volume flow rate {\displaystyle {\dot {V}}} from a pressure p1 to p2
{\displaystyle {\dot {S}}_{\text{i}}=-\int _{p_{1}}^{p_{2}}{\frac {\dot {V}}{T}}\mathrm {d} p.}
For small pressure drops and defining the flow conductance C by
{\displaystyle {\dot {V}}=C(p_{1}-p_{2})}
we get
{\displaystyle {\dot {S}}_{\text{i}}=C{\frac {(p_{1}-p_{2})^{2}}{T}}.}
The dependences of
{\displaystyle {\dot {S}}_{\text{i}}}
on T1 − T2 and on p1 − p2 are quadratic.
This is typical for expressions of the entropy production rates in general. They guarantee that the entropy production is positive.
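The quadratic dependence can be checked numerically. The Python sketch below uses the mass-flow expression with an assumed flow conductance and temperature, and shows that doubling the pressure drop quadruples the entropy production rate.

# Illustrative sketch with assumed values: quadratic dependence of the entropy
# production rate on the pressure drop, S_i_dot = C*(p1 - p2)**2/T.
C = 1.0e-8                     # flow conductance, m^3/(s Pa) (assumed)
T = 300.0                      # temperature, K (assumed)

for dp in (1.0e4, 2.0e4):      # pressure drops p1 - p2, Pa (assumed)
    S_i_dot = C * dp**2 / T
    print(f"dp = {dp:.0f} Pa  ->  S_i_dot = {S_i_dot:.2e} W/K")
# The second value is four times the first, as expected for a quadratic form.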
=== Entropy of mixing ===
In this section we will calculate the entropy of mixing when two ideal gases diffuse into each other. Consider a volume Vt divided into two volumes Va and Vb so that Vt = Va + Vb. The volume Va contains amount of substance na of an ideal gas a and Vb contains amount of substance nb of gas b. The total amount of substance is nt = na + nb. The temperature and pressure in the two volumes are the same. The entropy at the start is given by
{\displaystyle S_{\text{t1}}=S_{\text{a1}}+S_{\text{b1}}.}
When the division between the two gases is removed, the two gases expand, comparable to a Joule expansion. In the final state the temperature is the same as initially but the two gases now both take the volume Vt. The entropy of an amount of substance n of an ideal gas is
{\displaystyle S=nC_{\text{V}}\ln {\frac {T}{T_{0}}}+nR\ln {\frac {V}{V_{0}}}}
where CV is the molar heat capacity at constant volume and R is the molar gas constant.
The system is an adiabatic closed system, so the entropy increase during the mixing of the two gases is equal to the entropy production. It is given by
{\displaystyle S_{\Delta }=S_{\text{t2}}-S_{\text{t1}}.}
As the initial and final temperature are the same, the temperature terms cancel, leaving only the volume terms. The result is
{\displaystyle S_{\Delta }=n_{\text{a}}R\ln {\frac {V_{\text{t}}}{V_{\text{a}}}}+n_{\text{b}}R\ln {\frac {V_{\text{t}}}{V_{\text{b}}}}.}
Introducing the concentration x = na/nt = Va/Vt we arrive at the well-known expression
{\displaystyle S_{\Delta }=-n_{\text{t}}R[x\ln x+(1-x)\ln(1-x)].}
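As a numerical illustration, the Python sketch below evaluates this expression for assumed amounts of the two gases; for equal amounts (x = 1/2) the result is nt R ln 2, the maximum possible mixing entropy.

# Illustrative sketch with assumed amounts: entropy of mixing of two ideal gases,
# S_Delta = -n_t*R*(x*ln(x) + (1 - x)*ln(1 - x)).
import math

R = 8.314                # molar gas constant, J/(mol K)
n_a, n_b = 1.0, 1.0      # amounts of substance, mol (assumed)
n_t = n_a + n_b
x = n_a / n_t

S_mix = -n_t * R * (x * math.log(x) + (1.0 - x) * math.log(1.0 - x))
print(f"x = {x:.2f}, S_Delta = {S_mix:.2f} J/K")   # 2*R*ln 2, about 11.5 J/K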
=== Joule expansion ===
The Joule expansion is similar to the mixing described above. It takes place in an adiabatic system consisting of a gas and two rigid vessels a and b of equal volume, connected by a valve. Initially, the valve is closed. Vessel a contains the gas while the other vessel b is empty. When the valve is opened, the gas flows from vessel a into b until the pressures in the two vessels are equal. The volume, taken by the gas, is doubled while the internal energy of the system is constant (adiabatic and no work done). Assuming that the gas is ideal, the molar internal energy is given by Um = CVT. As CV is constant, constant U means constant T. The molar entropy of an ideal gas, as function of the molar volume Vm and T, is given by
{\displaystyle S_{\text{m}}=C_{\text{V}}\ln {\frac {T}{T_{0}}}+R\ln {\frac {V_{\text{m}}}{V_{0}}}.}
The system consisting of the two vessels and the gas is closed and adiabatic, so the entropy production during the process is equal to the increase of the entropy of the gas. So, doubling the volume with T constant gives that the molar entropy produced is
{\displaystyle S_{\text{mi}}=R\ln 2.}
=== Microscopic interpretation ===
The Joule expansion provides an opportunity to explain the entropy production in statistical mechanical (i.e., microscopic) terms. At the expansion, the volume that the gas can occupy is doubled. This means that, for every molecule there are now two possibilities: it can be placed in container a or b. If the gas has amount of substance n, the number of molecules is equal to n⋅NA, where NA is the Avogadro constant. The number of microscopic possibilities increases by a factor of 2 per molecule due to the doubling of volume, so in total the factor is 2n⋅NA. Using the well-known Boltzmann expression for the entropy
{\displaystyle S=k\ln \Omega ,}
where k is the Boltzmann constant and Ω is the number of microscopic possibilities to realize the macroscopic state. This gives a change in molar entropy of
{\displaystyle S_{{\text{m}}\Delta }=S_{\Delta }/n=k\ln(2^{n\cdot N_{\text{A}}})/n=kN_{\text{A}}\ln 2=R\ln 2.}
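Numerically, the macroscopic and microscopic routes give the same molar entropy production for the doubling of the volume:
{\displaystyle S_{\text{mi}}=R\ln 2\approx 8.314\times 0.693\approx 5.76\ \mathrm {J\,mol^{-1}\,K^{-1}} .}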
So, in an irreversible process, the number of microscopic possibilities to realize the macroscopic state is increased by a certain factor.
== Basic inequalities and stability conditions ==
In this section we derive the basic inequalities and stability conditions for closed systems. For closed systems the first law reduces to
{\displaystyle {\frac {\mathrm {d} U}{\mathrm {d} t}}={\dot {Q}}-p{\frac {\mathrm {d} V}{\mathrm {d} t}}+P.}
The second law we write as
{\displaystyle {\frac {\mathrm {d} S}{\mathrm {d} t}}-{\frac {\dot {Q}}{T}}\geq 0.}
For adiabatic systems
{\displaystyle {\dot {Q}}=0}
so dS/dt ≥ 0. In other words: the entropy of adiabatic systems cannot decrease. In equilibrium the entropy is at its maximum. Isolated systems are a special case of adiabatic systems, so this statement is also valid for isolated systems.
Now consider systems with constant temperature and volume. In most cases T is the temperature of the surroundings with which the system is in good thermal contact. Since V is constant the first law gives
{\displaystyle {\dot {Q}}=\mathrm {d} U/\mathrm {d} t-P}
. Substitution in the second law, and using that T is constant, gives
{\displaystyle {\frac {\mathrm {d} (TS)}{\mathrm {d} t}}-{\frac {\mathrm {d} U}{\mathrm {d} t}}+P\geq 0.}
With the Helmholtz free energy, defined as
{\displaystyle F=U-TS,}
we get
{\displaystyle {\frac {\mathrm {d} F}{\mathrm {d} t}}-P\leq 0.}
If P = 0 this is the mathematical formulation of the general property that the free energy of systems with fixed temperature and volume tends to a minimum. The expression can be integrated from the initial state i to the final state f resulting in
{\displaystyle W_{\text{S}}\leq F_{\text{i}}-F_{\text{f}}}
where WS is the work done by the system. If the process inside the system is completely reversible the equality sign holds. Hence the maximum work, that can be extracted from the system, is equal to the free energy of the initial state minus the free energy of the final state.
Finally, we consider systems with constant temperature and pressure and take P = 0. As p is constant, the first law gives
{\displaystyle {\frac {\mathrm {d} U}{\mathrm {d} t}}={\dot {Q}}-{\frac {\mathrm {d} (pV)}{\mathrm {d} t}}.}
Combining with the second law, and using that T is constant, gives
{\displaystyle {\frac {\mathrm {d} (TS)}{\mathrm {d} t}}-{\frac {\mathrm {d} U}{\mathrm {d} t}}-{\frac {\mathrm {d} (pV)}{\mathrm {d} t}}\geq 0.}
With the Gibbs free energy, defined as
{\displaystyle G=U+pV-TS,}
we get
{\displaystyle {\frac {\mathrm {d} G}{\mathrm {d} t}}\leq 0.}
== Homogeneous systems ==
In homogeneous systems the temperature and pressure are well-defined and all internal processes are reversible. Hence
{\displaystyle {\dot {S}}_{\text{i}}=0}
. As a result, the second law, multiplied by T, reduces to
{\displaystyle T{\frac {\mathrm {d} S}{\mathrm {d} t}}={\dot {Q}}+{\dot {n}}TS_{\text{m}}.}
With P = 0 the first law becomes
{\displaystyle {\frac {\mathrm {d} U}{\mathrm {d} t}}={\dot {Q}}+{\dot {n}}H_{\text{m}}-p{\frac {\mathrm {d} V}{\mathrm {d} t}}.}
Eliminating
{\displaystyle {\dot {Q}}}
and multiplying by dt gives
{\displaystyle \mathrm {d} U=T\mathrm {d} S-p\mathrm {d} V+(H_{\text{m}}-TS_{\text{m}})\mathrm {d} n.}
Since
{\displaystyle H_{\text{m}}-TS_{\text{m}}=G_{\text{m}}=\mu }
with Gm the molar Gibbs free energy and μ the molar chemical potential we obtain the well-known result
{\displaystyle \mathrm {d} U=T\mathrm {d} S-p\mathrm {d} V+\mu \mathrm {d} n.}
== Entropy production in stochastic processes ==
Since physical processes can be described by stochastic processes, such as Markov chains and diffusion processes, entropy production can be defined mathematically in such processes.
For a continuous-time Markov chain with instantaneous probability distribution
{\displaystyle p_{i}(t)}
and transition rate
{\displaystyle q_{ij}}
, the instantaneous entropy production rate is
{\displaystyle e_{p}(t)={\frac {1}{2}}\sum _{i,j}[p_{i}(t)q_{ij}-p_{j}(t)q_{ji}]\log {\frac {p_{i}(t)q_{ij}}{p_{j}(t)q_{ji}}}.}
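A minimal Python sketch of this sum for a three-state chain is given below; the transition rates are assumed values chosen to break detailed balance by biasing the chain around a cycle, and the distribution used is the stationary one for those rates (uniform in this example).

# Illustrative sketch with assumed rates: entropy production rate of a
# continuous-time Markov chain,
# e_p = (1/2) * sum_ij [p_i*q_ij - p_j*q_ji] * log(p_i*q_ij/(p_j*q_ji)).
import numpy as np

q = np.array([[0.0, 2.0, 1.0],     # assumed transition rates q[i, j] for i -> j;
              [1.0, 0.0, 2.0],     # the asymmetry drives a probability current
              [2.0, 1.0, 0.0]])    # around the cycle 0 -> 1 -> 2 -> 0

def entropy_production_rate(p, q):
    ep = 0.0
    for i in range(len(p)):
        for j in range(len(p)):
            if i != j and q[i, j] > 0.0 and q[j, i] > 0.0:
                flux = p[i] * q[i, j] - p[j] * q[j, i]
                ep += 0.5 * flux * np.log((p[i] * q[i, j]) / (p[j] * q[j, i]))
    return ep

p = np.ones(3) / 3.0               # stationary distribution of these rates
print(f"e_p = {entropy_production_rate(p, q):.3f}")   # about 0.69 nats per unit time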
The long-time behavior of entropy production is preserved after a proper lifting of the process. This approach provides a dynamic explanation for the Kelvin statement and the Clausius statement of the second law of thermodynamics.
Entropy production in diffusive-reactive systems has also been studied, with interesting results emerging from diffusion, cross diffusion and reactions.
A multivariate Ornstein-Uhlenbeck process is a continuous-time Gauss-Markov diffusion process defined by {\displaystyle N} coupled linear Langevin equations of the form
{\displaystyle {\frac {dx_{m}(t)}{dt}}=-\sum _{n}B_{mn}x_{n}(t)+\eta _{m}(t).}
{\displaystyle (m,n=1,\ldots ,N)}
, i.e., in vector and matrix notations,
{\displaystyle {\frac {d\mathbf {x} (t)}{dt}}=-\mathbf {B} \mathbf {x} (t)+{\boldsymbol {\eta }}(t).}
The
{\displaystyle \eta _{m}(t)}
are Gaussian white noises such that
{\displaystyle \langle \eta _{m}(t)\eta _{n}(t')\rangle =2D_{mn}\delta (t-t'),}
i.e.,
{\displaystyle \langle {\boldsymbol {\eta }}(t){\boldsymbol {\eta }}^{T}(t')\rangle =2\mathbf {D} \delta (t-t').}
The stationary covariance matrix reads
{\displaystyle \mathbf {S} =\mathbf {B} ^{-1}\mathbf {D} =\mathbf {D} \left(\mathbf {B} ^{\mathrm {T} }\right)^{-1}.}
We can parametrize the matrices {\displaystyle \mathbf {B} }, {\displaystyle \mathbf {D} }, and {\displaystyle \mathbf {S} } by setting
{\displaystyle \mathbf {L} =\mathbf {B} \mathbf {S} =\mathbf {D} +\mathbf {Q} ,\quad \mathbf {L} ^{\mathrm {T} }=\mathbf {S} \mathbf {B} ^{\mathrm {T} }=\mathbf {D} -\mathbf {Q} .}
Finally, the entropy production reads
{\displaystyle e_{p}=\mathrm {tr} (\mathbf {B} ^{\mathrm {T} }\mathbf {D} ^{-1}\mathbf {Q} )=-\mathrm {tr} (\mathbf {D} ^{-1}\mathbf {B} \mathbf {Q} ).}
A recent application of this formula is demonstrated in neuroscience, where it has been shown that entropy production of multivariate Ornstein-Uhlenbeck processes correlates with consciousness levels in the human brain.
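A minimal Python sketch of the Ornstein-Uhlenbeck result is shown below. The drift and diffusion matrices are assumed values; the stationary covariance is obtained here from the relation B S + S B^T = 2 D implied by the parametrization L = B S = D + Q above, and Q is then the antisymmetric part of B S.

# Illustrative sketch with assumed matrices: entropy production of a stationary
# two-dimensional Ornstein-Uhlenbeck process, e_p = tr(B^T D^-1 Q).
import numpy as np

B = np.array([[2.0, 1.0],
              [-1.0, 3.0]])   # assumed drift matrix (eigenvalues with positive real part)
D = np.array([[1.0, 0.2],
              [0.2, 0.5]])    # assumed diffusion matrix (symmetric, positive definite)

# Solve B S + S B^T = 2 D for the stationary covariance S (column-stacked unknowns).
n = B.shape[0]
I = np.eye(n)
M = np.kron(I, B) + np.kron(B, I)
S = np.linalg.solve(M, (2.0 * D).flatten(order="F")).reshape((n, n), order="F")

Q = B @ S - D                 # antisymmetric part of B S, so that L = B S = D + Q
e_p = np.trace(B.T @ np.linalg.inv(D) @ Q)

print("Q antisymmetric:", np.allclose(Q, -Q.T))
print(f"e_p = {e_p:.4f}")     # zero when detailed balance holds (Q = 0)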
== See also ==
Thermodynamics
First law of thermodynamics
Second law of thermodynamics
Irreversible process
Non-equilibrium thermodynamics
High entropy alloys
General equation of heat transfer
== References ==
== Further reading ==
Crooks, G. (1999). "Entropy production fluctuation theorem and the non-equilibrium work relation for free energy differences". Physical Review E (Free PDF). 60 (3): 2721–2726. arXiv:cond-mat/9901352. Bibcode:1999PhRvE..60.2721C. doi:10.1103/PhysRevE.60.2721. PMID 11970075. S2CID 1813818.
Seifert, Udo (2005). "Entropy Production along a Stochastic Trajectory and an Integral Fluctuation Theorem". Physical Review Letters (Free PDF). 95 (4): 040602. arXiv:cond-mat/0503686. Bibcode:2005PhRvL..95d0602S. doi:10.1103/PhysRevLett.95.040602. PMID 16090792. S2CID 31706268.
Energy dissipation and entropy production extremal principles are ideas developed within non-equilibrium thermodynamics that attempt to predict the likely steady states and dynamical structures that a physical system might show. The search for extremum principles for non-equilibrium thermodynamics follows their successful use in other branches of physics. According to Kondepudi (2008), and to Grandy (2008), there is no general rule that provides an extremum principle that governs the evolution of a far-from-equilibrium system to a steady state. According to Glansdorff and Prigogine (1971, page 16), irreversible processes usually are not governed by global extremal principles because description of their evolution requires differential equations which are not self-adjoint, but local extremal principles can be used for local solutions. Lebon Jou and Casas-Vásquez (2008) state that "In non-equilibrium ... it is generally not possible to construct thermodynamic potentials depending on the whole set of variables". Šilhavý (1997) offers the opinion that "... the extremum principles of thermodynamics ... do not have any counterpart for [non-equilibrium] steady states (despite many claims in the literature)." It follows that any general extremal principle for a non-equilibrium problem will need to refer in some detail to the constraints that are specific for the structure of the system considered in the problem.
== Fluctuations, entropy, 'thermodynamics forces', and reproducible dynamical structure ==
Apparent 'fluctuations', which appear to arise when initial conditions are inexactly specified, are the drivers of the formation of non-equilibrium dynamical structures. There is no special force of nature involved in the generation of such fluctuations. Exact specification of initial conditions would require statements of the positions and velocities of all particles in the system, obviously not a remotely practical possibility for a macroscopic system. This is the nature of thermodynamic fluctuations. They cannot be predicted in particular by the scientist, but they are determined by the laws of nature and they are the singular causes of the natural development of dynamical structure.
It is pointed out by W.T. Grandy Jr that entropy, though it may be defined for a non-equilibrium system, is, when strictly considered, only a macroscopic quantity that refers to the whole system, and is not a dynamical variable and in general does not act as a local potential that describes local physical forces. Under special circumstances it can metaphorically be thought of as if the thermal variables behaved like local physical forces. The approximation that constitutes classical irreversible thermodynamics is built on this metaphoric thinking.
As indicated by the quotation marks of Onsager (1931), such a metaphorical but not categorically mechanical force, the thermal "force",
{\displaystyle X_{th}}
, 'drives' the conduction of heat. For this so-called "thermodynamic force", we can write
{\displaystyle X_{th}=-{\frac {1}{T}}\nabla T}.
Actually this thermal "thermodynamic force" is a manifestation of the degree of inexact specification of the microscopic initial conditions for the system, expressed in the thermodynamic variable known as temperature,
{\displaystyle T}
. Temperature is only one example, and all the thermodynamic macroscopic variables constitute inexact specifications of the initial conditions, and have their respective "thermodynamic forces". These inexactitudes of specification are the source of the apparent fluctuations that drive the generation of dynamical structure, of the very precise but still less than perfect reproducibility of non-equilibrium experiments, and of the place of entropy in thermodynamics. If one did not know of such inexactitude of specification, one might find the origin of the fluctuations mysterious. What is meant here by "inexactitude of specification" is not that the mean values of the macroscopic variables are inexactly specified, but that the use of macroscopic variables to describe processes that actually occur by the motions and interactions of microscopic objects such as molecules is necessarily lacking in the molecular detail of the processes, and is thus inexact. There are many microscopic states compatible with a single macroscopic state, but only the latter is specified, and that is specified exactly for the purposes of the theory.
It is reproducibility in repeated observations that identifies dynamical structure in a system. E.T. Jaynes explains how this reproducibility is why entropy is so important in this topic: entropy is a measure of experimental reproducibility. The entropy tells how many times one would have to repeat the experiment in order to expect to see a departure from the usual reproducible result. When the process goes on in a system with less than a 'practically infinite' number (much much less than Avogadro's or Loschmidt's numbers) of molecules, the thermodynamic reproducibility fades, and fluctuations become easier to see.
According to this view of Jaynes, it is a common and confusing abuse of language, that one often sees reproducibility of dynamical structure called "order". Dewar writes "Jaynes considered reproducibility - rather than disorder - to be the key idea behind the second law of thermodynamics (Jaynes 1963, 1965, 1988, 1989)." Grandy (2008) in section 4.3 on page 55 clarifies the distinction between the idea that entropy is related to order (which he considers to be an "unfortunate" "mischaracterization" that needs "debunking"), and the aforementioned idea of Jaynes that entropy is a measure of experimental reproducibility of process (which Grandy regards as correct). According to this view, even the admirable book of Glansdorff and Prigogine (1971) is guilty of this unfortunate abuse of language.
== Local thermodynamic equilibrium ==
Various principles have been proposed by diverse authors for over a century. According to Glansdorff and Prigogine (1971, page 15), in general, these principles apply only to systems that can be described by thermodynamical variables, in which dissipative processes dominate by excluding large deviations from statistical equilibrium. The thermodynamical variables are defined subject to the kinematical requirement of local thermodynamic equilibrium. This means that collisions between molecules are so frequent that chemical and radiative processes do not disrupt the local Maxwell-Boltzmann distribution of molecular velocities.
== Linear and non-linear processes ==
Dissipative structures can depend on the presence of non-linearity in their dynamical régimes. Autocatalytic reactions provide examples of non-linear dynamics, and may lead to the natural evolution of self-organized dissipative structures.
== Continuous and discontinuous motions of fluids ==
Much of the theory of classical non-equilibrium thermodynamics is concerned with the spatially continuous motion of fluids, but fluids can also move with spatial discontinuities. Helmholtz (1868) wrote about how in a flowing fluid, there can arise a zero fluid pressure, which sees the fluid broken asunder. This arises from the momentum of the fluid flow, showing a different kind of dynamical structure from that of the conduction of heat or electricity. Thus for example: water from a nozzle can form a shower of droplets (Rayleigh 1878, and in section 357 et seq. of Rayleigh (1896/1926)); waves on the surface of the sea break discontinuously when they reach the shore (Thom 1975). Helmholtz pointed out that the sounds of organ pipes must arise from such discontinuity of flow, occasioned by the passage of air past a sharp-edged obstacle; otherwise the oscillatory character of the sound wave would be damped away to nothing. The definition of the rate of entropy production of such a flow is not covered by the usual theory of classical non-equilibrium thermodynamics. There are many other commonly observed discontinuities of fluid flow that also lie beyond the scope of the classical theory of non-equilibrium thermodynamics, such as: bubbles in boiling liquids and in effervescent drinks; also protected towers of deep tropical convection (Riehl, Malkus 1958), also called penetrative convection (Lindzen 1977).
== Historical development ==
=== W. Thomson, Baron Kelvin ===
William Thomson, later Baron Kelvin, (1852 a, 1852 b) wrote
"II. When heat is created by any unreversible process (such as friction), there is a dissipation of mechanical energy, and a full restoration of it to its primitive condition is impossible.
III. When heat is diffused by conduction, there is a dissipation of mechanical energy, and perfect restoration is impossible.
IV. When radiant heat or light is absorbed, otherwise than in vegetation, or in a chemical reaction, there is a dissipation of mechanical energy, and perfect restoration is impossible."
In 1854, Thomson wrote about the relation between two previously known non-equilibrium effects. In the Peltier effect, an electric current driven by an external electric field across a bimetallic junction will cause heat to be carried across the junction when the temperature gradient is constrained to zero. In the Seebeck effect, a flow of heat driven by a temperature gradient across such a junction will cause an electromotive force across the junction when the electric current is constrained to zero. Thus thermal and electric effects are said to be coupled. Thomson (1854) proposed a theoretical argument, partly based on the work of Carnot and Clausius, and in those days partly simply speculative, that the coupling constants of these two effects would be found experimentally to be equal. Experiment later confirmed this proposal. It was later one of the ideas that led Onsager to his results as noted below.
=== Helmholtz ===
In 1869, Hermann von Helmholtz stated his Helmholtz minimum dissipation theorem, subject to a certain kind of boundary condition, a principle of least viscous dissipation of kinetic energy: "For a steady flow in a viscous liquid, with the speeds of flow on the boundaries of the fluid being given steady, in the limit of small speeds, the currents in the liquid so distribute themselves that the dissipation of kinetic energy by friction is minimum."
In 1878, Helmholtz, like Thomson also citing Carnot and Clausius, wrote about electric current in an electrolyte solution with a concentration gradient. This shows a non-equilibrium coupling, between electric effects and concentration-driven diffusion. Like Thomson (Kelvin) as noted above, Helmholtz also found a reciprocal relation, and this was another of the ideas noted by Onsager.
=== J. W. Strutt, Baron Rayleigh ===
Rayleigh (1873) (and in Sections 81 and 345 of Rayleigh (1896/1926)) introduced the dissipation function for the description of dissipative processes involving viscosity. More general versions of this function have been used by many subsequent investigators of the nature of dissipative processes and dynamical structures. Rayleigh's dissipation function was conceived of from a mechanical viewpoint, and it did not refer in its definition to temperature, and it needed to be 'generalized' to make a dissipation function suitable for use in non-equilibrium thermodynamics.
Studying jets of water from a nozzle, Rayleigh (1878, 1896/1926) noted that when a jet is in a state of conditionally stable dynamical structure, the mode of fluctuation most likely to grow to its full extent and lead to another state of conditionally stable dynamical structure is the one with the fastest growth rate. In other words, a jet can settle into a conditionally stable state, but it is likely to suffer fluctuation so as to pass to another, less unstable, conditionally stable state. He used like reasoning in a study of Bénard convection. These physically lucid considerations of Rayleigh seem to contain the heart of the distinction between the principles of minimum and maximum rates of dissipation of energy and entropy production, which have been developed in the course of physical investigations by later authors.
=== Korteweg ===
Korteweg (1883) gave a proof "that in any simply connected region, when the velocities along the boundaries are given, there exists, as far as the squares and products of the velocities may be neglected, only one solution of the equations for the steady motion of an incompressible viscous fluid, and that this solution is always stable." He attributed the first part of this theorem to Helmholtz, who had shown that it is a simple consequence of a theorem that "if the motion be steady, the currents in a viscous [incompressible] fluid are so distributed that the loss of [kinetic] energy due to viscosity is a minimum, on the supposition that the velocities along boundaries of the fluid are given." Because of the restriction to cases in which the squares and products of the velocities can be neglected, these motions are below the threshold for turbulence.
=== Onsager ===
Great theoretical progress was made by Onsager in 1931 and in 1953.
=== Prigogine ===
Further progress was made by Prigogine in 1945 and later. Prigogine (1947) cites Onsager (1931).
=== Casimir ===
Casimir (1945) extended the theory of Onsager.
=== Ziman ===
Ziman (1956) gave a very readable account. He proposed the following as a general principle of the thermodynamics of irreversible processes: "Consider all distributions of currents such that the intrinsic entropy production equals the extrinsic entropy production for the given set of forces. Then, of all current distributions satisfying this condition, the steady state distribution makes the entropy production a maximum." He commented that this was a known general principle, discovered by Onsager, but was "not quoted in any of the books on the subject". He notes the difference between this principle and "Prigogine's theorem, which states, crudely speaking, that if not all the forces acting on a system are fixed the free forces will take such values as to make the entropy production a minimum." Prigogine was present when this paper was read and he is reported by the journal editor to have given "notice that he doubted the validity of part of Ziman's thermodynamic interpretation".
=== Ziegler ===
Hans Ziegler extended the Melan-Prager non-equilibrium theory of materials to the non-isothermal case.
=== Gyarmati ===
Gyarmati (1967/1970) gives a systematic presentation, and extends Onsager's principle of least dissipation of energy, to give a more symmetric form known as Gyarmati's principle. Gyarmati (1967/1970) cites 11 papers or books authored or co-authored by Prigogine.
Gyarmati (1967/1970) also gives in Section III 5 a very helpful precis of the subtleties of Casimir (1945). He explains that the Onsager reciprocal relations concern variables which are even functions of the velocities of the molecules, and notes that Casimir went on to derive anti-symmetric relations concerning variables which are odd functions of the velocities of the molecules.
=== Paltridge ===
The physics of the earth's atmosphere includes dramatic events like lightning and the effects of volcanic eruptions, with discontinuities of motion such as noted by Helmholtz (1868). Turbulence is prominent in atmospheric convection. Other discontinuities include the formation of raindrops, hailstones, and snowflakes. The usual theory of classical non-equilibrium thermodynamics will need some extension to cover atmospheric physics. According to Tuck (2008), "On the macroscopic level, the way has been pioneered by a meteorologist (Paltridge 1975, 2001)." Initially Paltridge (1975) used the terminology "minimum entropy exchange", but after that, for example in Paltridge (1978), and in Paltridge (1979), he used the now current terminology "maximum entropy production" to describe the same thing. This point is clarified in the review by Ozawa, Ohmura, Lorenz, Pujol (2003). Paltridge (1978) cited Busse's (1967) fluid mechanical work concerning an extremum principle. Nicolis and Nicolis (1980) discuss Paltridge's work, and they comment that the behaviour of the entropy production is far from simple and universal. This seems natural in the context of the requirement of some classical theory of non-equilibrium thermodynamics that the threshold of turbulence not be crossed. Paltridge himself nowadays tends to prefer to think in terms of the dissipation function rather than in terms of rate of entropy production.
== Speculated thermodynamic extremum principles for energy dissipation and entropy production ==
Jou, Casas-Vazquez, Lebon (1993) note that classical non-equilibrium thermodynamics "has seen an extraordinary expansion since the second world war", and they refer to the Nobel prizes for work in the field awarded to Lars Onsager and Ilya Prigogine. Martyushev and Seleznev (2006) note the importance of entropy in the evolution of natural dynamical structures: "Great contribution has been done in this respect by two scientists, namely Clausius, ... , and Prigogine." Prigogine in his 1977 Nobel Lecture said: "... non-equilibrium may be a source of order. Irreversible processes may lead to a new type of dynamic states of matter which I have called “dissipative structures”." Glansdorff and Prigogine (1971) wrote on page xx: "Such 'symmetry breaking instabilities' are of special interest as they lead to a spontaneous 'self-organization' of the system both from the point of view of its space order and its function."
Analyzing the Rayleigh–Bénard convection cell phenomenon, Chandrasekhar (1961) wrote "Instability occurs at the minimum temperature gradient at which a balance can be maintained between the kinetic energy dissipated by viscosity and the internal energy released by the buoyancy force." With a temperature gradient greater than the minimum, viscosity can dissipate kinetic energy as fast as it is released by convection due to buoyancy, and a steady state with convection is stable. The steady state with convection is often a pattern of macroscopically visible hexagonal cells with convection up or down in the middle or at the 'walls' of each cell, depending on the temperature dependence of the quantities; in the atmosphere under various conditions it seems that either is possible. (Some details are discussed by Lebon, Jou, and Casas-Vásquez (2008) on pages 143–158.) With a temperature gradient less than the minimum, viscosity and heat conduction are so effective that convection cannot keep going.
Glansdorff and Prigogine (1971) on page xv wrote "Dissipative structures have a quite different [from equilibrium structures] status: they are formed and maintained through the effect of exchange of energy and matter in non-equilibrium conditions." They were referring to the dissipation function of Rayleigh (1873) that was used also by Onsager (1931, I, 1931, II). On pages 78–80 of their book Glansdorff and Prigogine (1971) consider the stability of laminar flow that was pioneered by Helmholtz; they concluded that at a stable steady state of sufficiently slow laminar flow, the dissipation function was minimum.
These advances have led to proposals for various extremal principles for the "self-organized" régimes that are possible for systems governed by classical linear and non-linear non-equilibrium thermodynamical laws, with stable stationary régimes being particularly investigated. Convection introduces effects of momentum which appear as non-linearity in the dynamical equations. In the more restricted case of no convective motion, Prigogine wrote of "dissipative structures". Šilhavý (1997) offers the opinion that "... the extremum principles of [equilibrium] thermodynamics ... do not have any counterpart for [non-equilibrium] steady states (despite many claims in the literature)."
=== Prigogine's proposed theorem of minimum entropy production for very slow purely diffusive transfer ===
In 1945 Prigogine (see also Prigogine (1947)) proposed a “Theorem of Minimum Entropy Production” which applies only to the purely diffusive linear regime, with negligible inertial terms, near a stationary thermodynamically non-equilibrium state. Prigogine's proposal is that the rate of entropy production is locally minimum at every point. The proof offered by Prigogine is open to serious criticism. A critical and unsupportive discussion of Prigogine's proposal is offered by Grandy (2008). It has been shown by Barbera that the total whole body entropy production cannot be minimum, but this paper did not consider the pointwise minimum proposal of Prigogine. A proposal closely related to Prigogine's is that the pointwise rate of entropy production should have its maximum value minimized at the steady state. This is compatible, but not identical, with the Prigogine proposal. Moreover, N. W. Tschoegl proposes a proof, perhaps more physically motivated than Prigogine's, that would if valid support the conclusion of Helmholtz and of Prigogine, that under these restricted conditions, the entropy production is at a pointwise minimum.
=== Faster transfer with convective circulation: second entropy ===
In contrast to the case of sufficiently slow transfer with linearity between flux and generalized force with negligible inertial terms, there can be heat transfer that is not very slow. Then there is consequent non-linearity, and heat flow can develop into phases of convective circulation. In these cases, the time rate of entropy production has been shown to be a non-monotonic function of time during the approach to steady state heat convection. This makes these cases different from the near-thermodynamic-equilibrium regime of very-slow-transfer with linearity. Accordingly, the local time rate of entropy production, defined according to the local thermodynamic equilibrium hypothesis, is not an adequate variable for prediction of the time course of far-from-thermodynamic equilibrium processes. The principle of minimum entropy production is not applicable to these cases.
To cover these cases, there is needed at least one further state variable, a non-equilibrium quantity, the so-called second entropy. This appears to be a step towards generalization beyond the classical second law of thermodynamics, to cover non-equilibrium states or processes. The classical law refers only to states of thermodynamic equilibrium, and local thermodynamic equilibrium theory is an approximation that relies upon it. Still it is invoked to deal with phenomena near but not at thermodynamic equilibrium, and has some uses then. But the classical law is inadequate for description of the time course of processes far from thermodynamic equilibrium. For such processes, a more powerful theory is needed, and the second entropy is part of such a theory.
=== Speculated principles of maximum entropy production and minimum energy dissipation ===
Onsager (1931, I) wrote: "Thus the vector field J of the heat flow is described by the condition that the rate of increase of entropy, less the dissipation function, be a maximum." Careful note needs to be taken of the opposite signs of the rate of entropy production and of the dissipation function, appearing in the left-hand side of Onsager's equation (5.13) on Onsager's page 423.
Although largely unnoticed at the time, Ziegler proposed an idea early with his work in the mechanics of plastics in 1961, and later in his book on thermomechanics revised in 1983, and in various papers (e.g., Ziegler (1987)). Ziegler never stated his principle as a universal law but he may have intuited this. He demonstrated his principle using vector space geometry based on an “orthogonality condition” which only worked in systems where the velocities were defined as a single vector or tensor, and thus, as he wrote at p. 347, was “impossible to test by means of macroscopic mechanical models”, and was, as he pointed out, invalid in “compound systems where several elementary processes take place simultaneously”.
In relation to the earth's atmospheric energy transport process, according to Tuck (2008), "On the macroscopic level, the way has been pioneered by a meteorologist (Paltridge 1975, 2001)." Initially Paltridge (1975) used the terminology "minimum entropy exchange", but after that, for example in Paltridge (1978), and in Paltridge (1979), he used the now current terminology "maximum entropy production" to describe the same thing. The logic of Paltridge's earlier work is open to serious criticism. Nicolis and Nicolis (1980) discuss Paltridge's work, and they comment that the behaviour of the entropy production is far from simple and universal. Later work by Paltridge focuses more on the idea of a dissipation function than on the idea of rate of production of entropy.
Sawada (1981), also in relation to the Earth's atmospheric energy transport process, postulating a principle of largest amount of entropy increment per unit time, cites work in fluid mechanics by Malkus and Veronis (1958) as having "proven a principle of maximum heat current, which in turn is a maximum entropy production for a given boundary condition", but this inference is not logically valid. Again investigating planetary atmospheric dynamics, Shutts (1981) used an approach to the definition of entropy production, different from Paltridge's, to investigate a more abstract way to check the principle of maximum entropy production, and reported a good fit.
=== Prospects ===
Until recently, prospects for useful extremal principles in this area have seemed clouded. C. Nicolis (1999) concludes that one model of atmospheric dynamics has an attractor which is not a regime of maximum or minimum dissipation; she says this seems to rule out the existence of a global organizing principle, and comments that this is to some extent disappointing; she also points to the difficulty of finding a thermodynamically consistent form of entropy production. Another top expert offers an extensive discussion of the possibilities for principles of extrema of entropy production and of dissipation of energy: Chapter 12 of Grandy (2008) is very cautious, and finds difficulty in defining the 'rate of internal entropy production' in many cases, and finds that sometimes for the prediction of the course of a process, an extremum of the quantity called the rate of dissipation of energy may be more useful than that of the rate of entropy production; this quantity appeared in Onsager's 1931 origination of this subject. Other writers have also felt that prospects for general global extremal principles are clouded. Such writers include Glansdorff and Prigogine (1971), Lebon, Jou and Casas-Vásquez (2008), and Šilhavý (1997). It has been shown that heat convection does not obey extremal principles for entropy production and chemical reactions do not obey extremal principles for the secondary differential of entropy production, hence the development of a general extremal principle seems infeasible.
== See also ==
Non-equilibrium thermodynamics
Dissipative system
Self-organization
Autocatalytic reactions and order creation
Fluctuation theorem
Fluctuation dissipation theorem
== References ==
The lapse rate is the rate at which an atmospheric variable, normally temperature in Earth's atmosphere, falls with altitude. Lapse rate arises from the word lapse (in its "becoming less" sense, not its "interruption" sense). In dry air, the adiabatic lapse rate (i.e., decrease in temperature of a parcel of air that rises in the atmosphere without exchanging energy with surrounding air) is 9.8 °C/km (5.4 °F per 1,000 ft). The saturated adiabatic lapse rate (SALR), or moist adiabatic lapse rate (MALR), is the decrease in temperature of a parcel of water-saturated air that rises in the atmosphere. It varies with the temperature and pressure of the parcel and is often in the range 3.6 to 9.2 °C/km (2 to 5 °F/1000 ft), as obtained from the International Civil Aviation Organization (ICAO). The environmental lapse rate is the decrease in temperature of air with altitude for a specific time and place (see below). It can be highly variable between circumstances.
Lapse rate corresponds to the vertical component of the spatial gradient of temperature. Although this concept is most often applied to the Earth's troposphere, it can be extended to any gravitationally supported parcel of gas.
== Environmental lapse rate ==
A formal definition from the Glossary of Meteorology is:
The decrease of an atmospheric variable with height, the variable being temperature unless otherwise specified.
Typically, the lapse rate is the negative of the rate of temperature change with altitude change:
{\displaystyle \Gamma =-{\frac {\mathrm {d} T}{\mathrm {d} z}}}
where {\displaystyle \Gamma } (sometimes {\displaystyle L}) is the lapse rate given in units of temperature divided by units of altitude, T is temperature, and z is altitude.
The environmental lapse rate (ELR) is the actual rate of decrease of temperature with altitude in the atmosphere at a given time and location.
As an average, the International Civil Aviation Organization (ICAO) defines an international standard atmosphere (ISA) with a temperature lapse rate of 6.50 °C/km (3.56 °F or 1.98 °C/1,000 ft) from sea level to 11 km (36,090 ft or 6.8 mi). From 11 km up to 20 km (65,620 ft or 12.4 mi), the constant temperature is −56.5 °C (−69.7 °F), which is the lowest assumed temperature in the ISA. The standard atmosphere contains no moisture.
Unlike the idealized ISA, the temperature of the actual atmosphere does not always fall at a uniform rate with height. For example, there can be an inversion layer in which the temperature increases with altitude.
== Cause ==
The temperature profile of the atmosphere is a result of the interaction between radiative heating from sunlight, cooling to space via thermal radiation, and upward heat transport via natural convection (which carries hot air and latent heat upward). Above the tropopause, convection does not occur and all cooling is radiative.
Within the troposphere, the lapse rate is essentially the consequence of a balance between (a) radiative cooling of the air, which by itself would lead to a high lapse rate; and (b) convection, which is activated when the lapse rate exceeds a critical value; convection stabilizes the environmental lapse rate.
Sunlight hits the surface of the earth (land and sea) and heats them. The warm surface heats the air above it. In addition, nearly a third of absorbed sunlight is absorbed within the atmosphere, heating the atmosphere directly.
Thermal conduction helps transfer heat from the surface to the air; this conduction occurs within the few millimeters of air closest to the surface. However, above that thin interface layer, thermal conduction plays a negligible role in transferring heat within the atmosphere; this is because the thermal conductivity of air is very low.
The air is radiatively cooled by greenhouse gases (water vapor, carbon dioxide, etc.) and clouds emitting longwave thermal radiation to space.
If radiation were the only way to transfer energy within the atmosphere, then the lapse rate near the surface would be roughly 40 °C/km and the greenhouse effect of gases in the atmosphere would keep the ground at roughly 333 K (60 °C; 140 °F).
However, when air gets hot or humid, its density decreases. Thus, air which has been heated by the surface tends to rise and carry internal energy upward, especially if the air has been moistened by evaporation from water surfaces. This is the process of convection. Vertical convective motion stops when a parcel of air at a given altitude has the same density as the other air at the same elevation.
Convection carries hot, moist air upward and cold, dry air downward, with a net effect of transferring heat upward. This makes the air below cooler than it would otherwise be and the air above warmer. Because convection is available to transfer heat within the atmosphere, the lapse rate in the troposphere is reduced to around 6.5 °C/km and the greenhouse effect is reduced to a point where Earth has its observed surface temperature of around 288 K (15 °C; 59 °F).
== Convection and adiabatic expansion ==
As convection causes parcels of air to rise or fall, there is little heat transfer between those parcels and the surrounding air. Air has low thermal conductivity, and the bodies of air involved are very large; so transfer of heat by conduction is negligibly small. Also, intra-atmospheric radiative heat transfer is relatively slow and so is negligible for moving air. Thus, when air ascends or descends, there is little exchange of heat with the surrounding air. A process in which no heat is exchanged with the environment is referred to as an adiabatic process.
Air expands as it moves upward, and contracts as it moves downward. The expansion of rising air parcels, and the contraction of descending air parcels, are adiabatic processes, to a good approximation. When a parcel of air expands, it pushes on the air around it, doing thermodynamic work. Since the upward-moving and expanding parcel does work but gains no heat, it loses internal energy so that its temperature decreases. Downward-moving and contracting air has work done on it, so it gains internal energy and its temperature increases.
Adiabatic processes for air have a characteristic temperature-pressure curve. As air circulates vertically, the air takes on that characteristic gradient, called the adiabatic lapse rate. When the air contains little water, this lapse rate is known as the dry adiabatic lapse rate: the rate of temperature decrease is 9.8 °C/km (5.4 °F per 1,000 ft) (3.0 °C/1,000 ft). The reverse occurs for a sinking parcel of air.
When the environmental lapse rate is less than the adiabatic lapse rate, the atmosphere is stable and convection will not occur. The environmental lapse rate is forced towards the adiabatic lapse rate whenever air is convecting vertically.
Only the troposphere (up to approximately 12 kilometres (39,000 ft) of altitude) in the Earth's atmosphere undergoes convection: the stratosphere does not generally convect. However, some exceptionally energetic convection processes, such as volcanic eruption columns and overshooting tops associated with severe supercell thunderstorms, may locally and temporarily inject convection through the tropopause and into the stratosphere.
Energy transport in the atmosphere is more complex than the interaction between radiation and dry convection. The water cycle (including evaporation, condensation, precipitation) transports latent heat and affects atmospheric humidity levels, significantly influencing the temperature profile, as described below.
== Mathematics of the adiabatic lapse rate ==
The following calculations derive the temperature as a function of altitude for a packet of air which is ascending or descending without exchanging heat with its environment.
=== Dry adiabatic lapse rate ===
Thermodynamics defines an adiabatic process as:
{\displaystyle P\,\mathrm {d} V=-{\frac {V\,\mathrm {d} P}{\gamma }}}
the first law of thermodynamics can be written as
{\displaystyle mc_{\text{v}}\,\mathrm {d} T-{\frac {V\,\mathrm {d} P}{\gamma }}=0}
Also, since the density
{\displaystyle \rho =m/V}
and
{\displaystyle \gamma =c_{\text{p}}/c_{\text{v}}}
, we can show that:
{\displaystyle \rho c_{\text{p}}\,\mathrm {d} T-\mathrm {d} P=0}
where
{\displaystyle c_{\text{p}}}
is the specific heat at constant pressure.
Assuming an atmosphere in hydrostatic equilibrium:
{\displaystyle \mathrm {d} P=-\rho g\,\mathrm {d} z}
where g is the standard gravity. Combining these two equations to eliminate the pressure, one arrives at the result for the dry adiabatic lapse rate (DALR),
{\displaystyle \Gamma _{\text{d}}=-{\frac {\mathrm {d} T}{\mathrm {d} z}}={\frac {g}{c_{\text{p}}}}=9.8\ ^{\circ }{\text{C}}/{\text{km}}}
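With the standard values g ≈ 9.81 m/s² and cp ≈ 1004 J/(kg·K) for dry air, the quotient indeed reproduces the quoted figure:
{\displaystyle \Gamma _{\text{d}}={\frac {g}{c_{\text{p}}}}\approx {\frac {9.81\ \mathrm {m\,s^{-2}} }{1004\ \mathrm {J\,kg^{-1}\,K^{-1}} }}\approx 9.8\times 10^{-3}\ \mathrm {K\,m^{-1}} =9.8\ ^{\circ }{\text{C}}/{\text{km}}.}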
The DALR ({\displaystyle \Gamma _{\text{d}}}) is the temperature gradient experienced in an ascending or descending packet of air that is not saturated with water vapor, i.e., with less than 100% relative humidity.
=== Moist adiabatic lapse rate ===
The presence of water within the atmosphere (usually the troposphere) complicates the process of convection. Water vapor contains latent heat of vaporization. As a parcel of air rises and cools, it eventually becomes saturated; that is, the vapor pressure of water in equilibrium with liquid water has decreased (as temperature has decreased) to the point where it is equal to the actual vapor pressure of water. With further decrease in temperature the water vapor in excess of the equilibrium amount condenses, forming cloud, and releasing heat (latent heat of condensation). Before saturation, the rising air follows the dry adiabatic lapse rate. After saturation, the rising air follows the moist (or wet) adiabatic lapse rate. The release of latent heat is an important source of energy in the development of thunderstorms.
While the dry adiabatic lapse rate is a constant 9.8 °C/km (5.4 °F per 1,000 ft, 3 °C/1,000 ft), the moist adiabatic lapse rate varies strongly with temperature. A typical value is around 5 °C/km, (9 °F/km, 2.7 °F/1,000 ft, 1.5 °C/1,000 ft). The formula for the saturated adiabatic lapse rate (SALR) or moist adiabatic lapse rate (MALR) is given by:
{\displaystyle \Gamma _{\text{w}}=g\,{\frac {\left(1+{\dfrac {H_{\text{v}}\,r}{R_{\text{sd}}\,T}}\right)}{\left(c_{\text{pd}}+{\dfrac {H_{\text{v}}^{2}\,r}{R_{\text{sw}}\,T^{2}}}\right)}}}
where:
Γw = the moist (or saturated) adiabatic lapse rate
g = Earth's gravitational acceleration (about 9.81 m/s²)
Hv = latent heat of vaporization of water (about 2.501 × 10⁶ J/kg)
r = mixing ratio of the mass of water vapor to the mass of dry air
Rsd = specific gas constant of dry air (about 287 J/(kg·K))
Rsw = specific gas constant of water vapor (about 461.5 J/(kg·K))
cpd = specific heat of dry air at constant pressure (about 1004 J/(kg·K))
T = temperature of the saturated air, in K
The SALR or MALR ({\displaystyle \Gamma _{\text{w}}}) is the temperature gradient experienced in an ascending or descending packet of air that is saturated with water vapor, i.e., with 100% relative humidity.
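As a rough numerical check, the Python sketch below evaluates both lapse rates using typical values of the constants listed above; the parcel temperature and saturation mixing ratio are assumed illustrative values for warm near-surface air.

# Illustrative sketch with assumed values: dry and moist adiabatic lapse rates.
g = 9.81         # gravitational acceleration, m/s^2
cpd = 1004.0     # specific heat of dry air at constant pressure, J/(kg K)
Hv = 2.501e6     # latent heat of vaporization of water, J/kg
Rsd = 287.0      # specific gas constant of dry air, J/(kg K)
Rsw = 461.5      # specific gas constant of water vapor, J/(kg K)
T = 288.0        # parcel temperature, K (assumed)
r = 0.010        # saturation mixing ratio, kg/kg (assumed, roughly 15 degrees C near sea level)

gamma_d = g / cpd
gamma_w = g * (1.0 + Hv * r / (Rsd * T)) / (cpd + Hv**2 * r / (Rsw * T**2))

print(f"dry adiabatic lapse rate:   {gamma_d * 1000:.1f} K/km")   # about 9.8 K/km
print(f"moist adiabatic lapse rate: {gamma_w * 1000:.1f} K/km")   # about 4.8 K/km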
== Effect on weather ==
The varying environmental lapse rates throughout the Earth's atmosphere are of critical importance in meteorology, particularly within the troposphere. They are used to determine if the parcel of rising air will rise high enough for its water to condense to form clouds, and, having formed clouds, whether the air will continue to rise and form bigger shower clouds, and whether these clouds will get even bigger and form cumulonimbus clouds (thunder clouds).
As unsaturated air rises, its temperature drops at the dry adiabatic rate. The dew point also drops (as a result of decreasing air pressure) but much more slowly, typically about 2 °C per 1,000 m. If unsaturated air rises far enough, eventually its temperature will reach its dew point, and condensation will begin to form. This altitude is known as the lifting condensation level (LCL) when mechanical lift is present and the convective condensation level (CCL) when mechanical lift is absent, in which case, the parcel must be heated from below to its convective temperature. The cloud base will be somewhere within the layer bounded by these parameters.
The difference between the dry adiabatic lapse rate and the rate at which the dew point drops is around 4.5 °C per 1,000 m. Given a difference in temperature and dew point readings on the ground, one can easily find the LCL by multiplying the difference by 125 m/°C.
If the environmental lapse rate is less than the moist adiabatic lapse rate, the air is absolutely stable — rising air will cool faster than the surrounding air and lose buoyancy. This often happens in the early morning, when the air near the ground has cooled overnight. Cloud formation in stable air is unlikely.
If the environmental lapse rate is between the moist and dry adiabatic lapse rates, the air is conditionally unstable — an unsaturated parcel of air does not have sufficient buoyancy to rise to the LCL or CCL, and it is stable to weak vertical displacements in either direction. If the parcel is saturated it is unstable and will rise to the LCL or CCL, and either be halted due to an inversion layer of convective inhibition, or if lifting continues, deep, moist convection (DMC) may ensue, as a parcel rises to the level of free convection (LFC), after which it enters the free convective layer (FCL) and usually rises to the equilibrium level (EL).
If the environmental lapse rate is larger than the dry adiabatic lapse rate, it has a superadiabatic lapse rate, the air is absolutely unstable — a parcel of air will gain buoyancy as it rises both below and above the lifting condensation level or convective condensation level. This often happens in the afternoon mainly over land masses. In these conditions, the likelihood of cumulus clouds, showers or even thunderstorms is increased.
Meteorologists use radiosondes to measure the environmental lapse rate and compare it to the predicted adiabatic lapse rate to forecast the likelihood that air will rise. Charts of the environmental lapse rate are known as thermodynamic diagrams, examples of which include Skew-T log-P diagrams and tephigrams. (See also Thermals).
The difference in moist adiabatic lapse rate and the dry rate is the cause of foehn wind phenomenon (also known as "Chinook winds" in parts of North America). The phenomenon exists because warm moist air rises through orographic lifting up and over the top of a mountain range or large mountain. The temperature decreases with the dry adiabatic lapse rate, until it hits the dew point, where water vapor in the air begins to condense. Above that altitude, the adiabatic lapse rate decreases to the moist adiabatic lapse rate as the air continues to rise. Condensation is also commonly followed by precipitation on the top and windward sides of the mountain. As the air descends on the leeward side, it is warmed by adiabatic compression at the dry adiabatic lapse rate. Thus, the foehn wind at a certain altitude is warmer than the corresponding altitude on the windward side of the mountain range. In addition, because the air has lost much of its original water vapor content, the descending air creates an arid region on the leeward side of the mountain.
== Impact on the greenhouse effect ==
If the environmental lapse rate were zero, so that the atmosphere had the same temperature at all elevations, then there would be no greenhouse effect. This does not mean that the lapse rate and the greenhouse effect are the same thing, just that the lapse rate is a prerequisite for the greenhouse effect.
The presence of greenhouse gases on a planet causes radiative cooling of the air, which leads to the formation of a non-zero lapse rate. So, the presence of greenhouse gases leads to there being a greenhouse effect at a global level. However, this need not be the case at a localized level.
The localized greenhouse effect is stronger in locations where the lapse rate is stronger. In Antarctica, thermal inversions in the atmosphere (so that air at higher altitudes is warmer) sometimes cause the localized greenhouse effect to become negative (signifying enhanced radiative cooling to space instead of inhibited radiative cooling as is the case for a positive greenhouse effect).
== Lapse rate in an isolated column of gas ==
A question has sometimes arisen as to whether a temperature gradient will arise in a column of still air in a gravitational field without external energy flows. This issue was addressed by James Clerk Maxwell, who established in 1868 that if any temperature gradient forms, then that temperature gradient must be universal (i.e., the gradient must be the same for all materials) or the second law of thermodynamics would be violated. Maxwell also concluded that the universal result must be one in which the temperature is uniform, i.e., the lapse rate is zero.
Santiago and Visser (2019) confirm the correctness of Maxwell's conclusion (zero lapse rate) provided relativistic effects are neglected. When relativity is taken into account, gravity gives rise to an extremely small lapse rate, the Tolman gradient (derived by R. C. Tolman in 1930). At Earth's surface, the Tolman gradient would be about
{\displaystyle \Gamma _{t}=T_{s}\times (10^{-16}\ \mathrm {m} ^{-1})}
, where
{\displaystyle T_{s}}
is the temperature of the gas at the elevation of Earth's surface. Santiago and Visser remark that "gravity is the only force capable of creating temperature gradients in thermal equilibrium states without violating the laws of thermodynamics" and "the existence of Tolman's temperature gradient is not at all controversial (at least not within the general relativity community)."
== See also ==
Adiabatic process
Atmospheric thermodynamics
Fluid dynamics
Foehn wind
Lapse rate climate feedback
Scale height
== Notes ==
== References ==
== Further reading ==
Beychok, Milton R. (2005). Fundamentals Of Stack Gas Dispersion (4th ed.). author-published. ISBN 978-0-9644588-0-2. www.air-dispersion.com
R. R. Rogers and M. K. Yau (1989). Short Course in Cloud Physics (3rd ed.). Butterworth-Heinemann. ISBN 978-0-7506-3215-7.
== External links ==
Definition, equations and tables of lapse rate from the Planetary Data system.
National Science Digital Library glossary:
Lapse Rate
Environmental lapse rate
Absolute stable air
An introduction to lapse rate calculation from first principles from U. Texas | Wikipedia/Adiabatic_lapse_rate |
Cloud physics is the study of the physical processes that lead to the formation, growth and precipitation of atmospheric clouds. These aerosols are found in the troposphere, stratosphere, and mesosphere, which collectively make up the greatest part of the homosphere. Clouds consist of microscopic droplets of liquid water (warm clouds), tiny crystals of ice (cold clouds), or both (mixed phase clouds), along with microscopic particles of dust, smoke, or other matter, known as condensation nuclei. Cloud droplets initially form by the condensation of water vapor onto condensation nuclei when the supersaturation of air exceeds a critical value according to Köhler theory. Cloud condensation nuclei are necessary for cloud droplets formation because of the Kelvin effect, which describes the change in saturation vapor pressure due to a curved surface. At small radii, the amount of supersaturation needed for condensation to occur is so large, that it does not happen naturally. Raoult's law describes how the vapor pressure is dependent on the amount of solute in a solution. At high concentrations, when the cloud droplets are small, the supersaturation required is smaller than without the presence of a nucleus.
In warm clouds, larger cloud droplets fall at a higher terminal velocity; because at a given velocity, the drag force per unit of droplet weight on smaller droplets is larger than on large droplets. The large droplets can then collide with small droplets and combine to form even larger drops. When the drops become large enough that their downward velocity (relative to the surrounding air) is greater than the upward velocity (relative to the ground) of the surrounding air, the drops can fall as precipitation. The collision and coalescence is not as important in mixed phase clouds where the Bergeron process dominates. Other important processes that form precipitation are riming, when a supercooled liquid drop collides with a solid snowflake, and aggregation, when two solid snowflakes collide and combine. The precise mechanics of how a cloud forms and grows is not completely understood, but scientists have developed theories explaining the structure of clouds by studying the microphysics of individual droplets. Advances in weather radar and satellite technology have also allowed the precise study of clouds on a large scale.
== History of cloud physics ==
The modern cloud physics began in the 19th century and was described in several publications. Otto von Guericke originated the idea that clouds were composed of water bubbles. In 1847 Augustus Waller used spider web to examine droplets under the microscope. These observations were confirmed by William Henry Dines in 1880 and Richard Assmann in 1884.
== Cloud formation ==
=== Cooling air to its dew point ===
==== Adiabatic cooling ====
As water evaporates from an area of Earth's surface, the air over that area becomes moist. Moist air is lighter than the surrounding dry air, creating an unstable situation. When enough moist air has accumulated, all the moist air rises as a single packet, without mixing with the surrounding air. As more moist air forms along the surface, the process repeats, resulting in a series of discrete packets of moist air rising to form clouds.
This process occurs when one or more of three possible lifting agents—cyclonic/frontal, convective, or orographic—causes air containing invisible water vapor to rise and cool to its dew point, the temperature at which the air becomes saturated. The main mechanism behind this process is adiabatic cooling. Atmospheric pressure decreases with altitude, so the rising air expands in a process that expends energy and causes the air to cool, which makes water vapor condense into cloud. Water vapor in saturated air is normally attracted to condensation nuclei such as dust and salt particles that are small enough to be held aloft by normal circulation of the air. The water droplets in a cloud have a normal radius of about 0.002 mm (0.00008 in). The droplets may collide to form larger droplets, which remain aloft as long as the velocity of the rising air within the cloud is equal to or greater than the terminal velocity of the droplets.
For non-convective cloud, the altitude at which condensation begins to happen is called the lifted condensation level (LCL), which roughly determines the height of the cloud base. Free convective clouds generally form at the altitude of the convective condensation level (CCL). Water vapor in saturated air is normally attracted to condensation nuclei such as salt particles that are small enough to be held aloft by normal circulation of the air. If the condensation process occurs below the freezing level in the troposphere, the nuclei help transform the vapor into very small water droplets. Clouds that form just above the freezing level are composed mostly of supercooled liquid droplets, while those that condense out at higher altitudes where the air is much colder generally take the form of ice crystals. An absence of sufficient condensation particles at and above the condensation level causes the rising air to become supersaturated and the formation of cloud tends to be inhibited.
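A common rule of thumb, used here purely as an assumed approximation rather than a result from the text, puts the LCL roughly 125 m higher for every 1 °C of spread between the surface temperature and the dew point. A minimal sketch:
def lcl_height_m(temperature_c, dew_point_c):
    """Rough rule-of-thumb estimate of LCL height above the surface, in metres."""
    # The 125 m per deg C factor is an assumed approximation, not an exact result.
    return 125.0 * (temperature_c - dew_point_c)

print(lcl_height_m(30.0, 20.0))   # ~1250 m for a 10 deg C dew-point depression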
===== Frontal and cyclonic lift =====
Frontal and cyclonic lift occur in their purest manifestations when stable air, which has been subjected to little or no surface heating, is forced aloft at weather fronts and around centers of low pressure. Warm fronts associated with extratropical cyclones tend to generate mostly cirriform and stratiform clouds over a wide area unless the approaching warm airmass is unstable, in which case cumulus congestus or cumulonimbus clouds will usually be embedded in the main precipitating cloud layer. Cold fronts are usually faster moving and generate a narrower line of clouds which are mostly stratocumuliform, cumuliform, or cumulonimbiform depending on the stability of the warm air mass just ahead of the front.
===== Convective lift =====
Another agent is the buoyant convective upward motion caused by significant daytime solar heating at surface level, or by relatively high absolute humidity. Incoming short-wave radiation generated by the sun is re-emitted as long-wave radiation when it reaches Earth's surface. This process warms the air closest to ground and increases air mass instability by creating a steeper temperature gradient from warm or hot at surface level to cold aloft. This causes it to rise and cool until temperature equilibrium is achieved with the surrounding air aloft. Moderate instability allows for the formation of cumuliform clouds of moderate size that can produce light showers if the airmass is sufficiently moist. Typical convection upcurrents may allow the droplets to grow to a radius of about 0.015 millimetres (0.0006 in) before precipitating as showers. The equivalent diameter of these droplets is about 0.03 millimetres (0.001 in).
If air near the surface becomes extremely warm and unstable, its upward motion can become quite explosive, resulting in towering cumulonimbiform clouds that can cause severe weather. As tiny water particles that make up the cloud group together to form droplets of rain, they are pulled down to earth by the force of gravity. The droplets would normally evaporate below the condensation level, but strong updrafts buffer the falling droplets, and can keep them aloft much longer than they would otherwise. Violent updrafts can reach speeds of up to 180 miles per hour (290 km/h). The longer the rain droplets remain aloft, the more time they have to grow into larger droplets that eventually fall as heavy showers.
Rain droplets that are carried well above the freezing level become supercooled at first, then freeze into small hail. A frozen ice nucleus can grow to 0.5 inches (1.3 cm) in size while traveling through one of these updrafts, and can cycle through several updrafts and downdrafts before finally becoming so heavy that it falls to the ground as large hail. Cutting a hailstone in half shows onion-like layers of ice, indicating distinct times when it passed through a layer of super-cooled water. Hailstones have been found with diameters of up to 7 inches (18 cm).
Convective lift can occur in an unstable air mass well away from any fronts. However, very warm unstable air can also be present around fronts and low-pressure centers, often producing cumuliform and cumulonimbiform clouds in heavier and more active concentrations because of the combined frontal and convective lifting agents. As with non-frontal convective lift, increasing instability promotes upward vertical cloud growth and raises the potential for severe weather. On comparatively rare occasions, convective lift can be powerful enough to penetrate the tropopause and push the cloud top into the stratosphere.
===== Orographic lift =====
A third source of lift is wind circulation forcing air over a physical barrier such as a mountain (orographic lift). If the air is generally stable, nothing more than lenticular cap clouds will form. However, if the air becomes sufficiently moist and unstable, orographic showers or thunderstorms may appear.
==== Non-adiabatic cooling ====
Along with adiabatic cooling that requires a lifting agent, there are three other main mechanisms for lowering the temperature of the air to its dew point, all of which occur near surface level and do not require any lifting of the air. Conductive, radiational, and evaporative cooling can cause condensation at surface level resulting in the formation of fog. Conductive cooling takes place when air from a relatively mild source area comes into contact with a colder surface, as when mild marine air moves across a colder land area. Radiational cooling occurs due to the emission of infrared radiation, either by the air or by the surface underneath. This type of cooling is common during the night when the sky is clear. Evaporative cooling happens when moisture is added to the air through evaporation, which forces the air temperature to cool to its wet-bulb temperature, or sometimes to the point of saturation.
=== Adding moisture to the air ===
There are five main ways water vapor can be added to the air. Increased vapor content can result from wind convergence over water or moist ground into areas of upward motion. Precipitation or virga falling from above also enhances moisture content. Daytime heating causes water to evaporate from the surface of oceans, water bodies or wet land. Transpiration from plants is another typical source of water vapor. Lastly, cool or dry air moving over warmer water will become more humid. As with daytime heating, the addition of moisture to the air increases its heat content and instability and helps set into motion those processes that lead to the formation of cloud or fog.
=== Supersaturation ===
The amount of water that can exist as vapor in a given volume increases with the temperature. When the amount of water vapor is in equilibrium above a flat surface of water the level of vapor pressure is called saturation and the relative humidity is 100%. At this equilibrium there are equal numbers of molecules evaporating from the water as there are condensing back into the water. If the relative humidity becomes greater than 100%, it is called supersaturated. Supersaturation occurs in the absence of condensation nuclei.
Since the saturation vapor pressure is proportional to temperature, cold air has a lower saturation point than warm air. The difference between these values is the basis for the formation of clouds. When saturated air cools, it can no longer contain the same amount of water vapor. If the conditions are right, the excess water will condense out of the air until the lower saturation point is reached. Another possibility is that the water stays in vapor form, even though it is beyond the saturation point, resulting in supersaturation.
Supersaturation of more than 1–2% relative to water is rarely seen in the atmosphere, since cloud condensation nuclei are usually present. Much higher degrees of supersaturation are possible in clean air, and are the basis of the cloud chamber.
There are no instruments to take measurements of supersaturation in clouds.
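A minimal sketch, using a Magnus-type approximation for the saturation vapor pressure over liquid water (the coefficients are an assumed standard parameterization, quoted from memory), shows how a relative humidity above 100% corresponds to supersaturation:
import math

def e_sat_water(t_c):
    # Magnus-type approximation for saturation vapor pressure over liquid water, in hPa;
    # the coefficients are an assumed standard parameterization (T in deg C).
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

t_c = 10.0        # air temperature in deg C (assumed example)
e = 12.5          # actual vapor pressure in hPa (assumed example)
relative_humidity = 100.0 * e / e_sat_water(t_c)
print(relative_humidity)   # slightly above 100%, i.e. supersaturated with respect to water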
=== Supercooling ===
Water droplets commonly remain as liquid water and do not freeze, even well below 0 °C (32 °F). Ice nuclei that may be present in an atmospheric droplet become active for ice formation at specific temperatures in between 0 °C (32 °F) and −38 °C (−36 °F), depending on nucleus geometry and composition. Without ice nuclei, supercooled water droplets (as well as any extremely pure liquid water) can exist down to about −38 °C (−36 °F), at which point spontaneous freezing occurs.
=== Collision-coalescence ===
One theory explaining how the behavior of individual droplets in a cloud leads to the formation of precipitation is the collision-coalescence process. Droplets suspended in the air will interact with each other, either by colliding and bouncing off each other or by combining to form a larger droplet. Eventually, the droplets become large enough that they fall to the earth as precipitation. The collision-coalescence process does not make up a significant part of cloud formation, as water droplets have a relatively high surface tension. In addition, the occurrence of collision-coalescence is closely related to entrainment-mixing processes.
=== Bergeron process ===
The primary mechanism for the formation of ice clouds was discovered by Tor Bergeron. The Bergeron process notes that the saturation vapor pressure of water, or how much water vapor a given volume can contain, depends on what the vapor is interacting with. Specifically, the saturation vapor pressure with respect to ice is lower than the saturation vapor pressure with respect to water. Water vapor may be saturated, at 100% relative humidity, when interacting with a water droplet, but the same amount of water vapor would be supersaturated when interacting with an ice particle. The water vapor will attempt to return to equilibrium, so the extra water vapor will condense into ice on the surface of the particle. These ice particles end up as the nuclei of larger ice crystals. This process only happens at temperatures between 0 °C (32 °F) and −40 °C (−40 °F). Below −40 °C (−40 °F), liquid water will spontaneously nucleate and freeze. The surface tension of the water allows the droplet to stay liquid well below its normal freezing point; when this happens, it is supercooled liquid water. The Bergeron process relies on supercooled liquid water (SLW) interacting with ice nuclei to form larger particles. If there are few ice nuclei compared to the amount of SLW, droplets will be unable to form. A process whereby scientists seed a cloud with artificial ice nuclei to encourage precipitation is known as cloud seeding. This can help cause precipitation in clouds that otherwise may not rain. Cloud seeding adds excess artificial ice nuclei, which shifts the balance so that there are many nuclei compared to the amount of supercooled liquid water. An over-seeded cloud will form many particles, but each will be very small. This can be done as a preventative measure for areas that are at risk for hail storms.
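The driving quantity in the Bergeron process is the gap between the saturation vapor pressures over liquid water and over ice at the same sub-freezing temperature. A minimal sketch, using Magnus-type approximations whose coefficients are assumed standard values quoted from memory:
import math

def e_sat_water(t_c):
    # Assumed Magnus-type approximation over liquid water, in hPa (T in deg C).
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def e_sat_ice(t_c):
    # Assumed Magnus-type approximation over ice, in hPa (T in deg C).
    return 6.112 * math.exp(22.46 * t_c / (272.62 + t_c))

t_c = -15.0
print(e_sat_water(t_c), e_sat_ice(t_c))
# The saturation pressure over ice is lower, so air that is merely saturated with
# respect to liquid droplets is supersaturated with respect to ice crystals,
# which therefore grow at the droplets' expense.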
== Cloud classification ==
Clouds in the troposphere, the atmospheric layer closest to Earth, are classified according to the height at which they are found, and their shape or appearance. There are five forms based on physical structure and process of formation. Cirriform clouds are high, thin and wispy, and are seen most extensively along the leading edges of organized weather disturbances. Stratiform clouds are non-convective and appear as extensive sheet-like layers, ranging from thin to very thick with considerable vertical development. They are mostly the product of large-scale lifting of stable air. Unstable free-convective cumuliform clouds are formed mostly into localized heaps. Stratocumuliform clouds of limited convection show a mix of cumuliform and stratiform characteristics which appear in the form of rolls or ripples. Highly convective cumulonimbiform clouds have complex structures often including cirriform tops and stratocumuliform accessory clouds.
These forms are cross-classified by altitude range or level into ten genus types which can be subdivided into species and lesser types. High-level clouds form at altitudes of 5 to 12 kilometers. All cirriform clouds are classified as high-level and therefore constitute a single cloud genus cirrus. Stratiform and stratocumuliform clouds in the high level of the troposphere have the prefix cirro- added to their names yielding the genera cirrostratus and cirrocumulus. Similar clouds found in the middle level (altitude range 2 to 7 kilometers) carry the prefix alto- resulting in the genus names altostratus and altocumulus.
Low level clouds have no height-related prefixes, so stratiform and stratocumuliform clouds based around 2 kilometres or lower are known simply as stratus and stratocumulus. Small cumulus clouds with little vertical development (species humilis) are also commonly classified as low level.
Cumuliform and cumulonimbiform heaps and deep stratiform layers often occupy at least two tropospheric levels, and the largest or deepest of these can occupy all three levels. They may be classified as low or mid-level, but are also commonly classified or characterized as vertical or multi-level. Nimbostratus clouds are stratiform layers with sufficient vertical extent to produce significant precipitation. Towering cumulus (species congestus), and cumulonimbus may form anywhere from near the surface to intermediate heights of around 3 kilometres. Of the vertically developed clouds, the cumulonimbus type is the tallest and can virtually span the entire troposphere from a few hundred metres above the ground up to the tropopause. It is the cloud responsible for thunderstorms.
Some clouds can form at very high to extreme levels above the troposphere, mostly above the polar regions of Earth. Polar stratospheric clouds are seen but rarely in winter at altitudes of 18 to 30 kilometers, while in summer, noctilucent clouds occasionally form at high latitudes at an altitude range of 76 to 85 kilometers. These polar clouds show some of the same forms as seen lower in the troposphere.
Homospheric types determined by cross-classification of forms and levels.
Homospheric types include the ten tropospheric genera and several additional major types above the troposphere. The cumulus genus includes four species that indicate vertical size and structure.
== Determination of properties ==
Satellites are used to gather data about cloud properties and other information such as cloud amount, height, IR emissivity, visible optical depth, icing, effective particle size for both liquid and ice, and cloud-top temperature and pressure.
=== Detection ===
Data sets regarding cloud properties are gathered using satellites, such as MODIS, POLDER, CALIPSO or ATSR. The instruments measure the radiances of the clouds, from which the relevant parameters can be retrieved. This is usually done by using inverse theory.
The method of detection is based on the fact that clouds tend to appear brighter and colder than the land surface. Because of this, difficulties arise in detecting clouds above bright (highly reflective) surfaces, such as oceans and ice.
=== Parameters ===
The value of a given parameter is more reliable when more satellites measure that parameter. This is because the range of errors and neglected details varies from instrument to instrument. Thus, if the analysed parameter has similar values for different instruments, it is accepted that the true value lies in the range given by the corresponding data sets.
The Global Energy and Water Cycle Experiment uses the following quantities in order to compare data quality from different satellites in order to establish a reliable quantification of the properties of the clouds:
the cloud cover or cloud amount with values between 0 and 1
the cloud temperature at cloud top ranging from 150 to 340 K
the cloud-top pressure, ranging from 1013 to 100 hPa
the cloud height, measured above sea level, ranging from 0 to 20 km
the cloud IR emissivity, with values between 0 and 1, with a global average around 0.7
the effective cloud amount, the cloud amount weighted by the cloud IR emissivity, with a global average of 0.5
the cloud (visible) optical depth, varying within a range of 4 to 10
the cloud water path for the liquid and solid (ice) phases of the cloud particles
the cloud effective particle size for both liquid and ice, ranging from 0 to 200 μm
=== Icing ===
Another vital property is the icing characteristic of various cloud genus types at various altitudes, which can have great impact on the safety of flying. The methodologies used to determine these characteristics include using CloudSat data for the analysis and retrieval of icing conditions, the location of clouds using cloud geometric and reflectivity data, the identification of cloud types using cloud classification data, and finding vertical temperature distribution along the CloudSat track (GFS).
The range of temperatures that can give rise to icing conditions is defined according to cloud types and altitude levels:
Low-level stratocumulus and stratus can cause icing at a temperature range of 0 to -10 °C.
For mid-level altocumulus and altostratus, the range is 0 to -20 °C.
Vertical or multi-level cumulus, cumulonimbus, and nimbostratus create icing at a range of 0 to -25 °C.
High-level cirrus, cirrocumulus, and cirrostratus generally cause no icing because they are made mostly of ice crystals colder than -25 °C.
=== Cohesion and dissolution ===
There are forces throughout the homosphere (which includes the troposphere, stratosphere, and mesosphere) that can impact the structural integrity of a cloud. It has been speculated that as long as the air remains saturated, the natural force of cohesion that holds the molecules of a substance together may act to keep the cloud from breaking up. However, this speculation has a logical flaw in that the water droplets in the cloud are not in contact with each other and therefore do not satisfy the condition required for the intermolecular forces of cohesion to act. Dissolution of the cloud can occur when the process of adiabatic cooling ceases and upward lift of the air is replaced by subsidence. This leads to at least some degree of adiabatic warming of the air, which can result in the cloud droplets or crystals turning back into invisible water vapor. Stronger forces such as wind shear and downdrafts can impact a cloud, but these are largely confined to the troposphere, where nearly all the Earth's weather takes place. A typical cumulus cloud weighs about 500 metric tons, or 1.1 million pounds, the weight of 100 elephants.
== Models ==
There are two main model schemes that can represent cloud physics. The most common is the bulk microphysics model, which uses mean values to describe the cloud properties (e.g. rain water content, ice content); the properties can represent only the first order (concentration) or also the second order (mass).
The second option is to use a bin microphysics scheme, which keeps the moments (mass or concentration) separately for different sizes of particles.
The bulk microphysics models are much faster than the bin models but are less accurate.
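A hedged sketch of the difference in data layout between the two schemes (hypothetical variable names and example values, not taken from any particular model) is:
import numpy as np

# Bulk scheme: one or two moments per hydrometeor category per grid cell.
bulk_rain = {
    "mass_mixing_ratio": 1.2e-3,     # kg of rain water per kg of air (assumed example)
    "number_concentration": 2.0e3,   # drops per kg of air (optional second moment)
}

# Bin scheme: the drop-size distribution is resolved explicitly, one value per size bin.
bin_edges = np.logspace(-6, -3, 33)           # drop radii from 1 micrometre to 1 mm (assumed)
bin_rain_mass = np.zeros(len(bin_edges) - 1)  # mass mixing ratio carried in each bin

# The bulk value corresponds to the sum of the bin scheme's values over all bins.
print(bulk_rain["mass_mixing_ratio"], bin_rain_mass.sum())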
== See also ==
Hurricane dynamics and cloud microphysics
== References == | Wikipedia/Cloud_physics |
The primitive equations are a set of nonlinear partial differential equations that are used to approximate global atmospheric flow and are used in most atmospheric models. They consist of three main sets of balance equations:
A continuity equation: Representing the conservation of mass.
Conservation of momentum: Consisting of a form of the Navier–Stokes equations that describe hydrodynamical flow on the surface of a sphere under the assumption that vertical motion is much smaller than horizontal motion (hydrostasis) and that the fluid layer depth is small compared to the radius of the sphere
A thermal energy equation: Relating the overall temperature of the system to heat sources and sinks
The primitive equations may be linearized to yield Laplace's tidal equations, an eigenvalue problem from which the analytical solution to the latitudinal structure of the flow may be determined.
In general, nearly all forms of the primitive equations relate the five variables u, v, ω, T, W, and their evolution over space and time.
The equations were first written down by Vilhelm Bjerknes.
== Definitions ==
u is the zonal velocity (velocity in the east–west direction tangent to the sphere)
v is the meridional velocity (velocity in the north–south direction tangent to the sphere)
ω is the vertical velocity in isobaric coordinates
T is the temperature
Φ is the geopotential
f is the term corresponding to the Coriolis force, and is equal to 2Ω sin(φ), where Ω is the angular rotation rate of the Earth (2π/24 radians per sidereal hour) and φ is the latitude
R is the gas constant
p is the pressure
ρ is the density
c_p is the specific heat on a constant pressure surface
J is the heat flow per unit time per unit mass
W is the precipitable water
Π is the Exner function
θ is the potential temperature
η is the absolute vorticity
== Forces that cause atmospheric motion ==
Forces that cause atmospheric motion include the pressure gradient force, gravity, and viscous friction. Together, they create the forces that accelerate our atmosphere.
The pressure gradient force causes an acceleration forcing air from regions of high pressure to regions of low pressure. Mathematically, this can be written as:
{\displaystyle {\frac {f}{m}}={\frac {1}{\rho }}{\frac {dp}{dx}}.}
The gravitational force accelerates objects at approximately 9.8 m/s2 directly towards the center of the Earth.
The force due to viscous friction can be approximated as:
{\displaystyle f_{r}={f \over a}{1 \over \rho }\mu \left(\nabla \cdot (\mu \nabla v)+\nabla (\lambda \nabla \cdot v)\right).}
Using Newton's second law, these forces (referenced in the equations above as the accelerations due to these forces) may be summed to produce an equation of motion that describes this system. This equation can be written in the form:
{\displaystyle {\frac {dv}{dt}}=-({\frac {1}{\rho }})\nabla p-g({\frac {r}{r}})+f_{r}}
{\displaystyle g=g_{e}.}
Therefore, to complete the system of equations and obtain 6 equations and 6 variables:
{\displaystyle {\frac {dv}{dt}}=-({\frac {1}{\rho }})\nabla p-g({\frac {r}{r}})+({\frac {1}{\rho }})\left[\nabla \cdot (\mu \nabla v)+\nabla (\lambda \nabla \cdot v)\right]}
{\displaystyle c_{v}{\frac {dT}{dt}}+p{\frac {d\alpha }{dt}}=q+f}
{\displaystyle {\frac {d\rho }{dt}}+\rho \nabla \cdot v=0}
{\displaystyle p=nT.}
where n is the molar number density (amount of substance per unit volume) and T := RT is the temperature expressed as its energy equivalent, in joules per mole.
== Forms of the primitive equations ==
The precise form of the primitive equations depends on the vertical coordinate system chosen, such as pressure coordinates, log pressure coordinates, or sigma coordinates. Furthermore, the velocity, temperature, and geopotential variables may be decomposed into mean and perturbation components using Reynolds decomposition.
=== Pressure coordinate in vertical, Cartesian tangential plane ===
In this form pressure is selected as the vertical coordinate and the horizontal coordinates are written for the Cartesian tangential plane (i.e. a plane tangent to some point on the surface of the Earth). This form does not take the curvature of the Earth into account, but is useful for visualizing some of the physical processes involved in formulating the equations due to its relative simplicity.
Note that the capital D time derivatives are material derivatives. Five equations in five unknowns comprise the system.
the inviscid (frictionless) momentum equations:
{\displaystyle {\frac {Du}{Dt}}-fv=-{\frac {\partial \Phi }{\partial x}}}
{\displaystyle {\frac {Dv}{Dt}}+fu=-{\frac {\partial \Phi }{\partial y}}}
the hydrostatic equation, a special case of the vertical momentum equation in which vertical acceleration is considered negligible:
{\displaystyle 0=-{\frac {\partial \Phi }{\partial p}}-{\frac {RT}{p}}}
the continuity equation, connecting horizontal divergence/convergence to vertical motion under the hydrostatic approximation ({\displaystyle dp=-\rho \,d\Phi }):
{\displaystyle {\frac {\partial u}{\partial x}}+{\frac {\partial v}{\partial y}}+{\frac {\partial \omega }{\partial p}}=0}
and the thermodynamic energy equation, a consequence of the first law of thermodynamics
{\displaystyle {\frac {\partial T}{\partial t}}+u{\frac {\partial T}{\partial x}}+v{\frac {\partial T}{\partial y}}+\omega \left({\frac {\partial T}{\partial p}}-{\frac {RT}{pc_{p}}}\right)={\frac {J}{c_{p}}}}
When a statement of the conservation of water vapor substance is included, these six equations form the basis for any numerical weather prediction scheme.
=== Primitive equations using sigma coordinate system, polar stereographic projection ===
According to the National Weather Service Handbook No. 1 – Facsimile Products, the primitive equations can be simplified into the following equations:
Zonal wind:
{\displaystyle {\frac {\partial u}{\partial t}}=\eta v-{\frac {\partial \Phi }{\partial x}}-c_{p}\theta {\frac {\partial \pi }{\partial x}}-z{\frac {\partial u}{\partial \sigma }}-{\frac {\partial ({\frac {u^{2}+v^{2}}{2}})}{\partial x}}}
Meridional wind:
{\displaystyle {\frac {\partial v}{\partial t}}=-\eta u-{\frac {\partial \Phi }{\partial y}}-c_{p}\theta {\frac {\partial \pi }{\partial y}}-z{\frac {\partial v}{\partial \sigma }}-{\frac {\partial ({\frac {u^{2}+v^{2}}{2}})}{\partial y}}}
Temperature:
{\displaystyle {\frac {dT}{dt}}={\frac {\partial T}{\partial t}}+u{\frac {\partial T}{\partial x}}+v{\frac {\partial T}{\partial y}}+w{\frac {\partial T}{\partial z}}}
The first term is equal to the change in temperature due to incoming solar radiation and outgoing longwave radiation, which changes with time throughout the day. The second, third, and fourth terms are due to advection. Additionally, the variable T with subscript is the change in temperature on that plane. Each T is actually different and related to its respective plane. This is divided by the distance between grid points to get the change in temperature with the change in distance. When multiplied by the wind velocity on that plane, the units kelvins per meter and meters per second give kelvins per second. The sum of all the changes in temperature due to motions in the x, y, and z directions give the total change in temperature with time.
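A minimal sketch of the bookkeeping described above (hypothetical grid values; only the horizontal advection terms are estimated, with a simple first-order upwind difference on an assumed uniform grid, and the advection is treated as a contribution to the local temperature tendency):
dx = 100_000.0         # grid spacing in metres (assumed)
u, v = 10.0, 5.0       # wind components in m/s (assumed)

T_here = 280.0         # temperature at the grid point, in kelvin (assumed)
T_west = 281.0         # neighbour upwind in x (u > 0)
T_south = 282.0        # neighbour upwind in y (v > 0)

# Advective contribution to the local temperature tendency, -u dT/dx - v dT/dy, in K/s.
dTdt_advection = -u * (T_here - T_west) / dx - v * (T_here - T_south) / dx
print(dTdt_advection)           # K per second
print(dTdt_advection * 3600.0)  # ~0.72 K of warming per hour in this example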
Precipitable water:
{\displaystyle {\frac {dW}{dt}}=u{\frac {\partial W}{\partial x}}+v{\frac {\partial W}{\partial y}}+w{\frac {\partial W}{\partial z}}}
This equation and notation works in much the same way as the temperature equation. This equation describes the motion of water from one place to another at a point without taking into account water that changes form. Inside a given system, the total change in water with time is zero. However, concentrations are allowed to move with the wind.
Pressure thickness:
{\displaystyle {\frac {\partial }{\partial t}}{\frac {\partial p}{\partial \sigma }}=u{\frac {\partial }{\partial x}}{\frac {\partial p}{\partial \sigma }}+v{\frac {\partial }{\partial y}}{\frac {\partial p}{\partial \sigma }}+w{\frac {\partial }{\partial z}}{\frac {\partial p}{\partial \sigma }}}
These simplifications make it much easier to understand what is happening in the model. Things like the temperature (potential temperature), precipitable water, and to an extent the pressure thickness simply move from one spot on the grid to another with the wind. The wind is forecast slightly differently. It uses geopotential, specific heat, the Exner function π, and change in sigma coordinate.
== Solution to the linearized primitive equations ==
The analytic solution to the linearized primitive equations involves a sinusoidal oscillation in time and longitude, modulated by coefficients related to height and latitude.
{\displaystyle {\begin{Bmatrix}u,v,\Phi \end{Bmatrix}}={\begin{Bmatrix}{\hat {u}},{\hat {v}},{\hat {\Phi }}\end{Bmatrix}}e^{i(s\lambda +\sigma t)}}
where s and σ are the zonal wavenumber and angular frequency, respectively. The solution represents atmospheric waves and tides.
When the coefficients are separated into their height and latitude components, the height dependence takes the form of propagating or evanescent waves (depending on conditions), while the latitude dependence is given by the Hough functions.
This analytic solution is only possible when the primitive equations are linearized and simplified. Unfortunately many of these simplifications (i.e. no dissipation, isothermal atmosphere) do not correspond to conditions in the actual atmosphere. As a result, a numerical solution which takes these factors into account is often calculated using general circulation models and climate models.
== See also ==
Barometric formula
Climate model
Euler equations
Fluid dynamics
General circulation model
Numerical weather prediction
== References ==
Beniston, Martin. From Turbulence to Climate: Numerical Investigations of the Atmosphere with a Hierarchy of Models. Berlin: Springer, 1998. ISBN 3-540-63495-9
Firth, Robert. Mesoscale and Microscale Meteorological Model Grid Construction and Accuracy. LSMSA, 2006.
Thompson, Philip. Numerical Weather Analysis and Prediction. New York: The Macmillan Company, 1961.
Pielke, Roger A. Mesoscale Meteorological Modeling. Orlando: Academic Press, Inc., 1984. ISBN 0-12-554820-6
U.S. Department of Commerce, National Oceanic and Atmospheric Administration, National Weather Service. National Weather Service Handbook No. 1 – Facsimile Products. Washington, DC: Department of Commerce, 1979.
== External links ==
National Weather Service – NCSU
Collaborative Research and Training Site, Review of the Primitive Equations. | Wikipedia/Primitive_equations |
The vorticity equation of fluid dynamics describes the evolution of the vorticity ω of a particle of a fluid as it moves with its flow; that is, the local rotation of the fluid (in terms of vector calculus this is the curl of the flow velocity). The governing equation is
{\displaystyle {\frac {D{\boldsymbol {\omega }}}{Dt}}=({\boldsymbol {\omega }}\cdot \nabla )\mathbf {u} -{\boldsymbol {\omega }}(\nabla \cdot \mathbf {u} )+{\frac {1}{\rho ^{2}}}\nabla \rho \times \nabla p+\nabla \times \left({\frac {\nabla \cdot \tau }{\rho }}\right)+\nabla \times \mathbf {B} }
where D/Dt is the material derivative operator, u is the flow velocity, ρ is the local fluid density, p is the local pressure, τ is the viscous stress tensor and B represents the sum of the external body forces. The first source term on the right hand side represents vortex stretching.
The equation is valid in the absence of any concentrated torques and line forces for a compressible, Newtonian fluid. In the case of incompressible flow (i.e., low Mach number) and isotropic fluids, with conservative body forces, the equation simplifies to the vorticity transport equation:
{\displaystyle {\frac {D{\boldsymbol {\omega }}}{Dt}}=\left({\boldsymbol {\omega }}\cdot \nabla \right)\mathbf {u} +\nu \nabla ^{2}{\boldsymbol {\omega }}}
where ν is the kinematic viscosity and ∇² is the Laplace operator. Under the further assumption of two-dimensional flow, the equation simplifies to:
{\displaystyle {\frac {D{\boldsymbol {\omega }}}{Dt}}=\nu \nabla ^{2}{\boldsymbol {\omega }}}
== Physical interpretation ==
The term Dω/Dt on the left-hand side is the material derivative of the vorticity vector ω. It describes the rate of change of vorticity of the moving fluid particle. This change can be attributed to unsteadiness in the flow (∂ω/∂t, the unsteady term) or due to the motion of the fluid particle as it moves from one point to another ((u ∙ ∇)ω, the convection term).
The term (ω ∙ ∇) u on the right-hand side describes the stretching or tilting of vorticity due to the flow velocity gradients. Note that (ω ∙ ∇) u is a vector quantity, as ω ∙ ∇ is a scalar differential operator, while ∇u is a nine-element tensor quantity.
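As a concrete illustration of vorticity being the curl of the velocity field, the following minimal sketch computes only the vertical component ζ = ∂v/∂x − ∂u/∂y of a hypothetical two-dimensional field with centered finite differences; the solid-body rotation field is an assumed example, not from the text.
import numpy as np

n, L = 64, 1.0
x = np.linspace(-L, L, n)
X, Y = np.meshgrid(x, x, indexing="xy")   # X varies along axis 1, Y along axis 0
omega0 = 2.0                              # assumed rotation rate, 1/s
u = -omega0 * Y                           # solid-body rotation: u = -omega0 * y
v = omega0 * X                            #                      v =  omega0 * x

dx = x[1] - x[0]
dvdx = np.gradient(v, dx, axis=1)         # dv/dx
dudy = np.gradient(u, dx, axis=0)         # du/dy
zeta = dvdx - dudy                        # vertical component of the curl of u

print(zeta.mean())                        # ~= 2 * omega0 = 4.0 for solid-body rotation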
The term ω(∇ ∙ u) describes stretching of vorticity due to flow compressibility. It follows from the Navier-Stokes equation for continuity, namely
{\displaystyle {\begin{aligned}{\frac {\partial \rho }{\partial t}}+\nabla \cdot \left(\rho \mathbf {u} \right)&=0\\\Longleftrightarrow \nabla \cdot \mathbf {u} &=-{\frac {1}{\rho }}{\frac {d\rho }{dt}}={\frac {1}{v}}{\frac {dv}{dt}}\end{aligned}}}
where v = 1/ρ is the specific volume of the fluid element. One can think of ∇ ∙ u as a measure of flow compressibility. Sometimes the negative sign is included in the term.
The term 1/ρ2∇ρ × ∇p is the baroclinic term. It accounts for the changes in the vorticity due to the intersection of density and pressure surfaces.
The term ∇ × (∇ ∙ τ/ρ), accounts for the diffusion of vorticity due to the viscous effects.
The term ∇ × B provides for changes due to external body forces. These are forces that are spread over a three-dimensional region of the fluid, such as gravity or electromagnetic forces (as opposed to forces that act only over a surface, like drag on a wall, or along a line, like surface tension around a meniscus).
=== Simplifications ===
In case of conservative body forces, ∇ × B = 0.
For a barotropic fluid, ∇ρ × ∇p = 0. This is also true for a constant density fluid (including incompressible fluid) where ∇ρ = 0. Note that this is not the same as an incompressible flow, for which the barotropic term cannot be neglected.
This distinction rests on the continuity equation (conservation of mass),
{\displaystyle {\frac {D\rho }{Dt}}+\rho (\nabla \cdot \mathbf {u} )={\frac {\partial \rho }{\partial t}}+\mathbf {u} \cdot \nabla \rho +\rho (\nabla \cdot \mathbf {u} )=0}
and on the difference between assuming that ρ is constant (the 'incompressible fluid' option above) and assuming that
{\displaystyle \nabla \cdot \mathbf {u} =0}
(the 'incompressible flow' option above). With the first assumption, the continuity equation implies (for non-zero density) that
{\displaystyle \nabla \cdot \mathbf {u} =0}
whereas the second assumption does not necessarily imply that ρ is constant. The second assumption only requires that the local time rate of change of the density be balanced by advection of the density gradient, as in
{\displaystyle {\frac {\partial \rho }{\partial t}}=-\mathbf {u} \cdot \nabla \rho }
One can make sense of this by considering the ideal gas law p = ρRT (which is valid if the Reynolds number is large enough that viscous friction becomes unimportant): even for an adiabatic, chemically homogeneous fluid, the density can vary when the pressure changes, for example in accordance with Bernoulli's principle.
For inviscid fluids, the viscosity tensor τ is zero.
Thus for an inviscid, barotropic fluid with conservative body forces, the vorticity equation simplifies to
{\displaystyle {\frac {D}{Dt}}\left({\frac {\boldsymbol {\omega }}{\rho }}\right)=\left({\frac {\boldsymbol {\omega }}{\rho }}\right)\cdot \nabla \mathbf {u} }
Alternately, in case of incompressible, inviscid fluid with conservative body forces,
{\displaystyle {\frac {D{\boldsymbol {\omega }}}{Dt}}={\frac {\partial {\boldsymbol {\omega }}}{\partial t}}+(\mathbf {u} \cdot \nabla ){\boldsymbol {\omega }}=({\boldsymbol {\omega }}\cdot \nabla )\mathbf {u} }
For a brief review of additional cases and simplifications, and for the vorticity equation as it is used in turbulence theory and in the context of flows in the oceans and atmosphere, see the cited references.
== Derivation ==
The vorticity equation can be derived from the Navier–Stokes equation for the conservation of angular momentum. In the absence of any concentrated torques and line forces, one obtains:
{\displaystyle {\frac {D\mathbf {u} }{Dt}}={\frac {\partial \mathbf {u} }{\partial t}}+\left(\mathbf {u} \cdot \nabla \right)\mathbf {u} =-{\frac {1}{\rho }}\nabla p+{\frac {\nabla \cdot \tau }{\rho }}+{\frac {\mathbf {B} }{\rho }}}
Now, vorticity is defined as the curl of the flow velocity vector; taking the curl of momentum equation yields the desired equation. The following identities are useful in derivation of the equation:
{\displaystyle {\begin{aligned}{\boldsymbol {\omega }}&=\nabla \times \mathbf {u} \\\left(\mathbf {u} \cdot \nabla \right)\mathbf {u} &=\nabla \left({\frac {1}{2}}\mathbf {u} \cdot \mathbf {u} \right)-\mathbf {u} \times {\boldsymbol {\omega }}\\\nabla \times \left(\mathbf {u} \times {\boldsymbol {\omega }}\right)&=-{\boldsymbol {\omega }}\left(\nabla \cdot \mathbf {u} \right)+\left({\boldsymbol {\omega }}\cdot \nabla \right)\mathbf {u} -\left(\mathbf {u} \cdot \nabla \right){\boldsymbol {\omega }}\\[4pt]\nabla \cdot {\boldsymbol {\omega }}&=0\\[4pt]\nabla \times \nabla \phi &=0\end{aligned}}}
where φ is any scalar field.
== Tensor notation ==
The vorticity equation can be expressed in tensor notation using Einstein's summation convention and the Levi-Civita symbol eijk:
{\displaystyle {\begin{aligned}{\frac {D\omega _{i}}{Dt}}&={\frac {\partial \omega _{i}}{\partial t}}+v_{j}{\frac {\partial \omega _{i}}{\partial x_{j}}}\\&=\omega _{j}{\frac {\partial v_{i}}{\partial x_{j}}}-\omega _{i}{\frac {\partial v_{j}}{\partial x_{j}}}+e_{ijk}{\frac {1}{\rho ^{2}}}{\frac {\partial \rho }{\partial x_{j}}}{\frac {\partial p}{\partial x_{k}}}+e_{ijk}{\frac {\partial }{\partial x_{j}}}\left({\frac {1}{\rho }}{\frac {\partial \tau _{km}}{\partial x_{m}}}\right)+e_{ijk}{\frac {\partial B_{k}}{\partial x_{j}}}\end{aligned}}}
== In specific sciences ==
=== Atmospheric sciences ===
In the atmospheric sciences, the vorticity equation can be stated in terms of the absolute vorticity of air with respect to an inertial frame, or of the vorticity with respect to the rotation of the Earth. The absolute version is
{\displaystyle {\frac {d\eta }{dt}}=-\eta \nabla _{\text{h}}\cdot \mathbf {v} _{\text{h}}-\left({\frac {\partial w}{\partial x}}{\frac {\partial v}{\partial z}}-{\frac {\partial w}{\partial y}}{\frac {\partial u}{\partial z}}\right)-{\frac {1}{\rho ^{2}}}\mathbf {k} \cdot \left(\nabla _{\text{h}}p\times \nabla _{\text{h}}\rho \right)}
Here, η is the polar (z) component of the vorticity, ρ is the atmospheric density, u, v, and w are the components of wind velocity, and ∇h is the 2-dimensional (i.e. horizontal-component-only) del.
== See also ==
Vorticity
Barotropic vorticity equation
Vortex stretching
Burgers vortex
== References ==
== Further reading ==
Manna, Utpal; Sritharan, S. S. (2007). "Lyapunov Functionals and Local Dissipativity for the Vorticity Equation in Lp and Besov spaces". Differential and Integral Equations. 20 (5): 581–598. arXiv:0802.2898. doi:10.57262/die/1356039440. S2CID 50701138.
Barbu, V.; Sritharan, S. S. (2000). "M-Accretive Quantization of the Vorticity Equation" (PDF). In Balakrishnan, A. V. (ed.). Semi-Groups of Operators: Theory and Applications. Boston: Birkhauser. pp. 296–303.
Krigel, A. M. (1983). "Vortex evolution". Geophysical & Astrophysical Fluid Dynamics. 24 (3): 213–223. Bibcode:1983GApFD..24..213K. doi:10.1080/03091928308209066. | Wikipedia/Vorticity_equation |
The barotropic vorticity equation assumes the atmosphere is nearly barotropic, which means that the direction and speed of the geostrophic wind are independent of height. In other words, there is no vertical wind shear of the geostrophic wind. It also implies that thickness contours (a proxy for temperature) are parallel to upper level height contours. In this type of atmosphere, high and low pressure areas are centers of warm and cold temperature anomalies. Warm-core highs (such as the subtropical ridge and the Bermuda-Azores high) and cold-core lows have strengthening winds with height, with the reverse true for cold-core highs (shallow Arctic highs) and warm-core lows (such as tropical cyclones).
A simplified form of the vorticity equation for an inviscid, divergence-free flow (solenoidal velocity field), the barotropic vorticity equation can simply be stated as
{\displaystyle {\frac {D\eta }{Dt}}=0,}
where D/Dt is the material derivative and
{\displaystyle \eta =\zeta +f}
is the absolute vorticity, with ζ being the relative vorticity, defined as the vertical component of the curl of the fluid velocity, and f the Coriolis parameter
{\displaystyle f=2\Omega \sin \varphi ,}
where Ω is the angular frequency of the planet's rotation (Ω = 0.7272×10−4 s−1 for the earth) and φ is latitude.
In terms of relative vorticity, the equation can be rewritten as
{\displaystyle {\frac {D\zeta }{Dt}}=-v\beta ,}
where β = ∂f/∂y is the variation of the Coriolis parameter with distance y in the north–south direction and v is the component of velocity in this direction.
In 1950, Charney, Fjørtoft, and von Neumann integrated this equation (with an added diffusion term on the right-hand side) on a computer for the first time, using an observed field of 500 hPa geopotential height for the first timestep. This was one of the first successful instances of numerical weather prediction.
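In that spirit, the following minimal sketch takes a single forward-Euler step of the relative-vorticity form above at one grid point, keeping only the planetary β term and ignoring advection of ζ; all numerical inputs other than Ω are assumed example values, and this is an illustration, not the 1950 integration itself.
import math

Omega = 0.7272e-4            # planetary rotation rate, 1/s (value given in the text)
a = 6.371e6                  # Earth's mean radius in metres (assumed)
lat = math.radians(45.0)     # latitude (assumed example)

beta = 2.0 * Omega * math.cos(lat) / a    # beta = df/dy = 2 * Omega * cos(lat) / a
v = 10.0                     # northward wind component in m/s (assumed)
zeta = 1.0e-5                # initial relative vorticity, 1/s (assumed)

dt = 3600.0                  # one-hour timestep (assumed)
zeta_new = zeta + dt * (-v * beta)        # forward-Euler step of D(zeta)/Dt = -v * beta
print(beta, zeta_new)        # beta ~1.6e-11 per metre per second; zeta decreases for northward flow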
== See also ==
Barotropic
== References ==
== External links ==
http://www.met.reading.ac.uk/~ross/Science/BarVor.html | Wikipedia/Barotropic_vorticity_equation |
In fluid mechanics, the pressure-gradient force is the force that results when there is a difference in pressure across a surface. In general, a pressure is a force per unit area across a surface. A difference in pressure across a surface then implies a difference in force, which can result in an acceleration according to Newton's second law of motion, if there is no additional force to balance it. The resulting force is always directed from the region of higher-pressure to the region of lower-pressure. When a fluid is in an equilibrium state (i.e. there are no net forces, and no acceleration), the system is referred to as being in hydrostatic equilibrium. In the case of atmospheres, the pressure-gradient force is balanced by the gravitational force, maintaining hydrostatic equilibrium. In Earth's atmosphere, for example, air pressure decreases at altitudes above Earth's surface, thus providing a pressure-gradient force which counteracts the force of gravity on the atmosphere.
== Magnus effect ==
The Magnus effect is an observable phenomenon that is commonly associated with a spinning object moving through a fluid. The path of the spinning object is deflected in a manner that is not present when the object is not spinning. The deflection can be explained by the difference in pressure of the fluid on opposite sides of the spinning object. The Magnus effect is dependent on the speed of rotation.
== Formalism ==
Consider a cubic parcel of fluid with a density ρ, a height dz, and a surface area dA. The mass of the parcel can be expressed as m = ρ dA dz. Using Newton's second law, F = ma, we can then examine a pressure difference dP (assumed to be only in the z-direction) to find the resulting force, F = −dP dA = ρa dA dz.
The acceleration resulting from the pressure gradient is then
{\displaystyle a=-{\frac {1}{\rho }}{\frac {dP}{dz}}.}
The effects of the pressure gradient are usually expressed in this way, in terms of an acceleration, instead of in terms of a force. We can express the acceleration more precisely, for a general pressure P, as
{\displaystyle {\vec {a}}=-{\frac {1}{\rho }}{\vec {\nabla }}P.}
The direction of the resulting force (acceleration) is thus in the opposite direction of the most rapid increase of pressure.
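For a sense of scale, a minimal sketch evaluates the horizontal acceleration for assumed synoptic-scale values: a 1 hPa pressure difference over 100 km in air of density 1.2 kg/m³ (all numbers illustrative).
dP = 100.0          # pressure difference in pascals (1 hPa, assumed)
dx = 100_000.0      # horizontal distance in metres (100 km, assumed)
rho = 1.2           # air density in kg per cubic metre (assumed)

a = -(1.0 / rho) * (dP / dx)
print(a)            # ~ -8.3e-4 m/s^2, directed from high toward low pressure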
== References ==
Roland B. Stull (2000) Meteorology for Scientists and Engineers, Second Edition, Ed. Brooks/Cole, ISBN 0-534-37214-7. | Wikipedia/Pressure-gradient_force |
Positron emission tomography (PET) is a functional imaging technique that uses radioactive substances known as radiotracers to visualize and measure changes in metabolic processes, and in other physiological activities including blood flow, regional chemical composition, and absorption.
Different tracers are used for various imaging purposes, depending on the target process within the body, such as:
Fluorodeoxyglucose ([18F]FDG or FDG) is commonly used to detect cancer;
[18F]Sodium fluoride (Na18F) is widely used for detecting bone formation;
Oxygen-15 (15O) is sometimes used to measure blood flow.
PET is a common imaging technique, a medical scintillography technique used in nuclear medicine. A radiopharmaceutical—a radioisotope attached to a drug—is injected into the body as a tracer. When the radiopharmaceutical undergoes beta plus decay, a positron is emitted, and when the positron interacts with an ordinary electron, the two particles annihilate and two gamma rays are emitted in opposite directions. These gamma rays are detected by two gamma cameras to form a three-dimensional image.
PET scanners can incorporate a computed tomography (CT) scanner and are known as PET–CT scanners. PET images can be reconstructed with the aid of a CT scan performed on the same scanner during the same session.
One of the disadvantages of a PET scanner is its high initial cost and ongoing operating costs.
== Uses ==
PET is both a medical and research tool used in pre-clinical and clinical settings. It is used heavily in the imaging of tumors and the search for metastases within the field of clinical oncology, and for the clinical diagnosis of certain diffuse brain diseases such as those causing various types of dementias. PET is valued as a research tool for enhancing knowledge of the normal human brain and of heart function, and for supporting drug development. PET is also used in pre-clinical studies using animals: it allows repeated investigations into the same subjects over time, where subjects can act as their own control, and substantially reduces the number of animals required for a given study. This approach allows research studies to reduce the sample size needed while increasing the statistical quality of the results.
Physiological processes lead to anatomical changes in the body. Since PET is capable of detecting biochemical processes as well as expression of some proteins, PET can provide molecular-level information much before any anatomic changes are visible. PET scanning does this by using radiolabelled molecular probes that have different rates of uptake depending on the type and function of tissue involved. Regional tracer uptake in various anatomic structures can be visualized and relatively quantified in terms of injected positron emitter within a PET scan.
PET imaging is best performed using a dedicated PET scanner. It is also possible to acquire PET images using a conventional dual-head gamma camera fitted with a coincidence detector. The quality of gamma-camera PET imaging is lower, and the scans take longer to acquire. However, this method allows a low-cost on-site solution to institutions with low PET scanning demand. An alternative would be to refer these patients to another center or relying on a visit by a mobile scanner.
Alternative methods of medical imaging include single-photon emission computed tomography (SPECT), computed tomography (CT), magnetic resonance imaging (MRI) and functional magnetic resonance imaging (fMRI), and ultrasound. SPECT is an imaging technique similar to PET that uses radioligands to detect molecules in the body. SPECT is less expensive than PET but provides inferior image quality.
=== Oncology ===
PET scanning with the radiotracer [18F]fluorodeoxyglucose (FDG) is widely used in clinical oncology. FDG is a glucose analog that is taken up by glucose-using cells and phosphorylated by hexokinase (whose mitochondrial form is significantly elevated in rapidly growing malignant tumors). Metabolic trapping of the radioactive glucose molecule allows the PET scan to be utilized. The concentrations of imaged FDG tracer indicate tissue metabolic activity as it corresponds to the regional glucose uptake. FDG is used to explore the possibility of cancer spreading to other body sites (cancer metastasis). These FDG PET scans for detecting cancer metastasis are the most common in standard medical care (representing 90% of current scans). The same tracer may also be used for the diagnosis of types of dementia. Less often, other radioactive tracers, usually but not always labelled with fluorine-18 (18F), are used to image the tissue concentration of different kinds of molecules of interest inside the body.
A typical dose of FDG used in an oncological scan has an effective radiation dose of 7.6 mSv. Because the hydroxy group that is replaced by fluorine-18 to generate FDG is required for the next step in glucose metabolism in all cells, no further reactions occur in FDG. Furthermore, most tissues (with the notable exception of liver and kidneys) cannot remove the phosphate added by hexokinase. This means that FDG is trapped in any cell that takes it up until it decays, since phosphorylated sugars, due to their ionic charge, cannot exit from the cell. This results in intense radiolabeling of tissues with high glucose uptake, such as the normal brain, liver, kidneys, and most cancers, which have a higher glucose uptake than most normal tissue due to the Warburg effect. As a result, FDG-PET can be used for diagnosis, staging, and monitoring treatment of cancers, particularly in Hodgkin lymphoma, non-Hodgkin lymphoma, and lung cancer.
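In clinical reporting, FDG uptake in a lesion is often summarized with the standardized uptake value (SUV). The following is a minimal sketch of the usual body-weight-normalized definition; the input values are assumed examples, not patient data, and the decay correction of the injected dose is included only for illustration.
import math

def suv_bw(tissue_kbq_per_ml, injected_mbq, weight_kg, minutes_since_injection,
           half_life_min=109.8):   # approximate physical half-life of fluorine-18
    # Decay-correct the injected dose to the time of the measurement.
    decayed_dose_mbq = injected_mbq * math.exp(
        -math.log(2) * minutes_since_injection / half_life_min)
    # SUV = tissue concentration / (dose / body weight); with kBq/mL and MBq/kg the
    # units cancel for tissue of roughly unit density (1 MBq/kg = 1 kBq/g ~ 1 kBq/mL).
    return tissue_kbq_per_ml / (decayed_dose_mbq / weight_kg)

print(suv_bw(tissue_kbq_per_ml=12.0, injected_mbq=370.0, weight_kg=70.0,
             minutes_since_injection=60.0))   # roughly 3 to 4 in this example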
A 2020 review of research on the use of PET for Hodgkin lymphoma found evidence that negative findings in interim PET scans are linked to higher overall survival and progression-free survival; however, the certainty of the available evidence was moderate for survival, and very low for progression-free survival.
A few other isotopes and radiotracers are slowly being introduced into oncology for specific purposes. For example, 11C-labelled metomidate (11C-metomidate) has been used to detect tumors of adrenocortical origin. Also, fluorodopa (FDOPA) PET/CT (also called F-18-DOPA PET/CT) has proven to be a more sensitive alternative to the iobenguane (MIBG) scan for finding and localizing pheochromocytoma.
=== Neuroimaging ===
==== Neurology ====
PET imaging with oxygen-15 indirectly measures blood flow to the brain. In this method, increased radioactivity signal indicates increased blood flow which is assumed to correlate with increased brain activity. Because of its two-minute half-life, oxygen-15 must be piped directly from a medical cyclotron for such uses, which is difficult.
PET imaging with FDG takes advantage of the fact that the brain is normally a rapid user of glucose. Standard FDG PET of the brain measures regional glucose use and can be used in neuropathological diagnosis.
Brain pathologies such as Alzheimer's disease (AD) greatly decrease brain metabolism of both glucose and oxygen in tandem. Therefore FDG PET of the brain may also be used to successfully differentiate Alzheimer's disease from other dementing processes, and also to make early diagnoses of Alzheimer's disease. The advantage of FDG PET for these uses is its much wider availability. In addition, some other fluorine-18 based radioactive tracers can be used to detect amyloid-beta plaques, a potential biomarker for Alzheimer's in the brain. These include florbetapir, flutemetamol, Pittsburgh compound B (PiB) and florbetaben.
PET imaging with FDG can also be used for localization of "seizure focus". A seizure focus will appear as hypometabolic during an interictal scan. Several radiotracers (i.e. radioligands) have been developed for PET that are ligands for specific neuroreceptor subtypes such as [11C]raclopride, [18F]fallypride and [18F]desmethoxyfallypride for dopamine D2/D3 receptors; [11C]McN5652 and [11C]DASB for serotonin transporters; [18F]mefway for serotonin 5HT1A receptors; and [18F]nifene for nicotinic acetylcholine receptors or enzyme substrates (e.g. 6-FDOPA for the AADC enzyme). These agents permit the visualization of neuroreceptor pools in the context of a plurality of neuropsychiatric and neurologic illnesses.
PET may also be used for the diagnosis of hippocampal sclerosis, which causes epilepsy. FDG, and the less common tracers flumazenil and MPPF have been explored for this purpose. If the sclerosis is unilateral (right hippocampus or left hippocampus), FDG uptake can be compared with the healthy side. Even if the diagnosis is difficult with MRI, it may be diagnosed with PET.
The development of a number of novel probes for non-invasive, in-vivo PET imaging of neuroaggregate in human brain has brought amyloid imaging close to clinical use. The earliest amyloid imaging probes included [18F]FDDNP, developed at the University of California, Los Angeles, and Pittsburgh compound B (PiB), developed at the University of Pittsburgh. These probes permit the visualization of amyloid plaques in the brains of Alzheimer's patients and could assist clinicians in making a positive clinical diagnosis of AD pre-mortem and aid in the development of novel anti-amyloid therapies. [11C]polymethylpentene (PMP) is a novel radiopharmaceutical used in PET imaging to determine the activity of the acetylcholinergic neurotransmitter system by acting as a substrate for acetylcholinesterase. Post-mortem examination of AD patients has shown decreased levels of acetylcholinesterase. [11C]PMP is used to map the acetylcholinesterase activity in the brain, which could allow for premortem diagnoses of AD and help to monitor AD treatments. Avid Radiopharmaceuticals has developed and commercialized a compound called florbetapir that uses the longer-lasting radionuclide fluorine-18 to detect amyloid plaques using PET scans.
==== Neuropsychology or cognitive neuroscience ====
To examine links between specific psychological processes or disorders and brain activity.
==== Psychiatry and neuropsychopharmacology ====
Numerous compounds that bind selectively to neuroreceptors of interest in biological psychiatry have been radiolabeled with C-11 or F-18. Radioligands that bind to dopamine receptors (D1, D2, reuptake transporter), serotonin receptors (5HT1A, 5HT2A, reuptake transporter), opioid receptors (mu and kappa), cholinergic receptors (nicotinic and muscarinic) and other sites have been used successfully in studies with human subjects. Studies have been performed examining the state of these receptors in patients compared to healthy controls in schizophrenia, substance abuse, mood disorders and other psychiatric conditions.
==== Stereotactic surgery and radiosurgery ====
PET can also be used in image guided surgery for the treatment of intracranial tumors, arteriovenous malformations and other surgically treatable conditions.
=== Cardiology ===
Cardiology, atherosclerosis and vascular disease study: FDG PET can help in identifying hibernating myocardium. However, the cost-effectiveness of PET for this role versus SPECT is unclear. FDG PET imaging of atherosclerosis to detect patients at risk of stroke is also feasible. Also, it can help test the efficacy of novel anti-atherosclerosis therapies.
=== Infectious diseases ===
Imaging infections with molecular imaging technologies can improve diagnosis and treatment follow-up. Clinically, PET has been widely used to image bacterial infections using FDG to identify the infection-associated inflammatory response. Three PET contrast agents that have been developed to image bacterial infections in vivo are [18F]maltose, [18F]maltohexaose, and [18F]2-fluorodeoxysorbitol (FDS). FDS has the added benefit of being able to target only Enterobacteriaceae.
=== Bio-distribution studies ===
In pre-clinical trials, a new drug can be radiolabeled and injected into animals. Such scans are referred to as biodistribution studies. The information regarding drug uptake, retention and elimination over time can be obtained quickly and cost-effectively compared to the older technique of killing and dissecting the animals. Commonly, drug occupancy at a purported site of action can be inferred indirectly by competition studies between the unlabeled drug and radiolabeled compounds known to bind with specificity to the site. A single radioligand can be used this way to test many potential drug candidates for the same target. A related technique involves scanning with radioligands that compete with an endogenous (naturally occurring) substance at a given receptor to demonstrate that a drug causes the release of the natural substance.
=== Small animal imaging ===
A miniature animal PET has been constructed that is small enough for a fully conscious rat to be scanned. This RatCAP (rat conscious animal PET) allows animals to be scanned without the confounding effects of anesthesia. PET scanners designed specifically for imaging rodents, often referred to as microPET, as well as scanners for small primates, are marketed for academic and pharmaceutical research. The scanners are based on microminiature scintillators and amplified avalanche photodiodes (APDs) through a system that uses single-chip silicon photomultipliers.
In 2018 the UC Davis School of Veterinary Medicine became the first veterinary center to employ a small clinical PET scanner for clinical (rather than research) animal diagnosis. Because of cost as well as the marginal utility of detecting cancer metastases in companion animals (the primary use of this modality), veterinary PET scanning is expected to be rarely available in the immediate future.
=== Musculo-skeletal imaging ===
PET imaging has been used for imaging muscles and bones. FDG is the most commonly used tracer for imaging muscles, and NaF-F18 is the most widely used tracer for imaging bones.
==== Muscles ====
PET is a feasible technique for studying skeletal muscles during exercise. Also, PET can provide muscle activation data about deep-lying muscles (such as the vastus intermedius and the gluteus minimus), unlike techniques such as electromyography, which can be used only on superficial muscles directly under the skin. However, a disadvantage is that PET provides no timing information about muscle activation because it has to be measured after the exercise is completed. This is due to the time it takes for FDG to accumulate in the activated muscles.
==== Bones ====
PET imaging with [18F]sodium fluoride has been in use for 60 years for measuring regional bone metabolism and blood flow using static and dynamic scans. Researchers have recently started using [18F]sodium fluoride to study bone metastasis as well.
== Safety ==
PET scanning is non-invasive, but it does involve exposure to ionizing radiation. FDG, which is now the standard radiotracer used for PET neuroimaging and cancer patient management, has an effective radiation dose of 14 mSv.
The amount of radiation in FDG is similar to the effective dose of spending one year in the American city of Denver, Colorado (12.4 mSv/year). For comparison, radiation dosages for other medical procedures range from 0.02 mSv for a chest X-ray to 6.5–8 mSv for a CT scan of the chest. Average civil aircrews are exposed to 3 mSv/year, and the whole-body occupational dose limit for nuclear energy workers in the US is 50 mSv/year. For scale, see Orders of magnitude (radiation).
For PET–CT scanning, the radiation exposure may be substantial—around 23–26 mSv (for a 70 kg person—dose is likely to be higher for higher body weights).
== Operation ==
=== Radionuclides and radiotracers ===
Radionuclides are incorporated either into compounds normally used by the body such as glucose (or glucose analogues), water, or ammonia, or into molecules that bind to receptors or other sites of drug action. Such labelled compounds are known as radiotracers. PET technology can be used to trace the biologic pathway of any compound in living humans (and many other species as well), provided it can be radiolabeled with a PET isotope. Thus, the specific processes that can be probed with PET are virtually limitless, and radiotracers for new target molecules and processes are continuing to be synthesized. As of this writing there are already dozens in clinical use and hundreds applied in research. As of 2020, by far the most commonly used radiotracer in clinical PET scanning is the carbohydrate derivative FDG. This radiotracer is used in essentially all scans for oncology and most scans in neurology, and thus makes up the large majority (>95%) of radiotracer used in PET and PET–CT scanning.
Due to the short half-lives of most positron-emitting radioisotopes, the radiotracers have traditionally been produced using a cyclotron in close proximity to the PET imaging facility. The half-life of fluorine-18 is long enough that radiotracers labeled with fluorine-18 can be manufactured commercially at offsite locations and shipped to imaging centers. Recently rubidium-82 generators have become commercially available. These contain strontium-82, which decays by electron capture to produce positron-emitting rubidium-82.
The use of positron-emitting isotopes of metals in PET scans has been reviewed, including elements not listed above, such as lanthanides.
=== Immuno-PET ===
The isotope 89Zr has been applied to the tracking and quantification of molecular antibodies with PET cameras (a method called "immuno-PET").
The biological half-life of antibodies is typically on the order of days, see daclizumab and erenumab by way of example. To visualize and quantify the distribution of such antibodies in the body, the PET isotope 89Zr is well suited because its physical half-life matches the typical biological half-life of antibodies, see table above.
=== Emission ===
To conduct the scan, a short-lived radioactive tracer isotope is injected into the living subject (usually into blood circulation). Each tracer atom has been chemically incorporated into a biologically active molecule. There is a waiting period while the active molecule becomes concentrated in tissues of interest. Then the subject is placed in the imaging scanner. The molecule most commonly used for this purpose is FDG, a sugar, for which the waiting period is typically an hour. During the scan, a record of tissue concentration is made as the tracer decays.
As the radioisotope undergoes positron emission decay (also known as positive beta decay), it emits a positron, an antiparticle of the electron with opposite charge. The emitted positron travels in tissue for a short distance (typically less than 1 mm, but dependent on the isotope), during which time it loses kinetic energy, until it decelerates to a point where it can interact with an electron. The encounter annihilates both electron and positron, producing a pair of annihilation (gamma) photons moving in approximately opposite directions. These are detected when they reach a scintillator in the scanning device, creating a burst of light which is detected by photomultiplier tubes or silicon avalanche photodiodes (Si APD). The technique depends on simultaneous or coincident detection of the pair of photons moving in approximately opposite directions (they would be exactly opposite in their center of mass frame, but the scanner has no way to know this, and so has a built-in slight direction-error tolerance). Photons that do not arrive in temporal "pairs" (i.e. within a timing-window of a few nanoseconds) are ignored.
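To make the coincidence-window logic concrete, the following is a minimal Python sketch, not any scanner vendor's actual implementation: the event records, detector identifiers, and the few-nanosecond window width are all illustrative assumptions.

# Minimal sketch of coincidence sorting: pair single-photon detections whose
# timestamps fall within a short coincidence window (hypothetical values).
COINCIDENCE_WINDOW_NS = 6.0  # assumed timing window of a few nanoseconds

def find_coincidences(singles):
    """singles: list of (timestamp_ns, detector_id) tuples, unsorted."""
    singles = sorted(singles)                   # sort by arrival time
    pairs, i = [], 0
    while i < len(singles) - 1:
        t0, d0 = singles[i]
        t1, d1 = singles[i + 1]
        if t1 - t0 <= COINCIDENCE_WINDOW_NS and d0 != d1:
            pairs.append(((t0, d0), (t1, d1)))  # accept as a coincidence (one LOR)
            i += 2                              # both photons consumed
        else:
            i += 1                              # unpaired single: ignored
    return pairs

# toy data: two true coincidences and one unpaired single
events = [(10.0, 3), (12.1, 17), (250.0, 5), (400.0, 9), (402.5, 21)]
print(find_coincidences(events))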
=== Localization of the positron annihilation event ===
The most significant fraction of electron–positron annihilations results in two 511 keV gamma photons being emitted at almost 180 degrees to each other. Hence, it is possible to localize their source along a straight line of coincidence (also called the line of response, or LOR). In practice, the LOR has a non-zero width as the emitted photons are not exactly 180 degrees apart. If the resolving time of the detectors is less than 500 picoseconds rather than about 10 nanoseconds, it is possible to localize the event to a segment of a chord, whose length is determined by the detector timing resolution. As the timing resolution improves, the signal-to-noise ratio (SNR) of the image will improve, requiring fewer events to achieve the same image quality. This technology is not yet common, but it is available on some new systems.
=== Image reconstruction ===
The raw data collected by a PET scanner are a list of 'coincidence events' representing near-simultaneous detection (typically, within a window of 6 to 12 nanoseconds of each other) of annihilation photons by a pair of detectors. Each coincidence event represents a line in space connecting the two detectors along which the positron emission occurred (i.e., the line of response (LOR)).
Analytical techniques, much like the reconstruction of computed tomography (CT) and single-photon emission computed tomography (SPECT) data, are commonly used, although the data set collected in PET is much poorer than CT, so reconstruction techniques are more difficult. Coincidence events can be grouped into projection images, called sinograms. The sinograms are sorted by the angle of each view and tilt (for 3D images). The sinogram images are analogous to the projections captured by CT scanners, and can be reconstructed in a similar way. The statistics of data thereby obtained are much worse than those obtained through transmission tomography. A normal PET data set has millions of counts for the whole acquisition, while the CT can reach a few billion counts. This contributes to PET images appearing "noisier" than CT. Two major sources of noise in PET are scatter (a detected pair of photons, at least one of which was deflected from its original path by interaction with matter in the field of view, leading to the pair being assigned to an incorrect LOR) and random events (photons originating from two different annihilation events but incorrectly recorded as a coincidence pair because their arrival at their respective detectors occurred within a coincidence timing window).
In practice, considerable pre-processing of the data is required – correction for random coincidences, estimation and subtraction of scattered photons, detector dead-time correction (after the detection of a photon, the detector must "cool down" again) and detector-sensitivity correction (for both inherent detector sensitivity and changes in sensitivity due to angle of incidence).
Filtered back projection (FBP) has been frequently used to reconstruct images from the projections. This algorithm has the advantage of being simple while having a low requirement for computing resources. Disadvantages are that shot noise in the raw data is prominent in the reconstructed images, and areas of high tracer uptake tend to form streaks across the image. Also, FBP treats the data deterministically – it does not account for the inherent randomness associated with PET data, thus requiring all the pre-reconstruction corrections described above.
Statistical, likelihood-based approaches: Statistical, likelihood-based iterative expectation-maximization algorithms such as the Shepp–Vardi algorithm are now the preferred method of reconstruction. These algorithms compute an estimate of the likely distribution of annihilation events that led to the measured data, based on statistical principles. The advantage is a better noise profile and resistance to the streak artifacts common with FBP, but the disadvantage is greater computer resource requirements. A further advantage of statistical image reconstruction techniques is that the physical effects that would need to be pre-corrected for when using an analytical reconstruction algorithm, such as scattered photons, random coincidences, attenuation and detector dead-time, can be incorporated into the likelihood model being used in the reconstruction, allowing for additional noise reduction. Iterative reconstruction has also been shown to result in improvements in the resolution of the reconstructed images, since more sophisticated models of the scanner physics can be incorporated into the likelihood model than those used by analytical reconstruction methods, allowing for improved quantification of the radioactivity distribution.
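As a rough illustration of how such likelihood-based iterative reconstruction works, the following Python sketch applies the multiplicative maximum-likelihood expectation-maximization (MLEM) update on a toy system matrix. The matrix, counts, and iteration number are invented for the example, and all of the corrections discussed above (scatter, randoms, attenuation, dead time) are omitted.

import numpy as np

# Toy MLEM reconstruction: y ~ Poisson(A @ x); estimate x from measured counts y.
# A maps image voxels to detector bins (a stand-in for the true system matrix).
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(40, 10))       # 40 detector bins, 10 voxels (toy)
x_true = rng.uniform(1.0, 5.0, size=10)        # unknown activity distribution
y = rng.poisson(A @ x_true)                    # simulated coincidence counts

x = np.ones(10)                                # start from a uniform image
sensitivity = A.sum(axis=0)                    # per-voxel sensitivity, A^T 1
for _ in range(50):                            # MLEM iterations
    projection = A @ x                         # forward-project current estimate
    ratio = y / np.maximum(projection, 1e-12)  # compare with measured counts
    x *= (A.T @ ratio) / sensitivity           # multiplicative EM update

print(np.round(x, 2), np.round(x_true, 2))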
Research has shown that Bayesian methods that involve a Poisson likelihood function and an appropriate prior probability (e.g., a smoothing prior leading to total variation regularization or a Laplacian distribution leading to ℓ1-based regularization in a wavelet or other domain), such as via Ulf Grenander's Sieve estimator or via Bayes penalty methods or via I.J. Good's roughness method, may yield superior performance to expectation-maximization-based methods which involve a Poisson likelihood function but do not involve such a prior.
Attenuation correction: Quantitative PET imaging requires attenuation correction. In these systems attenuation correction is based on a transmission scan using a 68Ge rotating rod source.
Transmission scans directly measure attenuation values at 511 keV. Attenuation occurs when photons emitted by the radiotracer inside the body are absorbed by intervening tissue between the detector and the emission of the photon. As different LORs must traverse different thicknesses of tissue, the photons are attenuated differentially. The result is that structures deep in the body are reconstructed as having falsely low tracer uptake. Contemporary scanners can estimate attenuation using integrated x-ray CT equipment, in place of earlier equipment that offered a crude form of CT using a gamma ray (positron emitting) source and the PET detectors.
While attenuation-corrected images are generally more faithful representations, the correction process is itself susceptible to significant artifacts. As a result, both corrected and uncorrected images are always reconstructed and read together.
2D/3D reconstruction: Early PET scanners had only a single ring of detectors, hence the acquisition of data and subsequent reconstruction was restricted to a single transverse plane. More modern scanners now include multiple rings, essentially forming a cylinder of detectors.
There are two approaches to reconstructing data from such a scanner:
Treat each ring as a separate entity, so that only coincidences within a ring are detected, the image from each ring can then be reconstructed individually (2D reconstruction), or
Allow coincidences to be detected between rings as well as within rings, then reconstruct the entire volume together (3D).
3D techniques have better sensitivity (because more coincidences are detected and used) hence less noise, but are more sensitive to the effects of scatter and random coincidences, as well as requiring greater computer resources. The advent of sub-nanosecond timing resolution detectors affords better random coincidence rejection, thus favoring 3D image reconstruction.
Time-of-flight (TOF) PET: For modern systems with a higher time resolution (roughly 3 nanoseconds), a technique called "time-of-flight" is used to improve the overall performance. Time-of-flight PET makes use of very fast gamma-ray detectors and a data processing system that can more precisely determine the difference in time between the detection of the two photons. Although it is not possible to localize the point of origin of the annihilation event exactly (currently only to within about 10 cm), so image reconstruction is still needed, the TOF technique gives a remarkable improvement in image quality, especially in signal-to-noise ratio.
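To put the timing numbers in perspective, a short back-of-the-envelope calculation (the resolutions chosen are only examples) shows how coincidence timing resolution translates into a localization segment along the line of response, using the relation Δx ≈ c·Δt/2.

# Localization uncertainty along the LOR from timing resolution: dx ≈ c * dt / 2.
C = 3.0e8  # speed of light in m/s (approximate)

def lor_segment_cm(timing_resolution_s):
    return C * timing_resolution_s / 2.0 * 100.0  # convert metres to centimetres

for dt in (10e-9, 3e-9, 500e-12):  # example resolutions: 10 ns, 3 ns, 500 ps
    print(f"{dt*1e9:6.2f} ns -> ~{lor_segment_cm(dt):6.1f} cm segment")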
=== Combination of PET with CT or MRI ===
PET scans are increasingly read alongside CT or MRI scans, with the combination (co-registration) giving both anatomic and metabolic information (i.e., what the structure is, and what it is doing biochemically). Because PET imaging is most useful in combination with anatomical imaging, such as CT, modern PET scanners are now available with integrated high-end multi-detector-row CT scanners (PET–CT). Because the two scans can be performed in immediate sequence during the same session, with the patient not changing position between the two types of scans, the two sets of images are more precisely registered, so that areas of abnormality on the PET imaging can be more perfectly correlated with anatomy on the CT images. This is very useful in showing detailed views of moving organs or structures with higher anatomical variation, which is more common outside the brain.
At the Jülich Institute of Neurosciences and Biophysics, the world's largest PET–MRI device began operation in April 2009: a 9.4-tesla magnetic resonance tomograph (MRT) combined with a PET scanner. Presently, only the head and brain can be imaged at these high magnetic field strengths.
For brain imaging, registration of CT, MRI and PET scans may be accomplished without the need for an integrated PET–CT or PET–MRI scanner by using a device known as the N-localizer.
=== Limitations ===
The minimization of radiation dose to the subject is an attractive feature of the use of short-lived radionuclides. Besides its established role as a diagnostic technique, PET has an expanding role as a method to assess the response to therapy, in particular, cancer therapy, where the risk to the patient from lack of knowledge about disease progress is much greater than the risk from the test radiation. Since the tracers are radioactive, their use in elderly and pregnant patients is limited by the risks posed by radiation.
Limitations to the widespread use of PET arise from the high costs of cyclotrons needed to produce the short-lived radionuclides for PET scanning and the need for specially adapted on-site chemical synthesis apparatus to produce the radiopharmaceuticals after radioisotope preparation. Organic radiotracer molecules that will contain a positron-emitting radioisotope cannot be synthesized first and then the radioisotope prepared within them, because bombardment with a cyclotron to prepare the radioisotope destroys any organic carrier for it. Instead, the isotope must be prepared first, then the chemistry to prepare any organic radiotracer (such as FDG) accomplished very quickly, in the short time before the isotope decays. Few hospitals and universities are capable of maintaining such systems, and most clinical PET is supported by third-party suppliers of radiotracers that can supply many sites simultaneously. This limitation restricts clinical PET primarily to the use of tracers labelled with fluorine-18, which has a half-life of 110 minutes and can be transported a reasonable distance before use, or to rubidium-82 (used as rubidium-82 chloride) with a half-life of 1.27 minutes, which is created in a portable generator and is used for myocardial perfusion studies. In recent years a few on-site cyclotrons with integrated shielding and "hot labs" (automated chemistry labs that are able to work with radioisotopes) have begun to accompany PET units to remote hospitals. The presence of the small on-site cyclotron promises to expand in the future as the cyclotrons shrink in response to the high cost of isotope transportation to remote PET machines. In recent years the shortage of PET scans has been alleviated in the US, as rollout of radiopharmacies to supply radioisotopes has grown 30 percent per year.
Because the half-life of fluorine-18 is about two hours, the prepared dose of a radiopharmaceutical bearing this radionuclide will undergo multiple half-lives of decay during the working day. This necessitates frequent recalibration of the remaining dose (determination of activity per unit volume) and careful planning with respect to patient scheduling.
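As a simple illustration of this recalibration arithmetic, the following sketch applies the standard exponential-decay law A(t) = A0 · 2^(−t/T½) with the 110-minute half-life of fluorine-18; the starting activity and elapsed times are made-up example values.

# Remaining activity of a fluorine-18 dose over a working day: A(t) = A0 * 2**(-t/T_half).
T_HALF_MIN = 110.0      # half-life of fluorine-18 in minutes
A0_MBQ = 400.0          # assumed activity at calibration time, in MBq

def activity(minutes_elapsed, a0=A0_MBQ, t_half=T_HALF_MIN):
    return a0 * 2.0 ** (-minutes_elapsed / t_half)

for t in (0, 60, 110, 220, 330):
    print(f"after {t:3d} min: {activity(t):6.1f} MBq")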
== History ==
The concept of emission and transmission tomography was introduced by David E. Kuhl, Luke Chapman and Roy Edwards in the late 1950s. Their work would lead to the design and construction of several tomographic instruments at Washington University School of Medicine and later at the University of Pennsylvania. In the 1960s and 70s tomographic imaging instruments and techniques were further developed by Michel Ter-Pogossian, Michael E. Phelps, Edward J. Hoffman and others at Washington University School of Medicine.
Work by Gordon Brownell, Charles Burnham and their associates at the Massachusetts General Hospital beginning in the 1950s contributed significantly to the development of PET technology and included the first demonstration of annihilation radiation for medical imaging. Their innovations, including the use of light pipes and volumetric analysis, have been important in the deployment of PET imaging. In 1961, James Robertson and his associates at Brookhaven National Laboratory built the first single-plane PET scanner, nicknamed the "head-shrinker".
One of the factors most responsible for the acceptance of positron imaging was the development of radiopharmaceuticals. In particular, the development of labeled 2-fluorodeoxy-D-glucose (FDG; first synthesized and described by two Czech scientists from Charles University in Prague in 1968) by the Brookhaven group under the direction of Al Wolf and Joanna Fowler was a major factor in expanding the scope of PET imaging. The compound was first administered to two normal human volunteers by Abass Alavi in August 1976 at the University of Pennsylvania. Brain images obtained with an ordinary (non-PET) nuclear scanner demonstrated the concentration of FDG in that organ. Later, the substance was used in dedicated positron tomographic scanners, to yield the modern procedure.
The logical extension of positron instrumentation was a design using two two-dimensional arrays. PC-I was the first instrument using this concept and was designed in 1968, completed in 1969 and reported in 1972. The first applications of PC-I in tomographic mode as distinguished from the computed tomographic mode were reported in 1970. It soon became clear to many of those involved in PET development that a circular or cylindrical array of detectors was the logical next step in PET instrumentation. Although many investigators took this approach, James Robertson and Zang-Hee Cho were the first to propose a ring system that has become the prototype of the current shape of PET. The first multislice cylindrical array PET scanner was completed in 1974 at the Mallinckrodt Institute of Radiology by the group led by Ter-Pogossian.
The PET–CT scanner, attributed to David Townsend and Ronald Nutt, was named by Time as the medical invention of the year in 2000.
== Cost ==
As of August 2008, Cancer Care Ontario reports that the current average incremental cost to perform a PET scan in the province is CA$1,000–1,200 per scan. This includes the cost of the radiopharmaceutical and a stipend for the physician reading the scan.
In the United States, a PET scan is estimated to be US$1,500–5,000.
In England, the National Health Service reference cost (2015–2016) for an adult outpatient PET scan is £798.
In Australia, as of July 2018, the Medicare Benefits Schedule Fee for whole body FDG PET ranges from A$953 to A$999, depending on the indication for the scan.
== Quality control ==
The overall performance of PET systems can be evaluated by quality control tools such as the Jaszczak phantom.
== See also ==
Diffuse optical imaging – also known as diffuse optical tomography, a medical imaging technique
Hot cell – Shielded nuclear radiation containment chamber
Molecular imaging – Imaging molecules within living patients
Neurotherapy – Medical treatment
== References ==
== External links == | Wikipedia/Positron-emission_tomography |
In physics, the observer effect is the disturbance of an observed system by the act of observation. This is often the result of utilising instruments that, by necessity, alter the state of what they measure in some manner. A common example is checking the pressure in an automobile tire, which causes some of the air to escape, thereby changing the amount of pressure one observes. Similarly, seeing non-luminous objects requires light hitting the object to cause it to reflect that light. While the effects of observation are often negligible, the object still experiences a change (leading to the Schrödinger's cat thought experiment). This effect can be found in many domains of physics, but can usually be reduced to insignificance by using different instruments or observation techniques.
A notable example of the observer effect occurs in quantum mechanics, as demonstrated by the double-slit experiment. Physicists have found that observation of quantum phenomena by a detector or an instrument can change the measured results of this experiment. Despite the "observer effect" in the double-slit experiment being caused by the presence of an electronic detector, the experiment's results have been interpreted by some to suggest that a conscious mind can directly affect reality. However, the need for the "observer" to be conscious is not supported by scientific research, and has been pointed out as a misconception rooted in a poor understanding of the quantum wave function ψ and the quantum measurement process.
== Particle physics ==
An electron is detected upon interaction with a photon; this interaction will inevitably alter the velocity and momentum of that electron. It is possible for other, less direct means of measurement to affect the electron. It is also necessary to distinguish clearly between the measured value of a quantity and the value resulting from the measurement process. In particular, a measurement of momentum is non-repeatable in short intervals of time. A formula (one-dimensional for simplicity) relating involved quantities, due to Niels Bohr (1928) is given by
{\displaystyle |v'_{x}-v_{x}|\Delta p_{x}\approx \hbar /\Delta t,}
where
Δpx is uncertainty in measured value of momentum,
Δt is duration of measurement,
vx is velocity of particle before measurement,
v′x is velocity of particle after measurement,
ħ is the reduced Planck constant.
The measured momentum of the electron is then related to vx, whereas its momentum after the measurement is related to v′x. This is a best-case scenario.
== Electronics ==
In electronics, ammeters and voltmeters are usually wired in series or parallel to the circuit, and so by their very presence affect the current or the voltage they are measuring by way of presenting an additional real or complex load to the circuit, thus changing the transfer function and behavior of the circuit itself. Even a more passive device such as a current clamp, which measures the wire current without coming into physical contact with the wire, affects the current through the circuit being measured because of the mutual inductance between the clamp and the wire.
== Thermodynamics ==
In thermodynamics, a standard mercury-in-glass thermometer must absorb or give up some thermal energy to record a temperature, and therefore changes the temperature of the body which it is measuring.
== Quantum mechanics ==
== See also ==
Observer (special relativity)
== References == | Wikipedia/Observer_effect_(physics) |
In physics, Hamiltonian mechanics is a reformulation of Lagrangian mechanics that emerged in 1833. Introduced by Sir William Rowan Hamilton, Hamiltonian mechanics replaces (generalized) velocities {\displaystyle {\dot {q}}^{i}} used in Lagrangian mechanics with (generalized) momenta. Both theories provide interpretations of classical mechanics and describe the same physical phenomena.
Hamiltonian mechanics has a close relationship with geometry (notably, symplectic geometry and Poisson structures) and serves as a link between classical and quantum mechanics.
== Overview ==
=== Phase space coordinates (p, q) and Hamiltonian H ===
Let {\displaystyle (M,{\mathcal {L}})} be a mechanical system with configuration space {\displaystyle M} and smooth Lagrangian {\displaystyle {\mathcal {L}}.} Select a standard coordinate system {\displaystyle ({\boldsymbol {q}},{\boldsymbol {\dot {q}}})} on {\displaystyle M.} The quantities {\displaystyle \textstyle p_{i}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)~{\stackrel {\text{def}}{=}}~{\partial {\mathcal {L}}}/{\partial {\dot {q}}^{i}}} are called momenta (also generalized momenta, conjugate momenta, and canonical momenta). For a time instant {\displaystyle t,} the Legendre transformation of {\displaystyle {\mathcal {L}}} is defined as the map {\displaystyle ({\boldsymbol {q}},{\boldsymbol {\dot {q}}})\to \left({\boldsymbol {p}},{\boldsymbol {q}}\right)} which is assumed to have a smooth inverse {\displaystyle ({\boldsymbol {p}},{\boldsymbol {q}})\to ({\boldsymbol {q}},{\boldsymbol {\dot {q}}}).} For a system with {\displaystyle n} degrees of freedom, Lagrangian mechanics defines the energy function

{\displaystyle E_{\mathcal {L}}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)\,{\stackrel {\text{def}}{=}}\,\sum _{i=1}^{n}{\dot {q}}^{i}{\frac {\partial {\mathcal {L}}}{\partial {\dot {q}}^{i}}}-{\mathcal {L}}.}

The Legendre transform of {\displaystyle {\mathcal {L}}} turns {\displaystyle E_{\mathcal {L}}} into a function {\displaystyle {\mathcal {H}}({\boldsymbol {p}},{\boldsymbol {q}},t)} known as the Hamiltonian. The Hamiltonian satisfies

{\displaystyle {\mathcal {H}}\left({\frac {\partial {\mathcal {L}}}{\partial {\boldsymbol {\dot {q}}}}},{\boldsymbol {q}},t\right)=E_{\mathcal {L}}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)}

which implies that

{\displaystyle {\mathcal {H}}({\boldsymbol {p}},{\boldsymbol {q}},t)=\sum _{i=1}^{n}p_{i}{\dot {q}}^{i}-{\mathcal {L}}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t),}

where the velocities {\displaystyle {\boldsymbol {\dot {q}}}=({\dot {q}}^{1},\ldots ,{\dot {q}}^{n})} are found from the ({\displaystyle n}-dimensional) equation {\displaystyle \textstyle {\boldsymbol {p}}={\partial {\mathcal {L}}}/{\partial {\boldsymbol {\dot {q}}}}} which, by assumption, is uniquely solvable for {\displaystyle {\boldsymbol {\dot {q}}}}. The ({\displaystyle 2n}-dimensional) pair {\displaystyle ({\boldsymbol {p}},{\boldsymbol {q}})} is called phase space coordinates (also canonical coordinates).
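As a concrete illustration of this construction, the following minimal SymPy sketch passes from a Lagrangian to the Hamiltonian for a one-dimensional particle in a potential; the specific Lagrangian is only an example, not part of the general definition.

import sympy as sp

# Example: L = (1/2) m qdot**2 - V(q); compute p = dL/dqdot, invert it, and form
# H(p, q) = p*qdot - L, i.e. the Legendre transform described above.
m, q, qdot, p = sp.symbols('m q qdot p', positive=True)
V = sp.Function('V')

L = sp.Rational(1, 2) * m * qdot**2 - V(q)
p_of_qdot = sp.diff(L, qdot)                         # generalized momentum
qdot_of_p = sp.solve(sp.Eq(p, p_of_qdot), qdot)[0]   # invert: qdot = p/m
H = sp.simplify(p * qdot_of_p - L.subs(qdot, qdot_of_p))

print(H)   # expected: p**2/(2*m) + V(q)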
=== From Euler–Lagrange equation to Hamilton's equations ===
In phase space coordinates {\displaystyle ({\boldsymbol {p}},{\boldsymbol {q}})}, the ({\displaystyle n}-dimensional) Euler–Lagrange equation

{\displaystyle {\frac {\partial {\mathcal {L}}}{\partial {\boldsymbol {q}}}}-{\frac {d}{dt}}{\frac {\partial {\mathcal {L}}}{\partial {\dot {\boldsymbol {q}}}}}=0}

becomes Hamilton's equations in {\displaystyle 2n} dimensions,

{\displaystyle {\frac {\mathrm {d} {\boldsymbol {q}}}{\mathrm {d} t}}={\frac {\partial {\mathcal {H}}}{\partial {\boldsymbol {p}}}}\quad ,\quad {\frac {\mathrm {d} {\boldsymbol {p}}}{\mathrm {d} t}}=-{\frac {\partial {\mathcal {H}}}{\partial {\boldsymbol {q}}}}.}
=== From stationary action principle to Hamilton's equations ===
Let {\displaystyle {\mathcal {P}}(a,b,{\boldsymbol {x}}_{a},{\boldsymbol {x}}_{b})} be the set of smooth paths {\displaystyle {\boldsymbol {q}}:[a,b]\to M} for which {\displaystyle {\boldsymbol {q}}(a)={\boldsymbol {x}}_{a}} and {\displaystyle {\boldsymbol {q}}(b)={\boldsymbol {x}}_{b}.} The action functional {\displaystyle {\mathcal {S}}:{\mathcal {P}}(a,b,{\boldsymbol {x}}_{a},{\boldsymbol {x}}_{b})\to \mathbb {R} } is defined via

{\displaystyle {\mathcal {S}}[{\boldsymbol {q}}]=\int _{a}^{b}{\mathcal {L}}(t,{\boldsymbol {q}}(t),{\dot {\boldsymbol {q}}}(t))\,dt=\int _{a}^{b}\left(\sum _{i=1}^{n}p_{i}{\dot {q}}^{i}-{\mathcal {H}}({\boldsymbol {p}},{\boldsymbol {q}},t)\right)\,dt,}

where {\displaystyle {\boldsymbol {q}}={\boldsymbol {q}}(t)}, and {\displaystyle {\boldsymbol {p}}=\partial {\mathcal {L}}/\partial {\boldsymbol {\dot {q}}}} (see above). A path {\displaystyle {\boldsymbol {q}}\in {\mathcal {P}}(a,b,{\boldsymbol {x}}_{a},{\boldsymbol {x}}_{b})} is a stationary point of {\displaystyle {\mathcal {S}}} (and hence is an equation of motion) if and only if the path {\displaystyle ({\boldsymbol {p}}(t),{\boldsymbol {q}}(t))} in phase space coordinates obeys the Hamilton equations.
=== Basic physical interpretation ===
A simple interpretation of Hamiltonian mechanics comes from its application on a one-dimensional system consisting of one nonrelativistic particle of mass m. The value {\displaystyle H(p,q)} of the Hamiltonian is the total energy of the system, in this case the sum of kinetic and potential energy, traditionally denoted T and V, respectively. Here p is the momentum mv and q is the space coordinate. Then

{\displaystyle {\mathcal {H}}=T+V,\qquad T={\frac {p^{2}}{2m}},\qquad V=V(q)}
T is a function of p alone, while V is a function of q alone (i.e., T and V are scleronomic).
In this example, the time derivative of q is the velocity, and so the first Hamilton equation means that the particle's velocity equals the derivative of its kinetic energy with respect to its momentum. The time derivative of the momentum p equals the Newtonian force, and so the second Hamilton equation means that the force equals the negative gradient of potential energy.
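A minimal numerical sketch of this interpretation integrates Hamilton's equations q̇ = ∂H/∂p and ṗ = −∂H/∂q for H = p²/2m + V(q) and checks that H stays approximately constant; the harmonic potential, masses, and step size are illustrative choices only.

import numpy as np

# Hamilton's equations for H = p**2/(2m) + V(q) with V(q) = (1/2) k q**2
# (harmonic oscillator chosen only as an example), integrated with a
# symplectic semi-implicit Euler step.
m, k = 1.0, 4.0
dVdq = lambda q: k * q
H = lambda q, p: p**2 / (2 * m) + 0.5 * k * q**2

q, p, dt = 1.0, 0.0, 1e-3
for _ in range(10_000):
    p -= dVdq(q) * dt          # dp/dt = -dH/dq
    q += (p / m) * dt          # dq/dt =  dH/dp
print(round(H(1.0, 0.0), 4), round(H(q, p), 4))  # energy is nearly conserved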
== Example ==
A spherical pendulum consists of a mass m moving without friction on the surface of a sphere. The only forces acting on the mass are the reaction from the sphere and gravity. Spherical coordinates are used to describe the position of the mass in terms of (r, θ, φ), where r is fixed, r = ℓ.
The Lagrangian for this system is
{\displaystyle L={\frac {1}{2}}m\ell ^{2}\left({\dot {\theta }}^{2}+\sin ^{2}\theta \ {\dot {\varphi }}^{2}\right)+mg\ell \cos \theta .}
Thus the Hamiltonian is
{\displaystyle H=P_{\theta }{\dot {\theta }}+P_{\varphi }{\dot {\varphi }}-L}

where

{\displaystyle P_{\theta }={\frac {\partial L}{\partial {\dot {\theta }}}}=m\ell ^{2}{\dot {\theta }}}

and

{\displaystyle P_{\varphi }={\frac {\partial L}{\partial {\dot {\varphi }}}}=m\ell ^{2}\sin ^{2}\!\theta \,{\dot {\varphi }}.}
In terms of coordinates and momenta, the Hamiltonian reads
{\displaystyle H=\underbrace {\left[{\frac {1}{2}}m\ell ^{2}{\dot {\theta }}^{2}+{\frac {1}{2}}m\ell ^{2}\sin ^{2}\!\theta \,{\dot {\varphi }}^{2}\right]} _{T}+\underbrace {{\Big [}-mg\ell \cos \theta {\Big ]}} _{V}={\frac {P_{\theta }^{2}}{2m\ell ^{2}}}+{\frac {P_{\varphi }^{2}}{2m\ell ^{2}\sin ^{2}\theta }}-mg\ell \cos \theta .}
Hamilton's equations give the time evolution of coordinates and conjugate momenta in four first-order differential equations,
{\displaystyle {\begin{aligned}{\dot {\theta }}&={P_{\theta } \over m\ell ^{2}}\\[6pt]{\dot {\varphi }}&={P_{\varphi } \over m\ell ^{2}\sin ^{2}\theta }\\[6pt]{\dot {P_{\theta }}}&={P_{\varphi }^{2} \over m\ell ^{2}\sin ^{3}\theta }\cos \theta -mg\ell \sin \theta \\[6pt]{\dot {P_{\varphi }}}&=0.\end{aligned}}}
Momentum {\displaystyle P_{\varphi }}, which corresponds to the vertical component of angular momentum {\displaystyle L_{z}=\ell \sin \theta \times m\ell \sin \theta \,{\dot {\varphi }}}, is a constant of motion. That is a consequence of the rotational symmetry of the system around the vertical axis. Being absent from the Hamiltonian, azimuth {\displaystyle \varphi } is a cyclic coordinate, which implies conservation of its conjugate momentum.
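The following small numerical sketch (parameter values, initial state, and step size are arbitrary illustrative choices) integrates the four Hamilton equations above with a simple fixed-step scheme and confirms that P_φ stays constant along the trajectory.

import numpy as np

# Integrate the spherical-pendulum Hamilton equations written above;
# m, l, g and the initial conditions are example values only.
m, l, g = 1.0, 1.0, 9.81

def rhs(theta, phi, Ptheta, Pphi):
    dtheta = Ptheta / (m * l**2)
    dphi = Pphi / (m * l**2 * np.sin(theta)**2)
    dPtheta = Pphi**2 * np.cos(theta) / (m * l**2 * np.sin(theta)**3) - m * g * l * np.sin(theta)
    dPphi = 0.0
    return dtheta, dphi, dPtheta, dPphi

state = np.array([1.0, 0.0, 0.0, 0.5])    # theta, phi, P_theta, P_phi
dt = 1e-4
for _ in range(100_000):                  # 10 seconds of simulated time
    state += dt * np.array(rhs(*state))   # explicit Euler step (illustrative only)

print("P_phi is still", state[3])         # the conserved momentum is unchanged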
== Deriving Hamilton's equations ==
Hamilton's equations can be derived by a calculation with the Lagrangian {\displaystyle {\mathcal {L}}}, generalized positions {\displaystyle q^{i}}, and generalized velocities {\displaystyle {\dot {q}}^{i}}, where {\displaystyle i=1,\ldots ,n}. Here we work off-shell, meaning {\displaystyle q^{i}}, {\displaystyle {\dot {q}}^{i}}, {\displaystyle t} are independent coordinates in phase space, not constrained to follow any equations of motion (in particular, {\displaystyle {\dot {q}}^{i}} is not a derivative of {\displaystyle q^{i}}). The total differential of the Lagrangian is:
{\displaystyle \mathrm {d} {\mathcal {L}}=\sum _{i}\left({\frac {\partial {\mathcal {L}}}{\partial q^{i}}}\mathrm {d} q^{i}+{\frac {\partial {\mathcal {L}}}{\partial {\dot {q}}^{i}}}\,\mathrm {d} {\dot {q}}^{i}\right)+{\frac {\partial {\mathcal {L}}}{\partial t}}\,\mathrm {d} t\ .}
The generalized momentum coordinates were defined as {\displaystyle p_{i}=\partial {\mathcal {L}}/\partial {\dot {q}}^{i}}, so we may rewrite the equation as:
{\displaystyle {\begin{aligned}\mathrm {d} {\mathcal {L}}=&\sum _{i}\left({\frac {\partial {\mathcal {L}}}{\partial q^{i}}}\,\mathrm {d} q^{i}+p_{i}\mathrm {d} {\dot {q}}^{i}\right)+{\frac {\partial {\mathcal {L}}}{\partial t}}\mathrm {d} t\\=&\sum _{i}\left({\frac {\partial {\mathcal {L}}}{\partial q^{i}}}\,\mathrm {d} q^{i}+\mathrm {d} (p_{i}{\dot {q}}^{i})-{\dot {q}}^{i}\,\mathrm {d} p_{i}\right)+{\frac {\partial {\mathcal {L}}}{\partial t}}\,\mathrm {d} t\,.\end{aligned}}}
After rearranging, one obtains:
{\displaystyle \mathrm {d} \!\left(\sum _{i}p_{i}{\dot {q}}^{i}-{\mathcal {L}}\right)=\sum _{i}\left(-{\frac {\partial {\mathcal {L}}}{\partial q^{i}}}\,\mathrm {d} q^{i}+{\dot {q}}^{i}\mathrm {d} p_{i}\right)-{\frac {\partial {\mathcal {L}}}{\partial t}}\,\mathrm {d} t\ .}
The term in parentheses on the left-hand side is just the Hamiltonian {\textstyle {\mathcal {H}}=\sum p_{i}{\dot {q}}^{i}-{\mathcal {L}}} defined previously, therefore:
{\displaystyle \mathrm {d} {\mathcal {H}}=\sum _{i}\left(-{\frac {\partial {\mathcal {L}}}{\partial q^{i}}}\,\mathrm {d} q^{i}+{\dot {q}}^{i}\,\mathrm {d} p_{i}\right)-{\frac {\partial {\mathcal {L}}}{\partial t}}\,\mathrm {d} t\ .}
One may also calculate the total differential of the Hamiltonian {\displaystyle {\mathcal {H}}} with respect to coordinates {\displaystyle q^{i}}, {\displaystyle p_{i}}, {\displaystyle t} instead of {\displaystyle q^{i}}, {\displaystyle {\dot {q}}^{i}}, {\displaystyle t}, yielding:
{\displaystyle \mathrm {d} {\mathcal {H}}=\sum _{i}\left({\frac {\partial {\mathcal {H}}}{\partial q^{i}}}\mathrm {d} q^{i}+{\frac {\partial {\mathcal {H}}}{\partial p_{i}}}\mathrm {d} p_{i}\right)+{\frac {\partial {\mathcal {H}}}{\partial t}}\,\mathrm {d} t\ .}
One may now equate these two expressions for {\displaystyle d{\mathcal {H}}}, one in terms of {\displaystyle {\mathcal {L}}}, the other in terms of {\displaystyle {\mathcal {H}}}:
{\displaystyle \sum _{i}\left(-{\frac {\partial {\mathcal {L}}}{\partial q^{i}}}\mathrm {d} q^{i}+{\dot {q}}^{i}\mathrm {d} p_{i}\right)-{\frac {\partial {\mathcal {L}}}{\partial t}}\,\mathrm {d} t\ =\ \sum _{i}\left({\frac {\partial {\mathcal {H}}}{\partial q^{i}}}\mathrm {d} q^{i}+{\frac {\partial {\mathcal {H}}}{\partial p_{i}}}\mathrm {d} p_{i}\right)+{\frac {\partial {\mathcal {H}}}{\partial t}}\,\mathrm {d} t\ .}
Since these calculations are off-shell, one can equate the respective coefficients of {\displaystyle \mathrm {d} q^{i}}, {\displaystyle \mathrm {d} p_{i}}, {\displaystyle \mathrm {d} t} on the two sides:
{\displaystyle {\frac {\partial {\mathcal {H}}}{\partial q^{i}}}=-{\frac {\partial {\mathcal {L}}}{\partial q^{i}}}\quad ,\quad {\frac {\partial {\mathcal {H}}}{\partial p_{i}}}={\dot {q}}^{i}\quad ,\quad {\frac {\partial {\mathcal {H}}}{\partial t}}=-{\partial {\mathcal {L}} \over \partial t}\ .}
On-shell, one substitutes parametric functions {\displaystyle q^{i}=q^{i}(t)} which define a trajectory in phase space with velocities {\displaystyle {\dot {q}}^{i}={\tfrac {d}{dt}}q^{i}(t)}, obeying Lagrange's equations:
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial {\mathcal {L}}}{\partial {\dot {q}}^{i}}}-{\frac {\partial {\mathcal {L}}}{\partial q^{i}}}=0\ .}
Rearranging and writing in terms of the on-shell {\displaystyle p_{i}=p_{i}(t)} gives:
{\displaystyle {\frac {\partial {\mathcal {L}}}{\partial q^{i}}}={\dot {p}}_{i}\ .}
Thus Lagrange's equations are equivalent to Hamilton's equations:
{\displaystyle {\frac {\partial {\mathcal {H}}}{\partial q^{i}}}=-{\dot {p}}_{i}\quad ,\quad {\frac {\partial {\mathcal {H}}}{\partial p_{i}}}={\dot {q}}^{i}\quad ,\quad {\frac {\partial {\mathcal {H}}}{\partial t}}=-{\frac {\partial {\mathcal {L}}}{\partial t}}\,.}
In the case of time-independent {\displaystyle {\mathcal {H}}} and {\displaystyle {\mathcal {L}}}, i.e. {\displaystyle \partial {\mathcal {H}}/\partial t=-\partial {\mathcal {L}}/\partial t=0}, Hamilton's equations consist of 2n first-order differential equations, while Lagrange's equations consist of n second-order equations. Hamilton's equations usually do not reduce the difficulty of finding explicit solutions, but important theoretical results can be derived from them, because coordinates and momenta are independent variables with nearly symmetric roles.
Hamilton's equations have another advantage over Lagrange's equations: if a system has a symmetry, so that some coordinate {\displaystyle q_{i}} does not occur in the Hamiltonian (i.e. a cyclic coordinate), the corresponding momentum coordinate {\displaystyle p_{i}} is conserved along each trajectory, and that coordinate can be reduced to a constant in the other equations of the set. This effectively reduces the problem from n coordinates to (n − 1) coordinates: this is the basis of symplectic reduction in geometry. In the Lagrangian framework, the conservation of momentum also follows immediately; however, all the generalized velocities {\displaystyle {\dot {q}}_{i}} still occur in the Lagrangian, and a system of equations in n coordinates still has to be solved.
The Lagrangian and Hamiltonian approaches provide the groundwork for deeper results in classical mechanics, and suggest analogous formulations in quantum mechanics: the path integral formulation and the Schrödinger equation.
== Properties of the Hamiltonian ==
The value of the Hamiltonian {\displaystyle {\mathcal {H}}} is the total energy of the system if and only if the energy function {\displaystyle E_{\mathcal {L}}} has the same property. (See definition of {\displaystyle {\mathcal {H}}}.)

{\displaystyle {\frac {d{\mathcal {H}}}{dt}}={\frac {\partial {\mathcal {H}}}{\partial t}}} when {\displaystyle \mathbf {p} (t)}, {\displaystyle \mathbf {q} (t)} form a solution of Hamilton's equations. Indeed, {\textstyle {\frac {d{\mathcal {H}}}{dt}}={\frac {\partial {\mathcal {H}}}{\partial {\boldsymbol {p}}}}\cdot {\dot {\boldsymbol {p}}}+{\frac {\partial {\mathcal {H}}}{\partial {\boldsymbol {q}}}}\cdot {\dot {\boldsymbol {q}}}+{\frac {\partial {\mathcal {H}}}{\partial t}},} and everything but the final term cancels out.

{\displaystyle {\mathcal {H}}} does not change under point transformations, i.e. smooth changes {\displaystyle {\boldsymbol {q}}\leftrightarrow {\boldsymbol {q'}}} of space coordinates. (This follows from the invariance of the energy function {\displaystyle E_{\mathcal {L}}} under point transformations. The invariance of {\displaystyle E_{\mathcal {L}}} can be established directly.)

{\displaystyle {\frac {\partial {\mathcal {H}}}{\partial t}}=-{\frac {\partial {\mathcal {L}}}{\partial t}}.} (See § Deriving Hamilton's equations.)

{\displaystyle -{\frac {\partial {\mathcal {H}}}{\partial q^{i}}}={\dot {p}}_{i}={\frac {\partial {\mathcal {L}}}{\partial q^{i}}}.} (Compare Hamilton's and Euler–Lagrange equations, or see § Deriving Hamilton's equations.)

{\displaystyle {\frac {\partial {\mathcal {H}}}{\partial q^{i}}}=0} if and only if {\displaystyle {\frac {\partial {\mathcal {L}}}{\partial q^{i}}}=0}. A coordinate for which the last equation holds is called cyclic (or ignorable). Every cyclic coordinate {\displaystyle q^{i}} reduces the number of degrees of freedom by 1, causes the corresponding momentum {\displaystyle p_{i}} to be conserved, and makes Hamilton's equations easier to solve.
== Hamiltonian as the total system energy ==
In its application to a given system, the Hamiltonian is often taken to be

{\displaystyle {\mathcal {H}}=T+V}

where {\displaystyle T} is the kinetic energy and {\displaystyle V} is the potential energy. Using this relation can be simpler than first calculating the Lagrangian, and then deriving the Hamiltonian from the Lagrangian. However, the relation is not true for all systems.
The relation holds true for nonrelativistic systems when all of the following conditions are satisfied:

{\displaystyle {\frac {\partial V({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)}{\partial {\dot {q}}_{i}}}=0\;,\quad \forall i}

{\displaystyle {\frac {\partial T({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)}{\partial t}}=0}

{\displaystyle T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})=\sum _{i=1}^{n}\sum _{j=1}^{n}{\biggl (}c_{ij}({\boldsymbol {q}}){\dot {q}}_{i}{\dot {q}}_{j}{\biggr )}}

where {\displaystyle t} is time, {\displaystyle n} is the number of degrees of freedom of the system, and each {\displaystyle c_{ij}({\boldsymbol {q}})} is an arbitrary scalar function of {\displaystyle {\boldsymbol {q}}}.
In words, this means that the relation {\displaystyle {\mathcal {H}}=T+V} holds true if {\displaystyle T} does not contain time as an explicit variable (it is scleronomic), {\displaystyle V} does not contain generalised velocity as an explicit variable, and each term of {\displaystyle T} is quadratic in generalised velocity.
=== Proof ===
Preliminary to this proof, it is important to address an ambiguity in the related mathematical notation. While a change of variables can be used to equate {\displaystyle {\mathcal {L}}({\boldsymbol {p}},{\boldsymbol {q}},t)={\mathcal {L}}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)}, it is important to note that

{\displaystyle {\frac {\partial {\mathcal {L}}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)}{\partial {\dot {q}}_{i}}}\neq {\frac {\partial {\mathcal {L}}({\boldsymbol {p}},{\boldsymbol {q}},t)}{\partial {\dot {q}}_{i}}}.}

In this case, the right hand side always evaluates to 0. To perform a change of variables inside of a partial derivative, the multivariable chain rule should be used. Hence, to avoid ambiguity, the function arguments of any term inside of a partial derivative should be stated.

Additionally, this proof uses the notation {\displaystyle f(a,b,c)=f(a,b)} to imply that {\displaystyle {\frac {\partial f(a,b,c)}{\partial c}}=0}.
=== Application to systems of point masses ===
For a system of point masses, the requirement for {\displaystyle T} to be quadratic in generalised velocity is always satisfied for the case where {\displaystyle T({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)=T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})}, which is a requirement for {\displaystyle {\mathcal {H}}=T+V} anyway.
=== Conservation of energy ===
If the conditions for {\displaystyle {\mathcal {H}}=T+V} are satisfied, then conservation of the Hamiltonian implies conservation of energy. This requires the additional condition that {\displaystyle V} does not contain time as an explicit variable:

{\displaystyle {\frac {\partial V({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)}{\partial t}}=0}
In summary, the requirements for {\displaystyle {\mathcal {H}}=T+V={\text{constant of time}}} to be satisfied for a nonrelativistic system are:

{\displaystyle V=V({\boldsymbol {q}})}

{\displaystyle T=T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})}

{\displaystyle T} is a homogeneous quadratic function in {\displaystyle {\boldsymbol {\dot {q}}}}.
Regarding extensions to the Euler-Lagrange formulation which use dissipation functions (See Lagrangian mechanics § Extensions to include non-conservative forces), e.g. the Rayleigh dissipation function, energy is not conserved when a dissipation function has effect. It is possible to explain the link between this and the former requirements by relating the extended and conventional Euler-Lagrange equations: grouping the extended terms into the potential function produces a velocity dependent potential. Hence, the requirements are not satisfied when a dissipation function has effect.
== Hamiltonian of a charged particle in an electromagnetic field ==
A sufficient illustration of Hamiltonian mechanics is given by the Hamiltonian of a charged particle in an electromagnetic field. In Cartesian coordinates the Lagrangian of a non-relativistic classical particle in an electromagnetic field is (in SI Units):
{\displaystyle {\mathcal {L}}=\sum _{i}{\tfrac {1}{2}}m{\dot {x}}_{i}^{2}+\sum _{i}q{\dot {x}}_{i}A_{i}-q\varphi ,}
where q is the electric charge of the particle, φ is the electric scalar potential, and the Ai are the components of the magnetic vector potential, all of which may explicitly depend on {\displaystyle x_{i}} and {\displaystyle t}.
This Lagrangian, combined with the Euler–Lagrange equation, produces the Lorentz force law
{\displaystyle m{\ddot {\mathbf {x} }}=q\mathbf {E} +q{\dot {\mathbf {x} }}\times \mathbf {B} \,,}
and is called minimal coupling.
The canonical momenta are given by:
{\displaystyle p_{i}={\frac {\partial {\mathcal {L}}}{\partial {\dot {x}}_{i}}}=m{\dot {x}}_{i}+qA_{i}.}
The Hamiltonian, as the Legendre transformation of the Lagrangian, is therefore:
{\displaystyle {\mathcal {H}}=\sum _{i}{\dot {x}}_{i}p_{i}-{\mathcal {L}}=\sum _{i}{\frac {\left(p_{i}-qA_{i}\right)^{2}}{2m}}+q\varphi .}
This equation is used frequently in quantum mechanics.
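The Legendre transformation above can be checked symbolically. The following sketch uses a simplifying assumption of a single Cartesian component, with A and phi standing for that component of the vector potential and the scalar potential:
import sympy as sp

m, q = sp.symbols('m q', positive=True)
x, t, xdot, p = sp.symbols('x t xdot p', real=True)
A = sp.Function('A')(x, t)      # one component of the magnetic vector potential
phi = sp.Function('phi')(x, t)  # electric scalar potential

L = sp.Rational(1, 2) * m * xdot**2 + q * xdot * A - q * phi
p_def = sp.diff(L, xdot)                         # canonical momentum: m*xdot + q*A
xdot_of_p = sp.solve(sp.Eq(p, p_def), xdot)[0]   # invert for xdot
H = sp.simplify((xdot * p - L).subs(xdot, xdot_of_p))

print(sp.simplify(H - ((p - q * A)**2 / (2 * m) + q * phi)))  # 0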
Under gauge transformation:
{\displaystyle \mathbf {A} \rightarrow \mathbf {A} +\nabla f\,,\quad \varphi \rightarrow \varphi -{\dot {f}}\,,}
where f(r, t) is any scalar function of space and time. The aforementioned Lagrangian, the canonical momenta, and the Hamiltonian transform like:
{\displaystyle L\rightarrow L'=L+q{\frac {df}{dt}}\,,\quad \mathbf {p} \rightarrow \mathbf {p'} =\mathbf {p} +q\nabla f\,,\quad H\rightarrow H'=H-q{\frac {\partial f}{\partial t}}\,,}
which still produces the same Hamilton's equation:
{\displaystyle {\begin{aligned}\left.{\frac {\partial H'}{\partial {x_{i}}}}\right|_{p'_{i}}&=\left.{\frac {\partial }{\partial {x_{i}}}}\right|_{p'_{i}}({\dot {x}}_{i}p'_{i}-L')=-\left.{\frac {\partial L'}{\partial {x_{i}}}}\right|_{p'_{i}}\\&=-\left.{\frac {\partial L}{\partial {x_{i}}}}\right|_{p'_{i}}-q\left.{\frac {\partial }{\partial {x_{i}}}}\right|_{p'_{i}}{\frac {df}{dt}}\\&=-{\frac {d}{dt}}\left(\left.{\frac {\partial L}{\partial {{\dot {x}}_{i}}}}\right|_{p'_{i}}+q\left.{\frac {\partial f}{\partial {x_{i}}}}\right|_{p'_{i}}\right)\\&=-{\dot {p}}'_{i}\end{aligned}}}
In quantum mechanics, the wave function will also undergo a local U(1) group transformation during the gauge transformation, which implies that all physical results must be invariant under local U(1) transformations.
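That the gauge term q·df/dt leaves the classical equations of motion unchanged can be verified symbolically. This is a sketch under simplifying assumptions: one spatial dimension, a harmonic potential, and a concrete gauge function f(x, t) = x²t chosen purely for illustration.
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
m, q, k = sp.symbols('m q k', positive=True)
x = sp.Function('x')(t)

f = x**2 * t                                      # illustrative gauge function f(x, t)
L = sp.Rational(1, 2) * m * x.diff(t)**2 - sp.Rational(1, 2) * k * x**2
L_prime = L + q * f.diff(t)                       # L' = L + q df/dt (a total time derivative)

eq = euler_equations(L, [x], [t])[0]
eq_prime = euler_equations(L_prime, [x], [t])[0]
print(sp.simplify(eq.lhs - eq_prime.lhs))         # 0: identical equation of motion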
=== Relativistic charged particle in an electromagnetic field ===
The relativistic Lagrangian for a particle (rest mass {\displaystyle m} and charge {\displaystyle q}) is given by:
{\displaystyle {\mathcal {L}}(t)=-mc^{2}{\sqrt {1-{\frac {{{\dot {\mathbf {x} }}(t)}^{2}}{c^{2}}}}}+q{\dot {\mathbf {x} }}(t)\cdot \mathbf {A} \left(\mathbf {x} (t),t\right)-q\varphi \left(\mathbf {x} (t),t\right)}
Thus the particle's canonical momentum is
{\displaystyle \mathbf {p} (t)={\frac {\partial {\mathcal {L}}}{\partial {\dot {\mathbf {x} }}}}={\frac {m{\dot {\mathbf {x} }}}{\sqrt {1-{\frac {{\dot {\mathbf {x} }}^{2}}{c^{2}}}}}}+q\mathbf {A} }
that is, the sum of the kinetic momentum and the potential momentum.
Solving for the velocity, we get
{\displaystyle {\dot {\mathbf {x} }}(t)={\frac {\mathbf {p} -q\mathbf {A} }{\sqrt {m^{2}+{\frac {1}{c^{2}}}{\left(\mathbf {p} -q\mathbf {A} \right)}^{2}}}}}
So the Hamiltonian is
{\displaystyle {\mathcal {H}}(t)={\dot {\mathbf {x} }}\cdot \mathbf {p} -{\mathcal {L}}=c{\sqrt {m^{2}c^{2}+{\left(\mathbf {p} -q\mathbf {A} \right)}^{2}}}+q\varphi }
This results in the force equation (equivalent to the Euler–Lagrange equation)
{\displaystyle {\dot {\mathbf {p} }}=-{\frac {\partial {\mathcal {H}}}{\partial \mathbf {x} }}=q{\dot {\mathbf {x} }}\cdot ({\boldsymbol {\nabla }}\mathbf {A} )-q{\boldsymbol {\nabla }}\varphi =q{\boldsymbol {\nabla }}({\dot {\mathbf {x} }}\cdot \mathbf {A} )-q{\boldsymbol {\nabla }}\varphi }
from which one can derive
{\displaystyle {\begin{aligned}{\frac {\mathrm {d} }{\mathrm {d} t}}\left({\frac {m{\dot {\mathbf {x} }}}{\sqrt {1-{\frac {{\dot {\mathbf {x} }}^{2}}{c^{2}}}}}}\right)&={\frac {\mathrm {d} }{\mathrm {d} t}}(\mathbf {p} -q\mathbf {A} )={\dot {\mathbf {p} }}-q{\frac {\partial \mathbf {A} }{\partial t}}-q({\dot {\mathbf {x} }}\cdot \nabla )\mathbf {A} \\&=q{\boldsymbol {\nabla }}({\dot {\mathbf {x} }}\cdot \mathbf {A} )-q{\boldsymbol {\nabla }}\varphi -q{\frac {\partial \mathbf {A} }{\partial t}}-q({\dot {\mathbf {x} }}\cdot \nabla )\mathbf {A} \\&=q\mathbf {E} +q{\dot {\mathbf {x} }}\times \mathbf {B} \end{aligned}}}
The above derivation makes use of the vector calculus identity:
{\displaystyle {\tfrac {1}{2}}\nabla \left(\mathbf {A} \cdot \mathbf {A} \right)=\mathbf {A} \cdot \mathbf {J} _{\mathbf {A} }=\mathbf {A} \cdot (\nabla \mathbf {A} )=(\mathbf {A} \cdot \nabla )\mathbf {A} +\mathbf {A} \times (\nabla \times \mathbf {A} ).}
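The identity can be spot-checked with symbolic vector calculus; the sketch below assumes an arbitrary polynomial field standing in for A, and all three components of the difference between the two sides reduce to zero.
import sympy as sp
from sympy.vector import CoordSys3D, gradient, curl

N = CoordSys3D('N')
# an arbitrary polynomial vector field standing in for A(x, y, z)
A = (N.x * N.y) * N.i + (N.y * N.z**2) * N.j + (N.x + N.z**3) * N.k

def advective(u, v):
    # (u . nabla) v, computed component by component
    cx = u.dot(gradient(v.dot(N.i)))
    cy = u.dot(gradient(v.dot(N.j)))
    cz = u.dot(gradient(v.dot(N.k)))
    return cx * N.i + cy * N.j + cz * N.k

lhs = sp.Rational(1, 2) * gradient(A.dot(A))
rhs = advective(A, A) + A.cross(curl(A))
delta = lhs - rhs
print([sp.simplify(delta.dot(e)) for e in (N.i, N.j, N.k)])  # [0, 0, 0]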
An equivalent expression for the Hamiltonian as function of the relativistic (kinetic) momentum,
{\displaystyle \mathbf {P} =\gamma m{\dot {\mathbf {x} }}(t)=\mathbf {p} -q\mathbf {A} }, is
{\displaystyle {\mathcal {H}}(t)={\dot {\mathbf {x} }}(t)\cdot \mathbf {P} (t)+{\frac {mc^{2}}{\gamma }}+q\varphi (\mathbf {x} (t),t)=\gamma mc^{2}+q\varphi (\mathbf {x} (t),t)=E+V}
This has the advantage that kinetic momentum {\displaystyle \mathbf {P} } can be measured experimentally whereas canonical momentum {\displaystyle \mathbf {p} } cannot. Notice that the Hamiltonian (total energy) can be viewed as the sum of the relativistic energy (kinetic + rest), {\displaystyle E=\gamma mc^{2}}, plus the potential energy, {\displaystyle V=q\varphi }.
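A quick numerical check of the equality c√(m²c² + P²) + qφ = γmc² + qφ, using illustrative electron-scale values (the particular numbers are arbitrary choices for the sketch):
import numpy as np

c = 299792458.0          # speed of light, m/s
m = 9.1093837e-31        # rest mass of an electron, kg
q = -1.602176634e-19     # electric charge, C
phi = 1.0e3              # an arbitrary scalar potential value, V
v = 0.6 * c              # chosen particle speed

gamma = 1.0 / np.sqrt(1.0 - (v / c)**2)
P = gamma * m * v                                    # relativistic (kinetic) momentum
H_from_P = c * np.sqrt((m * c)**2 + P**2) + q * phi
H_direct = gamma * m * c**2 + q * phi
print(np.isclose(H_from_P, H_direct))                # True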
== From symplectic geometry to Hamilton's equations ==
=== Geometry of Hamiltonian systems ===
The Hamiltonian can induce a symplectic structure on a smooth even-dimensional manifold M2n in several equivalent ways, the best known being the following:
As a closed nondegenerate symplectic 2-form ω. According to Darboux's theorem, in a small neighbourhood around any point on M there exist suitable local coordinates {\displaystyle p_{1},\cdots ,p_{n},\ q_{1},\cdots ,q_{n}} (canonical or symplectic coordinates) in which the symplectic form becomes:
{\displaystyle \omega =\sum _{i=1}^{n}dp_{i}\wedge dq_{i}\,.}
The form {\displaystyle \omega } induces a natural isomorphism of the tangent space with the cotangent space: {\displaystyle T_{x}M\cong T_{x}^{*}M}. This is done by mapping a vector {\displaystyle \xi \in T_{x}M} to the 1-form {\displaystyle \omega _{\xi }\in T_{x}^{*}M}, where {\displaystyle \omega _{\xi }(\eta )=\omega (\eta ,\xi )} for all {\displaystyle \eta \in T_{x}M}. Due to the bilinearity and non-degeneracy of {\displaystyle \omega }, and the fact that {\displaystyle \dim T_{x}M=\dim T_{x}^{*}M}, the mapping {\displaystyle \xi \to \omega _{\xi }} is indeed a linear isomorphism. This isomorphism is natural in that it does not change with change of coordinates on {\displaystyle M}.
Repeating over all {\displaystyle x\in M}, we end up with an isomorphism {\displaystyle J^{-1}:{\text{Vect}}(M)\to \Omega ^{1}(M)} between the infinite-dimensional space of smooth vector fields and that of smooth 1-forms. For every {\displaystyle f,g\in C^{\infty }(M,\mathbb {R} )} and {\displaystyle \xi ,\eta \in {\text{Vect}}(M)},
{\displaystyle J^{-1}(f\xi +g\eta )=fJ^{-1}(\xi )+gJ^{-1}(\eta ).}
(In algebraic terms, one would say that the {\displaystyle C^{\infty }(M,\mathbb {R} )}-modules {\displaystyle {\text{Vect}}(M)} and {\displaystyle \Omega ^{1}(M)} are isomorphic). If {\displaystyle H\in C^{\infty }(M\times \mathbb {R} _{t},\mathbb {R} )}, then, for every fixed {\displaystyle t\in \mathbb {R} _{t}}, {\displaystyle dH\in \Omega ^{1}(M)}, and {\displaystyle J(dH)\in {\text{Vect}}(M)}.
{\displaystyle J(dH)} is known as a Hamiltonian vector field. The respective differential equation on {\displaystyle M},
{\displaystyle {\dot {x}}=J(dH)(x)}
is called Hamilton's equation. Here {\displaystyle x=x(t)} and {\displaystyle J(dH)(x)\in T_{x}M} is the (time-dependent) value of the vector field {\displaystyle J(dH)} at {\displaystyle x\in M}.
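In Darboux coordinates the map J acts by multiplication with the standard symplectic matrix, so Hamilton's equation can be sketched numerically. The example below assumes one degree of freedom, the ordering z = (p, q), and H = p²/2m + kq²/2; the ordering and sign conventions are choices made for the sketch.
import numpy as np

n = 1
# standard symplectic matrix for z = (p_1..p_n, q_1..q_n): zdot = J_mat @ grad(H)
J_mat = np.block([[np.zeros((n, n)), -np.eye(n)],
                  [np.eye(n), np.zeros((n, n))]])

m, k = 1.0, 1.0

def grad_H(z):
    p, q = z
    return np.array([p / m, k * q])   # (dH/dp, dH/dq) for H = p**2/(2m) + k*q**2/2

z = np.array([0.3, 1.0])              # (p, q)
print(J_mat @ grad_H(z))              # [-k*q, p/m], i.e. pdot = -dH/dq and qdot = dH/dp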
A Hamiltonian system may be understood as a fiber bundle E over time R, with the fiber Et being the position space at time t ∈ R. The Lagrangian is thus a function on the jet bundle J over E; taking the fiberwise Legendre transform of the Lagrangian produces a function on the dual bundle over time whose fiber at t is the cotangent space T∗Et, which comes equipped with a natural symplectic form, and this latter function is the Hamiltonian. The correspondence between Lagrangian and Hamiltonian mechanics is achieved with the tautological one-form.
Any smooth real-valued function H on a symplectic manifold can be used to define a Hamiltonian system. The function H is known as "the Hamiltonian" or "the energy function." The symplectic manifold is then called the phase space. The Hamiltonian induces a special vector field on the symplectic manifold, known as the Hamiltonian vector field.
The Hamiltonian vector field induces a Hamiltonian flow on the manifold. This is a one-parameter family of transformations of the manifold (the parameter of the curves is commonly called "the time"); in other words, an isotopy of symplectomorphisms, starting with the identity. By Liouville's theorem, each symplectomorphism preserves the volume form on the phase space. The collection of symplectomorphisms induced by the Hamiltonian flow is commonly called "the Hamiltonian mechanics" of the Hamiltonian system.
The symplectic structure induces a Poisson bracket. The Poisson bracket gives the space of functions on the manifold the structure of a Lie algebra.
If F and G are smooth functions on M then the smooth function ω(J(dF), J(dG)) is properly defined; it is called a Poisson bracket of functions F and G and is denoted {F, G}. The Poisson bracket has the following properties:
bilinearity
antisymmetry
Leibniz rule:
{\displaystyle \{F_{1}\cdot F_{2},G\}=F_{1}\{F_{2},G\}+F_{2}\{F_{1},G\}}
Jacobi identity:
{\displaystyle \{\{H,F\},G\}+\{\{F,G\},H\}+\{\{G,H\},F\}\equiv 0}
non-degeneracy: if the point x on M is not critical for F then a smooth function G exists such that
{\displaystyle \{F,G\}(x)\neq 0}.
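For the canonical bracket in Darboux coordinates these algebraic properties can be checked symbolically. A minimal sketch with one degree of freedom and arbitrarily chosen test functions:
import sympy as sp

q, p = sp.symbols('q p', real=True)

def pb(F, G):
    # canonical Poisson bracket {F, G} for a single degree of freedom
    return sp.diff(F, q) * sp.diff(G, p) - sp.diff(F, p) * sp.diff(G, q)

F1, F2, G, H = q**2, sp.sin(p), q * p, p**2 / 2 + q**4  # arbitrary test functions

# the Leibniz rule and the Jacobi identity both reduce to zero:
print(sp.simplify(pb(F1 * F2, G) - (F1 * pb(F2, G) + F2 * pb(F1, G))))
print(sp.simplify(pb(pb(H, F1), G) + pb(pb(F1, G), H) + pb(pb(G, H), F1)))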
Given a function f
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}f={\frac {\partial }{\partial t}}f+\left\{f,{\mathcal {H}}\right\},}
if there is a probability distribution ρ, then (since the phase space velocity {\displaystyle ({\dot {p}}_{i},{\dot {q}}_{i})} has zero divergence and probability is conserved) its convective derivative can be shown to be zero and so
{\displaystyle {\frac {\partial }{\partial t}}\rho =-\left\{\rho ,{\mathcal {H}}\right\}}
This is called Liouville's theorem. Every smooth function G over the symplectic manifold generates a one-parameter family of symplectomorphisms and if {G, H} = 0, then G is conserved and the symplectomorphisms are symmetry transformations.
A Hamiltonian may have multiple conserved quantities Gi. If the symplectic manifold has dimension 2n and there are n functionally independent conserved quantities Gi which are in involution (i.e., {Gi, Gj} = 0), then the Hamiltonian is Liouville integrable. The Liouville–Arnold theorem says that, locally, any Liouville integrable Hamiltonian can be transformed via a symplectomorphism into a new Hamiltonian with the conserved quantities Gi as coordinates; the new coordinates are called action–angle coordinates. The transformed Hamiltonian depends only on the Gi, and hence the equations of motion have the simple form
{\displaystyle {\dot {G}}_{i}=0\quad ,\quad {\dot {\varphi }}_{i}=F_{i}(G)}
for some function F. There is an entire field focusing on small deviations from integrable systems governed by the KAM theorem.
The integrability of Hamiltonian vector fields is an open question. In general, Hamiltonian systems are chaotic; concepts of measure, completeness, integrability and stability are poorly defined.
=== Riemannian manifolds ===
An important special case consists of those Hamiltonians that are quadratic forms, that is, Hamiltonians that can be written as
{\displaystyle {\mathcal {H}}(q,p)={\tfrac {1}{2}}\langle p,p\rangle _{q}}
where ⟨ , ⟩q is a smoothly varying inner product on the fibers T∗qQ, the cotangent space to the point q in the configuration space, sometimes called a cometric. This Hamiltonian consists entirely of the kinetic term.
If one considers a Riemannian manifold or a pseudo-Riemannian manifold, the Riemannian metric induces a linear isomorphism between the tangent and cotangent bundles. (See Musical isomorphism). Using this isomorphism, one can define a cometric. (In coordinates, the matrix defining the cometric is the inverse of the matrix defining the metric.) The solutions to the Hamilton–Jacobi equations for this Hamiltonian are then the same as the geodesics on the manifold. In particular, the Hamiltonian flow in this case is the same thing as the geodesic flow. The existence of such solutions, and the completeness of the set of solutions, are discussed in detail in the article on geodesics. See also Geodesics as Hamiltonian flows.
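As a concrete sketch of the cometric construction, assume the flat Euclidean metric written in polar coordinates; the kinetic Hamiltonian is built from the inverse of the metric matrix, and its Hamiltonian flow reproduces the straight-line geodesics of the plane.
import sympy as sp

r = sp.symbols('r', positive=True)
theta, p_r, p_theta = sp.symbols('theta p_r p_theta', real=True)

g = sp.Matrix([[1, 0], [0, r**2]])   # Euclidean metric in polar coordinates (r, theta)
g_inv = g.inv()                      # the cometric: inverse of the metric matrix

p = sp.Matrix([p_r, p_theta])
H = sp.Rational(1, 2) * (p.T * g_inv * p)[0, 0]
print(sp.simplify(H))                # p_r**2/2 + p_theta**2/(2*r**2)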
=== Sub-Riemannian manifolds ===
When the cometric is degenerate, then it is not invertible. In this case, one does not have a Riemannian manifold, as one does not have a metric. However, the Hamiltonian still exists. In the case where the cometric is degenerate at every point q of the configuration space manifold Q, so that the rank of the cometric is less than the dimension of the manifold Q, one has a sub-Riemannian manifold.
The Hamiltonian in this case is known as a sub-Riemannian Hamiltonian. Every such Hamiltonian uniquely determines the cometric, and vice versa. This implies that every sub-Riemannian manifold is uniquely determined by its sub-Riemannian Hamiltonian, and that the converse is true: every sub-Riemannian manifold has a unique sub-Riemannian Hamiltonian. The existence of sub-Riemannian geodesics is given by the Chow–Rashevskii theorem.
The continuous, real-valued Heisenberg group provides a simple example of a sub-Riemannian manifold. For the Heisenberg group, the Hamiltonian is given by
{\displaystyle {\mathcal {H}}\left(x,y,z,p_{x},p_{y},p_{z}\right)={\tfrac {1}{2}}\left(p_{x}^{2}+p_{y}^{2}\right).}
pz is not involved in the Hamiltonian.
=== Poisson algebras ===
Hamiltonian systems can be generalized in various ways. Instead of simply looking at the algebra of smooth functions over a symplectic manifold, Hamiltonian mechanics can be formulated on general commutative unital real Poisson algebras. A state is a continuous linear functional on the Poisson algebra (equipped with some suitable topology) such that for any element A of the algebra, A2 maps to a nonnegative real number.
A further generalization is given by Nambu dynamics.
=== Generalization to quantum mechanics through Poisson bracket ===
Hamilton's equations above work well for classical mechanics, but not for quantum mechanics, since the differential equations discussed assume that one can specify the exact position and momentum of the particle simultaneously at any point in time. However, the equations can be further generalized to then be extended to apply to quantum mechanics as well as to classical mechanics, through the deformation of the Poisson algebra over p and q to the algebra of Moyal brackets.
Specifically, the more general form of Hamilton's equation reads
{\displaystyle {\frac {\mathrm {d} f}{\mathrm {d} t}}=\left\{f,{\mathcal {H}}\right\}+{\frac {\partial f}{\partial t}},}
where f is some function of p and q, and H is the Hamiltonian. To find out the rules for evaluating a Poisson bracket without resorting to differential equations, see Lie algebra; a Poisson bracket is the name for the Lie bracket in a Poisson algebra. These Poisson brackets can then be extended to Moyal brackets comporting to an inequivalent Lie algebra, as proven by Hilbrand J. Groenewold, and thereby describe quantum mechanical diffusion in phase space (See Phase space formulation and Wigner–Weyl transform). This more algebraic approach not only permits ultimately extending probability distributions in phase space to Wigner quasi-probability distributions, but, at the mere Poisson bracket classical setting, also provides more power in helping analyze the relevant conserved quantities in a system.
== See also ==
== References ==
== Further reading ==
== External links ==
Binney, James J., Classical Mechanics (lecture notes) (PDF), University of Oxford, retrieved 27 October 2010
Tong, David, Classical Dynamics (Cambridge lecture notes), University of Cambridge, retrieved 27 October 2010
Hamilton, William Rowan, On a General Method in Dynamics, Trinity College Dublin
Malham, Simon J.A. (2016), An introduction to Lagrangian and Hamiltonian mechanics (lecture notes) (PDF)
Morin, David (2008), Introduction to Classical Mechanics (Additional material: The Hamiltonian method) (PDF) | Wikipedia/Hamiltonian_function |
Environmental science is an interdisciplinary academic field that integrates physics, biology, meteorology, mathematics and geography (including ecology, chemistry, plant science, zoology, mineralogy, oceanography, limnology, soil science, geology and physical geography, and atmospheric science) into the study of the environment and the solution of environmental problems. Environmental science emerged from the fields of natural history and medicine during the Enlightenment. Today it provides an integrated, quantitative, and interdisciplinary approach to the study of environmental systems.
Environmental scientists seek to understand the earth's physical, chemical, biological, and geological processes, and to use that knowledge to understand how issues such as alternative energy systems, pollution control and mitigation, natural resource management, and the effects of global warming and climate change influence and affect the natural systems and processes of earth.
Environmental issues almost always include an interaction of physical, chemical, and biological processes. Environmental scientists bring a systems approach to the analysis of environmental problems. Key skills of an effective environmental scientist include the ability to relate spatial and temporal relationships as well as quantitative analysis.
Environmental science came alive as a substantive, active field of scientific investigation in the 1960s and 1970s driven by (a) the need for a multi-disciplinary approach to analyze complex environmental problems, (b) the arrival of substantive environmental laws requiring specific environmental protocols of investigation and (c) the growing public awareness of a need for action in addressing environmental problems. Events that spurred this development included the publication of Rachel Carson's landmark environmental book Silent Spring, along with major environmental issues becoming very public, such as the 1969 Santa Barbara oil spill and the Cuyahoga River of Cleveland, Ohio, "catching fire" (also in 1969); these events helped increase the visibility of environmental issues and create this new field of study.
== Terminology ==
In common usage, "environmental science" and "ecology" are often used interchangeably, but technically, ecology refers only to the study of organisms and their interactions with each other as well as how they interrelate with environment. Ecology could be considered a subset of environmental science, which also could involve purely chemical or public health issues (for example) ecologists would be unlikely to study. In practice, there are considerable similarities between the work of ecologists and other environmental scientists. There is substantial overlap between ecology and environmental science with the disciplines of fisheries, forestry, and wildlife.
Environmental studies incorporates more of the social sciences for understanding human relationships, perceptions and policies towards the environment. Environmental engineering focuses on design and technology for improving environmental quality in every aspect.
== History ==
=== Ancient civilizations ===
Historical concern for environmental issues is well documented in archives around the world. Ancient civilizations were mainly concerned with what is now known as environmental science insofar as it related to agriculture and natural resources. Scholars believe that early interest in the environment began around 6000 BCE when ancient civilizations in Israel and Jordan collapsed due to deforestation. As a result, in 2700 BCE the first legislation limiting deforestation was established in Mesopotamia. Two hundred years later, in 2500 BCE, a community residing in the Indus River Valley observed the nearby river system in order to improve sanitation. This involved manipulating the flow of water to account for public health. In the Western Hemisphere, numerous ancient Central American city-states collapsed around 1500 BCE due to soil erosion from intensive agriculture. Those remaining from these civilizations paid greater attention to the impact of farming practices on the sustainability of the land and its stable food production. Furthermore, in 1450 BCE the Minoan civilization on the Greek island of Crete declined due to deforestation and the resulting environmental degradation of natural resources. Pliny the Elder somewhat addressed the environmental concerns of ancient civilizations in the text Naturalis Historia, written between 77 and 79 CE, which provided an overview of many related subsets of the discipline.
Although warfare and disease were of primary concern in ancient society, environmental issues played a crucial role in the survival and power of different civilizations. As more communities recognized the importance of the natural world to their long-term success, an interest in studying the environment came into existence.
=== Beginnings of environmental science ===
==== 18th century ====
In 1735, the concept of binomial nomenclature was introduced by Carolus Linnaeus as a way to classify all living organisms, influenced by earlier works of Aristotle. His text, Systema Naturae, represents one of the earliest culminations of knowledge on the subject, providing a means to identify different species based partially on how they interact with their environment.
==== 19th century ====
In the 1820s, scientists were studying the properties of gases, particularly those in the Earth's atmosphere and their interactions with heat from the Sun. Later that century, studies suggested that the Earth had experienced an Ice Age and that warming of the Earth was partially due to what are now known as greenhouse gases (GHG). The greenhouse effect was introduced, although climate science was not yet recognized as an important topic in environmental science due to minimal industrialization and lower rates of greenhouse gas emissions at the time.
==== 20th century ====
In the 1900s, the discipline of environmental science as it is known today began to take shape. The century is marked by significant research, literature, and international cooperation in the field.
In the early 20th century, criticism from dissenters downplayed the effects of global warming. At this time, few researchers were studying the dangers of fossil fuels. After a 1.3 degree Celsius temperature anomaly was found in the Atlantic Ocean in the 1940s, however, scientists renewed their studies of gaseous heat trapping from the greenhouse effect (although only carbon dioxide and water vapor were known to be greenhouse gases then). Nuclear development following the Second World War allowed environmental scientists to intensively study the effects of carbon and make advancements in the field. Further knowledge from archaeological evidence, particularly ice core sampling, brought to light the changes in climate over time.
Environmental science was brought to the forefront of society in 1962 when Rachel Carson published an influential piece of environmental literature, Silent Spring. Carson's writing led the American public to pursue environmental safeguards, such as bans on harmful chemicals like the insecticide DDT. Another important work, The Tragedy of the Commons, was published by Garrett Hardin in 1968 in response to accelerating natural degradation. In 1969, environmental science once again became a household term after two striking disasters: Ohio's Cuyahoga River caught fire due to the amount of pollution in its waters and a Santa Barbara oil spill endangered thousands of marine animals, both receiving prolific media coverage. Consequently, the United States passed an abundance of legislation, including the Clean Water Act and the Great Lakes Water Quality Agreement. The following year, in 1970, the first ever Earth Day was celebrated worldwide and the United States Environmental Protection Agency (EPA) was formed, legitimizing the study of environmental science in government policy. In the next two years, the United Nations created the United Nations Environment Programme (UNEP) in Stockholm, Sweden to address global environmental degradation.
Much of the interest in environmental science throughout the 1970s and the 1980s was characterized by major disasters and social movements. In 1978, hundreds of people were relocated from Love Canal, New York after carcinogenic pollutants were found to be buried underground near residential areas. The next year, in 1979, the nuclear power plant on Three Mile Island in Pennsylvania suffered a meltdown and raised concerns about the dangers of radioactive waste and the safety of nuclear energy. In response to landfills and toxic waste often disposed of near their homes, the official Environmental Justice Movement was started by a Black community in North Carolina in 1982. Two years later, toxic methyl isocyanate gas was released in an industrial disaster at a pesticide plant in Bhopal, India, harming hundreds of thousands of people living near the disaster site, the effects of which are still felt today. In a groundbreaking discovery in 1985, a British team of researchers studying Antarctica found evidence of a hole in the ozone layer, inspiring global agreements banning the use of chlorofluorocarbons (CFCs), which were previously used in nearly all aerosols and refrigerants. Notably, in 1986, the meltdown at the Chernobyl nuclear power plant in Ukraine released radioactive waste to the public, leading to international studies on the ramifications of environmental disasters. Over the next couple of years, the Brundtland Commission (previously known as the World Commission on Environment and Development) published a report titled Our Common Future, the Montreal Protocol was adopted, and the Intergovernmental Panel on Climate Change (IPCC) was formed as international communication focused on finding solutions for climate change and degradation. In the late 1980s, Exxon was fined after its tanker Exxon Valdez spilled large quantities of crude oil off the coast of Alaska, and the resulting cleanup involved the work of environmental scientists. After hundreds of oil wells were burned in combat in 1991, warfare between Iraq and Kuwait polluted the surrounding atmosphere to just below the air quality threshold environmental scientists believed was life-threatening.
==== 21st century ====
Many niche disciplines of environmental science have emerged over the years, although climatology is one of the best known topics. Since the 2000s, environmental scientists have focused on modeling the effects of climate change and encouraging global cooperation to minimize potential damages. In 2002, the Society for the Environment as well as the Institute of Air Quality Management were founded to share knowledge and develop solutions around the world. Later, in 2008, the United Kingdom became the first country to pass legislation (the Climate Change Act) that aims to reduce carbon dioxide output to a specified threshold. In 2016 the Paris Agreement, the successor to the Kyoto Protocol, entered into force; it sets concrete goals to reduce greenhouse gas emissions and restricts the rise in Earth's temperature to a maximum of 2 degrees Celsius. The agreement is one of the most expansive international efforts to limit the effects of global warming to date.
Most environmental disasters in this time period involve crude oil pollution or the effects of rising temperatures. In 2010, BP was responsible for the largest American oil spill in the Gulf of Mexico, known as the Deepwater Horizon spill, which killed a number of the company's workers and released large amounts of crude oil into the water. Furthermore, throughout this century, much of the world has been ravaged by widespread wildfires and water scarcity, prompting regulations on the sustainable use of natural resources as determined by environmental scientists.
The 21st century is marked by significant technological advancements. New technology in environmental science has transformed how researchers gather information about various topics in the field. Research in engines, fuel efficiency, and decreasing emissions from vehicles since the times of the Industrial Revolution has reduced the amount of carbon and other pollutants into the atmosphere. Furthermore, investment in researching and developing clean energy (i.e. wind, solar, hydroelectric, and geothermal power) has significantly increased in recent years, indicating the beginnings of the divestment from fossil fuel use. Geographic information systems (GIS) are used to observe sources of air or water pollution through satellites and digital imagery analysis. This technology allows for advanced farming techniques like precision agriculture as well as monitoring water usage in order to set market prices. In the field of water quality, developed strains of natural and manmade bacteria contribute to bioremediation, the treatment of wastewaters for future use. This method is more eco-friendly and cheaper than manual cleanup or treatment of wastewaters. Most notably, the expansion of computer technology has allowed for large data collection, advanced analysis, historical archives, public awareness of environmental issues, and international scientific communication. The ability to crowdsource on the Internet, for example, represents the process of collectivizing knowledge from researchers around the world to create increased opportunity for scientific progress. With crowdsourcing, data is released to the public for personal analyses which can later be shared as new information is found. Another technological development, blockchain technology, monitors and regulates global fisheries. By tracking the path of fish through global markets, environmental scientists can observe whether certain species are being overharvested to the point of extinction. Additionally, remote sensing allows for the detection of features of the environment without physical intervention. The resulting digital imagery is used to create increasingly accurate models of environmental processes, climate change, and much more. Advancements to remote sensing technology are particularly useful in locating the nonpoint sources of pollution and analyzing ecosystem health through image analysis across the electromagnetic spectrum. Lastly, thermal imaging technology is used in wildlife management to catch and discourage poachers and other illegal wildlife traffickers from killing endangered animals, proving useful for conservation efforts. Artificial intelligence has also been used to predict the movement of animal populations and protect the habitats of wildlife.
== Components ==
=== Atmospheric sciences ===
Atmospheric sciences focus on the Earth's atmosphere, with an emphasis upon its interrelation to other systems. Atmospheric sciences can include studies of meteorology, greenhouse gas phenomena, atmospheric dispersion modeling of airborne contaminants, sound propagation phenomena related to noise pollution, and even light pollution.
Taking the example of the global warming phenomena, physicists create computer models of atmospheric circulation and infrared radiation transmission, chemists examine the inventory of atmospheric chemicals and their reactions, biologists analyze the plant and animal contributions to carbon dioxide fluxes, and specialists such as meteorologists and oceanographers add additional breadth in understanding the atmospheric dynamics.
=== Ecology ===
As defined by the Ecological Society of America, "Ecology is the study of the relationships between living organisms, including humans, and their physical environment; it seeks to understand the vital connections between plants and animals and the world around them." Ecologists might investigate the relationship between a population of organisms and some physical characteristic of their environment, such as concentration of a chemical; or they might investigate the interaction between two populations of different organisms through some symbiotic or competitive relationship. For example, an interdisciplinary analysis of an ecological system which is being impacted by one or more stressors might include several related environmental science fields. In an estuarine setting where a proposed industrial development could impact certain species by water and air pollution, biologists would describe the flora and fauna, chemists would analyze the transport of water pollutants to the marsh, physicists would calculate air pollution emissions and geologists would assist in understanding the marsh soils and bay muds.
=== Environmental chemistry ===
Environmental chemistry is the study of chemical alterations in the environment. Principal areas of study include soil contamination and water pollution. The topics of analysis include chemical degradation in the environment, multi-phase transport of chemicals (for example, evaporation of solvent from a solvent-containing lake to yield solvent as an air pollutant), and chemical effects upon biota.
As an example study, consider the case of a leaking solvent tank which has entered the habitat soil of an endangered species of amphibian. As a method to resolve or understand the extent of soil contamination and subsurface transport of solvent, a computer model would be implemented. Chemists would then characterize the molecular bonding of the solvent to the specific soil type, and biologists would study the impacts upon soil arthropods, plants, and ultimately pond-dwelling organisms that are the food of the endangered amphibian.
=== Geosciences ===
Geosciences include environmental geology, environmental soil science, volcanic phenomena and evolution of the Earth's crust. In some classification systems this can also include hydrology, including oceanography.
As an example study of soil erosion, soil scientists would calculate surface runoff. Fluvial geomorphologists would assist in examining sediment transport in overland flow. Physicists would contribute by assessing the changes in light transmission in the receiving waters. Biologists would analyze subsequent impacts to aquatic flora and fauna from increases in water turbidity.
== Regulations driving the studies ==
In the United States the National Environmental Policy Act (NEPA) of 1969 set forth requirements for analysis of federal government actions (such as highway construction projects and land management decisions) in terms of specific environmental criteria. Numerous state laws have echoed these mandates, applying the principles to local-scale actions. The upshot has been an explosion of documentation and study of environmental consequences before the fact of development actions.
One can examine the specifics of environmental science by reading examples of Environmental Impact Statements prepared under NEPA such as: Wastewater treatment expansion options discharging into the San Diego/Tijuana Estuary, Expansion of the San Francisco International Airport, Development of the Houston, Metro Transportation system, Expansion of the metropolitan Boston MBTA transit system, and Construction of Interstate 66 through Arlington, Virginia.
In England and Wales the Environment Agency (EA), formed in 1996, is a public body for protecting and improving the environment; it enforces the regulations listed on the communities and local government site (formerly the Office of the Deputy Prime Minister). The agency was set up under the Environment Act 1995 as an independent body and works closely with the UK Government to enforce the regulations.
== See also ==
Environmental engineering science
Environmental informatics
Environmental monitoring
Environmental planning
Environmental statistics
Glossary of environmental science
List of environmental studies topics
== References ==
== External links ==
Glossary of environmental terms – Global Development Research Center | Wikipedia/Environmental_physics |
Particle physics or high-energy physics is the study of fundamental particles and forces that constitute matter and radiation. The field also studies combinations of elementary particles up to the scale of protons and neutrons, while the study of combinations of protons and neutrons is called nuclear physics.
The fundamental particles in the universe are classified in the Standard Model as fermions (matter particles) and bosons (force-carrying particles). There are three generations of fermions, although ordinary matter is made only from the first fermion generation. The first generation consists of up and down quarks which form protons and neutrons, and electrons and electron neutrinos. The three fundamental interactions known to be mediated by bosons are electromagnetism, the weak interaction, and the strong interaction.
Quarks cannot exist on their own but form hadrons. Hadrons that contain an odd number of quarks are called baryons and those that contain an even number are called mesons. Two baryons, the proton and the neutron, make up most of the mass of ordinary matter. Mesons are unstable and the longest-lived last for only a few hundredths of a microsecond. They occur after collisions between particles made of quarks, such as fast-moving protons and neutrons in cosmic rays. Mesons are also produced in cyclotrons or other particle accelerators.
Particles have corresponding antiparticles with the same mass but with opposite electric charges. For example, the antiparticle of the electron is the positron. The electron has a negative electric charge, the positron has a positive charge. These antiparticles can theoretically form a corresponding form of matter called antimatter. Some particles, such as the photon, are their own antiparticle.
These elementary particles are excitations of the quantum fields that also govern their interactions. The dominant theory explaining these fundamental particles and fields, along with their dynamics, is called the Standard Model. The reconciliation of gravity to the current particle physics theory is not solved; many theories have addressed this problem, such as loop quantum gravity, string theory and supersymmetry theory.
Experimental particle physics is the study of these particles in radioactive processes and in particle accelerators such as the Large Hadron Collider. Theoretical particle physics is the study of these particles in the context of cosmology and quantum theory. The two are closely interrelated: the Higgs boson was postulated theoretically before being confirmed by experiments.
== History ==
The idea that all matter is fundamentally composed of elementary particles dates from at least the 6th century BC. In the 19th century, John Dalton, through his work on stoichiometry, concluded that each element of nature was composed of a single, unique type of particle. The word atom, after the Greek word atomos meaning "indivisible", has since then denoted the smallest particle of a chemical element, but physicists later discovered that atoms are not, in fact, the fundamental particles of nature, but are conglomerates of even smaller particles, such as the electron. The early 20th century explorations of nuclear physics and quantum physics led to proofs of nuclear fission in 1939 by Lise Meitner (based on experiments by Otto Hahn), and nuclear fusion by Hans Bethe in that same year; both discoveries also led to the development of nuclear weapons. Bethe's 1947 calculation of the Lamb shift is credited with having "opened the way to the modern era of particle physics".
Throughout the 1950s and 1960s, a bewildering variety of particles was found in collisions of particles from beams of increasingly high energy. It was referred to informally as the "particle zoo". Important discoveries such as the CP violation by James Cronin and Val Fitch brought new questions to matter-antimatter imbalance. After the formulation of the Standard Model during the 1970s, physicists clarified the origin of the particle zoo. The large number of particles was explained as combinations of a (relatively) small number of more fundamental particles and framed in the context of quantum field theories. This reclassification marked the beginning of modern particle physics.
== Standard Model ==
The current state of the classification of all elementary particles is explained by the Standard Model, which gained widespread acceptance in the mid-1970s after experimental confirmation of the existence of quarks. It describes the strong, weak, and electromagnetic fundamental interactions, using mediating gauge bosons. The species of gauge bosons are eight gluons, W−, W+ and Z bosons, and the photon. The Standard Model also contains 24 fundamental fermions (12 particles and their associated anti-particles), which are the constituents of all matter. Finally, the Standard Model also predicted the existence of a type of boson known as the Higgs boson. On 4 July 2012, physicists with the Large Hadron Collider at CERN announced they had found a new particle that behaves similarly to what is expected from the Higgs boson.
The Standard Model, as currently formulated, has 61 elementary particles. Those elementary particles can combine to form composite particles, accounting for the hundreds of other species of particles that have been discovered since the 1960s. The Standard Model has been found to agree with almost all the experimental tests conducted to date. However, most particle physicists believe that it is an incomplete description of nature and that a more fundamental theory awaits discovery (See Theory of Everything). In recent years, measurements of neutrino mass have provided the first experimental deviations from the Standard Model, since neutrinos do not have mass in the Standard Model.
== Subatomic particles ==
Modern particle physics research is focused on subatomic particles, including atomic constituents, such as electrons, protons, and neutrons (protons and neutrons are composite particles called baryons, made of quarks), that are produced by radioactive and scattering processes; such particles are photons, neutrinos, and muons, as well as a wide range of exotic particles. All particles and their interactions observed to date can be described almost entirely by the Standard Model.
Dynamics of particles are also governed by quantum mechanics; they exhibit wave–particle duality, displaying particle-like behaviour under certain experimental conditions and wave-like behaviour in others. In more technical terms, they are described by quantum state vectors in a Hilbert space, which is also treated in quantum field theory. Following the convention of particle physicists, the term elementary particles is applied to those particles that are, according to current understanding, presumed to be indivisible and not composed of other particles.
=== Quarks and leptons ===
Ordinary matter is made from first-generation quarks (up, down) and leptons (electron, electron neutrino). Collectively, quarks and leptons are called fermions, because they have a quantum spin of half-integers (−1/2, 1/2, 3/2, etc.). This causes the fermions to obey the Pauli exclusion principle, where no two particles may occupy the same quantum state. Quarks have fractional elementary electric charge (−1/3 or 2/3) and leptons have whole-numbered electric charge (0 or -1). Quarks also have color charge, which is labeled arbitrarily with no correlation to actual light color as red, green and blue. Because the interactions between the quarks store energy which can convert to other particles when the quarks are far apart enough, quarks cannot be observed independently. This is called color confinement.
There are three known generations of quarks (up and down, strange and charm, top and bottom) and leptons (electron and its neutrino, muon and its neutrino, tau and its neutrino), with strong indirect evidence that a fourth generation of fermions does not exist.
=== Bosons ===
Bosons are the mediators or carriers of fundamental interactions, such as electromagnetism, the weak interaction, and the strong interaction. Electromagnetism is mediated by the photon, the quantum of light. The weak interaction is mediated by the W and Z bosons. The strong interaction is mediated by the gluon, which can link quarks together to form composite particles. Due to the aforementioned color confinement, gluons are never observed independently. The Higgs boson gives mass to the W and Z bosons via the Higgs mechanism – the gluon and photon are expected to be massless. All bosons have an integer quantum spin (0 and 1) and can have the same quantum state.
=== Antiparticles and color charge ===
Most aforementioned particles have corresponding antiparticles, which compose antimatter. Normal particles have positive lepton or baryon number, and antiparticles have these numbers negative. Most properties of corresponding antiparticles and particles are the same, with a few reversed; the electron's antiparticle, the positron, has an opposite charge. To differentiate between antiparticles and particles, a plus or minus sign is added in superscript. For example, the electron and the positron are denoted e− and e+. However, in the case that the particle has a charge of 0 (equal to that of the antiparticle), the antiparticle is denoted with a line above the symbol. As such, an electron neutrino is νe, whereas its antineutrino is νe. When a particle and an antiparticle interact with each other, they are annihilated and convert to other particles. Some particles, such as the photon or gluon, have no antiparticles.
Quarks and gluons additionally have color charges, which influence the strong interaction. A quark's color charge can be red, green or blue (though the particle itself has no physical color), and an antiquark's color charge can be antired, antigreen or antiblue. The gluon can have eight color charges, which are the result of quarks' interactions to form composite particles (gauge symmetry SU(3)).
=== Composite ===
The neutrons and protons in the atomic nuclei are baryons – the neutron is composed of two down quarks and one up quark, and the proton is composed of two up quarks and one down quark. A baryon is composed of three quarks, and a meson is composed of two quarks (one normal, one anti). Baryons and mesons are collectively called hadrons. Quarks inside hadrons are governed by the strong interaction, thus are subjected to quantum chromodynamics (color charges). The bound quarks must have a total color charge that is neutral, or "white", by analogy with mixing the primary colors. More exotic hadrons can have other types, arrangements or numbers of quarks (tetraquark, pentaquark).
An atom is made from protons, neutrons and electrons. By modifying the particles inside a normal atom, exotic atoms can be formed. A simple example would be the hydrogen-4.1, which has one of its electrons replaced with a muon.
=== Hypothetical ===
The graviton is a hypothetical particle that can mediate the gravitational interaction, but it has not been detected or completely reconciled with current theories. Many other hypothetical particles have been proposed to address the limitations of the Standard Model. Notably, supersymmetric particles aim to solve the hierarchy problem, axions address the strong CP problem, and various other particles are proposed to explain the origins of dark matter and dark energy.
== Experimental laboratories ==
The world's major particle physics laboratories are:
Brookhaven National Laboratory (Long Island, New York, United States). Its main facility is the Relativistic Heavy Ion Collider (RHIC), which collides heavy ions such as gold ions and polarized protons. It is the world's first heavy ion collider, and the world's only polarized proton collider.
Budker Institute of Nuclear Physics (Novosibirsk, Russia). Its main projects are now the electron-positron colliders VEPP-2000, operated since 2006, and VEPP-4, started experiments in 1994. Earlier facilities include the first electron–electron beam–beam collider VEP-1, which conducted experiments from 1964 to 1968; the electron-positron colliders VEPP-2, operated from 1965 to 1974; and, its successor VEPP-2M, performed experiments from 1974 to 2000.
CERN (European Organization for Nuclear Research) (Franco-Swiss border, near Geneva, Switzerland). Its main project is now the Large Hadron Collider (LHC), which had its first beam circulation on 10 September 2008, and is now the world's most energetic collider of protons. It also became the most energetic collider of heavy ions after it began colliding lead ions. Earlier facilities include the Large Electron–Positron Collider (LEP), which was stopped on 2 November 2000 and then dismantled to give way for LHC; and the Super Proton Synchrotron, which is being reused as a pre-accelerator for the LHC and for fixed-target experiments.
DESY (Deutsches Elektronen-Synchrotron) (Hamburg, Germany). Its main facility was the Hadron Elektron Ring Anlage (HERA), which collided electrons and positrons with protons. The accelerator complex is now focused on the production of synchrotron radiation with PETRA III, FLASH and the European XFEL.
Fermi National Accelerator Laboratory (Fermilab) (Batavia, Illinois, United States). Its main facility until 2011 was the Tevatron, which collided protons and antiprotons and was the highest-energy particle collider on earth until the Large Hadron Collider surpassed it on 29 November 2009.
Institute of High Energy Physics (IHEP) (Beijing, China). IHEP manages a number of China's major particle physics facilities, including the Beijing Electron–Positron Collider II(BEPC II), the Beijing Spectrometer (BES), the Beijing Synchrotron Radiation Facility (BSRF), the International Cosmic-Ray Observatory at Yangbajing in Tibet, the Daya Bay Reactor Neutrino Experiment, the China Spallation Neutron Source, the Hard X-ray Modulation Telescope (HXMT), and the Accelerator-driven Sub-critical System (ADS) as well as the Jiangmen Underground Neutrino Observatory (JUNO).
KEK (Tsukuba, Japan). It is the home of a number of experiments such as the K2K experiment and its successor T2K experiment, a neutrino oscillation experiment and Belle II, an experiment measuring the CP violation of B mesons.
SLAC National Accelerator Laboratory (Menlo Park, California, United States). Its 2-mile-long linear particle accelerator began operating in 1962 and was the basis for numerous electron and positron collision experiments until 2008. Since then the linear accelerator is being used for the Linac Coherent Light Source X-ray laser as well as advanced accelerator design research. SLAC staff continue to participate in developing and building many particle detectors around the world.
== Theory ==
Theoretical particle physics attempts to develop the models, theoretical framework, and mathematical tools to understand current experiments and make predictions for future experiments (see also theoretical physics). There are several major interrelated efforts being made in theoretical particle physics today.
One important branch attempts to better understand the Standard Model and its tests. Theorists make quantitative predictions of observables at collider and astronomical experiments, which along with experimental measurements is used to extract the parameters of the Standard Model with less uncertainty. This work probes the limits of the Standard Model and therefore expands scientific understanding of nature's building blocks. Those efforts are made challenging by the difficulty of calculating high precision quantities in quantum chromodynamics. Some theorists working in this area use the tools of perturbative quantum field theory and effective field theory, referring to themselves as phenomenologists. Others make use of lattice field theory and call themselves lattice theorists.
Another major effort is in model building where model builders develop ideas for what physics may lie beyond the Standard Model (at higher energies or smaller distances). This work is often motivated by the hierarchy problem and is constrained by existing experimental data. It may involve work on supersymmetry, alternatives to the Higgs mechanism, extra spatial dimensions (such as the Randall–Sundrum models), Preon theory, combinations of these, or other ideas. Vanishing-dimensions theory is a particle physics theory suggesting that systems with higher energy have a smaller number of dimensions.
A third major effort in theoretical particle physics is string theory. String theorists attempt to construct a unified description of quantum mechanics and general relativity by building a theory based on small strings, and branes rather than particles. If the theory is successful, it may be considered a "Theory of Everything", or "TOE".
There are also other areas of work in theoretical particle physics ranging from particle cosmology to loop quantum gravity.
== Practical applications ==
In principle, all physics (and practical applications developed therefrom) can be derived from the study of fundamental particles. In practice, even if "particle physics" is taken to mean only "high-energy atom smashers", many technologies have been developed during these pioneering investigations that later find wide uses in society. Particle accelerators are used to produce medical isotopes for research and treatment (for example, isotopes used in PET imaging), or used directly in external beam radiotherapy. The development of superconductors has been pushed forward by their use in particle physics. The World Wide Web and touchscreen technology were initially developed at CERN. Additional applications are found in medicine, national security, industry, computing, science, and workforce development, illustrating a long and growing list of beneficial practical applications with contributions from particle physics.
== Future ==
Major efforts to look for physics beyond the Standard Model include the Future Circular Collider proposed for CERN and the Particle Physics Project Prioritization Panel (P5) in the US that will update the 2014 P5 study that recommended the Deep Underground Neutrino Experiment, among other experiments.
== See also ==
== References ==
== External links == | Wikipedia/Theoretical_particle_physics |
Lectures on Theoretical Physics is a six-volume series of physics textbooks translated from Arnold Sommerfeld's classic German texts Vorlesungen über Theoretische Physik. The series includes the volumes Mechanics, Mechanics of Deformable Bodies, Electrodynamics, Optics, Thermodynamics and Statistical Mechanics, and Partial Differential Equations in Physics. Focusing on one subject each semester, the lectures formed a three-year cycle of courses that Sommerfeld repeatedly taught at the University of Munich for over thirty years. Sommerfeld's lectures were famous and he was held to be one of the greatest physics lecturers of his time.
== Background ==
Sommerfeld was a well known German theoretical physicist who played a major role in developing old quantum theory. He was renowned as a great teacher of theoretical physics in the early 20th century. Wolfgang Pauli wrote in 1951 that Sommerfeld was "the epitome of the scholar and the teacher". Another physicist, summarizing "the roles of the most important exponents of theoretical physics in its 'golden age'", wrote that "Planck was the authority, Einstein the genius, and Sommerfeld the teacher" in a 1973 biography of Max Planck. Summarizing public reception of Sommerfeld's teaching style, Robert Bruce Lindsay wrote in 1954 that it "is generally admitted that as an effective lecturer Sommerfeld has been rarely if ever surpassed."
The textbooks, originally published in German, were based on the series of lectures Sommerfeld gave at the University of Munich over a three-year cycle, with each section written to be self-contained. In addition to his specialized classes, the lectures represented in the books constituted Sommerfeld's standard introductory courses in physics, each subject taught over one semester for a total of three years. Sommerfeld continued this cycle of lectures, which were very popular and influential, for over thirty years at the university.
In addition to his normal lectures, Sommerfeld also lectured in specialized courses, including courses in atomic physics that form the subject of another of his books, Atomic Structure and Spectral Lines, published in 1919. That world-famous textbook became known as the "Bible of atomic physics". His earlier works included another lecture series titled Three lectures on atomic physics, published in 1926 by Methuen Publishing. He had also edited the book series Die Theorie des Kreisels, which was based on a set of lectures given by his mentor Felix Klein.
== Volumes ==
There are six volumes in the lecture series: Mechanics, Mechanics of Deformable Bodies, Electrodynamics, Optics, Thermodynamics and Statistical Mechanics, and Partial Differential Equations in Physics, each following a semester course given by Sommerfeld at the University of Munich. Characterizing the series, Rudolf Peierls wrote in 1956 that they exemplify "Sommerfeld's attitude of putting practical problems and their solution above abstract principles, an attitude by its nature closer to Boltzmann than to that of Gibbs."
=== Mechanics ===
Mechanik, the first volume of Sommerfeld's Vorlesungen über Theoretische Physik, was published in 1947 by Akademische Verlagsgesellschaft Becker und Erler and subsequently translated into English by Martin O. Stern and published as Mechanics in 1953 by Academic Press. Paul Peter Ewald wrote a foreword for the English edition in which he attempts to summarize Sommerfeld's lecture style and to explain why Sommerfeld had so many successful students.
The book was reviewed by Robert Bruce Lindsay, Rudolf Peierls, William V. Houston, and several others. In his 1954 review of the volume, Lindsay wrote that Sommerfeld's "clarity is indeed remarkably well exemplified" by the mechanics textbook, and he praised the book for its "many ingenious comments to help the learner over the rough spots". Lindsay expressed regret at the lack of an extended discussion of mass and force in physics before going on to write that the "book can be heartily recommended to all students of physics on the undergraduate senior and elementary graduate levels in American universities".
=== Mechanics of Deformable Bodies ===
Mechanics of Deformable Bodies, the second volume of the series, was published in English in 1950 by Academic Press. The book was translated from the German volume Mechanik der deformierbaren Medien by G. Kuerti. The second volume covers hydrodynamics and elasticity, which are discussed together, as well as more advanced topics such as supersonic flow and shock waves.
The volume was reviewed by Rudolf Peierls. In his review, Peierls argued that covering hydrodynamics and elasticity together allowed the "general principles" to "be treated in a concise and transparent way". Peierls went on to note that the book emphasizes physical principles even to the detriment of its treatment of mathematical techniques. At the end of his review of several of the series' volumes, Peierls closed by saying: "To me these volumes represent an almost perfect choice of topics for a basic course on theoretical physics".
=== Electrodynamics ===
Electrodynamics, published in 1964 by Academic Press, is the third volume of its series and covers topics in electrodynamics. The book was translated from the German volume Elektrodynamik by Edward Ramberg. The book was reviewed by Rudolf Peierls, among several others.
=== Optics ===
The series' fourth volume, Optics, was published in 1954 by Academic Press after being translated from the German textbook Optik by Otto Laporte and Peter A. Moldauer. The book was reviewed by Karl Meissner, Rudolf Peierls, and several others. Max Born wrote in 1952 that the book gives "a very elegant outline" of Cherenkov radiation. In his 1955 review, Karl Meissner wrote that the book is characteristic of Sommerfeld's lectures, which he summarized as "[c]lear and vivid presentation[s] of the basic ideas" with an "elegance in language and of mathematical developments" and an "emphasis on physics." Peierls called the book "a very welcome addition to the literature" in his 1955 review and he praised the book, like the other lectures, for "the use of powerful mathematical techniques" that are "presented and applied without losing sight of the physical ideas behind them."
=== Thermodynamics and Statistical Mechanics ===
Thermodynamik und Statistik, the fifth volume of Sommerfeld's Lectures, was edited by Fritz Bopp and Josef Meixner and published posthumously in 1952 by Dieterich'sche Verlagsbuchhandlung. The book was translated into the English volume Thermodynamics and Statistical Mechanics by Joseph Kestin and published in 1956 by Academic Press. The book was reviewed by Rudolf Peierls and several others. After summarizing the book's contents in his 1956 review of the volume, Peierls wrote, "The book is a welcome addition to the text-books on this subject."
=== Partial Differential Equations in Physics ===
Partielle Differentialgleichungen der Physik, the sixth and final volume of the series, was published in 1947 by Dieterich'sche Verlagsbuchhandlung and was translated into English by Ernst G. Straus and published by Academic Press in 1949 under the title Partial Differential Equations in Physics. The book was reviewed by George F. Carrier, Michael Golomb, Rudolf Peierls, and Lincoln LaPaz. After summarizing the book's contents, Carrier closed his 1952 review by writing: "The book contains an excellent choice of very instructive problems and should be invaluable in teaching the technique of solving the classical type boundary value problems." In his 1950 review, Golomb wrote that the book is "concise and very readable" and that its "chief merit" is "its skillful handling of complex problems". LaPaz gave a critical review going into detail on several issues before concluding: "In conclusion, on balancing out the few quite minor imperfections of the book under review against its many excellences, the reader will not hesitate to accord hearty commendation to author, translator and editors alike for tasks exceedingly well done."
== Publication history ==
=== Original German editions ===
Sommerfeld, Arnold (1947). Mechanik [Mechanics]. Vorlesungen über theoretische Physik (in German). Vol. 1. Leipzig: Akademische Verlagsgesellschaft Becker und Erler. OCLC 491677632.
Sommerfeld, Arnold (1943). Mechanik der deformierbaren Medien [Mechanics of Deformable Bodies]. Vorlesungen über theoretische Physik (in German). Vol. 2. Wiesbaden: Dieterich'sche Verlagsbuchhandlung. OCLC 760037487.
Sommerfeld, Arnold (1948). Elektrodynamik [Electrodynamics]. Vorlesungen über theoretische Physik (in German). Vol. 3. Wiesbaden: Dieterich'sche Verlagsbuchhandlung. OCLC 760037491.
Sommerfeld, Arnold (1950). Optik [Optics]. Vorlesungen über theoretische Physik (in German). Vol. 4. Wiesbaden: Dieterich'sche Verlagsbuchhandlung. OCLC 450242968.
Sommerfeld, Arnold (1952). Bopp, Fritz; Meixner, Josef (eds.). Thermodynamik und Statistik [Thermodynamics and Statistical Mechanics]. Vorlesungen über theoretische Physik (in German). Vol. 5. Wiesbaden: Dieterich'sche Verlagsbuchhandlung. OCLC 602728833.
Sommerfeld, Arnold (1947). Partielle Differentialgleichungen der Physik [Partial Differential Equations in Physics]. Vorlesungen über theoretische Physik (in German). Vol. 6. Wiesbaden: Dieterich'sche Verlagsbuchhandlung. OCLC 441643966.
=== Original English translations ===
Sommerfeld, Arnold (1952). Mechanics. Lectures on Theoretical Physics. Vol. 1. Translated by Stern, Martin O. (1st ed.). New York: Academic Press. ISBN 0-12-654668-1. OCLC 13466875.
Sommerfeld, Arnold (1950). Mechanics of deformable bodies. Lectures on Theoretical Physics. Vol. 2 (1st ed.). New York: Academic Press. ISBN 978-0-12-654650-7. OCLC 610683657.
Sommerfeld, Arnold (1964). Electrodynamics. Lectures on Theoretical Physics. Vol. 3. Translated by Ramberg, Edward G. (1st ed.). New York: Academic Press. ISBN 978-0-12-654664-4. OCLC 761255842.
Sommerfeld, Arnold (1964). Optics. Lectures on Theoretical Physics. Vol. 4. Translated by Laporte, Otto; Moldauer, Peter A. (1st pbk. ed.). New York: Academic Press. ISBN 978-0-323-15239-6. OCLC 840490686.
Sommerfeld, Arnold (1964). Bopp, Fritz; Meixner, Josef (eds.). Thermodynamics and statistical mechanics. Lectures on Theoretical Physics. Vol. 5. Translated by Kestin, Joseph (1st ed.). New York: Academic Press. ISBN 978-0-323-13773-7. OCLC 834542530.
Sommerfeld, Arnold (1949). Partial differential equations in physics. Lectures on Theoretical Physics. Vol. 6. Translated by Straus, Ernst G. (1st ed.). New York: Academic Press. ISBN 978-0-12-654658-3. OCLC 339897.
=== eBooks ===
Sommerfeld, Arnold (1952). Mechanics. Lectures on Theoretical Physics. Translated by Stern, Martin O. New York: Academic Press. doi:10.1016/C2013-0-07601-3. ISBN 9780126546682 – via ScienceDirect.
Sommerfeld, Arnold (1950). Mechanics of deformable bodies. Lectures on Theoretical Physics. New York: Academic Press. doi:10.1016/C2013-0-07598-6. ISBN 9780126546507 – via ScienceDirect.
Sommerfeld, Arnold (1964). Electrodynamics. Lectures on Theoretical Physics. Translated by Ramberg, Edward G. New York: Academic Press. doi:10.1016/C2013-0-07600-1. ISBN 9780126546644 – via ScienceDirect.
Sommerfeld, Arnold (1964). Optics. Lectures on Theoretical Physics. Translated by Laporte, Otto; Moldauer, Peter A. New York: Academic Press. doi:10.1016/B978-0-12-395500-5.X5001-3. ISBN 9780123955005 – via ScienceDirect.
Sommerfeld, Arnold (1956). Bopp, Fritz; Meixner, Josef (eds.). Thermodynamics and statistical mechanics. Lectures on Theoretical Physics. Translated by Kestin, Joseph. New York: Academic Press. doi:10.1016/B978-0-12-654680-4.X5001-0. hdl:2027/mdp.39015035282444. ISBN 9780126546804 – via ScienceDirect.
Sommerfeld, Arnold (1964). Partial differential equations in physics. Lectures on Theoretical Physics. Translated by Straus, Ernst G. New York: Academic Press. doi:10.1016/B978-0-12-654658-3.X5001-0. ISBN 9780126546583 – via ScienceDirect.
== See also ==
Course of Theoretical Physics
The Feynman Lectures on Physics
List of textbooks on classical and quantum mechanics
List of textbooks in thermodynamics and statistical mechanics
List of textbooks in electromagnetism
== References ==
== Further reading ==
Eckert, Michael (2013). Arnold Sommerfeld: Science, Life and Turbulent Times 1868–1951. New York, NY: Springer. ISBN 978-1-4614-7461-6. OCLC 852251854.
Born, Max (1 November 1952). "Arnold Johannes Wilhelm Sommerfeld 1868-1951". Obituary Notices of Fellows of the Royal Society. 8 (21): 274–296. doi:10.1098/rsbm.1952.0018. JSTOR 768813. S2CID 161998194.
Pauli, Wolfgang (1984). "Arnold Sommerfeld". Physik und Erkenntnistheorie. Facetten der Physik (in German). Wiesbaden: Vieweg+Teubner Verlag. p. 42. doi:10.1007/978-3-322-88799-3_6. ISBN 978-3-322-88799-3.
== External links ==
"Publisher's website for Mechanics". Elsevier. 1964. Retrieved 1 December 2020.
"Publisher's website for Mechanics of deformable bodies". Elsevier. 1950. Retrieved 1 December 2020.
"Publisher's website for Electrodynamics". Elsevier. 1964. Retrieved 1 December 2020.
"Publisher's website for Optics". Elsevier. 1964. Retrieved 1 December 2020.
"Publisher's website for Thermodynamics and Statistical Mechanics". Elsevier. 1964. Retrieved 1 December 2020.
"Publisher's website for Partial Differential Equations in Physics". Elsevier. 1964. Retrieved 1 December 2020. | Wikipedia/Lectures_on_Theoretical_Physics |
Stochastic electrodynamics (SED) extends classical electrodynamics (CED) of theoretical physics by adding the hypothesis of a classical Lorentz invariant radiation field having statistical properties similar to that of the electromagnetic zero-point field (ZPF) of quantum electrodynamics (QED).
== Key ingredients ==
Stochastic electrodynamics combines two conventional classical ideas – electromagnetism derived from point charges obeying Maxwell's equations and particle motion driven by Lorentz forces – with one unconventional hypothesis: the classical field has radiation even at T=0. This zero-point radiation is inferred from observations of the (macroscopic) Casimir effect forces at low temperatures. As temperature approaches zero, experimental measurements of the force between two uncharged, conducting plates in a vacuum do not go to zero as classical electrodynamics would predict. Taking this result as evidence of classical zero-point radiation leads to the stochastic electrodynamics model.
== Brief history ==
Stochastic electrodynamics is a term for a collection of research efforts of many different styles based on the ansatz that there exists a Lorentz invariant random electromagnetic radiation. The basic ideas have been around for a long time, but Marshall (1963) and Braffort seem to have originated the more concentrated efforts that started in the 1960s. Thereafter Timothy Boyer, Luis de la Peña and Ana María Cetto were perhaps the most prolific contributors in the 1970s and beyond.
Others have made contributions, alterations, and proposals concentrating on applying SED to problems in QED. A separate thread has been the investigation of an earlier proposal by Walther Nernst attempting to use the SED notion of a classical ZPF to explain inertial mass as due to a vacuum reaction.
In 2010, Cavalleri et al. introduced SEDS ('pure' SED, as they call it, plus spin) as a fundamental improvement that they claim potentially overcomes all the known drawbacks of SED. They also claim SEDS resolves four observed effects that are so far unexplained by QED, i.e., 1) the physical origin of the ZPF and its natural upper cutoff; 2) an anomaly in experimental studies of the neutrino rest mass; 3) the origin and quantitative treatment of 1/f noise; and 4) the high-energy tail (~10²¹ eV) of cosmic rays. Two double-slit electron diffraction experiments are proposed to discriminate between QM and SEDS.
In 2013, Auñon et al. showed that Casimir and Van der Waals interactions are a particular case of stochastic forces from electromagnetic sources when the broad Planck's spectrum is chosen and the wavefields are non-correlated. Addressing fluctuating partially coherent light emitters with a tailored spectral energy distribution in the optical range, this establishes the link between stochastic electrodynamics and coherence theory, thereby putting forward a way to optically create and control both such zero-point fields and Lifshitz forces of thermal fluctuations. In addition, this opens the path to building many more stochastic forces by employing narrow-band light sources for bodies with frequency-dependent responses.
== Scope of SED ==
SED has been used in attempts to provide a classical explanation for effects previously considered to require quantum mechanics (here restricted to the Schrödinger equation and the Dirac equation and QED) for their explanation. It has also motivated a classical ZPF-based underpinning for gravity and inertia. There is no universal agreement on the successes and failures of SED, either in its congruence with standard theories of quantum mechanics, QED, and gravity or in its compliance with observation. The following SED-based explanations are relatively uncontroversial and are free of criticism at the time of writing:
The Van der Waals force
Diamagnetism
The Unruh effect
The following SED-based calculations and SED-related claims are more controversial, and some have been subject to published criticism:
The ground state of the harmonic oscillator
The ground state of the hydrogen atom
De Broglie waves
Inertia
Gravitation
== See also ==
Classical unified field theories – Theoretical attempts to unify the forces of nature
Stochastic quantum mechanics – Interpretation of quantum mechanics
Zero-point energy – Lowest possible energy of a quantum system or field
== References == | Wikipedia/Stochastic_electrodynamics |
Fluid theories of electricity are outdated theories that postulated one or more electrical fluids which were thought to be responsible for many electrical phenomena in the history of electromagnetism. The "two-fluid" theory of electricity, created by Charles François de Cisternay du Fay, postulated that electricity was the interaction between two electrical 'fluids.' An alternative, simpler theory was proposed by Benjamin Franklin, called the unitary, or one-fluid, theory of electricity. This theory claimed that electricity was really one fluid, which could be present in excess, or absent from a body, thus explaining its electrical charge. Franklin's theory explained how charges could be dispelled (such as those in Leyden jars) and how they could be passed through a chain of people. The fluid theories of electricity were eventually updated to include the effects of magnetism and, upon their discovery, electrons.
== Fluid theories ==
In the 1700s many physical phenomena were thought of in terms of an aether, which was a fluid that could permeate matter. This idea had been used for centuries, and was the basis of thinking about physical phenomena, such as electricity, as liquids. Other 18th century examples of imponderable fluid models are Lavoisier's caloric and the magnetic fluids of Coulomb and Aepinus.
=== Two-fluid theory ===
By the 18th century, one of a few theories explaining observed electrical phenomena was the two-fluid theory. This theory is generally attributed to Charles François de Cisternay du Fay. du Fay's theory suggested that electricity was composed of two liquids, which could flow through solid bodies. One liquid carried a positive charge, and the other a negative charge. When these two liquids came into contact with one another, they would produce a neutral charge. This theory dealt mainly with explaining electrical attraction and repulsion, rather than how an object could be charged or discharged.
du Fay observed this while repeating an experiment created by Otto von Guericke, wherein a thin material, such as a feather or leaf, would repel a charged object after making contact with it. du Fay observed that the “leaf-gold is first attracted by the tube; and acquires an electricity by approaching it; and of consequence is immediately repell’d by it.” This seemed to confirm for du Fay that the leaf was being pushed as a ‘current’ of electricity flowed around and through it.
Through further testing, du Fay determined that an object could hold one of two types of electricity, either vitreous or resinous electricity. He found that an object with vitreous electricity would repel another vitreous object, but would be attracted to an object with resinous electricity.
Another supporter of the two-fluid theory was Christian Gottlieb Kratzenstein. He also speculated that the electric charges were carried by vortices in these two fluids.
=== One-fluid theory ===
In 1746 William Watson proposed a one-fluid theory.
On 11 July 1747 Benjamin Franklin composed a letter in which he outlined his new theory. This is the first record of his theory. Franklin developed this theory mainly concentrating on the charging and discharging of bodies, as opposed to du Fay, who concentrated mainly on electrical attraction and repulsion.
Franklin's theory stated that electricity should be thought of as the movement of a single liquid, as opposed to the interaction between two liquids. A body would show signs of electricity when it held either too much or too little of this liquid. A neutral object was therefore thought to contain a "normal" amount of this fluid. Franklin also outlined two possible states of electrification, positive and negative. He argued that a positively charged object would contain too much fluid, while a negatively charged object would contain too little. Franklin applied this thinking to explain phenomena that were puzzling at the time, such as the Leyden jar, a basic charge-storing device similar to a capacitor. He argued that the wire and inner surface became positively charged, while the outer surface became negatively charged. This created an imbalance of fluid, and a person touching both portions of the jar allowed the fluid to flow back to its normal distribution.
Despite being a simpler theory, it was heavily debated whether electricity was made up of one fluid or two for a century.
== Significance of the one-fluid theory ==
The one-fluid theory shows a significant shift in how the scientific community thought about electricity. Prior to Franklin's theory, there were many competing theories on how electricity functioned. Franklin's theory soon became the most widely accepted at the time. Franklin's theory is also notable, because it is the first theory that viewed electricity as the accumulation of 'charge' from elsewhere, rather than an excitation of the matter already present in an object.
Franklin's theory also provides the basis for conventional current, the convention of describing electricity as the movement of positive charges. Franklin arbitrarily took his electrical fluid to carry positive charge, and therefore all reasoning was framed in terms of a positive flow. This convention so permeated the scientific community that electric current is still conventionally described as a flow of positive charge, despite the fact that current in metals (the most common conductors) is carried by electrons, which are negatively charged.
Franklin was also the first person to suggest that lightning was in fact electricity. Franklin suggested that lightning was just a larger version of the small sparks that appeared between two charged objects. He therefore predicted that lightning could be shaped and directed by using a pointed conductor. This was the basis for his famous kite experiment.
== Shortcomings of the theory ==
Although the one-fluid theory marked a significant advance in discussions of electricity, it did have some deficiencies. Franklin created the theory to explain discharges, an aspect which had been mostly ignored previously. While it explained this well, it could not fully explain electrical attraction and repulsion. It made sense that two objects with too much fluid would push away from each other, and that two objects with very different amounts of fluid would pull towards each other. However, it did not make sense that two objects both lacking fluid would repel each other; too little fluid should not cause repulsion.
Another difficulty with this model of electricity is that it ignores the interactions between electricity and magnetism. Although this relationship was not well-studied at the time, it was known that there was some connection between the two phenomena. Franklin's model makes no reference to these forces, and makes no attempt to explain them.
Although fluid theory was the predominant viewpoint for a time, it was eventually replaced by more modern theories, specifically one which used observations about attractions between current-carrying wires to describe the magnetic effects between them.
== Connections to magnetism ==
Neither du Fay nor Franklin described the effects of magnetism in their theories, with both concerning themselves only with electrical effects. However, theories on magnetism followed a very similar pattern to those on electricity. Charles Coulomb described magnets as containing two magnetic fluids, austral and boreal, which could combine to explain magnetic attraction and repulsion. The related one-fluid theory for magnetism was proposed by Franz Aepinus, who described magnets as containing too much or too little magnetic fluid.
These theories of electricity and magnetism were thought of as two separate phenomena, until Hans Christian Ørsted noticed that a compass needle would deflect from magnetic north when placed near an electric current. This caused him to develop theories that electricity and magnetism were interrelated and could affect one another. Ørsted's work was the basis for a theory by French physicist André-Marie Ampère, which unified the relation between magnetism and electricity.
== See also ==
General
Contact tension
Hydraulic analogy
Imponderable fluid
Histories
History of the electric charge
History of electrochemistry
== References ==
== External links ==
A letter from Charles-François de Cisternay Du Fay concerning electricity. Archived 2016-07-08 at the Wayback Machine, Phil. Trans. 38 (1734) 258-266
History of electricity. Both kinds of electricity. Attraction and repulsion. The Dufay's law. | Wikipedia/Fluid_theory_of_electricity |
In astronomy, the geocentric model (also known as geocentrism, often exemplified specifically by the Ptolemaic system) is a superseded description of the Universe with Earth at the center. Under most geocentric models, the Sun, Moon, stars, and planets all orbit Earth. The geocentric model was the predominant description of the cosmos in many European ancient civilizations, such as those of Aristotle in Classical Greece and Ptolemy in Roman Egypt, as well as during the Islamic Golden Age.
Two observations supported the idea that Earth was the center of the Universe. First, from anywhere on Earth, the Sun appears to revolve around Earth once per day. While the Moon and the planets have their own motions, they also appear to revolve around Earth about once per day. The stars appeared to be fixed on a celestial sphere rotating once each day about an axis through the geographic poles of Earth. Second, Earth seems to be unmoving from the perspective of an earthbound observer; it feels solid, stable, and stationary.
Ancient Greek, ancient Roman, and medieval philosophers usually combined the geocentric model with a spherical Earth, in contrast to the older flat-Earth model implied in some mythology. However, the Greek astronomer and mathematician Aristarchus of Samos (c. 310 – c. 230 BC) developed a heliocentric model placing all of the then-known planets in their correct order around the Sun. The ancient Greeks believed that the motions of the planets were circular, a view that was not challenged in Western culture until the 17th century, when Johannes Kepler postulated that orbits were heliocentric and elliptical (Kepler's first law of planetary motion). In 1687, Isaac Newton showed that elliptical orbits could be derived from his laws of gravitation.
The astronomical predictions of Ptolemy's geocentric model, developed in the 2nd century of the Christian era, served as the basis for preparing astrological and astronomical charts for over 1,500 years. The geocentric model held sway into the early modern age, but from the late 16th century onward, it was gradually superseded by the heliocentric model of Copernicus, Galileo, and Kepler. There was much resistance to the transition between these two theories, since for a long time the geocentric postulate produced more accurate results. Additionally, some felt that a new, unknown theory could not subvert an accepted consensus for geocentrism.
== Ancient Greece ==
In the 6th century BC, Anaximander proposed a cosmology in which Earth is shaped like a section of a pillar (a cylinder), held aloft at the center of everything. The Sun, Moon, and planets were holes in invisible wheels which surround Earth, and through those holes, humans could see concealed fire. At around the same time, Pythagoras thought that Earth was a sphere (in accordance with observations of eclipses), but not at the center; he believed that it was in motion around an unseen fire. Later these two concepts were combined, so that most of the educated Greeks from the 4th century BC onwards thought that Earth was a sphere at the center of the universe.
In the 4th century BC, Plato and his student Aristotle wrote works based on the geocentric model. According to Plato, the Earth was a sphere, stationary at the center of the universe. The stars and planets were carried around the Earth on spheres or circles, arranged in the order (outwards from the center): Moon, Sun, Venus, Mercury, Mars, Jupiter, Saturn, fixed stars, with the fixed stars located on the celestial sphere. In his "Myth of Er", a section of the Republic, Plato describes the cosmos as the Spindle of Necessity, attended by the Sirens and turned by the three Fates. Eudoxus of Cnidus, who worked with Plato, developed a less mythical, more mathematical explanation of the planets' motion based on Plato's dictum stating that all phenomena in the heavens can be explained with uniform circular motion. Aristotle elaborated on Eudoxus' system.
In the fully developed Aristotelian system, the spherical Earth is at the center of the universe, and all other heavenly bodies are attached to 47–55 transparent, rotating spheres surrounding the Earth, all concentric with it. (The number is so high because several spheres are needed for each planet.) These spheres, known as crystalline spheres, all moved at different uniform speeds to create the revolution of bodies around the Earth. They were composed of an incorruptible substance called aether. Aristotle believed that the Moon was in the innermost sphere and therefore touches the realm of Earth, causing the dark spots (maculae) and the ability to go through lunar phases. He further described his system by explaining the natural tendencies of the terrestrial elements: earth, water, fire, air, as well as celestial aether. His system held that earth was the heaviest element, with the strongest movement towards the center, thus water formed a layer surrounding the sphere of Earth. The tendency of air and fire, on the other hand, was to move upwards, away from the center, with fire being lighter than air. Beyond the layer of fire, were the solid spheres of aether in which the celestial bodies were embedded. They were also entirely composed of aether.
Adherence to the geocentric model stemmed largely from several important observations. First of all, if the Earth did move, then one ought to be able to observe the shifting of the fixed stars due to stellar parallax. Thus, if the Earth was moving, the shapes of the constellations should change considerably over the course of a year. As they did not appear to move, either the stars were much farther away than previously conceived, far beyond the Sun and the planets, making their motion undetectable, or the Earth was not moving at all. Because the stars are actually much further away than Greek astronomers postulated (making angular movement extremely small), stellar parallax was not detected until the 19th century. Therefore, the Greeks chose the simpler of the two explanations. Another observation used in favor of the geocentric model at the time was the apparent consistency of Venus' luminosity, which implies that it is usually about the same distance from Earth, which in turn is more consistent with geocentrism than heliocentrism. (In fact, Venus' luminous consistency is due to any loss of light caused by its phases being compensated for by an increase in apparent size caused by its varying distance from Earth.) Objectors to heliocentrism noted that terrestrial bodies naturally tend to come to rest as near as possible to the center of the Earth. Further, barring the opportunity to fall closer to the center, terrestrial bodies tend not to move unless forced by an outside object, or transformed to a different element by heat or moisture.
Atmospheric explanations for many phenomena were preferred because the Eudoxan–Aristotelian model based on perfectly concentric spheres was not intended to explain changes in the brightness of the planets due to a change in distance. Eventually, perfectly concentric spheres were abandoned as it was impossible to develop a sufficiently accurate model under that ideal, with the mathematical methods then available. However, while providing for similar explanations, the later deferent and epicycle model was already flexible enough to accommodate observations.
== Ptolemaic model ==
Although the basic tenets of Greek geocentrism were established by the time of Aristotle, the details of his system did not become standard. The Ptolemaic system, developed by the Hellenistic astronomer Claudius Ptolemaeus in the 2nd century AD, finally standardised geocentrism. His main astronomical work, the Almagest, was the culmination of centuries of work by Hellenic, Hellenistic and Babylonian astronomers. For over a millennium, European and Islamic astronomers assumed it was the correct cosmological model. Because of its influence, people sometimes wrongly think the Ptolemaic system is identical with the geocentric model.
Ptolemy argued that the Earth was a sphere in the center of the universe, from the simple observation that half the stars were above the horizon and half were below the horizon at any time (stars on rotating stellar sphere), and the assumption that the stars were all at some modest distance from the center of the universe. If the Earth were substantially displaced from the center, this division into visible and invisible stars would not be equal.
=== Ptolemaic system ===
In the Ptolemaic system, each planet is moved by a system of two spheres: one called its deferent; the other, its epicycle. The deferent is a circle whose center point, called the eccentric and marked in the diagram with an X, is distant from the Earth. The original purpose of the eccentric was to account for the difference in length of the seasons (northern autumn was about five days shorter than spring during this time period) by placing the Earth away from the center of rotation of the rest of the universe. Another sphere, the epicycle, is embedded inside the deferent sphere and is represented by the smaller dotted line to the right. A given planet then moves around the epicycle at the same time the epicycle moves along the path marked by the deferent. These combined movements cause the given planet to move closer to and further away from the Earth at different points in its orbit, and explained the observation that planets slowed down, stopped, and moved backward in retrograde motion, and then again reversed to resume normal, or prograde, motion.
The deferent-and-epicycle model had been used by Greek astronomers for centuries along with the idea of the eccentric (a deferent whose center is slightly away from the Earth), which was even older. In the illustration, the center of the deferent is not the Earth but the spot marked X, making it eccentric (from the Greek ἐκ ec- meaning "from" and κέντρον kentron meaning "center"), from which the spot takes its name. Unfortunately, the system that was available in Ptolemy's time did not quite match observations, even though it was an improvement over Hipparchus' system. Most noticeably the size of a planet's retrograde loop (especially that of Mars) would be smaller, or sometimes larger, than expected, resulting in positional errors of as much as 30 degrees. To alleviate the problem, Ptolemy developed the equant. The equant was a point near the center of a planet's orbit where, if you were to stand there and watch, the center of the planet's epicycle would always appear to move at uniform speed; all other locations would see non-uniform speed, as on the Earth. By using an equant, Ptolemy claimed to keep motion which was uniform and circular, although it departed from the Platonic ideal of uniform circular motion. The resultant system, which eventually came to be widely accepted in the west, seems unwieldy to modern astronomers; each planet required an epicycle revolving on a deferent, offset by an equant which was different for each planet. It predicted various celestial motions, including the beginning and end of retrograde motion, to within a maximum error of 10 degrees, considerably better than without the equant.
The model with epicycles is in fact a very good model of an elliptical orbit with low eccentricity. The well-known ellipse shape does not appear to a noticeable extent when the eccentricity is less than 5%, but the offset distance of the "center" (in fact the focus occupied by the Sun) is very noticeable even with low eccentricities as possessed by the planets.
To summarize, Ptolemy conceived a system that was compatible with Aristotelian philosophy and succeeded in tracking actual observations and predicting future movement mostly to within the limits of the next 1000 years of observations. The observed motions and his mechanisms for explaining them include:
The geocentric model was eventually replaced by the heliocentric model. Copernican heliocentrism could remove Ptolemy's epicycles because the retrograde motion could be seen to be the result of the combination of the movements and speeds of Earth and planets. Copernicus felt strongly that equants were a violation of Aristotelian purity, and proved that replacement of the equant with a pair of new epicycles was entirely equivalent. Astronomers often continued using the equants instead of the epicycles because the former was easier to calculate, and gave the same result.
It has been determined that the Copernican, Ptolemaic and even the Tychonic models provide identical results to identical inputs: they are computationally equivalent. It was not until Kepler demonstrated a physical observation that could show that the physical Sun is directly involved in determining an orbit that a new model was required.
The Ptolemaic order of spheres from Earth outward is:
Moon
Mercury
Venus
Sun
Mars
Jupiter
Saturn
Fixed Stars
Primum Mobile ("First Moved")
Ptolemy did not invent or work out this order, which aligns with the ancient Seven Heavens religious cosmology common to the major Eurasian religious traditions. It also follows the decreasing orbital periods of the Moon, Sun, planets and stars.
=== Persian and Arab astronomy and geocentrism ===
After the translation movement that included the translation of the Almagest from Greek into Arabic, Muslims adopted and refined the geocentric model of Ptolemy, which they believed correlated with the teachings of Islam. Muslim astronomers generally accepted the Ptolemaic system and the geocentric model, but by the 10th century, texts appeared regularly whose subject matter expressed doubts concerning Ptolemy (shukūk). Several Muslim scholars questioned Earth's apparent immobility and centrality within the universe. Some Muslim astronomers believed that Earth rotates around its axis, such as Abu Sa'id al-Sijzi (d. circa 1020). According to al-Biruni, Sijzi invented an astrolabe called al-zūraqī, based upon a belief held by some of his contemporaries "that the motion we see is due to the Earth's movement and not to that of the sky". The prevalence of this belief is further confirmed by a reference from the 13th century that states: According to the geometers [or engineers] (muhandisīn), the Earth is in constant circular motion, and what appears to be the motion of the heavens is actually due to the motion of the Earth and not the stars. Early in the 11th century, Alhazen wrote a scathing critique of Ptolemy's model in his Doubts on Ptolemy (c. 1028), which some have interpreted to imply he was criticizing Ptolemy's geocentrism, but most agree that he was actually criticizing the details of Ptolemy's model rather than his geocentrism.
In the 12th century, Arzachel departed from the ancient Greek idea of uniform circular motions by hypothesizing that the planet Mercury moves in an elliptic orbit, while Alpetragius proposed a planetary model that abandoned the equant, epicycle and eccentric mechanisms, though this resulted in a system that was mathematically less accurate. His alternative system spread through most of Europe during the 13th century.
Fakhr al-Din al-Razi (1149–1209), in dealing with his conception of physics and the physical world in his Matalib, rejects the Aristotelian and Avicennian notion of the Earth's centrality within the universe, but instead argues that there are "a thousand thousand worlds (alfa alfi 'awalim) beyond this world, such that each one of those worlds be bigger and more massive than this world, as well as having the like of what this world has." To support his theological argument, he cites the Qur'anic verse, "All praise belongs to God, Lord of the Worlds", emphasizing the term "Worlds".
The "Maragha Revolution" refers to the Maragha school's revolution against Ptolemaic astronomy. The "Maragha school" was an astronomical tradition beginning in the Maragha observatory and continuing with astronomers from the Damascus mosque and Samarkand observatory. Like their Andalusian predecessors, the Maragha astronomers attempted to solve the equant problem (the circle around whose circumference a planet or the center of an epicycle was conceived to move uniformly) and produce alternative configurations to the Ptolemaic model without abandoning geocentrism. They were more successful than their Andalusian predecessors in producing non-Ptolemaic configurations which eliminated the equant and eccentrics, were more accurate than the Ptolemaic model in numerically predicting planetary positions, and were in better agreement with empirical observations. The most important of the Maragha astronomers included Mo'ayyeduddin Urdi (died 1266), Nasīr al-Dīn al-Tūsī (1201–1274), Qutb al-Din al-Shirazi (1236–1311), Ibn al-Shatir (1304–1375), Ali Qushji (c. 1474), Al-Birjandi (died 1525), and Shams al-Din al-Khafri (died 1550).
However, the Maragha school never made the paradigm shift to heliocentrism. The influence of the Maragha school on Copernicus remains speculative, since there is no documentary evidence to prove it. The possibility that Copernicus independently developed the Tusi couple remains open, since no researcher has yet demonstrated that he knew about Tusi's work or that of the Maragha school.
== Ptolemaic and rival systems ==
Not all Greeks agreed with the geocentric model. The Pythagorean system has already been mentioned; some Pythagoreans believed the Earth to be one of several planets going around a central fire. Hicetas and Ecphantus, two Pythagoreans of the 5th century BC, and Heraclides Ponticus in the 4th century BC, believed that the Earth rotated on its axis but remained at the center of the universe. Such a system still qualifies as geocentric. It was revived in the Middle Ages by Jean Buridan. Heraclides Ponticus was once thought to have proposed that both Venus and Mercury went around the Sun rather than the Earth, but it is now known that he did not. Martianus Capella definitely put Mercury and Venus in orbit around the Sun. Aristarchus of Samos wrote a work, which has not survived, on heliocentrism, saying that the Sun was at the center of the universe, while the Earth and other planets revolved around it. His theory was not popular, and he had one named follower, Seleucus of Seleucia.
Epicurus was the most radical. He correctly realized in the 4th century BC that the universe does not have any single center. This theory was widely accepted by the later Epicureans and was notably defended by Lucretius in his poem De rerum natura.
=== Copernican system ===
In 1543, the geocentric system met its first serious challenge with the publication of Copernicus' De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres), which posited that the Earth and the other planets instead revolved around the Sun. The geocentric system was still held for many years afterwards, as at the time the Copernican system did not offer better predictions than the geocentric system, and it posed problems for both natural philosophy and scripture. The Copernican system was no more accurate than Ptolemy's system, because it still used circular orbits. This was not altered until Johannes Kepler postulated that they were elliptical (Kepler's first law of planetary motion).
=== Tychonic system ===
Tycho Brahe (1546–1601) made more accurate determinations of the positions of planets and stars. He sought the effect of stellar parallax, which would have been empirically verifiable proof of the Earth's motion around the Sun predicted by the Copernican model. Having observed no effect, he rejected the idea of the Earth's motion.
Consequently, he introduced a new system, the Tychonic system, in which the Earth was still at the center of the universe, and around it revolved the Sun, but all the other planets revolved around the Sun in a set of epicycles. His model considered both the benefits of the Copernican model and the lack of evidence for the Earth's motion.
=== Observation by Galileo and abandonment of the Ptolemaic model ===
With the invention of the telescope in 1609, observations made by Galileo Galilei (such as that Jupiter has moons) called into question some of the tenets of geocentrism but did not seriously threaten it. Because he observed dark "spots" on the Moon, craters, he remarked that the moon was not a perfect celestial body as had been previously conceived. This was the first detailed observation by telescope of the Moon's imperfections, which had previously been explained by Aristotle as the Moon being contaminated by Earth and its heavier elements, in contrast to the aether of the higher spheres. Galileo could also see the moons of Jupiter, which he dedicated to Cosimo II de' Medici, and stated that they orbited around Jupiter, not Earth. This was a significant claim as it would mean not only that not everything revolved around Earth as stated in the Ptolemaic model, but also showed a secondary celestial body could orbit a moving celestial body, strengthening the heliocentric argument that a moving Earth could retain the Moon. Galileo's observations were verified by other astronomers of the time period who quickly adopted use of the telescope, including Christoph Scheiner, Johannes Kepler, and Giovan Paulo Lembo.
In December 1610, Galileo Galilei used his telescope to observe that Venus showed all phases, just like the Moon. He thought that while this observation was incompatible with the Ptolemaic system, it was a natural consequence of the heliocentric system.
However, Ptolemy placed Venus' deferent and epicycle entirely inside the sphere of the Sun (between the Sun and Mercury), but this was arbitrary; he could just as easily have swapped Venus and Mercury and put them on the other side of the Sun, or made any other arrangement of Venus and Mercury, as long as they were always near a line running from the Earth through the Sun, such as placing the center of the Venus epicycle near the Sun. In this case, if the Sun is the source of all the light, under the Ptolemaic system:
If Venus is between Earth and the Sun, the phase of Venus must always be crescent or all dark.
If Venus is beyond the Sun, the phase of Venus must always be gibbous or full.
But Galileo saw Venus at first small and full, and later large and crescent.
This showed that with a Ptolemaic cosmology, the Venus epicycle can be neither completely inside nor completely outside of the orbit of the Sun. As a result, Ptolemaics abandoned the idea that the epicycle of Venus was completely inside the Sun, and later 17th-century competition between astronomical cosmologies focused on variations of the Tychonic or Copernican systems.
=== Historical positions of the Roman Catholic hierarchy ===
The famous Galileo affair pitted the geocentric model against the claims of Galileo. In regards to the theological basis for such an argument, two Popes addressed the question of whether the use of phenomenological language would compel one to admit an error in Scripture. Both taught that it would not. Pope Leo XIII wrote:
we have to contend against those who, making an evil use of physical science, minutely scrutinize the Sacred Book in order to detect the writers in a mistake, and to take occasion to vilify its contents. ... There can never, indeed, be any real discrepancy between the theologian and the physicist, as long as each confines himself within his own lines, and both are careful, as St. Augustine warns us, "not to make rash assertions, or to assert what is not known as known". If dissension should arise between them, here is the rule also laid down by St. Augustine, for the theologian: "Whatever they can really demonstrate to be true of physical nature, we must show to be capable of reconciliation with our Scriptures; and whatever they assert in their treatises which is contrary to these Scriptures of ours, that is to Catholic faith, we must either prove it as well as we can to be entirely false, or at all events we must, without the smallest hesitation, believe it to be so." To understand how just is the rule here formulated we must remember, first, that the sacred writers, or to speak more accurately, the Holy Ghost "Who spoke by them, did not intend to teach men these things (that is to say, the essential nature of the things of the visible universe), things in no way profitable unto salvation." Hence they did not seek to penetrate the secrets of nature, but rather described and dealt with things in more or less figurative language, or in terms which were commonly used at the time, and which in many instances are in daily use at this day, even by the most eminent men of science. Ordinary speech primarily and properly describes what comes under the senses; and somewhat in the same way the sacred writers-as the Angelic Doctor also reminds us – "went by what sensibly appeared", or put down what God, speaking to men, signified, in the way men could understand and were accustomed to.
Maurice Finocchiaro, author of a book on the Galileo affair, notes that this is "a view of the relationship between biblical interpretation and scientific investigation that corresponds to the one advanced by Galileo in the "Letter to the Grand Duchess Christina". Pope Pius XII repeated his predecessor's teaching:
The first and greatest care of Leo XIII was to set forth the teaching on the truth of the Sacred Books and to defend it from attack. Hence with grave words did he proclaim that there is no error whatsoever if the sacred writer, speaking of things of the physical order "went by what sensibly appeared" as the Angelic Doctor says, speaking either "in figurative language, or in terms which were commonly used at the time, and which in many instances are in daily use at this day, even among the most eminent men of science". For "the sacred writers, or to speak more accurately – the words are St. Augustine's – the Holy Spirit, Who spoke by them, did not intend to teach men these things – that is the essential nature of the things of the universe – things in no way profitable to salvation"; which principle "will apply to cognate sciences, and especially to history", that is, by refuting, "in a somewhat similar way the fallacies of the adversaries and defending the historical truth of Sacred Scripture from their attacks".
In 1664, Pope Alexander VII republished the Index Librorum Prohibitorum (List of Prohibited Books) and attached the various decrees connected with those books, including those concerned with heliocentrism. He stated in a papal bull that his purpose in doing so was that "the succession of things done from the beginning might be made known [quo rei ab initio gestae series innotescat]".
The position of the curia evolved slowly over the centuries towards permitting the heliocentric view. In 1757, during the papacy of Benedict XIV, the Congregation of the Index withdrew the decree that prohibited all books teaching the Earth's motion, although the Dialogue and a few other books continued to be explicitly included. In 1820, the Congregation of the Holy Office, with the pope's approval, decreed that Catholic astronomer Giuseppe Settele was allowed to treat the Earth's motion as an established fact and removed any obstacle for Catholics to hold to the motion of the Earth:
The Assessor of the Holy Office has referred the request of Giuseppe Settele, Professor of Optics and Astronomy at La Sapienza University, regarding permission to publish his work Elements of Astronomy in which he espouses the common opinion of the astronomers of our time regarding the Earth’s daily and yearly motions, to His Holiness through Divine Providence, Pope Pius VII. Previously, His Holiness had referred this request to the Supreme Sacred Congregation and concurrently to the consideration of the Most Eminent and Most Reverend General Cardinal Inquisitor. His Holiness has decreed that no obstacles exist for those who sustain Copernicus' affirmation regarding the Earth's movement in the manner in which it is affirmed today, even by Catholic authors. He has, moreover, suggested the insertion of several notations into this work, aimed at demonstrating that the above mentioned affirmation [of Copernicus], as it has come to be understood, does not present any difficulties; difficulties that existed in times past, prior to the subsequent astronomical observations that have now occurred. [Pope Pius VII] has also recommended that the implementation [of these decisions] be given to the Cardinal Secretary of the Supreme Sacred Congregation and Master of the Sacred Apostolic Palace. He is now appointed the task of bringing to an end any concerns and criticisms regarding the printing of this book, and, at the same time, ensuring that in the future, regarding the publication of such works, permission is sought from the Cardinal Vicar whose signature will not be given without the authorization of the Superior of his Order.
In 1822, the Congregation of the Holy Office removed the prohibition on the publication of books treating of the Earth's motion in accordance with modern astronomy and Pope Pius VII ratified the decision:
The most excellent [cardinals] have decreed that there must be no denial, by the present or by future Masters of the Sacred Apostolic Palace, of permission to print and to publish works which treat of the mobility of the Earth and of the immobility of the sun, according to the common opinion of modern astronomers, as long as there are no other contrary indications, on the basis of the decrees of the Sacred Congregation of the Index of 1757 and of this Supreme [Holy Office] of 1820; and that those who would show themselves to be reluctant or would disobey, should be forced under punishments at the choice of [this] Sacred Congregation, with derogation of [their] claimed privileges, where necessary.
The 1835 edition of the Catholic List of Prohibited Books for the first time omits the Dialogue from the list. In his 1921 papal encyclical, In praeclara summorum, Pope Benedict XV stated that, "though this Earth on which we live may not be the center of the universe as at one time was thought, it was the scene of the original happiness of our first ancestors, witness of their unhappy fall, as too of the Redemption of mankind through the Passion and Death of Jesus Christ". In 1965 the Second Vatican Council stated that, "Consequently, we cannot but deplore certain habits of mind, which are sometimes found too among Christians, which do not sufficiently attend to the rightful independence of science and which, from the arguments and controversies they spark, lead many minds to conclude that faith and science are mutually opposed." The footnote on this statement is to Msgr. Pio Paschini's, Vita e opere di Galileo Galilei, 2 volumes, Vatican Press (1964). Pope John Paul II regretted the treatment that Galileo received, in a speech to the Pontifical Academy of Sciences in 1992. The Pope declared the incident to be based on a "tragic mutual miscomprehension". He further stated:
Cardinal Poupard has also reminded us that the sentence of 1633 was not irreformable, and that the debate which had not ceased to evolve thereafter, was closed in 1820 with the imprimatur given to the work of Canon Settele. ... The error of the theologians of the time, when they maintained the centrality of the Earth, was to think that our understanding of the physical world's structure was, in some way, imposed by the literal sense of Sacred Scripture. Let us recall the celebrated saying attributed to Baronius "Spiritui Sancto mentem fuisse nos docere quomodo ad coelum eatur, non quomodo coelum gradiatur". In fact, the Bible does not concern itself with the details of the physical world, the understanding of which is the competence of human experience and reasoning. There exist two realms of knowledge, one which has its source in Revelation and one which reason can discover by its own power. To the latter belong especially the experimental sciences and philosophy. The distinction between the two realms of knowledge ought not to be understood as opposition.
== Gravitation ==
Johannes Kepler analysed Tycho Brahe's famously accurate observations, and afterwards constructed his three laws in 1609 and 1619, based upon a heliocentric model wherein the planets move in elliptical paths. Using these laws, he was the first astronomer to successfully predict a transit of Venus for the year 1631. The change from circular orbits to elliptical planetary paths dramatically improved the accuracy of celestial observations and predictions. Because the heliocentric model devised by Copernicus was no more accurate than Ptolemy's system, new observations were needed to persuade those who still adhered to the geocentric model. However, Kepler's laws based upon Brahe's data became a problem that geocentrists could not easily overcome.
In 1687, Isaac Newton stated the law of universal gravitation, which was described earlier as a hypothesis by Robert Hooke and others. His main achievement was to mathematically derive Kepler's laws of planetary motion from the law of gravitation, thus helping to prove the latter. This introduced gravitation as the force which kept Earth and the planets moving through the universe, and also kept the atmosphere from flying away. The theory of gravity allowed scientists to rapidly construct a plausible heliocentric model for the Solar System. In his Principia, Newton explained his theory of how gravity, previously thought to be a mysterious, unexplained occult force, directed the movements of celestial bodies, and kept our Solar System in working order. His descriptions of centripetal force were a breakthrough in scientific thought, using the newly developed mathematical discipline of differential calculus, finally replacing the previous schools of scientific thought, which had been dominated by Aristotle and Ptolemy. However, the process was gradual.
Several empirical tests of Newton's theory, explaining the longer period of oscillation of a pendulum at the equator and the differing size of a degree of latitude, would gradually become available between 1673 and 1738. In addition, stellar aberration was observed by Robert Hooke in 1674, and tested in a series of observations by Jean Picard over a period of ten years, finishing in 1680. However, it was not explained until 1729, when James Bradley provided an approximate explanation in terms of the Earth's revolution about the Sun.
In 1838, astronomer Friedrich Wilhelm Bessel measured the parallax of the star 61 Cygni successfully, and disproved Ptolemy's claim that parallax motion did not exist. This finally confirmed the assumptions made by Copernicus, providing accurate, dependable scientific observations, and conclusively displaying how distant stars are from Earth.
A geocentric frame is useful for many everyday activities and most laboratory experiments, but is a less appropriate choice for Solar System mechanics and space travel. While a heliocentric frame is most useful in those cases, galactic and extragalactic astronomy is easier if the Sun is treated as neither stationary nor the center of the universe, but rather rotating around the center of our galaxy, while in turn our galaxy is also not at rest in the cosmic background.
== Relativity ==
Albert Einstein and Leopold Infeld wrote in The Evolution of Physics (1938): "Can we formulate physical laws so that they are valid for all CS [coordinate systems], not only those moving uniformly, but also those moving quite arbitrarily, relative to each other? If this can be done, our difficulties will be over. We shall then be able to apply the laws of nature to any CS. The struggle, so violent in the early days of science, between the views of Ptolemy and Copernicus would then be quite meaningless. Either CS could be used with equal justification. The two sentences, 'the sun is at rest and the Earth moves', or 'the sun moves and the Earth is at rest', would simply mean two different conventions concerning two different CS.
Could we build a real relativistic physics valid in all CS; a physics in which there would be no place for absolute, but only for relative, motion? This is indeed possible!"
Despite giving more respectability to the geocentric view than Newtonian physics does, relativity is not geocentric. Rather, relativity states that the Sun, the Earth, the Moon, Jupiter, or any other point for that matter could be chosen as a center of the Solar System with equal validity.
Relativity agrees with Newtonian predictions that regardless of whether the Sun or the Earth are chosen arbitrarily as the center of the coordinate system describing the Solar System, the paths of the planets form (roughly) ellipses with respect to the Sun, not the Earth. With respect to the average reference frame of the fixed stars, the planets do indeed move around the Sun, which due to its much larger mass, moves far less than its own diameter and the gravity of which is dominant in determining the orbits of the planets (in other words, the center of mass of the Solar System is near the center of the Sun). The Earth and Moon are much closer to being a binary planet; the center of mass around which they both rotate is still inside the Earth, but is about 4,624 km (2,873 miles) or 72.6% of the Earth's radius away from the centre of the Earth (thus closer to the surface than the center).
What the principle of relativity points out is that correct mathematical calculations can be made regardless of the reference frame chosen, and these will all agree with each other as to the predictions of actual motions of bodies with respect to each other. It is not necessary to choose the object in the Solar System with the largest gravitational field as the center of the coordinate system in order to predict the motions of planetary bodies, though doing so may make calculations easier to perform or interpret. A geocentric coordinate system can be more convenient when dealing only with bodies mostly influenced by the gravity of the Earth (such as artificial satellites and the Moon), or when calculating what the sky will look like when viewed from Earth (as opposed to an imaginary observer looking down on the entire Solar System, where a different coordinate system might be more convenient).
== Religious and contemporary adherence to geocentrism ==
The Ptolemaic model held sway into the early modern age; from the late 16th century onward it was gradually replaced as the consensus description by the heliocentric model. Geocentrism as a separate religious belief, however, never completely died out. In the United States between 1870 and 1920, for example, various members of the Lutheran Church–Missouri Synod published articles disparaging Copernican astronomy and promoting geocentrism. However, in the 1902 Theological Quarterly, A. L. Graebner observed that the synod had no doctrinal position on geocentrism, heliocentrism, or any scientific model, unless it were to contradict Scripture. He stated that any possible declarations of geocentrists within the synod did not set the position of the church body as a whole.
Articles arguing that geocentrism was the biblical perspective appeared in some early creation science newsletters. Contemporary advocates for such religious beliefs include Robert Sungenis (author of the 2006 book Galileo Was Wrong and the 2014 pseudo-documentary film The Principle). Most contemporary creationist organizations reject such perspectives. A few Orthodox Jewish leaders maintain a geocentric model of the universe and an interpretation of Maimonides to the effect that he ruled that the Earth is orbited by the Sun. The Lubavitcher Rebbe also explained that geocentrism is defensible based on the theory of relativity. While geocentrism is important in Maimonides' calendar calculations, the great majority of Jewish religious scholars, who accept the divinity of the Bible and accept many of his rulings as legally binding, do not believe that the Bible or Maimonides command a belief in geocentrism. There have been some modern Islamic scholars who promoted geocentrism. One of them was Ahmed Raza Khan Barelvi, a Sunni scholar of the Indian subcontinent. He rejected the heliocentric model and wrote a book that explains the movement of the sun, moon and other planets around the Earth.
According to a report released in 2014 by the National Science Foundation, 26% of Americans surveyed believe that the Sun revolves around the Earth. Morris Berman quotes a 2006 survey showing that some 20% of the U.S. population believe that the Sun goes around the Earth (geocentrism) rather than the Earth going around the Sun (heliocentrism), while a further 9% claimed not to know. Polls conducted by Gallup in the 1990s found that 16% of Germans, 18% of Americans and 19% of Britons hold that the Sun revolves around the Earth. A study conducted in 2005 by Jon D. Miller of Northwestern University, an expert in the public understanding of science and technology, found that about 20%, or one in five, of American adults believe that the Sun orbits the Earth. According to a 2011 VTSIOM poll, 32% of Russians believe that the Sun orbits the Earth.
== Planetariums ==
Many planetariums can switch between heliocentric and geocentric models. In particular, the geocentric model is still used for projecting the celestial sphere and lunar phases in education and sometimes for navigation.
== See also ==
Aristotelian physics
Earth-centered, Earth-fixed coordinate system
History of the center of the Universe
Hollow Earth § Concave Hollow Earths
Religious cosmology
Sphere of fire
Wolfgang Smith, Catholic mathematician
== Notes ==
== References ==
== Bibliography ==
Crowe, Michael J. (1990). Theories of the World from Antiquity to the Copernican Revolution. Mineola, NY: Dover Publications. ISBN 0486261735.
Dreyer, J.L.E. (1953). A History of Astronomy from Thales to Kepler. New York: Dover Publications.
Evans, James (1998). The History and Practice of Ancient Astronomy. New York: Oxford University Press.
Grant, Edward (1984-01-01). "In Defense of the Earth's Centrality and Immobility: Scholastic Reaction to Copernicanism in the Seventeenth Century". Transactions of the American Philosophical Society. New Series. 74 (4): 1–69. doi:10.2307/1006444. ISSN 0065-9746. JSTOR 1006444.
Heath, Thomas (1913). Aristarchus of Samos. Oxford: Clarendon Press.
Hoyle, Fred (1973). Nicolaus Copernicus.
Koestler, Arthur (1986) [1959]. The Sleepwalkers: A History of Man's Changing Vision of the Universe. Penguin Books. ISBN 014055212X. 1990 reprint: ISBN 0140192468.
Kuhn, Thomas S. (1957). The Copernican Revolution. Cambridge: Harvard University Press. ISBN 0674171039. OCLC 1241666716.
Linton, Christopher M. (2004). From Eudoxus to Einstein—A History of Mathematical Astronomy. Cambridge: Cambridge University Press. ISBN 9780521827508.
Qadir, Asghar (1989). Relativity: An introduction to the special theory. Singapore Teaneck, NJ: World Scientific. ISBN 9971-5-0612-2. OCLC 841809663.
Walker, Christopher, ed. (1996). Astronomy Before the Telescope. London: British Museum Press. ISBN 0714117463.
Wright, J. Edward (2000). The Early History Of Heaven. Oxford University Press.
== External links ==
Another demonstration of the complexity of observed orbits when assuming a geocentric model of the Solar System
Geocentric Perspective animation of the Solar System in 150AD
Ptolemy’s system of astronomy
The Galileo Project – Ptolemaic System | Wikipedia/Ptolemaic_model |
A mathematical model is an abstract description of a concrete system using mathematical concepts and language. The process of developing a mathematical model is termed mathematical modeling. Mathematical models are used in applied mathematics and in the natural sciences (such as physics, biology, earth science, chemistry) and engineering disciplines (such as computer science, electrical engineering), as well as in non-physical systems such as the social sciences (such as economics, psychology, sociology, political science). It can also be taught as a subject in its own right.
The use of mathematical models to solve problems in business or military operations is a large part of the field of operations research. Mathematical models are also used in music, linguistics, and philosophy (for example, intensively in analytic philosophy). A model may help to explain a system and to study the effects of different components, and to make predictions about behavior.
== Elements of a mathematical model ==
Mathematical models can take many forms, including dynamical systems, statistical models, differential equations, or game theoretic models. These and other types of models can overlap, with a given model involving a variety of abstract structures. In general, mathematical models may include logical models. In many cases, the quality of a scientific field depends on how well the mathematical models developed on the theoretical side agree with results of repeatable experiments. Lack of agreement between theoretical mathematical models and experimental measurements often leads to important advances as better theories are developed. In the physical sciences, a traditional mathematical model contains most of the following elements:
Governing equations
Supplementary sub-models
Defining equations
Constitutive equations
Assumptions and constraints
Initial and boundary conditions
Classical constraints and kinematic equations
== Classifications ==
Mathematical models are of different types:
Linear vs. nonlinear. If all the operators in a mathematical model exhibit linearity, the resulting mathematical model is defined as linear. A model is considered to be nonlinear otherwise. The definition of linearity and nonlinearity is dependent on context, and linear models may have nonlinear expressions in them. For example, in a statistical linear model, it is assumed that a relationship is linear in the parameters, but it may be nonlinear in the predictor variables. Similarly, a differential equation is said to be linear if it can be written with linear differential operators, but it can still have nonlinear expressions in it. In a mathematical programming model, if the objective functions and constraints are represented entirely by linear equations, then the model is regarded as a linear model. If one or more of the objective functions or constraints are represented with a nonlinear equation, then the model is known as a nonlinear model. Linear structure implies that a problem can be decomposed into simpler parts that can be treated independently and/or analyzed at a different scale, and the results obtained will remain valid for the initial problem when recomposed and rescaled. Nonlinearity, even in fairly simple systems, is often associated with phenomena such as chaos and irreversibility. Although there are exceptions, nonlinear systems and models tend to be more difficult to study than linear ones. A common approach to nonlinear problems is linearization, but this can be problematic if one is trying to study aspects such as irreversibility, which are strongly tied to nonlinearity.
Static vs. dynamic. A dynamic model accounts for time-dependent changes in the state of the system, while a static (or steady-state) model calculates the system in equilibrium, and thus is time-invariant. Dynamic models typically are represented by differential equations or difference equations.
Explicit vs. implicit. If all of the input parameters of the overall model are known, and the output parameters can be calculated by a finite series of computations, the model is said to be explicit. But sometimes it is the output parameters which are known, and the corresponding inputs must be solved for by an iterative procedure, such as Newton's method or Broyden's method. In such a case the model is said to be implicit. For example, a jet engine's physical properties such as turbine and nozzle throat areas can be explicitly calculated given a design thermodynamic cycle (air and fuel flow rates, pressures, and temperatures) at a specific flight condition and power setting, but the engine's operating cycles at other flight conditions and power settings cannot be explicitly calculated from the constant physical properties.
Discrete vs. continuous. A discrete model treats objects as discrete, such as the particles in a molecular model or the states in a statistical model; while a continuous model represents the objects in a continuous manner, such as the velocity field of fluid in pipe flows, temperatures and stresses in a solid, and electric field that applies continuously over the entire model due to a point charge.
Deterministic vs. probabilistic (stochastic). A deterministic model is one in which every set of variable states is uniquely determined by parameters in the model and by sets of previous states of these variables; therefore, a deterministic model always performs the same way for a given set of initial conditions. Conversely, in a stochastic model—usually called a "statistical model"—randomness is present, and variable states are not described by unique values, but rather by probability distributions.
Deductive, inductive, or floating. A deductive model is a logical structure based on a theory. An inductive model arises from empirical findings and generalization from them. The floating model rests on neither theory nor observation, but is merely the invocation of expected structure. Application of mathematics in social sciences outside of economics has been criticized for unfounded models. Application of catastrophe theory in science has been characterized as a floating model.
Strategic vs. non-strategic. Models used in game theory are different in a sense that they model agents with incompatible incentives, such as competing species or bidders in an auction. Strategic models assume that players are autonomous decision makers who rationally choose actions that maximize their objective function. A key challenge of using strategic models is defining and computing solution concepts such as Nash equilibrium. An interesting property of strategic models is that they separate reasoning about rules of the game from reasoning about behavior of the players.
== Construction ==
In business and engineering, mathematical models may be used to maximize a certain output. The system under consideration will require certain inputs. The system relating inputs to outputs depends on other variables too: decision variables, state variables, exogenous variables, and random variables. Decision variables are sometimes known as independent variables. Exogenous variables are sometimes known as parameters or constants. The variables are not independent of each other as the state variables are dependent on the decision, input, random, and exogenous variables. Furthermore, the output variables are dependent on the state of the system (represented by the state variables).
Objectives and constraints of the system and its users can be represented as functions of the output variables or state variables. The objective functions will depend on the perspective of the model's user. Depending on the context, an objective function is also known as an index of performance, as it is some measure of interest to the user. Although there is no limit to the number of objective functions and constraints a model can have, using or optimizing the model becomes more involved (computationally) as the number increases. For example, economists often apply linear algebra when using input–output models. Complicated mathematical models that have many variables may be consolidated by use of vectors where one symbol represents several variables.
=== A priori information ===
Mathematical modeling problems are often classified into black box or white box models, according to how much a priori information on the system is available. A black-box model is a system of which there is no a priori information available. A white-box model (also called glass box or clear box) is a system where all necessary information is available. Practically all systems are somewhere between the black-box and white-box models, so this concept is useful only as an intuitive guide for deciding which approach to take.
Usually, it is preferable to use as much a priori information as possible to make the model more accurate. Therefore, the white-box models are usually considered easier, because if you have used the information correctly, then the model will behave correctly. Often the a priori information comes in forms of knowing the type of functions relating different variables. For example, if we make a model of how a medicine works in a human system, we know that usually the amount of medicine in the blood is an exponentially decaying function, but we are still left with several unknown parameters; how rapidly does the medicine amount decay, and what is the initial amount of medicine in blood? This example is therefore not a completely white-box model. These parameters have to be estimated through some means before one can use the model.
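As a rough illustration of the medicine example, the sketch below assumes the decaying-exponential form c(t) = c0·exp(−k·t) and estimates the two unknown parameters from measurements using scipy.optimize.curve_fit; the function name, the data values, and the starting guesses are invented for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed drug-concentration model: c(t) = c0 * exp(-k * t).
# The functional form is taken as known (a priori information); the
# parameters c0 (initial amount) and k (decay rate) are estimated from data.
def concentration(t, c0, k):
    return c0 * np.exp(-k * t)

# Simulated measurements (illustrative values, not real pharmacokinetic data).
t_obs = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
c_obs = np.array([10.0, 7.4, 5.5, 3.0, 0.9])

(c0_hat, k_hat), _ = curve_fit(concentration, t_obs, c_obs, p0=(10.0, 0.3))
print(f"estimated c0 = {c0_hat:.2f}, decay rate k = {k_hat:.2f}")
```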
In black-box models, one tries to estimate both the functional form of relations between variables and the numerical parameters in those functions. Using a priori information we could end up, for example, with a set of functions that probably could describe the system adequately. If there is no a priori information we would try to use functions as general as possible to cover all different models. An often-used approach for black-box models is the use of neural networks, which usually do not make assumptions about incoming data. Alternatively, the NARMAX (Nonlinear AutoRegressive Moving Average model with eXogenous inputs) algorithms, which were developed as part of nonlinear system identification, can be used to select the model terms, determine the model structure, and estimate the unknown parameters in the presence of correlated and nonlinear noise. The advantage of NARMAX models compared to neural networks is that NARMAX produces models that can be written down and related to the underlying process, whereas neural networks produce an approximation that is opaque.
==== Subjective information ====
Sometimes it is useful to incorporate subjective information into a mathematical model. This can be done based on intuition, experience, or expert opinion, or based on convenience of mathematical form. Bayesian statistics provides a theoretical framework for incorporating such subjectivity into a rigorous analysis: we specify a prior probability distribution (which can be subjective), and then update this distribution based on empirical data.
An example of when such approach would be necessary is a situation in which an experimenter bends a coin slightly and tosses it once, recording whether it comes up heads, and is then given the task of predicting the probability that the next flip comes up heads. After bending the coin, the true probability that the coin will come up heads is unknown; so the experimenter would need to make a decision (perhaps by looking at the shape of the coin) about what prior distribution to use. Incorporation of such subjective information might be important to get an accurate estimate of the probability.
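A minimal sketch of such a subjective choice, using a conjugate Beta prior for the unknown heads probability and updating it on the single observed toss; the particular prior parameters are an assumption made here for illustration and are not part of the original example.

```python
from scipy import stats

# A Beta(1, 1) prior is a uniform prior over the unknown heads probability;
# an experimenter who suspects the bend favours heads could instead start
# from, say, Beta(4, 2). Both choices are subjective.
prior_a, prior_b = 1.0, 1.0

# Observed data: the single toss after bending came up heads.
heads, tails = 1, 0

# Conjugate update: the posterior is Beta(prior_a + heads, prior_b + tails).
posterior = stats.beta(prior_a + heads, prior_b + tails)

# The posterior mean is the predicted probability that the next flip is heads.
print(f"predicted P(heads) = {posterior.mean():.3f}")
```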
=== Complexity ===
In general, model complexity involves a trade-off between simplicity and accuracy of the model. Occam's razor is a principle particularly relevant to modeling, its essential idea being that among models with roughly equal predictive power, the simplest one is the most desirable. While added complexity usually improves the realism of a model, it can make the model difficult to understand and analyze, and can also pose computational problems, including numerical instability. Thomas Kuhn argues that as science progresses, explanations tend to become more complex before a paradigm shift offers radical simplification.
For example, when modeling the flight of an aircraft, we could embed each mechanical part of the aircraft into our model and would thus acquire an almost white-box model of the system. However, the computational cost of adding such a huge amount of detail would effectively inhibit the usage of such a model. Additionally, the uncertainty would increase due to an overly complex system, because each separate part induces some amount of variance into the model. It is therefore usually appropriate to make some approximations to reduce the model to a sensible size. Engineers often can accept some approximations in order to get a more robust and simple model. For example, Newton's classical mechanics is an approximated model of the real world. Still, Newton's model is quite sufficient for most ordinary-life situations, that is, as long as particle speeds are well below the speed of light, and we study macro-particles only. Note that better accuracy does not necessarily mean a better model. Statistical models are prone to overfitting which means that a model is fitted to data too much and it has lost its ability to generalize to new events that were not observed before.
=== Training, tuning, and fitting ===
Any model which is not pure white-box contains some parameters that can be used to fit the model to the system it is intended to describe. If the modeling is done by an artificial neural network or other machine learning, the optimization of parameters is called training, while the optimization of model hyperparameters is called tuning and often uses cross-validation. In more conventional modeling through explicitly given mathematical functions, parameters are often determined by curve fitting.
=== Evaluation and assessment ===
A crucial part of the modeling process is the evaluation of whether or not a given mathematical model describes a system accurately. This question can be difficult to answer as it involves several different types of evaluation.
==== Prediction of empirical data ====
Usually, the easiest part of model evaluation is checking whether a model predicts experimental measurements or other empirical data not used in the model development. In models with parameters, a common approach is to split the data into two disjoint subsets: training data and verification data. The training data are used to estimate the model parameters. An accurate model will closely match the verification data even though these data were not used to set the model's parameters. This practice is referred to as cross-validation in statistics.
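A minimal sketch of this split, assuming a simple straight-line model and synthetic data; the data, the even/odd split rule, and the root-mean-square error metric are illustrative choices rather than a prescribed procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from a hypothetical linear system y = 2x + 1 plus noise.
x = np.linspace(0, 10, 40)
y = 2 * x + 1 + rng.normal(scale=0.5, size=x.size)

# Split into disjoint training and verification subsets.
train = np.arange(x.size) % 2 == 0
verify = ~train

# Fit a straight-line model on the training data only.
slope, intercept = np.polyfit(x[train], y[train], deg=1)

# Assess the fit on verification data the model has never seen.
pred = slope * x[verify] + intercept
rmse = np.sqrt(np.mean((pred - y[verify]) ** 2))
print(f"verification RMSE = {rmse:.3f}")
```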
Defining a metric to measure distances between observed and predicted data is a useful tool for assessing model fit. In statistics, decision theory, and some economic models, a loss function plays a similar role. While it is rather straightforward to test the appropriateness of parameters, it can be more difficult to test the validity of the general mathematical form of a model. In general, more mathematical tools have been developed to test the fit of statistical models than models involving differential equations. Tools from nonparametric statistics can sometimes be used to evaluate how well the data fit a known distribution or to come up with a general model that makes only minimal assumptions about the model's mathematical form.
==== Scope of the model ====
Assessing the scope of a model, that is, determining what situations the model is applicable to, can be less straightforward. If the model was constructed based on a set of data, one must determine for which systems or situations the known data is a "typical" set of data. The question of whether the model describes well the properties of the system between data points is called interpolation, and the same question for events or data points outside the observed data is called extrapolation.
As an example of the typical limitations of the scope of a model, in evaluating Newtonian classical mechanics, we can note that Newton made his measurements without advanced equipment, so he could not measure properties of particles traveling at speeds close to the speed of light. Likewise, he did not measure the movements of molecules and other small particles, but macro particles only. It is then not surprising that his model does not extrapolate well into these domains, even though his model is quite sufficient for ordinary life physics.
==== Philosophical considerations ====
Many types of modeling implicitly involve claims about causality. This is usually (but not always) true of models involving differential equations. As the purpose of modeling is to increase our understanding of the world, the validity of a model rests not only on its fit to empirical observations, but also on its ability to extrapolate to situations or data beyond those originally described in the model. One can think of this as the differentiation between qualitative and quantitative predictions. One can also argue that a model is worthless unless it provides some insight which goes beyond what is already known from direct investigation of the phenomenon being studied.
An example of such criticism is the argument that the mathematical models of optimal foraging theory do not offer insight that goes beyond the common-sense conclusions of evolution and other basic principles of ecology. It should also be noted that while mathematical modeling uses mathematical concepts and language, it is not itself a branch of mathematics and does not necessarily conform to any mathematical logic, but is typically a branch of some science or other technical subject, with corresponding concepts and standards of argumentation.
== Significance in the natural sciences ==
Mathematical models are of great importance in the natural sciences, particularly in physics. Physical theories are almost invariably expressed using mathematical models. Throughout history, more and more accurate mathematical models have been developed. Newton's laws accurately describe many everyday phenomena, but at certain limits theory of relativity and quantum mechanics must be used.
It is common to use idealized models in physics to simplify things. Massless ropes, point particles, ideal gases and the particle in a box are among the many simplified models used in physics. The laws of physics are represented with simple equations such as Newton's laws, Maxwell's equations and the Schrödinger equation. These laws are a basis for making mathematical models of real situations. Many real situations are very complex and are thus modeled approximately on a computer: a model that is computationally feasible is built from the basic laws or from approximate models derived from the basic laws. For example, molecules can be modeled by molecular orbital models that are approximate solutions to the Schrödinger equation. In engineering, physics models are often made by mathematical methods such as finite element analysis.
Different mathematical models use different geometries that are not necessarily accurate descriptions of the geometry of the universe. Euclidean geometry is much used in classical physics, while special relativity and general relativity are examples of theories that use geometries which are not Euclidean.
== Some applications ==
Often when engineers analyze a system to be controlled or optimized, they use a mathematical model. In analysis, engineers can build a descriptive model of the system as a hypothesis of how the system could work, or try to estimate how an unforeseeable event could affect the system. Similarly, in control of a system, engineers can try out different control approaches in simulations.
A mathematical model usually describes a system by a set of variables and a set of equations that establish relationships between the variables. Variables may be of many types; real or integer numbers, Boolean values or strings, for example. The variables represent some properties of the system, for example, the measured system outputs often in the form of signals, timing data, counters, and event occurrence. The actual model is the set of functions that describe the relations between the different variables.
== Examples ==
One popular example in computer science is the mathematical modeling of various machines; one such model is the deterministic finite automaton (DFA), which is defined as an abstract mathematical concept but which, owing to its deterministic nature, can be implemented in hardware and software to solve various specific problems. For example, the following is a DFA M with a binary alphabet, which requires that the input contain an even number of 0s:
{\displaystyle M=(Q,\Sigma ,\delta ,q_{0},F)}

where

{\displaystyle Q=\{S_{1},S_{2}\},}
{\displaystyle \Sigma =\{0,1\},}
{\displaystyle q_{0}=S_{1},}
{\displaystyle F=\{S_{1}\},}

and δ is defined by the following state-transition table:

        0    1
  S1   S2   S1
  S2   S1   S2

The state S1 represents that there has been an even number of 0s in the input so far, while S2 signifies an odd number. A 1 in the input does not change the state of the automaton. When the input ends, the state will show whether the input contained an even number of 0s or not. If the input did contain an even number of 0s, M will finish in state S1, an accepting state, so the input string will be accepted.

The language recognized by M is the regular language given by the regular expression 1*( 0 (1*) 0 (1*) )*, where "*" is the Kleene star, e.g., 1* denotes any non-negative number (possibly zero) of symbols "1".
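The same automaton can also be written down directly as a small program; the sketch below is one possible encoding of the transition function δ described above, with arbitrary function and variable names.

```python
# States: "S1" = an even number of 0s seen so far (accepting), "S2" = an odd number.
TRANSITIONS = {
    ("S1", "0"): "S2",
    ("S1", "1"): "S1",
    ("S2", "0"): "S1",
    ("S2", "1"): "S2",
}

def accepts(input_string: str) -> bool:
    """Run the DFA M on input_string; accept iff it halts in state S1."""
    state = "S1"  # start state q0
    for symbol in input_string:
        state = TRANSITIONS[(state, symbol)]
    return state == "S1"

print(accepts("1001"))  # True  (two 0s)
print(accepts("10"))    # False (one 0)
print(accepts(""))      # True  (zero 0s, an even number)
```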
Many everyday activities carried out without a thought are uses of mathematical models. A geographical map projection of a region of the earth onto a small, plane surface is a model which can be used for many purposes such as planning travel.
Another simple activity is predicting the position of a vehicle from its initial position, direction and speed of travel, using the equation that distance traveled is the product of time and speed. This is known as dead reckoning when used more formally. Mathematical modeling in this way does not necessarily require formal mathematics; animals have been shown to use dead reckoning.
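A minimal sketch of dead reckoning under the stated distance = speed × time relation, assuming a flat plane, a constant heading, and no wind or current; the coordinate convention and the numbers are illustrative.

```python
import math

def dead_reckon(x0, y0, heading_deg, speed, elapsed_time):
    """Estimate a new position from a start point, heading, speed and time
    (distance = speed * time), ignoring drift and measurement error."""
    distance = speed * elapsed_time
    heading = math.radians(heading_deg)
    return (x0 + distance * math.sin(heading),   # east component
            y0 + distance * math.cos(heading))   # north component

# A vehicle at the origin heading due east (090 degrees) at 10 km/h for 1.5 h.
print(dead_reckon(0.0, 0.0, 90.0, 10.0, 1.5))  # approximately (15.0, 0.0)
```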
Population Growth. A simple (though approximate) model of population growth is the Malthusian growth model. A slightly more realistic and largely used population growth model is the logistic function, and its extensions.
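For concreteness, the sketch below evaluates the Malthusian (exponential) model and the closed-form solution of the logistic model side by side; the growth rate, carrying capacity, and initial population are invented illustrative values.

```python
import numpy as np

def malthusian(t, p0, r):
    """Malthusian (exponential) growth: dP/dt = r * P."""
    return p0 * np.exp(r * t)

def logistic(t, p0, r, k):
    """Closed-form logistic growth with intrinsic rate r and carrying capacity k."""
    return k / (1 + ((k - p0) / p0) * np.exp(-r * t))

t = np.linspace(0, 20, 5)
print(malthusian(t, p0=10, r=0.5))        # grows without bound
print(logistic(t, p0=10, r=0.5, k=1000))  # saturates near the carrying capacity k
```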
Model of a particle in a potential-field. In this model we consider a particle as being a point of mass which describes a trajectory in space which is modeled by a function giving its coordinates in space as a function of time. The potential field is given by a function
{\displaystyle V\!:\mathbb {R} ^{3}\!\to \mathbb {R} }

and the trajectory, that is a function

{\displaystyle \mathbf {r} \!:\mathbb {R} \to \mathbb {R} ^{3},}

is the solution of the differential equation:

{\displaystyle -{\frac {\mathrm {d} ^{2}\mathbf {r} (t)}{\mathrm {d} t^{2}}}m={\frac {\partial V[\mathbf {r} (t)]}{\partial x}}\mathbf {\hat {x}} +{\frac {\partial V[\mathbf {r} (t)]}{\partial y}}\mathbf {\hat {y}} +{\frac {\partial V[\mathbf {r} (t)]}{\partial z}}\mathbf {\hat {z}} ,}

that can be written also as

{\displaystyle m{\frac {\mathrm {d} ^{2}\mathbf {r} (t)}{\mathrm {d} t^{2}}}=-\nabla V[\mathbf {r} (t)].}
Note this model assumes the particle is a point mass, which is certainly known to be false in many cases in which we use this model; for example, as a model of planetary motion.
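A minimal numerical sketch of this model, assuming a specific harmonic potential V(r) = ½k|r|² (so that −∇V = −k·r) and integrating Newton's equation with scipy.integrate.solve_ivp; the potential, the mass, and the initial conditions are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed harmonic potential V(r) = 0.5 * k * |r|^2, for which -grad V = -k * r;
# any other differentiable potential could be substituted here.
m, k = 1.0, 4.0

def rhs(t, state):
    # state = (x, y, z, vx, vy, vz); Newton's equation m r'' = -grad V(r).
    r, v = state[:3], state[3:]
    a = -k * r / m
    return np.concatenate([v, a])

# Illustrative initial position and velocity.
state0 = np.array([1.0, 0.0, 0.0, 0.0, 2.0, 0.0])
sol = solve_ivp(rhs, (0.0, 10.0), state0, dense_output=True)
print(sol.y[:3, -1])  # position of the particle at t = 10
```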
Model of rational behavior for a consumer. In this model we assume a consumer faces a choice of
n commodities labeled 1, 2, ..., n, each with a market price p1, p2, ..., pn. The consumer is assumed to have an ordinal utility function U (ordinal in the sense that only the sign of the differences between two utilities, and not the level of each utility, is meaningful), depending on the amounts of commodities x1, x2, ..., xn consumed. The model further assumes that the consumer has a budget M which is used to purchase a vector x1, x2, ..., xn in such a way as to maximize U(x1, x2, ..., xn). The problem of rational behavior in this model then becomes a mathematical optimization problem, that is:

{\displaystyle \max \,U(x_{1},x_{2},\ldots ,x_{n})}

subject to:

{\displaystyle \sum _{i=1}^{n}p_{i}x_{i}\leq M,}
{\displaystyle x_{i}\geq 0\;\;\;{\text{ for all }}i=1,2,\dots ,n.}
This model has been used in a wide variety of economic contexts, such as in general equilibrium theory to show existence and Pareto efficiency of economic equilibria.
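A minimal numerical sketch of this optimization problem, assuming a particular logarithmic (Cobb–Douglas-style) utility, three goods, and invented prices and budget; the utility choice is an assumption made here so that the numerical optimum can be checked against the closed-form allocation known for that specific case.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed utility U(x) = sum_i a_i * log(x_i) (a monotone transform of a
# Cobb-Douglas function, so it represents the same ordinal preferences),
# with illustrative preference weights a, prices p, and budget M.
a = np.array([0.5, 0.3, 0.2])
p = np.array([2.0, 1.0, 4.0])
M = 100.0

def neg_utility(x):
    return -np.sum(a * np.log(x))

budget = {"type": "ineq", "fun": lambda x: M - p @ x}   # enforce p.x <= M
bounds = [(1e-6, None)] * len(p)                        # enforce x_i >= 0

res = minimize(neg_utility, x0=np.full(3, 5.0), bounds=bounds, constraints=[budget])
print(res.x)      # numerical optimum
print(a * M / p)  # closed-form solution for this particular utility
```

For this utility the numerical answer should reproduce the closed-form allocation a_i·M/p_i, which also exhausts the budget exactly.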
Neighbour-sensing model is a model that explains the mushroom formation from the initially chaotic fungal network.
In computer science, mathematical models may be used to simulate computer networks.
In mechanics, mathematical models may be used to analyze the movement of a rocket model.
== See also ==
== References ==
== Further reading ==
=== Books ===
Aris, Rutherford [1978] (1994). Mathematical Modelling Techniques, New York: Dover. ISBN 0-486-68131-9
Bender, E.A. [1978] (2000). An Introduction to Mathematical Modeling, New York: Dover. ISBN 0-486-41180-X
Gary Chartrand (1977) Graphs as Mathematical Models, Prindle, Webber & Schmidt ISBN 0871502364
Dubois, G. (2018) "Modeling and Simulation", Taylor & Francis, CRC Press.
Gershenfeld, N. (1998) The Nature of Mathematical Modeling, Cambridge University Press ISBN 0-521-57095-6.
Lin, C.C. & Segel, L.A. (1988). Mathematics Applied to Deterministic Problems in the Natural Sciences, Philadelphia: SIAM. ISBN 0-89871-229-7
Models as Mediators: Perspectives on Natural and Social Science edited by Mary S. Morgan and Margaret Morrison, 1999.
Mary S. Morgan The World in the Model: How Economists Work and Think, 2012.
=== Specific applications ===
Papadimitriou, Fivos. (2010). Mathematical Modelling of Spatial-Ecological Complex Systems: an Evaluation. Geography, Environment, Sustainability 1(3), 67–80. doi:10.24057/2071-9388-2010-3-1-67-80
Peierls, R. (1980). "Model-making in physics". Contemporary Physics. 21: 3–17. Bibcode:1980ConPh..21....3P. doi:10.1080/00107518008210938.
An Introduction to Infectious Disease Modelling by Emilia Vynnycky and Richard G White.
== External links ==
General reference
Patrone, F. Introduction to modeling via differential equations, with critical remarks.
Plus teacher and student package: Mathematical Modelling. Brings together all articles on mathematical modeling from Plus Magazine, the online mathematics magazine produced by the Millennium Mathematics Project at the University of Cambridge.
Philosophical
Frigg, R. and S. Hartmann, Models in Science, in: The Stanford Encyclopedia of Philosophy, (Spring 2006 Edition)
Griffiths, E. C. (2010) What is a model? | Wikipedia/Mathematical_modeling |
In physics, Ginzburg–Landau theory, often called Landau–Ginzburg theory, named after Vitaly Ginzburg and Lev Landau, is a mathematical physical theory used to describe superconductivity. In its initial form, it was postulated as a phenomenological model which could describe type-I superconductors without examining their microscopic properties. A well-known example of a GL-type superconductor is YBCO, and more generally the cuprates.
Later, a version of Ginzburg–Landau theory was derived from the Bardeen–Cooper–Schrieffer microscopic theory by Lev Gor'kov, thus showing that it also appears in some limit of microscopic theory and giving microscopic interpretation of all its parameters. The theory can also be given a general geometric setting, placing it in the context of Riemannian geometry, where in many cases exact solutions can be given. This general setting then extends to quantum field theory and string theory, again owing to its solvability, and its close relation to other, similar systems.
== Introduction ==
Based on Landau's previously established theory of second-order phase transitions, Ginzburg and Landau argued that the free energy density
f_s of a superconductor near the superconducting transition can be expressed in terms of a complex order parameter field ψ(r) = |ψ(r)|e^{iφ(r)}, where the quantity |ψ(r)|² is a measure of the local density of superconducting electrons n_s(r) analogous to a quantum mechanical wave function. While ψ(r) is nonzero below a phase transition into a superconducting state, no direct interpretation of this parameter was given in the original paper. Assuming smallness of |ψ| and smallness of its gradients, the free energy density has the form of a field theory and exhibits U(1) gauge symmetry:

{\displaystyle f_{s}=f_{n}+\alpha (T)|\psi |^{2}+{\frac {1}{2}}\beta (T)|\psi |^{4}+{\frac {1}{2m^{*}}}\left|\left(-i\hbar \nabla -{\frac {e^{*}}{c}}\mathbf {A} \right)\psi \right|^{2}+{\frac {\mathbf {B} ^{2}}{8\pi }},}

where f_n is the free energy density of the normal phase, α(T) and β(T) are phenomenological parameters that are functions of T (and often written just α and β), m* is an effective mass, e* is an effective charge (usually 2e, where e is the charge of an electron), A is the magnetic vector potential, and B = ∇ × A is the magnetic field.

The total free energy is given by F = ∫ f_s d³r. By minimizing F with respect to variations in the order parameter ψ and the vector potential A, one arrives at the Ginzburg–Landau equations

{\displaystyle \alpha \psi +\beta |\psi |^{2}\psi +{\frac {1}{2m^{*}}}\left(-i\hbar \nabla -{\frac {e^{*}}{c}}\mathbf {A} \right)^{2}\psi =0}

{\displaystyle \nabla \times \mathbf {B} ={\frac {4\pi }{c}}\mathbf {J} \;\;;\;\;\mathbf {J} ={\frac {e^{*}}{m^{*}}}\operatorname {Re} \left\{\psi ^{*}\left(-i\hbar \nabla -{\frac {e^{*}}{c}}\mathbf {A} \right)\psi \right\},}

where J denotes the dissipation-free electric current density and Re the real part. The first equation, which bears some similarities to the time-independent Schrödinger equation but is principally different due to a nonlinear term, determines the order parameter ψ. The second equation then provides the superconducting current.
== Simple interpretation ==
Consider a homogeneous superconductor where there is no superconducting current and the equation for ψ simplifies to:
{\displaystyle \alpha \psi +\beta |\psi |^{2}\psi =0.}
This equation has a trivial solution: ψ = 0. This corresponds to the normal conducting state, that is for temperatures above the superconducting transition temperature, T > Tc.
Below the superconducting transition temperature, the above equation is expected to have a non-trivial solution (that is ψ ≠ 0). Under this assumption the equation above can be rearranged into:

{\displaystyle |\psi |^{2}=-{\frac {\alpha }{\beta }}.}

When the right hand side of this equation is positive, there is a nonzero solution for ψ (remember that the magnitude of a complex number can be positive or zero). This can be achieved by assuming the following temperature dependence of α: α(T) = α₀(T − Tc), with α₀/β > 0:

Above the superconducting transition temperature, T > Tc, the expression α(T)/β is positive and the right hand side of the equation above is negative. The magnitude of a complex number must be a non-negative number, so only ψ = 0 solves the Ginzburg–Landau equation.

Below the superconducting transition temperature, T < Tc, the right hand side of the equation above is positive and there is a non-trivial solution for ψ. Furthermore,

{\displaystyle |\psi |^{2}=-{\frac {\alpha _{0}(T-T_{c})}{\beta }},}

that is ψ approaches zero as T gets closer to Tc from below. Such a behavior is typical for a second order phase transition.
In Ginzburg–Landau theory the electrons that contribute to superconductivity were proposed to form a superfluid. In this interpretation, |ψ|2 indicates the fraction of electrons that have condensed into a superfluid.
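A minimal numerical sketch of this temperature dependence, using invented values of α₀, β and Tc (arbitrary units) to evaluate |ψ|² = −α₀(T − Tc)/β below Tc and zero above it.

```python
import numpy as np

# Illustrative Ginzburg-Landau parameters in arbitrary units (assumptions,
# not fitted to any real material).
alpha0, beta, Tc = 1.0, 2.0, 10.0

T = np.array([12.0, 10.0, 9.0, 5.0, 0.0])
psi_squared = np.where(T < Tc, -alpha0 * (T - Tc) / beta, 0.0)
print(psi_squared)  # zero at and above Tc, growing linearly in (Tc - T) below it
```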
== Coherence length and penetration depth ==
The Ginzburg–Landau equations predicted two new characteristic lengths in a superconductor. The first characteristic length was termed coherence length, ξ. For T > Tc (normal phase), it is given by
{\displaystyle \xi ={\sqrt {\frac {\hbar ^{2}}{2m^{*}|\alpha |}}}.}

while for T < Tc (superconducting phase), where it is more relevant, it is given by

{\displaystyle \xi ={\sqrt {\frac {\hbar ^{2}}{4m^{*}|\alpha |}}}.}
It sets the exponential law according to which small perturbations of density of superconducting electrons recover their equilibrium value ψ0. Thus this theory characterized all superconductors by two length scales. The second one is the penetration depth, λ. It was previously introduced by the London brothers in their London theory. Expressed in terms of the parameters of Ginzburg–Landau model it is
{\displaystyle \lambda ={\sqrt {\frac {m^{*}}{\mu _{0}e^{*2}\psi _{0}^{2}}}}={\sqrt {\frac {m^{*}\beta }{\mu _{0}e^{*2}|\alpha |}}},}
where ψ0 is the equilibrium value of the order parameter in the absence of an electromagnetic field. The penetration depth sets the exponential law according to which an external magnetic field decays inside the superconductor.
The original idea on the parameter κ belongs to Landau. The ratio κ = λ/ξ is presently known as the Ginzburg–Landau parameter. It has been proposed by Landau that Type I superconductors are those with 0 < κ < 1/√2, and Type II superconductors those with κ > 1/√2.
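A minimal sketch evaluating the two length scales and the Ginzburg–Landau parameter κ from the formulas above; the numerical values of α, β and the effective mass and charge are invented for illustration and do not correspond to any particular material.

```python
import numpy as np
from scipy.constants import hbar, mu_0, e, m_e

# Illustrative (assumed) Ginzburg-Landau parameters in SI units.
alpha = -1.0e-21          # J, negative below Tc
beta = 1.0e-45            # J m^3
m_star = 2 * m_e          # effective mass taken as two electron masses
e_star = 2 * e            # effective charge of a Cooper pair

xi = np.sqrt(hbar**2 / (4 * m_star * abs(alpha)))                 # coherence length (T < Tc)
lam = np.sqrt(m_star * beta / (mu_0 * e_star**2 * abs(alpha)))    # penetration depth
kappa = lam / xi

print(f"xi = {xi:.3e} m, lambda = {lam:.3e} m, kappa = {kappa:.2f}")
print("Type II" if kappa > 1 / np.sqrt(2) else "Type I")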
== Fluctuations ==
The phase transition from the normal state is of second order for Type II superconductors, taking into account fluctuations, as demonstrated by Dasgupta and Halperin, while for Type I superconductors it is of first order, as demonstrated by Halperin, Lubensky and Ma.
== Classification of superconductors ==
In the original paper Ginzburg and Landau observed the existence of two types of superconductors depending on the energy of the interface between the normal and superconducting states. The Meissner state breaks down when the applied magnetic field is too large. Superconductors can be divided into two classes according to how this breakdown occurs. In Type I superconductors, superconductivity is abruptly destroyed when the strength of the applied field rises above a critical value Hc. Depending on the geometry of the sample, one may obtain an intermediate state consisting of a baroque pattern of regions of normal material carrying a magnetic field mixed with regions of superconducting material containing no field. In Type II superconductors, raising the applied field past a critical value Hc1 leads to a mixed state (also known as the vortex state) in which an increasing amount of magnetic flux penetrates the material, but there remains no resistance to the flow of electric current as long as the current is not too large. At a second critical field strength Hc2, superconductivity is destroyed. The mixed state is actually caused by vortices in the electronic superfluid, sometimes called fluxons because the flux carried by these vortices is quantized. Most pure elemental superconductors, except niobium and carbon nanotubes, are Type I, while almost all impure and compound superconductors are Type II.
The most important finding from Ginzburg–Landau theory was made by Alexei Abrikosov in 1957. He used Ginzburg–Landau theory to explain experiments on superconducting alloys and thin films. He found that in a type-II superconductor in a high magnetic field, the field penetrates in a triangular lattice of quantized tubes of flux vortices.
== Geometric formulation ==
The Ginzburg–Landau functional can be formulated in the general setting of a complex vector bundle over a compact Riemannian manifold. This is the same functional as given above, transposed to the notation commonly used in Riemannian geometry. In multiple interesting cases, it can be shown to exhibit the same phenomena as the above, including Abrikosov vortices (see discussion below).
For a complex vector bundle E over a Riemannian manifold M with fiber C^n, the order parameter ψ is understood as a section of the vector bundle E. The Ginzburg–Landau functional is then a Lagrangian for that section:

{\displaystyle {\mathcal {L}}(\psi ,A)=\int _{M}{\sqrt {|g|}}dx^{1}\wedge \dotsm \wedge dx^{m}\left[\vert F\vert ^{2}+\vert D\psi \vert ^{2}+{\frac {1}{4}}\left(\sigma -\vert \psi \vert ^{2}\right)^{2}\right]}

The notation used here is as follows. The fibers C^n are assumed to be equipped with a Hermitian inner product ⟨·,·⟩ so that the square of the norm is written as |ψ|² = ⟨ψ, ψ⟩. The phenomenological parameters α and β have been absorbed so that the potential energy term is a quartic mexican hat potential; i.e., exhibiting spontaneous symmetry breaking, with a minimum at some real value σ ∈ R. The integral is explicitly over the volume form ∗(1) = √|g| dx¹ ∧ ⋯ ∧ dx^m for an m-dimensional manifold M with determinant |g| of the metric tensor g.

The D = d + A is the connection one-form and F is the corresponding curvature 2-form (this is not the same as the free energy F given up top; here, F corresponds to the electromagnetic field strength tensor). The A corresponds to the vector potential, but is in general non-Abelian when n > 1, and is normalized differently. In physics, one conventionally writes the connection as d − ieA for the electric charge e and vector potential A; in Riemannian geometry, it is more convenient to drop the e (and all other physical units) and take A = A_μ dx^μ to be a one-form taking values in the Lie algebra corresponding to the symmetry group of the fiber. Here, the symmetry group is SU(n), as that leaves the inner product ⟨·,·⟩ invariant; so here, A is a form taking values in the algebra su(n).
The curvature F generalizes the electromagnetic field strength to the non-Abelian setting, as the curvature form of an affine connection on a vector bundle. It is conventionally written as

{\displaystyle {\begin{aligned}F=D\circ D=dA+A\wedge A=\left({\frac {\partial A_{\nu }}{\partial x^{\mu }}}+A_{\mu }A_{\nu }\right)dx^{\mu }\wedge dx^{\nu }={\frac {1}{2}}\left({\frac {\partial A_{\nu }}{\partial x^{\mu }}}-{\frac {\partial A_{\mu }}{\partial x^{\nu }}}+[A_{\mu },A_{\nu }]\right)dx^{\mu }\wedge dx^{\nu }\\\end{aligned}}}

That is, each A_μ is an n × n skew-symmetric matrix. (See the article on the metric connection for additional articulation of this specific notation.) To emphasize this, note that the first term of the Ginzburg–Landau functional, involving the field-strength only, is

{\displaystyle {\mathcal {L}}(A)=YM(A)=\int _{M}*(1)\vert F\vert ^{2}}

which is just the Yang–Mills action on a compact Riemannian manifold.

The Euler–Lagrange equations for the Ginzburg–Landau functional are the Yang–Mills equations

{\displaystyle D^{*}D\psi ={\frac {1}{2}}\left(\sigma -\vert \psi \vert ^{2}\right)\psi }

and

{\displaystyle D^{*}F=-\operatorname {Re} \langle D\psi ,\psi \rangle }

where D* is the adjoint of D, analogous to the codifferential δ = d*. Note that these are closely related to the Yang–Mills–Higgs equations.
=== Specific results ===
In string theory, it is conventional to study the Ginzburg–Landau functional for the manifold M being a Riemann surface, and taking n = 1; i.e., a line bundle. The phenomenon of Abrikosov vortices persists in these general cases, including M = R², where one can specify any finite set of points where ψ vanishes, including multiplicity. The proof generalizes to arbitrary Riemann surfaces and to Kähler manifolds. In the limit of weak coupling, it can be shown that |ψ| converges uniformly to 1, while Dψ and dA converge uniformly to zero, and the curvature becomes a sum over delta-function distributions at the vortices. The sum over vortices, with multiplicity, just equals the degree of the line bundle; as a result, one may write a line bundle on a Riemann surface as a flat bundle, with N singular points and a covariantly constant section.
When the manifold is four-dimensional, possessing a spinc structure, then one may write a very similar functional, the Seiberg–Witten functional, which may be analyzed in a similar fashion, and which possesses many similar properties, including self-duality. When such systems are integrable, they are studied as Hitchin systems.
== Self-duality ==
When the manifold
M
{\displaystyle M}
is a Riemann surface
M
=
Σ
{\displaystyle M=\Sigma }
, the functional can be re-written so as to explicitly show self-duality. One achieves this by writing the exterior derivative as a sum of Dolbeault operators
d
=
∂
+
∂
¯
{\displaystyle d=\partial +{\overline {\partial }}}
. Likewise, the space
Ω
1
{\displaystyle \Omega ^{1}}
of one-forms over a Riemann surface decomposes into a space that is holomorphic, and one that is anti-holomorphic:
{\displaystyle \Omega ^{1}=\Omega ^{1,0}\oplus \Omega ^{0,1}}, so that forms in {\displaystyle \Omega ^{1,0}} are holomorphic in {\displaystyle z} and have no dependence on {\displaystyle {\overline {z}}}; and vice-versa for {\displaystyle \Omega ^{0,1}}. This allows the vector potential to be written as
{\displaystyle A=A^{1,0}+A^{0,1}} and likewise {\displaystyle D=\partial _{A}+{\overline {\partial }}_{A}} with {\displaystyle \partial _{A}=\partial +A^{1,0}} and {\displaystyle {\overline {\partial }}_{A}={\overline {\partial }}+A^{0,1}}.
For the case of {\displaystyle n=1}, where the fiber is {\displaystyle \mathbb {C} } so that the bundle is a line bundle, the field strength can similarly be written as
{\displaystyle F=-\left(\partial _{A}{\overline {\partial }}_{A}+{\overline {\partial }}_{A}\partial _{A}\right)}
Note that in the sign convention being used here, both {\displaystyle A^{1,0},A^{0,1}} and {\displaystyle F} are purely imaginary (viz. U(1) is generated by {\displaystyle e^{i\theta }}, so derivatives are purely imaginary). The functional then becomes
{\displaystyle {\mathcal {L}}\left(\psi ,A\right)=2\pi \sigma \operatorname {deg} L+\int _{\Sigma }{\frac {i}{2}}dz\wedge d{\overline {z}}\left[2\vert {\overline {\partial }}_{A}\psi \vert ^{2}+\left(*(-iF)-{\frac {1}{2}}(\sigma -\vert \psi \vert ^{2})\right)^{2}\right]}
The integral is understood to be over the volume form {\displaystyle *(1)={\frac {i}{2}}dz\wedge d{\overline {z}}},
so that {\displaystyle \operatorname {Area} \Sigma =\int _{\Sigma }*(1)} is the total area of the surface {\displaystyle \Sigma }. The {\displaystyle *} is the Hodge star, as before. The degree {\displaystyle \operatorname {deg} L} of the line bundle {\displaystyle L} over the surface {\displaystyle \Sigma } is
{\displaystyle \operatorname {deg} L=c_{1}(L)={\frac {1}{2\pi }}\int _{\Sigma }iF}
where {\displaystyle c_{1}(L)=c_{1}(L)[\Sigma ]\in H^{2}(\Sigma )} is the first Chern class.
The Lagrangian is minimized (stationary) when {\displaystyle \psi ,A} solve the Ginzburg–Landau equations
{\displaystyle {\begin{aligned}{\overline {\partial }}_{A}\psi &=0\\*(iF)&={\frac {1}{2}}\left(\sigma -\vert \psi \vert ^{2}\right)\\\end{aligned}}}
Note that these are both first-order differential equations, manifestly self-dual. Integrating the second of these, one quickly finds that a non-trivial solution must obey
{\displaystyle 4\pi \operatorname {deg} L\leq \sigma \operatorname {Area} \Sigma }.
Roughly speaking, this can be interpreted as an upper limit on the density of the Abrikosov vortices. One can also show that the solutions are bounded; one must have {\displaystyle |\psi |\leq \sigma }.
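As a quick numerical illustration (not from the source, and using arbitrary example values of the coupling and area), the bound above caps the degree of the line bundle, i.e. the number of vortices counted with multiplicity, that a surface of a given area can support:

    import math

    # Minimal sketch: the self-duality bound 4*pi*deg(L) <= sigma * Area(Sigma)
    # caps the admissible vortex number; sigma and area below are made-up examples.
    def max_degree(sigma, area):
        return math.floor(sigma * area / (4.0 * math.pi))

    for sigma, area in [(1.0, 50.0), (2.0, 50.0), (1.0, 500.0)]:
        print(sigma, area, max_degree(sigma, area))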
== In string theory ==
In particle physics, any quantum field theory with a unique classical vacuum state and a potential energy with a degenerate critical point is called a Landau–Ginzburg theory. The generalization to N = (2,2) supersymmetric theories in 2 spacetime dimensions was proposed by Cumrun Vafa and Nicholas Warner in November 1988; in this generalization one imposes that the superpotential possess a degenerate critical point. The same month, together with Brian Greene they argued that these theories are related by a renormalization group flow to sigma models on Calabi–Yau manifolds. In his 1993 paper "Phases of N = 2 theories in two-dimensions", Edward Witten argued that Landau–Ginzburg theories and sigma models on Calabi–Yau manifolds are different phases of the same theory. A construction of such a duality was given by relating the Gromov–Witten theory of Calabi–Yau orbifolds to FJRW theory, an analogous Landau–Ginzburg "FJRW" theory. Witten's sigma models were later used to describe the low energy dynamics of 4-dimensional gauge theories with monopoles as well as brane constructions.
== See also ==
== References ==
=== Papers ===
V.L. Ginzburg and L.D. Landau, Zh. Eksp. Teor. Fiz. 20, 1064 (1950). English translation in: L. D. Landau, Collected papers (Oxford: Pergamon Press, 1965) p. 546
A.A. Abrikosov, Zh. Eksp. Teor. Fiz. 32, 1442 (1957) (English translation: Sov. Phys. JETP 5, 1174 (1957).) Abrikosov's original paper on vortex structure of Type-II superconductors derived as a solution of G–L equations for κ > 1/√2
L.P. Gor'kov, Sov. Phys. JETP 36, 1364 (1959)
A.A. Abrikosov's 2003 Nobel lecture: pdf file or video
V.L. Ginzburg's 2003 Nobel Lecture: pdf file or video
In density functional theory (DFT), the Harris energy functional is a non-self-consistent approximation to the Kohn–Sham density functional theory. It gives the energy of a combined system as a function of the electronic densities of the isolated parts. The energy of the Harris functional varies much less than the energy of the Kohn–Sham functional as the density moves away from the converged density.
== Background ==
Kohn–Sham equations are the one-electron equations that must be solved in a self-consistent fashion in order to find the ground state density of a system of interacting electrons:
{\displaystyle \left({\frac {-\hbar ^{2}}{2m}}\nabla ^{2}+v_{\rm {H}}[n]+v_{\rm {xc}}[n]+v_{\rm {ext}}(r)\right)\phi _{j}(r)=\epsilon _{j}\phi _{j}(r).}
The density, {\displaystyle n,} is given by that of the Slater determinant formed by the spin-orbitals of the occupied states:
{\displaystyle n(r)=\sum _{j}f_{j}\vert \phi _{j}(r)\vert ^{2},}
where the coefficients {\displaystyle f_{j}} are the occupation numbers given by the Fermi–Dirac distribution at the temperature of the system with the restriction {\textstyle \sum _{j}f_{j}=N}, where {\displaystyle N} is the total number of electrons. In the equation above, {\displaystyle v_{\rm {H}}[n]} is the Hartree potential and {\displaystyle v_{\rm {xc}}[n]} is the exchange–correlation potential, which are expressed in terms of the electronic density. Formally, one must solve these equations self-consistently, for which the usual strategy is to pick an initial guess for the density, {\displaystyle n_{0}(r)}, substitute in the Kohn–Sham equation, extract a new density {\displaystyle n_{1}(r)} and iterate the process until convergence is obtained. When the final self-consistent density {\displaystyle n(r)} is reached, the energy of the system is expressed as:
{\displaystyle E[n]=\sum _{j\in {\text{occupied}}}\epsilon _{j}-{\tfrac {1}{2}}\int v_{\rm {H}}[n]n(r)\,\mathrm {d} r-\int v_{\rm {xc}}[n]n(r)\,\mathrm {d} r+E_{\rm {xc}}[n]}.
== Definition ==
Assume that we have an approximate electron density {\displaystyle n_{0}(r)}, which is different from the exact electron density {\displaystyle n(r)}. We construct the exchange–correlation potential {\displaystyle v_{\rm {xc}}(r)} and the Hartree potential {\displaystyle v_{\rm {H}}(r)} based on the approximate electron density {\displaystyle n_{0}(r)}. Kohn–Sham equations are then solved with the XC and Hartree potentials and eigenvalues are then obtained; that is, we perform one single iteration of the self-consistency calculation. The sum of eigenvalues is often called the band structure energy:
{\displaystyle E_{\rm {band}}=\sum _{i}\epsilon _{i},}
where {\displaystyle i} loops over all occupied Kohn–Sham orbitals. The Harris energy functional is defined as
{\displaystyle E_{\rm {Harris}}[n_{0}]=\sum _{i}\epsilon _{i}-\int \mathrm {d} r^{3}v_{\rm {xc}}[n_{0}](r)n_{0}(r)-{\tfrac {1}{2}}\int \mathrm {d} r^{3}v_{\rm {H}}[n_{0}](r)n_{0}(r)+E_{\rm {xc}}[n_{0}]}
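A minimal sketch of how this definition is assembled in practice is given below. It is schematic: the helper functions hartree_potential, xc_potential, xc_energy and build_hamiltonian are hypothetical placeholders for whatever grid, basis and functional a real code provides; only the bookkeeping (one non-self-consistent diagonalization plus the double-counting corrections of the formula above) is meant to be illustrated.

    import numpy as np

    def harris_energy(n0, grid_weights, n_occupied,
                      hartree_potential, xc_potential, xc_energy, build_hamiltonian):
        # Potentials built from the *guessed* density n0, never updated
        v_h = hartree_potential(n0)           # v_H[n0](r) on the grid
        v_xc = xc_potential(n0)               # v_xc[n0](r) on the grid
        H = build_hamiltonian(v_h + v_xc)     # kinetic + v_ext + v_H[n0] + v_xc[n0]
        eps = np.linalg.eigvalsh(H)           # single (non-self-consistent) diagonalization
        e_band = np.sum(eps[:n_occupied])     # band-structure energy
        # Double-counting corrections and the xc energy of the input density
        e_dc = (np.sum(v_xc * n0 * grid_weights)
                + 0.5 * np.sum(v_h * n0 * grid_weights))
        return e_band - e_dc + xc_energy(n0)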
== Comments ==
It was discovered by Harris that the difference between the Harris energy {\displaystyle E_{\rm {Harris}}} and the exact total energy is second order in the error of the approximate electron density, i.e., {\displaystyle O((\rho -\rho _{0})^{2})}. Therefore, for many systems the accuracy of the Harris energy functional may be sufficient. The Harris functional was originally developed for such calculations rather than self-consistent convergence, although it can be applied in a self-consistent manner in which the density is changed. Many density-functional tight-binding methods, such as CP2K, DFTB+, Fireball, and Hotbit, are built based on the Harris energy functional. In these methods, one often does not perform self-consistent Kohn–Sham DFT calculations and the total energy is estimated using the Harris energy functional, although a version of the Harris functional where one does perform self-consistency calculations has been used. These codes are often much faster than conventional Kohn–Sham DFT codes that solve Kohn–Sham DFT in a self-consistent manner.
While the Kohn–Sham DFT energy is a variational functional (never lower than the ground state energy), the Harris DFT energy was originally believed to be anti-variational (never higher than the ground state energy). This was, however, conclusively demonstrated to be incorrect.
== References ==
Modern valence bond theory is the application of valence bond theory (VBT) with computer programs that are competitive in accuracy and economy with programs for the Hartree–Fock and post-Hartree–Fock methods. The latter methods dominated quantum chemistry from the advent of digital computers because they were easier to program. The early popularity of valence bond methods thus declined. It is only recently that the programming of valence bond methods has improved. These developments are due to and described by Gerratt, Cooper, Karadakov and Raimondi (1997); Li and McWeeny (2002); Joop H. van Lenthe and co-workers (2002); Song, Mo, Zhang and Wu (2005); and Shaik and Hiberty (2004).
While molecular orbital theory (MOT) describes the electronic wavefunction as a linear combination of basis functions that are centered on the various atoms in a species (linear combination of atomic orbitals), VBT describes the electronic wavefunction as a linear combination of several valence bond structures. Each of these valence bond structures can be described using linear combinations of either atomic orbitals, delocalized atomic orbitals (Coulson-Fischer theory), or even molecular orbital fragments. Although this is often overlooked, MOT and VBT are equally valid ways of describing the electronic wavefunction, and are actually related by a unitary transformation. Assuming MOT and VBT are applied at the same level of theory, this relationship ensures that they will describe the same wavefunction, but will do so in different forms.
== Theory ==
=== Bonding in H2 ===
Heitler and London's original work on VBT attempts to approximate the electronic wavefunction as a covalent combination of localized basis functions on the bonding atoms. In VBT, wavefunctions are described as the sums and differences of VB determinants, which enforce the antisymmetric properties required by the Pauli exclusion principle. Taking H2 as an example, the VB determinant is
{\displaystyle \left\vert a{\overline {b}}\right\vert =N\{a(1)b(2)[\alpha (1)\beta (2)]-a(2)b(1)[\alpha (2)\beta (1)]\}}
In this expression, N is a normalization constant, and a and b are basis functions that are localized on the two hydrogen atoms, often considered simply to be 1s atomic orbitals. The numbers are an index to describe the electron (i.e. a(1) represents the concept of 'electron 1' residing in orbital a). α and β describe the spin of the electron. The bar over b in {\displaystyle \left\vert a{\overline {b}}\right\vert } indicates that the electron associated with orbital b has β spin (in the first term, electron 2 is in orbital b, and thus electron 2 has β spin). By itself, a single VB determinant is not a proper spin-eigenfunction, and thus cannot describe the true wavefunction. However, by taking the sum and difference (linear combinations) of VB determinants, two approximate wavefunctions can be obtained:
{\displaystyle \Phi _{HL}=\left\vert a{\overline {b}}\right\vert -\left\vert {\overline {a}}b\right\vert }
{\displaystyle \Phi _{T}=\left\vert a{\overline {b}}\right\vert +\left\vert {\overline {a}}b\right\vert }
ΦHL is the wavefunction as described originally by Heitler and London, and describes the covalent bonding between orbitals a and b in which the spins are paired, as expected for a chemical bond. ΦT is a representation of the bond where the electron spins are parallel, resulting in a triplet state. This is a highly repulsive interaction, so this description of the bonding will not play a major role in determining the wave function.
Other ways of describing the wavefunction can also be constructed. Specifically, instead of considering a covalent interaction, the ionic interactions can be considered, resulting in the wavefunction
{\displaystyle \Phi _{I}=\left\vert a{\overline {a}}\right\vert +\left\vert {\overline {b}}b\right\vert }
This wavefunction describes the bonding in H2 as the ionic interaction between an H+ and an H−.
Since neither of these wavefunctions, ΦHL (covalent bonding) nor ΦI (ionic bonding), perfectly approximates the wavefunction, a combination of the two can be used to describe the total wavefunction
{\displaystyle \Phi _{VBT}=\lambda \Phi _{HL}+\mu \Phi _{I}}
where λ and μ are coefficients that can vary from 0 to 1. In determining the lowest energy wavefunction, these coefficients can be varied until a minimum energy is reached. λ will be larger in bonds that have more covalency, while μ will be larger in bonds that are more ionic. In the specific case of H2, λ ≈ 0.75, and μ ≈ 0.25.
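Determining λ and μ variationally amounts to a small generalized eigenvalue problem in the non-orthogonal basis of the two structures. The sketch below illustrates this (it is not a real calculation: the Hamiltonian and overlap matrix elements are invented illustrative numbers, not computed integrals):

    import numpy as np
    from scipy.linalg import eigh

    # Variational mixing Phi = lambda*Phi_HL + mu*Phi_I as H c = E S c.
    # Matrix elements below are made-up placeholders for <HL|H|HL>, <HL|H|I>, etc.
    H = np.array([[-1.85, -0.95],
                  [-0.95, -1.20]])
    S = np.array([[1.00, 0.55],
                  [0.55, 1.00]])

    energies, coeffs = eigh(H, S)      # generalized eigenproblem (non-orthogonal structures)
    c = coeffs[:, 0]                   # lowest-energy combination
    lam, mu = c / np.sum(np.abs(c))    # rescaled only to make the weights easy to compare
    print("E0 =", energies[0], " lambda, mu =", lam, mu)

With matrix elements actually computed for H2, this procedure is what yields weights of roughly the 0.75/0.25 ratio quoted above.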
The orbitals that were used as the basis (a and b) do not necessarily have to be localized on the atoms involved in bonding. Orbitals that are partially delocalized onto the other atom involved in bonding can also be used, as in the Coulson-Fischer theory. Even the molecular orbitals associated with a portion of a molecule can be used as a basis set, a process referred to as using fragment orbitals.
For more complicated molecules, ΦVBT could consider several possible structures that all contribute in various degrees (there would be several coefficients, not just λ and μ). An example of this is the Kekule and Dewar structures used in describing benzene.
Note that all normalization constants were ignored in the discussion above for simplicity.
== Relationship to molecular orbital theory ==
=== History ===
The application of VBT and MOT to computations that attempt to approximate the Schrödinger equation began near the middle of the 20th century, but MOT quickly became the preferred approach between the two. The relative computational ease of doing calculations with non-overlapping orbitals in MOT is said to have contributed to its popularity. In addition, the successful explanation of π-systems, pericyclic reactions, and extended solids further cemented MOT as the preeminent approach. Despite this, the two theories are just two different ways of representing the same wavefunction. As shown below, at the same level of theory, the two methods lead to the same results.
=== H2 - molecular orbital vs valence bond theory ===
The relationship between MOT and VBT can be made more clear by directly comparing the results of the two theories for the hydrogen molecule, H2. Using MOT, the same basis orbitals (a and b) can be used to describe the bonding. Combining them in a constructive and destructive manner gives two spin-orbitals
{\displaystyle \sigma =a+b}
{\displaystyle \sigma ^{*}=a-b}
The ground state wavefunction of H2 would be that where the σ orbital is doubly occupied, which is expressed as the following Slater determinant (as required by MOT)
{\displaystyle \Phi _{MOT}=\left\vert \sigma {\overline {\sigma }}\right\vert }
This expression for the wavefunction can be shown to be equivalent to the following wavefunction
{\displaystyle \Phi _{MOT}=(\left\vert a{\overline {b}}\right\vert -\left\vert {\overline {a}}b\right\vert )+(\left\vert a{\overline {a}}\right\vert +\left\vert b{\overline {b}}\right\vert )}
which is now expressed in terms of VB determinants. This transformation does not alter the wavefunction in any way, only the way that the wavefunction is represented. This process of going from an MO description to a VB description can be referred to as ‘mapping MO wavefunctions onto VB wavefunctions’, and is fundamentally the same process as that used to generate localized molecular orbitals.
Rewriting the VB wavefunction derived above, we can clearly see the relationship between MOT and VBT
{\displaystyle \Phi _{VBT}=\lambda (\left\vert a{\overline {b}}\right\vert -\left\vert {\overline {a}}b\right\vert )+\mu (\left\vert a{\overline {a}}\right\vert +\left\vert b{\overline {b}}\right\vert )}
{\displaystyle \Phi _{MOT}=(\left\vert a{\overline {b}}\right\vert -\left\vert {\overline {a}}b\right\vert )+(\left\vert a{\overline {a}}\right\vert +\left\vert b{\overline {b}}\right\vert )}
Thus, at its simplest level, MOT is just VBT, where the covalent and ionic contributions (the first and second terms, respectively) are equal. This is the basis of the claim that MOT does not correctly predict the dissociation of molecules. When MOT includes configuration interaction (MO-CI), this allows the relative contributions of the covalent and ionic contributions to be altered. This leads to the same description of bonding for both VBT and MO-CI. In conclusion, the two theories, when brought to a high enough level of theory, will converge. Their distinction is in the way they are built up to that description.
Note that in all of the aforementioned discussions, as with the derivation of H2 for VBT, normalization constants were ignored for simplicity.
=== Perceived failures of valence bond theory ===
When describing the relationship between MOT and VBT, a few examples are commonly perceived as failures of VBT. However, these often arise from an incomplete or inaccurate use of VBT.
==== Triplet ground state of oxygen ====
It is known that O2 has a triplet ground state, but a classic Lewis structure depiction of oxygen would not indicate that any unpaired electrons exist. Perhaps because Lewis structures and VBT often depict the same structure as the most stable state, this misinterpretation has persisted. However, as has been consistently demonstrated with VBT calculations, the lowest energy state is that with two three-electron π-bonds, which is the triplet state.
==== Ionization energy of methane ====
The photoelectron spectrum (PES) of methane is commonly used as an argument as to why MO theory is superior to VBT. From an MO calculation (or even just a qualitative MOT diagram), it can be seen that the HOMO is a triply degenerate state, while the HOMO-1 is a singly degenerate state. By invoking Koopmans' theorem, one can predict that there would be two distinct peaks in the ionization spectrum of methane, corresponding to removing an electron from the t2 orbitals or the a1 orbital, with a 3:1 ratio in intensity. This is corroborated by experiment. However, when one examines the VB description of CH4, it is clear that there are 4 equivalent bonds between C and H. If one were to invoke Koopmans' theorem (which is implicitly done when claiming that VBT is inadequate to describe PES), a single ionization energy peak would be predicted. However, Koopmans' theorem cannot be applied to orbitals that are not the canonical molecular orbitals, and thus a different approach is required to understand the ionization potentials of methane from VBT. To do this, the ionized product, CH4+, must be analyzed. The VB wavefunction of CH4+ would be an equal combination of 4 structures, each having 3 two-electron bonds and 1 one-electron bond. Based on group theory arguments, these states must give rise to a triply degenerate T2 state and a singly degenerate A1 state. A diagram showing the relative energies of the states is shown below, and it can be seen that there exist two distinct transitions from the CH4 state with 4 equivalent bonds to the two CH4+ states.
== Valence bond theory methods ==
Listed below are a few notable VBT methods that are applied in modern computational software packages.
=== Generalized VBT (GVB) ===
This was one of the first ab initio computational methods developed that utilized VBT. Using Coulson-Fischer type basis orbitals, this method uses singly-occupied, instead of doubly-occupied orbitals, as the basis set. This allows the distance between paired electrons to increase during variational optimization, lowering the resultant energy. The total wavefunction is described by a single set of orbitals, rather than a linear combination of multiple VB structures. GVB is considered to be a user-friendly method for new practitioners.
=== Spin-coupled generalized valence bond theory ===
SCGVB (or sometimes SCVB/full GVB) is an extension of GVB that still uses delocalized orbitals, whose delocalization can adjust with molecular structure. In addition, the electronic wavefunction is still a single product of orbitals. The difference is that the spin functions are allowed to adjust simultaneously with the orbitals during energy minimization procedures. This is considered to be one of the best VB descriptions of the wavefunction that relies on only a single configuration.
=== Complete active space valence bond method (CASVB) ===
This is a method that often gets confused as a traditional VB method. Instead, this is a localization procedure that maps the complete active space self-consistent field (CASSCF) wavefunction onto valence bond structures.
== Spin-coupled theory ==
There are a large number of different valence bond methods. Most use n valence bond orbitals for n electrons. If a single set of these orbitals is combined with all linear independent combinations of the spin functions, we have spin-coupled valence bond theory. The total wave function is optimized using the variational method by varying the coefficients of the basis functions in the valence bond orbitals and the coefficients of the different spin functions. In other cases only a sub-set of all possible spin functions is used. Many valence bond methods use several sets of the valence bond orbitals. It is important to note here that different authors use different names for these different valence bond methods.
== Valence bond programs ==
Several research groups have produced computer programs for modern valence bond calculations that are freely available.
== See also ==
List of quantum chemistry and solid-state physics software
== References ==
== Further reading ==
J. Gerratt, D. L. Cooper, P. B. Karadakov and M. Raimondi, "Modern Valence Bond Theory", Chemical Society Reviews, 26, 87, 1997, and several others by the same authors.
J. H. van Lenthe, G. G. Balint-Kurti, "The Valence Bond Self-Consistent Field (VBSCF) method", Chemical Physics Letters 76, 138–142, 1980.
J. H. van Lenthe, G. G. Balint-Kurti, "The Valence Bond Self-Consistent Field (VBSCF) method", The Journal of Chemical Physics 78, 5699–5713, 1983.
J. Li and R. McWeeny, "VB2000: Pushing Valence Bond Theory to new limits", International Journal of Quantum Chemistry, 89, 208, 2002.
L. Song, Y. Mo, Q. Zhang and W. Wu, "XMVB: A program for ab initio nonorthogonal valence bond computations", Journal of Computational Chemistry, 26, 514, 2005.
S. Shaik and P. C. Hiberty, "Valence Bond theory, its History, Fundamentals and Applications. A Primer", Reviews of Computational Chemistry, 20, 1, 2004. A recent review that covers, not only their own contributions, but the whole of modern valence bond theory.
Time-dependent density-functional theory (TDDFT) is a quantum mechanical theory used in physics and chemistry to investigate the properties and dynamics of many-body systems in the presence of time-dependent potentials, such as electric or magnetic fields. The effect of such fields on molecules and solids can be studied with TDDFT to extract features like excitation energies, frequency-dependent response properties, and photoabsorption spectra.
TDDFT is an extension of density-functional theory (DFT), and the conceptual and computational foundations are analogous – to show that the (time-dependent) wave function is equivalent to the (time-dependent) electronic density, and then to derive the effective potential of a fictitious non-interacting system which returns the same density as any given interacting system. The issue of constructing such a system is more complex for TDDFT, most notably because the time-dependent effective potential at any given instant depends on the value of the density at all previous times. Consequently, the development of time-dependent approximations for the implementation of TDDFT is behind that of DFT, with applications routinely ignoring this memory requirement.
== Overview ==
The formal foundation of TDDFT is the Runge–Gross (RG) theorem (1984) – the time-dependent analogue of the Hohenberg–Kohn (HK) theorem (1964). The RG theorem shows that, for a given initial wavefunction, there is a unique mapping between the time-dependent external potential of a system and its time-dependent density. This implies that the many-body wavefunction, depending upon 3N variables, is equivalent to the density, which depends upon only 3, and that all properties of a system can thus be determined from knowledge of the density alone. Unlike in DFT, there is no general minimization principle in time-dependent quantum mechanics. Consequently, the proof of the RG theorem is more involved than the HK theorem.
Given the RG theorem, the next step in developing a computationally useful method is to determine the fictitious non-interacting system which has the same density as the physical (interacting) system of interest. As in DFT, this is called the (time-dependent) Kohn–Sham system. This system is formally found as the stationary point of an action functional defined in the Keldysh formalism.
The most popular application of TDDFT is in the calculation of the energies of excited states of isolated systems and, less commonly, solids. Such calculations are based on the fact that the linear response function – that is, how the electron density changes when the external potential changes – has poles at the exact excitation energies of a system. Such calculations require, in addition to the exchange-correlation potential, the exchange-correlation kernel – the functional derivative of the exchange-correlation potential with respect to the density.
== Formalism ==
=== Runge–Gross theorem ===
The approach of Runge and Gross considers a single-component system in the presence of a time-dependent scalar field for which the Hamiltonian takes the form
{\displaystyle {\hat {H}}(t)={\hat {T}}+{\hat {V}}_{\mathrm {ext} }(t)+{\hat {W}},}
where T is the kinetic energy operator, W the electron-electron interaction, and Vext(t) the external potential which along with the number of electrons defines the system. Nominally, the external potential contains the electrons' interaction with the nuclei of the system. For non-trivial time-dependence, an additional explicitly time-dependent potential is present which can arise, for example, from a time-dependent electric or magnetic field. The many-body wavefunction evolves according to the time-dependent Schrödinger equation under a single initial condition,
{\displaystyle {\hat {H}}(t)|\Psi (t)\rangle =i\hbar {\frac {\partial }{\partial t}}|\Psi (t)\rangle ,\ \ \ |\Psi (0)\rangle =|\Psi \rangle .}
Employing the Schrödinger equation as its starting point, the Runge–Gross theorem shows that at any time, the density uniquely determines the external potential. This is done in two steps:
Assuming that the external potential can be expanded in a Taylor series about a given time, it is shown that two external potentials differing by more than an additive constant generate different current densities.
Employing the continuity equation, it is then shown that for finite systems, different current densities correspond to different electron densities.
=== Time-dependent Kohn–Sham system ===
For a given interaction potential, the RG theorem shows that the external potential uniquely determines the density. The Kohn–Sham approach chooses a non-interacting system (one for which the interaction potential is zero) in which to form a density equal to that of the interacting system. The advantage of doing so lies in the ease with which non-interacting systems can be solved – the wave function of a non-interacting system can be represented as a Slater determinant of single-particle orbitals, each of which is determined by a single partial differential equation in three variables – and in the fact that the kinetic energy of a non-interacting system can be expressed exactly in terms of those orbitals. The problem is thus to determine a potential, denoted as vs(r,t) or vKS(r,t), that determines a non-interacting Hamiltonian, Hs,
{\displaystyle {\hat {H}}_{s}(t)={\hat {T}}+{\hat {V}}_{s}(t),}
which in turn determines a determinantal wave function
{\displaystyle {\hat {H}}_{s}(t)|\Phi (t)\rangle =i{\frac {\partial }{\partial t}}|\Phi (t)\rangle ,\ \ \ |\Phi (0)\rangle =|\Phi \rangle ,}
which is constructed in terms of a set of N orbitals which obey the equation,
{\displaystyle \left(-{\frac {1}{2}}\nabla ^{2}+v_{s}(\mathbf {r} ,t)\right)\phi _{j}(\mathbf {r} ,t)=i{\frac {\partial }{\partial t}}\phi _{j}(\mathbf {r} ,t)\ \ \ \phi _{j}(\mathbf {r} ,0)=\phi _{j}(\mathbf {r} ),}
and generate a time-dependent density
{\displaystyle \rho _{s}(\mathbf {r} ,t)=\sum _{j=1}^{N_{\textrm {b}}}f_{j}(t)|\phi _{j}(\mathbf {r} ,t)|^{2},}
such that ρs is equal to the density of the interacting system at all times:
{\displaystyle \rho _{s}(\mathbf {r} ,t)=\rho (\mathbf {r} ,t).}
Note that in the expression of density above, the summation is over all {\displaystyle N_{\textrm {b}}} Kohn–Sham orbitals and {\displaystyle f_{j}(t)} is the time-dependent occupation number for orbital {\displaystyle j}. If the potential vs(r,t) can be determined, or at the least well-approximated, then the original Schrödinger equation, a single partial differential equation in 3N variables, has been replaced by N differential equations in 3 dimensions, each differing only in the initial condition.
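To make the replacement of one 3N-dimensional equation by N three-dimensional orbital equations concrete, the following minimal sketch (not from the source) propagates a single orbital on a 1D grid with a Crank–Nicolson step. The potential v_s(x, t) used here is an arbitrary illustrative stand-in, not a real Kohn–Sham potential (no Hartree or exchange–correlation term is constructed):

    import numpy as np

    n, L, dt = 400, 20.0, 0.01
    x = np.linspace(-L / 2, L / 2, n)
    dx = x[1] - x[0]

    # Kinetic energy -1/2 d^2/dx^2 by second-order finite differences (atomic units)
    T = -0.5 * (np.diag(np.full(n - 1, 1.0), -1) - 2 * np.eye(n)
                + np.diag(np.full(n - 1, 1.0), 1)) / dx**2

    def v_s(t):
        # Illustrative time-dependent potential: harmonic well plus a weak driving field
        return 0.5 * x**2 + 0.1 * x * np.sin(0.5 * t)

    # Initial orbital: ground state of the t = 0 Hamiltonian
    w, vecs = np.linalg.eigh(T + np.diag(v_s(0.0)))
    phi = vecs[:, 0].astype(complex)
    phi /= np.sqrt(np.sum(np.abs(phi)**2) * dx)

    I = np.eye(n)
    for step in range(1000):
        H = T + np.diag(v_s(step * dt))
        # Crank-Nicolson: (1 + i dt/2 H) phi_new = (1 - i dt/2 H) phi_old
        phi = np.linalg.solve(I + 0.5j * dt * H, (I - 0.5j * dt * H) @ phi)

    density = np.abs(phi)**2          # rho_s(x, t) for a single occupied orbital
    print("norm:", np.sum(density) * dx)

The Crank–Nicolson step is a common unitary-to-second-order choice for propagating orbitals; real TDDFT codes use this or more elaborate propagators together with a self-consistent update of v_s at each step.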
The problem of determining approximations to the Kohn–Sham potential is challenging. Analogously to DFT, the time-dependent KS potential is decomposed to extract the external potential of the system and the time-dependent Coulomb interaction, vJ. The remaining component is the exchange-correlation potential:
{\displaystyle v_{s}(\mathbf {r} ,t)=v_{\rm {ext}}(\mathbf {r} ,t)+v_{J}(\mathbf {r} ,t)+v_{\rm {xc}}(\mathbf {r} ,t).\,}
In their seminal paper, Runge and Gross approached the definition of the KS potential through an action-based argument starting from the Dirac action
{\displaystyle A[\Psi ]=\int \mathrm {d} t\ \langle \Psi (t)|H-i{\frac {\partial }{\partial t}}|\Psi (t)\rangle .}
Treated as a functional of the wave function, A[Ψ], variations of the wave function yield the many-body Schrödinger equation as the stationary point. Given the unique mapping between densities and wave function, Runge and Gross then treated the Dirac action as a density functional,
{\displaystyle A[\rho ]=A[\Psi [\rho ]],\,}
and derived a formal expression for the exchange-correlation component of the action, which determines the exchange-correlation potential by functional differentiation. Later it was observed that an approach based on the Dirac action yields paradoxical conclusions when considering the causality of the response functions it generates. The density response function, the functional derivative of the density with respect to the external potential, should be causal: a change in the potential at a given time can not affect the density at earlier times. The response functions from the Dirac action however are symmetric in time so lack the required causal structure. An approach which does not suffer from this issue was later introduced through an action based on the Keldysh formalism of complex-time path integration. An alternative resolution of the causality paradox through a refinement of the action principle in real time has been recently proposed by Vignale.
== Linear response TDDFT ==
Linear-response TDDFT can be used if the external perturbation is small in the sense that it does not completely destroy the ground-state structure of the system. In this case one can analyze the linear response of the system. This is a great advantage as, to first order, the variation of the system will depend only on the ground-state wave-function so that we can simply use all the properties of DFT.
Consider a small time-dependent external perturbation {\displaystyle \delta V^{ext}(t)}.
This gives
{\displaystyle H'(t)=H+\delta V^{ext}(t)}
{\displaystyle H'_{KS}[\rho ](t)=H_{KS}[\rho ]+\delta V_{H}[\rho ](t)+\delta V_{xc}[\rho ](t)+\delta V^{ext}(t)}
and looking at the linear response of the density
{\displaystyle \delta \rho (\mathbf {r} t)=\chi (\mathbf {r} t,\mathbf {r'} t')\delta V^{ext}(\mathbf {r'} t')}
{\displaystyle \delta \rho (\mathbf {r} t)=\chi _{KS}(\mathbf {r} t,\mathbf {r'} t')\delta V^{eff}[\rho ](\mathbf {r'} t')}
where
{\displaystyle \delta V^{eff}[\rho ](t)=\delta V^{ext}(t)+\delta V_{H}[\rho ](t)+\delta V_{xc}[\rho ](t)}
Here and in the following it is assumed that primed variables are integrated.
Within the linear-response domain, the variation of the Hartree (H) and the exchange-correlation (xc) potential to linear order may be expanded with respect to the density variation
{\displaystyle \delta V_{H}[\rho ](\mathbf {r} )={\frac {\delta V_{H}[\rho ]}{\delta \rho }}\delta \rho ={\frac {1}{|\mathbf {r} -\mathbf {r'} |}}\delta \rho (\mathbf {r'} )}
and
{\displaystyle \delta V_{xc}[\rho ](\mathbf {r} )={\frac {\delta V_{xc}[\rho ]}{\delta \rho }}\delta \rho =f_{xc}(\mathbf {r} t,\mathbf {r'} t')\delta \rho (\mathbf {r'} )}
Finally, inserting this relation in the response equation for the KS system and comparing the resultant equation with the response equation for the physical system yields the Dyson equation of TDDFT:
{\displaystyle \chi (\mathbf {r} _{1}t_{1},\mathbf {r} _{2}t_{2})=\chi _{KS}(\mathbf {r_{1}} t_{1},\mathbf {r} _{2}t_{2})+\chi _{KS}(\mathbf {r_{1}} t_{1},\mathbf {r} _{2}'t_{2}')\left({\frac {1}{|\mathbf {r} _{2}'-\mathbf {r} _{1}'|}}+f_{xc}(\mathbf {r} _{2}'t_{2}',\mathbf {r} _{1}'t_{1}')\right)\chi (\mathbf {r} _{1}'t_{1}',\mathbf {r} _{2}t_{2})}
From this last equation it is possible to derive the excitation energies of the system, as these are simply the poles of the response function.
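The pole structure can be illustrated with a toy discretization of the Dyson equation in the frequency domain, chi(w) = (1 - chi_KS(w) K)^(-1) chi_KS(w). Everything in the sketch below is invented for illustration (two fictitious KS transitions, a softened Coulomb-plus-local kernel); it only shows how the kernel shifts the poles of chi away from the bare KS transition energies:

    import numpy as np

    npts = 50
    x = np.linspace(-5.0, 5.0, npts)
    dx = x[1] - x[0]

    # Fictitious occupied->virtual transition densities and bare KS excitation energies
    phi_ia = [np.exp(-x**2), x * np.exp(-x**2)]
    omega_ks = [0.40, 0.75]
    eta = 0.01                      # small broadening

    def chi_ks(w):
        c = np.zeros((npts, npts), dtype=complex)
        for rho_t, w0 in zip(phi_ia, omega_ks):
            outer = np.outer(rho_t, rho_t)
            c += outer * (1.0 / (w - w0 + 1j * eta) - 1.0 / (w + w0 + 1j * eta))
        return c

    # Kernel: softened Coulomb plus a crude local "f_xc" (both purely illustrative)
    K = 1.0 / (np.abs(x[:, None] - x[None, :]) + 0.5) - 0.2 * np.eye(npts) / dx

    for w in np.linspace(0.2, 1.2, 101):
        cks = chi_ks(w)
        chi = np.linalg.solve(np.eye(npts) - cks @ K * dx, cks)
        # |trace chi| peaks sharply near the kernel-shifted excitation energies
        print(f"{w:5.2f}  {abs(np.trace(chi)) * dx:10.3f}")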
Other linear-response approaches include the Casida formalism (an expansion in electron-hole pairs) and the Sternheimer equation (density-functional perturbation theory).
== Key papers ==
Hohenberg, P.; Kohn, W. (1964). "Inhomogeneous Electron Gas". Physical Review. 136 (3B): B864. Bibcode:1964PhRv..136..864H. doi:10.1103/PhysRev.136.B864.
Runge, Erich; Gross, E. K. U. (1984). "Density-Functional Theory for Time-Dependent Systems". Physical Review Letters. 52 (12): 997. Bibcode:1984PhRvL..52..997R. doi:10.1103/PhysRevLett.52.997.
== Books on TDDFT ==
M.A.L. Marques; C.A. Ullrich; F. Nogueira; A. Rubio; K. Burke; E.K.U. Gross, eds. (2006). Time-Dependent Density Functional Theory. Springer-Verlag. ISBN 978-3-540-35422-2.
Carsten Ullrich (2012). Time-Dependent Density-Functional Theory: Concepts and Applications. Oxford Graduate Texts. Oxford University Press. ISBN 978-0-19-956302-9.
== TDDFT codes ==
ELK
Firefly
GAMESS-US
Gaussian
Amsterdam Density Functional
deMon2k
CP2K
Dalton
NWChem
Octopus
pw-teleman library
PARSEC
Qbox/Qb@ll
Q-Chem
Spartan
TeraChem
TURBOMOLE
YAMBO code
ORCA
Jaguar
GPAW
ONETEP
VASP
Quantum ESPRESSO
OpenQP
== References ==
== External links ==
tddft.org
Brief introduction of TD-DFT
The Debye–Hückel theory was proposed by Peter Debye and Erich Hückel as a theoretical explanation for departures from ideality in solutions of electrolytes and plasmas.
It is a linearized Poisson–Boltzmann model, which assumes an extremely simplified model of electrolyte solution but nevertheless gave accurate predictions of mean activity coefficients for ions in dilute solution. The Debye–Hückel equation provides a starting point for modern treatments of non-ideality of electrolyte solutions.
== Overview ==
In the chemistry of electrolyte solutions, an ideal solution is a solution whose colligative properties are proportional to the concentration of the solute. Real solutions may show departures from this kind of ideality. In order to accommodate these effects in the thermodynamics of solutions, the concept of activity was introduced: the properties are then proportional to the activities of the ions. Activity a is proportional to concentration c, with the proportionality constant known as an activity coefficient
{\displaystyle \gamma }:
{\displaystyle a=\gamma c/c^{0}.}
In an ideal electrolyte solution the activity coefficients for all the ions are equal to one. Ideality of an electrolyte solution can be achieved only in very dilute solutions. Non-ideality of more concentrated solutions arises principally (but not exclusively) because ions of opposite charge attract each other due to electrostatic forces, while ions of the same charge repel each other. In consequence, ions are not randomly distributed throughout the solution, as they would be in an ideal solution.
Activity coefficients of single ions cannot be measured experimentally because an electrolyte solution must contain both positively charged ions and negatively charged ions. Instead, a mean activity coefficient {\displaystyle \gamma _{\pm }} is defined. For example, with the electrolyte NaCl,
{\displaystyle \gamma _{\pm }={\left(\gamma _{{\ce {Na+}}}\gamma _{{\ce {Cl-}}}\right)}^{1/2}.}
In general, the mean activity coefficient of a fully dissociated electrolyte of formula AnBm is given by
{\displaystyle \gamma _{\pm }={\left({\gamma _{A}}^{n}{\gamma _{B}}^{m}\right)}^{1/(n+m)}.}
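A one-line sketch of this definition, with illustrative single-ion values (single-ion coefficients are not measurable, so the inputs here are hypothetical numbers used only to show the averaging):

    # Mean activity coefficient of a fully dissociated electrolyte A_n B_m
    def mean_activity(gamma_A, gamma_B, n, m):
        return (gamma_A**n * gamma_B**m) ** (1.0 / (n + m))

    print(mean_activity(0.89, 0.81, 1, 2))   # e.g. a 1:2 electrolyte such as CaCl2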
Activity coefficients are themselves functions of concentration, since the amount of inter-ionic interaction increases as the concentration of the electrolyte increases. Debye and Hückel developed a theory with which single-ion activity coefficients could be calculated. By calculating the mean activity coefficients from them, the theory could be tested against experimental data. It was found to give excellent agreement for "dilute" solutions.
== The model ==
A description of Debye–Hückel theory includes a very detailed discussion of the assumptions and their limitations as well as the mathematical development and applications.
A snapshot of a 2-dimensional section of an idealized electrolyte solution is shown in the picture. The ions are shown as spheres with unit electrical charge. The solvent (pale blue) is shown as a uniform medium, without structure. On average, each ion is surrounded more closely by ions of opposite charge than by ions of like charge. These concepts were developed into a quantitative theory involving ions of charge z1e+ and z2e−, where z can be any integer. The principal assumption is that departure from ideality is due to electrostatic interactions between ions, mediated by Coulomb's law: the force of interaction between two electric charges, separated by a distance, r in a medium of relative permittivity εr is given by
{\displaystyle {\text{force}}={\frac {z_{1}z_{2}e^{2}}{4\pi \varepsilon _{0}\varepsilon _{r}r^{2}}}}
It is also assumed that
The solute is completely dissociated; it is a strong electrolyte.
Ions are spherical and are not polarized by the surrounding electric field. Solvation of ions is ignored except insofar as it determines the effective sizes of the ions.
The solvent plays no role other than providing a medium of constant relative permittivity (dielectric constant).
There is no electrostriction.
Individual ions surrounding a "central" ion can be represented by a statistically averaged cloud of continuous charge density, with a minimum distance of closest approach.
The last assumption means that each cation is surrounded by a spherically symmetric cloud of other ions. The cloud has a net negative charge. Similarly each anion is surrounded by a cloud with net positive charge.
== Mathematical development ==
The deviation from ideality is taken to be a function of the potential energy resulting from the electrostatic interactions between ions and their surrounding clouds. To calculate this energy two steps are needed.
The first step is to specify the electrostatic potential for ion j by means of Poisson's equation
{\displaystyle \nabla ^{2}\psi _{j}(r)=-{\frac {1}{\varepsilon _{0}\varepsilon _{r}}}\rho _{j}(r)}
ψ(r) is the total potential at a distance, r, from the central ion and ρ(r) is the averaged charge density of the surrounding cloud at that distance. To apply this formula it is essential that the cloud has spherical symmetry, that is, the charge density is a function only of distance from the central ion as this allows the Poisson equation to be cast in terms of spherical coordinates with no angular dependence.
The second step is to calculate the charge density by means of a Boltzmann distribution.
{\displaystyle n'_{i}=n_{i}\exp \left({\frac {-z_{i}e\psi _{j}(r)}{k_{\text{B}}T}}\right)}
where kB is Boltzmann constant and T is the temperature. This distribution also depends on the potential ψ(r) and this introduces a serious difficulty in terms of the superposition principle. Nevertheless, the two equations can be combined to produce the Poisson–Boltzmann equation.
{\displaystyle \nabla ^{2}\psi _{j}(r)=-{\frac {1}{\varepsilon _{0}\varepsilon _{r}}}\sum _{i}\left[n_{i}(z_{i}e)\exp \left({\frac {-z_{i}e\psi _{j}(r)}{k_{\text{B}}T}}\right)\right]}
Solution of this equation is far from straightforward. Debye and Hückel expanded the exponential as a truncated Taylor series to first order. The zeroth order term vanishes because the solution is on average electrically neutral (so that {\textstyle \sum n_{i}z_{i}=0}), which leaves us with only the first order term. The result has the form of the Helmholtz equation
{\displaystyle \nabla ^{2}\psi _{j}(r)=\kappa ^{2}\psi _{j}(r)\qquad {\text{with}}\qquad \kappa ^{2}={\frac {e^{2}}{\varepsilon _{0}\varepsilon _{r}k_{\text{B}}T}}\sum _{i}n_{i}z_{i}^{2},}
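The parameter 1/κ sets the screening (Debye) length. The sketch below evaluates it directly from the definition of κ² above for a 1:1 electrolyte; the relative permittivity of water at 25 °C (taken as about 78.4) is an assumed input rather than something the theory supplies:

    import numpy as np

    e = 1.602176634e-19        # C
    eps0 = 8.8541878128e-12    # F/m
    kB = 1.380649e-23          # J/K
    NA = 6.02214076e23         # 1/mol

    def debye_length(c_molar, z=1, epsr=78.4, T=298.15):
        n = c_molar * 1000.0 * NA                         # number density of each ion, m^-3
        kappa_sq = e**2 * (n * z**2 + n * z**2) / (eps0 * epsr * kB * T)
        return 1.0 / np.sqrt(kappa_sq)

    print(debye_length(0.01) * 1e9, "nm")   # roughly 3 nm for 0.01 M of a 1:1 salt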
which has an analytical solution. This equation applies to electrolytes with equal numbers of ions of each charge. Nonsymmetrical electrolytes require another term with ψ². For symmetrical electrolytes, this reduces to the modified spherical Bessel equation
{\displaystyle \left({\frac {\partial ^{2}}{\partial r^{2}}}+{\frac {2}{r}}{\frac {\partial }{\partial r}}-\kappa ^{2}\right)\psi _{j}=0,}
with solutions
{\displaystyle \psi _{j}(r)=A'{\frac {e^{-\kappa r}}{r}}+A''{\frac {e^{\kappa r}}{r}}.}
The coefficients {\displaystyle A'} and {\displaystyle A''} are fixed by the boundary conditions. As {\displaystyle r\to \infty }, {\displaystyle \psi } must not diverge, so {\displaystyle A''=0}. At {\displaystyle r=a_{0}}, which is the distance of closest approach of ions, the force exerted by the charge should be balanced by the force of other ions, imposing {\displaystyle \partial _{r}\psi _{j}(a_{0})=-z_{j}e/(4\pi \varepsilon _{0}\varepsilon _{r}a_{0}^{2})}, from which {\displaystyle A'} is found, yielding
{\displaystyle \psi _{j}(r)={\frac {z_{j}e}{4\pi \varepsilon _{0}\varepsilon _{r}}}{\frac {e^{\kappa a_{0}}}{1+\kappa a_{0}}}{\frac {e^{-\kappa r}}{r}}}
The electrostatic potential energy, {\displaystyle u_{j}}, of the ion at {\displaystyle r=0} is
{\displaystyle u_{j}=z_{j}e\left(\psi _{j}(a_{0})-{\frac {z_{j}e}{4\pi \varepsilon _{0}\varepsilon _{r}}}{\frac {1}{a_{0}}}\right)=-{\frac {z_{j}^{2}e^{2}}{4\pi \varepsilon _{0}\varepsilon _{r}}}{\frac {\kappa }{1+\kappa a_{0}}}}
This is the potential energy of a single ion in a solution. The multiple-charge generalization from electrostatics gives an expression for the potential energy of the entire solution.
The mean activity coefficient is given by the logarithm of this quantity as follows
{\displaystyle \log _{10}\gamma _{\pm }=-Az_{j}^{2}{\frac {\sqrt {I}}{1+Ba_{0}{\sqrt {I}}}}}
{\displaystyle A={\frac {e^{2}B}{2.303\times 8\pi \varepsilon _{0}\varepsilon _{r}k_{\text{B}}T}}}
{\displaystyle B=\left({\frac {2e^{2}N}{\varepsilon _{0}\varepsilon _{r}k_{\text{B}}T}}\right)^{1/2}}
where I is the ionic strength and a0 is a parameter that represents the distance of closest approach of ions. For aqueous solutions at 25 °C A = 0.51 mol−1/2dm3/2 and B = 3.29 nm−1mol−1/2dm3/2
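As a numerical illustration, the expression above can be evaluated with the aqueous 25 °C values just quoted (A = 0.51, B = 3.29 nm⁻¹ in the matching molar units); the ion-size parameter a0 used below (0.3 nm) is an illustrative guess rather than a fitted value:

    import math

    # log10(gamma_pm) = -A z^2 sqrt(I) / (1 + B a0 sqrt(I)), I in mol/dm^3, a0 in nm
    def log10_gamma(I, z=1, a0_nm=0.3, A=0.51, B=3.29):
        return -A * z**2 * math.sqrt(I) / (1.0 + B * a0_nm * math.sqrt(I))

    for I in (0.001, 0.01, 0.1):
        print(I, 10 ** log10_gamma(I))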
{\displaystyle A} is a constant that depends on temperature. If {\displaystyle I} is expressed in terms of molality, instead of molarity (as in the equation above and in the rest of this article), then an experimental value for {\displaystyle A} of water is {\displaystyle 1.172{\text{ mol}}^{-1/2}{\text{kg}}^{1/2}} at 25 °C. It is common to use a base-10 logarithm, in which case we factor ln 10, so A is {\displaystyle 0.509{\text{ mol}}^{-1/2}{\text{kg}}^{1/2}}. The multiplier 10³ before {\displaystyle I/2} in the equation is for the case when the dimensions of {\displaystyle I} are {\displaystyle {\text{mol}}/{\text{dm}}^{3}}. When the dimensions of {\displaystyle I} are {\displaystyle {\text{mole}}/{\text{m}}^{3}}, the multiplier 10³ must be dropped from the equation.: section 2.5.2
The most significant aspect of this result is the prediction that the mean activity coefficient is a function of ionic strength rather than the electrolyte concentration. For very low values of the ionic strength the value of the denominator in the expression above becomes nearly equal to one. In this situation the mean activity coefficient is proportional to the square root of the ionic strength. This is known as the Debye–Hückel limiting law. In this limit the equation is given as follows: section 2.5.2
{\displaystyle \ln(\gamma _{i})=-{\frac {z_{i}^{2}q^{2}\kappa }{8\pi \varepsilon _{r}\varepsilon _{0}k_{\text{B}}T}}=-{\frac {z_{i}^{2}q^{3}N_{\text{A}}^{1/2}}{4\pi (\varepsilon _{r}\varepsilon _{0}k_{\text{B}}T)^{3/2}}}{\sqrt {10^{3}{\frac {I}{2}}}}=-Az_{i}^{2}{\sqrt {I}},}
The excess osmotic pressure obtained from Debye–Hückel theory is in cgs units:
{\displaystyle P^{\text{ex}}=-{\frac {k_{\text{B}}T\kappa _{\text{cgs}}^{3}}{24\pi }}=-{\frac {k_{\text{B}}T}{24\pi }}{\left({\frac {4\pi \sum _{j}c_{j}q_{j}}{\varepsilon _{0}\varepsilon _{r}k_{\text{B}}T}}\right)}^{3/2}.}
Therefore, the total pressure is the sum of the excess osmotic pressure and the ideal pressure {\textstyle P^{\text{id}}=k_{\text{B}}T\sum _{i}c_{i}}. The osmotic coefficient is then given by
{\displaystyle \phi ={\frac {P^{\text{id}}+P^{\text{ex}}}{P^{\text{id}}}}=1+{\frac {P^{\text{ex}}}{P^{\text{id}}}}.}
== Nondimensionalization ==
Taking the differential equation from earlier (as stated above, the equation only holds for low concentrations):
{\displaystyle {\frac {\partial ^{2}}{\partial r^{2}}}\varphi (r)+{\frac {2}{r}}{\frac {\partial }{\partial r}}\varphi (r)={\frac {Iq\varphi (r)}{\varepsilon _{r}\varepsilon _{0}k_{\text{B}}T}}=\kappa ^{2}\varphi (r).}
Using the Buckingham π theorem on this problem results in the following dimensionless groups:
{\displaystyle {\begin{aligned}\pi _{1}&={\frac {q\varphi (r)}{k_{\text{B}}T}}=\Phi (R(r)),&\pi _{2}&=\varepsilon _{r},\\[1ex]\pi _{3}&={\frac {ak_{\text{B}}T\varepsilon _{0}}{q^{2}}},&\pi _{4}&=a^{3}I,\\[1ex]\pi _{5}&=z_{0},&\pi _{6}&={\frac {r}{a}}=R(r).\end{aligned}}}
{\displaystyle \Phi } is called the reduced scalar electric potential field. {\displaystyle R} is called the reduced radius. The existing groups may be recombined to form two other dimensionless groups for substitution into the differential equation. The first is what could be called the square of the reduced inverse screening length, {\displaystyle (\kappa a)^{2}}. The second could be called the reduced central ion charge, {\displaystyle Z_{0}} (with a capital Z). Note that, though {\displaystyle z_{0}} is already dimensionless, without the substitution given below, the differential equation would still be dimensional.
{\displaystyle {\frac {\pi _{4}}{\pi _{2}\pi _{3}}}={\frac {a^{2}q^{2}I}{\varepsilon _{r}\varepsilon _{0}k_{\text{B}}T}}=(\kappa a)^{2}}
{\displaystyle {\frac {\pi _{5}}{\pi _{2}\pi _{3}}}={\frac {z_{0}q^{2}}{4\pi a\varepsilon _{r}\varepsilon _{0}k_{\text{B}}T}}=Z_{0}}
To obtain the nondimensionalized differential equation and initial conditions, use the {\displaystyle \pi } groups to eliminate {\displaystyle \varphi (r)} in favor of {\displaystyle \Phi (R(r))}, then eliminate {\displaystyle R(r)} in favor of {\displaystyle r} while carrying out the chain rule and substituting {\displaystyle {R^{\prime }}(r)=1/a}, then eliminate {\displaystyle r} in favor of {\displaystyle R} (no chain rule needed), then eliminate {\displaystyle I} in favor of {\displaystyle (\kappa a)^{2}}, then eliminate {\displaystyle z_{0}} in favor of {\displaystyle Z_{0}}. The resulting equations are as follows:
{\displaystyle \left.{\frac {\partial \Phi (R)}{\partial R}}\right|_{R=1}=-Z_{0}}
{\displaystyle \Phi (\infty )=0}
{\displaystyle {\frac {\partial ^{2}\Phi (R)}{\partial R^{2}}}+{\frac {2}{R}}{\frac {\partial \Phi (R)}{\partial R}}=(\kappa a)^{2}\Phi (R).}
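The screened-Coulomb potential derived earlier, rewritten in the reduced variables as Phi(R) = Z0 exp(κa) exp(-κa R) / ((1 + κa) R), solves this boundary-value problem. A quick numerical check (a sketch, using parameter values of the size quoted below for the 0.01 M table-salt example):

    import numpy as np

    ka, Z0 = 0.0237, 7.017          # reduced inverse screening length and central ion charge
    R = np.linspace(1.0, 50.0, 20001)
    Phi = Z0 * np.exp(ka) / (1.0 + ka) * np.exp(-ka * R) / R

    dR = R[1] - R[0]
    dPhi = np.gradient(Phi, dR)
    d2Phi = np.gradient(dPhi, dR)

    # Residual of Phi'' + (2/R) Phi' - (ka)^2 Phi, and the boundary condition at R = 1
    residual = d2Phi + 2.0 / R * dPhi - ka**2 * Phi
    print("max interior residual:", np.max(np.abs(residual[10:-10])))
    print("Phi'(1) vs -Z0:", dPhi[0], -Z0)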
For table salt in 0.01 M solution at 25 °C, a typical value of {\displaystyle (\kappa a)^{2}} is 0.0005636, while a typical value of {\displaystyle Z_{0}} is 7.017, highlighting the fact that, in low concentrations, {\displaystyle (\kappa a)^{2}} is a target for a zero order of magnitude approximation such as perturbation analysis. Unfortunately, because of the boundary condition at infinity, regular perturbation does not work. The same boundary condition prevents us from finding the exact solution to the equations. Singular perturbation may work, however.
== Limitations and extensions ==
This equation for {\displaystyle \log \gamma _{\pm }} gives satisfactory agreement with experimental measurements for low electrolyte concentrations, typically less than 10⁻³ mol/L. Deviations from the theory occur at higher concentrations and with electrolytes that produce ions of higher charges, particularly unsymmetrical electrolytes. Essentially these deviations occur because the model is oversimplified, so there is little to be gained by making small adjustments to the model. The individual assumptions can be challenged in turn.
Complete dissociation. Ion association may take place, particularly with ions of higher charge. This was followed up in detail by Niels Bjerrum. The Bjerrum length is the separation at which the electrostatic interaction between two ions is comparable in magnitude to kBT.
Weak electrolytes. A weak electrolyte is one that is not fully dissociated. As such it has a dissociation constant. The dissociation constant can be used to calculate the extent of dissociation and hence, make the necessary correction needed to calculate activity coefficients.
Ions are spherical, not point charges and are not polarized. Many ions such as the nitrate ion, NO3−, are not spherical. Polyatomic ions are also polarizable.
Role of the solvent. The solvent is not a structureless medium but is made up of molecules. The water molecules in aqueous solution are both dipolar and polarizable. Both cations and anions have a strong primary solvation shell and a weaker secondary solvation shell. Ion–solvent interactions are ignored in Debye–Hückel theory.
Moreover, ionic radius is assumed to be negligible, but at higher concentrations, the ionic radius becomes comparable to the radius of the ionic atmosphere.
Most extensions to Debye–Hückel theory are empirical in nature. They usually allow the Debye–Hückel equation to be followed at low concentration and add further terms in some power of the ionic strength to fit experimental observations. The main extensions are the Davies equation, Pitzer equations and specific ion interaction theory.
One such extended Debye–Hückel equation is given by:
{\displaystyle -\log _{10}(\gamma )={\frac {A|z_{+}z_{-}|{\sqrt {I}}}{1+Ba{\sqrt {I}}}}}
where
γ
{\displaystyle \gamma }
as its common logarithm is the activity coefficient,
z
{\displaystyle z}
is the integer charge of the ion (1 for H+, 2 for Mg2+ etc.),
I
{\displaystyle I}
is the ionic strength of the aqueous solution, and
a
{\displaystyle a}
is the size or effective diameter of the ion in angstrom. The effective hydrated radius of the ion, a is the radius of the ion and its closely bound water molecules. Large ions and less highly charged ions bind water less tightly and have smaller hydrated radii than smaller, more highly charged ions. Typical values are 3Å for ions such as H+, Cl−, CN−, and HCOO−. The effective diameter for the hydronium ion is 9Å.
{\displaystyle A} and {\displaystyle B} are constants with values of 0.5085 and 0.3281, respectively, at 25 °C in water [1].
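As an illustration (not part of the original theory), a minimal Python sketch of the extended equation above; the constants A = 0.5085 and B = 0.3281 and the example ion size come from the text, while the function name and example inputs are hypothetical:
import math
def extended_debye_huckel_log10_gamma(z_plus, z_minus, ionic_strength, a_angstrom, A=0.5085, B=0.3281):
    # -log10(gamma) from the extended Debye-Hueckel equation at 25 °C in water
    sqrt_I = math.sqrt(ionic_strength)
    return A * abs(z_plus * z_minus) * sqrt_I / (1.0 + B * a_angstrom * sqrt_I)
# Example: a 1:1 electrolyte at I = 0.01 mol/L with an effective ion size of 3 Å
minus_log_gamma = extended_debye_huckel_log10_gamma(1, -1, 0.01, 3.0)
print(round(10 ** (-minus_log_gamma), 3))  # activity coefficient, roughly 0.90 at this ionic strength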
The extended Debye–Hückel equation provides accurate results for μ ≤ 0.1. For solutions of greater ionic strengths, the Pitzer equations should be used. In these solutions the activity coefficient may actually increase with ionic strength.
The Debye–Hückel equation cannot be used in solutions of surfactants, where the presence of micelles influences the electrochemical properties of the system (even a rough estimate overestimates γ by ~50%).
=== Electrolytes mixtures ===
The theory can also be applied to dilute solutions of mixed electrolytes. Freezing point depression measurements have been used for this purpose.
== Conductivity ==
The treatment given so far is for a system not subject to an external electric field. When conductivity is measured the system is subject to an oscillating external field due to the application of an AC voltage to electrodes immersed in the solution. Debye and Hückel modified their theory in 1926 and their theory was further modified by Lars Onsager in 1927. All the postulates of the original theory were retained. In addition it was assumed that the electric field causes the charge cloud to be distorted away from spherical symmetry. After taking this into account, together with the specific requirements of moving ions, such as viscosity and electrophoretic effects, Onsager was able to derive a theoretical expression to account for the empirical relation known as Kohlrausch's Law, for the molar conductivity, Λm.
{\displaystyle \Lambda _{m}=\Lambda _{m}^{0}-K{\sqrt {c}}}
{\displaystyle \Lambda _{m}^{0}} is known as the limiting molar conductivity, K is an empirical constant and c is the electrolyte concentration. ("Limiting" here means at the limit of infinite dilution.)
Onsager's expression is
{\displaystyle \Lambda _{m}=\Lambda _{m}^{0}-(A+B\Lambda _{m}^{0}){\sqrt {c}}}
where A and B are constants that depend only on known quantities such as temperature, the charges on the ions and the dielectric constant and viscosity of the solvent. This is known as the Debye–Hückel–Onsager equation. However, this equation only applies to very dilute solutions and has been largely superseded by other equations due to Fuoss and Onsager, 1932 and 1957 and later.
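A minimal sketch of the Debye–Hückel–Onsager form above; the numerical coefficients and limiting conductivity used below are commonly quoted values for aqueous 1:1 electrolytes at 25 °C, quoted from memory rather than taken from this article, so treat them as illustrative assumptions:
def onsager_molar_conductivity(limiting_molar_conductivity, A, B, concentration):
    # molar conductivity falls off as sqrt(c) in the Debye-Hueckel-Onsager equation
    return limiting_molar_conductivity - (A + B * limiting_molar_conductivity) * concentration ** 0.5
# Units: S cm^2 mol^-1 for conductivities, mol/L for concentration
for c in (1e-4, 1e-3, 1e-2):
    print(c, round(onsager_molar_conductivity(126.5, 60.2, 0.229, c), 2))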
== Summary of Debye and Hückel's first article on the theory of dilute electrolytes ==
The English title of the article is "On the Theory of Electrolytes. I. Freezing Point Depression and Related Phenomena". It was originally published in 1923 in volume 24 of a German-language journal Physikalische Zeitschrift. An English translation: 217–63 of the article is included in a book of collected papers presented to Debye by "his pupils, friends, and the publishers on the occasion of his seventieth birthday on March 24, 1954".: xv Another English translation was completed in 2019. The article deals with the calculation of properties of electrolyte solutions that are under the influence of ion-induced electric fields, thus it deals with electrostatics.
In the same year they first published this article, Debye and Hückel, hereinafter D&H, also released an article that covered their initial characterization of solutions under the influence of electric fields called "On the Theory of Electrolytes. II. Limiting Law for Electric Conductivity", but that subsequent article is not (yet) covered here.
In the following summary (as yet incomplete and unchecked), modern notation and terminology are used, from both chemistry and mathematics, in order to prevent confusion. Also, with a few exceptions to improve clarity, the subsections in this summary are (very) condensed versions of the same subsections of the original article.
=== Introduction ===
D&H note that the Guldberg–Waage formula for electrolyte species in chemical reaction equilibrium in classical form is: 221
{\displaystyle \prod _{i=1}^{s}x_{i}^{\nu _{i}}=K,}
where
{\displaystyle \prod } is a notation for multiplication,
{\displaystyle i} is a dummy variable indicating the species,
{\displaystyle s} is the number of species participating in the reaction,
{\displaystyle x_{i}} is the mole fraction of species {\displaystyle i},
{\displaystyle \nu _{i}} is the stoichiometric coefficient of species {\displaystyle i},
K is the equilibrium constant.
D&H say that, due to the "mutual electrostatic forces between the ions", it is necessary to modify the Guldberg–Waage equation by replacing {\displaystyle K} with {\displaystyle \gamma K}, where {\displaystyle \gamma } is an overall activity coefficient, not a "special" activity coefficient (a separate activity coefficient associated with each species), which is what is used in modern chemistry as of 2007.
The relationship between {\displaystyle \gamma } and the special activity coefficients {\displaystyle \gamma _{i}} is: 248
{\displaystyle \log(\gamma )=\sum _{i=1}^{s}\nu _{i}\log(\gamma _{i}).}
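For illustration, a minimal sketch of the relation just stated; the stoichiometric coefficients and per-ion activity coefficients below are hypothetical example values:
import math
def overall_activity_coefficient(stoich_coeffs, special_gammas):
    # log(gamma) = sum_i nu_i * log(gamma_i), i.e. a stoichiometry-weighted product
    log_gamma = sum(nu * math.log10(g) for nu, g in zip(stoich_coeffs, special_gammas))
    return 10 ** log_gamma
# Hypothetical 1:1 electrolyte with per-ion activity coefficients of 0.90 and 0.88
print(round(overall_activity_coefficient([1, 1], [0.90, 0.88]), 3))  # 0.792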
=== Fundamentals ===
D&H use the Helmholtz and Gibbs free entropies {\displaystyle \Phi } and {\displaystyle \Xi } to express the effect of electrostatic forces in an electrolyte on its thermodynamic state. Specifically, they split most of the thermodynamic potentials into classical and electrostatic terms:
{\displaystyle \Phi =S-{\frac {U}{T}}=-{\frac {A}{T}},}
where
{\displaystyle \Phi } is Helmholtz free entropy,
{\displaystyle S} is entropy,
{\displaystyle U} is internal energy,
{\displaystyle T} is temperature,
{\displaystyle A} is Helmholtz free energy.
D&H give the total differential of {\displaystyle \Phi } as: 222
{\displaystyle d\Phi ={\frac {P}{T}}\,dV+{\frac {U}{T^{2}}}\,dT,}
where
{\displaystyle P} is pressure,
{\displaystyle V} is volume.
By the definition of the total differential, this means that
{\displaystyle {\frac {P}{T}}={\frac {\partial \Phi }{\partial V}},}
{\displaystyle {\frac {U}{T^{2}}}={\frac {\partial \Phi }{\partial T}},}
which are useful further on.
As stated previously, the internal energy is divided into two parts: 222
{\displaystyle U=U_{k}+U_{e}}
where
{\displaystyle k} indicates the classical part,
{\displaystyle e} indicates the electric part.
Similarly, the Helmholtz free entropy is also divided into two parts:
{\displaystyle \Phi =\Phi _{k}+\Phi _{e}.}
D&H state, without giving the logic, that: 222
{\displaystyle \Phi _{e}=\int {\frac {U_{e}}{T^{2}}}\,dT.}
It would seem that, without some justification,
{\displaystyle \Phi _{e}=\int {\frac {P_{e}}{T}}\,dV+\int {\frac {U_{e}}{T^{2}}}\,dT.}
Without mentioning it specifically, D&H later give what might be the required (above) justification while arguing that {\displaystyle \Phi _{e}=\Xi _{e}}, an assumption that the solvent is incompressible.
The definition of the Gibbs free entropy {\displaystyle \Xi } is: 222–3
{\displaystyle \Xi =S-{\frac {U+PV}{T}}=\Phi -{\frac {PV}{T}}=-{\frac {G}{T}},}
where
{\displaystyle G} is Gibbs free energy.
D&H give the total differential of {\displaystyle \Xi } as: 222
{\displaystyle d\Xi =-{\frac {V}{T}}\,dP+{\frac {U+PV}{T^{2}}}\,dT.}
At this point D&H note that, for water containing 1 mole per liter of potassium chloride (nominal pressure and temperature aren't given), the electric pressure {\displaystyle P_{e}} amounts to 20 atmospheres. Furthermore, they note that this level of pressure gives a relative volume change of 0.001. Therefore, they neglect change in volume of water due to electric pressure, writing: 223
{\displaystyle \Xi =\Xi _{k}+\Xi _{e},}
and put
{\displaystyle \Xi _{e}=\Phi _{e}=\int {\frac {U_{e}}{T^{2}}}\,dT.}
D&H say that, according to Planck, the classical part of the Gibbs free entropy is: 223
{\displaystyle \Xi _{k}=\sum _{i=0}^{s}N_{i}(\xi _{i}-k_{\text{B}}\ln(x_{i})),}
where
{\displaystyle i} is a species,
{\displaystyle s} is the number of different particle types in solution,
{\displaystyle N_{i}} is the number of particles of species i,
{\displaystyle \xi _{i}} is the particle-specific Gibbs free entropy of species i,
{\displaystyle k_{\text{B}}} is the Boltzmann constant,
{\displaystyle x_{i}} is the mole fraction of species i.
Species zero is the solvent. The definition of {\displaystyle \xi _{i}} is as follows, where lower-case letters indicate the particle-specific versions of the corresponding extensive properties: 223
{\displaystyle \xi _{i}=s_{i}-{\frac {u_{i}+Pv_{i}}{T}}.}
D&H don't say so, but the functional form for {\displaystyle \Xi _{k}} may be derived from the functional dependence of the chemical potential of a component of an ideal mixture upon its mole fraction.
D&H note that the internal energy {\displaystyle U} of a solution is lowered by the electrical interaction of its ions, but that this effect can't be determined by using the crystallographic approximation for distances between dissimilar atoms (the cube root of the ratio of total volume to the number of particles in the volume). This is because there is more thermal motion in a liquid solution than in a crystal. The thermal motion tends to smear out the natural lattice that would otherwise be constructed by the ions. Instead, D&H introduce the concept of an ionic atmosphere or cloud. Like the crystal lattice, each ion still attempts to surround itself with oppositely charged ions, but in a more free-form manner; at small distances away from positive ions, one is more likely to find negative ions and vice versa.: 225
=== The potential energy of an arbitrary ion solution ===
Electroneutrality of a solution requires that: 233
{\displaystyle \sum _{i=1}^{s}N_{i}z_{i}=0,}
where
{\displaystyle N_{i}} is the total number of ions of species i in the solution,
{\displaystyle z_{i}} is the charge number of species i.
To bring an ion of species i, initially far away, to a point {\displaystyle P} within the ion cloud requires interaction energy in the amount of {\displaystyle z_{i}q\varphi }, where {\displaystyle q} is the elementary charge, and {\displaystyle \varphi } is the value of the scalar electric potential field at {\displaystyle P}. If electric forces were the only factor in play, the minimal-energy configuration of all the ions would be achieved in a close-packed lattice configuration. However, the ions are in thermal equilibrium with each other and are relatively free to move. Thus they obey Boltzmann statistics and form a Boltzmann distribution. All species' number densities {\displaystyle n_{i}} are altered from their bulk (overall average) values {\displaystyle n_{i}^{0}} by the corresponding Boltzmann factor {\displaystyle e^{-{\frac {z_{i}q\varphi }{k_{\text{B}}T}}}}, where {\displaystyle k_{\text{B}}} is the Boltzmann constant, and {\displaystyle T} is the temperature. Thus at every point in the cloud: 233
{\displaystyle n_{i}={\frac {N_{i}}{V}}e^{-{\frac {z_{i}q\varphi }{k_{\text{B}}T}}}=n_{i}^{0}e^{-{\frac {z_{i}q\varphi }{k_{\text{B}}T}}}.}
Note that in the infinite temperature limit, all ions are distributed uniformly, with no regard for their electrostatic interactions.: 227
The charge density is related to the number density: 233
{\displaystyle \rho =\sum _{i}z_{i}qn_{i}=\sum _{i}z_{i}qn_{i}^{0}e^{-{\frac {z_{i}q\varphi }{k_{\text{B}}T}}}.}
When combining this result for the charge density with the Poisson equation from electrostatics, a form of the Poisson–Boltzmann equation results: 233
{\displaystyle \nabla ^{2}\varphi =-{\frac {\rho }{\varepsilon _{r}\varepsilon _{0}}}=-\sum _{i}{\frac {z_{i}qn_{i}^{0}}{\varepsilon _{r}\varepsilon _{0}}}e^{-{\frac {z_{i}q\varphi }{k_{\text{B}}T}}}.}
This equation is difficult to solve and does not follow the principle of linear superposition for the relationship between the number of charges and the strength of the potential field. It has been solved analytically by the Swedish mathematician Thomas Hakon Gronwall and his collaborators, the physical chemists V. K. La Mer and Karl Sandved, in a 1928 article in Physikalische Zeitschrift dealing with extensions to Debye–Hückel theory.
However, for sufficiently low concentrations of ions, a first-order Taylor series expansion approximation for the exponential function may be used ({\displaystyle e^{x}\approx 1+x} for {\displaystyle 0<x\ll 1}) to create a linear differential equation.: Section 2.4.2 D&H say that this approximation holds at large distances between ions,: 227 which is the same as saying that the concentration is low. Lastly, they claim without proof that the addition of more terms in the expansion has little effect on the final solution.: 227 Thus
{\displaystyle -\sum _{i}{\frac {z_{i}qn_{i}^{0}}{\varepsilon _{r}\varepsilon _{0}}}e^{-{\frac {z_{i}q\varphi }{k_{\text{B}}T}}}\approx -\sum _{i}{\frac {z_{i}qn_{i}^{0}}{\varepsilon _{r}\varepsilon _{0}}}\left(1-{\frac {z_{i}q\varphi }{k_{\text{B}}T}}\right)=-\left(\sum _{i}{\frac {z_{i}qn_{i}^{0}}{\varepsilon _{r}\varepsilon _{0}}}-\sum _{i}{\frac {z_{i}^{2}q^{2}n_{i}^{0}\varphi }{\varepsilon _{r}\varepsilon _{0}k_{\text{B}}T}}\right).}
The Poisson–Boltzmann equation is transformed to: 233
{\displaystyle \nabla ^{2}\varphi =\sum _{i}{\frac {z_{i}^{2}q^{2}n_{i}^{0}\varphi }{\varepsilon _{r}\varepsilon _{0}k_{\text{B}}T}},}
because the first summation is zero due to electroneutrality.: 234
Factor out the scalar potential and assign the leftovers, which are constant, to {\displaystyle \kappa ^{2}}. Also, let {\displaystyle I} be the ionic strength of the solution: 234
{\displaystyle \kappa ^{2}=\sum _{i}{\frac {z_{i}^{2}q^{2}n_{i}^{0}}{\varepsilon _{r}\varepsilon _{0}k_{\text{B}}T}}={\frac {2Iq^{2}}{\varepsilon _{r}\varepsilon _{0}k_{\text{B}}T}},}
{\displaystyle I={\frac {1}{2}}\sum _{i}z_{i}^{2}n_{i}^{0}.}
So, the fundamental equation is reduced to a form of the Helmholtz equation:
{\displaystyle \nabla ^{2}\varphi =\kappa ^{2}\varphi .}
Today, {\displaystyle \kappa ^{-1}} is called the Debye screening length. D&H recognize the importance of the parameter in their article and characterize it as a measure of the thickness of the ion atmosphere, which is an electrical double layer of the Gouy–Chapman type.: 229
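To make the expression for κ concrete, a minimal Python sketch that evaluates the Debye screening length for a dilute aqueous 1:1 electrolyte; the concentration, temperature and relative permittivity below are example inputs, not values from the article:
import math
q = 1.602176634e-19        # elementary charge, C
k_B = 1.380649e-23         # Boltzmann constant, J/K
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
N_A = 6.02214076e23        # Avogadro constant, 1/mol
def debye_length(conc_mol_per_L, T=298.15, eps_r=78.5, z=1):
    # kappa^-1 for a symmetric z:z electrolyte; I = (1/2) sum z_i^2 n_i^0 reduces to z^2 * n0
    n0 = conc_mol_per_L * 1e3 * N_A          # ions of each sign per m^3
    ionic_strength = z**2 * n0
    kappa_sq = 2 * ionic_strength * q**2 / (eps_r * eps0 * k_B * T)
    return 1.0 / math.sqrt(kappa_sq)
print(debye_length(0.01) * 1e9, "nm")  # roughly 3 nm for 0.01 M at room temperature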
The equation may be expressed in spherical coordinates by taking {\displaystyle r=0} at some arbitrary ion: 229
{\displaystyle \nabla ^{2}\varphi ={\frac {1}{r^{2}}}{\frac {\partial }{\partial r}}\left(r^{2}{\frac {\partial \varphi (r)}{\partial r}}\right)={\frac {\partial ^{2}\varphi (r)}{\partial r^{2}}}+{\frac {2}{r}}{\frac {\partial \varphi (r)}{\partial r}}=\kappa ^{2}\varphi (r).}
The equation has the following general solution (keep in mind that {\displaystyle \kappa } is a positive constant): 229
{\displaystyle \varphi (r)=A{\frac {e^{-{\sqrt {\kappa ^{2}}}r}}{r}}+A'{\frac {e^{{\sqrt {\kappa ^{2}}}r}}{2r{\sqrt {\kappa ^{2}}}}}=A{\frac {e^{-\kappa r}}{r}}+A''{\frac {e^{\kappa r}}{r}}=A{\frac {e^{-\kappa r}}{r}},}
where {\displaystyle A}, {\displaystyle A'}, and {\displaystyle A''} are undetermined constants.
The electric potential is zero at infinity by definition, so {\displaystyle A''} must be zero.: 229
In the next step, D&H assume that there is a certain radius {\displaystyle a_{i}}, beyond which no ions in the atmosphere may approach the (charge) center of the singled-out ion. This radius may be due to the physical size of the ion itself, the sizes of the ions in the cloud, and any water molecules that surround the ions. Mathematically, they treat the singled-out ion as a point charge to which one may not approach within the radius {\displaystyle a_{i}}.: 231
The potential of a point charge by itself is
{\displaystyle \varphi _{\text{pc}}(r)={\frac {1}{4\pi \varepsilon _{r}\varepsilon _{0}}}{\frac {z_{i}q}{r}}.}
D&H say that the total potential inside the sphere is: 232
{\displaystyle \varphi _{\text{sp}}(r)=\varphi _{\text{pc}}(r)+B_{i}={\frac {1}{4\pi \varepsilon _{r}\varepsilon _{0}}}{\frac {z_{i}q}{r}}+B_{i},}
where {\displaystyle B_{i}} is a constant that represents the potential added by the ionic atmosphere. No justification for {\displaystyle B_{i}} being a constant is given. However, one can see that this is the case by considering that any spherical static charge distribution is subject to the mathematics of the shell theorem. The shell theorem says that no force is exerted on charged particles inside a sphere (of arbitrary charge). Since the ion atmosphere is assumed to be (time-averaged) spherically symmetric, with charge varying as a function of radius {\displaystyle r}, it may be represented as an infinite series of concentric charge shells. Therefore, inside the radius {\displaystyle a_{i}}, the ion atmosphere exerts no force. If the force is zero, then the potential is a constant (by definition).
In a combination of the continuously distributed model which gave the Poisson–Boltzmann equation and the model of the point charge, it is assumed that at the radius {\displaystyle a_{i}}, there is a continuity of {\displaystyle \varphi (r)} and its first derivative. Thus: 232
{\displaystyle \varphi (a_{i})=A_{i}{\frac {e^{-\kappa a_{i}}}{a_{i}}}={\frac {1}{4\pi \varepsilon _{r}\varepsilon _{0}}}{\frac {z_{i}q}{a_{i}}}+B_{i}=\varphi _{\text{sp}}(a_{i}),}
{\displaystyle \varphi '(a_{i})=-{\frac {A_{i}e^{-\kappa a_{i}}(1+\kappa a_{i})}{a_{i}^{2}}}=-{\frac {1}{4\pi \varepsilon _{r}\varepsilon _{0}}}{\frac {z_{i}q}{a_{i}^{2}}}=\varphi _{\text{sp}}'(a_{i}),}
{\displaystyle A_{i}={\frac {z_{i}q}{4\pi \varepsilon _{r}\varepsilon _{0}}}{\frac {e^{\kappa a_{i}}}{1+\kappa a_{i}}},}
{\displaystyle B_{i}=-{\frac {z_{i}q\kappa }{4\pi \varepsilon _{r}\varepsilon _{0}}}{\frac {1}{1+\kappa a_{i}}}.}
By the definition of electric potential energy, the potential energy associated with the singled-out ion in the ion atmosphere is: 230, 232
{\displaystyle u_{i}=z_{i}qB_{i}=-{\frac {z_{i}^{2}q^{2}\kappa }{4\pi \varepsilon _{r}\varepsilon _{0}}}{\frac {1}{1+\kappa a_{i}}}.}
Notice that this only requires knowledge of the charge of the singled out ion and the potential of all the other ions.
To calculate the potential energy of the entire electrolyte solution, one must use the multiple-charge generalization for electric potential energy: 230, 232
{\displaystyle U_{e}={\frac {1}{2}}\sum _{i=1}^{s}N_{i}u_{i}=-\sum _{i=1}^{s}{\frac {N_{i}z_{i}^{2}}{2}}{\frac {q^{2}\kappa }{4\pi \varepsilon _{r}\varepsilon _{0}}}{\frac {1}{1+\kappa a_{i}}}.}
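To make the last two expressions concrete, a minimal Python sketch that evaluates u_i and the total U_e for a single 1:1 salt; the screening length, ion size, and particle count are example inputs, not values from the article:
import math
q = 1.602176634e-19
k_B = 1.380649e-23
eps0 = 8.8541878128e-12
def ion_atmosphere_energy(z, kappa, a, eps_r=78.5):
    # u_i = z_i q B_i: energy of one ion in its atmosphere (negative, i.e. stabilizing)
    return -(z * q) ** 2 * kappa / (4 * math.pi * eps_r * eps0) / (1 + kappa * a)
kappa = 1.0 / 3.04e-9            # roughly the inverse Debye length of a 0.01 M 1:1 electrolyte
u = ion_atmosphere_energy(1, kappa, 3e-10)
print(u / (k_B * 298.15))        # roughly -0.2 k_B T per ion
N = 1e20                         # hypothetical number of ions of each sign
U_e = 0.5 * (N * u + N * u)      # U_e = (1/2) sum_i N_i u_i
print(U_e)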
=== The additional electric term to the thermodynamic potential ===
== Experimental verification of the theory ==
To verify the validity of the Debye–Hückel theory, many experimental approaches have been tried, all measuring activity coefficients; the difficulty is that measurements must be carried out at very high dilution.
Typical examples are measurements of vapour pressure, freezing point and osmotic pressure (indirect methods) and measurement of electric potential in cells (direct method).
Going towards high dilutions, good results have been obtained using liquid-membrane cells: it has been possible to investigate aqueous media down to 10−4 M, and it has been found that for 1:1 electrolytes (such as NaCl or KCl) the Debye–Hückel equation is fully correct, but that for 2:2 or 3:2 electrolytes negative deviations from the Debye–Hückel limiting law appear. This anomalous behavior is observed only in the very dilute region; at higher concentrations the deviation becomes positive.
It is possible that the Debye–Hückel equation fails to predict this behavior because of the linearization of the Poisson–Boltzmann equation, but this is not certain: systematic studies began only during the last years of the 20th century, because before then it was not possible to investigate the 10−4 M region, so new theories may yet emerge.
== See also ==
Electrolyte
Chemical activity
Ionic strength
Poisson-Boltzmann equation
Debye length
Bjerrum length
Bates-Guggenheim Convention
Ionic atmosphere
Electrical double layer
Ion association
Davies equation
Pitzer equation
Specific ion Interaction Theory
== References == | Wikipedia/Debye–Huckel_equation |
The pair distribution function describes the distribution of distances between pairs of particles contained within a given volume. Mathematically, if a and b are two particles, the pair distribution function of b with respect to a, denoted by {\displaystyle g_{ab}({\vec {r}})}, is the probability of finding the particle b at distance {\displaystyle {\vec {r}}} from a, with a taken as the origin of coordinates.
== Overview ==
The pair distribution function is used to describe the distribution of objects within a medium (for example, oranges in a crate or nitrogen molecules in a gas cylinder). If the medium is homogeneous (i.e. every spatial location has identical properties), then there is an equal probability density for finding an object at any position {\displaystyle {\vec {r}}}:
{\displaystyle p({\vec {r}})=1/V},
where {\displaystyle V} is the volume of the container. On the other hand, the likelihood of finding pairs of objects at given positions (i.e. the two-body probability density) is not uniform. For example, pairs of hard balls must be separated by at least the diameter of a ball. The pair distribution function {\displaystyle g({\vec {r}},{\vec {r}}')} is obtained by scaling the two-body probability density function by the total number of objects {\displaystyle N} and the size of the container:
{\displaystyle g({\vec {r}},{\vec {r}}')=p({\vec {r}},{\vec {r}}')V^{2}{\frac {N-1}{N}}}.
In the common case where the number of objects in the container is large, this simplifies to give:
{\displaystyle g({\vec {r}},{\vec {r}}')\approx p({\vec {r}},{\vec {r}}')V^{2}}.
== Simple models and general properties ==
The simplest possible pair distribution function assumes that all object locations are mutually independent, giving:
{\displaystyle g({\vec {r}})=1},
where {\displaystyle {\vec {r}}} is the separation between a pair of objects. However, this is inaccurate in the case of hard objects as discussed above, because it does not account for the minimum separation required between objects. The hole-correction (HC) approximation provides a better model:
{\displaystyle g(r)={\begin{cases}0,&r<b,\\1,&r\geq {}b\end{cases}},}
where {\displaystyle b} is the diameter of one of the objects.
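A minimal sketch of the hole-correction model just described; the diameter and test separations are arbitrary example values:
def pair_distribution_hc(r, b):
    # Hole-correction approximation: excluded below the hard-core diameter b, uniform above
    return 0.0 if r < b else 1.0
print([pair_distribution_hc(r, 1.0) for r in (0.5, 0.99, 1.0, 2.5)])  # [0.0, 0.0, 1.0, 1.0]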
Although the HC approximation gives a reasonable description of sparsely packed objects, it breaks down for dense packing. This may be illustrated by considering a box completely filled by identical hard balls so that each ball touches its neighbours. In this case, every pair of balls in the box is separated by a distance of exactly {\displaystyle r=nb} where {\displaystyle n} is a positive whole number. The pair distribution for a volume completely filled by hard spheres is therefore a set of Dirac delta functions of the form:
{\displaystyle g(r)=\sum \limits _{i}\delta (r-ib)}.
Finally, it may be noted that a pair of objects which are separated by a large distance have no influence on each other's position (provided that the container is not completely filled). Therefore,
{\displaystyle \lim \limits _{r\to \infty }g(r)=1}.
In general, a pair distribution function will take a form somewhere between the sparsely packed (HC approximation) and the densely packed (delta function) models, depending on the packing density {\displaystyle f}.
== Radial distribution function ==
Of special practical importance is the radial distribution function, which is independent of orientation. It is a major descriptor for the atomic structure of amorphous materials (glasses, polymers) and liquids. The radial distribution function can be calculated directly from physical measurements like light scattering or x-ray powder diffraction by performing a Fourier Transform.
In statistical mechanics the PDF is given by the expression
{\displaystyle g_{ab}(r)={\frac {1}{N_{a}N_{b}}}\sum \limits _{i=1}^{N_{a}}\sum \limits _{j=1}^{N_{b}}\langle \delta (\vert \mathbf {r} _{ij}\vert -r)\rangle }
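In practice the expression above is evaluated as a histogram over pair separations. A minimal Python/NumPy sketch of that estimator for a single species (so a = b), using hypothetical random positions and ignoring periodic boundaries and volume normalization refinements:
import numpy as np
def pair_separation_histogram(positions, r_max, n_bins=100):
    # Histogram of |r_ij| over all distinct pairs, the discrete analogue of the delta function above
    n = len(positions)
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    pair_dists = dists[np.triu_indices(n, k=1)]        # each pair counted once
    hist, edges = np.histogram(pair_dists, bins=n_bins, range=(0.0, r_max))
    return hist, 0.5 * (edges[:-1] + edges[1:])
rng = np.random.default_rng(0)
hist, r_centers = pair_separation_histogram(rng.random((200, 3)), r_max=1.0)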
== Applications ==
=== Thin Film Pair Distribution Function ===
When thin films are disordered, as they often are in electronic devices, pair distribution analysis is used to probe the strain and structure-property behavior of the material or composition, since such films have properties that cannot be exploited in the bulk or crystalline form. A radial-distribution method exists that can view the local structure of a disordered thin film of {\displaystyle {\ce {GeSe2}}}, but its creators noted the need for a better method to view the mid-range order of disordered films. The thin-film pair distribution function (tfPDF) uses a statistical distribution of a material's mid-range order that makes important details such as the disorder visible. In this technique, 2D data from a scattering measurement is integrated and Fourier transformed into 1D data that gives the probability of bond lengths in the material. TfPDF works best in conjunction with other characterization methods such as transmission electron microscopy. Although still a developing methodology, tfPDF can give complete structure-property relationships through a reliable characterization technique.
== See also ==
classical-map hypernetted-chain method
== References ==
Fischer-Colbrie, Bienenstock, Fuoss, Marcus. Phys. Rev. B (1988) 38, 12388
Jensen, K. M., Billinge, S. J. (2015). IUCrJ, 2(5), 481-489. | Wikipedia/Pair_distribution_function |
Coulomb's inverse-square law, or simply Coulomb's law, is an experimental law of physics that calculates the amount of force between two electrically charged particles at rest. This electric force is conventionally called the electrostatic force or Coulomb force. Although the law was known earlier, it was first published in 1785 by French physicist Charles-Augustin de Coulomb. Coulomb's law was essential to the development of the theory of electromagnetism and maybe even its starting point, as it allowed meaningful discussions of the amount of electric charge in a particle.
The law states that the magnitude, or absolute value, of the attractive or repulsive electrostatic force between two point charges is directly proportional to the product of the magnitudes of their charges and inversely proportional to the square of the distance between them. Coulomb discovered that bodies with like electrical charges repel:
It follows therefore from these three tests, that the repulsive force that the two balls – [that were] electrified with the same kind of electricity – exert on each other, follows the inverse proportion of the square of the distance.
Coulomb also showed that oppositely charged bodies attract according to an inverse-square law:
{\displaystyle |F|=k_{\text{e}}{\frac {|q_{1}||q_{2}|}{r^{2}}}}
Here, ke is a constant, q1 and q2 are the quantities of each charge, and the scalar r is the distance between the charges.
The force is along the straight line joining the two charges. If the charges have the same sign, the electrostatic force between them makes them repel; if they have different signs, the force between them makes them attract.
Being an inverse-square law, the law is similar to Isaac Newton's inverse-square law of universal gravitation, but gravitational forces always make things attract, while electrostatic forces make charges attract or repel. Also, gravitational forces are much weaker than electrostatic forces. Coulomb's law can be used to derive Gauss's law, and vice versa. In the case of a single point charge at rest, the two laws are equivalent, expressing the same physical law in different ways. The law has been tested extensively, and observations have upheld the law on the scale from 10−16 m to 108 m.
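As a simple illustration of the scalar form above (not part of the historical account that follows), a minimal Python sketch; the example charges and separation are arbitrary:
k_e = 8.9875517862e9  # N·m²/C², the Coulomb constant quoted later in this article
def coulomb_force_magnitude(q1, q2, r):
    # |F| = k_e |q1||q2| / r^2 for two point charges separated by r (SI units)
    return k_e * abs(q1) * abs(q2) / r**2
# Two charges of 1 microcoulomb each, 10 cm apart
print(coulomb_force_magnitude(1e-6, 1e-6, 0.10))  # about 0.9 N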
== History ==
Ancient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat's fur to attract light objects like feathers and pieces of paper. Thales of Miletus made the first recorded description of static electricity around 600 BC, when he noticed that friction could make a piece of amber attract small objects.
In 1600, English scientist William Gilbert made a careful study of electricity and magnetism, distinguishing the lodestone effect from static electricity produced by rubbing amber. He coined the Neo-Latin word electricus ("of amber" or "like amber", from ἤλεκτρον [elektron], the Greek word for "amber") to refer to the property of attracting small objects after being rubbed. This association gave rise to the English words "electric" and "electricity", which made their first appearance in print in Thomas Browne's Pseudodoxia Epidemica of 1646.
Early investigators of the 18th century who suspected that the electrical force diminished with distance as the force of gravity did (i.e., as the inverse square of the distance) included Daniel Bernoulli and Alessandro Volta, both of whom measured the force between plates of a capacitor, and Franz Aepinus who supposed the inverse-square law in 1758.
Based on experiments with electrically charged spheres, Joseph Priestley of England was among the first to propose that electrical force followed an inverse-square law, similar to Newton's law of universal gravitation. However, he did not generalize or elaborate on this. In 1767, he conjectured that the force between charges varied as the inverse square of the distance.
In 1769, Scottish physicist John Robison announced that, according to his measurements, the force of repulsion between two spheres with charges of the same sign varied as x−2.06.
In the early 1770s, the dependence of the force between charged bodies upon both distance and charge had already been discovered, but not published, by Henry Cavendish of England. In his notes, Cavendish wrote, "We may therefore conclude that the electric attraction and repulsion must be inversely as some power of the distance between that of the 2 + 1/50th and that of the 2 − 1/50th, and there is no reason to think that it differs at all from the inverse duplicate ratio".
Finally, in 1785, the French physicist Charles-Augustin de Coulomb published his first three reports of electricity and magnetism where he stated his law. This publication was essential to the development of the theory of electromagnetism. He used a torsion balance to study the repulsion and attraction forces of charged particles, and determined that the magnitude of the electric force between two point charges is directly proportional to the product of the charges and inversely proportional to the square of the distance between them.
The torsion balance consists of a bar suspended from its middle by a thin fiber. The fiber acts as a very weak torsion spring. In Coulomb's experiment, the torsion balance was an insulating rod with a metal-coated ball attached to one end, suspended by a silk thread. The ball was charged with a known charge of static electricity, and a second charged ball of the same polarity was brought near it. The two charged balls repelled one another, twisting the fiber through a certain angle, which could be read from a scale on the instrument. By knowing how much force it took to twist the fiber through a given angle, Coulomb was able to calculate the force between the balls and derive his inverse-square proportionality law.
== Mathematical form ==
Coulomb's law states that the electrostatic force {\displaystyle \mathbf {F} _{1}} experienced by a charge, {\displaystyle q_{1}} at position {\displaystyle \mathbf {r} _{1}}, in the vicinity of another charge, {\displaystyle q_{2}} at position {\displaystyle \mathbf {r} _{2}}, in a vacuum is equal to
{\displaystyle \mathbf {F} _{1}={\frac {q_{1}q_{2}}{4\pi \varepsilon _{0}}}{{\hat {\mathbf {r} }}_{12} \over {|\mathbf {r} _{12}|}^{2}}}
where {\displaystyle \mathbf {r_{12}=r_{1}-r_{2}} } is the displacement vector between the charges, {\displaystyle {\hat {\mathbf {r} }}_{12}} a unit vector pointing from {\displaystyle q_{2}} to {\displaystyle q_{1}}, and {\displaystyle \varepsilon _{0}} the electric constant. Here, {\displaystyle \mathbf {\hat {r}} _{12}} is used for the vector notation. The electrostatic force {\displaystyle \mathbf {F} _{2}} experienced by {\displaystyle q_{2}}, according to Newton's third law, is {\displaystyle \mathbf {F} _{2}=-\mathbf {F} _{1}}.
If both charges have the same sign (like charges) then the product {\displaystyle q_{1}q_{2}} is positive and the direction of the force on {\displaystyle q_{1}} is given by {\displaystyle {\widehat {\mathbf {r} }}_{12}}; the charges repel each other. If the charges have opposite signs then the product {\displaystyle q_{1}q_{2}} is negative and the direction of the force on {\displaystyle q_{1}} is {\displaystyle -{\hat {\mathbf {r} }}_{12}}; the charges attract each other.
=== System of discrete charges ===
The law of superposition allows Coulomb's law to be extended to include any number of point charges. The force acting on a point charge due to a system of point charges is simply the vector addition of the individual forces acting alone on that point charge due to each one of the charges. The resulting force vector is parallel to the electric field vector at that point, with that point charge removed.
Force {\displaystyle \mathbf {F} } on a small charge {\displaystyle q} at position {\displaystyle \mathbf {r} }, due to a system of {\displaystyle n} discrete charges in vacuum is
{\displaystyle \mathbf {F} (\mathbf {r} )={q \over 4\pi \varepsilon _{0}}\sum _{i=1}^{n}q_{i}{{\hat {\mathbf {r} }}_{i} \over {|\mathbf {r} _{i}|}^{2}},}
where {\displaystyle q_{i}} is the magnitude of the ith charge, {\displaystyle \mathbf {r} _{i}} is the vector from its position to {\displaystyle \mathbf {r} } and {\displaystyle {\hat {\mathbf {r} }}_{i}} is the unit vector in the direction of {\displaystyle \mathbf {r} _{i}}.
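A minimal NumPy sketch of the superposition sum above; the charge values and positions are arbitrary examples:
import numpy as np
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
def net_coulomb_force(q, r, charges, positions):
    # Vector force on charge q at r due to discrete source charges (SI units)
    r = np.asarray(r, dtype=float)
    total = np.zeros(3)
    for q_i, pos_i in zip(charges, np.asarray(positions, dtype=float)):
        r_i = r - pos_i                       # vector from source charge to field point
        dist = np.linalg.norm(r_i)
        total += q_i * r_i / dist**3          # q_i * r_hat_i / |r_i|^2
    return q * total / (4 * np.pi * EPS0)
# Test charge of +1 nC at the origin, a +1 nC source at x = +0.1 m and a -1 nC source at x = -0.1 m
F = net_coulomb_force(1e-9, [0, 0, 0], [1e-9, -1e-9], [[0.1, 0, 0], [-0.1, 0, 0]])
print(F)  # net force along -x, magnitude about 1.8 micronewtons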
=== Continuous charge distribution ===
In this case, the principle of linear superposition is also used. For a continuous charge distribution, an integral over the region containing the charge is equivalent to an infinite summation, treating each infinitesimal element of space as a point charge {\displaystyle dq}. The distribution of charge is usually linear, surface or volumetric.
For a linear charge distribution (a good approximation for charge in a wire) where {\displaystyle \lambda (\mathbf {r} ')} gives the charge per unit length at position {\displaystyle \mathbf {r} '}, and {\displaystyle d\ell '} is an infinitesimal element of length,
{\displaystyle dq'=\lambda (\mathbf {r'} )\,d\ell '.}
For a surface charge distribution (a good approximation for charge on a plate in a parallel plate capacitor) where {\displaystyle \sigma (\mathbf {r} ')} gives the charge per unit area at position {\displaystyle \mathbf {r} '}, and {\displaystyle dA'} is an infinitesimal element of area,
{\displaystyle dq'=\sigma (\mathbf {r'} )\,dA'.}
For a volume charge distribution (such as charge within a bulk metal) where {\displaystyle \rho (\mathbf {r} ')} gives the charge per unit volume at position {\displaystyle \mathbf {r} '}, and {\displaystyle dV'} is an infinitesimal element of volume,
{\displaystyle dq'=\rho ({\boldsymbol {r'}})\,dV'.}
The force on a small test charge {\displaystyle q} at position {\displaystyle {\boldsymbol {r}}} in vacuum is given by the integral over the distribution of charge
{\displaystyle \mathbf {F} (\mathbf {r} )={\frac {q}{4\pi \varepsilon _{0}}}\int dq'{\frac {\mathbf {r} -\mathbf {r'} }{|\mathbf {r} -\mathbf {r'} |^{3}}}.}
The "continuous charge" version of Coulomb's law is never supposed to be applied to locations for which
|
r
−
r
′
|
=
0
{\displaystyle |\mathbf {r} -\mathbf {r'} |=0}
because that location would directly overlap with the location of a charged particle (e.g. electron or proton) which is not a valid location to analyze the electric field or potential classically. Charge is always discrete in reality, and the "continuous charge" assumption is just an approximation that is not supposed to allow
|
r
−
r
′
|
=
0
{\displaystyle |\mathbf {r} -\mathbf {r'} |=0}
to be analyzed.
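A minimal numerical sketch of the integral form, discretizing a finite line charge into point-like elements dq' = λ dℓ'; the line length, charge density, and test-charge position are arbitrary example values:
import numpy as np
EPS0 = 8.8541878128e-12
def force_from_line_charge(q_test, r_test, lam, length, n_elements=2000):
    # Approximates F(r) = q/(4*pi*eps0) * integral dq' (r - r')/|r - r'|^3 for a uniform line on the x axis
    x_prime = np.linspace(-length / 2, length / 2, n_elements)
    dl = length / n_elements
    r_test = np.asarray(r_test, dtype=float)
    total = np.zeros(3)
    for x in x_prime:
        sep = r_test - np.array([x, 0.0, 0.0])
        total += lam * dl * sep / np.linalg.norm(sep) ** 3
    return q_test * total / (4 * np.pi * EPS0)
# 1 nC test charge, 5 cm above the middle of a 1 m line carrying 10 nC/m
print(force_from_line_charge(1e-9, [0.0, 0.05, 0.0], 10e-9, 1.0))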
== Coulomb constant ==
The constant of proportionality, {\displaystyle {\frac {1}{4\pi \varepsilon _{0}}}}, in Coulomb's law:
{\displaystyle \mathbf {F} _{1}={\frac {q_{1}q_{2}}{4\pi \varepsilon _{0}}}{{\hat {\mathbf {r} }}_{12} \over {|\mathbf {r} _{12}|}^{2}}}
is a consequence of historical choices for units.: 4–2
The constant {\displaystyle \varepsilon _{0}} is the vacuum electric permittivity. Using the CODATA 2022 recommended value for {\displaystyle \varepsilon _{0}}, the Coulomb constant is
{\displaystyle k_{\text{e}}={\frac {1}{4\pi \varepsilon _{0}}}=8.987\ 551\ 7862(14)\times 10^{9}\ \mathrm {N{\cdot }m^{2}{\cdot }C^{-2}} }
== Limitations ==
There are three conditions to be fulfilled for the validity of Coulomb's inverse square law:
The charges must have a spherically symmetric distribution (e.g. be point charges, or a charged metal sphere).
The charges must not overlap (e.g. they must be distinct point charges).
The charges must be stationary with respect to a nonaccelerating frame of reference.
The last of these is known as the electrostatic approximation. When movement takes place, an extra factor is introduced, which alters the force produced on the two objects. This extra part of the force is called the magnetic force. For slow movement, the magnetic force is minimal and Coulomb's law can still be considered approximately correct. A more accurate approximation in this case is, however, the Weber force. When the charges are moving more quickly in relation to each other or accelerations occur, Maxwell's equations and Einstein's theory of relativity must be taken into consideration.
== Electric field ==
An electric field is a vector field that associates to each point in space the Coulomb force experienced by a unit test charge. The strength and direction of the Coulomb force {\displaystyle \mathbf {F} } on a charge {\displaystyle q_{t}} depends on the electric field {\displaystyle \mathbf {E} } established by other charges that it finds itself in, such that {\displaystyle \mathbf {F} =q_{t}\mathbf {E} }. In the simplest case, the field is considered to be generated solely by a single source point charge. More generally, the field can be generated by a distribution of charges that contribute to the overall field by the principle of superposition.
If the field is generated by a positive source point charge {\displaystyle q}, the direction of the electric field points along lines directed radially outwards from it, i.e. in the direction that a positive point test charge {\displaystyle q_{t}} would move if placed in the field. For a negative point source charge, the direction is radially inwards.
The magnitude of the electric field E can be derived from Coulomb's law. By choosing one of the point charges to be the source, and the other to be the test charge, it follows from Coulomb's law that the magnitude of the electric field E created by a single source point charge Q at a certain distance from it r in vacuum is given by
{\displaystyle |\mathbf {E} |=k_{\text{e}}{\frac {|q|}{r^{2}}}}
A system of n discrete charges {\displaystyle q_{i}}, where {\displaystyle \mathbf {r} _{i}} denotes the vector from the position of the ith charge to the field point {\displaystyle \mathbf {r} }, produces an electric field whose magnitude and direction is, by superposition,
{\displaystyle \mathbf {E} (\mathbf {r} )={1 \over 4\pi \varepsilon _{0}}\sum _{i=1}^{n}q_{i}{{\hat {\mathbf {r} }}_{i} \over {|\mathbf {r} _{i}|}^{2}}}
== Atomic forces ==
Coulomb's law holds even within atoms, correctly describing the force between the positively charged atomic nucleus and each of the negatively charged electrons. This simple law also correctly accounts for the forces that bind atoms together to form molecules and for the forces that bind atoms and molecules together to form solids and liquids. Generally, as the distance between ions increases, the force of attraction, and binding energy, approach zero and ionic bonding is less favorable. As the magnitude of opposing charges increases, energy increases and ionic bonding is more favorable.
== Relation to Gauss's law ==
=== Deriving Gauss's law from Coulomb's law ===
=== Deriving Coulomb's law from Gauss's law ===
Strictly speaking, Coulomb's law cannot be derived from Gauss's law alone, since Gauss's law does not give any information regarding the curl of E (see Helmholtz decomposition and Faraday's law). However, Coulomb's law can be proven from Gauss's law if it is assumed, in addition, that the electric field from a point charge is spherically symmetric (this assumption, like Coulomb's law itself, is exactly true if the charge is stationary, and approximately true if the charge is in motion).
== In relativity ==
Coulomb's law can be used to gain insight into the form of the magnetic field generated by moving charges since by special relativity, in certain cases the magnetic field can be shown to be a transformation of forces caused by the electric field. When no acceleration is involved in a particle's history, Coulomb's law can be assumed on any test particle in its own inertial frame, supported by symmetry arguments in solving Maxwell's equation, shown above. Coulomb's law can be expanded to moving test particles to be of the same form. This assumption is supported by Lorentz force law which, unlike Coulomb's law is not limited to stationary test charges. Considering the charge to be invariant of observer, the electric and magnetic fields of a uniformly moving point charge can hence be derived by the Lorentz transformation of the four force on the test charge in the charge's frame of reference given by Coulomb's law and attributing magnetic and electric fields by their definitions given by the form of Lorentz force. The fields hence found for uniformly moving point charges are given by:
{\displaystyle \mathbf {E} ={\frac {q}{4\pi \epsilon _{0}r^{3}}}{\frac {1-\beta ^{2}}{(1-\beta ^{2}\sin ^{2}\theta )^{3/2}}}\mathbf {r} }
{\displaystyle \mathbf {B} ={\frac {q}{4\pi \epsilon _{0}r^{3}}}{\frac {1-\beta ^{2}}{(1-\beta ^{2}\sin ^{2}\theta )^{3/2}}}{\frac {\mathbf {v} \times \mathbf {r} }{c^{2}}}={\frac {\mathbf {v} \times \mathbf {E} }{c^{2}}}}
where {\displaystyle q} is the charge of the point source, {\displaystyle \mathbf {r} } is the position vector from the point source to the point in space, {\displaystyle \mathbf {v} } is the velocity vector of the charged particle, {\displaystyle \beta } is the ratio of the speed of the charged particle to the speed of light, and {\displaystyle \theta } is the angle between {\displaystyle \mathbf {r} } and {\displaystyle \mathbf {v} }.
This form of solutions need not obey Newton's third law, as is the case in the framework of special relativity (yet without violating relativistic energy-momentum conservation). Note that the expression for the electric field reduces to Coulomb's law for non-relativistic speeds of the point charge and that the magnetic field in the non-relativistic limit (approximating {\displaystyle \beta \ll 1}) can be applied to electric currents to get the Biot–Savart law. These solutions, when expressed in retarded time, also correspond to the general solution of Maxwell's equations given by solutions of the Liénard–Wiechert potential, due to the validity of Coulomb's law within its specific range of application. Also note that the spherical symmetry used with Gauss's law for stationary charges is not valid for moving charges, owing to the breaking of symmetry by the specification of the direction of velocity in the problem. Agreement with Maxwell's equations can also be manually verified for the above two equations.
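A minimal sketch that evaluates the two field expressions above for a uniformly moving point charge; the charge, speed, and field point are arbitrary example inputs (and the velocity is assumed nonzero):
import numpy as np
EPS0 = 8.8541878128e-12
C = 299792458.0
def fields_of_uniformly_moving_charge(q, r_vec, v_vec):
    # E and B of a uniformly moving point charge, in the present-position form given above
    r_vec = np.asarray(r_vec, dtype=float)
    v_vec = np.asarray(v_vec, dtype=float)
    r = np.linalg.norm(r_vec)
    beta = np.linalg.norm(v_vec) / C
    # sin^2(theta) between r and v via the cross product (requires v != 0)
    sin2 = np.linalg.norm(np.cross(v_vec, r_vec)) ** 2 / (np.linalg.norm(v_vec) ** 2 * r ** 2)
    prefactor = q / (4 * np.pi * EPS0 * r ** 3) * (1 - beta ** 2) / (1 - beta ** 2 * sin2) ** 1.5
    E = prefactor * r_vec
    B = np.cross(v_vec, E) / C ** 2
    return E, B
E, B = fields_of_uniformly_moving_charge(1e-9, [0.0, 0.1, 0.0], [0.5 * C, 0.0, 0.0])
print(E, B)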
== Coulomb potential ==
=== Quantum field theory ===
The Coulomb potential admits continuum states (with E > 0), describing electron-proton scattering, as well as discrete bound states, representing the hydrogen atom. It can also be derived within the non-relativistic limit between two charged particles, as follows:
Under Born approximation, in non-relativistic quantum mechanics, the scattering amplitude {\displaystyle {\mathcal {A}}(|\mathbf {p} \rangle \to |\mathbf {p} '\rangle )} is:
{\displaystyle {\mathcal {A}}(|\mathbf {p} \rangle \to |\mathbf {p} '\rangle )-1=2\pi \delta (E_{p}-E_{p'})(-i)\int d^{3}\mathbf {r} \,V(\mathbf {r} )e^{-i(\mathbf {p} -\mathbf {p} ')\mathbf {r} }}
This is to be compared to the following:
{\displaystyle \int {\frac {d^{3}k}{(2\pi )^{3}}}e^{ikr_{0}}\langle p',k|S|p,k\rangle }
where we look at the (connected) S-matrix entry for two electrons scattering off each other, treating one with "fixed" momentum as the source of the potential, and the other scattering off that potential.
Using the Feynman rules to compute the S-matrix element, we obtain in the non-relativistic limit with {\displaystyle m_{0}\gg |\mathbf {p} |}
{\displaystyle \langle p',k|S|p,k\rangle |_{conn}=-i{\frac {e^{2}}{|\mathbf {p} -\mathbf {p} '|^{2}-i\varepsilon }}(2m)^{2}\delta (E_{p,k}-E_{p',k})(2\pi )^{4}\delta (\mathbf {p} -\mathbf {p} ')}
Comparing with the QM scattering, we have to discard the {\displaystyle (2m)^{2}} as they arise due to differing normalizations of momentum eigenstates in QFT compared to QM, and obtain:
{\displaystyle \int V(\mathbf {r} )e^{-i(\mathbf {p} -\mathbf {p} ')\mathbf {r} }d^{3}\mathbf {r} ={\frac {e^{2}}{|\mathbf {p} -\mathbf {p} '|^{2}-i\varepsilon }}}
where Fourier transforming both sides, solving the integral and taking {\displaystyle \varepsilon \to 0} at the end will yield
{\displaystyle V(r)={\frac {e^{2}}{4\pi r}}}
as the Coulomb potential.
However, the equivalent results of the classical Born derivations for the Coulomb problem are thought to be strictly accidental.
The Coulomb potential, and its derivation, can be seen as a special case of the Yukawa potential, which is the case where the exchanged boson – the photon – has no rest mass.
== Verification ==
It is possible to verify Coulomb's law with a simple experiment. Consider two small spheres of mass {\displaystyle m} and same-sign charge {\displaystyle q}, hanging from two ropes of negligible mass of length {\displaystyle l}. The forces acting on each sphere are three: the weight {\displaystyle mg}, the rope tension {\displaystyle \mathbf {T} } and the electric force {\displaystyle \mathbf {F} }. In the equilibrium state, with {\displaystyle \theta _{1}} the angle each rope makes with the vertical:
{\displaystyle T\sin \theta _{1}=F_{1}\quad (1)}
and
{\displaystyle T\cos \theta _{1}=mg\quad (2)}
Dividing (1) by (2):
{\displaystyle \tan \theta _{1}={\frac {F_{1}}{mg}}\quad (3)}
Let {\displaystyle \mathbf {L} _{1}} be the distance between the charged spheres; the repulsion force between them {\displaystyle \mathbf {F} _{1}}, assuming Coulomb's law is correct, is equal to
{\displaystyle F_{1}={\frac {q^{2}}{4\pi \varepsilon _{0}L_{1}^{2}}}}
so:
{\displaystyle {\frac {q^{2}}{4\pi \varepsilon _{0}L_{1}^{2}}}=mg\tan \theta _{1}\quad (4)}
If we now discharge one of the spheres, and we put it in contact with the charged sphere, each one of them acquires a charge {\displaystyle {\frac {q}{2}}}. In the equilibrium state, the distance between the charges will be {\displaystyle \mathbf {L} _{2}<\mathbf {L} _{1}} and the repulsion force between them will be:
{\displaystyle F_{2}={\frac {\frac {q^{2}}{4}}{4\pi \varepsilon _{0}L_{2}^{2}}}\quad (5)}
We know that {\displaystyle \mathbf {F} _{2}=mg\tan \theta _{2}} and:
{\displaystyle {\frac {\frac {q^{2}}{4}}{4\pi \varepsilon _{0}L_{2}^{2}}}=mg\tan \theta _{2}}
Dividing (4) by (5), we get:
{\displaystyle {\frac {4L_{2}^{2}}{L_{1}^{2}}}={\frac {\tan \theta _{1}}{\tan \theta _{2}}}\quad (6)}
Measuring the angles {\displaystyle \theta _{1}} and {\displaystyle \theta _{2}} and the distance between the charges {\displaystyle \mathbf {L} _{1}} and {\displaystyle \mathbf {L} _{2}} is sufficient to verify that the equality is true, taking into account the experimental error. In practice, angles can be difficult to measure, so if the length of the ropes is sufficiently great, the angles will be small enough to make the following approximation:
{\displaystyle \tan \theta \approx \sin \theta ={\frac {L/2}{l}},\qquad {\frac {\tan \theta _{1}}{\tan \theta _{2}}}\approx {\frac {L_{1}}{L_{2}}}}
Using this approximation, the relationship (6) becomes the much simpler expression:
{\displaystyle {\frac {4L_{2}^{2}}{L_{1}^{2}}}\approx {\frac {L_{1}}{L_{2}}}\quad \Rightarrow \quad {\frac {L_{1}^{3}}{L_{2}^{3}}}\approx 4}
In this way, the verification is limited to measuring the distance between the charges and checking that the division approximates the theoretical value.
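A minimal numeric check of the small-angle relation derived above; the sphere parameters are hypothetical, and the script simply confirms that halving the charge moves the equilibrium separation so that (L1/L2)^3 is approximately 4:
import math
EPS0 = 8.8541878128e-12
g = 9.81
def equilibrium_separation(q, m, l):
    # Small-angle equilibrium: q^2/(4*pi*eps0*L^2) = m*g*(L/2)/l  =>  L^3 = q^2 * l / (2*pi*eps0*m*g)
    return (q**2 * l / (2 * math.pi * EPS0 * m * g)) ** (1.0 / 3.0)
q, m, l = 50e-9, 1e-3, 1.0        # hypothetical: 50 nC spheres, 1 g mass, 1 m ropes
L1 = equilibrium_separation(q, m, l)
L2 = equilibrium_separation(q / 2, m, l)
print(L1 / L2, 4 ** (1.0 / 3.0))  # both about 1.587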
== See also ==
== References ==
Spavieri, G., Gillies, G. T., & Rodriguez, M. (2004). Physical implications of Coulomb’s Law. Metrologia, 41(5), S159–S170. doi:10.1088/0026-1394/41/5/s06
== Related reading ==
Coulomb, Charles Augustin (1788) [1785]. "Premier mémoire sur l'électricité et le magnétisme". Histoire de l'Académie Royale des Sciences. Imprimerie Royale. pp. 569–577.
Coulomb, Charles Augustin (1788) [1785]. "Second mémoire sur l'électricité et le magnétisme". Histoire de l'Académie Royale des Sciences. Imprimerie Royale. pp. 578–611.
Coulomb, Charles Augustin (1788) [1785]. "Troisième mémoire sur l'électricité et le magnétisme". Histoire de l'Académie Royale des Sciences. Imprimerie Royale. pp. 612–638.
Griffiths, David J. (1999). Introduction to Electrodynamics (3rd ed.). Prentice Hall. ISBN 978-0-13-805326-0.
Tamm, Igor E. (1979) [1976]. Fundamentals of the Theory of Electricity (9th ed.). Moscow: Mir. pp. 23–27.
Tipler, Paul A.; Mosca, Gene (2008). Physics for Scientists and Engineers (6th ed.). New York: W. H. Freeman and Company. ISBN 978-0-7167-8964-2. LCCN 2007010418.
Young, Hugh D.; Freedman, Roger A. (2010). Sears and Zemansky's University Physics: With Modern Physics (13th ed.). Addison-Wesley (Pearson). ISBN 978-0-321-69686-1.
== External links ==
Coulomb's Law on Project PHYSNET
Electricity and the Atom Archived 2009-02-21 at the Wayback Machine—a chapter from an online textbook
A maze game for teaching Coulomb's law—a game created by the Molecular Workbench software
Electric Charges, Polarization, Electric Force, Coulomb's Law Walter Lewin, 8.02 Electricity and Magnetism, Spring 2002: Lecture 1 (video). MIT OpenCourseWare. License: Creative Commons Attribution-Noncommercial-Share Alike. | Wikipedia/Coulombic_force |
The classical-map hypernetted-chain method (CHNC method) is a method used in many-body theoretical physics for interacting uniform electron liquids in two and three dimensions, and for non-ideal plasmas. The method extends the famous hypernetted-chain method (HNC) introduced by J.M.J. van Leeuwen et al. to quantum fluids as well. The classical HNC, together with the Percus–Yevick approximation, are the two pillars which bear the brunt of most calculations in the theory of interacting classical fluids. Also, HNC and PY have become important in providing basic reference schemes in the theory of fluids, and hence they are of great importance to the physics of many-particle systems.
The HNC and PY integral equations provide the pair distribution functions of the particles in a classical fluid, even for very high coupling strengths. The coupling strength is measured by the ratio of the potential energy to the kinetic energy. In a classical fluid, the kinetic energy is proportional to the temperature. In a quantum fluid, the situation is very complicated as one needs to deal with quantum operators, and matrix elements of such operators, which appear in various perturbation methods based on Feynman diagrams. The CHNC method provides an approximate "escape" from these difficulties, and applies to regimes beyond perturbation theory. In Robert B. Laughlin's famous Nobel Laureate work on the fractional quantum Hall effect, an HNC equation was used within a classical plasma analogy.
In the CHNC method, the pair-distributions of the interacting particles are calculated using a mapping which ensures that the quantum mechanically correct non-interacting pair distribution function is recovered when the Coulomb interactions are switched off. The value of the method lies in its ability to calculate the interacting pair distribution functions g(r) at zero and finite temperatures. Comparison of the calculated g(r) with results from Quantum Monte Carlo show remarkable agreement, even for very strongly correlated systems.
The interacting pair-distribution functions obtained from CHNC have been used to calculate the exchange-correlation energies, Landau parameters of Fermi liquids and other quantities of interest in many-body physics and density functional theory, as well as in the theory of hot plasmas.
== See also ==
Fermi liquid
Many-body theory
Quantum fluid
Radial distribution function
== References ==
== Further reading ==
C. Bulutay; B. Tanatar (2002). "Spin-dependent analysis of two-dimensional electron liquids" (PDF). Physical Review B. 65 (19): 195116. Bibcode:2002PhRvB..65s5116B. doi:10.1103/PhysRevB.65.195116. hdl:11693/24708.
M.W.C. Dharma-wardana; F. Perrot (2002). "Equation of state and the Hugoniot of laser shock-compressed deuterium: Demonstration of a basis-function-free method for quantum calculations". Physical Review B. 66 (1): 014110. arXiv:cond-mat/0112324. Bibcode:2002PhRvB..66a4110D. doi:10.1103/PhysRevB.66.014110.
N.Q. Khanh; H. Totsuji (2004). "Electron correlation in two-dimensional systems: CHNC approach to finite-temperature and spin-polarization effects". Solid State Communications. 129 (1): 37–42. Bibcode:2004SSCom.129...37K. doi:10.1016/j.ssc.2003.09.010.
M.W.C. Dharma-wardana (2005). "Spin and temperature dependent study of exchange and correlation in thick two-dimensional electron layers". Physical Review B. 72 (12): 125339. arXiv:cond-mat/0506804. Bibcode:2005PhRvB..72l5339D. doi:10.1103/PhysRevB.72.125339. | Wikipedia/Classical-map_hypernetted-chain_method |
The projector augmented wave method (PAW) is a technique used in ab initio electronic structure calculations. It is a generalization of the pseudopotential and linear augmented-plane-wave methods, and allows for density functional theory calculations to be performed with greater computational efficiency.
Valence wavefunctions tend to have rapid oscillations near ion cores due to the requirement that they be orthogonal to core states; this situation is problematic because it requires many Fourier components (or in the case of grid-based methods, a very fine mesh) to describe the wavefunctions accurately. The PAW approach addresses this issue by transforming these rapidly oscillating wavefunctions into smooth wavefunctions which are more computationally convenient, and provides a way to calculate all-electron properties from these smooth wavefunctions. This approach is somewhat reminiscent of a change from the Schrödinger picture to the Heisenberg picture.
== Transforming the wavefunction ==
The linear transformation ${\mathcal {T}}$ transforms the fictitious pseudo wavefunction $|{\tilde {\Psi }}\rangle$ to the all-electron wavefunction $|\Psi \rangle$:
$|\Psi \rangle ={\mathcal {T}}|{\tilde {\Psi }}\rangle$
Note that the "all-electron" wavefunction is a Kohn–Sham single particle wavefunction, and should not be confused with the many-body wavefunction. In order to have $|{\tilde {\Psi }}\rangle$ and $|\Psi \rangle$ differ only in the regions near the ion cores, we write
${\mathcal {T}}=1+\sum _{R}{\hat {\mathcal {T}}}_{R},$
where ${\hat {\mathcal {T}}}_{R}$ is non-zero only within some spherical augmentation region $\Omega _{R}$ enclosing atom $R$.
Around each atom, it is useful to expand the pseudo wavefunction into pseudo partial waves:
$|{\tilde {\Psi }}\rangle =\sum _{i}|{\tilde {\phi }}_{i}\rangle c_{i}$
within $\Omega _{R}$.
Because the operator ${\mathcal {T}}$ is linear, the coefficients $c_{i}$ can be written as an inner product with a set of so-called projector functions, $|p_{i}\rangle$:
$c_{i}=\langle p_{i}|{\tilde {\Psi }}\rangle ,$
where $\langle p_{i}|{\tilde {\phi }}_{j}\rangle =\delta _{ij}$. The all-electron partial waves, $|\phi _{i}\rangle ={\mathcal {T}}|{\tilde {\phi }}_{i}\rangle$, are typically chosen to be solutions to the Kohn–Sham Schrödinger equation for an isolated atom. The transformation ${\mathcal {T}}$ is thus specified by three quantities:
a set of all-electron partial waves $|\phi _{i}\rangle$
a set of pseudo partial waves $|{\tilde {\phi }}_{i}\rangle$
a set of projector functions $|p_{i}\rangle$
and we can explicitly write it down as
${\mathcal {T}}=1+\sum _{i}\left(|\phi _{i}\rangle -|{\tilde {\phi }}_{i}\rangle \right)\langle p_{i}|$
Outside the augmentation regions, the pseudo partial waves are equal to the all-electron partial waves. Inside the spheres, they can be any smooth continuation, such as a linear combination of polynomials or Bessel functions.
The PAW method is typically combined with the frozen core approximation, in which the core states are assumed to be unaffected by the ion's environment. There are several online repositories of pre-computed atomic PAW data.
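To make the bookkeeping concrete, the following is a minimal numerical sketch of a one-channel reconstruction $|\Psi \rangle =|{\tilde {\Psi }}\rangle +(|\phi \rangle -|{\tilde {\phi }}\rangle )\langle p|{\tilde {\Psi }}\rangle$. All functions (pseudo wavefunction, partial waves, projector) are hypothetical Gaussian-type model functions on a one-dimensional grid, chosen purely for illustration and not taken from any real PAW dataset.

import numpy as np

# 1D grid standing in for a radial coordinate (illustration only)
r = np.linspace(1e-3, 10.0, 2000)
r_c = 2.0                                   # augmentation-sphere radius

# Hypothetical smooth pseudo wavefunction
psi_ps = np.exp(-0.5 * r**2)

# Hypothetical partial waves: the all-electron one oscillates near the core,
# the pseudo one is its smooth counterpart; they (nearly) coincide outside r_c
phi_ae = np.exp(-0.5 * r**2) * (1.0 + 0.5 * np.sin(6.0 * r) * np.exp(-((r / r_c) ** 4)))
phi_ps = np.exp(-0.5 * r**2)

# Hypothetical projector localized inside the sphere, normalized so that
# <p|phi_ps> = 1 (the duality condition for this single channel)
p = np.exp(-((r / r_c) ** 4))
p /= np.trapz(p * phi_ps, r)

# Expansion coefficient c = <p|Psi~> and the PAW reconstruction
c = np.trapz(p * psi_ps, r)
psi_ae = psi_ps + (phi_ae - phi_ps) * c

print("c = <p|psi~> =", c)
print("max |psi_ae - psi_ps| beyond r_c:", np.abs(psi_ae - psi_ps)[r > r_c].max())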
== Transforming operators ==
The PAW transformation allows all-electron observables to be calculated using the pseudo-wavefunction from a pseudopotential calculation, conveniently avoiding having to ever represent the all-electron wavefunction explicitly in memory. This is particularly important for the calculation of properties such as NMR, which strongly depend on the form of the wavefunction near the nucleus. Starting with the definition of the expectation value of an operator:
$a=\langle \Psi |{\hat {A}}|\Psi \rangle ,$
into which the pseudo wavefunction can be substituted, since $|\Psi \rangle ={\mathcal {T}}|{\tilde {\Psi }}\rangle$:
$a=\langle {\tilde {\Psi }}|{\mathcal {T}}^{\dagger }{\hat {A}}{\mathcal {T}}|{\tilde {\Psi }}\rangle ,$
from which one can define the pseudo operator, indicated by a tilde:
${\tilde {A}}={\mathcal {T}}^{\dagger }{\hat {A}}{\mathcal {T}}.$
If the operator ${\hat {A}}$ is local and well-behaved we can expand this using the definition of ${\mathcal {T}}$ to give the PAW operator transform
${\tilde {A}}={\hat {A}}+\sum _{i,j}|p_{i}\rangle \left(\langle \phi _{i}|{\hat {A}}|\phi _{j}\rangle -\langle {\tilde {\phi }}_{i}|{\hat {A}}|{\tilde {\phi }}_{j}\rangle \right)\langle p_{j}|,$
where the indices $i,j$ run over all projectors on all atoms. Usually only indices on the same atom are summed over, i.e. off-site contributions are ignored, and this is called the "on-site approximation".
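The bookkeeping of this transform can be illustrated with a small linear-algebra sketch. All matrices below are random placeholders for the projector channels of a single atom (not data from any real PAW implementation); the point is only the structure $\langle {\tilde {\Psi }}|{\hat {A}}|{\tilde {\Psi }}\rangle$ plus the projector-weighted one-centre corrections of the on-site approximation.

import numpy as np

rng = np.random.default_rng(0)
nb, nproj = 50, 4     # size of a toy pseudo basis, number of projectors on one atom

# Hypothetical ingredients (random, for illustration only)
A_ps = rng.standard_normal((nb, nb)); A_ps = 0.5 * (A_ps + A_ps.T)        # <b|A|b'> in the pseudo basis
psi_ps = rng.standard_normal(nb); psi_ps /= np.linalg.norm(psi_ps)        # pseudo wavefunction coefficients
P = rng.standard_normal((nproj, nb))                                      # <p_i|b> projector overlaps
A_ae1c = rng.standard_normal((nproj, nproj)); A_ae1c = 0.5 * (A_ae1c + A_ae1c.T)  # <phi_i|A|phi_j>
A_ps1c = rng.standard_normal((nproj, nproj)); A_ps1c = 0.5 * (A_ps1c + A_ps1c.T)  # <phi~_i|A|phi~_j>

# Projections c_i = <p_i|Psi~>
c = P @ psi_ps

# On-site PAW expectation value:
# a = <Psi~|A|Psi~> + sum_ij c_i (<phi_i|A|phi_j> - <phi~_i|A|phi~_j>) c_j
a = psi_ps @ A_ps @ psi_ps + c @ (A_ae1c - A_ps1c) @ c
print("PAW-corrected expectation value:", a)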
In the original paper, Blöchl notes that there is a degree of freedom in this equation for an arbitrary operator ${\hat {B}}$, that is localised inside the spherical augmentation region, to add a term of the form:
${\hat {B}}-\sum _{i,j}|p_{i}\rangle \langle {\tilde {\phi }}_{i}|{\hat {B}}|{\tilde {\phi }}_{j}\rangle \langle p_{j}|,$
which can be seen as the basis for implementation of pseudopotentials within PAW, as the nuclear coulomb potential can now be substituted with a smoother one.
== Further reading ==
Rostgaard, Carsten (2010). "The Projector Augmented-wave Method". arXiv:0910.1921v2 [cond-mat.mtrl-sci].
Kresse, G.; Joubert, D. (1999). "From ultrasoft pseudopotentials to the projector augmented-wave method". Physical Review B. 59 (3): 1758–1775. Bibcode:1999PhRvB..59.1758K. doi:10.1103/PhysRevB.59.1758.
Dal Corso, Andrea (2010-08-11). "Projector augmented-wave method: Application to relativistic spin-density functional theory". Physical Review B. 82 (7): 075116. Bibcode:2010PhRvB..82g5116D. doi:10.1103/PhysRevB.82.075116.
== Software implementing the projector augmented-wave method ==
ABINIT
CASTEP (to calculate NMR properties)
CP2K (in form of their Gaussian and Augmented Plane Wave (GAPW) Method)
CP-PAW
GPAW
ONETEP
PWPAW
S/PHI/nX
Quantum ESPRESSO
VASP
== References == | Wikipedia/Projector_augmented_wave_method |
The muffin-tin approximation is a shape approximation of the potential well in a crystal lattice. It is most commonly employed in quantum mechanical simulations of the electronic band structure in solids. The approximation was proposed by John C. Slater. The augmented plane wave method (APW), which approximates the energy states of an electron in a crystal lattice, uses the muffin-tin approximation: the potential is assumed to be spherically symmetric in the muffin-tin region and constant in the interstitial region. Wave functions (the augmented plane waves) are constructed by matching solutions of the Schrödinger equation within each sphere with plane-wave solutions in the interstitial region, and linear combinations of these wave functions are then determined by the variational method. Many modern electronic structure methods employ the approximation, among them the APW method, the linear muffin-tin orbital method (LMTO) and various Green's function methods. One application is found in the variational theory developed by Jan Korringa (1947) and by Walter Kohn and N. Rostoker (1954), referred to as the KKR method. This method has been adapted to treat random materials as well, where it is called the KKR coherent potential approximation.
In its simplest form, non-overlapping spheres are centered on the atomic positions. Within these regions, the screened potential experienced by an electron is approximated to be spherically symmetric about the given nucleus. In the remaining interstitial region, the potential is approximated as a constant. Continuity of the potential between the atom-centered spheres and interstitial region is enforced.
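A minimal sketch of this potential shape is given below. It assumes a hypothetical one-dimensional chain of atoms and a simple screened Coulomb form inside the spheres, both chosen purely for illustration (a real muffin-tin construction is three-dimensional and uses the self-consistent crystal potential).

import numpy as np

a = 5.0                                  # lattice constant (arbitrary units)
atoms = np.arange(0.0, 4 * a, a)         # four atoms in a hypothetical 1D chain
r_mt = 2.0                               # muffin-tin (sphere) radius

x = np.linspace(-2.0, 17.0, 2000)

def v_atomic(d, z=1.0, screen=1.0):
    # Illustrative spherically symmetric potential around one nucleus
    return -z * np.exp(-screen * d) / np.maximum(d, 1e-6)

# Constant interstitial value chosen as the atomic potential at the sphere
# boundary, which enforces continuity across the muffin-tin radius
v_interstitial = v_atomic(np.array([r_mt]))[0]

v = np.full_like(x, v_interstitial)
for pos in atoms:
    d = np.abs(x - pos)
    inside = d < r_mt
    v[inside] = v_atomic(d[inside])      # spherical part inside each sphere

# v now has the characteristic muffin-tin shape: wells around each atom,
# flat (constant) potential in the interstitial region
print("interstitial value:", v_interstitial)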
In the interstitial region of constant potential, the single electron wave functions can be expanded in terms of plane waves. In the atom-centered regions, the wave functions can be expanded in terms of spherical harmonics and the eigenfunctions of a radial Schrödinger equation. Such use of functions other than plane waves as basis functions is termed the augmented plane-wave approach (of which there are many variations). It allows for an efficient representation of single-particle wave functions in the vicinity of the atomic cores where they can vary rapidly (and where plane waves would be a poor choice on convergence grounds in the absence of a pseudopotential).
== See also ==
Anderson's rule
Band gap
Bloch waves
Kohn–Sham equations
Kronig–Penney model
Local-density approximation
== References == | Wikipedia/Muffin-tin_approximation |
In computational chemistry, orbital-free density functional theory (OFDFT) is a quantum mechanical approach to electronic structure determination which is based on functionals of the electronic density. It is most closely related to the Thomas–Fermi model. Orbital-free density functional theory is, at present, less accurate than Kohn–Sham density functional theory models, but it has the advantage of being fast, so that it can be applied to large systems.
== Kinetic energy of electrons: an orbital-dependent functional ==
The Hohenberg–Kohn theorems guarantee that, for a system of atoms, there exists a functional of the electron density that yields the total energy. Minimization of this functional with respect to the density gives the ground-state density from which all of the system's properties can be obtained. Although the Hohenberg–Kohn theorems tell us that such a functional exists, they do not give us guidance on how to find it. In practice, the density functional is known exactly except for two terms. These are the electronic kinetic energy and the exchange–correlation energy. The lack of the true exchange–correlation functional is a well known problem in DFT, and there exists a huge variety of approaches to approximate this crucial component.
In general, there is no known form for the interacting kinetic energy in terms of electron density. In practice, instead of deriving approximations for interacting kinetic energy, much effort was devoted to deriving approximations for non-interacting (Kohn–Sham) kinetic energy, which is defined as (in atomic units)
$T_{S}[\{\phi _{i}\}]=\sum _{i=1}^{N}\langle \phi _{i}|-{\frac {1}{2}}\nabla ^{2}|\phi _{i}\rangle ,$
where $|\phi _{i}\rangle$ is the i-th Kohn–Sham orbital. The summation is performed over all the occupied Kohn–Sham orbitals.
== Thomas-Fermi (TF) kinetic energy ==
One of the first attempts to do this (even before the formulation of the Hohenberg–Kohn theorem) was the Thomas–Fermi model (1927), which wrote the kinetic energy as
$T_{\text{TF}}[n]=\underbrace {{\frac {3}{10}}(3\pi ^{2})^{\frac {2}{3}}} _{C_{TF}}\int [n(\mathbf {r} )]^{\frac {5}{3}}\,d^{3}r.$
This expression is based on the homogeneous electron gas (HEG) and a Local Density Approximation (LDA), and is therefore not very accurate for most physical systems. By formulating the Kohn–Sham kinetic energy in terms of the electron density, one avoids diagonalizing the Kohn–Sham Hamiltonian to solve for the Kohn–Sham orbitals, thereby saving computational cost. Since no Kohn–Sham orbital is involved in orbital-free density functional theory, one only needs to minimize the system's energy with respect to the electron density. An important bound for the TF kinetic energy is the Lieb–Thirring inequality.
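As a minimal numerical sketch (assuming a hydrogen-like 1s density, chosen only as a convenient test case), the TF functional can be evaluated directly on a radial grid:

import numpy as np

# Radial grid (atomic units)
r = np.linspace(1e-6, 30.0, 20000)

# Hydrogen 1s density, n(r) = exp(-2r)/pi, normalized to one electron
n = np.exp(-2.0 * r) / np.pi
print("electron number:", np.trapz(4.0 * np.pi * r**2 * n, r))   # ~1.0

# Thomas-Fermi kinetic energy  T_TF[n] = C_TF * Int n^{5/3} d^3r
C_TF = 0.3 * (3.0 * np.pi**2) ** (2.0 / 3.0)
T_TF = C_TF * np.trapz(4.0 * np.pi * r**2 * n ** (5.0 / 3.0), r)
print("T_TF =", T_TF, "hartree")   # underestimates the exact 0.5 hartree for this one-electron density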
== Von Weizsäcker (vW) kinetic energy ==
A notable historical improvement of the Thomas-Fermi model is the von Weizsäcker (vW) kinetic energy (1935), which is exactly the kinetic energy for noninteracting bosons and can be regarded as a Generalized Gradient approximation (GGA).
$T_{\text{vW}}[n]={\frac {1}{8}}\int {\frac {\nabla n(\mathbf {r} )\cdot \nabla n(\mathbf {r} )}{n(\mathbf {r} )}}\,d^{3}r=\int {\sqrt {n(\mathbf {r} )}}\left(-{\frac {1}{2}}\Delta \right){\sqrt {n(\mathbf {r} )}}\,d^{3}r$
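A sketch analogous to the TF example above (using the same assumed hydrogen-like test density) evaluates the gradient form of the vW functional; for a one-orbital density it reproduces the exact kinetic energy of 0.5 hartree:

import numpy as np

r = np.linspace(1e-6, 30.0, 20000)
n = np.exp(-2.0 * r) / np.pi           # hydrogen 1s density (one electron)

# |grad n|^2 reduces to (dn/dr)^2 for a spherically symmetric density
dn_dr = np.gradient(n, r)
T_vW = 0.125 * np.trapz(4.0 * np.pi * r**2 * dn_dr**2 / n, r)
print("T_vW =", T_vW, "hartree")       # ~0.5, exact for a single-orbital density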
== Pauli kinetic energy ==
A conceptually important quantity in OFDFT is the Pauli kinetic energy. Just as the Kohn-Sham correlation energy links the real system of interacting electrons to the artificial Kohn-Sham (KS) system of noninteracting electrons, the Pauli kinetic energy links the KS system to a fictitious system of noninteracting model bosons. Like the KS kinetic energy, it is highly KS-orbital dependent and must in practice be approximated.
$T_{\text{P}}[n]\equiv T_{\text{S}}[n]-T_{\text{vW}}[n]$
The term "Pauli" comes from the fact that the functional is related to the Pauli exclusion principle: $T_{\text{P}}[n]=0$ for an electron number of $N\leq 2$.
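A quick numerical illustration of this statement, assuming a closed-shell two-electron density built from a single doubly occupied hydrogen-like orbital (chosen only as a test case): the KS kinetic energy and the vW kinetic energy then coincide, so $T_{\text{P}}=0$.

import numpy as np

r = np.linspace(1e-6, 30.0, 20000)
phi = np.exp(-r) / np.sqrt(np.pi)         # normalized 1s-like orbital
n = 2.0 * phi**2                          # doubly occupied -> two-electron density

dphi = np.gradient(phi, r)
dn = np.gradient(n, r)

# T_S = 2 * <phi|-1/2 nabla^2|phi> = 2 * 1/2 * Int |grad phi|^2 d^3r (after integration by parts)
T_S = 2.0 * 0.5 * np.trapz(4.0 * np.pi * r**2 * dphi**2, r)

# T_vW = 1/8 Int |grad n|^2 / n d^3r
T_vW = 0.125 * np.trapz(4.0 * np.pi * r**2 * dn**2 / n, r)

print("T_S  =", T_S)
print("T_vW =", T_vW)
print("T_P  =", T_S - T_vW)               # ~0 for N <= 2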
== Dirac exchange energy ==
The exchange energy in orbital-free density functional theory (OFDFT) is the Dirac exchange (1930), a Local Density Approximation (LDA). It is related to the homogeneous electron gas (HEG).
$E_{x}^{\text{LDA}}[n]=-\underbrace {{\frac {3}{4}}{\bigg (}{\frac {3}{\pi }}{\bigg )}^{1/3}} _{C_{x}}\int [n(\mathbf {r} )]^{\frac {4}{3}}\,d^{3}r$
An important bound for the LDA exchange energy is the Lieb-Oxford inequality.
== Nonlocal (NL) kinetic energy density functionals (KEDF) ==
State-of-the-art kinetic energy density functionals for orbital-free density functional theory, and still the subject of ongoing research, are the so-called nonlocal (NL) kinetic energy density functionals, such as the Huang-Carter (HC) functional (2010), the Mi-Genova-Pavanello (MGP) functional (2018) or the Wang-Teter (WT) functional (1992). They admit the general form
$T_{\text{NL}}[n]=C_{\text{NL}}\iint d^{3}r\,d^{3}r'\,n(\mathbf {r} )^{\alpha }K[n](\mathbf {r} ,\mathbf {r} ')n(\mathbf {r} ')^{\beta },$
where $\alpha$ and $\beta$ are arbitrary fractional exponents, $K[n](\mathbf {r} ,\mathbf {r} ')$ is a nonlocal KEDF kernel and $C_{\text{NL}}$ is some constant.
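A schematic evaluation of this double integral on a coarse grid is sketched below. The density, the exponents and in particular the kernel are hypothetical placeholders (a simple Gaussian kernel is used only to show the bookkeeping; the actual WT, HC and MGP kernels are density dependent and are constructed in reciprocal space).

import numpy as np

# Coarse 3D grid (illustration only; real codes use fine, periodic grids)
L, N = 8.0, 12
x = np.linspace(-L / 2, L / 2, N)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
dV = (x[1] - x[0]) ** 3
pts = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)

# Hypothetical smooth density and parameters
n = np.exp(-(X**2 + Y**2 + Z**2)).reshape(-1)
alpha = beta = 5.0 / 6.0
C_NL = 1.0

# Hypothetical kernel K(r, r') = exp(-|r - r'|^2)  (a placeholder, not WT/HC/MGP)
sq = np.sum(pts**2, axis=1)
K = np.exp(-(sq[:, None] + sq[None, :] - 2.0 * pts @ pts.T))

# T_NL = C_NL * IntInt n(r)^alpha K(r, r') n(r')^beta d3r d3r'
T_NL = C_NL * (n**alpha) @ K @ (n**beta) * dV * dV
print("T_NL (toy value) =", T_NL)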
== Levy-Perdew-Sahni (LPS) equation ==
The analogue of the Kohn-Sham (KS) equations (1965) in orbital-free density functional theory (OFDFT) is the Levy-Perdew-Sahni (LPS) equation (1984), an effectively bosonic Schrödinger equation
${\bigg (}-{\frac {1}{2}}\Delta +v_{S}(\mathbf {r} )+v_{P}(\mathbf {r} ){\bigg )}{\sqrt {n(\mathbf {r} )}}=\mu {\sqrt {n(\mathbf {r} )}},$
where $v_{S}(\mathbf {r} )$ is the Kohn-Sham (KS) potential, $v_{P}(\mathbf {r} )$ the Pauli potential, $\mu$ the eigenvalue of the highest occupied KS orbital (the chemical potential) and ${\sqrt {n(\mathbf {r} )}}$ the square root of the density. One big benefit of the LPS equation being so closely related to the KS equations is that an existing KS code can easily be turned into an OF code by keeping only a single orbital in the Self-Consistent-Field (SCF) cycle.
=== Derivation ===
Starting from the Euler-Lagrange equation of density functional theory,
${\frac {\delta T_{S}[n]}{\delta n(\mathbf {r} )}}+v_{S}(\mathbf {r} )=\mu ,$
one simultaneously adds and subtracts the von Weizsäcker potential, i.e. the functional derivative of the vW functional, and uses the definition of the Pauli kinetic energy, whose functional derivative with respect to the density is the Pauli potential:
$\underbrace {\frac {\delta T_{vW}[n]}{\delta n(\mathbf {r} )}} _{v_{vW}(\mathbf {r} )}+v_{S}(\mathbf {r} )+\underbrace {\frac {\delta T_{P}[n]}{\delta n(\mathbf {r} )}} _{v_{P}(\mathbf {r} )}=\mu .$
Expanding the functional derivative via the chain rule,
$\underbrace {\frac {\delta {\sqrt {n(\mathbf {r} )}}}{\delta n(\mathbf {r} )}} _{1/{\sqrt {n(\mathbf {r} )}}}\underbrace {{\frac {\delta }{\delta {\sqrt {n(\mathbf {r} )}}}}\int {\sqrt {n(\mathbf {r} )}}(-{\frac {1}{2}}\Delta ){\sqrt {n(\mathbf {r} )}}\,d^{3}r} _{-{\frac {1}{2}}\Delta {\sqrt {n(\mathbf {r} )}}}+v_{S}(\mathbf {r} )+v_{P}(\mathbf {r} )=\mu ,$
and as a last step multiplying both sides by the square root of the density ${\sqrt {n(\mathbf {r} )}}$ yields the LPS equation.
=== Bosonic transformation ===
With the linear transformation
${\sqrt {n(\mathbf {r} )}}\mapsto {\frac {1}{\sqrt {N}}}\phi _{B}(\mathbf {r} )$
and by defining the bosonic potential as
$v_{B}(\mathbf {r} )\equiv v_{S}(\mathbf {r} )+v_{P}(\mathbf {r} ),$
the LPS equation evolves to the bosonic Schrödinger equation
${\bigg (}-{\frac {1}{2}}\Delta +v_{B}(\mathbf {r} ){\bigg )}\phi _{B}(\mathbf {r} )=\mu \phi _{B}(\mathbf {r} ).$
Note that the normalization constraint in Bra–ket notation $\langle \phi _{B}|\phi _{B}\rangle =1$ holds, since $N[n]=\int n(\mathbf {r} )\,d^{3}r$.
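A minimal sketch of solving such an effective single-orbital (bosonic) Schrödinger equation is given below. It uses a one-dimensional finite-difference grid and a hypothetical harmonic bosonic potential purely for illustration; in an actual OFDFT calculation $v_{B}=v_{S}+v_{P}$ depends on the density itself and is updated self-consistently.

import numpy as np

# 1D grid (atomic units); illustration only -- real calculations are 3D
N_el = 4                                   # number of electrons
x = np.linspace(-10.0, 10.0, 801)
dx = x[1] - x[0]

v_B = 0.5 * x**2                           # hypothetical bosonic potential (harmonic)

# Finite-difference Hamiltonian  H = -1/2 d^2/dx^2 + v_B
main = 1.0 / dx**2 + v_B
off = -0.5 / dx**2 * np.ones(len(x) - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# The lowest eigenpair plays the role of (mu, phi_B)
eigval, eigvec = np.linalg.eigh(H)
mu, phi_B = eigval[0], eigvec[:, 0]
phi_B /= np.sqrt(np.trapz(phi_B**2, x))    # <phi_B|phi_B> = 1

n = N_el * phi_B**2                        # density: n = N * phi_B^2
print("mu =", mu)                          # ~0.5 for this harmonic test potential
print("electron number:", np.trapz(n, x))  # ~N_el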
== Time-dependent orbital-free density functional theory (TDOFDFT) ==
A time-dependent version of OFDFT has also been developed recently. It is implemented in DFTpy.
== Software packages ==
DFTpy, a free and open-source software package for OFDFT, has been developed by the Pavanello Group. It was launched in 2020; the most recent version is 2.1.1.
== References == | Wikipedia/Orbital-free_density_functional_theory |
In statistical mechanics the Ornstein–Zernike (OZ) equation is an integral equation introduced by Leonard Ornstein and Frits Zernike that relates different correlation functions with each other. Together with a closure relation, it is used to compute the structure factor and thermodynamic state functions of amorphous matter like liquids or colloids.
== Context ==
The OZ equation has practical importance as a foundation for approximations for computing the
pair correlation function of molecules or ions in liquids, or of colloidal particles. The pair correlation function is related via Fourier transform to the static structure factor, which can be determined experimentally using X-ray diffraction or neutron diffraction.
The OZ equation relates the pair correlation function to the direct correlation function. The direct correlation function is only used in connection with the OZ equation, which can actually be seen as its definition.
Besides the OZ equation, other methods for the computation of the pair correlation function include the virial expansion at low densities, and the Bogoliubov–Born–Green–Kirkwood–Yvon (BBGKY) hierarchy. Any of these methods must be combined with a physical approximation: truncation in the case of the virial expansion, a closure relation for OZ or BBGKY.
== The equation ==
To keep notation simple, we only consider homogeneous fluids. Thus the pair correlation function only depends on distance, and therefore is also called the radial distribution function. It can be written
$g(\mathbf {r} _{1},\mathbf {r} _{2})=g(\mathbf {r} _{1}-\mathbf {r} _{2})\equiv g(\mathbf {r} _{12})=g(|\mathbf {r} _{12}|)\equiv g(r_{12})\equiv g(12),$
where the first equality comes from homogeneity, the second from isotropy, and the equivalences introduce new notation.
It is convenient to define the total correlation function as:
$h(12)\equiv g(12)-1,$
which expresses the influence of molecule 1 on molecule 2 at distance $r_{12}$. The OZ equation,
$h(12)=c(12)+\rho \int d\mathbf {r} _{3}\,c(13)\,h(32),$
splits this influence into two contributions, a direct and an indirect one. The direct contribution defines the direct correlation function, $c(r)$. The indirect part is due to the influence of molecule 1 on a third, labeled molecule 3, which in turn affects molecule 2, directly and indirectly. This indirect effect is weighted by the density and averaged over all the possible positions of molecule 3.
By eliminating the indirect influence, $c(r)$ is shorter-ranged than $h(r)$ and can be more easily modelled and approximated. The radius of $c(r)$ is determined by the radius of intermolecular forces, whereas the radius of $h(r)$ is of the order of the correlation length.
== Fourier transform ==
The integral in the OZ equation is a convolution. Therefore, the OZ equation can be resolved by Fourier transform.
If we denote the Fourier transforms of $h(\mathbf {r} )$ and $c(\mathbf {r} )$ by ${\hat {h}}(\mathbf {k} )$ and ${\hat {c}}(\mathbf {k} )$, respectively, and use the convolution theorem, we obtain
${\hat {h}}(\mathbf {k} )={\hat {c}}(\mathbf {k} )+\rho \,{\hat {h}}(\mathbf {k} )\,{\hat {c}}(\mathbf {k} ),$
which yields
${\hat {c}}(\mathbf {k} )={\frac {{\hat {h}}(\mathbf {k} )}{1+\rho \,{\hat {h}}(\mathbf {k} )}}\qquad {\text{and}}\qquad {\hat {h}}(\mathbf {k} )={\frac {{\hat {c}}(\mathbf {k} )}{1-\rho \,{\hat {c}}(\mathbf {k} )}}.$
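A short sketch of how this algebraic relation is used in practice, assuming a hypothetical model for ${\hat {c}}(k)$ (a Gaussian, chosen only for illustration) at a given number density:

import numpy as np

rho = 0.5                                  # number density (illustrative)
k = np.linspace(1e-3, 20.0, 400)

# Hypothetical direct correlation function in k-space (placeholder model)
c_hat = -1.5 * np.exp(-0.5 * k**2)

# OZ relation in Fourier space and the static structure factor S(k) = 1 + rho*h_hat(k)
h_hat = c_hat / (1.0 - rho * c_hat)
S = 1.0 + rho * h_hat
print("S(k -> 0) ~", S[0])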
== Closure relations ==
As both functions, $h$ and $c$, are unknown, one needs an additional equation, known as a closure relation. While the OZ equation is purely formal, the closure must introduce some physically motivated approximation.
In the low-density limit, the pair correlation function is given by the Boltzmann factor,
$g(12)={\text{e}}^{-\beta u(12)},\quad \rho \to 0,$
with $\beta =1/k_{\text{B}}T$ and with the pair potential $u(r)$.
Closure relations for higher densities modify this simple relation in different ways. The best known closure approximations are:
The Percus–Yevick approximation for particles with impenetrable ("hard") core,
the hypernetted-chain approximation, for particles with soft cores and attractive potential tails,
the mean spherical approximation,
the Rogers-Young approximation.
The latter two interpolate in different ways between the former two, and thereby achieve a satisfactory description of particles that have a hard core and attractive forces.
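As an illustration of how the OZ equation is combined with a closure in practice, the following sketch solves the OZ equation with the hypernetted-chain closure by simple Picard iteration. The pair potential (a soft Gaussian-core form), the density and the numerical parameters are illustrative assumptions, not tied to any particular system.

import numpy as np

# Radial grid and conjugate reciprocal grid
N, dr = 1024, 0.04
r = dr * np.arange(1, N + 1)
dk = np.pi / (N * dr)
k = dk * np.arange(1, N + 1)
S_kr = np.sin(np.outer(k, r))          # sin(k_i * r_j), reused by both transforms

def ft3d(f):
    # Radial 3D Fourier transform: F(k) = 4*pi/k * Int r f(r) sin(kr) dr
    return 4.0 * np.pi / k * np.trapz(r * f * S_kr, r, axis=1)

def ift3d(F):
    # Inverse transform: f(r) = 1/(2*pi^2*r) * Int k F(k) sin(kr) dk
    return np.trapz(k * F * S_kr.T, k, axis=1) / (2.0 * np.pi**2 * r)

# Illustrative Gaussian-core pair potential and state point
beta_u = 2.0 * np.exp(-r**2)           # beta * u(r)
rho = 0.3                              # number density

c = np.exp(-beta_u) - 1.0              # low-density (Boltzmann) starting guess
for it in range(500):
    c_hat = ft3d(c)
    gamma = ift3d(rho * c_hat**2 / (1.0 - rho * c_hat))   # gamma = h - c from the OZ relation
    c_new = np.exp(-beta_u + gamma) - 1.0 - gamma          # HNC closure
    if np.max(np.abs(c_new - c)) < 1e-6:
        c = c_new
        break
    c = 0.2 * c_new + 0.8 * c                              # Picard mixing for stability

c_hat = ft3d(c)
gamma = ift3d(rho * c_hat**2 / (1.0 - rho * c_hat))
g = 1.0 + c + gamma                     # pair correlation function g(r) = 1 + h(r)
print("iterations:", it + 1, "  g at smallest r:", g[0])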
== See also ==
Hypernetted-chain equation – closure relation
Percus–Yevick approximation – closure relation
== References ==
== External links ==
"The Ornstein–Zernike equation and integral equations". cbp.tnw.utwente.nl.
"Multilevel wavelet solver for the Ornstein–Zernike equation" (PDF). ncsu.edu (Abstract).
"Analytical solution of the Ornstein–Zernike equation for a multicomponent fluid" (PDF). iop.org.
"The Ornstein–Zernike equation in the canonical ensemble". iop.org. | Wikipedia/Ornstein–Zernike_equation |