Atomic astrophysics is concerned with performing atomic physics calculations that will be useful to astronomers and using atomic data to interpret astronomical observations. Atomic physics plays a key role in astrophysics as astronomers' only information about a particular object comes through the light that it emits, and this light arises through atomic transitions. Molecular astrophysics, developed into a rigorous field of investigation by theoretical astrochemist Alexander Dalgarno beginning in 1967, concerns the study of emission from molecules in space. There are 110 currently known interstellar molecules. These molecules have large numbers of observable transitions. Lines may also be observed in absorption—for example the highly redshifted lines seen against the gravitationally lensed quasar PKS1830-211. High-energy radiation, such as ultraviolet light, can break the molecular bonds which hold atoms in molecules. In general, then, molecules are found in cool astrophysical environments. The most massive objects in our galaxy are giant clouds of molecules and dust known as giant molecular clouds. In these clouds, and smaller versions of them, stars and planets are formed. One of the primary fields of study of molecular astrophysics is star and planet formation. Molecules may be found in many environments, however, from stellar atmospheres to those of planetary satellites. Most of these locations are relatively cool, and molecular emission is most easily studied via photons emitted when the molecules make transitions between low rotational energy states. One molecule, composed of the abundant carbon and oxygen atoms, and very stable against dissociation into atoms, is carbon monoxide (CO). The wavelength of the photon emitted when the CO molecule falls from its lowest excited state to its zero-energy, or ground, state is 2.6 mm, corresponding to a frequency of 115 gigahertz. This frequency is a thousand times higher than typical FM radio frequencies. At these high frequencies, molecules in the Earth's atmosphere can block transmissions from space, and telescopes must be located at dry, high-altitude sites (atmospheric water vapour is an important blocker at these frequencies). Radio telescopes must have very accurate surfaces to produce high-fidelity images. On February 21, 2014, NASA announced a greatly upgraded database for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. According to scientists, more than 20% of the carbon in the universe may be associated with PAHs, possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets.
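As a quick arithmetic check of the figures quoted above, the CO wavelength and frequency are related by λν = c. A minimal sketch in Python; the FM comparison frequency is an assumed typical broadcast value, not taken from the text:

```python
# Relation between wavelength and frequency for the CO ground-state rotational line.
# Illustrative check of the figures quoted above (2.6 mm, ~115 GHz).
c = 2.998e8          # speed of light, m/s

wavelength = 2.6e-3  # 2.6 mm in metres
frequency = c / wavelength
print(f"CO line frequency: {frequency/1e9:.0f} GHz")    # ~115 GHz

fm_radio = 100e6     # typical FM broadcast frequency, Hz (assumed)
print(f"Ratio to FM radio: {frequency/fm_radio:.0f}x")  # roughly a thousand
```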
Wikipedia/Atomic_and_molecular_astrophysics
Social physics or sociophysics is a field of science which uses mathematical tools inspired by physics to understand the behavior of human crowds. In a modern commercial use, it can also refer to the analysis of social phenomena with big data. Social physics is closely related to econophysics, which uses physics methods to describe economics. == History == The earliest mentions of a concept of social physics began with the English philosopher Thomas Hobbes. In 1636 he traveled to Florence, Italy, and met physicist-astronomer Galileo Galilei, known for his contributions to the study of motion. It was here that Hobbes began to outline the idea of representing the "physical phenomena" of society in terms of the laws of motion. In his treatise De Corpore, Hobbes sought to relate the movement of "material bodies" to the mathematical terms of motion outlined by Galileo and similar scientists of the time period. Although there was no explicit mention of "social physics", the sentiment of examining society with scientific methods began before the first written mention of social physics. Later, French social thinker Henri de Saint-Simon's first book, the 1803 Lettres d’un Habitant de Geneve, introduced the idea of describing society using laws similar to those of the physical and biological sciences. His student and collaborator was Auguste Comte, a French philosopher widely regarded as the founder of sociology, who first defined the term "social physics" in an essay appearing in Le Producteur, a journal project by Saint-Simon. Comte defined social physics as follows: Social physics is that science which occupies itself with social phenomena, considered in the same light as astronomical, physical, chemical, and physiological phenomena, that is to say as being subject to natural and invariable laws, the discovery of which is the special object of its researches. After Saint-Simon and Comte, Belgian statistician Adolphe Quetelet proposed that society be modeled using mathematical probability and social statistics. Quetelet's 1835 book, Essay on Social Physics: Man and the Development of his Faculties, outlines the project of a social physics characterized by measured variables that follow a normal distribution, and collected data about many such variables. A frequently repeated anecdote is that when Comte discovered that Quetelet had appropriated the term "social physics", he found it necessary to invent a new term, "sociologie" ("sociology"), because he disagreed with Quetelet's collection of statistics. There have been several “generations” of social physicists. The first generation began with Saint-Simon, Comte, and Quetelet, and ended in the late 1800s with historian Henry Adams. In the middle of the 20th century, researchers such as the American astrophysicist John Q. Stewart and the Finnish geographer Reino Ajo showed that the spatial distribution of social interactions could be described using gravity models. Physicists such as Arthur Iberall use a homeokinetics approach to study social systems as complex self-organizing systems. For example, a homeokinetics analysis of society shows that one must account for flow variables such as the flow of energy, of materials, of action, reproduction rate, and value-in-exchange. More recently there have been a large number of social science papers that use mathematics broadly similar to that of physics; this body of work is often described as “computational social science”.
In the late 1800s, Adams separated “human physics” into the subsets of social physics or social mechanics (sociology of interactions using physics-like mathematical tools) and social thermodynamics or sociophysics (sociology described using mathematical invariances similar to those in thermodynamics). This dichotomy is roughly analogous to the difference between microeconomics and macroeconomics. == Examples == === Ising model and voter dynamics === One of the most well-known examples in social physics is the relationship of the Ising model and the voting dynamics of a finite population. The Ising model, as a model of ferromagnetism, is represented by a grid of spaces, each of which is occupied by a spin, numerically ±1. Mathematically, the final energy state of the system depends on the interactions of the spaces and their respective spins. For example, if two adjacent spaces share the same spin, the surrounding neighbors will begin to align, and the system will eventually reach a state of consensus. In social physics, it has been observed that voter dynamics in a finite population obey the same mathematical properties as the Ising model. In the social physics model, each spin denotes an opinion, e.g. yes or no, and each space represents a "voter". If two adjacent spaces (voters) share the same spin (opinion), their neighbors begin to align with their spin value; if two adjacent spaces do not share the same spin, then their neighbors remain the same. Eventually, the remaining voters will reach a state of consensus as the "information flows outward" (a toy simulation of these dynamics is sketched at the end of this article). The Sznajd model is an extension of the Ising model and is classified as an econophysics model. It emphasizes the alignment of the neighboring spins in a phenomenon called "social validation". It follows the same properties as the Ising model and is extended to observe the patterns of opinion dynamics as a whole, rather than focusing on just voter dynamics. === Potts model and cultural dynamics === The Potts model is a generalization of the Ising model and has been used to examine the concept of cultural dissemination as described by American political scientist Robert Axelrod. Axelrod's model of cultural dissemination states that individuals who share cultural characteristics are more likely to interact with each other, thus increasing the number of overlapping characteristics and expanding their interaction network. In the Potts model, each spin can take one of several values, unlike in the Ising model, where each spin takes one of only two values. Each spin, then, represents an individual's "cultural characteristics... [or] in Axelrod's words, 'the set of individual attributes that are subject to social influence'". It is observed that, using the mathematical properties of the Potts model, neighbors whose cultural characteristics overlap tend to interact more frequently than unlike neighbors do, thus leading to a self-organizing grouping of similar characteristics. Simulations of the Potts model show that Axelrod's model of cultural dissemination behaves as an Ising-class model. == Recent work == In modern use “social physics” refers to using “big data” analysis and mathematical laws to understand the behavior of human crowds. The core idea is that data about human activity (e.g., phone call records, credit card purchases, taxi rides, web activity) contain mathematical patterns that are characteristic of how social interactions spread and converge.
These mathematical invariances can then serve as a filter for analysis of behavior changes and for detecting emerging behavioral patterns. Social physics has recently been applied to analyze the COVID-19 pandemic. It has been demonstrated that the large difference in the spread of COVID-19 between countries is due to differences in responses to social stress. The combination of traditional epidemic models with social physics models of the classical general adaptation syndrome triad, "anxiety-resistance-exhaustion", accurately describes the first two waves of the COVID-19 epidemic for 13 countries. The differences between countries are concentrated in two kinetic constants: the rate of mobilization and the rate of exhaustion. Recent books about social physics include MIT Professor Alex Pentland's book Social Physics and Nature editor Mark Buchanan's book The Social Atom. Popular reading about sociophysics includes English physicist Philip Ball's Why Society is a Complex Matter, Dirk Helbing's The Automation of Society Is Next, and physicist Albert-László Barabási's book Linked. == See also == Historic recurrence Logology (science) == References == == Further reading == Arnopoulos, Paris, Sociophysics, Cosmos and Chaos in Nature and Culture, New York, Nova Science Publishers Inc., 1st ed. 1995, 2nd ed. 2005. Ball, Philip, Critical Mass: How One Thing Leads to Another, 2004, ISBN 0-434-01135-5.
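To make the Ising-type voter dynamics from the Examples section above concrete, here is a minimal toy simulation. It is a sketch only: a random voter repeatedly copies the opinion of a random neighbour on a small periodic grid, which is the simplest form of the voter model rather than the precise model used in any particular study.

```python
import random

# Toy voter model on an L x L grid with periodic boundaries.
# Each site holds an opinion of +1 or -1; at every step a randomly chosen
# voter adopts the opinion of a randomly chosen neighbour, so aligned
# regions grow until the whole population shares one opinion (consensus).
L = 10
grid = [[random.choice([-1, 1]) for _ in range(L)] for _ in range(L)]

def mean_opinion(g):
    """Average opinion; +1 or -1 means full consensus."""
    return sum(sum(row) for row in g) / (L * L)

steps = 0
while abs(mean_opinion(grid)) < 1.0 and steps < 200_000:
    i, j = random.randrange(L), random.randrange(L)
    di, dj = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    grid[i][j] = grid[(i + di) % L][(j + dj) % L]  # copy a neighbour's opinion
    steps += 1

print(f"after {steps} updates, mean opinion = {mean_opinion(grid):+.2f}")
```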
Wikipedia/Sociophysics
In physics and engineering, magnetohydrodynamics (MHD; also called magneto-fluid dynamics or hydro­magnetics) is a model of electrically conducting fluids that treats all interpenetrating particle species together as a single continuous medium. It is primarily concerned with the low-frequency, large-scale, magnetic behavior in plasmas and liquid metals and has applications in multiple fields including space physics, geophysics, astrophysics, and engineering. The word magneto­hydro­dynamics is derived from magneto- meaning magnetic field, hydro- meaning water, and dynamics meaning movement. The field of MHD was initiated by Hannes Alfvén, for which he received the Nobel Prize in Physics in 1970. == History == The MHD description of electrically conducting fluids was first developed by Hannes Alfvén in a 1942 paper published in Nature titled "Existence of Electromagnetic–Hydrodynamic Waves" which outlined his discovery of what are now referred to as Alfvén waves. Alfvén initially referred to these waves as "electromagnetic–hydrodynamic waves"; however, in a later paper he noted, "As the term 'electromagnetic–hydrodynamic waves' is somewhat complicated, it may be convenient to call this phenomenon 'magneto–hydrodynamic' waves." == Equations == In MHD, motion in the fluid is described using linear combinations of the mean motions of the individual species: the current density J {\displaystyle \mathbf {J} } and the center of mass velocity v {\displaystyle \mathbf {v} } . In a given fluid, each species σ {\displaystyle \sigma } has a number density n σ {\displaystyle n_{\sigma }} , mass m σ {\displaystyle m_{\sigma }} , electric charge q σ {\displaystyle q_{\sigma }} , and a mean velocity u σ {\displaystyle \mathbf {u} _{\sigma }} . The fluid's total mass density is then ρ = ∑ σ m σ n σ {\textstyle \rho =\sum _{\sigma }m_{\sigma }n_{\sigma }} , and the motion of the fluid can be described by the current density expressed as J = ∑ σ n σ q σ u σ {\displaystyle \mathbf {J} =\sum _{\sigma }n_{\sigma }q_{\sigma }\mathbf {u} _{\sigma }} and the center of mass velocity expressed as: v = 1 ρ ∑ σ m σ n σ u σ . {\displaystyle \mathbf {v} ={\frac {1}{\rho }}\sum _{\sigma }m_{\sigma }n_{\sigma }\mathbf {u} _{\sigma }.} MHD can be described by a set of equations consisting of a continuity equation, an equation of motion (the Cauchy momentum equation), an equation of state, Ampère's Law, Faraday's law, and Ohm's law. As with any fluid description to a kinetic system, a closure approximation must be applied to the highest moment of the particle distribution equation. This is often accomplished with approximations to the heat flux through a condition of adiabaticity or isothermality. 
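The bulk quantities defined above (mass density, current density, and centre-of-mass velocity) are simple weighted sums over the particle species. A minimal numerical sketch, with purely illustrative values for a fully ionized hydrogen plasma with a slight electron drift:

```python
import numpy as np

# Bulk MHD variables from per-species data, following the definitions above:
#   rho = sum(m_s n_s),  J = sum(n_s q_s u_s),  v = (1/rho) sum(m_s n_s u_s).
# The densities and drift speeds below are illustrative only.
e = 1.602e-19                       # elementary charge, C
species = {
    #             n [m^-3]  m [kg]      q [C]   u [m/s]
    "protons":   (1e19,     1.673e-27,  +e,     np.array([1.0e4, 0.0, 0.0])),
    "electrons": (1e19,     9.109e-31,  -e,     np.array([1.2e4, 0.0, 0.0])),
}

rho = sum(n * m for n, m, q, u in species.values())
J   = sum(n * q * u for n, m, q, u in species.values())
v   = sum(n * m * u for n, m, q, u in species.values()) / rho

print(f"mass density rho = {rho:.3e} kg/m^3")
print(f"current density J = {J} A/m^2")
print(f"bulk velocity   v = {v} m/s")
```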
In the adiabatic limit, that is, the assumption of an isotropic pressure p {\displaystyle p} and isotropic temperature, a fluid with an adiabatic index γ {\displaystyle \gamma } , electrical resistivity η {\displaystyle \eta } , magnetic field B {\displaystyle \mathbf {B} } , and electric field E {\displaystyle \mathbf {E} } can be described by the continuity equation ∂ ρ ∂ t + ∇ ⋅ ( ρ v ) = 0 , {\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot \left(\rho \mathbf {v} \right)=0,} the equation of state d d t ( p ρ γ ) = 0 , {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\left({\frac {p}{\rho ^{\gamma }}}\right)=0,} the equation of motion ρ ( ∂ ∂ t + v ⋅ ∇ ) v = J × B − ∇ p , {\displaystyle \rho \left({\frac {\partial }{\partial t}}+\mathbf {v} \cdot \nabla \right)\mathbf {v} =\mathbf {J} \times \mathbf {B} -\nabla p,} the low-frequency Ampère's law μ 0 J = ∇ × B , {\displaystyle \mu _{0}\mathbf {J} =\nabla \times \mathbf {B} ,} Faraday's law ∂ B ∂ t = − ∇ × E , {\displaystyle {\frac {\partial \mathbf {B} }{\partial t}}=-\nabla \times \mathbf {E} ,} and Ohm's law E + v × B = η J . {\displaystyle \mathbf {E} +\mathbf {v} \times \mathbf {B} =\eta \mathbf {J} .} Taking the curl of this equation and using Ampère's law and Faraday's law results in the induction equation, ∂ B ∂ t = ∇ × ( v × B ) + η μ 0 ∇ 2 B , {\displaystyle {\frac {\partial \mathbf {B} }{\partial t}}=\nabla \times (\mathbf {v} \times \mathbf {B} )+{\frac {\eta }{\mu _{0}}}\nabla ^{2}\mathbf {B} ,} where η / μ 0 {\displaystyle \eta /\mu _{0}} is the magnetic diffusivity. In the equation of motion, the Lorentz force term J × B {\displaystyle \mathbf {J} \times \mathbf {B} } can be expanded using Ampère's law and a vector calculus identity to give J × B = ( B ⋅ ∇ ) B μ 0 − ∇ ( B 2 2 μ 0 ) , {\displaystyle \mathbf {J} \times \mathbf {B} ={\frac {\left(\mathbf {B} \cdot \nabla \right)\mathbf {B} }{\mu _{0}}}-\nabla \left({\frac {B^{2}}{2\mu _{0}}}\right),} where the first term on the right hand side is the magnetic tension force and the second term is the magnetic pressure force. == Ideal MHD == The simplest form of MHD, ideal MHD, assumes that the resistive term η J {\displaystyle \eta \mathbf {J} } in Ohm's law is small relative to the other terms such that it can be taken to be equal to zero. This occurs in the limit of large magnetic Reynolds numbers during which magnetic induction dominates over magnetic diffusion at the velocity and length scales under consideration. Consequently, processes in ideal MHD that convert magnetic energy into kinetic energy, referred to as ideal processes, cannot generate heat and raise entropy.: 6  A fundamental concept underlying ideal MHD is the frozen-in flux theorem which states that the bulk fluid and embedded magnetic field are constrained to move together such that one can be said to be "tied" or "frozen" to the other. Therefore, any two points that move with the bulk fluid velocity and lie on the same magnetic field line will continue to lie on the same field line even as the points are advected by fluid flows in the system.: 25  The connection between the fluid and magnetic field fixes the topology of the magnetic field in the fluid—for example, if a set of magnetic field lines are tied into a knot, then they will remain so as long as the fluid has negligible resistivity. This difficulty in reconnecting magnetic field lines makes it possible to store energy by moving the fluid or the source of the magnetic field. 
The energy can then become available if the conditions for ideal MHD break down, allowing magnetic reconnection that releases the stored energy from the magnetic field. === Ideal MHD equations === In ideal MHD, the resistive term η J {\displaystyle \eta \mathbf {J} } vanishes in Ohm's law giving the ideal Ohm's law, E + v × B = 0. {\displaystyle \mathbf {E} +\mathbf {v} \times \mathbf {B} =0.} Similarly, the magnetic diffusion term η ∇ 2 B / μ 0 {\displaystyle \eta \nabla ^{2}\mathbf {B} /\mu _{0}} in the induction equation vanishes giving the ideal induction equation,: 23  ∂ B ∂ t = ∇ × ( v × B ) . {\displaystyle {\frac {\partial \mathbf {B} }{\partial t}}=\nabla \times (\mathbf {v} \times \mathbf {B} ).} === Applicability of ideal MHD to plasmas === Ideal MHD is only strictly applicable when: The plasma is strongly collisional, so that the time scale of collisions is shorter than the other characteristic times in the system, and the particle distributions are therefore close to Maxwellian. The resistivity due to these collisions is small. In particular, the typical magnetic diffusion times over any scale length present in the system must be longer than any time scale of interest. Interest in length scales much longer than the ion skin depth and Larmor radius perpendicular to the field, long enough along the field to ignore Landau damping, and time scales much longer than the ion gyration time (system is smooth and slowly evolving). === Importance of resistivity === In an imperfectly conducting fluid the magnetic field can generally move through the fluid following a diffusion law with the resistivity of the plasma serving as a diffusion constant. This means that solutions to the ideal MHD equations are only applicable for a limited time for a region of a given size before diffusion becomes too important to ignore. One can estimate the diffusion time across a solar active region (from collisional resistivity) to be hundreds to thousands of years, much longer than the actual lifetime of a sunspot—so it would seem reasonable to ignore the resistivity. By contrast, a meter-sized volume of seawater has a magnetic diffusion time measured in milliseconds. Even in physical systems—which are large and conductive enough that simple estimates of the Lundquist number suggest that the resistivity can be ignored—resistivity may still be important: many instabilities exist that can increase the effective resistivity of the plasma by factors of more than 109. The enhanced resistivity is usually the result of the formation of small scale structure like current sheets or fine scale magnetic turbulence, introducing small spatial scales into the system over which ideal MHD is broken and magnetic diffusion can occur quickly. When this happens, magnetic reconnection may occur in the plasma to release stored magnetic energy as waves, bulk mechanical acceleration of material, particle acceleration, and heat. Magnetic reconnection in highly conductive systems is important because it concentrates energy in time and space, so that gentle forces applied to a plasma for long periods of time can cause violent explosions and bursts of radiation. When the fluid cannot be considered as completely conductive, but the other conditions for ideal MHD are satisfied, it is possible to use an extended model called resistive MHD. This includes an extra term in Ohm's Law which models the collisional resistivity. 
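Whether resistivity can be neglected is commonly judged with the magnetic Reynolds number Rm = LV/(η/μ0); the Lundquist number mentioned above is the same ratio evaluated with the Alfvén speed. A rough sketch with purely illustrative parameter values (not the specific solar or seawater figures quoted above):

```python
import math

# Magnetic Reynolds number Rm = L * V / (eta / mu0): ideal MHD is a good
# approximation when Rm >> 1 (induction dominates over diffusion).
# The two cases below use purely illustrative numbers.
mu0 = 4e-7 * math.pi

def magnetic_reynolds(L, V, eta):
    """L: length scale (m), V: flow speed (m/s), eta: resistivity (ohm*m)."""
    return L * V / (eta / mu0)

print(f"hot, large-scale plasma:       Rm ~ {magnetic_reynolds(1e7, 1e4, 1e-6):.1e}")
print(f"small liquid-metal experiment: Rm ~ {magnetic_reynolds(0.1, 1.0, 1e-6):.1e}")
```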
Generally MHD computer simulations are at least somewhat resistive because their computational grid introduces a numerical resistivity. == Structures in MHD systems == In many MHD systems most of the electric current is compressed into thin nearly-two-dimensional ribbons termed current sheets. These can divide the fluid into magnetic domains, inside of which the currents are relatively weak. Current sheets in the solar corona are thought to be between a few meters and a few kilometers in thickness, which is quite thin compared to the magnetic domains (which are thousands to hundreds of thousands of kilometers across). Another example is in the Earth's magnetosphere, where current sheets separate topologically distinct domains, isolating most of the Earth's ionosphere from the solar wind. == Waves == The wave modes derived using the MHD equations are called magnetohydrodynamic waves or MHD waves. There are three MHD wave modes that can be derived from the linearized ideal-MHD equations for a fluid with a uniform and constant magnetic field: Alfvén waves Slow magnetosonic waves Fast magnetosonic waves These modes have phase velocities that are independent of the magnitude of the wavevector, so they experience no dispersion. The phase velocity depends on the angle between the wave vector k and the magnetic field B. An MHD wave propagating at an arbitrary angle θ with respect to the time independent or bulk field B0 will satisfy the dispersion relation ω k = v A cos ⁡ θ {\displaystyle {\frac {\omega }{k}}=v_{A}\cos \theta } where v A = B 0 μ 0 ρ {\displaystyle v_{A}={\frac {B_{0}}{\sqrt {\mu _{0}\rho }}}} is the Alfvén speed. This branch corresponds to the shear Alfvén mode. Additionally the dispersion equation gives ω k = ( 1 2 ( v A 2 + v s 2 ) ± 1 2 ( v A 2 + v s 2 ) 2 − 4 v s 2 v A 2 cos 2 ⁡ θ ) 1 2 {\displaystyle {\frac {\omega }{k}}=\left({\tfrac {1}{2}}\left(v_{A}^{2}+v_{s}^{2}\right)\pm {\tfrac {1}{2}}{\sqrt {\left(v_{A}^{2}+v_{s}^{2}\right)^{2}-4v_{s}^{2}v_{A}^{2}\cos ^{2}\theta }}\right)^{\frac {1}{2}}} where v s = γ p ρ {\displaystyle v_{s}={\sqrt {\frac {\gamma p}{\rho }}}} is the ideal gas speed of sound. The plus branch corresponds to the fast-MHD wave mode and the minus branch corresponds to the slow-MHD wave mode. A summary of the properties of these waves is provided: The MHD oscillations will be damped if the fluid is not perfectly conducting but has a finite conductivity, or if viscous effects are present. MHD waves and oscillations are a popular tool for the remote diagnostics of laboratory and astrophysical plasmas, for example, the corona of the Sun (Coronal seismology). == Extensions == Resistive Resistive MHD describes magnetized fluids with finite electron diffusivity (η ≠ 0). This diffusivity leads to a breaking in the magnetic topology; magnetic field lines can 'reconnect' when they collide. Usually this term is small and reconnections can be handled by thinking of them as not dissimilar to shocks; this process has been shown to be important in the Earth-Solar magnetic interactions. Extended Extended MHD describes a class of phenomena in plasmas that are higher order than resistive MHD, but which can adequately be treated with a single fluid description. These include the effects of Hall physics, electron pressure gradients, finite Larmor Radii in the particle gyromotion, and electron inertia. Two-fluid Two-fluid MHD describes plasmas that include a non-negligible Hall electric field. As a result, the electron and ion momenta must be treated separately. 
This description is more closely tied to Maxwell's equations as an evolution equation for the electric field exists. Hall In 1960, M. J. Lighthill criticized the applicability of ideal or resistive MHD theory for plasmas. It concerned the neglect of the "Hall current term" in Ohm's law, a frequent simplification made in magnetic fusion theory. Hall-magnetohydrodynamics (HMHD) takes into account this electric field description of magnetohydrodynamics, and Ohm's law takes the form E + v × B − 1 n e e ( J × B ) ⏟ Hall current term = η J , {\displaystyle \mathbf {E} +\mathbf {v} \times \mathbf {B} -\underbrace {{\frac {1}{n_{e}e}}(\mathbf {J} \times \mathbf {B} )} _{\text{Hall current term}}=\eta \mathbf {J} ,} where n e {\displaystyle n_{e}} is the electron number density and e {\displaystyle e} is the elementary charge. The most important difference is that in the absence of field line breaking, the magnetic field is tied to the electrons and not to the bulk fluid. Electron MHD Electron Magnetohydrodynamics (EMHD) describes small scales plasmas when electron motion is much faster than the ion one. The main effects are changes in conservation laws, additional resistivity, importance of electron inertia. Many effects of Electron MHD are similar to effects of the Two fluid MHD and the Hall MHD. EMHD is especially important for z-pinch, magnetic reconnection, ion thrusters, neutron stars, and plasma switches. Collisionless MHD is also often used for collisionless plasmas. In that case the MHD equations are derived from the Vlasov equation. Reduced By using a multiscale analysis the (resistive) MHD equations can be reduced to a set of four closed scalar equations. This allows for, amongst other things, more efficient numerical calculations. == Limitations == === Importance of kinetic effects === Another limitation of MHD (and fluid theories in general) is that they depend on the assumption that the plasma is strongly collisional (this is the first criterion listed above), so that the time scale of collisions is shorter than the other characteristic times in the system, and the particle distributions are Maxwellian. This is usually not the case in fusion, space and astrophysical plasmas. When this is not the case, or the interest is in smaller spatial scales, it may be necessary to use a kinetic model which properly accounts for the non-Maxwellian shape of the distribution function. However, because MHD is relatively simple and captures many of the important properties of plasma dynamics it is often qualitatively accurate and is therefore often the first model tried. Effects which are essentially kinetic and not captured by fluid models include double layers, Landau damping, a wide range of instabilities, chemical separation in space plasmas and electron runaway. In the case of ultra-high intensity laser interactions, the incredibly short timescales of energy deposition mean that hydrodynamic codes fail to capture the essential physics. == Applications == === Geophysics === Beneath the Earth's mantle lies the core, which is made up of two parts: the solid inner core and liquid outer core. Both have significant quantities of iron. The liquid outer core moves in the presence of the magnetic field and eddies are set up into the same due to the Coriolis effect. These eddies develop a magnetic field which boosts Earth's original magnetic field—a process which is self-sustaining and is called the geomagnetic dynamo. 
Based on the MHD equations, Glatzmaier and Paul Roberts have made a supercomputer model of the Earth's interior. After running the simulations for thousands of years in virtual time, the changes in Earth's magnetic field can be studied. The simulation results are in good agreement with the observations as the simulations have correctly predicted that the Earth's magnetic field flips every few hundred thousand years. During the flips, the magnetic field does not vanish altogether—it just gets more complex. ==== Earthquakes ==== Some monitoring stations have reported that earthquakes are sometimes preceded by a spike in ultra low frequency (ULF) activity. A remarkable example of this occurred before the 1989 Loma Prieta earthquake in California, although a subsequent study indicates that this was little more than a sensor malfunction. On December 9, 2010, geoscientists announced that the DEMETER satellite observed a dramatic increase in ULF radio waves over Haiti in the month before the magnitude 7.0 Mw 2010 earthquake. Researchers are attempting to learn more about this correlation to find out whether this method can be used as part of an early warning system for earthquakes. === Space Physics === The study of space plasmas near Earth and throughout the Solar System is known as space physics. Areas researched within space physics encompass a large number of topics, ranging from the ionosphere to auroras, Earth's magnetosphere, the Solar wind, and coronal mass ejections. MHD forms the framework for understanding how populations of plasma interact within the local geospace environment. Researchers have developed global models using MHD to simulate phenomena within Earth's magnetosphere, such as the location of Earth's magnetopause (the boundary between the Earth's magnetic field and the solar wind), the formation of the ring current, auroral electrojets, and geomagnetically induced currents. One prominent use of global MHD models is in space weather forecasting. Intense solar storms have the potential to cause extensive damage to satellites and infrastructure, thus it is crucial that such events are detected early. The Space Weather Prediction Center (SWPC) runs MHD models to predict the arrival and impacts of space weather events at Earth. === Astrophysics === MHD applies to astrophysics, including stars, the interplanetary medium (space between the planets), and possibly within the interstellar medium (space between the stars) and jets. Most astrophysical systems are not in local thermal equilibrium, and therefore require an additional kinematic treatment to describe all the phenomena within the system (see Astrophysical plasma). Sunspots are caused by the Sun's magnetic fields, as Joseph Larmor theorized in 1919. The solar wind is also governed by MHD. The differential solar rotation may be the long-term effect of magnetic drag at the poles of the Sun, an MHD phenomenon due to the Parker spiral shape assumed by the extended magnetic field of the Sun. Previously, theories describing the formation of the Sun and planets could not explain how the Sun has 99.87% of the mass, yet only 0.54% of the angular momentum in the Solar System. In a closed system such as the cloud of gas and dust from which the Sun was formed, mass and angular momentum are both conserved. That conservation would imply that as the mass concentrated in the center of the cloud to form the Sun, it would spin faster, much like a skater pulling their arms in. 
The high speed of rotation predicted by early theories would have flung the proto-Sun apart before it could have formed. However, magnetohydrodynamic effects transfer the Sun's angular momentum into the outer solar system, slowing its rotation. Breakdown of ideal MHD (in the form of magnetic reconnection) is known to be the likely cause of solar flares. The magnetic field in a solar active region over a sunspot can store energy that is released suddenly as a burst of motion, X-rays, and radiation when the main current sheet collapses, reconnecting the field. === Magnetic confinement fusion === MHD describes a wide range of physical phenomena occurring in fusion plasmas in devices such as tokamaks or stellarators. The Grad-Shafranov equation derived from ideal MHD describes the equilibrium of axisymmetric toroidal plasma in a tokamak. In tokamak experiments, the equilibrium during each discharge is routinely calculated and reconstructed, which provides information on the shape and position of the plasma controlled by currents in external coils. MHD stability theory is known to govern the operational limits of tokamaks. For example, the ideal MHD kink modes provide hard limits on the achievable plasma beta (Troyon limit) and plasma current (set by the q > 2 {\displaystyle q>2} requirement of the safety factor). In a tokamak, instabilities also emerge from resistive MHD. For instance, tearing modes are instabilities arising within the framework of non-ideal MHD. This is an active field of research, since these instabilities are the starting point for disruptions. === Sensors === Magnetohydrodynamic sensors are used for precision measurements of angular velocities in inertial navigation systems such as in aerospace engineering. Accuracy improves with the size of the sensor. The sensor is capable of surviving in harsh environments. === Engineering === MHD is related to engineering problems such as plasma confinement, liquid-metal cooling of nuclear reactors, and electromagnetic casting (among others). A magnetohydrodynamic drive or MHD propulsor is a method for propelling seagoing vessels using only electric and magnetic fields with no moving parts, using magnetohydrodynamics. The working principle involves electrification of the propellant (gas or water) which can then be directed by a magnetic field, pushing the vehicle in the opposite direction. Although some working prototypes exist, MHD drives remain impractical. The first prototype of this kind of propulsion was built and tested in 1965 by Steward Way, a professor of mechanical engineering at the University of California, Santa Barbara. Way, on leave from his job at Westinghouse Electric, assigned his senior-year undergraduate students to develop a submarine with this new propulsion system. In the early 1990s, a foundation in Japan (Ship & Ocean Foundation (Minato-ku, Tokyo)) built an experimental boat, the Yamato-1, which used a magnetohydrodynamic drive incorporating a superconductor cooled by liquid helium, and could travel at 15 km/h. MHD power generation fueled by potassium-seeded coal combustion gas showed potential for more efficient energy conversion (the absence of solid moving parts allows operation at higher temperatures), but failed due to cost-prohibitive technical difficulties. One major engineering problem was the failure of the wall of the primary-coal combustion chamber due to abrasion. In microfluidics, MHD is studied as a fluid pump for producing a continuous, nonpulsating flow in a complex microchannel design. 
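For the MHD drive described above, the accelerating force is the Lorentz force J × B integrated over the channel; for a uniform current I crossing the magnetic field over an electrode gap d this reduces to F = BId. The numbers below are purely illustrative, not taken from Yamato-1 or any other vessel:

```python
# Rough thrust estimate for an MHD drive channel: the Lorentz force on the
# conducting seawater, integrated over the channel, is F = B * I * d for a
# uniform current I crossing the field over an electrode gap d.
# All values are illustrative assumptions.
B = 4.0        # magnetic field from a superconducting magnet, T
I = 2000.0     # total current driven through the seawater, A
d = 0.5        # electrode separation (current path length in the field), m

thrust = B * I * d
print(f"thrust ~ {thrust:.0f} N")   # ~4 kN for these illustrative values
```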
MHD can be implemented in the continuous casting process of metals to suppress instabilities and control the flow. Industrial MHD problems can be modeled using the open-source software EOF-Library. Two simulation examples are 3D MHD with a free surface for electromagnetic levitation melting, and liquid metal stirring by rotating permanent magnets. === Magnetic drug targeting === An important task in cancer research is developing more precise methods for delivery of medicine to affected areas. One method involves the binding of medicine to biologically compatible magnetic particles (such as ferrofluids), which are guided to the target via careful placement of permanent magnets on the external body. Magnetohydrodynamic equations and finite element analysis are used to study the interaction between the magnetic fluid particles in the bloodstream and the external magnetic field. == See also == === Further reading === Galtier, Sebastien (2016). Introduction to Modern Magnetohydrodynamics. Cambridge University Press. ISBN 9781107158658. == References ==
Wikipedia/Magnetohydrodynamics
Hydrostatics is the branch of fluid mechanics that studies fluids at hydrostatic equilibrium and "the pressure in a fluid or exerted by a fluid on an immersed body". The word "hydrostatics" is sometimes used to refer specifically to water and other liquids, but more often it includes both gases and liquids, whether compressible or incompressible. It encompasses the study of the conditions under which fluids are at rest in stable equilibrium. It is opposed to fluid dynamics, the study of fluids in motion. Hydrostatics is fundamental to hydraulics, the engineering of equipment for storing, transporting and using fluids. It is also relevant to geophysics and astrophysics (for example, in understanding plate tectonics and the anomalies of the Earth's gravitational field), to meteorology, to medicine (in the context of blood pressure), and many other fields. Hydrostatics offers physical explanations for many phenomena of everyday life, such as why atmospheric pressure changes with altitude, why wood and oil float on water, and why the surface of still water is always level according to the curvature of the earth. == History == Some principles of hydrostatics have been known in an empirical and intuitive sense since antiquity, by the builders of boats, cisterns, aqueducts and fountains. Archimedes is credited with the discovery of Archimedes' Principle, which relates the buoyancy force on an object that is submerged in a fluid to the weight of fluid displaced by the object. The Roman engineer Vitruvius warned readers about lead pipes bursting under hydrostatic pressure. The concept of pressure and the way it is transmitted by fluids was formulated by the French mathematician and philosopher Blaise Pascal in 1647. === Hydrostatics in ancient Greece and Rome === ==== Pythagorean Cup ==== The "fair cup" or Pythagorean cup, which dates from about the 6th century BC, is a hydraulic technology whose invention is credited to the Greek mathematician and geometer Pythagoras. It was used as a learning tool. The cup consists of a line carved into the interior of the cup, and a small vertical pipe in the center of the cup that leads to the bottom. The height of this pipe is the same as the line carved into the interior of the cup. The cup may be filled to the line without any fluid passing into the pipe in the center of the cup. However, when the amount of fluid exceeds this fill line, fluid will overflow into the pipe in the center of the cup. Due to the drag that molecules exert on one another, the cup will be emptied. ==== Heron's fountain ==== Heron's fountain is a device invented by Heron of Alexandria that consists of a jet of fluid being fed by a reservoir of fluid. The fountain is constructed in such a way that the height of the jet exceeds the height of the fluid in the reservoir, apparently in violation of principles of hydrostatic pressure. The device consisted of an opening and two containers arranged one above the other. The intermediate pot, which was sealed, was filled with fluid, and several cannula (a small tube for transferring fluid between vessels) connecting the various vessels. Trapped air inside the vessels induces a jet of water out of a nozzle, emptying all water from the intermediate reservoir. === Pascal's contribution in hydrostatics === Pascal made contributions to developments in both hydrostatics and hydrodynamics. 
Pascal's law is a fundamental principle of fluid mechanics that states that any pressure applied to the surface of a fluid is transmitted uniformly throughout the fluid in all directions, in such a way that initial variations in pressure are not changed. == Pressure in fluids at rest == Due to the fundamental nature of fluids, a fluid cannot remain at rest under the presence of a shear stress. However, fluids can exert pressure normal to any contacting surface. If a point in the fluid is thought of as an infinitesimally small cube, then it follows from the principles of equilibrium that the pressure on every side of this unit of fluid must be equal. If this were not the case, the fluid would move in the direction of the resulting force. Thus, the pressure on a fluid at rest is isotropic; i.e., it acts with equal magnitude in all directions. This characteristic allows fluids to transmit force through the length of pipes or tubes; i.e., a force applied to a fluid in a pipe is transmitted, via the fluid, to the other end of the pipe. This principle was first formulated, in a slightly extended form, by Blaise Pascal, and is now called Pascal's law. === Hydrostatic pressure === In a fluid at rest, all frictional and inertial stresses vanish and the state of stress of the system is called hydrostatic. When this condition of V = 0 is applied to the Navier–Stokes equations for viscous fluids or Euler equations (fluid dynamics) for ideal inviscid fluid, the gradient of pressure becomes a function of body forces only. The Navier-Stokes momentum equations are: By setting the flow velocity u = 0 {\displaystyle \mathbf {u} =\mathbf {0} } , they become simply: 0 = − ∇ p + ρ g {\displaystyle \mathbf {0} =-\nabla p+\rho \mathbf {g} } or: ∇ p = ρ g {\displaystyle \nabla p=\rho \mathbf {g} } This is the general form of Stevin's law: the pressure gradient equals the body force force density field. Let us now consider two particular cases of this law. In case of a conservative body force with scalar potential ϕ {\displaystyle \phi } : ρ g = − ∇ ϕ {\displaystyle \rho \mathbf {g} =-\nabla \phi } the Stevin equation becomes: ∇ p = − ∇ ϕ {\displaystyle \nabla p=-\nabla \phi } That can be integrated to give: Δ p = − Δ ϕ {\displaystyle \Delta p=-\Delta \phi } So in this case the pressure difference is the opposite of the difference of the scalar potential associated to the body force. In the other particular case of a body force of constant direction along z: g = − g ( x , y , z ) k ^ {\displaystyle \mathbf {g} =-g(x,y,z){\hat {k}}} the generalised Stevin's law above becomes: ∂ p ∂ z = − ρ ( x , y , z ) g ( x , y , z ) {\displaystyle {\frac {\partial p}{\partial z}}=-\rho (x,y,z)g(x,y,z)} That can be integrated to give another (less-) generalised Stevin's law: p ( x , y , z ) − p 0 ( x , y ) = − ∫ 0 z ρ ( x , y , z ′ ) g ( x , y , z ′ ) d z ′ {\displaystyle p(x,y,z)-p_{0}(x,y)=-\int _{0}^{z}\rho (x,y,z')g(x,y,z')dz'} where: p {\displaystyle p} is the hydrostatic pressure (Pa), ρ {\displaystyle \rho } is the fluid density (kg/m3), g {\displaystyle g} is gravitational acceleration (m/s2), z {\displaystyle z} is the height (parallel to the direction of gravity) of the test area (m), 0 {\displaystyle 0} is the height of the zero reference point of the pressure (m) p 0 {\displaystyle p_{0}} is the hydrostatic pressure field (Pa) along x and y at the zero reference point For water and other liquids, this integral can be simplified significantly for many practical applications, based on the following two assumptions. 
Since many liquids can be considered incompressible, a reasonably good estimate can be made by assuming a constant density throughout the liquid. The same assumption cannot be made within a gaseous environment. Also, since the height Δz of the fluid column between z and z0 is often reasonably small compared to the radius of the Earth, one can neglect the variation of g. Under these circumstances, the density and the gravitational acceleration can be taken outside the integral, and the law simplifies to Δp(z) = ρgΔz, where Δz is the height z − z0 of the liquid column between the test volume and the zero reference point of the pressure. This formula is often called Stevin's law. One can also arrive at the above formula by considering the first particular case of the equation for a conservative body force field: in fact the body force field of uniform intensity and direction ρg(x, y, z) = −ρg k̂ is conservative, so one can write the body force density as ρg = ∇(−ρgz). The body force density then has a simple scalar potential, ϕ(z) = −ρgz, and the pressure difference again follows Stevin's law: Δp = −Δϕ = ρgΔz. The reference point should lie at or below the surface of the liquid. Otherwise, one has to split the integral into two (or more) terms, one with the constant liquid density ρliquid and one with the density ρ(z′) of the fluid above the surface. For example, the absolute pressure compared to vacuum is p = ρgΔz + p0, where Δz is the total height of the liquid column from the test area to the surface, and p0 is the atmospheric pressure, i.e., the pressure calculated from the remaining integral over the air column from the liquid surface to infinity. This can easily be visualized using a pressure prism. Hydrostatic pressure has been used in the preservation of foods in a process called pascalization. === Medicine === In medicine, hydrostatic pressure in blood vessels is the pressure of the blood against the vessel wall. It is the opposing force to oncotic pressure. In capillaries, hydrostatic pressure (also known as capillary blood pressure) is higher than the opposing “colloid osmotic pressure” in blood—a “constant” pressure primarily produced by circulating albumin—at the arteriolar end of the capillary. This pressure forces plasma and nutrients out of the capillaries and into surrounding tissues. Fluid and the cellular wastes in the tissues enter the capillaries at the venule end, where the hydrostatic pressure is less than the osmotic pressure in the vessel. === Atmospheric pressure === Statistical mechanics shows that, for a pure ideal gas of constant temperature T in a gravitational field, its pressure p will vary with height h as p(h) = p(0)e^(−Mgh/kT), where g is the acceleration due to gravity, T is the absolute temperature, k is the Boltzmann constant, M is the mass of one molecule of the gas, p is the pressure, and h is the height. This is known as the barometric formula, and may be derived from assuming the pressure is hydrostatic. If there are multiple types of molecules in the gas, the partial pressure of each type will be given by this equation.
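Both of the formulas above are easy to evaluate numerically. The sketch below checks the incompressible-liquid result Δp = ρgΔz for water and the isothermal barometric formula for air; all parameter values are illustrative textbook numbers:

```python
import math

# 1) Incompressible liquid (Stevin's law): pressure rise with depth in water.
rho_water = 1000.0    # kg/m^3
g = 9.81              # m/s^2
p0 = 101_325.0        # surface (atmospheric) pressure, Pa
for depth in (1.0, 10.0, 100.0):
    p = p0 + rho_water * g * depth
    print(f"water, {depth:5.0f} m deep: p = {p/1e5:6.2f} bar")

# 2) Isothermal barometric formula p(h) = p(0) * exp(-M g h / (k T)) for air.
k = 1.380649e-23      # Boltzmann constant, J/K
M = 4.81e-26          # mean mass of one air molecule, kg (about 29 u)
T = 288.0             # assumed constant temperature, K
for h in (0.0, 1000.0, 5000.0, 8848.0):
    p = p0 * math.exp(-M * g * h / (k * T))
    print(f"air, {h:6.0f} m high: p ~ {p/1000:6.1f} kPa")
```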
Under most conditions, the distribution of each species of gas is independent of the other species. === Buoyancy === Any body of arbitrary shape which is immersed, partly or fully, in a fluid will experience the action of a net force in the opposite direction of the local pressure gradient. If this pressure gradient arises from gravity, the net force is in the vertical direction opposite that of the gravitational force. This vertical force is termed buoyancy or buoyant force and is equal in magnitude, but opposite in direction, to the weight of the displaced fluid. Mathematically, F = ρ g V {\displaystyle F=\rho gV} where ρ is the density of the fluid, g is the acceleration due to gravity, and V is the volume of fluid directly above the curved surface. In the case of a ship, for instance, its weight is balanced by pressure forces from the surrounding water, allowing it to float. If more cargo is loaded onto the ship, it would sink more into the water – displacing more water and thus receive a higher buoyant force to balance the increased weight. Discovery of the principle of buoyancy is attributed to Archimedes. === Hydrostatic force on submerged surfaces === The horizontal and vertical components of the hydrostatic force acting on a submerged surface are given by the following formula: F h = p c A F v = ρ g V {\displaystyle {\begin{aligned}F_{\mathrm {h} }&=p_{\mathrm {c} }A\\F_{\mathrm {v} }&=\rho gV\end{aligned}}} where pc is the pressure at the centroid of the vertical projection of the submerged surface A is the area of the same vertical projection of the surface ρ is the density of the fluid g is the acceleration due to gravity V is the volume of fluid directly above the curved surface == Liquids (fluids with free surfaces) == Liquids can have free surfaces at which they interface with gases, or with a vacuum. In general, the lack of the ability to sustain a shear stress entails that free surfaces rapidly adjust towards an equilibrium. However, on small length scales, there is an important balancing force from surface tension. === Capillary action === When liquids are constrained in vessels whose dimensions are small, compared to the relevant length scales, surface tension effects become important leading to the formation of a meniscus through capillary action. This capillary action has profound consequences for biological systems as it is part of one of the two driving mechanisms of the flow of water in plant xylem, the transpirational pull. === Hanging drops === Without surface tension, drops would not be able to form. The dimensions and stability of drops are determined by surface tension. The drop's surface tension is directly proportional to the cohesion property of the fluid. == See also == Communicating vessels – Set of internally connected containers containing a homogeneous fluid Hydrostatic test – Non-destructive test of pressure vessels D-DIA – Apparatus used for high pressure and high temperature deformation experiments == References == == Further reading == Batchelor, George K. (1967). An Introduction to Fluid Dynamics. Cambridge University Press. ISBN 0-521-66396-2. Falkovich, Gregory (2011). Fluid Mechanics (A short course for physicists). Cambridge University Press. ISBN 978-1-107-00575-4. Kundu, Pijush K.; Cohen, Ira M. (2008). Fluid Mechanics (4th rev. ed.). Academic Press. ISBN 978-0-12-373735-9. Currie, I. G. (1974). Fundamental Mechanics of Fluids. McGraw-Hill. ISBN 0-07-015000-1. Massey, B.; Ward-Smith, J. (2005). Mechanics of Fluids (8th ed.). Taylor & Francis. 
ISBN 978-0-415-36206-1. White, Frank M. (2003). Fluid Mechanics. McGraw–Hill. ISBN 0-07-240217-2. == External links == The Flow of Dry Water - The Feynman Lectures on Physics
Wikipedia/Hydrostatics
The nuclear force (or nucleon–nucleon interaction, residual strong force, or, historically, strong nuclear force) is a force that acts between hadrons, most commonly observed between protons and neutrons of atoms. Neutrons and protons, both nucleons, are affected by the nuclear force almost identically. Since protons have charge +1 e, they experience an electric force that tends to push them apart, but at short range the attractive nuclear force is strong enough to overcome the electrostatic force. The nuclear force binds nucleons into atomic nuclei. The nuclear force is powerfully attractive between nucleons at distances of about 0.8 femtometre (fm, or 0.8×10−15 m), but it rapidly decreases to insignificance at distances beyond about 2.5 fm. At distances less than 0.7 fm, the nuclear force becomes repulsive. This repulsion is responsible for the size of nuclei, since nucleons can come no closer than the force allows. (The size of an atom, of size in the order of angstroms (Å, or 10−10 m), is five orders of magnitude larger.) The nuclear force is not simple, though, as it depends on the nucleon spins, has a tensor component, and may depend on the relative momentum of the nucleons. The nuclear force has an essential role in storing energy that is used in nuclear power and nuclear weapons. Work (energy) is required to bring charged protons together against their electric repulsion. This energy is stored when the protons and neutrons are bound together by the nuclear force to form a nucleus. The mass of a nucleus is less than the sum total of the individual masses of the protons and neutrons. The difference in masses is known as the mass defect, which can be expressed as an energy equivalent. Energy is released when a heavy nucleus breaks apart into two or more lighter nuclei. This energy is the internucleon potential energy that is released when the nuclear force no longer holds the charged nuclear fragments together. A quantitative description of the nuclear force relies on equations that are partly empirical. These equations model the internucleon potential energies, or potentials. (Generally, forces within a system of particles can be more simply modelled by describing the system's potential energy; the negative gradient of a potential is equal to the vector force.) The constants for the equations are phenomenological, that is, determined by fitting the equations to experimental data. The internucleon potentials attempt to describe the properties of nucleon–nucleon interaction. Once determined, any given potential can be used in, e.g., the Schrödinger equation to determine the quantum mechanical properties of the nucleon system. The discovery of the neutron in 1932 revealed that atomic nuclei were made of protons and neutrons, held together by an attractive force. By 1935 the nuclear force was conceived to be transmitted by particles called mesons. This theoretical development included a description of the Yukawa potential, an early example of a nuclear potential. Pions, fulfilling the prediction, were discovered experimentally in 1947. By the 1970s, the quark model had been developed, by which the mesons and nucleons were viewed as composed of quarks and gluons. By this new model, the nuclear force, resulting from the exchange of mesons between neighbouring nucleons, is a multiparticle interaction, the collective effect of strong force on the underlining structure of the nucleons. 
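The mass defect and its energy equivalent described above can be illustrated with a short calculation for helium-4; the masses are standard textbook values quoted to a few digits:

```python
# Mass defect and binding energy of helium-4 from E = m c^2.
c = 2.998e8                 # speed of light, m/s
u = 1.66054e-27             # atomic mass unit, kg
MeV = 1.602e-13             # J per MeV

m_p, m_n = 1.007276, 1.008665          # proton and neutron masses (u)
m_he4_nucleus = 4.001506               # helium-4 nuclear mass (u)

defect = 2 * m_p + 2 * m_n - m_he4_nucleus      # mass defect in u
E_bind = defect * u * c**2 / MeV                # binding energy in MeV

print(f"mass defect: {defect:.6f} u")
print(f"binding energy: {E_bind:.1f} MeV (~{E_bind/4:.1f} MeV per nucleon)")
```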
== Description == While the nuclear force is usually associated with nucleons, more generally this force is felt between hadrons, or particles composed of quarks. At small separations between nucleons (less than ~ 0.7 fm between their centres, depending upon spin alignment) the force becomes repulsive, which keeps the nucleons at a certain average separation. For identical nucleons (such as two neutrons or two protons) this repulsion arises from the Pauli exclusion force. A Pauli repulsion also occurs between quarks of the same flavour from different nucleons (a proton and a neutron). === Field strength === At distances larger than 0.7 fm the force becomes attractive between spin-aligned nucleons, becoming maximal at a centre–centre distance of about 0.9 fm. Beyond this distance the force drops exponentially, until beyond about 2.0 fm separation, the force is negligible. Nucleons have a radius of about 0.8 fm. At short distances (less than 1.7 fm or so), the attractive nuclear force is stronger than the repulsive Coulomb force between protons; it thus overcomes the repulsion of protons within the nucleus. However, the Coulomb force between protons has a much greater range as it varies as the inverse square of the charge separation, and Coulomb repulsion thus becomes the only significant force between protons when their separation exceeds about 2 to 2.5 fm. The nuclear force has a spin-dependent component. The force is stronger for particles with their spins aligned than for those with their spins anti-aligned. If two particles are the same, such as two neutrons or two protons, the force is not enough to bind the particles, since the spin vectors of two particles of the same type must point in opposite directions when the particles are near each other and are (save for spin) in the same quantum state. This requirement for fermions stems from the Pauli exclusion principle. For fermion particles of different types, such as a proton and neutron, particles may be close to each other and have aligned spins without violating the Pauli exclusion principle, and the nuclear force may bind them (in this case, into a deuteron), since the nuclear force is much stronger for spin-aligned particles. But if the particles' spins are anti-aligned, the nuclear force is too weak to bind them, even if they are of different types. The nuclear force also has a tensor component which depends on the interaction between the nucleon spins and the angular momentum of the nucleons, leading to deformation from a simple spherical shape. === Nuclear binding === To disassemble a nucleus into unbound protons and neutrons requires work against the nuclear force. Conversely, energy is released when a nucleus is created from free nucleons or other nuclei: the nuclear binding energy. Because of mass–energy equivalence (i.e. Einstein's formula E = mc2), releasing this energy causes the mass of the nucleus to be lower than the total mass of the individual nucleons, leading to the so-called "mass defect". The nuclear force is nearly independent of whether the nucleons are neutrons or protons. This property is called charge independence. The force depends on whether the spins of the nucleons are parallel or antiparallel, as it has a non-central or tensor component. This part of the force does not conserve orbital angular momentum, which under the action of central forces is conserved. 
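For scale, the Coulomb repulsion between two protons at the separations quoted above can be evaluated directly. A minimal sketch; the comparison to the depth of the attractive nuclear potential in the comment is an approximate, order-of-magnitude statement:

```python
import math

# Coulomb potential energy between two protons, U = e^2 / (4 pi eps0 r),
# at separations typical of the nuclear force's range. For comparison, the
# attractive nuclear potential near 1 fm is tens of MeV deep.
e    = 1.602e-19      # elementary charge, C
eps0 = 8.854e-12      # vacuum permittivity, F/m
MeV  = 1.602e-13      # J per MeV

for r_fm in (0.8, 1.0, 2.5):
    r = r_fm * 1e-15                                   # separation in metres
    U = e**2 / (4 * math.pi * eps0 * r) / MeV
    print(f"r = {r_fm:.1f} fm: Coulomb repulsion ~ {U:.2f} MeV")
```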
The symmetry resulting in the strong force, proposed by Werner Heisenberg, is that protons and neutrons are identical in every respect, other than their charge. This is not completely true, because neutrons are a tiny bit heavier, but it is an approximate symmetry. Protons and neutrons are therefore viewed as the same particle, but with different isospin quantum numbers; conventionally, the proton is isospin up, while the neutron is isospin down. The strong force is invariant under SU(2) isospin transformations, just as other interactions between particles are invariant under SU(2) transformations of intrinsic spin. In other words, both isospin and intrinsic spin transformations are isomorphic to the SU(2) symmetry group. There are only strong attractions when the total isospin of the set of interacting particles is 0, which is confirmed by experiment. Our understanding of the nuclear force is obtained by scattering experiments and the binding energy of light nuclei. The nuclear force occurs by the exchange of virtual light mesons, such as the virtual pions, as well as two types of virtual mesons with spin (vector mesons), the rho mesons and the omega mesons. The vector mesons account for the spin-dependence of the nuclear force in this "virtual meson" picture. The nuclear force is distinct from what historically was known as the weak nuclear force. The weak interaction is one of the four fundamental interactions, and plays a role in processes such as beta decay. The weak force plays no role in the interaction of nucleons, though it is responsible for the decay of neutrons to protons and vice versa. == History == The nuclear force has been at the heart of nuclear physics ever since the field was born in 1932 with the discovery of the neutron by James Chadwick. The traditional goal of nuclear physics is to understand the properties of atomic nuclei in terms of the "bare" interaction between pairs of nucleons, or nucleon–nucleon forces (NN forces). Within months after the discovery of the neutron, Werner Heisenberg and Dmitri Ivanenko had proposed proton–neutron models for the nucleus. Heisenberg approached the description of protons and neutrons in the nucleus through quantum mechanics, an approach that was not at all obvious at the time. Heisenberg's theory for protons and neutrons in the nucleus was a "major step toward understanding the nucleus as a quantum mechanical system". Heisenberg introduced the first theory of nuclear exchange forces that bind the nucleons. He considered protons and neutrons to be different quantum states of the same particle, i.e., nucleons distinguished by the value of their nuclear isospin quantum numbers. One of the earliest models for the nucleus was the liquid-drop model developed in the 1930s. One property of nuclei is that the average binding energy per nucleon is approximately the same for all stable nuclei, which is similar to a liquid drop. The liquid-drop model treated the nucleus as a drop of incompressible nuclear fluid, with nucleons behaving like molecules in a liquid. The model was first proposed by George Gamow and then developed by Niels Bohr, Werner Heisenberg, and Carl Friedrich von Weizsäcker. This crude model did not explain all the properties of the nucleus, but it did explain the spherical shape of most nuclei. The model also gave good predictions for the binding energy of nuclei. In 1934, Hideki Yukawa made the earliest attempt to explain the nature of the nuclear force. 
According to his theory, massive bosons (mesons) mediate the interaction between two nucleons. In light of quantum chromodynamics (QCD)—and, by extension, the Standard Model—meson theory is no longer perceived as fundamental. But the meson-exchange concept (where hadrons are treated as elementary particles) continues to represent the best working model for a quantitative NN potential. The Yukawa potential (also called a screened Coulomb potential) is a potential of the form V_{\text{Yukawa}}(r) = -g^{2}\,e^{-\mu r}/r, where g is a magnitude scaling constant, i.e., the amplitude of the potential, μ sets the inverse range of the potential and is related to the mass of the exchanged (Yukawa) particle, and r is the radial distance between the particles. The potential is monotone increasing, implying that the force is always attractive. The constants are determined empirically. The Yukawa potential depends only on the distance r between particles, hence it models a central force. Throughout the 1930s a group at Columbia University led by I. I. Rabi developed magnetic-resonance techniques to determine the magnetic moments of nuclei. These measurements led to the discovery in 1939 that the deuteron also possessed an electric quadrupole moment. This electrical property of the deuteron had been interfering with the measurements by the Rabi group. The deuteron, composed of a proton and a neutron, is one of the simplest nuclear systems. The discovery meant that the physical shape of the deuteron was not symmetric, which provided valuable insight into the nature of the nuclear force binding nucleons. In particular, the result showed that the nuclear force was not a central force, but had a tensor character. Hans Bethe identified the discovery of the deuteron's quadrupole moment as one of the important events during the formative years of nuclear physics. Historically, the task of describing the nuclear force phenomenologically was formidable. The first semi-empirical quantitative models came in the mid-1950s, such as the Woods–Saxon potential (1954). There was substantial progress in experiment and theory related to the nuclear force in the 1960s and 1970s. One influential model was the Reid potential (1968), V_{\text{Reid}}(r) = -10.463\,e^{-\mu r}/(\mu r) - 1650.6\,e^{-4\mu r}/(\mu r) + 6484.2\,e^{-7\mu r}/(\mu r), where μ = 0.7 fm^{-1} and where the potential is given in units of MeV. In recent years, experimenters have concentrated on the subtleties of the nuclear force, such as its charge dependence, the precise value of the πNN coupling constant, improved phase-shift analysis, high-precision NN data, high-precision NN potentials, NN scattering at intermediate and high energies, and attempts to derive the nuclear force from QCD. == As a residual of strong force == The nuclear force is a residual effect of the more fundamental strong force, or strong interaction. The strong interaction is the attractive force that binds the elementary particles called quarks together to form the nucleons (protons and neutrons) themselves. This more powerful force, one of the fundamental forces of nature, is mediated by particles called gluons. Gluons hold quarks together through colour charge which is analogous to electric charge, but far stronger.
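Before continuing with the residual-strong-force picture, it is worth noting that the Reid (1968) parameterisation quoted above is an explicit closed-form expression and can be evaluated directly. The minimal sketch below tabulates it at a handful of separations; the sample radii are arbitrary illustrative choices, not values from this article.

```python
import numpy as np

MU = 0.7  # fm^-1, as in the Reid (1968) parameterisation quoted above

def v_reid(r_fm: float) -> float:
    """Reid (1968) central potential in MeV for a separation r_fm in femtometres."""
    x = MU * r_fm
    return (-10.463 * np.exp(-x) / x
            - 1650.6 * np.exp(-4.0 * x) / x
            + 6484.2 * np.exp(-7.0 * x) / x)

for r in (0.5, 0.8, 1.0, 1.5, 2.0, 3.0):
    print(f"r = {r:.1f} fm   V = {v_reid(r):9.1f} MeV")

# The output reproduces the behaviour described in the text: strong repulsion
# below about 0.7 fm, attraction around 1 fm, and a rapid fall-off beyond ~2 fm.
```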
Quarks, gluons, and their dynamics are mostly confined within nucleons, but residual influences extend slightly beyond nucleon boundaries to give rise to the nuclear force. The nuclear forces arising between nucleons are analogous to the forces in chemistry between neutral atoms or molecules called London dispersion forces. Such forces between atoms are much weaker than the attractive electrical forces that hold the atoms themselves together (i.e., that bind electrons to the nucleus), and their range between atoms is shorter, because they arise from small separation of charges inside the neutral atom. Similarly, even though nucleons are made of quarks in combinations which cancel most gluon forces (they are "colour neutral"), some combinations of quarks and gluons nevertheless leak away from nucleons, in the form of short-range nuclear force fields that extend from one nucleon to another nearby nucleon. These nuclear forces are very weak compared to direct gluon forces ("colour forces" or strong forces) inside nucleons, and the nuclear forces extend only over a few nuclear diameters, falling exponentially with distance. Nevertheless, they are strong enough to bind neutrons and protons over short distances, and overcome the electrical repulsion between protons in the nucleus. Sometimes, the nuclear force is called the residual strong force, in contrast to the strong interactions which arise from QCD. This phrasing arose during the 1970s when QCD was being established. Before that time, the strong nuclear force referred to the inter-nucleon potential. After the verification of the quark model, strong interaction has come to mean QCD. == Nucleon–nucleon potentials == Two-nucleon systems such as the deuteron, the nucleus of a deuterium atom, as well as proton–proton or neutron–proton scattering are ideal for studying the NN force. Such systems can be described by attributing a potential (such as the Yukawa potential) to the nucleons and using the potentials in a Schrödinger equation. The form of the potential is derived phenomenologically (by measurement), although for the long-range interaction, meson-exchange theories help to construct the potential. The parameters of the potential are determined by fitting to experimental data such as the deuteron binding energy or NN elastic scattering cross sections (or, equivalently in this context, so-called NN phase shifts). The most widely used NN potentials are the Paris potential, the Argonne AV18 potential, the CD-Bonn potential, and the Nijmegen potentials. A more recent approach is to develop effective field theories for a consistent description of nucleon–nucleon and three-nucleon forces. Quantum hadrodynamics is an effective field theory of the nuclear force, comparable to QCD for colour interactions and QED for electromagnetic interactions. Additionally, chiral symmetry breaking can be analyzed in terms of an effective field theory (called chiral perturbation theory) which allows perturbative calculations of the interactions between nucleons with pions as exchange particles. === From nucleons to nuclei === The ultimate goal of nuclear physics would be to describe all nuclear interactions from the basic interactions between nucleons. This is called the microscopic or ab initio approach of nuclear physics. There are two major obstacles to overcome: Calculations in many-body systems are difficult (because of multi-particle interactions) and require advanced computation techniques. 
There is evidence that three-nucleon forces (and possibly higher multi-particle interactions) play a significant role. This means that three-nucleon potentials must be included in the model. This is an active area of research with ongoing advances in computational techniques leading to better first-principles calculations of the nuclear shell structure. Two- and three-nucleon potentials have been implemented for nuclides up to A = 12. === Nuclear potentials === A successful way of describing nuclear interactions is to construct one potential for the whole nucleus instead of considering all its nucleon components. This is called the macroscopic approach. For example, scattering of neutrons from nuclei can be described by considering a plane wave in the potential of the nucleus, which comprises a real part and an imaginary part. This model is often called the optical model since it resembles the case of light scattered by an opaque glass sphere. Nuclear potentials can be local or global: local potentials are limited to a narrow energy range and/or a narrow nuclear mass range, while global potentials, which have more parameters and are usually less accurate, are functions of the energy and the nuclear mass and can therefore be used in a wider range of applications. == See also == Physics portal Nuclear binding energy == References == == Bibliography == == Further reading == Ruprecht Machleidt, "Nuclear Forces", Scholarpedia, 9(1):30710. doi:10.4249/scholarpedia.30710.
Wikipedia/Nuclear_force
Copernican heliocentrism is the astronomical model developed by Nicolaus Copernicus and published in 1543. This model positioned the Sun at the center of the Universe, motionless, with Earth and the other planets orbiting around it in circular paths, modified by epicycles, and at uniform speeds. The Copernican model displaced the geocentric model of Ptolemy that had prevailed for centuries, which had placed Earth at the center of the Universe. Although Copernicus had circulated an outline of his own heliocentric theory to colleagues sometime before 1514, he did not decide to publish it until he was urged to do so later by his pupil Rheticus. Copernicus's challenge was to present a practical alternative to the Ptolemaic model by more elegantly and accurately determining the length of a solar year while preserving the metaphysical implications of a mathematically ordered cosmos. Thus, his heliocentric model retained several of the Ptolemaic elements, causing inaccuracies, such as the planets' circular orbits, epicycles, and uniform speeds, while at the same time using accurate ideas such as: The Earth is one of several planets revolving around a stationary sun in a determined order. The Earth has three motions: daily rotation, annual revolution, and annual tilting of its axis. Retrograde motion of the planets is explained by the Earth's motion. The distance from the Earth to the Sun is small compared to the distance from the Sun to the stars. The Copernican model was later replaced by Kepler's laws of planetary motion. == Background == === Antiquity === Philolaus (4th century BCE) was one of the first to hypothesize movement of the Earth, probably inspired by Pythagoras' theories about a spherical, moving globe. In the 3rd century BCE, Aristarchus of Samos proposed what was, so far as is known, the first serious model of a heliocentric Solar System, having developed some of Heraclides Ponticus' theories (speaking of a "revolution of the Earth on its axis" every 24 hours). Though his original text has been lost, a reference in Archimedes' book The Sand Reckoner (Archimedis Syracusani Arenarius & Dimensio Circuli) describes a work in which Aristarchus advanced the heliocentric model. Archimedes wrote: You [King Gelon] are aware the 'universe' is the name given by most astronomers to the sphere the center of which is the center of the Earth, while its radius is equal to the straight line between the center of the Sun and the center of the Earth. This is the common account as you have heard from astronomers. But Aristarchus has brought out a book consisting of certain hypotheses, wherein it appears, as a consequence of the assumptions made, that the universe is many times greater than the 'universe' just mentioned. His hypotheses are that the fixed stars and the Sun remain unmoved, that the Earth revolves about the Sun on the circumference of a circle, the Sun lying in the middle of the orbit, and that the sphere of the fixed stars, situated about the same center as the Sun, is so great that the circle in which he supposes the Earth to revolve bears such a proportion to the distance of the fixed stars as the center of the sphere bears to its surface. It is a common misconception that the heliocentric view was rejected by the contemporaries of Aristarchus. This is the result of Gilles Ménage's translation of a passage from Plutarch's On the Apparent Face in the Orb of the Moon.
Plutarch reported that Cleanthes (a contemporary of Aristarchus and head of the Stoics), as a worshiper of the Sun and opponent of the heliocentric model, was jokingly told by Aristarchus that he should be charged with impiety. Ménage, shortly after the trials of Galileo and Giordano Bruno, exchanged an accusative (identifying the object of the verb) with a nominative (the subject of the sentence), and vice versa, so that the accusation of impiety fell on the proponent of heliocentrism. The resulting misconception of an isolated and persecuted Aristarchus is still transmitted today. ==== Ptolemaic system ==== The prevailing astronomical model of the cosmos in Europe in the 1,400 years leading up to the 16th century was the Ptolemaic System, a geocentric model created by Claudius Ptolemy in his Almagest, dating from about 150 CE. Throughout the Middle Ages it was spoken of as the authoritative text on astronomy, although its author remained a little understood figure frequently mistaken for one of the Ptolemaic rulers of Egypt. The Ptolemaic system drew on many previous theories that viewed Earth as a stationary center of the universe. Stars were embedded in a large outer sphere which rotated relatively rapidly, while the planets dwelt in smaller spheres between—a separate one for each planet. To account for apparent anomalies in this view, such as the apparent retrograde motion of the planets, a system of deferents and epicycles was used. The planet was said to revolve in a small circle (the epicycle) about a center, which itself revolved in a larger circle (the deferent) about a center on or near the Earth. A complementary theory to Ptolemy's employed homocentric spheres: the spheres within which the planets rotated could themselves rotate somewhat. This theory predated Ptolemy (it was first devised by Eudoxus of Cnidus; by the time of Copernicus it was associated with Averroes). Also popular with astronomers were variations such as eccentrics—by which the rotational axis was offset and not completely at the center. The planets were also made to exhibit irregular motions that deviated from a uniform and circular path. Observed over long periods, the planets' motions showed episodes of apparent reversal, and accounting for this retrograde motion was the principal reason these additional circles, the epicycles, were introduced. Ptolemy's unique contribution to this theory was the equant—a point about which the center of a planet's epicycle moved with uniform angular velocity, but which was offset from the center of its deferent. This violated one of the fundamental principles of Aristotelian cosmology—namely, that the motions of the planets should be explained in terms of uniform circular motion, and was considered a serious defect by many medieval astronomers. ==== Aryabhata ==== In 499 CE, the Indian astronomer and mathematician Aryabhata, influenced by Greek astronomy, propounded a planetary model that explicitly incorporated Earth's rotation about its axis, which he explained as the cause of the apparent westward motion of the stars. He also believed that the orbits of planets are elliptical. Aryabhata's followers were particularly strong in South India, where his principles of the diurnal rotation of Earth, among others, were followed and a number of secondary works were based on them. === Middle Ages === ==== Islamic astronomers ==== Several Islamic astronomers questioned the Earth's apparent immobility and centrality within the universe.
Some accepted that the Earth rotates around its axis, such as Al-Sijzi, who invented an astrolabe based on a belief held by some of his contemporaries "that the motion we see is due to the Earth's movement and not to that of the sky". That others besides Al-Sijzi held this view is further confirmed by a reference from an Arabic work in the 13th century which states: "According to the geometers [or engineers] (muhandisīn), the earth is in constant circular motion, and what appears to be the motion of the heavens is actually due to the motion of the earth and not the stars". In the 12th century, Nur ad-Din al-Bitruji proposed a complete alternative to the Ptolemaic system (although not heliocentric). He declared the Ptolemaic system to be an imaginary model, successful at predicting planetary positions but not real or physical. Al-Bitruji's alternative system spread through most of Europe during the 13th century. Mathematical techniques developed in the 13th to 14th centuries by the Arab and Persian astronomers Mu'ayyad al-Din al-Urdi, Nasir al-Din al-Tusi, and Ibn al-Shatir for geocentric models of planetary motions closely resemble some of the techniques used later by Copernicus in his heliocentric models. ==== European astronomers post-Ptolemy ==== Martianus Capella (5th century CE) expressed the opinion that the planets Venus and Mercury did not go about the Earth but instead circled the Sun. Capella's model was discussed in the Early Middle Ages by various anonymous 9th-century commentators and Copernicus mentions him as an influence on his own work. Macrobius (420 CE) described a heliocentric model. John Scotus Eriugena (815–877 CE) proposed a model reminiscent of that of Tycho Brahe. From the 13th century onward, European scholars were well aware of problems with Ptolemaic astronomy. The debate was precipitated by the reception of Averroes' criticism of Ptolemy, and it was again revived by the recovery of Ptolemy's text and its translation into Latin in the mid-15th century. Otto E. Neugebauer in 1957 argued that the debate in 15th-century Latin scholarship must also have been informed by the criticism of Ptolemy produced after Averroes, by the Ilkhanid-era (13th to 14th centuries) Persian school of astronomy associated with the Maragheh observatory (especially the works of al-Urdi, al-Tusi and al-Shatir). It has been argued that Copernicus could have independently discovered the Tusi couple or taken the idea from Proclus's Commentary on the First Book of Euclid, which Copernicus cited. Another possible source for Copernicus' knowledge of this mathematical device is the Questiones de Spera of Nicole Oresme, who described how a reciprocating linear motion of a celestial body could be produced by a combination of circular motions similar to those proposed by al-Tusi. In Copernicus' day, the most up-to-date version of the Ptolemaic system was that of Georg von Peuerbach (1423–1461) and his student Regiomontanus (1436–1476). The state of the question as received by Copernicus is summarized in the Theoricae novae planetarum by Peuerbach, compiled from lecture notes by Regiomontanus in 1454, but not printed until 1472. Peuerbach attempts to give a new, mathematically more elegant presentation of Ptolemy's system, but he does not arrive at heliocentrism. Regiomontanus was the teacher of Domenico Maria Novara da Ferrara, who was in turn the teacher of Copernicus.
There is a possibility that Regiomontanus had already arrived at a theory of heliocentrism before his death in 1476, some 30 years before Copernicus, as he paid particular attention to the heliocentric theory of Aristarchus in a late work and mentions the "motion of the Earth" in a letter. By 1470, the accuracy of observations by the Vienna school of astronomy, of which Peuerbach and Regiomontanus were members, was high enough to make the eventual development of heliocentrism all but inevitable. == Copernican theory == Copernicus' major work, De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres; first edition 1543 in Nuremberg, second edition 1566 in Basel), was a compendium of six books published during the year of his death, though he had arrived at his theory several decades earlier. The work marks the beginning of the shift away from a geocentric (and anthropocentric) universe with the Earth at its center. Copernicus held that the Earth is another planet revolving around the fixed Sun once a year and turning on its axis once a day. But while Copernicus put the Sun at the center of the celestial spheres, he did not put it at the exact center of the universe, but near it. Copernicus' system used only uniform circular motions, correcting what was seen by many as the chief inelegance in Ptolemy's system. The Copernican model replaced Ptolemy's equant circles with more epicycles. The 1,500 years of observations accumulated under Ptolemy's model gave Copernicus more accurate data on the planets' motions, and this is the main reason that Copernicus' system had even more epicycles than Ptolemy's. The additional epicycles yielded somewhat more accurate predictions of the planets' positions, "although not enough to get excited about". The Copernican system can be summarized in several propositions, as Copernicus himself did in his early Commentariolus that he handed only to friends, probably in the 1510s. The "little commentary" was never printed. Its existence was only known indirectly until a copy was discovered in Stockholm around 1880, and another in Vienna a few years later. The major features of Copernican theory are: Heavenly motions are uniform, eternal, and circular or compounded of several circles (epicycles). The center of the universe is near the Sun. Around the Sun, in order, are Mercury, Venus, the Earth and Moon, Mars, Jupiter, Saturn, and the fixed stars. The Earth has three motions: daily rotation, annual revolution, and annual tilting of its axis. Retrograde motion of the planets is explained by the Earth's motion (a point illustrated numerically in the sketch below). The distance from the Earth to the Sun is small compared to the distance to the stars. Inspiration came to Copernicus not from observation of the planets, but from reading two authors, Cicero and Plutarch. In Cicero's writings, Copernicus found an account of the theory of Hicetas. Plutarch provided an account of the Pythagoreans Heraclides Ponticus, Philolaus, and Ecphantes. These authors had proposed a moving Earth, which did not revolve around a central Sun.
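The retrograde-motion proposition noted in the list above can be made concrete with a small numerical sketch. The orbital radii and periods below are rounded modern values for Earth and Mars, used purely for illustration (Copernicus worked with different parameters), and both orbits are idealised as circular and coplanar:

```python
import numpy as np

# Circular, coplanar heliocentric orbits (rounded modern values, illustrative only)
a_earth, T_earth = 1.000, 1.000   # semi-major axis in AU, period in years
a_mars,  T_mars  = 1.524, 1.881

t = np.linspace(0.0, 4.0, 4000)   # four years of closely spaced samples

# Positions in the ecliptic plane, represented as complex numbers
earth = a_earth * np.exp(2j * np.pi * t / T_earth)
mars  = a_mars  * np.exp(2j * np.pi * t / T_mars)

# Apparent (geocentric) ecliptic longitude of Mars as seen from the moving Earth
longitude = np.unwrap(np.angle(mars - earth))

# Retrograde motion = intervals in which the apparent longitude decreases
retrograde = np.diff(longitude) < 0
print(f"Mars appears retrograde for {100 * retrograde.mean():.0f}% of the sampled span")

# Even though both bodies move uniformly on circles around the Sun, an observer
# on the moving Earth periodically sees Mars reverse direction, with no
# epicycles required.
```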
Copernicus cited Aristarchus and Philolaus in an early manuscript of his book which survives, stating: "Philolaus believed in the mobility of the earth, and some even say that Aristarchus of Samos was of that opinion". For unknown reasons (although possibly out of reluctance to quote pre-Christian sources), Copernicus did not include this passage in the publication of his book. Copernicus used what is now known as the Urdi lemma and the Tusi couple in the same planetary models as found in Arabic sources. Furthermore, the exact replacement of the equant by two epicycles used by Copernicus in the Commentariolus was found in an earlier work by al-Shatir. Al-Shatir's lunar and Mercury models are also identical to those of Copernicus. This has led some scholars to argue that Copernicus must have had access to some yet to be identified work on the ideas of those earlier astronomers. However, no likely candidate for this conjectured work has come to light, and other scholars have argued that Copernicus could well have developed these ideas independently of the late Islamic tradition. Nevertheless, Copernicus cited some of the Islamic astronomers whose theories and observations he used in De Revolutionibus, namely al-Battani, Thabit ibn Qurra, al-Zarqali, Averroes, and al-Bitruji. It has been suggested that the idea of the Tusi couple may have arrived in Europe leaving few manuscript traces, since it could have occurred without the translation of any Arabic text into Latin. One possible route of transmission may have been through Byzantine science; Gregory Chioniades translated some of al-Tusi's works from Arabic into Byzantine Greek. Several Byzantine Greek manuscripts containing the Tusi-couple are still extant in Italy. === De revolutionibus orbium coelestium === When Copernicus' compendium was published, it contained an unauthorized, anonymous preface by a friend of Copernicus, the Lutheran theologian Andreas Osiander. This cleric stated that Copernicus wrote his heliocentric account of the Earth's movement as a mathematical hypothesis, not as an account that contained truth or even probability. Since Copernicus' hypothesis was believed to contradict the Old Testament account of the Sun's movement around the Earth (Joshua 10:12-13), this was apparently written to soften any religious backlash against the book. However, there is no evidence that Copernicus himself considered the heliocentric model as merely mathematically convenient, separate from reality. Copernicus' actual compendium began with a letter from his (by then deceased) friend Nikolaus von Schönberg, Cardinal Archbishop of Capua, urging Copernicus to publish his theory. Then, in a lengthy introduction, Copernicus dedicated the book to Pope Paul III, explaining his ostensible motive in writing the book as relating to the inability of earlier astronomers to agree on an adequate theory of the planets, and noting that if his system increased the accuracy of astronomical predictions it would allow the Church to develop a more accurate calendar. At that time, a reform of the Julian Calendar was considered necessary and was one of the major reasons for the Church's interest in astronomy. The work itself is divided into six books: The first is a general vision of the heliocentric theory, and a summarized exposition of his idea of the World. The second is mainly theoretical, presenting the principles of spherical astronomy and a list of stars (as a basis for the arguments developed in the subsequent books). 
The third is mainly dedicated to the apparent motions of the Sun and to related phenomena. The fourth is a description of the Moon and its orbital motions. The fifth is a concrete exposition of the new system, including planetary longitude. The sixth is further concrete exposition of the new system, including planetary latitude. == Early criticisms == From publication until about 1700, few astronomers were convinced by the Copernican system, though the work was relatively widely circulated (around 500 copies of the first and second editions have survived, which is a large number by the scientific standards of the time). Few of Copernicus' contemporaries were ready to concede that the Earth actually moved. Even forty-five years after the publication of De Revolutionibus, the astronomer Tycho Brahe went so far as to construct a cosmology precisely equivalent to that of Copernicus, but with the Earth held fixed in the center of the celestial sphere instead of the Sun. It was another generation before a community of practicing astronomers appeared who accepted heliocentric cosmology. For his contemporaries, the ideas presented by Copernicus were not markedly easier to use than the geocentric theory and did not produce more accurate predictions of planetary positions. Copernicus was aware of this and could not present any observational "proof", relying instead on arguments about what would be a more complete and elegant system. The Copernican model appeared to be contrary to common sense and to contradict the Bible. Tycho Brahe's arguments against Copernicus are illustrative of the physical, theological, and even astronomical grounds on which heliocentric cosmology was rejected. Tycho, arguably the most accomplished astronomer of his time, appreciated the elegance of the Copernican system, but objected to the idea of a moving Earth on the basis of physics, astronomy, and religion. The Aristotelian physics of the time (modern Newtonian physics was still a century away) offered no physical explanation for the motion of a massive body like Earth, but could easily explain the motion of heavenly bodies by postulating that they were made of a different sort of substance called aether that moved naturally. So Tycho said that the Copernican system "... expertly and completely circumvents all that is superfluous or discordant in the system of Ptolemy. On no point does it offend the principle of mathematics. Yet it ascribes to the Earth, that hulking, lazy body, unfit for motion, a motion as quick as that of the aethereal torches, and a triple motion at that." Thus many astronomers accepted some aspects of Copernicus's theory at the expense of others. == Copernican Revolution == The Copernican Revolution, a paradigm shift from the Ptolemaic model of the heavens, which described the cosmos as having Earth as a stationary body at the center of the universe, to the heliocentric model with the Sun at the center of the Solar System, spanned over a century, beginning with the publication of Copernicus' De revolutionibus orbium coelestium and ending with the work of Isaac Newton. While not warmly received by his contemporaries, his model did have a large influence on later scientists such as Galileo and Johannes Kepler, who adopted, championed and (especially in Kepler's case) sought to improve it. 
However, in the years following publication of de Revolutionibus, for leading astronomers such as Erasmus Reinhold, the key attraction of Copernicus's ideas was that they reinstated the idea of uniform circular motion for the planets. During the 17th century, several further discoveries eventually led to the wider acceptance of heliocentrism: Using detailed observations by Tycho Brahe, Kepler discovered Mars's orbit was an ellipse with the Sun at one focus, and its speed varied with its distance from the Sun. This discovery was detailed in his 1609 book Astronomia nova along with the claim that all planets had elliptical orbits and non-uniform motion, stating "And finally... the sun itself... will melt all this Ptolemaic apparatus like butter". Using the newly invented telescope, in 1610 Galileo observed the four large moons of Jupiter (evidence that the Solar System contained bodies that did not orbit Earth), the phases of Venus (more observational evidence not properly explained by the Ptolemaic theory) and the rotation of the Sun about a fixed axis, as indicated by the apparent annual variation in the motion of sunspots; With a telescope, Giovanni Zupi saw the phases of Mercury in 1639; Isaac Newton in 1687 proposed universal gravity and the inverse-square law of gravitational attraction to explain Kepler's elliptical planetary orbits. == Modern views == === Substantially correct === From a modern point of view, the Copernican model has a number of advantages. Copernicus gave a clear account of the cause of the seasons: that the Earth's axis is not perpendicular to the plane of its orbit. In addition, Copernicus's theory provided a strikingly simple explanation for the apparent retrograde motions of the planets—namely as parallactic displacements resulting from the Earth's motion around the Sun—an important consideration in Johannes Kepler's conviction that the theory was substantially correct. In the heliocentric model the planets' apparent retrograde motions, occurring at opposition to the Sun, are a natural consequence of their heliocentric orbits. In the geocentric model, however, these are explained by the ad hoc use of epicycles, whose revolutions are mysteriously tied to that of the Sun. === Modern historiography === Whether Copernicus' propositions were "revolutionary" or "conservative" has been a topic of debate in the historiography of science. In his book The Sleepwalkers: A History of Man's Changing Vision of the Universe (1959), Arthur Koestler attempted to deconstruct the Copernican "revolution" by portraying Copernicus as a coward who was reluctant to publish his work due to a crippling fear of ridicule. Thomas Kuhn argued that Copernicus merely transferred to the Sun many astronomical functions previously attributed to the Earth. Historians have since argued that Kuhn underestimated what was "revolutionary" about Copernicus' work, and emphasized the difficulty Copernicus would have had in putting forward a new astronomical theory relying on geometric simplicity alone, given that he had no experimental evidence. == See also == Copernican principle == Notes == == References == Crowe, Michael J. (2001). Theories of the World from Antiquity to the Copernican Revolution. Mineola, New York: Dover Publications, Inc. ISBN 0-486-41444-2. di Bono, Mario (1995). "Copernicus, Amico, Fracastoro and Ṭūsï's Device: Observations on the Use and Transmission of a Model". Journal for the History of Astronomy. xxvi (2): 133–154. Bibcode:1995JHA....26..133D.
doi:10.1177/002182869502600203. S2CID 118330488. Drake, Stillman (1970). Galileo Studies. Ann Arbor: The University of Michigan Press. ISBN 0-472-08283-3. Esposito, John L. (1999). The Oxford history of Islam. Oxford University Press. ISBN 978-0-1951-0799-9. Gingerich, Owen (2004). The Book Nobody Read. London: William Heinemann. ISBN 0-434-01315-3. Gingerich, Owen (June 2011), "Galileo, the Impact of the Telescope, and the Birth of Modern Astronomy" (PDF), Proceedings of the American Philosophical Society, vol. 155, no. 2, Philadelphia PA, pp. 134–141, archived from the original (PDF) on 2015-03-19, retrieved 2016-04-13 Goddu, André (2010). Copernicus and the Aristotelian tradition. Leiden, Netherlands: Brill. ISBN 978-9-0041-8107-6. Huff, Toby E (2010). Intellectual Curiosity and the Scientific Revolution: A Global Perspective. Cambridge: Cambridge University Press. ISBN 978-0-5211-7052-9. Koestler, Arthur (1989) [1959]. The Sleepwalkers: A History of Man's Changing Vision of the Universe. Arkana. ISBN 978-0-14-019246-9. Kuhn, Thomas S. (1985). The Copernican Revolution—Planetary Astronomy in the Development of Western Thought. Cambridge, Massachusetts: Harvard University Press. ISBN 978-0-674-17103-9. Linton, Christopher M. (2004). From Eudoxus to Einstein—A History of Mathematical Astronomy. Cambridge: Cambridge University Press. ISBN 978-0-521-82750-8. McCluskey, S. C. (1998). Astronomies and Cultures in Early Medieval Europe. Cambridge: Cambridge University Press. Raju, C. K. (2007). Cultural foundations of mathematics: the nature of mathematical proof and the transmission of the calculus from India to Europe in the 16th c. CE. Pearson Education India. ISBN 978-8-1317-0871-2. Saliba, George (2009), "Islamic reception of Greek astronomy" (PDF), in Valls-Gabaud & Boskenberg (2009), vol. 260, pp. 149–165, Bibcode:2011IAUS..260..149S, doi:10.1017/S1743921311002237 Sharratt, Michael (1994). Galileo: Decisive Innovator. Cambridge: Cambridge University Press. ISBN 0-521-56671-1. Valls-Gabaud, D.; Boskenberg, A., eds. (2009). The Role of Astronomy in Society and Culture. Proceedings IAU Symposium No. 260. Veselovsky, I.N. (1973). "Copernicus and Naṣīr al-Dīn al-Ṭūsī". Journal for the History of Astronomy. iv: 128–130. Bibcode:1973JHA.....4..128V. doi:10.1177/002182867300400205. S2CID 118453340. == Further reading == Hannam, James (2007). "Deconstructing Copernicus". Medieval Science and Philosophy. Retrieved 2007-08-17. Analyses the varieties of argument used by Copernicus in De revolutionibus. Goldstone, Lawrence (2010). The Astronomer: A Novel of Suspense. New York: Walker and Company. ISBN 978-0-8027-1986-7. == External links == Heliocentric Pantheon
Wikipedia/Copernican_model
Agrophysics is a branch of science bordering on agronomy and physics, whose object of study is the agroecosystem: the biological objects, biotope and biocoenosis affected by human activity, studied and described using the methods of the physical sciences. Applying the achievements of the exact sciences to major problems in agriculture, agrophysics involves the study of materials and processes occurring in the production and processing of agricultural crops, with particular emphasis on the condition of the environment and the quality of farming materials and food production. Agrophysics is closely related to biophysics, but is restricted to the physics of the plants, animals, soil and atmosphere involved in agricultural activities and biodiversity. It differs from biophysics in that it must take into account the specific features of the biotope and biocoenosis, which requires knowledge of nutritional science, agroecology, agricultural technology, biotechnology, genetics and related fields. The needs of agriculture, together with the accumulated experience of studying local soil and plant–atmosphere systems, lay at the root of the emergence of this new branch, which treats such systems with the methods of experimental physics. The scope of the branch, which started from soil science (soil physics) and was originally limited to the study of relations within the soil environment, expanded over time to include the physical properties of agricultural crops and produce, both as foods and as raw post-harvest materials, and the related issues of quality, safety and labelling, considered distinct from the field of nutrition for application in food science. Research centres focused on the development of the agrophysical sciences include the Institute of Agrophysics, Polish Academy of Sciences in Lublin, and the Agrophysical Research Institute, Russian Academy of Sciences in St. Petersburg. == See also == Agriculture science Agroecology Genomics Metagenomics Metabolomics Physics (Aristotle) Proteomics Soil plant atmosphere continuum Research institutes and societies Agrophysical Research Institute in St. Petersburg, Russia Bohdan Dobrzański Institute of Agrophysics in Lublin, Poland The Indian Society of AgroPhysics Scholarly journals Acta Agrophysica Journal of Agricultural Physics Polish Journal of Soil Science == References == Encyclopedia of Agrophysics, in the series Encyclopedia of Earth Sciences Series, eds. Jan Glinski, Jozef Horabik, Jerzy Lipiec, 2011, Publisher: Springer, ISBN 978-90-481-3585-1 Encyclopedia of Soil Science, ed. Ward Chesworth, 2008, Univ. of Guelph, Canada, Publ. Springer, ISBN 978-1-4020-3994-2 АГРОФИЗИКА - AGROPHYSICS by Е. В. Шеин (J.W. Chein), В. М. Гончаров (W.M. Gontcharow), Ростов-на-Дону (Rostov-on-Don), Феникс (Phoenix), 2006, 399 pp., ISBN 5-222-07741-1 - recommended by the Educational and Methodological Association for classical university education as a textbook for higher-education students in the specialty "Soil Science" Scientific Dictionary of Agrophysics: Polish-English, polsko-angielski by R. Dębicki, J. Gliński, J. Horabik, R. T. Walczak - Lublin 2004, ISBN 83-87385-88-3 Physical Methods in Agriculture. Approach to Precision and Quality, eds. J. Blahovec and M. Kutilek, Kluwer Academic Publishers, New York 2002, ISBN 0-306-47430-1. Soil Physical Condition and Plant Roots by J. Gliński, J. Lipiec, 1990, CRC Press, Inc., Boca Raton, USA, ISBN 0-8493-6498-1 Soil Aeration and its Role for Plants by J. Gliński, W.
Stępniewski, 1985, Publisher: CRC Press, Inc., Boca Raton, USA, ISBN 0-8493-5250-9 Fundamentals of Agrophysics (Osnovy agrofiziki) by A. F. Ioffe, I. B. Revut, Petr Basilevich Vershinin, 1966, English translation: Jerusalem, Israel Program for Scientific Translations (available from the U.S. Dept. of Commerce, Clearinghouse for Federal Scientific and Technical Information, Va.) Fundamentals of Agrophysics by P. V. Vershinin et al., 1959, Publisher: IPST, ISBN 0-7065-0358-9 == External links == Agrophysical Research Institute of the Russian Academy of Agricultural Sciences Bohdan Dobrzański Institute of Agrophysics, Polish Academy of Sciences in Lublin Free Association of PMA Labs, Czech University of Agriculture, Prague International Agrophysics - quarterly journal focused on applications of physics in environmental and agricultural sciences Polish Society of Agrophysics Sustainable Agriculture: Definitions and Terms
Wikipedia/Agrophysics
The Industrial Revolution, sometimes divided into the First Industrial Revolution and Second Industrial Revolution, was a transitional period of the global economy toward more widespread, efficient and stable manufacturing processes, succeeding the Second Agricultural Revolution. Beginning in Great Britain around 1760, the Industrial Revolution had spread to continental Europe and the United States by about 1840. This transition included going from hand production methods to machines; new chemical manufacturing and iron production processes; the increasing use of water power and steam power; the development of machine tools; and the rise of the mechanised factory system. Output greatly increased, and the result was an unprecedented rise in population and population growth. The textile industry was the first to use modern production methods,: 40  and textiles became the dominant industry in terms of employment, value of output, and capital invested. Many technological and architectural innovations were British. By the mid-18th century, Britain was the leading commercial nation, controlled a global trading empire with colonies in North America and the Caribbean, and had military and political hegemony on the Indian subcontinent. The development of trade and rise of business were among the major causes of the Industrial Revolution.: 15  Developments in law facilitated the revolution, such as courts ruling in favour of property rights. An entrepreneurial spirit and consumer revolution helped drive industrialisation. The Industrial Revolution influenced almost every aspect of life. In particular, average income and population began to exhibit unprecedented sustained growth. Economists note the most important effect was that the standard of living for most in the Western world began to increase consistently for the first time, though others have said it did not begin to improve meaningfully until the 20th century. GDP per capita was broadly stable before the Industrial Revolution and the emergence of the modern capitalist economy, while the Industrial Revolution began an era of per-capita economic growth in capitalist economies. Economic historians agree that the onset of the Industrial Revolution is the most important event in human history, comparable only to the adoption of agriculture with respect to material advancement. The precise start and end of the Industrial Revolution is debated among historians, as is the pace of economic and social changes. According to Leigh Shaw-Taylor, Britain was already industrialising in the 17th century. Eric Hobsbawm held that the Industrial Revolution began in Britain in the 1780s and was not fully felt until the 1830s, while T. S. Ashton held that it occurred between 1760 and 1830. Rapid adoption of mechanized textile spinning occurred in Britain in the 1780s, and high rates of growth in steam power and iron production occurred after 1800. Mechanised textile production spread from Britain to continental Europe and the US in the early 19th century. A recession occurred from the late 1830s, when the adoption of the Industrial Revolution's early innovations, such as mechanised spinning and weaving, slowed as their markets matured, despite the increased adoption of locomotives, steamships, and hot blast iron smelting. New technologies such as the electrical telegraph, widely introduced in the 1840s in the UK and US, were not sufficient to drive high rates of growth. Rapid growth reoccurred after 1870, springing from new innovations in the Second Industrial Revolution.
These included steel-making processes, mass production, assembly lines, electrical grid systems, large-scale manufacture of machine tools, and use of advanced machinery in steam-powered factories. == Etymology == The earliest recorded use of "Industrial Revolution" was in 1799 by French envoy Louis-Guillaume Otto, announcing that France had entered the race to industrialise. Raymond Williams states: "The idea of a new social order based on major industrial change was clear in Southey and Owen, between 1811-18, and was implicit as early as Blake in the early 1790s and Wordsworth at the turn of the [19th] century." The term Industrial Revolution applied to technological change was becoming more common by the 1830s, as in Jérôme-Adolphe Blanqui's description in 1837 of la révolution industrielle. Friedrich Engels in The Condition of the Working Class in England in 1844 spoke of "an industrial revolution, a revolution which...changed the whole of civil society". His book was not translated into English until the late 19th century, and the expression did not enter everyday language till then. Credit for its popularisation is given to Arnold Toynbee, whose 1881 lectures gave a detailed account of the term. Economic historians and authors such as Mendels, Pomeranz, and Kridte argue that proto-industrialisation in parts of Europe, the Muslim world, Mughal India, and China created the social and economic conditions that led to the Industrial Revolution, thus causing the Great Divergence. Some historians, such as John Clapham and Nicholas Crafts, have argued that the economic and social changes occurred gradually and that the term revolution is a misnomer. == Requirements == Several key factors enabled industrialisation. High agricultural productivity—exemplified by the British Agricultural Revolution—freed up labor and ensured food surpluses. The presence of skilled managers and entrepreneurs, an extensive network of ports, rivers, canals, and roads for efficient transport, and abundant natural resources such as coal, iron, and water power further supported industrial growth. Political stability, a legal system favorable to business, and access to financial capital also played crucial roles. Once industrialisation began in Britain in the 18th century, its spread was facilitated by the eagerness of British entrepreneurs to export industrial methods and the willingness of other nations to adopt them. By the early 19th century, industrialisation had reached Western Europe and the United States, and by the late 19th century, Japan. == Important technological developments == The commencement of the Industrial Revolution is closely linked to a small number of innovations, beginning in the second half of the 18th century. By the 1830s, the following gains had been made in important technologies: Textiles – mechanised cotton spinning powered by water, and later steam, increased output per worker by a factor of around 500. The power loom increased output by a factor of 40. The cotton gin increased productivity of removing seed from cotton by a factor of 50. Large gains in productivity occurred in spinning and weaving of wool and linen, but were not as great as in cotton. Steam power – the efficiency of steam engines increased so they used between one-fifth and one-tenth as much fuel. The adaptation of stationary steam engines to rotary motion made them suitable for industrial uses.: 82  The high-pressure engine had a high power-to-weight ratio, making it suitable for transportation. 
Steam power underwent a rapid expansion after 1800. Iron-making – the substitution of coke for charcoal greatly lowered the fuel cost of pig iron and wrought iron production.: 89–93  Using coke also allowed larger blast furnaces, resulting in economies of scale. The steam engine began being used to power blast air in the 1750s, enabling a large increase in iron production by overcoming the limitation of water power. The cast iron blowing cylinder was first used in 1760. It was improved by making it double acting, which allowed higher blast furnace temperatures. The puddling process produced structural grade iron at lower cost than the finery forge. The rolling mill was fifteen times faster than hammering wrought iron. Developed in 1828, hot blast greatly increased fuel efficiency in iron production. Invention of machine tools – the first machine tools were the screw-cutting lathe, the cylinder boring machine, and the milling machine. Machine tools made the economical manufacture of precision metal parts possible, although it took decades to develop effective techniques for making interchangeable parts. === Textile manufacture === ==== British textile industry ==== In 1750, Britain imported 2.5 million pounds of raw cotton, most of which was spun and woven by the cottage industry in Lancashire. The work was done by hand in workers' homes or master weavers' shops. Wages were six times those in India in 1770 when productivity in Britain was three times higher. In 1787, raw cotton consumption was 22 million pounds, most of which was cleaned, carded, and spun on machines.: 41–42  The British textile industry used 52 million pounds of cotton in 1800, and 588 million pounds in 1850. The share of value added by the cotton textile industry in Britain was 2.6% in 1760, 17% in 1801, and 22% in 1831. Value added by the British woollen industry was 14% in 1801. Cotton factories in Britain numbered about 900 in 1797. In 1760, approximately one-third of cotton cloth manufactured in Britain was exported, rising to two-thirds by 1800. In 1781, cotton spun amounted to 5 million pounds, which increased to 56 million pounds by 1800. In 1800, less than 0.1% of world cotton cloth was produced on machinery invented in Britain. In 1788, there were 50,000 spindles in Britain, rising to 7 million over the next 30 years. ==== Wool ==== The earliest European attempts at mechanised spinning were with wool; however, wool spinning proved more difficult to mechanise than cotton. Productivity improvement in wool spinning during the Industrial Revolution was significant but far less than cotton. ==== Silk ==== Arguably the first highly mechanised factory was John Lombe's water-powered silk mill at Derby, operational by 1721. Lombe learned silk thread manufacturing by taking a job in Italy and acting as an industrial spy; however, because the Italian silk industry guarded its secrets, the state of the industry at that time is unknown. Although Lombe's factory was technically successful, the supply of raw silk from Italy was cut off to eliminate competition. To promote manufacturing, the Crown paid for models of Lombe's machinery which were exhibited in the Tower of London. ==== Cotton ==== Parts of India, China, Central America, South America, and the Middle East have a long history of hand-manufacturing cotton textiles, which became a major industry after 1000 AD. Most cotton was grown by small farmers alongside food and, spun in households for domestic consumption. 
In the 1400s, China began to require households to pay part of their taxes in cotton cloth. By the 17th century, almost all Chinese wore cotton clothing, and it could be used as a medium of exchange. In India, cotton textiles were manufactured for distant markets, often produced by professional weavers. Cotton was a difficult raw material for Europe to obtain before it was grown on colonial plantations. Spanish explorers found Native Americans growing sea island cotton (Gossypium barbadense) and green seeded cotton Gossypium hirsutum. Sea island cotton began being exported from Barbados in the 1650s. Upland green seeded cotton was uneconomical because of the difficulty of removing seed, a problem solved by the cotton gin.: 157  A strain of cotton seed brought from Mexico to Natchez, Mississippi, in 1806 became the parent genetic material for over 90% of world production today; it produced bolls that were three to four times faster to pick. ==== Trade and textiles ==== The Age of Discovery was followed by colonialism beginning around the 16th century. Following the discovery of a trade route to India around southern Africa by the Portuguese, the British founded the East India Company, and other countries founded similar companies, which established trading posts throughout the Indian Ocean region. The largest segment of this trade was in cotton textiles, which were purchased in India and sold in Southeast Asia, including the Indonesian archipelago where spices were purchased for sale to Southeast Asia and Europe. By the 1760s, cloth was over three-quarters of the East India Company's exports. Indian textiles were in demand in Europe, where previously only wool and linen were available; however, the quantity of cotton goods consumed in Europe remained minor until the early 19th century. ==== Pre-mechanized European textile production ==== By 1600, Flemish refugees began weaving cotton in English towns where cottage spinning and weaving of wool and linen was established. They were left alone by the guilds who did not consider cotton a threat. Earlier European attempts at cotton spinning and weaving were in 12th-century Italy and 15th-century southern Germany, but these ended when the supply of cotton was cut off. British cloth could not compete with Indian cloth because India's labour cost was approximately one-fifth to one-sixth that of Britain's. In 1700 and 1721, the British government passed Calico Acts to protect domestic woollen and linen industries from cotton fabric imported from India. The demand for heavier fabric was met by a domestic industry based around Lancashire that produced fustian, a cloth with flax warp and cotton weft. Flax was used for the warp because wheel-spun cotton had insufficient strength; the resulting blend was not as soft as 100% cotton and was more difficult to sew. On the eve of the Industrial Revolution, spinning and weaving were done in households, for domestic consumption, and as a cottage industry under the putting-out system. Under the putting-out system, home-based workers produced under contract to merchant sellers, who often supplied the raw materials. In the off-season, the women, typically farmers' wives, did the spinning and the men did the weaving. Using the spinning wheel, it took 4-8 spinners to supply one handloom weaver.: 823  ==== Invention of textile machinery ==== The flying shuttle, patented in 1733 by John Kay—with subsequent improvements including an important one in 1747—doubled the output of a weaver, worsening the imbalance between spinning and weaving.
It became widely used around Lancashire after 1760, when John's son, Robert, invented the dropbox, which facilitated changing thread colors.: 821–822  Lewis Paul patented the roller spinning frame and the flyer-and-bobbin system for drawing wool to a more even thickness. The technology was developed with John Wyatt of Birmingham. In 1743, a factory opened in Northampton with 50 spindles on each of five of Paul and Wyatt's machines; it operated until 1764. A similar mill was built by Daniel Bourn. Paul and Bourn patented carding machines in 1748. Based on two sets of rollers that travelled at different speeds, the design was later used in the first cotton spinning mill. In 1764, in Oswaldtwistle, Lancashire, James Hargreaves invented the spinning jenny. It was the first practical spinning frame with multiple spindles. The jenny worked in a similar manner to the spinning wheel, by first clamping down on the fibres, then drawing them out, followed by twisting. It was a simple, wooden framed machine that only cost £6 for a 40-spindle model in 1792 and was used mainly by home spinners. The jenny produced a lightly twisted yarn only suitable for weft, not warp.: 825–827  The water frame was developed by Richard Arkwright, who patented it in 1769. The design was partly based on a spinning machine built by Kay, whom Arkwright had hired.: 827–830  The water frame was able to produce a hard, medium-count thread suitable for warp, finally allowing 100% cotton cloth to be made in Britain. Arkwright used water power at a factory in Cromford, Derbyshire, in 1771, giving the invention its name. Samuel Crompton invented the spinning mule in 1779, so called because it is a hybrid of Arkwright's water frame and James Hargreaves's spinning jenny (a mule is the product of crossbreeding a female horse with a male donkey). Crompton's mule could produce finer thread than hand spinning, at lower cost. Mule-spun thread was of suitable strength to be used as a warp and allowed Britain to produce highly competitive yarn in large quantities.: 832  Realising that expiration of the Arkwright patent would greatly increase the supply of spun cotton and lead to a shortage of weavers, Edmund Cartwright developed a vertical power loom which he patented in 1785.: 834  Samuel Horrocks patented a loom in 1813, which was improved by Richard Roberts in 1822, and these were produced in large numbers by Roberts, Hill & Co. Roberts was a maker of high-quality machine tools and a pioneer in the use of jigs and gauges for precision workshop measurement. The demand for cotton presented an opportunity to planters in the Southern US, who thought upland cotton would be profitable if a better way could be found to remove the seed. Eli Whitney responded by inventing the inexpensive cotton gin. A man using a cotton gin could remove in one day as much seed as had previously taken two months to process by hand. These advances were capitalised on by entrepreneurs, of whom the best known is Arkwright. He is credited with a list of inventions, but these were developed by such people as Kay and Thomas Highs. Arkwright nurtured the inventors, patented the ideas, financed the initiatives, and protected the machines. He created the cotton mill which brought the production processes together in a factory, and developed the use of power, which made cotton manufacture a mechanised industry. Other inventors increased the efficiency of the individual steps of spinning, so that the supply of yarn increased greatly. Steam power was then applied to drive textile machinery.
Manchester acquired the nickname Cottonopolis during the early 19th century owing to its sprawl of textile factories. Though mechanisation dramatically decreased the cost of cotton cloth, by the mid-19th century machine-woven cloth still could not equal the quality of hand-woven Indian cloth. However, the high productivity of British textile manufacturing allowed coarser grades of British cloth to undersell hand-spun and woven fabric in low-wage India, destroying the Indian industry. === Iron industry === ==== British iron production ==== Bar iron was the commodity form of iron used as the raw material for making hardware goods such as nails, wire, hinges, horseshoes, wagon tires, and chains, as well as structural shapes. A small amount of bar iron was converted into steel. Cast iron was used for pots, stoves, and other items where its brittleness was tolerable. Most cast iron was refined and converted to bar iron, with substantial losses. Bar iron was made by the bloomery process, the predominant iron smelting process until the late 18th century. In the UK in 1720, 20,500 tons of cast iron were produced with charcoal and 400 tons with coke. By 1806, charcoal cast iron production had dropped to 7,800 tons and coke cast iron was 250,000 tons.: 125  In 1750, the UK imported 31,000 tons of bar iron and either refined from cast iron or directly produced 18,800 tons of bar iron using charcoal and 100 tons using coke. In 1796, the UK was making 125,000 tons of bar iron with coke and 6,400 tons with charcoal; imports were 38,000 tons and exports were 24,600 tons. In 1806 the UK did not import bar iron but exported 31,500 tons.: 125  ==== Iron process innovations ==== A major change in the iron industries during the Industrial Revolution was the replacement of wood and other bio-fuels with coal; for a given amount of heat, mining coal required much less labour than cutting wood and converting it to charcoal, and coal was more abundant than wood, supplies of which were becoming scarce before the enormous increase in iron production that took place in the late 18th century.: 122  In 1709, Abraham Darby made progress using coke to fuel his blast furnaces at Coalbrookdale. However, the coke pig iron he made was not suitable for making wrought iron and was used mostly for the production of cast iron goods, such as pots and kettles. He had the advantage over his rivals in that his pots, cast by his patented process, were thinner and cheaper. In 1750, coke had generally replaced charcoal in the smelting of copper and lead and was in widespread use in glass production. In the smelting and refining of iron, coal and coke produced inferior iron to that made with charcoal because of the coal's sulfur content. Low sulfur coals were known, but they still contained harmful amounts.: 122–125  Another factor limiting the iron industry before the Industrial Revolution was the scarcity of water power to power blast bellows. This limitation was overcome by the steam engine. Use of coal in iron smelting started before the Industrial Revolution, based on innovations by Clement Clerke and others from 1678, using coal reverberatory furnaces known as cupolas. These were operated by the flames playing on the ore and charcoal or coke mixture, reducing the oxide to metal. This had the advantage that impurities in the coal did not migrate into the metal. This technology was applied to lead from 1678 and copper from 1687. It was applied to iron foundry work in the 1690s, but in this case the reverberatory furnace was known as an air furnace.
Coke pig iron was hardly used to produce wrought iron until 1755, when Darby's son Abraham Darby II built furnaces at Horsehay and Ketley where low sulfur coal was available, and not far from Coalbrookdale. These furnaces were equipped with water-powered bellows, the water being pumped by Newcomen atmospheric engines. Abraham Darby III installed similar steam-pumped, water-powered blowing cylinders at the Dale Company when he took control in 1768. The Dale Company used Newcomen engines to drain its mines and made parts for engines which it sold throughout the country.: 123–125  Steam engines made the use of higher-pressure and volume blast practical; however, the leather used in bellows was expensive to replace. In 1757, ironmaster John Wilkinson patented a hydraulic-powered blowing engine for blast furnaces. The blowing cylinder for blast furnaces was introduced in 1760 and the first blowing cylinder made of cast iron is believed to be the one used at Carrington in 1768, designed by John Smeaton.: 124, 135  Cast iron cylinders for use with a piston were difficult to manufacture. James Watt had difficulty trying to have a cylinder made for his first steam engine. In 1774 Wilkinson invented a machine for boring cylinders. After Wilkinson bored the first successful cylinder for a Boulton and Watt steam engine in 1776, he was given an exclusive contract for providing cylinders. Watt developed a rotary steam engine in 1782; rotary engines were widely applied to blowing, hammering, rolling, and slitting.: 124  In addition to lower cost and greater availability, coke had other advantages over charcoal: it was harder, made the column of materials flowing down the blast furnace more porous, and did not crush in the much taller furnaces of the late 19th century. As cast iron became cheaper and widely available, it began to be used as a structural material for bridges and buildings. A famous early example is The Iron Bridge built in 1778 with cast iron produced by Abraham Darby III. However, most cast iron was converted to wrought iron. Conversion of cast iron had long been done in a finery forge. An improved refining process known as potting and stamping was developed, but this was superseded by Henry Cort's puddling process. Cort developed two significant iron manufacturing processes: rolling in 1783 and puddling in 1784.: 91  Puddling produced a structural grade iron at a relatively low cost. Puddling was backbreaking and extremely hot work. Few puddlers lived to be 40.: 218  Puddling became widely used after 1800. British iron manufacturers had used considerable amounts of iron imported from Sweden and Russia to supplement domestic supplies. Because of the increased British production, by the 1790s Britain eliminated imports and became a net exporter of bar iron. Hot blast, patented by the Scottish inventor James Beaumont Neilson in 1828, was the most important development of the 19th century for saving energy in making pig iron. The amount of fuel needed to make a unit of pig iron was at first reduced by one-third using coke, or by two-thirds using coal; the efficiency gains continued as the technology improved. Hot blast raised the operating temperature of furnaces, increasing their capacity. Using less coal or coke meant introducing fewer impurities into the pig iron. This meant that lower quality coal could be used in areas where coking coal was unavailable or too expensive; however, by the end of the 19th century transportation costs fell considerably.
Shortly before the Industrial Revolution, an improvement was made in the production of steel, which was an expensive commodity and used only where iron would not do, such as for cutting edge tools and springs. Benjamin Huntsman developed his crucible steel technique in the 1740s. The supply of cheaper iron and steel aided a number of industries, such as those making nails, hinges, wire, and other hardware items. The development of machine tools allowed better working of iron, causing it to be increasingly used in the rapidly growing machinery and engine industries. === Steam power === The development of the stationary steam engine was important in the Industrial Revolution; however, during its early period, most industrial power was supplied by water and wind. In Britain, by 1800 an estimated 10,000 horsepower was being supplied by steam. By 1815 steam power had grown to 210,000 hp. The first commercially successful industrial use of steam power was patented by Thomas Savery in 1698. He constructed in London a low-lift combined vacuum and pressure water pump that generated about one horsepower (hp) and was used in waterworks and a few mines. The first successful piston steam engine was introduced by Thomas Newcomen before 1712. Newcomen engines were installed for draining hitherto unworkable deep mines, with the engine on the surface; these were large machines, requiring a significant amount of capital, and produced upwards of 3.5 kW (5 hp). They were extremely inefficient by modern standards, but when located where coal was cheap at pit heads, they opened up a great expansion in coal mining by allowing mines to go deeper. The engines spread to Hungary in 1722, then Germany and Sweden; 110 were built by 1733. In the 1770s John Smeaton built large examples and introduced improvements. 1,454 engines had been built by 1800. Despite their disadvantages, Newcomen engines were reliable, easy to maintain and continued to be used in coalfields until the early 19th century. A fundamental change in working principles was brought about by Scotsman James Watt. With financial support from his business partner Englishman Matthew Boulton, he had succeeded by 1778 in perfecting his steam engine, which incorporated radical improvements, notably closing off the upper part of the cylinder, so that low-pressure steam rather than the atmosphere drove the top of the piston, and the celebrated separate steam condenser chamber. The separate condenser did away with the cooling water that had been injected directly into the cylinder, which had cooled the cylinder and wasted steam. These improvements increased engine efficiency so that Boulton and Watt's engines used only 20–25% as much coal per horsepower-hour as Newcomen's. Boulton and Watt opened the Soho Foundry for the manufacture of such engines in 1795. In 1783, the Watt steam engine had been fully developed into a double-acting rotative type, which meant it could be used to directly drive the rotary machinery of a factory or mill. Both of Watt's basic engine types were commercially successful, and by 1800 the firm Boulton and Watt had constructed 496 engines, with 164 driving reciprocating pumps, 24 serving blast furnaces, and 308 powering mill machinery; most of the engines generated from 3.5 to 7.5 kW (5 to 10 hp). Until about 1800, the most common pattern of steam engine was the beam engine, built as an integral part of a stone or brick engine-house, but soon self-contained rotative engines were developed, such as the table engine.
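As a rough way to read the coal-consumption comparison above, the quoted 20–25% figure can simply be inverted; the arithmetic below only restates that quoted range and is not an independent estimate of engine performance.

$$\frac{1}{0.25} = 4 \qquad\qquad \frac{1}{0.20} = 5$$

In other words, for the same coal a Boulton and Watt engine delivered roughly four to five times as many horsepower-hours as a Newcomen engine, which helps explain why the Watt engine was attractive wherever coal was expensive.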
Around the start of the 19th century, at which time the Boulton and Watt patent expired, Cornish engineer Richard Trevithick and the American Oliver Evans began to construct higher-pressure non-condensing steam engines, exhausting against the atmosphere. High pressure yielded an engine and boiler compact enough to be used on mobile road and rail locomotives and steamboats. Small industrial power requirements continued to be provided by animal and human muscle until widespread electrification in the 20th century. These included crank-powered, treadle-powered, and horse-powered workshop and light industrial machinery. === Machine tools === Pre-industrial machinery was built by various craftsmen—millwrights built watermills and windmills; carpenters made wooden framing; and smiths and turners made metal parts. Wooden components had the disadvantage of changing dimensions with temperature and humidity, and the joints tended to work loose. As the Industrial Revolution progressed, machines with metal parts and frames became common. Other uses of metal parts were in firearms and threaded fasteners, such as machine screws, bolts, and nuts. There was a need for precision in making parts, to allow better working machinery, interchangeability of parts, and standardization of threaded fasteners. The demand for metal parts led to the development of several machine tools. They have their origins in the tools developed in the 18th century by clock and scientific instrument makers to enable them to batch-produce small mechanisms. Before machine tools, metal was worked manually using the basic hand tools: hammers, files, scrapers, saws, and chisels. Consequently, the use of metal machine parts was kept to a minimum. Hand methods of production were laborious and costly, and precision was difficult to achieve. The first large precision machine tool was the cylinder boring machine invented by John Wilkinson in 1774. It was designed to bore the large cylinders on steam engines. Wilkinson's machine was the first to use the principle of line-boring, where the tool is supported on both ends. The planing machine, the milling machine and the shaping machine were developed in the early decades of the 19th century. Though the milling machine was invented at this time, it was not developed as a serious workshop tool until later. James Fox and Matthew Murray were manufacturers of machine tools who found success in exports and developed the planer around the same time as Richard Roberts. Henry Maudslay, who trained a school of machine tool makers, was a mechanic who had been employed at the Royal Arsenal, Woolwich. He worked as an apprentice under Jan Verbruggen, who, in 1774, had installed a horizontal boring machine which was the first industrial-size lathe in the UK. Maudslay was hired by Joseph Bramah for the production of high-security metal locks that required precision craftsmanship. Bramah patented a lathe with similarities to the slide rest lathe.: 392–395  Maudslay perfected this lathe, which could cut machine screws of different thread pitches. Before its invention, screws could not be cut with precision.: 392–395  The slide rest lathe was called one of history's most important inventions. Although it was not Maudslay's idea, he was the first to build a functional lathe using innovations of the lead screw, slide rest, and change gears.: 31, 36  Maudslay set up a shop and built the machinery for making ships' pulley blocks for the Royal Navy in the Portsmouth Block Mills.
These machines were all-metal and the first for mass production and making components with interchangeability. The lessons Maudslay learned about the need for stability and precision he adapted to the development of machine tools, and he trained men to build on his work, such as Richard Roberts, Joseph Clement and Joseph Whitworth. The techniques to make mass-produced metal parts of sufficient precision to be interchangeable are attributed to the U.S. Department of War, which perfected interchangeable parts for firearms. In the half-century following the invention of the fundamental machine tools, the machine industry became the largest industrial sector of the U.S. economy. === Chemicals === Large-scale production of chemicals was an important development. The first of these was the production of sulphuric acid by the lead chamber process, invented by John Roebuck in 1746. He was able to increase the scale of the manufacture by replacing expensive glass vessels with larger, cheaper chambers made of riveted sheets of lead. Instead of a small amount, he was able to make around 50 kilograms (100 pounds) in each chamber, a tenfold increase. The production of an alkali on a large scale became an important goal, and Nicolas Leblanc succeeded in 1791 in introducing a method for the production of sodium carbonate (soda ash). The Leblanc process was a reaction of sulfuric acid with sodium chloride to give sodium sulfate and hydrochloric acid. The sodium sulfate was heated with calcium carbonate and coal to give a mixture of sodium carbonate and calcium sulfide. Adding water separated the soluble sodium carbonate from the calcium sulfide. The process produced significant pollution; nonetheless, this synthetic soda ash proved economical compared to that from burning plants, and to potash (potassium carbonate) produced from hardwood ashes. Soda ash and sulphuric acid were important because they enabled the introduction of other inventions, replacing small-scale operations with more cost-effective and controllable processes. Sodium carbonate had uses in the glass, textile, soap, and paper industries. Early uses for sulfuric acid included pickling (removing rust from) iron and steel, and bleaching cloth. The development of bleaching powder (calcium hypochlorite) by chemist Charles Tennant in 1800, based on the discoveries of Claude Louis Berthollet, revolutionised the bleaching processes in the textile industry by reducing the time required for the traditional process then in use: repeated exposure to the sun in fields after soaking the textiles with alkali or sour milk. Tennant's St Rollox Chemical Works, Glasgow, became the world's largest chemical plant. After 1860 the focus of chemical innovation was on dyestuffs, and Germany took leadership, building a strong chemical industry. Aspiring chemists flocked to German universities in 1860–1914 to learn the latest techniques. British scientists lacked research universities and did not train advanced students; instead, the practice was to hire German-trained chemists. === Concrete === In 1824 Joseph Aspdin, a British bricklayer turned builder, patented a chemical process for making portland cement, an important advance in the building trades. This process involves sintering clay and limestone to about 1,400 °C (2,552 °F), then grinding it into a fine powder which is mixed with water, sand and gravel to produce concrete. Portland cement concrete was used by English engineer Marc Isambard Brunel when constructing the Thames Tunnel.
Concrete was used on a large scale in the construction of the London sewer system a generation later. === Gas lighting === Though others made a similar innovation, the large-scale introduction of gas lighting was the work of William Murdoch, an employee of Boulton & Watt. The process consisted of the large-scale gasification of coal in furnaces, purification of the gas, and its storage and distribution. The first gas lighting utilities were established in London between 1812 and 1820. They became one of the major consumers of coal in the UK. Gas lighting affected social and industrial organisation because it allowed factories and stores to remain open longer. Its introduction allowed nightlife to flourish in cities and towns as interiors and streets could be lighted on a larger scale than before. === Glass making === Glass was made in ancient Greece and Rome. A new method of glass production, known as the cylinder process, was developed in Europe during the 19th century. In 1832 this process was used by the Chance Brothers to create sheet glass; they became the leading producers of window and plate glass. This advancement allowed for larger panes of glass to be created without interruption, thus freeing up the space planning in interiors as well as the fenestration of buildings. The Crystal Palace is a significant example of the use of sheet glass in a new and innovative structure. === Paper machine === A machine for making a continuous sheet of paper, on a loop of wire fabric, was patented in 1798 by Louis-Nicolas Robert in France. The paper machine is known as a Fourdrinier after the financiers, brothers Sealy and Henry Fourdrinier, who were stationers in London. The Fourdrinier machine is the predominant means of production today. The method of continuous production demonstrated by the paper machine influenced the development of continuous rolling of iron, steel and other continuous production processes. === Agriculture === The British Agricultural Revolution raised crop yields and released labour for industrial employment, although per-capita food supply in much of Europe remained stagnant until the late 18th century. Key innovations included Jethro Tull's early 18th-century mechanical seed drill (1701), which ensured more even sowing and depth control; Joseph Foljambe's iron Rotherham plough (c. 1730); and Andrew Meikle's threshing machine (1784), which reduced manual labour requirements. Hand threshing with a flail was a laborious job that had taken about one-quarter of agricultural labour.: 286  The lower labour requirements resulted in lower wages and fewer farm labourers, who faced near starvation, leading to the 1830 Swing Riots. === Mining === Coal mining in Britain, particularly in South Wales, started early. Before the steam engine, pits were often shallow bell pits following a seam of coal along the surface, which were abandoned as the coal was extracted. If the geology was favourable, the coal was mined by means of an adit or drift mine driven into the side of a hill. Shaft mining was done in some areas, but the limiting factor was the problem of removing water. It could be done by hauling buckets of water up the shaft or to a sough (a tunnel driven into a hill to drain a mine). In either case, the water had to be discharged into a stream or ditch at a level where it could flow away. Introduction of the steam pump by Thomas Savery in 1698 and the Newcomen steam engine in 1712 facilitated removal of water and enabled deeper shafts, enabling more coal to be extracted.
These developments had begun before the Industrial Revolution, but the adoption of Smeaton's improvements to the Newcomen engine, followed by Watt's steam engines from the 1770s, reduced the fuel costs, making mines more profitable. The Cornish engine, developed in the 1810s, was more efficient than the Watt engine. Coal mining was dangerous owing to the presence of firedamp in coal seams. A degree of safety was provided by the safety lamp invented in 1816 by Sir Humphry Davy, and independently by George Stephenson. However, the lamps proved a false dawn because they became unsafe quickly and provided weak light. Firedamp explosions continued, often setting off coal dust explosions, so casualties grew during the 19th century. Conditions were very poor, with a high casualty rate from rock falls. === Transportation === At the beginning of the Industrial Revolution, inland transport was by navigable rivers and roads, with coastal vessels employed to move heavy goods. Wagonways were used for conveying coal to rivers for further shipment, but canals had not yet been widely constructed. Animals supplied all motive power on land, with sails providing motive power on the sea. The first horse railways were introduced toward the end of the 18th century, with steam locomotives introduced in the early 19th century. Improving sailing technologies boosted speed by 50% between 1750 and 1830. The Industrial Revolution improved Britain's transport infrastructure with turnpike road, waterway and rail networks. Raw materials and finished products could be moved more quickly and cheaply than before. Improved transport allowed ideas to spread quickly. ==== Canals and improved waterways ==== Before and during the Industrial Revolution navigation on British rivers was improved by removing obstructions, straightening curves, widening and deepening, and building navigation locks. Britain had over 1,600 kilometres (1,000 mi) of navigable rivers and streams by 1750.: 46  Canals and waterways allowed bulk materials to be economically transported long distances inland. This was because a horse could pull a barge with a load tens of times larger than could be drawn in a cart. Canals began to be built in the UK in the late 18th century to link major manufacturing centres. Known for its huge commercial success, the Bridgewater Canal in North West England was opened in 1761 and mostly funded by the 3rd Duke of Bridgewater. Running from Worsley to the rapidly growing town of Manchester, its construction cost £168,000 (£22,589,130 as of 2013), but its advantages over land and river transport meant that within one year the coal price in Manchester fell by half. This success inspired Canal Mania: canals were hastily built with the aim of replicating the commercial success of the Bridgewater Canal, the most notable being the Leeds and Liverpool Canal and the Thames and Severn Canal, which opened in 1774 and 1789 respectively. By the 1820s a national network was in existence. Canal construction served as a model for the organisation and methods used to construct the railways. They were largely superseded by the railways from the 1840s. The last major canal built in the UK was the Manchester Ship Canal, which upon opening in 1894 was the world's largest ship canal, and opened Manchester as a port. However, it never achieved the commercial success its sponsors hoped for and signalled canals as a dying transport mode in an age dominated by railways, which were quicker and often cheaper.
Britain's canal network, together with its mill buildings, is one of the most enduring features of the Industrial Revolution to be seen in Britain. ==== Roads ==== France was known for having an excellent road system at this time; however, most roads on the European continent and in the UK were in bad condition, dangerously rutted. Much of the original British road system was poorly maintained by local parishes, but from the 1720s turnpike trusts were set up to charge tolls and maintain some roads. Increasing numbers of main roads were turnpiked from the 1750s: almost every main road in England and Wales was the responsibility of a turnpike trust. New engineered roads were built by John Metcalf, Thomas Telford and John McAdam, with the first 'macadam' stretch of road being Marsh Road at Ashton Gate, Bristol in 1816. The first macadam road in the U.S. was the "Boonsborough Turnpike Road" between Hagerstown and Boonsboro, Maryland in 1823. The major turnpikes radiated from London and were the means by which the Royal Mail was able to reach the rest of the country. Heavy goods transport on these roads was by slow, broad-wheeled carts hauled by teams of horses. Lighter goods were conveyed by smaller carts or teams of packhorses. Stagecoaches carried the rich, and the less wealthy rode on carriers' carts. Productivity of road transport increased greatly during the Industrial Revolution, and the cost of travel fell dramatically. Between 1690 and 1840 productivity tripled for long-distance carrying and increased four-fold in stage coaching. ==== Railways ==== Railways were made practical by the widespread introduction of inexpensive puddled iron after 1800, the rolling mill for making rails, and the development of the high-pressure steam engine. Reduced friction was a major reason for the success of railways compared to wagons. This was demonstrated on an iron plate-covered wooden tramway in 1805 at Croydon, England: A good horse on an ordinary turnpike road can draw two thousand pounds, or one ton. A party of gentlemen were invited to witness the experiment, that the superiority of the new road might be established by ocular demonstration. Twelve wagons were loaded with stones, till each wagon weighed three tons, and the wagons were fastened together. A horse was then attached, which drew the wagons with ease, six miles [10 km] in two hours, having stopped four times, in order to show he had the power of starting, as well as drawing his great load. Wagonways for moving coal in the mining areas had started in the 17th century and were often associated with canal or river systems for the further movement of the coal. These were horse-drawn or relied on gravity, with a stationary steam engine to haul the wagons back to the top of the incline. The first applications of the steam locomotive were on wagonways and plateways. Horse-drawn public railways began in the early 19th century when improvements to pig and wrought iron production lowered costs. Steam locomotives began being built after the introduction of high-pressure steam engines, after the expiration of the Boulton and Watt patent in 1800. High-pressure engines exhausted used steam to the atmosphere, doing away with the condenser and cooling water. They were much lighter and smaller in size for a given horsepower than the stationary condensing engines. A few of these early locomotives were used in mines. Steam-hauled public railways began with the Stockton and Darlington Railway in 1825.
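Taking the figures in the 1805 Croydon demonstration quoted above at face value, the advantage of the iron plateway over an ordinary road can be expressed as a single ratio; this is only a back-of-the-envelope reading of the numbers in the passage, not a measured coefficient of friction.

$$\frac{\text{load drawn on the plateway}}{\text{load drawn on a turnpike road}} \approx \frac{12 \times 3\ \text{tons}}{1\ \text{ton}} = 36$$

That is, one horse on rails moved on the order of thirty-six times the load it could draw on a good road, which is the reduced-friction advantage that made railways so much cheaper than wagons for heavy freight.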
The rapid introduction of railways followed the 1829 Rainhill trials, which demonstrated Robert Stephenson's successful locomotive design, and the 1828 development of hot blast, which dramatically reduced the fuel consumption of making iron and increased the capacity of the blast furnace. On 15 September 1830, the Liverpool and Manchester Railway, the first inter-city railway in the world, was opened. The railway, engineered by Joseph Locke and George Stephenson, linked the rapidly expanding industrial town of Manchester with the port of Liverpool. The railway became highly successful, transporting passengers and freight. The success of the inter-city railway, particularly in the transport of freight and commodities, led to Railway Mania. Construction of major railways connecting the larger cities and towns began in the 1830s, but only gained momentum at the very end of the first Industrial Revolution. After many of the workers had completed the railways, they did not return to the countryside but remained in the cities, providing additional workers for the factories. == Social effects == The Industrial Revolution effectively posed the social question, demanding new ideas for managing large groups of people. Visible poverty, a growing population, and materialistic wealth caused tensions between the richest and poorest. These tensions were sometimes violently released and led to philosophical ideas such as socialism, communism and anarchism. === Factory system === Prior to the Industrial Revolution, most of the workforce was employed in agriculture, whether as self-employed farmers, tenants, or landless agricultural labourers. It was common for families to spin yarn, weave cloth and make their clothing. Households also spun and wove for market production. At the beginning of the Industrial Revolution, India, China, and regions of Iraq and elsewhere in Asia and the Middle East produced most of the world's cotton cloth, while Europeans produced wool and linen goods. In Great Britain in the 16th century, the putting-out system was practised, by which farmers and townspeople produced goods for a market in their homes, often described as cottage industry. Merchant capitalists typically provided the raw materials, paid workers by the piece, and were responsible for sales. Embezzlement of supplies by workers and poor quality were common. The logistical effort of procuring and distributing raw materials and picking up finished goods was also a limitation.: 57–59  Some early spinning and weaving machinery, such as a 40-spindle jenny for about six pounds in 1792, was affordable for cottagers.: 59  Later machinery such as spinning frames, spinning mules and power looms was expensive, giving rise to capitalist ownership of factories. Most textile factory workers during the Industrial Revolution were unmarried women and children, including many orphans. They worked for 12–14 hours per day with only Sundays off. It was common for women to take factory jobs seasonally during slack periods of farm work. Lack of adequate transportation, long hours, and poor pay made it difficult to recruit and retain workers. The change in the social relationship of the factory worker compared to farmers and cottagers was viewed unfavourably by Karl Marx; however, he recognised the increase in productivity from technology.
=== Standards of living === Some economists, such as Robert Lucas Jr., say the real effect of the Industrial Revolution was that "for the first time in history, the living standards of the masses of ordinary people have begun to undergo sustained growth ... Nothing remotely like this economic behaviour is mentioned by the classical economists, even as a theoretical possibility." Others argue that while growth of the economy was unprecedented, living standards for most did not grow meaningfully until the late 19th century and that workers' living standards declined under early capitalism. Some studies estimate that wages in Britain only increased 15% between the 1780s and 1850s and that life expectancy did not dramatically increase until the 1870s. Average height declined during the Industrial Revolution, implying that nutritional status was also declining. Life expectancy of children increased dramatically: the percentage of Londoners who died before the age of five decreased from 75% in 1730–49 to 32% in 1810–29. The effects on living conditions have been controversial and were debated by historians from the 1950s to the 1980s. Between 1813 and 1913, there was a significant increase in wages. ==== Food and nutrition ==== Chronic hunger and malnutrition were the norms for most, including in Britain and France, until the late 19th century. Until about 1750, malnutrition limited life expectancy to about 35 years in France and 40 years in Britain. The US population was adequately fed, taller, and had a life expectancy of 45–50 years, though this declined slightly by the mid-19th century. Food consumption per person also declined during an episode known as the Antebellum Puzzle. Food supply in Great Britain was adversely affected by the Corn Laws (1815–46), which imposed tariffs on imported grain. The laws were enacted to keep prices high to benefit domestic producers. The Corn Laws were repealed in the early years of the Great Irish Famine. The initial technologies of the Industrial Revolution, such as mechanized textiles, iron and coal, did little, if anything, to lower food prices. In Britain and the Netherlands, food supply increased before the Industrial Revolution with better agricultural practices; however, population grew as well. ==== Housing ==== Rapid population growth included the new industrial and manufacturing cities, as well as service centers such as Edinburgh and London. The critical factor was financing, which was handled by building societies that dealt directly with large contracting firms. Private renting from housing landlords was the dominant tenure; this was usually of advantage to tenants. People moved in so rapidly that there was not enough capital to build adequate housing, so low-income newcomers squeezed into overcrowded slums. Clean water, sanitation, and public health facilities were inadequate; the death rate was high, especially infant mortality, and tuberculosis among young adults. Cholera from polluted water and typhoid were endemic. Unlike in rural areas, there were no famines such as the one that devastated Ireland in the 1840s. A large exposé literature grew up condemning the unhealthy conditions. The most famous publication was by a founder of the socialist movement. In The Condition of the Working Class in England in 1844, Friedrich Engels describes backstreets of Manchester and other mill towns, where people lived in shanties and shacks, some not enclosed, some with dirt floors. These shanty towns had narrow walkways between irregularly shaped lots and dwellings. There were no sanitary facilities.
Population density was extremely high. However, not everyone lived in such poor conditions. The Industrial Revolution created a middle class of businessmen, clerks, foremen, and engineers who lived in much better conditions. Conditions improved over the 19th century with new public health acts regulating things such as sewage, hygiene, and home construction. In the introduction to his 1892 edition, Engels noted that most of the conditions had greatly improved. For example, the Public Health Act 1875 led to the more sanitary byelaw terraced house. ==== Water and sanitation ==== Pre-industrial water supply relied on gravity systems; pumping was done by water wheels, and pipes were made of wood. Steam-powered pumps and iron pipes allowed widespread piping of water to horse watering troughs and households. Engels' book describes how untreated sewage created awful odours and turned the rivers green in industrial cities. In 1854 John Snow traced a cholera outbreak in Soho, London to fecal contamination of a public water well by a home cesspit. Snow's finding that cholera could be spread by contaminated water took years to be accepted, but it led to fundamental changes in the design of public water and waste systems. === Literacy === In the 18th century, there was relatively high literacy among farmers in England and Scotland. This permitted the recruitment of literate craftsmen, skilled workers, foremen, and managers who supervised textile factories and coal mines. Much of the labour was unskilled, and especially in textile mills children as young as eight proved useful in handling chores and adding to family income. Children were taken out of school to work alongside their parents in the factories. However, by the mid-19th century, unskilled labour forces were common in Western Europe, and British industry moved upscale, needing more engineers and skilled workers who could follow technical instructions and handle complex situations. Literacy was essential to be hired. A senior government official told Parliament in 1870: Upon the speedy provision of elementary education depends our industrial prosperity. It is of no use trying to give technical teaching to our citizens without elementary education; uneducated labourers—and many of our labourers are utterly uneducated—are, for the most part, unskilled labourers, and if we leave our work-folk any longer unskilled, notwithstanding their strong sinews and determined energy, they will become overmatched in the competition of the world. The invention of the paper machine and the application of steam power to the industrial processes of printing supported a massive expansion of newspaper and pamphlet publishing, which contributed to rising literacy and demands for mass political participation. === Clothing and consumer goods === Consumers benefited from falling prices for clothing and household articles such as cast iron cooking utensils, and in the following decades, stoves for cooking and space heating. Coffee, tea, sugar, tobacco, and chocolate became affordable to many in Europe. The consumer revolution in England from the 17th to the mid-18th century had seen a marked increase in the consumption and variety of luxury goods and products by individuals from different economic and social backgrounds. With improvements in transport and manufacturing technology, opportunities for buying and selling became faster and more efficient. The expanding textile trade in the north of England meant the three-piece suit became affordable to the masses.
Founded by potter and retail entrepreneur Josiah Wedgwood in 1759, Wedgwood fine china and porcelain tableware became a common feature on dining tables. Rising prosperity and social mobility in the 18th century increased the number of people with disposable income for consumption, and the marketing of goods to individuals, as opposed to households, started to appear. With the rapid growth of towns and cities, shopping became an important part of everyday life. Window shopping and the purchase of goods became a cultural activity...and many exclusive shops were opened in elegant urban districts: in the Strand and Piccadilly in London, for example, and in spa towns such as Bath and Harrogate. Prosperity and expansion in manufacturing industries such as pottery and metalware increased consumer choice dramatically. Where once labourers ate from metal platters with wooden implements, ordinary workers now dined on Wedgwood porcelain. Consumers came to demand an array of new household goods and furnishings: metal knives and forks...rugs, carpets, mirrors, cooking ranges, pots, pans, watches, clocks, and a dizzying array of furniture. The age of mass consumption had arrived. New businesses appeared in towns and cities throughout Britain. Confectionery was one such industry that saw rapid expansion. According to food historian Polly Russell: "chocolate and biscuits became products for the masses...By the mid-19th century, sweet biscuits were an affordable indulgence and business was booming. Manufacturers...transformed from small family-run businesses into state-of-the-art operations". In 1847 Fry's of Bristol produced the first chocolate bar. Their competitor Cadbury, of Birmingham, was the first to commercialise the association between confectionery and romance when they produced a heart-shaped box of chocolates for Valentine's Day in 1868. The department store became a common feature in major High Streets; one of the first was opened in 1796 by Harding, Howell & Co. on Pall Mall, London. In the 1860s, fish and chip shops appeared to satisfy the needs of the growing industrial population. Street sellers were common in an increasingly urbanised country. "Crowds swarmed in every thoroughfare. Scores of street sellers 'cried' merchandise from place to place, advertising the wealth of goods and services on offer. Milkmaids, orange sellers, fishwives and piemen...walked the streets offering their various wares for sale, while knife grinders and the menders of broken chairs and furniture could be found on street corners". A soft drinks company, R. White's Lemonade, began in 1845 by selling drinks in London from a wheelbarrow. Increased literacy, industrialisation, and the railway created a market for cheap literature for the masses and the ability for it to be circulated on a large scale. Penny dreadfuls were created in the 1830s to meet this demand and have been described as "Britain's first taste of mass-produced popular culture for the young" and "the Victorian equivalent of video games". By the 1860s and 70s more than one million boys' periodicals were sold per week. Labelled an "authorpreneur" by The Paris Review, Charles Dickens used the innovations of the era to sell books: new printing presses, enhanced advertising revenues, and the railways. His first novel, The Pickwick Papers (1836), became a phenomenon, its unprecedented success sparking spin-offs and merchandise ranging from Pickwick cigars, playing cards, and china figurines to Sam Weller puzzles, Weller boot polish, and jokebooks.
Nicholas Dames in The Atlantic writes, "'Literature' is not a big enough category for Pickwick. It defined its own, a new one that we have learned to call 'entertainment'." Urbanisation of rural populations led to the development of the music hall in the 1850s, with the newly created urban communities, cut off from their cultural roots, requiring new and readily accessible forms of entertainment. In 1861, Welsh entrepreneur Pryce Pryce-Jones formed the first mail order business, an idea which changed retail. Selling Welsh flannel, he created catalogues, with customers able to order by mail for the first time—this following the Uniform Penny Post in 1840 and the invention of the postage stamp (Penny Black) with a charge of one penny for carriage between any two places in the UK irrespective of distance—and the goods were delivered via the new railway system. As the railways expanded overseas, so did his business. === Population increase === The Industrial Revolution was the first time there was a simultaneous increase in both population and income per person. The population of England and Wales, which had remained steady at six million in 1700–40, rose dramatically afterwards. England's population doubled from 8.3 million in 1801 to 17 million in 1850 and, by 1901, had nearly doubled again to 31 million. Improved conditions led to the population of Britain increasing from 10 million to 30 million in the 19th century. Europe's population increased from 100 million in 1700 to 400 million by 1900. Between 1815 and 1939, 20% of Europe's population left home, pushed by poverty, a rapidly growing population, and the displacement of peasant farming and artisan manufacturing. They were pulled abroad by the enormous demand for labour, ready availability of land, and cheap transportation. Many did not find a satisfactory life, leading 7 million to return to Europe. This mass migration had large demographic effects: in 1800, less than 1% of the world population consisted of overseas Europeans and their descendants; by 1930, they represented 11%. The Americas felt the brunt of this huge emigration, largely concentrated in the US. === Urbanization === The growth of industry since the late 18th century led to massive urbanisation and the rise of new great cities, first in Europe, then elsewhere, as new opportunities brought huge numbers of migrants from rural communities into urban areas. In 1800, only 3% of humans lived in cities, compared to 50% by 2000. Manchester had a population of 10,000 in 1717; by 1911 it had burgeoned to 2.3 million. === Effect on women and family life === Women's historians have debated the effect of the Industrial Revolution and capitalism on the status of women. Taking a pessimistic view, Alice Clark argues that when capitalism arrived in 17th-century England, it lowered the status of women as they lost much of their economic importance. Clark argues that in 16th-century England, women were engaged in many aspects of industry and agriculture. The home was a central unit of production, and women played a vital role in running farms and some trades and landed estates. Their economic role gave them a sort of equality. However, Clark argues, as capitalism expanded, there was more division of labour, with husbands taking paid labour jobs outside the home, and wives reduced to unpaid household work. Middle- and upper-class women were confined to an idle domestic existence, supervising servants; lower-class women were forced to take poorly paid jobs.
Capitalism, therefore, had a negative effect on powerful women. In a more positive interpretation, Ivy Pinchbeck argues that capitalism created the conditions for women's emancipation. Tilly and Scott have emphasised the continuity in the status of women, finding three stages in English history. In the pre-industrial era, production was mostly for home use, and women produced much of the needs of the households. The second stage was the "family wage economy" of early industrialisation; the entire family depended on the collective wages of its members, including husband, wife, and older children. The third, or modern, stage is the "family consumer economy", in which the family is the site of consumption, and women are employed in large numbers in retail and clerical jobs to support rising consumption. Ideas of thrift and hard work characterised middle-class families as the Industrial Revolution swept Europe. These values were displayed in Samuel Smiles' book Self-Help, in which he states that the misery of the poorer classes was "voluntary and self-imposed—the results of idleness, thriftlessness, intemperance, and misconduct." === Labour conditions === ==== Social structure and working conditions ==== Harsh working conditions were prevalent long before the Industrial Revolution. Pre-industrial society was very static and often cruel—child labour, dirty living conditions, and long working hours were just as prevalent before the Industrial Revolution. The Industrial Revolution witnessed the triumph of a middle class of industrialists and businessmen over a landed class of nobility and gentry. Working people found increased opportunities for employment in mills and factories, but these were under strict working conditions with long hours dominated by a pace set by machines. As late as 1900, most US industrial workers worked 10-hour days, yet earned 20–40% less than the minimum deemed necessary for a decent life. Most workers in textiles, which was the leading industry in terms of employment, were women and children. For workers, industrial life "was a stony desert, which they had to make habitable by their own efforts." ==== Factories and urbanisation ==== Industrialisation led to the creation of the factory. The factory system contributed to the growth of urban areas as workers migrated into the cities in search of work in the factories. This was clearly illustrated in the mills and associated industries of Manchester, nicknamed "Cottonopolis", and the world's first industrial city. Manchester experienced a sixfold increase in population between 1771 and 1831. Bradford grew by 50% every ten years between 1811 and 1851, and by 1851 only 50% of its population were born there. For much of the 19th century, production was done in small mills, which were typically water-powered and built to serve local needs. Later, each factory would have its own steam engine and a chimney to give an efficient draft through its boiler. Some industrialists tried to improve factory and living conditions for their workers. One early reformer was Robert Owen, known for his pioneering efforts in improving conditions for workers at the New Lanark mills and often regarded as a key thinker of the early socialist movement. By 1746 an integrated brass mill was working at Warmley near Bristol. Raw material was smelted into brass and turned into pans, pins, wire, and other goods. Housing was provided for workers on site. Josiah Wedgwood and Matthew Boulton were other prominent early industrialists who employed the factory system.
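The Bradford figure quoted above (growth of about 50% every ten years between 1811 and 1851) compounds over four successive decades; the calculation below simply works out the quoted rate and is not an independent population estimate.

$$1.5^{4} \approx 5.1$$

In other words, a town growing at that rate roughly quintuples in forty years, which gives a sense of the pace of urbanisation that the factory system set in motion.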
==== Child labour ==== The chances of surviving childhood did not improve throughout the Industrial Revolution, although infant mortality rates were reduced markedly. There was still limited opportunity for education, and children were expected to work. Child labour had existed before, but with the increase in population and education it became more visible. Many children were forced to work in bad conditions for much lower pay than their elders, 10–20% of an adult male's wage, even though their productivity was comparable; there was no need for strength to operate an industrial machine, and since the industrial system was new, there were no experienced adult labourers. This made child labour the labour of choice for manufacturing in the early phases of the Industrial Revolution, between the 18th and 19th centuries. In England and Scotland in 1788, two-thirds of the workers in 143 water-powered cotton mills were children. Reports detailing some of the abuses, particularly in the mines and textile factories, helped to popularise the children's plight. The outcry, especially among the upper and middle classes, helped stir change for the young workers' welfare. Politicians and the government tried to limit child labour by law, but factory owners resisted; some felt they were aiding the poor by giving their children money to buy food, while others simply welcomed the cheap labour. In 1833 and 1844, the first general laws against child labour, the Factory Acts, were passed in Britain: children younger than nine were not allowed to work, children were not permitted to work at night, and the working day for those under 18 was limited to 12 hours. Factory inspectors enforced the law; however, their scarcity made this difficult. A decade later, the employment of children and women in mining was forbidden. Although such laws decreased the number of child labourers, child labour remained significantly present in Europe and the US until the 20th century. ==== Organisation of labour ==== The Industrial Revolution concentrated labour into mills, factories, and mines, thus facilitating the organisation of combinations or trade unions to advance the interests of working people. A union could demand better terms by withdrawing its labour and halting production. Employers had to decide between giving in at a cost, or suffering the cost of the lost production. Skilled workers were difficult to replace, and these were the first to successfully advance their conditions through this kind of bargaining. The main method unions used, and still use, to effect change was strike action. Many strikes were painful events for both unions and management. In Britain, the Combination Act 1799 forbade workers to form any kind of trade union until its repeal in 1824. Even after this, unions were severely restricted. A British newspaper in 1834 described unions as "the most dangerous institutions that were ever permitted to take root, under shelter of law, in any country..." The Reform Act 1832 extended the vote in Britain, but did not grant universal suffrage. Six men from Tolpuddle in Dorset founded the Friendly Society of Agricultural Labourers to protest against the lowering of wages in the 1830s. They refused to work for less than ten shillings per week; by this time wages had been reduced to seven shillings and were due to be reduced further to six.
In 1834 James Frampton, a local landowner, wrote to Prime Minister Lord Melbourne to complain about the union, invoking an obscure law from 1797 prohibiting people from swearing oaths to each other, which the members of the Society had done. Six men were arrested, found guilty, and transported to Australia. They became known as the Tolpuddle Martyrs. In the 1830s and 40s, the Chartist movement was the first large-scale organised working-class political movement that campaigned for political equality and social justice. Its Charter of reforms received three million signatures, but was rejected by Parliament without consideration. Working people formed friendly societies and cooperative societies as mutual support groups against times of economic hardship. Enlightened industrialists, such as Robert Owen, supported these organisations to improve conditions. Unions slowly overcame the legal restrictions on the right to strike. In 1842, a general strike involving cotton workers and colliers was organised through the Chartist movement, which stopped production across Britain. Eventually, effective political organisation for working people was achieved through the trades unions which, after the extensions of the franchise in 1867 and 1885, began to support socialist political parties that later merged to become the British Labour Party. ==== Luddites ==== The rapid industrialisation of the English economy cost many craft workers their jobs. The Luddite movement started with lace and hosiery workers near Nottingham and spread to other areas of the textile industry. Many weavers found themselves suddenly unemployed as they could no longer compete with machines that required less skilled labour to produce more cloth than a single weaver could. Many such unemployed workers, weavers, and others turned their animosity towards the machines that had taken their jobs and began destroying factories and machinery. These attackers became known as Luddites, supposedly followers of Ned Ludd, a folklore figure. The first attacks of the movement began in 1811. The Luddites rapidly gained popularity, and the Government took drastic measures, using the militia or army to protect industry. Rioters who were caught were tried and hanged, or transported for life. Unrest continued in other sectors as they industrialised, such as with agricultural labourers in the 1830s when large parts of southern Britain were affected by the Captain Swing disturbances. Threshing machines were a particular target, and hayrick burning was a popular activity. The riots led to the first formation of trade unions and further pressure for reform. ==== Shift in production's centre of gravity ==== The traditional centres of hand textile production such as India, the Middle East, and China could not withstand competition from machine-made textiles, which destroyed the hand-made textile industries and left millions without work, many of whom starved. The Industrial Revolution generated an enormous and unprecedented economic division in the world, as measured by the share of manufacturing output. ==== Cotton and the expansion of slavery ==== Cheap cotton textiles increased demand for raw cotton; previously, it had primarily been consumed in subtropical regions where it was grown, with little raw cotton available for export. Consequently, prices of raw cotton rose. British production grew from 2 million pounds in 1700 to 5 million in 1781 to 56 million in 1800. The invention of the cotton gin by American Eli Whitney in 1792 was the decisive event.
It allowed green-seeded cotton to become profitable, leading to the widespread growth of slave plantations in the US, Brazil, and the West Indies. In 1791, American cotton production was 2 million pounds, soaring to 35 million by 1800, half of which was exported. America's cotton plantations were highly efficient, profitable and able to keep up with demand. The U.S. Civil War created a "cotton famine" that led to increased production in other areas of the world, including European colonies in Africa. === Effect on environment === The origins of the environmental movement lay in the response to increasing levels of smoke pollution during the Industrial Revolution. The emergence of great factories and the linked immense growth in coal consumption gave rise to an unprecedented level of air pollution in industrial centres; after 1900 the large volume of industrial chemical discharges added to the growing load of untreated human waste. The first large-scale, modern environmental laws came in the form of Britain's Alkali Act 1863, to regulate the air pollution given off by the Leblanc process used to produce soda ash. Alkali inspectors were appointed to curb this pollution. The manufactured gas industry began in British cities in 1812–20. This produced highly toxic effluent dumped into sewers and rivers. The gas companies were repeatedly sued in nuisance lawsuits. They usually lost and modified the worst practices. The City of London indicted gas companies in the 1820s for polluting the Thames, poisoning its fish. Parliament wrote company charters to regulate toxicity. The industry reached the U.S. around 1850 causing pollution and lawsuits. In industrial cities local experts and reformers, especially after 1890, took the lead in identifying environmental degradation and pollution, and initiating grass-roots movements to achieve reforms. Typically the highest priority went to water and air pollution. The Coal Smoke Abatement Society was formed in Britain in 1898. It was founded by artist William Blake Richmond, frustrated with the pall cast by coal smoke. Although there were earlier pieces of legislation, the Public Health Act 1875 required all furnaces and fireplaces to consume their smoke. It provided for sanctions against factories that emitted large amounts of black smoke. == Industrialisation beyond Great Britain == === Europe === The Industrial Revolution in continental Europe started in Belgium and France, then spread to the German states by the middle of the 19th century. In many industries, this involved the application of technology developed in Britain. Typically, the technology was purchased from Britain, or British engineers and entrepreneurs moved abroad in search of new opportunities. By 1809, part of the Ruhr in Westphalia was called 'Miniature England' because of its similarities to industrial areas of Britain. Most European governments provided state funding to the new industries. In some cases, such as iron, the different availability of resources locally meant only some aspects of the British technology were adopted. ==== Belgium ==== Belgium was the second country in which the Industrial Revolution took place and the first in continental Europe: Wallonia (French-speaking southern Belgium) took the lead. Starting in the 1820s, and especially after Belgium became independent in 1830, factories comprising coke blast furnaces as well as puddling and rolling mills were built in the coal mining areas around Liège and Charleroi. 
The leader was John Cockerill, a transplanted Englishman. His factories at Seraing integrated all stages of production, from engineering to the supply of raw materials, as early as 1825. Wallonia exemplified the radical evolution of industrial expansion; it was also the birthplace of a strong socialist party and trade unions. Thanks to coal, the region became the second industrial power after Britain. With its Sillon industriel, "Especially in the Haine, Sambre and Meuse valleys...there was a huge industrial development based on coal-mining and iron-making...". Philippe Raxhon wrote about the period after 1830: "It was not propaganda but a reality the Walloon regions were becoming the second industrial power...after Britain." "The sole industrial centre outside the collieries and blast furnaces of Walloon was the old cloth-making town of Ghent." Many 19th-century coal mines in Wallonia are now protected as World Heritage Sites. Even though Belgium was the second industrial country after Britain, the effect of the Industrial Revolution was very different. In 'Breaking stereotypes', Muriel Neven and Isabelle Devious say: The Industrial Revolution changed a mainly rural society into an urban one, but with a strong contrast between northern and southern Belgium. During the Middle Ages and the early modern period, Flanders was characterised by the presence of large urban centres [...] at the beginning of the nineteenth century this region (Flanders), with an urbanisation degree of more than 30 percent, remained one of the most urbanised in the world. By comparison, this proportion reached only 17 percent in Wallonia, barely 10 percent in most West European countries, 16 percent in France, and 25 percent in Britain. 19th-century industrialisation did not affect the traditional urban infrastructure, except in Ghent... Also, in Wallonia, the traditional urban network was largely unaffected by the industrialisation process, even though the proportion of city-dwellers rose from 17 to 45 percent between 1831 and 1910. Especially in the Haine, Sambre and Meuse valleys...where there was a huge industrial development based on coal-mining and iron-making, urbanisation was fast. During these eighty years, the number of municipalities with more than 5,000 inhabitants increased from only 21 to more than one hundred, concentrating nearly half of the Walloon population in this region. Nevertheless, industrialisation remained quite traditional in the sense that it did not lead to the growth of modern and large urban centres, but to a conurbation of industrial villages and towns developed around a coal mine or a factory. Communication routes between these small centres only became populated later and created a much less dense urban morphology than, for instance, the area around Liège where the old town was there to direct migratory flows. ==== France ==== The Industrial Revolution in France did not correspond to the main model followed by other countries. Most French historians argue France did not go through a clear take-off. Instead, economic growth and industrialisation were slow and steady through the 18th and 19th centuries. However, some stages were identified by Maurice Lévy-Leboyer: the French Revolution and Napoleonic Wars (1789–1815); industrialisation, along with Britain (1815–1860); economic slowdown (1860–1905); and renewal of growth after 1905. ==== Germany ==== Germany's political disunity—with three dozen states—and a pervasive conservatism made it difficult to build railways in the 1830s.
However, by the 1840s, trunk lines linked the major cities; each German state was responsible for the lines within its borders. Lacking a technological base at first, the Germans imported their engineering and hardware from Britain, but quickly learned the skills needed to operate and expand the railways. In many cities, the new railway shops were the centres of technological awareness and training, so that by 1850, Germany was self-sufficient in meeting the demands of railway construction, and the railways were a major impetus for the growth of the new steel industry. Observers found that even as late as 1890, their engineering was inferior to Britain's. However, German unification in 1871 stimulated consolidation, nationalisation into state-owned companies, and further rapid growth. Unlike in France, the goal was the support of industrialisation, and so heavy lines crisscrossed the Ruhr and other industrial districts and provided good connections to the major ports of Hamburg and Bremen. By 1880, Germany had 9,400 locomotives pulling 43,000 passengers and 30,000 tons of freight, and pulled ahead of France. Based on its leadership in chemical research in universities and industrial laboratories, Germany became dominant in the world's chemical industry in the late 19th century. ==== Sweden ==== Between 1790 and 1815, Sweden experienced two parallel economic movements: an agricultural revolution with larger agricultural estates, new crops, and farming tools and commercialisation of farming, and a proto industrialisation, with small industries being established in the countryside and workers switching between agriculture in summer and industrial production in winter. This led to economic growth benefiting large sections of the population and leading up to a consumption revolution starting in the 1820s. Between 1815 and 1850, the protoindustries developed into more specialised and larger industries. This period witnessed regional specialisation with mining in Bergslagen, textile mills in Sjuhäradsbygden, and forestry in Norrland. Important institutional changes took place, such as free and mandatory schooling introduced in 1842 (first time in the world), the abolition of the national monopoly on trade in handicrafts in 1846, and a stock company law in 1848. From 1850 to 1890, there was a rapid expansion in exports, dominated by crops, wood, and steel. Sweden abolished most tariffs and other barriers to free trade in the 1850s and joined the gold standard in 1873. Large infrastructural investments were made, mainly in the expanding railroad network, which was financed by the government and private enterprises. From 1890 to 1930, new industries developed with their focus on the domestic market: mechanical engineering, power utilities, papermaking and textile. ==== Austria-Hungary ==== The Habsburg realms, which became Austria-Hungary in 1867, had a population of 23 million in 1800, growing to 36 million by 1870. Between 1818 and 1870, industrial growth averaged 3% annually, though development varied significantly across regions. A major boost to industrialisation came with the construction of the railway network between 1850 and 1873, which transformed transport by making it faster, more reliable and affordable. Proto-industrialisation had already begun by 1750 in Alpine and Bohemian regions— what is now the Czech Republic—which later emerged as the industrial hub of the empire. The textile industry led this transformation, adopting mechanisation, steam engines, and the factory system. 
The first mechanical loom in the Czech lands was introduced in Varnsdorf in 1801 followed shortly by the arrival of steam engines in Bohemia and Moravia. Textile production flourished in industrial centers such as Prague and Brno—the latter earning the nickname "Moravian Manchester." The Czech lands became an industrial heartland due to rich natural resources, skilled workforce, and early adoption of technology. The iron industry also expanded in the Alpine regions after 1750. Hungary, by contrast, remained predominantly rural and under-industrialised until after 1870. However, reformers like Count István Széchenyi played a crucial role in laying the groundwork for future development. Often called "the greatest Hungarian," Széchenyi advocated for economic modernisation, infrastructure development, and industrial education. His initiatives included the promotion of river regulation, bridge construction (notably the Chain Bridge in Budapest), and the founding of the Hungarian Academy of Sciences—all aimed at fostering a market-oriented economy. In 1791, Prague hosted the first World's Fair, in Clementinum showcasing the region’s growing industrial sophistication. An earlier industrial exhibition was held in conjunction with the coronation of Leopold II as King of Bohemia, celebrating advanced manufacturing techniques in the Czech lands. From 1870 to 1913, technological innovation drove industrialisation and urbanisation across the empire. Gross national product (GNP) per capita grew at an average annual rate of 1.8%—surpassing Britain (1%), France (1.1%), and Germany (1.5%). Nevertheless, Austria-Hungary as a whole continued to lag behind more industrialised powers like Britain and Germany, largely due to its later start in the modernisation process. === Japan === The Industrial Revolution began about 1870 as Meiji period leaders decided to catch up with the West. The government built railways, improved roads, and inaugurated a land reform program to prepare the country for further development. It inaugurated a new Western-based education system for young people, sent thousands of students to the US and Europe, and hired more than 3,000 Westerners to teach modern science, mathematics, technology, and foreign languages (Foreign government advisors in Meiji Japan). In 1871, a group of Japanese politicians known as the Iwakura Mission toured Europe and the US to learn Western ways. The result was a deliberate state-led industrialisation policy to enable Japan to quickly catch up. The Bank of Japan, founded in 1882, used taxes to fund model steel and textile factories. Modern industry first appeared in textiles, including cotton and especially silk, which was based in home workshops in rural areas. === United States === During the late 18th and early 19th centuries when Western Europe began to industrialise, the US was primarily an agricultural and natural resource producing and processing economy. The building of roads and canals, the introduction of steamboats and the building of railroads were important for handling agricultural and natural resource products in the large and sparsely populated country. Important American technological contributions were the cotton gin and the development of a system for making interchangeable parts, which was aided by the development of the milling machine in the US. The development of machine tools and system of interchangeable parts was the basis for the rise of the US as the world's leading industrial nation in the late 19th century. 
Oliver Evans invented an automated flour mill in the mid-1780s that used control mechanisms and conveyors, so no labour was needed from when grain was loaded into the elevator buckets until the flour was discharged into a wagon. This is considered to be the first modern materials handling system, an important advance in the progress toward mass production. The US originally used horse-powered machinery for small-scale applications such as grain milling, but eventually switched to water power after textile factories began being built in the 1790s. As a result, industrialisation was concentrated in New England and the Northeastern United States, which has fast-moving rivers. The newer water-powered production lines proved more economical than horse-drawn production. In the late 19th century steam-powered manufacturing overtook water-powered manufacturing, allowing the industry to spread to the Midwest. Thomas Somers and the Cabot Brothers founded the Beverly Cotton Manufactory in 1787, the first cotton mill in America, the largest cotton mill of its era, and a significant milestone in the research and development of cotton mills. This mill was designed to use horsepower, but the operators quickly learned that the horse-drawn platform was economically unstable and had losses for years. Despite this, the Manufactory served as a playground of innovation, both in turning out a large amount of cotton and in developing the water-powered milling structure used in Slater's Mill. In 1793, Samuel Slater (1768–1835) founded the Slater Mill at Pawtucket, Rhode Island. He had learned of the new textile technologies as a boy apprentice in Derbyshire, England, and defied laws against the emigration of skilled workers by leaving for New York in 1789, hoping to make money with his knowledge. After founding Slater's Mill, he went on to own 13 textile mills. Daniel Day established a wool carding mill in the Blackstone Valley at Uxbridge, Massachusetts in 1809, the third woollen mill established in the US. The Blackstone Valley National Heritage Corridor retraces the history of "America's Hardest-Working River", the Blackstone River, which, with its tributaries, covers more than 70 kilometres (45 mi). At its peak over 1,100 mills operated in this valley, including Slater's Mill. Merchant Francis Cabot Lowell from Newburyport, Massachusetts, memorised the design of textile machines on his tour of British factories in 1810. The War of 1812 ruined his import business, but, realising that demand for domestic-finished cloth was emerging in America, on his return he set up the Boston Manufacturing Company. Lowell and his partners built America's second cotton-to-cloth textile mill at Waltham, Massachusetts, second to the Beverly Cotton Manufactory. After his death in 1817, his associates built America's first planned factory town, which they named after him. This enterprise was capitalised in a public stock offering, one of the first uses of it in the US. Lowell, Massachusetts, used nine kilometres (5½ miles) of canals and 7,500 kilowatts (10,000 horsepower) delivered by the Merrimack River. The short-lived utopia-like Waltham-Lowell system was formed as a direct response to the poor working conditions in Britain. However, by 1850, especially following the Great Famine of Ireland, the system had been replaced by poor immigrant labour. A major U.S. contribution to industrialisation was the development of techniques to make interchangeable parts from metal. Precision metal machining techniques were developed by the U.S.
Department of War to make interchangeable parts for firearms. Techniques included using fixtures to hold the parts in the proper position, jigs to guide the cutting tools and precision blocks and gauges to measure the accuracy. The milling machine, a fundamental machine tool, is believed to have been invented by Eli Whitney, who was a government contractor who built firearms as part of this program. Another important invention was the Blanchard lathe, invented by Thomas Blanchard. The Blanchard lathe was actually a shaper that could produce copies of wooden gun stocks. The use of machinery and the techniques for producing standardised and interchangeable parts became known as the American system of manufacturing. Precision manufacturing techniques made it possible to build machines that mechanised the shoe and watch industries. The industrialisation of the watch industry started in 1854 also in Waltham, Massachusetts, at the Waltham Watch Company, with the development of machine tools, gauges and assembling methods adapted to the micro precision required for watches. == Second Industrial Revolution == Steel is often cited as the first of several new areas for industrial mass-production, which are said to characterise a "Second Industrial Revolution", beginning around 1850, although a method for mass manufacture of steel was not invented until the 1860s, when Henry Bessemer invented a new furnace which could convert molten pig iron into steel in large quantities. However, it only became widely available in the 1870s after the process was modified to produce more uniform quality. This Second Industrial Revolution gradually grew to include chemicals, mainly the chemical industries, petroleum and, in the 20th century, the automotive industry, and was marked by a transition of technological leadership from Britain, to the US and Germany. The increasing availability of economical petroleum products also reduced the importance of coal and widened the potential for industrialisation. A new revolution began with electricity and electrification in the electrical industries. By the 1890s, industrialisation had created the first giant industrial corporations with burgeoning global interests, as companies like U.S. Steel, General Electric, Standard Oil and Bayer AG joined the railroad and ship companies on the world's stock markets. == Causes == The causes of the Industrial Revolution were complicated and remain debated. Geographic factors include Britain's vast mineral resources. In addition to metal ores, Britain had the highest quality coal reserves known at the time, as well as abundant water power, highly productive agriculture, numerous seaports and navigable waterways. Some historians believe the Industrial Revolution was an outgrowth of social and institutional changes brought by the end of feudalism in Britain after the English Civil War in the 17th century, although feudalism began to break down after the Black Death of the mid 14th century. This created labour shortages and led to falling food prices and a peak in real wages around 1500, after which population growth began reducing wages. After 1540, increasing precious metals supply from the Americas caused inflation, which caused land rents to fall in real terms. 
The Enclosure movement and the British Agricultural Revolution made food production more efficient and less labour-intensive, forcing the farmers who could no longer be self-sufficient in agriculture into cottage industry, for example weaving, and in the longer term into the cities and the newly developed factories. The colonial expansion of the 17th century, with the accompanying development of international trade, the creation of financial markets and the accumulation of capital, is also cited as a factor, as is the scientific revolution of the 17th century. A shift in marriage patterns towards marrying later allowed people to accumulate more human capital during their youth, thereby encouraging economic development. Until the 1980s, it was believed by historians that technological innovation was the heart of the Industrial Revolution and that the key enabling technology was the invention and improvement of the steam engine. Lewis Mumford has proposed that the Industrial Revolution had its origins in the Early Middle Ages, much earlier than most estimates. He explains that the model for standardised mass production was the printing press and that "the archetypal model for the industrial era was the clock". He also cites the monastic emphasis on order and time-keeping, as well as the fact that medieval cities had at their centre a church with bell ringing at regular intervals, as being necessary precursors to a greater synchronisation necessary for later, more physical, manifestations such as the steam engine. The presence of a large domestic market is considered an important driver of the Industrial Revolution, particularly explaining why it occurred in Britain. In other nations, such as France, markets were split up by local regions, which often imposed tolls and tariffs on goods traded among them. Internal tariffs were abolished by Henry VIII of England; they survived in Russia until 1753, in France until 1789, and in Spain until 1839. Governments' grant of limited monopolies to inventors under a developing patent system (the Statute of Monopolies in 1623) is considered an influential factor. The effects of patents, both good and ill, on the development of industrialisation are clearly illustrated in the history of the steam engine. In return for publicly revealing the workings of an invention, patents rewarded inventors such as James Watt by allowing them to monopolise production, thereby increasing the pace of technological development. However, monopolies bring with them inefficiencies which counterbalance, or even overbalance, the benefits of publicising ingenuity and rewarding inventors. Watt's monopoly prevented other inventors, such as Richard Trevithick, William Murdoch, or Jonathan Hornblower, whom Boulton and Watt sued, from introducing improved steam engines, thereby slowing the spread of steam power. === Causes in Europe === One question of active interest to historians is why the Industrial Revolution occurred in Europe and not in other parts of the world in the 18th century, particularly China, India, and the Middle East (which pioneered in shipbuilding, textile production, water mills, and much more in the period between 750 and 1100), or at other times like in Classical Antiquity or the Middle Ages. A recent account argued that Europeans have been characterized for thousands of years by a freedom-loving culture originating from the aristocratic societies of early Indo-European invaders.
Many historians, however, have challenged this explanation as being not only Eurocentric, but also ignoring historical context. In fact, before the Industrial Revolution, "there existed something of a global economic parity between the most advanced regions in the world economy." These historians have suggested a number of other factors, including education, technological changes (see Scientific Revolution in Europe), "modern" government, "modern" work attitudes, ecology, and culture. China was the world's most technologically advanced country for many centuries; however, China stagnated economically and technologically and was surpassed by Western Europe before the Age of Discovery, by which time China banned imports and denied entry to foreigners. China was also a totalitarian society. It also taxed transported goods heavily. Modern estimates of per capita income in Western Europe in the late 18th century are of roughly 1,500 dollars in purchasing power parity (and Britain had a per capita income of nearly 2,000 dollars) whereas China, by comparison, had only 450 dollars. India was essentially feudal, politically fragmented and not as economically advanced as Western Europe. Historians such as David Landes and sociologists Max Weber and Rodney Stark credit the different belief systems in Asia and Europe with dictating where the revolution occurred. The religion and beliefs of Europe were largely products of Judaeo-Christianity and Greek thought. Conversely, Chinese society was founded on men like Confucius, Mencius, Han Feizi (Legalism), Lao Tzu (Taoism), and Buddha (Buddhism), resulting in very different worldviews. Other factors include the considerable distance of China's coal deposits, though large, from its cities as well as the then unnavigable Yellow River that connects these deposits to the sea. Economic historian Joel Mokyr argued that political fragmentation, the presence of a large number of European states, made it possible for heterodox ideas to thrive, as entrepreneurs, innovators, ideologues and heretics could easily flee to a neighboring state in the event that the one state would try to suppress their ideas and activities. This is what set Europe apart from the technologically advanced, large unitary empires such as China and India by providing "an insurance against economic and technological stagnation". China had both a printing press and movable type, and India had similar levels of scientific and technological achievement as Europe in 1700, yet the Industrial Revolution would occur in Europe, not China or India. In Europe, political fragmentation was coupled with an "integrated market for ideas" where Europe's intellectuals used the lingua franca of Latin, had a shared intellectual basis in Europe's classical heritage and the pan-European institution of the Republic of Letters. Political institutions could contribute to the relation between democratization and economic growth during Great Divergence. In addition, Europe's monarchs desperately needed revenue, pushing them into alliances with their merchant classes. Small groups of merchants were granted monopolies and tax-collecting responsibilities in exchange for payments to the state. Located in a region "at the hub of the largest and most varied network of exchange in history", Europe advanced as the leader of the Industrial Revolution. 
In the Americas, Europeans found a windfall of silver, timber, fish, and maize, leading historian Peter Stearns to conclude that "Europe's Industrial Revolution stemmed in great part from Europe's ability to draw disproportionately on world resources." Modern capitalism originated in the Italian city-states around the end of the first millennium. The city-states were prosperous cities that were independent from feudal lords. They were largely republics whose governments were typically composed of merchants, manufacturers, members of guilds, bankers and financiers. The Italian city-states built a network of branch banks in leading western European cities and introduced double entry bookkeeping. Italian commerce was supported by schools that taught numeracy in financial calculations through abacus schools. === Causes in Britain === Great Britain provided the legal and cultural foundations that enabled entrepreneurs to pioneer the Industrial Revolution. Key factors fostering this environment were: the period of peace and stability which followed the unification of England and Scotland; the absence of internal trade barriers, including between England and Scotland, and of feudal tolls and tariffs, which made Britain the "largest coherent market in Europe": 46 ; the rule of law (enforcing property rights and respecting the sanctity of contracts); a straightforward legal system that allowed the formation of joint-stock companies (corporations); and the free market (capitalism). Great Britain's geographical and natural resource advantages included extensive coastlines and many navigable rivers in an age when water was the easiest means of transportation, as well as the highest quality coal in Europe. Britain also had a large number of sites for water power. There were two main values that drove the Industrial Revolution in Britain. These values were self-interest and an entrepreneurial spirit. Because of these values, many industrial advances were made that resulted in a huge increase in personal wealth and a consumer revolution. These advancements also greatly benefitted British society as a whole. Countries around the world started to recognise the changes and advancements in Britain and use them as an example to begin their own Industrial Revolutions. A debate sparked by Trinidadian politician and historian Eric Williams in his work Capitalism and Slavery (1944) concerned the role of slavery in financing the Industrial Revolution. Williams argued that European capital amassed from slavery was vital in the early years of the revolution, contending that the rise of industrial capitalism was the driving force behind abolitionism instead of humanitarian motivations. These arguments led to significant historiographical debates among historians, with American historian Seymour Drescher critiquing Williams' arguments in Econocide (1977). Instead, the greater liberalisation of trade from a large merchant base may have allowed Britain to produce and use emerging scientific and technological developments more effectively than countries with stronger monarchies, particularly China and Russia. Britain emerged from the Napoleonic Wars as the only European nation not ravaged by financial plunder and economic collapse, and having the only merchant fleet of any useful size (European merchant fleets were destroyed during the war by the Royal Navy). Britain's extensive exporting cottage industries also ensured markets were already available for many early forms of manufactured goods.
The conflict resulted in most British warfare being conducted overseas, reducing the devastating effects of territorial conquest that affected much of Europe. This was further aided by Britain's geographical position—an island separated from the rest of mainland Europe. Another theory is that Britain was able to succeed in the Industrial Revolution due to the availability of key resources it possessed. It had a dense population for its small geographical size. Enclosure of common land and the related agricultural revolution made a supply of this labour readily available. There was also a local coincidence of natural resources in the North of England, the English Midlands, South Wales and the Scottish Lowlands. Local supplies of coal, iron, lead, copper, tin, limestone and water power resulted in excellent conditions for the development and expansion of industry. Also, the damp, mild weather conditions of the North West of England provided ideal conditions for the spinning of cotton, providing a natural starting point for the birth of the textiles industry. The stable political situation in Britain from around 1689 following the Glorious Revolution, and British society's greater receptiveness to change (compared with other European countries) can also be said to be factors favouring the Industrial Revolution. Peasant resistance to industrialisation was largely eliminated by the Enclosure movement, and the landed upper classes developed commercial interests that made them pioneers in removing obstacles to the growth of capitalism. (This point is also made in Hilaire Belloc's The Servile State.) The French philosopher Voltaire wrote about capitalism and religious tolerance in his book on English society, Letters on the English (1733), noting why England at that time was more prosperous in comparison to the country's less religiously tolerant European neighbours. "Take a view of the Royal Exchange in London, a place more venerable than many courts of justice, where the representatives of all nations meet for the benefit of mankind. There the Jew, the Mahometan [Muslim], and the Christian transact together, as though they all professed the same religion, and give the name of infidel to none but bankrupts. There the Presbyterian confides in the Anabaptist, and the Churchman depends on the Quaker's word. If one religion only were allowed in England, the Government would very possibly become arbitrary; if there were but two, the people would cut one another's throats; but as there are such a multitude, they all live happy and in peace." Britain's population grew 280% from 1550 to 1820, while the rest of Western Europe grew 50–80%. Seventy percent of European urbanisation happened in Britain from 1750 to 1800. By 1800, only the Netherlands was more urbanised than Britain. This was only possible because coal, coke, imported cotton, brick and slate had replaced wood, charcoal, flax, peat and thatch. The latter compete with land grown to feed people while mined materials do not. Yet more land would be freed when chemical fertilisers replaced manure and horse's work was mechanised. A workhorse needs 1.2 to 2.0 ha (3 to 5 acres) for fodder while even early steam engines produced four times more mechanical energy. In 1700, five-sixths of the coal mined worldwide was in Britain, while the Netherlands had none; so despite having Europe's best transport, lowest taxes, and most urbanised, well-paid, and literate population, it failed to industrialise. 
In the 18th century, it was the only European country whose cities and population shrank. Without coal, Britain would have run out of suitable river sites for mills by the 1830s. Based on science and experimentation from the continent, the steam engine was developed specifically for pumping water out of mines, many of which in Britain had been mined to below the water table. Although extremely inefficient, these engines were economical because they used unsaleable coal. Iron rails were developed to transport coal, which was a major economic sector in Britain. Economic historian Robert Allen has argued that high wages, cheap capital and very cheap energy in Britain made it the ideal place for the industrial revolution to occur. These factors made it vastly more profitable to invest in research and development, and to put technology to use, in Britain than in other societies. However, two 2018 studies in The Economic History Review showed that wages were not particularly high in the British spinning sector or the construction sector, casting doubt on Allen's explanation. A 2022 study in the Journal of Political Economy by Morgan Kelly, Joel Mokyr, and Cormac Ó Gráda found that industrialization happened in areas with low wages and high mechanical skills, whereas literacy, banks and proximity to coal had little explanatory power. === Transfer of knowledge === Knowledge of innovation was spread by several means. Workers trained in a technique might move to another employer or be poached. A common method was the study tour, in which individuals gathered information abroad. Throughout the Industrial Revolution and preceding century, European countries and America engaged in such tours; Sweden and France even trained civil servants or technicians to undertake them as policy, while in Britain and America individual manufacturers pursued tours independently. Travel diaries from the tours are invaluable records of period methods. Innovation spread via informal networks such as the Lunar Society of Birmingham, whose members met from 1765 to 1809 to discuss natural philosophy and its industrial applications. They have been described as “the revolutionary committee of that most far-reaching of all the eighteenth-century revolutions, the Industrial Revolution.” Similar societies published papers and proceedings; for example, the Royal Society of Arts issued annual Transactions and illustrated volumes of new inventions. Technical encyclopaedias disseminated methods. John Harris’s Lexicon Technicum (1704) offered extensive scientific and engineering entries. Abraham Rees’s The Cyclopaedia; or, Universal Dictionary of Arts, Sciences, and Literature (1802–19) contained detailed articles and engraved plates on machines and processes. French works such as the Descriptions des Arts et Métiers and Diderot’s Encyclopédie similarly documented foreign techniques with engraved illustrations. Periodicals on manufacturing and patents emerged in the 1790s; for instance, French journals like the Annales des Mines printed engineers’ travel reports on British factories, helping diffuse British innovations abroad. === Protestant work ethic === Another theory is that the British advance was due to the presence of an entrepreneurial class which believed in progress, technology and hard work.
The existence of this class is often linked to the Protestant work ethic (see Max Weber) and the particular status of the Baptists and the dissenting Protestant sects, such as the Quakers and Presbyterians that had flourished with the English Civil War. Reinforcement of confidence in the rule of law, which followed establishment of the prototype of constitutional monarchy in Britain in the Glorious Revolution of 1688, and the emergence of a stable financial market there based on the management of the national debt by the Bank of England, contributed to the capacity for, and interest in, private financial investment in industrial ventures. Dissenters found themselves barred or discouraged from almost all public offices, as well as education at England's only two universities at the time (although dissenters were still free to study at Scotland's four universities). When the restoration of the monarchy took place and membership in the official Anglican Church became mandatory due to the Test Act, they thereupon became active in banking, manufacturing and education. The Unitarians, in particular, were very involved in education, by running Dissenting Academies, where, in contrast to the universities of Oxford and Cambridge and schools such as Eton and Harrow, much attention was given to mathematics and the sciences – areas of scholarship vital to the development of manufacturing technologies. Historians sometimes consider this social factor to be extremely important, along with the nature of the national economies involved. While members of these sects were excluded from certain circles of the government, they were considered fellow Protestants, to a limited extent, by many in the middle class, such as traditional financiers or other businessmen. Given this relative tolerance and the supply of capital, the natural outlet for the more enterprising members of these sects would be to seek new opportunities in the technologies created in the wake of the scientific revolution of the 17th century. == Criticisms == The industrial revolution has been criticised for causing ecosystem collapse, mental illness, pollution and detrimental social systems. It has also been criticised for valuing profits and corporate growth over life and wellbeing. Multiple movements have arisen which reject aspects of the industrial revolution, such as the Amish or primitivists. === Humanism and harsh conditions === Some humanists and individualists criticise the Industrial Revolution for mistreating women and children and turning men into work machines that lacked autonomy. Critics of the Industrial revolution promoted a more interventionist state and formed new organisations to promote human rights. === Primitivism === Primitivism argues that the Industrial Revolution has created an unnatural frame of society and the world in which humans need to adapt to an unnatural urban landscape in which humans are perpetual cogs without personal autonomy. Certain primitivists argue for a return to pre-industrial society, while others argue that technology such as modern medicine, and agriculture are all positive for humanity assuming they are controlled by and serve humanity and have no effect on the natural environment. === Pollution and ecological collapse === The Industrial Revolution has been criticised for leading to immense ecological and habitat destruction. It has led to immense decrease in the biodiversity of life on Earth. 
The Industrial revolution has been said to be inherently unsustainable and will lead to eventual collapse of society, mass hunger, starvation, and resource scarcity. === Opposition from Romanticism === During the Industrial Revolution, an intellectual and artistic hostility towards the new industrialisation developed, associated with the Romantic movement. Romanticism revered the traditionalism of rural life and recoiled against the upheavals caused by industrialisation, urbanisation and the wretchedness of the working classes. Its major exponents in English included the artist and poet William Blake and poets William Wordsworth, Samuel Taylor Coleridge, John Keats, Lord Byron and Percy Bysshe Shelley. The movement stressed the importance of "nature" in art and language, in contrast to "monstrous" machines and factories; the "Dark satanic mills" of Blake's poem "And did those feet in ancient time". Mary Shelley's Frankenstein reflected concerns that scientific progress might be two-edged. French Romanticism likewise was highly critical of industry. == See also == == Footnotes == == References == === Sources === Clark, Gregory (2007). A Farewell to Alms: A Brief Economic History of the World. Princeton University Press. ISBN 978-0-691-12135-2. Haber, Ludwig Fritz (1958). The Chemical Industry During the Nineteenth Century: A Study of the Economic Aspect of Applied Chemistry in Europe and North America. Hunter, Louis C.; Bryant, Lynwood (1991). A History of Industrial Power in the United States, 1730–1930, Vol. 3: The Transmission of Power. Cambridge, MA: MIT Press. ISBN 978-0-262-08198-6. Kindleberger, Charles Poor (1993). A Financial History of Western Europe. Oxford University Press US. ISBN 978-0-19-507738-4. McNeil, Ian, ed. (1990). An Encyclopedia of the History of Technology. London: Routledge. ISBN 978-0-415-14792-7. Timbs, John (1860). Stories of Inventors and Discoverers in Science and the Useful Arts: A Book for Old and Young. Harper & Brothers. == External links == Internet Modern History Sourcebook: Industrial Revolution Archived 20 September 2022 at the Wayback Machine BBC History Home Page: Industrial Revolution Archived 25 December 2019 at the Wayback Machine Factory Workers in the Industrial Revolution Archived 15 August 2009 at the Wayback Machine "The Day the World Took Off" Six-part video series from the University of Cambridge tracing the question "Why did the Industrial Revolution begin when and where it did." Archived 20 September 2022 at the Wayback Machine
Wikipedia/Industrial_Revolution
The Feynman Lectures on Physics is a physics textbook based on a great number of lectures by Richard Feynman, a Nobel laureate who has sometimes been called "The Great Explainer". The lectures were presented before undergraduate students at the California Institute of Technology (Caltech) during 1961–1964. The book's co-authors are Feynman, Robert B. Leighton, and Matthew Sands. A 2013 review in Nature described the book as having "simplicity, beauty, unity ... presented with enthusiasm and insight". == Description == The textbook comprises three volumes. The first volume focuses on mechanics, radiation, and heat, including relativistic effects. The second volume covers mainly electromagnetism and matter. The third volume covers quantum mechanics; for example, it shows how the double-slit experiment demonstrates the essential features of quantum mechanics. The book also includes chapters on the relationship between mathematics and physics, and the relationship of physics to other sciences. In 2013, Caltech, in cooperation with The Feynman Lectures Website, made the book freely available on the web site. == Background == By 1960, Richard Feynman’s research and discoveries in physics had resolved a number of troubling inconsistencies in several fundamental theories. In particular, it was his work in quantum electrodynamics for which he was awarded the 1965 Nobel Prize in Physics. At the same time that Feynman was at the pinnacle of his fame, the faculty of the California Institute of Technology was concerned about the quality of the introductory courses for undergraduate students. It was thought that the courses were burdened by an old-fashioned syllabus and that the exciting discoveries of recent years, many of which had occurred at Caltech, were not being taught to the students. Thus, it was decided to reconfigure the first physics course offered to students at Caltech, with the goal of generating more excitement in the students. Feynman readily agreed to give the course, though only once. Aware of the fact that this would be a historic event, Caltech recorded each lecture and took photographs of each drawing made on the blackboard by Feynman. Based on the lectures and the tape recordings, a team of physicists and graduate students put together a manuscript that would become The Feynman Lectures on Physics. Although Feynman's most valuable technical contribution to physics may have been in the field of quantum electrodynamics, the Feynman Lectures were destined to become his most widely-read work. The Feynman Lectures are considered to be one of the most sophisticated and comprehensive college-level introductions to physics. Feynman himself stated in his original preface that he was “pessimistic” with regard to his success in reaching all of his students. The Feynman lectures were written “to maintain the interest of very enthusiastic and rather smart students coming out of high schools and into Caltech”. Feynman was targeting the lectures to students who, “at the end of two years of our previous course, [were] very discouraged because there were really very few grand, new, modern ideas presented to them”. As a result, some physics students find the lectures more valuable after they have obtained a good grasp of physics by studying more traditional texts, and the books are sometimes seen as more helpful for teachers than for students. While the two-year course (1961–1963) was still underway, rumors of it spread throughout the physics research and teaching community.
In a special preface to the 1989 edition, David Goodstein and Gerry Neugebauer claimed that as time went on, the attendance of registered undergraduate students dropped sharply but was matched by a compensating increase in the number of faculty and graduate students. Co-author Matthew Sands, in his memoir accompanying the 2005 edition, contested this claim. Goodstein and Neugebauer also stated that “it was [Feynman’s] peers — scientists, physicists, and professors — who would be the main beneficiaries of his magnificent achievement, which was nothing less than to see physics through the fresh and dynamic perspective of Richard Feynman”, and that his "gift was that he was an extraordinary teacher of teachers". Addison-Wesley published a collection of exercises and problems to accompany The Feynman Lectures on Physics. The problem sets were first used in the 1962–1963 academic year, and were organized by Robert B. Leighton. Some of the problems are sophisticated and difficult enough to require an understanding of advanced topics, such as Kolmogorov's zero–one law. The original set of books and supplements contained a number of errors, some of which rendered problems insoluble. Various errata were issued, which are now available online. Addison-Wesley also released in CD format all the audio tapes of the lectures, over 103 hours with Richard Feynman, after remastering the sound and clearing the recordings. For the CD release, the order of the lectures was rearranged from that of the original texts. The publisher has released a table showing the correspondence between the books and the CDs. In March 1964, Feynman appeared once again before the freshman physics class as a lecturer, but the notes for this particular guest lecture were lost for a number of years. They were finally located, restored, and made available as Feynman's Lost Lecture: The Motion of Planets Around the Sun. In 2005, Michael A. Gottlieb and Ralph Leighton co-authored Feynman's Tips on Physics, which includes four of Feynman's freshman lectures which had not been included in the main text (three on problem solving, one on inertial guidance), a memoir by Matthew Sands about the origins of the Feynman Lectures on Physics, and exercises (with answers) that were assigned to students by Robert B. Leighton and Rochus Vogt in recitation sections of the Feynman Lectures course at Caltech. Also released in 2005 was a "Definitive Edition" of the lectures, which included corrections to the original text. An account of the history of these famous volumes is given by Sands in his memoir article “Capturing the Wisdom of Feynman", and in another article, "Memories of Feynman", by the physicist T. A. Welton. In a September 13, 2013 email to members of the Feynman Lectures online forum, Gottlieb announced the launch of a new website by Caltech and The Feynman Lectures Website, which offers "[A] free high-quality online edition" of the lecture text. To provide a device-independent reading experience, the website takes advantage of modern web technologies like HTML5, SVG, and MathJax to present text, figures, and equations at any size while maintaining display quality.
== Contents == === Volume I: Mainly mechanics, radiation, and heat === Preface: “When new ideas came in, I would try either to deduce them if they were deducible or to explain that it was a new idea … and which was not supposed to be provable.” Chapters === Volume II: Mainly electromagnetism and matter === Chapters === Volume III: Quantum mechanics === Chapters == Abbreviated editions == Six readily-accessible chapters were later compiled into a book entitled Six Easy Pieces: Essentials of Physics Explained by Its Most Brilliant Teacher. Six more chapters are in the book Six Not So Easy Pieces: Einstein's Relativity, Symmetry and Space-Time. “Six Easy Pieces grew out of the need to bring to as wide an audience as possible, a substantial yet nontechnical physics primer based on the science of Richard Feynman... General readers are fortunate that Feynman chose to present certain key topics in largely qualitative terms without formal mathematics…” === Six Easy Pieces (1994) === Chapters: Atoms in motion Basic Physics The relation of physics to other sciences Conservation of energy The theory of gravitation Quantum behavior === Six Not-So-Easy Pieces (1998) === Chapters: Vectors Symmetry in physical laws The special theory of relativity Relativistic energy and momentum Space-time Curved space === The Very Best of The Feynman Lectures (Audio, 2005) === Chapters: The Theory of Gravitation (Vol. I, Chapter 7) Curved Space (Vol. II, Chapter 42) Electromagnetism (Vol. II, Chapter 1) Probability (Vol. I, Chapter 6) The Relation of Wave and Particle Viewpoints (Vol. III, Chapter 2) Superconductivity (Vol. III, Chapter 21) == Publishing information == Feynman R, Leighton R, and Sands M. The Feynman Lectures on Physics. Three volumes 1964, 1966. Library of Congress Catalog Card No. 63-20717 ISBN 0-201-02115-3 (1970 paperback three-volume set) ISBN 0-201-50064-7 (1989 commemorative hardcover three-volume set) ISBN 0-8053-9045-6 (2006 the definitive edition, 2nd printing, hardcover) Feynman's Tips On Physics: A Problem-Solving Supplement to the Feynman Lectures on Physics (hardcover) ISBN 0-8053-9063-4 Six Easy Pieces (hardcover book with original Feynman audio on CDs) ISBN 0-201-40896-1 Six Easy Pieces (paperback book) ISBN 0-201-40825-2 Six Not-So-Easy Pieces (paperback book with original Feynman audio on CDs) ISBN 0-201-32841-0 Six Not-So-Easy Pieces (paperback book) ISBN 0-201-32842-9 Exercises for the Feynman Lectures (paperback book) ISBN 2-35648-789-1 (out of print) Feynman R, Leighton R, and Sands M., The Feynman Lectures Website, September 2013. "The Feynman Lectures on Physics, Volume I" (online edition) "The Feynman Lectures on Physics, Volume II" (online edition) "The Feynman Lectures on Physics, Volume III" (online edition) == See also == Berkeley Physics Course – another contemporaneously developed and influential college-level physics series The Character of Physical Law – a condensed series of Feynman lectures for scientists and non-scientists Project Tuva List of textbooks on classical and quantum mechanics List of textbooks on electromagnetism List of textbooks on thermodynamics and statistical mechanics == References == == External links == The Feynman Lectures on Physics California Institute of Technology (Caltech) – HTML edition. The Feynman Lectures on Physics The Feynman Lectures Website – HTML edition and also exercises and other related material.
Wikipedia/The_Feynman_Lectures_on_Physics
In particle physics, quantum electrodynamics (QED) is the relativistic quantum field theory of electrodynamics. In essence, it describes how light and matter interact and is the first theory where full agreement between quantum mechanics and special relativity is achieved. QED mathematically describes all phenomena involving electrically charged particles interacting by means of exchange of photons and represents the quantum counterpart of classical electromagnetism giving a complete account of matter and light interaction. In technical terms, QED can be described as a perturbation theory of the electromagnetic quantum vacuum. Richard Feynman called it "the jewel of physics" for its extremely accurate predictions of quantities like the anomalous magnetic moment of the electron and the Lamb shift of the energy levels of hydrogen.: Ch1  It is the most precise and stringently tested theory in physics. == History == The first formulation of a quantum theory describing radiation and matter interaction is attributed to Paul Dirac, who during the 1920s computed the coefficient of spontaneous emission of an atom. He is credited with coining the term "quantum electrodynamics". Dirac described the quantization of the electromagnetic field as an ensemble of harmonic oscillators with the introduction of the concept of creation and annihilation operators of particles. In the following years, with contributions from Wolfgang Pauli, Eugene Wigner, Pascual Jordan, Werner Heisenberg and Enrico Fermi, physicists came to believe that, in principle, it was possible to perform any computation for any physical process involving photons and charged particles. However, further studies by Felix Bloch with Arnold Nordsieck, and Victor Weisskopf, in 1937 and 1939, revealed that such computations were reliable only at a first order of perturbation theory, a problem already pointed out by Robert Oppenheimer. At higher orders in the series infinities emerged, making such computations meaningless and casting doubt on the theory's internal consistency. This suggested that special relativity and quantum mechanics were fundamentally incompatible. Difficulties increased through the end of the 1940s. Improvements in microwave technology made it possible to take more precise measurements of the shift of the levels of a hydrogen atom, later known as the Lamb shift and magnetic moment of the electron. These experiments exposed discrepancies that the theory was unable to explain. A first indication of a possible solution was given by Bethe in 1947. He made the first non-relativistic computation of the shift of the lines of the hydrogen atom as measured by Lamb and Retherford. Despite limitations of the computation, agreement was excellent. The idea was simply to attach infinities to corrections of mass and charge that were actually fixed to a finite value by experiments. In this way, the infinities get absorbed in those constants and yield a finite result with good experimental agreement. This procedure was named renormalization. Based on Bethe's intuition and fundamental papers on the subject by Shin'ichirō Tomonaga, Julian Schwinger, Richard Feynman and Freeman Dyson, it was finally possible to produce fully covariant formulations that were finite at any order in a perturbation series of quantum electrodynamics. Tomonaga, Schwinger, and Feynman were jointly awarded the 1965 Nobel Prize in Physics for their work in this area. 
Their contributions, and Dyson's, were about covariant and gauge-invariant formulations of quantum electrodynamics that allow computations of observables at any order of perturbation theory. Feynman's mathematical technique, based on his diagrams, initially seemed unlike the field-theoretic, operator-based approach of Schwinger and Tomonaga, but Dyson later showed that the two approaches were equivalent. Renormalization, the need to attach a physical meaning at certain divergences appearing in the theory through integrals, became one of the fundamental aspects of quantum field theory and is seen as a criterion for a theory's general acceptability. Even though renormalization works well in practice, Feynman was never entirely comfortable with its mathematical validity, referring to renormalization as a "shell game" and "hocus pocus".: 128  Neither Feynman nor Dirac were happy with that way to approach the observations made in theoretical physics, above all in quantum mechanics. QED is the model and template for all subsequent quantum field theories. One such subsequent theory is quantum chromodynamics, which began in the early 1960s and attained its present form in the 1970s, developed by H. David Politzer, Sidney Coleman, David Gross and Frank Wilczek. Building on Schwinger's pioneering work, Gerald Guralnik, Dick Hagen, and Tom Kibble, Peter Higgs, Jeffrey Goldstone, and others, Sheldon Glashow, Steven Weinberg and Abdus Salam independently showed how the weak nuclear force and quantum electrodynamics could be merged into a single electroweak force. == Feynman's view of quantum electrodynamics == === Introduction === Near the end of his life, Richard Feynman gave a series of lectures on QED intended for the lay public. These lectures were transcribed and published as Feynman (1985), QED: The Strange Theory of Light and Matter, a classic non-mathematical exposition of QED from the point of view articulated below. The key components of Feynman's presentation of QED are three basic actions.: 85  A photon goes from one place and time to another place and time. An electron goes from one place and time to another place and time. An electron emits or absorbs a photon at a certain place and time. These actions are represented in the form of visual shorthand by the three basic elements of diagrams: a wavy line for the photon, a straight line for the electron and a junction of two straight lines and a wavy one for a vertex representing emission or absorption of a photon by an electron. These can all be seen in the adjacent diagram. As well as the visual shorthand for the actions, Feynman introduces another kind of shorthand for the numerical quantities called probability amplitudes. The probability is the square of the absolute value of total probability amplitude, probability = | f ( amplitude ) | 2 {\displaystyle {\text{probability}}=|f({\text{amplitude}})|^{2}} . If a photon moves from one place and time A {\displaystyle A} to another place and time B {\displaystyle B} , the associated quantity is written in Feynman's shorthand as P ( A to B ) {\displaystyle P(A{\text{ to }}B)} , and it depends on only the momentum and polarization of the photon. The similar quantity for an electron moving from C {\displaystyle C} to D {\displaystyle D} is written E ( C to D ) {\displaystyle E(C{\text{ to }}D)} . 
It depends on the momentum and polarization of the electron, in addition to a constant Feynman calls n, sometimes called the "bare" mass of the electron: it is related to, but not the same as, the measured electron mass. Finally, the quantity that tells us about the probability amplitude for an electron to emit or absorb a photon Feynman calls j; it is sometimes called the "bare" charge of the electron: it is a constant, and is related to, but not the same as, the measured electron charge e.: 91  QED is based on the assumption that complex interactions of many electrons and photons can be represented by fitting together a suitable collection of the above three building blocks and then using the probability amplitudes to calculate the probability of any such complex interaction. It turns out that the basic idea of QED can be communicated while assuming that the square of the total of the probability amplitudes mentioned above (P(A to B), E(C to D) and j) acts just like our everyday probability (a simplification made in Feynman's book). Later on, this will be corrected to include specifically quantum-style mathematics, following Feynman. The basic rules of probability amplitudes that will be used are:: 93  (a) if an event can happen in a number of indistinguishable alternative ways, the probability amplitude of the event is the sum of the probability amplitudes of the alternatives; and (b) if a process can be broken into a succession of independent sub-processes, its probability amplitude is the product of the amplitudes of the sub-processes. The indistinguishability criterion in (a) is very important: it means that there is no observable feature present in the given system that in any way "reveals" which alternative is taken. In such a case, one cannot observe which alternative actually takes place without changing the experimental setup in some way (e.g. by introducing a new apparatus into the system). Whenever one is able to observe which alternative takes place, one always finds that the probability of the event is the sum of the probabilities of the alternatives. Indeed, if this were not the case, the very term "alternatives" to describe these processes would be inappropriate. What (a) says is that once the physical means for observing which alternative occurred is removed, one cannot still say that the event is occurring through "exactly one of the alternatives" in the sense of adding probabilities; one must add the amplitudes instead.: 82  Similarly, the independence criterion in (b) is very important: it only applies to processes which are not "entangled".
The probability of this complex process can again be calculated by knowing the probability amplitudes of each of the individual actions: three electron actions, two photon actions and two vertexes – one emission and one absorption. We would expect to find the total probability amplitude by multiplying the probability amplitudes of each of the actions, for any chosen positions of E and F. We then, using rule a) above, have to add up all these probability amplitudes for all the alternatives for E and F. (This is not elementary in practice and involves integration.) But there is another possibility, which is that the electron first moves to G, where it emits a photon, which goes on to D, while the electron moves on to H, where it absorbs the first photon, before moving on to C. Again, we can calculate the probability amplitude of these possibilities (for all points G and H). We then have a better estimation for the total probability amplitude by adding the probability amplitudes of these two possibilities to our original simple estimate. Incidentally, the name given to this process of a photon interacting with an electron in this way is Compton scattering. An infinite number of other intermediate "virtual" processes exist in which photons are absorbed or emitted. For each of these processes, a Feynman diagram could be drawn describing it. This implies a complex computation for the resulting probability amplitudes, but provided it is the case that the more complicated the diagram, the less it contributes to the result, it is only a matter of time and effort to find as accurate an answer as one wants to the original question. This is the basic approach of QED. To calculate the probability of any interactive process between electrons and photons, it is a matter of first noting, with Feynman diagrams, all the possible ways in which the process can be constructed from the three basic elements. Each diagram involves some calculation involving definite rules to find the associated probability amplitude. That basic scaffolding remains when one moves to a quantum description, but some conceptual changes are needed. One is that whereas we might expect in our everyday life that there would be some constraints on the points to which a particle can move, that is not true in full quantum electrodynamics. There is a nonzero probability amplitude of an electron at A, or a photon at B, moving as a basic action to any other place and time in the universe. That includes places that could only be reached at speeds greater than that of light and also earlier times. (An electron moving backwards in time can be viewed as a positron moving forward in time.): 89, 98–99  === Probability amplitudes === Quantum mechanics introduces an important change in the way probabilities are computed. Probabilities are still represented by the usual real numbers we use for probabilities in our everyday world, but probabilities are computed as the square modulus of probability amplitudes, which are complex numbers. Feynman avoids exposing the reader to the mathematics of complex numbers by using a simple but accurate representation of them as arrows on a piece of paper or screen. (These must not be confused with the arrows of Feynman diagrams, which are simplified representations in two dimensions of a relationship between points in three dimensions of space and one of time.) The amplitude arrows are fundamental to the description of the world given by quantum theory. 
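Concretely, each amplitude arrow can be modelled as a complex number, and the bookkeeping of rules (a) and (b) above is ordinary complex arithmetic. The following minimal Python sketch uses invented amplitude values, purely for illustration; they are not taken from any real QED calculation.

import cmath

# Invented amplitudes, for illustration only (not values from a real QED calculation).
E_A_to_C = 0.8 * cmath.exp(1j * 0.3)    # amplitude for the electron to go from A to C
P_B_to_D = 0.7 * cmath.exp(-1j * 1.1)   # amplitude for the photon to go from B to D

# Rule (b): independent sub-processes happening together -> multiply their amplitudes.
simple_alternative = E_A_to_C * P_B_to_D

# A stand-in for the more complicated alternative (absorption at E, re-emission at F),
# which in a real calculation would come from integrating over all points E and F.
exchange_alternative = 0.25 * cmath.exp(1j * 2.0)

# Rule (a): indistinguishable alternatives -> add their amplitudes.
total_amplitude = simple_alternative + exchange_alternative

# The probability of the event is the squared length of the total amplitude arrow.
probability = abs(total_amplitude) ** 2
print(probability)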
They are related to our everyday ideas of probability by the simple rule that the probability of an event is the square of the length of the corresponding amplitude arrow. So, for a given process, if two probability amplitudes, v and w, are involved, the probability of the process will be given either by P = | v + w | 2 {\displaystyle P=|\mathbf {v} +\mathbf {w} |^{2}} or P = | v w | 2 . {\displaystyle P=|\mathbf {v} \,\mathbf {w} |^{2}.} The rules for adding or multiplying, however, are the same as above. But where you would expect to add or multiply probabilities, instead you add or multiply probability amplitudes that now are complex numbers. Addition and multiplication are common operations in the theory of complex numbers and are given in the figures. The sum is found as follows. Let the start of the second arrow be at the end of the first. The sum is then a third arrow that goes directly from the beginning of the first to the end of the second. The product of two arrows is an arrow whose length is the product of the two lengths. The direction of the product is found by adding the angles that each of the two has been turned through relative to a reference direction: that gives the angle that the product is turned relative to the reference direction. That change, from probabilities to probability amplitudes, complicates the mathematics without changing the basic approach. But that change is still not quite enough because it fails to take into account the fact that both photons and electrons can be polarized, which is to say that their orientations in space and time have to be taken into account. Therefore, P(A to B) consists of 16 complex numbers, or probability amplitude arrows.: 120–121  There are also some minor changes to do with the quantity j, which may have to be rotated by a multiple of 90° for some polarizations, which is only of interest for the detailed bookkeeping. Associated with the fact that the electron can be polarized is another small necessary detail, which is connected with the fact that an electron is a fermion and obeys Fermi–Dirac statistics. The basic rule is that if we have the probability amplitude for a given complex process involving more than one electron, then when we include (as we always must) the complementary Feynman diagram in which we exchange two electron events, the resulting amplitude is the reverse – the negative – of the first. The simplest case would be two electrons starting at A and B and ending at C and D. The amplitude would be calculated as the "difference", E(A to D) × E(B to C) − E(A to C) × E(B to D), where we would expect, from our everyday idea of probabilities, that it would be a sum.: 112–113  === Propagators === Finally, one has to compute P(A to B) and E(C to D) corresponding to the probability amplitudes for the photon and the electron respectively. These are essentially the solutions of the Dirac equation, which describes the behavior of the electron's probability amplitude, and of Maxwell's equations, which describe the behavior of the photon's probability amplitude. These are called Feynman propagators.
The translation to a notation commonly used in the standard literature is as follows: P ( A to B ) → D F ( x B − x A ) , E ( C to D ) → S F ( x D − x C ) , {\displaystyle P(A{\text{ to }}B)\to D_{F}(x_{B}-x_{A}),\quad E(C{\text{ to }}D)\to S_{F}(x_{D}-x_{C}),} where a shorthand symbol such as x A {\displaystyle x_{A}} stands for the four real numbers that give the time and position in three dimensions of the point labeled A. === Mass renormalization === A problem arose historically which held up progress for twenty years: although we start with the assumption of three basic "simple" actions, the rules of the game say that if we want to calculate the probability amplitude for an electron to get from A to B, we must take into account all the possible ways: all possible Feynman diagrams with those endpoints. Thus there will be a way in which the electron travels to C, emits a photon there and then absorbs it again at D before moving on to B. Or it could do this kind of thing twice, or more. In short, we have a fractal-like situation in which if we look closely at a line, it breaks up into a collection of "simple" lines, each of which, if looked at closely, are in turn composed of "simple" lines, and so on ad infinitum. This is a challenging situation to handle. If adding that detail only altered things slightly, then it would not have been too bad, but disaster struck when it was found that the simple correction mentioned above led to infinite probability amplitudes. In time this problem was "fixed" by the technique of renormalization. However, Feynman himself remained unhappy about it, calling it a "dippy process",: 128  and Dirac also criticized this procedure, saying "in mathematics one does not get rid of infinities when it does not please you". === Conclusions === Within the above framework physicists were then able to calculate to a high degree of accuracy some of the properties of electrons, such as the anomalous magnetic dipole moment. However, as Feynman points out, it fails to explain why particles such as the electron have the masses they do. "There is no theory that adequately explains these numbers. We use the numbers in all our theories, but we don't understand them – what they are, or where they come from. I believe that from a fundamental point of view, this is a very interesting and serious problem.": 152  == Mathematical formulation == === QED action === Mathematically, QED is an abelian gauge theory with the symmetry group U(1), defined on Minkowski space (flat spacetime). The gauge field, which mediates the interaction between the charged spin-1/2 fields, is the electromagnetic field. The QED Lagrangian for a spin-1/2 field interacting with the electromagnetic field in natural units gives rise to the action: 78  where γ μ {\displaystyle \gamma ^{\mu }} are Dirac matrices. ψ {\displaystyle \psi } a bispinor field of spin-1/2 particles (e.g. electron–positron field). ψ ¯ ≡ ψ † γ 0 {\displaystyle {\bar {\psi }}\equiv \psi ^{\dagger }\gamma ^{0}} , called "psi-bar", is sometimes referred to as the Dirac adjoint. D μ ≡ ∂ μ + i e A μ + i e B μ {\displaystyle D_{\mu }\equiv \partial _{\mu }+ieA_{\mu }+ieB_{\mu }} is the gauge covariant derivative. e is the coupling constant, equal to the electric charge of the bispinor field. A μ {\displaystyle A_{\mu }} is the covariant four-potential of the electromagnetic field generated by the electron itself. It is also known as a gauge field or a U ( 1 ) {\displaystyle {\text{U}}(1)} connection. 
B μ {\displaystyle B_{\mu }} is the external field imposed by external source. m is the mass of the electron or positron. F μ ν = ∂ μ A ν − ∂ ν A μ {\displaystyle F_{\mu \nu }=\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }} is the electromagnetic field tensor. This is also known as the curvature of the gauge field. Expanding the covariant derivative reveals a second useful form of the Lagrangian (external field B μ {\displaystyle B_{\mu }} set to zero for simplicity) L = − 1 4 F μ ν F μ ν + ψ ¯ ( i γ μ ∂ μ − m ) ψ − e j μ A μ {\displaystyle {\mathcal {L}}=-{\frac {1}{4}}F_{\mu \nu }F^{\mu \nu }+{\bar {\psi }}(i\gamma ^{\mu }\partial _{\mu }-m)\psi -ej^{\mu }A_{\mu }} where j μ {\displaystyle j^{\mu }} is the conserved U ( 1 ) {\displaystyle {\text{U}}(1)} current arising from Noether's theorem. It is written j μ = ψ ¯ γ μ ψ . {\displaystyle j^{\mu }={\bar {\psi }}\gamma ^{\mu }\psi .} === Equations of motion === Expanding the covariant derivative in the Lagrangian gives L = − 1 4 F μ ν F μ ν + i ψ ¯ γ μ ∂ μ ψ − e ψ ¯ γ μ A μ ψ − m ψ ¯ ψ {\displaystyle {\mathcal {L}}=-{\frac {1}{4}}F_{\mu \nu }F^{\mu \nu }+i{\bar {\psi }}\gamma ^{\mu }\partial _{\mu }\psi -e{\bar {\psi }}\gamma ^{\mu }A_{\mu }\psi -m{\bar {\psi }}\psi } = − 1 4 F μ ν F μ ν + i ψ ¯ γ μ ∂ μ ψ − m ψ ¯ ψ − e j μ A μ . {\displaystyle =-{\frac {1}{4}}F_{\mu \nu }F^{\mu \nu }+i{\bar {\psi }}\gamma ^{\mu }\partial _{\mu }\psi -m{\bar {\psi }}\psi -ej^{\mu }A_{\mu }.} For simplicity, B μ {\displaystyle B_{\mu }} has been set to zero, with no loss of generality. Alternatively, we can absorb B μ {\displaystyle B_{\mu }} into a new gauge field A μ ′ = A μ + B μ {\displaystyle A'_{\mu }=A_{\mu }+B_{\mu }} and relabel the new field as A μ . {\displaystyle A_{\mu }.} From this Lagrangian, the equations of motion for the ψ {\displaystyle \psi } and A μ {\displaystyle A_{\mu }} fields can be obtained. ==== Equation of motion for ψ ==== These arise most straightforwardly by considering the Euler-Lagrange equation for ψ ¯ {\displaystyle {\bar {\psi }}} . Since the Lagrangian contains no ∂ μ ψ ¯ {\displaystyle \partial _{\mu }{\bar {\psi }}} terms, we immediately get ∂ L ∂ ( ∂ μ ψ ¯ ) = 0 {\displaystyle {\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }{\bar {\psi }})}}=0} so the equation of motion can be written ( i γ μ ∂ μ − m ) ψ = e γ μ A μ ψ . {\displaystyle (i\gamma ^{\mu }\partial _{\mu }-m)\psi =e\gamma ^{\mu }A_{\mu }\psi .} ==== Equation of motion for Aμ ==== Using the Euler–Lagrange equation for the A μ {\displaystyle A_{\mu }} field, the derivatives this time are ∂ ν ( ∂ L ∂ ( ∂ ν A μ ) ) = ∂ ν ( ∂ μ A ν − ∂ ν A μ ) , {\displaystyle \partial _{\nu }\left({\frac {\partial {\mathcal {L}}}{\partial (\partial _{\nu }A_{\mu })}}\right)=\partial _{\nu }\left(\partial ^{\mu }A^{\nu }-\partial ^{\nu }A^{\mu }\right),} ∂ L ∂ A μ = − e ψ ¯ γ μ ψ . {\displaystyle {\frac {\partial {\mathcal {L}}}{\partial A_{\mu }}}=-e{\bar {\psi }}\gamma ^{\mu }\psi .} Substituting back into (3) leads to ∂ μ F μ ν = e ψ ¯ γ ν ψ {\displaystyle \partial _{\mu }F^{\mu \nu }=e{\bar {\psi }}\gamma ^{\nu }\psi } which can be written in terms of the U ( 1 ) {\displaystyle {\text{U}}(1)} current j μ {\displaystyle j^{\mu }} as Now, if we impose the Lorenz gauge condition ∂ μ A μ = 0 , {\displaystyle \partial _{\mu }A^{\mu }=0,} the equations reduce to ◻ A μ = e j μ , {\displaystyle \Box A^{\mu }=ej^{\mu },} which is a wave equation for the four-potential, the QED version of the classical Maxwell equations in the Lorenz gauge. 
(The square represents the wave operator, ◻ = ∂ μ ∂ μ {\displaystyle \Box =\partial _{\mu }\partial ^{\mu }} .) === Interaction picture === This theory can be straightforwardly quantized by treating bosonic and fermionic sectors as free. This permits us to build a set of asymptotic states that can be used to start computation of the probability amplitudes for different processes. In order to do so, we have to compute an evolution operator, which for a given initial state | i ⟩ {\displaystyle |i\rangle } will give a final state ⟨ f | {\displaystyle \langle f|} in such a way to have: 5  M f i = ⟨ f | U | i ⟩ . {\displaystyle M_{fi}=\langle f|U|i\rangle .} This technique is also known as the S-matrix. The evolution operator is obtained in the interaction picture, where time evolution is given by the interaction Hamiltonian, which is the integral over space of the second term in the Lagrangian density given above:: 123  V = e ∫ d 3 x ψ ¯ γ μ ψ A μ , {\displaystyle V=e\int d^{3}x\,{\bar {\psi }}\gamma ^{\mu }\psi A_{\mu },} Which can also be written in terms of an integral over the interaction Hamiltonian density H I = e ψ ¯ γ μ ψ A μ {\displaystyle {\mathcal {H}}_{I}=e{\overline {\psi }}\gamma ^{\mu }\psi A_{\mu }} . Thus, one has: 86  U = T exp ⁡ [ − i ℏ ∫ t 0 t d t ′ V ( t ′ ) ] , {\displaystyle U=T\exp \left[-{\frac {i}{\hbar }}\int _{t_{0}}^{t}dt'\,V(t')\right],} where T is the time-ordering operator. This evolution operator only has meaning as a series, and what we get here is a perturbation series with the fine-structure constant as the development parameter. This series expansion of the probability amplitude M f i {\displaystyle M_{fi}} is called the Dyson series, and is given by: M f i = ⟨ f | U | i ⟩ = ⟨ f | ∑ n = 0 ∞ ( − i ) n n ! ∫ d 4 x 1 ⋯ ∫ d 4 x n T { H ( x 1 ) ⋯ H ( x n ) } | i ⟩ {\displaystyle M_{fi}=\langle f|U|i\rangle =\left\langle f\left|\sum _{n=0}^{\infty }{\frac {(-i)^{n}}{n!}}\int d^{4}x_{1}\cdots \int d^{4}x_{n}T{\bigg \{}{\mathcal {H}}(x_{1})\cdots {\mathcal {H}}(x_{n}){\bigg \}}\right|i\right\rangle } === Feynman diagrams === Despite the conceptual clarity of the Feynman approach to QED, almost no early textbooks follow him in their presentation. When performing calculations, it is much easier to work with the Fourier transforms of the propagators. Experimental tests of quantum electrodynamics are typically scattering experiments. In scattering theory, particles' momenta rather than their positions are considered, and it is convenient to think of particles as being created or annihilated when they interact. Feynman diagrams then look the same, but the lines have different interpretations. The electron line represents an electron with a given energy and momentum, with a similar interpretation of the photon line. A vertex diagram represents the annihilation of one electron and the creation of another together with the absorption or creation of a photon, each having specified energies and momenta. Using Wick's theorem on the terms of the Dyson series, all the terms of the S-matrix for quantum electrodynamics can be computed through the technique of Feynman diagrams. In this case, rules for drawing are the following: 801–802  To these rules we must add a further one for closed loops that implies an integration on momenta ∫ d 4 p / ( 2 π ) 4 {\textstyle \int d^{4}p/(2\pi )^{4}} , since these internal ("virtual") particles are not constrained to any specific energy–momentum, even that usually required by special relativity (see Propagator for details). 
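Each vertex in a diagram carries a factor of the electron charge, so the perturbation series is effectively an expansion in the fine-structure constant, and each additional pair of vertices suppresses a diagram's contribution to a cross section by roughly a further factor of alpha. A short Python sketch (using standard SI values for the constants) of why successive orders are so strongly suppressed numerically:

import math

e = 1.602176634e-19       # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
hbar = 1.054571817e-34    # reduced Planck constant, J*s
c = 299792458.0           # speed of light, m/s

# Fine-structure constant, the natural expansion parameter of the QED perturbation series.
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(1 / alpha)          # approximately 137.036

# Relative size of successive orders in the expansion.
for n in range(1, 5):
    print(n, alpha**n)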
The signature of the metric η μ ν {\displaystyle \eta _{\mu \nu }} is d i a g ( + − − − ) {\displaystyle {\rm {diag}}(+---)} . From them, computations of probability amplitudes are straightforwardly given. An example is Compton scattering, with an electron and a photon undergoing elastic scattering. Feynman diagrams are in this case: 158–159  and so we are able to get the corresponding amplitude at the first order of a perturbation series for the S-matrix: M f i = ( i e ) 2 u ¯ ( p → ′ , s ′ ) ϵ / ′ ( k → ′ , λ ′ ) ∗ p / + k / + m e ( p + k ) 2 − m e 2 ϵ / ( k → , λ ) u ( p → , s ) + ( i e ) 2 u ¯ ( p → ′ , s ′ ) ϵ / ( k → , λ ) p / − k / ′ + m e ( p − k ′ ) 2 − m e 2 ϵ / ′ ( k → ′ , λ ′ ) ∗ u ( p → , s ) , {\displaystyle M_{fi}=(ie)^{2}{\overline {u}}({\vec {p}}',s')\epsilon \!\!\!/\,'({\vec {k}}',\lambda ')^{*}{\frac {p\!\!\!/+k\!\!\!/+m_{e}}{(p+k)^{2}-m_{e}^{2}}}\epsilon \!\!\!/({\vec {k}},\lambda )u({\vec {p}},s)+(ie)^{2}{\overline {u}}({\vec {p}}',s')\epsilon \!\!\!/({\vec {k}},\lambda ){\frac {p\!\!\!/-k\!\!\!/'+m_{e}}{(p-k')^{2}-m_{e}^{2}}}\epsilon \!\!\!/\,'({\vec {k}}',\lambda ')^{*}u({\vec {p}},s),} from which we can compute the cross section for this scattering. === Nonperturbative phenomena === The predictive success of quantum electrodynamics largely rests on the use of perturbation theory, expressed in Feynman diagrams. However, quantum electrodynamics also leads to predictions beyond perturbation theory. In the presence of very strong electric fields, it predicts that electrons and positrons will be spontaneously produced, so causing the decay of the field. This process, called the Schwinger effect, cannot be understood in terms of any finite number of Feynman diagrams and hence is described as nonperturbative. Mathematically, it can be derived by a semiclassical approximation to the path integral of quantum electrodynamics. == Renormalizability == Higher-order terms can be straightforwardly computed for the evolution operator, but these terms display diagrams containing the following simpler ones: ch 10  that, being closed loops, imply the presence of diverging integrals having no mathematical meaning. To overcome this difficulty, a technique called renormalization has been devised, producing finite results in very close agreement with experiments. A criterion for the theory being meaningful after renormalization is that the number of diverging diagrams is finite. In this case, the theory is said to be "renormalizable". The reason for this is that to get observables renormalized, one needs a finite number of constants to maintain the predictive value of the theory untouched. This is exactly the case of quantum electrodynamics displaying just three diverging diagrams. This procedure gives observables in very close agreement with experiment as seen e.g. for electron gyromagnetic ratio. Renormalizability has become an essential criterion for a quantum field theory to be considered as a viable one. All the theories describing fundamental interactions, except gravitation, whose quantum counterpart is only conjectural and presently under very active research, are renormalizable theories. == Nonconvergence of series == An argument by Freeman Dyson shows that the radius of convergence of the perturbation series in QED is zero. The basic argument goes as follows: if the coupling constant were negative, this would be equivalent to the Coulomb force constant being negative. 
This would "reverse" the electromagnetic interaction so that like charges would attract and unlike charges would repel. This would render the vacuum unstable against decay into a cluster of electrons on one side of the universe and a cluster of positrons on the other side of the universe. Because the theory is "sick" for any negative value of the coupling constant, the series does not converge but is at best an asymptotic series. From a modern perspective, we say that QED is not well defined as a quantum field theory to arbitrarily high energy. The coupling constant runs to infinity at finite energy, signalling a Landau pole. The problem is essentially that QED appears to suffer from quantum triviality issues. This is one of the motivations for embedding QED within a Grand Unified Theory. == Electrodynamics in curved spacetime == This theory can be extended, at least as a classical field theory, to curved spacetime. This arises similarly to the flat spacetime case, from coupling a free electromagnetic theory to a free fermion theory and including an interaction which promotes the partial derivative in the fermion theory to a gauge-covariant derivative. == See also == == References == == Further reading == === Books === Berestetskii, V. B.; Lifshitz, E. M.; Pitaevskii, L. P. (1982). Course of Theoretical Physics, Volume 4: Quantum Electrodynamics (2 ed.). Elsevier. ISBN 978-0-7506-3371-0. De Broglie, L. (1925). Recherches sur la theorie des quanta [Research on quantum theory]. France: Wiley-Interscience. Feynman, R. P. (1998). Quantum Electrodynamics (New ed.). Westview Press. ISBN 978-0-201-36075-2. Greiner, W.; Bromley, D. A.; Müller, B. (2000). Gauge Theory of Weak Interactions. Springer. ISBN 978-3-540-67672-0. Jauch, J. M.; Rohrlich, F. (1980). The Theory of Photons and Electrons. Springer-Verlag. ISBN 978-0-387-07295-1. Kane, G. L. (1993). Modern Elementary Particle Physics. Westview Press. ISBN 978-0-201-62460-1. Miller, A. I. (1995). Early Quantum Electrodynamics: A Sourcebook. Cambridge University Press. ISBN 978-0-521-56891-3. Milonni, P. W. (1994). The Quantum Vacuum: An Introduction to Quantum Electrodynamics. Boston: Academic Press. ISBN 0124980805. LCCN 93029780. OCLC 422797902. Schweber, S. S. (1994). QED and the Men Who Made It. Princeton University Press. ISBN 978-0-691-03327-3. Schwinger, J. (1958). Selected Papers on Quantum Electrodynamics. Dover Publications. ISBN 978-0-486-60444-2. {{cite book}}: ISBN / Date incompatibility (help) Tannoudji-Cohen, C.; Dupont-Roc, Jacques; Grynberg, Gilbert (1997). Photons and Atoms: Introduction to Quantum Electrodynamics. Wiley-Interscience. ISBN 978-0-471-18433-1. === Journals === Dudley, J.M.; Kwan, A.M. (1996). "Richard Feynman's popular lectures on quantum electrodynamics: The 1979 Robb Lectures at Auckland University". American Journal of Physics. 64 (6): 694–98. Bibcode:1996AmJPh..64..694D. doi:10.1119/1.18234. == External links == Feynman's Nobel Prize lecture describing the evolution of QED and his role in it Feynman's New Zealand lectures on QED for non-physicists The Strange Theory of Light | Animation of Feynman pictures light by QED – Animations demonstrating QED
Wikipedia/Quantum_electrodynamics
Physics is the scientific study of matter, its fundamental constituents, its motion and behavior through space and time, and the related entities of energy and force. It is one of the most fundamental scientific disciplines. A scientist who specializes in the field of physics is called a physicist. Physics is one of the oldest academic disciplines. Over much of the past two millennia, physics, chemistry, biology, and certain branches of mathematics were a part of natural philosophy, but during the Scientific Revolution in the 17th century, these natural sciences branched into separate research endeavors. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the fundamental mechanisms studied by other sciences and suggest new avenues of research in these and other academic disciplines such as mathematics and philosophy. Advances in physics often enable new technologies. For example, advances in the understanding of electromagnetism, solid-state physics, and nuclear physics led directly to the development of technologies that have transformed modern society, such as television, computers, domestic appliances, and nuclear weapons; advances in thermodynamics led to the development of industrialization; and advances in mechanics inspired the development of calculus. == History == The word physics comes from the Latin physica ('study of nature'), which itself is a borrowing of the Greek φυσική (phusikḗ 'natural science'), a term derived from φύσις (phúsis 'origin, nature, property'). === Ancient astronomy === Astronomy is one of the oldest natural sciences. Early civilizations dating before 3000 BCE, such as the Sumerians, ancient Egyptians, and the Indus Valley Civilisation, had a predictive knowledge and a basic awareness of the motions of the Sun, Moon, and stars. The stars and planets, believed to represent gods, were often worshipped. While the explanations for the observed positions of the stars were often unscientific and lacking in evidence, these early observations laid the foundation for later astronomy, as the stars were found to traverse great circles across the sky, which could not explain the positions of the planets. According to Asger Aaboe, the origins of Western astronomy can be found in Mesopotamia, and all Western efforts in the exact sciences are descended from late Babylonian astronomy. Egyptian astronomers left monuments showing knowledge of the constellations and the motions of the celestial bodies, while Greek poet Homer wrote of various celestial objects in his Iliad and Odyssey; later Greek astronomers provided names, which are still used today, for most constellations visible from the Northern Hemisphere. === Natural philosophy === Natural philosophy has its origins in Greece during the Archaic period (650 BCE – 480 BCE), when pre-Socratic philosophers like Thales rejected non-naturalistic explanations for natural phenomena and proclaimed that every event had a natural cause. They proposed ideas verified by reason and observation, and many of their hypotheses proved successful in experiment; for example, atomism was found to be correct approximately 2000 years after it was proposed by Leucippus and his pupil Democritus. === Aristotle and Hellenistic physics === During the classical period in Greece (6th, 5th and 4th centuries BCE) and in Hellenistic times, natural philosophy developed along many lines of inquiry. 
Aristotle (Greek: Ἀριστοτέλης, Aristotélēs) (384–322 BCE), a student of Plato, wrote on many subjects, including a substantial treatise, the Physics, in the 4th century BCE. Aristotelian physics was influential for about two millennia. His approach mixed some limited observation with logical deductive arguments, but did not rely on experimental verification of deduced statements. Aristotle's foundational work in Physics, though very imperfect, formed a framework against which later thinkers further developed the field. His approach is entirely superseded today. He explained ideas such as motion (and gravity) with the theory of four elements. Aristotle believed that each of the four classical elements (air, fire, water, earth) had its own natural place. Because of their differing densities, each element will revert to its own specific place in the atmosphere. So, because of their weights, fire would be at the top, air underneath fire, then water, then lastly earth. He also stated that when a small amount of one element enters the natural place of another, the less abundant element will automatically go towards its own natural place. For example, if there is a fire on the ground, the flames go up into the air in an attempt to return to their natural place. His laws of motion included: that heavier objects fall faster, with speed proportional to weight; and that the speed of a falling object depends inversely on the density of the medium it is falling through (e.g. the density of air). He also stated that, in violent motion (motion of an object when a force is applied to it by a second object), the speed of the object is proportional to the force applied to it. The problem of motion and its causes was studied carefully, leading to the philosophical notion of a "prime mover" as the ultimate source of all motion in the world (Book 8 of his treatise Physics). === Medieval European and Islamic === The Western Roman Empire fell to invaders and internal decay in the fifth century, resulting in a decline in intellectual pursuits in western Europe. By contrast, the Eastern Roman Empire (usually known as the Byzantine Empire) resisted the attacks from invaders and continued to advance various fields of learning, including physics. In the sixth century, John Philoponus challenged the dominant Aristotelian approach to science, although much of his work was focused on Christian theology. In the sixth century, Isidore of Miletus created an important compilation of Archimedes' works that are copied in the Archimedes Palimpsest. Islamic scholarship inherited Aristotelian physics from the Greeks and during the Islamic Golden Age developed it further, especially placing emphasis on observation and a priori reasoning, developing early forms of the scientific method. The most notable innovations under Islamic scholarship were in the field of optics and vision, which came from the works of many scientists like Ibn Sahl, Al-Kindi, Ibn al-Haytham, Al-Farisi and Avicenna. The most notable work was The Book of Optics (also known as Kitāb al-Manāẓir), written by Ibn al-Haytham, in which he presented an alternative to the ancient Greek idea about vision.
He discussed his experiments with the camera obscura, showing that light moved in a straight line; he encouraged readers to reproduce his experiments, making him one of the originators of the scientific method. === Scientific Revolution === Physics became a separate science when early modern Europeans used experimental and quantitative methods to discover what are now considered to be the laws of physics. Major developments in this period include the replacement of the geocentric model of the Solar System with the heliocentric Copernican model, the laws governing the motion of planetary bodies (determined by Johannes Kepler between 1609 and 1619), Galileo's pioneering work on telescopes and observational astronomy in the 16th and 17th centuries, and Isaac Newton's discovery and unification of the laws of motion and universal gravitation (that would come to bear his name). Newton, and separately Gottfried Wilhelm Leibniz, developed calculus, the mathematical study of continuous change, and Newton applied it to solve physical problems. === 19th century === The discovery of laws in thermodynamics, chemistry, and electromagnetics resulted from research efforts during the Industrial Revolution as energy needs increased. By the end of the 19th century, theories of thermodynamics, mechanics, and electromagnetics matched a wide variety of observations. Taken together, these theories became the basis for what would later be called classical physics.: 2  A few experimental results remained inexplicable. Classical electromagnetism presumed a medium, a luminiferous aether, to support the propagation of waves, but this medium could not be detected. The intensity of light from hot glowing blackbody objects did not match the predictions of thermodynamics and electromagnetism. The character of electron emission from illuminated metals differed from predictions. These failures, seemingly insignificant in the big picture, would upset the physics world in the first two decades of the 20th century. === 20th century === Modern physics began in the early 20th century with the work of Max Planck in quantum theory and Albert Einstein's theory of relativity. Both of these theories came about due to inaccuracies in classical mechanics in certain situations. Classical mechanics predicted that the speed of light depends on the motion of the observer, which could not be resolved with the constant speed predicted by Maxwell's equations of electromagnetism. This discrepancy was corrected by Einstein's theory of special relativity, which replaced classical mechanics for fast-moving bodies and allowed for a constant speed of light. Black-body radiation provided another problem for classical physics, which was corrected when Planck proposed that the excitation of material oscillators is possible only in discrete steps proportional to their frequency. This, along with the photoelectric effect and a complete theory predicting discrete energy levels of electron orbitals, led to the theory of quantum mechanics, which improves on classical physics at very small scales. Quantum mechanics would come to be pioneered by Werner Heisenberg, Erwin Schrödinger and Paul Dirac. From this early work, and work in related fields, the Standard Model of particle physics was derived.
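Planck's quantization hypothesis and the photoelectric effect mentioned above reduce to two short formulas: a quantum of light of frequency f carries energy E = hf, and the maximum kinetic energy of an electron ejected from a metal with work function phi is hf - phi (or zero below threshold). A brief Python illustration; the frequency and work function below are example values chosen for illustration, not measurements of any particular metal.

h = 6.62607015e-34    # Planck constant, J*s
eV = 1.602176634e-19  # joules per electronvolt

f = 5.0e14            # example frequency of visible light, Hz
phi = 2.3 * eV        # example work function of a metal, J

E_photon = h * f                  # Planck-Einstein relation E = h*f
K_max = max(E_photon - phi, 0.0)  # photoelectric effect: maximum electron kinetic energy

print(E_photon / eV)   # about 2.07 eV per photon
print(K_max / eV)      # 0.0 here: below the 2.3 eV threshold no electrons are ejected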
Following the discovery of a particle with properties consistent with the Higgs boson at CERN in 2012, all fundamental particles predicted by the standard model, and no others, appear to exist; however, physics beyond the Standard Model, with theories such as supersymmetry, is an active area of research. Areas of mathematics in general are important to this field, such as the study of probabilities and groups. == Core theories == Physics deals with a wide variety of systems, although certain theories are used by all physicists. Each of these theories was experimentally tested numerous times and found to be an adequate approximation of nature. These central theories are important tools for research into more specialized topics, and any physicist, regardless of their specialization, is expected to be literate in them. These include classical mechanics, quantum mechanics, thermodynamics and statistical mechanics, electromagnetism, and special relativity. === Distinction between classical and modern physics === In the first decades of the 20th century physics was revolutionized by the discoveries of quantum mechanics and relativity. The changes were so fundamental that these new concepts became the foundation of "modern physics", with other topics becoming "classical physics". The majority of applications of physics are essentially classical.: xxxi  The laws of classical physics accurately describe systems whose important length scales are greater than the atomic scale and whose motions are much slower than the speed of light.: xxxii  Outside of this domain, observations do not match predictions provided by classical mechanics.: 6  === Classical theory === Classical physics includes the traditional branches and topics that were recognized and well-developed before the beginning of the 20th century—classical mechanics, thermodynamics, and electromagnetism.: 2  Classical mechanics is concerned with bodies acted on by forces and bodies in motion and may be divided into statics (study of the forces on a body or bodies not subject to an acceleration), kinematics (study of motion without regard to its causes), and dynamics (study of motion and the forces that affect it); mechanics may also be divided into solid mechanics and fluid mechanics (known together as continuum mechanics), the latter include such branches as hydrostatics, hydrodynamics and pneumatics. Acoustics is the study of how sound is produced, controlled, transmitted and received. Important modern branches of acoustics include ultrasonics, the study of sound waves of very high frequency beyond the range of human hearing; bioacoustics, the physics of animal calls and hearing, and electroacoustics, the manipulation of audible sound waves using electronics. Optics, the study of light, is concerned not only with visible light but also with infrared and ultraviolet radiation, which exhibit all of the phenomena of visible light except visibility, e.g., reflection, refraction, interference, diffraction, dispersion, and polarization of light. Heat is a form of energy, the internal energy possessed by the particles of which a substance is composed; thermodynamics deals with the relationships between heat and other forms of energy. Electricity and magnetism have been studied as a single branch of physics since the intimate connection between them was discovered in the early 19th century; an electric current gives rise to a magnetic field, and a changing magnetic field induces an electric current. 
Electrostatics deals with electric charges at rest, electrodynamics with moving charges, and magnetostatics with magnetic poles at rest. === Modern theory === The discovery of relativity and of quantum mechanics in the first decades of the 20th century transformed the conceptual basis of physics without reducing the practical value of most of the physical theories developed up to that time. Consequently the topics of physics have come to be divided into "classical physics" and "modern physics", with the latter category including effects related to quantum mechanics and relativity.: 2  Classical physics is generally concerned with matter and energy on the normal scale of observation, while much of modern physics is concerned with the behavior of matter and energy under extreme conditions or on a very large or very small scale. For example, atomic and nuclear physics study matter on the smallest scale at which chemical elements can be identified. The physics of elementary particles is on an even smaller scale since it is concerned with the most basic units of matter; this branch of physics is also known as high-energy physics because of the extremely high energies necessary to produce many types of particles in particle accelerators. On this scale, ordinary, commonsensical notions of space, time, matter, and energy are no longer valid. The two chief theories of modern physics present a different picture of the concepts of space, time, and matter from that presented by classical physics. Classical mechanics approximates nature as continuous, while quantum theory is concerned with the discrete nature of many phenomena at the atomic and subatomic level and with the complementary aspects of particles and waves in the description of such phenomena. The theory of relativity is concerned with the description of phenomena that take place in a frame of reference that is in motion with respect to an observer; the special theory of relativity is concerned with motion in the absence of gravitational fields and the general theory of relativity with motion and its connection with gravitation. Both quantum theory and the theory of relativity find applications in many areas of modern physics. Fundamental concepts in modern physics include: Action Causality Covariance Particle Physical field Physical interaction Quantum Statistical ensemble Symmetry Wave == Research == === Scientific method === Physicists use the scientific method to test the validity of a physical theory. By using a methodical approach to compare the implications of a theory with the conclusions drawn from its related experiments and observations, physicists are better able to test the validity of a theory in a logical, unbiased, and repeatable way. To that end, experiments are performed and observations are made in order to determine the validity or invalidity of a theory. A scientific law is a concise verbal or mathematical statement of a relation that expresses a fundamental principle of some theory, such as Newton's law of universal gravitation. === Theory and experiment === Theorists seek to develop mathematical models that both agree with existing experiments and successfully predict future experimental results, while experimentalists devise and perform experiments to test theoretical predictions and explore new phenomena. Although theory and experiment are developed separately, they strongly affect and depend upon each other. 
Progress in physics frequently comes about when experimental results defy explanation by existing theories, prompting intense focus on applicable modelling, and when new theories generate experimentally testable predictions, which inspire the development of new experiments (and often related equipment). Physicists who work at the interplay of theory and experiment are called phenomenologists, who study complex phenomena observed in experiment and work to relate them to a fundamental theory. Theoretical physics has historically taken inspiration from philosophy; electromagnetism was unified this way. Beyond the known universe, the field of theoretical physics also deals with hypothetical issues, such as parallel universes, a multiverse, and higher dimensions. Theorists invoke these ideas in hopes of solving particular problems with existing theories; they then explore the consequences of these ideas and work toward making testable predictions. Experimental physics expands, and is expanded by, engineering and technology. Experimental physicists who are involved in basic research design and perform experiments with equipment such as particle accelerators and lasers, whereas those involved in applied research often work in industry, developing technologies such as magnetic resonance imaging (MRI) and transistors. Feynman has noted that experimentalists may seek areas that have not been explored well by theorists. === Scope and aims === Physics covers a wide range of phenomena, from elementary particles (such as quarks, neutrinos, and electrons) to the largest superclusters of galaxies. Included in these phenomena are the most basic objects composing all other things. Therefore, physics is sometimes called the "fundamental science". Physics aims to describe the various phenomena that occur in nature in terms of simpler phenomena. Thus, physics aims to both connect the things observable to humans to root causes, and then connect these causes together. For example, the ancient Chinese observed that certain rocks (lodestone and magnetite) were attracted to one another by an invisible force. This effect was later called magnetism, which was first rigorously studied in the 17th century. But even before the Chinese discovered magnetism, the ancient Greeks knew of other objects such as amber, that when rubbed with fur would cause a similar invisible attraction between the two. This was also first studied rigorously in the 17th century and came to be called electricity. Thus, physics had come to understand two observations of nature in terms of some root cause (electricity and magnetism). However, further work in the 19th century revealed that these two forces were just two different aspects of one force—electromagnetism. This process of "unifying" forces continues today, and electromagnetism and the weak nuclear force are now considered to be two aspects of the electroweak interaction. Physics hopes to find an ultimate reason (theory of everything) for why nature is as it is (see section Current research below for more information). === Current research === Research in physics is continually progressing on a large number of fronts. In condensed matter physics, an important unsolved theoretical problem is that of high-temperature superconductivity. Many condensed matter experiments are aiming to fabricate workable spintronics and quantum computers. In particle physics, the first pieces of experimental evidence for physics beyond the Standard Model have begun to appear. 
Foremost among these are indications that neutrinos have non-zero mass. These experimental results appear to have solved the long-standing solar neutrino problem, and the physics of massive neutrinos remains an area of active theoretical and experimental research. The Large Hadron Collider has already found the Higgs boson, but future research aims to prove or disprove supersymmetry, which extends the Standard Model of particle physics. Research on the nature of the major mysteries of dark matter and dark energy is also currently ongoing. Although much progress has been made in high-energy, quantum, and astronomical physics, many everyday phenomena involving complexity, chaos, or turbulence are still poorly understood. Complex problems that seem like they could be solved by a clever application of dynamics and mechanics remain unsolved; examples include the formation of sandpiles, nodes in trickling water, the shape of water droplets, mechanisms of surface tension catastrophes, and self-sorting in shaken heterogeneous collections. These complex phenomena have received growing attention since the 1970s for several reasons, including the availability of modern mathematical methods and computers, which enabled complex systems to be modeled in new ways. Complex physics has become part of increasingly interdisciplinary research, as exemplified by the study of turbulence in aerodynamics and the observation of pattern formation in biological systems. In 1932, Horace Lamb said: I am an old man now, and when I die and go to heaven there are two matters on which I hope for enlightenment. One is quantum electrodynamics, and the other is the turbulent motion of fluids. And about the former I am rather optimistic. == Branches and fields == === Fields === The major fields of physics, along with their subfields and the theories and concepts they employ, are shown in the following table. Since the 20th century, the individual fields of physics have become increasingly specialised, and today most physicists work in a single field for their entire careers. "Universalists" such as Einstein (1879–1955) and Lev Landau (1908–1968), who worked in multiple fields of physics, are now very rare. Contemporary research in physics can be broadly divided into nuclear and particle physics; condensed matter physics; atomic, molecular, and optical physics; astrophysics; and applied physics. Some physics departments also support physics education research and physics outreach. ==== Nuclear and particle ==== Particle physics is the study of the elementary constituents of matter and energy and the interactions between them. In addition, particle physicists design and develop the high-energy accelerators, detectors, and computer programs necessary for this research. The field is also called "high-energy physics" because many elementary particles do not occur naturally but are created only during high-energy collisions of other particles. Currently, the interactions of elementary particles and fields are described by the Standard Model. The model accounts for the 12 known particles of matter (quarks and leptons) that interact via the strong, weak, and electromagnetic fundamental forces. Dynamics are described in terms of matter particles exchanging gauge bosons (gluons, W and Z bosons, and photons, respectively). The Standard Model also predicts a particle known as the Higgs boson.
In July 2012, CERN, the European laboratory for particle physics, announced the detection of a particle consistent with the Higgs boson, an integral part of the Higgs mechanism. Nuclear physics is the field of physics that studies the constituents and interactions of atomic nuclei. The most commonly known applications of nuclear physics are nuclear power generation and nuclear weapons technology, but the research has provided applications in many fields, including those in nuclear medicine and magnetic resonance imaging, ion implantation in materials engineering, and radiocarbon dating in geology and archaeology. ==== Atomic, molecular, and optical ==== Atomic, molecular, and optical physics (AMO) is the study of matter-matter and light-matter interactions on the scale of single atoms and molecules. The three areas are grouped together because of their interrelationships, the similarity of methods used, and the commonality of their relevant energy scales. All three areas include classical, semi-classical, and quantum treatments; they can treat their subject from a microscopic view (in contrast to a macroscopic view). Atomic physics studies the electron shells of atoms. Current research focuses on activities in quantum control, cooling and trapping of atoms and ions, low-temperature collision dynamics and the effects of electron correlation on structure and dynamics. Atomic physics is influenced by the nucleus (see hyperfine splitting), but intra-nuclear phenomena such as fission and fusion are considered part of nuclear physics. Molecular physics focuses on multi-atomic structures and their internal and external interactions with matter and light. Optical physics is distinct from optics in that it tends to focus not on the control of classical light fields by macroscopic objects but on the fundamental properties of optical fields and their interactions with matter in the microscopic realm. ==== Condensed matter ==== Condensed matter physics is the field of physics that deals with the macroscopic physical properties of matter. In particular, it is concerned with the "condensed" phases that appear whenever the number of particles in a system is extremely large and the interactions between them are strong. The most familiar examples of condensed phases are solids and liquids, which arise from the bonding by way of the electromagnetic force between atoms. More exotic condensed phases include the superfluid and the Bose–Einstein condensate found in certain atomic systems at very low temperature, the superconducting phase exhibited by conduction electrons in certain materials, and the ferromagnetic and antiferromagnetic phases of spins on atomic lattices. Condensed matter physics is the largest field of contemporary physics. Historically, condensed matter physics grew out of solid-state physics, which is now considered one of its main subfields. The term condensed matter physics was apparently coined by Philip Anderson when he renamed his research group—previously solid-state theory—in 1967. In 1978, the Division of Solid State Physics of the American Physical Society was renamed as the Division of Condensed Matter Physics. Condensed matter physics has a large overlap with chemistry, materials science, nanotechnology and engineering. ==== Astrophysics ==== Astrophysics and astronomy are the application of the theories and methods of physics to the study of stellar structure, stellar evolution, the origin of the Solar System, and related problems of cosmology.
Because astrophysics is a broad subject, astrophysicists typically apply many disciplines of physics, including mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics. The discovery by Karl Jansky in 1931 that radio signals were emitted by celestial bodies initiated the science of radio astronomy. Most recently, the frontiers of astronomy have been expanded by space exploration. Perturbations and interference from the Earth's atmosphere make space-based observations necessary for infrared, ultraviolet, gamma-ray, and X-ray astronomy. Physical cosmology is the study of the formation and evolution of the universe on its largest scales. Albert Einstein's theory of relativity plays a central role in all modern cosmological theories. In the early 20th century, Hubble's discovery that the universe is expanding, as shown by the Hubble diagram, prompted rival explanations known as the steady state universe and the Big Bang. The Big Bang was confirmed by the success of Big Bang nucleosynthesis and the discovery of the cosmic microwave background in 1964. The Big Bang model rests on two theoretical pillars: Albert Einstein's general relativity and the cosmological principle. Cosmologists have recently established the ΛCDM model of the evolution of the universe, which includes cosmic inflation, dark energy, and dark matter. == Other aspects == === Education === === Careers === === Philosophy === Physics, as with the rest of science, relies on the philosophy of science and its "scientific method" to advance knowledge of the physical world. The scientific method employs a priori and a posteriori reasoning as well as the use of Bayesian inference to measure the validity of a given theory. Study of the philosophical issues surrounding physics, the philosophy of physics, involves issues such as the nature of space and time, determinism, and metaphysical outlooks such as empiricism, naturalism, and realism. Many physicists have written about the philosophical implications of their work, for instance Laplace, who championed causal determinism, and Erwin Schrödinger, who wrote on quantum mechanics. The mathematical physicist Roger Penrose has been called a Platonist by Stephen Hawking, a view Penrose discusses in his book, The Road to Reality. Hawking referred to himself as an "unashamed reductionist" and took issue with Penrose's views. Mathematics provides a compact and exact language used to describe the order in nature. This was noted and advocated by Pythagoras, Plato, Galileo, and Newton. Some theorists, like Hilary Putnam and Penelope Maddy, hold that logical truths, and therefore mathematical reasoning, depend on the empirical world. This is usually combined with the claim that the laws of logic express universal regularities found in the structural features of the world, which may explain the peculiar relation between these fields. Physics uses mathematics to organise and formulate experimental results. From those results, precise or estimated solutions, or quantitative results, are obtained, from which new predictions can be made and experimentally confirmed or negated. The results from physics experiments are numerical data, with their units of measure and estimates of the errors in the measurements. Technologies based on mathematics, like computation, have made computational physics an active area of research. Ontology is a prerequisite for physics, but not for mathematics. 
This means that physics is ultimately concerned with descriptions of the real world, while mathematics is concerned with abstract patterns, even beyond the real world. Thus physics statements are synthetic, while mathematical statements are analytic. Mathematics contains hypotheses, while physics contains theories. Mathematical statements have to be only logically true, while predictions of physics statements must match observed and experimental data. The distinction is clear-cut, but not always obvious. For example, mathematical physics is the application of mathematics in physics. Its methods are mathematical, but its subject is physical. The problems in this field start with a "mathematical model of a physical situation" (system) and a "mathematical description of a physical law" that will be applied to that system. Every mathematical statement used for solving has a hard-to-find physical meaning. The final mathematical solution has an easier-to-find meaning, because it is what the solver is looking for. === Fundamental vs. applied physics === Physics is a branch of fundamental science (also called basic science). Physics is also called "the fundamental science" because all branches of natural science, including chemistry, astronomy, geology, and biology, are constrained by the laws of physics. Similarly, chemistry is often called the central science because of its role in linking the physical sciences. For example, chemistry studies properties, structures, and reactions of matter (chemistry's focus on the molecular and atomic scale distinguishes it from physics). Structures are formed because particles exert electrical forces on each other, properties include physical characteristics of given substances, and reactions are bound by laws of physics, like conservation of energy, mass, and charge. Fundamental physics seeks to better explain and understand phenomena in all spheres, without a specific practical application as a goal, other than the deeper insight into the phenomena themselves. Applied physics is a general term for physics research and development that is intended for a particular use. An applied physics curriculum usually contains a few classes in an applied discipline, like geology or electrical engineering. It usually differs from engineering in that an applied physicist may not be designing something in particular, but rather is using physics or conducting physics research with the aim of developing new technologies or solving a problem. The approach is similar to that of applied mathematics. Applied physicists use physics in scientific research. For instance, people working on accelerator physics might seek to build better particle detectors for research in theoretical physics. Physics is used heavily in engineering. For example, statics, a subfield of mechanics, is used in the building of bridges and other static structures. The understanding and use of acoustics results in sound control and better concert halls; similarly, the use of optics creates better optical devices. An understanding of physics makes for more realistic flight simulators, video games, and movies, and is often critical in forensic investigations. With the standard consensus that the laws of physics are universal and do not change with time, physics can be used to study things that would ordinarily be mired in uncertainty. 
For example, in the study of the origin of the Earth, a physicist can reasonably model Earth's mass, temperature, and rate of rotation as functions of time, allowing extrapolation forward or backward in time and so predicting future or prior events. It also allows for simulations in engineering that speed up the development of a new technology. There is also considerable interdisciplinarity, so many other important fields are influenced by physics (e.g., the fields of econophysics and sociophysics). == See also == Earth science – Fields of natural science related to Earth Neurophysics – Branch of biophysics dealing with the development and use of physical methods to gain information about the nervous system Psychophysics – Branch of knowledge relating physical stimuli and psychological perception Relationship between mathematics and physics Science tourism – Travel to notable science locations === Lists === List of important publications in physics List of physicists Lists of physics equations == Notes == == References == == Sources == == External links == Physics at Quanta Magazine Usenet Physics FAQ – FAQ compiled by sci.physics and other physics newsgroups Website of the Nobel Prize in physics – Award for outstanding contributions to the subject World of Physics – Online encyclopedic dictionary of physics Nature Physics – Academic journal Physics – Online magazine by the American Physical Society – Directory of physics related media The Vega Science Trust – Science videos, including physics HyperPhysics website – Physics and astronomy mind-map from Georgia State University Physics at MIT OpenCourseWare – Online course material from Massachusetts Institute of Technology The Feynman Lectures on Physics
Wikipedia/Classical_and_modern_physics
Statics is the branch of classical mechanics that is concerned with the analysis of force and torque acting on a physical system that does not experience an acceleration, but rather is in equilibrium with its environment. If F {\displaystyle {\textbf {F}}} is the total of the forces acting on the system, m {\displaystyle m} is the mass of the system and a {\displaystyle {\textbf {a}}} is the acceleration of the system, Newton's second law states that F = m a {\displaystyle {\textbf {F}}=m{\textbf {a}}\,} (the bold font indicates a vector quantity, i.e. one with both magnitude and direction). If a = 0 {\displaystyle {\textbf {a}}=0} , then F = 0 {\displaystyle {\textbf {F}}=0} . Since, for a system in static equilibrium, the acceleration equals zero, the system is either at rest or its center of mass moves at constant velocity. The application of the assumption of zero acceleration to the summation of moments acting on the system leads to M = I α = 0 {\displaystyle {\textbf {M}}=I\alpha =0} , where M {\displaystyle {\textbf {M}}} is the summation of all moments acting on the system, I {\displaystyle I} is the moment of inertia of the mass and α {\displaystyle \alpha } is the angular acceleration of the system. For a system where α = 0 {\displaystyle \alpha =0} , it is also true that M = 0. {\displaystyle {\textbf {M}}=0.} Together, the equations F = m a = 0 {\displaystyle {\textbf {F}}=m{\textbf {a}}=0} (the 'first condition for equilibrium') and M = I α = 0 {\displaystyle {\textbf {M}}=I\alpha =0} (the 'second condition for equilibrium') can be used to solve for unknown quantities acting on the system. == History == Archimedes (c. 287–c. 212 BC) did pioneering work in statics. Later developments in the field of statics are found in the works of Thebit. == Background == === Force === Force is the action of one body on another. A force is either a push or a pull, and it tends to move a body in the direction of its action. The action of a force is characterized by its magnitude, by the direction of its action, and by its point of application (or point of contact). Thus, force is a vector quantity, because its effect depends on the direction as well as on the magnitude of the action. Forces are classified as either contact or body forces. A contact force is produced by direct physical contact; an example is the force exerted on a body by a supporting surface. A body force is generated by virtue of the position of a body within a force field such as a gravitational, electric, or magnetic field and is independent of contact with any other body; an example of a body force is the weight of a body in the Earth's gravitational field. === Moment of a force === In addition to the tendency to move a body in the direction of its application, a force can also tend to rotate a body about an axis. The axis may be any line which neither intersects nor is parallel to the line of action of the force. This rotational tendency is known as moment of force (M). Moment is also referred to as torque. ==== Moment about a point ==== The magnitude of the moment of a force at a point O is equal to the perpendicular distance from O to the line of action of F, multiplied by the magnitude of the force: M = F · d, where F is the force applied and d is the perpendicular distance from the axis to the line of action of the force. This perpendicular distance is called the moment arm. The direction of the moment is given by the right hand rule, where counter clockwise (CCW) is out of the page, and clockwise (CW) is into the page. 
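The two conditions for equilibrium and the moment-arm formula M = F · d above can be illustrated with a short numerical sketch. The example below is hypothetical and not taken from the article itself: it assumes a 6 m beam resting on supports at its two ends and carrying a single 900 N downward load 2 m from the left support, and uses Python with NumPy to solve the first and second conditions for equilibrium for the two unknown support reactions.

import numpy as np

# Hypothetical example: a 6 m beam on two end supports (A at x = 0, B at x = 6)
# carrying a 900 N downward load at x = 2 m. Unknowns: the reactions R_A and R_B.
L = 6.0      # beam length in metres (assumed)
P = 900.0    # applied load in newtons (assumed)
x_P = 2.0    # position of the load in metres (assumed)

# First condition for equilibrium (sum of vertical forces = 0):  R_A + R_B - P = 0
# Second condition for equilibrium (sum of moments about A = 0): R_B * L - P * x_P = 0
A = np.array([[1.0, 1.0],
              [0.0, L]])
b = np.array([P, P * x_P])
R_A, R_B = np.linalg.solve(A, b)

print(f"R_A = {R_A:.1f} N, R_B = {R_B:.1f} N")   # R_A = 600.0 N, R_B = 300.0 N
# Check: the moment of the load about A is M = F * d = 900 N * 2 m = 1800 N·m,
# balanced by the moment of the reaction at B: 300 N * 6 m = 1800 N·m.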
The moment direction may be accounted for by using a stated sign convention, such as a plus sign (+) for counterclockwise moments and a minus sign (−) for clockwise moments, or vice versa. Moments can be added together as vectors. In vector format, the moment can be defined as the cross product between the radius vector, r (the vector from point O to the line of action), and the force vector, F: M O = r × F {\displaystyle {\textbf {M}}_{O}={\textbf {r}}\times {\textbf {F}}} ==== Varignon's theorem ==== Varignon's theorem states that the moment of a force about any point is equal to the sum of the moments of the components of the force about the same point. === Equilibrium equations === The static equilibrium of a particle is an important concept in statics. A particle is in equilibrium only if the resultant of all forces acting on the particle is equal to zero. In a rectangular coordinate system the equilibrium equations can be represented by three scalar equations, where the sums of forces in all three directions are equal to zero. An engineering application of this concept is determining the tensions of up to three cables under load, for example the forces exerted on each cable of a hoist lifting an object or of guy wires restraining a hot air balloon to the ground. === Moment of inertia === In classical mechanics, moment of inertia, also called mass moment, rotational inertia, polar moment of inertia of mass, or the angular mass (SI units kg·m²), is a measure of an object's resistance to changes to its rotation. It is the inertia of a rotating body with respect to its rotation. The moment of inertia plays much the same role in rotational dynamics as mass does in linear dynamics, describing the relationship between angular momentum and angular velocity, torque and angular acceleration, and several other quantities. The symbols I and J are usually used to refer to the moment of inertia or polar moment of inertia. While a simple scalar treatment of the moment of inertia suffices for many situations, a more advanced tensor treatment allows the analysis of such complicated systems as spinning tops and gyroscopic motion. The concept was introduced by Leonhard Euler in his 1765 book Theoria motus corporum solidorum seu rigidorum; he discussed the moment of inertia and many related concepts, such as the principal axis of inertia. == Applications == === Solids === Statics is used in the analysis of structures, for instance in architectural and structural engineering. Strength of materials is a related field of mechanics that relies heavily on the application of static equilibrium. A key concept is the center of gravity of a body at rest: it represents an imaginary point at which all the mass of a body resides. The position of the point relative to the foundations on which a body lies determines its stability in response to external forces. If the center of gravity exists outside the foundations, then the body is unstable because there is a torque acting: any small disturbance will cause the body to fall or topple. 
If the center of gravity exists within the foundations, the body is stable since no net torque acts on the body. If the center of gravity coincides with the foundations, then the body is said to be metastable. === Fluids === Hydrostatics, also known as fluid statics, is the study of fluids at rest (i.e. in static equilibrium). The characteristic of any fluid at rest is that the pressure exerted on any particle of the fluid is the same at all points at the same depth (or altitude) within the fluid. If the net force is greater than zero, the fluid will move in the direction of the resulting force. This concept was first formulated in a slightly extended form by French mathematician and philosopher Blaise Pascal in 1647 and became known as Pascal's law. It has many important applications in hydraulics. Archimedes, Abū Rayhān al-Bīrūnī, Al-Khazini and Galileo Galilei were also major figures in the development of hydrostatics. == See also == Cremona diagram Dynamics Solid mechanics == Notes == == References == Beer, F.P. & Johnston Jr, E.R. (1992). Statics and Mechanics of Materials. McGraw-Hill, Inc. Beer, F.P.; Johnston Jr, E.R.; Eisenberg (2009). Vector Mechanics for Engineers: Statics, 9th Ed. McGraw Hill. ISBN 978-0-07-352923-3. Morelon, Régis; Rashed, Roshdi, eds. (1996), Encyclopedia of the History of Arabic Science, vol. 3, Routledge, ISBN 978-0415124102 == External links ==
Wikipedia/Statics
Plasma (from Ancient Greek πλάσμα (plásma) 'moldable substance') is a state of matter characterized by the presence of a significant portion of charged particles in any combination of ions or electrons. It is the most abundant form of ordinary matter in the universe, mostly in stars (including the Sun), but also dominating the rarefied intracluster medium and intergalactic medium. Plasma can be artificially generated, for example, by heating a neutral gas or subjecting it to a strong electromagnetic field. The presence of charged particles makes plasma electrically conductive, with the dynamics of individual particles and macroscopic plasma motion governed by collective electromagnetic fields and very sensitive to externally applied fields. The response of plasma to electromagnetic fields is used in many modern devices and technologies, such as plasma televisions or plasma etching. Depending on temperature and density, a certain number of neutral particles may also be present, in which case plasma is called partially ionized. Neon signs and lightning are examples of partially ionized plasmas. Unlike the phase transitions between the other three states of matter, the transition to plasma is not well defined and is a matter of interpretation and context. Whether a given degree of ionization suffices to call a substance "plasma" depends on the specific phenomenon being considered. == Early history == Plasma was first identified in laboratory by Sir William Crookes. Crookes presented a lecture on what he called "radiant matter" to the British Association for the Advancement of Science, in Sheffield, on Friday, 22 August 1879. Systematic studies of plasma began with the research of Irving Langmuir and his colleagues in the 1920s. Langmuir also introduced the term "plasma" as a description of ionized gas in 1928: Except near the electrodes, where there are sheaths containing very few electrons, the ionized gas contains ions and electrons in about equal numbers so that the resultant space charge is very small. We shall use the name plasma to describe this region containing balanced charges of ions and electrons. Lewi Tonks and Harold Mott-Smith, both of whom worked with Langmuir in the 1920s, recall that Langmuir first used the term by analogy with the blood plasma. Mott-Smith recalls, in particular, that the transport of electrons from thermionic filaments reminded Langmuir of "the way blood plasma carries red and white corpuscles and germs." == Definitions == === The fourth state of matter === Plasma is called the fourth state of matter after solid, liquid, and gas. It is a state of matter in which an ionized substance becomes highly electrically conductive to the point that long-range electric and magnetic fields dominate its behaviour. Plasma is typically an electrically quasineutral medium of unbound positive and negative particles (i.e., the overall charge of a plasma is roughly zero). Although these particles are unbound, they are not "free" in the sense of not experiencing forces. Moving charged particles generate electric currents, and any movement of a charged plasma particle affects and is affected by the fields created by the other charges. In turn, this governs collective behaviour with many degrees of variation. Plasma is distinct from the other states of matter. In particular, describing a low-density plasma as merely an "ionized gas" is wrong and misleading, even though it is similar to the gas phase in that both assume no definite shape or volume. 
Plasmas differ from ordinary gases in several principal respects, most notably their high electrical conductivity and the collective, long-range character of particle interactions. === Ideal plasma === Three factors define an ideal plasma: The plasma approximation: The plasma approximation applies when the plasma parameter Λ, representing the number of charge carriers within the Debye sphere, is much higher than unity. It can be readily shown that this criterion is equivalent to smallness of the ratio of the plasma electrostatic and thermal energy densities. Such plasmas are called weakly coupled. Bulk interactions: The Debye length is much smaller than the physical size of the plasma. This criterion means that interactions in the bulk of the plasma are more important than those at its edges, where boundary effects may take place. When this criterion is satisfied, the plasma is quasineutral. Collisionlessness: The electron plasma frequency (measuring plasma oscillations of the electrons) is much larger than the electron–neutral collision frequency. When this condition is valid, electrostatic interactions dominate over the processes of ordinary gas kinetics. Such plasmas are called collisionless. === Non-neutral plasma === The strength and range of the electric force and the good conductivity of plasmas usually ensure that the densities of positive and negative charges in any sizeable region are equal ("quasineutrality"). A plasma with a significant excess of charge density, or, in the extreme case, one composed of a single species, is called a non-neutral plasma. In such a plasma, electric fields play a dominant role. Examples are charged particle beams, an electron cloud in a Penning trap and positron plasmas. === Dusty plasma === A dusty plasma contains tiny charged particles of dust (typically found in space). The dust particles acquire high charges and interact with each other. A plasma that contains larger particles is called grain plasma. Under laboratory conditions, dusty plasmas are also called complex plasmas. == Properties and parameters == === Density and ionization degree === For plasma to exist, ionization is necessary. The term "plasma density" by itself usually refers to the electron density n e {\displaystyle n_{e}} , that is, the number of charge-contributing electrons per unit volume. The degree of ionization α {\displaystyle \alpha } is defined as the fraction of neutral particles that are ionized: α = n i n i + n n , {\displaystyle \alpha ={\frac {n_{i}}{n_{i}+n_{n}}},} where n i {\displaystyle n_{i}} is the ion density and n n {\displaystyle n_{n}} the neutral density (in number of particles per unit volume). In the case of fully ionized matter, α = 1 {\displaystyle \alpha =1} . Because of the quasineutrality of plasma, the electron and ion densities are related by n e = ⟨ Z i ⟩ n i {\displaystyle n_{e}=\langle Z_{i}\rangle n_{i}} , where ⟨ Z i ⟩ {\displaystyle \langle Z_{i}\rangle } is the average ion charge (in units of the elementary charge). === Temperature === Plasma temperature, commonly measured in kelvin or electronvolts, is a measure of the thermal kinetic energy per particle. High temperatures are usually needed to sustain ionization, which is a defining feature of a plasma. The degree of plasma ionization is determined by the electron temperature relative to the ionization energy (and more weakly by the density). In thermal equilibrium, the relationship is given by the Saha equation. At low temperatures, ions and electrons tend to recombine into bound states—atoms—and the plasma will eventually become a gas. 
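The degree of ionization, the quasineutrality relation, and the ideal-plasma criteria defined above can be evaluated for concrete numbers. The short Python sketch below uses illustrative values invented for this example (a weakly ionized discharge), together with the standard textbook expressions for the electron Debye length, the plasma parameter, and the electron plasma frequency, which the text refers to but does not write out.

import numpy as np

# Physical constants (SI units)
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
me = 9.1093837015e-31    # electron mass, kg
kB = 1.380649e-23        # Boltzmann constant, J/K

# Illustrative (assumed) parameters for a weakly ionized discharge
n_i = 1e16               # ion density, m^-3
n_n = 1e19               # neutral density, m^-3
Z = 1.0                  # average ion charge state
T_e = 3.0 * 11604.5      # electron temperature: 3 eV expressed in kelvin

alpha = n_i / (n_i + n_n)   # degree of ionization, as defined in the text
n_e = Z * n_i               # electron density from quasineutrality, n_e = <Z_i> n_i

# Standard textbook expressions (not given explicitly in the article):
lambda_D = np.sqrt(eps0 * kB * T_e / (n_e * e**2))   # electron Debye length, m
Lambda = (4.0 / 3.0) * np.pi * n_e * lambda_D**3     # electrons in a Debye sphere
omega_pe = np.sqrt(n_e * e**2 / (eps0 * me))         # electron plasma frequency, rad/s

print(f"alpha = {alpha:.3e}")                  # ~1e-3: weakly ionized
print(f"Debye length = {lambda_D:.3e} m")      # ~1e-4 m for these values
print(f"plasma parameter = {Lambda:.3e}")      # >> 1, so the plasma approximation holds
print(f"plasma frequency = {omega_pe:.3e} rad/s")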
In most cases, the electrons and heavy plasma particles (ions and neutral atoms) separately have a relatively well-defined temperature; that is, their energy distribution function is close to a Maxwellian even in the presence of strong electric or magnetic fields. However, because of the large difference in mass between electrons and ions, their temperatures may be different, sometimes significantly so. This is especially common in weakly ionized technological plasmas, where the ions are often near the ambient temperature while electrons reach thousands of kelvin. The opposite case is the z-pinch plasma where the ion temperature may exceed that of electrons. === Plasma potential === Since plasmas are very good electrical conductors, electric potentials play an important role. The average potential in the space between charged particles, independent of how it can be measured, is called the "plasma potential", or the "space potential". If an electrode is inserted into a plasma, its potential will generally lie considerably below the plasma potential due to what is termed a Debye sheath. The good electrical conductivity of plasmas makes their electric fields very small. This results in the important concept of "quasineutrality", which says the density of negative charges is approximately equal to the density of positive charges over large volumes of the plasma ( n e = ⟨ Z ⟩ n i {\displaystyle n_{e}=\langle Z\rangle n_{i}} ), but on the scale of the Debye length, there can be charge imbalance. In the special case that double layers are formed, the charge separation can extend some tens of Debye lengths. The magnitude of the potentials and electric fields must be determined by means other than simply finding the net charge density. A common example is to assume that the electrons satisfy the Boltzmann relation: n e ∝ exp ⁡ ( e Φ / k B T e ) . {\displaystyle n_{e}\propto \exp(e\Phi /k_{\text{B}}T_{e}).} Differentiating this relation provides a means to calculate the electric field from the density: E → = k B T e e ∇ n e n e . {\displaystyle {\vec {E}}={\frac {k_{\text{B}}T_{e}}{e}}{\frac {\nabla n_{e}}{n_{e}}}.} It is possible to produce a plasma that is not quasineutral. An electron beam, for example, has only negative charges. The density of a non-neutral plasma must generally be very low, or the plasma must be very small in extent; otherwise, it will be dissipated by the repulsive electrostatic force. === Magnetization === The existence of charged particles causes the plasma to generate, and be affected by, magnetic fields. Plasma with a magnetic field strong enough to influence the motion of the charged particles is said to be magnetized. A common quantitative criterion is that a particle on average completes at least one gyration around the magnetic-field line before making a collision, i.e., ν c e / ν c o l l > 1 {\displaystyle \nu _{\mathrm {ce} }/\nu _{\mathrm {coll} }>1} , where ν c e {\displaystyle \nu _{\mathrm {ce} }} is the electron gyrofrequency and ν c o l l {\displaystyle \nu _{\mathrm {coll} }} is the electron collision rate. It is often the case that the electrons are magnetized while the ions are not. Magnetized plasmas are anisotropic, meaning that their properties in the direction parallel to the magnetic field are different from those perpendicular to it. 
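As a rough illustration of the Boltzmann relation and the density-gradient expression for the electric field quoted above, the following Python sketch evaluates E = (kB Te / e)(∇ne / ne) for an assumed one-dimensional exponential density profile; the 2 eV electron temperature and 2 cm decay length are invented for the example. For such a profile the field magnitude is simply (kB Te / e) divided by the decay length.

import numpy as np

e = 1.602176634e-19   # elementary charge, C
kB = 1.380649e-23     # Boltzmann constant, J/K

# Assumed example: 2 eV electrons and an exponential density profile
T_e = 2.0 * 11604.5                  # 2 eV expressed in kelvin
x = np.linspace(0.0, 0.1, 201)       # position, m
n_e = 1e17 * np.exp(-x / 0.02)       # electron density, m^-3 (decay length 2 cm)

# Electric field from the article's expression E = (kB*T_e/e) * (grad n_e / n_e)
grad_n = np.gradient(n_e, x)
E = (kB * T_e / e) * grad_n / n_e

print(f"|E| ~ {abs(E[100]):.1f} V/m")   # about (2 V) / (0.02 m) = 100 V/m for this profile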
While electric fields in plasmas are usually small due to the plasma's high conductivity, the electric field associated with a plasma moving with velocity v {\displaystyle \mathbf {v} } in the magnetic field B {\displaystyle \mathbf {B} } is given by the usual Lorentz formula E = − v × B {\displaystyle \mathbf {E} =-\mathbf {v} \times \mathbf {B} } , and is not affected by Debye shielding. == Mathematical descriptions == To completely describe the state of a plasma, all of the particle locations and velocities, together with the electromagnetic field in the plasma region, would need to be written down. However, it is generally not practical or necessary to keep track of all the particles in a plasma. Therefore, plasma physicists commonly use less detailed descriptions, of which there are two main types: === Fluid model === Fluid models describe plasmas in terms of smoothed quantities, like density and averaged velocity around each position (see Plasma parameters). One simple fluid model, magnetohydrodynamics, treats the plasma as a single fluid governed by a combination of Maxwell's equations and the Navier–Stokes equations. A more general description is the two-fluid plasma, where the ions and electrons are described separately. Fluid models are often accurate when collisionality is sufficiently high to keep the plasma velocity distribution close to a Maxwell–Boltzmann distribution. Because fluid models usually describe the plasma in terms of a single flow at a certain temperature at each spatial location, they can neither capture velocity space structures like beams or double layers, nor resolve wave-particle effects. === Kinetic model === Kinetic models describe the particle velocity distribution function at each point in the plasma and therefore do not need to assume a Maxwell–Boltzmann distribution. A kinetic description is often necessary for collisionless plasmas. There are two common approaches to kinetic description of a plasma. One is based on representing the smoothed distribution function on a grid in velocity and position. The other, known as the particle-in-cell (PIC) technique, includes kinetic information by following the trajectories of a large number of individual particles. Kinetic models are generally more computationally intensive than fluid models. The Vlasov equation may be used to describe the dynamics of a system of charged particles interacting with an electromagnetic field. In magnetized plasmas, a gyrokinetic approach can substantially reduce the computational expense of a fully kinetic simulation. == Plasma science and technology == Plasmas are studied by the vast academic field of plasma science or plasma physics, including several sub-disciplines such as space plasma physics. Plasmas can appear in nature in various forms and locations, with a few examples discussed below. === Space and astrophysics === Plasmas are by far the most common phase of ordinary matter in the universe, both by mass and by volume. Above the Earth's surface, the ionosphere is a plasma, and the magnetosphere contains plasma. Within our Solar System, interplanetary space is filled with the plasma expelled via the solar wind, extending from the Sun's surface out to the heliopause. Furthermore, all the distant stars, and much of interstellar or intergalactic space, are also filled with plasma, albeit at very low densities. 
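Two of the quantitative statements above, the motional field E = −v × B and the magnetization criterion νce/νcoll > 1, can be checked with simple numbers. The values in the Python sketch below (a 50 mT field, a 20 km/s bulk flow, and an electron collision frequency of 1e7 per second) are assumed purely for illustration, and the electron gyrofrequency is computed from the standard expression eB/me, which is not quoted in the article.

import numpy as np

e = 1.602176634e-19    # elementary charge, C
me = 9.1093837015e-31  # electron mass, kg

# Assumed example values
B = np.array([0.0, 0.0, 0.05])       # magnetic field, T (50 mT along z)
v = np.array([2.0e4, 0.0, 0.0])      # bulk plasma velocity, m/s (along x)
nu_coll = 1.0e7                      # assumed electron collision frequency, 1/s

# Motional electric field E = -v x B (Lorentz formula quoted in the text)
E = -np.cross(v, B)
print("E =", E, "V/m")               # [0, 1000, 0] V/m for these values

# Magnetization criterion: electron gyrofrequency vs. collision frequency
omega_ce = e * B[2] / me             # electron gyrofrequency, rad/s
print(f"omega_ce / nu_coll = {omega_ce / nu_coll:.0f}")   # >> 1: electrons are magnetized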
Astrophysical plasmas are also observed in accretion disks around stars or compact objects like white dwarfs, neutron stars, or black holes in close binary star systems. Plasma is associated with ejection of material in astrophysical jets, which have been observed with accreting black holes or in active galaxies like M87's jet that possibly extends out to 5,000 light-years. === Artificial plasmas === Most artificial plasmas are generated by the application of electric and/or magnetic fields through a gas. Plasma generated in a laboratory setting and for industrial use can be generally categorized by: The type of power source used to generate the plasma—DC, AC (typically with radio frequency (RF)) and microwave The pressure they operate at—vacuum pressure (< 10 mTorr or 1 Pa), moderate pressure (≈1 Torr or 100 Pa), atmospheric pressure (760 Torr or 100 kPa) The degree of ionization within the plasma—fully, partially, or weakly ionized The temperature relationships within the plasma—thermal plasma ( T e = T i = T gas {\displaystyle T_{e}=T_{i}=T_{\text{gas}}} ), non-thermal or "cold" plasma ( T e ≫ T i = T gas {\displaystyle T_{e}\gg T_{i}=T_{\text{gas}}} ) The electrode configuration used to generate the plasma The magnetization of the particles within the plasma—magnetized (both ion and electrons are trapped in Larmor orbits by the magnetic field), partially magnetized (the electrons but not the ions are trapped by the magnetic field), non-magnetized (the magnetic field is too weak to trap the particles in orbits but may generate Lorentz forces) ==== Generation of artificial plasma ==== Just like the many uses of plasma, there are several means for its generation. However, one principle is common to all of them: there must be energy input to produce and sustain it. For this case, plasma is generated when an electric current is applied across a dielectric gas or fluid (an electrically non-conducting material) as can be seen in the adjacent image, which shows a discharge tube as a simple example (DC used for simplicity). The potential difference and subsequent electric field pull the bound electrons (negative) toward the anode (positive electrode) while the cathode (negative electrode) pulls the nucleus. As the voltage increases, the current stresses the material (by electric polarization) beyond its dielectric limit (termed strength) into a stage of electrical breakdown, marked by an electric spark, where the material transforms from being an insulator into a conductor (as it becomes increasingly ionized). The underlying process is the Townsend avalanche, where collisions between electrons and neutral gas atoms create more ions and electrons (as can be seen in the figure on the right). The first impact of an electron on an atom results in one ion and two electrons. Therefore, the number of charged particles increases rapidly (in the millions) only "after about 20 successive sets of collisions", mainly due to a small mean free path (average distance travelled between collisions). ===== Electric arc ===== Electric arc is a continuous electric discharge between two electrodes, similar to lightning. With ample current density, the discharge forms a luminous arc, where the inter-electrode material (usually, a gas) undergoes various stages — saturation, breakdown, glow, transition, and thermal arc. The voltage rises to its maximum in the saturation stage, and thereafter it undergoes fluctuations of the various stages, while the current progressively increases throughout. 
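The figure of "millions" of charge carriers after roughly 20 successive sets of collisions in the Townsend avalanche described above is simple doubling arithmetic: if each set of collisions turns every free electron into two, the electron count grows as 2 to the power of the number of steps, and 2^20 is already about a million. A minimal check in Python:

# Each avalanche step doubles the number of free electrons (idealized Townsend picture)
electrons = 1
for step in range(20):
    electrons *= 2
print(electrons)   # 1048576, i.e. on the order of a million after ~20 steps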
Electrical resistance along the arc creates heat, which dissociates more gas molecules and ionizes the resulting atoms. Therefore, the electrical energy is given to electrons, which, due to their great mobility and large numbers, are able to disperse it rapidly by elastic collisions to the heavy particles. ==== Examples of industrial plasma ==== Plasmas find applications in many fields of research, technology and industry, for example, in industrial and extractive metallurgy, surface treatments such as plasma spraying (coating), etching in microelectronics, metal cutting and welding; as well as in everyday vehicle exhaust cleanup and fluorescent/luminescent lamps, fuel ignition, and even in supersonic combustion engines for aerospace engineering. ===== Low-pressure discharges ===== Glow discharge plasmas: non-thermal plasmas generated by the application of DC or low frequency RF (<100 kHz) electric field to the gap between two metal electrodes. Probably the most common plasma; this is the type of plasma generated within fluorescent light tubes. Capacitively coupled plasma (CCP): similar to glow discharge plasmas, but generated with high frequency RF electric fields, typically 13.56 MHz. These differ from glow discharges in that the sheaths are much less intense. These are widely used in the microfabrication and integrated circuit manufacturing industries for plasma etching and plasma enhanced chemical vapor deposition. Cascaded arc plasma source: a device to produce low temperature (≈1eV) high density plasmas (HDP). Inductively coupled plasma (ICP): similar to a CCP and with similar applications but the electrode consists of a coil wrapped around the chamber where plasma is formed. Wave heated plasma: similar to CCP and ICP in that it is typically RF (or microwave). Examples include helicon discharge and electron cyclotron resonance (ECR). ===== Atmospheric pressure ===== Arc discharge: this is a high power thermal discharge of very high temperature (≈10,000 K). It can be generated using various power supplies. It is commonly used in metallurgical processes. For example, it is used to smelt minerals containing Al2O3 to produce aluminium. Corona discharge: this is a non-thermal discharge generated by the application of high voltage to sharp electrode tips. It is commonly used in ozone generators and particle precipitators. Dielectric barrier discharge (DBD): this is a non-thermal discharge generated by the application of high voltages across small gaps wherein a non-conducting coating prevents the transition of the plasma discharge into an arc. It is often mislabeled "Corona" discharge in industry and has similar application to corona discharges. A common usage of this discharge is in a plasma actuator for vehicle drag reduction. It is also widely used in the web treatment of fabrics. The application of the discharge to synthetic fabrics and plastics functionalizes the surface and allows for paints, glues and similar materials to adhere. The dielectric barrier discharge was used in the mid-1990s to show that low temperature atmospheric pressure plasma is effective in inactivating bacterial cells. This work and later experiments using mammalian cells led to the establishment of a new field of research known as plasma medicine. The dielectric barrier discharge configuration was also used in the design of low temperature plasma jets. These plasma jets are produced by fast propagating guided ionization waves known as plasma bullets. 
Capacitive discharge: this is a nonthermal plasma generated by the application of RF power (e.g., 13.56 MHz) to one powered electrode, with a grounded electrode held at a small separation distance on the order of 1 cm. Such discharges are commonly stabilized using a noble gas such as helium or argon. Piezoelectric direct discharge plasma: this is a nonthermal plasma generated at the high side of a piezoelectric transformer (PT). This generation variant is particularly suited for highly efficient and compact devices where a separate high voltage power supply is not desired. ==== MHD converters ==== A world effort was triggered in the 1960s to study magnetohydrodynamic converters in order to bring MHD power conversion to market with commercial power plants of a new kind, converting the kinetic energy of a high velocity plasma into electricity with no moving parts at a high efficiency. Research was also conducted in the field of supersonic and hypersonic aerodynamics to study plasma interaction with magnetic fields to eventually achieve passive and even active flow control around vehicles or projectiles, in order to soften and mitigate shock waves, lower thermal transfer and reduce drag. Such ionized gases used in "plasma technology" ("technological" or "engineered" plasmas) are usually weakly ionized gases in the sense that only a tiny fraction of the gas molecules are ionized. These kinds of weakly ionized gases are also nonthermal "cold" plasmas. In the presence of magnetic fields, the study of such magnetized nonthermal weakly ionized gases involves resistive magnetohydrodynamics with low magnetic Reynolds number, a challenging field of plasma physics where calculations require dyadic tensors in a 7-dimensional phase space. When used in combination with a high Hall parameter, a critical value triggers the problematic electrothermal instability which limited these technological developments. == Complex plasma phenomena == Although the underlying equations governing plasmas are relatively simple, plasma behaviour is extraordinarily varied and subtle: the emergence of unexpected behaviour from a simple model is a typical feature of a complex system. Such systems lie in some sense on the boundary between ordered and disordered behaviour and cannot typically be described either by simple, smooth, mathematical functions, or by pure randomness. The spontaneous formation of interesting spatial features on a wide range of length scales is one manifestation of plasma complexity. The features are interesting, for example, because they are very sharp, spatially intermittent (the distance between features is much larger than the features themselves), or have a fractal form. Many of these features were first studied in the laboratory, and have subsequently been recognized throughout the universe. Examples of complexity and complex structures in plasmas include: === Filamentation === Striations or string-like structures are seen in many plasmas, like the plasma ball, the aurora, lightning, electric arcs, solar flares, and supernova remnants. They are sometimes associated with larger current densities, and the interaction with the magnetic field can form a magnetic rope structure. (See also Plasma pinch) Filamentation also refers to the self-focusing of a high power laser pulse. 
At high powers, the nonlinear part of the index of refraction becomes important and causes a higher index of refraction in the center of the laser beam, where the laser is brighter than at the edges, causing a feedback that focuses the laser even more. The more tightly focused laser has a higher peak brightness (irradiance) that forms a plasma. The plasma has an index of refraction lower than one, and causes a defocusing of the laser beam. The interplay of the focusing index of refraction and the defocusing plasma leads to the formation of a long filament of plasma that can be micrometers to kilometers in length. One interesting aspect of the filamentation generated plasma is the relatively low ion density due to defocusing effects of the ionized electrons. (See also Filament propagation) === Impermeable plasma === Impermeable plasma is a type of thermal plasma which acts like an impermeable solid with respect to gas or cold plasma and can be physically pushed. Interaction of cold gas and thermal plasma was briefly studied by a group led by Hannes Alfvén in the 1960s and 1970s for its possible applications in insulation of fusion plasma from the reactor walls. However, later it was found that the external magnetic fields in this configuration could induce kink instabilities in the plasma and subsequently lead to an unexpectedly high heat loss to the walls. In 2013, a group of materials scientists reported that they had successfully generated stable impermeable plasma with no magnetic confinement using only an ultrahigh-pressure blanket of cold gas. While spectroscopic data on the characteristics of plasma were claimed to be difficult to obtain due to the high pressure, the passive effect of plasma on synthesis of different nanostructures clearly suggested the effective confinement. They also showed that upon maintaining the impermeability for a few tens of seconds, screening of ions at the plasma-gas interface could give rise to a strong secondary mode of heating (known as viscous heating) leading to different kinetics of reactions and formation of complex nanomaterials. == Gallery == == See also == == References == == External links == Plasmas: the Fourth State of Matter Archived 30 September 2019 at the Wayback Machine Introduction to Plasma Physics: Graduate course given by Richard Fitzpatrick M.I.T. Introduction by I.H. Hutchinson Plasma Material Interaction How to make a glowing ball of plasma in your microwave with a grape Archived 6 September 2005 at the Wayback Machine More (Video) OpenPIC3D – 3D Hybrid Particle-In-Cell simulation of plasma dynamics Plasma Formulary Interactive
Wikipedia/Plasma_physics
M-theory is a theory in physics that unifies all consistent versions of superstring theory. Edward Witten first conjectured the existence of such a theory at a string theory conference at the University of Southern California in 1995. Witten's announcement initiated a flurry of research activity known as the second superstring revolution. Prior to Witten's announcement, string theorists had identified five versions of superstring theory. Although these theories initially appeared to be very different, work by many physicists showed that the theories were related in intricate and nontrivial ways. Physicists found that apparently distinct theories could be unified by mathematical transformations called S-duality and T-duality. Witten's conjecture was based in part on the existence of these dualities and in part on the relationship of the string theories to a field theory called eleven-dimensional supergravity. Although a complete formulation of M-theory is not known, such a formulation should describe two- and five-dimensional objects called branes and should be approximated by eleven-dimensional supergravity at low energies. Modern attempts to formulate M-theory are typically based on matrix theory or the AdS/CFT correspondence. According to Witten, M should stand for "magic", "mystery" or "membrane" according to taste, and the true meaning of the title should be decided when a more fundamental formulation of the theory is known. Investigations of the mathematical structure of M-theory have spawned important theoretical results in physics and mathematics. More speculatively, M-theory may provide a framework for developing a unified theory of all of the fundamental forces of nature. Attempts to connect M-theory to experiment typically focus on compactifying its extra dimensions to construct candidate models of the four-dimensional world, although so far none have been verified to give rise to physics as observed in high-energy physics experiments. == Background == === Quantum gravity and strings === One of the deepest problems in modern physics is the problem of quantum gravity. The current understanding of gravity is based on Albert Einstein's general theory of relativity, which is formulated within the framework of classical physics. However, nongravitational forces are described within the framework of quantum mechanics, a radically different formalism for describing physical phenomena based on probability. A quantum theory of gravity is needed in order to reconcile general relativity with the principles of quantum mechanics, but difficulties arise when one attempts to apply the usual prescriptions of quantum theory to the force of gravity. String theory is a theoretical framework that attempts to reconcile gravity and quantum mechanics. In string theory, the point-like particles of particle physics are replaced by one-dimensional objects called strings. String theory describes how strings propagate through space and interact with each other. In a given version of string theory, there is only one kind of string, which may look like a small loop or segment of ordinary string, and it can vibrate in different ways. On distance scales larger than the string scale, a string will look just like an ordinary particle, with its mass, charge, and other properties determined by the vibrational state of the string. In this way, all of the different elementary particles may be viewed as vibrating strings. 
One of the vibrational states of a string gives rise to the graviton, a quantum mechanical particle that carries gravitational force. There are several versions of string theory: type I, type IIA, type IIB, and two flavors of heterotic string theory (SO(32) and E8×E8). The different theories allow different types of strings, and the particles that arise at low energies exhibit different symmetries. For example, the type I theory includes both open strings (which are segments with endpoints) and closed strings (which form closed loops), while types IIA and IIB include only closed strings. Each of these five string theories arises as a special limiting case of M-theory. This theory, like its string theory predecessors, is an example of a quantum theory of gravity. It describes a force just like the familiar gravitational force subject to the rules of quantum mechanics. === Number of dimensions === In everyday life, there are three familiar dimensions of space: height, width and depth. Einstein's general theory of relativity treats time as a dimension on par with the three spatial dimensions; in general relativity, space and time are not modeled as separate entities but are instead unified to a four-dimensional spacetime, three spatial dimensions and one time dimension. In this framework, the phenomenon of gravity is viewed as a consequence of the geometry of spacetime. In spite of the fact that the universe is well described by four-dimensional spacetime, there are several reasons why physicists consider theories in other dimensions. In some cases, by modeling spacetime in a different number of dimensions, a theory becomes more mathematically tractable, and one can perform calculations and gain general insights more easily. There are also situations where theories in two or three spacetime dimensions are useful for describing phenomena in condensed matter physics. Finally, there exist scenarios in which there could actually be more than four dimensions of spacetime which have nonetheless managed to escape detection. One notable feature of string theory and M-theory is that these theories require extra dimensions of spacetime for their mathematical consistency. In string theory, spacetime is ten-dimensional (nine spatial dimensions, and one time dimension), while in M-theory it is eleven-dimensional (ten spatial dimensions, and one time dimension). In order to describe real physical phenomena using these theories, one must therefore imagine scenarios in which these extra dimensions would not be observed in experiments. Compactification is one way of modifying the number of dimensions in a physical theory. In compactification, some of the extra dimensions are assumed to "close up" on themselves to form circles. In the limit where these curled-up dimensions become very small, one obtains a theory in which spacetime has effectively a lower number of dimensions. A standard analogy for this is to consider a multidimensional object such as a garden hose. If the hose is viewed from a sufficient distance, it appears to have only one dimension, its length. However, as one approaches the hose, one discovers that it contains a second dimension, its circumference. Thus, an ant crawling on the surface of the hose would move in two dimensions. === Dualities === Theories that arise as different limits of M-theory turn out to be related in highly nontrivial ways. One of the relationships that can exist between these different physical theories is called S-duality. 
This is a relationship which says that a collection of strongly interacting particles in one theory can, in some cases, be viewed as a collection of weakly interacting particles in a completely different theory. Roughly speaking, a collection of particles is said to be strongly interacting if they combine and decay often and weakly interacting if they do so infrequently. Type I string theory turns out to be equivalent by S-duality to the SO(32) heterotic string theory. Similarly, type IIB string theory is related to itself in a nontrivial way by S-duality. Another relationship between different string theories is T-duality. Here one considers strings propagating around a circular extra dimension. T-duality states that a string propagating around a circle of radius R is equivalent to a string propagating around a circle of radius 1/R in the sense that all observable quantities in one description are identified with quantities in the dual description. For example, a string has momentum as it propagates around a circle, and it can also wind around the circle one or more times. The number of times the string winds around a circle is called the winding number. If a string has momentum p and winding number n in one description, it will have momentum n and winding number p in the dual description. For example, type IIA string theory is equivalent to type IIB string theory via T-duality, and the two versions of heterotic string theory are also related by T-duality. In general, the term duality refers to a situation where two seemingly different physical systems turn out to be equivalent in a nontrivial way. If two theories are related by a duality, it means that one theory can be transformed in some way so that it ends up looking just like the other theory. The two theories are then said to be dual to one another under the transformation. Put differently, the two theories are mathematically different descriptions of the same phenomena. === Supersymmetry === Another important theoretical idea that plays a role in M-theory is supersymmetry. This is a mathematical relation that exists in certain physical theories between a class of particles called bosons and a class of particles called fermions. Roughly speaking, fermions are the constituents of matter, while bosons mediate interactions between particles. In theories with supersymmetry, each boson has a counterpart which is a fermion, and vice versa. When supersymmetry is imposed as a local symmetry, one automatically obtains a quantum mechanical theory that includes gravity. Such a theory is called a supergravity theory. A theory of strings that incorporates the idea of supersymmetry is called a superstring theory. There are several different versions of superstring theory which are all subsumed within the M-theory framework. At low energies, superstring theories are approximated by one of the three supergravities in ten dimensions, known as type I, type IIA, and type IIB supergravity. Similarly, M-theory is approximated at low energies by supergravity in eleven dimensions. === Branes === In string theory and related theories such as supergravity theories, a brane is a physical object that generalizes the notion of a point particle to higher dimensions. For example, a point particle can be viewed as a brane of dimension zero, while a string can be viewed as a brane of dimension one. It is also possible to consider higher-dimensional branes. In dimension p, these are called p-branes. 
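The exchange of momentum and winding number described in the T-duality paragraph above can be made concrete using the standard closed-string mass formula from introductory string theory texts, which is not quoted in this article: with the string scale set to one and oscillator contributions ignored, the momentum and winding modes contribute (n/R)^2 + (wR)^2 to the mass squared. A minimal Python check that swapping n and w while sending R to 1/R leaves this quantity unchanged:

# Contribution of momentum (n) and winding (w) modes to the closed-string mass
# squared, with the string scale set to one and oscillator terms ignored
# (standard textbook form, used here only to illustrate T-duality).
def mass_squared(n, w, R):
    return (n / R) ** 2 + (w * R) ** 2

n, w, R = 3, 5, 2.0
original = mass_squared(n, w, R)
dual = mass_squared(w, n, 1.0 / R)   # swap momentum and winding, send R -> 1/R
print(original, dual)                # both equal 102.25: the spectra agree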
Branes are dynamical objects which can propagate through spacetime according to the rules of quantum mechanics. They can have mass and other attributes such as charge. A p-brane sweeps out a (p + 1)-dimensional volume in spacetime called its worldvolume. Physicists often study fields analogous to the electromagnetic field which live on the worldvolume of a brane. The word brane comes from the word "membrane" which refers to a two-dimensional brane. In string theory, the fundamental objects that give rise to elementary particles are the one-dimensional strings. Although the physical phenomena described by M-theory are still poorly understood, physicists know that the theory describes two- and five-dimensional branes. Much of the current research in M-theory attempts to better understand the properties of these branes. == History and development == === Kaluza–Klein theory === In the early 20th century, physicists and mathematicians including Albert Einstein and Hermann Minkowski pioneered the use of four-dimensional geometry for describing the physical world. These efforts culminated in the formulation of Einstein's general theory of relativity, which relates gravity to the geometry of four-dimensional spacetime. The success of general relativity led to efforts to apply higher dimensional geometry to explain other forces. In 1919, work by Theodor Kaluza showed that by passing to five-dimensional spacetime, one can unify gravity and electromagnetism into a single force. This idea was improved by physicist Oskar Klein, who suggested that the additional dimension proposed by Kaluza could take the form of a circle with a radius of around 10^−30 cm. The Kaluza–Klein theory and subsequent attempts by Einstein to develop a unified field theory were never completely successful. In part this was because Kaluza–Klein theory predicted a particle (the radion) that has never been shown to exist, and in part because it was unable to correctly predict the ratio of an electron's mass to its charge. In addition, these theories were being developed just as other physicists were beginning to discover quantum mechanics, which would ultimately prove successful in describing known forces such as electromagnetism, as well as new nuclear forces that were being discovered throughout the middle part of the century. Thus it would take almost fifty years for the idea of new dimensions to be taken seriously again. === Early work on supergravity === New concepts and mathematical tools provided fresh insights into general relativity, giving rise to a period in the 1960s–1970s now known as the golden age of general relativity. In the mid-1970s, physicists began studying higher-dimensional theories combining general relativity with supersymmetry, the so-called supergravity theories. General relativity does not place any limits on the possible dimensions of spacetime. Although the theory is typically formulated in four dimensions, one can write down the same equations for the gravitational field in any number of dimensions. Supergravity is more restrictive because it places an upper limit on the number of dimensions. In 1978, work by Werner Nahm showed that the maximum spacetime dimension in which one can formulate a consistent supersymmetric theory is eleven. In the same year, Eugène Cremmer, Bernard Julia, and Joël Scherk of the École Normale Supérieure showed that supergravity not only permits up to eleven dimensions but is in fact most elegant in this maximal number of dimensions. 
Initially, many physicists hoped that by compactifying eleven-dimensional supergravity, it might be possible to construct realistic models of our four-dimensional world. The hope was that such models would provide a unified description of the four fundamental forces of nature: electromagnetism, the strong and weak nuclear forces, and gravity. Interest in eleven-dimensional supergravity soon waned as various flaws in this scheme were discovered. One of the problems was that the laws of physics appear to distinguish between clockwise and counterclockwise, a phenomenon known as chirality. Edward Witten and others observed that this chirality property cannot be readily derived by compactifying from eleven dimensions. In the first superstring revolution in 1984, many physicists turned to string theory as a unified theory of particle physics and quantum gravity. Unlike supergravity theory, string theory was able to accommodate the chirality of the standard model, and it provided a theory of gravity consistent with quantum effects. Another feature of string theory that many physicists were drawn to in the 1980s and 1990s was its high degree of uniqueness. In ordinary particle theories, one can consider any collection of elementary particles whose classical behavior is described by an arbitrary Lagrangian. In string theory, the possibilities are much more constrained: by the 1990s, physicists had argued that there were only five consistent supersymmetric versions of the theory. === Relationships between string theories === Although there were only a handful of consistent superstring theories, it remained a mystery why there was not just one consistent formulation. However, as physicists began to examine string theory more closely, they realized that these theories are related in intricate and nontrivial ways. In the late 1970s, Claus Montonen and David Olive had conjectured a special property of certain physical theories. A sharpened version of their conjecture concerns a theory called N = 4 supersymmetric Yang–Mills theory, which describes theoretical particles formally similar to the quarks and gluons that make up atomic nuclei. The strength with which the particles of this theory interact is measured by a number called the coupling constant. The result of Montonen and Olive, now known as Montonen–Olive duality, states that N = 4 supersymmetric Yang–Mills theory with coupling constant g is equivalent to the same theory with coupling constant 1/g. In other words, a system of strongly interacting particles (large coupling constant) has an equivalent description as a system of weakly interacting particles (small coupling constant) and vice versa. In the 1990s, several theorists generalized Montonen–Olive duality to the S-duality relationship, which connects different string theories. Ashoke Sen studied S-duality in the context of heterotic strings in four dimensions. Chris Hull and Paul Townsend showed that type IIB string theory with a large coupling constant is equivalent via S-duality to the same theory with small coupling constant. Theorists also found that different string theories may be related by T-duality. This duality implies that strings propagating on completely different spacetime geometries may be physically equivalent. === Membranes and fivebranes === String theory extends ordinary particle physics by replacing zero-dimensional point particles by one-dimensional objects called strings. 
In the late 1980s, it was natural for theorists to attempt to formulate other extensions in which particles are replaced by two-dimensional supermembranes or by higher-dimensional objects called branes. Such objects had been considered as early as 1962 by Paul Dirac, and they were reconsidered by a small but enthusiastic group of physicists in the 1980s. Supersymmetry severely restricts the possible number of dimensions of a brane. In 1987, Eric Bergshoeff, Ergin Sezgin, and Paul Townsend showed that eleven-dimensional supergravity includes two-dimensional branes. Intuitively, these objects look like sheets or membranes propagating through the eleven-dimensional spacetime. Shortly after this discovery, Michael Duff, Paul Howe, Takeo Inami, and Kellogg Stelle considered a particular compactification of eleven-dimensional supergravity with one of the dimensions curled up into a circle. In this setting, one can imagine the membrane wrapping around the circular dimension. If the radius of the circle is sufficiently small, then this membrane looks just like a string in ten-dimensional spacetime. In fact, Duff and his collaborators showed that this construction reproduces exactly the strings appearing in type IIA superstring theory. In 1990, Andrew Strominger published a similar result which suggested that strongly interacting strings in ten dimensions might have an equivalent description in terms of weakly interacting five-dimensional branes. Initially, physicists were unable to prove this relationship for two important reasons. On the one hand, the Montonen–Olive duality was still unproven, and so Strominger's conjecture was even more tenuous. On the other hand, there were many technical issues related to the quantum properties of five-dimensional branes. The first of these problems was solved in 1993 when Ashoke Sen established that certain physical theories require the existence of objects with both electric and magnetic charge which were predicted by the work of Montonen and Olive. In spite of this progress, the relationship between strings and five-dimensional branes remained conjectural because theorists were unable to quantize the branes. Starting in 1991, a team of researchers including Michael Duff, Ramzi Khuri, Jianxin Lu, and Ruben Minasian considered a special compactification of string theory in which four of the ten dimensions curl up. If one considers a five-dimensional brane wrapped around these extra dimensions, then the brane looks just like a one-dimensional string. In this way, the conjectured relationship between strings and branes was reduced to a relationship between strings and strings, and the latter could be tested using already established theoretical techniques. === Second superstring revolution === Speaking at Strings '95 at the University of Southern California in 1995, Edward Witten of the Institute for Advanced Study made the surprising suggestion that all five superstring theories were in fact just different limiting cases of a single theory in eleven spacetime dimensions. Witten's announcement drew together all of the previous results on S- and T-duality and the appearance of two- and five-dimensional branes in string theory. In the months following Witten's announcement, hundreds of new papers appeared on the Internet confirming that the new theory involved membranes in an important way. Today this flurry of work is known as the second superstring revolution. 
One of the important developments following Witten's announcement was Witten's work in 1996 with string theorist Petr Hořava. Witten and Hořava studied M-theory on a special spacetime geometry with two ten-dimensional boundary components. Their work shed light on the mathematical structure of M-theory and suggested possible ways of connecting M-theory to real world physics. === Origin of the term === Initially, some physicists suggested that the new theory was a fundamental theory of membranes, but Witten was skeptical of the role of membranes in the theory. In a paper from 1996, Hořava and Witten wrote As it has been proposed that the eleven-dimensional theory is a supermembrane theory but there are some reasons to doubt that interpretation, we will non-committally call it the M-theory, leaving to the future the relation of M to membranes. In the absence of an understanding of the true meaning and structure of M-theory, Witten has suggested that the M should stand for "magic", "mystery", or "membrane" according to taste, and the true meaning of the title should be decided when a more fundamental formulation of the theory is known. Years later, he would state, "I thought my colleagues would understand that it really stood for membrane. Unfortunately, it got people confused." == Matrix theory == === BFSS matrix model === In mathematics, a matrix is a rectangular array of numbers or other data. In physics, a matrix model is a particular kind of physical theory whose mathematical formulation involves the notion of a matrix in an important way. A matrix model describes the behavior of a set of matrices within the framework of quantum mechanics. One important example of a matrix model is the BFSS matrix model proposed by Tom Banks, Willy Fischler, Stephen Shenker, and Leonard Susskind in 1997. This theory describes the behavior of a set of nine large matrices. In their original paper, these authors showed, among other things, that the low energy limit of this matrix model is described by eleven-dimensional supergravity. These calculations led them to propose that the BFSS matrix model is exactly equivalent to M-theory. The BFSS matrix model can therefore be used as a prototype for a correct formulation of M-theory and a tool for investigating the properties of M-theory in a relatively simple setting. === Noncommutative geometry === In geometry, it is often useful to introduce coordinates. For example, in order to study the geometry of the Euclidean plane, one defines the coordinates x and y as the distances between any point in the plane and a pair of axes. In ordinary geometry, the coordinates of a point are numbers, so they can be multiplied, and the product of two coordinates does not depend on the order of multiplication. That is, xy = yx. This property of multiplication is known as the commutative law, and this relationship between geometry and the commutative algebra of coordinates is the starting point for much of modern geometry. Noncommutative geometry is a branch of mathematics that attempts to generalize this situation. Rather than working with ordinary numbers, one considers some similar objects, such as matrices, whose multiplication does not satisfy the commutative law (that is, objects for which xy is not necessarily equal to yx). One imagines that these noncommuting objects are coordinates on some more general notion of "space" and proves theorems about these generalized spaces by exploiting the analogy with ordinary geometry. In a paper from 1998, Alain Connes, Michael R. 
Douglas, and Albert Schwarz showed that some aspects of matrix models and M-theory are described by a noncommutative quantum field theory, a special kind of physical theory in which the coordinates on spacetime do not satisfy the commutativity property. This established a link between matrix models and M-theory on the one hand, and noncommutative geometry on the other hand. It quickly led to the discovery of other important links between noncommutative geometry and various physical theories. == AdS/CFT correspondence == === Overview === The application of quantum mechanics to physical objects such as the electromagnetic field, which are extended in space and time, is known as quantum field theory. In particle physics, quantum field theories form the basis for our understanding of elementary particles, which are modeled as excitations in the fundamental fields. Quantum field theories are also used throughout condensed matter physics to model particle-like objects called quasiparticles. One approach to formulating M-theory and studying its properties is provided by the anti-de Sitter/conformal field theory (AdS/CFT) correspondence. Proposed by Juan Maldacena in late 1997, the AdS/CFT correspondence is a theoretical result which implies that M-theory is in some cases equivalent to a quantum field theory. In addition to providing insights into the mathematical structure of string and M-theory, the AdS/CFT correspondence has shed light on many aspects of quantum field theory in regimes where traditional calculational techniques are ineffective. In the AdS/CFT correspondence, the geometry of spacetime is described in terms of a certain vacuum solution of Einstein's equation called anti-de Sitter space. In very elementary terms, anti-de Sitter space is a mathematical model of spacetime in which the notion of distance between points (the metric) is different from the notion of distance in ordinary Euclidean geometry. It is closely related to hyperbolic space, which can be viewed as a disk as illustrated on the left. This image shows a tessellation of a disk by triangles and squares. One can define the distance between points of this disk in such a way that all the triangles and squares are the same size and the circular outer boundary is infinitely far from any point in the interior. Now imagine a stack of hyperbolic disks where each disk represents the state of the universe at a given time. The resulting geometric object is three-dimensional anti-de Sitter space. It looks like a solid cylinder in which any cross section is a copy of the hyperbolic disk. Time runs along the vertical direction in this picture. The surface of this cylinder plays an important role in the AdS/CFT correspondence. As with the hyperbolic plane, anti-de Sitter space is curved in such a way that any point in the interior is actually infinitely far from this boundary surface. This construction describes a hypothetical universe with only two space dimensions and one time dimension, but it can be generalized to any number of dimensions. Indeed, hyperbolic space can have more than two dimensions and one can "stack up" copies of hyperbolic space to get higher-dimensional models of anti-de Sitter space. An important feature of anti-de Sitter space is its boundary (which looks like a cylinder in the case of three-dimensional anti-de Sitter space). 
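The statement above that the circular outer boundary of the hyperbolic disk is infinitely far from every interior point can be illustrated with a short calculation. The sketch below is illustrative only and uses the Poincaré disk model of the hyperbolic plane, a standard concrete realization of the disk described above rather than anything specified in the text: the hyperbolic distance from the centre to a point at Euclidean radius r grows without bound as r approaches the boundary at r = 1.

```python
import math

def poincare_distance(z1, z2):
    """Hyperbolic distance between two points of the open unit disk (Poincare disk model)."""
    u = abs(z1 - z2) / abs(1 - z1.conjugate() * z2)
    return 2 * math.atanh(u)

# Euclidean radius -> hyperbolic distance from the centre.
# The distance diverges as r -> 1, so the rim of the disk is "infinitely far away".
for r in [0.5, 0.9, 0.99, 0.999, 0.99999]:
    print(f"r = {r:<8}  hyperbolic distance from centre = {poincare_distance(0j, complex(r, 0)):.2f}")
```

Stacking such disks along a time direction, as described above, produces the three-dimensional anti-de Sitter geometry whose boundary cylinder plays the central role in the correspondence.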
One property of this boundary is that, within a small region on the surface around any given point, it looks just like Minkowski space, the model of spacetime used in nongravitational physics. One can therefore consider an auxiliary theory in which "spacetime" is given by the boundary of anti-de Sitter space. This observation is the starting point for AdS/CFT correspondence, which states that the boundary of anti-de Sitter space can be regarded as the "spacetime" for a quantum field theory. The claim is that this quantum field theory is equivalent to the gravitational theory on the bulk anti-de Sitter space in the sense that there is a "dictionary" for translating entities and calculations in one theory into their counterparts in the other theory. For example, a single particle in the gravitational theory might correspond to some collection of particles in the boundary theory. In addition, the predictions in the two theories are quantitatively identical so that if two particles have a 40 percent chance of colliding in the gravitational theory, then the corresponding collections in the boundary theory would also have a 40 percent chance of colliding. === 6D (2,0) superconformal field theory === One particular realization of the AdS/CFT correspondence states that M-theory on the product space AdS7×S4 is equivalent to the so-called (2,0)-theory on the six-dimensional boundary. Here "(2,0)" refers to the particular type of supersymmetry that appears in the theory. In this example, the spacetime of the gravitational theory is effectively seven-dimensional (hence the notation AdS7), and there are four additional "compact" dimensions (encoded by the S4 factor). In the real world, spacetime is four-dimensional, at least macroscopically, so this version of the correspondence does not provide a realistic model of gravity. Likewise, the dual theory is not a viable model of any real-world system since it describes a world with six spacetime dimensions. Nevertheless, the (2,0)-theory has proven to be important for studying the general properties of quantum field theories. Indeed, this theory subsumes many mathematically interesting effective quantum field theories and points to new dualities relating these theories. For example, Luis Alday, Davide Gaiotto, and Yuji Tachikawa showed that by compactifying this theory on a surface, one obtains a four-dimensional quantum field theory, and there is a duality known as the AGT correspondence which relates the physics of this theory to certain physical concepts associated with the surface itself. More recently, theorists have extended these ideas to study the theories obtained by compactifying down to three dimensions. In addition to its applications in quantum field theory, the (2,0)-theory has spawned important results in pure mathematics. For example, the existence of the (2,0)-theory was used by Witten to give a "physical" explanation for a conjectural relationship in mathematics called the geometric Langlands correspondence. In subsequent work, Witten showed that the (2,0)-theory could be used to understand a concept in mathematics called Khovanov homology. Developed by Mikhail Khovanov around 2000, Khovanov homology provides a tool in knot theory, the branch of mathematics that studies and classifies the different shapes of knots. Another application of the (2,0)-theory in mathematics is the work of Davide Gaiotto, Greg Moore, and Andrew Neitzke, which used physical ideas to derive new results in hyperkähler geometry. 
=== ABJM superconformal field theory === Another realization of the AdS/CFT correspondence states that M-theory on AdS4×S7 is equivalent to a quantum field theory called the ABJM theory in three dimensions. In this version of the correspondence, seven of the dimensions of M-theory are curled up, leaving four non-compact dimensions. Since the spacetime of our universe is four-dimensional, this version of the correspondence provides a somewhat more realistic description of gravity. The ABJM theory appearing in this version of the correspondence is also interesting for a variety of reasons. Introduced by Aharony, Bergman, Jafferis, and Maldacena, it is closely related to another quantum field theory called Chern–Simons theory. The latter theory was popularized by Witten in the late 1980s because of its applications to knot theory. In addition, the ABJM theory serves as a semi-realistic simplified model for solving problems that arise in condensed matter physics. == Phenomenology == === Overview === In addition to being an idea of considerable theoretical interest, M-theory provides a framework for constructing models of real world physics that combine general relativity with the standard model of particle physics. Phenomenology is the branch of theoretical physics in which physicists construct realistic models of nature from more abstract theoretical ideas. String phenomenology is the part of string theory that attempts to construct realistic models of particle physics based on string and M-theory. Typically, such models are based on the idea of compactification. Starting with the ten- or eleven-dimensional spacetime of string or M-theory, physicists postulate a shape for the extra dimensions. By choosing this shape appropriately, they can construct models roughly similar to the standard model of particle physics, together with additional undiscovered particles, usually supersymmetric partners to analogues of known particles. One popular way of deriving realistic physics from string theory is to start with the heterotic theory in ten dimensions and assume that the six extra dimensions of spacetime are shaped like a six-dimensional Calabi–Yau manifold. This is a special kind of geometric object named after mathematicians Eugenio Calabi and Shing-Tung Yau. Calabi–Yau manifolds offer many ways of extracting realistic physics from string theory. Other similar methods can be used to construct models with physics resembling to some extent that of our four-dimensional world based on M-theory. Partly because of theoretical and mathematical difficulties and partly because of the extremely high energies (beyond what is technologically possible for the foreseeable future) needed to test these theories experimentally, there is so far no experimental evidence that would unambiguously point to any of these models being a correct fundamental description of nature. This has led some in the community to criticize these approaches to unification and question the value of continued research on these problems. === Compactification on G2 manifolds === In one approach to M-theory phenomenology, theorists assume that the seven extra dimensions of M-theory are shaped like a G2 manifold. This is a special kind of seven-dimensional shape constructed by mathematician Dominic Joyce of the University of Oxford. These G2 manifolds are still poorly understood mathematically, and this fact has made it difficult for physicists to fully develop this approach to phenomenology. 
For example, physicists and mathematicians often assume that space has a mathematical property called smoothness, but this property cannot be assumed in the case of a G2 manifold if one wishes to recover the physics of our four-dimensional world. Another problem is that G2 manifolds are not complex manifolds, so theorists are unable to use tools from the branch of mathematics known as complex analysis. Finally, there are many open questions about the existence, uniqueness, and other mathematical properties of G2 manifolds, and mathematicians lack a systematic way of searching for these manifolds. === Heterotic M-theory === Because of the difficulties with G2 manifolds, most attempts to construct realistic theories of physics based on M-theory have taken a more indirect approach to compactifying eleven-dimensional spacetime. One approach, pioneered by Witten, Hořava, Burt Ovrut, and others, is known as heterotic M-theory. In this approach, one imagines that one of the eleven dimensions of M-theory is shaped like a circle. If this circle is very small, then the spacetime becomes effectively ten-dimensional. One then assumes that six of the ten dimensions form a Calabi–Yau manifold. If this Calabi–Yau manifold is also taken to be small, one is left with a theory in four dimensions. Heterotic M-theory has been used to construct models of brane cosmology in which the observable universe is thought to exist on a brane in a higher dimensional ambient space. It has also spawned alternative theories of the early universe that do not rely on the theory of cosmic inflation. == References == === Notes === === Citations === === Bibliography === == Popularization == BBC Horizon: "Parallel Universes" – 2002 feature documentary by BBC Horizon, episode "Parallel Universes" focuses on the history and emergence of M-theory, and scientists involved. PBS.org-NOVA: The Elegant Universe – 2003 Emmy Award-winning, three-hour miniseries by Nova with Brian Greene, adapted from his The Elegant Universe book (original PBS broadcast dates: October 28, 8–10 p.m. and November 4, 8–9 p.m., 2003) == See also == F-theory Multiverse == External links == Superstringtheory.com – The "Official String Theory Web Site", created by Patricia Schwarz. References on string theory and M-theory for the layperson and expert. Not Even Wrong – Peter Woit's blog on physics in general, and string theory in particular. M-Theory – Edward Witten (1995) – Witten's 1995 lecture introducing M-Theory.
Wikipedia/M-theory
In physics, action is a scalar quantity that describes how the balance of kinetic versus potential energy of a physical system changes with trajectory. Action is significant because it is an input to the principle of stationary action, an approach to classical mechanics that is simpler for multiple objects. Action and the variational principle are used in Feynman's formulation of quantum mechanics and in general relativity. For systems with small values of action close to the Planck constant, quantum effects are significant. In the simple case of a single particle moving with a constant velocity (thereby undergoing uniform linear motion), the action is the momentum of the particle times the distance it moves, added up along its path; equivalently, action is the difference between the particle's kinetic energy and its potential energy, times the duration for which it has that amount of energy. More formally, action is a mathematical functional which takes the trajectory (also called path or history) of the system as its argument and has a real number as its result. Generally, the action takes different values for different paths. Action has dimensions of energy × time or momentum × length, and its SI unit is joule-second (like the Planck constant h). == Introduction == Introductory physics often begins with Newton's laws of motion, relating force and motion; action is part of a completely equivalent alternative approach with practical and educational advantages. However, the concept took many decades to supplant Newtonian approaches and remains a challenge to introduce to students. === Simple example === For a trajectory of a ball moving in the air on Earth the action is defined between two points in time, $t_1$ and $t_2$, as the kinetic energy (KE) minus the potential energy (PE), integrated over time: $S = \int_{t_1}^{t_2} \left(KE(t) - PE(t)\right)\,dt$. The action balances kinetic against potential energy. The kinetic energy of a ball of mass $m$ is $(1/2)mv^2$, where $v$ is the velocity of the ball; the potential energy is $mgx$, where $g$ is the acceleration due to gravity. Then the action between $t_1$ and $t_2$ is $S = \int_{t_1}^{t_2} \left(\tfrac{1}{2}mv^2(t) - mgx(t)\right)\,dt$. The action value depends upon the trajectory taken by the ball through $x(t)$ and $v(t)$. This makes the action an input to the powerful stationary-action principle for classical and for quantum mechanics. Newton's equations of motion for the ball can be derived from the action using the stationary-action principle, but the advantages of action-based mechanics only begin to appear in cases where Newton's laws are difficult to apply. Replace the ball with an electron: classical mechanics fails but stationary action continues to work. The energy difference in the simple action definition, kinetic minus potential energy, is generalized and called the Lagrangian for more complex cases. === Planck's quantum of action === The Planck constant, written as $h$, is the quantum of action. The quantum of angular momentum is $\hbar = h/(2\pi)$. These constants have units of energy times time. 
They appear in all significant quantum equations, like the uncertainty principle and the de Broglie wavelength. Whenever the value of the action approaches the Planck constant, quantum effects are significant. == History == Pierre Louis Maupertuis and Leonhard Euler working in the 1740s developed early versions of the action principle. Joseph Louis Lagrange clarified the mathematics when he invented the calculus of variations. William Rowan Hamilton made the next big breakthrough, formulating Hamilton's principle in 1853.: 740  Hamilton's principle became the cornerstone for classical work with different forms of action until Richard Feynman and Julian Schwinger developed quantum action principles.: 127  == Definitions == Expressed in mathematical language, using the calculus of variations, the evolution of a physical system (i.e., how the system actually progresses from one state to another) corresponds to a stationary point (usually, a minimum) of the action. Action has the dimensions of [energy] × [time], and its SI unit is joule-second, which is identical to the unit of angular momentum. Several different definitions of "the action" are in common use in physics. The action is usually an integral over time. However, when the action pertains to fields, it may be integrated over spatial variables as well. In some cases, the action is integrated along the path followed by the physical system. The action is typically represented as an integral over time, taken along the path of the system between the initial time and the final time of the development of the system: $\mathcal{S} = \int_{t_1}^{t_2} L\,dt$, where the integrand L is called the Lagrangian. For the action integral to be well-defined, the trajectory has to be bounded in time and space. === Action (functional) === Most commonly, the term is used for a functional $\mathcal{S}$ which takes a function of time and (for fields) space as input and returns a scalar. In classical mechanics, the input function is the evolution q(t) of the system between two times $t_1$ and $t_2$, where q represents the generalized coordinates. The action $\mathcal{S}[\mathbf{q}(t)]$ is defined as the integral of the Lagrangian L for an input evolution between the two times: $\mathcal{S}[\mathbf{q}(t)] = \int_{t_1}^{t_2} L(\mathbf{q}(t), \dot{\mathbf{q}}(t), t)\,dt$, where the endpoints of the evolution are fixed and defined as $\mathbf{q}_1 = \mathbf{q}(t_1)$ and $\mathbf{q}_2 = \mathbf{q}(t_2)$. According to Hamilton's principle, the true evolution $\mathbf{q}_{\text{true}}(t)$ is an evolution for which the action $\mathcal{S}[\mathbf{q}(t)]$ is stationary (a minimum, maximum, or a saddle point). This principle results in the equations of motion in Lagrangian mechanics. === Abbreviated action (functional) === In addition to the action functional, there is another functional called the abbreviated action. In the abbreviated action, the input function is the path followed by the physical system without regard to its parameterization by time. For example, the path of a planetary orbit is an ellipse, and the path of a particle in a uniform gravitational field is a parabola; in both cases, the path does not depend on how fast the particle traverses the path. 
The abbreviated action $\mathcal{S}_0$ (sometimes written as $W$) is defined as the integral of the generalized momenta, $p_i = \frac{\partial L(q,t)}{\partial \dot{q}_i}$, for a system Lagrangian $L$ along a path in the generalized coordinates $q_i$: $\mathcal{S}_0 = \int_{q_1}^{q_2} \mathbf{p} \cdot d\mathbf{q} = \int_{q_1}^{q_2} \sum_i p_i\,dq_i$, where $q_1$ and $q_2$ are the starting and ending coordinates. According to Maupertuis's principle, the true path of the system is a path for which the abbreviated action is stationary. === Hamilton's characteristic function === When the total energy E is conserved, the Hamilton–Jacobi equation can be solved with the additive separation of variables:: 225  $S(q_1, \dots, q_N, t) = W(q_1, \dots, q_N) - E \cdot t$, where the time-independent function $W(q_1, q_2, \dots, q_N)$ is called Hamilton's characteristic function. The physical significance of this function is understood by taking its total time derivative $\frac{dW}{dt} = \frac{\partial W}{\partial q_i}\dot{q}_i = p_i\dot{q}_i$. This can be integrated to give $W(q_1, \dots, q_N) = \int p_i\dot{q}_i\,dt = \int p_i\,dq_i$, which is just the abbreviated action.: 434  === Action of a generalized coordinate === A variable $J_k$ in the action-angle coordinates, called the "action" of the generalized coordinate $q_k$, is defined by integrating a single generalized momentum around a closed path in phase space, corresponding to rotating or oscillating motion:: 454  $J_k = \oint p_k\,dq_k$. The corresponding canonical variable conjugate to $J_k$ is its "angle" $w_k$, for reasons described more fully under action-angle coordinates. The integration is only over a single variable $q_k$ and is therefore different from the integrated dot product in the abbreviated action integral above. The $J_k$ variable equals the change in $S_k(q_k)$ as $q_k$ is varied around the closed path. For several physical systems of interest, $J_k$ is either a constant or varies very slowly; hence, the variable $J_k$ is often used in perturbation calculations and in determining adiabatic invariants. For example, they are used in the calculation of planetary and satellite orbits.: 477  === Single relativistic particle === When relativistic effects are significant, the action of a point particle of mass m travelling a world line C parametrized by the proper time $\tau$ is $S = -mc^2 \int_C d\tau$. If instead, the particle is parametrized by the coordinate time t of the particle and the coordinate time ranges from $t_1$ to $t_2$, then the action becomes $S = \int_{t_1}^{t_2} L\,dt$, where the Lagrangian is $L = -mc^2 \sqrt{1 - \frac{v^2}{c^2}}$. == Action principles and related ideas == Physical laws are frequently expressed as differential equations, which describe how physical quantities such as position and momentum change continuously with time, space or a generalization thereof. 
Given the initial and boundary conditions for the situation, the "solution" to these empirical equations is one or more functions that describe the behavior of the system and are called equations of motion. Action is a part of an alternative approach to finding such equations of motion. Classical mechanics postulates that the path actually followed by a physical system is that for which the action is minimized, or more generally, is stationary. In other words, the action satisfies a variational principle: the principle of stationary action (see also below). The action is defined by an integral, and the classical equations of motion of a system can be derived by minimizing the value of that integral. The action principle provides deep insights into physics, and is an important concept in modern theoretical physics. Various action principles and related concepts are summarized below. === Maupertuis's principle === In classical mechanics, Maupertuis's principle (named after Pierre Louis Maupertuis) states that the path followed by a physical system is the one of least length (with a suitable interpretation of path and length). Maupertuis's principle uses the abbreviated action between two generalized points on a path. === Hamilton's principle === Hamilton's principle states that the differential equations of motion for any physical system can be re-formulated as an equivalent integral equation. Thus, there are two distinct approaches for formulating dynamical models. Hamilton's principle applies not only to the classical mechanics of a single particle, but also to classical fields such as the electromagnetic and gravitational fields. Hamilton's principle has also been extended to quantum mechanics and quantum field theory—in particular the path integral formulation of quantum mechanics makes use of the concept—where a physical system explores all possible paths, with the phase of the probability amplitude for each path being determined by the action for the path; the final probability amplitude adds all paths using their complex amplitude and phase. === Hamilton–Jacobi equation === Hamilton's principal function $S = S(q, t; q_0, t_0)$ is obtained from the action functional $\mathcal{S}$ by fixing the initial time $t_0$ and the initial endpoint $q_0$, while allowing the upper time limit $t$ and the second endpoint $q$ to vary. Hamilton's principal function satisfies the Hamilton–Jacobi equation, a formulation of classical mechanics. Due to a similarity with the Schrödinger equation, the Hamilton–Jacobi equation provides, arguably, the most direct link with quantum mechanics. === Euler–Lagrange equations === In Lagrangian mechanics, the requirement that the action integral be stationary under small perturbations is equivalent to a set of differential equations (called the Euler–Lagrange equations) that may be obtained using the calculus of variations. === Classical fields === The action principle can be extended to obtain the equations of motion for fields, such as the electromagnetic field or gravitational field. Maxwell's equations can be derived as conditions of stationary action. The Einstein equation utilizes the Einstein–Hilbert action as constrained by a variational principle. The trajectory (path in spacetime) of a body in a gravitational field can be found using the action principle. For a free falling body, this trajectory is a geodesic. 
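As a minimal numerical illustration of the stationary-action idea summarized above, the sketch below evaluates the action of the earlier ball example, S = ∫((1/2)mv² − mgx)dt, for the true free-fall trajectory and for nearby trial paths sharing the same endpoints. The numbers (mass, gravitational acceleration, endpoints, the shape of the trial deformation) are arbitrary assumed values chosen for illustration; the point is only that the true path yields the smallest action within this family of trial paths.

```python
import numpy as np

# Assumed illustrative values, not taken from the text
m, g = 1.0, 9.81          # mass [kg], gravitational acceleration [m/s^2]
t1, t2 = 0.0, 1.0         # start and end times [s]
x1, x2 = 0.0, 0.0         # fixed endpoints: released and caught at the same height [m]

t = np.linspace(t1, t2, 20001)

def action(x):
    """Numerically evaluate S = integral of (1/2) m v^2 - m g x along the trajectory x(t)."""
    v = np.gradient(x, t)
    lagrangian = 0.5 * m * v**2 - m * g * x
    return np.sum((lagrangian[1:] + lagrangian[:-1]) * np.diff(t)) / 2  # trapezoidal rule

# True trajectory: uniform gravity, initial velocity chosen so that x(t2) = x2
v0 = (x2 - x1 + 0.5 * g * (t2 - t1) ** 2) / (t2 - t1)
x_true = x1 + v0 * (t - t1) - 0.5 * g * (t - t1) ** 2

# Trial trajectories: add a smooth bump that vanishes at both endpoints
for eps in [0.0, 0.1, 0.5, -0.5]:
    x_trial = x_true + eps * np.sin(np.pi * (t - t1) / (t2 - t1))
    print(f"eps = {eps:+.1f}   S = {action(x_trial):.4f}")
# The eps = 0.0 row (the true path) gives the smallest action in this family.
```

Repeating the comparison with other endpoint choices or other deformation shapes gives the same qualitative result, which is the content of the stationary-action statement for this simple Lagrangian.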
=== Conservation laws === Implications of symmetries in a physical situation can be found with the action principle, together with the Euler–Lagrange equations, which are derived from the action principle. An example is Noether's theorem, which states that to every continuous symmetry in a physical situation there corresponds a conservation law (and conversely). This deep connection requires that the action principle be assumed. === Path integral formulation of quantum field theory === In quantum mechanics, the system does not follow a single path whose action is stationary, but the behavior of the system depends on all permitted paths and the value of their action. The action corresponding to the various paths is used to calculate the path integral, which gives the probability amplitudes of the various outcomes. Although equivalent in classical mechanics with Newton's laws, the action principle is better suited for generalizations and plays an important role in modern physics. Indeed, this principle is one of the great generalizations in physical science. It is best understood within quantum mechanics, particularly in Richard Feynman's path integral formulation, where it arises out of destructive interference of quantum amplitudes. === Modern extensions === The action principle can be generalized still further. For example, the action need not be an integral, because nonlocal actions are possible. The configuration space need not even be a functional space, given certain features such as noncommutative geometry. However, a physical basis for these mathematical extensions remains to be established experimentally. == See also == == References == == Further reading == The Cambridge Handbook of Physics Formulas, G. Woan, Cambridge University Press, 2010, ISBN 978-0-521-57507-2. Dare A. Wells, Lagrangian Dynamics, Schaum's Outline Series (McGraw-Hill, 1967) ISBN 0-07-069258-0, A 350-page comprehensive "outline" of the subject. == External links == Principle of least action interactive Interactive explanation/webpage
Wikipedia/Action_(physics)
In science and engineering the study of high pressure examines its effects on materials and the design and construction of devices, such as a diamond anvil cell, which can create high pressure. High pressure usually means pressures of thousands (kilobars) or millions (megabars) of times atmospheric pressure (about 1 bar or 100,000 Pa). == History and overview == Percy Williams Bridgman received a Nobel Prize in 1946 for advancing this area of physics by two magnitudes of pressure (400 MPa to 40 GPa). The list of founding fathers of this field includes also the names of Harry George Drickamer, Tracy Hall, Francis P. Bundy, Leonid F. Vereschagin, and Sergey M. Stishov. It was by applying high pressure as well as high temperature to carbon that synthetic diamonds were first produced alongside many other interesting discoveries. Almost any material when subjected to high pressure will compact itself into a denser form, for example, quartz (also called silica or silicon dioxide) will first adopt a denser form known as coesite, then upon application of even higher pressure, form stishovite. These two forms of silica were first discovered by high-pressure experimenters, but then found in nature at the site of a meteor impact. Chemical bonding is likely to change under high pressure, when the P*V term in the free energy becomes comparable to the energies of typical chemical bonds – i.e. at around 100 GPa. Among the most striking changes are metallization of oxygen at 96 GPa (rendering oxygen a superconductor), and transition of sodium from a nearly-free-electron metal to a transparent insulator at ~200 GPa. At ultimately high compression, however, all materials will metallize. High-pressure experimentation has led to the discovery of the types of minerals which are believed to exist in the deep mantle of the Earth, such as silicate perovskite, which is thought to make up half of the Earth's bulk, and post-perovskite, which occurs at the core-mantle boundary and explains many anomalies inferred for that region. Pressure "landmarks": typical pressures reached by large-volume presses are up to 30–40 GPa, pressures that can be generated inside diamond anvil cells are ~1000 GPa, pressure in the center of the Earth is 364 GPa, and highest pressures ever achieved in shock waves are over 100,000 GPa. == See also == Synthetic diamond D-DIA == References == == Further reading == Hazen, Robert M. (1993). The new alchemists : breaking through the barriers of high pressure. New York: Times Books. ISBN 978-0-8129-2275-2.
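The estimate above, that the P·V term in the free energy becomes comparable to chemical bond energies at around 100 GPa, can be checked with a rough order-of-magnitude calculation. The sketch below is illustrative only; the atomic volume of about 10 cubic ångströms is an assumed typical value, not a figure given in the text.

```python
# Order-of-magnitude estimate: P*V per atom versus a typical chemical bond energy.
# Assumed values: P = 100 GPa (the figure quoted above), V ~ 10 cubic angstroms per atom.
pressure = 100e9                 # Pa
atomic_volume = 10e-30           # m^3 (10 cubic angstroms)
electron_volt = 1.602176634e-19  # J

pv_energy = pressure * atomic_volume
print(f"P*V per atom ~ {pv_energy:.1e} J ~ {pv_energy / electron_volt:.1f} eV")
# Roughly 1e-18 J, i.e. about 6 eV, comparable to typical bond energies of a few eV.
```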
Wikipedia/High-pressure_physics
In physics, lattice field theory is the study of lattice models of quantum field theory. This involves studying field theory on a space or spacetime that has been discretised onto a lattice. == Details == Although most lattice field theories are not exactly solvable, they are immensely appealing due to their feasibility for computer simulation, often using Markov chain Monte Carlo methods. One hopes that, by performing simulations on larger and larger lattices, while making the lattice spacing smaller and smaller, one will be able to recover the behavior of the continuum theory as the continuum limit is approached. Just as in all lattice models, numerical simulation provides access to field configurations that are not accessible to perturbation theory, such as solitons. Similarly, non-trivial vacuum states can be identified and examined. The method is particularly appealing for the quantization of a gauge theory using the Wilson action. Most quantization approaches maintain Poincaré invariance manifest but sacrifice manifest gauge symmetry by requiring gauge fixing. It's only after renormalization that gauge invariance can be recovered. Lattice field theory differs from these in that it keeps manifest gauge invariance, but sacrifices manifest Poincaré invariance—recovering it only after renormalization. The articles on lattice gauge theory and lattice QCD explore these issues in greater detail. == See also == Fermion doubling == Further reading == Creutz, M., Quarks, gluons and lattices, Cambridge University Press, Cambridge, (1985). ISBN 978-0521315357 (renewed version: (2023) ISBN 978-1009290395) DeGrand, T., DeTar, C., Lattice Methods for Quantum Chromodynamics, World Scientific, Singapore, (2006). ISBN 978-9812567277 Gattringer, C., Lang, C. B., Quantum Chromodynamics on the Lattice, Springer, (2010). ISBN 978-3642018497 Knechtli, F., Günther, M., Peardon, M., Lattice Quantum Chromodynamics: Practical Essentials, Springer, (2016). ISBN 978-9402409970 Lin, H., Meyer, H.B., Lattice QCD for Nuclear Physics, Springer, (2014). ISBN 978-3319080215 Makeenko, Y., Methods of contemporary gauge theory, Cambridge University Press, Cambridge, (2002). ISBN 0-521-80911-8. Montvay, I., Münster, G., Quantum Fields on a Lattice, Cambridge University Press, Cambridge, (1997). ISBN 978-0521599177 Rothe, H., Lattice Gauge Theories, An Introduction, World Scientific, Singapore, (2005). ISBN 978-9814365857 Smit, J., Introduction to Quantum Fields on a Lattice, Cambridge University Press, Cambridge, (2002). ISBN 978-0521890519
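To make the Markov chain Monte Carlo approach described above concrete, the sketch below simulates a free scalar field on a small two-dimensional Euclidean lattice with periodic boundary conditions, updated with the Metropolis algorithm. This is a minimal toy illustration rather than lattice QCD; the lattice size, bare mass, proposal width, and number of sweeps are arbitrary assumed parameters, and only the simplest local observable is measured.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy parameters: lattice size, bare mass squared, Metropolis proposal width, sweeps
L, m2, delta, n_sweeps = 16, 0.5, 1.0, 500

phi = np.zeros((L, L))  # scalar field on a periodic L x L Euclidean lattice

def local_action(value, x, t):
    """Terms of S = sum_x [ (1/2) sum_mu (phi(x+mu) - phi(x))^2 + (1/2) m2 phi(x)^2 ]
    that involve site (x, t) when the field there equals `value` (constants dropped)."""
    neighbours = (phi[(x + 1) % L, t] + phi[(x - 1) % L, t] +
                  phi[x, (t + 1) % L] + phi[x, (t - 1) % L])
    return 0.5 * (4.0 + m2) * value**2 - value * neighbours

phi2_samples = []
for sweep in range(n_sweeps):
    for x in range(L):
        for t in range(L):
            old = phi[x, t]
            new = old + rng.uniform(-delta, delta)
            dS = local_action(new, x, t) - local_action(old, x, t)
            if dS <= 0 or rng.random() < np.exp(-dS):  # Metropolis accept/reject step
                phi[x, t] = new
    if sweep >= n_sweeps // 2:  # crude thermalization cut before measuring
        phi2_samples.append(np.mean(phi**2))

print("estimate of <phi^2> per site:", np.mean(phi2_samples))
```

Approaching the continuum theory would require repeating such runs on larger lattices with smaller lattice spacing and extrapolating, which is the program described in the article above.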
Wikipedia/Lattice_field_theory
Basic research, also called pure research, fundamental research, basic science, or pure science, is a type of scientific research with the aim of improving scientific theories for better understanding and prediction of natural or other phenomena. In contrast, applied research uses scientific theories to develop technology or techniques, which can be used to intervene and alter natural or other phenomena. Though often driven simply by curiosity, basic research often fuels the technological innovations of applied science. The two aims are often practiced simultaneously in coordinated research and development. In addition to innovations, basic research serves to provide insights and public support of nature, possibly improving conservation efforts. Technological innovations may influence engineering concepts, such as the beak of a kingfisher influencing the design of a high-speed bullet train. == Overview == Basic research advances fundamental knowledge about the world. It focuses on creating and refuting or supporting theories that explain observed phenomena. Pure research is the source of most new scientific ideas and ways of thinking about the world. It can be exploratory, descriptive, or explanatory; however, explanatory research is the most common. Basic research generates new ideas, principles, and theories, which may not be immediately utilized but nonetheless form the basis of progress and development in different fields. Today's computers, for example, could not exist without research in pure mathematics conducted over a century ago, for which there was no known practical application at the time. Basic research rarely helps practitioners directly with their everyday concerns; nevertheless, it stimulates new ways of thinking that have the potential to revolutionize and dramatically improve how practitioners deal with a problem in the future. == By country == In the United States, basic research is funded mainly by the federal government and done mainly at universities and institutes. As government funding has diminished in the 2010s, however, private funding is increasingly important. == Basic versus applied science == Applied science focuses on the development of technology and techniques. In contrast, basic science develops scientific knowledge and predictions, principally in natural sciences but also in other empirical sciences, which are used as the scientific foundation for applied science. Basic science develops and establishes information to predict phenomena and perhaps to understand nature, whereas applied science uses portions of basic science to develop interventions via technology or technique to alter events or outcomes. Applied and basic sciences can interface closely in research and development. The interface between basic research and applied research has been studied by the National Science Foundation. A worker in basic scientific research is motivated by a driving curiosity about the unknown. When his explorations yield new knowledge, he experiences the satisfaction of those who first attain the summit of a mountain or the upper reaches of a river flowing through unmapped territory. Discovery of truth and understanding of nature are his objectives. His professional standing among his fellows depends upon the originality and soundness of his work. 
Creativeness in science is of a cloth with that of the poet or painter. It conducted a study in which it traced the relationship between basic scientific research efforts and the development of major innovations, such as oral contraceptives and videotape recorders. This study found that basic research played a key role in the development of all of the innovations. The number of basic science research efforts that assisted in the production of a given innovation peaked between 20 and 30 years before the innovation itself. While most innovation takes the form of applied science and most innovation occurs in the private sector, basic research is a necessary precursor to almost all applied science and associated instances of innovation. Roughly 76% of basic research is conducted by universities. A distinction can be made between basic science and disciplines such as medicine and technology. They can be grouped as STM (science, technology, and medicine; not to be confused with STEM [science, technology, engineering, and mathematics]) or STS (science, technology, and society). These groups are interrelated and influence each other, although they may differ in the specifics such as methods and standards. The Nobel Prize mixes basic with applied sciences for its award in Physiology or Medicine. In contrast, the Royal Society of London awards distinguish natural science from applied science. == See also == Blue skies research Hard and soft science Metascience Normative science Physics Precautionary principle Pure mathematics Pure Chemistry == References == == Further reading == Levy, David M. (2002). "Research and Development". In David R. Henderson (ed.). Concise Encyclopedia of Economics (1st ed.). Library of Economics and Liberty. OCLC 317650570, 50016270, 163149563
Wikipedia/Fundamental_science
The Physics (Ancient Greek: Φυσικὴ ἀκρόασις, romanized: Phusike akroasis; Latin: Physica or Naturales Auscultationes, possibly meaning "Lectures on nature") is a named text, written in ancient Greek, collated from a collection of surviving manuscripts known as the Corpus Aristotelicum, attributed to the 4th-century BC philosopher Aristotle. == The meaning of physics in Aristotle == It is a collection of treatises or lessons that deals with the most general (philosophical) principles of natural or moving things, both living and non-living, rather than physical theories (in the modern sense) or investigations of the particular contents of the universe. The chief purpose of the work is to discover the principles and causes of (and not merely to describe) change, or movement, or motion (κίνησις kinesis), especially that of natural wholes (mostly living things, but also inanimate wholes like the cosmos). In the conventional Andronicean ordering of Aristotle's works, it stands at the head of, as well as being foundational to, the long series of physical, cosmological and biological treatises, whose ancient Greek title, τὰ φυσικά, means "the [writings] on nature" or "natural philosophy". == Description of the content == The Physics is composed of eight books, which are further divided into chapters. This system is of ancient origin, now obscure. In modern languages, books are referenced with Roman numerals, standing for ancient Greek capital letters (the Greeks represented numbers with letters, e.g. A for 1). Chapters are identified by Arabic numerals, but the use of the English word "chapter" is strictly conventional. Ancient "chapters" (capita) are generally very short, often less than a page. Additionally, the Bekker numbers give the page and column (a or b) used in the Prussian Academy of Sciences' edition of Aristotle's works, instigated and managed by Bekker himself. These are evident in the 1831 2-volume edition. Bekker's line numbers may be given. These are often given, but unless the edition is the Academy's, they do not match any line counts. === Book I (Α; 184a–192b) === Book I introduces Aristotle's approach to nature, which is to be based on principles, causes, and elements. Before offering his particular views, he engages previous theories, such as those offered by Melissus and Parmenides. Aristotle's own view comes out in Ch. 7 where he identifies three principles: substances, opposites, and privation. Chapters 3 and 4 are among the most difficult in all of Aristotle's works and involve subtle refutations of the thought of Parmenides, Melissus and Anaxagoras. In chapter 5, he continues his review of his predecessors, particularly how many first principles there are. Chapter 6 narrows down the number of principles to two or three. He presents his own account of the subject in chapter 7, where he first introduces the word matter (Greek: hyle) to designate fundamental essence (ousia). He defines matter in chapter 9: "For my definition of matter is just this—the primary substratum of each thing, from which it comes to be without qualification, and which persists in the result." Matter in Aristotle's thought is, however, defined in terms of sensible reality; for example, a horse eats grass: the horse changes the grass into itself; the grass as such does not persist in the horse, but some aspect of it – its matter – does. Matter is not specifically described, but consists of whatever is apart from quality or quantity and that of which something may be predicated. 
Matter in this understanding does not exist independently (i.e. as a substance), but exists interdependently (i.e. as a "principle") with form and only insofar as it underlies change. Matter and form are analogical terms. === Book II (Β; 192b–200b) === Book II identifies "nature" (physis) as "a source or cause of being moved and of being at rest in that to which it belongs primarily" (1.192b21). Thus, those entities are natural which are capable of starting to move, e.g. growing, acquiring qualities, displacing themselves, and finally being born and dying. Aristotle contrasts natural things with the artificial: artificial things can move also, but they move according to what they are made of, not according to what they are. For example, if a wooden bed were buried and somehow sprouted as a tree, it would be according to what it is made of, not what it is. Aristotle contrasts two senses of nature: nature as matter and nature as form or definition. By "nature", Aristotle means the natures of particular things and would perhaps be better translated "a nature." In Book II, however, his appeal to "nature" as a source of activities is more typically to the genera of natural kinds (the secondary substance). But, contra Plato, Aristotle attempts to resolve a philosophical quandary that was well understood in the fourth century. The Eudoxian planetary model sufficed for the wandering stars, but no deduction of terrestrial substance would be forthcoming based solely on the mechanical principles of necessity, (ascribed by Aristotle to material causation in chapter 9). In the Enlightenment, centuries before modern science made good on atomist intuitions, a nominal allegiance to mechanistic materialism gained popularity despite harboring Newton's action at distance, and comprising the native habitat of teleological arguments: Machines or artifacts composed of parts lacking any intrinsic relationship to each other with their order imposed from without. Thus, the source of an apparent thing's activities is not the whole itself, but its parts. While Aristotle asserts that the matter (and parts) are a necessary cause of things – the material cause – he says that nature is primarily the essence or formal cause (1.193b6), that is, the information, the whole species itself. The necessary in nature, then, is plainly what we call by the name of matter, and the changes in it. Both causes must be stated by the physicist, but especially the end; for that is the cause of the matter, not vice versa; and the end is 'that for the sake of which', and the beginning starts from the definition or essence… In chapter 3, Aristotle presents his theory of the four causes (material, efficient, formal, and final). Material cause explains what something is made of (for example, the wood of a house), formal cause explains the form which a thing follows to become that thing (the plans of an architect to build a house), efficient cause is the actual source of the change (the physical building of the house), and final cause is the intended purpose of the change (the final product of the house and its purpose as a shelter and home). Of particular importance is the final cause or purpose (telos). It is a common mistake to conceive of the four causes as additive or alternative forces pushing or pulling; in reality, all four are needed to explain (7.198a22-25). What we typically mean by cause in the modern scientific idiom is only a narrow part of what Aristotle means by efficient cause. 
He contrasts purpose with the way in which "nature" does not work, chance (or luck), discussed in chapters 4, 5, and 6. (Chance working in the actions of humans is tuche and in unreasoning agents automaton.) Something happens by chance when all the lines of causality converge without that convergence being purposefully chosen, and produce a result similar to the teleologically caused one. In chapters 7 through 9, Aristotle returns to the discussion of nature. With the enrichment of the preceding four chapters, he concludes that nature acts for an end, and he discusses the way that necessity is present in natural things. For Aristotle, the motion of natural things is determined from within them, while in the modern empirical sciences, motion is determined from without (more properly speaking: there is nothing to have an inside). === Book III (Γ; 200b–208a) === In order to understand "nature" as defined in the previous book, one must understand the terms of the definition. To understand motion, book III begins with the definition of change based on Aristotle's notions of potentiality and actuality. Change, he says, is the actualization of a thing's ability insofar as it is able. The rest of the book (chapters 4-8) discusses the infinite (apeiron, the unlimited). He distinguishes between the infinite by addition and the infinite by division, and between the actually infinite and potentially infinite. He argues against the actually infinite in any form, including infinite bodies, substances, and voids. Aristotle here says the only type of infinity that exists is the potentially infinite. Aristotle characterizes this as that which serves as "the matter for the completion of a magnitude and is potentially (but not actually) the completed whole" (207a22-23). The infinite, lacking any form, is thereby unknowable. Aristotle writes, "it is not what has nothing outside it that is infinite, but what always has something outside it" (6.206b33-207a1-2). === Book IV (Δ; 208a–223b) === Book IV discusses the preconditions of motion: place (topos, chapters 1-5), void (kenon, chapters 6-9), and time (khronos, chapters 10-14). The book starts by distinguishing the various ways a thing can "be in" another. He likens place to an immobile container or vessel: "the innermost motionless boundary of what contains" is the primary place of a body (4.212a20). Unlike space, which is a volume co-existent with a body, place is a boundary or surface. He teaches that, contrary to the Atomists and others, a void is not only unnecessary, but leads to contradictions, e.g., making locomotion impossible. Time is a constant attribute of movements and, Aristotle thinks, does not exist on its own but is relative to the motions of things. Tony Roark describes Aristotle's view of time as follows: Aristotle defines time as "a number of motion with respect to the before and after" (Phys. 219b1–2), by which he intends to denote motion's susceptibility to division into undetached parts of arbitrary length, a property that it possesses both by virtue of its intrinsic nature and also by virtue of the capacities and activities of percipient souls. Motion is intrinsically indeterminate, but perceptually determinable, with respect to its length. Acts of perception function as determiners; the result is determinate units of kinetic length, which is precisely what a temporal unit is. === Books V and VI (Ε: 224a–231a; Ζ: 231a–241b) === Books V and VI deal with how motion occurs. 
Book V classifies four species of movement, depending on where the opposites are located. Movement categories include quantity (e.g. a change in dimensions, from great to small), quality (as for colors: from pale to dark), place (local movements generally go from up downwards and vice versa), or, more controversially, substance. In fact, substances do not have opposites, so it is inappropriate to say that something properly becomes, from not-man, man: generation and corruption are not kinesis in the full sense. Book VI discusses how a changing thing can reach the opposite state, if it has to pass through infinite intermediate stages. It investigates by rational and logical arguments the notions of continuity and division, establishing that change—and, consequently, time and place—are not divisible into indivisible parts; they are not mathematically discrete but continuous, that is, infinitely divisible (in other words, that you cannot build up a continuum out of discrete or indivisible points or moments). Among other things, this implies that there can be no definite (indivisible) moment when a motion begins. This discussion, together with that of speed and the different behavior of the four different species of motion, eventually helps Aristotle answer the famous paradoxes of Zeno, which purport to show the absurdity of motion's existence. === Book VII (Η; 241a25–250b7) === Book VII briefly deals with the relationship of the moved to his mover, which Aristotle describes in substantial divergence with Plato's theory of the soul as capable of setting itself in motion (Laws book X, Phaedrus, Phaedo). Everything which moves is moved by another. He then tries to correlate the species of motion and their speeds, with the local change (locomotion, phorà) as the most fundamental to which the others can be reduced. Book VII.1-3 also exist in an alternative version, not included in the Bekker edition. === Book VIII (Θ; 250a14–267b26) === Book VIII (which occupies almost a fourth of the entire Physics, and probably constituted originally an independent course of lessons) discusses two main topics, though with a wide deployment of arguments: the time limits of the universe, and the existence of a Prime Mover — eternal, indivisible, without parts and without magnitude. Isn't the universe eternal, has it had a beginning, will it ever end? Aristotle's response, as a Greek, could hardly be affirmative, never having been told of a creatio ex nihilo, but he also has philosophical reasons for denying that motion had not always existed, on the grounds of the theory presented in the earlier books of the Physics. Eternity of motion is also confirmed by the existence of a substance which is different from all the others in lacking matter; being pure form, it is also in an eternal actuality, not being imperfect in any respect; hence needing not to move. This is demonstrated by describing the celestial bodies thus: the first things to be moved must undergo an infinite, single and continuous movement, that is, circular. This is not caused by any contact but (integrating the view contained in the Metaphysics, bk. XII) by love and aspiration. == Significance to philosophy and science in the modern world == The works of Aristotle are typically influential to the development of Western science and philosophy. The citations below are not given as any sort of final modern judgement on the interpretation and significance of Aristotle, but are only the notable views of some moderns. 
=== Heidegger === Martin Heidegger writes: The Physics is a lecture in which he seeks to determine beings that arise on their own, τὰ φύσει ὄντα, with regard to their being. Aristotelian "physics" is different from what we mean today by this word, not only to the extent that it belongs to antiquity whereas the modern physical sciences belong to modernity, rather above all it is different by virtue of the fact that Aristotle's "physics" is philosophy, whereas modern physics is a positive science that presupposes a philosophy.... This book determines the warp and woof of the whole of Western thinking, even at that place where it, as modern thinking, appears to think at odds with ancient thinking. But opposition is invariably comprised of a decisive, and often even perilous, dependence. Without Aristotle's Physics there would have been no Galileo. === Russell === Bertrand Russell says of Physics and On the Heavens (which he believed was a continuation of Physics) that they were: ...extremely influential, and dominated science until the time of Galileo ... The historian of philosophy, accordingly, must study them, in spite of the fact that hardly a sentence in either can be accepted in the light of modern science. === Rovelli === Italian theoretical physicist Carlo Rovelli considers Aristotle's physics as a correct and non-intuitive special case of Newtonian physics for the motion of matter in fluid after it has reached terminal velocity (steady state). His theory disregards the initial phase of acceleration, which is too short to be observed by the naked eye. Galileo's inclined plane experiment bypasses the issue, as it slows down acceleration enough to allow observing the initial phase of acceleration by the naked eye. The five elements explain forms of observed motions. Ether explains circular motion in the sky, earth and water explain downward motion, and fire and air explain upward motion. To explain downward motion, instead of postulating one element, he proposed two, because wood moves up in water but down in air, while earth moves down in both water and air. The complex interaction between the four elements could explain most of the rising and falling motions of objects with different densities. The velocity of falling objects is equal to $C\left(\frac{W}{\rho}\right)^{n}$, where $W$ is the weight of the object, $\rho$ is the density of the surrounding fluid (such as air, fire, or water), $n > 0$ is a constant, and $C$ is a constant depending on the shape of the object. This is correct for the terminal velocity of falling objects in fluid in a constant gravitational field, in the case where most of the fluid resistance is drag force, proportional to $\rho v^{2}$. In this case, the terminal velocity is $C\left(\frac{W}{\rho}\right)^{1/2}$. == See also == History of physics Horror vacui Euclid's Elements == Notes == == References == == Bibliography == === Recensions of Physics in the ancient Greek === A recension is a selection of a specific text for publication. The manuscripts on a given work attributed to Aristotle offer textual variants. One recension makes a selection of one continuous text, but typically gives notes stating the alternative sections of text. Determining which text is to be presented as "original" is a detailed scholarly investigation. The recension is often known by its scholarly editor's name. 
=== English translations of the Physics === In reverse chronological order: === Classical and medieval commentaries on the Physics === A commentary differs from a note in being a distinct work analyzing the language and subsumed concepts of some other work classically notable. A note appears within the annotated work on the same page or in a separate list. Commentaries are typically arranged by lemmas, or quotes from the notable work, followed by an analysis of the author of the commentary. The commentaries on every work of Aristotle are a vast and mainly unpublished topic. They extend continuously from the death of the philosopher, representing the entire history of Graeco-Roman philosophy. There are thousands of commentators and commentaries known wholly or more typically in fragments of manuscripts. The latter especially occupy the vaults of institutions formerly responsible for copying them, such as monasteries. The process of publishing them is slow and ongoing. Below is a brief representative bibliography of published commentaries on Aristotle's Physics available on or through the Internet. Like the topic itself, they are perforce multi-cultural, but English has been favored, as well as the original languages, ancient Greek and Latin. === Some modern commentaries, monographs and articles === == Further reading == Books Die Aristotelische Physik, W. Wieland, 1962, 2nd revised edition 1970. Articles Machamer, Peter K., "Aristotle on Natural Place and Motion," Isis 69:3 (Sept. 1978), 377–387. == External links == === Commentaries and comments === HTML Greek, in parallel with English translation: Fr. Kenny's collection (with Aquinas's commentary) HTML Greek, in parallel with French translation: P. Remacle's collection Thomas Aquinas's Commentary A 'Bigger' Physics – lecture at MIT on how Aristotle's natural philosophy complements modern science and the need for a general science of nature === Other === Greek text of Physics, as edited by W.D. Ross Perseus edition of Physics in Greek Aristotle: Motion and its Place in Nature entry in the Internet Encyclopedia of Philosophy. Physics, English Translation by Thomas Taylor public domain audiobook at LibriVox Text of Physics, (in html, epub or mobi format) as translated by R. P. Hardie and R. K. Gaye
Wikipedia/Physics_(Aristotle)
The natural sciences saw various advancements during the Golden Age of Islam (from roughly the mid 8th to the mid 13th centuries), adding a number of innovations to the Transmission of the Classics (such as Aristotle, Ptolemy, Euclid, Neoplatonism). During this period, Islamic theology was encouraging of thinkers to find knowledge. Thinkers from this period included Al-Farabi, Abu Bishr Matta, Ibn Sina, al-Hassan Ibn al-Haytham and Ibn Bajjah. These works and the important commentaries on them were the wellspring of science during the medieval period. They were translated into Arabic, the lingua franca of this period. Islamic scholarship in the sciences had inherited Aristotelian physics from the Greeks and during the Islamic Golden Age developed it further. However the Islamic world had a greater respect for knowledge gained from empirical observation, and believed that the universe is governed by a single set of laws. Their use of empirical observation led to the formation of crude forms of the scientific method. The study of physics in the Islamic world started in Iraq and Egypt. Fields of physics studied in this period include optics, mechanics (including statics, dynamics, kinematics and motion), and astronomy. == Physics == Islamic scholarship had inherited Aristotelian physics from the Greeks and during the Islamic Golden Age developed it further, especially placing emphasis on observation and a priori reasoning, developing early forms of the scientific method. With Aristotelian physics, physics was seen as lower than demonstrative mathematical sciences, but in terms of a larger theory of knowledge, physics was higher than astronomy; many of whose principles derive from physics and metaphysics. The primary subject of physics, according to Aristotle, was motion or change; there were three factors involved with this change, underlying thing, privation, and form. In his Metaphysics, Aristotle believed that the Unmoved Mover was responsible for the movement of the cosmos, which Neoplatonists later generalized as the cosmos were eternal. Al-Kindi argued against the idea of the cosmos being eternal by claiming that the eternality of the world lands one in a different sort of absurdity involving the infinite; Al-Kindi asserted that the cosmos must have a temporal origin because traversing an infinite was impossible. One of the first commentaries of Aristotle's Metaphysics is by Al-Farabi. In "'The Aims of Aristotle's Metaphysics", Al-Farabi argues that metaphysics is not specific to natural beings, but at the same time, metaphysics is higher in universality than natural beings. == Optics == One field in physics, optics, developed rapidly in this period. By the ninth century, there were works on physiological optics as well as mirror reflections, and geometrical and physical optics. In the eleventh century, Ibn al-Haytham not only rejected the Greek idea about vision, he came up with a new theory. Ibn Sahl (c. 940–1000), a mathematician and physicist connected with the court of Baghdad, wrote a treatise On Burning Mirrors and Lenses in 984 in which he set out his understanding of how curved mirrors and lenses bend and focus light. Ibn Sahl is credited with discovering the law of refraction, now usually called Snell's law. He used this law to work out the shapes of lenses that focus light with no geometric aberrations, known as anaclastic lenses. 
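The refraction law credited above to Ibn Sahl is simple to apply numerically. The sketch below is illustrative only: the refractive indices (air ≈ 1.0, glass ≈ 1.5), the incidence angle, and the function name are assumed values invented for this example, not figures taken from his treatise.

```python
import math

def refraction_angle(theta_incident_deg, n1=1.0, n2=1.5):
    """Angle of the refracted ray from Snell's law: n1*sin(theta1) = n2*sin(theta2).

    n1 and n2 are illustrative refractive indices (roughly air and glass).
    Returns None when there is no transmitted ray (total internal reflection,
    which can only occur when n1 > n2).
    """
    s = n1 * math.sin(math.radians(theta_incident_deg)) / n2
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

print(round(refraction_angle(30.0), 2))  # about 19.47 degrees at an air-to-glass boundary
```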
Ibn al-Haytham (known in Western Europe as Alhacen or Alhazen) (965-1040), often regarded as the "father of optics" and a pioneer of the scientific method, formulated "the first comprehensive and systematic alternative to Greek optical theories." He postulated in his "Book of Optics" that light was reflected upon different surfaces in different directions, thus causing different light signatures for a certain object that we see. It was a different approach than that which was previously thought by Greek scientists, such as Euclid or Ptolemy, who believed rays were emitted from the eye to an object and back again. Al-Haytham, with this new theory of optics, was able to study the geometric aspects of the visual cone theories without explaining the physiology of perception. Also in his Book of Optics, Ibn al-Haytham used mechanics to try and understand optics. Using projectiles, he observed that objects that hit a target perpendicularly exert much more force than projectiles that hit at an angle. Al-Haytham applied this discovery to optics and tried to explain why direct light hurts the eye, because direct light approaches perpendicularly and not at an oblique angle. He developed a camera obscura to demonstrate that light and color from different candles can be passed through a single aperture in straight lines, without intermingling at the aperture. His theories were transmitted to the West. His work influenced Roger Bacon, John Peckham and Vitello, who built upon his work and ultimately transmitted it to Kepler. Taqī al-Dīn tried to disprove the widely held belief that light is emitted by the eye and not the object that is being observed. He explained that, if light came from our eyes at a constant velocity it would take much too long to illuminate the stars for us to see them while we are still looking at them, because they are so far away. Therefore, the illumination must be coming from the stars so we can see them as soon as we open our eyes. == Astronomy == The Islamic understanding of the astronomical model was based on the Greek Ptolemaic system. However, many early astronomers had started to question the model. It was not always accurate in its predictions and was over complicated because astronomers were trying to mathematically describe the movement of the heavenly bodies. Ibn al-Haytham published Al-Shukuk ala Batiamyus ("Doubts on Ptolemy"), which outlined his many criticisms of the Ptolemaic paradigm. This book encouraged other astronomers to develop new models to explain celestial movement better than Ptolemy. In al-Haytham's Book of Optics he argues that the celestial spheres were not made of solid matter, and that the heavens are less dense than air. Some astronomers theorized about gravity too, al-Khazini suggests that the gravity an object contains varies depending on its distance from the center of the universe. The center of the universe in this case refers to the center of the Earth. == Mechanics == === Impetus === John Philoponus had rejected the Aristotelian view of motion, and argued that an object acquires an inclination to move when it has a motive power impressed on it. In the eleventh century Ibn Sina had roughly adopted this idea, believing that a moving object has force which is dissipated by external agents like air resistance. Ibn Sina made distinction between 'force' and 'inclination' (called "mayl"), he claimed that an object gained mayl when the object is in opposition to its natural motion. 
So he concluded that continuation of motion is attributed to the inclination that is transferred to the object, and that the object will be in motion until the mayl is spent. He also claimed that a projectile in a vacuum would not stop unless it is acted upon. This conception of motion is consistent with Newton's first law of motion, inertia, which states that an object in motion will stay in motion unless it is acted on by an external force. This idea, which dissented from the Aristotelian view, was basically abandoned until it was described as "impetus" by John Buridan, who may have been influenced by Ibn Sina. === Acceleration === In his text Shadows, Abū Rayḥān al-Bīrūnī recognizes that non-uniform motion is the result of acceleration. Ibn Sina's theory of mayl tried to relate the velocity and weight of a moving object; this idea closely resembled the concept of momentum. Aristotle's theory of motion stated that a constant force produces a uniform motion; Abu'l-Barakāt al-Baghdādī contradicted this and developed his own theory of motion. In his theory he showed that velocity and acceleration are two different things, and that force is proportional to acceleration and not velocity. == See also == Astronomy in the medieval Islamic world History of optics History of physics History of scientific method Islamic world contributions to Medieval Europe Islamic Golden Age Science in the medieval Islamic world Science in the Middle Ages == References ==
Wikipedia/Physics_in_the_medieval_Islamic_world
Scientific laws or laws of science are statements, based on repeated experiments or observations, that describe or predict a range of natural phenomena. The term law has diverse usage in many cases (approximate, accurate, broad, or narrow) across all fields of natural science (physics, chemistry, astronomy, geoscience, biology). Laws are developed from data and can be further developed through mathematics; in all cases they are directly or indirectly based on empirical evidence. It is generally understood that they implicitly reflect, though they do not explicitly assert, causal relationships fundamental to reality, and are discovered rather than invented. Scientific laws summarize the results of experiments or observations, usually within a certain range of application. In general, the accuracy of a law does not change when a new theory of the relevant phenomenon is worked out, but rather the scope of the law's application, since the mathematics or statement representing the law does not change. As with other kinds of scientific knowledge, scientific laws do not express absolute certainty, as mathematical laws do. A scientific law may be contradicted, restricted, or extended by future observations. A law can often be formulated as one or several statements or equations, so that it can predict the outcome of an experiment. Laws differ from hypotheses and postulates, which are proposed during the scientific process before and during validation by experiment and observation. Hypotheses and postulates are not laws, since they have not been verified to the same degree, although they may lead to the formulation of laws. Laws are narrower in scope than scientific theories, which may entail one or several laws. Science distinguishes a law or theory from facts. Calling a law a fact is ambiguous, an overstatement, or an equivocation. The nature of scientific laws has been much discussed in philosophy, but in essence scientific laws are simply empirical conclusions reached by the scientific method; they are intended to be neither laden with ontological commitments nor statements of logical absolutes. Social sciences such as economics have also attempted to formulate scientific laws, though these generally have much less predictive power. == Overview == A scientific law always applies to a physical system under repeated conditions, and it implies that there is a causal relationship involving the elements of the system. Factual and well-confirmed statements like "Mercury is liquid at standard temperature and pressure" are considered too specific to qualify as scientific laws. A central problem in the philosophy of science, going back to David Hume, is that of distinguishing causal relationships (such as those implied by laws) from principles that arise due to constant conjunction. Laws differ from scientific theories in that they do not posit a mechanism or explanation of phenomena: they are merely distillations of the results of repeated observation. As such, the applicability of a law is limited to circumstances resembling those already observed, and the law may be found to be false when extrapolated. 
Ohm's law only applies to linear networks; Newton's law of universal gravitation only applies in weak gravitational fields; the early laws of aerodynamics, such as Bernoulli's principle, do not apply in the case of compressible flow such as occurs in transonic and supersonic flight; Hooke's law only applies to strain below the elastic limit; Boyle's law applies with perfect accuracy only to the ideal gas, etc. These laws remain useful, but only under the specified conditions where they apply. Many laws take mathematical forms, and thus can be stated as an equation; for example, the law of conservation of energy can be written as $\Delta E = 0$, where $E$ is the total amount of energy in the universe. Similarly, the first law of thermodynamics can be written as $\mathrm{d}U = \delta Q - \delta W$, and Newton's second law can be written as $F = \frac{dp}{dt}$. While these scientific laws explain what our senses perceive, they are still empirical (acquired by observation or scientific experiment) and so are not like mathematical theorems which can be proved purely by mathematics. Like theories and hypotheses, laws make predictions; specifically, they predict that new observations will conform to the given law. Laws can be falsified if they are found in contradiction with new data. Some laws are only approximations of other more general laws, and are good approximations with a restricted domain of applicability. For example, Newtonian dynamics (which is based on Galilean transformations) is the low-speed limit of special relativity (since the Galilean transformation is the low-speed approximation to the Lorentz transformation). Similarly, the Newtonian gravitation law is a low-mass approximation of general relativity, and Coulomb's law is an approximation to quantum electrodynamics at large distances (compared to the range of weak interactions). In such cases it is common to use the simpler, approximate versions of the laws, instead of the more accurate general laws. Laws are constantly being tested experimentally to increasing degrees of precision, which is one of the main goals of science. The fact that laws have never been observed to be violated does not preclude testing them at increased accuracy or in new kinds of conditions to confirm whether they continue to hold, or whether they break, and what can be discovered in the process. It is always possible for laws to be invalidated or proven to have limitations, by repeatable experimental evidence, should any be observed. Well-established laws have indeed been invalidated in some special cases, but the new formulations created to explain the discrepancies generalize upon, rather than overthrow, the originals. That is, the invalidated laws have been found to be only close approximations, to which other terms or factors must be added to cover previously unaccounted-for conditions, e.g. very large or very small scales of time or space, enormous speeds or masses, etc. Thus, rather than unchanging knowledge, physical laws are better viewed as a series of improving and more precise generalizations. == Properties == Scientific laws are typically conclusions based on repeated scientific experiments and observations over many years, and have become accepted universally within the scientific community. 
A scientific law is "inferred from particular facts, applicable to a defined group or class of phenomena, and expressible by the statement that a particular phenomenon always occurs if certain conditions be present". The production of a summary description of our environment in the form of such laws is a fundamental aim of science. Several general properties of scientific laws, particularly when referring to laws in physics, have been identified. Scientific laws are: True, at least within their regime of validity. By definition, there have never been repeatable contradicting observations. Universal. They appear to apply everywhere in the universe.: 82  Simple. They are typically expressed in terms of a single mathematical equation. Absolute. Nothing in the universe appears to affect them.: 82  Stable. Unchanged since first discovered (although they may have been shown to be approximations of more accurate laws), All-encompassing. Everything in the universe apparently must comply with them (according to observations). Generally conservative of quantity.: 59  Often expressions of existing homogeneities (symmetries) of space and time. Typically theoretically reversible in time (if non-quantum), although time itself is irreversible. Broad. In physics, laws exclusively refer to the broad domain of matter, motion, energy, and force itself, rather than more specific systems in the universe, such as living systems, e.g. the mechanics of the human body. The term "scientific law" is traditionally associated with the natural sciences, though the social sciences also contain laws. For example, Zipf's law is a law in the social sciences which is based on mathematical statistics. In these cases, laws may describe general trends or expected behaviors rather than being absolutes. In natural science, impossibility assertions come to be widely accepted as overwhelmingly probable rather than considered proved to the point of being unchallengeable. The basis for this strong acceptance is a combination of extensive evidence of something not occurring, combined with an underlying theory, very successful in making predictions, whose assumptions lead logically to the conclusion that something is impossible. While an impossibility assertion in natural science can never be absolutely proved, it could be refuted by the observation of a single counterexample. Such a counterexample would require that the assumptions underlying the theory that implied the impossibility be re-examined. Some examples of widely accepted impossibilities in physics are perpetual motion machines, which violate the law of conservation of energy, exceeding the speed of light, which violates the implications of special relativity, the uncertainty principle of quantum mechanics, which asserts the impossibility of simultaneously knowing both the position and the momentum of a particle, and Bell's theorem: no physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics. == Laws as consequences of mathematical symmetries == Some laws reflect mathematical symmetries found in nature (e.g. the Pauli exclusion principle reflects identity of electrons, conservation laws reflect homogeneity of space, time, and Lorentz transformations reflect rotational symmetry of spacetime). Many fundamental physical laws are mathematical consequences of various symmetries of space, time, or other aspects of nature. Specifically, Noether's theorem connects some conservation laws to certain symmetries. 
For example, conservation of energy is a consequence of the shift symmetry of time (no moment of time is different from any other), while conservation of momentum is a consequence of the symmetry (homogeneity) of space (no place in space is special, or different from any other). The indistinguishability of all particles of each fundamental type (say, electrons, or photons) results in the Dirac and Bose quantum statistics which in turn result in the Pauli exclusion principle for fermions and in Bose–Einstein condensation for bosons. Special relativity uses rapidity to express motion according to the symmetries of hyperbolic rotation, a transformation mixing space and time. Symmetry between inertial and gravitational mass results in general relativity. The inverse square law of interactions mediated by massless bosons is the mathematical consequence of the 3-dimensionality of space. One strategy in the search for the most fundamental laws of nature is to search for the most general mathematical symmetry group that can be applied to the fundamental interactions. == Laws of physics == === Conservation laws === ==== Conservation and symmetry ==== Conservation laws are fundamental laws that follow from the homogeneity of space, time and phase, in other words symmetry. Noether's theorem: Any quantity with a continuously differentiable symmetry in the action has an associated conservation law. Conservation of mass was the first law to be understood since most macroscopic physical processes involving masses, for example, collisions of massive particles or fluid flow, provide the apparent belief that mass is conserved. Mass conservation was observed to be true for all chemical reactions. In general, this is only approximative because with the advent of relativity and experiments in nuclear and particle physics: mass can be transformed into energy and vice versa, so mass is not always conserved but part of the more general conservation of mass–energy. Conservation of energy, momentum and angular momentum for isolated systems can be traced to symmetries in time, translation, and rotation. Conservation of charge was also realized since charge has never been observed to be created or destroyed and only found to move from place to place. ==== Continuity and transfer ==== Conservation laws can be expressed using the general continuity equation (for a conserved quantity), which can be written in differential form as: $\frac{\partial \rho}{\partial t} = -\nabla \cdot \mathbf{J}$ where ρ is some quantity per unit volume and J is the flux of that quantity (change in quantity per unit time per unit area). Intuitively, the divergence (denoted ∇⋅) of a vector field is a measure of flux diverging radially outwards from a point, so the negative is the amount piling up at a point; hence the rate of change of density in a region of space must be the amount of flux leaving or collecting in some region (see the main article for details). The fluxes of various physical quantities in transport, and their associated continuity equations, can be tabulated for comparison. More general equations are the convection–diffusion equation and Boltzmann transport equation, which have their roots in the continuity equation. 
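As a rough numerical sketch of the continuity equation above, the following one-dimensional advection example (the grid size, speed and Gaussian initial profile are arbitrary, assumed values) checks that the integrated density stays constant when the flux is J = ρv and the update is written in conservative form.

```python
import numpy as np

# 1-D continuity equation d(rho)/dt = -d(J)/dx with J = rho * v (pure advection).
nx, L, v = 400, 1.0, 0.5                      # illustrative grid and speed
x = np.linspace(0.0, L, nx, endpoint=False)
dx = x[1] - x[0]
dt = 0.4 * dx / v                             # step chosen to keep the scheme stable
rho = np.exp(-((x - 0.3) ** 2) / 0.005)       # arbitrary initial density bump

total_before = rho.sum() * dx
for _ in range(500):
    J = rho * v
    # conservative upwind update with periodic boundaries
    rho = rho - dt * (J - np.roll(J, 1)) / dx
total_after = rho.sum() * dx

print(f"integrated density before: {total_before:.6f}")
print(f"integrated density after:  {total_after:.6f}")  # equal up to round-off
```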
=== Laws of classical mechanics === ==== Principle of least action ==== Classical mechanics, including Newton's laws, Lagrange's equations, Hamilton's equations, etc., can be derived from the following principle: $\delta \mathcal{S} = \delta \int_{t_{1}}^{t_{2}} L(\mathbf{q}, \dot{\mathbf{q}}, t)\,dt = 0$ where $\mathcal{S}$ is the action; the integral of the Lagrangian $L(\mathbf{q}, \dot{\mathbf{q}}, t) = T(\dot{\mathbf{q}}, t) - V(\mathbf{q}, \dot{\mathbf{q}}, t)$ of the physical system between two times t1 and t2. The kinetic energy of the system is T (a function of the rate of change of the configuration of the system), and potential energy is V (a function of the configuration and its rate of change). The configuration of a system which has N degrees of freedom is defined by generalized coordinates q = (q1, q2, ... qN). There are generalized momenta conjugate to these coordinates, p = (p1, p2, ..., pN), where: $p_{i} = \frac{\partial L}{\partial \dot{q}_{i}}$ The action and Lagrangian both contain the dynamics of the system for all times. The term "path" simply refers to a curve traced out by the system in terms of the generalized coordinates in the configuration space, i.e. the curve q(t), parameterized by time (see also parametric equation for this concept). The action is a functional rather than a function, since it depends on the Lagrangian, and the Lagrangian depends on the path q(t), so the action depends on the entire "shape" of the path for all times (in the time interval from t1 to t2). Between two instants of time, there are infinitely many paths, but one for which the action is stationary (to the first order) is the true path. The stationary value for the entire continuum of Lagrangian values corresponding to some path, not just one value of the Lagrangian, is required (in other words it is not as simple as "differentiating a function and setting it to zero, then solving the equations to find the points of maxima and minima etc", rather this idea is applied to the entire "shape" of the function, see calculus of variations for more details on this procedure). Notice that L is not the total energy E of the system, because L is the difference of the kinetic and potential energies rather than their sum: $E = T + V$ The following general approaches to classical mechanics are summarized below in the order of establishment. They are equivalent formulations. Newton's is commonly used due to simplicity, but Hamilton's and Lagrange's equations are more general, and their range can extend into other branches of physics with suitable modifications. From the above, any equation of motion in classical mechanics can be derived. Corollaries in mechanics : Euler's laws of motion Euler's equations (rigid body dynamics) Corollaries in fluid mechanics : Equations describing fluid flow in various situations can be derived, using the above classical equations of motion and often conservation of mass, energy and momentum. Some elementary examples follow. Archimedes' principle Bernoulli's principle Poiseuille's law Stokes' law Navier–Stokes equations Faxén's law === Laws of gravitation and relativity === Some of the more famous laws of nature are found in Isaac Newton's theories of (now) classical mechanics, presented in his Philosophiae Naturalis Principia Mathematica, and in Albert Einstein's theory of relativity. 
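To make the principle of least action stated above concrete, here is a small numerical sketch. It assumes a simple harmonic oscillator with unit mass and unit spring constant on an invented time interval, and compares the discretised action of the exact path with that of slightly perturbed paths sharing the same endpoints; it is meant only as an illustration of stationarity, not as a general method.

```python
import numpy as np

# Harmonic oscillator with unit mass and spring constant; exact path q(t) = cos(t).
T, N = 2.0, 2000                       # illustrative time interval and grid size
t = np.linspace(0.0, T, N)
dt = t[1] - t[0]

def action(q):
    """Discretised action S = integral of (kinetic - potential energy) dt."""
    v = np.gradient(q, dt)             # velocity by finite differences
    return np.sum(0.5 * v**2 - 0.5 * q**2) * dt

true_path = np.cos(t)                  # solves the equation of motion
bump = np.sin(np.pi * t / T)           # perturbation that vanishes at both endpoints

for eps in (0.0, 0.1, 0.2, 0.3):
    print(f"eps = {eps:4.2f}   S = {action(true_path + eps * bump):+.6f}")
# The action is smallest at eps = 0: deforming the true path while holding the
# endpoints fixed only increases S, which is the stationarity the principle asserts.
```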
==== Modern laws ==== Special relativity : The two postulates of special relativity are not "laws" in themselves, but assumptions of their nature in terms of relative motion. They can be stated as "the laws of physics are the same in all inertial frames" and "the speed of light is constant and has the same value in all inertial frames". The said postulates lead to the Lorentz transformations – the transformation law between two frames of reference moving relative to each other. For any 4-vector, $A' = \Lambda A$; this replaces the Galilean transformation law from classical mechanics. The Lorentz transformations reduce to the Galilean transformations for low velocities much less than the speed of light c. The magnitudes of 4-vectors are invariants – not "conserved", but the same for all inertial frames (i.e. every observer in an inertial frame will agree on the same value); in particular, if A is the four-momentum, from its magnitude one can derive the famous invariant equation for mass–energy and momentum conservation (see invariant mass): $E^{2} = (pc)^{2} + (mc^{2})^{2}$ in which the (more famous) mass–energy equivalence $E = mc^{2}$ is a special case. General relativity : General relativity is governed by the Einstein field equations, which describe the curvature of space-time due to mass–energy equivalent to the gravitational field. Solving the equation for the geometry of space warped due to the mass distribution gives the metric tensor. Using the geodesic equation, the motion of masses falling along the geodesics can be calculated. Gravitoelectromagnetism : In a relatively flat spacetime due to weak gravitational fields, gravitational analogues of Maxwell's equations can be found; the GEM equations, to describe an analogous gravitomagnetic field. They are well established by the theory, and experimental tests form ongoing research. ==== Classical laws ==== Kepler's laws, though originally discovered from planetary observations (also due to Tycho Brahe), are true for any central forces. === Thermodynamics === Newton's law of cooling Fourier's law Ideal gas law, combines a number of separately developed gas laws; Boyle's law Charles's law Gay-Lussac's law Avogadro's law, into one now improved by other equations of state Dalton's law (of partial pressures) Boltzmann equation Carnot's theorem Kopp's law === Electromagnetism === Maxwell's equations give the time-evolution of the electric and magnetic fields due to electric charge and current distributions. Given the fields, the Lorentz force law is the equation of motion for charges in the fields. These equations can be modified to include magnetic monopoles, and are consistent with our observations of monopoles either existing or not existing; if they do not exist, the generalized equations reduce to the ones above; if they do, the equations become fully symmetric in electric and magnetic charges and currents. Indeed, there is a duality transformation where electric and magnetic charges can be "rotated into one another", and still satisfy Maxwell's equations. Pre-Maxwell laws : These laws were found before the formulation of Maxwell's equations. They are not fundamental, since they can be derived from Maxwell's equations. Coulomb's law can be found from Gauss's law (electrostatic form) and the Biot–Savart law can be deduced from Ampere's law (magnetostatic form). Lenz's law and Faraday's law can be incorporated into the Maxwell–Faraday equation. 
Nonetheless, they are still very effective for simple calculations. Lenz's law Coulomb's law Biot–Savart law Other laws : Ohm's law Kirchhoff's laws Joule's law === Photonics === Classically, optics is based on a variational principle: light travels from one point in space to another in the shortest time. Fermat's principle In geometric optics laws are based on approximations in Euclidean geometry (such as the paraxial approximation). Law of reflection Law of refraction, Snell's law In physical optics, laws are based on physical properties of materials. Brewster's angle Malus's law Beer–Lambert law In actuality, optical properties of matter are significantly more complex and require quantum mechanics. === Laws of quantum mechanics === Quantum mechanics has its roots in postulates. This leads to results which are not usually called "laws", but hold the same status, in that all of quantum mechanics follows from them. These postulates can be summarized as follows: The state of a physical system, be it a particle or a system of many particles, is described by a wavefunction. Every physical quantity is described by an operator acting on the system; the measured quantity has a probabilistic nature. The wavefunction obeys the Schrödinger equation. Solving this wave equation predicts the time-evolution of the system's behavior, analogous to solving Newton's laws in classical mechanics. Two identical particles, such as two electrons, cannot be distinguished from one another by any means. Physical systems are classified by their symmetry properties. These postulates in turn imply many other phenomena, e.g., uncertainty principles and the Pauli exclusion principle. === Radiation laws === Applying electromagnetism, thermodynamics, and quantum mechanics, to atoms and molecules, some laws of electromagnetic radiation and light are as follows. Stefan–Boltzmann law Planck's law of black-body radiation Wien's displacement law Radioactive decay law == Laws of chemistry == Chemical laws are those laws of nature relevant to chemistry. Historically, observations led to many empirical laws, though now it is known that chemistry has its foundations in quantum mechanics. Quantitative analysis : The most fundamental concept in chemistry is the law of conservation of mass, which states that there is no detectable change in the quantity of matter during an ordinary chemical reaction. Modern physics shows that it is actually energy that is conserved, and that energy and mass are related; a concept which becomes important in nuclear chemistry. Conservation of energy leads to the important concepts of equilibrium, thermodynamics, and kinetics. Additional laws of chemistry elaborate on the law of conservation of mass. Joseph Proust's law of definite composition says that pure chemicals are composed of elements in a definite formulation; we now know that the structural arrangement of these elements is also important. Dalton's law of multiple proportions says that these chemicals will present themselves in proportions that are small whole numbers; although in many systems (notably biomacromolecules and minerals) the ratios tend to require large numbers, and are frequently represented as a fraction. The law of definite composition and the law of multiple proportions are the first two of the three laws of stoichiometry, the proportions by which the chemical elements combine to form chemical compounds. 
The third law of stoichiometry is the law of reciprocal proportions, which provides the basis for establishing equivalent weights for each chemical element. Elemental equivalent weights can then be used to derive atomic weights for each element. More modern laws of chemistry define the relationship between energy and its transformations. Reaction kinetics and equilibria : In equilibrium, molecules exist in mixture defined by the transformations possible on the timescale of the equilibrium, and are in a ratio defined by the intrinsic energy of the molecules—the lower the intrinsic energy, the more abundant the molecule. Le Chatelier's principle states that the system opposes changes in conditions from equilibrium states, i.e. there is an opposition to change the state of an equilibrium reaction. Transforming one structure to another requires the input of energy to cross an energy barrier; this can come from the intrinsic energy of the molecules themselves, or from an external source which will generally accelerate transformations. The higher the energy barrier, the slower the transformation occurs. There is a hypothetical intermediate, or transition structure, that corresponds to the structure at the top of the energy barrier. The Hammond–Leffler postulate states that this structure looks most similar to the product or starting material which has intrinsic energy closest to that of the energy barrier. Stabilizing this hypothetical intermediate through chemical interaction is one way to achieve catalysis. All chemical processes are reversible (law of microscopic reversibility) although some processes have such an energy bias, they are essentially irreversible. The reaction rate has the mathematical parameter known as the rate constant. The Arrhenius equation gives the temperature and activation energy dependence of the rate constant, an empirical law. Thermochemistry : Dulong–Petit law Gibbs–Helmholtz equation Hess's law Gas laws : Raoult's law Henry's law Chemical transport : Fick's laws of diffusion Graham's law Lamm equation == Laws of biology == === Ecology === Competitive exclusion principle or Gause's law === Genetics === Mendelian laws (Dominance and Uniformity, segregation of genes, and Independent Assortment) Hardy–Weinberg principle === Natural selection === Whether or not Natural Selection is a "law of nature" is controversial among biologists. Henry Byerly, an American philosopher known for his work on evolutionary theory, discussed the problem of interpreting a principle of natural selection as a law. He suggested a formulation of natural selection as a framework principle that can contribute to a better understanding of evolutionary theory. His approach was to express relative fitness, the propensity of a genotype to increase in proportionate representation in a competitive environment, as a function of adaptedness (adaptive design) of the organism. == Laws of Earth sciences == === Geography === Arbia's law of geography Tobler's first law of geography Tobler's second law of geography === Geology === Archie's law Buys Ballot's law Birch's law Byerlee's law Principle of original horizontality Law of superposition Principle of lateral continuity Principle of cross-cutting relationships Principle of faunal succession Principle of inclusions and components Walther's law == Other fields == Some mathematical theorems and axioms are referred to as laws because they provide logical foundation to empirical laws. 
Examples of other observed phenomena sometimes described as laws include the Titius–Bode law of planetary positions, Zipf's law of linguistics, and Moore's law of technological growth. Many of these laws fall within the scope of uncomfortable science. Other laws are pragmatic and observational, such as the law of unintended consequences. By analogy, principles in other fields of study are sometimes loosely referred to as "laws". These include Occam's razor as a principle of philosophy and the Pareto principle of economics. == History == The observation and detection of underlying regularities in nature date from prehistoric times – the recognition of cause-and-effect relationships implicitly recognises the existence of laws of nature. The recognition of such regularities as independent scientific laws per se, though, was limited by their entanglement in animism, and by the attribution of many effects that do not have readily obvious causes—such as physical phenomena—to the actions of gods, spirits, supernatural beings, etc. Observation and speculation about nature were intimately bound up with metaphysics and morality. In Europe, systematic theorizing about nature (physis) began with the early Greek philosophers and scientists and continued into the Hellenistic and Roman imperial periods, during which times the intellectual influence of Roman law increasingly became paramount.The formula "law of nature" first appears as "a live metaphor" favored by Latin poets Lucretius, Virgil, Ovid, Manilius, in time gaining a firm theoretical presence in the prose treatises of Seneca and Pliny. Why this Roman origin? According to [historian and classicist Daryn] Lehoux's persuasive narrative, the idea was made possible by the pivotal role of codified law and forensic argument in Roman life and culture. For the Romans ... the place par excellence where ethics, law, nature, religion and politics overlap is the law court. When we read Seneca's Natural Questions, and watch again and again just how he applies standards of evidence, witness evaluation, argument and proof, we can recognize that we are reading one of the great Roman rhetoricians of the age, thoroughly immersed in forensic method. And not Seneca alone. Legal models of scientific judgment turn up all over the place, and for example prove equally integral to Ptolemy's approach to verification, where the mind is assigned the role of magistrate, the senses that of disclosure of evidence, and dialectical reason that of the law itself. The precise formulation of what are now recognized as modern and valid statements of the laws of nature dates from the 17th century in Europe, with the beginning of accurate experimentation and the development of advanced forms of mathematics. During this period, natural philosophers such as Isaac Newton (1642–1727) were influenced by a religious view – stemming from medieval concepts of divine law – which held that God had instituted absolute, universal and immutable physical laws. In chapter 7 of The World, René Descartes (1596–1650) described "nature" as matter itself, unchanging as created by God, thus changes in parts "are to be attributed to nature. The rules according to which these changes take place I call the 'laws of nature'." The modern scientific method which took shape at this time (with Francis Bacon (1561–1626) and Galileo (1564–1642)) contributed to a trend of separating science from theology, with minimal speculation about metaphysics and ethics. 
(Natural law in the political sense, conceived as universal (i.e., divorced from sectarian religion and accidents of place), was also elaborated in this period by scholars such as Grotius (1583–1645), Spinoza (1632–1677), and Hobbes (1588–1679).) The distinction between natural law in the political-legal sense and law of nature or physical law in the scientific sense is a modern one, both concepts being equally derived from physis, the Greek word (translated into Latin as natura) for nature. == See also == == References == == Further reading == == External links == Physics Formulary, a useful book in different formats containing many of the physical laws and formulae. Eformulae.com, website containing most of the formulae in different disciplines. Stanford Encyclopedia of Philosophy: "Laws of Nature" by John W. Carroll. Baaquie, Belal E. "Laws of Physics : A Primer". Core Curriculum, National University of Singapore. Francis, Erik Max. "The laws list". Physics. Alcyone Systems Pazameta, Zoran. "The laws of nature". Archived 2014-02-26 at the Wayback Machine Committee for the scientific investigation of Claims of the Paranormal. The Internet Encyclopedia of Philosophy. "Laws of Nature" – By Norman Swartz Mark Buchanan; Frank Close; Nancy Cartwright; Melvyn Bragg (host) (Oct 19, 2000). "Laws of Nature". In Our Time. BBC Radio 4.
Wikipedia/Laws_of_physics
Psychophysics is the field of psychology which quantitatively investigates the relationship between physical stimuli and the sensations and perceptions they produce. Psychophysics has been described as "the scientific study of the relation between stimulus and sensation" or, more completely, as "the analysis of perceptual processes by studying the effect on a subject's experience or behaviour of systematically varying the properties of a stimulus along one or more physical dimensions". Psychophysics also refers to a general class of methods that can be applied to study a perceptual system. Modern applications rely heavily on threshold measurement, ideal observer analysis, and signal detection theory. Psychophysics has widespread and important practical applications. For instance, in the realm of digital signal processing, insights from psychophysics have guided the development of models and methods for lossy compression. These models help explain why humans typically perceive minimal loss of signal quality when audio and video signals are compressed using lossy techniques. == History == Many of the classical techniques and theories of psychophysics were formulated in 1860 when Gustav Theodor Fechner in Leipzig published Elemente der Psychophysik (Elements of Psychophysics). He coined the term "psychophysics", describing research intended to relate physical stimuli to the contents of consciousness such as sensations (Empfindungen). As a physicist and philosopher, Fechner aimed at developing a method that relates matter to the mind, connecting the publicly observable world and a person's privately experienced impression of it. His ideas were inspired by experimental results on the sense of touch and light obtained in the early 1830s by the German physiologist Ernst Heinrich Weber in Leipzig, most notably those on the minimum discernible difference in intensity of stimuli of moderate strength (just noticeable difference; jnd) which Weber had shown to be a constant fraction of the reference intensity, and which Fechner referred to as Weber's law. From this, Fechner derived his well-known logarithmic scale, now known as Fechner scale. Weber's and Fechner's work formed one of the bases of psychology as a science, with Wilhelm Wundt founding the first laboratory for psychological research in Leipzig (Institut für experimentelle Psychologie). Fechner's work systematised the introspectionist approach (psychology as the science of consciousness), that had to contend with the Behaviorist approach in which even verbal responses are as physical as the stimuli. Fechner's work was studied and extended by Charles S. Peirce, who was aided by his student Joseph Jastrow, who soon became a distinguished experimental psychologist in his own right. Peirce and Jastrow largely confirmed Fechner's empirical findings, but not all. In particular, a classic experiment of Peirce and Jastrow rejected Fechner's estimation of a threshold of perception of weights. In their experiment, Peirce and Jastrow in fact invented randomized experiments: They randomly assigned volunteers to a blinded, repeated-measures design to evaluate their ability to discriminate weights. On the basis of their results they argued that the underlying functions were continuous, and that there is no threshold below which a difference in physical magnitude would be undetected. 
Peirce's experiment inspired other researchers in psychology and education, which developed a research tradition of randomized experiments in laboratories and specialized textbooks in the 1900s. The Peirce–Jastrow experiments were conducted as part of Peirce's application of his pragmaticism program to human perception; other studies considered the perception of light, etc. Jastrow wrote the following summary: "Mr. Peirce's courses in logic gave me my first real experience of intellectual muscle. Though I promptly took to the laboratory of psychology when that was established by Stanley Hall, it was Peirce who gave me my first training in the handling of a psychological problem, and at the same time stimulated my self-esteem by entrusting me, then fairly innocent of any laboratory habits, with a real bit of research. He borrowed the apparatus for me, which I took to my room, installed at my window, and with which, when conditions of illumination were right, I took the observations. The results were published over our joint names in the Proceedings of the National Academy of Sciences. The demonstration that traces of sensory effect too slight to make any registry in consciousness could none the less influence judgment, may itself have been a persistent motive that induced me years later to undertake a book on The Subconscious." This work clearly distinguishes observable cognitive performance from the expression of consciousness. Modern approaches to sensory perception, such as research on vision, hearing, or touch, measure what the perceiver's judgment extracts from the stimulus, often putting aside the question what sensations are being experienced. One leading method is based on signal detection theory, developed for cases of very weak stimuli. However, the subjectivist approach persists among those in the tradition of Stanley Smith Stevens (1906–1973). Stevens revived the idea of a power law suggested by 19th century researchers, in contrast with Fechner's log-linear function (cf. Stevens' power law). He also advocated the assignment of numbers in ratio to the strengths of stimuli, called magnitude estimation. Stevens added techniques such as magnitude production and cross-modality matching. He opposed the assignment of stimulus strengths to points on a line that are labeled in order of strength. Nevertheless, that sort of response has remained popular in applied psychophysics. Such multiple-category layouts are often misnamed Likert scaling after the question items used by Likert to create multi-item psychometric scales, e.g., seven phrases from "strongly agree" through "strongly disagree". Omar Khaleefa has argued that the medieval scientist Alhazen should be considered the founder of psychophysics. Although al-Haytham made many subjective reports regarding vision, there is no evidence that he used quantitative psychophysical techniques and such claims have been rebuffed. == Thresholds == Psychophysicists usually employ experimental stimuli that can be objectively measured, such as pure tones varying in intensity, or lights varying in luminance. All the canonical senses have been studied: vision, hearing, touch (including skin and enteric perception), taste, smell, and the sense of time. Regardless of the sensory domain, there are three main areas of investigation: absolute thresholds, discrimination thresholds (e.g. the just-noticeable difference), and scaling. 
A threshold (or limen) is the point of intensity at which the participant can just detect the presence of a stimulus (absolute threshold) or the difference between two stimuli (difference threshold). Stimuli with intensities below this threshold are not detectable and are considered subliminal. Stimuli at values close to a threshold may be detectable on some occasions; therefore, a threshold is defined as the point at which a stimulus or change in a stimulus is detected on a certain proportion p of trials. === Detection === An absolute threshold is the level of intensity at which a subject can detect the presence of a stimulus a certain proportion of the time; a p level of 50% is commonly used. For example, consider the absolute threshold for tactile sensation on the back of one's hand. A participant might not feel a single hair being touched, but might detect the touch of two or three hairs, as this exceeds the threshold. The absolute threshold is also often referred to as the detection threshold. Various methods are employed to measure absolute thresholds, similar to those used for discrimination thresholds (see below). === Discrimination === A difference threshold (or just-noticeable difference, JND) is the magnitude of the smallest difference between two stimuli of differing intensities that a participant can detect a certain proportion of the time, with the specific percentage depending on the task. Several methods are employed to test this threshold. For instance, the subject may be asked to adjust one stimulus until it is perceived as identical to another (method of adjustment), to describe the direction and magnitude of the difference between two stimuli, or to decide whether the intensities in a pair of stimuli are the same or different (forced choice). The just-noticeable difference is not a fixed quantity; rather, it varies depending on the intensity of the stimuli and the specific sense being tested. According to Weber's Law, the just-noticeable difference for any stimulus is a constant proportion, regardless of variations in intensity. In discrimination experiments, the experimenter seeks to determine at what point the difference between two stimuli, such as two weights or two sounds, becomes detectable. The subject is presented with one stimulus, for example, a weight, and is asked to say whether another weight is heavier or lighter. In some experiments, the subject may also indicate that the two weights are the same. At the point of subjective equality (PSE), the subject perceives both weights as identical. The just-noticeable difference, or difference limen (DL), is the magnitude of the difference in stimuli that the subject notices some proportion p of the time; typically, 50% is used for p in the comparison task. Additionally, the two-alternative forced choice (2AFC) paradigm is used to assess the point at which performance reduces to chance in discriminating between two alternatives; here, p is typically 75%, as a 50% success rate corresponds to chance in the 2AFC task. Absolute and difference thresholds are sometimes considered similar in principle because background noise always interferes with our ability to detect stimuli. == Experimentation == In psychophysics, experiments seek to determine whether the subject can detect a stimulus, identify it, differentiate between it and another stimulus, or describe the magnitude or nature of this difference. Software for psychophysical experimentation is overviewed by Strasburger. 
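A toy sketch of the Weber's-law relation described above: the Weber fraction of 0.1 and the reference intensities are assumed values chosen only to show how the just-noticeable difference scales with the reference, and the Fechner-style logarithmic sensation scale is included for comparison.

```python
import math

def jnd(reference, weber_fraction=0.1):
    """Just-noticeable difference under Weber's law: delta I = k * I.
    The fraction k = 0.1 is an illustrative assumption; real values depend
    on the modality and on the intensity range tested."""
    return weber_fraction * reference

def fechner_sensation(intensity, k=1.0, absolute_threshold=1.0):
    """Fechner's logarithmic scale S = k * ln(I / I0), derived from Weber's law."""
    return k * math.log(intensity / absolute_threshold)

for i in (10, 100, 1000):
    print(f"reference {i:5d} -> JND {jnd(i):6.1f}, sensation {fechner_sensation(i):5.2f}")
# The detectable change grows in proportion to the reference intensity, while the
# sensation scale grows only logarithmically.
```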
=== Classical psychophysical methods === Psychophysical experiments have traditionally used three methods for testing subjects' perception in stimulus detection and difference detection experiments: the method of limits, the method of constant stimuli and the method of adjustment. ==== Method of limits ==== In the ascending method of limits, some property of the stimulus starts out at a level so low that the stimulus could not be detected, then this level is gradually increased until the participant reports that they are aware of it. For example, if the experiment is testing the minimum amplitude of sound that can be detected, the sound begins too quietly to be perceived, and is made gradually louder. In the descending method of limits, this is reversed. In each case, the threshold is considered to be the level of the stimulus property at which the stimuli are just detected. In experiments, the ascending and descending methods are used alternately and the thresholds are averaged. A possible disadvantage of these methods is that the subject may become accustomed to reporting that they perceive a stimulus and may continue reporting the same way even beyond the threshold (the error of habituation). Conversely, the subject may also anticipate that the stimulus is about to become detectable or undetectable and may make a premature judgment (the error of anticipation). To avoid these potential pitfalls, Georg von Békésy introduced the staircase procedure in 1960 in his study of auditory perception. In this method, the sound starts out audible and gets quieter after each of the subject's responses, until the subject does not report hearing it. At that point, the sound is made louder at each step, until the subject reports hearing it, at which point it is made quieter in steps again. This way the experimenter is able to "zero in" on the threshold. ==== Method of constant stimuli ==== Instead of being presented in ascending or descending order, in the method of constant stimuli the levels of a certain property of the stimulus are not related from one trial to the next, but presented randomly. This prevents the subject from being able to predict the level of the next stimulus, and therefore reduces errors of habituation and expectation. For 'absolute thresholds' again the subject reports whether they are able to detect the stimulus. For 'difference thresholds' there has to be a constant comparison stimulus with each of the varied levels. Friedrich Hegelmaier described the method of constant stimuli in an 1852 paper. This method allows for full sampling of the psychometric function, but can result in a lot of trials when several conditions are interleaved. ==== Method of adjustment ==== In the method of adjustment, the subject is asked to control the level of the stimulus and to alter it until it is just barely detectable against the background noise, or is the same as the level of another stimulus. The adjustment is repeated many times. This is also called the method of average error. In this method, the observers themselves control the magnitude of the variable stimulus, beginning with a level that is distinctly greater or lesser than a standard one and vary it until they are satisfied by the subjective equality of the two. The difference between the variable stimuli and the standard one is recorded after each adjustment, and the error is tabulated for a considerable series. At the end, the mean is calculated giving the average error which can be taken as a measure of sensitivity. 
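The staircase idea introduced above (and developed further in the adaptive methods below) can be sketched in a few lines of code. The logistic observer model, starting level, and step size are illustrative assumptions made only for this sketch, not part of any published procedure.

```python
import math
import random

def simulated_observer(intensity, threshold=0.5, slope=10.0):
    """Detection follows an assumed logistic psychometric function (illustrative only)."""
    p_detect = 1.0 / (1.0 + math.exp(-slope * (intensity - threshold)))
    return random.random() < p_detect

def simple_staircase(start=1.0, step=0.05, n_trials=80):
    """Simple up-down track: step down after a detection, step up after a miss."""
    intensity, last, reversals = start, None, []
    for _ in range(n_trials):
        detected = simulated_observer(intensity)
        if last is not None and detected != last:
            reversals.append(intensity)           # response changed direction: a reversal
        last = detected
        intensity += -step if detected else step  # quieter if detected, louder if missed
    tail = reversals[-6:] or [intensity]          # average the last few reversals
    return sum(tail) / len(tail)

print(f"estimated threshold ~ {simple_staircase():.3f}")
```

A simple up-down rule of this kind converges near the 50% point of the assumed psychometric function; the transformed (1-up-N-down) rules discussed in the next section converge at other points.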
=== Adaptive psychophysical methods === The classic methods of experimentation are often argued to be inefficient. This is because, in advance of testing, the psychometric threshold is usually unknown and most of the data are collected at points on the psychometric function that provide little information about the parameter of interest, usually the threshold. Adaptive staircase procedures (or the classical method of adjustment) can be used such that the points sampled are clustered around the psychometric threshold. Data points can also be spread over a slightly wider range if the psychometric function's slope is also of interest. Adaptive methods can thus be optimized for estimating the threshold only, or both threshold and slope. Adaptive methods are classified into staircase procedures (see below) and Bayesian, or maximum-likelihood, methods. Staircase methods rely on the previous response only, and are easier to implement. Bayesian methods take the whole set of previous stimulus–response pairs into account and are generally more robust against lapses in attention. ==== Staircase procedures ==== Staircases usually begin with a high intensity stimulus, which is easy to detect. The intensity is then reduced until the observer makes a mistake, at which point the staircase 'reverses' and intensity is increased until the observer responds correctly, triggering another reversal. The values for the last of these 'reversals' are then averaged. There are many different types of staircase procedures, using different decision and termination rules. Step size, up/down rules, and the spread of the underlying psychometric function dictate where on the psychometric function they converge. Threshold values obtained from staircases can fluctuate wildly, so care must be taken in their design. Many different staircase algorithms have been modeled, and practical recommendations have been suggested by Garcia-Perez. One of the more common staircase designs (with fixed step sizes) is the 1-up-N-down staircase. If the participant makes the correct response N times in a row, the stimulus intensity is reduced by one step size. If the participant makes an incorrect response, the stimulus intensity is increased by one step size. A threshold is estimated from the mean midpoint of all runs. This estimate approaches the correct threshold asymptotically. ==== Bayesian and maximum-likelihood procedures ==== Bayesian and maximum-likelihood (ML) adaptive procedures behave, from the observer's perspective, similarly to the staircase procedures. The choice of the next intensity level works differently, however: after each observer response, the likelihood of where the threshold lies is calculated from this and all previous stimulus–response pairs. The point of maximum likelihood is then chosen as the best estimate for the threshold, and the next stimulus is presented at that level (since a decision at that level will add the most information). In a Bayesian procedure, a prior likelihood is further included in the calculation. Compared to staircase procedures, Bayesian and ML procedures are more time-consuming to implement but are considered to be more robust. Well-known procedures of this kind are QUEST, ML-PEST, and Kontsevich and Tyler's method. ==== Magnitude estimation ==== In the prototypical case, people are asked to assign numbers in proportion to the magnitude of the stimulus. 
This psychometric function of the geometric means of their numbers is often a power law with a stable, replicable exponent. Although contexts can change the law and exponent, that change too is stable and replicable. Instead of numbers, other sensory or cognitive dimensions can be used to match a stimulus, and the method then becomes "magnitude production" or "cross-modality matching". The exponents of those dimensions found in numerical magnitude estimation predict the exponents found in magnitude production. Magnitude estimation generally finds lower exponents for the psychophysical function than multiple-category responses, because of the restricted range of the categorical anchors, such as those used by Likert as items in attitude scales. == See also == == Notes == == References == Steingrimsson, R.; Luce, R. D. (2006). "Empirical evaluation of a model of global psychophysical judgments: III. A form for the psychophysical function and intensity filtering". Journal of Mathematical Psychology. 50: 15–29. doi:10.1016/j.jmp.2005.11.005. Kingdom, Frederick A. A.; Prins, Nicolaas (2016). Psychophysics: A Practical Introduction (2nd ed.). Elsevier. ISBN 9780124071568. == External links == German website about a dissertation project with an animation of the staircase method (Transformed Up/Down Staircase Method)
Wikipedia/Psychophysics
Econophysics is a heterodox (in economics) interdisciplinary research field, applying theories and methods originally developed by physicists in order to solve problems in economics, usually those including uncertainty or stochastic processes and nonlinear dynamics. Some of its applications to the study of financial markets have also been termed statistical finance, reflecting its roots in statistical physics. Econophysics is closely related to social physics. == History == Physicists' interest in the social sciences is not new; Daniel Bernoulli, for example, was the originator of utility-based preferences. One of the founders of neoclassical economic theory, former Yale University Professor of Economics Irving Fisher, was originally trained under the renowned Yale physicist Josiah Willard Gibbs. Likewise, Jan Tinbergen, who won the first Nobel Memorial Prize in Economic Sciences in 1969 for having developed and applied dynamic models for the analysis of economic processes, studied physics with Paul Ehrenfest at Leiden University. In particular, Tinbergen developed the gravity model of international trade that has become the workhorse of international economics. Econophysics was started in the mid-1990s by several physicists working in the subfield of statistical mechanics. Unsatisfied with the traditional explanations and approaches of economists – which usually prioritized simplified approaches for the sake of soluble theoretical models over agreement with empirical data – they applied tools and methods from physics, first to try to match financial data sets, and then to explain more general economic phenomena. One driving force behind econophysics arising at this time was the sudden availability of large amounts of financial data, starting in the 1980s. It became apparent that traditional methods of analysis were insufficient – standard economic methods dealt with homogeneous agents and equilibrium, while many of the more interesting phenomena in financial markets fundamentally depended on heterogeneous agents and far-from-equilibrium situations. The term "econophysics" was coined by H. Eugene Stanley, to describe the large number of papers written by physicists on the problems of (stock and other) markets, at a conference on statistical physics in Kolkata (erstwhile Calcutta) in 1995, and it first appeared in the conference's proceedings publication in Physica A in 1996. The inaugural meeting on econophysics was organised in 1998 in Budapest by János Kertész and Imre Kondor. The first book on econophysics was by R. N. Mantegna and H. E. Stanley in 2000. Also in 1998, the Palermo International Workshop on Econophysics and Statistical Finance was held at the University of Palermo. The related "Econophysics Colloquium", now an annual event, was first held in Canberra in 2005. The 2018 Econophysics Colloquium was held in Palermo on the 30th anniversary of the original Palermo Workshop; it was organized by Rosario N. Mantegna and Salvatore Miccichè. Recurring meeting series on the topic include Econophys-Kolkata (held in Kolkata and Delhi), the Econophysics Colloquium, and ESHIA/WEHIA. == Basic tools == Basic tools of econophysics are probabilistic and statistical methods often taken from statistical physics. 
Physics models that have been applied in economics include the kinetic theory of gas (called the kinetic exchange models of markets), percolation models, chaotic models developed to study cardiac arrest, and models with self-organizing criticality, as well as other models developed for earthquake prediction. Moreover, there have been attempts to use the mathematical theory of complexity and information theory, as developed by many scientists among whom are Murray Gell-Mann and Claude E. Shannon, respectively. For potential games, it has been shown that an emergence-producing equilibrium based on information via Shannon information entropy produces the same equilibrium measure (Gibbs measure from statistical mechanics) as a stochastic dynamical equation which represents noisy decisions, both of which are based on bounded rationality models used by economists. The fluctuation-dissipation theorem connects the two to establish a concrete correspondence of "temperature", "entropy", "free potential/energy", and other physics notions to an economics system. The statistical mechanics model is not constructed a priori – it is a result of a boundedly rational assumption and modeling on existing neoclassical models. It has been used to prove the "inevitability of collusion" result of Huw Dixon in a case for which the neoclassical version of the model does not predict collusion. Here demand is increasing, as with Veblen goods, with stock buyers subject to the "hot hand" fallacy preferring to buy more successful stocks and sell less successful ones, or among short traders during a short squeeze, as occurred with the WallStreetBets group's collusion to drive up the GameStop stock price in 2021. Nobel laureate and founder of experimental economics Vernon L. Smith has used econophysics to model sociability via implementation of ideas in Humanomics. There, noisy decision making and interaction parameters that facilitate the social action responses of reward and punishment result in spin glass models identical to those in physics. Quantifiers derived from information theory were used in several papers by econophysicist Aurelio F. Bariviera and coauthors in order to assess the degree of informational efficiency of stock markets. Zunino et al. use an innovative statistical tool in the financial literature: the complexity-entropy causality plane. This Cartesian representation establishes an efficiency ranking of different markets and distinguishes different bond market dynamics. It was found that more developed countries have stock markets with higher entropy and lower complexity, while those markets from emerging countries have lower entropy and higher complexity. Moreover, the authors conclude that the classification derived from the complexity-entropy causality plane is consistent with the qualifications assigned by major rating companies to the sovereign instruments. A similar study developed by Bariviera et al. explores the relationship between credit ratings and informational efficiency of a sample of corporate bonds of US oil and energy companies, also using the complexity–entropy causality plane. They find that this classification agrees with the credit ratings assigned by Moody's. Another good example is random matrix theory, which can be used to identify the noise in financial correlation matrices. One paper has argued that this technique can improve the performance of portfolios, e.g., when applied to portfolio optimization. 
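A minimal sketch of the random-matrix idea mentioned above, under purely illustrative assumptions (synthetic Gaussian returns and no real market structure): eigenvalues of the empirical correlation matrix that fall inside the Marchenko–Pastur band expected for pure noise are treated as uninformative, and only eigenvalues above the band are retained as signal.

```python
import numpy as np

def marchenko_pastur_bounds(n_obs, n_assets):
    """Eigenvalue band expected for the correlation matrix of pure i.i.d. noise."""
    q = n_assets / n_obs
    return (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2

# Synthetic example: 100 assets, 500 observations of uncorrelated returns
rng = np.random.default_rng(0)
n_obs, n_assets = 500, 100
returns = rng.standard_normal((n_obs, n_assets))

corr = np.corrcoef(returns, rowvar=False)     # empirical correlation matrix
eigvals = np.linalg.eigvalsh(corr)

lo, hi = marchenko_pastur_bounds(n_obs, n_assets)
signal = eigvals[eigvals > hi]                # eigenvalues above the noise band
print(f"noise band [{lo:.2f}, {hi:.2f}]; eigenvalues above it: {signal.size}")
```

With real return data, the largest eigenvalue (the collective market mode) typically lies far above this band; discarding or shrinking the eigenvalues inside the band is the noise-removal step referred to above.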
The ideology of econophysics is embodied in probabilistic economic theory and, on its basis, in the unified market theory. There are also analogies between finance theory and diffusion theory. For instance, the Black–Scholes equation for option pricing is a diffusion-advection equation (see, however, critiques of the Black–Scholes methodology). The Black–Scholes theory can be extended to provide an analytical theory of the main factors in economic activities. == Subfields == Various other tools from physics have so far been used, such as fluid dynamics, classical mechanics and quantum mechanics (including so-called classical economy, quantum economics and quantum finance), and the Feynman–Kac formula of statistical mechanics.: 44  === Statistical mechanics === When mathematician Mark Kac attended a lecture by Richard Feynman, he realized their work overlapped. Together they worked out a new approach to solving stochastic differential equations. Their approach is used to efficiently calculate solutions to the Black–Scholes equation to price options on stocks. === Quantum finance === Quantum statistical models have been successfully applied to finance by several groups of econophysicists using different approaches, but the origin of their success may not be due to quantum analogies.: 668 : 969  === Quantum economics === The editorial in the inaugural issue of the journal Quantum Economics and Finance says: "Quantum economics and finance is the application of probability based on projective geometry—also known as quantum probability—to modelling in economics and finance. It draws on related areas such as quantum cognition, quantum game theory, quantum computing, and quantum physics." In his overview article in the same issue, David Orrell outlines how neoclassical economics benefited from the concepts of classical mechanics, and yet concepts of quantum mechanics "apparently left economics untouched". He reviews different avenues for quantum economics, some of which he notes are contradictory, settling on "quantum economics therefore needs to take a different kind of leaf from the book of quantum physics, by adopting quantum methods, not because they appear natural or elegant or come pre-approved by some higher authority or bear resemblance to something else, but because they capture in a useful way the most basic properties of what is being studied." == Main results == Econophysics is having some impact on the more applied field of quantitative finance, whose scope and aims significantly differ from those of economic theory. Various econophysicists have introduced models for price fluctuations in the physics of financial markets or original points of view on established models. Presently, one of the main results of econophysics comprises the explanation of the "fat tails" in the distribution of many kinds of financial data as a universal self-similar scaling property (i.e. scale invariant over many orders of magnitude in the data), arising from the tendency of individual market competitors, or of aggregates of them, to exploit systematically and optimally the prevailing "microtrends" (e.g., rising or falling prices). These "fat tails" are not only mathematically important, because they comprise the risks, which may be on the one hand very small, such that one may tend to neglect them, but which on the other hand are not negligible at all, i.e. 
they can never be made exponentially tiny, but instead follow a measurable, algebraically decreasing power law, for example with a failure probability of only P ∝ x⁻⁴, where x is an increasingly large variable in the tail region of the distribution considered (i.e. a price statistic with far more than 10⁸ data points). That is, the events considered are not simply "outliers" but must really be taken into account and cannot be "insured away". It appears that it also plays a role that near a change of the tendency (e.g. from falling to rising prices) there are typical "panic reactions" of the selling or buying agents, with algebraically increasing bargain rapidities and volumes. As in quantum field theory, the "fat tails" can be obtained by complicated "nonperturbative" methods, mainly by numerical ones, since they contain the deviations from the usual Gaussian approximations, e.g. the Black–Scholes theory. Fat tails can, however, also be due to other phenomena, such as a random number of terms in the central-limit theorem, or any number of other, non-econophysics models. Due to the difficulty in testing such models, they have received less attention in traditional economic analysis. == Criticism == In 2006 economists Mauro Gallegati, Steve Keen, Thomas Lux, and Paul Ormerod published a critique of econophysics. They cite important empirical contributions primarily in the areas of finance and industrial economics, but list four concerns with work in the field: lack of awareness of economics work, resistance to rigor, a misplaced belief in universal empirical regularity, and inappropriate models. == See also == == References == == Further reading == Emmanuel Farjoun and Moshé Machover, Laws of Chaos: a probabilistic approach to political economy, Verso (London, 1983) ISBN 0 86091 768 1 Vladimir Pokrovskii, Econodynamics. The Theory of Social Production, https://www.springer.com/gp/book/9783319720739 (Springer, 2018) Philip Mirowski, More Heat than Light – Economics as Social Physics, Physics as Nature's Economics, Cambridge University Press (Cambridge, UK, 1989) Rosario N. Mantegna, H. Eugene Stanley, An Introduction to Econophysics: Correlations and Complexity in Finance, Cambridge University Press (Cambridge, UK, 1999) Bertrand Roehner, Patterns of Speculation – A Study in Observational Econophysics, Cambridge University Press (Cambridge, UK, 2002) Joseph McCauley, Dynamics of Markets, Econophysics and Finance, Cambridge University Press (Cambridge, UK, 2004) Surya Y., Situngkir, H., Dahlan, R. M., Hariadi, Y., Suroso, R. (2004). Aplikasi Fisika dalam Analisis Keuangan (Physics Applications in Financial Analysis). Bina Sumber Daya MIPA. ISBN 9793073527 Anatoly V. Kondratenko, Physical Modeling of Economic Systems. Classical and Quantum Economies, Novosibirsk, Nauka (Science) (2005), ISBN 5-02-032479-5 Anatoly V. Kondratenko, Probabilistic Theory of Stock Exchanges, Novosibirsk, Nauka (Science) (2021), ISBN 978-5-02-041486-0 Arnab Chatterjee, Sudhakar Yarlagadda, Bikas K Chakrabarti, Econophysics of Wealth Distributions, Springer-Verlag Italia (Milan, 2005) Sitabhra Sinha, Arnab Chatterjee, Anirban Chakraborti, Bikas K Chakrabarti, Econophysics: An Introduction, Wiley-VCH (2010) Ubaldo Garibaldi and Enrico Scalas, Finitary Probabilistic Methods in Econophysics, Cambridge University Press (Cambridge, UK, 2010). 
Mark Buchanan, What has econophysics ever done for us?, Nature, 2013 Nature Physics Focus issue: Complex networks in finance, March 2013, Volume 9, No 3, pp 119–128 Martin Shubik and Eric Smith, The Guidance of an Enterprise Economy, MIT Press (2016) Abergel, F., Aoyama, H., Chakrabarti, B.K., Chakraborti, A., Deo, N., Raina, D., Vodenska, I. (Eds.), Econophysics and Sociophysics: Recent Progress and Future Directions, New Economic Windows Series, Springer (2017) Marcelo Byrro Ribeiro, Income Distribution Dynamics of Economic Systems: An Econophysical Approach, Cambridge University Press (Cambridge, UK, 2020). Max Greenberg and H. Oliver Gao, "Twenty-five years of random asset exchange modeling", European Physical Journal B, vol. 97, art. 69 (2024). == External links == Is Inequality Inevitable?; Scientific American, November 2019 When Physics Became Undisciplined (& Fathers of Econophysics): Cambridge University Thesis (2018) Conference to mark 25th anniversary of Farjoun and Machover's book Econophysics Colloquium === Lectures === Economic Fluctuations and Statistical Physics: Quantifying Extremely Rare and Much Less Rare Events, Eugene Stanley, Videolectures.net Applications of Statistical Physics to Understanding Complex Systems, Eugene Stanley, Videolectures.net Financial Bubbles, Real Estate Bubbles, Derivative Bubbles, and the Financial and Economic Crisis, Didier Sornette, Videolectures.net Financial crises and risk management, Didier Sornette, Videolectures.net Bubble trouble: how physics can quantify stock-market crashes, Tobias Preis, Physics World Online Lecture Series Archived 2011-12-30 at the Wayback Machine
Wikipedia/Econophysics
Laser science or laser physics is a branch of optics that describes the theory and practice of lasers. Laser science is principally concerned with quantum electronics, laser construction, optical cavity design, the physics of producing a population inversion in laser media, and the temporal evolution of the light field in the laser. It is also concerned with the physics of laser beam propagation, particularly the physics of Gaussian beams, with laser applications, and with associated fields such as nonlinear optics and quantum optics. == History == Laser science predates the invention of the laser itself. Albert Einstein created the foundations for the laser and maser in 1917, via a paper in which he re-derived Max Planck’s law of radiation using a formalism based on probability coefficients (Einstein coefficients) for the absorption, spontaneous emission, and stimulated emission of electromagnetic radiation. The existence of stimulated emission was confirmed in 1928 by Rudolf W. Ladenburg. In 1939, Valentin A. Fabrikant made the earliest laser proposal. He specified the conditions required for light amplification using stimulated emission. In 1947, Willis E. Lamb and R. C. Retherford found apparent stimulated emission in hydrogen spectra and effected the first demonstration of stimulated emission; in 1950, Alfred Kastler (Nobel Prize for Physics 1966) proposed the method of optical pumping, experimentally confirmed, two years later, by Brossel, Kastler, and Winter. The theoretical principles describing the operation of a microwave laser (a maser) were first described by Nikolay Basov and Alexander Prokhorov at the All-Union Conference on Radio Spectroscopy in May 1952. The first maser was built by Charles H. Townes, James P. Gordon, and Herbert J. Zeiger in 1953. Townes, Basov and Prokhorov were awarded the Nobel Prize in Physics in 1964 for their research in the field of stimulated emission. Arthur Ashkin, Gérard Mourou, and Donna Strickland were awarded the Nobel Prize in Physics in 2018 for groundbreaking inventions in the field of laser physics. The first working laser (a pulsed ruby laser) was demonstrated on May 16, 1960, by Theodore Maiman at the Hughes Research Laboratories. == See also == Laser acronyms List of laser types == References == == External links == A very detailed tutorial on lasers
Wikipedia/Laser_physics
Nuclear astrophysics studies the origin of the chemical elements and isotopes, and the role of nuclear energy generation, in cosmic sources such as stars, supernovae, novae, and violent binary-star interactions. It is an interdisciplinary part of both nuclear physics and astrophysics, involving close collaboration among researchers in various subfields of each of these fields. This includes, notably, nuclear reactions and their rates as they occur in cosmic environments, and modeling of astrophysical objects where these nuclear reactions may occur, but also considerations of cosmic evolution of isotopic and elemental composition (often called chemical evolution). Constraints from observations involve multiple messengers, all across the electromagnetic spectrum (nuclear gamma-rays, X-rays, optical, and radio/sub-mm astronomy), as well as isotopic measurements of solar-system materials such as meteorites and their stardust inclusions, cosmic rays, and material deposits on Earth and the Moon. Nuclear physics experiments address stability (i.e., lifetimes and masses) for atomic nuclei well beyond the regime of stable nuclides into the realm of radioactive/unstable nuclei, almost to the limits of bound nuclei (the drip lines), and under high density (up to neutron star matter) and high temperature (plasma temperatures up to 10⁹ K). Theories and simulations are essential parts herein, as cosmic nuclear reaction environments cannot be realized, but at best partially approximated by experiments. == History == In the 1940s, geologist Hans Suess speculated that the regularity that was observed in the abundances of elements may be related to structural properties of the atomic nucleus. These considerations were seeded by the discovery of radioactivity by Becquerel in 1896, which came as an aside to advances in chemistry aimed at the production of gold. This remarkable possibility for the transformation of matter created much excitement among physicists for the next decades, culminating in the discovery of the atomic nucleus, with milestones in Ernest Rutherford's scattering experiments in 1911 and the discovery of the neutron by James Chadwick (1932). After Aston demonstrated that the mass of helium is less than four times that of the proton, Eddington proposed that, through an unknown process in the Sun's core, hydrogen is transmuted into helium, liberating energy. Twenty years later, Bethe and von Weizsäcker independently derived the CN cycle, the first known nuclear reaction that accomplishes this transmutation. The interval between Eddington's proposal and the derivation of the CN cycle can mainly be attributed to an incomplete understanding of nuclear structure. The basic principles for explaining the origin of elements and energy generation in stars appear in the concepts describing nucleosynthesis, which arose in the 1940s, led by George Gamow and presented in a 2-page paper in 1948 as the Alpher–Bethe–Gamow paper. A complete concept of the processes that make up cosmic nucleosynthesis was presented in the late 1950s by Burbidge, Burbidge, Fowler, and Hoyle, and by Cameron. Fowler is largely credited with initiating collaboration between astronomers, astrophysicists, and theoretical and experimental nuclear physicists, in a field that we now know as nuclear astrophysics (for which he won the 1983 Nobel Prize). During these same decades, Arthur Eddington and others were able to link the liberation of nuclear binding energy through such nuclear reactions to the structural equations of stars. 
These developments were not without curious deviations. Many notable physicists of the 19th century, such as Mayer, Waterson, von Helmholtz, and Lord Kelvin, postulated that the Sun radiates thermal energy by converting gravitational potential energy into heat. Its lifetime as calculated from this assumption using the virial theorem, around 19 million years, was found inconsistent with the interpretation of geological records and the (then new) theory of biological evolution. Alternatively, if the Sun consisted entirely of a fossil fuel like coal, considering the rate of its thermal energy emission, its lifetime would be merely four or five thousand years, clearly inconsistent with records of human civilization. == Basic concepts == During cosmic times, nuclear reactions re-arrange the nucleons that were left behind by the Big Bang (in the form of isotopes of hydrogen and helium, and traces of lithium, beryllium, and boron) into the other isotopes and elements that we find today (see graph). The driver is the release of nuclear binding energy in exothermic reactions, favoring nuclei whose nucleons are more tightly bound – such nuclei are lighter than their original components by the binding energy. The most tightly bound nucleus made of symmetric matter of neutrons and protons is 56Ni. The release of nuclear binding energy is what allows stars to shine for up to billions of years, and may disrupt stars in stellar explosions in the case of violent reactions (such as 12C+12C fusion for thermonuclear supernova explosions). As matter is processed in this way within stars and stellar explosions, some of the products are ejected from the nuclear-reaction site and end up in interstellar gas. Then, it may form new stars, and be processed further through nuclear reactions, in a cycle of matter. This results in compositional evolution of cosmic gas in and between stars and galaxies, enriching such gas with heavier elements. Nuclear astrophysics is the science that describes and seeks to understand the nuclear and astrophysical processes within such cosmic and galactic chemical evolution, linking them to knowledge from nuclear physics and astrophysics. Measurements are used to test our understanding: astronomical constraints are obtained from stellar and interstellar abundance data of elements and isotopes, and other multi-messenger astronomical measurements of the cosmic object phenomena help to understand and model these. Nuclear properties can be obtained from terrestrial nuclear laboratories such as accelerators with their experiments. Theory and simulations are needed to understand and complement such data, providing models for nuclear reaction rates under the variety of cosmic conditions, and for the structure and dynamics of cosmic objects. == Findings, current status, and issues == Nuclear astrophysics remains a complex puzzle for science. The current consensus on the origins of elements and isotopes is that only hydrogen and helium (and traces of lithium) can be formed in a homogeneous Big Bang (see Big Bang nucleosynthesis), while all other elements and their isotopes are formed in cosmic objects that formed later, such as in stars and their explosions. The Sun's primary energy source is hydrogen fusion to helium at about 15 million degrees. The proton–proton chain reactions dominate; they occur at much lower energies, although much more slowly, than catalytic hydrogen fusion through CNO cycle reactions. 
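As a back-of-the-envelope illustration of the hydrogen burning just described, the net effect of both the pp-chains and the CNO cycle is the conversion of four protons into one helium-4 nucleus. The short sketch below, using standard atomic masses in unified atomic mass units, recovers the familiar figure of roughly 26.7 MeV released per helium nucleus (about 0.7% of the rest mass, a few percent of which is carried away by neutrinos).

```python
# Net hydrogen burning: 4 1H -> 4He (plus positrons and neutrinos).
# Atomic masses in unified atomic mass units (u); standard constants.
M_H1 = 1.007825      # atomic mass of 1H
M_HE4 = 4.002602     # atomic mass of 4He
U_TO_MEV = 931.494   # energy equivalent of 1 u in MeV

mass_defect = 4 * M_H1 - M_HE4        # ~0.0287 u lost per helium nucleus formed
energy_mev = mass_defect * U_TO_MEV   # ~26.7 MeV released
fraction = mass_defect / (4 * M_H1)   # ~0.7% of the initial rest mass

print(f"{energy_mev:.1f} MeV released ({fraction:.2%} of the rest mass)")
```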
Nuclear astrophysics gives a picture of the Sun's energy source producing a lifetime consistent with the age of the Solar System derived from meteoritic abundances of lead and uranium isotopes – an age of about 4.5 billion years. The core hydrogen burning of stars, as it now occurs in the Sun, defines the main sequence of stars, illustrated in the Hertzsprung–Russell diagram that classifies stages of stellar evolution. The Sun's lifetime of hydrogen burning via pp-chains is about 9 billion years. This is primarily determined by the extremely slow production of deuterium, which is governed by the weak interaction. Work that led to the discovery of neutrino oscillation (implying a non-zero neutrino mass, absent in the Standard Model of particle physics) was motivated by a solar neutrino flux about three times lower than expected from theories – a long-standing concern in the nuclear astrophysics community colloquially known as the solar neutrino problem. The concepts of nuclear astrophysics are supported by observation of the element technetium (the lightest chemical element without stable isotopes) in stars, by galactic gamma-ray line emitters (such as 26Al, 60Fe, and 44Ti), by radioactive-decay gamma-ray lines from the 56Ni decay chain observed from two supernovae (SN1987A and SN2014J) coincident with optical supernova light, and by observation of neutrinos from the Sun and from supernova 1987A. These observations have far-reaching implications. 26Al has a lifetime of a million years, which is very short on a galactic timescale, proving that nucleosynthesis is an ongoing process within our Milky Way Galaxy in the current epoch. Current descriptions of the cosmic evolution of elemental abundances are broadly consistent with those observed in the Solar System and galaxy. The roles of specific cosmic objects in producing these elemental abundances are clear for some elements, and heavily debated for others. For example, iron is believed to originate mostly from thermonuclear supernova explosions (also called supernovae of type Ia), while carbon and oxygen are believed to originate mostly from massive stars and their explosions. Lithium, beryllium, and boron are believed to originate from spallation reactions of cosmic-ray nuclei such as carbon and heavier nuclei, breaking these apart. Elements heavier than nickel are produced via the slow and rapid neutron capture processes, each contributing roughly half the abundance of these elements. The s-process is believed to occur in the envelopes of dying stars, whereas some uncertainty exists regarding r-process sites. The r-process is believed to occur in supernova explosions and compact object mergers, though observational evidence is limited to a single event, GW170817, and the relative yields of proposed r-process sites leading to observed heavy element abundances are uncertain. The transport of nuclear reaction products from their sources through the interstellar and intergalactic medium is also unclear. Additionally, many nuclei that are involved in cosmic nuclear reactions are unstable and may only exist temporarily in cosmic sites, and their properties (e.g., binding energy) cannot be investigated in the laboratory due to difficulties in their synthesis. Similarly, stellar structure and dynamics are not satisfactorily described in models and are hard to observe except through asteroseismology, and supernova explosion models lack a consistent description based on physical processes and include heuristic elements. 
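The 26Al argument mentioned above is simple decay arithmetic: with a half-life of roughly 0.72 million years, essentially none of any 26Al synthesized more than a few tens of millions of years ago would survive to the present, so the observed gamma-ray emission must come from recently produced material. The sketch below only illustrates this exponential decay; it is not a model of Galactic 26Al production.

```python
import math

HALF_LIFE_MYR = 0.72                         # half-life of 26Al, ~0.72 million years
mean_lifetime = HALF_LIFE_MYR / math.log(2)  # ~1.04 Myr, the "lifetime of a million years"

for t_myr in (1, 5, 10, 50):
    surviving = math.exp(-t_myr / mean_lifetime)
    print(f"after {t_myr:>2} Myr, fraction of 26Al remaining: {surviving:.2e}")
```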
Current research extensively utilizes computation and numerical modeling. == Future work == Although the foundations of nuclear astrophysics appear clear and plausible, many puzzles remain. These include understanding helium fusion (specifically the 12C(α,γ)16O reaction(s)), astrophysical sites of the r-process, anomalous lithium abundances in population II stars, the explosion mechanism in core-collapse supernovae, and progenitors of thermonuclear supernovae. == See also == Nuclear physics Astrophysics Nucleosynthesis Abundance of the chemical elements Joint Institute for Nuclear Astrophysics == References ==
Wikipedia/Nuclear_astrophysics
In cosmology, the steady-state model or steady-state theory was an alternative to the Big Bang theory. In the steady-state model, the density of matter in the expanding universe remains unchanged due to a continuous creation of matter, thus adhering to the perfect cosmological principle, a principle that says that the observable universe is always the same at any time and any place. A static universe, where space is not expanding, also obeys the perfect cosmological principle, but it cannot explain astronomical observations consistent with expansion of space. From the 1940s to the 1960s, the astrophysical community was divided between supporters of the Big Bang theory and supporters of the steady-state theory. The steady-state model is now rejected by most cosmologists, astrophysicists, and astronomers. The observational evidence points to a hot Big Bang cosmology with a finite age of the universe, which the steady-state model does not predict. == History == Cosmological expansion was originally observed by Edwin Hubble. Theoretical calculations also showed that the static universe, as modeled by Albert Einstein (1917), was unstable. The modern Big Bang theory, first advanced by Father Georges Lemaître, is one in which the universe has a finite age and has evolved over time through cooling, expansion, and the formation of structures through gravitational collapse. On the other hand, the steady-state model says that while the universe is expanding, it nevertheless does not change its appearance over time (the perfect cosmological principle); the universe has no beginning and no end. This required that matter be continually created in order to keep the universe's density from decreasing. Influential papers on the topic of a steady-state cosmology were published by Hermann Bondi, Thomas Gold, and Fred Hoyle in 1948. Similar models had been proposed earlier by William Duncan MacMillan, among others. It is now known that Albert Einstein considered a steady-state model of the expanding universe, as indicated in a 1931 manuscript, many years before Hoyle, Bondi and Gold. However, Einstein abandoned the idea. == Observational tests == === Counts of radio sources === Problems with the steady-state model began to emerge in the 1950s and 60s – observations supported the idea that the universe was in fact changing. Bright radio sources (quasars and radio galaxies) were found only at large distances (and therefore, because of the finite speed of light, could have existed only in the distant past), not in closer galaxies. Whereas the Big Bang theory predicted as much, the steady-state model predicted that such objects would be found throughout the universe, including close to our own galaxy. By 1961, statistical tests based on radio-source surveys provided strong evidence against the steady-state model. Some proponents, like Halton Arp, insisted that the radio data were suspect.: 384  === X-ray background === Gold and Hoyle (1959) considered that matter that is newly created exists in a region that is denser than the average density of the universe. This matter then may radiate and cool faster than the surrounding regions, resulting in a pressure gradient. This gradient would push matter into an over-dense region and result in a thermal instability and emit a large amount of plasma. However, Gould and Burbidge (1963) realized that the thermal bremsstrahlung radiation emitted by such a plasma would exceed the amount of observed X-rays. 
Therefore, in the steady-state cosmological model, thermal instability does not appear to be important in the formation of galaxy-sized masses. === Cosmic microwave background === In 1964 the cosmic microwave background radiation was discovered as predicted by the Big Bang theory. The steady-state model attempted to explain the microwave background radiation as the result of light from ancient stars that has been scattered by galactic dust. However, the cosmic microwave background level is very even in all directions, making it difficult to explain how it could be generated by numerous point sources, and the microwave background radiation does not show the polarization characteristic of scattering. Furthermore, its spectrum is so close to that of an ideal black body that it could hardly be formed by the superposition of contributions from a multitude of dust clumps at different temperatures as well as at different redshifts. Steven Weinberg wrote in 1972: "The steady state model does not appear to agree with the observed dL versus z relation or with source counts ... In a sense, this disagreement is a credit to the model; alone among all cosmologies, the steady state model makes such definite predictions that it can be disproved even with the limited observational evidence at our disposal. The steady state model is so attractive that many of its adherents still retain hope that the evidence against it will eventually disappear as observations improve. However, if the cosmic microwave radiation ... is really black-body radiation, it will be difficult to doubt that the universe has evolved from a hotter denser early stage." Since this discovery, the Big Bang theory has been considered to provide the best explanation of the origin of the universe. In most astrophysical publications, the Big Bang is implicitly accepted and is used as the basis of more complete theories.: 388  == Quasi-steady state == Quasi-steady-state cosmology (QSS) was proposed in 1993 by Fred Hoyle, Geoffrey Burbidge, and Jayant V. Narlikar as a new incarnation of the steady-state ideas meant to explain additional features unaccounted for in the initial proposal. The model suggests pockets of creation occurring over time within the universe, sometimes referred to as minibangs, mini-creation events, or little bangs. After the observation of an accelerating universe, further modifications of the model were made. The Planck particle is a hypothetical black hole whose Schwarzschild radius is approximately the same as its Compton wavelength; the evaporation of such a particle has been evoked as the source of light elements in an expanding steady-state universe. Astrophysicist and cosmologist Ned Wright has pointed out flaws in the model. These first comments were soon rebutted by the proponents. Wright and other mainstream cosmologists reviewing QSS have pointed out new flaws and discrepancies with observations left unexplained by proponents. == See also == Jainism and non-creationism Non-standard cosmology Copernican principle Large-scale structure of the cosmos Expansion of the universe == References == == Further reading == Burbidge, G., Hoyle, F., "The Origin of Helium and the Other Light Elements", The Astrophysical Journal, 509: L1–L3, 10 December 1998 Hoyle, F.; Burbidge, G.; Narlikar, J. V. (2000). A Different Approach to Cosmology. Cambridge University Press. ISBN 978-0-521-66223-9. Mitton, S. (2005). Conflict in the Cosmos: Fred Hoyle's Life in Science. Joseph Henry Press. ISBN 978-0-309-09313-2. Mitton, S. (2005). 
Fred Hoyle: A Life in Science. Aurum Press. ISBN 978-1-85410-961-3. Narlikar, Jayant; Burbidge, Geoffrey (2008). Facts and Speculations in Cosmology. Cambridge University Press. ISBN 978-0-521-86504-3.
Wikipedia/Steady-state_model
In astronomy, the geocentric model (also known as geocentrism, often exemplified specifically by the Ptolemaic system) is a superseded description of the Universe with Earth at the center. Under most geocentric models, the Sun, Moon, stars, and planets all orbit Earth. The geocentric model was the predominant description of the cosmos in many European ancient civilizations, such as those of Aristotle in Classical Greece and Ptolemy in Roman Egypt, as well as during the Islamic Golden Age. Two observations supported the idea that Earth was the center of the Universe. First, from anywhere on Earth, the Sun appears to revolve around Earth once per day. While the Moon and the planets have their own motions, they also appear to revolve around Earth about once per day. The stars appeared to be fixed on a celestial sphere rotating once each day about an axis through the geographic poles of Earth. Second, Earth seems to be unmoving from the perspective of an earthbound observer; it feels solid, stable, and stationary. Ancient Greek, ancient Roman, and medieval philosophers usually combined the geocentric model with a spherical Earth, in contrast to the older flat-Earth model implied in some mythology. However, the Greek astronomer and mathematician Aristarchus of Samos (c. 310 – c. 230 BC) developed a heliocentric model placing all of the then-known planets in their correct order around the Sun. The ancient Greeks believed that the motions of the planets were circular, a view that was not challenged in Western culture until the 17th century, when Johannes Kepler postulated that orbits were heliocentric and elliptical (Kepler's first law of planetary motion). In 1687, Isaac Newton showed that elliptical orbits could be derived from his laws of gravitation. The astronomical predictions of Ptolemy's geocentric model, developed in the 2nd century of the Christian era, served as the basis for preparing astrological and astronomical charts for over 1,500 years. The geocentric model held sway into the early modern age, but from the late 16th century onward, it was gradually superseded by the heliocentric model of Copernicus, Galileo, and Kepler. There was much resistance to the transition between these two theories, since for a long time the geocentric postulate produced more accurate results. Additionally some felt that a new, unknown theory could not subvert an accepted consensus for geocentrism. == Ancient Greece == In the 6th century BC, Anaximander proposed a cosmology in which Earth is shaped like a section of a pillar (a cylinder), held aloft at the center of everything. The Sun, Moon, and planets were holes in invisible wheels which surround Earth, and through those holes, humans could see concealed fire. At around the same time, Pythagoras thought that Earth was a sphere (in accordance with observations of eclipses), but not at the center; he believed that it was in motion around an unseen fire. Later these two concepts were combined, so that most of the educated Greeks from the 4th century BC onwards thought that Earth was a sphere at the center of the universe. In the 4th century BC Plato and his student Aristotle, wrote works based on the geocentric model. According to Plato, the Earth was a sphere, stationary at the center of the universe. The stars and planets were carried around the Earth on spheres or circles, arranged in the order (outwards from the center): Moon, Sun, Venus, Mercury, Mars, Jupiter, Saturn, fixed stars, with the fixed stars located on the celestial sphere. 
In his "Myth of Er", a section of the Republic, Plato describes the cosmos as the Spindle of Necessity, attended by the Sirens and turned by the three Fates. Eudoxus of Cnidus, who worked with Plato, developed a less mythical, more mathematical explanation of the planets' motion based on Plato's dictum stating that all phenomena in the heavens can be explained with uniform circular motion. Aristotle elaborated on Eudoxus' system. In the fully developed Aristotelian system, the spherical Earth is at the center of the universe, and all other heavenly bodies are attached to 47–55 transparent, rotating spheres surrounding the Earth, all concentric with it. (The number is so high because several spheres are needed for each planet.) These spheres, known as crystalline spheres, all moved at different uniform speeds to create the revolution of bodies around the Earth. They were composed of an incorruptible substance called aether. Aristotle believed that the Moon was in the innermost sphere and therefore touches the realm of Earth, causing the dark spots (maculae) and the ability to go through lunar phases. He further described his system by explaining the natural tendencies of the terrestrial elements: earth, water, fire, air, as well as celestial aether. His system held that earth was the heaviest element, with the strongest movement towards the center, thus water formed a layer surrounding the sphere of Earth. The tendency of air and fire, on the other hand, was to move upwards, away from the center, with fire being lighter than air. Beyond the layer of fire, were the solid spheres of aether in which the celestial bodies were embedded. They were also entirely composed of aether. Adherence to the geocentric model stemmed largely from several important observations. First of all, if the Earth did move, then one ought to be able to observe the shifting of the fixed stars due to stellar parallax. Thus if the Earth was moving, the shapes of the constellations should change considerably over the course of a year. As they did not appear to move, either the stars are much farther away than the Sun and the planets than previously conceived, making their motion undetectable, or the Earth is not moving at all. Because the stars are actually much further away than Greek astronomers postulated (making angular movement extremely small), stellar parallax was not detected until the 19th century. Therefore, the Greeks chose the simpler of the two explanations. Another observation used in favor of the geocentric model at the time was the apparent consistency of Venus' luminosity, which implies that it is usually about the same distance from Earth, which in turn is more consistent with geocentrism than heliocentrism. (In fact, Venus' luminous consistency is due to any loss of light caused by its phases being compensated for by an increase in apparent size caused by its varying distance from Earth.) Objectors to heliocentrism noted that terrestrial bodies naturally tend to come to rest as near as possible to the center of the Earth. Further, barring the opportunity to fall closer the center, terrestrial bodies tend not to move unless forced by an outside object, or transformed to a different element by heat or moisture. Atmospheric explanations for many phenomena were preferred because the Eudoxan–Aristotelian model based on perfectly concentric spheres was not intended to explain changes in the brightness of the planets due to a change in distance. 
Eventually, perfectly concentric spheres were abandoned as it was impossible to develop a sufficiently accurate model under that ideal, with the mathematical methods then available. However, while providing for similar explanations, the later deferent and epicycle model was already flexible enough to accommodate observations. == Ptolemaic model == Although the basic tenets of Greek geocentrism were established by the time of Aristotle, the details of his system did not become standard. The Ptolemaic system, developed by the Hellenistic astronomer Claudius Ptolemaeus in the 2nd century AD, finally standardised geocentrism. His main astronomical work, the Almagest, was the culmination of centuries of work by Hellenic, Hellenistic and Babylonian astronomers. For over a millennium, European and Islamic astronomers assumed it was the correct cosmological model. Because of its influence, people sometimes wrongly think the Ptolemaic system is identical with the geocentric model. Ptolemy argued that the Earth was a sphere in the center of the universe, from the simple observation that half the stars were above the horizon and half were below the horizon at any time (stars on rotating stellar sphere), and the assumption that the stars were all at some modest distance from the center of the universe. If the Earth were substantially displaced from the center, this division into visible and invisible stars would not be equal. === Ptolemaic system === In the Ptolemaic system, each planet is moved by a system of two spheres: one called its deferent; the other, its epicycle. The deferent is a circle whose center point, called the eccentric and marked in the diagram with an X, is distant from the Earth. The original purpose of the eccentric was to account for the difference in length of the seasons (northern autumn was about five days shorter than spring during this time period) by placing the Earth away from the center of rotation of the rest of the universe. Another sphere, the epicycle, is embedded inside the deferent sphere and is represented by the smaller dotted line to the right. A given planet then moves around the epicycle at the same time the epicycle moves along the path marked by the deferent. These combined movements cause the given planet to move closer to and further away from the Earth at different points in its orbit, and explained the observation that planets slowed down, stopped, and moved backward in retrograde motion, and then again reversed to resume normal, or prograde, motion. The deferent-and-epicycle model had been used by Greek astronomers for centuries along with the idea of the eccentric (a deferent whose center is slightly away from the Earth), which was even older. In the illustration, the center of the deferent is not the Earth but the spot marked X, making it eccentric (from the Greek ἐκ ec- meaning "from" and κέντρον kentron meaning "center"), from which the spot takes its name. Unfortunately, the system that was available in Ptolemy's time did not quite match observations, even though it was an improvement over Hipparchus' system. Most noticeably the size of a planet's retrograde loop (especially that of Mars) would be smaller, or sometimes larger, than expected, resulting in positional errors of as much as 30 degrees. To alleviate the problem, Ptolemy developed the equant. 
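Before turning to the equant, the deferent-and-epicycle construction just described can be illustrated numerically: the planet's position is the sum of a slowly rotating vector on the deferent and a faster rotating vector on the epicycle, and the combination reproduces intervals of apparent retrograde motion as seen from a central Earth. The radii and angular speeds below are arbitrary illustrative values, not Ptolemy's parameters.

```python
import numpy as np

# Deferent-and-epicycle sketch: planet position = deferent point + epicycle offset.
R_DEFERENT, R_EPICYCLE = 10.0, 3.0     # arbitrary radii
W_DEFERENT, W_EPICYCLE = 1.0, 6.5      # arbitrary angular speeds (radians per time unit)

t = np.linspace(0.0, 2 * np.pi, 400)
x = R_DEFERENT * np.cos(W_DEFERENT * t) + R_EPICYCLE * np.cos(W_EPICYCLE * t)
y = R_DEFERENT * np.sin(W_DEFERENT * t) + R_EPICYCLE * np.sin(W_EPICYCLE * t)

# Geocentric longitude of the planet as seen from the Earth at the origin
longitude = np.unwrap(np.arctan2(y, x))
retrograde = np.diff(longitude) < 0    # intervals where the longitude decreases
print(f"fraction of the period spent in apparent retrograde motion: {retrograde.mean():.2f}")
```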
The equant was a point near the center of a planet's orbit from which the center of the planet's epicycle would always appear to move at uniform speed; from all other locations, including the Earth, the speed would appear non-uniform. By using an equant, Ptolemy claimed to keep motion which was uniform and circular, although it departed from the Platonic ideal of uniform circular motion. The resultant system, which eventually came to be widely accepted in the west, seems unwieldy to modern astronomers; each planet required an epicycle revolving on a deferent, offset by an equant which was different for each planet. It predicted various celestial motions, including the beginning and end of retrograde motion, to within a maximum error of 10 degrees, considerably better than without the equant. The model with epicycles is in fact a very good model of an elliptical orbit with low eccentricity. The well-known ellipse shape does not appear to a noticeable extent when the eccentricity is less than 5%, but the offset distance of the "center" (in fact the focus occupied by the Sun) is very noticeable even with the low eccentricities possessed by the planets. To summarize, Ptolemy conceived a system that was compatible with Aristotelian philosophy and succeeded in tracking actual observations and predicting future movement mostly to within the limits of the next 1000 years of observations. The geocentric model was eventually replaced by the heliocentric model. Copernican heliocentrism could remove Ptolemy's epicycles because the retrograde motion could be seen to be the result of the combination of the movements and speeds of Earth and planets. Copernicus felt strongly that equants were a violation of Aristotelian purity, and proved that replacement of the equant with a pair of new epicycles was entirely equivalent. Astronomers often continued using the equants instead of the epicycles because the former was easier to calculate, and gave the same result. It has been determined that the Copernican, Ptolemaic and even the Tychonic models provide identical results to identical inputs: they are computationally equivalent. It was not until Kepler demonstrated a physical observation that could show that the physical Sun is directly involved in determining an orbit that a new model was required. The Ptolemaic order of spheres from Earth outward is: Moon, Mercury, Venus, Sun, Mars, Jupiter, Saturn, Fixed Stars, Primum Mobile ("First Moved"). Ptolemy did not invent or work out this order, which aligns with the ancient Seven Heavens religious cosmology common to the major Eurasian religious traditions. It also follows the decreasing orbital periods of the Moon, Sun, planets and stars. === Persian and Arab astronomy and geocentrism === After the translation movement, which included the translation of the Almagest from Greek into Arabic, Muslims adopted and refined the geocentric model of Ptolemy, which they believed correlated with the teachings of Islam. Muslim astronomers generally accepted the Ptolemaic system and the geocentric model, but by the 10th century, texts appeared regularly whose subject matter expressed doubts concerning Ptolemy (shukūk). Several Muslim scholars questioned Earth's apparent immobility and centrality within the universe. Some Muslim astronomers believed that Earth rotates around its axis, such as Abu Sa'id al-Sijzi (d. circa 1020). 
According to al-Biruni, Sijzi invented an astrolabe called al-zūraqī, based upon a belief held by some of his contemporaries "that the motion we see is due to the Earth's movement and not to that of the sky". The prevalence of this belief is further confirmed by a reference from the 13th century that states: According to the geometers [or engineers] (muhandisīn), the Earth is in constant circular motion, and what appears to be the motion of the heavens is actually due to the motion of the Earth and not the stars. Early in the 11th century, Alhazen wrote a scathing critique of Ptolemy's model in his Doubts on Ptolemy (c. 1028), which some have interpreted to imply he was criticizing Ptolemy's geocentrism, but most agree that he was actually criticizing the details of Ptolemy's model rather than his geocentrism. In the 12th century, Arzachel departed from the ancient Greek idea of uniform circular motions by hypothesizing that the planet Mercury moves in an elliptic orbit, while Alpetragius proposed a planetary model that abandoned the equant, epicycle and eccentric mechanisms, though this resulted in a system that was mathematically less accurate. His alternative system spread through most of Europe during the 13th century. Fakhr al-Din al-Razi (1149–1209), in dealing with his conception of physics and the physical world in his Matalib, rejects the Aristotelian and Avicennian notion of the Earth's centrality within the universe, but instead argues that there are "a thousand thousand worlds (alfa alfi 'awalim) beyond this world, such that each one of those worlds be bigger and more massive than this world, as well as having the like of what this world has." To support his theological argument, he cites the Qur'anic verse, "All praise belongs to God, Lord of the Worlds", emphasizing the term "Worlds". The "Maragha Revolution" refers to the Maragha school's revolution against Ptolemaic astronomy. The "Maragha school" was an astronomical tradition beginning in the Maragha observatory and continuing with astronomers from the Damascus mosque and Samarkand observatory. Like their Andalusian predecessors, the Maragha astronomers attempted to solve the equant problem (the circle around whose circumference a planet or the center of an epicycle was conceived to move uniformly) and produce alternative configurations to the Ptolemaic model without abandoning geocentrism. They were more successful than their Andalusian predecessors in producing non-Ptolemaic configurations which eliminated the equant and eccentrics, were more accurate than the Ptolemaic model in numerically predicting planetary positions, and were in better agreement with empirical observations. The most important of the Maragha astronomers included Mo'ayyeduddin Urdi (died 1266), Nasīr al-Dīn al-Tūsī (1201–1274), Qutb al-Din al-Shirazi (1236–1311), Ibn al-Shatir (1304–1375), Ali Qushji (c. 1474), Al-Birjandi (died 1525), and Shams al-Din al-Khafri (died 1550). However, the Maragha school never made the paradigm shift to heliocentrism. The influence of the Maragha school on Copernicus remains speculative, since there is no documentary evidence to prove it. The possibility that Copernicus independently developed the Tusi couple remains open, since no researcher has yet demonstrated that he knew about Tusi's work or that of the Maragha school. == Ptolemaic and rival systems == Not all Greeks agreed with the geocentric model. 
The Pythagorean system has already been mentioned; some Pythagoreans believed the Earth to be one of several planets going around a central fire. Hicetas and Ecphantus, two Pythagoreans of the 5th century BC, and Heraclides Ponticus in the 4th century BC, believed that the Earth rotated on its axis but remained at the center of the universe. Such a system still qualifies as geocentric. It was revived in the Middle Ages by Jean Buridan. Heraclides Ponticus was once thought to have proposed that both Venus and Mercury went around the Sun rather than the Earth, but it is now known that he did not. Martianus Capella definitely put Mercury and Venus in orbit around the Sun. Aristarchus of Samos wrote a work, which has not survived, on heliocentrism, saying that the Sun was at the center of the universe, while the Earth and other planets revolved around it. His theory was not popular, and he had one named follower, Seleucus of Seleucia. Epicurus was the most radical. He correctly realized in the 4th century BC that the universe does not have any single center. This theory was widely accepted by the later Epicureans and was notably defended by Lucretius in his poem De rerum natura. === Copernican system === In 1543, the geocentric system met its first serious challenge with the publication of Copernicus' De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres), which posited that the Earth and the other planets instead revolved around the Sun. The geocentric system was still held for many years afterwards, as at the time the Copernican system did not offer better predictions than the geocentric system, and it posed problems for both natural philosophy and scripture. The Copernican system was no more accurate than Ptolemy's system, because it still used circular orbits. This was not altered until Johannes Kepler postulated that they were elliptical (Kepler's first law of planetary motion). === Tychonic system === Tycho Brahe (1546–1601) made more accurate determinations of the positions of planets and stars. He sought the effect of stellar parallax, which would have been empirically verifiable proof of the Earth's motion around the Sun predicted by the Copernican model. Having observed no effect, he rejected the idea of the Earth's motion. Consequently, he introduced a new system, the Tychonic system, in which the Earth was still at the center of the universe, and around it revolved the Sun, but all the other planets revolved around the Sun in a set of epicycles. His model considered both the benefits of the Copernican model and the lack of evidence for the Earth's motion. === Observation by Galileo and abandonment of the Ptolemaic model === With the invention of the telescope in 1608, observations made by Galileo Galilei (such as that Jupiter has moons) called into question some of the tenets of geocentrism but did not seriously threaten it. Because he observed dark "spots" (craters) on the Moon, he remarked that the Moon was not a perfect celestial body as had been previously conceived. This was the first detailed observation by telescope of the Moon's imperfections, which had previously been explained by Aristotle as the Moon being contaminated by Earth and its heavier elements, in contrast to the aether of the higher spheres. Galileo could also see the moons of Jupiter, which he dedicated to Cosimo II de' Medici, and stated that they orbited around Jupiter, not Earth. 
This was a significant claim, as it would mean not only that not everything revolved around Earth as stated in the Ptolemaic model, but also that a secondary celestial body could orbit a moving celestial body, strengthening the heliocentric argument that a moving Earth could retain the Moon. Galileo's observations were verified by other astronomers of the time period who quickly adopted use of the telescope, including Christoph Scheiner, Johannes Kepler, and Giovan Paulo Lembo. In December 1610, Galileo Galilei used his telescope to observe that Venus showed all phases, just like the Moon. He thought that while this observation was incompatible with the Ptolemaic system, it was a natural consequence of the heliocentric system. However, Ptolemy placed Venus' deferent and epicycle entirely inside the sphere of the Sun (between the Sun and Mercury), but this was arbitrary; he could just as easily have swapped Venus and Mercury and put them on the other side of the Sun, or made any other arrangement of Venus and Mercury, as long as they were always near a line running from the Earth through the Sun, such as placing the center of the Venus epicycle near the Sun. In this case, if the Sun is the source of all the light, under the Ptolemaic system: If Venus is between Earth and the Sun, the phase of Venus must always be crescent or all dark. If Venus is beyond the Sun, the phase of Venus must always be gibbous or full. But Galileo saw Venus at first small and full, and later large and crescent. This showed that with a Ptolemaic cosmology, the Venus epicycle can be neither completely inside nor completely outside of the orbit of the Sun. As a result, Ptolemaics abandoned the idea that the epicycle of Venus was completely inside the orbit of the Sun, and later 17th-century competition between astronomical cosmologies focused on variations of the Tychonic or Copernican systems. === Historical positions of the Roman Catholic hierarchy === The famous Galileo affair pitted the geocentric model against the claims of Galileo. With regard to the theological basis for such an argument, two Popes addressed the question of whether the use of phenomenological language would compel one to admit an error in Scripture. Both taught that it would not. Pope Leo XIII wrote: we have to contend against those who, making an evil use of physical science, minutely scrutinize the Sacred Book in order to detect the writers in a mistake, and to take occasion to vilify its contents. ... There can never, indeed, be any real discrepancy between the theologian and the physicist, as long as each confines himself within his own lines, and both are careful, as St. Augustine warns us, "not to make rash assertions, or to assert what is not known as known". If dissension should arise between them, here is the rule also laid down by St. Augustine, for the theologian: "Whatever they can really demonstrate to be true of physical nature, we must show to be capable of reconciliation with our Scriptures; and whatever they assert in their treatises which is contrary to these Scriptures of ours, that is to Catholic faith, we must either prove it as well as we can to be entirely false, or at all events we must, without the smallest hesitation, believe it to be so." 
To understand how just is the rule here formulated we must remember, first, that the sacred writers, or to speak more accurately, the Holy Ghost "Who spoke by them, did not intend to teach men these things (that is to say, the essential nature of the things of the visible universe), things in no way profitable unto salvation." Hence they did not seek to penetrate the secrets of nature, but rather described and dealt with things in more or less figurative language, or in terms which were commonly used at the time, and which in many instances are in daily use at this day, even by the most eminent men of science. Ordinary speech primarily and properly describes what comes under the senses; and somewhat in the same way the sacred writers-as the Angelic Doctor also reminds us – "went by what sensibly appeared", or put down what God, speaking to men, signified, in the way men could understand and were accustomed to. Maurice Finocchiaro, author of a book on the Galileo affair, notes that this is "a view of the relationship between biblical interpretation and scientific investigation that corresponds to the one advanced by Galileo in the "Letter to the Grand Duchess Christina". Pope Pius XII repeated his predecessor's teaching: The first and greatest care of Leo XIII was to set forth the teaching on the truth of the Sacred Books and to defend it from attack. Hence with grave words did he proclaim that there is no error whatsoever if the sacred writer, speaking of things of the physical order "went by what sensibly appeared" as the Angelic Doctor says, speaking either "in figurative language, or in terms which were commonly used at the time, and which in many instances are in daily use at this day, even among the most eminent men of science". For "the sacred writers, or to speak more accurately – the words are St. Augustine's – the Holy Spirit, Who spoke by them, did not intend to teach men these things – that is the essential nature of the things of the universe – things in no way profitable to salvation"; which principle "will apply to cognate sciences, and especially to history", that is, by refuting, "in a somewhat similar way the fallacies of the adversaries and defending the historical truth of Sacred Scripture from their attacks". In 1664, Pope Alexander VII republished the Index Librorum Prohibitorum (List of Prohibited Books) and attached the various decrees connected with those books, including those concerned with heliocentrism. He stated in a papal bull that his purpose in doing so was that "the succession of things done from the beginning might be made known [quo rei ab initio gestae series innotescat]". The position of the curia evolved slowly over the centuries towards permitting the heliocentric view. In 1757, during the papacy of Benedict XIV, the Congregation of the Index withdrew the decree that prohibited all books teaching the Earth's motion, although the Dialogue and a few other books continued to be explicitly included. 
In 1820, the Congregation of the Holy Office, with the pope's approval, decreed that Catholic astronomer Giuseppe Settele was allowed to treat the Earth's motion as an established fact and removed any obstacle for Catholics to hold to the motion of the Earth: The Assessor of the Holy Office has referred the request of Giuseppe Settele, Professor of Optics and Astronomy at La Sapienza University, regarding permission to publish his work Elements of Astronomy in which he espouses the common opinion of the astronomers of our time regarding the Earth’s daily and yearly motions, to His Holiness through Divine Providence, Pope Pius VII. Previously, His Holiness had referred this request to the Supreme Sacred Congregation and concurrently to the consideration of the Most Eminent and Most Reverend General Cardinal Inquisitor. His Holiness has decreed that no obstacles exist for those who sustain Copernicus' affirmation regarding the Earth's movement in the manner in which it is affirmed today, even by Catholic authors. He has, moreover, suggested the insertion of several notations into this work, aimed at demonstrating that the above mentioned affirmation [of Copernicus], as it has come to be understood, does not present any difficulties; difficulties that existed in times past, prior to the subsequent astronomical observations that have now occurred. [Pope Pius VII] has also recommended that the implementation [of these decisions] be given to the Cardinal Secretary of the Supreme Sacred Congregation and Master of the Sacred Apostolic Palace. He is now appointed the task of bringing to an end any concerns and criticisms regarding the printing of this book, and, at the same time, ensuring that in the future, regarding the publication of such works, permission is sought from the Cardinal Vicar whose signature will not be given without the authorization of the Superior of his Order. In 1822, the Congregation of the Holy Office removed the prohibition on the publication of books treating of the Earth's motion in accordance with modern astronomy and Pope Pius VII ratified the decision: The most excellent [cardinals] have decreed that there must be no denial, by the present or by future Masters of the Sacred Apostolic Palace, of permission to print and to publish works which treat of the mobility of the Earth and of the immobility of the sun, according to the common opinion of modern astronomers, as long as there are no other contrary indications, on the basis of the decrees of the Sacred Congregation of the Index of 1757 and of this Supreme [Holy Office] of 1820; and that those who would show themselves to be reluctant or would disobey, should be forced under punishments at the choice of [this] Sacred Congregation, with derogation of [their] claimed privileges, where necessary. The 1835 edition of the Catholic List of Prohibited Books for the first time omits the Dialogue from the list. In his 1921 papal encyclical, In praeclara summorum, Pope Benedict XV stated that, "though this Earth on which we live may not be the center of the universe as at one time was thought, it was the scene of the original happiness of our first ancestors, witness of their unhappy fall, as too of the Redemption of mankind through the Passion and Death of Jesus Christ". 
In 1965 the Second Vatican Council stated that, "Consequently, we cannot but deplore certain habits of mind, which are sometimes found too among Christians, which do not sufficiently attend to the rightful independence of science and which, from the arguments and controversies they spark, lead many minds to conclude that faith and science are mutually opposed." The footnote on this statement is to Msgr. Pio Paschini's, Vita e opere di Galileo Galilei, 2 volumes, Vatican Press (1964). Pope John Paul II regretted the treatment that Galileo received, in a speech to the Pontifical Academy of Sciences in 1992. The Pope declared the incident to be based on a "tragic mutual miscomprehension". He further stated: Cardinal Poupard has also reminded us that the sentence of 1633 was not irreformable, and that the debate which had not ceased to evolve thereafter, was closed in 1820 with the imprimatur given to the work of Canon Settele. ... The error of the theologians of the time, when they maintained the centrality of the Earth, was to think that our understanding of the physical world's structure was, in some way, imposed by the literal sense of Sacred Scripture. Let us recall the celebrated saying attributed to Baronius "Spiritui Sancto mentem fuisse nos docere quomodo ad coelum eatur, non quomodo coelum gradiatur". In fact, the Bible does not concern itself with the details of the physical world, the understanding of which is the competence of human experience and reasoning. There exist two realms of knowledge, one which has its source in Revelation and one which reason can discover by its own power. To the latter belong especially the experimental sciences and philosophy. The distinction between the two realms of knowledge ought not to be understood as opposition. == Gravitation == Johannes Kepler analysed Tycho Brahe's famously accurate observations, and afterwards constructed his three laws in 1609 and 1619, based upon a heliocentric model wherein the planets move in elliptical paths. Using these laws, he was the first astronomer to successfully predict a transit of Venus for the year 1631. The change from circular orbits to elliptical planetary paths dramatically improved the accuracy of celestial observations and predictions. Because the heliocentric model devised by Copernicus was no more accurate than Ptolemy's system, new observations were needed to persuade those who still adhered to the geocentric model. However, Kepler's laws based upon Brahe's data became a problem that geocentrists could not easily overcome. In 1687, Isaac Newton stated the law of universal gravitation, which was described earlier as a hypothesis by Robert Hooke and others. His main achievement was to mathematically derive Kepler's laws of planetary motion from the law of gravitation, thus helping to prove the latter. This introduced gravitation as the force which kept Earth and the planets moving through the universe, and also kept the atmosphere from flying away. The theory of gravity allowed scientists to rapidly construct a plausible heliocentric model for the Solar System. In his Principia, Newton explained his theory of how gravity, previously thought to be a mysterious, unexplained occult force, directed the movements of celestial bodies, and kept our Solar System in working order. 
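The claim that Kepler's laws follow from the law of gravitation can be checked numerically with a few lines of code. The sketch below integrates a single orbit under an inverse-square force, using astronomical units and years so that GM of the Sun is 4π²; the initial conditions are arbitrary illustrative values, not data for any real planet. The measured period agrees with the a^(3/2) value required by Kepler's third law:

import numpy as np

GM = 4.0 * np.pi**2                      # AU^3 / yr^2 for the Sun

def accel(pos):
    """Inverse-square (Newtonian) acceleration toward the origin."""
    r = np.linalg.norm(pos)
    return -GM * pos / r**3

pos = np.array([1.5, 0.0])               # start 1.5 AU from the Sun
vel = np.array([0.0, 4.0])               # AU/yr, chosen to give a clear ellipse
dt = 1.0e-4

# Leapfrog (kick-drift-kick) integration over a few years.
positions = []
v_half = vel + 0.5 * dt * accel(pos)
for step in range(60_000):
    pos = pos + dt * v_half
    v_half = v_half + dt * accel(pos)
    positions.append(pos.copy())
positions = np.array(positions)

r = np.linalg.norm(positions, axis=1)
a = 0.5 * (r.min() + r.max())            # semi-major axis from perihelion and aphelion
print(f"semi-major axis a = {a:.4f} AU")
print(f"Kepler III predicts T = a^(3/2) = {a**1.5:.4f} yr")

# Estimate the actual period from successive perihelion passages.
perihelia = [i for i in range(1, len(r) - 1) if r[i] < r[i - 1] and r[i] < r[i + 1]]
T = (perihelia[1] - perihelia[0]) * dt
print(f"measured period   T = {T:.4f} yr")

The computed path traces an ellipse with the attracting center at one focus; checking that explicitly, along with Kepler's second law of equal areas, needs only a few more lines on the same trajectory.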
His descriptions of centripetal force were a breakthrough in scientific thought, using the newly developed mathematical discipline of differential calculus, finally replacing the previous schools of scientific thought, which had been dominated by Aristotle and Ptolemy. However, the process was gradual. Several empirical tests of Newton's theory, explaining the longer period of oscillation of a pendulum at the equator and the differing size of a degree of latitude, would gradually become available between 1673 and 1738. In addition, stellar aberration was observed by Robert Hooke in 1674, and tested in a series of observations by Jean Picard over a period of ten years, finishing in 1680. However, it was not explained until 1729, when James Bradley provided an approximate explanation in terms of the Earth's revolution about the Sun. In 1838, astronomer Friedrich Wilhelm Bessel measured the parallax of the star 61 Cygni successfully, and disproved Ptolemy's claim that parallax motion did not exist. This finally confirmed the assumptions made by Copernicus, providing accurate, dependable scientific observations, and conclusively displaying how distant stars are from Earth. A geocentric frame is useful for many everyday activities and most laboratory experiments, but is a less appropriate choice for Solar System mechanics and space travel. While a heliocentric frame is most useful in those cases, galactic and extragalactic astronomy is easier if the Sun is treated as neither stationary nor the center of the universe, but rather rotating around the center of our galaxy, while in turn our galaxy is also not at rest in the cosmic background. == Relativity == Albert Einstein and Leopold Infeld wrote in The Evolution of Physics (1938): "Can we formulate physical laws so that they are valid for all CS [coordinate systems], not only those moving uniformly, but also those moving quite arbitrarily, relative to each other? If this can be done, our difficulties will be over. We shall then be able to apply the laws of nature to any CS. The struggle, so violent in the early days of science, between the views of Ptolemy and Copernicus would then be quite meaningless. Either CS could be used with equal justification. The two sentences, 'the sun is at rest and the Earth moves', or 'the sun moves and the Earth is at rest', would simply mean two different conventions concerning two different CS. Could we build a real relativistic physics valid in all CS; a physics in which there would be no place for absolute, but only for relative, motion? This is indeed possible!" Despite giving more respectability to the geocentric view than Newtonian physics does, relativity is not geocentric. Rather, relativity states that the Sun, the Earth, the Moon, Jupiter, or any other point for that matter could be chosen as a center of the Solar System with equal validity. Relativity agrees with Newtonian predictions that regardless of whether the Sun or the Earth are chosen arbitrarily as the center of the coordinate system describing the Solar System, the paths of the planets form (roughly) ellipses with respect to the Sun, not the Earth. With respect to the average reference frame of the fixed stars, the planets do indeed move around the Sun, which due to its much larger mass, moves far less than its own diameter and the gravity of which is dominant in determining the orbits of the planets (in other words, the center of mass of the Solar System is near the center of the Sun). 
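The scale of these center-of-mass displacements is easy to estimate with round textbook values for the masses and distances (assumptions of convenience, not precise ephemeris data). The largest single contribution to the Sun's motion comes from Jupiter, and the Sun's offset from the Sun-Jupiter center of mass comes out at roughly one solar radius; the same two-line formula applied to the Earth-Moon pair gives a figure close to the one quoted next:

# Two-body center-of-mass offsets with rounded values (kg and km).
m_sun, m_jupiter = 1.989e30, 1.898e27
m_earth, m_moon = 5.972e24, 7.35e22
a_jupiter = 7.785e8                      # mean Sun-Jupiter distance, km
a_moon = 3.84e5                          # mean Earth-Moon distance, km
r_sun, r_earth = 6.957e5, 6371.0         # radii, km

def barycenter_offset(m_primary, m_secondary, separation):
    """Distance of the primary's center from the two-body center of mass."""
    return separation * m_secondary / (m_primary + m_secondary)

d_sun = barycenter_offset(m_sun, m_jupiter, a_jupiter)
d_earth = barycenter_offset(m_earth, m_moon, a_moon)
print(f"Sun's offset from the Sun-Jupiter barycenter: {d_sun:.3g} km ({d_sun / r_sun:.2f} solar radii)")
print(f"Earth's offset from the Earth-Moon barycenter: {d_earth:.0f} km ({d_earth / r_earth:.0%} of Earth's radius)")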
The Earth and Moon are much closer to being a binary planet; the center of mass around which they both rotate is still inside the Earth, but is about 4,624 km (2,873 miles) or 72.6% of the Earth's radius away from the centre of the Earth (thus closer to the surface than the center). What the principle of relativity points out is that correct mathematical calculations can be made regardless of the reference frame chosen, and these will all agree with each other as to the predictions of actual motions of bodies with respect to each other. It is not necessary to choose the object in the Solar System with the largest gravitational field as the center of the coordinate system in order to predict the motions of planetary bodies, though doing so may make calculations easier to perform or interpret. A geocentric coordinate system can be more convenient when dealing only with bodies mostly influenced by the gravity of the Earth (such as artificial satellites and the Moon), or when calculating what the sky will look like when viewed from Earth (as opposed to an imaginary observer looking down on the entire Solar System, where a different coordinate system might be more convenient). == Religious and contemporary adherence to geocentrism == The Ptolemaic model held sway into the early modern age; from the late 16th century onward it was gradually replaced as the consensus description by the heliocentric model. Geocentrism as a separate religious belief, however, never completely died out. In the United States between 1870 and 1920, for example, various members of the Lutheran Church–Missouri Synod published articles disparaging Copernican astronomy and promoting geocentrism. However, in the 1902 Theological Quarterly, A. L. Graebner observed that the synod had no doctrinal position on geocentrism, heliocentrism, or any scientific model, unless it were to contradict Scripture. He stated that any possible declarations of geocentrists within the synod did not set the position of the church body as a whole. Articles arguing that geocentrism was the biblical perspective appeared in some early creation science newsletters. Contemporary advocates for such religious beliefs include Robert Sungenis (author of the 2006 book Galileo Was Wrong and the 2014 pseudo-documentary film The Principle). Most contemporary creationist organizations reject such perspectives. A few Orthodox Jewish leaders maintain a geocentric model of the universe and an interpretation of Maimonides to the effect that he ruled that the Earth is orbited by the Sun. The Lubavitcher Rebbe also explained that geocentrism is defensible based on the theory of relativity. While geocentrism is important in Maimonides' calendar calculations, the great majority of Jewish religious scholars, who accept the divinity of the Bible and accept many of his rulings as legally binding, do not believe that the Bible or Maimonides command a belief in geocentrism. There have been some modern Islamic scholars who promoted geocentrism. One of them was Ahmed Raza Khan Barelvi, a Sunni scholar of the Indian subcontinent. He rejected the heliocentric model and wrote a book that explains the movement of the sun, moon and other planets around the Earth. According to a report released in 2014 by the National Science Foundation, 26% of Americans surveyed believe that the Sun revolves around the Earth. Morris Berman quotes a 2006 survey that show currently some 20% of the U.S. 
population believe that the Sun goes around the Earth (geocentrism) rather than the Earth goes around the Sun (heliocentrism), while a further 9% claimed not to know. Polls conducted by Gallup in the 1990s found that 16% of Germans, 18% of Americans and 19% of Britons hold that the Sun revolves around the Earth. A study conducted in 2005 by Jon D. Miller of Northwestern University, an expert in the public understanding of science and technology, found that about 20%, or one in five, of American adults believe that the Sun orbits the Earth. According to a 2011 VTsIOM poll, 32% of Russians believe that the Sun orbits the Earth. == Planetariums == Many planetariums can switch between heliocentric and geocentric models. In particular, the geocentric model is still used for projecting the celestial sphere and lunar phases in education and sometimes for navigation. == See also == Aristotelian physics Earth-centered, Earth-fixed coordinate system History of the center of the Universe Hollow Earth § Concave Hollow Earths Religious cosmology Sphere of fire Wolfgang Smith, Catholic mathematician == Notes == == References == == Bibliography == Crowe, Michael J. (1990). Theories of the World from Antiquity to the Copernican Revolution. Mineola, NY: Dover Publications. ISBN 0486261735. Dreyer, J.L.E. (1953). A History of Astronomy from Thales to Kepler. New York: Dover Publications. Evans, James (1998). The History and Practice of Ancient Astronomy. New York: Oxford University Press. Grant, Edward (1984). "In Defense of the Earth's Centrality and Immobility: Scholastic Reaction to Copernicanism in the Seventeenth Century". Transactions of the American Philosophical Society. New Series. 74 (4): 1–69. doi:10.2307/1006444. ISSN 0065-9746. JSTOR 1006444. Heath, Thomas (1913). Aristarchus of Samos. Oxford: Clarendon Press. Hoyle, Fred (1973). Nicolaus Copernicus. Koestler, Arthur (1986) [1959]. The Sleepwalkers: A History of Man's Changing Vision of the Universe. Penguin Books. ISBN 014055212X. 1990 reprint: ISBN 0140192468. Kuhn, Thomas S. (1957). The Copernican Revolution. Cambridge: Harvard University Press. ISBN 0674171039. OCLC 1241666716. Linton, Christopher M. (2004). From Eudoxus to Einstein—A History of Mathematical Astronomy. Cambridge: Cambridge University Press. ISBN 9780521827508. Qadir, Asghar (1989). Relativity: An Introduction to the Special Theory. Singapore and Teaneck, NJ: World Scientific. ISBN 9971-5-0612-2. OCLC 841809663. Walker, Christopher, ed. (1996). Astronomy Before the Telescope. London: British Museum Press. ISBN 0714117463. Wright, J. Edward (2000). The Early History of Heaven. Oxford University Press. == External links == Another demonstration of the complexity of observed orbits when assuming a geocentric model of the Solar System Geocentric Perspective animation of the Solar System in 150 AD Ptolemy's system of astronomy The Galileo Project – Ptolemaic System
Wikipedia/Geocentric_model
In physics, there are equations in every field to relate physical quantities to each other and perform calculations. Entire handbooks of equations can only summarize most of the full subject, or else are highly specialized within a certain field. == General scope == Variables commonly used in physics Continuity equation Constitutive equation == Specific scope == Defining equation (physical chemistry) List of equations in classical mechanics Table of thermodynamic equations List of equations in wave theory List of relativistic equations List of equations in fluid mechanics List of electromagnetism equations List of equations in gravitation List of photonics equations List of equations in quantum mechanics List of equations in nuclear and particle physics == See also == List of equations Operator (physics) Laws of science == Units and nomenclature == Physical constant Physical quantity SI units SI derived unit SI electromagnetism units List of common physics notations
Wikipedia/Lists_of_physics_equations
Density functional theory (DFT) is a computational quantum mechanical modelling method used in physics, chemistry and materials science to investigate the electronic structure (or nuclear structure) (principally the ground state) of many-body systems, in particular atoms, molecules, and the condensed phases. Using this theory, the properties of a many-electron system can be determined by using functionals - that is, functions that accept a function as input and output a single real number. In the case of DFT, these are functionals of the spatially dependent electron density. DFT is among the most popular and versatile methods available in condensed-matter physics, computational physics, and computational chemistry. DFT has been very popular for calculations in solid-state physics since the 1970s. However, DFT was not considered accurate enough for calculations in quantum chemistry until the 1990s, when the approximations used in the theory were greatly refined to better model the exchange and correlation interactions. Computational costs are relatively low when compared to traditional methods, such as exchange only Hartree–Fock theory and its descendants that include electron correlation. Since, DFT has become an important tool for methods of nuclear spectroscopy such as Mössbauer spectroscopy or perturbed angular correlation, in order to understand the origin of specific electric field gradients in crystals. Despite recent improvements, there are still difficulties in using density functional theory to properly describe: intermolecular interactions (of critical importance to understanding chemical reactions), especially van der Waals forces (dispersion); charge transfer excitations; transition states, global potential energy surfaces, dopant interactions and some strongly correlated systems; and in calculations of the band gap and ferromagnetism in semiconductors. The incomplete treatment of dispersion can adversely affect the accuracy of DFT (at least when used alone and uncorrected) in the treatment of systems which are dominated by dispersion (e.g. interacting noble gas atoms) or where dispersion competes significantly with other effects (e.g. in biomolecules). The development of new DFT methods designed to overcome this problem, by alterations to the functional or by the inclusion of additive terms, is a current research topic. Classical density functional theory uses a similar formalism to calculate the properties of non-uniform classical fluids. Despite the current popularity of these alterations or of the inclusion of additional terms, they are reported to stray away from the search for the exact functional. Further, DFT potentials obtained with adjustable parameters are no longer true DFT potentials, given that they are not functional derivatives of the exchange correlation energy with respect to the charge density. Consequently, it is not clear if the second theorem of DFT holds in such conditions. == Overview of method == In the context of computational materials science, ab initio (from first principles) DFT calculations allow the prediction and calculation of material behavior on the basis of quantum mechanical considerations, without requiring higher-order parameters such as fundamental material properties. In contemporary DFT techniques the electronic structure is evaluated using a potential acting on the system's electrons. 
This DFT potential is constructed as the sum of external potentials Vext, which is determined solely by the structure and the elemental composition of the system, and an effective potential Veff, which represents interelectronic interactions. Thus, a problem for a representative supercell of a material with n electrons can be studied as a set of n one-electron Schrödinger-like equations, which are also known as Kohn–Sham equations. === Origins === Although density functional theory has its roots in the Thomas–Fermi model for the electronic structure of materials, DFT was first put on a firm theoretical footing by Walter Kohn and Pierre Hohenberg in the framework of the two Hohenberg–Kohn theorems (HK). The original HK theorems held only for non-degenerate ground states in the absence of a magnetic field, although they have since been generalized to encompass these. The first HK theorem demonstrates that the ground-state properties of a many-electron system are uniquely determined by an electron density that depends on only three spatial coordinates. It set down the groundwork for reducing the many-body problem of N electrons with 3N spatial coordinates to three spatial coordinates, through the use of functionals of the electron density. This theorem has since been extended to the time-dependent domain to develop time-dependent density functional theory (TDDFT), which can be used to describe excited states. The second HK theorem defines an energy functional for the system and proves that the ground-state electron density minimizes this energy functional. In work that later won them the Nobel prize in chemistry, the HK theorem was further developed by Walter Kohn and Lu Jeu Sham to produce Kohn–Sham DFT (KS DFT). Within this framework, the intractable many-body problem of interacting electrons in a static external potential is reduced to a tractable problem of noninteracting electrons moving in an effective potential. The effective potential includes the external potential and the effects of the Coulomb interactions between the electrons, e.g., the exchange and correlation interactions. Modeling the latter two interactions becomes the difficulty within KS DFT. The simplest approximation is the local-density approximation (LDA), which is based upon exact exchange energy for a uniform electron gas, which can be obtained from the Thomas–Fermi model, and from fits to the correlation energy for a uniform electron gas. Non-interacting systems are relatively easy to solve, as the wavefunction can be represented as a Slater determinant of orbitals. Further, the kinetic energy functional of such a system is known exactly. The exchange–correlation part of the total energy functional remains unknown and must be approximated. Another approach, less popular than KS DFT but arguably more closely related to the spirit of the original HK theorems, is orbital-free density functional theory (OFDFT), in which approximate functionals are also used for the kinetic energy of the noninteracting system. == Derivation and formalism == As usual in many-body electronic structure calculations, the nuclei of the treated molecules or clusters are seen as fixed (the Born–Oppenheimer approximation), generating a static external potential V, in which the electrons are moving. 
A stationary electronic state is then described by a wavefunction Ψ(r1, …, rN) satisfying the many-electron time-independent Schrödinger equation {\displaystyle {\hat {H}}\Psi =\left[{\hat {T}}+{\hat {V}}+{\hat {U}}\right]\Psi =\left[\sum _{i=1}^{N}\left(-{\frac {\hbar ^{2}}{2m_{i}}}\nabla _{i}^{2}\right)+\sum _{i=1}^{N}V(\mathbf {r} _{i})+\sum _{i<j}^{N}U\left(\mathbf {r} _{i},\mathbf {r} _{j}\right)\right]\Psi =E\Psi ,} where, for the N-electron system, Ĥ is the Hamiltonian, E is the total energy, {\displaystyle {\hat {T}}} is the kinetic energy, {\displaystyle {\hat {V}}} is the potential energy from the external field due to positively charged nuclei, and Û is the electron–electron interaction energy. The operators {\displaystyle {\hat {T}}} and Û are called universal operators, as they are the same for any N-electron system, while {\displaystyle {\hat {V}}} is system-dependent. This complicated many-particle equation is not separable into simpler single-particle equations because of the interaction term Û. There are many sophisticated methods for solving the many-body Schrödinger equation based on the expansion of the wavefunction in Slater determinants. While the simplest one is the Hartree–Fock method, more sophisticated approaches are usually categorized as post-Hartree–Fock methods. However, the problem with these methods is the huge computational effort, which makes it virtually impossible to apply them efficiently to larger, more complex systems. Here DFT provides an appealing alternative, being much more versatile, as it provides a way to systematically map the many-body problem, with Û, onto a single-body problem without Û. In DFT the key variable is the electron density n(r), which for a normalized Ψ is given by {\displaystyle n(\mathbf {r} )=N\int {\mathrm {d} }^{3}\mathbf {r} _{2}\cdots \int {\mathrm {d} }^{3}\mathbf {r} _{N}\,\Psi ^{*}(\mathbf {r} ,\mathbf {r} _{2},\dots ,\mathbf {r} _{N})\Psi (\mathbf {r} ,\mathbf {r} _{2},\dots ,\mathbf {r} _{N}).} This relation can be reversed, i.e., for a given ground-state density n0(r) it is possible, in principle, to calculate the corresponding ground-state wavefunction Ψ0(r1, …, rN). In other words, Ψ is a unique functional of n0, {\displaystyle \Psi _{0}=\Psi [n_{0}],} and consequently the ground-state expectation value of an observable Ô is also a functional of n0: {\displaystyle O[n_{0}]={\big \langle }\Psi [n_{0}]{\big |}{\hat {O}}{\big |}\Psi [n_{0}]{\big \rangle }.} In particular, the ground-state energy is a functional of n0: {\displaystyle E_{0}=E[n_{0}]={\big \langle }\Psi [n_{0}]{\big |}{\hat {T}}+{\hat {V}}+{\hat {U}}{\big |}\Psi [n_{0}]{\big \rangle },} where the contribution of the external potential {\displaystyle {\big \langle }\Psi [n_{0}]{\big |}{\hat {V}}{\big |}\Psi [n_{0}]{\big \rangle }} can be written explicitly in terms of the ground-state density {\displaystyle n_{0}}: 
{\displaystyle V[n_{0}]=\int V(\mathbf {r} )n_{0}(\mathbf {r} )\,\mathrm {d} ^{3}\mathbf {r} .} More generally, the contribution of the external potential ⟨ Ψ | V ^ | Ψ ⟩ {\displaystyle {\big \langle }\Psi {\big |}{\hat {V}}{\big |}\Psi {\big \rangle }} can be written explicitly in terms of the density n {\displaystyle n} : V [ n ] = ∫ V ( r ) n ( r ) d 3 r . {\displaystyle V[n]=\int V(\mathbf {r} )n(\mathbf {r} )\,\mathrm {d} ^{3}\mathbf {r} .} The functionals T[n] and U[n] are called universal functionals, while V[n] is called a non-universal functional, as it depends on the system under study. Having specified a system, i.e., having specified V ^ {\displaystyle {\hat {V}}} , one then has to minimize the functional E [ n ] = T [ n ] + U [ n ] + ∫ V ( r ) n ( r ) d 3 r {\displaystyle E[n]=T[n]+U[n]+\int V(\mathbf {r} )n(\mathbf {r} )\,\mathrm {d} ^{3}\mathbf {r} } with respect to n(r), assuming one has reliable expressions for T[n] and U[n]. A successful minimization of the energy functional will yield the ground-state density n0 and thus all other ground-state observables. The variational problems of minimizing the energy functional E[n] can be solved by applying the Lagrangian method of undetermined multipliers. First, one considers an energy functional that does not explicitly have an electron–electron interaction energy term, E s [ n ] = ⟨ Ψ s [ n ] | T ^ + V ^ s | Ψ s [ n ] ⟩ , {\displaystyle E_{s}[n]={\big \langle }\Psi _{\text{s}}[n]{\big |}{\hat {T}}+{\hat {V}}_{\text{s}}{\big |}\Psi _{\text{s}}[n]{\big \rangle },} where T ^ {\displaystyle {\hat {T}}} denotes the kinetic-energy operator, and V ^ s {\displaystyle {\hat {V}}_{\text{s}}} is an effective potential in which the particles are moving. Based on E s {\displaystyle E_{s}} , Kohn–Sham equations of this auxiliary noninteracting system can be derived: [ − ℏ 2 2 m ∇ 2 + V s ( r ) ] φ i ( r ) = ε i φ i ( r ) , {\displaystyle \left[-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}+V_{\text{s}}(\mathbf {r} )\right]\varphi _{i}(\mathbf {r} )=\varepsilon _{i}\varphi _{i}(\mathbf {r} ),} which yields the orbitals φi that reproduce the density n(r) of the original many-body system n ( r ) = ∑ i = 1 N | φ i ( r ) | 2 . {\displaystyle n(\mathbf {r} )=\sum _{i=1}^{N}{\big |}\varphi _{i}(\mathbf {r} ){\big |}^{2}.} The effective single-particle potential can be written as V s ( r ) = V ( r ) + ∫ n ( r ′ ) | r − r ′ | d 3 r ′ + V XC [ n ( r ) ] , {\displaystyle V_{\text{s}}(\mathbf {r} )=V(\mathbf {r} )+\int {\frac {n(\mathbf {r} ')}{|\mathbf {r} -\mathbf {r} '|}}\,\mathrm {d} ^{3}\mathbf {r} '+V_{\text{XC}}[n(\mathbf {r} )],} where V ( r ) {\displaystyle V(\mathbf {r} )} is the external potential, the second term is the Hartree term describing the electron–electron Coulomb repulsion, and the last term VXC is the exchange–correlation potential. Here, VXC includes all the many-particle interactions. Since the Hartree term and VXC depend on n(r), which depends on the φi, which in turn depend on Vs, the problem of solving the Kohn–Sham equation has to be done in a self-consistent (i.e., iterative) way. Usually one starts with an initial guess for n(r), then calculates the corresponding Vs and solves the Kohn–Sham equations for the φi. From these one calculates a new density and starts again. This procedure is then repeated until convergence is reached. A non-iterative approximate formulation called Harris functional DFT is an alternative approach to this. 
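The self-consistency cycle just described can be made concrete with a deliberately small toy model. The following sketch is not a real DFT code: it places two spin-paired "electrons" in a one-dimensional harmonic external potential, takes a soft-Coulomb Hartree term as the only density-dependent part of the effective potential (exchange–correlation is simply omitted), and repeats the loop of guess, effective potential, orbitals and new density with linear mixing until the density stops changing. The loop structure, however, is exactly the one used in practice:

import numpy as np

# Minimal 1D self-consistent-field loop in the spirit of the Kohn-Sham
# procedure described above.  Illustrative toy model only; arbitrary units.
n_grid, x_max = 400, 10.0
x = np.linspace(-x_max, x_max, n_grid)
dx = x[1] - x[0]
n_electrons = 2

v_ext = 0.5 * x**2                       # external potential V(x)

# Kinetic-energy operator -(1/2) d^2/dx^2 by central finite differences.
lap = (np.diag(np.full(n_grid - 1, 1.0), -1)
       - 2.0 * np.eye(n_grid)
       + np.diag(np.full(n_grid - 1, 1.0), 1)) / dx**2
kinetic = -0.5 * lap

def hartree(density):
    """Soft-Coulomb Hartree potential generated by the density."""
    return np.array([np.sum(density / np.sqrt((xi - x)**2 + 1.0)) * dx for xi in x])

density = np.zeros(n_grid)               # initial guess for n(x)
for iteration in range(200):
    v_s = v_ext + hartree(density)       # effective single-particle potential
    hamiltonian = kinetic + np.diag(v_s)
    energies, orbitals = np.linalg.eigh(hamiltonian)

    phi = orbitals[:, 0] / np.sqrt(dx)   # lowest orbital, normalized so sum |phi|^2 dx = 1
    new_density = n_electrons * phi**2   # both electrons occupy it (spin-paired)

    if np.max(np.abs(new_density - density)) < 1e-8:
        density = new_density
        break
    density = 0.5 * density + 0.5 * new_density   # linear mixing for stability

print(f"converged after {iteration} iterations")
print(f"lowest eigenvalue: {energies[0]:.4f}")
print(f"integrated density (should be {n_electrons}): {np.sum(density) * dx:.4f}")

Real Kohn–Sham implementations differ in their basis sets, Hartree solvers and exchange–correlation functionals, but the overall self-consistent loop has this shape.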
Notes The one-to-one correspondence between electron density and single-particle potential is not so smooth. It contains kinds of non-analytic structure. Es[n] contains kinds of singularities, cuts and branches. This may indicate a limitation of our hope for representing exchange–correlation functional in a simple analytic form. It is possible to extend the DFT idea to the case of the Green function G instead of the density n. It is called as Luttinger–Ward functional (or kinds of similar functionals), written as E[G]. However, G is determined not as its minimum, but as its extremum. Thus we may have some theoretical and practical difficulties. There is no one-to-one correspondence between one-body density matrix n(r, r′) and the one-body potential V(r, r′). (All the eigenvalues of n(r, r′) are 1.) In other words, it ends up with a theory similar to the Hartree–Fock (or hybrid) theory. == Relativistic formulation (ab initio functional forms) == The same theorems can be proven in the case of relativistic electrons, thereby providing generalization of DFT for the relativistic case. Unlike the nonrelativistic theory, in the relativistic case it is possible to derive a few exact and explicit formulas for the relativistic density functional. Let one consider an electron in the hydrogen-like ion obeying the relativistic Dirac equation. The Hamiltonian H for a relativistic electron moving in the Coulomb potential can be chosen in the following form (atomic units are used): H = c ( α ⋅ p ) + e V + m c 2 β , {\displaystyle H=c({\boldsymbol {\alpha }}\cdot \mathbf {p} )+eV+mc^{2}\beta ,} where V = −eZ/r is the Coulomb potential of a pointlike nucleus, p is a momentum operator of the electron, and e, m and c are the elementary charge, electron mass and the speed of light respectively, and finally α and β are a set of Dirac 2 × 2 matrices: α = ( 0 σ σ 0 ) , β = ( I 0 0 − I ) . {\displaystyle {\begin{aligned}{\boldsymbol {\alpha }}&={\begin{pmatrix}0&{\boldsymbol {\sigma }}\\{\boldsymbol {\sigma }}&0\end{pmatrix}},\\\beta &={\begin{pmatrix}I&0\\0&-I\end{pmatrix}}.\end{aligned}}} To find out the eigenfunctions and corresponding energies, one solves the eigenfunction equation H Ψ = E Ψ , {\displaystyle H\Psi =E\Psi ,} where Ψ = (Ψ(1), Ψ(2), Ψ(3), Ψ(4))T is a four-component wavefunction, and E is the associated eigenenergy. It is demonstrated in Brack (1983) that application of the virial theorem to the eigenfunction equation produces the following formula for the eigenenergy of any bound state: E = m c 2 ⟨ Ψ | β | Ψ ⟩ = m c 2 ∫ | Ψ ( 1 ) | 2 + | Ψ ( 2 ) | 2 − | Ψ ( 3 ) | 2 − | Ψ ( 4 ) | 2 d τ , {\displaystyle E=mc^{2}\langle \Psi |\beta |\Psi \rangle =mc^{2}\int {\big |}\Psi (1){\big |}^{2}+{\big |}\Psi (2){\big |}^{2}-{\big |}\Psi (3){\big |}^{2}-{\big |}\Psi (4){\big |}^{2}\,\mathrm {d} \tau ,} and analogously, the virial theorem applied to the eigenfunction equation with the square of the Hamiltonian yields E 2 = m 2 c 4 + e m c 2 ⟨ Ψ | V β | Ψ ⟩ . {\displaystyle E^{2}=m^{2}c^{4}+emc^{2}\langle \Psi |V\beta |\Psi \rangle .} It is easy to see that both of the above formulae represent density functionals. The former formula can be easily generalized for the multi-electron case. One may observe that both of the functionals written above do not have extremals, of course, if a reasonably wide set of functions is allowed for variation. Nevertheless, it is possible to design a density functional with desired extremal properties out of those ones. 
Let us make it in the following way: F [ n ] = 1 m c 2 ( m c 2 ∫ n d τ − m 2 c 4 + e m c 2 ∫ V n d τ ) 2 + δ n , n e m c 2 ∫ n d τ , {\displaystyle F[n]={\frac {1}{mc^{2}}}\left(mc^{2}\int n\,d\tau -{\sqrt {m^{2}c^{4}+emc^{2}\int Vn\,d\tau }}\right)^{2}+\delta _{n,n_{e}}mc^{2}\int n\,d\tau ,} where ne in Kronecker delta symbol of the second term denotes any extremal for the functional represented by the first term of the functional F. The second term amounts to zero for any function that is not an extremal for the first term of functional F. To proceed further we'd like to find Lagrange equation for this functional. In order to do this, we should allocate a linear part of functional increment when the argument function is altered: F [ n e + δ n ] = 1 m c 2 ( m c 2 ∫ ( n e + δ n ) d τ − m 2 c 4 + e m c 2 ∫ V ( n e + δ n ) d τ ) 2 . {\displaystyle F[n_{e}+\delta n]={\frac {1}{mc^{2}}}\left(mc^{2}\int (n_{e}+\delta n)\,d\tau -{\sqrt {m^{2}c^{4}+emc^{2}\int V(n_{e}+\delta n)\,d\tau }}\right)^{2}.} Deploying written above equation, it is easy to find the following formula for functional derivative: δ F [ n e ] δ n = 2 A − 2 B 2 + A e V ( τ 0 ) B + e V ( τ 0 ) , {\displaystyle {\frac {\delta F[n_{e}]}{\delta n}}=2A-{\frac {2B^{2}+AeV(\tau _{0})}{B}}+eV(\tau _{0}),} where A = mc2∫ ne dτ, and B = √m2c4 + emc2∫Vne dτ, and V(τ0) is a value of potential at some point, specified by support of variation function δn, which is supposed to be infinitesimal. To advance toward Lagrange equation, we equate functional derivative to zero and after simple algebraic manipulations arrive to the following equation: 2 B ( A − B ) = e V ( τ 0 ) ( A − B ) . {\displaystyle 2B(A-B)=eV(\tau _{0})(A-B).} Apparently, this equation could have solution only if A = B. This last condition provides us with Lagrange equation for functional F, which could be finally written down in the following form: ( m c 2 ∫ n d τ ) 2 = m 2 c 4 + e m c 2 ∫ V n d τ . {\displaystyle \left(mc^{2}\int n\,d\tau \right)^{2}=m^{2}c^{4}+emc^{2}\int Vn\,d\tau .} Solutions of this equation represent extremals for functional F. It's easy to see that all real densities, that is, densities corresponding to the bound states of the system in question, are solutions of written above equation, which could be called the Kohn–Sham equation in this particular case. Looking back onto the definition of the functional F, we clearly see that the functional produces energy of the system for appropriate density, because the first term amounts to zero for such density and the second one delivers the energy value. == Approximations (exchange–correlation functionals) == The major problem with DFT is that the exact functionals for exchange and correlation are not known, except for the free-electron gas. However, approximations exist which permit the calculation of certain physical quantities quite accurately. One of the simplest approximations is the local-density approximation (LDA), where the functional depends only on the density at the coordinate where the functional is evaluated: E XC LDA [ n ] = ∫ ε XC ( n ) n ( r ) d 3 r . {\displaystyle E_{\text{XC}}^{\text{LDA}}[n]=\int \varepsilon _{\text{XC}}(n)n(\mathbf {r} )\,\mathrm {d} ^{3}\mathbf {r} .} The local spin-density approximation (LSDA) is a straightforward generalization of the LDA to include electron spin: E XC LSDA [ n ↑ , n ↓ ] = ∫ ε XC ( n ↑ , n ↓ ) n ( r ) d 3 r . 
{\displaystyle E_{\text{XC}}^{\text{LSDA}}[n_{\uparrow },n_{\downarrow }]=\int \varepsilon _{\text{XC}}(n_{\uparrow },n_{\downarrow })n(\mathbf {r} )\,\mathrm {d} ^{3}\mathbf {r} .} In LDA, the exchange–correlation energy is typically separated into the exchange part and the correlation part: εXC = εX + εC. The exchange part is called the Dirac (or sometimes Slater) exchange, which takes the form εX ∝ n1/3. There are, however, many mathematical forms for the correlation part. Highly accurate formulae for the correlation energy density εC(n↑, n↓) have been constructed from quantum Monte Carlo simulations of jellium. A simple first-principles correlation functional has been recently proposed as well. Although unrelated to the Monte Carlo simulation, the two variants provide comparable accuracy. The LDA assumes that the density is the same everywhere. Because of this, the LDA has a tendency to underestimate the exchange energy and over-estimate the correlation energy. The errors due to the exchange and correlation parts tend to compensate each other to a certain degree. To correct for this tendency, it is common to expand in terms of the gradient of the density in order to account for the non-homogeneity of the true electron density. This allows corrections based on the changes in density away from the coordinate. These expansions are referred to as generalized gradient approximations (GGA) and have the following form: E XC GGA [ n ↑ , n ↓ ] = ∫ ε XC ( n ↑ , n ↓ , ∇ n ↑ , ∇ n ↓ ) n ( r ) d 3 r . {\displaystyle E_{\text{XC}}^{\text{GGA}}[n_{\uparrow },n_{\downarrow }]=\int \varepsilon _{\text{XC}}(n_{\uparrow },n_{\downarrow },\nabla n_{\uparrow },\nabla n_{\downarrow })n(\mathbf {r} )\,\mathrm {d} ^{3}\mathbf {r} .} Using the latter (GGA), very good results for molecular geometries and ground-state energies have been achieved. Potentially more accurate than the GGA functionals are the meta-GGA functionals, a natural development after the GGA (generalized gradient approximation). Meta-GGA DFT functional in its original form includes the second derivative of the electron density (the Laplacian), whereas GGA includes only the density and its first derivative in the exchange–correlation potential. Functionals of this type are, for example, TPSS and the Minnesota Functionals. These functionals include a further term in the expansion, depending on the density, the gradient of the density and the Laplacian (second derivative) of the density. Difficulties in expressing the exchange part of the energy can be relieved by including a component of the exact exchange energy calculated from Hartree–Fock theory. Functionals of this type are known as hybrid functionals. == Generalizations to include magnetic fields == The DFT formalism described above breaks down, to various degrees, in the presence of a vector potential, i.e. a magnetic field. In such a situation, the one-to-one mapping between the ground-state electron density and wavefunction is lost. Generalizations to include the effects of magnetic fields have led to two different theories: current density functional theory (CDFT) and magnetic field density functional theory (BDFT). In both these theories, the functional used for the exchange and correlation must be generalized to include more than just the electron density. In current density functional theory, developed by Vignale and Rasolt, the functionals become dependent on both the electron density and the paramagnetic current density. 
In magnetic field density functional theory, developed by Salsbury, Grayce and Harris, the functionals depend on the electron density and the magnetic field, and the functional form can depend on the form of the magnetic field. In both of these theories it has been difficult to develop functionals beyond their equivalent to LDA, which are also readily implementable computationally. == Applications == In general, density functional theory finds increasingly broad application in chemistry and materials science for the interpretation and prediction of complex system behavior at an atomic scale. Specifically, DFT computational methods are applied for synthesis-related systems and processing parameters. In such systems, experimental studies are often encumbered by inconsistent results and non-equilibrium conditions. Examples of contemporary DFT applications include studying the effects of dopants on phase transformation behavior in oxides, magnetic behavior in dilute magnetic semiconductor materials, and the study of magnetic and electronic behavior in ferroelectrics and dilute magnetic semiconductors. It has also been shown that DFT gives good results in the prediction of sensitivity of some nanostructures to environmental pollutants like sulfur dioxide or acrolein, as well as prediction of mechanical properties. In practice, Kohn–Sham theory can be applied in several distinct ways, depending on what is being investigated. In solid-state calculations, the local density approximations are still commonly used along with plane-wave basis sets, as an electron-gas approach is more appropriate for electrons delocalised through an infinite solid. In molecular calculations, however, more sophisticated functionals are needed, and a huge variety of exchange–correlation functionals have been developed for chemical applications. Some of these are inconsistent with the uniform electron-gas approximation; however, they must reduce to LDA in the electron-gas limit. Among physicists, one of the most widely used functionals is the revised Perdew–Burke–Ernzerhof exchange model (a direct generalized gradient parameterization of the free-electron gas with no free parameters); however, this is not sufficiently calorimetrically accurate for gas-phase molecular calculations. In the chemistry community, one popular functional is known as BLYP (from the name Becke for the exchange part and Lee, Yang and Parr for the correlation part). Even more widely used is B3LYP, which is a hybrid functional in which the exchange energy, in this case from Becke's exchange functional, is combined with the exact energy from Hartree–Fock theory. Along with the component exchange and correlation funсtionals, three parameters define the hybrid functional, specifying how much of the exact exchange is mixed in. The adjustable parameters in hybrid functionals are generally fitted to a "training set" of molecules. Although the results obtained with these functionals are usually sufficiently accurate for most applications, there is no systematic way of improving them (in contrast to some of the traditional wavefunction-based methods like configuration interaction or coupled cluster theory). In the current DFT approach it is not possible to estimate the error of the calculations without comparing them to other methods or experiments. Density functional theory is generally highly accurate but highly computationally-expensive. 
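To make the mixing recipe described above concrete, the following schematic sketch assembles a B3LYP-style exchange–correlation energy from its component energies. The component values passed in are placeholders; in a real calculation they come from the underlying LDA, GGA and Hartree–Fock machinery. The three coefficients shown are the commonly quoted B3LYP values; treat the exact mixing form as an assumption of this sketch rather than a definitive statement of any particular code's convention:

def hybrid_xc(e_x_hf, e_x_lda, e_x_gga, e_c_lda, e_c_gga,
              a0=0.20, ax=0.72, ac=0.81):
    """E_xc = E_x^LDA + a0*(E_x^HF - E_x^LDA) + ax*(E_x^GGA - E_x^LDA)
              + E_c^LDA + ac*(E_c^GGA - E_c^LDA)."""
    exchange = e_x_lda + a0 * (e_x_hf - e_x_lda) + ax * (e_x_gga - e_x_lda)
    correlation = e_c_lda + ac * (e_c_gga - e_c_lda)
    return exchange + correlation

# Made-up component energies in hartree, purely to show the call:
print(hybrid_xc(e_x_hf=-10.2, e_x_lda=-9.5, e_x_gga=-10.0, e_c_lda=-0.60, e_c_gga=-0.45))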
In recent years, DFT has been used with machine learning techniques - especially graph neural networks - to create machine learning potentials. These graph neural networks approximate DFT, with the aim of achieving similar accuracies with much less computation, and are especially beneficial for large systems. They are trained using DFT-calculated properties of a known set of molecules. Researchers have been trying to approximate DFT with machine learning for decades, but have only recently made good estimators. Breakthroughs in model architecture and data preprocessing that more heavily encoded theoretical knowledge, especially regarding symmetries and invariances, have enabled huge leaps in model performance. Using backpropagation, the process by which neural networks learn from training errors, to extract meaningful information about forces and densities, has similarly improved machine learning potentials accuracy. By 2023, for example, the DFT approximator Matlantis could simulate 72 elements, handle up to 20,000 atoms at a time, and execute calculations up to 20,000,000 times faster than DFT with similar accuracy, showcasing the power of DFT approximators in the artificial intelligence age. ML approximations of DFT have historically faced substantial transferability issues, with models failing to generalize potentials from some types of elements and compounds to others; improvements in architecture and data have slowly mitigated, but not eliminated, this issue. For very large systems, electrically nonneutral simulations, and intricate reaction pathways, DFT approximators often remain insufficiently computationally-lightweight or insufficiently accurate. == Thomas–Fermi model == The predecessor to density functional theory was the Thomas–Fermi model, developed independently by both Llewellyn Thomas and Enrico Fermi in 1927. They used a statistical model to approximate the distribution of electrons in an atom. The mathematical basis postulated that electrons are distributed uniformly in phase space with two electrons in every h 3 {\displaystyle h^{3}} of volume. For each element of coordinate space volume d 3 r {\displaystyle \mathrm {d} ^{3}\mathbf {r} } we can fill out a sphere of momentum space up to the Fermi momentum p F {\displaystyle p_{\text{F}}} 4 3 π p F 3 ( r ) . {\displaystyle {\tfrac {4}{3}}\pi p_{\text{F}}^{3}(\mathbf {r} ).} Equating the number of electrons in coordinate space to that in phase space gives n ( r ) = 8 π 3 h 3 p F 3 ( r ) . {\displaystyle n(\mathbf {r} )={\frac {8\pi }{3h^{3}}}p_{\text{F}}^{3}(\mathbf {r} ).} Solving for pF and substituting into the classical kinetic energy formula then leads directly to a kinetic energy represented as a functional of the electron density: t TF [ n ] = p 2 2 m e ∝ ( n 1 / 3 ) 2 2 m e ∝ n 2 / 3 ( r ) , T TF [ n ] = C F ∫ n ( r ) n 2 / 3 ( r ) d 3 r = C F ∫ n 5 / 3 ( r ) d 3 r , {\displaystyle {\begin{aligned}t_{\text{TF}}[n]&={\frac {p^{2}}{2m_{e}}}\propto {\frac {(n^{1/3})^{2}}{2m_{e}}}\propto n^{2/3}(\mathbf {r} ),\\T_{\text{TF}}[n]&=C_{\text{F}}\int n(\mathbf {r} )n^{2/3}(\mathbf {r} )\,\mathrm {d} ^{3}\mathbf {r} =C_{\text{F}}\int n^{5/3}(\mathbf {r} )\,\mathrm {d} ^{3}\mathbf {r} ,\end{aligned}}} where C F = 3 h 2 10 m e ( 3 8 π ) 2 / 3 . 
{\displaystyle C_{\text{F}}={\frac {3h^{2}}{10m_{e}}}\left({\frac {3}{8\pi }}\right)^{2/3}.} As such, they were able to calculate the energy of an atom using this kinetic-energy functional combined with the classical expressions for the nucleus–electron and electron–electron interactions (which can both also be represented in terms of the electron density). Although this was an important first step, the Thomas–Fermi equation's accuracy is limited because the resulting kinetic-energy functional is only approximate, and because the method does not attempt to represent the exchange energy of an atom as a conclusion of the Pauli principle. An exchange-energy functional was added by Paul Dirac in 1928. However, the Thomas–Fermi–Dirac theory remained rather inaccurate for most applications. The largest source of error was in the representation of the kinetic energy, followed by the errors in the exchange energy, and due to the complete neglect of electron correlation. Edward Teller (1962) showed that Thomas–Fermi theory cannot describe molecular bonding. This can be overcome by improving the kinetic-energy functional. The kinetic-energy functional can be improved by adding the von Weizsäcker (1935) correction: T W [ n ] = ℏ 2 8 m ∫ | ∇ n ( r ) | 2 n ( r ) d 3 r . {\displaystyle T_{\text{W}}[n]={\frac {\hbar ^{2}}{8m}}\int {\frac {|\nabla n(\mathbf {r} )|^{2}}{n(\mathbf {r} )}}\,\mathrm {d} ^{3}\mathbf {r} .} == Hohenberg–Kohn theorems == The Hohenberg–Kohn theorems relate to any system consisting of electrons moving under the influence of an external potential. Theorem 1. The external potential (and hence the total energy), is a unique functional of the electron density. If two systems of electrons, one trapped in a potential v 1 ( r ) {\displaystyle v_{1}(\mathbf {r} )} and the other in v 2 ( r ) {\displaystyle v_{2}(\mathbf {r} )} , have the same ground-state density n ( r ) {\displaystyle n(\mathbf {r} )} , then v 1 ( r ) − v 2 ( r ) {\displaystyle v_{1}(\mathbf {r} )-v_{2}(\mathbf {r} )} is necessarily a constant. Corollary 1: the ground-state density uniquely determines the potential and thus all properties of the system, including the many-body wavefunction. In particular, the HK functional, defined as F [ n ] = T [ n ] + U [ n ] {\displaystyle F[n]=T[n]+U[n]} , is a universal functional of the density (not depending explicitly on the external potential). Corollary 2: In light of the fact that the sum of the occupied energies provides the energy content of the Hamiltonian, a unique functional of the ground state charge density, the spectrum of the Hamiltonian is also a unique functional of the ground state charge density. Theorem 2. The functional that delivers the ground-state energy of the system gives the lowest energy if and only if the input density is the true ground-state density. In other words, the energy content of the Hamiltonian reaches its absolute minimum, i.e., the ground state, when the charge density is that of the ground state. For any positive integer N {\displaystyle N} and potential v ( r ) {\displaystyle v(\mathbf {r} )} , a density functional F [ n ] {\displaystyle F[n]} exists such that E ( v , N ) [ n ] = F [ n ] + ∫ v ( r ) n ( r ) d 3 r {\displaystyle E_{(v,N)}[n]=F[n]+\int v(\mathbf {r} )n(\mathbf {r} )\,\mathrm {d} ^{3}\mathbf {r} } reaches its minimal value at the ground-state density of N {\displaystyle N} electrons in the potential v ( r ) {\displaystyle v(\mathbf {r} )} . 
The minimal value of E ( v , N ) [ n ] {\displaystyle E_{(v,N)}[n]} is then the ground-state energy of this system. == Pseudo-potentials == The many-electron Schrödinger equation can be very much simplified if electrons are divided in two groups: valence electrons and inner core electrons. The electrons in the inner shells are strongly bound and do not play a significant role in the chemical binding of atoms; they also partially screen the nucleus, thus forming with the nucleus an almost inert core. Binding properties are almost completely due to the valence electrons, especially in metals and semiconductors. This separation suggests that inner electrons can be ignored in a large number of cases, thereby reducing the atom to an ionic core that interacts with the valence electrons. The use of an effective interaction, a pseudopotential, that approximates the potential felt by the valence electrons, was first proposed by Fermi in 1934 and Hellmann in 1935. In spite of the simplification pseudo-potentials introduce in calculations, they remained forgotten until the late 1950s. === Ab initio pseudo-potentials === A crucial step toward more realistic pseudo-potentials was given by William C. Topp and John Hopfield, who suggested that the pseudo-potential should be adjusted such that they describe the valence charge density accurately. Based on that idea, modern pseudo-potentials are obtained inverting the free-atom Schrödinger equation for a given reference electronic configuration and forcing the pseudo-wavefunctions to coincide with the true valence wavefunctions beyond a certain distance rl. The pseudo-wavefunctions are also forced to have the same norm (i.e., the so-called norm-conserving condition) as the true valence wavefunctions and can be written as R l PP ( r ) = R n l AE ( r ) , for r > r l , ∫ 0 r l | R l PP ( r ) | 2 r 2 d r = ∫ 0 r l | R n l AE ( r ) | 2 r 2 d r , {\displaystyle {\begin{aligned}R_{l}^{\text{PP}}(r)&=R_{nl}^{\text{AE}}(r),{\text{ for }}r>r_{l},\\\int _{0}^{r_{l}}{\big |}R_{l}^{\text{PP}}(r){\big |}^{2}r^{2}\,\mathrm {d} r&=\int _{0}^{r_{l}}{\big |}R_{nl}^{\text{AE}}(r){\big |}^{2}r^{2}\,\mathrm {d} r,\end{aligned}}} where Rl(r) is the radial part of the wavefunction with angular momentum l, and PP and AE denote the pseudo-wavefunction and the true (all-electron) wavefunction respectively. The index n in the true wavefunctions denotes the valence level. The distance rl beyond which the true and the pseudo-wavefunctions are equal is also dependent on l. == Electron smearing == The electrons of a system will occupy the lowest Kohn–Sham eigenstates up to a given energy level according to the Aufbau principle. This corresponds to the steplike Fermi–Dirac distribution at absolute zero. If there are several degenerate or close to degenerate eigenstates at the Fermi level, it is possible to get convergence problems, since very small perturbations may change the electron occupation. One way of damping these oscillations is to smear the electrons, i.e. allowing fractional occupancies. One approach of doing this is to assign a finite temperature to the electron Fermi–Dirac distribution. Other ways is to assign a cumulative Gaussian distribution of the electrons or using a Methfessel–Paxton method. == Classical density functional theory == Classical density functional theory is a classical statistical method to investigate the properties of many-body systems consisting of interacting molecules, macromolecules, nanoparticles or microparticles. 
The classical non-relativistic method is correct for classical fluids with particle velocities less than the speed of light and thermal de Broglie wavelength smaller than the distance between particles. The theory is based on the calculus of variations of a thermodynamic functional, which is a function of the spatially dependent density function of particles, thus the name. The same name is used for quantum DFT, which is the theory to calculate the electronic structure of electrons based on spatially dependent electron density with quantum and relativistic effects. Classical DFT is a popular and useful method to study fluid phase transitions, ordering in complex liquids, physical characteristics of interfaces and nanomaterials. Since the 1970s it has been applied to the fields of materials science, biophysics, chemical engineering and civil engineering. Computational costs are much lower than for molecular dynamics simulations, which provide similar data and a more detailed description but are limited to small systems and short time scales. Classical DFT is valuable to interpret and test numerical results and to define trends although details of the precise motion of the particles are lost due to averaging over all possible particle trajectories. As in electronic systems, there are fundamental and numerical difficulties in using DFT to quantitatively describe the effect of intermolecular interaction on structure, correlations and thermodynamic properties. Classical DFT addresses the difficulty of describing thermodynamic equilibrium states of many-particle systems with nonuniform density. Classical DFT has its roots in theories such as the van der Waals theory for the equation of state and the virial expansion method for the pressure. In order to account for correlation in the positions of particles the direct correlation function was introduced as the effective interaction between two particles in the presence of a number of surrounding particles by Leonard Ornstein and Frits Zernike in 1914. The connection to the density pair distribution function was given by the Ornstein–Zernike equation. The importance of correlation for thermodynamic properties was explored through density distribution functions. The functional derivative was introduced to define the distribution functions of classical mechanical systems. Theories were developed for simple and complex liquids using the ideal gas as a basis for the free energy and adding molecular forces as a second-order perturbation. A term in the gradient of the density was added to account for non-uniformity in density in the presence of external fields or surfaces. These theories can be considered precursors of DFT. To develop a formalism for the statistical thermodynamics of non-uniform fluids functional differentiation was used extensively by Percus and Lebowitz (1961), which led to the Percus–Yevick equation linking the density distribution function and the direct correlation. Other closure relations were also proposed;the Classical-map hypernetted-chain method, the BBGKY hierarchy. In the late 1970s classical DFT was applied to the liquid–vapor interface and the calculation of surface tension. Other applications followed: the freezing of simple fluids, formation of the glass phase, the crystal–melt interface and dislocation in crystals, properties of polymer systems, and liquid crystal ordering. Classical DFT was applied to colloid dispersions, which were discovered to be good models for atomic systems. 
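The Ornstein–Zernike relation introduced above is most conveniently handled in Fourier space, where it becomes an algebraic equation. The sketch below assumes the crudest possible input, the low-density (Mayer-function) approximation c(r) ≈ −1 inside a hard-sphere diameter and 0 outside; the density and diameter are placeholder values, so the output is only qualitative.

```python
import numpy as np

# Ornstein–Zernike relation in Fourier space: h(k) = c(k) / (1 - rho * c(k)),
# structure factor S(k) = 1 + rho * h(k) = 1 / (1 - rho * c(k)).
sigma = 1.0          # hard-sphere diameter (length unit)
rho = 0.5            # number density (assumed, moderately dilute)

def c_direct_ft(k):
    """3-D Fourier transform of the low-density direct correlation function
    c(r) = -1 for r < sigma, 0 otherwise (Mayer-function approximation)."""
    ks = k * sigma
    return -4.0 * np.pi * (np.sin(ks) - ks * np.cos(ks)) / k**3

k = np.linspace(0.05, 20.0, 400)
ck = c_direct_ft(k)
Sk = 1.0 / (1.0 - rho * ck)      # structure factor from the OZ relation
hk = ck / (1.0 - rho * ck)       # total (pair) correlation in Fourier space

print("S(k) at the smallest k:", Sk[0])  # related to the compressibility
```

More realistic closures, such as Percus–Yevick or hypernetted chain, replace the model c(r) while the Ornstein–Zernike step itself stays unchanged.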
By assuming local chemical equilibrium and using the local chemical potential of the fluid from DFT as the driving force in fluid transport equations, equilibrium DFT is extended to describe non-equilibrium phenomena and fluid dynamics on small scales. Classical DFT allows the calculation of the equilibrium particle density and prediction of thermodynamic properties and behavior of a many-body system on the basis of model interactions between particles. The spatially dependent density determines the local structure and composition of the material. It is determined as a function that optimizes the thermodynamic potential of the grand canonical ensemble. The grand potential is evaluated as the sum of the ideal-gas term with the contribution from external fields and an excess thermodynamic free energy arising from interparticle interactions. In the simplest approach the excess free-energy term is expanded on a system of uniform density using a functional Taylor expansion. The excess free energy is then a sum of the contributions from s-body interactions with density-dependent effective potentials representing the interactions between s particles. In most calculations the terms in the interactions of three or more particles are neglected (second-order DFT). When the structure of the system to be studied is not well approximated by a low-order perturbation expansion with a uniform phase as the zero-order term, non-perturbative free-energy functionals have also been developed. The minimization of the grand potential functional in arbitrary local density functions for fixed chemical potential, volume and temperature provides self-consistent thermodynamic equilibrium conditions, in particular, for the local chemical potential. The functional is not in general a convex functional of the density; solutions may not be local minima. Limiting to low-order corrections in the local density is a well-known problem, although the results agree (reasonably) well on comparison to experiment. A variational principle is used to determine the equilibrium density. It can be shown that for constant temperature and volume the correct equilibrium density minimizes the grand potential functional Ω {\displaystyle \Omega } of the grand canonical ensemble over density functions n ( r ) {\displaystyle n(\mathbf {r} )} . In the language of functional differentiation (Mermin theorem): δ Ω δ n ( r ) = 0. {\displaystyle {\frac {\delta \Omega }{\delta n(\mathbf {r} )}}=0.} The Helmholtz free energy functional F {\displaystyle F} is defined as F = Ω + ∫ d 3 r n ( r ) μ ( r ) {\displaystyle F=\Omega +\int d^{3}\mathbf {r} \,n(\mathbf {r} )\mu (\mathbf {r} )} . The functional derivative in the density function determines the local chemical potential: μ ( r ) = δ F ( r ) / δ n ( r ) {\displaystyle \mu (\mathbf {r} )=\delta F(\mathbf {r} )/\delta n(\mathbf {r} )} . In classical statistical mechanics the partition function is a sum over probability for a given microstate of N classical particles as measured by the Boltzmann factor in the Hamiltonian of the system. The Hamiltonian splits into kinetic and potential energy, which includes interactions between particles, as well as external potentials. The partition function of the grand canonical ensemble defines the grand potential. A correlation function is introduced to describe the effective interaction between particles. The s-body density distribution function is defined as the statistical ensemble average ⟨ … ⟩ {\displaystyle \langle \dots \rangle } of particle positions. 
It measures the probability to find s particles at points in space r 1 , … , r s {\displaystyle \mathbf {r} _{1},\dots ,\mathbf {r} _{s}} : n s ( r 1 , … , r s ) = N ! ( N − s ) ! ⟨ δ ( r 1 − r 1 ′ ) … δ ( r s − r s ′ ) ⟩ . {\displaystyle n_{s}(\mathbf {r} _{1},\dots ,\mathbf {r} _{s})={\frac {N!}{(N-s)!}}{\big \langle }\delta (\mathbf {r} _{1}-\mathbf {r} '_{1})\dots \delta (\mathbf {r} _{s}-\mathbf {r} '_{s}){\big \rangle }.} From the definition of the grand potential, the functional derivative with respect to the local chemical potential is the density; higher-order density correlations for two, three, four or more particles are found from higher-order derivatives: δ s Ω δ μ ( r 1 ) … δ μ ( r s ) = ( − 1 ) s n s ( r 1 , … , r s ) . {\displaystyle {\frac {\delta ^{s}\Omega }{\delta \mu (\mathbf {r} _{1})\dots \delta \mu (\mathbf {r} _{s})}}=(-1)^{s}n_{s}(\mathbf {r} _{1},\dots ,\mathbf {r} _{s}).} The radial distribution function with s = 2 measures the change in the density at a given point for a change of the local chemical interaction at a distant point. In a fluid the free energy is a sum of the ideal free energy and the excess free-energy contribution Δ F {\displaystyle \Delta F} from interactions between particles. In the grand ensemble the functional derivatives in the density yield the direct correlation functions c s {\displaystyle c_{s}} : 1 k T δ s Δ F δ n ( r 1 ) … δ n ( r s ) = c s ( r 1 , … , r s ) . {\displaystyle {\frac {1}{kT}}{\frac {\delta ^{s}\Delta F}{\delta n(\mathbf {r} _{1})\dots \delta n(\mathbf {r} _{s})}}=c_{s}(\mathbf {r} _{1},\dots ,\mathbf {r} _{s}).} The one-body direct correlation function plays the role of an effective mean field. The functional derivative in density of the one-body direct correlation results in the direct correlation function between two particles c 2 {\displaystyle c_{2}} . The direct correlation function is the correlation contribution to the change of local chemical potential at a point r {\displaystyle \mathbf {r} } for a density change at r ′ {\displaystyle \mathbf {r} '} and is related to the work of creating density changes at different positions. In dilute gases the direct correlation function is simply the pair-wise interaction between particles (Debye–Huckel equation). The Ornstein–Zernike equation between the pair and the direct correlation functions is derived from the equation ∫ d 3 r ″ δ μ ( r ) δ n ( r ″ ) δ n ( r ″ ) δ μ ( r ′ ) = δ ( r − r ′ ) . {\displaystyle \int d^{3}\mathbf {r} ''\,{\frac {\delta \mu (\mathbf {r} )}{\delta n(\mathbf {r} '')}}{\frac {\delta n(\mathbf {r} '')}{\delta \mu (\mathbf {r} ')}}=\delta (\mathbf {r} -\mathbf {r} ').} Various assumptions and approximations adapted to the system under study lead to expressions for the free energy. Correlation functions are used to calculate the free-energy functional as an expansion on a known reference system. If the non-uniform fluid can be described by a density distribution that is not far from uniform density a functional Taylor expansion of the free energy in density increments leads to an expression for the thermodynamic potential using known correlation functions of the uniform system. In the square gradient approximation a strong non-uniform density contributes a term in the gradient of the density. In a perturbation theory approach the direct correlation function is given by the sum of the direct correlation in a known system such as hard spheres and a term in a weak interaction such as the long range London dispersion force. 
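The simplest case in which the variational condition δΩ/δn = 0 can be followed through explicitly is an ideal (non-interacting) classical gas in an external field, for which the excess free energy and all direct correlation functions vanish. The sketch below uses that limit; the temperature, chemical potential, thermal wavelength and external potential are arbitrary choices for illustration.

```python
import numpy as np

# Ideal-gas grand-potential functional in an external field (classical DFT):
#   Omega[n] = kT * sum_z n(z) [ln(n(z) * Lambda^3) - 1] dz + sum_z [V_ext(z) - mu] n(z) dz
# The Euler–Lagrange condition dOmega/dn = 0 gives the barometric profile
#   n(z) = Lambda^-3 * exp(-beta * (V_ext(z) - mu)).
kT = 1.0                 # energy unit
beta = 1.0 / kT
Lambda3 = 1.0            # thermal de Broglie volume (set to 1 for simplicity)
mu = 0.0                 # chemical potential (assumed value)

z = np.linspace(0.0, 5.0, 501)
V_ext = 1.5 * z          # e.g. a linear, gravity-like external potential

n_eq = np.exp(-beta * (V_ext - mu)) / Lambda3   # equilibrium density profile

# Check stationarity: the functional derivative kT*ln(n*Lambda^3) + V_ext - mu
# should vanish at the equilibrium profile.
residual = kT * np.log(n_eq * Lambda3) + V_ext - mu
print("max |dOmega/dn| at the solution:", np.abs(residual).max())   # ~0
```

For an interacting fluid the stationarity condition picks up the one-body direct correlation term, and the profile is usually obtained by Picard-type iteration rather than in closed form.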
In a local density approximation the local excess free energy is calculated from the effective interactions with particles distributed at uniform density of the fluid in a cell surrounding a particle. Other improvements have been suggested such as the weighted density approximation for a direct correlation function of a uniform system which distributes the neighboring particles with an effective weighted density calculated from a self-consistent condition on the direct correlation function. The variational Mermin principle leads to an equation for the equilibrium density and system properties are calculated from the solution for the density. The equation is a non-linear integro-differential equation and finding a solution is not trivial, requiring numerical methods, except for the simplest models. Classical DFT is supported by standard software packages, and specific software is currently under development. Assumptions can be made to propose trial functions as solutions, and the free energy is expressed in the trial functions and optimized with respect to parameters of the trial functions. Examples are a localized Gaussian function centered on crystal lattice points for the density in a solid, the hyperbolic function tanh ⁡ ( r ) {\displaystyle \tanh(r)} for interfacial density profiles. Classical DFT has found many applications, for example: developing new functional materials in materials science, in particular nanotechnology; studying the properties of fluids at surfaces and the phenomena of wetting and adsorption; understanding life processes in biotechnology; improving filtration methods for gases and fluids in chemical engineering; fighting pollution of water and air in environmental science; cell membranes by modelling complex systems with amphiphile compounds; generating new procedures in microfluidics and nanofluidics. The extension of classical DFT towards nonequilibrium systems is known as dynamical density functional theory (DDFT). DDFT allows to describe the time evolution of the one-body density ρ ( r , t ) {\displaystyle \rho ({\boldsymbol {r}},t)} of a colloidal system, which is governed by the equation ∂ ρ ∂ t = Γ ∇ ⋅ ( ρ ∇ δ F δ ρ ) {\displaystyle {\frac {\partial \rho }{\partial t}}=\Gamma \nabla \cdot \left(\rho \nabla {\frac {\delta F}{\delta \rho }}\right)} with the mobility Γ {\displaystyle \Gamma } and the free energy F {\displaystyle F} . DDFT can be derived from the microscopic equations of motion for a colloidal system (Langevin equations or Smoluchowski equation) based on the adiabatic approximation, which corresponds to the assumption that the two-body distribution in a nonequilibrium system is identical to that in an equilibrium system with the same one-body density. For a system of noninteracting particles, DDFT reduces to the standard diffusion equation. == See also == === Lists === List of quantum chemistry and solid state physics software List of software for molecular mechanics modeling == References == == Sources == == External links == Walter Kohn, Nobel Laureate – Video interview with Walter on his work developing density functional theory by the Vega Science Trust Capelle, Klaus (2002). "A bird's-eye view of density-functional theory". arXiv:cond-mat/0211443. Walter Kohn, Nobel Lecture Argaman, Nathan; Makov, Guy (2000). "Density Functional Theory -- an introduction". American Journal of Physics. 68 (2000): 69–79. arXiv:physics/9806013. Bibcode:2000AmJPh..68...69A. doi:10.1119/1.19375. S2CID 119102923. 
Electron Density Functional Theory – Lecture Notes Density Functional Theory through Legendre Transformation Archived 2010-05-10 at the Wayback Machine (PDF) Burke, Kieron. "The ABC of DFT" (PDF). Modeling Materials: Continuum, Atomistic and Multiscale Techniques (book). NIST Jarvis-DFT Clary, David C. (2024). Walter Kohn: From Kindertransport and Internment to DFT and the Nobel Prize. World Scientific Publishing.
Wikipedia/Density_functional_theory
In physics, the fundamental interactions or fundamental forces are interactions in nature that appear not to be reducible to more basic interactions. There are four fundamental interactions known to exist: gravity, electromagnetism, the weak interaction, and the strong interaction. The gravitational and electromagnetic interactions produce long-range forces whose effects can be seen directly in everyday life. The strong and weak interactions produce forces at subatomic scales and govern nuclear interactions inside atoms. Some scientists hypothesize that a fifth force might exist, but these hypotheses remain speculative. Each of the known fundamental interactions can be described mathematically as a field. The gravitational interaction is attributed to the curvature of spacetime, described by Einstein's general theory of relativity. The other three are discrete quantum fields, and their interactions are mediated by elementary particles described by the Standard Model of particle physics. Within the Standard Model, the strong interaction is carried by a particle called the gluon and is responsible for quarks binding together to form hadrons, such as protons and neutrons. As a residual effect, it creates the nuclear force that binds the latter particles to form atomic nuclei. The weak interaction is carried by particles called W and Z bosons, and also acts on the nucleus of atoms, mediating radioactive decay. The electromagnetic force, carried by the photon, creates electric and magnetic fields, which are responsible for the attraction between orbital electrons and atomic nuclei which holds atoms together, as well as chemical bonding and electromagnetic waves, including visible light, and forms the basis for electrical technology. Although the electromagnetic force is far stronger than gravity, it tends to cancel itself out within large objects, so over large (astronomical) distances gravity tends to be the dominant force, and is responsible for holding together the large scale structures in the universe, such as planets, stars, and galaxies. The historical success of models that show relationships between fundamental interactions has led to efforts to go beyond the Standard Model and combine all four forces into a theory of everything. == History == === Classical theory === In his 1687 theory, Isaac Newton postulated space as an infinite and unalterable physical structure existing before, within, and around all objects while their states and relations unfold at a constant pace everywhere, thus absolute space and time. Reasoning that all objects bearing mass approach at a constant rate, but collide by impact proportional to their masses, Newton inferred that matter exhibits an attractive force. His law of universal gravitation implied there to be instant interaction among all objects. As conventionally interpreted, Newton's theory of motion modelled a central force without a communicating medium. Thus Newton's theory violated the tradition, going back to Descartes, that there should be no action at a distance. Conversely, during the 1820s, when explaining magnetism, Michael Faraday inferred a field filling space and transmitting that force. Faraday conjectured that ultimately, all forces unified into one. In 1873, James Clerk Maxwell unified electricity and magnetism as effects of an electromagnetic field whose third consequence was light, travelling at constant speed in vacuum.
If his electromagnetic field theory held true in all inertial frames of reference, this would contradict Newton's theory of motion, which relied on Galilean relativity. If, instead, his field theory only applied to reference frames at rest relative to a mechanical luminiferous aether—presumed to fill all space whether within matter or in vacuum and to manifest the electromagnetic field—then it could be reconciled with Galilean relativity and Newton's laws. (However, such a "Maxwell aether" was later disproven; Newton's laws did, in fact, have to be replaced.) === Standard Model === The Standard Model of particle physics was developed throughout the latter half of the 20th century. In the Standard Model, the electromagnetic, strong, and weak interactions associate with elementary particles, whose behaviours are modelled in quantum mechanics (QM). For predictive success with QM's probabilistic outcomes, particle physics conventionally models QM events across a field set to special relativity, altogether relativistic quantum field theory (QFT). Force particles, called gauge bosons—force carriers or messenger particles of underlying fields—interact with matter particles, called fermions. Everyday matter is atoms, composed of three fermion types: up-quarks and down-quarks constituting, as well as electrons orbiting, the atom's nucleus. Atoms interact, form molecules, and manifest further properties through electromagnetic interactions among their electrons absorbing and emitting photons, the electromagnetic field's force carrier, which if unimpeded traverse potentially infinite distance. Electromagnetism's QFT is quantum electrodynamics (QED). The force carriers of the weak interaction are the massive W and Z bosons. Electroweak theory (EWT) covers both electromagnetism and the weak interaction. At the high temperatures shortly after the Big Bang, the weak interaction, the electromagnetic interaction, and the Higgs boson were originally mixed components of a different set of ancient pre-symmetry-breaking fields. As the early universe cooled, these fields split into the long-range electromagnetic interaction, the short-range weak interaction, and the Higgs boson. In the Higgs mechanism, the Higgs field manifests Higgs bosons that interact with some quantum particles in a way that endows those particles with mass. The strong interaction, whose force carrier is the gluon, traversing minuscule distance among quarks, is modeled in quantum chromodynamics (QCD). EWT, QCD, and the Higgs mechanism comprise particle physics' Standard Model (SM). Predictions are usually made using calculational approximation methods, although such perturbation theory is inadequate to model some experimental observations (for instance bound states and solitons). Still, physicists widely accept the Standard Model as science's most experimentally confirmed theory. Beyond the Standard Model, some theorists work to unite the electroweak and strong interactions within a Grand Unified Theory (GUT). Some attempts at GUTs hypothesize "shadow" particles, such that every known matter particle associates with an undiscovered force particle, and vice versa, altogether supersymmetry (SUSY). Other theorists seek to quantize the gravitational field by the modelling behaviour of its hypothetical force carrier, the graviton and achieve quantum gravity (QG). One approach to QG is loop quantum gravity (LQG). 
Still other theorists seek both QG and GUT within one framework, reducing all four fundamental interactions to a Theory of Everything (ToE). The most prevalent aim at a ToE is string theory, although to model matter particles, it added SUSY to force particles—and so, strictly speaking, became superstring theory. Multiple, seemingly disparate superstring theories were unified on a backbone, M-theory. Theories beyond the Standard Model remain highly speculative, lacking great experimental support. == Overview of the fundamental interactions == In the conceptual model of fundamental interactions, matter consists of fermions, which carry properties called charges and spin ±1⁄2 (intrinsic angular momentum ±ħ⁄2, where ħ is the reduced Planck constant). They attract or repel each other by exchanging bosons. The interaction of any pair of fermions in perturbation theory can then be modelled thus: Two fermions go in → interaction by boson exchange → two changed fermions go out. The exchange of bosons always carries energy and momentum between the fermions, thereby changing their speed and direction. The exchange may also transport a charge between the fermions, changing the charges of the fermions in the process (e.g., turn them from one type of fermion to another). Since bosons carry one unit of angular momentum, the fermion's spin direction will flip from +1⁄2 to −1⁄2 (or vice versa) during such an exchange (in units of the reduced Planck constant). Since such interactions result in a change in momentum, they can give rise to classical Newtonian forces. In quantum mechanics, physicists often use the terms "force" and "interaction" interchangeably; for example, the weak interaction is sometimes referred to as the "weak force". According to the present understanding, there are four fundamental interactions or forces: gravitation, electromagnetism, the weak interaction, and the strong interaction. Their magnitude and behaviour vary greatly, as described in the table below. Modern physics attempts to explain every observed physical phenomenon by these fundamental interactions. Moreover, reducing the number of different interaction types is seen as desirable. Two cases in point are the unification of: Electric and magnetic force into electromagnetism; The electromagnetic interaction and the weak interaction into the electroweak interaction; see below. Both magnitude ("relative strength") and "range" of the associated potential, as given in the table, are meaningful only within a rather complex theoretical framework. The table below lists properties of a conceptual scheme that remains the subject of ongoing research. The modern (perturbative) quantum mechanical view of the fundamental forces other than gravity is that particles of matter (fermions) do not directly interact with each other, but rather carry a charge, and exchange virtual particles (gauge bosons), which are the interaction carriers or force mediators. For example, photons mediate the interaction of electric charges, and gluons mediate the interaction of color charges. The full theory includes perturbations beyond simply fermions exchanging bosons; these additional perturbations can involve bosons that exchange fermions, as well as the creation or destruction of particles: see Feynman diagrams for examples. == Interactions == === Gravity === Gravitation is the weakest of the four interactions at the atomic scale, where electromagnetic interactions dominate. 
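A short back-of-the-envelope calculation makes the statement above quantitative. The sketch compares the Coulomb repulsion and the gravitational attraction between two protons; the separation cancels because both forces fall off as 1/r², so only standard constants enter.

```python
# Order-of-magnitude comparison of the electrostatic and gravitational forces
# between two protons; the distance drops out because both go as 1/r^2.
k_e = 8.9875e9        # Coulomb constant, N*m^2/C^2
G = 6.674e-11         # gravitational constant, N*m^2/kg^2
e = 1.602e-19         # elementary charge, C
m_p = 1.6726e-27      # proton mass, kg

ratio = (k_e * e**2) / (G * m_p**2)
print(f"F_electric / F_gravity for two protons ~ {ratio:.2e}")   # about 1.2e36
```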
Gravitation is the most important of the four fundamental forces for astronomical objects over astronomical distances for two reasons. First, gravitation has an infinite effective range, like electromagnetism but unlike the strong and weak interactions. Second, gravity always attracts and never repels; in contrast, astronomical bodies tend toward a near-neutral net electric charge, such that the attraction to one type of charge and the repulsion from the opposite charge mostly cancel each other out. Even though electromagnetism is far stronger than gravitation, electrostatic attraction is not relevant for large celestial bodies, such as planets, stars, and galaxies, simply because such bodies contain equal numbers of protons and electrons and so have a net electric charge of zero. Nothing "cancels" gravity, since it is only attractive, unlike electric forces which can be attractive or repulsive. On the other hand, all objects having mass are subject to the gravitational force, which only attracts. Therefore, only gravitation matters on the large-scale structure of the universe. The long range of gravitation makes it responsible for such large-scale phenomena as the structure of galaxies and black holes and, being only attractive, it slows down the expansion of the universe. Gravitation also explains astronomical phenomena on more modest scales, such as planetary orbits, as well as everyday experience: objects fall; heavy objects act as if they were glued to the ground, and animals can only jump so high. Gravitation was the first interaction to be described mathematically. In ancient times, Aristotle hypothesized that objects of different masses fall at different rates. During the Scientific Revolution, Galileo Galilei experimentally determined that this hypothesis was wrong under certain circumstances—neglecting the friction due to air resistance and buoyancy forces if an atmosphere is present (e.g. the case of a dropped air-filled balloon vs a water-filled balloon), all objects accelerate toward the Earth at the same rate. Isaac Newton's law of Universal Gravitation (1687) was a good approximation of the behaviour of gravitation. Present-day understanding of gravitation stems from Einstein's General Theory of Relativity of 1915, a more accurate (especially for cosmological masses and distances) description of gravitation in terms of the geometry of spacetime. Merging general relativity and quantum mechanics (or quantum field theory) into a more general theory of quantum gravity is an area of active research. It is hypothesized that gravitation is mediated by a massless spin-2 particle called the graviton. Although general relativity has been experimentally confirmed (at least for weak fields, i.e. not black holes) on all but the smallest scales, there are alternatives to general relativity. These theories must reduce to general relativity in some limit, and the focus of observational work is to establish limits on what deviations from general relativity are possible. Proposed extra dimensions could explain why the gravity force is so weak. === Electroweak interaction === Electromagnetism and weak interaction appear to be very different at everyday low energies. They can be modeled using two different theories. However, above unification energy, on the order of 100 GeV, they would merge into a single electroweak force. The electroweak theory is very important for modern cosmology, particularly on how the universe evolved. 
This is because shortly after the Big Bang, when the temperature was still above approximately 10^15 K, the electromagnetic force and the weak force were still merged as a combined electroweak force. For contributions to the unification of the weak and electromagnetic interaction between elementary particles, Abdus Salam, Sheldon Glashow and Steven Weinberg were awarded the Nobel Prize in Physics in 1979. ==== Electromagnetism ==== Electromagnetism is the force that acts between electrically charged particles. This phenomenon includes the electrostatic force acting between charged particles at rest, and the combined effect of electric and magnetic forces acting between charged particles moving relative to each other. Electromagnetism has an infinite range, as gravity does, but is vastly stronger. It is the force that binds electrons to atoms, and it holds molecules together. It is responsible for everyday phenomena like light, magnets, electricity, and friction. Electromagnetism fundamentally determines all macroscopic, and many atomic-level, properties of the chemical elements. In a four kilogram (~1 gallon) jug of water, there is roughly 2×10^8 C of total electron charge. Thus, if we place two such jugs a meter apart, the electrons in one of the jugs repel those in the other jug with a force of roughly 4×10^26 N. This force is many times larger than the weight of the planet Earth. The atomic nuclei in one jug also repel those in the other with the same force. However, these repulsive forces are canceled by the attraction of the electrons in jug A with the nuclei in jug B and the attraction of the nuclei in jug A with the electrons in jug B, resulting in no net force. Electromagnetic forces are tremendously stronger than gravity, but tend to cancel out so that for astronomical-scale bodies, gravity dominates. Electrical and magnetic phenomena have been observed since ancient times, but it was only in the 19th century that James Clerk Maxwell discovered that electricity and magnetism are two aspects of the same fundamental interaction. By 1864, Maxwell's equations had rigorously quantified this unified interaction. Maxwell's theory, restated using vector calculus, is the classical theory of electromagnetism, suitable for most technological purposes. The constant speed of light in vacuum (customarily denoted with a lowercase letter c) can be derived from Maxwell's equations, which are consistent with the theory of special relativity. Albert Einstein's 1905 theory of special relativity, however, which follows from the observation that the speed of light is constant no matter how fast the observer is moving, showed that the theoretical result implied by Maxwell's equations has profound implications far beyond electromagnetism on the very nature of time and space. In another work that departed from classical electromagnetism, Einstein also explained the photoelectric effect by utilizing Max Planck's discovery that light was transmitted in 'quanta' of specific energy content based on the frequency, which we now call photons. Starting around 1927, Paul Dirac combined quantum mechanics with the relativistic theory of electromagnetism. Further work in the 1940s, by Richard Feynman, Freeman Dyson, Julian Schwinger, and Sin-Itiro Tomonaga, completed this theory, which is now called quantum electrodynamics, the revised theory of electromagnetism.
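Returning to the two-jug illustration above, the quoted numbers follow from nothing more than Avogadro's number and Coulomb's law, as the following sketch shows; treating each jug's electrons as a point charge is, of course, the same idealization made in the text.

```python
# Electron charge in 4 kg of water and the Coulomb force between two such
# "jugs" placed 1 m apart (treating each jug's electrons as a point charge;
# in reality the nuclei cancel this almost exactly, as discussed above).
N_A = 6.022e23                # Avogadro constant, 1/mol
e = 1.602e-19                 # elementary charge, C
k_e = 8.9875e9                # Coulomb constant, N*m^2/C^2

mass = 4000.0                 # grams of water
molar_mass = 18.0             # g/mol for H2O
electrons_per_molecule = 10   # 2 from hydrogen + 8 from oxygen

n_electrons = mass / molar_mass * N_A * electrons_per_molecule
q = n_electrons * e                    # total electron charge, ~2e8 C
F = k_e * q**2 / 1.0**2                # force at 1 m separation, ~4e26 N

print(f"total electron charge: {q:.2e} C")
print(f"repulsive force at 1 m: {F:.2e} N")
```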
Quantum electrodynamics and quantum mechanics provide a theoretical basis for electromagnetic behavior such as quantum tunneling, in which a certain percentage of electrically charged particles move in ways that would be impossible under the classical electromagnetic theory, that is necessary for everyday electronic devices such as transistors to function. ==== Weak interaction ==== The weak interaction or weak nuclear force is responsible for some nuclear phenomena such as beta decay. Electromagnetism and the weak force are now understood to be two aspects of a unified electroweak interaction — this discovery was the first step toward the unified theory known as the Standard Model. In the theory of the electroweak interaction, the carriers of the weak force are the massive gauge bosons called the W and Z bosons. The weak interaction is the only known interaction that does not conserve parity; it is left–right asymmetric. The weak interaction even violates CP symmetry but does conserve CPT. === Strong interaction === The strong interaction, or strong nuclear force, is the most complicated interaction, mainly because of the way it varies with distance. The nuclear force is powerfully attractive between nucleons at distances of about 1 femtometre (fm, or 10−15 metres), but it rapidly decreases to insignificance at distances beyond about 2.5 fm. At distances less than 0.7 fm, the nuclear force becomes repulsive. This repulsive component is responsible for the physical size of nuclei, since the nucleons can come no closer than the force allows. After the nucleus was discovered in 1908, it was clear that a new force, today known as the nuclear force, was needed to overcome the electrostatic repulsion, a manifestation of electromagnetism, of the positively charged protons. Otherwise, the nucleus could not exist. Moreover, the force had to be strong enough to squeeze the protons into a volume whose diameter is about 10−15 m, much smaller than that of the entire atom. From the short range of this force, Hideki Yukawa predicted that it was associated with a massive force particle, whose mass is approximately 100 MeV. The 1947 discovery of the pion ushered in the modern era of particle physics. Hundreds of hadrons were discovered from the 1940s to 1960s, and an extremely complicated theory of hadrons as strongly interacting particles was developed. Most notably: The pions were understood to be oscillations of vacuum condensates; Jun John Sakurai proposed the rho and omega vector bosons to be force carrying particles for approximate symmetries of isospin and hypercharge; Geoffrey Chew, Edward K. Burdett and Steven Frautschi grouped the heavier hadrons into families that could be understood as vibrational and rotational excitations of strings. While each of these approaches offered insights, no approach led directly to a fundamental theory. Murray Gell-Mann along with George Zweig first proposed fractionally charged quarks in 1961. Throughout the 1960s, different authors considered theories similar to the modern fundamental theory of quantum chromodynamics (QCD) as simple models for the interactions of quarks. The first to hypothesize the gluons of QCD were Moo-Young Han and Yoichiro Nambu, who introduced the quark color charge. Han and Nambu hypothesized that it might be associated with a force-carrying field. At that time, however, it was difficult to see how such a model could permanently confine quarks. 
Han and Nambu also assigned each quark color an integer electrical charge, so that the quarks were fractionally charged only on average, and they did not expect the quarks in their model to be permanently confined. In 1971, Murray Gell-Mann and Harald Fritzsch proposed that the Han/Nambu color gauge field was the correct theory of the short-distance interactions of fractionally charged quarks. A little later, David Gross, Frank Wilczek, and David Politzer discovered that this theory had the property of asymptotic freedom, allowing them to make contact with experimental evidence. They concluded that QCD was the complete theory of the strong interactions, correct at all distance scales. The discovery of asymptotic freedom led most physicists to accept QCD since it became clear that even the long-distance properties of the strong interactions could be consistent with experiment if the quarks are permanently confined: the strong force increases indefinitely with distance, trapping quarks inside the hadrons. Assuming that quarks are confined, Mikhail Shifman, Arkady Vainshtein and Valentine Zakharov were able to compute the properties of many low-lying hadrons directly from QCD, with only a few extra parameters to describe the vacuum. In 1980, Kenneth G. Wilson published computer calculations based on the first principles of QCD, establishing, to a level of confidence tantamount to certainty, that QCD will confine quarks. Since then, QCD has been the established theory of strong interactions. QCD is a theory of fractionally charged quarks interacting by means of 8 bosonic particles called gluons. The gluons also interact with each other, not just with the quarks, and at long distances the lines of force collimate into strings, loosely modeled by a linear potential, a constant attractive force. In this way, the mathematical theory of QCD not only explains how quarks interact over short distances but also the string-like behavior, discovered by Chew and Frautschi, which they manifest over longer distances. === Higgs interaction === Conventionally, the Higgs interaction is not counted among the four fundamental forces. Nonetheless, although not a gauge interaction nor generated by any diffeomorphism symmetry, the Higgs field's cubic Yukawa coupling produces a weakly attractive fifth interaction. After spontaneous symmetry breaking via the Higgs mechanism, Yukawa terms remain of the form λ i 2 ψ ¯ ϕ ′ ψ = m i ν ψ ¯ ϕ ′ ψ {\displaystyle {\frac {\lambda _{i}}{\sqrt {2}}}{\bar {\psi }}\phi '\psi ={\frac {m_{i}}{\nu }}{\bar {\psi }}\phi '\psi } , with Yukawa coupling λ i {\displaystyle \lambda _{i}} , particle mass m i {\displaystyle m_{i}} (in eV), and Higgs vacuum expectation value 246.22 GeV. Hence coupled particles can exchange a virtual Higgs boson, yielding classical potentials of the form V ( r ) = − m i m j m H 2 1 4 π r e − m H c r / ℏ {\displaystyle V(r)=-{\frac {m_{i}m_{j}}{m_{\rm {H}}^{2}}}{\frac {1}{4\pi r}}e^{-m_{\rm {H}}\,c\,r/\hbar }} , with Higgs mass 125.18 GeV. Because the reduced Compton wavelength of the Higgs boson is so small (1.576×10−18 m, comparable to the W and Z bosons), this potential has an effective range of a few attometers. Between two electrons, it begins roughly 1011 times weaker than the weak interaction, and grows exponentially weaker at non-zero distances. === Beyond the Standard Model === The fundamental forces may become unified into a single force at very high energies and on a minuscule scale, the Planck scale. 
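Before moving beyond the Standard Model, the few-attometre range quoted for the Higgs-mediated potential can be checked directly from the Higgs mass, since the exponential factor in the potential above cuts the interaction off beyond roughly one reduced Compton wavelength. The 0.84 fm separation used at the end is simply an illustrative proton-scale distance.

```python
import math

# Range of the Higgs-mediated Yukawa-type potential: the factor
# exp(-m_H c r / hbar) suppresses it beyond about hbar / (m_H c).
hbar_c_MeV_fm = 197.327          # hbar*c in MeV*fm
m_H_MeV = 125.18e3               # Higgs mass in MeV (125.18 GeV)

lambda_fm = hbar_c_MeV_fm / m_H_MeV          # reduced Compton wavelength, fm
lambda_m = lambda_fm * 1e-15
print(f"reduced Compton wavelength: {lambda_m:.3e} m")     # about 1.58e-18 m

# Suppression of the potential at a proton-radius separation (~0.84 fm):
r_fm = 0.84
print(f"exp(-r/lambda) at r = 0.84 fm: {math.exp(-r_fm / lambda_fm):.1e}")
```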
Particle accelerators cannot produce the enormous energies required to experimentally probe this regime. The weak and electromagnetic forces have already been unified with the electroweak theory of Sheldon Glashow, Abdus Salam, and Steven Weinberg, for which they received the 1979 Nobel Prize in physics. Numerous theoretical efforts have been made to systematize the existing four fundamental interactions on the model of electroweak unification. Grand Unified Theories (GUTs) are proposals to show that each of the three fundamental interactions described by the Standard Model is a different manifestation of a single interaction with symmetries that break down and create separate interactions below some extremely high level of energy. GUTs are also expected to predict some of the relationships between constants of nature that the Standard Model treats as unrelated and gauge coupling unification for the relative strengths of the electromagnetic, weak, and strong forces. A so-called theory of everything, which would integrate GUTs with a quantum gravity theory, faces a greater barrier because no quantum gravity theory (e.g., string theory, loop quantum gravity, and twistor theory) has secured wide acceptance. Some theories look for a graviton to complete the Standard Model list of force-carrying particles, while others, like loop quantum gravity, emphasize the possibility that time-space itself may have a quantum aspect to it. Some theories beyond the Standard Model include a hypothetical fifth force, and the search for such a force is an ongoing line of experimental physics research. In supersymmetric theories, some particles, known as moduli, acquire their masses only through supersymmetry breaking effects and can mediate new forces. Another reason to look for new forces is the discovery that the expansion of the universe is accelerating (also known as dark energy), creating a need to explain a nonzero cosmological constant and possibly other modifications of general relativity. Fifth forces have also been suggested to explain phenomena such as CP violations, dark matter, and dark flow. == See also == Quintessence, a hypothesized fifth force Gerardus 't Hooft Edward Witten Howard Georgi == References == == Bibliography == Davies, Paul (1986), The Forces of Nature, Cambridge Univ. Press 2nd ed. Feynman, Richard (1967), The Character of Physical Law, MIT Press, ISBN 978-0-262-56003-0 Schumm, Bruce A. (2004), Deep Down Things, Johns Hopkins University Press While all interactions are discussed, discussion is especially thorough on the weak. Weinberg, Steven (1993), The First Three Minutes: A Modern View of the Origin of the Universe, Basic Books, ISBN 978-0-465-02437-7 Weinberg, Steven (1994), Dreams of a Final Theory, Basic Books, ISBN 978-0-679-74408-5 Padmanabhan, T. (1998), After The First Three Minutes: The Story of Our Universe, Cambridge Univ. Press, ISBN 978-0-521-62972-0 Perkins, Donald H. (2000), Introduction to High Energy Physics (4th ed.), Cambridge Univ. Press, ISBN 978-0-521-62196-0 Riazuddin (December 29, 2009). "Non-standard interactions" (PDF). NCP 5th Particle Physics Sypnoisis. 1 (1): 1–25. Archived from the original (PDF) on March 3, 2016. Retrieved March 19, 2011.
Wikipedia/Fundamental_force
In mathematics, an expression or equation is in closed form if it is formed with constants, variables, and a set of functions considered as basic and connected by arithmetic operations (+, −, ×, /, and integer powers) and function composition. Commonly, the basic functions that are allowed in closed forms are nth root, exponential function, logarithm, and trigonometric functions. However, the set of basic functions depends on the context. For example, if one adds polynomial roots to the basic functions, the functions that have a closed form are called elementary functions. The closed-form problem arises when new ways are introduced for specifying mathematical objects, such as limits, series, and integrals: given an object specified with such tools, a natural problem is to find, if possible, a closed-form expression of this object; that is, an expression of this object in terms of previous ways of specifying it. == Example: roots of polynomials == The quadratic formula x = − b ± b 2 − 4 a c 2 a {\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}}} is a closed form of the solutions to the general quadratic equation a x 2 + b x + c = 0. {\displaystyle ax^{2}+bx+c=0.} More generally, in the context of polynomial equations, a closed form of a solution is a solution in radicals; that is, a closed-form expression for which the allowed functions are only nth-roots and field operations ( + , − , × , / ) . {\displaystyle (+,-,\times ,/).} In fact, field theory allows showing that if a solution of a polynomial equation has a closed form involving exponentials, logarithms or trigonometric functions, then it has also a closed form that does not involve these functions. There are expressions in radicals for all solutions of cubic equations (degree 3) and quartic equations (degree 4). The size of these expressions increases significantly with the degree, limiting their usefulness. In higher degrees, the Abel–Ruffini theorem states that there are equations whose solutions cannot be expressed in radicals, and, thus, have no closed forms. A simple example is the equation x 5 − x − 1 = 0. {\displaystyle x^{5}-x-1=0.} Galois theory provides an algorithmic method for deciding whether a particular polynomial equation can be solved in radicals. == Symbolic integration == Symbolic integration consists essentially of the search of closed forms for antiderivatives of functions that are specified by closed-form expressions. In this context, the basic functions used for defining closed forms are commonly logarithms, exponential function and polynomial roots. Functions that have a closed form for these basic functions are called elementary functions and include trigonometric functions, inverse trigonometric functions, hyperbolic functions, and inverse hyperbolic functions. The fundamental problem of symbolic integration is thus, given an elementary function specified by a closed-form expression, to decide whether its antiderivative is an elementary function, and, if it is, to find a closed-form expression for this antiderivative. For rational functions; that is, for fractions of two polynomial functions; antiderivatives are not always rational fractions, but are always elementary functions that may involve logarithms and polynomial roots. This is usually proved with partial fraction decomposition. 
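The contrast between low-degree equations and the quintic above can be reproduced directly in a computer algebra system. The following sketch uses SymPy (one of the tools mentioned later in this article); it assumes a standard SymPy installation, and the exact printed form of the results may vary between versions.

```python
import sympy as sp

x, a, b, c = sp.symbols('x a b c')

# Degree 2: solve() returns the familiar closed form in radicals,
# i.e. the two roots of the quadratic formula.
print(sp.solve(sp.Eq(a*x**2 + b*x + c, 0), x))

# Degree 5: x**5 - x - 1 has no solution in radicals (Abel–Ruffini), so
# SymPy can only return implicit CRootOf objects and numerical values.
roots = sp.solve(x**5 - x - 1, x)
print(roots[0])                        # CRootOf(x**5 - x - 1, 0)
print(sp.nsolve(x**5 - x - 1, x, 1))   # real root, approximately 1.1673
```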
The need for logarithms and polynomial roots is illustrated by the formula ∫ f ( x ) g ( x ) d x = ∑ α ∈ Roots ⁡ ( g ( x ) ) f ( α ) g ′ ( α ) ln ⁡ ( x − α ) , {\displaystyle \int {\frac {f(x)}{g(x)}}\,dx=\sum _{\alpha \in \operatorname {Roots} (g(x))}{\frac {f(\alpha )}{g'(\alpha )}}\ln(x-\alpha ),} which is valid if f {\displaystyle f} and g {\displaystyle g} are coprime polynomials such that g {\displaystyle g} is square free and deg ⁡ f < deg ⁡ g . {\displaystyle \deg f<\deg g.} == Alternative definitions == Changing the basic functions to include additional functions can change the set of equations with closed-form solutions. Many cumulative distribution functions cannot be expressed in closed form, unless one considers special functions such as the error function or gamma function to be basic. It is possible to solve the quintic equation if general hypergeometric functions are included, although the solution is far too complicated algebraically to be useful. For many practical computer applications, it is entirely reasonable to assume that the gamma function and other special functions are basic since numerical implementations are widely available. == Analytic expression == This is a term that is sometimes understood as a synonym for closed-form (see "Wolfram Mathworld".) but this usage is contested (see "Math Stackexchange".). It is unclear the extent to which this term is genuinely in use as opposed to the result of uncited earlier versions of this page. == Comparison of different classes of expressions == The closed-form expressions do not include infinite series or continued fractions; neither includes integrals or limits. Indeed, by the Stone–Weierstrass theorem, any continuous function on the unit interval can be expressed as a limit of polynomials, so any class of functions containing the polynomials and closed under limits will necessarily include all continuous functions. Similarly, an equation or system of equations is said to have a closed-form solution if and only if at least one solution can be expressed as a closed-form expression; and it is said to have an analytic solution if and only if at least one solution can be expressed as an analytic expression. There is a subtle distinction between a "closed-form function" and a "closed-form number" in the discussion of a "closed-form solution", discussed in (Chow 1999) and below. A closed-form or analytic solution is sometimes referred to as an explicit solution. == Dealing with non-closed-form expressions == === Transformation into closed-form expressions === The expression: f ( x ) = ∑ n = 0 ∞ x 2 n {\displaystyle f(x)=\sum _{n=0}^{\infty }{\frac {x}{2^{n}}}} is not in closed form because the summation entails an infinite number of elementary operations. However, by summing a geometric series this expression can be expressed in the closed form: f ( x ) = 2 x . {\displaystyle f(x)=2x.} === Differential Galois theory === The integral of a closed-form expression may or may not itself be expressible as a closed-form expression. This study is referred to as differential Galois theory, by analogy with algebraic Galois theory. The basic theorem of differential Galois theory is due to Joseph Liouville in the 1830s and 1840s and hence referred to as Liouville's theorem. A standard example of an elementary function whose antiderivative does not have a closed-form expression is: e − x 2 , {\displaystyle e^{-x^{2}},} whose one antiderivative is (up to a multiplicative constant) the error function: erf ⁡ ( x ) = 2 π ∫ 0 x e − t 2 d t . 
=== Mathematical modelling and computer simulation === Equations or systems too complex for closed-form or analytic solutions can often be analysed by mathematical modelling and computer simulation. == Closed-form number == Three subfields of the complex numbers C have been suggested as encoding the notion of a "closed-form number"; in increasing order of generality, these are the Liouvillian numbers (not to be confused with Liouville numbers in the sense of rational approximation), EL numbers and elementary numbers. The Liouvillian numbers, denoted L, form the smallest algebraically closed subfield of C closed under exponentiation and logarithm (formally, intersection of all such subfields)—that is, numbers which involve explicit exponentiation and logarithms, but allow explicit and implicit polynomials (roots of polynomials); this is defined in (Ritt 1948, p. 60). L was originally referred to as elementary numbers, but this term is now used more broadly to refer to numbers defined explicitly or implicitly in terms of algebraic operations, exponentials, and logarithms. A narrower definition proposed in (Chow 1999, pp. 441–442), denoted E, and referred to as EL numbers, is the smallest subfield of C closed under exponentiation and logarithm—this need not be algebraically closed, and corresponds to explicit algebraic, exponential, and logarithmic operations. "EL" stands both for "exponential–logarithmic" and as an abbreviation for "elementary". Whether a number is a closed-form number is related to whether it is transcendental. Formally, Liouvillian numbers and elementary numbers contain the algebraic numbers, and they include some but not all transcendental numbers. In contrast, EL numbers do not contain all algebraic numbers, but do include some transcendental numbers. Closed-form numbers can be studied via transcendental number theory, in which a major result is the Gelfond–Schneider theorem, and a major open question is Schanuel's conjecture. == Numerical computations == For purposes of numerical computation, being in closed form is not in general necessary, as many limits and integrals can be efficiently computed. Some equations have no closed-form solution, such as those that represent the three-body problem or the Hodgkin–Huxley model. Therefore, the future states of these systems must be computed numerically. == Conversion from numerical forms == There is software that attempts to find closed-form expressions for numerical values, including RIES, identify in Maple and SymPy, Plouffe's Inverter, and the Inverse Symbolic Calculator.
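As a rough illustration of this kind of tool, the following sketch (assuming SymPy; the decimal inputs are chosen only for the example) uses nsimplify to search for a simple closed form matching a floating-point value.

```python
import sympy as sp

# Recover pi/4 from its decimal expansion, telling the search about pi.
print(sp.nsimplify(0.7853981633974483, [sp.pi]))       # pi/4

# Recover the golden ratio (1 + sqrt(5))/2 when sqrt(5) is offered as a constant.
print(sp.nsimplify(1.618033988749895, [sp.sqrt(5)]))   # 1/2 + sqrt(5)/2
```

Such searches are heuristic: a match is a candidate closed form for the given digits, not a proof that the underlying number actually has one.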
== See also == Algebraic solution – Solution in radicals of a polynomial equation Computer simulation – Process of mathematical modelling, performed on a computer Elementary function – A kind of mathematical function Finitary operation – Addition, multiplication, division, ... Numerical solution – Methods for numerical approximations Liouvillian function – Elementary functions and their finitely iterated integrals Symbolic regression – Type of regression analysis Tarski's high school algebra problem – Mathematical problem Term (logic) – Components of a mathematical or logical formula Tupper's self-referential formula – Formula that visually represents itself when graphed == Notes == == References == == Further reading == Ritt, J. F. (1948), Integration in finite terms Chow, Timothy Y. (May 1999), "What is a Closed-Form Number?", American Mathematical Monthly, 106 (5): 440–448, arXiv:math/9805045, doi:10.2307/2589148, JSTOR 2589148 Jonathan M. Borwein and Richard E. Crandall (January 2013), "Closed Forms: What They Are and Why We Care", Notices of the American Mathematical Society, 60 (1): 50–65, doi:10.1090/noti936 == External links == Weisstein, Eric W. "Closed-Form Solution". MathWorld. Closed-form continuous-time neural networks
Wikipedia/Analytic_solution
In the philosophy of science, the special sciences are all sciences other than fundamental physics, including, for example, chemistry, biology, and neuroscience. The distinction reflects a view that "all events which fall under the laws of any science are physical events and hence fall under the laws of physics". In this view, all sciences except fundamental physics are special sciences. However, the legitimacy of this view, and the status of other sciences and their relation to physics, are unresolved matters. Jerry Fodor, a key writer on this subject, notes that "many philosophers" hold this position, but argues against it, defending the strong autonomy of the special sciences and concluding that they are not even in principle reducible to physics. As such, Fodor has often been credited with helping to turn the tide against reductionist physicalism. == See also == Emergence – Unpredictable phenomenon in complex systems Emergentism – Philosophical belief in emergence Multiple realizability – Thesis in the philosophy of mind Reductionism – Philosophical view explaining systems in terms of smaller parts Supervenience – Relation between sets of properties or facts The central science – Term often associated with chemistry Unity of science – Theory in the philosophy of science == References ==
Wikipedia/Special_sciences
The Standard Model of particle physics is the theory describing three of the four known fundamental forces (electromagnetic, weak and strong interactions – excluding gravity) in the universe and classifying all known elementary particles. It was developed in stages throughout the latter half of the 20th century, through the work of many scientists worldwide, with the current formulation being finalized in the mid-1970s upon experimental confirmation of the existence of quarks. Since then, proof of the top quark (1995), the tau neutrino (2000), and the Higgs boson (2012) have added further credence to the Standard Model. In addition, the Standard Model has predicted various properties of weak neutral currents and the W and Z bosons with great accuracy. Although the Standard Model is believed to be theoretically self-consistent and has demonstrated some success in providing experimental predictions, it leaves some physical phenomena unexplained and so falls short of being a complete theory of fundamental interactions. For example, it does not fully explain why there is more matter than anti-matter, incorporate the full theory of gravitation as described by general relativity, or account for the universe's accelerating expansion as possibly described by dark energy. The model does not contain any viable dark matter particle that possesses all of the required properties deduced from observational cosmology. It also does not incorporate neutrino oscillations and their non-zero masses. The development of the Standard Model was driven by theoretical and experimental particle physicists alike. The Standard Model is a paradigm of a quantum field theory for theorists, exhibiting a wide range of phenomena, including spontaneous symmetry breaking, anomalies, and non-perturbative behavior. It is used as a basis for building more exotic models that incorporate hypothetical particles, extra dimensions, and elaborate symmetries (such as supersymmetry) to explain experimental results at variance with the Standard Model, such as the existence of dark matter and neutrino oscillations. == Historical background == In 1928, Paul Dirac introduced the Dirac equation, which implied the existence of antimatter. In 1954, Yang Chen-Ning and Robert Mills extended the concept of gauge theory for abelian groups, e.g. quantum electrodynamics, to nonabelian groups to provide an explanation for strong interactions. In 1957, Chien-Shiung Wu demonstrated parity was not conserved in the weak interaction. In 1961, Sheldon Glashow combined the electromagnetic and weak interactions. In 1964, Murray Gell-Mann and George Zweig introduced quarks and that same year Oscar W. Greenberg implicitly introduced color charge of quarks. In 1967 Steven Weinberg and Abdus Salam incorporated the Higgs mechanism into Glashow's electroweak interaction, giving it its modern form. In 1970, Sheldon Glashow, John Iliopoulos, and Luciano Maiani introduced the GIM mechanism, predicting the charm quark. In 1973 Gross and Wilczek and Politzer independently discovered that non-Abelian gauge theories, like the color theory of the strong force, have asymptotic freedom. In 1976, Martin Perl discovered the tau lepton at the SLAC. In 1977, a team led by Leon Lederman at Fermilab discovered the bottom quark. The Higgs mechanism is believed to give rise to the masses of all the elementary particles in the Standard Model. This includes the masses of the W and Z bosons, and the masses of the fermions, i.e. the quarks and leptons. 
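As a rough numerical illustration of this point (a sketch, not part of the article's sources): using the standard tree-level relations m_W = gv/2 and m_Z = ½√(g² + g′²)·v together with the measured boson masses and the Higgs vacuum expectation value v ≈ 246 GeV, one can extract the electroweak couplings and the weak mixing angle. All numerical values below are approximate and chosen only for illustration.

```python
import numpy as np

v = 246.22                  # Higgs vacuum expectation value in GeV (approx.)
m_W, m_Z = 80.38, 91.19     # measured W and Z boson masses in GeV (approx.)

g = 2 * m_W / v                                   # from m_W = g v / 2
g_prime = np.sqrt((2 * m_Z / v) ** 2 - g ** 2)    # from m_Z = sqrt(g^2 + g'^2) v / 2
sin2_theta_W = g_prime ** 2 / (g ** 2 + g_prime ** 2)

print(f"g  ~ {g:.3f}")                          # ~0.65
print(f"g' ~ {g_prime:.3f}")                    # ~0.35
print(f"sin^2(theta_W) ~ {sin2_theta_W:.3f}")   # ~0.22, close to the measured value
```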
After the neutral weak currents caused by Z boson exchange were discovered at CERN in 1973, the electroweak theory became widely accepted and Glashow, Salam, and Weinberg shared the 1979 Nobel Prize in Physics for discovering it. The W± and Z0 bosons were discovered experimentally in 1983; and the ratio of their masses was found to be as the Standard Model predicted. The theory of the strong interaction (i.e. quantum chromodynamics, QCD), to which many contributed, acquired its modern form in 1973–74 when asymptotic freedom was proposed (a development that made QCD the main focus of theoretical research) and experiments confirmed that the hadrons were composed of fractionally charged quarks. The term "Standard Model" was introduced by Abraham Pais and Sam Treiman in 1975, with reference to the electroweak theory with four quarks. Steven Weinberg has since claimed priority, explaining that he chose the term Standard Model out of a sense of modesty and used it in 1973 during a talk in Aix-en-Provence in France. == Particle content == The Standard Model includes members of several classes of elementary particles, which in turn can be distinguished by other characteristics, such as color charge. All particles can be summarized as follows: Notes: [†] An anti-electron (e+) is conventionally called a "positron". === Fermions === The Standard Model includes 12 elementary particles of spin 1⁄2, known as fermions. Fermions respect the Pauli exclusion principle, meaning that two identical fermions cannot simultaneously occupy the same quantum state in the same atom. Each fermion has a corresponding antiparticle, which are particles that have corresponding properties with the exception of opposite charges. Fermions are classified based on how they interact, which is determined by the charges they carry, into two groups: quarks and leptons. Within each group, pairs of particles that exhibit similar physical behaviors are then grouped into generations (see the table). Each member of a generation has a greater mass than the corresponding particle of generations prior. Thus, there are three generations of quarks and leptons. As first-generation particles do not decay, they comprise all of ordinary (baryonic) matter. Specifically, all atoms consist of electrons orbiting around the atomic nucleus, ultimately constituted of up and down quarks. On the other hand, second- and third-generation charged particles decay with very short half-lives and can only be observed in high-energy environments. Neutrinos of all generations also do not decay, and pervade the universe, but rarely interact with baryonic matter. There are six quarks: up, down, charm, strange, top, and bottom. Quarks carry color charge, and hence interact via the strong interaction. The color confinement phenomenon results in quarks being strongly bound together such that they form color-neutral composite particles called hadrons; quarks cannot individually exist and must always bind with other quarks. Hadrons can contain either a quark-antiquark pair (mesons) or three quarks (baryons). The lightest baryons are the nucleons: the proton and neutron. Quarks also carry electric charge and weak isospin, and thus interact with other fermions through electromagnetism and weak interaction. The six leptons consist of the electron, electron neutrino, muon, muon neutrino, tau, and tau neutrino. The leptons do not carry color charge, and do not respond to strong interaction. 
The charged leptons carry an electric charge of −1 e, while the three neutrinos carry zero electric charge. Thus, the neutrinos' motions are influenced by only the weak interaction and gravity, making them difficult to observe. === Gauge bosons === The Standard Model includes 4 kinds of gauge bosons of spin 1, with bosons being quantum particles containing an integer spin. The gauge bosons are defined as force carriers, as they are responsible for mediating the fundamental interactions. The Standard Model explains the four fundamental forces as arising from the interactions, with fermions exchanging virtual force carrier particles, thus mediating the forces. At a macroscopic scale, this manifests as a force. As a result, they do not follow the Pauli exclusion principle that constrains fermions; bosons do not have a theoretical limit on their spatial density. The types of gauge bosons are described below. Electromagnetism: Photons mediate the electromagnetic force, responsible for interactions between electrically charged particles. The photon is massless and is described by the theory of quantum electrodynamics (QED). Strong Interactions: Gluons mediate the strong interactions, which binds quarks to each other by influencing the color charge, with the interactions being described in the theory of quantum chromodynamics (QCD). They have no mass, and there are eight distinct gluons, with each being denoted through a color-anticolor charge combination (e.g. red–antigreen). As gluons have an effective color charge, they can also interact amongst themselves. Weak Interactions: The W+, W−, and Z gauge bosons mediate the weak interactions between all fermions, being responsible for radioactivity. They contain mass, with the Z having more mass than the W±. The weak interactions involving the W± act only on left-handed particles and right-handed antiparticles respectively. The W± carries an electric charge of +1 and −1 and couples to the electromagnetic interaction. The electrically neutral Z boson interacts with both left-handed particles and right-handed antiparticles. These three gauge bosons along with the photons are grouped together, as collectively mediating the electroweak interaction. Gravity: It is currently unexplained in the Standard Model, as the hypothetical mediating particle graviton has been proposed, but not observed. This is due to the incompatibility of quantum mechanics and Einstein's theory of general relativity, regarded as being the best explanation for gravity. In general relativity, gravity is explained as being the geometric curving of spacetime. The Feynman diagram calculations, which are a graphical representation of the perturbation theory approximation, invoke "force mediating particles", and when applied to analyze high-energy scattering experiments are in reasonable agreement with the data. However, perturbation theory (and with it the concept of a "force-mediating particle") fails in other situations. These include low-energy quantum chromodynamics, bound states, and solitons. The interactions between all the particles described by the Standard Model are summarized by the diagrams on the right of this section. === Higgs boson === The Higgs particle is a massive scalar elementary particle theorized by Peter Higgs (and others) in 1964, when he showed that Goldstone's 1962 theorem (generic continuous symmetry, which is spontaneously broken) provides a third polarisation of a massive vector field. 
Hence, Goldstone's original scalar doublet, the massive spin-zero particle, was proposed as the Higgs boson, and is a key building block in the Standard Model. It has no intrinsic spin, and for that reason is classified as a boson with spin-0. The Higgs boson plays a unique role in the Standard Model, by explaining why the other elementary particles, except the photon and gluon, are massive. In particular, the Higgs boson explains why the photon has no mass, while the W and Z bosons are very heavy. Elementary-particle masses and the differences between electromagnetism (mediated by the photon) and the weak force (mediated by the W and Z bosons) are critical to many aspects of the structure of microscopic (and hence macroscopic) matter. In electroweak theory, the Higgs boson generates the masses of the leptons (electron, muon, and tau) and quarks. As the Higgs boson is massive, it must interact with itself. Because the Higgs boson is a very massive particle and also decays almost immediately when created, only a very high-energy particle accelerator can observe and record it. Experiments to confirm and determine the nature of the Higgs boson using the Large Hadron Collider (LHC) at CERN began in early 2010 and were performed at Fermilab's Tevatron until its closure in late 2011. Mathematical consistency of the Standard Model requires that any mechanism capable of generating the masses of elementary particles must become visible at energies above 1.4 TeV; therefore, the LHC (designed to collide two 7 TeV proton beams) was built to answer the question of whether the Higgs boson actually exists. On 4 July 2012, two of the experiments at the LHC (ATLAS and CMS) both reported independently that they had found a new particle with a mass of about 125 GeV/c2 (about 133 proton masses, on the order of 10−25 kg), which is "consistent with the Higgs boson". On 13 March 2013, it was confirmed to be the searched-for Higgs boson. == Theoretical aspects == === Construction of the Standard Model Lagrangian === Technically, quantum field theory provides the mathematical framework for the Standard Model, in which a Lagrangian controls the dynamics and kinematics of the theory. Each kind of particle is described in terms of a dynamical field that pervades space-time. The construction of the Standard Model proceeds following the modern method of constructing most field theories: by first postulating a set of symmetries of the system, and then by writing down the most general renormalizable Lagrangian from its particle (field) content that observes these symmetries. The global Poincaré symmetry is postulated for all relativistic quantum field theories. It consists of the familiar translational symmetry, rotational symmetry and the inertial reference frame invariance central to the theory of special relativity. The local SU(3) × SU(2) × U(1) gauge symmetry is an internal symmetry that essentially defines the Standard Model. Roughly, the three factors of the gauge symmetry give rise to the three fundamental interactions. The fields fall into different representations of the various symmetry groups of the Standard Model (see table). Upon writing the most general Lagrangian, one finds that the dynamics depends on 19 parameters, whose numerical values are established by experiment. The parameters are summarized in the table (made visible by clicking "show") above. 
==== Quantum chromodynamics sector ==== The quantum chromodynamics (QCD) sector defines the interactions between quarks and gluons, which is a Yang–Mills gauge theory with SU(3) symmetry, generated by T a = λ a / 2 {\displaystyle T^{a}=\lambda ^{a}/2} . Since leptons do not interact with gluons, they are not affected by this sector. The Dirac Lagrangian of the quarks coupled to the gluon fields is given by L QCD = ψ ¯ i γ μ D μ ψ − 1 4 G μ ν a G a μ ν , {\displaystyle {\mathcal {L}}_{\text{QCD}}={\overline {\psi }}i\gamma ^{\mu }D_{\mu }\psi -{\frac {1}{4}}G_{\mu \nu }^{a}G_{a}^{\mu \nu },} where ψ {\displaystyle \psi } is a three component column vector of Dirac spinors, each element of which refers to a quark field with a specific color charge (i.e. red, blue, and green) and summation over flavor (i.e. up, down, strange, etc.) is implied. The gauge covariant derivative of QCD is defined by D μ ≡ ∂ μ − i g s 1 2 λ a G μ a {\displaystyle D_{\mu }\equiv \partial _{\mu }-ig_{\text{s}}{\frac {1}{2}}\lambda ^{a}G_{\mu }^{a}} , where γμ are the Dirac matrices, Gaμ is the 8-component ( a = 1 , 2 , … , 8 {\displaystyle a=1,2,\dots ,8} ) SU(3) gauge field, λa are the 3 × 3 Gell-Mann matrices, generators of the SU(3) color group, Gaμν represents the gluon field strength tensor, and gs is the strong coupling constant. The QCD Lagrangian is invariant under local SU(3) gauge transformations; i.e., transformations of the form ψ → ψ ′ = U ψ {\displaystyle \psi \rightarrow \psi '=U\psi } , where U = e − i g s λ a ϕ a ( x ) {\displaystyle U=e^{-ig_{\text{s}}\lambda ^{a}\phi ^{a}(x)}} is 3 × 3 unitary matrix with determinant 1, making it a member of the group SU(3), and ϕ a ( x ) {\displaystyle \phi ^{a}(x)} is an arbitrary function of spacetime. ==== Electroweak sector ==== The electroweak sector is a Yang–Mills gauge theory with the symmetry group U(1) × SU(2)L, L EW = Q ¯ L j i γ μ D μ Q L j + u ¯ R j i γ μ D μ u R j + d ¯ R j i γ μ D μ d R j + ℓ ¯ L j i γ μ D μ ℓ L j + e ¯ R j i γ μ D μ e R j − 1 4 W a μ ν W μ ν a − 1 4 B μ ν B μ ν , {\displaystyle {\mathcal {L}}_{\text{EW}}={\overline {Q}}_{{\text{L}}j}i\gamma ^{\mu }D_{\mu }Q_{{\text{L}}j}+{\overline {u}}_{{\text{R}}j}i\gamma ^{\mu }D_{\mu }u_{{\text{R}}j}+{\overline {d}}_{{\text{R}}j}i\gamma ^{\mu }D_{\mu }d_{{\text{R}}j}+{\overline {\ell }}_{{\text{L}}j}i\gamma ^{\mu }D_{\mu }\ell _{{\text{L}}j}+{\overline {e}}_{{\text{R}}j}i\gamma ^{\mu }D_{\mu }e_{{\text{R}}j}-{\tfrac {1}{4}}W_{a}^{\mu \nu }W_{\mu \nu }^{a}-{\tfrac {1}{4}}B^{\mu \nu }B_{\mu \nu },} where the subscript j {\displaystyle j} sums over the three generations of fermions; Q L , u R {\displaystyle Q_{\text{L}},u_{\text{R}}} , and d R {\displaystyle d_{\text{R}}} are the left-handed doublet, right-handed singlet up type, and right handed singlet down type quark fields; and ℓ L {\displaystyle \ell _{\text{L}}} and e R {\displaystyle e_{\text{R}}} are the left-handed doublet and right-handed singlet lepton fields. 
The electroweak gauge covariant derivative is defined as D μ ≡ ∂ μ − i g ′ 1 2 Y W B μ − i g 1 2 τ → L W → μ {\displaystyle D_{\mu }\equiv \partial _{\mu }-ig'{\tfrac {1}{2}}Y_{\text{W}}B_{\mu }-ig{\tfrac {1}{2}}{\vec {\tau }}_{\text{L}}{\vec {W}}_{\mu }} , where Bμ is the U(1) gauge field, YW is the weak hypercharge – the generator of the U(1) group, W→μ is the 3-component SU(2) gauge field, →τL are the Pauli matrices – infinitesimal generators of the SU(2) group – with subscript L to indicate that they only act on left-chiral fermions, g' and g are the U(1) and SU(2) coupling constants respectively, W a μ ν {\displaystyle W^{a\mu \nu }} ( a = 1 , 2 , 3 {\displaystyle a=1,2,3} ) and B μ ν {\displaystyle B^{\mu \nu }} are the field strength tensors for the weak isospin and weak hypercharge fields. Notice that the addition of fermion mass terms into the electroweak Lagrangian is forbidden, since terms of the form m ψ ¯ ψ {\displaystyle m{\overline {\psi }}\psi } do not respect U(1) × SU(2)L gauge invariance. Neither is it possible to add explicit mass terms for the U(1) and SU(2) gauge fields. The Higgs mechanism is responsible for the generation of the gauge boson masses, and the fermion masses result from Yukawa-type interactions with the Higgs field. ==== Higgs sector ==== In the Standard Model, the Higgs field is an SU(2)L doublet of complex scalar fields with four degrees of freedom: φ = ( φ + φ 0 ) = 1 2 ( φ 1 + i φ 2 φ 3 + i φ 4 ) , {\displaystyle \varphi ={\begin{pmatrix}\varphi ^{+}\\\varphi ^{0}\end{pmatrix}}={\frac {1}{\sqrt {2}}}{\begin{pmatrix}\varphi _{1}+i\varphi _{2}\\\varphi _{3}+i\varphi _{4}\end{pmatrix}},} where the superscripts + and 0 indicate the electric charge Q {\displaystyle Q} of the components. The weak hypercharge Y W {\displaystyle Y_{\text{W}}} of both components is 1. Before symmetry breaking, the Higgs Lagrangian is L H = ( D μ φ ) † ( D μ φ ) − V ( φ ) , {\displaystyle {\mathcal {L}}_{\text{H}}=\left(D_{\mu }\varphi \right)^{\dagger }\left(D^{\mu }\varphi \right)-V(\varphi ),} where D μ {\displaystyle D_{\mu }} is the electroweak gauge covariant derivative defined above and V ( φ ) {\displaystyle V(\varphi )} is the potential of the Higgs field. The square of the covariant derivative leads to three and four point interactions between the electroweak gauge fields W μ a {\displaystyle W_{\mu }^{a}} and B μ {\displaystyle B_{\mu }} and the scalar field φ {\displaystyle \varphi } . The scalar potential is given by V ( φ ) = − μ 2 φ † φ + λ ( φ † φ ) 2 , {\displaystyle V(\varphi )=-\mu ^{2}\varphi ^{\dagger }\varphi +\lambda \left(\varphi ^{\dagger }\varphi \right)^{2},} where μ 2 > 0 {\displaystyle \mu ^{2}>0} , so that φ {\displaystyle \varphi } acquires a non-zero Vacuum expectation value, which generates masses for the Electroweak gauge fields (the Higgs mechanism), and λ > 0 {\displaystyle \lambda >0} , so that the potential is bounded from below. The quartic term describes self-interactions of the scalar field φ {\displaystyle \varphi } . The minimum of the potential is degenerate with an infinite number of equivalent ground state solutions, which occurs when φ † φ = μ 2 2 λ {\displaystyle \varphi ^{\dagger }\varphi ={\tfrac {\mu ^{2}}{2\lambda }}} . It is possible to perform a gauge transformation on φ {\displaystyle \varphi } such that the ground state is transformed to a basis where φ 1 = φ 2 = φ 4 = 0 {\displaystyle \varphi _{1}=\varphi _{2}=\varphi _{4}=0} and φ 3 = μ λ ≡ v {\displaystyle \varphi _{3}={\tfrac {\mu }{\sqrt {\lambda }}}\equiv v} . 
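These statements about the potential are easy to verify symbolically. The following sketch (assuming SymPy) works in unitary gauge, writing φ†φ = (v + h)²/2 for a real field h, and recovers the minimum v = μ/√λ quoted above together with the curvature 2μ² at that minimum, which is the tree-level Higgs mass squared.

```python
import sympy as sp

h, v, mu, lam = sp.symbols('h v mu lambda', positive=True)

phi_sq = (v + h) ** 2 / 2                 # phi^dagger phi in unitary gauge
V = -mu ** 2 * phi_sq + lam * phi_sq ** 2

# Stationarity at h = 0 fixes the vacuum expectation value: v = mu / sqrt(lambda)
print(sp.solve(sp.diff(V, h).subs(h, 0), v))          # [mu/sqrt(lambda)]

# The second derivative at the minimum gives m_H^2 = 2*mu^2 (= 2*lambda*v^2)
m_H_sq = sp.diff(V, h, 2).subs({h: 0, v: mu / sp.sqrt(lam)})
print(sp.simplify(m_H_sq))                            # 2*mu**2
```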
This breaks the symmetry of the ground state. The expectation value of φ {\displaystyle \varphi } now becomes ⟨ φ ⟩ = 1 2 ( 0 v ) , {\displaystyle \langle \varphi \rangle ={\frac {1}{\sqrt {2}}}{\begin{pmatrix}0\\v\end{pmatrix}},} where v {\displaystyle v} has units of mass and sets the scale of electroweak physics. This is the only dimensional parameter of the Standard Model and has a measured value of ~246 GeV/c2. After symmetry breaking, the masses of the W and Z are given by m W = 1 2 g v {\displaystyle m_{\text{W}}={\frac {1}{2}}gv} and m Z = 1 2 g 2 + g ′ 2 v {\displaystyle m_{\text{Z}}={\frac {1}{2}}{\sqrt {g^{2}+g'^{2}}}v} , which can be viewed as predictions of the theory. The photon remains massless. The mass of the Higgs boson is m H = 2 μ 2 = 2 λ v {\displaystyle m_{\text{H}}={\sqrt {2\mu ^{2}}}={\sqrt {2\lambda }}v} . Since μ {\displaystyle \mu } and λ {\displaystyle \lambda } are free parameters, the Higgs's mass could not be predicted beforehand and had to be determined experimentally. ==== Yukawa sector ==== The Yukawa interaction terms are: L Yukawa = ( Y u ) m n ( Q ¯ L ) m φ ~ ( u R ) n + ( Y d ) m n ( Q ¯ L ) m φ ( d R ) n + ( Y e ) m n ( ℓ ¯ L ) m φ ( e R ) n + h . c . {\displaystyle {\mathcal {L}}_{\text{Yukawa}}=(Y_{\text{u}})_{mn}({\bar {Q}}_{\text{L}})_{m}{\tilde {\varphi }}(u_{\text{R}})_{n}+(Y_{\text{d}})_{mn}({\bar {Q}}_{\text{L}})_{m}\varphi (d_{\text{R}})_{n}+(Y_{\text{e}})_{mn}({\bar {\ell }}_{\text{L}})_{m}{\varphi }(e_{\text{R}})_{n}+\mathrm {h.c.} } where Y u {\displaystyle Y_{\text{u}}} , Y d {\displaystyle Y_{\text{d}}} , and Y e {\displaystyle Y_{\text{e}}} are 3 × 3 matrices of Yukawa couplings, with the mn term giving the coupling of the generations m and n, and h.c. means Hermitian conjugate of preceding terms. The fields Q L {\displaystyle Q_{\text{L}}} and ℓ L {\displaystyle \ell _{\text{L}}} are left-handed quark and lepton doublets. Likewise, u R , d R {\displaystyle u_{\text{R}},d_{\text{R}}} and e R {\displaystyle e_{\text{R}}} are right-handed up-type quark, down-type quark, and lepton singlets. Finally φ {\displaystyle \varphi } is the Higgs doublet and φ ~ = i τ 2 φ ∗ {\displaystyle {\tilde {\varphi }}=i\tau _{2}\varphi ^{*}} is its charge conjugate state. The Yukawa terms are invariant under the SU(2)L × U(1)Y gauge symmetry of the Standard Model and generate masses for all fermions after spontaneous symmetry breaking. == Fundamental interactions == The Standard Model describes three of the four fundamental interactions in nature; only gravity remains unexplained. In the Standard Model, such an interaction is described as an exchange of bosons between the objects affected, such as a photon for the electromagnetic force and a gluon for the strong interaction. Those particles are called force carriers or messenger particles. === Gravity === Despite being perhaps the most familiar fundamental interaction, gravity is not described by the Standard Model, due to contradictions that arise when combining general relativity, the modern theory of gravity, and quantum mechanics. However, gravity is so weak at microscopic scales, that it is essentially unmeasurable. The graviton is postulated to be the mediating particle, but has not yet been proved to exist. === Electromagnetism === Electromagnetism is the only long-range force in the Standard Model. It is mediated by photons and couples to electric charge. 
Electromagnetism is responsible for a wide range of phenomena including atomic electron shell structure, chemical bonds, electric circuits and electronics. Electromagnetic interactions in the Standard Model are described by quantum electrodynamics. === Weak nuclear force === The weak interaction is responsible for various forms of particle decay, such as beta decay. It is weak and short-range, due to the fact that the weak mediating particles, W and Z bosons, have mass. W bosons have electric charge and mediate interactions that change the particle type (referred to as flavor) and charge. Interactions mediated by W bosons are charged current interactions. Z bosons are neutral and mediate neutral current interactions, which do not change particle flavor. Thus Z bosons are similar to the photon, aside from them being massive and interacting with the neutrino. The weak interaction is also the only interaction to violate parity and CP. Parity violation is maximal for charged current interactions, since the W boson interacts exclusively with left-handed fermions and right-handed antifermions. In the Standard Model, the weak force is understood in terms of the electroweak theory, which states that the weak and electromagnetic interactions become united into a single electroweak interaction at high energies. === Strong nuclear force === The strong nuclear force is responsible for hadronic and nuclear binding. It is mediated by gluons, which couple to color charge. Since gluons themselves have color charge, the strong force exhibits confinement and asymptotic freedom. Confinement means that only color-neutral particles can exist in isolation, therefore quarks can only exist in hadrons and never in isolation, at low energies. Asymptotic freedom means that the strong force becomes weaker, as the energy scale increases. The strong force overpowers the electrostatic repulsion of protons and quarks in nuclei and hadrons respectively, at their respective scales. While quarks are bound in hadrons by the fundamental strong interaction, which is mediated by gluons, nucleons are bound by an emergent phenomenon termed the residual strong force or nuclear force. This interaction is mediated by mesons, such as the pion. The color charges inside the nucleon cancel out, meaning most of the gluon and quark fields cancel out outside of the nucleon. However, some residue is "leaked", which appears as the exchange of virtual mesons, that causes the attractive force between nucleons. The (fundamental) strong interaction is described by quantum chromodynamics, which is a component of the Standard Model. == Tests and predictions == The Standard Model predicted the existence of the W and Z bosons, gluon, top quark and charm quark, and predicted many of their properties before these particles were observed. The predictions were experimentally confirmed with good precision. The Standard Model also predicted the existence of the Higgs boson, which was found in 2012 at the Large Hadron Collider, the final fundamental particle predicted by the Standard Model to be experimentally confirmed. == Challenges == Self-consistency of the Standard Model (currently formulated as a non-abelian gauge theory quantized through path-integrals) has not been mathematically proved. While regularized versions useful for approximate computations (for example lattice gauge theory) exist, it is not known whether they converge (in the sense of S-matrix elements) in the limit that the regulator is removed. 
A key question related to the consistency is the Yang–Mills existence and mass gap problem. Experiments indicate that neutrinos have mass, which the classic Standard Model did not allow. To accommodate this finding, the classic Standard Model can be modified to include neutrino mass, although it is not obvious exactly how this should be done. If one insists on using only Standard Model particles, this can be achieved by adding a non-renormalizable interaction of leptons with the Higgs boson. On a fundamental level, such an interaction emerges in the seesaw mechanism where heavy right-handed neutrinos are added to the theory. This is natural in the left-right symmetric extension of the Standard Model and in certain grand unified theories. As long as new physics appears below or around 1014 GeV, the neutrino masses can be of the right order of magnitude. Theoretical and experimental research has attempted to extend the Standard Model into a unified field theory or a theory of everything, a complete theory explaining all physical phenomena including constants. Inadequacies of the Standard Model that motivate such research include: The model does not explain gravitation, although physical confirmation of a theoretical particle known as a graviton would account for it to a degree. Though it addresses strong and electroweak interactions, the Standard Model does not consistently explain the canonical theory of gravitation, general relativity, in terms of quantum field theory. The reason for this is, among other things, that quantum field theories of gravity generally break down before reaching the Planck scale. As a consequence, we have no reliable theory for the very early universe. Some physicists consider it to be ad hoc and inelegant, requiring 19 numerical constants whose values are unrelated and arbitrary. Although the Standard Model, as it now stands, can explain why neutrinos have masses, the specifics of neutrino mass are still unclear. It is believed that explaining neutrino mass will require an additional 7 or 8 constants, which are also arbitrary parameters. The Higgs mechanism gives rise to the hierarchy problem if some new physics (coupled to the Higgs) is present at high energy scales. In these cases, in order for the weak scale to be much smaller than the Planck scale, severe fine tuning of the parameters is required; there are, however, other scenarios that include quantum gravity in which such fine tuning can be avoided. There are also issues of quantum triviality, which suggests that it may not be possible to create a consistent quantum field theory involving elementary scalar particles. The model is inconsistent with the emerging Lambda-CDM model of cosmology. Contentions include the absence of an explanation in the Standard Model of particle physics for the observed amount of cold dark matter (CDM) and its contributions to dark energy, which are many orders of magnitude too large. It is also difficult to accommodate the observed predominance of matter over antimatter (matter/antimatter asymmetry). The isotropy and homogeneity of the visible universe over large distances seems to require a mechanism like cosmic inflation, which would also constitute an extension of the Standard Model. Currently, no proposed theory of everything has been widely accepted or verified. == See also == == Notes == == References == == Further reading == Oerter, Robert (2006). The Theory of Almost Everything: The Standard Model, the Unsung Triumph of Modern Physics. Plume. ISBN 978-0-452-28786-0. 
Schumm, Bruce A. (2004). Deep Down Things: The Breathtaking Beauty of Particle Physics. Johns Hopkins University Press. ISBN 978-0-8018-7971-5. "The Standard Model of Particle Physics Interactive Graphic". === Introductory textbooks === Robert Mann (2009). An Introduction to Particle Physics and the Standard Model. CRC Press. ISBN 9780429141225. W. Greiner; B. Müller (2000). Gauge Theory of Weak Interactions. Springer. ISBN 978-3-540-67672-0. J.E. Dodd; B.M. Gripaios (2020). The Ideas of Particle Physics: An Introduction for Scientists. Cambridge University Press. ISBN 978-1-108-72740-2. D.J. Griffiths (1987). Introduction to Elementary Particles. John Wiley & Sons. ISBN 978-0-471-60386-3. W. N. Cottingham and D. A. Greenwood (2023). An Introduction to the Standard Model of Particle Physics. Cambridge University Press. ISBN 9781009401685. === Advanced textbooks === T.P. Cheng; L.F. Li (2006). Gauge theory of elementary particle physics. Oxford University Press. ISBN 978-0-19-851961-4. Highlights the gauge theory aspects of the Standard Model. J.F. Donoghue; E. Golowich; B.R. Holstein (1994). Dynamics of the Standard Model. Cambridge University Press. ISBN 978-0-521-47652-2. Highlights dynamical and phenomenological aspects of the Standard Model. Ken J. Barnes (2010). Group Theory for the Standard Model of Particle Physics and Beyond. Taylor & Francis. ISBN 9780429184550. Nagashima, Yorikiyo (2013). Elementary Particle Physics: Foundations of the Standard Model, Volume 2. Wiley. ISBN 978-3-527-64890-0. 920 pages. Schwartz, Matthew D. (2014). Quantum Field Theory and the Standard Model. Cambridge University. ISBN 978-1-107-03473-0. 952 pages. Langacker, Paul (2009). The Standard Model and Beyond. CRC Press. ISBN 978-1-4200-7907-4. 670 pages. Highlights group-theoretical aspects of the Standard Model. === Journal articles === E.S. Abers; B.W. Lee (1973). "Gauge theories". Physics Reports. 9 (1): 1–141. Bibcode:1973PhR.....9....1A. doi:10.1016/0370-1573(73)90027-6. M. Baak; et al. (2012). "The Electroweak Fit of the Standard Model after the Discovery of a New Boson at the LHC". The European Physical Journal C. 72 (11): 2205. arXiv:1209.2716. Bibcode:2012EPJC...72.2205B. doi:10.1140/epjc/s10052-012-2205-9. S2CID 15052448. Y. Hayato; et al. (1999). "Search for Proton Decay through p → νK+ in a Large Water Cherenkov Detector". Physical Review Letters. 83 (8): 1529–1533. arXiv:hep-ex/9904020. Bibcode:1999PhRvL..83.1529H. doi:10.1103/PhysRevLett.83.1529. S2CID 118326409. S.F. Novaes (2000). "Standard Model: An Introduction". arXiv:hep-ph/0001283. D.P. Roy (1999). "Basic Constituents of Matter and their Interactions – A Progress Report". arXiv:hep-ph/9912523. F. Wilczek (2004). "The Universe Is A Strange Place". Nuclear Physics B: Proceedings Supplements. 134: 3. arXiv:astro-ph/0401347. Bibcode:2004NuPhS.134....3W. doi:10.1016/j.nuclphysbps.2004.08.001. S2CID 28234516. == External links == "The Standard Model explained in Detail by CERN's John Ellis" omega tau podcast. The Standard Model on the CERN website explains how the basic building blocks of matter interact, governed by four fundamental forces. Particle Physics: Standard Model, Leonard Susskind lectures (2010).
Wikipedia/Standard_Model_of_particle_physics
In physical cosmology and astronomy, dark energy is a proposed form of energy that affects the universe on the largest scales. Its primary effect is to drive the accelerating expansion of the universe. It also slows the rate of structure formation. Assuming that the lambda-CDM model of cosmology is correct, dark energy dominates the universe, contributing 68% of the total energy in the present-day observable universe while dark matter and ordinary (baryonic) matter contribute 27% and 5%, respectively, and other components such as neutrinos and photons are nearly negligible. Dark energy's density is very low: 7×10−30 g/cm3 (6×10−10 J/m3 in mass-energy), much less than the density of ordinary matter or dark matter within galaxies. However, it dominates the universe's mass–energy content because it is uniform across space. The first observational evidence for dark energy's existence came from measurements of supernovae. Type Ia supernovae have constant luminosity, which means that they can be used as accurate distance measures. Comparing this distance to the redshift (which measures the speed at which the supernova is receding) shows that the universe's expansion is accelerating. Prior to this observation, scientists thought that the gravitational attraction of matter and energy in the universe would cause the universe's expansion to slow over time. Since the discovery of accelerating expansion, several independent lines of evidence have been discovered that support the existence of dark energy. The exact nature of dark energy remains a mystery, and many possible explanations have been theorized. The main candidates are a cosmological constant (representing a constant energy density filling space homogeneously) and scalar fields (dynamic quantities having energy densities that vary in time and space) such as quintessence or moduli. A cosmological constant would remain constant across time and space, while scalar fields can vary. Yet other possibilities are interacting dark energy (see the section Dark energy § Theories of dark energy) an observational effect, cosmological coupling and shockwave cosmology (see the section § Alternatives to dark energy). == History of discovery and previous speculation == === Einstein's cosmological constant === The "cosmological constant" is a constant term that can be added to Einstein field equations of general relativity. If considered as a "source term" in the field equation, it can be viewed as equivalent to the mass of empty space (which conceptually could be either positive or negative), or "vacuum energy". The cosmological constant was first proposed by Einstein as a mechanism to obtain a solution to the gravitational field equation that would lead to a static universe, effectively using dark energy to balance gravity. Einstein gave the cosmological constant the symbol Λ (capital lambda). Einstein stated that the cosmological constant required that 'empty space takes the role of gravitating negative masses that are distributed all over the interstellar space'. The mechanism was an example of fine-tuning, and it was later realized that Einstein's static universe would not be stable: local inhomogeneities would ultimately lead to either the runaway expansion or contraction of the universe. The equilibrium is unstable: if the universe expands slightly, then the expansion releases vacuum energy, which causes yet more expansion. Likewise, a universe that contracts slightly will continue contracting. 
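This instability can be made concrete with a toy integration (an illustrative sketch assuming SciPy, not a statement from the article's sources). In units where the static radius and Λ/3 are both set to 1, the acceleration equation for a pressureless matter plus Λ universe reduces to a″ = −1/a² + a, which balances exactly at a = 1; a 0.1% perturbation then grows without bound.

```python
from scipy.integrate import solve_ivp

def rhs(t, y):
    a, a_dot = y
    # gravity of matter scales as 1/a^2, the Lambda term grows as a
    return [a_dot, -1.0 / a**2 + a]

# Start at rest, 0.1% above the static solution a = 1.
sol = solve_ivp(rhs, (0.0, 6.0), [1.001, 0.0], dense_output=True, rtol=1e-8)
for t in (0.0, 2.0, 4.0, 6.0):
    print(f"t = {t}: a = {sol.sol(t)[0]:.3f}")
# The scale factor drifts away from a = 1 and eventually runs away instead of
# returning to the static value: the Einstein static universe is unstable.
```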
According to Einstein, "empty space" can possess its own energy. Because this energy is a property of space itself, it would not be diluted as space expands. As more space comes into existence, more of this energy-of-space would appear, thereby causing accelerated expansion. These sorts of disturbances are inevitable, due to the uneven distribution of matter throughout the universe. Further, observations made by Edwin Hubble in 1929 showed that the universe appears to be expanding and is not static. Einstein reportedly referred to his failure to predict the idea of a dynamic universe, in contrast to a static universe, as his greatest blunder. === Inflationary dark energy === Alan Guth and Alexei Starobinsky proposed in 1980 that a negative pressure field, similar in concept to dark energy, could drive cosmic inflation in the very early universe. Inflation postulates that some repulsive force, qualitatively similar to dark energy, resulted in an enormous and exponential expansion of the universe during its earliest stages. Such expansion is an essential feature of most current models of the Big Bang. However, inflation must have occurred at a much higher energy density than the dark energy we observe today, and inflation is thought to have completely ended when the universe was just a fraction of a second old. It is unclear what relation, if any, exists between dark energy and inflation. Even after inflationary models became accepted, the cosmological constant was thought to be irrelevant to the current universe. Nearly all inflation models predict that the total (matter+energy) density of the universe should be very close to the critical density. During the 1980s, most cosmological research focused on models with critical density in matter only, usually 95% cold dark matter (CDM) and 5% ordinary matter (baryons). These models were found to be successful at forming realistic galaxies and clusters, but some problems appeared in the late 1980s: in particular, the model required a value for the Hubble constant lower than preferred by observations, and the model under-predicted observations of large-scale galaxy clustering. These difficulties became stronger after the discovery of anisotropy in the cosmic microwave background by the COBE spacecraft in 1992, and several modified CDM models came under active study through the mid-1990s: these included the Lambda-CDM model and a mixed cold/hot dark matter model. The first direct evidence for dark energy came from supernova observations in 1998 of accelerated expansion in Riess et al. and in Perlmutter et al., and the Lambda-CDM model then became the leading model. Soon after, dark energy was supported by independent observations: in 2000, the BOOMERanG and Maxima cosmic microwave background experiments observed the first acoustic peak in the cosmic microwave background, showing that the total (matter+energy) density is close to 100% of critical density. Then in 2001, the 2dF Galaxy Redshift Survey gave strong evidence that the matter density is around 30% of critical. The large difference between these two supports a smooth component of dark energy making up the difference. Much more precise measurements from WMAP in 2003–2010 have continued to support the standard model and give more accurate measurements of the key parameters. The term "dark energy", echoing Fritz Zwicky's "dark matter" from the 1930s, was coined by Michael S. Turner in 1998. 
== Nature == The nature of dark energy is more hypothetical than that of dark matter, and many things about it remain in the realm of speculation. Dark energy is thought to be very homogeneous and not dense, and is not known to interact through any of the fundamental forces other than gravity. Since it is rarefied and un-massive—roughly 10−27 kg/m3—it is unlikely to be detectable in laboratory experiments. The reason dark energy can have such a profound effect on the universe, making up 68% of universal density in spite of being so dilute, is that it is believed to uniformly fill otherwise empty space. The vacuum energy, that is, the particle-antiparticle pairs generated and mutually annihilated within a time frame in accord with Heisenberg's uncertainty principle in the energy-time formulation, has been often invoked as the main contribution to dark energy. The mass–energy equivalence postulated by general relativity implies that the vacuum energy should exert a gravitational force. Hence, the vacuum energy is expected to contribute to the cosmological constant, which in turn impinges on the accelerated expansion of the universe. However, the cosmological constant problem asserts that there is a huge disagreement between the observed values of vacuum energy density and the theoretical large value of zero-point energy obtained by quantum field theory; the problem remains unresolved. Independently of its actual nature, dark energy would need to have a strong negative pressure to explain the observed acceleration of the expansion of the universe. According to general relativity, the pressure within a substance contributes to its gravitational attraction for other objects just as its mass density does. This happens because the physical quantity that causes matter to generate gravitational effects is the stress–energy tensor, which contains both the energy (or matter) density of a substance and its pressure. In the Friedmann–Lemaître–Robertson–Walker metric, it can be shown that a strong constant negative pressure (i.e., tension) in all the universe causes an acceleration in the expansion if the universe is already expanding, or a deceleration in contraction if the universe is already contracting. This accelerating expansion effect is sometimes labeled "gravitational repulsion". === Technical definition === In standard cosmology, there are three components of the universe: matter, radiation, and dark energy. This matter is anything whose energy density scales with the inverse cube of the scale factor, i.e., ρ ∝ a−3, while radiation is anything whose energy density scales to the inverse fourth power of the scale factor (ρ ∝ a−4). This can be understood intuitively: for an ordinary particle in a cube-shaped box, doubling the length of an edge of the box decreases the density (and hence energy density) by a factor of eight (23). For radiation, the decrease in energy density is greater, because an increase in spatial distance also causes a redshift. The final component is dark energy: it is an intrinsic property of space and has a constant energy density, regardless of the dimensions of the volume under consideration (ρ ∝ a0). Thus, unlike ordinary matter, it is not diluted by the expansion of space. === Change in expansion over time === High-precision measurements of the expansion of the universe are required to understand how the expansion rate changes over time and space. 
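A small sketch, using the scalings just described and round present-day density parameters (values assumed only for illustration), shows how recently dark energy took over: the matter density grows into the past as (1 + z)³ while the dark-energy density stays constant, so equality and the onset of acceleration both occur at modest redshift.

```python
Omega_m, Omega_L = 0.31, 0.69      # assumed present-day density parameters

# Matter / dark-energy equality: Omega_m (1+z)^3 = Omega_L
z_eq = (Omega_L / Omega_m) ** (1 / 3) - 1

# Onset of acceleration (a'' = 0 for w = -1): rho_Lambda = rho_matter / 2
z_acc = (2 * Omega_L / Omega_m) ** (1 / 3) - 1

print(f"matter / dark-energy equality at z ~ {z_eq:.2f}")   # ~0.3
print(f"expansion starts accelerating at z ~ {z_acc:.2f}")  # ~0.6-0.7
```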
In general relativity, the evolution of the expansion rate is estimated from the curvature of the universe and the cosmological equation of state (the relationship between temperature, pressure, and combined matter, energy, and vacuum energy density for any region of space). Measuring the equation of state for dark energy is one of the biggest efforts in observational cosmology today. Adding the cosmological constant to cosmology's standard FLRW metric leads to the Lambda-CDM model, which has been referred to as the "standard model of cosmology" because of its precise agreement with observations. As of 2013, the Lambda-CDM model is consistent with a series of increasingly rigorous cosmological observations, including the Planck spacecraft and the Supernova Legacy Survey. First results from the SNLS revealed that the average behavior (i.e., equation of state) of dark energy behaves like Einstein's cosmological constant to a precision of 10%. Recent results from the Hubble Space Telescope Higher-Z Team indicate that dark energy has been present for at least 9 billion years and during the period preceding cosmic acceleration. In March 2025, the Dark Energy Spectroscopic Instrument (DESI) collaboration announced that evidence for evolving dark energy had been found in an analysis combining DESI data on baryon acoustic oscillations (BAO) with CMB, weak lensing, and supernova datasets, with significance ranging from 2.8 to 4.2σ. The results suggest that the density of dark energy is slowly decreasing with time. == Evidence of existence == The evidence for dark energy is indirect but comes from three independent sources: Distance measurements and their relation to redshift, which suggest the universe has expanded more in the latter half of its life than in the former half. The theoretical need for a type of additional energy that is not matter or dark matter to form the observationally flat universe (absence of any detectable global curvature). Measurements of large-scale wave patterns of mass density in the universe. === Supernovae === In 1998, the High-Z Supernova Search Team published observations of Type Ia ("one-A") supernovae. In 1999, the Supernova Cosmology Project followed, suggesting that the expansion of the universe is accelerating. The 2011 Nobel Prize in Physics was awarded to Saul Perlmutter, Brian P. Schmidt, and Adam G. Riess for their leadership in the discovery. Since then, these observations have been corroborated by several independent sources. Measurements of the cosmic microwave background, gravitational lensing, and the large-scale structure of the cosmos, as well as improved measurements of supernovae, have been consistent with the Lambda-CDM model. Some researchers argue that the only indications for the existence of dark energy are observations of distance measurements and their associated redshifts. Cosmic microwave background anisotropies and baryon acoustic oscillations serve only to demonstrate that distances to a given redshift are larger than would be expected from a "dusty" Friedmann–Lemaître universe and the local measured Hubble constant. Supernovae are useful for cosmology because they are excellent standard candles across cosmological distances. They allow researchers to measure the expansion history of the universe by looking at the relationship between the distance to an object and its redshift, which indicates how fast it is receding from us. The relationship is roughly linear, according to Hubble's law.
It is relatively easy to measure redshift, but finding the distance to an object is more difficult. Usually, astronomers use standard candles: objects for which the intrinsic brightness, or absolute magnitude, is known. This allows the object's distance to be measured from its actual observed brightness, or apparent magnitude. Type Ia supernovae are the most accurate known standard candles across cosmological distances because of their extreme and consistent luminosity. Recent observations of supernovae are consistent with a universe made up 66.6% of dark energy and 33.4% of a combination of dark matter and baryonic matter assuming a flat Lambda-CDM model. === Large-scale structure === The theory of large-scale structure, which governs the formation of structures in the universe (stars, quasars, galaxies and galaxy groups and clusters), also suggests that the density of matter in the universe is only 30% of the critical density. A 2011 survey, the WiggleZ galaxy survey of more than 200,000 galaxies, provided further evidence towards the existence of dark energy, although the exact physics behind it remains unknown. The WiggleZ survey from the Australian Astronomical Observatory scanned the galaxies to determine their redshift. Then, by exploiting the fact that baryon acoustic oscillations have left voids regularly of ≈150 Mpc diameter, surrounded by the galaxies, the voids were used as standard rulers to estimate distances to galaxies as far as 2,000 Mpc (redshift 0.6), allowing for accurate estimate of the speeds of galaxies from their redshift and distance. The data confirmed cosmic acceleration up to half of the age of the universe (7 billion years) and constrain its inhomogeneity to 1 part in 10. This provides a confirmation to cosmic acceleration independent of supernovae. === Cosmic microwave background === The existence of dark energy, in whatever form, is needed to reconcile the measured geometry of space with the total amount of matter in the universe. Measurements of cosmic microwave background anisotropies indicate that the universe is close to flat. For the shape of the universe to be flat, the mass–energy density of the universe must be equal to the critical density. The total amount of matter in the universe (including baryons and dark matter), as measured from the cosmic microwave background spectrum, accounts for only about 30% of the critical density. This implies the existence of an additional form of energy to account for the remaining 70%. The Wilkinson Microwave Anisotropy Probe (WMAP) spacecraft seven-year analysis estimated a universe made up of 72.8% dark energy, 22.7% dark matter, and 4.5% ordinary matter. Work done in 2013 based on the Planck spacecraft observations of the cosmic microwave background gave a more accurate estimate of 68.3% dark energy, 26.8% dark matter, and 4.9% ordinary matter. === Late-time integrated Sachs–Wolfe effect === Accelerated cosmic expansion causes gravitational potential wells and hills to flatten as photons pass through them, producing cold spots and hot spots on the cosmic microwave background aligned with vast supervoids and superclusters. This so-called late-time Integrated Sachs–Wolfe effect (ISW) is a direct signal of dark energy in a flat universe. It was reported at high significance in 2008 by Ho et al. and Giannantonio et al. 
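As a concrete sketch of the supernova test described above (assuming NumPy and SciPy; the cosmological parameters are illustrative), the luminosity distance in a flat Lambda-CDM model follows from a single integral over the expansion history, and the distance modulus m − M = 5 log₁₀(d_L/10 pc) is what gets compared against the observed brightness of a standard candle.

```python
import numpy as np
from scipy.integrate import quad

c = 299792.458            # speed of light, km/s
H0 = 70.0                 # Hubble constant, km/s/Mpc (illustrative)
Omega_m, Omega_L = 0.3, 0.7

def E(z):
    """Dimensionless expansion rate H(z)/H0 for a flat matter + Lambda universe."""
    return np.sqrt(Omega_m * (1 + z) ** 3 + Omega_L)

def luminosity_distance_mpc(z):
    comoving, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return (1 + z) * (c / H0) * comoving

for z in (0.1, 0.5, 1.0):
    d_l = luminosity_distance_mpc(z)
    mu = 5 * np.log10(d_l * 1e6 / 10)     # distance modulus; d_l converted to parsecs
    print(f"z = {z}: d_L ~ {d_l:6.0f} Mpc, m - M ~ {mu:.2f}")
```

A dark-energy dominated model predicts larger distances, and hence fainter supernovae, at a given redshift than a matter-only model, which is the signature seen in the 1998–1999 data.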
=== Observational Hubble constant data === A newer approach to testing for dark energy uses observational Hubble constant data (OHD), also known as cosmic chronometers, and has gained significant attention in recent years. The Hubble parameter, H(z), is measured as a function of cosmological redshift. OHD directly tracks the expansion history of the universe by taking passively evolving early-type galaxies as "cosmic chronometers"; in effect, these galaxies serve as standard clocks distributed across the universe. The core of the idea is measuring the differential age evolution of these chronometers as a function of redshift, which provides a direct estimate of the Hubble parameter {\displaystyle H(z)=-{\frac {1}{1+z}}{\frac {dz}{dt}}\approx -{\frac {1}{1+z}}{\frac {\Delta z}{\Delta t}}.} The reliance on a differential quantity, Δz/Δt, is appealing for computation and can minimize many common issues and systematic effects. Analyses of supernovae and baryon acoustic oscillations (BAO) are based on integrals of the Hubble parameter, whereas Δz/Δt measures it directly. For these reasons, this method has been widely used to examine the accelerated cosmic expansion and to study properties of dark energy. == Theories of dark energy == Dark energy's status as a hypothetical force with unknown properties makes it an active target of research. The problem is attacked from a variety of angles, such as modifying the prevailing theory of gravity (general relativity), attempting to pin down the properties of dark energy, and finding alternative ways to explain the observational data. === Cosmological constant === The simplest explanation for dark energy is that it is an intrinsic, fundamental energy of space. This is the cosmological constant, usually represented by the Greek letter Λ (Lambda, hence the name Lambda-CDM model). Since energy and mass are related according to the equation E = mc2, Einstein's theory of general relativity predicts that this energy will have a gravitational effect. It is sometimes called vacuum energy because it is the energy density of empty space – of vacuum. A major outstanding problem is that the same quantum field theories predict a huge cosmological constant, about 120 orders of magnitude too large. This would need to be almost, but not exactly, cancelled by an equally large term of the opposite sign. Some supersymmetric theories require a cosmological constant that is exactly zero. Also, it is unknown whether there is a metastable vacuum state in string theory with a positive cosmological constant, and it has been conjectured by Ulf Danielsson et al. that no such state exists. This conjecture would not rule out other models of dark energy, such as quintessence, that could be compatible with string theory. === Quintessence === In quintessence models of dark energy, the observed acceleration of the scale factor is caused by the potential energy of a dynamical field, referred to as the quintessence field. Quintessence differs from the cosmological constant in that it can vary in space and time. In order for it not to clump and form structure like matter, the field must be very light so that it has a large Compton wavelength. In the simplest scenarios, the quintessence field has a canonical kinetic term, is minimally coupled to gravity, and does not feature higher-order operators in its Lagrangian. No evidence of quintessence is yet available, nor has it been ruled out.
It generally predicts a slightly slower acceleration of the expansion of the universe than the cosmological constant. Some scientists think that the best evidence for quintessence would come from violations of Einstein's equivalence principle and variation of the fundamental constants in space or time. Scalar fields are predicted by the Standard Model of particle physics and string theory, but an analogous problem to the cosmological constant problem (or the problem of constructing models of cosmological inflation) occurs: renormalization theory predicts that scalar fields should acquire large masses. The coincidence problem asks why the acceleration of the Universe began when it did. If acceleration began earlier in the universe, structures such as galaxies would never have had time to form, and life, at least as we know it, would never have had a chance to exist. Proponents of the anthropic principle view this as support for their arguments. However, many models of quintessence have a so-called "tracker" behavior, which solves this problem. In these models, the quintessence field has a density which closely tracks (but is less than) the radiation density until matter–radiation equality, which triggers quintessence to start behaving as dark energy, eventually dominating the universe. This naturally sets the low energy scale of the dark energy. In 2004, when scientists fit the evolution of dark energy with the cosmological data, they found that the equation of state ω had possibly crossed the cosmological constant boundary ω = −1 from above to below. A no-go theorem has been proved showing that this scenario requires models with at least two types of scalar fields. This scenario is called Quintom, which was proposed by Xinmin Zhang's group in 2004. Some special cases of quintessence are phantom dark energy, in which the energy density of quintessence actually increases with time, and k-essence (short for kinetic quintessence) which has a non-standard form of kinetic energy such as a negative kinetic energy. They can have unusual properties: phantom dark energy, for example, can cause a Big Rip. A group of researchers argued in 2021 that observations of the Hubble tension may imply that only quintessence models with a nonzero coupling constant are viable. === Interacting dark energy === This class of theories attempts to come up with an all-encompassing theory of both dark matter and dark energy as a single phenomenon that modifies the laws of gravity at various scales. This could, for example, treat dark energy and dark matter as different facets of the same unknown substance, or postulate that cold dark matter decays into dark energy. Another class of theories that unify dark matter and dark energy consists of covariant theories of modified gravity. These theories alter the dynamics of spacetime such that the modified dynamics accounts for what has been attributed to the presence of dark energy and dark matter. Dark energy could in principle interact not only with the rest of the dark sector, but also with ordinary matter. However, cosmology alone is not sufficient to effectively constrain the strength of the coupling between dark energy and baryons, so that other indirect techniques or laboratory searches have to be adopted. It was briefly theorized in the early 2020s that an excess observed in the XENON1T detector in Italy may have been caused by a chameleon model of dark energy, but further experiments disproved this possibility. 
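The role of the equation-of-state parameter ω mentioned above can be made explicit with a standard result for a component of constant ω = p/ρ in a homogeneous expanding universe (a worked relation included here for orientation, not a claim specific to any one of the models just described): the continuity equation dρ/dt + 3(ȧ/a)(1 + ω)ρ = 0 integrates to ρ ∝ a^(−3(1+ω)), where a is the scale factor. For ω = −1 (a cosmological constant) the density stays constant as space expands; for −1 < ω < −1/3 (typical quintessence) it dilutes, though slowly enough to drive accelerated expansion; and for ω < −1 (phantom dark energy) the density grows as the universe expands, which is why such models can end in a Big Rip.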
=== Variable dark energy models === The density of dark energy might have varied in time during the history of the universe. Modern observational data allows us to estimate the present density of dark energy. Using baryon acoustic oscillations, it is possible to investigate the effect of dark energy in the history of the universe, and constrain parameters of the equation of state of dark energy. To that end, several models have been proposed. One of the most popular models is the Chevallier–Polarski–Linder model (CPL). Some other common models are Barboza & Alcaniz (2008), Jassal et al. (2005), Wetterich (2004), and Oztas et al. (2018). There is some observational evidence that dark energy is indeed decreasing with time. Data from the Dark Energy Spectroscopic Instrument (DESI), tracking the size of baryon acoustic oscillations over the universe's expansion history, suggests that the amount of dark energy is 10% lower than it was 4.5 billion years ago. However, there is not yet sufficient data to rule out dark energy being the cosmological constant. == Alternatives to dark energy == === Modified gravity === The evidence for dark energy is heavily dependent on the theory of general relativity. Therefore, it is conceivable that a modification to general relativity also eliminates the need for dark energy. There are many such theories, and research is ongoing. The measurement of the speed of gravity in the first gravitational wave measured by non-gravitational means (GW170817) ruled out many modified gravity theories as explanations of dark energy. Astrophysicist Ethan Siegel states that, while such alternatives gain mainstream press coverage, almost all professional astrophysicists are confident that dark energy exists and that none of the competing theories successfully explain observations to the same level of precision as standard dark energy. === Observational skepticism === Some alternatives to dark energy, such as inhomogeneous cosmology, aim to explain the observational data by a more refined use of established theories. In this scenario, dark energy does not actually exist, and is merely a measurement artifact. For example, if we are located in an emptier-than-average region of space, the observed cosmic expansion rate could be mistaken for a variation in time, or acceleration. A different approach uses a cosmological extension of the equivalence principle to show how space might appear to be expanding more rapidly in the voids surrounding our local cluster. While weak, such effects considered cumulatively over billions of years could become significant, creating the illusion of cosmic acceleration, and making it appear as if we live in a Hubble bubble. Yet other possibilities are that the accelerated expansion of the universe is an illusion caused by our motion relative to the rest of the universe, or that the statistical methods employed were flawed. A laboratory direct detection attempt failed to detect any force associated with dark energy. Observational skepticism explanations of dark energy have generally not gained much traction among cosmologists. For example, a paper that suggested the anisotropy of the local Universe has been misrepresented as dark energy was quickly countered by another paper claiming errors in the original paper. Another study questioning the essential assumption that the luminosity of Type Ia supernovae does not vary with stellar population age was also swiftly rebutted by other cosmologists. 
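For reference, the Chevallier–Polarski–Linder (CPL) parameterization named in the "Variable dark energy models" subsection above is commonly written as a two-parameter equation of state, ω(a) = ω₀ + ωₐ(1 − a) = ω₀ + ωₐ z/(1 + z), where a = 1/(1 + z) is the scale factor (a = 1 today). A cosmological constant corresponds to ω₀ = −1 and ωₐ = 0, so surveys such as DESI test for evolving dark energy by asking whether the data prefer values of (ω₀, ωₐ) away from that point. (The parameterization is usually written with w rather than ω; the notation here follows the rest of this article.)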
=== As a general relativistic effect due to black holes === This theory was formulated by researchers of the University of Hawaiʻi at Mānoa in February 2023. The idea is that if one requires the Kerr metric (which describes rotating black holes) to asymptote to the Friedmann-Robertson-Walker metric (which describes the isotropic and homogeneous universe that is the basic assumption of modern cosmology), then one finds that black holes gain mass as the universe expands. The rate is measured to be ∝ a³, where a is the scale factor. This particular rate means that the energy density of black holes remains constant over time, mimicking dark energy (see Dark energy#Technical definition). The theory is called "cosmological coupling" because the black holes couple to a cosmological requirement. Other astrophysicists are skeptical, with a variety of papers claiming that the theory fails to explain other observations. === Shockwave cosmology === Shockwave cosmology, proposed by Joel Smoller and Blake Temple in 2003, has the “big bang” as an explosion inside a black hole, producing the expanding volume of space and matter that includes the observable universe. A related theory by Smoller, Temple, and Vogler proposes that this shockwave may have resulted in our part of the universe having a lower density than that surrounding it, causing the accelerated expansion normally attributed to dark energy. They also propose that this related theory could be tested: a universe with dark energy should give a figure for the cubic correction to redshift versus luminosity C = −0.180 at a = a whereas for Smoller, Temple, and Vogler's alternative C should be positive rather than negative. They give a more precise calculation for their shockwave model alternative as: the cubic correction to redshift versus luminosity at a = a is C = 0.359. Although shockwave cosmology produces a universe that "looks essentially identical to the aftermath of the big bang", cosmologists consider that it needs further development before it could be considered as a more advantageous model than the big bang theory (or standard model) in explaining the universe. In particular, and especially for the proposed alternative to dark energy, it would need to explain big bang nucleosynthesis, the quantitative details of the microwave background anisotropies, the Lyman-alpha forest, and galaxy surveys. == Implications for the fate of the universe == Cosmologists estimate that the acceleration began roughly 5 billion years ago. Before that, it is thought that the expansion was decelerating, due to the attractive influence of matter. The density of dark matter in an expanding universe decreases more quickly than dark energy, and eventually the dark energy dominates. Specifically, when the volume of the universe doubles, the density of dark matter is halved, but the density of dark energy is nearly unchanged (it is exactly constant in the case of a cosmological constant). Projections into the future can differ radically for different models of dark energy. For a cosmological constant, or any other model that predicts that the acceleration will continue indefinitely, the ultimate result will be that galaxies outside the Local Group will have a line-of-sight velocity that continually increases with time, eventually far exceeding the speed of light. 
This is not a violation of special relativity because the notion of "velocity" used here is different from that of velocity in a local inertial frame of reference, which is still constrained to be less than the speed of light for any massive object (see Uses of the proper distance for a discussion of the subtleties of defining any notion of relative velocity in cosmology). Because the Hubble parameter is decreasing with time, there can actually be cases where a galaxy that is receding from us faster than light does manage to emit a signal which reaches us eventually. However, because of the accelerating expansion, it is projected that most galaxies will eventually cross a type of cosmological event horizon where any light they emit past that point will never be able to reach us at any time in the infinite future because the light never reaches a point where its "peculiar velocity" toward us exceeds the expansion velocity away from us (these two notions of velocity are also discussed in Uses of the proper distance). Assuming the dark energy is constant (a cosmological constant), the current distance to this cosmological event horizon is about 16 billion light years, meaning that a signal from an event happening at present would eventually be able to reach us in the future if the event were less than 16 billion light years away, but the signal would never reach us if the event were more than 16 billion light years away. As galaxies approach the point of crossing this cosmological event horizon, the light from them will become more and more redshifted, to the point where the wavelength becomes too large to detect in practice and the galaxies appear to vanish completely (see Future of an expanding universe). Planet Earth, the Milky Way, and the Local Group of galaxies, of which the Milky Way is a part, would all remain virtually undisturbed as the rest of the universe recedes and disappears from view. In this scenario, the Local Group would ultimately suffer heat death, just as was hypothesized for the flat, matter-dominated universe before measurements of cosmic acceleration. There are other, more speculative ideas about the future of the universe. The phantom model of dark energy results in divergent expansion, which would imply that the effective force of dark energy continues growing until it dominates all other forces in the universe. Under this scenario, dark energy would ultimately tear apart all gravitationally bound structures, including galaxies and solar systems, and eventually overcome the electrical and nuclear forces to tear apart atoms themselves, ending the universe in a "Big Rip". On the other hand, dark energy might dissipate with time or even become attractive. Such uncertainties leave open the possibility of gravity eventually prevailing and leading to a universe that contracts in on itself in a "Big Crunch", or that there may even be a dark energy cycle, which implies a cyclic model of the universe in which every iteration (Big Bang then eventually a Big Crunch) takes about a trillion (10¹²) years. While none of these are supported by observations, they are not ruled out. == In philosophy of science == The astrophysicist David Merritt identifies dark energy as an example of an "auxiliary hypothesis", an ad hoc postulate that is added to a theory in response to observations that falsify it. 
He argues that the dark energy hypothesis is a conventionalist hypothesis, that is, a hypothesis that adds no empirical content and hence is unfalsifiable in the sense defined by Karl Popper. However, his opinion is not shared by all scientists. == See also == == Notes == == References == == External links == Euclid ESA Satellite, a mission to map the geometry of the dark universe "Surveying the dark side" by Roberto Trotta and Richard Bower, Astron. Geophys.
Wikipedia/Dark_energy
Uniformitarianism, also known as the Doctrine of Uniformity or the Uniformitarian Principle, is the assumption that the same natural laws and processes that operate in our present-day scientific observations have always operated in the universe in the past and apply everywhere in the universe. It refers to invariance in the metaphysical principles underpinning science, such as the constancy of cause and effect throughout space-time, but has also been used to describe spatiotemporal invariance of physical laws. Though an unprovable postulate that cannot be verified using the scientific method, some consider that uniformitarianism should be a required first principle in scientific research. In geology, uniformitarianism has included the gradualistic concept that "the present is the key to the past" and that geological events occur at the same rate now as they have always done, though many modern geologists no longer hold to a strict gradualism. Coined by William Whewell, uniformitarianism was originally proposed in contrast to catastrophism by British naturalists in the late 18th century, starting with the work of the geologist James Hutton in his many books including Theory of the Earth. Hutton's work was later refined by scientist John Playfair and popularised by geologist Charles Lyell's Principles of Geology in 1830. Today, Earth's history is considered to have been a slow, gradual process, punctuated by occasional natural catastrophic events. == History == === 18th century === Abraham Gottlob Werner (1749–1817) proposed Neptunism, where strata represented deposits from shrinking seas precipitated onto primordial rocks such as granite. In 1785 James Hutton proposed an opposing, self-maintaining infinite cycle based on natural history and not on the Biblical account. The solid parts of the present land appear in general, to have been composed of the productions of the sea, and of other materials similar to those now found upon the shores. Hence we find a reason to conclude: 1st, That the land on which we rest is not simple and original, but that it is a composition, and had been formed by the operation of second causes. 2nd, That before the present land was made, there had subsisted a world composed of sea and land, in which were tides and currents, with such operations at the bottom of the sea as now take place. And, Lastly, That while the present land was forming at the bottom of the ocean, the former land maintained plants and animals; at least the sea was then inhabited by animals, in a similar manner as it is at present. Hence we are led to conclude, that the greater part of our land, if not the whole had been produced by operations natural to this globe; but that in order to make this land a permanent body, resisting the operations of the waters, two things had been required; 1st, The consolidation of masses formed by collections of loose or incoherent materials; 2ndly, The elevation of those consolidated masses from the bottom of the sea, the place where they were collected, to the stations in which they now remain above the level of the ocean. Hutton then sought evidence to support his idea that there must have been repeated cycles, each involving deposition on the seabed, uplift with tilting and erosion, and then moving undersea again for further layers to be deposited. At Glen Tilt in the Cairngorm mountains he found granite penetrating metamorphic schists, in a way which indicated to him that the presumed primordial rock had been molten after the strata had formed. 
He had read about angular unconformities as interpreted by Neptunists, and found an unconformity at Jedburgh where layers of greywacke in the lower layers of the cliff face had been tilted almost vertically before being eroded to form a level plane, under horizontal layers of Old Red Sandstone. In the spring of 1788 he took a boat trip along the Berwickshire coast with John Playfair and the geologist Sir James Hall, and found a dramatic unconformity showing the same sequence at Siccar Point. Playfair later recalled that "the mind seemed to grow giddy by looking so far into the abyss of time", and Hutton concluded a 1788 paper he presented at the Royal Society of Edinburgh, later rewritten as a book, with the phrase "we find no vestige of a beginning, no prospect of an end". Both Playfair and Hall wrote their own books on the theory, and for decades robust debate continued between Hutton's supporters and the Neptunists. Georges Cuvier's paleontological work in the 1790s, which established the reality of extinction, explained this by local catastrophes, after which other fixed species repopulated the affected areas. In Britain, geologists adapted this idea into "diluvial theory" which proposed repeated worldwide annihilation and creation of new fixed species adapted to a changed environment, initially identifying the most recent catastrophe as the biblical flood. === 19th century === From 1830 to 1833 Charles Lyell's multi-volume Principles of Geology was published. The work's subtitle was "An attempt to explain the former changes of the Earth's surface by reference to causes now in operation". He drew his explanations from field studies conducted directly before he went to work on the founding geology text, and developed Hutton's idea that the earth was shaped entirely by slow-moving forces still in operation today, acting over a very long period of time. The terms uniformitarianism for this idea, and catastrophism for the opposing viewpoint, were coined by William Whewell in a review of Lyell's book. Principles of Geology was the most influential geological work in the middle of the 19th century. ==== Systems of inorganic earth history ==== Geoscientists support diverse systems of Earth history, the nature of which rests on a certain mixture of views about the process, control, rate, and state which are preferred. Because geologists and geomorphologists tend to adopt opposite views over process, rate, and state in the inorganic world, there are eight different systems of beliefs in the development of the terrestrial sphere. All geoscientists stand by the principle of uniformity of law. Most, but not all, are directed by the principle of simplicity. All make definite assertions about the quality of rate and state in the inorganic realm. ==== Lyell ==== Lyell's uniformitarianism is a family of four related propositions, not a single idea: Uniformity of law – the laws of nature are constant across time and space. Uniformity of methodology – the appropriate hypotheses for explaining the geological past are those with analogy today. Uniformity of kind – past and present causes are all of the same kind, have the same energy, and produce the same effects. Uniformity of degree – geological circumstances have remained the same over time. None of these connotations requires another, and they are not all equally inferred by uniformitarians. 
Gould explained Lyell's propositions in Time's Arrow, Time's Cycle (1987), stating that Lyell conflated two different types of propositions: a pair of methodological assumptions with a pair of substantive hypotheses. The four together make up Lyell's uniformitarianism. ===== Methodological assumptions ===== The two methodological assumptions below are accepted to be true by the majority of scientists and geologists. Gould claims that these philosophical propositions must be assumed before you can proceed as a scientist doing science. "You cannot go to a rocky outcrop and observe either the constancy of nature's laws or the working of unknown processes. It works the other way around." You first assume these propositions and "then you go to the outcrop." Uniformity of law across time and space: Natural laws are constant across space and time. The axiom of uniformity of law is necessary in order for scientists to extrapolate (by inductive inference) into the unobservable past. The constancy of natural laws must be assumed in the study of the past; else we cannot meaningfully study it. Uniformity of process across time and space: Natural processes are constant across time and space. Though similar to uniformity of law, this second a priori assumption, shared by the vast majority of scientists, deals with geological causes, not physicochemical laws. The past is to be explained by processes acting currently in time and space rather than inventing extra esoteric or unknown processes without good reason, otherwise known as parsimony or Occam's razor. ===== Substantive hypotheses ===== The substantive hypotheses were controversial and, in some cases, accepted by few. These hypotheses are judged true or false on empirical grounds through scientific observation and repeated experimental data. This is in contrast with the previous two philosophical assumptions that come before one can do science and so cannot be tested or falsified by science. Uniformity of rate across time and space: Change is typically slow, steady, and gradual. Uniformity of rate (or gradualism) is what most people (including geologists) think of when they hear the word "uniformitarianism", confusing this hypothesis with the entire definition. As late as 1990, Lemon, in his textbook of stratigraphy, affirmed that "The uniformitarian view of earth history held that all geologic processes proceed continuously and at a very slow pace." Gould explained Hutton's view of uniformity of rate; mountain ranges or grand canyons are built by the accumulation of nearly insensible changes added up through vast time. Some major events such as floods, earthquakes, and eruptions, do occur. But these catastrophes are strictly local. They neither occurred in the past nor shall happen in the future, at any greater frequency or extent than they display at present. In particular, the whole earth is never convulsed at once. Uniformity of state across time and space: Change is evenly distributed throughout space and time. The uniformity of state hypothesis implies that throughout the history of our earth there is no progress in any inexorable direction. The planet has almost always looked and behaved as it does now. Change is continuous but leads nowhere. The earth is in balance: a dynamic steady state. === 20th century === Stephen Jay Gould's first scientific paper, "Is uniformitarianism necessary?" (1965), reduced these four assumptions to two. 
He dismissed the first principle, which asserted spatial and temporal invariance of natural laws, as no longer an issue of debate. He rejected the third (uniformity of rate) as an unjustified limitation on scientific inquiry, as it constrains past geologic rates and conditions to those of the present. So, Lyell's uniformitarianism was deemed unnecessary. Uniformitarianism was proposed in contrast to catastrophism, which states that the distant past "consisted of epochs of paroxysmal and catastrophic action interposed between periods of comparative tranquility". Especially in the late 19th and early 20th centuries, most geologists took this interpretation to mean that catastrophic events are not important in geologic time; one example of this is the debate over the formation of the Channeled Scablands due to the catastrophic Missoula glacial outburst floods. An important result of this debate and others was the re-clarification that, while the same principles operate in geologic time, catastrophic events that are infrequent on human time-scales can have important consequences in geologic history. Derek Ager has noted that "geologists do not deny uniformitarianism in its true sense, that is to say, of interpreting the past by means of the processes that are seen going on at the present day, so long as we remember that the periodic catastrophe is one of those processes. Those periodic catastrophes make more showing in the stratigraphical record than we have hitherto assumed." Modern geologists do not apply uniformitarianism in the same way as Lyell. They question whether rates of processes were uniform through time and whether only those values measured during the history of geology are to be accepted. The present may not be a long enough key to penetrating the deep lock of the past. Geologic processes may have been active at different rates in the past that humans have not observed. "By force of popularity, uniformity of rate has persisted to our present day. For more than a century, Lyell's rhetoric conflating axiom with hypotheses has descended in unmodified form. Many geologists have been stifled by the belief that proper methodology includes an a priori commitment to gradual change, and by a preference for explaining large-scale phenomena as the concatenation of innumerable tiny changes." The current consensus is that Earth's history is a slow, gradual process punctuated by occasional natural catastrophic events that have affected Earth and its inhabitants. In practice it is reduced from Lyell's conflation, or blending, to simply the two philosophical assumptions. This is also known as the principle of geological actualism, which states that all past geological action was like all present geological action. The principle of actualism is the cornerstone of paleoecology. == Social sciences == Uniformitarianism has also been applied in historical linguistics, where it is considered a foundational principle of the field. Linguist Donald Ringe gives the following definition: If language was normally acquired in the past in the same way as it is today – usually by native acquisition in early childhood – and if it was used in the same ways – to transmit information, to express solidarity with family, friends, and neighbors, to mark one's social position, etc. 
– then it must have had the same general structure and organization in the past as it does today, and it must have changed in the same ways as it does today. The principle is known in linguistics, after William Labov and associates, as the Uniformitarian Principle or Uniformitarian Hypothesis. == See also == Conservation law Noether's theorem Law of universal gravitation Astronomical spectroscopy Cosmological principle History of paleontology Paradigm shift Physical constant Physical cosmology Scientific consensus Time-variation of fundamental constants == Notes == == References == Bowler, Peter J. (2003). Evolution: The History of an Idea (3rd ed.). University of California Press. ISBN 0-520-23693-9. Gordon, B. L. (2013). "In Defense of Uniformitarianism". Perspectives on Science and Christian Faith. 65: 79–86. Gould, S. J. (1965). "Is uniformitarianism necessary?". American Journal of Science. 263 (3): 223–228. Bibcode:1965AmJS..263..223G. doi:10.2475/ajs.263.3.223. Gould, S. J. (1984). "Toward the vindication of punctuational change in catastrophes and earth history". In Berggren, W. A.; Van Couvering, J. A. (eds.). Catastrophes and Earth History. Princeton, New Jersey: Princeton University Press. p. 11. Gould, Stephen J. (1987). Time's Arrow, Time's Cycle: Myth and Metaphor in the Discovery of Geological Time. Cambridge, MA: Harvard University Press. Hooykaas, Reijer (1963). The Principle of Uniformity in Geology, Biology, and Theology. Natural Law and Divine Miracle. London: E.J. Brill. p. 38. Huggett, Richard (1990). Catastrophism: Systems of Earth History. London: Edward Arnold. Simpson, G. G. (1963). "Historical science". In Albritton, C. C. Jr. (ed.). Fabric of geology. Stanford, California: Freeman, Cooper, and Company. pp. 24–48. Pidwirny, Michael; Jones, Scott (1999). "Fundamentals of Physical Geography (2nd ed.). Chapter 10: Introduction to the Lithosphere, Section C: Concept of Uniformitarianism". University of British Columbia, Okanagan. Thomson, 1st Baron Kelvin, William (1866). "The "Doctrine of Uniformity" in Geology Briefly Refuted". Proceedings of the Royal Society of Edinburgh. pp. 512–13. == External links == Uniformitarianism at Physical Geography "Uniformitarianism". Physical Geography. About. Have physical constants changed with time?
Wikipedia/Uniformitarianism_(science)
Materials science is an interdisciplinary field of researching and discovering materials. Materials engineering is an engineering field of finding uses for materials in other fields and industries. The intellectual origins of materials science stem from the Age of Enlightenment, when researchers began to use analytical thinking from chemistry, physics, and engineering to understand ancient, phenomenological observations in metallurgy and mineralogy. Materials science still incorporates elements of physics, chemistry, and engineering. As such, the field was long considered by academic institutions as a sub-field of these related fields. Beginning in the 1940s, materials science began to be more widely recognized as a specific and distinct field of science and engineering, and major technical universities around the world created dedicated schools for its study. Materials scientists emphasize understanding how the history of a material (processing) influences its structure, and thus the material's properties and performance. The understanding of processing–structure–properties relationships is called the materials paradigm. This paradigm is used to advance understanding in a variety of research areas, including nanotechnology, biomaterials, and metallurgy. Materials science is also an important part of forensic engineering and failure analysis – investigating materials, products, structures or components that fail or do not function as intended, causing personal injury or damage to property. Such investigations are key to understanding, for example, the causes of various aviation accidents and incidents. == History == The material of choice of a given era is often a defining point. Phases such as Stone Age, Bronze Age, Iron Age, and Steel Age are historic, if arbitrary, examples. Originally deriving from the manufacture of ceramics and its putative derivative metallurgy, materials science is one of the oldest forms of engineering and applied science. Modern materials science evolved directly from metallurgy, which itself evolved from the use of fire. A major breakthrough in the understanding of materials occurred in the late 19th century, when the American scientist Josiah Willard Gibbs demonstrated that the thermodynamic properties related to atomic structure in various phases are related to the physical properties of a material. Important elements of modern materials science were products of the Space Race; the understanding and engineering of the metallic alloys, and silica and carbon materials, used in building space vehicles enabling the exploration of space. Materials science has driven, and been driven by, the development of revolutionary technologies such as rubbers, plastics, semiconductors, and biomaterials. Before the 1960s (and in some cases decades after), many eventual materials science departments were metallurgy or ceramics engineering departments, reflecting the 19th and early 20th-century emphasis on metals and ceramics. The growth of material science in the United States was catalyzed in part by the Advanced Research Projects Agency, which funded a series of university-hosted laboratories in the early 1960s, "to expand the national program of basic research and training in the materials sciences." In comparison with mechanical engineering, the nascent material science field focused on addressing materials from the macro-level and on the approach that materials are designed on the basis of knowledge of behavior at the microscopic level. 
Due to the expanded knowledge of the link between atomic and molecular processes as well as the overall properties of materials, the design of materials came to be based on specific desired properties. The materials science field has since broadened to include every class of materials, including ceramics, polymers, semiconductors, magnetic materials, biomaterials, and nanomaterials, generally classified into three distinct groups: ceramics, metals, and polymers. A prominent change in materials science during recent decades has been the active use of computer simulations to find new materials, predict properties, and understand phenomena. == Fundamentals == A material is defined as a substance (most often a solid, but other condensed phases can be included) that is intended to be used for certain applications. There are a myriad of materials around us. New and advanced materials that are being developed include nanomaterials, biomaterials, and energy materials, to name a few. The basis of materials science is studying the interplay between the structure of materials, the processing methods to make that material, and the resulting material properties. The complex combination of these produces the performance of a material in a specific application. Many features across many length scales impact material performance, from the constituent chemical elements to the microstructure and the macroscopic features produced by processing. Together with the laws of thermodynamics and kinetics, materials scientists aim to understand and improve materials. === Structure === Structure is one of the most important components of the field of materials science. The very definition of the field holds that it is concerned with the investigation of "the relationships that exist between the structures and properties of materials". Materials science examines the structure of materials from the atomic scale, all the way up to the macro scale. Characterization is the way materials scientists examine the structure of a material. This involves methods such as diffraction with X-rays, electrons or neutrons, and various forms of spectroscopy and chemical analysis such as Raman spectroscopy, energy-dispersive spectroscopy, chromatography, thermal analysis, electron microscope analysis, etc. Structure is studied at the following levels. ==== Atomic structure ==== Atomic structure deals with the atoms of the materials, and how they are arranged to give rise to molecules, crystals, etc. Much of the electrical, magnetic and chemical properties of materials arise from this level of structure. The length scales involved are in angstroms (Å). The chemical bonding and atomic arrangement (crystallography) are fundamental to studying the properties and behavior of any material. ===== Bonding ===== To obtain a full understanding of the material structure and how it relates to its properties, the materials scientist must study how the different atoms, ions and molecules are arranged and bonded to each other. This involves the study and use of quantum chemistry or quantum physics. Solid-state physics, solid-state chemistry and physical chemistry are also involved in the study of bonding and structure. ===== Crystallography ===== Crystallography is the science that examines the arrangement of atoms in crystalline solids. Crystallography is a useful tool for materials scientists. 
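As a small, self-contained illustration of the kind of reasoning crystallography enables (a textbook calculation rather than anything specific to this article), the Python sketch below computes the atomic packing factor of a face-centred cubic unit cell, which contains four atoms and whose cube edge is a = 2√2·r for atoms of radius r.

```python
import math

def fcc_packing_factor() -> float:
    """Atomic packing factor (APF) of a face-centred cubic (FCC) unit cell.

    An FCC cell contains 4 atoms; the atoms touch along the face diagonal,
    so the cube edge is a = 2 * sqrt(2) * r for atomic radius r.
    """
    r = 1.0                                  # the radius cancels out of the ratio
    a = 2 * math.sqrt(2) * r                 # cube edge length
    atoms_per_cell = 4
    volume_of_atoms = atoms_per_cell * (4 / 3) * math.pi * r**3
    return volume_of_atoms / a**3

print(f"FCC packing factor ~ {fcc_packing_factor():.4f}")  # ~0.7405, i.e. ~74% of space filled
```

The same kind of unit-cell bookkeeping underlies theoretical density estimates and the interpretation of diffraction data.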
One of the fundamental concepts regarding the crystal structure of a material is the unit cell, which is the smallest unit of a crystal lattice (space lattice) that repeats to make up the macroscopic crystal structure. Most common structural materials include parallelepiped and hexagonal lattice types. In single crystals, the effects of the crystalline arrangement of atoms are often easy to see macroscopically, because the natural shapes of crystals reflect the atomic structure. Further, physical properties are often controlled by crystalline defects. The understanding of crystal structures is an important prerequisite for understanding crystallographic defects. Examples of crystal defects include dislocations (edge and screw types), vacancies, self-interstitials, and other linear, planar, and three-dimensional defects. Mostly, materials do not occur as a single crystal, but in polycrystalline form, as an aggregate of small crystals or grains with different orientations. Because of this, the powder diffraction method, which uses diffraction patterns of polycrystalline samples with a large number of crystals, plays an important role in structural determination. Most materials have a crystalline structure, but some important materials do not exhibit regular crystal structure. Polymers display varying degrees of crystallinity, and many are completely non-crystalline. Glass, some ceramics, and many natural materials are amorphous, not possessing any long-range order in their atomic arrangements. The study of polymers combines elements of chemical and statistical thermodynamics to give thermodynamic and mechanical descriptions of physical properties. ==== Nanostructure ==== Materials whose atoms and molecules form constituents at the nanoscale (i.e., they form nanostructures) are called nanomaterials. Nanomaterials are the subject of intense research in the materials science community due to the unique properties that they exhibit. Nanostructure deals with objects and structures that are in the 1 – 100 nm range. In many materials, atoms or molecules agglomerate to form objects at the nanoscale. This causes many interesting electrical, magnetic, optical, and mechanical properties. In describing nanostructures, it is necessary to differentiate between the number of dimensions on the nanoscale. Nanotextured surfaces have one dimension on the nanoscale, i.e., only the thickness of the surface of an object is between 0.1 and 100 nm. Nanotubes have two dimensions on the nanoscale, i.e., the diameter of the tube is between 0.1 and 100 nm; its length could be much greater. Finally, spherical nanoparticles have three dimensions on the nanoscale, i.e., the particle is between 0.1 and 100 nm in each spatial dimension. The terms nanoparticles and ultrafine particles (UFP) often are used synonymously although UFP can reach into the micrometre range. The term 'nanostructure' is often used when referring to magnetic technology. Nanoscale structure in biology is often called ultrastructure. ==== Microstructure ==== Microstructure is defined as the structure of a prepared surface or thin foil of material as revealed by a microscope above 25× magnification. It deals with objects from 100 nm to a few cm. 
The microstructure of a material (which can be broadly classified into metallic, polymeric, ceramic and composite) can strongly influence physical properties such as strength, toughness, ductility, hardness, corrosion resistance, high/low temperature behavior, wear resistance, and so on. Most of the traditional materials (such as metals and ceramics) are microstructured. The manufacture of a perfect crystal of a material is physically impossible. For example, any crystalline material will contain defects such as precipitates, grain boundaries (Hall–Petch relationship), vacancies, interstitial atoms or substitutional atoms. The microstructure of materials reveals these larger defects, and advances in simulation have allowed an increased understanding of how defects can be used to enhance material properties. ==== Macrostructure ==== Macrostructure is the appearance of a material on the scale of millimeters to meters; it is the structure of the material as seen with the naked eye. === Properties === Materials exhibit myriad properties, including the following. Mechanical properties, see Strength of materials Chemical properties, see Chemistry Electrical properties, see Electricity Thermal properties, see Thermodynamics Optical properties, see Optics and Photonics Magnetic properties, see Magnetism The properties of a material determine its usability and hence its engineering application. === Processing === Synthesis and processing involve the creation of a material with the desired micro- and nanostructure. A material cannot be used in industry if no economically viable production method for it has been developed. Therefore, developing processing methods for materials that are reasonably effective and cost-efficient is vital to the field of materials science. Different materials require different processing or synthesis methods. For example, the processing of metals has historically defined eras such as the Bronze Age and Iron Age and is studied under the branch of materials science named physical metallurgy. Chemical and physical methods are also used to synthesize other materials such as polymers, ceramics, semiconductors, and thin films. As of the early 21st century, new methods are being developed to synthesize nanomaterials such as graphene. === Thermodynamics === Thermodynamics is concerned with heat and temperature and their relation to energy and work. It defines macroscopic variables, such as internal energy, entropy, and pressure, that partly describe a body of matter or radiation. It states that the behavior of those variables is subject to general constraints common to all materials. These general constraints are expressed in the four laws of thermodynamics. Thermodynamics describes the bulk behavior of the body, not the microscopic behaviors of the very large numbers of its microscopic constituents, such as molecules. The behavior of these microscopic particles is described by, and the laws of thermodynamics are derived from, statistical mechanics. The study of thermodynamics is fundamental to materials science. It forms the foundation to treat general phenomena in materials science and engineering, including chemical reactions, magnetism, polarizability, and elasticity. It explains fundamental tools such as phase diagrams and concepts such as phase equilibrium. === Kinetics === Chemical kinetics is the study of the rates at which systems that are out of equilibrium change under the influence of various forces. 
When applied to materials science, it deals with how a material changes with time (moves from non-equilibrium to equilibrium state) due to application of a certain field. It details the rate of various processes evolving in materials including shape, size, composition and structure. Diffusion is important in the study of kinetics as this is the most common mechanism by which materials undergo change. Kinetics is essential in the processing of materials because, among other things, it details how the microstructure changes with application of heat. == Research == Materials science is a highly active area of research. Together with materials science departments, physics, chemistry, and many engineering departments are involved in materials research. Materials research covers a broad range of topics; the following non-exhaustive list highlights a few important research areas. === Nanomaterials === Nanomaterials describe, in principle, materials of which a single unit is sized (in at least one dimension) between 1 and 1000 nanometers (10⁻⁹ meter), but is usually 1 nm – 100 nm. Nanomaterials research takes a materials science based approach to nanotechnology, using advances in materials metrology and synthesis, which have been developed in support of microfabrication research. Materials with structure at the nanoscale often have unique optical, electronic, or mechanical properties. The field of nanomaterials is loosely organized, like the traditional field of chemistry, into organic (carbon-based) nanomaterials, such as fullerenes, and inorganic nanomaterials based on other elements, such as silicon. Examples of nanomaterials include fullerenes, carbon nanotubes, nanocrystals, etc. === Biomaterials === A biomaterial is any matter, surface, or construct that interacts with biological systems. Biomaterials science encompasses elements of medicine, biology, chemistry, tissue engineering, and materials science. Biomaterials can be derived either from nature or synthesized in a laboratory using a variety of chemical approaches with metallic components, polymers, bioceramics, or composite materials. They are often intended or adapted for medical applications, such as biomedical devices which perform, augment, or replace a natural function. Such functions may be benign, like being used for a heart valve, or may be bioactive with a more interactive functionality such as hydroxylapatite-coated hip implants. Biomaterials are also used every day in dental applications, surgery, and drug delivery. For example, a construct with impregnated pharmaceutical products can be placed into the body, which permits the prolonged release of a drug over an extended period of time. A biomaterial may also be an autograft, allograft or xenograft used as an organ transplant material. === Electronic, optical, and magnetic === Semiconductors, metals, and ceramics are used today to form highly complex systems, such as integrated electronic circuits, optoelectronic devices, and magnetic and optical mass storage media. These materials form the basis of our modern computing world, and hence research into these materials is of vital importance. Semiconductors are a traditional example of these types of materials. They are materials that have properties that are intermediate between conductors and insulators. Their electrical conductivities are very sensitive to the concentration of impurities, which allows the use of doping to achieve desirable electronic properties. Hence, semiconductors form the basis of the traditional computer. 
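To make the doping sensitivity just described more concrete, the Python sketch below estimates the room-temperature conductivity of silicon before and after n-type doping. The numbers are approximate textbook values (intrinsic carrier concentration of roughly 1×10¹⁰ cm⁻³ and an electron mobility of roughly 1400 cm²/(V·s)); the donor concentration of 10¹⁶ cm⁻³ is an illustrative choice, not a value taken from this article.

```python
Q = 1.602e-19   # elementary charge, C
MU_N = 1400.0   # approx. electron mobility in Si at room temperature, cm^2/(V*s)
MU_P = 450.0    # approx. hole mobility in Si at room temperature, cm^2/(V*s)
N_I = 1.0e10    # approx. intrinsic carrier concentration of Si, cm^-3

def conductivity(n_electrons: float, n_holes: float) -> float:
    """Conductivity in (ohm*cm)^-1 given carrier concentrations in cm^-3."""
    return Q * (n_electrons * MU_N + n_holes * MU_P)

sigma_intrinsic = conductivity(N_I, N_I)   # undoped silicon
sigma_doped = conductivity(1.0e16, 0.0)    # n-type doping; holes neglected

print(f"intrinsic Si: ~{sigma_intrinsic:.1e} (ohm*cm)^-1")
print(f"doped Si:     ~{sigma_doped:.2f} (ohm*cm)^-1  (donor density 1e16 cm^-3)")
```

In this rough estimate a donor concentration of only a few parts in ten million raises the conductivity by about six orders of magnitude, which is why doping is such a powerful tool for engineering these electronic materials.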
This field also includes new areas of research such as superconducting materials, spintronics, metamaterials, etc. The study of these materials involves knowledge of materials science and solid-state physics or condensed matter physics. === Computational materials science === With continuing increases in computing power, simulating the behavior of materials has become possible. This enables materials scientists to understand behavior and mechanisms, design new materials, and explain properties formerly poorly understood. Efforts surrounding integrated computational materials engineering are now focusing on combining computational methods with experiments to drastically reduce the time and effort to optimize materials properties for a given application. This involves simulating materials at all length scales, using methods such as density functional theory, molecular dynamics, Monte Carlo, dislocation dynamics, phase field, finite element, and many more. == Industry == Radical materials advances can drive the creation of new products or even new industries, but stable industries also employ materials scientists to make incremental improvements and troubleshoot issues with currently used materials. Industrial applications of materials science include materials design, cost-benefit tradeoffs in industrial production of materials, processing methods (casting, rolling, welding, ion implantation, crystal growth, thin-film deposition, sintering, glassblowing, etc.), and analytic methods (characterization methods such as electron microscopy, X-ray diffraction, calorimetry, nuclear microscopy (HEFIB), Rutherford backscattering, neutron diffraction, small-angle X-ray scattering (SAXS), etc.). Besides material characterization, the material scientist or engineer also deals with extracting materials and converting them into useful forms. Thus ingot casting, foundry methods, blast furnace extraction, and electrolytic extraction are all part of the required knowledge of a materials engineer. Often the presence, absence, or variation of minute quantities of secondary elements and compounds in a bulk material will greatly affect the final properties of the materials produced. For example, steels are classified based on 1/10 and 1/100 weight percentages of the carbon and other alloying elements they contain. Thus, the extracting and purifying methods used to extract iron in a blast furnace can affect the quality of steel that is produced. Solid materials are generally grouped into three basic classifications: ceramics, metals, and polymers. This broad classification is based on the empirical makeup and atomic structure of the solid materials, and most solids fall into one of these broad categories. An item that is often made from each of these materials types is the beverage container. The material types used for beverage containers accordingly provide different advantages and disadvantages, depending on the material used. Ceramic (glass) containers are optically transparent, impervious to the passage of carbon dioxide, relatively inexpensive, and are easily recycled, but are also heavy and fracture easily. Metal (aluminum alloy) is relatively strong, is a good barrier to the diffusion of carbon dioxide, and is easily recycled. However, the cans are opaque, expensive to produce, and are easily dented and punctured. 
Polymers (polyethylene plastic) are relatively strong, can be optically transparent, are inexpensive and lightweight, and can be recyclable, but are not as impervious to the passage of carbon dioxide as aluminum and glass. === Ceramics and glasses === Another application of materials science is the study of ceramics and glasses, typically the most brittle materials with industrial relevance. Many ceramics and glasses exhibit covalent or ionic-covalent bonding with SiO2 (silica) as a fundamental building block. Ceramics – not to be confused with raw, unfired clay – are usually seen in crystalline form. The vast majority of commercial glasses contain a metal oxide fused with silica. At the high temperatures used to prepare glass, the material is a viscous liquid which solidifies into a disordered state upon cooling. Windowpanes and eyeglasses are important examples. Fibers of glass are also used for long-range telecommunication and optical transmission. Scratch resistant Corning Gorilla Glass is a well-known example of the application of materials science to drastically improve the properties of common components. Engineering ceramics are known for their stiffness and stability under high temperatures, compression and electrical stress. Alumina, silicon carbide, and tungsten carbide are made from a fine powder of their constituents in a process of sintering with a binder. Hot pressing provides higher density material. Chemical vapor deposition can place a film of a ceramic on another material. Cermets are ceramic particles containing some metals. The wear resistance of tools is derived from cemented carbides with the metal phase of cobalt and nickel typically added to modify properties. Ceramics can be significantly strengthened for engineering applications using the principle of crack deflection. This process involves the strategic addition of second-phase particles within a ceramic matrix, optimizing their shape, size, and distribution to direct and control crack propagation. This approach enhances fracture toughness, paving the way for the creation of advanced, high-performance ceramics in various industries. === Composites === Another application of materials science in industry is making composite materials. These are structured materials composed of two or more macroscopic phases. Applications range from structural elements such as steel-reinforced concrete, to the thermal insulating tiles, which play a key and integral role in NASA's Space Shuttle thermal protection system, which is used to protect the surface of the shuttle from the heat of re-entry into the Earth's atmosphere. One example is reinforced Carbon-Carbon (RCC), the light gray material, which withstands re-entry temperatures up to 1,510 °C (2,750 °F) and protects the Space Shuttle's wing leading edges and nose cap. RCC is a laminated composite material made from graphite rayon cloth and impregnated with a phenolic resin. After curing at high temperature in an autoclave, the laminate is pyrolized to convert the resin to carbon, impregnated with furfuryl alcohol in a vacuum chamber, and cured-pyrolized to convert the furfuryl alcohol to carbon. To provide oxidation resistance for reusability, the outer layers of the RCC are converted to silicon carbide. Other examples can be seen in the "plastic" casings of television sets, cell-phones and so on. 
These plastic casings are usually a composite material made up of a thermoplastic matrix such as acrylonitrile butadiene styrene (ABS) in which calcium carbonate chalk, talc, glass fibers or carbon fibers have been added for added strength, bulk, or electrostatic dispersion. These additions may be termed reinforcing fibers, or dispersants, depending on their purpose. === Polymers === Polymers are chemical compounds made up of a large number of identical components linked together like chains. Polymers are the raw materials (the resins) used to make what are commonly called plastics and rubber. Plastics and rubber are the final product, created after one or more polymers or additives have been added to a resin during processing, which is then shaped into a final form. Plastics in former and in current widespread use include polyethylene, polypropylene, polyvinyl chloride (PVC), polystyrene, nylons, polyesters, acrylics, polyurethanes, and polycarbonates. Rubbers include natural rubber, styrene-butadiene rubber, chloroprene, and butadiene rubber. Plastics are generally classified as commodity, specialty and engineering plastics. Polyvinyl chloride (PVC) is widely used, inexpensive, and annual production quantities are large. It lends itself to a vast array of applications, from artificial leather to electrical insulation and cabling, packaging, and containers. Its fabrication and processing are simple and well-established. The versatility of PVC is due to the wide range of plasticisers and other additives that it accepts. The term "additives" in polymer science refers to the chemicals and compounds added to the polymer base to modify its material properties. Polycarbonate would normally be considered an engineering plastic (other examples include PEEK, ABS). Such plastics are valued for their superior strengths and other special material properties. They are usually not used for disposable applications, unlike commodity plastics. Specialty plastics are materials with unique characteristics, such as ultra-high strength, electrical conductivity, electro-fluorescence, high thermal stability, etc. The dividing lines between the various types of plastics are not based on material but rather on their properties and applications. For example, polyethylene (PE) is a cheap, low friction polymer commonly used to make disposable bags for shopping and trash, and is considered a commodity plastic, whereas medium-density polyethylene (MDPE) is used for underground gas and water pipes, and another variety called ultra-high-molecular-weight polyethylene (UHMWPE) is an engineering plastic which is used extensively as the glide rails for industrial equipment and the low-friction socket in implanted hip joints. === Metal alloys === The alloys of iron (steel, stainless steel, cast iron, tool steel, alloy steels) make up the largest proportion of metals today both by quantity and commercial value. Iron alloyed with various proportions of carbon gives low, mid and high carbon steels. An iron-carbon alloy is only considered steel if the carbon level is between 0.01% and 2.00% by weight. For steels, the hardness and tensile strength of the steel is related to the amount of carbon present, with increasing carbon levels also leading to lower ductility and toughness. Heat treatment processes such as quenching and tempering can significantly change these properties, however. In contrast, certain metal alloys exhibit unique properties where their size and density remain unchanged across a range of temperatures. 
Cast iron is defined as an iron–carbon alloy with more than 2.00%, but less than 6.67% carbon. Stainless steel is defined as a regular steel alloy with greater than 10% by weight alloying content of chromium. Nickel and molybdenum are typically also added in stainless steels. Other significant metallic alloys are those of aluminium, titanium, copper and magnesium. Copper alloys have been known for a long time (since the Bronze Age), while the alloys of the other three metals have been developed relatively recently. Due to the chemical reactivity of these metals, the electrolytic extraction processes required were only developed relatively recently. The alloys of aluminium, titanium and magnesium are also known and valued for their high strength-to-weight ratios and, in the case of magnesium, their ability to provide electromagnetic shielding. These materials are ideal for situations where high strength-to-weight ratios are more important than bulk cost, such as in the aerospace industry and certain automotive engineering applications. === Semiconductors === A semiconductor is a material that has a resistivity between that of a conductor and that of an insulator. Modern day electronics run on semiconductors, and the industry had an estimated US$530 billion market in 2021. A semiconductor's electronic properties can be greatly altered through intentionally introducing impurities in a process referred to as doping. Semiconductor materials are used to build diodes, transistors, light-emitting diodes (LEDs), and analog and digital electric circuits, among their many uses. Semiconductor devices have replaced thermionic devices like vacuum tubes in most applications. Semiconductor devices are manufactured both as single discrete devices and as integrated circuits (ICs), which consist of a number—from a few to millions—of devices manufactured and interconnected on a single semiconductor substrate. Of all the semiconductors in use today, silicon makes up the largest portion both by quantity and commercial value. Monocrystalline silicon is used to produce wafers used in the semiconductor and electronics industry. Gallium arsenide (GaAs) is the second most popular semiconductor used. Due to its higher electron mobility and saturation velocity compared to silicon, it is a material of choice for high-speed electronics applications. These superior properties are compelling reasons to use GaAs circuitry in mobile phones, satellite communications, microwave point-to-point links and higher-frequency radar systems. Other semiconductor materials, including germanium, silicon carbide, and gallium nitride, have various applications. == Relation with other fields == Materials science evolved, starting in the 1950s, because it was recognized that to create, discover and design new materials, one had to approach it in a unified manner. Thus, materials science and engineering emerged in many ways: renaming and/or combining existing metallurgy and ceramics engineering departments; splitting from existing solid state physics research (itself growing into condensed matter physics); pulling in relatively new polymer engineering and polymer science; recombining from the previous, as well as chemistry, chemical engineering, mechanical engineering, and electrical engineering; and more. The field of materials science and engineering is important both from a scientific perspective and from an applications perspective.
Materials are of the utmost importance for engineers (and those in other applied fields) because usage of the appropriate materials is crucial when designing systems. As a result, materials science is an increasingly important part of an engineer's education. Materials physics is the use of physics to describe the physical properties of materials. It is a synthesis of physical sciences such as chemistry, solid mechanics, solid state physics, and materials science. Materials physics is considered a subset of condensed matter physics and applies fundamental condensed matter concepts to complex multiphase media, including materials of technological interest. Current fields that materials physicists work in include electronic, optical, and magnetic materials, novel materials and structures, quantum phenomena in materials, nonequilibrium physics, and soft condensed matter physics. New experimental and computational tools are constantly improving how materials systems are modeled and studied, and these tools are themselves areas in which materials physicists work. The field is inherently interdisciplinary, and materials scientists or engineers must be aware of and make use of the methods of the physicist, chemist and engineer. In turn, fields such as life sciences and archaeology can inspire the development of new materials and processes, in bioinspired and paleoinspired approaches. Thus, there remain close relationships with these fields. Conversely, many physicists, chemists and engineers find themselves working in materials science due to the significant overlaps between the fields. == Emerging technologies == == Subdisciplines == The main branches of materials science stem from the four main classes of materials: ceramics, metals, polymers and composites. Ceramic engineering Metallurgy Polymer science and engineering Composite engineering There are additionally broadly applicable, materials-independent endeavors. Materials characterization (spectroscopy, microscopy, diffraction) Computational materials science Materials informatics and selection There are also relatively broad focuses across materials on specific phenomena and techniques. Crystallography Surface science Tribology Microelectronics == Related or interdisciplinary fields == Condensed matter physics, solid-state physics and solid-state chemistry Nanotechnology Mineralogy Supramolecular chemistry Biomaterials science == Professional societies == American Ceramic Society ASM International Association for Iron and Steel Technology Materials Research Society The Minerals, Metals & Materials Society == See also == == References == === Citations === === Bibliography === Ashby, Michael; Hugh Shercliff; David Cebon (2007). Materials: engineering, science, processing and design (1st ed.). Butterworth-Heinemann. ISBN 978-0-7506-8391-3. Askeland, Donald R.; Pradeep P. Phulé (2005). The Science & Engineering of Materials (5th ed.). Thomson-Engineering. ISBN 978-0-534-55396-8. Callister, Jr., William D. (2000). Materials Science and Engineering – An Introduction (5th ed.). John Wiley and Sons. ISBN 978-0-471-32013-5. Eberhart, Mark (2003). Why Things Break: Understanding the World by the Way It Comes Apart. Harmony. ISBN 978-1-4000-4760-4. Gaskell, David R. (1995). Introduction to the Thermodynamics of Materials (4th ed.). Taylor and Francis Publishing. ISBN 978-1-56032-992-3. González-Viñas, W. & Mancini, H.L. (2004). An Introduction to Materials Science. Princeton University Press. ISBN 978-0-691-07097-1. Gordon, James Edward (1984).
The New Science of Strong Materials or Why You Don't Fall Through the Floor (eissue ed.). Princeton University Press. ISBN 978-0-691-02380-9. Mathews, F.L. & Rawlings, R.D. (1999). Composite Materials: Engineering and Science. Boca Raton: CRC Press. ISBN 978-0-8493-0621-1. Lewis, P.R.; Reynolds, K. & Gagg, C. (2003). Forensic Materials Engineering: Case Studies. Boca Raton: CRC Press. ISBN 9780849311826. Wachtman, John B. (1996). Mechanical Properties of Ceramics. New York: Wiley-Interscience, John Wiley & Son's. ISBN 978-0-471-13316-2. Walker, P., ed. (1993). Chambers Dictionary of Materials Science and Technology. Chambers Publishing. ISBN 978-0-550-13249-9. Mahajan, S. (2015). "The role of materials science in the evolution of microelectronics". MRS Bulletin. 12 (40): 1079–1088. Bibcode:2015MRSBu..40.1079M. doi:10.1557/mrs.2015.276. == Further reading == Timeline of Materials Science Archived 2011-07-27 at the Wayback Machine at The Minerals, Metals & Materials Society (TMS) – accessed March 2007 Burns, G.; Glazer, A.M. (1990). Space Groups for Scientists and Engineers (2nd ed.). Boston: Academic Press, Inc. ISBN 978-0-12-145761-7. Cullity, B.D. (1978). Elements of X-Ray Diffraction (2nd ed.). Reading, Massachusetts: Addison-Wesley Publishing Company. ISBN 978-0-534-55396-8. Giacovazzo, C; Monaco HL; Viterbo D; Scordari F; Gilli G; Zanotti G; Catti M (1992). Fundamentals of Crystallography. Oxford: Oxford University Press. ISBN 978-0-19-855578-0. Green, D.J.; Hannink, R.; Swain, M.V. (1989). Transformation Toughening of Ceramics. Boca Raton: CRC Press. ISBN 978-0-8493-6594-2. Lovesey, S. W. (1984). Theory of Neutron Scattering from Condensed Matter; Volume 1: Neutron Scattering. Oxford: Clarendon Press. ISBN 978-0-19-852015-3. Lovesey, S. W. (1984). Theory of Neutron Scattering from Condensed Matter; Volume 2: Condensed Matter. Oxford: Clarendon Press. ISBN 978-0-19-852017-7. O'Keeffe, M.; Hyde, B.G. (1996). "Crystal Structures; I. Patterns and Symmetry". Zeitschrift für Kristallographie – Crystalline Materials. 212 (12). Washington, DC: Mineralogical Society of America, Monograph Series: 899. Bibcode:1997ZK....212..899K. doi:10.1524/zkri.1997.212.12.899. ISBN 978-0-939950-40-9. Squires, G.L. (1996). Introduction to the Theory of Thermal Neutron Scattering (2nd ed.). Mineola, New York: Dover Publications Inc. ISBN 978-0-486-69447-4. Young, R.A., ed. (1993). The Rietveld Method. Oxford: Oxford University Press & International Union of Crystallography. ISBN 978-0-19-855577-3. == External links == MS&T conference organized by the main materials societies MIT OpenCourseWare for MSE
Wikipedia/Materials_engineering
A quantum mechanical system or particle that is bound—that is, confined spatially—can only take on certain discrete values of energy, called energy levels. This contrasts with classical particles, which can have any amount of energy. The term is commonly used for the energy levels of the electrons in atoms, ions, or molecules, which are bound by the electric field of the nucleus, but can also refer to energy levels of nuclei or vibrational or rotational energy levels in molecules. The energy spectrum of a system with such discrete energy levels is said to be quantized. In chemistry and atomic physics, an electron shell, or principal energy level, may be thought of as the orbit of one or more electrons around an atom's nucleus. The closest shell to the nucleus is called the "1 shell" (also called "K shell"), followed by the "2 shell" (or "L shell"), then the "3 shell" (or "M shell"), and so on further and further from the nucleus. The shells correspond with the principal quantum numbers (n = 1, 2, 3, 4, ...) or are labeled alphabetically with letters used in the X-ray notation (K, L, M, N, ...). Each shell can contain only a fixed number of electrons: The first shell can hold up to two electrons, the second shell can hold up to eight (2 + 6) electrons, the third shell can hold up to 18 (2 + 6 + 10) and so on. The general formula is that the nth shell can in principle hold up to 2n² electrons. Since electrons are electrically attracted to the nucleus, an atom's electrons will generally occupy outer shells only if the more inner shells have already been completely filled by other electrons. However, this is not a strict requirement: atoms may have two or even three incomplete outer shells. (See Madelung rule for more details.) For an explanation of why electrons exist in these shells see electron configuration. If the potential energy is set to zero at infinite distance from the atomic nucleus or molecule, the usual convention, then bound electron states have negative potential energy. If an atom, ion, or molecule is at the lowest possible energy level, it and its electrons are said to be in the ground state. If it is at a higher energy level, it is said to be excited, or any electrons that have higher energy than the ground state are excited. An energy level is regarded as degenerate if there is more than one measurable quantum mechanical state associated with it. == Explanation == Quantized energy levels result from the wave behavior of particles, which gives a relationship between a particle's energy and its wavelength. For a confined particle such as an electron in an atom, the wave functions that have well defined energies have the form of a standing wave. States having well-defined energies are called stationary states because they are the states that do not change in time. Informally, these states correspond to a whole number of wavelengths of the wavefunction along a closed path (a path that ends where it started), such as a circular orbit around an atom, where the number of wavelengths gives the type of atomic orbital (0 for s-orbitals, 1 for p-orbitals and so on). Elementary examples that show mathematically how energy levels come about are the particle in a box and the quantum harmonic oscillator. Any superposition (linear combination) of energy states is also a quantum state, but such states change with time and do not have well-defined energies.
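As a concrete numerical companion to the elementary examples mentioned above, the sketch below evaluates the textbook particle-in-a-box levels E_n = n²h²/(8mL²) for an electron, together with the 2n² shell-capacity rule; the 1 nm box width is an arbitrary illustrative choice, not a value fixed by the text.

```python
H = 6.62607015e-34      # Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # joules per electronvolt

def particle_in_a_box_energy(n: int, length_m: float, mass_kg: float = M_E) -> float:
    """Energy of level n (n = 1, 2, 3, ...) of a particle in a 1-D box, in joules."""
    return n**2 * H**2 / (8 * mass_kg * length_m**2)

def shell_capacity(n: int) -> int:
    """Maximum number of electrons in the nth shell (the 2n^2 rule)."""
    return 2 * n**2

if __name__ == "__main__":
    box = 1e-9  # 1 nm wide box -- an arbitrary illustrative width
    for n in (1, 2, 3):
        e_ev = particle_in_a_box_energy(n, box) / EV
        print(f"particle-in-a-box level n={n}: {e_ev:.3f} eV")
    print([shell_capacity(n) for n in (1, 2, 3, 4)])  # -> [2, 8, 18, 32]
```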
A measurement of the energy results in the collapse of the wavefunction, which results in a new state that consists of just a single energy state. Measurement of the possible energy levels of an object is called spectroscopy. == History == The first evidence of quantization in atoms was the observation of spectral lines in light from the sun in the early 1800s by Joseph von Fraunhofer and William Hyde Wollaston. The notion of energy levels was proposed in 1913 by Danish physicist Niels Bohr in the Bohr theory of the atom. The modern quantum mechanical theory giving an explanation of these energy levels in terms of the Schrödinger equation was advanced by Erwin Schrödinger and Werner Heisenberg in 1926. == Atoms == === Intrinsic energy levels === In the formulas below for the energy of electrons at various levels in an atom, the zero point for energy is set when the electron in question has completely left the atom; i.e. when the electron's principal quantum number n = ∞. When the electron is bound to the atom in any closer value of n, the electron's energy is lower and is considered negative. ==== Orbital state energy level: atom/ion with nucleus + one electron ==== Assume there is one electron in a given atomic orbital in a hydrogen-like atom (ion). The energy of its state is mainly determined by the electrostatic interaction of the (negative) electron with the (positive) nucleus. The energy levels of an electron around a nucleus are given by: E n = − h c R ∞ Z 2 n 2 {\displaystyle E_{n}=-hcR_{\infty }{\frac {Z^{2}}{n^{2}}}} (typically between 1 eV and 10³ eV), where R∞ is the Rydberg constant, Z is the atomic number, n is the principal quantum number, h is the Planck constant, and c is the speed of light. For hydrogen-like atoms (ions) only, the Rydberg levels depend only on the principal quantum number n. This equation is obtained from combining the Rydberg formula for any hydrogen-like element (shown below) with E = hν = hc / λ assuming that the principal quantum number n above = n1 in the Rydberg formula and n2 = ∞ (principal quantum number of the energy level the electron descends from, when emitting a photon). The Rydberg formula was derived from empirical spectroscopic emission data. 1 λ = R Z 2 ( 1 n 1 2 − 1 n 2 2 ) {\displaystyle {\frac {1}{\lambda }}=RZ^{2}\left({\frac {1}{n_{1}^{2}}}-{\frac {1}{n_{2}^{2}}}\right)} An equivalent formula can be derived quantum mechanically from the time-independent Schrödinger equation with a kinetic energy Hamiltonian operator using a wave function as an eigenfunction to obtain the energy levels as eigenvalues, but the Rydberg constant would be replaced by other fundamental physics constants. ==== Electron–electron interactions in atoms ==== If there is more than one electron around the atom, electron–electron interactions raise the energy level. These interactions are often neglected if the spatial overlap of the electron wavefunctions is low. For multi-electron atoms, interactions between electrons cause the preceding equation to be no longer accurate as stated simply with Z as the atomic number. A simple (though not complete) way to understand this is as a shielding effect, where the outer electrons see an effective nucleus of reduced charge, since the inner electrons are bound tightly to the nucleus and partially cancel its charge. This leads to an approximate correction where Z is substituted with an effective nuclear charge symbolized as Zeff that depends strongly on the principal quantum number.
E n , ℓ = − h c R ∞ Z e f f 2 n 2 {\displaystyle E_{n,\ell }=-hcR_{\infty }{\frac {{Z_{\rm {eff}}}^{2}}{n^{2}}}} In such cases, the orbital types (determined by the azimuthal quantum number ℓ) as well as their levels within the atom affect Zeff and therefore also affect the various atomic electron energy levels. The Aufbau principle of filling an atom with electrons for an electron configuration takes these differing energy levels into account. For filling an atom with electrons in the ground state, the lowest energy levels are filled first, consistent with the Pauli exclusion principle, the Aufbau principle, and Hund's rule. ==== Fine structure splitting ==== Fine structure arises from relativistic kinetic energy corrections, spin–orbit coupling (an electrodynamic interaction between the electron's spin and motion and the nucleus's electric field) and the Darwin term (contact term interaction of s shell electrons inside the nucleus). These affect the levels by a typical order of magnitude of 10⁻³ eV. ==== Hyperfine structure ==== This even finer structure is due to electron–nucleus spin–spin interaction, resulting in a change in the energy levels by a typical order of magnitude of 10⁻⁴ eV. === Energy levels due to external fields === ==== Zeeman effect ==== There is an interaction energy associated with the magnetic dipole moment, μL, arising from the electronic orbital angular momentum, L, given by U = − μ L ⋅ B {\displaystyle U=-{\boldsymbol {\mu }}_{L}\cdot \mathbf {B} } with − μ L = e ℏ 2 m L = μ B L {\displaystyle -{\boldsymbol {\mu }}_{L}={\dfrac {e\hbar }{2m}}\mathbf {L} =\mu _{B}\mathbf {L} } . Additionally, the magnetic moment arising from the electron spin must be taken into account: due to relativistic effects (Dirac equation), there is a magnetic moment, μS, arising from the electron spin − μ S = − μ B g S S {\displaystyle -{\boldsymbol {\mu }}_{S}=-\mu _{\text{B}}g_{S}\mathbf {S} } , with gS the electron-spin g-factor (about 2), resulting in a total magnetic moment, μ, μ = μ L + μ S {\displaystyle {\boldsymbol {\mu }}={\boldsymbol {\mu }}_{L}+{\boldsymbol {\mu }}_{S}} . The interaction energy therefore becomes U B = − μ ⋅ B = μ B B ( M L + g S M S ) {\displaystyle U_{B}=-{\boldsymbol {\mu }}\cdot \mathbf {B} =\mu _{\text{B}}B(M_{L}+g_{S}M_{S})} . ==== Stark effect ==== == Molecules == Chemical bonds between atoms in a molecule form because they make the situation more stable for the involved atoms, which generally means the total energy of the involved atoms in the molecule is lower than if the atoms were not so bonded. As separate atoms approach each other to covalently bond, their orbitals affect each other's energy levels to form bonding and antibonding molecular orbitals. The energy level of the bonding orbitals is lower, and the energy level of the antibonding orbitals is higher. For the bond in the molecule to be stable, the covalent bonding electrons occupy the lower energy bonding orbital, which may be signified by such symbols as σ or π depending on the situation. Corresponding anti-bonding orbitals can be signified by adding an asterisk to get σ* or π* orbitals. A non-bonding orbital in a molecule is an orbital with electrons in outer shells which do not participate in bonding, and its energy level is the same as that of the constituent atom. Such orbitals can be designated as n orbitals. The electrons in an n orbital are typically lone pairs. In polyatomic molecules, different vibrational and rotational energy levels are also involved.
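To give a numerical feel for the rotational levels just mentioned, the following sketch evaluates the standard rigid-rotor spectrum E_J = (ħ²/2I)·J(J+1) for a diatomic molecule. The rigid-rotor formula and the illustrative bond length and isotope masses (loosely based on carbon monoxide) are standard textbook assumptions, not values taken from the text.

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
AMU = 1.66053906660e-27  # atomic mass unit, kg

def rigid_rotor_levels(m1_amu, m2_amu, bond_length_m, j_max):
    """Rotational energies E_J = (hbar^2 / 2I) * J(J+1) of a rigid diatomic rotor, in joules."""
    mu = (m1_amu * m2_amu) / (m1_amu + m2_amu) * AMU   # reduced mass
    inertia = mu * bond_length_m**2                    # moment of inertia I = mu * r^2
    return [HBAR**2 / (2 * inertia) * j * (j + 1) for j in range(j_max + 1)]

if __name__ == "__main__":
    # Illustrative values roughly appropriate for 12C16O: bond length ~1.13e-10 m.
    levels = rigid_rotor_levels(12.0, 16.0, 1.13e-10, j_max=3)
    h = 2 * math.pi * HBAR
    for j in range(1, 4):
        freq_ghz = (levels[j] - levels[j - 1]) / h / 1e9
        print(f"J={j} -> J-1 transition: ~{freq_ghz:.0f} GHz")
```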
Roughly speaking, a molecular energy state (i.e., an eigenstate of the molecular Hamiltonian) is the sum of the electronic, vibrational, rotational, nuclear, and translational components, such that: E = E electronic + E vibrational + E rotational + E nuclear + E translational {\displaystyle E=E_{\text{electronic}}+E_{\text{vibrational}}+E_{\text{rotational}}+E_{\text{nuclear}}+E_{\text{translational}}} where Eelectronic is an eigenvalue of the electronic molecular Hamiltonian (the value of the potential energy surface) at the equilibrium geometry of the molecule. The molecular energy levels are labelled by the molecular term symbols. The specific energies of these components vary with the specific energy state and the substance. === Energy level diagrams === There are various types of energy level diagrams for bonds between atoms in a molecule. Examples include molecular orbital diagrams, Jablonski diagrams, and Franck–Condon diagrams. == Energy level transitions == Electrons in atoms and molecules can change (make transitions in) energy levels by emitting or absorbing a photon (of electromagnetic radiation), whose energy must be exactly equal to the energy difference between the two levels. Electrons can also be completely removed from a chemical species such as an atom, molecule, or ion. Complete removal of an electron from an atom can be a form of ionization, which is effectively moving the electron out to an orbital with an infinite principal quantum number, in effect so far away as to have practically no further effect on the remaining atom (ion). For various types of atoms, there are 1st, 2nd, 3rd, etc. ionization energies for removing the 1st, then the 2nd, then the 3rd, etc. of the highest energy electrons, respectively, from the atom originally in the ground state. Corresponding quantities of energy can also be released, sometimes in the form of photon energy, when electrons are added to positively charged ions or sometimes atoms. Molecules can also undergo transitions in their vibrational or rotational energy levels. Energy level transitions can also be nonradiative, meaning emission or absorption of a photon is not involved. If an atom, ion, or molecule is at the lowest possible energy level, it and its electrons are said to be in the ground state. If it is at a higher energy level, it is said to be excited, or any electrons that have higher energy than the ground state are excited. Such a species can be excited to a higher energy level by absorbing a photon whose energy is equal to the energy difference between the levels. Conversely, an excited species can go to a lower energy level by spontaneously emitting a photon whose energy is equal to the energy difference. A photon's energy is equal to the Planck constant (h) times its frequency (f) and thus is proportional to its frequency, or inversely to its wavelength (λ). ΔE = hf = hc / λ, since c, the speed of light, equals fλ. Correspondingly, many kinds of spectroscopy are based on detecting the frequency or wavelength of the emitted or absorbed photons to provide information on the material analyzed, including information on the energy levels and electronic structure of materials obtained by analyzing the spectrum. An asterisk is commonly used to designate an excited state.
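As a worked illustration of the ΔE = hf = hc/λ relation above, the short sketch below converts the energy difference of a hydrogen-like transition, obtained from the Rydberg formula quoted earlier, into the wavelength, frequency and energy of the emitted photon; the choice of the hydrogen n = 2 → 1 transition is only an example.

```python
H = 6.62607015e-34        # Planck constant, J*s
C = 2.99792458e8          # speed of light, m/s
RYDBERG = 1.0973731568e7  # Rydberg constant R_infinity, 1/m

def transition_wavelength(n_lower: int, n_upper: int, z: int = 1) -> float:
    """Wavelength (m) of the photon emitted in an n_upper -> n_lower transition
    of a hydrogen-like atom, via 1/lambda = R Z^2 (1/n1^2 - 1/n2^2)."""
    inv_lambda = RYDBERG * z**2 * (1 / n_lower**2 - 1 / n_upper**2)
    return 1.0 / inv_lambda

if __name__ == "__main__":
    lam = transition_wavelength(1, 2)          # hydrogen Lyman-alpha, ~121.5 nm
    delta_e = H * C / lam                      # Delta E = h f = h c / lambda
    print(f"wavelength: {lam*1e9:.1f} nm, frequency: {C/lam:.3e} Hz, "
          f"energy: {delta_e/1.602176634e-19:.2f} eV")
```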
An electron transition in a molecule's bond from a ground state to an excited state may have a designation such as σ → σ*, π → π*, or n → π* meaning excitation of an electron from a σ bonding to a σ antibonding orbital, from a π bonding to a π antibonding orbital, or from an n non-bonding to a π antibonding orbital. Reverse electron transitions, which return all these types of excited molecules to their ground states, are also possible and can be designated as σ* → σ, π* → π, or π* → n. A transition in an energy level of an electron in a molecule may be combined with a vibrational transition and called a vibronic transition. A vibrational and rotational transition may be combined by rovibrational coupling. In rovibronic coupling, electron transitions are simultaneously combined with both vibrational and rotational transitions. Photons involved in transitions may have energy of various ranges in the electromagnetic spectrum, such as X-ray, ultraviolet, visible light, infrared, or microwave radiation, depending on the type of transition. In a very general way, energy level differences between electronic states are larger, differences between vibrational levels are intermediate, and differences between rotational levels are smaller, although there can be overlap. Translational energy levels are practically continuous and can be calculated as kinetic energy using classical mechanics. Higher temperature causes fluid atoms and molecules to move faster, increasing their translational energy, and thermally excites molecules to higher average amplitudes of vibrational and rotational modes (excites the molecules to higher internal energy levels). This means that as temperature rises, translational, vibrational, and rotational contributions to molecular heat capacity let molecules absorb heat and hold more internal energy. Conduction of heat typically occurs as molecules or atoms collide, transferring heat between each other. At even higher temperatures, electrons can be thermally excited to higher energy orbitals in atoms or molecules. A subsequent drop of an electron to a lower energy level can release a photon, causing a possibly coloured glow. An electron further from the nucleus has higher potential energy than an electron closer to the nucleus, thus it is less tightly bound to the nucleus, since its potential energy is negative and its magnitude falls off inversely with distance from the nucleus. == Crystalline materials == Crystalline solids are found to have energy bands, instead of or in addition to energy levels. Electrons can take on any energy within an unfilled band. At first this appears to be an exception to the requirement for energy levels. However, as shown in band theory, energy bands are actually made up of many discrete energy levels which are too close together to resolve. Within a band the number of levels is of the order of the number of atoms in the crystal, so although electrons are actually restricted to these energies, they appear to be able to take on a continuum of values. The important energy levels in a crystal are the top of the valence band, the bottom of the conduction band, the Fermi level, the vacuum level, and the energy levels of any defect states in the crystal. == See also == Perturbation theory (quantum mechanics) Atomic clock Computational chemistry == References ==
Wikipedia/Energy_levels
The Lambda-CDM, Lambda cold dark matter, or ΛCDM model is a mathematical model of the Big Bang theory with three major components: a cosmological constant, denoted by lambda (Λ), associated with dark energy; the postulated cold dark matter, denoted by CDM; ordinary matter. It is the current standard model of Big Bang cosmology, as it is the simplest model that provides a reasonably good account of: the existence and structure of the cosmic microwave background; the large-scale structure in the distribution of galaxies; the observed abundances of hydrogen (including deuterium), helium, and lithium; the accelerating expansion of the universe observed in the light from distant galaxies and supernovae. The model assumes that general relativity is the correct theory of gravity on cosmological scales. It emerged in the late 1990s as a concordance cosmology, after a period when disparate observed properties of the universe appeared mutually inconsistent, and there was no consensus on the makeup of the energy density of the universe. The ΛCDM model has been successful in modeling a broad collection of astronomical observations over decades. Remaining issues challenge the assumptions of the ΛCDM model and have led to many alternative models. == Overview == The ΛCDM model is based on three postulates on the structure of spacetime:: 227  The cosmological principle, that the universe is the same everywhere and in all directions, and that it is expanding, A postulate by Hermann Weyl that the lines of spacetime (geodesics) intersect at only one point, where time along each line can be synchronized; the behavior resembles an expanding perfect fluid,: 175  general relativity that relates the geometry of spacetime to the distribution of matter and energy. This combination greatly simplifies the equations of general relativity into a form called the Friedmann equations. These equations specify the evolution of the scale factor of the universe in terms of the pressure and density of a perfect fluid. The evolving density is composed of different kinds of energy and matter, each with its own role in affecting the scale factor.: 7  For example, a model might include baryons, photons, neutrinos, and dark matter.: 25.1.1  These component densities become parameters extracted when the model is constrained to match astrophysical observations. The model aims to describe the observable universe from approximately 0.1 s to the present.: 605  The most accurate observations which are sensitive to the component densities are consequences of statistical inhomogeneity called "perturbations" in the early universe. Since the Friedmann equations assume homogeneity, additional theory must be added before comparison to experiments. Inflation is a simple model producing perturbations by postulating an extremely rapid expansion early in the universe that separates quantum fluctuations before they can equilibrate. The perturbations are characterized by additional parameters also determined by matching observations.: 25.1.2  Finally, the light which will become astronomical observations must pass through the universe. The latter part of that journey will pass through ionized space, where the electrons can scatter the light, altering the anisotropies. 
This effect is characterized by one additional parameter.: 25.1.3  The ΛCDM model includes an expansion of the spatial metric that is well documented, both as the redshift of prominent spectral absorption or emission lines in the light from distant galaxies, and as the time dilation in the light decay of supernova luminosity curves. Both effects are attributed to a Doppler shift in electromagnetic radiation as it travels across expanding space. Although this expansion increases the distance between objects that are not under shared gravitational influence, it does not increase the size of the objects (e.g. galaxies) in space. Also, since it originates from ordinary general relativity, it, like general relativity, allows for distant galaxies to recede from each other at speeds greater than the speed of light; local expansion is less than the speed of light, but expansion summed across great distances can collectively exceed the speed of light. The letter Λ (lambda) represents the cosmological constant, which is associated with a vacuum energy or dark energy in empty space that is used to explain the contemporary accelerating expansion of space against the attractive effects of gravity. A cosmological constant has negative pressure, p = − ρ c 2 {\displaystyle p=-\rho c^{2}} , which contributes to the stress–energy tensor that, according to the general theory of relativity, causes accelerating expansion. The fraction of the total energy density of our (flat or almost flat) universe that is dark energy, Ω Λ {\displaystyle \Omega _{\Lambda }} , is estimated to be 0.669 ± 0.038 based on the 2018 Dark Energy Survey results using Type Ia supernovae or 0.6847±0.0073 based on the 2018 release of Planck satellite data, or more than 68.3% (2018 estimate) of the mass–energy density of the universe. Dark matter is postulated in order to account for gravitational effects observed in very large-scale structures (the "non-Keplerian" rotation curves of galaxies; the gravitational lensing of light by galaxy clusters; and the enhanced clustering of galaxies) that cannot be accounted for by the quantity of observed matter. The ΛCDM model proposes specifically cold dark matter, hypothesized as: Non-baryonic: Consists of matter other than protons and neutrons (and electrons, by convention, although electrons are not baryons) Cold: Its velocity is far less than the speed of light at the epoch of radiation–matter equality (thus neutrinos are excluded, being non-baryonic but not cold) Dissipationless: Cannot cool by radiating photons Collisionless: Dark matter particles interact with each other and other particles only through gravity and possibly the weak force Dark matter constitutes about 26.5% of the mass–energy density of the universe. The remaining 4.9% comprises all ordinary matter observed as atoms, chemical elements, gas and plasma, the stuff of which visible planets, stars and galaxies are made. The great majority of ordinary matter in the universe is unseen, since visible stars and gas inside galaxies and clusters account for less than 10% of the ordinary matter contribution to the mass–energy density of the universe. The model includes a single originating event, the "Big Bang", which was not an explosion but the abrupt appearance of expanding spacetime containing radiation at temperatures of around 10¹⁵ K. This was immediately (within 10⁻²⁹ seconds) followed by an exponential expansion of space by a scale multiplier of 10²⁷ or more, known as cosmic inflation.
The early universe remained hot (above 10 000 K) for several hundred thousand years, a state that is detectable as a residual cosmic microwave background, or CMB, a very low-energy radiation emanating from all parts of the sky. The "Big Bang" scenario, with cosmic inflation and standard particle physics, is the only cosmological model consistent with the observed continuing expansion of space, the observed distribution of lighter elements in the universe (hydrogen, helium, and lithium), and the spatial texture of minute irregularities (anisotropies) in the CMB radiation. Cosmic inflation also addresses the "horizon problem" in the CMB; indeed, it seems likely that the universe is larger than the observable particle horizon. == Cosmic expansion history == The expansion of the universe is parameterized by a dimensionless scale factor a = a ( t ) {\displaystyle a=a(t)} (with time t {\displaystyle t} counted from the birth of the universe), defined relative to the present time, so a 0 = a ( t 0 ) = 1 {\displaystyle a_{0}=a(t_{0})=1} ; the usual convention in cosmology is that subscript 0 denotes present-day values, so t 0 {\displaystyle t_{0}} denotes the age of the universe. The scale factor is related to the observed redshift z {\displaystyle z} of the light emitted at time t e m {\displaystyle t_{\mathrm {em} }} by a ( t em ) = 1 1 + z . {\displaystyle a(t_{\text{em}})={\frac {1}{1+z}}\,.} The expansion rate is described by the time-dependent Hubble parameter, H ( t ) {\displaystyle H(t)} , defined as H ( t ) ≡ a ˙ a , {\displaystyle H(t)\equiv {\frac {\dot {a}}{a}},} where a ˙ {\displaystyle {\dot {a}}} is the time-derivative of the scale factor. The first Friedmann equation gives the expansion rate in terms of the matter+radiation density ρ {\displaystyle \rho } , the curvature k {\displaystyle k} , and the cosmological constant Λ {\displaystyle \Lambda } , H 2 = ( a ˙ a ) 2 = 8 π G 3 ρ − k c 2 a 2 + Λ c 2 3 , {\displaystyle H^{2}=\left({\frac {\dot {a}}{a}}\right)^{2}={\frac {8\pi G}{3}}\rho -{\frac {kc^{2}}{a^{2}}}+{\frac {\Lambda c^{2}}{3}},} where, as usual c {\displaystyle c} is the speed of light and G {\displaystyle G} is the gravitational constant. A critical density ρ c r i t {\displaystyle \rho _{\mathrm {crit} }} is the present-day density, which gives zero curvature k {\displaystyle k} , assuming the cosmological constant Λ {\displaystyle \Lambda } is zero, regardless of its actual value. Substituting these conditions to the Friedmann equation gives ρ c r i t = 3 H 0 2 8 π G = 1.878 47 ( 23 ) × 10 − 26 h 2 k g ⋅ m − 3 , {\displaystyle \rho _{\mathrm {crit} }={\frac {3H_{0}^{2}}{8\pi G}}=1.878\;47(23)\times 10^{-26}\;h^{2}\;\mathrm {kg{\cdot }m^{-3}} ,} where h ≡ H 0 / ( 100 k m ⋅ s − 1 ⋅ M p c − 1 ) {\displaystyle h\equiv H_{0}/(100\;\mathrm {km{\cdot }s^{-1}{\cdot }Mpc^{-1}} )} is the reduced Hubble constant. If the cosmological constant were actually zero, the critical density would also mark the dividing line between eventual recollapse of the universe to a Big Crunch, or unlimited expansion. For the Lambda-CDM model with a positive cosmological constant (as observed), the universe is predicted to expand forever regardless of whether the total density is slightly above or below the critical density; though other outcomes are possible in extended models where the dark energy is not constant but actually time-dependent. 
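A quick numerical check of the critical-density expression above: the sketch below evaluates ρ_crit = 3H0²/(8πG) for a few values of the reduced Hubble constant h; the particular h values are illustrative inputs only, not fitted parameters taken from the text.

```python
import math

G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.0857e22     # one megaparsec in metres

def critical_density(h: float) -> float:
    """Critical density rho_crit = 3 H0^2 / (8 pi G) in kg/m^3,
    with H0 = 100 h km/s/Mpc."""
    h0_si = 100.0 * h * 1000.0 / MPC_IN_M   # convert km/s/Mpc to 1/s
    return 3.0 * h0_si**2 / (8.0 * math.pi * G)

if __name__ == "__main__":
    for h in (0.67, 0.70, 0.74):   # illustrative values of the reduced Hubble constant
        print(f"h = {h}: rho_crit ~ {critical_density(h):.3e} kg/m^3")
    # Reproduces the quoted 1.87847e-26 * h^2 kg/m^3 to the precision of the constants used.
```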
The present-day density parameter Ω x {\displaystyle \Omega _{x}} for various species is defined as the dimensionless ratio: 74  Ω x ≡ ρ x ( t = t 0 ) ρ c r i t = 8 π G ρ x ( t = t 0 ) 3 H 0 2 {\displaystyle \Omega _{x}\equiv {\frac {\rho _{x}(t=t_{0})}{\rho _{\mathrm {crit} }}}={\frac {8\pi G\rho _{x}(t=t_{0})}{3H_{0}^{2}}}} where the subscript x {\displaystyle x} is one of b {\displaystyle \mathrm {b} } for baryons, c {\displaystyle \mathrm {c} } for cold dark matter, r a d {\displaystyle \mathrm {rad} } for radiation (photons plus relativistic neutrinos), and Λ {\displaystyle \Lambda } for dark energy. Since the densities of various species scale as different powers of a {\displaystyle a} , e.g. a − 3 {\displaystyle a^{-3}} for matter etc., the Friedmann equation can be conveniently rewritten in terms of the various density parameters as H ( a ) ≡ a ˙ a = H 0 ( Ω c + Ω b ) a − 3 + Ω r a d a − 4 + Ω k a − 2 + Ω Λ a − 3 ( 1 + w ) , {\displaystyle H(a)\equiv {\frac {\dot {a}}{a}}=H_{0}{\sqrt {(\Omega _{\rm {c}}+\Omega _{\rm {b}})a^{-3}+\Omega _{\mathrm {rad} }a^{-4}+\Omega _{k}a^{-2}+\Omega _{\Lambda }a^{-3(1+w)}}},} where w {\displaystyle w} is the equation of state parameter of dark energy, and assuming negligible neutrino mass (significant neutrino mass requires a more complex equation). The various Ω {\displaystyle \Omega } parameters add up to 1 {\displaystyle 1} by construction. In the general case this is integrated by computer to give the expansion history a ( t ) {\displaystyle a(t)} and also observable distance–redshift relations for any chosen values of the cosmological parameters, which can then be compared with observations such as supernovae and baryon acoustic oscillations. In the minimal 6-parameter Lambda-CDM model, it is assumed that curvature Ω k {\displaystyle \Omega _{k}} is zero and w = − 1 {\displaystyle w=-1} , so this simplifies to H ( a ) = H 0 Ω m a − 3 + Ω r a d a − 4 + Ω Λ {\displaystyle H(a)=H_{0}{\sqrt {\Omega _{\rm {m}}a^{-3}+\Omega _{\mathrm {rad} }a^{-4}+\Omega _{\Lambda }}}} Observations show that the radiation density is very small today, Ω rad ∼ 10 − 4 {\displaystyle \Omega _{\text{rad}}\sim 10^{-4}} ; if this term is neglected the above has an analytic solution a ( t ) = ( Ω m / Ω Λ ) 1 / 3 sinh 2 / 3 ⁡ ( t / t Λ ) {\displaystyle a(t)=(\Omega _{\rm {m}}/\Omega _{\Lambda })^{1/3}\,\sinh ^{2/3}(t/t_{\Lambda })} where t Λ ≡ 2 / ( 3 H 0 Ω Λ ) ; {\displaystyle t_{\Lambda }\equiv 2/(3H_{0}{\sqrt {\Omega _{\Lambda }}})\ ;} this is fairly accurate for a > 0.01 {\displaystyle a>0.01} or t > 10 {\displaystyle t>10} million years. Solving for a ( t ) = 1 {\displaystyle a(t)=1} gives the present age of the universe t 0 {\displaystyle t_{0}} in terms of the other parameters. It follows that the transition from decelerating to accelerating expansion (the second derivative a ¨ {\displaystyle {\ddot {a}}} crossing zero) occurred when a = ( Ω m / 2 Ω Λ ) 1 / 3 , {\displaystyle a=(\Omega _{\rm {m}}/2\Omega _{\Lambda })^{1/3},} which evaluates to a ∼ 0.6 {\displaystyle a\sim 0.6} or z ∼ 0.66 {\displaystyle z\sim 0.66} for the best-fit parameters estimated from the Planck spacecraft. == Parameters == Multiple variants of the ΛCDM model are used with some differences in parameters.: 25.1  One such set is outlined in the table below. 
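Before turning to the parameter values themselves, here is a minimal numerical sketch of the expansion-history relations derived above: the analytic a(t) ∝ sinh^(2/3)(t/t_Λ) solution, the present age obtained by solving a(t0) = 1, and the scale factor at which expansion switches from decelerating to accelerating. The round inputs Ω_m = 0.31, Ω_Λ = 0.69 and H0 = 67.7 km/s/Mpc are illustrative values of roughly the right size, not the fitted parameters discussed below.

```python
import math

MPC_IN_KM = 3.0857e19    # one megaparsec in kilometres
GYR_IN_S = 3.156e16      # one gigayear in seconds

def flat_lcdm_history(omega_m: float, omega_lambda: float, h0_km_s_mpc: float):
    """Age of the universe and acceleration-transition epoch for a flat Lambda-CDM
    universe, using the analytic a(t) = (Om/OL)^(1/3) sinh^(2/3)(t/tL) solution
    quoted above (radiation neglected)."""
    h0 = h0_km_s_mpc / MPC_IN_KM                      # H0 in 1/s
    t_lambda = 2.0 / (3.0 * h0 * math.sqrt(omega_lambda))
    # a(t0) = 1  =>  sinh(t0 / tL) = sqrt(OL / Om)
    t0 = t_lambda * math.asinh(math.sqrt(omega_lambda / omega_m))
    a_acc = (omega_m / (2.0 * omega_lambda)) ** (1.0 / 3.0)  # where \ddot{a} = 0
    z_acc = 1.0 / a_acc - 1.0
    return t0 / GYR_IN_S, a_acc, z_acc

if __name__ == "__main__":
    # Illustrative round inputs, not the fitted Planck parameters.
    age_gyr, a_acc, z_acc = flat_lcdm_history(0.31, 0.69, 67.7)
    print(f"age of universe ~ {age_gyr:.1f} Gyr")
    print(f"deceleration -> acceleration at a ~ {a_acc:.2f} (z ~ {z_acc:.2f})")
```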
The Planck collaboration version of the ΛCDM model is based on six parameters: baryon density parameter; dark matter density parameter; scalar spectral index; two parameters related to curvature fluctuation amplitude; and the probability that photons from the early universe will be scattered once en route (called reionization optical depth). Six is the smallest number of parameters needed to give an acceptable fit to the observations; other possible parameters are fixed at "natural" values, e.g. total density parameter = 1.00, dark energy equation of state = −1. The parameter values, and uncertainties, are estimated using computer searches to locate the region of parameter space providing an acceptable match to cosmological observations. From these six parameters, the other model values, such as the Hubble constant and the dark energy density, can be calculated. == Historical development == The discovery of the cosmic microwave background (CMB) in 1964 confirmed a key prediction of the Big Bang cosmology. From that point on, it was generally accepted that the universe started in a hot, dense state and has been expanding over time. The rate of expansion depends on the types of matter and energy present in the universe, and in particular, whether the total density is above or below the so-called critical density. During the 1970s, most attention focused on pure-baryonic models, but there were serious challenges explaining the formation of galaxies, given the small anisotropies in the CMB (upper limits at that time). In the early 1980s, it was realized that this could be resolved if cold dark matter dominated over the baryons, and the theory of cosmic inflation motivated models with critical density. During the 1980s, most research focused on cold dark matter with critical density in matter, around 95% CDM and 5% baryons: these showed success at forming galaxies and clusters of galaxies, but problems remained; notably, the model required a Hubble constant lower than preferred by observations, and observations around 1988–1990 showed more large-scale galaxy clustering than predicted. These difficulties sharpened with the discovery of CMB anisotropy by the Cosmic Background Explorer in 1992, and several modified CDM models, including ΛCDM and mixed cold and hot dark matter, came under active consideration through the mid-1990s. The ΛCDM model then became the leading model following the observations of accelerating expansion in 1998, and was quickly supported by other observations: in 2000, the BOOMERanG microwave background experiment measured the total (matter–energy) density to be close to 100% of critical, whereas in 2001 the 2dFGRS galaxy redshift survey measured the matter density to be near 25%; the large difference between these values supports a positive Λ or dark energy. Much more precise spacecraft measurements of the microwave background from WMAP in 2003–2010 and Planck in 2013–2015 have continued to support the model and pin down the parameter values, most of which are constrained below 1 percent uncertainty. == Successes == Among all cosmological models, the ΛCDM model has been the most successful; it describes a wide range of astronomical observations with remarkable accuracy.: 58  The notable successes include: Accurate modeling of the high-precision CMB angular distribution measured by the Planck mission and Atacama Cosmology Telescope. Accurate description of the linear E-mode polarization of the CMB radiation due to fluctuations at the surface of last scattering.
Prediction of the observed B-mode polarization of the CMB light due to primordial gravitational waves. Observations of H₂O emission spectra from a galaxy 12.8 billion light years away that are consistent with molecules excited by cosmic background radiation much more energetic – 16–20 K – than the 3 K CMB we observe now. Predictions of the primordial abundance of deuterium as a result of Big Bang nucleosynthesis. The observed abundance matches the one derived from the nucleosynthesis model with the value for baryon density derived from CMB measurements.: 4.1.2  In addition to explaining many pre-2000 observations, the model has made a number of successful predictions: notably the existence of the baryon acoustic oscillation feature, discovered in 2005 in the predicted location; and the statistics of weak gravitational lensing, first observed in 2000 by several teams. The polarization of the CMB, discovered in 2002 by DASI, has been successfully predicted by the model: in the 2015 Planck data release, there are seven observed peaks in the temperature (TT) power spectrum, six peaks in the temperature–polarization (TE) cross spectrum, and five peaks in the polarization (EE) spectrum. The six free parameters can be well constrained by the TT spectrum alone, and then the TE and EE spectra can be predicted theoretically to few-percent precision with no further adjustments allowed. == Challenges == Despite the widespread success of ΛCDM in matching observations of our universe, cosmologists believe that the model may be an approximation of a more fundamental model. === Lack of detection === Extensive searches for dark matter particles have so far shown no widely agreed detection, while dark energy may be almost impossible to detect in a laboratory, and its value is extremely small compared to vacuum energy theoretical predictions. === Violations of the cosmological principle === The ΛCDM model, like all models built on the Friedmann–Lemaître–Robertson–Walker metric, assumes that the universe looks the same in all directions (isotropy) and from every location (homogeneity) on a large enough scale: "the universe looks the same whoever and wherever you are." This cosmological principle allows a metric, the Friedmann–Lemaître–Robertson–Walker metric, to be derived and developed into a theory that can be compared to experiments. Without the principle, a metric would need to be extracted from astronomical data, which may not be possible.: 408  The assumptions were carried over into the ΛCDM model. However, some findings have suggested violations of the cosmological principle. ==== Violations of isotropy ==== Evidence from galaxy clusters, quasars, and type Ia supernovae suggests that isotropy is violated on large scales. Data from the Planck Mission shows hemispheric bias in the cosmic microwave background in two respects: one with respect to average temperature (i.e. temperature fluctuations), the second with respect to larger variations in the degree of perturbations (i.e. densities). The European Space Agency (the governing body of the Planck Mission) has concluded that these anisotropies in the CMB are, in fact, statistically significant and can no longer be ignored. Already in 1967, Dennis Sciama predicted that the cosmic microwave background has a significant dipole anisotropy. In recent years, the CMB dipole has been tested, and the results suggest our motion with respect to distant radio galaxies and quasars differs from our motion with respect to the cosmic microwave background.
The same conclusion has been reached in recent studies of the Hubble diagram of Type Ia supernovae and quasars. This contradicts the cosmological principle. The CMB dipole is hinted at through a number of other observations. First, even within the cosmic microwave background, there are curious directional alignments and an anomalous parity asymmetry that may have an origin in the CMB dipole. Separately, the CMB dipole direction has emerged as a preferred direction in studies of alignments in quasar polarizations, scaling relations in galaxy clusters, strong lensing time delay, Type Ia supernovae, and quasars and gamma-ray bursts as standard candles. The fact that all these independent observables, based on different physics, are tracking the CMB dipole direction suggests that the Universe is anisotropic in the direction of the CMB dipole. Nevertheless, some authors have stated that the universe around Earth is isotropic at high significance, based on studies of the combined cosmic microwave background temperature and polarization maps. ==== Violations of homogeneity ==== The homogeneity of the universe needed for the ΛCDM applies to very large volumes of space. N-body simulations in ΛCDM show that the spatial distribution of galaxies is statistically homogeneous if averaged over scales of 260/h Mpc or more. Numerous claims of large-scale structures reported to be in conflict with the predicted scale of homogeneity for ΛCDM do not withstand statistical analysis.: 7.8  === El Gordo galaxy cluster collision === El Gordo is a massive interacting galaxy cluster in the early Universe ( z = 0.87 {\displaystyle z=0.87} ). The extreme properties of El Gordo in terms of its redshift, mass, and the collision velocity lead to strong ( 6.16 σ {\displaystyle 6.16\sigma } ) tension with the ΛCDM model. The properties of El Gordo are, however, consistent with cosmological simulations in the framework of MOND due to more rapid structure formation. === KBC void === The KBC void is an immense, comparatively empty region of space, approximately 2 billion light-years (600 megaparsecs, Mpc) in diameter, that contains the Milky Way. Some authors have said the existence of the KBC void violates the assumption that the CMB reflects baryonic density fluctuations at z = 1100 {\displaystyle z=1100} or Einstein's theory of general relativity, either of which would violate the ΛCDM model, while other authors have claimed that supervoids as large as the KBC void are consistent with the ΛCDM model. === Hubble tension === Statistically significant differences remain in values of the Hubble constant derived by matching the ΛCDM model to data from the "early universe", like the cosmic background radiation, compared to values derived from stellar distance measurements, called the "late universe". While systematic error in the measurements remains a possibility, many different kinds of observations agree with one of these two values of the constant. This difference, called the Hubble tension, is widely acknowledged to be a major problem for the ΛCDM model. Dozens of proposals for modifications of ΛCDM or completely new models have been published to explain the Hubble tension.
Among these models are many that invoke time-varying properties of dark energy or dark matter, interactions between dark energy and dark matter, unified dark energy and matter, other forms of dark radiation such as sterile neutrinos, modifications to the properties of gravity, modifications of the effects of inflation, and changes to the properties of elementary particles in the early universe, among others. None of these models can simultaneously explain the breadth of other cosmological data as well as ΛCDM. === S8 tension === The " S 8 {\displaystyle S_{8}} tension" is the name given to another discrepancy that challenges the ΛCDM model. The S 8 {\displaystyle S_{8}} parameter in the ΛCDM model quantifies the amplitude of matter fluctuations in the late universe and is defined as S 8 ≡ σ 8 Ω m / 0.3 {\displaystyle S_{8}\equiv \sigma _{8}{\sqrt {\Omega _{\rm {m}}/0.3}}} Early-time (e.g. from CMB data collected using the Planck observatory) and late-time (e.g. from weak gravitational lensing measurements) observations facilitate increasingly precise values of S 8 {\displaystyle S_{8}} . However, the values obtained from these two categories of measurement differ by more than their quoted uncertainties would allow. This discrepancy is called the S 8 {\displaystyle S_{8}} tension. The name "tension" reflects that the disagreement is not merely between two data sets: the many sets of early- and late-time measurements agree well within their own categories, but there is an unexplained difference between values obtained from different points in the evolution of the universe. Such a tension indicates that the ΛCDM model may be incomplete or in need of correction. Some values for S 8 {\displaystyle S_{8}} are 0.832±0.013 (2020 Planck), 0.766 +0.020/−0.014 (2021 KIDS), 0.776±0.017 (2022 DES), 0.790 +0.018/−0.014 (2023 DES+KIDS), 0.769 +0.031/−0.034 to 0.776 +0.032/−0.033 (2023 HSC-SSP), 0.86±0.01 (2024 EROSITA). Values have also been obtained using peculiar velocities, 0.637±0.054 (2020) and 0.776±0.033 (2020), among other methods. === Axis of evil === The "axis of evil" is a name given to a purported correlation between the plane of the Solar System and aspects of the cosmic microwave background (CMB). Such a correlation would give the plane of the Solar System and hence the location of Earth a greater significance than might be expected by chance, a result which has been claimed to be evidence of a departure from the Copernican principle. However, a 2016 study compared isotropic and anisotropic cosmological models against WMAP and Planck data and found no evidence for anisotropy. === Cosmological lithium problem === The actual observable amount of lithium in the universe is less than the calculated amount from the ΛCDM model by a factor of 3–4.: 141  If every calculation is correct, then solutions beyond the existing ΛCDM model might be needed. === Shape of the universe === The ΛCDM model assumes that the shape of the universe is of zero curvature (is flat) and has an undetermined topology. In 2019, interpretation of Planck data suggested that the curvature of the universe might be positive (often called "closed"), which would contradict the ΛCDM model. Some authors have suggested that the Planck data detecting a positive curvature could be evidence of a local inhomogeneity in the curvature of the universe rather than the universe actually being globally a 3-manifold of positive curvature. === Violations of the strong equivalence principle === The ΛCDM model assumes that the strong equivalence principle is true.
However, in 2020 a group of astronomers analyzed data from the Spitzer Photometry and Accurate Rotation Curves (SPARC) sample, together with estimates of the large-scale external gravitational field from an all-sky galaxy catalog. They concluded that there was highly statistically significant evidence of violations of the strong equivalence principle in weak gravitational fields in the vicinity of rotationally supported galaxies. They observed an effect inconsistent with tidal effects in the ΛCDM model. These results have been challenged as failing to consider inaccuracies in the rotation curves and correlations between galaxy properties and clustering strength, and as inconsistent with similar analyses of other galaxies. === Cold dark matter discrepancies === Several discrepancies between the predictions of cold dark matter in the ΛCDM model and observations of galaxies and their clustering have arisen. Solutions have been proposed for some of these problems, but it remains unclear whether they can be solved without abandoning the ΛCDM model. Milgrom, McGaugh, and Kroupa have criticized the dark matter portions of the theory from the perspective of galaxy formation models, supporting instead the alternative modified Newtonian dynamics (MOND) theory, which requires a modification of the Einstein field equations and the Friedmann equations as seen in proposals such as modified gravity theory (MOG theory) or tensor–vector–scalar gravity theory (TeVeS theory). Other proposals by theoretical astrophysicists of cosmological alternatives to Einstein's general relativity that attempt to account for dark energy or dark matter include f(R) gravity, scalar–tensor theories such as galileon theories (see Galilean invariance), brane cosmologies, the DGP model, and massive gravity and its extensions such as bimetric gravity. ==== Cuspy halo problem ==== The density distributions of dark matter halos in cold dark matter simulations (at least those that do not include the impact of baryonic feedback) are much more peaked than what is inferred for observed galaxies from their rotation curves. ==== Dwarf galaxy problem ==== Cold dark matter simulations predict large numbers of small dark matter halos, more numerous than the number of small dwarf galaxies that are observed around galaxies like the Milky Way. ==== Satellite disk problem ==== Dwarf galaxies around the Milky Way and Andromeda galaxies are observed to be orbiting in thin, planar structures whereas the simulations predict that they should be distributed randomly about their parent galaxies. However, more recent research suggests this seemingly bizarre alignment is just a quirk that will dissolve over time. ==== High-velocity galaxy problem ==== Galaxies in the NGC 3109 association are moving away too rapidly to be consistent with expectations in the ΛCDM model. In this framework, NGC 3109 is too massive and distant from the Local Group for it to have been flung out in a three-body interaction involving the Milky Way or Andromeda Galaxy. ==== Galaxy morphology problem ==== If galaxies grew hierarchically, then massive galaxies required many mergers. Major mergers inevitably create a classical bulge. On the contrary, about 80% of observed galaxies give evidence of no such bulges, and giant pure-disc galaxies are commonplace.
The tension can be quantified by comparing the observed distribution of galaxy shapes today with predictions from high-resolution hydrodynamical cosmological simulations in the ΛCDM framework, revealing a highly significant problem that is unlikely to be solved by improving the resolution of the simulations. The high bulgeless fraction was nearly constant for 8 billion years. ==== Fast galaxy bar problem ==== If galaxies were embedded within massive halos of cold dark matter, then the bars that often develop in their central regions would be slowed down by dynamical friction with the halo. This is in serious tension with the fact that observed galaxy bars are typically fast. ==== Small scale crisis ==== Comparison of the model with observations may have some problems on sub-galaxy scales, possibly predicting too many dwarf galaxies and too much dark matter in the innermost regions of galaxies. This problem is called the "small scale crisis". These small scales are harder to resolve in computer simulations, so it is not yet clear whether the problem is the simulations, non-standard properties of dark matter, or a more radical error in the model. ==== High redshift galaxies ==== Observations from the James Webb Space Telescope have resulted in various galaxies confirmed by spectroscopy at high redshift, such as JADES-GS-z13-0 at a cosmological redshift of 13.2. Other candidate galaxies which have not been confirmed by spectroscopy include CEERS-93316 at a cosmological redshift of 16.4. The existence of surprisingly massive galaxies in the early universe challenges the preferred models describing how dark matter halos drive galaxy formation. It remains to be seen whether a revision of the Lambda-CDM model with parameters given by the Planck Collaboration is necessary to resolve this issue. The discrepancies could also be explained by particular properties (stellar masses or effective volume) of the candidate galaxies, a yet unknown force or particle outside of the Standard Model through which dark matter interacts, more efficient baryonic matter accumulation by the dark matter halos, early dark energy models, or the hypothesized long-sought Population III stars. === Missing baryon problem === Massimo Persic and Paolo Salucci first estimated the baryonic density present today in ellipticals, spirals, groups and clusters of galaxies. They performed an integration of the baryonic mass-to-light ratio over luminosity (in the following M b / L {\textstyle M_{\rm {b}}/L} ), weighted with the luminosity function ϕ ( L ) {\textstyle \phi (L)} over the previously mentioned classes of astrophysical objects: ρ b = ∑ ∫ L ϕ ( L ) M b L d L . {\displaystyle \rho _{\rm {b}}=\sum \int L\phi (L){\frac {M_{\rm {b}}}{L}}\,dL.} The result was: Ω b = Ω ∗ + Ω gas = 2.2 × 10 − 3 + 1.5 × 10 − 3 h − 1.3 ≃ 0.003 , {\displaystyle \Omega _{\rm {b}}=\Omega _{*}+\Omega _{\text{gas}}=2.2\times 10^{-3}+1.5\times 10^{-3}\;h^{-1.3}\simeq 0.003,} where h ≃ 0.72 {\displaystyle h\simeq 0.72} . Note that this value is much lower than the prediction of standard cosmic nucleosynthesis Ω b ≃ 0.0486 {\displaystyle \Omega _{\rm {b}}\simeq 0.0486} , so that stars and gas in galaxies and in galaxy groups and clusters account for less than 10% of the primordially synthesized baryons. This issue is known as the problem of the "missing baryons". The missing baryon problem is claimed to be resolved.
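The order of magnitude of the census above can be reproduced with a short numerical sketch. This is a toy illustration only: the Schechter luminosity-function parameters, the constant mass-to-light ratio, and the critical density used below are assumed round numbers, not the values of the original Persic–Salucci analysis.

```python
import numpy as np

# Toy version of the census above: rho_b = sum over object classes of
# ∫ L φ(L) (M_b/L) dL, collapsed here to a single Schechter luminosity
# function and a constant baryonic mass-to-light ratio.
# Every numerical value below is an illustrative assumption.
phi_star = 1.6e-2      # Mpc^-3, Schechter normalization (assumed)
alpha = -1.2           # faint-end slope (assumed)
mass_to_light = 5.0    # M_b/L in solar units (assumed, taken constant)

L = np.logspace(-3, 1.5, 2000)            # luminosity in units of L*
phi = phi_star * L**alpha * np.exp(-L)    # Schechter function per unit L/L*
integrand = L * phi * mass_to_light       # contribution to the baryon density

# Trapezoidal integration over the (non-uniform) luminosity grid.
rho_b = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(L))
# rho_b is in units of 10^10 M_sun per Mpc^3 if L* is taken as 10^10 L_sun.

rho_crit = 27.75       # critical density in the same units for h = 1 (approximate)
print(f"toy visible-baryon density parameter ~ {rho_b / rho_crit:.4f}")
```

With these assumed numbers the toy census lands at a few times 10⁻³, the same order of magnitude as the Ω_b ≃ 0.003 quoted above and well below the nucleosynthesis value.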
Using observations of the kinematic Sunyaev–Zel'dovich effect spanning more than 90% of the lifetime of the Universe, in 2021 astrophysicists found that approximately 50% of all baryonic matter is outside dark matter haloes, filling the space between galaxies. Together with the amount of baryons inside galaxies and surrounding them, the total amount of baryons in the late time Universe is compatible with early Universe measurements. === Conventionalism === It has been argued that the ΛCDM model has adopted conventionalist stratagems, rendering it unfalsifiable in the sense defined by Karl Popper. When faced with new data not in accord with a prevailing model, the conventionalist will find ways to adapt the theory rather than declare it false. Thus dark matter was added after the observations of anomalous galaxy rotation rates. Thomas Kuhn viewed the process differently, as "problem solving" within the existing paradigm. == Extended models == Extended models allow one or more of the "fixed" parameters above to vary, in addition to the basic six; so these models join smoothly to the basic six-parameter model in the limit that the additional parameter(s) approach the default values. For example, possible extensions of the simplest ΛCDM model allow for spatial curvature ( Ω tot {\displaystyle \Omega _{\text{tot}}} may be different from 1); or quintessence rather than a cosmological constant where the equation of state of dark energy is allowed to differ from −1. Cosmic inflation predicts tensor fluctuations (gravitational waves). Their amplitude is parameterized by the tensor-to-scalar ratio (denoted r {\displaystyle r} ), which is determined by the unknown energy scale of inflation. Other modifications allow hot dark matter in the form of neutrinos more massive than the minimal value, or a running spectral index; the latter is generally not favoured by simple cosmic inflation models. Allowing additional variable parameter(s) will generally increase the uncertainties in the standard six parameters quoted above, and may also shift the central values slightly. The table below shows results for each of the possible "6+1" scenarios with one additional variable parameter; this indicates that, as of 2015, there is no convincing evidence that any additional parameter is different from its default value. Some researchers have suggested that there is a running spectral index, but no statistically significant study has revealed one. Theoretical expectations suggest that the tensor-to-scalar ratio r {\displaystyle r} should be between 0 and 0.3, and the latest results are within those limits. == See also == Bolshoi cosmological simulation Galaxy formation and evolution Illustris project List of cosmological computation software Millennium Run Weakly interacting massive particles (WIMPs) The ΛCDM model is also known as the standard model of cosmology, but is not related to the Standard Model of particle physics. Inhomogeneous cosmology == References == == Further reading == Ostriker, J. P.; Steinhardt, P. J. (1995). "Cosmic Concordance". arXiv:astro-ph/9505066. Ostriker, Jeremiah P.; Mitton, Simon (2013). Heart of Darkness: Unraveling the mysteries of the invisible universe. Princeton, NJ: Princeton University Press. ISBN 978-0-691-13430-7. Rebolo, R.; et al. (2004). "Cosmological parameter estimation using Very Small Array data out to ℓ= 1500". Monthly Notices of the Royal Astronomical Society. 353 (3): 747–759. arXiv:astro-ph/0402466. Bibcode:2004MNRAS.353..747R. doi:10.1111/j.1365-2966.2004.08102.x. 
S2CID 13971059. == External links == Cosmology tutorial/NedWright Millennium Simulation WMAP estimated cosmological parameters/Latest Summary
Wikipedia/Lambda-CDM_model
Spin is an intrinsic form of angular momentum carried by elementary particles, and thus by composite particles such as hadrons, atomic nuclei, and atoms.: 183–184  Spin is quantized, and accurate models for the interaction with spin require relativistic quantum mechanics or quantum field theory. The existence of electron spin angular momentum is inferred from experiments, such as the Stern–Gerlach experiment, in which silver atoms were observed to possess two possible discrete angular momenta despite having no orbital angular momentum. The relativistic spin–statistics theorem connects electron spin quantization to the Pauli exclusion principle: observations of exclusion imply half-integer spin, and observations of half-integer spin imply exclusion. Spin is described mathematically as a vector for some particles such as photons, and as a spinor or bispinor for other particles such as electrons. Spinors and bispinors behave similarly to vectors: they have definite magnitudes and change under rotations; however, they use an unconventional "direction". All elementary particles of a given kind have the same magnitude of spin angular momentum, though its direction may change. These are indicated by assigning the particle a spin quantum number.: 183–184  The SI units of spin are the same as those of classical angular momentum (i.e., N·m·s, J·s, or kg·m2·s−1). In quantum mechanics, angular momentum and spin angular momentum take discrete values proportional to the Planck constant. In practice, spin is usually given as a dimensionless spin quantum number by dividing the spin angular momentum by the reduced Planck constant ħ. Often, the "spin quantum number" is simply called "spin". == Models == === Rotating charged mass === The earliest models for electron spin imagined a rotating charged mass, but such models fail when examined in detail: the required spatial distribution does not match limits on the electron radius, and the required rotation speed exceeds the speed of light. In the Standard Model, the fundamental particles are all considered "point-like": they have their effects through the field that surrounds them. Any model for spin based on mass rotation would need to be consistent with that model. === Pauli's "classically non-describable two-valuedness" === Wolfgang Pauli, a central figure in the history of quantum spin, initially rejected any idea that the "degree of freedom" he introduced to explain experimental observations was related to rotation. He called it "classically non-describable two-valuedness". Later, he allowed that it is related to angular momentum, but insisted on considering spin an abstract property. This approach allowed Pauli to develop a proof of his fundamental Pauli exclusion principle, a proof now called the spin–statistics theorem. In retrospect, this insistence and the style of his proof initiated the modern particle-physics era, where abstract quantum properties derived from symmetry properties dominate. Concrete interpretation became secondary and optional. === Circulation of classical fields === The first classical model for spin proposed a small rigid particle rotating about an axis, as ordinary use of the word may suggest. Angular momentum can be computed from a classical field as well.: 63  By applying Frederik Belinfante's approach to calculating the angular momentum of a field, Hans C. Ohanian showed that "spin is essentially a wave property ... generated by a circulating flow of charge in the wave field of the electron".
This same concept of spin can be applied to gravity waves in water: "spin is generated by subwavelength circular motion of water particles". Unlike classical wavefield circulation, which allows continuous values of angular momentum, quantum wavefields allow only discrete values. Consequently, energy transfer to or from spin states always occurs in fixed quantum steps. Only a few steps are allowed: for many qualitative purposes, the complexity of the spin quantum wavefields can be ignored and the system properties can be discussed in terms of "integer" or "half-integer" spin models as discussed in quantum numbers below. === In Bohmian mechanics === Spin can be understood differently depending on the interpretations of quantum mechanics. In the de Broglie–Bohm interpretation, particles have definite trajectories, but their motion is driven by the wave function or pilot wave. In this interpretation, spin is a property of the pilot wave and not of the particles themselves. === Dirac's relativistic electron === Quantitative calculations of spin properties for electrons require the Dirac relativistic wave equation. == Relation to orbital angular momentum == As the name suggests, spin was originally conceived as the rotation of a particle around some axis. Historically, orbital angular momentum was related to particle orbits.: 131  While the names based on mechanical models have survived, the physical explanation has not. Quantization fundamentally alters the character of both spin and orbital angular momentum. Since elementary particles are point-like, self-rotation is not well-defined for them. However, spin implies that the phase of the particle depends on the angle as e i S θ , {\displaystyle e^{iS\theta }\ ,} for rotation of angle θ around the axis parallel to the spin S. This is equivalent to the quantum-mechanical interpretation of momentum as phase dependence in the position, and of orbital angular momentum as phase dependence in the angular position. For fermions, the picture is less clear: from the Ehrenfest theorem, the angular velocity is equal to the derivative of the Hamiltonian with respect to its conjugate momentum, which is the total angular momentum operator J = L + S . Therefore, if the Hamiltonian H has any dependence on the spin S, then ⁠ ∂ H / ∂ S ⁠ must be non-zero; consequently, for classical mechanics, the existence of spin in the Hamiltonian will produce an actual angular velocity, and hence an actual physical rotation – that is, a change in the phase-angle, θ, over time. However, whether this holds true for free electrons is ambiguous, since for an electron, | S |² is a constant, 3/4 ℏ², and one might decide that since it cannot change, no partial (∂) can exist. Therefore it is a matter of interpretation whether the Hamiltonian must include such a term, and whether this aspect of classical mechanics extends into quantum mechanics (any particle's intrinsic spin angular momentum, S, is a quantum number arising from a "spinor" in the mathematical solution to the Dirac equation, rather than being a more nearly physical quantity, like orbital angular momentum L). Nevertheless, spin appears in the Dirac equation, and thus the relativistic Hamiltonian of the electron, treated as a Dirac field, can be interpreted as including a dependence on the spin S. == Quantum number == Spin obeys the mathematical laws of angular momentum quantization. The specific properties of spin angular momenta include: Spin quantum numbers may take either half-integer or integer values.
Although the direction of its spin can be changed, the magnitude of the spin of an elementary particle cannot be changed. The spin of a charged particle is associated with a magnetic dipole moment with a g-factor that differs from 1. (In the classical context, this would imply the internal charge and mass distributions differing for a rotating object.) The conventional definition of the spin quantum number is s = ⁠n/2⁠, where n can be any non-negative integer. Hence the allowed values of s are 0, ⁠1/2⁠, 1, ⁠3/2⁠, 2, etc. The value of s for an elementary particle depends only on the type of particle and cannot be altered in any known way (in contrast to the spin direction described below). The spin angular momentum S of any physical system is quantized. The allowed values of S are S = ℏ s ( s + 1 ) = h 2 π n 2 ( n + 2 ) 2 = h 4 π n ( n + 2 ) , {\displaystyle S=\hbar \,{\sqrt {s(s+1)}}={\frac {h}{2\pi }}\,{\sqrt {{\frac {n}{2}}{\frac {(n+2)}{2}}}}={\frac {h}{4\pi }}\,{\sqrt {n(n+2)}},} where h is the Planck constant, and ℏ = h 2 π {\textstyle \hbar ={\frac {h}{2\pi }}} is the reduced Planck constant. In contrast, orbital angular momentum can only take on integer values of s; i.e., even-numbered values of n. === Fermions and bosons === Those particles with half-integer spins, such as ⁠1/2⁠, ⁠3/2⁠, ⁠5/2⁠, are known as fermions, while those particles with integer spins, such as 0, 1, 2, are known as bosons. The two families of particles obey different rules and broadly have different roles in the world around us. A key distinction between the two families is that fermions obey the Pauli exclusion principle: that is, there cannot be two identical fermions simultaneously having the same quantum numbers (meaning, roughly, having the same position, velocity and spin direction). Fermions obey the rules of Fermi–Dirac statistics. In contrast, bosons obey the rules of Bose–Einstein statistics and have no such restriction, so they may "bunch together" in identical states. Also, composite particles can have spins different from their component particles. For example, a helium-4 atom in the ground state has spin 0 and behaves like a boson, even though the quarks and electrons which make it up are all fermions. This has some profound consequences: Quarks and leptons (including electrons and neutrinos), which make up what is classically known as matter, are all fermions with spin ⁠1/2⁠. The common idea that "matter takes up space" actually comes from the Pauli exclusion principle acting on these particles to prevent the fermions from being in the same quantum state. Further compaction would require electrons to occupy the same energy states, and therefore a kind of pressure (sometimes known as degeneracy pressure of electrons) acts to resist the fermions being overly close. Elementary fermions with other spins (⁠3/2⁠, ⁠5/2⁠, etc.) are not known to exist. Elementary particles which are thought of as carrying forces are all bosons with spin 1. They include the photon, which carries the electromagnetic force, the gluon (strong force), and the W and Z bosons (weak force). The ability of bosons to occupy the same quantum state is used in the laser, which aligns many photons having the same quantum number (the same direction and frequency), superfluid liquid helium resulting from helium-4 atoms being bosons, and superconductivity, where pairs of electrons (which individually are fermions) act as single composite bosons. Elementary bosons with other spins (0, 2, 3, etc.) 
were not historically known to exist, although they have received considerable theoretical treatment and are well established within their respective mainstream theories. In particular, theoreticians have proposed the graviton (predicted to exist by some quantum gravity theories) with spin 2, and the Higgs boson (explaining electroweak symmetry breaking) with spin 0. Since 2013, the Higgs boson with spin 0 has been considered proven to exist. It is the first scalar elementary particle (spin 0) known to exist in nature. Atomic nuclei have nuclear spin which may be either half-integer or integer, so that the nuclei may be either fermions or bosons. === Spin–statistics theorem === The spin–statistics theorem splits particles into two groups: bosons and fermions, where bosons obey Bose–Einstein statistics, and fermions obey Fermi–Dirac statistics (and therefore the Pauli exclusion principle). Specifically, the theorem requires that particles with half-integer spins obey the Pauli exclusion principle while particles with integer spin do not. As an example, electrons have half-integer spin and are fermions that obey the Pauli exclusion principle, while photons have integer spin and do not. The theorem was derived by Wolfgang Pauli in 1940; it relies on both quantum mechanics and the theory of special relativity. Pauli described this connection between spin and statistics as "one of the most important applications of the special relativity theory". == Magnetic moments == Particles with spin can possess a magnetic dipole moment, just like a rotating electrically charged body in classical electrodynamics. These magnetic moments can be experimentally observed in several ways, e.g. by the deflection of particles by inhomogeneous magnetic fields in a Stern–Gerlach experiment, or by measuring the magnetic fields generated by the particles themselves. The intrinsic magnetic moment μ of a spin-⁠1/2⁠ particle with charge q, mass m, and spin angular momentum S is μ = g s q 2 m S , {\displaystyle {\boldsymbol {\mu }}={\frac {g_{\text{s}}q}{2m}}\mathbf {S} ,} where the dimensionless quantity gs is called the spin g-factor. For exclusively orbital rotations, it would be 1 (assuming that the mass and the charge occupy spheres of equal radius). The electron, being a charged elementary particle, possesses a nonzero magnetic moment. One of the triumphs of the theory of quantum electrodynamics is its accurate prediction of the electron g-factor, which has been experimentally determined to have the value −2.00231930436092(36), with the digits in parentheses denoting measurement uncertainty in the last two digits at one standard deviation. The value of 2 arises from the Dirac equation, a fundamental equation connecting the electron's spin with its electromagnetic properties; and the deviation from −2 arises from the electron's interaction with the surrounding quantum fields, including its own electromagnetic field and virtual particles. Composite particles also possess magnetic moments associated with their spin. In particular, the neutron possesses a non-zero magnetic moment despite being electrically neutral. This fact was an early indication that the neutron is not an elementary particle. In fact, it is made up of quarks, which are electrically charged particles. The magnetic moment of the neutron comes from the spins of the individual quarks and their orbital motions. Neutrinos are both elementary and electrically neutral. 
The minimally extended Standard Model that takes into account non-zero neutrino masses predicts neutrino magnetic moments of: μ ν ≈ 3 × 10 − 19 μ B m ν c 2 eV , {\displaystyle \mu _{\nu }\approx 3\times 10^{-19}\mu _{\text{B}}{\frac {m_{\nu }c^{2}}{\text{eV}}},} where the μν are the neutrino magnetic moments, mν are the neutrino masses, and μB is the Bohr magneton. New physics above the electroweak scale could, however, lead to significantly higher neutrino magnetic moments. It can be shown in a model-independent way that neutrino magnetic moments larger than about 10−14 μB are "unnatural" because they would also lead to large radiative contributions to the neutrino mass. Since the neutrino masses are known to be at most about 1 eV/c2, fine-tuning would be necessary in order to prevent large contributions to the neutrino mass via radiative corrections. The measurement of neutrino magnetic moments is an active area of research. Experimental results have put the neutrino magnetic moment at less than 1.2×10−10 times the electron's magnetic moment. On the other hand, elementary particles with spin but without electric charge, such as the photon and Z boson, do not have a magnetic moment. == Direction == === Spin projection quantum number and multiplicity === In classical mechanics, the angular momentum of a particle possesses not only a magnitude (how fast the body is rotating), but also a direction (either up or down on the axis of rotation of the particle). Quantum-mechanical spin also contains information about direction, but in a more subtle form. Quantum mechanics states that the component of angular momentum for a spin-s particle measured along any direction can only take on the values S i = ℏ s i , s i ∈ { − s , − ( s − 1 ) , … , s − 1 , s } , {\displaystyle S_{i}=\hbar s_{i},\quad s_{i}\in \{-s,-(s-1),\dots ,s-1,s\},} where Si is the spin component along the i-th axis (either x, y, or z), si is the spin projection quantum number along the i-th axis, and s is the principal spin quantum number (discussed in the previous section). Conventionally the direction chosen is the z axis: S z = ℏ s z , s z ∈ { − s , − ( s − 1 ) , … , s − 1 , s } , {\displaystyle S_{z}=\hbar s_{z},\quad s_{z}\in \{-s,-(s-1),\dots ,s-1,s\},} where Sz is the spin component along the z axis, sz is the spin projection quantum number along the z axis. One can see that there are 2s + 1 possible values of sz. The number "2s + 1" is the multiplicity of the spin system. For example, there are only two possible values for a spin-⁠1/2⁠ particle: sz = +⁠1/2⁠ and sz = −⁠1/2⁠. These correspond to quantum states in which the spin component is pointing in the +z or −z directions respectively, and are often referred to as "spin up" and "spin down". For a spin-⁠3/2⁠ particle, like a delta baryon, the possible values are +⁠3/2⁠, +⁠1/2⁠, −⁠1/2⁠, −⁠3/2⁠. === Vector === For a given quantum state, one could think of a spin vector ⟨ S ⟩ {\textstyle \langle S\rangle } whose components are the expectation values of the spin components along each axis, i.e., ⟨ S ⟩ = [ ⟨ S x ⟩ , ⟨ S y ⟩ , ⟨ S z ⟩ ] {\textstyle \langle S\rangle =[\langle S_{x}\rangle ,\langle S_{y}\rangle ,\langle S_{z}\rangle ]} . This vector then would describe the "direction" in which the spin is pointing, corresponding to the classical concept of the axis of rotation. 
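As a concrete illustration, the expectation values that make up this vector can be computed directly from a two-component spinor. The sketch below uses the standard Pauli matrices in units of ħ; the particular state is an arbitrary example, not a value taken from the text.

```python
import numpy as np

# Pauli matrices; the spin-1/2 operators are S_i = (hbar/2) * sigma_i.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

hbar = 1.0  # work in units of hbar

# An example spinor (any normalized two-component vector will do).
psi = np.array([1.0, 1.0 + 1.0j], dtype=complex)
psi = psi / np.linalg.norm(psi)

def expectation(op, state):
    """<state| op |state> for a normalized state."""
    return np.real(np.vdot(state, op @ state))

spin_vector = np.array([expectation(hbar / 2 * s, psi) for s in (sx, sy, sz)])
print("spin vector <S> =", spin_vector)
print("length |<S>| =", np.linalg.norm(spin_vector))
# For any pure spin-1/2 state this length is hbar/2, even though the full
# magnitude sqrt(s(s+1))*hbar = (sqrt(3)/2)*hbar is larger.
```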
It turns out that the spin vector is not very useful in actual quantum-mechanical calculations, because it cannot be measured directly: sx, sy and sz cannot possess simultaneous definite values, because of a quantum uncertainty relation between them. However, for statistically large collections of particles that have been placed in the same pure quantum state, such as through the use of a Stern–Gerlach apparatus, the spin vector does have a well-defined experimental meaning: It specifies the direction in ordinary space in which a subsequent detector must be oriented in order to achieve the maximum possible probability (100%) of detecting every particle in the collection. For spin-⁠1/2⁠ particles, this probability drops off smoothly as the angle between the spin vector and the detector increases, until at an angle of 180°—that is, for detectors oriented in the opposite direction to the spin vector—the expectation of detecting particles from the collection reaches a minimum of 0%. As a qualitative concept, the spin vector is often handy because it is easy to picture classically. For instance, quantum-mechanical spin can exhibit phenomena analogous to classical gyroscopic effects. For example, one can exert a kind of "torque" on an electron by putting it in a magnetic field (the field acts upon the electron's intrinsic magnetic dipole moment—see the following section). The result is that the spin vector undergoes precession, just like a classical gyroscope. This phenomenon is known as electron spin resonance (ESR). The equivalent behaviour of protons in atomic nuclei is used in nuclear magnetic resonance (NMR) spectroscopy and imaging. Mathematically, quantum-mechanical spin states are described by vector-like objects known as spinors. There are subtle differences between the behavior of spinors and vectors under coordinate rotations. For example, rotating a spin-⁠1/2⁠ particle by 360° does not bring it back to the same quantum state, but to the state with the opposite quantum phase; this is detectable, in principle, with interference experiments. To return the particle to its exact original state, one needs a 720° rotation. (The plate trick and Möbius strip give non-quantum analogies.) A spin-zero particle can only have a single quantum state, even after torque is applied. Rotating a spin-2 particle 180° can bring it back to the same quantum state, and a spin-4 particle should be rotated 90° to bring it back to the same quantum state. The spin-2 particle can be analogous to a straight stick that looks the same even after it is rotated 180°, and a spin-0 particle can be imagined as sphere, which looks the same after whatever angle it is turned through. == Mathematical formulation == === Operator === Spin obeys commutation relations analogous to those of the orbital angular momentum: [ S ^ j , S ^ k ] = i ℏ ε j k l S ^ l , {\displaystyle \left[{\hat {S}}_{j},{\hat {S}}_{k}\right]=i\hbar \varepsilon _{jkl}{\hat {S}}_{l},} where εjkl is the Levi-Civita symbol. It follows (as with angular momentum) that the eigenvectors of S ^ 2 {\displaystyle {\hat {S}}^{2}} and S ^ z {\displaystyle {\hat {S}}_{z}} (expressed as kets in the total S basis) are: 166  S ^ 2 | s , m s ⟩ = ℏ 2 s ( s + 1 ) | s , m s ⟩ , S ^ z | s , m s ⟩ = ℏ m s | s , m s ⟩ . 
{\displaystyle {\begin{aligned}{\hat {S}}^{2}|s,m_{s}\rangle &=\hbar ^{2}s(s+1)|s,m_{s}\rangle ,\\{\hat {S}}_{z}|s,m_{s}\rangle &=\hbar m_{s}|s,m_{s}\rangle .\end{aligned}}} The spin raising and lowering operators acting on these eigenvectors give S ^ ± | s , m s ⟩ = ℏ s ( s + 1 ) − m s ( m s ± 1 ) | s , m s ± 1 ⟩ , {\displaystyle {\hat {S}}_{\pm }|s,m_{s}\rangle =\hbar {\sqrt {s(s+1)-m_{s}(m_{s}\pm 1)}}|s,m_{s}\pm 1\rangle ,} where S ^ ± = S ^ x ± i S ^ y {\displaystyle {\hat {S}}_{\pm }={\hat {S}}_{x}\pm i{\hat {S}}_{y}} .: 166  But unlike orbital angular momentum, the eigenvectors are not spherical harmonics. They are not functions of θ and φ. There is also no reason to exclude half-integer values of s and ms. All quantum-mechanical particles possess an intrinsic spin s {\displaystyle s} (though this value may be equal to zero). The projection of the spin s {\displaystyle s} on any axis is quantized in units of the reduced Planck constant, such that the state function of the particle is, say, not ψ = ψ ( r ) {\displaystyle \psi =\psi (\mathbf {r} )} , but ψ = ψ ( r , s z ) {\displaystyle \psi =\psi (\mathbf {r} ,s_{z})} , where s z {\displaystyle s_{z}} can take only the values of the following discrete set: s z ∈ { − s ℏ , − ( s − 1 ) ℏ , … , + ( s − 1 ) ℏ , + s ℏ } . {\displaystyle s_{z}\in \{-s\hbar ,-(s-1)\hbar ,\dots ,+(s-1)\hbar ,+s\hbar \}.} One distinguishes bosons (integer spin) and fermions (half-integer spin). The total angular momentum conserved in interaction processes is then the sum of the orbital angular momentum and the spin. === Pauli matrices === The quantum-mechanical operators associated with spin-⁠1/2⁠ observables are S ^ = ℏ 2 σ , {\displaystyle {\hat {\mathbf {S} }}={\frac {\hbar }{2}}{\boldsymbol {\sigma }},} where in Cartesian components S x = ℏ 2 σ x , S y = ℏ 2 σ y , S z = ℏ 2 σ z . {\displaystyle S_{x}={\frac {\hbar }{2}}\sigma _{x},\quad S_{y}={\frac {\hbar }{2}}\sigma _{y},\quad S_{z}={\frac {\hbar }{2}}\sigma _{z}.} For the special case of spin-⁠1/2⁠ particles, σx, σy and σz are the three Pauli matrices: σ x = ( 0 1 1 0 ) , σ y = ( 0 − i i 0 ) , σ z = ( 1 0 0 − 1 ) . {\displaystyle \sigma _{x}={\begin{pmatrix}0&1\\1&0\end{pmatrix}},\quad \sigma _{y}={\begin{pmatrix}0&-i\\i&0\end{pmatrix}},\quad \sigma _{z}={\begin{pmatrix}1&0\\0&-1\end{pmatrix}}.} === Pauli exclusion principle === The Pauli exclusion principle states that the wavefunction ψ ( r 1 , σ 1 , … , r N , σ N ) {\displaystyle \psi (\mathbf {r} _{1},\sigma _{1},\dots ,\mathbf {r} _{N},\sigma _{N})} for a system of N identical particles having spin s must change upon interchanges of any two of the N particles as ψ ( … , r i , σ i , … , r j , σ j , … ) = ( − 1 ) 2 s ψ ( … , r j , σ j , … , r i , σ i , … ) . {\displaystyle \psi (\dots ,\mathbf {r} _{i},\sigma _{i},\dots ,\mathbf {r} _{j},\sigma _{j},\dots )=(-1)^{2s}\psi (\dots ,\mathbf {r} _{j},\sigma _{j},\dots ,\mathbf {r} _{i},\sigma _{i},\dots ).} Thus, for bosons the prefactor (−1)2s will reduce to +1, for fermions to −1. This permutation postulate for N-particle state functions has most important consequences in daily life, e.g. the periodic table of the chemical elements. === Rotations === As described above, quantum mechanics states that components of angular momentum measured along any direction can only take a number of discrete values. 
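This discreteness along an arbitrary direction can be checked numerically with the spin-1/2 operators built from the Pauli matrices given above. In the sketch below the direction vector is an arbitrary example; in units of ħ the eigenvalues along any axis come out as ±1/2, and the operators satisfy the commutation relation quoted earlier.

```python
import numpy as np

# Spin-1/2 operators S_i = (hbar/2) * sigma_i, in units where hbar = 1.
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

# Check the commutation relation [S_x, S_y] = i * S_z (and cyclic permutations).
commutator = sx @ sy - sy @ sx
assert np.allclose(commutator, 1j * sz)

# Spin component along an arbitrary (here: example) unit vector n.
n = np.array([1.0, 2.0, 2.0])
n = n / np.linalg.norm(n)
S_n = n[0] * sx + n[1] * sy + n[2] * sz

eigenvalues = np.linalg.eigvalsh(S_n)
print("eigenvalues of n.S:", np.round(eigenvalues, 6))   # -> [-0.5, 0.5]
```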
The most convenient quantum-mechanical description of particle's spin is therefore with a set of complex numbers corresponding to amplitudes of finding a given value of projection of its intrinsic angular momentum on a given axis. For instance, for a spin-⁠1/2⁠ particle, we would need two numbers a±1/2, giving amplitudes of finding it with projection of angular momentum equal to +⁠ħ/2⁠ and −⁠ħ/2⁠, satisfying the requirement | a + 1 / 2 | 2 + | a − 1 / 2 | 2 = 1. {\displaystyle |a_{+1/2}|^{2}+|a_{-1/2}|^{2}=1.} For a generic particle with spin s, we would need 2s + 1 such parameters. Since these numbers depend on the choice of the axis, they transform into each other non-trivially when this axis is rotated. It is clear that the transformation law must be linear, so we can represent it by associating a matrix with each rotation, and the product of two transformation matrices corresponding to rotations A and B must be equal (up to phase) to the matrix representing rotation AB. Further, rotations preserve the quantum-mechanical inner product, and so should our transformation matrices: ∑ m = − j j a m ∗ b m = ∑ m = − j j ( ∑ n = − j j U n m a n ) ∗ ( ∑ k = − j j U k m b k ) , {\displaystyle \sum _{m=-j}^{j}a_{m}^{*}b_{m}=\sum _{m=-j}^{j}\left(\sum _{n=-j}^{j}U_{nm}a_{n}\right)^{*}\left(\sum _{k=-j}^{j}U_{km}b_{k}\right),} ∑ n = − j j ∑ k = − j j U n p ∗ U k q = δ p q . {\displaystyle \sum _{n=-j}^{j}\sum _{k=-j}^{j}U_{np}^{*}U_{kq}=\delta _{pq}.} Mathematically speaking, these matrices furnish a unitary projective representation of the rotation group SO(3). Each such representation corresponds to a representation of the covering group of SO(3), which is SU(2). There is one n-dimensional irreducible representation of SU(2) for each dimension, though this representation is n-dimensional real for odd n and n-dimensional complex for even n (hence of real dimension 2n). For a rotation by angle θ in the plane with normal vector θ ^ {\textstyle {\hat {\boldsymbol {\theta }}}} , U = e − i ℏ θ ⋅ S , {\displaystyle U=e^{-{\frac {i}{\hbar }}{\boldsymbol {\theta }}\cdot \mathbf {S} },} where θ = θ θ ^ {\textstyle {\boldsymbol {\theta }}=\theta {\hat {\boldsymbol {\theta }}}} , and S is the vector of spin operators. A generic rotation in 3-dimensional space can be built by compounding operators of this type using Euler angles: R ( α , β , γ ) = e − i α S x e − i β S y e − i γ S z . {\displaystyle {\mathcal {R}}(\alpha ,\beta ,\gamma )=e^{-i\alpha S_{x}}e^{-i\beta S_{y}}e^{-i\gamma S_{z}}.} An irreducible representation of this group of operators is furnished by the Wigner D-matrix: D m ′ m s ( α , β , γ ) ≡ ⟨ s m ′ | R ( α , β , γ ) | s m ⟩ = e − i m ′ α d m ′ m s ( β ) e − i m γ , {\displaystyle D_{m'm}^{s}(\alpha ,\beta ,\gamma )\equiv \langle sm'|{\mathcal {R}}(\alpha ,\beta ,\gamma )|sm\rangle =e^{-im'\alpha }d_{m'm}^{s}(\beta )e^{-im\gamma },} where d m ′ m s ( β ) = ⟨ s m ′ | e − i β s y | s m ⟩ {\displaystyle d_{m'm}^{s}(\beta )=\langle sm'|e^{-i\beta s_{y}}|sm\rangle } is Wigner's small d-matrix. Note that for γ = 2π and α = β = 0; i.e., a full rotation about the z axis, the Wigner D-matrix elements become D m ′ m s ( 0 , 0 , 2 π ) = d m ′ m s ( 0 ) e − i m 2 π = δ m ′ m ( − 1 ) 2 m . {\displaystyle D_{m'm}^{s}(0,0,2\pi )=d_{m'm}^{s}(0)e^{-im2\pi }=\delta _{m'm}(-1)^{2m}.} Recalling that a generic spin state can be written as a superposition of states with definite m, we see that if s is an integer, the values of m are all integers, and this matrix corresponds to the identity operator. 
However, if s is a half-integer, the values of m are also all half-integers, giving (−1)2m = −1 for all m, and hence upon rotation by 2π the state picks up a minus sign. This fact is a crucial element of the proof of the spin–statistics theorem. === Lorentz transformations === We could try the same approach to determine the behavior of spin under general Lorentz transformations, but we would immediately discover a major obstacle. Unlike SO(3), the group of Lorentz transformations SO(3,1) is non-compact and therefore does not have any faithful, unitary, finite-dimensional representations. In case of spin-⁠1/2⁠ particles, it is possible to find a construction that includes both a finite-dimensional representation and a scalar product that is preserved by this representation. We associate a 4-component Dirac spinor ψ with each particle. These spinors transform under Lorentz transformations according to the law ψ ′ = exp ⁡ ( 1 8 ω μ ν [ γ μ , γ ν ] ) ψ , {\displaystyle \psi '=\exp {\left({\tfrac {1}{8}}\omega _{\mu \nu }[\gamma _{\mu },\gamma _{\nu }]\right)}\psi ,} where γν are gamma matrices, and ωμν is an antisymmetric 4 × 4 matrix parametrizing the transformation. It can be shown that the scalar product ⟨ ψ | ϕ ⟩ = ψ ¯ ϕ = ψ † γ 0 ϕ {\displaystyle \langle \psi |\phi \rangle ={\bar {\psi }}\phi =\psi ^{\dagger }\gamma _{0}\phi } is preserved. It is not, however, positive-definite, so the representation is not unitary. === Measurement of spin along the x, y, or z axes === Each of the (Hermitian) Pauli matrices of spin-⁠1/2⁠ particles has two eigenvalues, +1 and −1. The corresponding normalized eigenvectors are ψ x + = | 1 2 , + 1 2 ⟩ x = 1 2 ( 1 1 ) , ψ x − = | 1 2 , − 1 2 ⟩ x = 1 2 ( 1 − 1 ) , ψ y + = | 1 2 , + 1 2 ⟩ y = 1 2 ( 1 i ) , ψ y − = | 1 2 , − 1 2 ⟩ y = 1 2 ( 1 − i ) , ψ z + = | 1 2 , + 1 2 ⟩ z = ( 1 0 ) , ψ z − = | 1 2 , − 1 2 ⟩ z = ( 0 1 ) . {\displaystyle {\begin{array}{lclc}\psi _{x+}=\left|{\frac {1}{2}},{\frac {+1}{2}}\right\rangle _{x}=\displaystyle {\frac {1}{\sqrt {2}}}\!\!\!\!\!&{\begin{pmatrix}{1}\\{1}\end{pmatrix}},&\psi _{x-}=\left|{\frac {1}{2}},{\frac {-1}{2}}\right\rangle _{x}=\displaystyle {\frac {1}{\sqrt {2}}}\!\!\!\!\!&{\begin{pmatrix}{1}\\{-1}\end{pmatrix}},\\\psi _{y+}=\left|{\frac {1}{2}},{\frac {+1}{2}}\right\rangle _{y}=\displaystyle {\frac {1}{\sqrt {2}}}\!\!\!\!\!&{\begin{pmatrix}{1}\\{i}\end{pmatrix}},&\psi _{y-}=\left|{\frac {1}{2}},{\frac {-1}{2}}\right\rangle _{y}=\displaystyle {\frac {1}{\sqrt {2}}}\!\!\!\!\!&{\begin{pmatrix}{1}\\{-i}\end{pmatrix}},\\\psi _{z+}=\left|{\frac {1}{2}},{\frac {+1}{2}}\right\rangle _{z}=&{\begin{pmatrix}1\\0\end{pmatrix}},&\psi _{z-}=\left|{\frac {1}{2}},{\frac {-1}{2}}\right\rangle _{z}=&{\begin{pmatrix}0\\1\end{pmatrix}}.\end{array}}} (Because any eigenvector multiplied by a constant is still an eigenvector, there is ambiguity about the overall sign. In this article, the convention is chosen to make the first element imaginary and negative if there is a sign ambiguity. The present convention is used by software such as SymPy; while many physics textbooks, such as Sakurai and Griffiths, prefer to make it real and positive.) By the postulates of quantum mechanics, an experiment designed to measure the electron spin on the x, y, or z axis can only yield an eigenvalue of the corresponding spin operator (Sx, Sy or Sz) on that axis, i.e. ⁠ħ/2⁠ or −⁠ħ/2⁠. The quantum state of a particle (with respect to spin), can be represented by a two-component spinor: ψ = ( a + b i c + d i ) . 
{\displaystyle \psi ={\begin{pmatrix}a+bi\\c+di\end{pmatrix}}.} When the spin of this particle is measured with respect to a given axis (in this example, the x axis), the probability that its spin will be measured as ħ/2 is just | ⟨ ψ x + | ψ ⟩ | 2 {\displaystyle {\big |}\langle \psi _{x+}|\psi \rangle {\big |}^{2}} . Correspondingly, the probability that its spin will be measured as −ħ/2 is just | ⟨ ψ x − | ψ ⟩ | 2 {\displaystyle {\big |}\langle \psi _{x-}|\psi \rangle {\big |}^{2}} . Following the measurement, the spin state of the particle collapses into the corresponding eigenstate. As a result, if the particle's spin along a given axis has been measured to have a given eigenvalue, all measurements will yield the same eigenvalue (since | ⟨ ψ x + | ψ x + ⟩ | 2 = 1 {\displaystyle {\big |}\langle \psi _{x+}|\psi _{x+}\rangle {\big |}^{2}=1} , etc.), provided that no measurements of the spin are made along other axes. === Measurement of spin along an arbitrary axis === The operator to measure spin along an arbitrary axis direction is easily obtained from the Pauli spin matrices. Let u = (ux, uy, uz) be an arbitrary unit vector. Then the operator for spin in this direction is simply S u = ℏ 2 ( u x σ x + u y σ y + u z σ z ) . {\displaystyle S_{u}={\frac {\hbar }{2}}(u_{x}\sigma _{x}+u_{y}\sigma _{y}+u_{z}\sigma _{z}).} The operator Su has eigenvalues of ±ħ/2, just like the usual spin matrices. This method of finding the operator for spin in an arbitrary direction generalizes to higher spin states: one takes the dot product of the direction with a vector of the three operators for the three x-, y-, z-axis directions. A normalized spinor for spin-1/2 in the (ux, uy, uz) direction (which works for all spin states except spin down, where it will give 0/0) is 1 2 + 2 u z ( 1 + u z u x + i u y ) . {\displaystyle {\frac {1}{\sqrt {2+2u_{z}}}}{\begin{pmatrix}1+u_{z}\\u_{x}+iu_{y}\end{pmatrix}}.} The above spinor is obtained in the usual way by diagonalizing the σu matrix and finding the eigenstates corresponding to the eigenvalues. In quantum mechanics, vectors are termed "normalized" when multiplied by a normalizing factor, which results in the vector having a length of unity. === Compatibility of spin measurements === Since the Pauli matrices do not commute, measurements of spin along the different axes are incompatible. This means that if, for example, we know the spin along the x axis, and we then measure the spin along the y axis, we have invalidated our previous knowledge of the x axis spin. This can be seen from the property of the eigenvectors (i.e. eigenstates) of the Pauli matrices that | ⟨ ψ x ± | ψ y ± ⟩ | 2 = | ⟨ ψ x ± | ψ z ± ⟩ | 2 = | ⟨ ψ y ± | ψ z ± ⟩ | 2 = 1 2 . {\displaystyle {\big |}\langle \psi _{x\pm }|\psi _{y\pm }\rangle {\big |}^{2}={\big |}\langle \psi _{x\pm }|\psi _{z\pm }\rangle {\big |}^{2}={\big |}\langle \psi _{y\pm }|\psi _{z\pm }\rangle {\big |}^{2}={\tfrac {1}{2}}.} So when physicists measure the spin of a particle along the x axis as, for example, ħ/2, the particle's spin state collapses into the eigenstate | ψ x + ⟩ {\displaystyle |\psi _{x+}\rangle } . When we then subsequently measure the particle's spin along the y axis, the spin state will now collapse into either | ψ y + ⟩ {\displaystyle |\psi _{y+}\rangle } or | ψ y − ⟩ {\displaystyle |\psi _{y-}\rangle } , each with probability 1/2. Let us say, in our example, that we measure −ħ/2.
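The 1/2 overlap probabilities used in this walkthrough can be verified numerically. The following sketch builds the x- and y-axis eigenstates from the Pauli matrices and prints every cross-probability; the final step of the walkthrough continues after the sketch.

```python
import numpy as np

# Numerical check of the 1/2 overlap probabilities quoted in this walkthrough.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def eigenstates(pauli):
    """Return the normalized eigenvectors of a Pauli matrix, ordered by eigenvalue."""
    vals, vecs = np.linalg.eigh(pauli)
    order = np.argsort(vals)            # eigenvalue -1 first, +1 second
    return vecs[:, order[0]], vecs[:, order[1]]

x_minus, x_plus = eigenstates(sx)
y_minus, y_plus = eigenstates(sy)

for a_name, a in [("x+", x_plus), ("x-", x_minus)]:
    for b_name, b in [("y+", y_plus), ("y-", y_minus)]:
        prob = abs(np.vdot(a, b)) ** 2
        print(f"|<psi_{a_name}|psi_{b_name}>|^2 = {prob:.3f}")   # each is 0.500
```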
When we now return to measure the particle's spin along the x axis again, the probabilities that we will measure ħ/2 or −ħ/2 are each 1/2 (i.e. they are | ⟨ ψ x + | ψ y − ⟩ | 2 {\displaystyle {\big |}\langle \psi _{x+}|\psi _{y-}\rangle {\big |}^{2}} and | ⟨ ψ x − | ψ y − ⟩ | 2 {\displaystyle {\big |}\langle \psi _{x-}|\psi _{y-}\rangle {\big |}^{2}} respectively). This implies that the original measurement of the spin along the x axis is no longer valid, since the spin along the x axis will now be measured to have either eigenvalue with equal probability. === Higher spins === The spin-1/2 operator S = ħ/2 σ forms the fundamental representation of SU(2). By taking Kronecker products of this representation with itself repeatedly, one may construct all higher irreducible representations. That is, the resulting spin operators for higher-spin systems in three spatial dimensions can be calculated for arbitrarily large s using this spin operator and ladder operators. For example, taking the Kronecker product of two spin-1/2 representations yields a four-dimensional representation, which is separable into a 3-dimensional spin-1 representation (triplet states) and a 1-dimensional spin-0 representation (singlet state). The resulting irreducible representations yield the corresponding spin matrices and eigenvalues in the z-basis. Also useful in the quantum mechanics of multiparticle systems, the general Pauli group Gn is defined to consist of all n-fold tensor products of Pauli matrices. The analog of Euler's formula in terms of the Pauli matrices R ^ ( θ , n ^ ) = e i θ 2 n ^ ⋅ σ = I cos ⁡ θ 2 + i ( n ^ ⋅ σ ) sin ⁡ θ 2 {\displaystyle {\hat {R}}(\theta ,{\hat {\mathbf {n} }})=e^{i{\frac {\theta }{2}}{\hat {\mathbf {n} }}\cdot {\boldsymbol {\sigma }}}=I\cos {\frac {\theta }{2}}+i\left({\hat {\mathbf {n} }}\cdot {\boldsymbol {\sigma }}\right)\sin {\frac {\theta }{2}}} for higher spins is tractable, but less simple. == Parity == In tables of the spin quantum number s for nuclei or particles, the spin is often followed by a "+" or "−". This refers to the parity, with "+" for even parity (wave function unchanged by spatial inversion) and "−" for odd parity (wave function negated by spatial inversion). For example, see the isotopes of bismuth, in which the list of isotopes includes the column nuclear spin and parity. For Bi-209, the longest-lived isotope, the entry 9/2− means that the nuclear spin is 9/2 and the parity is odd. == Measuring spin == The nuclear spin of atoms can be determined by sophisticated improvements to the original Stern–Gerlach experiment. A single-energy (monochromatic) molecular beam of atoms in an inhomogeneous magnetic field will split into beams representing each possible spin quantum state. For an atom with electronic spin S and nuclear spin I, there are (2S + 1)(2I + 1) spin states. For example, neutral Na atoms, which have S = 1/2, were passed through a series of inhomogeneous magnetic fields that selected one of the two electronic spin states and separated the nuclear spin states, from which four beams were observed. Thus, the nuclear spin for 23Na atoms was found to be I = 3/2. The spin of pions, a type of elementary particle, was determined by the principle of detailed balance applied to those collisions of protons that produced charged pions and deuterium.
p + p → π + + d {\displaystyle p+p\rightarrow \pi ^{+}+d} The known spin values for protons and deuterium allow analysis of the collision cross-section to show that π + {\displaystyle \pi ^{+}} has spin s π = 0 {\displaystyle s_{\pi }=0} . A different approach is needed for neutral pions. In that case the decay produces two gamma-ray photons with spin one: π 0 → 2 γ {\displaystyle \pi ^{0}\rightarrow 2\gamma } This result, supplemented with additional analysis, leads to the conclusion that the neutral pion also has spin zero.: 66  == Applications == Spin has important theoretical implications and practical applications. Well-established direct applications of spin include: Nuclear magnetic resonance (NMR) spectroscopy in chemistry; Electron spin resonance (ESR or EPR) spectroscopy in chemistry and physics; Magnetic resonance imaging (MRI) in medicine, a type of applied NMR, which relies on proton spin density; Giant magnetoresistive (GMR) drive-head technology in modern hard disks. Electron spin plays an important role in magnetism, with applications for instance in computer memories. The manipulation of nuclear spin by radio-frequency waves (nuclear magnetic resonance) is important in chemical spectroscopy and medical imaging. Spin–orbit coupling leads to the fine structure of atomic spectra, which is used in atomic clocks and in the modern definition of the second. Precise measurements of the g-factor of the electron have played an important role in the development and verification of quantum electrodynamics. Photon spin is associated with the polarization of light (photon polarization). An emerging application of spin is as a binary information carrier in spin transistors. The original concept, proposed in 1990, is known as the Datta–Das spin transistor. Electronics based on spin transistors are referred to as spintronics. The manipulation of spin in dilute magnetic semiconductor materials, such as metal-doped ZnO or TiO2, imparts a further degree of freedom and has the potential to facilitate the fabrication of more efficient electronics. There are many indirect applications and manifestations of spin and the associated Pauli exclusion principle, starting with the periodic table of chemistry. == History == Spin was first discovered in the context of the emission spectrum of alkali metals. Starting around 1910, many experiments on different atoms produced a collection of relationships involving quantum numbers for atomic energy levels partially summarized in Bohr's model for the atom.: 106  Transitions between levels obeyed selection rules and the rules were known to be correlated with even or odd atomic number. Additional information was known from changes to atomic spectra observed in strong magnetic fields, known as the Zeeman effect. In 1924, Wolfgang Pauli used this large collection of empirical observations to propose a new degree of freedom, introducing what he called a "two-valuedness not describable classically" associated with the electron in the outermost shell. The physical interpretation of Pauli's "degree of freedom" was initially unknown. Ralph Kronig, one of Alfred Landé's assistants, suggested in early 1925 that it was produced by the self-rotation of the electron. When Pauli heard about the idea, he criticized it severely, noting that the electron's hypothetical surface would have to be moving faster than the speed of light in order for it to rotate quickly enough to produce the necessary angular momentum. This would violate the theory of relativity.
Largely due to Pauli's criticism, Kronig decided not to publish his idea. In the autumn of 1925, the same thought came to Dutch physicists George Uhlenbeck and Samuel Goudsmit at Leiden University. Under the advice of Paul Ehrenfest, they published their results. The young physicists immediately regretted the publication: Hendrik Lorentz and Werner Heisenberg both pointed out problems with the concept of a spinning electron. Pauli was especially unconvinced and continued to pursue his two-valued degree of freedom. This allowed him to formulate the Pauli exclusion principle, stating that no two electrons can have the same quantum state in the same quantum system. Fortunately, by February 1926, Llewellyn Thomas managed to resolve a factor-of-two discrepancy between experimental results for the fine structure in the hydrogen spectrum and calculations based on Uhlenbeck and Goudsmit's (and Kronig's unpublished) model.: 385  This discrepancy was due to a relativistic effect, the difference between the electron's rotating rest frame and the nuclear rest frame; the effect is now known as Thomas precession. Thomas' result convinced Pauli that electron spin was the correct interpretation of his two-valued degree of freedom, while he continued to insist that the classical rotating charge model is invalid. In 1927, Pauli formalized the theory of spin using the theory of quantum mechanics invented by Erwin Schrödinger and Werner Heisenberg. He pioneered the use of Pauli matrices as a representation of the spin operators and introduced a two-component spinor wave-function. Pauli's theory of spin was non-relativistic. In 1928, Paul Dirac published his relativistic electron equation, using a four-component spinor (known as a "Dirac spinor") for the electron wave-function. In 1940, Pauli proved the spin–statistics theorem, which states that fermions have half-integer spin, and bosons have integer spin. In retrospect, the first direct experimental evidence of the electron spin was the Stern–Gerlach experiment of 1922. However, the correct explanation of this experiment was only given in 1927. The original interpretation assumed the two spots observed in the experiment were due to quantized orbital angular momentum. However, in 1927 Ronald Fraser showed that sodium atoms are isotropic with no orbital angular momentum and suggested that the observed magnetic properties were due to electron spin. In the same year, Phipps and Taylor applied the Stern–Gerlach technique to hydrogen atoms; the ground state of hydrogen has zero angular momentum but the measurements again showed two peaks. Once the quantum theory became established, it became clear that the original interpretation could not have been correct: the number of possible values of orbital angular momentum along any one axis is always odd, unlike the two values observed. Hydrogen atoms have a single electron with two spin states, giving the two spots observed; silver atoms have closed shells, which do not contribute to the magnetic moment, and only the unmatched outer electron's spin responds to the field. == See also == == References == == Further reading == == External links == Quotations related to Spin (physics) at Wikiquote Goudsmit on the discovery of electron spin Nature: "Milestones in 'spin' since 1896." ECE 495N Lecture 36: Spin Online lecture by S. Datta
Wikipedia/Spin_(physics)
In philosophy, naturalism is the idea that only natural laws and forces (as opposed to supernatural ones) operate in the universe. In its primary sense, it is also known as ontological naturalism, metaphysical naturalism, pure naturalism, philosophical naturalism and antisupernaturalism. "Ontological" refers to ontology, the philosophical study of what exists. Philosophers often treat naturalism as equivalent to materialism, but there are important distinctions between the philosophies. For example, philosopher Paul Kurtz argued that nature is best accounted for by reference to material principles. These principles include mass, energy, and other physical and chemical properties accepted by the scientific community. Further, this sense of naturalism holds that spirits, deities, and ghosts are not real and that there is no "purpose" in nature. This stronger formulation of naturalism is commonly referred to as metaphysical naturalism. On the other hand, the more moderate view that naturalism should be assumed in one's working methods as the current paradigm, without any further consideration of whether naturalism is true in the robust metaphysical sense, is called methodological naturalism. With the exception of pantheists – who believe that nature is identical with divinity while not recognizing a distinct personal anthropomorphic god – theists challenge the idea that nature contains all of reality. According to some theists, natural laws may be viewed as secondary causes of God(s). In the 20th century, Willard Van Orman Quine, George Santayana, and other philosophers argued that the success of naturalism in science meant that scientific methods should also be used in philosophy. According to this view, science and philosophy are not always distinct from one another, but instead form a continuum. "Naturalism is not so much a special system as a point of view or tendency common to a number of philosophical and religious systems; not so much a well-defined set of positive and negative doctrines as an attitude or spirit pervading and influencing many doctrines. As the name implies, this tendency consists essentially in looking upon nature as the one original and fundamental source of all that exists, and in attempting to explain everything in terms of nature. Either the limits of nature are also the limits of existing reality, or at least the first cause, if its existence is found necessary, has nothing to do with the working of natural agencies. All events, therefore, find their adequate explanation within nature itself. But, as the terms nature and natural are themselves used in more than one sense, the term naturalism is also far from having one fixed meaning". == History == === Ancient and medieval philosophy === Naturalism is most notably a Western phenomenon, but an equivalent idea has long existed in the East. Naturalism was the foundation of two out of six orthodox schools and one heterodox school of Hinduism. Samkhya, one of the oldest schools of Indian philosophy puts nature (Prakriti) as the primary cause of the universe, without assuming the existence of a personal God or Ishvara. The Carvaka, Nyaya, Vaisheshika schools originated in the 7th, 6th, and 2nd century BCE, respectively. 
Similarly, though unnamed and never articulated into a coherent system, one tradition within Confucian philosophy embraced a form of Naturalism dating to the Wang Chong in the 1st century, if not earlier, but it arose independently and had little influence on the development of modern naturalist philosophy or on Eastern or Western culture. Western metaphysical naturalism originated in ancient Greek philosophy. The earliest pre-Socratic philosophers, especially the Milesians (Thales, Anaximander, and Anaximenes) and the atomists (Leucippus and Democritus), were labeled by their peers and successors "the physikoi" (from the Greek φυσικός or physikos, meaning "natural philosopher" borrowing on the word φύσις or physis, meaning "nature") because they investigated natural causes, often excluding any role for gods in the creation or operation of the world. This eventually led to fully developed systems such as Epicureanism, which sought to explain everything that exists as the product of atoms falling and swerving in a void. Aristotle surveyed the thought of his predecessors and conceived of nature in a way that charted a middle course between their excesses. Plato's world of eternal and unchanging Forms, imperfectly represented in matter by a divine Artisan, contrasts sharply with the various mechanistic Weltanschauungen, of which atomism was, by the fourth century at least, the most prominent … This debate was to persist throughout the ancient world. Atomistic mechanism got a shot in the arm from Epicurus … while the Stoics adopted a divine teleology … The choice seems simple: either show how a structured, regular world could arise out of undirected processes, or inject intelligence into the system. This was how Aristotle… when still a young acolyte of Plato, saw matters. Cicero… preserves Aristotle's own cave-image: if troglodytes were brought on a sudden into the upper world, they would immediately suppose it to have been intelligently arranged. But Aristotle grew to abandon this view; although he believes in a divine being, the Prime Mover is not the efficient cause of action in the Universe, and plays no part in constructing or arranging it … But, although he rejects the divine Artificer, Aristotle does not resort to a pure mechanism of random forces. Instead he seeks to find a middle way between the two positions, one which relies heavily on the notion of Nature, or phusis. With the rise and dominance of Christianity in the West and the later spread of Islam, metaphysical naturalism was generally abandoned by intellectuals. Thus, there is little evidence for it in medieval philosophy. === Modern philosophy === It was not until the early modern era of philosophy and the Age of Enlightenment that naturalists like Benedict Spinoza (who put forward a theory of psychophysical parallelism), David Hume, and the proponents of French materialism (notably Denis Diderot, Julien La Mettrie, and Baron d'Holbach) started to emerge again in the 17th and 18th centuries. In this period, some metaphysical naturalists adhered to a distinct doctrine, materialism, which became the dominant category of metaphysical naturalism widely defended until the end of the 19th century. Thomas Hobbes was a proponent of naturalism in ethics who acknowledged normative truths and properties. Immanuel Kant rejected (reductionist) materialist positions in metaphysics, but he was not hostile to naturalism. His transcendental philosophy is considered to be a form of liberal naturalism. 
In late modern philosophy, Naturphilosophie, a form of natural philosophy, was developed by Friedrich Wilhelm Joseph von Schelling and Georg Wilhelm Friedrich Hegel as an attempt to comprehend nature in its totality and to outline its general theoretical structure. A version of naturalism that arose after Hegel was Ludwig Feuerbach's anthropological materialism, which influenced Karl Marx and Friedrich Engels's historical materialism, Engels's "materialist dialectic" philosophy of nature (Dialectics of Nature), and their follower Georgi Plekhanov's dialectical materialism. Another notable school of late modern philosophy advocating naturalism was German materialism: members included Ludwig Büchner, Jacob Moleschott, and Carl Vogt. The current usage of the term naturalism "derives from debates in America in the first half of the 20th century. The self-proclaimed 'naturalists' from that period included John Dewey, Ernest Nagel, Sidney Hook, and Roy Wood Sellars." === Contemporary philosophy === Currently, metaphysical naturalism is more widely embraced than in previous centuries, especially but not exclusively in the natural sciences and the Anglo-American, analytic philosophical communities. While the vast majority of the population of the world remains firmly committed to non-naturalistic worldviews, contemporary defenders of naturalism and/or naturalistic theses and doctrines today include Graham Oppy, Kai Nielsen, J. J. C. Smart, David Malet Armstrong, David Papineau, Paul Kurtz, Brian Leiter, Daniel Dennett, Michael Devitt, Fred Dretske, Paul and Patricia Churchland, Mario Bunge, Jonathan Schaffer, Hilary Kornblith, Leonard Olson, Quentin Smith, Paul Draper and Michael Martin, among many other academic philosophers. According to David Papineau, contemporary naturalism is a consequence of the build-up of scientific evidence during the twentieth century for the "causal closure of the physical", the doctrine that all physical effects can be accounted for by physical causes. By the middle of the twentieth century, the acceptance of the causal closure of the physical realm led to even stronger naturalist views. The causal closure thesis implies that any mental and biological causes must themselves be physically constituted, if they are to produce physical effects. It thus gives rise to a particularly strong form of ontological naturalism, namely the physicalist doctrine that any state that has physical effects must itself be physical. From the 1950s onwards, philosophers began to formulate arguments for ontological physicalism. Some of these arguments appealed explicitly to the causal closure of the physical realm (Feigl 1958, Oppenheim and Putnam 1958). In other cases, the reliance on causal closure lay below the surface. However, it is not hard to see that even in these latter cases the causal closure thesis played a crucial role. In contemporary continental philosophy, Quentin Meillassoux proposed speculative materialism, a post-Kantian return to David Hume which can strengthen classical materialist ideas. This speculative approach to philosophical naturalism has been further developed by other contemporary thinkers including Ray Brassier and Drew M. Dalton. === Etymology === The term "methodological naturalism" is much more recent, though. According to Ronald Numbers, it was coined in 1983 by Paul de Vries, a Wheaton College philosopher. 
De Vries distinguished between what he called "methodological naturalism", a disciplinary method that says nothing about God's existence, and "metaphysical naturalism", which "denies the existence of a transcendent God". The term "methodological naturalism" had been used in 1937 by Edgar S. Brightman in an article in The Philosophical Review as a contrast to "naturalism" in general, but there the idea was not really developed to its more recent distinctions. == Description == According to Steven Schafersman, naturalism is a philosophy that maintains that: "Nature encompasses all that exists throughout space and time; Nature (the universe or cosmos) consists only of natural elements, that is, of spatio-temporal physical substance – mass–energy. Non-physical or quasi-physical substance, such as information, ideas, values, logic, mathematics, intellect, and other emergent phenomena, either supervene upon the physical or can be reduced to a physical account; Nature operates by the laws of physics and, in principle, can be explained and understood by science and philosophy; The supernatural does not exist, i.e., only nature is real. Naturalism is therefore a metaphysical philosophy opposed primarily by supernaturalism". Or, as Carl Sagan succinctly put it: "The Cosmos is all that is or ever was or ever will be." In addition, Arthur C. Danto states that naturalism, in recent usage, is a species of philosophical monism according to which whatever exists or happens is natural in the sense of being susceptible to explanation through methods which, although paradigmatically exemplified in the natural sciences, are continuous from domain to domain of objects and events. Hence, naturalism is polemically defined as repudiating the view that there exist or could exist any entities which lie, in principle, beyond the scope of scientific explanation. Arthur Newell Strahler states: "The naturalistic view is that the particular universe we observe came into existence and has operated through all time and in all its parts without the impetus or guidance of any supernatural agency." "The great majority of contemporary philosophers urge that reality is exhausted by nature, containing nothing 'supernatural', and that the scientific method should be used to investigate all areas of reality, including the 'human spirit'." Philosophers widely regard naturalism as a "positive" term, and "few active philosophers nowadays are happy to announce themselves as 'non-naturalists'". Philosophers concerned with religion, however, "tend to be less enthusiastic about 'naturalism'", and, despite an "inevitable" divergence due to its popularity when more narrowly construed (to the chagrin of John McDowell, David Chalmers and Jennifer Hornsby, for example), those not so disqualified remain nonetheless content "to set the bar for 'naturalism' higher." Alvin Plantinga stated that naturalism is presumed not to be a religion. "However, in one very important respect it resembles religion by performing the cognitive function of a religion. There is a set of deep human questions to which a religion typically provides an answer. In like manner naturalism gives a set of answers to these questions". == Providing assumptions required for science == According to Robert Priddy, all scientific study inescapably builds on at least some essential assumptions that cannot be tested by scientific processes; that is, scientists must start with some assumptions as to the ultimate analysis of the facts with which they deal.
These assumptions would then be justified partly by their adherence to the types of occurrence of which we are directly conscious, and partly by their success in representing the observed facts with a certain generality, devoid of ad hoc suppositions. Kuhn also claims that all science is based on assumptions about the character of the universe, rather than merely on empirical facts. These assumptions – a paradigm – comprise a collection of beliefs, values and techniques that are held by a given scientific community, which legitimize their systems and set the limitations to their investigation. For naturalists, nature is the only reality, the "correct" paradigm, and there is no such thing as the supernatural, i.e., anything above, beyond, or outside of nature. The scientific method is to be used to investigate all reality, including the human spirit. Some claim that naturalism is the implicit philosophy of working scientists, and that the following basic assumptions are needed to justify the scientific method: That there is an objective reality shared by all rational observers. "The basis for rationality is acceptance of an external objective reality." "Objective reality is clearly an essential thing if we are to develop a meaningful perspective of the world. Nevertheless its very existence is assumed." "Our belief that objective reality exists is an assumption that it arises from a real world outside of ourselves. As infants we made this assumption unconsciously. People are happy to make this assumption, which adds meaning to our sensations and feelings, rather than live with solipsism." "Without this assumption, there would be only the thoughts and images in our own mind (which would be the only existing mind) and there would be no need of science, or anything else." That this objective reality is governed by natural laws. "Science, at least today, assumes that the universe obeys knowable principles that don't depend on time or place, nor on subjective parameters such as what we think, know or how we behave." Hugh Gauch argues that science presupposes that "the physical world is orderly and comprehensible." That reality can be discovered by means of systematic observation and experimentation. Stanley Sobottka said: "The assumption of external reality is necessary for science to function and to flourish. For the most part, science is the discovering and explaining of the external world." "Science attempts to produce knowledge that is as universal and objective as possible within the realm of human understanding." That nature has uniformity of laws and that most, if not all, things in nature must have at least a natural cause. Biologist Stephen Jay Gould referred to these two closely related propositions as the constancy of nature's laws and the operation of known processes. Simpson agrees that the axiom of uniformity of law, an unprovable postulate, is necessary in order for scientists to extrapolate inductive inference into the unobservable past in order to meaningfully study it. "The assumption of spatial and temporal invariance of natural laws is by no means unique to geology since it amounts to a warrant for inductive inference which, as Bacon showed nearly four hundred years ago, is the basic mode of reasoning in empirical science. Without assuming this spatial and temporal invariance, we have no basis for extrapolating from the known to the unknown and, therefore, no way of reaching general conclusions from a finite number of observations.
(Since the assumption is itself vindicated by induction, it can in no way "prove" the validity of induction — an endeavor virtually abandoned after Hume demonstrated its futility two centuries ago)." Gould also notes that a natural process such as Lyell's "uniformity of process" is an assumption: "As such, it is another a priori assumption shared by all scientists and not a statement about the empirical world." According to R. Hooykaas: "The principle of uniformity is not a law, not a rule established after comparison of facts, but a principle, preceding the observation of facts ... It is the logical principle of parsimony of causes and of economy of scientific notions. By explaining past changes by analogy with present phenomena, a limit is set to conjecture, for there is only one way in which two things are equal, but there are an infinity of ways in which they could be supposed different." That experimental procedures will be done satisfactorily without any deliberate or unintentional mistakes that will influence the results. That experimenters won't be significantly biased by their presumptions. That random sampling is representative of the entire population. A simple random sample (SRS) is the most basic probabilistic option used for creating a sample from a population. The benefit of SRS is that the investigator is guaranteed to choose a sample that represents the population, which ensures statistically valid conclusions. == Methodological naturalism == Methodological naturalism, the second sense of the term "naturalism" (see above), is "the adoption or assumption of philosophical naturalism … with or without fully accepting or believing it." Robert T. Pennock used the term to clarify that the scientific method confines itself to natural explanations without assuming the existence or non-existence of the supernatural. "We may therefore be agnostic about the ultimate truth of [philosophical] naturalism, but nevertheless adopt it and investigate nature as if nature is all that there is." According to Ronald Numbers, the term "methodological naturalism" was coined in 1983 by Paul de Vries, a Wheaton College philosopher. Both Schafersman and Strahler assert that it is illogical to try to decouple the two senses of naturalism. "While science as a process only requires methodological naturalism, the practice or adoption of methodological naturalism entails a logical and moral belief in philosophical naturalism, so they are not logically decoupled." This "[philosophical] naturalistic view is espoused by science as its fundamental assumption." But Eugenie Scott finds it imperative to do so for the expediency of deprogramming the religious. "Scientists can defuse some of the opposition to evolution by first recognizing that the vast majority of Americans are believers, and that most Americans want to retain their faith." Scott apparently believes that "individuals can retain religious beliefs and still accept evolution through methodological naturalism. Scientists should therefore avoid mentioning metaphysical naturalism and use methodological naturalism instead." "Even someone who may disagree with my logic … often understands the strategic reasons for separating methodological from philosophical naturalism—if we want more Americans to understand evolution."
Scott's approach has found success, as illustrated in Ecklund's study, where some religious scientists reported that their religious beliefs affect the way they think about the implications – often moral – of their work, but not the way they practice science within methodological naturalism. Papineau notes that philosophers concerned with religion tend to be less enthusiastic about metaphysical naturalism, and that those not so disqualified remain content "to set the bar for 'naturalism' higher." In contrast to Schafersman, Strahler, and Scott, Robert T. Pennock, an expert witness at the Kitzmiller v. Dover Area School District trial and cited by the Judge in his Memorandum Opinion, described "methodological naturalism", stating that it is not based on dogmatic metaphysical naturalism. Pennock further states that as supernatural agents and powers "are above and beyond the natural world and its agents and powers" and "are not constrained by natural laws", only logical impossibilities constrain what a supernatural agent cannot do. In addition he says: "If we could apply natural knowledge to understand supernatural powers, then, by definition, they would not be supernatural." "Because the supernatural is necessarily a mystery to us, it can provide no grounds on which one can judge scientific models." "Experimentation requires observation and control of the variables.... But by definition we have no control over supernatural entities or forces." The position that the study of the functioning of nature is also the study of its origin contrasts with that of opponents, who hold that the functioning of the cosmos is unrelated to how it originated. While such opponents are open to supernatural fiat in the cosmos's invention and coming into existence, they do not appeal to the supernatural when explaining, through scientific study, how the cosmos functions. They agree that allowing "science to appeal to untestable supernatural powers to explain how nature functions would make the scientist's task meaningless, undermine the discipline that allows science to make progress, and would be as profoundly unsatisfying as the ancient Greek playwright's reliance upon the deus ex machina to extract his hero from a difficult predicament." === Views on methodological naturalism === ==== W. V. O. Quine ==== W. V. O. Quine describes naturalism as the position that there is no higher tribunal for truth than natural science itself. In his view, there is no better method than the scientific method for judging the claims of science, and there is neither any need nor any place for a "first philosophy", such as (abstract) metaphysics or epistemology, that could stand behind and justify science or the scientific method. Therefore, philosophy should feel free to make use of the findings of scientists in its own pursuit, while also feeling free to offer criticism when those claims are ungrounded, confused, or inconsistent. In Quine's view, philosophy is "continuous with" science, and both are empirical. Naturalism is not a dogmatic belief that the modern view of science is entirely correct. Instead, it simply holds that science is the best way to explore the processes of the universe and that those processes are what modern science is striving to understand. ==== Karl Popper ==== Karl Popper equated naturalism with the inductive theory of science. He rejected it based on his general critique of induction (see problem of induction), yet acknowledged its utility as a means for inventing conjectures.
A naturalistic methodology (sometimes called an "inductive theory of science") has its value, no doubt. … I reject the naturalistic view: It is uncritical. Its upholders fail to notice that whenever they believe to have discovered a fact, they have only proposed a convention. Hence the convention is liable to turn into a dogma. This criticism of the naturalistic view applies not only to its criterion of meaning, but also to its idea of science, and consequently to its idea of empirical method. Popper instead proposed that science should adopt a methodology based on falsifiability for demarcation, because no number of experiments can ever prove a theory, but a single experiment can contradict one. Popper holds that scientific theories are characterized by falsifiability. ==== Alvin Plantinga ==== Alvin Plantinga, Professor Emeritus of Philosophy at Notre Dame, and a Christian, has become a well-known critic of naturalism. He suggests, in his evolutionary argument against naturalism, that the probability that evolution has produced humans with reliable true beliefs, is low or inscrutable, unless the evolution of humans was guided (for example, by God). According to David Kahan of the University of Glasgow, in order to understand how beliefs are warranted, a justification must be found in the context of supernatural theism, as in Plantinga's epistemology. (See also supernormal stimuli). Plantinga argues that together, naturalism and evolution provide an insurmountable "defeater for the belief that our cognitive faculties are reliable", i.e., a skeptical argument along the lines of Descartes' evil demon or brain in a vat. Take philosophical naturalism to be the belief that there aren't any supernatural entities – no such person as God, for example, but also no other supernatural entities, and nothing at all like God. My claim was that naturalism and contemporary evolutionary theory are at serious odds with one another – and this despite the fact that the latter is ordinarily thought to be one of the main pillars supporting the edifice of the former. (Of course I am not attacking the theory of evolution, or anything in that neighborhood; I am instead attacking the conjunction of naturalism with the view that human beings have evolved in that way. I see no similar problems with the conjunction of theism and the idea that human beings have evolved in the way contemporary evolutionary science suggests.) More particularly, I argued that the conjunction of naturalism with the belief that we human beings have evolved in conformity with current evolutionary doctrine … is in a certain interesting way self-defeating or self-referentially incoherent. The argument is controversial and has been criticized as seriously flawed, for example, by Elliott Sober. ==== Robert T. Pennock ==== Robert T. Pennock states that as supernatural agents and powers "are above and beyond the natural world and its agents and powers" and "are not constrained by natural laws", only logical impossibilities constrain what a supernatural agent cannot do. He says: "If we could apply natural knowledge to understand supernatural powers, then, by definition, they would not be supernatural." As the supernatural is necessarily a mystery to us, it can provide no grounds on which one can judge scientific models. "Experimentation requires observation and control of the variables.... But by definition we have no control over supernatural entities or forces." 
Science does not deal with meanings; the closed system of scientific reasoning cannot be used to define itself. Allowing science to appeal to untestable supernatural powers would make the scientist's task meaningless, undermine the discipline that allows science to make progress, and "would be as profoundly unsatisfying as the ancient Greek playwright's reliance upon the deus ex machina to extract his hero from a difficult predicament." Naturalism of this sort says nothing about the existence or nonexistence of the supernatural, which by this definition is beyond natural testing. As a practical consideration, the rejection of supernatural explanations would merely be pragmatic, thus it would nonetheless be possible for an ontological supernaturalist to espouse and practice methodological naturalism. For example, scientists may believe in God while practicing methodological naturalism in their scientific work. This position does not preclude knowledge that is somehow connected to the supernatural. Generally however, anything that one can examine and explain scientifically would not be supernatural, simply by definition. == See also == == References == === Citations === === References === === Further reading === Mario De Caro and David Macarthur (eds) Naturalism in Question. Cambridge, Mass: Harvard University Press, 2004. Mario De Caro and David Macarthur (eds) Naturalism and Normativity. New York: Columbia University Press, 2010. Friedrich Albert Lange, The History of Materialism, London: Kegan Paul, Trench, Trubner & Co Ltd, 1925, ISBN 0-415-22525-6 David Macarthur, "Quinean Naturalism in Question," Philo. vol 11, no. 1 (2008). Sander Verhaeg, Working from Within: The Nature and Development of Quine's Naturalism. New York: Oxford University Press, 2018. == External links == Media related to Naturalism (philosophy) at Wikimedia Commons
Wikipedia/Methodological_naturalism
In physics, a string is a physical entity postulated in string theory and related subjects. Unlike elementary particles, which are zero-dimensional or point-like by definition, strings are one-dimensional extended entities. Researchers often have an interest in string theories because theories in which the fundamental entities are strings rather than point particles automatically have many properties that some physicists expect to hold in a fundamental theory of physics. Most notably, a theory of strings that evolve and interact according to the rules of quantum mechanics will automatically describe quantum gravity. == Overview == In string theory, the strings may be open (forming a segment with two endpoints) or closed (forming a loop like a circle) and may have other special properties. Prior to 1995, there were five known versions of string theory incorporating the idea of supersymmetry (these five are known as superstring theories) and two versions without supersymmetry known as bosonic string theories, which differed in the type of strings and in other aspects. Today these different superstring theories are thought to arise as different limiting cases of a single theory called M-theory. In string theories of particle physics, the strings are very tiny, much smaller than can be observed in today's particle accelerators. The characteristic length scale of strings is typically on the order of the Planck length, about 10⁻³⁵ meters, the scale at which the effects of quantum gravity are believed to become significant. Therefore, on much larger length scales, such as the scales visible in physics laboratories, such entities would appear to be zero-dimensional point particles. Strings are able to vibrate as harmonic oscillators, and different vibrational states of the same string are interpreted as different types of particles. In string theories, strings vibrating at different frequencies constitute the multiple fundamental particles found in the current Standard Model of particle physics. Strings are also sometimes studied in nuclear physics, where they are used to model flux tubes. As a string propagates through spacetime, it sweeps out a two-dimensional surface called its worldsheet. This is analogous to the one-dimensional worldline traced out by a point particle. The physics of a string is described by means of a two-dimensional conformal field theory associated with the worldsheet. The formalism of two-dimensional conformal field theory also has many applications outside of string theory, for example in condensed matter physics and parts of pure mathematics. == Types of strings == === Closed and open strings === Strings can be either open or closed. A closed string is a string that has no end-points, and therefore is topologically equivalent to a circle. An open string, on the other hand, has two end-points and is topologically equivalent to a line interval. Not all string theories contain open strings, but every theory must contain closed strings, as interactions between open strings can always result in closed strings. The oldest superstring theory containing open strings was type I string theory. However, the developments in string theory in the 1990s have shown that open strings should always be thought of as ending on new physical degrees of freedom called D-branes, and the spectrum of possibilities for open strings has significantly increased. Open and closed strings are generally associated with characteristic vibrational modes.
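The Planck-length figure quoted above can be checked with a standard back-of-the-envelope estimate from the fundamental constants; the numerical values below are rounded, and nothing in the estimate is specific to any particular string theory:

\[
\ell_P = \sqrt{\frac{\hbar G}{c^3}}
\approx \sqrt{\frac{(1.05\times 10^{-34}\,\mathrm{J\,s})\,(6.67\times 10^{-11}\,\mathrm{m^3\,kg^{-1}\,s^{-2}})}{(3.0\times 10^{8}\,\mathrm{m\,s^{-1}})^{3}}}
\approx 1.6\times 10^{-35}\,\mathrm{m}.
\]

This is roughly twenty orders of magnitude smaller than a proton, which is why, at the scales probed by existing accelerators, strings of this size would be indistinguishable from point particles.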
One of the vibration modes of a closed string can be identified as the graviton. In certain string theories, the lowest-energy vibration of an open string is a tachyon and can undergo tachyon condensation. Other vibrational modes of open strings exhibit the properties of photons and gluons. === Orientation === Strings can also possess an orientation, which can be thought of as an internal "arrow" that distinguishes the string from one with the opposite orientation. By contrast, an unoriented string is one with no such arrow on it. == See also == Cosmic strings Elementary particle Brane D-brane == References == Schwarz, John (2000). "Introduction to Superstring Theory". Retrieved Dec. 12, 2005. "NOVA's strings homepage"
Wikipedia/String_(physics)
The scientific method is an empirical method for acquiring knowledge that has been referred to while doing science since at least the 17th century. Historically, it was developed through the centuries from the ancient and medieval world. The scientific method involves careful observation coupled with rigorous skepticism, because cognitive assumptions can distort the interpretation of the observation. Scientific inquiry includes creating a testable hypothesis through inductive reasoning, testing it through experiments and statistical analysis, and adjusting or discarding the hypothesis based on the results. Although procedures vary across fields, the underlying process is often similar. In more detail: the scientific method involves making conjectures (hypothetical explanations), predicting the logical consequences of hypothesis, then carrying out experiments or empirical observations based on those predictions. A hypothesis is a conjecture based on knowledge obtained while seeking answers to the question. Hypotheses can be very specific or broad but must be falsifiable, implying that it is possible to identify a possible outcome of an experiment or observation that conflicts with predictions deduced from the hypothesis; otherwise, the hypothesis cannot be meaningfully tested. While the scientific method is often presented as a fixed sequence of steps, it actually represents a set of general principles. Not all steps take place in every scientific inquiry (nor to the same degree), and they are not always in the same order. Numerous discoveries have not followed the textbook model of the scientific method and chance has played a role, for instance. == History == The history of the scientific method considers changes in the methodology of scientific inquiry, not the history of science itself. The development of rules for scientific reasoning has not been straightforward; the scientific method has been the subject of intense and recurring debate throughout the history of science, and eminent natural philosophers and scientists have argued for the primacy of various approaches to establishing scientific knowledge. Different early expressions of empiricism and the scientific method can be found throughout history, for instance with the ancient Stoics, Aristotle, Epicurus, Alhazen, Avicenna, Al-Biruni, Roger Bacon, and William of Ockham. In the Scientific Revolution of the 16th and 17th centuries, some of the most important developments were the furthering of empiricism by Francis Bacon and Robert Hooke, the rationalist approach described by René Descartes, and inductivism, brought to particular prominence by Isaac Newton and those who followed him. Experiments were advocated by Francis Bacon and performed by Giambattista della Porta, Johannes Kepler, and Galileo Galilei. There was particular development aided by theoretical works by the skeptic Francisco Sanches, by idealists as well as empiricists John Locke, George Berkeley, and David Hume. C. S. Peirce formulated the hypothetico-deductive model in the 20th century, and the model has undergone significant revision since. The term "scientific method" emerged in the 19th century, as a result of significant institutional development of science, and terminologies establishing clear boundaries between science and non-science, such as "scientist" and "pseudoscience". 
Throughout the 1830s and 1850s, when Baconianism was popular, naturalists like William Whewell, John Herschel, and John Stuart Mill engaged in debates over "induction" and "facts," and were focused on how to generate knowledge. In the late 19th and early 20th centuries, a debate over realism vs. antirealism was conducted as powerful scientific theories extended beyond the realm of the observable. === Modern use and critical thought === The term "scientific method" came into popular use in the twentieth century; Dewey's 1910 book, How We Think, inspired popular guidelines. It appeared in dictionaries and science textbooks, although there was little consensus on its meaning. Although there was growth through the middle of the twentieth century, by the 1960s and 1970s numerous influential philosophers of science such as Thomas Kuhn and Paul Feyerabend had questioned the universality of the "scientific method," and largely replaced the notion of science as a homogeneous and universal method with that of it being a heterogeneous and local practice. In particular, Paul Feyerabend, in the 1975 first edition of his book Against Method, argued against there being any universal rules of science; Karl Popper, and Gauch 2003, disagreed with Feyerabend's claim. Later stances include physicist Lee Smolin's 2013 essay "There Is No Scientific Method", in which he espouses two ethical principles, and historian of science Daniel Thurs' chapter in the 2015 book Newton's Apple and Other Myths about Science, which concluded that the scientific method is a myth or, at best, an idealization. As myths are beliefs, they are subject to the narrative fallacy, as pointed out by Taleb. Philosophers Robert Nola and Howard Sankey, in their 2007 book Theories of Scientific Method, said that debates over the scientific method continue, and argued that Feyerabend, despite the title of Against Method, accepted certain rules of method and attempted to justify those rules with a meta methodology. Staddon (2017) argues it is a mistake to try following rules in the absence of an algorithmic scientific method; in that case, "science is best understood through examples". But algorithmic methods, such as disproof of existing theory by experiment have been used since Alhacen (1027) and his Book of Optics, and Galileo (1638) and his Two New Sciences, and The Assayer, which still stand as scientific method. == Elements of inquiry == === Overview === The scientific method is the process by which science is carried out. As in other areas of inquiry, science (through the scientific method) can build on previous knowledge, and unify understanding of its studied topics over time. Historically, the development of the scientific method was critical to the Scientific Revolution. The overall process involves making conjectures (hypotheses), predicting their logical consequences, then carrying out experiments based on those predictions to determine whether the original conjecture was correct. However, there are difficulties in a formulaic statement of method. Though the scientific method is often presented as a fixed sequence of steps, these actions are more accurately general principles. Not all steps take place in every scientific inquiry (nor to the same degree), and they are not always done in the same order. ==== Factors of scientific inquiry ==== There are different ways of outlining the basic method used for scientific inquiry. 
The scientific community and philosophers of science generally agree on the following classification of method components. These methodological elements and organization of procedures tend to be more characteristic of experimental sciences than social sciences. Nonetheless, the cycle of formulating hypotheses, testing and analyzing the results, and formulating new hypotheses will resemble the cycle described below. The scientific method is an iterative, cyclical process through which information is continually revised. It is generally recognized to develop advances in knowledge through the following elements, in varying combinations or contributions: characterizations (observations, definitions, and measurements of the subject of inquiry); hypotheses (theoretical, hypothetical explanations of observations and measurements of the subject); predictions (inductive and deductive reasoning from the hypothesis or theory); and experiments (tests of all of the above). Each element of the scientific method is subject to peer review for possible mistakes. These activities do not describe all that scientists do but apply mostly to experimental sciences (e.g., physics, chemistry, biology, and psychology). The elements above are often taught in the educational system as "the scientific method". The scientific method is not a single recipe: it requires intelligence, imagination, and creativity. In this sense, it is not a mindless set of standards and procedures to follow but is rather an ongoing cycle, constantly developing more useful, accurate, and comprehensive models and methods. For example, when Einstein developed the Special and General Theories of Relativity, he did not in any way refute or discount Newton's Principia. On the contrary, if the astronomically massive, the feather-light, and the extremely fast are removed from Einstein's theories – all phenomena Newton could not have observed – Newton's equations are what remain. Einstein's theories are expansions and refinements of Newton's theories and, thus, increase confidence in Newton's work. An iterative, pragmatic scheme of the four points above is sometimes offered as a guideline for proceeding: (1) define a question; (2) gather information and resources (observe); (3) form an explanatory hypothesis; (4) test the hypothesis by performing an experiment and collecting data in a reproducible manner; (5) analyze the data; (6) interpret the data and draw conclusions that serve as a starting point for a new hypothesis; (7) publish results; (8) retest (frequently done by other scientists). The iterative cycle inherent in this step-by-step method goes from point 3 to 6 and back to 3 again. While this schema outlines a typical hypothesis/testing method, many philosophers, historians, and sociologists of science, including Paul Feyerabend, claim that such descriptions of scientific method have little relation to the ways that science is actually practiced. === Characterizations === The basic elements of the scientific method are illustrated by the following example, which occurred from 1944 to 1953, from the discovery of the structure of DNA; the sentences describing the DNA work are interleaved with the general discussion below. In 1950, it was known that genetic inheritance had a mathematical description, starting with the studies of Gregor Mendel, and that DNA contained genetic information (Oswald Avery's transforming principle). But the mechanism of storing genetic information (i.e., genes) in DNA was unclear.
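Purely as a schematic illustration of the loop from points 3 to 6 above, the cycle of hypothesizing, predicting, testing and revising can be written as a short program. This is a toy sketch: the "experiment" below is a simulated noisy measurement with an assumed true slope of 2.0, and the update rule is an arbitrary choice made for the example, not part of any standard formulation.

```python
import random

def run_experiment(x, true_slope=2.0, noise=0.1):
    """Toy stand-in for nature: a noisy measurement of y at a chosen x."""
    return true_slope * x + random.gauss(0.0, noise)

hypothesis = 1.0                        # step 3: form an explanatory hypothesis (a guessed slope)
for trial in range(50):                 # steps 4 to 6, repeated
    x = random.uniform(1.0, 10.0)
    prediction = hypothesis * x         # deduce a testable prediction from the hypothesis
    observation = run_experiment(x)     # step 4: perform the experiment and collect data
    error = observation - prediction    # step 5: analyze the data
    hypothesis += 0.5 * error / x       # step 6: revise the hypothesis, then return to step 3

print(f"hypothesis after 50 trials: slope ≈ {hypothesis:.2f}")  # settles near the assumed 2.0
```

The point of the sketch is only the shape of the cycle: a conjecture is held provisionally, confronted with measurements, and revised when prediction and observation disagree.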
Researchers in Bragg's laboratory at Cambridge University made X-ray diffraction pictures of various molecules, starting with crystals of salt, and proceeding to more complicated substances. Using clues painstakingly assembled over decades, beginning with its chemical composition, it was determined that it should be possible to characterize the physical structure of DNA, and the X-ray images would be the vehicle. The scientific method depends upon increasingly sophisticated characterizations of the subjects of investigation. (The subjects can also be called unsolved problems or the unknowns.) For example, Benjamin Franklin conjectured, correctly, that St. Elmo's fire was electrical in nature, but it has taken a long series of experiments and theoretical changes to establish this. While seeking the pertinent properties of the subjects, careful thought may also entail some definitions and observations; these observations often demand careful measurements and/or counting, and can take the form of expansive empirical research. A scientific question can refer to the explanation of a specific observation, as in "Why is the sky blue?", but can also be open-ended, as in "How can I design a drug to cure this particular disease?" This stage frequently involves finding and evaluating evidence from previous experiments, personal scientific observations or assertions, as well as the work of other scientists. If the answer is already known, a different question that builds on the evidence can be posed. When applying the scientific method to research, determining a good question can be very difficult and it will affect the outcome of the investigation. The systematic, careful collection of measurements or counts of relevant quantities is often the critical difference between pseudo-sciences, such as alchemy, and sciences, such as chemistry or biology. Scientific measurements are usually tabulated, graphed, or mapped, and statistical manipulations, such as correlation and regression, performed on them. The measurements might be made in a controlled setting, such as a laboratory, or made on more or less inaccessible or unmanipulatable objects such as stars or human populations. The measurements often require specialized scientific instruments such as thermometers, spectroscopes, particle accelerators, or voltmeters, and the progress of a scientific field is usually intimately tied to their invention and improvement. "I am not accustomed to saying anything with certainty after only one or two observations." ==== Definition ==== The scientific definition of a term sometimes differs substantially from its natural language usage. For example, mass and weight overlap in meaning in common discourse, but have distinct meanings in mechanics. Scientific quantities are often characterized by their units of measure, which can later be described in terms of conventional physical units when communicating the work. New theories are sometimes developed after realizing certain terms have not previously been sufficiently clearly defined. For example, Albert Einstein's first paper on relativity begins by defining simultaneity and the means for determining length. These ideas were skipped over by Isaac Newton with: "I do not define time, space, place and motion, as being well known to all." Einstein's paper then demonstrates that they (viz., absolute time and length independent of motion) were approximations.
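How good an approximation Newton's absolute time is can be made quantitative with the standard Lorentz factor, quoted here only as an illustration of the preceding sentence:

\[
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad \Delta t = \gamma\,\Delta t_{0}.
\]

For an everyday speed such as v = 300 m/s, v/c ≈ 10⁻⁶ and γ − 1 ≈ 5 × 10⁻¹³, so moving and stationary clocks agree to better than a part in a trillion; only at speeds approaching c does the approximation of a single universal time visibly fail.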
Francis Crick cautions us that when characterizing a subject, however, it can be premature to define something when it remains ill-understood. In Crick's study of consciousness, he actually found it easier to study awareness in the visual system, rather than to study free will, for example. His cautionary example was the gene; the gene was much more poorly understood before Watson and Crick's pioneering discovery of the structure of DNA; it would have been counterproductive to spend much time on the definition of the gene before them. === Hypothesis development === Linus Pauling proposed that DNA might be a triple helix. This hypothesis was also considered by Francis Crick and James D. Watson but discarded. When Watson and Crick learned of Pauling's hypothesis, they understood from existing data that Pauling was wrong, and that Pauling would soon admit his difficulties with that structure. A hypothesis is a suggested explanation of a phenomenon, or, alternatively, a reasoned proposal suggesting a possible correlation between or among a set of phenomena. Normally, hypotheses have the form of a mathematical model. Sometimes, but not always, they can also be formulated as existential statements (stating that some particular instance of the phenomenon being studied has some characteristic) or as causal explanations, which have the general form of universal statements (stating that every instance of the phenomenon has a particular characteristic). Scientists are free to use whatever resources they have – their own creativity, ideas from other fields, inductive reasoning, Bayesian inference, and so on – to imagine possible explanations for a phenomenon under study. Albert Einstein once observed that "there is no logical bridge between phenomena and their theoretical principles." Charles Sanders Peirce, borrowing a page from Aristotle (Prior Analytics, 2.25), described the incipient stages of inquiry, instigated by the "irritation of doubt" to venture a plausible guess, as abductive reasoning. The history of science is filled with stories of scientists claiming a "flash of inspiration", or a hunch, which then motivated them to look for evidence to support or refute their idea. Michael Polanyi made such creativity the centerpiece of his discussion of methodology. William Glen observes that the success of a hypothesis, or its service to science, lies not simply in its perceived "truth", or power to displace, subsume or reduce a predecessor idea, but perhaps more in its ability to stimulate the research that will illuminate ... bald suppositions and areas of vagueness. In general, scientists tend to look for theories that are "elegant" or "beautiful". Scientists often use these terms to refer to a theory that accords with the known facts but is nevertheless relatively simple and easy to handle. Occam's Razor serves as a rule of thumb for choosing the most desirable amongst a group of equally explanatory hypotheses. To minimize the confirmation bias that results from entertaining a single hypothesis, strong inference emphasizes the need for entertaining multiple alternative hypotheses, and avoiding artifacts. === Predictions from the hypothesis === James D. Watson, Francis Crick, and others hypothesized that DNA had a helical structure. This implied that DNA's X-ray diffraction pattern would be x-shaped. This prediction followed from the work of Cochran, Crick and Vand (and independently by Stokes).
The Cochran-Crick-Vand-Stokes theorem provided a mathematical explanation for the empirical observation that diffraction from helical structures produces x-shaped patterns. In their first paper, Watson and Crick also noted that the double helix structure they proposed provided a simple mechanism for DNA replication, writing, "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material". Any useful hypothesis will enable predictions, by reasoning including deductive reasoning. It might predict the outcome of an experiment in a laboratory setting or the observation of a phenomenon in nature. The prediction can also be statistical and deal only with probabilities. It is essential that the outcome of testing such a prediction be currently unknown. Only in this case does a successful outcome increase the probability that the hypothesis is true. If the outcome is already known, it is called a consequence and should have already been considered while formulating the hypothesis. If the predictions are not accessible by observation or experience, the hypothesis is not yet testable and so will remain to that extent unscientific in a strict sense. A new technology or theory might make the necessary experiments feasible. For example, while a hypothesis on the existence of other intelligent species may be convincing with scientifically based speculation, no known experiment can test this hypothesis. Therefore, science itself can have little to say about the possibility. In the future, a new technique may allow for an experimental test and the speculation would then become part of accepted science. For example, Einstein's theory of general relativity makes several specific predictions about the observable structure of spacetime, such as that light bends in a gravitational field, and that the amount of bending depends in a precise way on the strength of that gravitational field. Arthur Eddington's observations made during a 1919 solar eclipse supported general relativity rather than Newtonian gravitation. === Experiments === Watson and Crick showed an initial (and incorrect) proposal for the structure of DNA to a team from King's College London – Rosalind Franklin, Maurice Wilkins, and Raymond Gosling. Franklin immediately spotted the flaws, which concerned the water content. Later, Watson saw Franklin's Photo 51, a detailed X-ray diffraction image which showed an X-shape, and was able to confirm that the structure was helical. Once predictions are made, they can be sought by experiments. If the test results contradict the predictions, the hypotheses which entailed them are called into question and become less tenable. Sometimes the experiments are conducted incorrectly or are not very well designed when compared to a crucial experiment. If the experimental results confirm the predictions, then the hypotheses are considered more likely to be correct, but might still be wrong and continue to be subject to further testing. The experimental control is a technique for dealing with observational error. This technique uses the contrast between multiple samples, or observations, or populations, under differing conditions, to see what varies or what remains the same. We vary the conditions for the acts of measurement, to help isolate what has changed. Mill's canons can then help us figure out what the important factor is. Factor analysis is one technique for discovering the important factor in an effect.
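The idea of a control can be illustrated with a deliberately artificial simulation; every number in it, including the assumed treatment effect of 2.0, is invented for the example. All conditions are held fixed except the one factor under study, so any systematic difference between the two groups can be attributed to that factor:

```python
import random

random.seed(0)

def measurement(treated, temperature=20.0, noise=1.0):
    """Toy measurement: a fixed background plus an assumed effect of +2.0 when treated."""
    effect = 2.0 if treated else 0.0
    return 10.0 + 0.1 * temperature + effect + random.gauss(0.0, noise)

# Same temperature, same instrument noise, same sample size: only the treatment differs.
control = [measurement(treated=False) for _ in range(200)]
treatment = [measurement(treated=True) for _ in range(200)]

mean = lambda values: sum(values) / len(values)
print(f"control mean:     {mean(control):.2f}")
print(f"treatment mean:   {mean(treatment):.2f}")
print(f"estimated effect: {mean(treatment) - mean(control):.2f}")  # close to the assumed 2.0
```

If the background conditions were allowed to drift between the two groups, the difference of means would no longer isolate the treatment, which is exactly the kind of observational error a control is designed to guard against.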
Depending on the predictions, the experiments can have different shapes. It could be a classical experiment in a laboratory setting, a double-blind study or an archaeological excavation. Even taking a plane from New York to Paris is an experiment that tests the aerodynamical hypotheses used for constructing the plane. These institutions thereby reduce the research function to a cost/benefit, which is expressed as money, and the time and attention of the researchers to be expended, in exchange for a report to their constituents. Current large instruments, such as CERN's Large Hadron Collider (LHC), or LIGO, or the National Ignition Facility (NIF), or the International Space Station (ISS), or the James Webb Space Telescope (JWST), entail expected costs of billions of dollars, and timeframes extending over decades. These kinds of institutions affect public policy, on a national or even international basis, and the researchers would require shared access to such machines and their adjunct infrastructure. Scientists assume an attitude of openness and accountability on the part of those experimenting. Detailed record-keeping is essential, to aid in recording and reporting on the experimental results, and supports the effectiveness and integrity of the procedure. They will also assist in reproducing the experimental results, likely by others. Traces of this approach can be seen in the work of Hipparchus (190–120 BCE), when determining a value for the precession of the Earth, while controlled experiments can be seen in the works of al-Battani (853–929 CE) and Alhazen (965–1039 CE). === Communication and iteration === Watson and Crick then produced their model, using this information along with the previously known information about DNA's composition, especially Chargaff's rules of base pairing. After considerable fruitless experimentation, being discouraged by their superior from continuing, and numerous false starts, Watson and Crick were able to infer the essential structure of DNA by concrete modeling of the physical shapes of the nucleotides which comprise it. They were guided by the bond lengths which had been deduced by Linus Pauling and by Rosalind Franklin's X-ray diffraction images. The scientific method is iterative. At any stage, it is possible to refine its accuracy and precision, so that some consideration will lead the scientist to repeat an earlier part of the process. Failure to develop an interesting hypothesis may lead a scientist to re-define the subject under consideration. Failure of a hypothesis to produce interesting and testable predictions may lead to reconsideration of the hypothesis or of the definition of the subject. Failure of an experiment to produce interesting results may lead a scientist to reconsider the experimental method, the hypothesis, or the definition of the subject. This manner of iteration can span decades and sometimes centuries. Published papers can be built upon. For example: By 1027, Alhazen, based on his measurements of the refraction of light, was able to deduce that outer space was less dense than air, that is: "the body of the heavens is rarer than the body of air". In 1079 Ibn Mu'adh's Treatise On Twilight was able to infer that Earth's atmosphere was 50 miles thick, based on atmospheric refraction of the sun's rays. This is why the scientific method is often represented as circular – new information leads to new characterisations, and the cycle of science continues. Measurements collected can be archived, passed onwards and used by others. 
Other scientists may start their own research and enter the process at any stage. They might adopt the characterization and formulate their own hypothesis, or they might adopt the hypothesis and deduce their own predictions. Often the experiment is not done by the person who made the prediction, and the characterization is based on experiments done by someone else. Published results of experiments can also serve as a hypothesis predicting their own reproducibility. === Confirmation === Science is a social enterprise, and scientific work tends to be accepted by the scientific community when it has been confirmed. Crucially, experimental and theoretical results must be reproduced by others within the scientific community. Researchers have given their lives for this vision; Georg Wilhelm Richmann was killed by ball lightning (1753) when attempting to replicate the 1752 kite-flying experiment of Benjamin Franklin. If an experiment cannot be repeated to produce the same results, this implies that the original results might have been in error. As a result, it is common for a single experiment to be performed multiple times, especially when there are uncontrolled variables or other indications of experimental error. For significant or surprising results, other scientists may also attempt to replicate the results for themselves, especially if those results would be important to their own work. Replication has become a contentious issue in social and biomedical science where treatments are administered to groups of individuals. Typically an experimental group gets the treatment, such as a drug, and the control group gets a placebo. John Ioannidis in 2005 pointed out that the method being used has led to many findings that cannot be replicated. The process of peer review involves the evaluation of the experiment by experts, who typically give their opinions anonymously. Some journals request that the experimenter provide lists of possible peer reviewers, especially if the field is highly specialized. Peer review does not certify the correctness of the results, only that, in the opinion of the reviewer, the experiments themselves were sound (based on the description supplied by the experimenter). If the work passes peer review, which occasionally may require new experiments requested by the reviewers, it will be published in a peer-reviewed scientific journal. The specific journal that publishes the results indicates the perceived quality of the work. Scientists typically are careful in recording their data, a requirement promoted by Ludwik Fleck (1896–1961) and others. Though not typically required, they might be requested to supply this data to other scientists who wish to replicate their original results (or parts of their original results), extending to the sharing of any experimental samples that may be difficult to obtain. To protect against bad science and fraudulent data, government research-granting agencies such as the National Science Foundation, and science journals, including Nature and Science, have a policy that researchers must archive their data and methods so that other researchers can test the data and methods and build on the research that has gone before. Scientific data archiving can be done at several national archives in the U.S. or the World Data Center. == Foundational principles == === Honesty, openness, and falsifiability === The unfettered principles of science are to strive for accuracy and the creed of honesty; openness already being a matter of degrees. 
Openness is restricted by the general rigour of scepticism, and of course by the matter of non-science. Smolin, in 2013, espoused ethical principles rather than giving any potentially limited definition of the rules of inquiry. His ideas stand in the context of the scale of data-driven and big science, which has seen an increased importance of honesty and, consequently, of reproducibility. His thought is that science is a community effort by those who have accreditation and are working within the community. He also warns against overzealous parsimony. Popper previously took ethical principles even further, going as far as to ascribe value to theories only if they were falsifiable. Popper used the falsifiability criterion to demarcate a scientific theory from a theory like astrology: both "explain" observations, but the scientific theory takes the risk of making predictions that decide whether it is right or wrong: "Those among us who are unwilling to expose their ideas to the hazard of refutation do not take part in the game of science." === Theory's interactions with observation === Science has limits. Those limits are usually deemed to be answers to questions that aren't in science's domain, such as faith. Science has other limits as well, as it seeks to make true statements about reality. The nature of truth, and the discussion of how scientific statements relate to reality, are best left to the article on the philosophy of science. More immediately topical limitations show themselves in the observation of reality. It is a natural limitation of scientific inquiry that there is no pure observation: theory is required to interpret empirical data, and observation is therefore influenced by the observer's conceptual framework. As science is an unfinished project, this does lead to difficulties; namely, false conclusions may be drawn because of limited information. An example here comes from Kepler and Brahe, used by Hanson to illustrate the concept. Despite observing the same sunrise, the two scientists came to different conclusions, their differing conceptual frameworks leading them to interpret what they saw differently. Johannes Kepler used Tycho Brahe's method of observation, which was to project the image of the Sun on a piece of paper through a pinhole aperture, instead of looking directly at the Sun. He disagreed with Brahe's conclusion that total eclipses of the Sun were impossible because, contrary to Brahe, he knew that there were historical accounts of total eclipses. Instead, he deduced that the images taken would become more accurate, the larger the aperture; this fact is now fundamental for optical system design. Another historic example here is the discovery of Neptune, credited as being found via mathematics because previous observers didn't know what they were looking at. === Empiricism, rationalism, and more pragmatic views === Scientific endeavour can be characterised as the pursuit of truths about the natural world or as the elimination of doubt about the same. The former is the direct construction of explanations from empirical data and logic, the latter the reduction of potential explanations. It was established above how the interpretation of empirical data is theory-laden, so neither approach is trivial. The ubiquitous element in the scientific method is empiricism, which holds that knowledge is created by a process involving observation; scientific theories generalize observations.
This is in opposition to stringent forms of rationalism, which holds that knowledge is created by the human intellect; later clarified by Popper to be built on prior theory. The scientific method embodies the position that reason alone cannot solve a particular scientific problem; it unequivocally refutes claims that revelation, political or religious dogma, appeals to tradition, commonly held beliefs, common sense, or currently held theories pose the only possible means of demonstrating truth. In 1877, C. S. Peirce characterized inquiry in general not as the pursuit of truth per se but as the struggle to move from irritating, inhibitory doubts born of surprises, disagreements, and the like, and to reach a secure belief, the belief being that on which one is prepared to act. His pragmatic views framed scientific inquiry as part of a broader spectrum and as spurred, like inquiry generally, by actual doubt, not mere verbal or "hyperbolic doubt", which he held to be fruitless. This "hyperbolic doubt" Peirce argues against here is of course just another name for Cartesian doubt associated with René Descartes. It is a methodological route to certain knowledge by identifying what can't be doubted. A strong formulation of the scientific method is not always aligned with a form of empiricism in which the empirical data is put forward in the form of experience or other abstracted forms of knowledge as in current scientific practice the use of scientific modelling and reliance on abstract typologies and theories is normally accepted. In 2010, Hawking suggested that physics' models of reality should simply be accepted where they prove to make useful predictions. He calls the concept model-dependent realism. == Rationality == Rationality embodies the essence of sound reasoning, a cornerstone not only in philosophical discourse but also in the realms of science and practical decision-making. According to the traditional viewpoint, rationality serves a dual purpose: it governs beliefs, ensuring they align with logical principles, and it steers actions, directing them towards coherent and beneficial outcomes. This understanding underscores the pivotal role of reason in shaping our understanding of the world and in informing our choices and behaviours. The following section will first explore beliefs and biases, and then get to the rational reasoning most associated with the sciences. === Beliefs and biases === Scientific methodology often directs that hypotheses be tested in controlled conditions wherever possible. This is frequently possible in certain areas, such as in the biological sciences, and more difficult in other areas, such as in astronomy. The practice of experimental control and reproducibility can have the effect of diminishing the potentially harmful effects of circumstance, and to a degree, personal bias. For example, pre-existing beliefs can alter the interpretation of results, as in confirmation bias; this is a heuristic that leads a person with a particular belief to see things as reinforcing their belief, even if another observer might disagree (in other words, people tend to observe what they expect to observe). [T]he action of thought is excited by the irritation of doubt, and ceases when belief is attained. A historical example is the belief that the legs of a galloping horse are splayed at the point when none of the horse's legs touch the ground, to the point of this image being included in paintings by its supporters. 
However, the first stop-action pictures of a horse's gallop by Eadweard Muybridge showed this to be false, and that the legs are instead gathered together. Another important human bias that plays a role is a preference for new, surprising statements (see Appeal to novelty), which can result in a search for evidence that the new is true. Poorly attested beliefs can be believed and acted upon via a less rigorous heuristic. Goldhaber and Nieto published in 2010 the observation that if theoretical structures with "many closely neighboring subjects are described by connecting theoretical concepts, then the theoretical structure acquires a robustness which makes it increasingly hard – though certainly never impossible – to overturn". When a narrative is constructed its elements become easier to believe. Fleck (1979), p. 27 notes "Words and ideas are originally phonetic and mental equivalences of the experiences coinciding with them. ... Such proto-ideas are at first always too broad and insufficiently specialized. ... Once a structurally complete and closed system of opinions consisting of many details and relations has been formed, it offers enduring resistance to anything that contradicts it". Sometimes, these relations have their elements assumed a priori, or contain some other logical or methodological flaw in the process that ultimately produced them. Donald M. MacKay has analyzed these elements in terms of limits to the accuracy of measurement and has related them to instrumental elements in a category of measurement. === Deductive and inductive reasoning === The idea of there being two opposed justifications for truth has shown up throughout the history of scientific method as analysis versus synthesis, non-ampliative/ampliative, or even confirmation and verification. (And there are other kinds of reasoning.) One uses what is observed to build towards fundamental truths, and the other derives from those fundamental truths more specific principles. Deductive reasoning is the building of knowledge based on what has been shown to be true before. It requires the assumption of facts established previously, and, given the truth of the assumptions, a valid deduction guarantees the truth of the conclusion. Inductive reasoning builds knowledge not from established truth, but from a body of observations. It requires stringent scepticism regarding observed phenomena, because cognitive assumptions can distort the interpretation of initial perceptions. An example of how inductive and deductive reasoning work can be found in the history of gravitational theory. It took thousands of years of measurements, from the Chaldean, Indian, Persian, Greek, Arabic, and European astronomers, to fully record the motion of planet Earth. Kepler (and others) were then able to build their early theories by generalizing the collected data inductively, and Newton was able to unify prior theory and measurements into the consequences of his laws of motion, published in 1687. Another common example of inductive reasoning is the observation of a counterexample to current theory inducing the need for new ideas. Le Verrier in 1859 pointed out problems with the perihelion of Mercury that showed Newton's theory to be at least incomplete. The difference between the observed precession of Mercury and that predicted by Newtonian theory was one of the things that occurred to Einstein as a possible early test of his theory of relativity. His relativistic calculations matched observation much more closely than Newtonian theory did.
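As a rough numerical check of that comparison (a sketch, not Einstein's derivation), the standard leading-order general-relativistic correction to the perihelion advance can be evaluated with rounded reference values for the Sun and Mercury's orbit; all constants below are illustrative assumptions rather than figures taken from the sources discussed here.

import math

# Leading-order general-relativistic perihelion advance per orbit:
#   delta_phi = 6 * pi * G*M / (c^2 * a * (1 - e^2))
GM_SUN = 1.327e20            # gravitational parameter of the Sun, m^3/s^2 (rounded)
C = 2.998e8                  # speed of light, m/s
A = 5.79e10                  # semi-major axis of Mercury's orbit, m (rounded)
ECC = 0.2056                 # orbital eccentricity of Mercury (rounded)
PERIOD_DAYS = 87.97          # Mercury's orbital period, days (rounded)

delta_phi = 6 * math.pi * GM_SUN / (C**2 * A * (1 - ECC**2))   # radians per orbit
orbits_per_century = 36525 / PERIOD_DAYS
arcsec_per_century = delta_phi * orbits_per_century * (180 / math.pi) * 3600
print(f"{arcsec_per_century:.1f} arcseconds per century")       # roughly 43, the observed anomaly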
Though today's Standard Model of physics suggests that at least some of the concepts surrounding Einstein's theory are still not fully understood, the theory holds to this day and is being built on deductively. A theory being assumed as true and subsequently built on is a common example of deductive reasoning. Theory building on Einstein's achievement can simply state that 'we have shown that this case fulfils the conditions under which general/special relativity applies, therefore its conclusions apply also'. If it has been properly shown that 'this case' fulfils the conditions, the conclusion follows. An extension of this is the assumption of a solution to an open problem. This weaker kind of deductive reasoning is used in current research, when multiple scientists or even teams of researchers are all gradually solving specific cases in working towards proving a larger theory. This often sees hypotheses being revised again and again as new proof emerges. This way of presenting inductive and deductive reasoning shows part of why science is often presented as being a cycle of iteration. It is important to keep in mind that the cycle's foundations lie in reasoning, and not wholly in the following of procedure. === Certainty, probabilities, and statistical inference === Claims of scientific truth can be opposed in three ways: by falsifying them, by questioning their certainty, or by asserting the claim itself to be incoherent. Incoherence, here, means internal errors in logic, like stating opposites to be true; falsification is what Popper would have called the honest work of conjecture and refutation — certainty, perhaps, is where difficulties in telling truths from non-truths arise most easily. Measurements in scientific work are usually accompanied by estimates of their uncertainty. The uncertainty is often estimated by making repeated measurements of the desired quantity. Uncertainties may also be calculated by consideration of the uncertainties of the individual underlying quantities used. Counts of things, such as the number of people in a nation at a particular time, may also have an uncertainty due to data collection limitations. Or counts may represent a sample of desired quantities, with an uncertainty that depends upon the sampling method used and the number of samples taken. In the case of measurement imprecision, there will simply be a 'probable deviation' expressing itself in a study's conclusions. Statistics are different. Inductive statistical generalisation will take sample data and extrapolate more general conclusions, which has to be justified — and scrutinised. It can even be said that statistical models are only ever useful, but never a complete representation of circumstances. In statistical analysis, expected and unexpected bias is a large factor. Research questions, the collection of data, or the interpretation of results, all are subject to larger amounts of scrutiny than in comfortably logical environments. Statistical models go through a process for validation, for which one could even say that awareness of potential biases is more important than the hard logic; errors in logic are easier to find in peer review, after all. More generally, claims to rational knowledge, and especially statistics, have to be put into their appropriate context. Simple statements such as '9 out of 10 doctors recommend' are therefore of unknown quality because they do not justify their methodology. Lack of familiarity with statistical methodologies can result in erroneous conclusions.
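A minimal sketch of the 'probable deviation' attached to repeated measurements, using only the Python standard library and invented illustrative values, reports the sample mean together with the standard error of the mean:

import statistics

# Hypothetical repeated measurements of the same quantity (illustrative values only).
measurements = [9.81, 9.79, 9.83, 9.80, 9.78, 9.82]

mean = statistics.mean(measurements)
std_dev = statistics.stdev(measurements)           # sample standard deviation
std_error = std_dev / len(measurements) ** 0.5     # standard error of the mean

print(f"estimate: {mean:.3f} ± {std_error:.3f}")   # the reported 'probable deviation'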
Beyond such simple examples, the interaction of multiple probabilities is an area where even medical professionals have shown a lack of proper understanding. Bayes' theorem is the mathematical principle laying out how prior probabilities are adjusted in the light of new information. The boy or girl paradox is a common example. In knowledge representation, Bayesian estimation of mutual information between random variables is a way to measure dependence, independence, or interdependence of the information under scrutiny. Beyond the survey methodology commonly associated with field research, the concept, together with probabilistic reasoning, is used to advance fields of science where research objects have no definitive states of being, for example in statistical mechanics. == Methods of inquiry == === Hypothetico-deductive method === The hypothetico-deductive model, or hypothesis-testing method, or "traditional" scientific method is, as the name implies, based on the formation of hypotheses and their testing via deductive reasoning. A hypothesis stating implications, often called predictions, that are falsifiable via experiment is of central importance here, as not the hypothesis but its implications are what is tested. Basically, scientists will look at the hypothetical consequences a (potential) theory holds and prove or disprove those instead of the theory itself. If an experimental test of those hypothetical consequences shows them to be false, it follows logically that the part of the theory that implied them was false also. If they show as true however, it does not prove the theory definitively. The logic of this testing is what allows this method of inquiry to be reasoned deductively. The formulated hypothesis is assumed to be 'true', and from that 'true' statement implications are inferred. If the following tests show the implications to be false, it follows that the hypothesis was false also. If the tests show the implications to be true, new insights will be gained. It is important to be aware that a positive test here will at best strongly imply but not definitively prove the tested hypothesis, since from (A ⇒ B) observing B does not establish A; only (¬B ⇒ ¬A) is valid logic. Positive outcomes, however, as Hempel put it, provide "at least some support, some corroboration or confirmation for it". This is why Popper insisted that fielded hypotheses be falsifiable, as successful tests imply very little otherwise. As Gillies put it, "successful theories are those that survive elimination through falsification". Deductive reasoning in this mode of inquiry will sometimes be replaced by abductive reasoning—the search for the most plausible explanation via logical inference. This happens, for example, in biology, where general laws are few, as valid deductions rely on solid presuppositions. === Inductive method === The inductivist approach to deriving scientific truth first rose to prominence with Francis Bacon and particularly with Isaac Newton and those who followed him. After the establishment of the HD-method, though, it was often put aside as something of a "fishing expedition". It is still valid to some degree, but today's inductive method is often far removed from the historic approach—the scale of the data collected lending new effectiveness to the method. It is most associated with data-mining projects or large-scale observation projects.
In both these cases, it is often not at all clear what the results of proposed experiments will be, and thus knowledge will arise after the collection of data through inductive reasoning. Where the traditional method of inquiry does both, the inductive approach usually formulates only a research question, not a hypothesis. Following the initial question instead, a suitable "high-throughput method" of data-collection is determined, the resulting data processed and 'cleaned up', and conclusions drawn after. "This shift in focus elevates the data to the supreme role of revealing novel insights by themselves". The advantage the inductive method has over methods formulating a hypothesis is that it is essentially free of "a researcher's preconceived notions" regarding their subject. On the other hand, inductive reasoning is always attached to a measure of certainty, as all inductively reasoned conclusions are. This measure of certainty can reach quite high degrees, though, for example in the determination of large primes, which are used in encryption software. === Mathematical modelling === Mathematical modelling, or allochthonous reasoning, typically is the formulation of a hypothesis followed by building mathematical constructs that can be tested in place of conducting physical laboratory experiments. This approach has two main factors: simplification/abstraction and, secondly, a set of correspondence rules. The correspondence rules lay out how the constructed model will relate back to reality – how truth is derived; and the simplifying steps taken in the abstraction of the given system are to reduce factors that do not bear relevance and thereby reduce unexpected errors. These steps can also help the researcher in understanding the important factors of the system, and how far parsimony can be taken until the system becomes more and more unchangeable and thereby stable. Parsimony and related principles are further explored below. Once this translation into mathematics is complete, the resulting model, in place of the corresponding system, can be analysed through purely mathematical and computational means. The results of this analysis are of course also purely mathematical in nature and get translated back to the system as it exists in reality via the previously determined correspondence rules—iteration following review and interpretation of the findings. The way such models are reasoned will often be mathematically deductive—but they don't have to be. Examples here are Monte-Carlo simulations (a minimal sketch is given at the end of this passage). These generate empirical data "arbitrarily", and, while they may not be able to reveal universal principles, they can nevertheless be useful. == Scientific inquiry == Scientific inquiry generally aims to obtain knowledge in the form of testable explanations that scientists can use to predict the results of future experiments. This allows scientists to gain a better understanding of the topic under study, and later to use that understanding to intervene in its causal mechanisms (such as to cure disease). The better an explanation is at making predictions, the more useful it frequently can be, and the more likely it will continue to explain a body of evidence better than its alternatives. The most successful explanations – those that explain and make accurate predictions in a wide range of circumstances – are often called scientific theories.
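Returning to the Monte-Carlo simulations mentioned under mathematical modelling above, a minimal sketch of the idea, estimating π from randomly generated points using only the Python standard library (the sample size is an arbitrary choice), looks like this:

import random

# Estimate pi by sampling random points in the unit square and counting
# how many fall inside the quarter circle of radius 1.
n_samples = 100_000
inside = sum(1 for _ in range(n_samples)
             if random.random() ** 2 + random.random() ** 2 <= 1.0)

pi_estimate = 4 * inside / n_samples
print(f"pi ≈ {pi_estimate:.3f}")   # "arbitrary" empirical data converging on a useful value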
Most experimental results do not produce large changes in human understanding; improvements in theoretical scientific understanding typically result from a gradual process of development over time, sometimes across different domains of science. Scientific models vary in the extent to which they have been experimentally tested and for how long, and in their acceptance in the scientific community. In general, explanations become accepted over time as evidence accumulates on a given topic, and the explanation in question proves more powerful than its alternatives at explaining the evidence. Often subsequent researchers re-formulate the explanations over time, or combine explanations to produce new explanations. === Properties of scientific inquiry === Scientific knowledge is closely tied to empirical findings and can remain subject to falsification if new experimental observations are incompatible with what is found. That is, no theory can ever be considered final since new problematic evidence might be discovered. If such evidence is found, a new theory may be proposed, or (more commonly) it is found that modifications to the previous theory are sufficient to explain the new evidence. The strength of a theory relates to how long it has persisted without major alteration to its core principles. Theories can also become subsumed by other theories. For example, Newton's laws explained thousands of years of scientific observations of the planets almost perfectly. However, these laws were then determined to be special cases of a more general theory (relativity), which explained both the (previously unexplained) exceptions to Newton's laws and predicted and explained other observations such as the deflection of light by gravity. Thus, in certain cases, independent, unconnected scientific observations can be connected, unified by principles of increasing explanatory power. Since new theories might be more comprehensive than what preceded them, and thus be able to explain more than previous ones, successor theories might be able to meet a higher standard by explaining a larger body of observations than their predecessors. For example, the theory of evolution explains the diversity of life on Earth, how species adapt to their environments, and many other patterns observed in the natural world; its most recent major modification was unification with genetics to form the modern evolutionary synthesis. In subsequent modifications, it has also subsumed aspects of many other fields such as biochemistry and molecular biology. == Heuristics == === Confirmation theory === During the course of history, one theory has succeeded another, and some have suggested further work while others have seemed content just to explain the phenomena. The reasons why one theory has replaced another are not always obvious or simple. The philosophy of science includes the question: What criteria are satisfied by a 'good' theory? This question has a long history, and many scientists, as well as philosophers, have considered it. The objective is to be able to choose one theory as preferable to another without introducing cognitive bias.
Though different thinkers emphasize different aspects, a good theory is accurate (the trivial element); is consistent, both internally and with other relevant currently accepted theories; has explanatory power, meaning its consequences extend beyond the data it is required to explain; has unificatory power, organizing otherwise confused and isolated phenomena; and is fruitful for further research. In looking for such theories, scientists will, given a lack of guidance by empirical evidence, try to adhere to parsimony in causal explanations and look for invariant observations. Scientists will sometimes also list the very subjective criterion of "formal elegance", which can indicate multiple different things. The goal here is to make the choice between theories less arbitrary. Nonetheless, these criteria contain subjective elements, and should be considered heuristics rather than definitive rules. Also, criteria such as these do not necessarily decide between alternative theories. Quoting Bird: "[Such criteria] cannot determine scientific choice. First, which features of a theory satisfy these criteria may be disputable (e.g. does simplicity concern the ontological commitments of a theory or its mathematical form?). Secondly, these criteria are imprecise, and so there is room for disagreement about the degree to which they hold. Thirdly, there can be disagreement about how they are to be weighted relative to one another, especially when they conflict." It also is debatable whether existing scientific theories satisfy all these criteria, which may represent goals not yet achieved. For example, explanatory power over all existing observations is satisfied by no one theory at the moment. ==== Parsimony ==== The desiderata of a "good" theory have been debated for centuries, going back perhaps even earlier than Occam's razor, which is often taken as an attribute of a good theory. Science tries to be simple. When gathered data supports multiple explanations, the most simple explanation for phenomena or the most simple formation of a theory is recommended by the principle of parsimony. Scientists go as far as to call simple proofs of complex statements beautiful. We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances. The concept of parsimony should not be held to imply complete frugality in the pursuit of scientific truth. The general process starts at the opposite end of there being a vast number of potential explanations and general disorder. An example can be seen in the process of Paul Krugman, who makes it explicit to "dare to be silly". He writes that in his work on new theories of international trade he reviewed prior work with an open frame of mind and broadened his initial viewpoint even in unlikely directions. Once he had a sufficient body of ideas, he would try to simplify and thus find what worked among what did not. Specific to Krugman here was to "question the question". He recognised that prior work had applied erroneous models to already present evidence, commenting that "intelligent commentary was ignored" – thus touching on the need to bridge the common bias against other circles of thought. ==== Elegance ==== Occam's razor might fall under the heading of "simple elegance", but it is arguable that parsimony and elegance pull in different directions. Introducing additional elements could simplify theory formulation, whereas simplifying a theory's ontology might lead to increased syntactical complexity.
Sometimes ad-hoc modifications of a failing idea may also be dismissed as lacking "formal elegance". This appeal to what may be called "aesthetic" is hard to characterise, but essentially about a sort of familiarity. Though, argument based on "elegance" is contentious and over-reliance on familiarity will breed stagnation. ==== Invariance ==== Principles of invariance have been a theme in scientific writing, and especially physics, since at least the early 20th century. The basic idea here is that good structures to look for are those independent of perspective, an idea that has featured earlier of course for example in Mill's Methods of difference and agreement—methods that would be referred back to in the context of contrast and invariance. But as tends to be the case, there is a difference between something being a basic consideration and something being given weight. Principles of invariance have only been given weight in the wake of Einstein's theories of relativity, which reduced everything to relations and were thereby fundamentally unchangeable, unable to be varied. As David Deutsch put it in 2009: "the search for hard-to-vary explanations is the origin of all progress". An example here can be found in one of Einstein's thought experiments. The one of a lab suspended in empty space is an example of a useful invariant observation. He imagined the absence of gravity and an experimenter free floating in the lab. — If now an entity pulls the lab upwards, accelerating uniformly, the experimenter would perceive the resulting force as gravity. The entity however would feel the work needed to accelerate the lab continuously. Through this experiment Einstein was able to equate gravitational and inertial mass; something unexplained by Newton's laws, and an early but "powerful argument for a generalised postulate of relativity". The feature, which suggests reality, is always some kind of invariance of a structure independent of the aspect, the projection. The discussion on invariance in physics is often had in the more specific context of symmetry. The Einstein example above, in the parlance of Mill would be an agreement between two values. In the context of invariance, it is a variable that remains unchanged through some kind of transformation or change in perspective. And discussion focused on symmetry would view the two perspectives as systems that share a relevant aspect and are therefore symmetrical. Related principles here are falsifiability and testability. The opposite of something being hard-to-vary are theories that resist falsification—a frustration that was expressed colourfully by Wolfgang Pauli as them being "not even wrong". The importance of scientific theories to be falsifiable finds especial emphasis in the philosophy of Karl Popper. The broader view here is testability, since it includes the former and allows for additional practical considerations. == Philosophy and discourse == Philosophy of science looks at the underpinning logic of the scientific method, at what separates science from non-science, and the ethic that is implicit in science. There are basic assumptions, derived from philosophy by at least one prominent scientist, that form the base of the scientific method – namely, that reality is objective and consistent, that humans have the capacity to perceive reality accurately, and that rational explanations exist for elements of the real world. These assumptions from methodological naturalism form a basis on which science may be grounded. 
Logical positivist, empiricist, falsificationist, and other theories have criticized these assumptions and given alternative accounts of the logic of science, but each has also itself been criticized. There are several kinds of modern philosophical conceptualizations and attempts at definitions of the method of science. One is that attempted by the unificationists, who argue for the existence of a unified definition that is useful (or at least 'works' in every context of science). Another is that of the pluralists, who argue that the sciences are too fractured for a universal definition of their method to be useful. And there are those who argue that the very attempt at definition is already detrimental to the free flow of ideas. Additionally, there have been views on the social framework in which science is done, and the impact of the sciences' social environment on research. Also, there is 'scientific method' as popularised by Dewey in How We Think (1910) and Karl Pearson in Grammar of Science (1892), as used in a fairly uncritical manner in education. === Pluralism === Scientific pluralism is a position within the philosophy of science that rejects various proposed unities of scientific method and subject matter. Scientific pluralists hold that science is not unified in one or more of the following ways: the metaphysics of its subject matter, the epistemology of scientific knowledge, or the research methods and models that should be used. Some pluralists believe that pluralism is necessary due to the nature of science. Others say that since scientific disciplines already vary in practice, there is no reason to believe this variation is wrong until a specific unification is empirically proven. Finally, some hold that pluralism should be allowed for normative reasons, even if unity were possible in theory. === Unificationism === Unificationism, in science, was a central tenet of logical positivism. Different logical positivists construed this doctrine in several different ways, e.g. as a reductionist thesis, that the objects investigated by the special sciences reduce to the objects of a common, putatively more basic domain of science, usually thought to be physics; as the thesis that all theories and results of the various sciences can or ought to be expressed in a common language or "universal slang"; or as the thesis that all the special sciences share a common scientific method. Development of the idea has been troubled by accelerated advancement in technology that has opened up many new ways to look at the world. The fact that the standards of scientific success shift with time does not only make the philosophy of science difficult; it also raises problems for the public understanding of science. We do not have a fixed scientific method to rally around and defend. === Epistemological anarchism === Paul Feyerabend examined the history of science, and was led to deny that science is genuinely a methodological process. In his 1975 book Against Method he argued that no description of scientific method could possibly be broad enough to include all the approaches and methods used by scientists, and that there are no useful and exception-free methodological rules governing the progress of science. In essence, he said that for any specific method or norm of science, one can find a historic episode where violating it has contributed to the progress of science. He jokingly suggested that, if believers in the scientific method wish to express a single universally valid rule, it should be 'anything goes'.
As has been argued before him, however, this is uneconomic; problem solvers and researchers are to be prudent with their resources during their inquiry. A more general inference against formalised method has been found through research involving interviews with scientists regarding their conception of method. This research indicated that scientists frequently encounter difficulty in determining whether the available evidence supports their hypotheses. This reveals that there are no straightforward mappings between overarching methodological concepts and precise strategies to direct the conduct of research. === Education === In science education, the idea of a general and universal scientific method has been notably influential, and numerous studies (in the US) have shown that this framing of method often forms part of both students’ and teachers’ conception of science. This convention of traditional education has been argued against by scientists, as there is a consensus that education's sequential elements and unified view of scientific method do not reflect how scientists actually work. Major organizations of scientists such as the American Association for the Advancement of Science (AAAS) consider the sciences to be a part of the liberal arts traditions of learning, and proper understanding of science includes understanding of philosophy and history, not just science in isolation. How the sciences make knowledge has been taught in the context of "the" scientific method (singular) since the early 20th century. Various systems of education, including but not limited to the US, have taught the method of science as a process or procedure, structured as a definitive series of steps: observation, hypothesis, prediction, experiment. This version of the method of science has been a long-established standard in primary and secondary education, as well as the biomedical sciences. It has long been held to be an inaccurate idealisation of how some scientific inquiries are structured. The taught presentation of science has had to defend itself against demerits such as: it pays no regard to the social context of science; it suggests a singular methodology of deriving knowledge; it overemphasises experimentation; it oversimplifies science, giving the impression that following a scientific process automatically leads to knowledge; it gives the illusion of determination, that questions necessarily lead to some kind of answers and answers are preceded by (specific) questions; and it holds that scientific theories arise from observed phenomena only. The scientific method no longer features in the standards for US education of 2013 (NGSS) that replaced those of 1996 (NRC). They, too, influenced international science education, and the standards measured have since shifted from the singular hypothesis-testing method to a broader conception of scientific methods. These scientific methods, which are rooted in scientific practices and not epistemology, are described as the 3 dimensions of scientific and engineering practices, crosscutting concepts (interdisciplinary ideas), and disciplinary core ideas. The scientific method, as a result of simplified and universal explanations, is often held to have reached a kind of mythological status; as a tool for communication or, at best, an idealisation. Education's approach was heavily influenced by John Dewey's How We Think (1910).
Van der Ploeg (2016) indicated that Dewey's views on education had long been used to further an idea of citizen education removed from "sound education", claiming that references to Dewey in such arguments were undue interpretations (of Dewey). === Sociology of knowledge === The sociology of knowledge is a concept in the discussion around scientific method, claiming the underlying method of science to be sociological. King explains that sociology distinguishes here between the system of ideas that govern the sciences through an inner logic, and the social system in which those ideas arise. ==== Thought collectives ==== A perhaps accessible lead into what is claimed is Fleck's thought, echoed in Kuhn's concept of normal science. According to Fleck, scientists' work is based on a thought-style, that cannot be rationally reconstructed. It gets instilled through the experience of learning, and science is then advanced based on a tradition of shared assumptions held by what he called thought collectives. Fleck also claims this phenomenon to be largely invisible to members of the group. Comparably, following the field research in an academic scientific laboratory by Latour and Woolgar, Karin Knorr Cetina has conducted a comparative study of two scientific fields (namely high energy physics and molecular biology) to conclude that the epistemic practices and reasonings within both scientific communities are different enough to introduce the concept of "epistemic cultures", in contradiction with the idea that a so-called "scientific method" is unique and a unifying concept. ==== Situated cognition and relativism ==== On the idea of Fleck's thought collectives sociologists built the concept of situated cognition: that the perspective of the researcher fundamentally affects their work; and, too, more radical views. Norwood Russell Hanson, alongside Thomas Kuhn and Paul Feyerabend, extensively explored the theory-laden nature of observation in science. Hanson introduced the concept in 1958, emphasizing that observation is influenced by the observer's conceptual framework. He used the concept of gestalt to show how preconceptions can affect both observation and description, and illustrated this with examples like the initial rejection of Golgi bodies as an artefact of staining technique, and the differing interpretations of the same sunrise by Tycho Brahe and Johannes Kepler. Intersubjectivity led to different conclusions. Kuhn and Feyerabend acknowledged Hanson's pioneering work, although Feyerabend's views on methodological pluralism were more radical. Criticisms like those from Kuhn and Feyerabend prompted discussions leading to the development of the strong programme, a sociological approach that seeks to explain scientific knowledge without recourse to the truth or validity of scientific theories. It examines how scientific beliefs are shaped by social factors such as power, ideology, and interests. The postmodernist critiques of science have themselves been the subject of intense controversy. This ongoing debate, known as the science wars, is the result of conflicting values and assumptions between postmodernist and realist perspectives. Postmodernists argue that scientific knowledge is merely a discourse, devoid of any claim to fundamental truth. In contrast, realists within the scientific community maintain that science uncovers real and fundamental truths about reality. 
Many books have been written by scientists which take on this problem and challenge the assertions of the postmodernists while defending science as a legitimate way of deriving truth. == Limits of method == === Role of chance in discovery === Somewhere between 33% and 50% of all scientific discoveries are estimated to have been stumbled upon, rather than sought out. This may explain why scientists so often express that they were lucky. Scientists themselves in the 19th and 20th century acknowledged the role of fortunate luck or serendipity in discoveries. Louis Pasteur is credited with the famous saying that "Luck favours the prepared mind", but some psychologists have begun to study what it means to be 'prepared for luck' in the scientific context. Research is showing that scientists are taught various heuristics that tend to harness chance and the unexpected. This is what Nassim Nicholas Taleb calls "Anti-fragility"; while some systems of investigation are fragile in the face of human error, human bias, and randomness, the scientific method is more than resistant or tough – it actually benefits from such randomness in many ways (it is anti-fragile). Taleb believes that the more anti-fragile the system, the more it will flourish in the real world. Psychologist Kevin Dunbar says the process of discovery often starts with researchers finding bugs in their experiments. These unexpected results lead researchers to try to fix what they think is an error in their method. Eventually, the researcher decides the error is too persistent and systematic to be a coincidence. The highly controlled, cautious, and curious aspects of the scientific method are thus what make it well suited for identifying such persistent systematic errors. At this point, the researcher will begin to think of theoretical explanations for the error, often seeking the help of colleagues across different domains of expertise. === Relationship with statistics === When the scientific method employs statistics as a key part of its arsenal, there are mathematical and practical issues that can have a deleterious effect on the reliability of the output of scientific methods. This is described in a popular 2005 scientific paper "Why Most Published Research Findings Are False" by John Ioannidis, which is considered foundational to the field of metascience. Much research in metascience seeks to identify poor use of statistics and improve its use, an example being the misuse of p-values. The points raised are both statistical and economical. Statistically, research findings are less likely to be true when studies are small and when there is significant flexibility in study design, definitions, outcomes, and analytical approaches. Economically, the reliability of findings decreases in fields with greater financial interests, biases, and a high level of competition among research teams. As a result, most research findings are considered false across various designs and scientific fields, particularly in modern biomedical research, which often operates in areas with very low pre- and post-study probabilities of yielding true findings. Nevertheless, despite these challenges, most new discoveries will continue to arise from hypothesis-generating research that begins with low or very low pre-study odds. This suggests that expanding the frontiers of knowledge will depend on investigating areas outside the mainstream, where the chances of success may initially appear slim. 
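A minimal sketch of why low pre- and post-study probabilities matter, using a simple Bayesian screening model that is related to, but simpler than, the framework in the paper mentioned above (no bias terms; the prior odds, power, and significance level below are illustrative assumptions):

def positive_predictive_value(prior, power=0.8, alpha=0.05):
    """Probability that a statistically significant finding is true under a simple screening model."""
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# Illustrative pre-study odds: 1 in 100 tested hypotheses is actually true.
print(f"PPV at prior 0.01: {positive_predictive_value(0.01):.2f}")   # about 0.14
print(f"PPV at prior 0.30: {positive_predictive_value(0.30):.2f}")   # about 0.87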
=== Science of complex systems === Science applied to complex systems can involve elements such as transdisciplinarity, systems theory, control theory, and scientific modelling. In general, the scientific method may be difficult to apply stringently to diverse, interconnected systems and large data sets. In particular, practices used within Big data, such as predictive analytics, may be considered to be at odds with the scientific method, as some of the data may have been stripped of the parameters which might be material in alternative hypotheses for an explanation; thus the stripped data would only serve to support the null hypothesis in the predictive analytics application. Fleck (1979), pp. 38–50 notes "a scientific discovery remains incomplete without considerations of the social practices that condition it". == Relationship with mathematics == Science is the process of gathering, comparing, and evaluating proposed models against observables. A model can be a simulation, mathematical or chemical formula, or set of proposed steps. Science is like mathematics in that researchers in both disciplines try to distinguish what is known from what is unknown at each stage of discovery. Models, in both science and mathematics, need to be internally consistent and also ought to be falsifiable (capable of disproof). In mathematics, a statement need not yet be proved; at such a stage, that statement would be called a conjecture. Mathematical work and scientific work can inspire each other. For example, the technical concept of time arose in science, and timelessness was a hallmark of a mathematical topic. But today, the Poincaré conjecture has been proved using time as a mathematical concept in which objects can flow (see Ricci flow). Nevertheless, the connection between mathematics and reality (and so science to the extent it describes reality) remains obscure. Eugene Wigner's paper, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences", is a very well-known account of the issue from a Nobel Prize-winning physicist. In fact, some observers (including some well-known mathematicians such as Gregory Chaitin, and others such as Lakoff and Núñez) have suggested that mathematics is the result of practitioner bias and human limitation (including cultural ones), somewhat like the post-modernist view of science. George Pólya's work on problem solving, the construction of mathematical proofs, and heuristic show that the mathematical method and the scientific method differ in detail, while nevertheless resembling each other in using iterative or recursive steps. In Pólya's view, understanding involves restating unfamiliar definitions in your own words, resorting to geometrical figures, and questioning what we know and do not know already; analysis, which Pólya takes from Pappus, involves free and heuristic construction of plausible arguments, working backward from the goal, and devising a plan for constructing the proof; synthesis is the strict Euclidean exposition of step-by-step details of the proof; review involves reconsidering and re-examining the result and the path taken to it. Building on Pólya's work, Imre Lakatos argued that mathematicians actually use contradiction, criticism, and revision as principles for improving their work. In like manner to science, where truth is sought, but certainty is not found, in Proofs and Refutations, what Lakatos tried to establish was that no theorem of informal mathematics is final or perfect. 
This means that, in non-axiomatic mathematics, we should not think that a theorem is ultimately true, only that no counterexample has yet been found. Once a counterexample, i.e. an entity contradicting or not explained by the theorem, is found, we adjust the theorem, possibly extending the domain of its validity. It is in this continuous way that our knowledge accumulates, through the logic and process of proofs and refutations. (However, if axioms are given for a branch of mathematics, this creates a logical system – Wittgenstein 1921, Tractatus Logico-Philosophicus 5.13; Lakatos claimed that proofs from such a system were tautological, i.e. internally logically true, by rewriting forms, as shown by Poincaré, who demonstrated the technique of transforming tautologically true forms (viz. the Euler characteristic) into or out of forms from homology, or more abstractly, from homological algebra.) Lakatos proposed an account of mathematical knowledge based on Pólya's idea of heuristics. In Proofs and Refutations, Lakatos gave several basic rules for finding proofs and counterexamples to conjectures. He thought that mathematical 'thought experiments' are a valid way to discover mathematical conjectures and proofs. Gauss, when asked how he came about his theorems, once replied "durch planmässiges Tattonieren" (through systematic palpable experimentation). == See also == Empirical limits in science – Idea that knowledge comes only/mainly from sensory experience Evidence-based practices – Pragmatic methodology Methodology – Study of research methods Metascience – Scientific study of science Outline of scientific method Quantitative research – All procedures for the numerical representation of empirical facts Research transparency Scientific law – Statement based on repeated empirical observations that describes some natural phenomenon Scientific technique – Systematic way of obtaining information Testability – Extent to which the truth or falsity of a hypothesis/declaration can be tested == Notes == === Notes: Problem-solving via scientific method === === Notes: Philosophical expressions of method === == References == == Sources == == Further reading == == External links == Andersen, Hanne; Hepburn, Brian. "Scientific Method". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. "Confirmation and Induction". Internet Encyclopedia of Philosophy. Scientific method at PhilPapers Scientific method at the Indiana Philosophy Ontology Project An Introduction to Science: Scientific Thinking and a scientific method Archived 2018-01-01 at the Wayback Machine by Steven D. Schafersman. Introduction to the scientific method at the University of Rochester The scientific method from a philosophical perspective Theory-ladenness by Paul Newall at The Galilean Library Lecture on Scientific Method by Greg Anderson (archived 28 April 2006) Using the scientific method for designing science fair projects Scientific Methods an online book by Richard D. Jarrard Richard Feynman on the Key to Science (one minute, three seconds), from the Cornell Lectures. Lectures on the Scientific Method by Nick Josh Karean, Kevin Padian, Michael Shermer and Richard Dawkins (archived 21 January 2013). "How Do We Know What Is True?" (animated video; 2:52)
Wikipedia/Scientific_method
Atomic, molecular, and optical physics (AMO) is the study of matter–matter and light–matter interactions, at the scale of one or a few atoms and energy scales around several electron volts. The three areas are closely interrelated. AMO theory includes classical, semi-classical and quantum treatments. Typically, the theory and applications of emission, absorption, scattering of electromagnetic radiation (light) from excited atoms and molecules, analysis of spectroscopy, generation of lasers and masers, and the optical properties of matter in general, fall into these categories. == Atomic and molecular physics == Atomic physics is the subfield of AMO that studies atoms as an isolated system of electrons and an atomic nucleus, while molecular physics is the study of the physical properties of molecules. The term atomic physics is often associated with nuclear power and nuclear bombs, due to the synonymous use of atomic and nuclear in standard English. However, physicists distinguish between atomic physics — which deals with the atom as a system consisting of a nucleus and electrons — and nuclear physics, which considers atomic nuclei alone. The important experimental techniques are the various types of spectroscopy. Molecular physics, while closely related to atomic physics, also overlaps greatly with theoretical chemistry, physical chemistry and chemical physics. Both subfields are primarily concerned with electronic structure and the dynamical processes by which these arrangements change. Generally this work involves using quantum mechanics. For molecular physics, this approach is known as quantum chemistry. One important aspect of molecular physics is that the essential atomic orbital theory in the field of atomic physics expands to the molecular orbital theory. Molecular physics is concerned with atomic processes in molecules, but it is additionally concerned with effects due to the molecular structure. In addition to the electronic excitation states known from atoms, molecules are able to rotate and to vibrate. These rotations and vibrations are quantized; there are discrete energy levels. The smallest energy differences exist between different rotational states; therefore, pure rotational spectra are in the far infrared region (about 30–150 μm wavelength) of the electromagnetic spectrum. Vibrational spectra are in the near infrared (about 1–5 μm) and spectra resulting from electronic transitions are mostly in the visible and ultraviolet regions. From measurements of rotational and vibrational spectra, properties of molecules such as the distance between the nuclei can be calculated. As with many scientific fields, strict delineation can be highly contrived and atomic physics is often considered in the wider context of atomic, molecular, and optical physics. Physics research groups are usually so classified. == Optical physics == Optical physics is the study of the generation of electromagnetic radiation, the properties of that radiation, and the interaction of that radiation with matter, especially its manipulation and control. It differs from general optics and optical engineering in that it is focused on the discovery and application of new phenomena. There is no strong distinction, however, between optical physics, applied optics, and optical engineering, since the devices of optical engineering and the applications of applied optics are necessary for basic research in optical physics, and that research leads to the development of new devices and applications.
Often the same people are involved in both the basic research and the applied technology development, for example the experimental demonstration of electromagnetically induced transparency by S. E. Harris and of slow light by Harris and Lene Vestergaard Hau. Researchers in optical physics use and develop light sources that span the electromagnetic spectrum from microwaves to X-rays. The field includes the generation and detection of light, linear and nonlinear optical processes, and spectroscopy. Lasers and laser spectroscopy have transformed optical science. Major study in optical physics is also devoted to quantum optics and coherence, and to femtosecond optics. In optical physics, support is also provided in areas such as the nonlinear response of isolated atoms to intense, ultra-short electromagnetic fields, the atom-cavity interaction at high fields, and quantum properties of the electromagnetic field. Other important areas of research include the development of novel optical techniques for nano-optical measurements, diffractive optics, low-coherence interferometry, optical coherence tomography, and near-field microscopy. Research in optical physics places an emphasis on ultrafast optical science and technology. The applications of optical physics create advancements in communications, medicine, manufacturing, and even entertainment. == History == One of the earliest steps towards atomic physics was the recognition that matter was composed of atoms, in modern terms the basic unit of a chemical element. This theory was developed by John Dalton in the early 19th century. At this stage, it wasn't clear what atoms were - although they could be described and classified by their observable properties in bulk, summarized by the developing periodic table of John Newlands and Dmitri Mendeleyev around the mid to late 19th century. Later, the connection between atomic physics and optical physics became apparent, through the discovery of spectral lines and attempts to describe the phenomenon - notably by Joseph von Fraunhofer, Fresnel, and others in the 19th century. From that time to the 1920s, physicists were seeking to explain atomic spectra and blackbody radiation. One attempt to explain hydrogen spectral lines was the Bohr atom model. Experiments involving electromagnetic radiation and matter - such as the photoelectric effect, the Compton effect, and the spectrum of sunlight due to the then-unknown element helium - together with the limitation of the Bohr model to hydrogen and numerous other reasons, led to an entirely new mathematical model of matter and light: quantum mechanics. === Classical oscillator model of matter === Early models to explain the origin of the index of refraction treated an electron in an atomic system classically according to the model of Paul Drude and Hendrik Lorentz. The theory was developed to attempt to provide an origin for the wavelength-dependent refractive index n of a material. In this model, incident electromagnetic waves forced an electron bound to an atom to oscillate. The amplitude of the oscillation would then have a relationship to the frequency of the incident electromagnetic wave and the resonant frequencies of the oscillator. The superposition of these emitted waves from many oscillators would then lead to a wave which moved more slowly. === Early quantum model of matter and light === Max Planck derived a formula to describe the electromagnetic field inside a box when in thermal equilibrium in 1900. His model consisted of a superposition of standing waves.
In one dimension, the box has length L, and only sinusoidal waves of wavenumber k = nπ/L can occur in the box, where n is a positive integer (n = 1, 2, 3, …). The equation describing these standing waves is E = E0 sin(nπx/L), where E0 is the magnitude of the electric field amplitude, and E is the magnitude of the electric field at position x. From this basis, Planck's law was derived. In 1911, Ernest Rutherford concluded, based on alpha particle scattering, that an atom has a central pointlike proton. He also thought that an electron would still be attracted to the proton by Coulomb's law, which he had verified still held at small scales. As a result, he believed that electrons revolved around the proton. Niels Bohr, in 1913, combined the Rutherford model of the atom with the quantisation ideas of Planck. Only specific and well-defined orbits of the electron could exist, which also do not radiate light. In jumping between orbits, the electron would emit or absorb light corresponding to the difference in energy of the orbits. His prediction of the energy levels was then consistent with observation. These results, based on a discrete set of specific standing waves, were inconsistent with the continuous classical oscillator model. Work by Albert Einstein in 1905 on the photoelectric effect led to the association of a light wave of frequency ν with a photon of energy hν. In 1917 Einstein created an extension to Bohr's model by the introduction of the three processes of stimulated emission, spontaneous emission and absorption of electromagnetic radiation. == Modern treatments == The largest steps towards the modern treatment were the formulation of quantum mechanics with the matrix mechanics approach by Werner Heisenberg and the discovery of the Schrödinger equation by Erwin Schrödinger. There are a variety of semi-classical treatments within AMO. Which aspects of the problem are treated quantum mechanically and which are treated classically is dependent on the specific problem at hand. The semi-classical approach is ubiquitous in computational work within AMO, largely due to the large decrease in computational cost and complexity associated with it. For matter under the action of a laser, a fully quantum mechanical treatment of the atomic or molecular system is combined with the system being under the action of a classical electromagnetic field. Since the field is treated classically, it cannot deal with spontaneous emission. This semi-classical treatment is valid for most systems, particularly those under the action of high intensity laser fields. The distinction between optical physics and quantum optics is the use of semi-classical and fully quantum treatments respectively. Within collision dynamics and using the semi-classical treatment, the internal degrees of freedom may be treated quantum mechanically, whilst the relative motion of the quantum systems under consideration is treated classically. When considering medium to high speed collisions, the nuclei can be treated classically while the electron is treated quantum mechanically.
In low speed collisions the approximation fails.: 754  Classical Monte-Carlo methods for the dynamics of electrons can be described as semi-classical in that the initial conditions are calculated using a fully quantum treatment, but all further treatment is classical.: 871  == Isolated atoms and molecules == Atomic, Molecular and Optical physics frequently considers atoms and molecules in isolation. Atomic models will consist of a single nucleus that may be surrounded by one or more bound electrons, whilst molecular models are typically concerned with molecular hydrogen and its molecular hydrogen ion. It is concerned with processes such as ionization, above threshold ionization and excitation by photons or collisions with atomic particles. While modelling atoms in isolation may not seem realistic, if one considers molecules in a gas or plasma then the time-scales for molecule-molecule interactions are huge in comparison to the atomic and molecular processes that we are concerned with. This means that the individual molecules can be treated as if each were in isolation for the vast majority of the time. By this consideration atomic and molecular physics provides the underlying theory in plasma physics and atmospheric physics even though both deal with huge numbers of molecules. == Electronic configuration == Electrons form notional shells around the nucleus. These are naturally in a ground state but can be excited by the absorption of energy from light (photons), magnetic fields, or interaction with a colliding particle (typically other electrons). Electrons that populate a shell are said to be in a bound state. The energy necessary to remove an electron from its shell (taking it to infinity) is called the binding energy. Any quantity of energy absorbed by the electron in excess of this amount is converted to kinetic energy according to the conservation of energy. The atom is said to have undergone the process of ionization. In the event that the electron absorbs a quantity of energy less than the binding energy, it may transition to an excited state or to a virtual state. After a statistically sufficient quantity of time, an electron in an excited state will undergo a transition to a lower state via spontaneous emission. The change in energy between the two energy levels must be accounted for (conservation of energy). In a neutral atom, the system will emit a photon of the difference in energy. However, if the lower state is in an inner shell, a phenomenon known as the Auger effect may take place where the energy is transferred to another bound electron, causing it to go into the continuum. This allows one to multiply ionize an atom with a single photon. There are strict selection rules as to the electronic configurations that can be reached by excitation by light; however, there are no such rules for excitation by collision processes. 
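The bookkeeping described above - bound-state energies, spontaneous emission between levels, and ionization when the absorbed energy exceeds the binding energy - can be illustrated with a minimal sketch using hydrogen-like levels. The constants are standard values; the 20 eV photon is an arbitrary example, not taken from the text.

```python
# Minimal sketch of the bookkeeping described above: bound-state energies,
# the photon emitted in a downward transition, and the kinetic energy left
# over when a photon ionizes the atom.  Hydrogen-like levels are used purely
# as a convenient illustration.
RYDBERG_EV = 13.605693     # hydrogen ground-state binding energy, eV
H_EV_S = 4.135667696e-15   # Planck constant, eV*s
C = 2.99792458e8           # speed of light, m/s

def level_energy(n, Z=1):
    """Bound-state energy (eV) of a hydrogen-like atom; 0 is the ionization limit."""
    return -RYDBERG_EV * Z**2 / n**2

def emitted_wavelength_nm(n_upper, n_lower, Z=1):
    """Wavelength of the photon emitted in the n_upper -> n_lower transition."""
    delta_e = level_energy(n_upper, Z) - level_energy(n_lower, Z)  # > 0
    return H_EV_S * C / delta_e * 1e9

# Spontaneous emission: the energy difference of the levels is carried by the photon.
print(f"H(3->2): {emitted_wavelength_nm(3, 2):.1f} nm")   # ~656 nm, Balmer-alpha

# Photoionization: energy absorbed beyond the binding energy becomes kinetic energy.
photon_ev = 20.0                       # assumed photon energy
binding_ev = -level_energy(1)          # 13.6 eV for ground-state hydrogen
print(f"photoelectron kinetic energy: {photon_ev - binding_ev:.1f} eV")
```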
== See also == == Notes == == References == == External links == ScienceDirect - Advances In Atomic, Molecular, and Optical Physics Journal of Physics B: Atomic, Molecular and Optical Physics === Institutions === American Physical Society - Division of Atomic, Molecular & Optical Physics European Physical Society - Atomic, Molecular & Optical Physics Division National Science Foundation - Atomic, Molecular and Optical Physics MIT-Harvard Center for Ultracold Atoms Stanford QFARM Initiative for Quantum Science & Engineering JILA - Atomic and Molecular Physics Joint Quantum Institute at University of Maryland and NIST ORNL Physics Division Queen's University Belfast - Center for Theoretical, Atomic, Molecular and Optical Physics, University of California, Berkeley - Atomic, Molecular and Optical Physics
Wikipedia/Optical_physics
Accelerator physics is a branch of applied physics, concerned with designing, building and operating particle accelerators. As such, it can be described as the study of motion, manipulation and observation of relativistic charged particle beams and their interaction with accelerator structures by electromagnetic fields. It is also related to other fields: Microwave engineering (for acceleration/deflection structures in the radio frequency range). Optics with an emphasis on geometrical optics (beam focusing and bending) and laser physics (laser-particle interaction). Computer technology with an emphasis on digital signal processing; e.g., for automated manipulation of the particle beam. Plasma physics, for the description of intense beams. The experiments conducted with particle accelerators are not regarded as part of accelerator physics, but belong (according to the objectives of the experiments) to, e.g., particle physics, nuclear physics, condensed matter physics or materials physics. The types of experiments done at a particular accelerator facility are determined by characteristics of the generated particle beam such as average energy, particle type, intensity, and dimensions. == Acceleration and interaction of particles with RF structures == While it is possible to accelerate charged particles using electrostatic fields, like in a Cockcroft-Walton voltage multiplier, this method has limits given by electrical breakdown at high voltages. Furthermore, due to electrostatic fields being conservative, the maximum voltage limits the kinetic energy that is applicable to the particles. To circumvent this problem, linear particle accelerators operate using time-varying fields. Because these fields are controlled using hollow macroscopic structures through which the particles pass (which imposes wavelength restrictions), the frequency of such acceleration fields is located in the radio frequency region of the electromagnetic spectrum. The space around a particle beam is evacuated to prevent scattering with gas atoms, requiring it to be enclosed in a vacuum chamber (or beam pipe). Due to the strong electromagnetic fields that follow the beam, it is possible for it to interact with any electrical impedance in the walls of the beam pipe. This may be in the form of a resistive impedance (i.e., the finite resistivity of the beam pipe material) or an inductive/capacitive impedance (due to the geometric changes in the beam pipe's cross section). These impedances will induce wakefields (a strong warping of the electromagnetic field of the beam) that can interact with later particles. Since this interaction may have negative effects, it is studied to determine its magnitude, and to determine any actions that may be taken to mitigate it. == Beam dynamics == Due to the high velocity of the particles, and the resulting Lorentz force for magnetic fields, adjustments to the beam direction are mainly controlled by magnetostatic fields that deflect particles. In most accelerator concepts (excluding compact structures like the cyclotron or betatron), these are applied by dedicated electromagnets with different properties and functions. An important step in the development of these types of accelerators was the understanding of strong focusing. Dipole magnets are used to guide the beam through the structure, while quadrupole magnets are used for beam focusing, and sextupole magnets are used for correction of dispersion effects. 
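The bending action of the dipole magnets just mentioned is conveniently expressed through the magnetic rigidity, B·ρ = p/q. The following sketch assumes illustrative beam and field values (roughly LHC-like in scale); it is not a description of any specific machine.

```python
# Quick sketch of how dipole bending fields relate to beam momentum ("magnetic
# rigidity", B*rho = p/q).  Beam energy and field strength below are assumed
# example values, not parameters of any specific machine.
import math

C = 2.99792458e8          # speed of light, m/s
E0_PROTON_GEV = 0.938272  # proton rest energy, GeV

def rigidity_tm(kinetic_energy_gev, rest_energy_gev=E0_PROTON_GEV):
    """Magnetic rigidity B*rho in tesla-metres for a singly charged particle."""
    total = kinetic_energy_gev + rest_energy_gev
    p_gev = math.sqrt(total**2 - rest_energy_gev**2)   # momentum in GeV/c
    return p_gev * 1e9 / C                             # p/q with q = e

T_kin = 7000.0            # assumed kinetic energy, GeV
B_dipole = 8.3            # assumed dipole field, tesla

b_rho = rigidity_tm(T_kin)
print(f"rigidity  B*rho = {b_rho:.0f} T*m")
print(f"bending radius rho = {b_rho / B_dipole:.0f} m at {B_dipole} T")
```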
A particle on the exact design trajectory (or design orbit) of the accelerator only experiences dipole field components, while particles with transverse position deviation $x(s)$ are re-focused to the design orbit. For preliminary calculations, neglecting all field components higher than quadrupolar, an inhomogeneous Hill differential equation $$\frac{d^{2}}{ds^{2}}\,x(s)+k(s)\,x(s)=\frac{1}{\rho}\,\frac{\Delta p}{p}$$ can be used as an approximation, with a non-constant focusing force $k(s)$, including strong focusing and weak focusing effects; the relative deviation from the design beam momentum, $\Delta p/p$; the trajectory radius of curvature, $\rho$; and the design path length, $s$; thus identifying the system as a parametric oscillator. Beam parameters for the accelerator can then be calculated using ray transfer matrix analysis; e.g., a quadrupolar field is analogous to a lens in geometrical optics, having similar properties regarding beam focusing (but obeying Earnshaw's theorem). The general equations of motion originate from relativistic Hamiltonian mechanics, in almost all cases using the paraxial approximation. Even in the cases of strongly nonlinear magnetic fields, and without the paraxial approximation, a Lie transform may be used to construct an integrator with a high degree of accuracy. == Modeling Codes == There are many different software packages available for modeling the different aspects of accelerator physics. One must model the elements that create the electric and magnetic fields, and then one must model the charged particle evolution within those fields. == Beam diagnostics == A vital component of any accelerator is the set of diagnostic devices that allow various properties of the particle bunches to be measured. A typical machine may use many different types of measurement device in order to measure different properties. These include (but are not limited to) Beam Position Monitors (BPMs) to measure the position of the bunch, screens (fluorescent screens, Optical Transition Radiation (OTR) devices) to image the profile of the bunch, wire-scanners to measure its cross-section, and toroids or ICTs to measure the bunch charge (i.e., the number of particles per bunch). While many of these devices rely on well understood technology, designing a device capable of measuring a beam for a particular machine is a complex task requiring much expertise. Not only is a full understanding of the physics of the operation of the device necessary, but it is also necessary to ensure that the device is capable of measuring the expected parameters of the machine under consideration. Success of the full range of beam diagnostics often underpins the success of the machine as a whole. == Machine tolerances == Errors in the alignment of components, field strength, etc., are inevitable in machines of this scale, so it is important to consider the tolerances under which a machine may operate. Engineers will provide the physicists with expected tolerances for the alignment and manufacture of each component to allow full physics simulations of the expected behaviour of the machine under these conditions. 
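As a concrete illustration of the ray transfer matrix analysis described in the beam dynamics section above, the sketch below propagates a particle through one thin-lens FODO cell. The focal length and drift length are assumed example values, and the |trace| < 2 test is the usual stability criterion for a periodic focusing lattice.

```python
# Illustrative sketch of ray-transfer-matrix beam optics: a thin-lens FODO cell
# (focusing quadrupole, drift, defocusing quadrupole, drift).  Focal length and
# cell dimensions are assumed example values.
import numpy as np

def drift(length):
    return np.array([[1.0, length],
                     [0.0, 1.0]])

def thin_quad(focal_length):
    # focal_length > 0 focuses in this plane, < 0 defocuses
    return np.array([[1.0, 0.0],
                     [-1.0 / focal_length, 1.0]])

f = 5.0      # assumed quadrupole focal length, m
L = 4.0      # assumed drift length between quadrupoles, m

# One FODO period: F quad, drift, D quad, drift (matrices applied right-to-left)
M = drift(L) @ thin_quad(-f) @ drift(L) @ thin_quad(f)

# Transport a particle with 1 mm offset and 0.1 mrad slope through one cell
x0 = np.array([1.0e-3, 1.0e-4])
print("after one cell:", M @ x0)

# The motion is stable (bounded oscillation about the design orbit) when
# |trace(M)| < 2, the usual criterion for a periodic focusing lattice.
print("trace =", np.trace(M), "-> stable" if abs(np.trace(M)) < 2 else "-> unstable")
```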
In many cases it will be found that the performance is degraded to an unacceptable level, requiring either re-engineering of the components, or the invention of algorithms that allow the machine performance to be 'tuned' back to the design level. This may require many simulations of different error conditions in order to determine the relative success of each tuning algorithm, and to allow recommendations for the collection of algorithms to be deployed on the real machine. == See also == Particle accelerator Significant publications for accelerator physics Category:Accelerator physics Category:Accelerator physicists Category:Particle accelerators == References == Schopper, Herwig F. (1993). Advances of accelerator physics and technologies. World Scientific. ISBN 978-981-02-0957-5. Retrieved March 9, 2012. Wiedemann, Helmut (2015). Particle Accelerator Physics. Graduate Texts in Physics. Cham: Springer International Publishing. Bibcode:2015pap..book.....W. doi:10.1007/978-3-319-18317-6. ISBN 978-3-319-18316-9. Lee, Shyh-Yuan (2004). Accelerator physics (2nd ed.). World Scientific. ISBN 978-981-256-200-5. Chao, Alex W.; Tigner, Maury, eds. (2013). Handbook of accelerator physics and engineering (2nd ed.). World Scientific. doi:10.1142/8543. ISBN 978-981-4417-17-4. S2CID 108427390. Chao, Alex W.; Chou, Weiren (2014). Reviews of Accelerator Science and Technology Volume 6. World Scientific. doi:10.1142/9079. ISBN 978-981-4583-24-4. Chao, Alex W.; Chou, Weiren (2013). Reviews of Accelerator Science and Technology Volume 5. World Scientific. doi:10.1142/8721. ISBN 978-981-4449-94-6. Chao, Alex W.; Chou, Weiren (2012). Reviews of Accelerator Science and Technology Volume 4. World Scientific. doi:10.1142/8380. ISBN 978-981-438-398-1. == External links == United States Particle Accelerator School UCB/LBL Beam Physics site BNL page on The Alternating Gradient Concept
Wikipedia/Accelerator_physics
The Standard Model of particle physics is the theory describing three of the four known fundamental forces (electromagnetic, weak and strong interactions – excluding gravity) in the universe and classifying all known elementary particles. It was developed in stages throughout the latter half of the 20th century, through the work of many scientists worldwide, with the current formulation being finalized in the mid-1970s upon experimental confirmation of the existence of quarks. Since then, proof of the top quark (1995), the tau neutrino (2000), and the Higgs boson (2012) have added further credence to the Standard Model. In addition, the Standard Model has predicted various properties of weak neutral currents and the W and Z bosons with great accuracy. Although the Standard Model is believed to be theoretically self-consistent and has demonstrated some success in providing experimental predictions, it leaves some physical phenomena unexplained and so falls short of being a complete theory of fundamental interactions. For example, it does not fully explain why there is more matter than anti-matter, incorporate the full theory of gravitation as described by general relativity, or account for the universe's accelerating expansion as possibly described by dark energy. The model does not contain any viable dark matter particle that possesses all of the required properties deduced from observational cosmology. It also does not incorporate neutrino oscillations and their non-zero masses. The development of the Standard Model was driven by theoretical and experimental particle physicists alike. The Standard Model is a paradigm of a quantum field theory for theorists, exhibiting a wide range of phenomena, including spontaneous symmetry breaking, anomalies, and non-perturbative behavior. It is used as a basis for building more exotic models that incorporate hypothetical particles, extra dimensions, and elaborate symmetries (such as supersymmetry) to explain experimental results at variance with the Standard Model, such as the existence of dark matter and neutrino oscillations. == Historical background == In 1928, Paul Dirac introduced the Dirac equation, which implied the existence of antimatter. In 1954, Yang Chen-Ning and Robert Mills extended the concept of gauge theory for abelian groups, e.g. quantum electrodynamics, to nonabelian groups to provide an explanation for strong interactions. In 1957, Chien-Shiung Wu demonstrated parity was not conserved in the weak interaction. In 1961, Sheldon Glashow combined the electromagnetic and weak interactions. In 1964, Murray Gell-Mann and George Zweig introduced quarks and that same year Oscar W. Greenberg implicitly introduced color charge of quarks. In 1967 Steven Weinberg and Abdus Salam incorporated the Higgs mechanism into Glashow's electroweak interaction, giving it its modern form. In 1970, Sheldon Glashow, John Iliopoulos, and Luciano Maiani introduced the GIM mechanism, predicting the charm quark. In 1973 Gross and Wilczek and Politzer independently discovered that non-Abelian gauge theories, like the color theory of the strong force, have asymptotic freedom. In 1976, Martin Perl discovered the tau lepton at the SLAC. In 1977, a team led by Leon Lederman at Fermilab discovered the bottom quark. The Higgs mechanism is believed to give rise to the masses of all the elementary particles in the Standard Model. This includes the masses of the W and Z bosons, and the masses of the fermions, i.e. the quarks and leptons. 
After the neutral weak currents caused by Z boson exchange were discovered at CERN in 1973, the electroweak theory became widely accepted and Glashow, Salam, and Weinberg shared the 1979 Nobel Prize in Physics for discovering it. The W± and Z0 bosons were discovered experimentally in 1983; and the ratio of their masses was found to be as the Standard Model predicted. The theory of the strong interaction (i.e. quantum chromodynamics, QCD), to which many contributed, acquired its modern form in 1973–74 when asymptotic freedom was proposed (a development that made QCD the main focus of theoretical research) and experiments confirmed that the hadrons were composed of fractionally charged quarks. The term "Standard Model" was introduced by Abraham Pais and Sam Treiman in 1975, with reference to the electroweak theory with four quarks. Steven Weinberg has since claimed priority, explaining that he chose the term Standard Model out of a sense of modesty and used it in 1973 during a talk in Aix-en-Provence in France. == Particle content == The Standard Model includes members of several classes of elementary particles, which in turn can be distinguished by other characteristics, such as color charge. All particles can be summarized as follows: Notes: [†] An anti-electron (e+) is conventionally called a "positron". === Fermions === The Standard Model includes 12 elementary particles of spin 1⁄2, known as fermions. Fermions respect the Pauli exclusion principle, meaning that two identical fermions cannot simultaneously occupy the same quantum state in the same atom. Each fermion has a corresponding antiparticle, which are particles that have corresponding properties with the exception of opposite charges. Fermions are classified based on how they interact, which is determined by the charges they carry, into two groups: quarks and leptons. Within each group, pairs of particles that exhibit similar physical behaviors are then grouped into generations (see the table). Each member of a generation has a greater mass than the corresponding particle of generations prior. Thus, there are three generations of quarks and leptons. As first-generation particles do not decay, they comprise all of ordinary (baryonic) matter. Specifically, all atoms consist of electrons orbiting around the atomic nucleus, ultimately constituted of up and down quarks. On the other hand, second- and third-generation charged particles decay with very short half-lives and can only be observed in high-energy environments. Neutrinos of all generations also do not decay, and pervade the universe, but rarely interact with baryonic matter. There are six quarks: up, down, charm, strange, top, and bottom. Quarks carry color charge, and hence interact via the strong interaction. The color confinement phenomenon results in quarks being strongly bound together such that they form color-neutral composite particles called hadrons; quarks cannot individually exist and must always bind with other quarks. Hadrons can contain either a quark-antiquark pair (mesons) or three quarks (baryons). The lightest baryons are the nucleons: the proton and neutron. Quarks also carry electric charge and weak isospin, and thus interact with other fermions through electromagnetism and weak interaction. The six leptons consist of the electron, electron neutrino, muon, muon neutrino, tau, and tau neutrino. The leptons do not carry color charge, and do not respond to strong interaction. 
The charged leptons carry an electric charge of −1 e, while the three neutrinos carry zero electric charge. Thus, the neutrinos' motions are influenced by only the weak interaction and gravity, making them difficult to observe. === Gauge bosons === The Standard Model includes 4 kinds of gauge bosons of spin 1, with bosons being quantum particles containing an integer spin. The gauge bosons are defined as force carriers, as they are responsible for mediating the fundamental interactions. The Standard Model explains the four fundamental forces as arising from the interactions, with fermions exchanging virtual force carrier particles, thus mediating the forces. At a macroscopic scale, this manifests as a force. As a result, they do not follow the Pauli exclusion principle that constrains fermions; bosons do not have a theoretical limit on their spatial density. The types of gauge bosons are described below. Electromagnetism: Photons mediate the electromagnetic force, responsible for interactions between electrically charged particles. The photon is massless and is described by the theory of quantum electrodynamics (QED). Strong Interactions: Gluons mediate the strong interactions, which binds quarks to each other by influencing the color charge, with the interactions being described in the theory of quantum chromodynamics (QCD). They have no mass, and there are eight distinct gluons, with each being denoted through a color-anticolor charge combination (e.g. red–antigreen). As gluons have an effective color charge, they can also interact amongst themselves. Weak Interactions: The W+, W−, and Z gauge bosons mediate the weak interactions between all fermions, being responsible for radioactivity. They contain mass, with the Z having more mass than the W±. The weak interactions involving the W± act only on left-handed particles and right-handed antiparticles respectively. The W± carries an electric charge of +1 and −1 and couples to the electromagnetic interaction. The electrically neutral Z boson interacts with both left-handed particles and right-handed antiparticles. These three gauge bosons along with the photons are grouped together, as collectively mediating the electroweak interaction. Gravity: It is currently unexplained in the Standard Model, as the hypothetical mediating particle graviton has been proposed, but not observed. This is due to the incompatibility of quantum mechanics and Einstein's theory of general relativity, regarded as being the best explanation for gravity. In general relativity, gravity is explained as being the geometric curving of spacetime. The Feynman diagram calculations, which are a graphical representation of the perturbation theory approximation, invoke "force mediating particles", and when applied to analyze high-energy scattering experiments are in reasonable agreement with the data. However, perturbation theory (and with it the concept of a "force-mediating particle") fails in other situations. These include low-energy quantum chromodynamics, bound states, and solitons. The interactions between all the particles described by the Standard Model are summarized by the diagrams on the right of this section. === Higgs boson === The Higgs particle is a massive scalar elementary particle theorized by Peter Higgs (and others) in 1964, when he showed that Goldstone's 1962 theorem (generic continuous symmetry, which is spontaneously broken) provides a third polarisation of a massive vector field. 
Hence, Goldstone's original scalar doublet, the massive spin-zero particle, was proposed as the Higgs boson, and is a key building block in the Standard Model. It has no intrinsic spin, and for that reason is classified as a boson with spin-0. The Higgs boson plays a unique role in the Standard Model, by explaining why the other elementary particles, except the photon and gluon, are massive. In particular, the Higgs boson explains why the photon has no mass, while the W and Z bosons are very heavy. Elementary-particle masses and the differences between electromagnetism (mediated by the photon) and the weak force (mediated by the W and Z bosons) are critical to many aspects of the structure of microscopic (and hence macroscopic) matter. In electroweak theory, the Higgs boson generates the masses of the leptons (electron, muon, and tau) and quarks. As the Higgs boson is massive, it must interact with itself. Because the Higgs boson is a very massive particle and also decays almost immediately when created, only a very high-energy particle accelerator can observe and record it. Experiments to confirm and determine the nature of the Higgs boson using the Large Hadron Collider (LHC) at CERN began in early 2010 and were performed at Fermilab's Tevatron until its closure in late 2011. Mathematical consistency of the Standard Model requires that any mechanism capable of generating the masses of elementary particles must become visible at energies above 1.4 TeV; therefore, the LHC (designed to collide two 7 TeV proton beams) was built to answer the question of whether the Higgs boson actually exists. On 4 July 2012, two of the experiments at the LHC (ATLAS and CMS) both reported independently that they had found a new particle with a mass of about 125 GeV/c2 (about 133 proton masses, on the order of 10−25 kg), which is "consistent with the Higgs boson". On 13 March 2013, it was confirmed to be the searched-for Higgs boson. == Theoretical aspects == === Construction of the Standard Model Lagrangian === Technically, quantum field theory provides the mathematical framework for the Standard Model, in which a Lagrangian controls the dynamics and kinematics of the theory. Each kind of particle is described in terms of a dynamical field that pervades space-time. The construction of the Standard Model proceeds following the modern method of constructing most field theories: by first postulating a set of symmetries of the system, and then by writing down the most general renormalizable Lagrangian from its particle (field) content that observes these symmetries. The global Poincaré symmetry is postulated for all relativistic quantum field theories. It consists of the familiar translational symmetry, rotational symmetry and the inertial reference frame invariance central to the theory of special relativity. The local SU(3) × SU(2) × U(1) gauge symmetry is an internal symmetry that essentially defines the Standard Model. Roughly, the three factors of the gauge symmetry give rise to the three fundamental interactions. The fields fall into different representations of the various symmetry groups of the Standard Model (see table). Upon writing the most general Lagrangian, one finds that the dynamics depends on 19 parameters, whose numerical values are established by experiment. The parameters are summarized in the table (made visible by clicking "show") above. 
==== Quantum chromodynamics sector ==== The quantum chromodynamics (QCD) sector defines the interactions between quarks and gluons; it is a Yang–Mills gauge theory with SU(3) symmetry, generated by $T^{a}=\lambda^{a}/2$. Since leptons do not interact with gluons, they are not affected by this sector. The Dirac Lagrangian of the quarks coupled to the gluon fields is given by $$\mathcal{L}_{\text{QCD}}=\overline{\psi}\,i\gamma^{\mu}D_{\mu}\psi-\frac{1}{4}G_{\mu\nu}^{a}G_{a}^{\mu\nu},$$ where $\psi$ is a three-component column vector of Dirac spinors, each element of which refers to a quark field with a specific color charge (i.e. red, blue, and green) and summation over flavor (i.e. up, down, strange, etc.) is implied. The gauge covariant derivative of QCD is defined by $D_{\mu}\equiv\partial_{\mu}-ig_{\text{s}}\tfrac{1}{2}\lambda^{a}G_{\mu}^{a}$, where $\gamma^{\mu}$ are the Dirac matrices, $G_{\mu}^{a}$ is the 8-component ($a=1,2,\dots,8$) SU(3) gauge field, $\lambda^{a}$ are the 3 × 3 Gell-Mann matrices, generators of the SU(3) color group, $G_{\mu\nu}^{a}$ represents the gluon field strength tensor, and $g_{\text{s}}$ is the strong coupling constant. The QCD Lagrangian is invariant under local SU(3) gauge transformations; i.e., transformations of the form $\psi\rightarrow\psi'=U\psi$, where $U=e^{-ig_{\text{s}}\lambda^{a}\phi^{a}(x)}$ is a 3 × 3 unitary matrix with determinant 1, making it a member of the group SU(3), and $\phi^{a}(x)$ is an arbitrary function of spacetime. ==== Electroweak sector ==== The electroweak sector is a Yang–Mills gauge theory with the symmetry group U(1) × SU(2)L, $$\mathcal{L}_{\text{EW}}=\overline{Q}_{\text{L}j}\,i\gamma^{\mu}D_{\mu}Q_{\text{L}j}+\overline{u}_{\text{R}j}\,i\gamma^{\mu}D_{\mu}u_{\text{R}j}+\overline{d}_{\text{R}j}\,i\gamma^{\mu}D_{\mu}d_{\text{R}j}+\overline{\ell}_{\text{L}j}\,i\gamma^{\mu}D_{\mu}\ell_{\text{L}j}+\overline{e}_{\text{R}j}\,i\gamma^{\mu}D_{\mu}e_{\text{R}j}-\tfrac{1}{4}W_{a}^{\mu\nu}W_{\mu\nu}^{a}-\tfrac{1}{4}B^{\mu\nu}B_{\mu\nu},$$ where the subscript $j$ sums over the three generations of fermions; $Q_{\text{L}}$, $u_{\text{R}}$, and $d_{\text{R}}$ are the left-handed doublet, right-handed singlet up-type, and right-handed singlet down-type quark fields; and $\ell_{\text{L}}$ and $e_{\text{R}}$ are the left-handed doublet and right-handed singlet lepton fields. 
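As a numerical aside on the SU(3) generators $T^{a}=\lambda^{a}/2$ introduced above, the following sketch builds the eight Gell-Mann matrices in the usual textbook convention and checks that they are traceless, Hermitian, and normalised so that $\mathrm{Tr}(T^{a}T^{b})=\delta^{ab}/2$. It is a sanity check only, not part of the model's definition; the electroweak sector continues below.

```python
# Small numerical check of the SU(3) generators T^a = lambda^a / 2 used in the
# QCD sector above: traceless, Hermitian, and normalised so that
# Tr(T^a T^b) = delta_ab / 2.
import numpy as np

i = 1j
lam = [
    np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex),
    np.array([[0, -i, 0], [i, 0, 0], [0, 0, 0]]),
    np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], dtype=complex),
    np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex),
    np.array([[0, 0, -i], [0, 0, 0], [i, 0, 0]]),
    np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex),
    np.array([[0, 0, 0], [0, 0, -i], [0, i, 0]]),
    np.array([[1, 0, 0], [0, 1, 0], [0, 0, -2]], dtype=complex) / np.sqrt(3),
]
T = [m / 2 for m in lam]   # the generators T^a = lambda^a / 2

for t in T:
    assert abs(np.trace(t)) < 1e-12              # traceless
    assert np.allclose(t, t.conj().T)            # Hermitian

norm = np.array([[np.trace(Ta @ Tb).real for Tb in T] for Ta in T])
print(np.allclose(norm, np.eye(8) / 2))          # True: Tr(T^a T^b) = delta_ab / 2
```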
The electroweak gauge covariant derivative is defined as $D_{\mu}\equiv\partial_{\mu}-ig'\tfrac{1}{2}Y_{\text{W}}B_{\mu}-ig\tfrac{1}{2}\vec{\tau}_{\text{L}}\vec{W}_{\mu}$, where $B_{\mu}$ is the U(1) gauge field, $Y_{\text{W}}$ is the weak hypercharge – the generator of the U(1) group, $\vec{W}_{\mu}$ is the 3-component SU(2) gauge field, $\vec{\tau}_{\text{L}}$ are the Pauli matrices – infinitesimal generators of the SU(2) group – with subscript L to indicate that they only act on left-chiral fermions, $g'$ and $g$ are the U(1) and SU(2) coupling constants respectively, and $W^{a\mu\nu}$ ($a=1,2,3$) and $B^{\mu\nu}$ are the field strength tensors for the weak isospin and weak hypercharge fields. Notice that the addition of fermion mass terms into the electroweak Lagrangian is forbidden, since terms of the form $m\overline{\psi}\psi$ do not respect U(1) × SU(2)L gauge invariance. Neither is it possible to add explicit mass terms for the U(1) and SU(2) gauge fields. The Higgs mechanism is responsible for the generation of the gauge boson masses, and the fermion masses result from Yukawa-type interactions with the Higgs field. ==== Higgs sector ==== In the Standard Model, the Higgs field is an SU(2)L doublet of complex scalar fields with four degrees of freedom: $$\varphi=\begin{pmatrix}\varphi^{+}\\\varphi^{0}\end{pmatrix}=\frac{1}{\sqrt{2}}\begin{pmatrix}\varphi_{1}+i\varphi_{2}\\\varphi_{3}+i\varphi_{4}\end{pmatrix},$$ where the superscripts + and 0 indicate the electric charge $Q$ of the components. The weak hypercharge $Y_{\text{W}}$ of both components is 1. Before symmetry breaking, the Higgs Lagrangian is $$\mathcal{L}_{\text{H}}=\left(D_{\mu}\varphi\right)^{\dagger}\left(D^{\mu}\varphi\right)-V(\varphi),$$ where $D_{\mu}$ is the electroweak gauge covariant derivative defined above and $V(\varphi)$ is the potential of the Higgs field. The square of the covariant derivative leads to three- and four-point interactions between the electroweak gauge fields $W_{\mu}^{a}$ and $B_{\mu}$ and the scalar field $\varphi$. The scalar potential is given by $$V(\varphi)=-\mu^{2}\varphi^{\dagger}\varphi+\lambda\left(\varphi^{\dagger}\varphi\right)^{2},$$ where $\mu^{2}>0$, so that $\varphi$ acquires a non-zero vacuum expectation value, which generates masses for the electroweak gauge fields (the Higgs mechanism), and $\lambda>0$, so that the potential is bounded from below. The quartic term describes self-interactions of the scalar field $\varphi$. The minimum of the potential is degenerate with an infinite number of equivalent ground state solutions, which occurs when $\varphi^{\dagger}\varphi=\tfrac{\mu^{2}}{2\lambda}$. It is possible to perform a gauge transformation on $\varphi$ such that the ground state is transformed to a basis where $\varphi_{1}=\varphi_{2}=\varphi_{4}=0$ and $\varphi_{3}=\tfrac{\mu}{\sqrt{\lambda}}\equiv v$. 
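A short numerical check of the potential minimum just derived: along the neutral field direction the potential reduces to a function of a single real variable, and its minimum reproduces $v=\mu/\sqrt{\lambda}$. The parameter values below are illustrative, chosen only so that the minimum lands near the observed electroweak scale; they are not a fit to data.

```python
# Numerical sketch of the Higgs potential minimum discussed above.  The field is
# evaluated along the neutral direction phi = (0, h/sqrt(2)), so phi^dagger phi = h^2/2.
# mu and lam are illustrative values, not fitted parameters.
import numpy as np

mu = 88.4     # assumed mass parameter (same units as the field)
lam = 0.129   # assumed quartic self-coupling

def V(h):
    phi2 = 0.5 * h**2                  # phi^dagger phi along the neutral direction
    return -mu**2 * phi2 + lam * phi2**2

h = np.linspace(0.0, 500.0, 200001)
h_min = h[np.argmin(V(h))]

print(f"numerical minimum  h = {h_min:.1f}")
print(f"analytic  v = mu/sqrt(lambda) = {mu / np.sqrt(lam):.1f}")
```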
This breaks the symmetry of the ground state. The expectation value of $\varphi$ now becomes $$\langle\varphi\rangle=\frac{1}{\sqrt{2}}\begin{pmatrix}0\\v\end{pmatrix},$$ where $v$ has units of mass and sets the scale of electroweak physics. This is the only dimensional parameter of the Standard Model and has a measured value of ~246 GeV/c2. After symmetry breaking, the masses of the W and Z are given by $m_{\text{W}}=\frac{1}{2}gv$ and $m_{\text{Z}}=\frac{1}{2}\sqrt{g^{2}+g'^{2}}\,v$, which can be viewed as predictions of the theory. The photon remains massless. The mass of the Higgs boson is $m_{\text{H}}=\sqrt{2\mu^{2}}=\sqrt{2\lambda}\,v$. Since $\mu$ and $\lambda$ are free parameters, the Higgs's mass could not be predicted beforehand and had to be determined experimentally. ==== Yukawa sector ==== The Yukawa interaction terms are: $$\mathcal{L}_{\text{Yukawa}}=(Y_{\text{u}})_{mn}(\bar{Q}_{\text{L}})_{m}\tilde{\varphi}(u_{\text{R}})_{n}+(Y_{\text{d}})_{mn}(\bar{Q}_{\text{L}})_{m}\varphi(d_{\text{R}})_{n}+(Y_{\text{e}})_{mn}(\bar{\ell}_{\text{L}})_{m}\varphi(e_{\text{R}})_{n}+\mathrm{h.c.}$$ where $Y_{\text{u}}$, $Y_{\text{d}}$, and $Y_{\text{e}}$ are 3 × 3 matrices of Yukawa couplings, with the mn term giving the coupling of the generations m and n, and h.c. means Hermitian conjugate of preceding terms. The fields $Q_{\text{L}}$ and $\ell_{\text{L}}$ are left-handed quark and lepton doublets. Likewise, $u_{\text{R}}$, $d_{\text{R}}$ and $e_{\text{R}}$ are right-handed up-type quark, down-type quark, and lepton singlets. Finally $\varphi$ is the Higgs doublet and $\tilde{\varphi}=i\tau_{2}\varphi^{*}$ is its charge conjugate state. The Yukawa terms are invariant under the SU(2)L × U(1)Y gauge symmetry of the Standard Model and generate masses for all fermions after spontaneous symmetry breaking. == Fundamental interactions == The Standard Model describes three of the four fundamental interactions in nature; only gravity remains unexplained. In the Standard Model, such an interaction is described as an exchange of bosons between the objects affected, such as a photon for the electromagnetic force and a gluon for the strong interaction. Those particles are called force carriers or messenger particles. === Gravity === Despite being perhaps the most familiar fundamental interaction, gravity is not described by the Standard Model, due to contradictions that arise when combining general relativity, the modern theory of gravity, and quantum mechanics. However, gravity is so weak at microscopic scales, that it is essentially unmeasurable. The graviton is postulated to be the mediating particle, but has not yet been proved to exist. === Electromagnetism === Electromagnetism is the only long-range force in the Standard Model. It is mediated by photons and couples to electric charge. 
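The tree-level mass relations quoted above can be checked with a few lines of arithmetic before continuing with the individual interactions. The coupling values used below are rough textbook numbers quoted only for illustration; precision electroweak fits determine them far more carefully.

```python
# Sketch of the tree-level relations above: m_W = g v / 2 and
# m_Z = sqrt(g^2 + g'^2) v / 2.  Coupling values are approximate illustrative numbers.
import math

v       = 246.22   # electroweak vacuum expectation value, GeV
g       = 0.652    # approximate SU(2) coupling
g_prime = 0.357    # approximate U(1) hypercharge coupling

m_W = 0.5 * g * v
m_Z = 0.5 * math.sqrt(g**2 + g_prime**2) * v
cos_theta_W = g / math.sqrt(g**2 + g_prime**2)

print(f"m_W ~ {m_W:.1f} GeV, m_Z ~ {m_Z:.1f} GeV")
print(f"m_W / (m_Z cos(theta_W)) = {m_W / (m_Z * cos_theta_W):.3f}")  # tree level: exactly 1
```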
Electromagnetism is responsible for a wide range of phenomena including atomic electron shell structure, chemical bonds, electric circuits and electronics. Electromagnetic interactions in the Standard Model are described by quantum electrodynamics. === Weak nuclear force === The weak interaction is responsible for various forms of particle decay, such as beta decay. It is weak and short-range, due to the fact that the weak mediating particles, W and Z bosons, have mass. W bosons have electric charge and mediate interactions that change the particle type (referred to as flavor) and charge. Interactions mediated by W bosons are charged current interactions. Z bosons are neutral and mediate neutral current interactions, which do not change particle flavor. Thus Z bosons are similar to the photon, aside from them being massive and interacting with the neutrino. The weak interaction is also the only interaction to violate parity and CP. Parity violation is maximal for charged current interactions, since the W boson interacts exclusively with left-handed fermions and right-handed antifermions. In the Standard Model, the weak force is understood in terms of the electroweak theory, which states that the weak and electromagnetic interactions become united into a single electroweak interaction at high energies. === Strong nuclear force === The strong nuclear force is responsible for hadronic and nuclear binding. It is mediated by gluons, which couple to color charge. Since gluons themselves have color charge, the strong force exhibits confinement and asymptotic freedom. Confinement means that only color-neutral particles can exist in isolation, therefore quarks can only exist in hadrons and never in isolation, at low energies. Asymptotic freedom means that the strong force becomes weaker, as the energy scale increases. The strong force overpowers the electrostatic repulsion of protons and quarks in nuclei and hadrons respectively, at their respective scales. While quarks are bound in hadrons by the fundamental strong interaction, which is mediated by gluons, nucleons are bound by an emergent phenomenon termed the residual strong force or nuclear force. This interaction is mediated by mesons, such as the pion. The color charges inside the nucleon cancel out, meaning most of the gluon and quark fields cancel out outside of the nucleon. However, some residue is "leaked", which appears as the exchange of virtual mesons, that causes the attractive force between nucleons. The (fundamental) strong interaction is described by quantum chromodynamics, which is a component of the Standard Model. == Tests and predictions == The Standard Model predicted the existence of the W and Z bosons, gluon, top quark and charm quark, and predicted many of their properties before these particles were observed. The predictions were experimentally confirmed with good precision. The Standard Model also predicted the existence of the Higgs boson, which was found in 2012 at the Large Hadron Collider, the final fundamental particle predicted by the Standard Model to be experimentally confirmed. == Challenges == Self-consistency of the Standard Model (currently formulated as a non-abelian gauge theory quantized through path-integrals) has not been mathematically proved. While regularized versions useful for approximate computations (for example lattice gauge theory) exist, it is not known whether they converge (in the sense of S-matrix elements) in the limit that the regulator is removed. 
A key question related to the consistency is the Yang–Mills existence and mass gap problem. Experiments indicate that neutrinos have mass, which the classic Standard Model did not allow. To accommodate this finding, the classic Standard Model can be modified to include neutrino mass, although it is not obvious exactly how this should be done. If one insists on using only Standard Model particles, this can be achieved by adding a non-renormalizable interaction of leptons with the Higgs boson. On a fundamental level, such an interaction emerges in the seesaw mechanism where heavy right-handed neutrinos are added to the theory. This is natural in the left-right symmetric extension of the Standard Model and in certain grand unified theories. As long as new physics appears below or around 1014 GeV, the neutrino masses can be of the right order of magnitude. Theoretical and experimental research has attempted to extend the Standard Model into a unified field theory or a theory of everything, a complete theory explaining all physical phenomena including constants. Inadequacies of the Standard Model that motivate such research include: The model does not explain gravitation, although physical confirmation of a theoretical particle known as a graviton would account for it to a degree. Though it addresses strong and electroweak interactions, the Standard Model does not consistently explain the canonical theory of gravitation, general relativity, in terms of quantum field theory. The reason for this is, among other things, that quantum field theories of gravity generally break down before reaching the Planck scale. As a consequence, we have no reliable theory for the very early universe. Some physicists consider it to be ad hoc and inelegant, requiring 19 numerical constants whose values are unrelated and arbitrary. Although the Standard Model, as it now stands, can explain why neutrinos have masses, the specifics of neutrino mass are still unclear. It is believed that explaining neutrino mass will require an additional 7 or 8 constants, which are also arbitrary parameters. The Higgs mechanism gives rise to the hierarchy problem if some new physics (coupled to the Higgs) is present at high energy scales. In these cases, in order for the weak scale to be much smaller than the Planck scale, severe fine tuning of the parameters is required; there are, however, other scenarios that include quantum gravity in which such fine tuning can be avoided. There are also issues of quantum triviality, which suggests that it may not be possible to create a consistent quantum field theory involving elementary scalar particles. The model is inconsistent with the emerging Lambda-CDM model of cosmology. Contentions include the absence of an explanation in the Standard Model of particle physics for the observed amount of cold dark matter (CDM) and its contributions to dark energy, which are many orders of magnitude too large. It is also difficult to accommodate the observed predominance of matter over antimatter (matter/antimatter asymmetry). The isotropy and homogeneity of the visible universe over large distances seems to require a mechanism like cosmic inflation, which would also constitute an extension of the Standard Model. Currently, no proposed theory of everything has been widely accepted or verified. == See also == == Notes == == References == == Further reading == Oerter, Robert (2006). The Theory of Almost Everything: The Standard Model, the Unsung Triumph of Modern Physics. Plume. ISBN 978-0-452-28786-0. 
Schumm, Bruce A. (2004). Deep Down Things: The Breathtaking Beauty of Particle Physics. Johns Hopkins University Press. ISBN 978-0-8018-7971-5. "The Standard Model of Particle Physics Interactive Graphic". === Introductory textbooks === Robert Mann (2009). An Introduction to Particle Physics and the Standard Model. CRC Press. ISBN 9780429141225. W. Greiner; B. Müller (2000). Gauge Theory of Weak Interactions. Springer. ISBN 978-3-540-67672-0. J.E. Dodd; B.M. Gripaios (2020). The Ideas of Particle Physics: An Introduction for Scientists. Cambridge University Press. ISBN 978-1-108-72740-2. D.J. Griffiths (1987). Introduction to Elementary Particles. John Wiley & Sons. ISBN 978-0-471-60386-3. W. N. Cottingham and D. A. Greenwood (2023). An Introduction to the Standard Model of Particle Physics. Cambridge University Press. ISBN 9781009401685. === Advanced textbooks === T.P. Cheng; L.F. Li (2006). Gauge theory of elementary particle physics. Oxford University Press. ISBN 978-0-19-851961-4. Highlights the gauge theory aspects of the Standard Model. J.F. Donoghue; E. Golowich; B.R. Holstein (1994). Dynamics of the Standard Model. Cambridge University Press. ISBN 978-0-521-47652-2. Highlights dynamical and phenomenological aspects of the Standard Model. Ken J. Barnes (2010). Group Theory for the Standard Model of Particle Physics and Beyond. Taylor & Francis. ISBN 9780429184550. Nagashima, Yorikiyo (2013). Elementary Particle Physics: Foundations of the Standard Model, Volume 2. Wiley. ISBN 978-3-527-64890-0. 920 pages. Schwartz, Matthew D. (2014). Quantum Field Theory and the Standard Model. Cambridge University. ISBN 978-1-107-03473-0. 952 pages. Langacker, Paul (2009). The Standard Model and Beyond. CRC Press. ISBN 978-1-4200-7907-4. 670 pages. Highlights group-theoretical aspects of the Standard Model. === Journal articles === E.S. Abers; B.W. Lee (1973). "Gauge theories". Physics Reports. 9 (1): 1–141. Bibcode:1973PhR.....9....1A. doi:10.1016/0370-1573(73)90027-6. M. Baak; et al. (2012). "The Electroweak Fit of the Standard Model after the Discovery of a New Boson at the LHC". The European Physical Journal C. 72 (11): 2205. arXiv:1209.2716. Bibcode:2012EPJC...72.2205B. doi:10.1140/epjc/s10052-012-2205-9. S2CID 15052448. Y. Hayato; et al. (1999). "Search for Proton Decay through p → νK+ in a Large Water Cherenkov Detector". Physical Review Letters. 83 (8): 1529–1533. arXiv:hep-ex/9904020. Bibcode:1999PhRvL..83.1529H. doi:10.1103/PhysRevLett.83.1529. S2CID 118326409. S.F. Novaes (2000). "Standard Model: An Introduction". arXiv:hep-ph/0001283. D.P. Roy (1999). "Basic Constituents of Matter and their Interactions – A Progress Report". arXiv:hep-ph/9912523. F. Wilczek (2004). "The Universe Is A Strange Place". Nuclear Physics B: Proceedings Supplements. 134: 3. arXiv:astro-ph/0401347. Bibcode:2004NuPhS.134....3W. doi:10.1016/j.nuclphysbps.2004.08.001. S2CID 28234516. == External links == "The Standard Model explained in Detail by CERN's John Ellis" omega tau podcast. The Standard Model on the CERN website explains how the basic building blocks of matter interact, governed by four fundamental forces. Particle Physics: Standard Model, Leonard Susskind lectures (2010).
Wikipedia/Standard_Model
The exact sciences or quantitative sciences, sometimes called the exact mathematical sciences, are those sciences "which admit of absolute precision in their results"; especially the mathematical sciences. Examples of the exact sciences are mathematics, optics, astronomy, and physics, which many philosophers from René Descartes, Gottfried Leibniz, and Immanuel Kant to the logical positivists took as paradigms of rational and objective knowledge. These sciences have been practiced in many cultures from antiquity to modern times. Given their ties to mathematics, the exact sciences are characterized by accurate quantitative expression, precise predictions and/or rigorous methods of testing hypotheses involving quantifiable predictions and measurements. The distinction between the quantitative exact sciences and those sciences that deal with the causes of things is due to Aristotle, who distinguished mathematics from natural philosophy and considered the exact sciences to be the "more natural of the branches of mathematics." Thomas Aquinas employed this distinction when he said that astronomy explains the spherical shape of the Earth by mathematical reasoning while physics explains it by material causes. This distinction was widely, but not universally, accepted until the scientific revolution of the 17th century. Edward Grant has proposed that a fundamental change leading to the new sciences was the unification of the exact sciences and physics by Johannes Kepler, Isaac Newton, and others, which resulted in a quantitative investigation of the physical causes of natural phenomena. == See also == Hard and soft science Fundamental science Demarcation problem == References ==
Wikipedia/Exact_science
Plasma (from Ancient Greek πλάσμα (plásma) 'moldable substance') is a state of matter characterized by the presence of a significant portion of charged particles in any combination of ions or electrons. It is the most abundant form of ordinary matter in the universe, mostly in stars (including the Sun), but also dominating the rarefied intracluster medium and intergalactic medium. Plasma can be artificially generated, for example, by heating a neutral gas or subjecting it to a strong electromagnetic field. The presence of charged particles makes plasma electrically conductive, with the dynamics of individual particles and macroscopic plasma motion governed by collective electromagnetic fields and very sensitive to externally applied fields. The response of plasma to electromagnetic fields is used in many modern devices and technologies, such as plasma televisions or plasma etching. Depending on temperature and density, a certain number of neutral particles may also be present, in which case plasma is called partially ionized. Neon signs and lightning are examples of partially ionized plasmas. Unlike the phase transitions between the other three states of matter, the transition to plasma is not well defined and is a matter of interpretation and context. Whether a given degree of ionization suffices to call a substance "plasma" depends on the specific phenomenon being considered. == Early history == Plasma was first identified in laboratory by Sir William Crookes. Crookes presented a lecture on what he called "radiant matter" to the British Association for the Advancement of Science, in Sheffield, on Friday, 22 August 1879. Systematic studies of plasma began with the research of Irving Langmuir and his colleagues in the 1920s. Langmuir also introduced the term "plasma" as a description of ionized gas in 1928: Except near the electrodes, where there are sheaths containing very few electrons, the ionized gas contains ions and electrons in about equal numbers so that the resultant space charge is very small. We shall use the name plasma to describe this region containing balanced charges of ions and electrons. Lewi Tonks and Harold Mott-Smith, both of whom worked with Langmuir in the 1920s, recall that Langmuir first used the term by analogy with the blood plasma. Mott-Smith recalls, in particular, that the transport of electrons from thermionic filaments reminded Langmuir of "the way blood plasma carries red and white corpuscles and germs." == Definitions == === The fourth state of matter === Plasma is called the fourth state of matter after solid, liquid, and gas. It is a state of matter in which an ionized substance becomes highly electrically conductive to the point that long-range electric and magnetic fields dominate its behaviour. Plasma is typically an electrically quasineutral medium of unbound positive and negative particles (i.e., the overall charge of a plasma is roughly zero). Although these particles are unbound, they are not "free" in the sense of not experiencing forces. Moving charged particles generate electric currents, and any movement of a charged plasma particle affects and is affected by the fields created by the other charges. In turn, this governs collective behaviour with many degrees of variation. Plasma is distinct from the other states of matter. In particular, describing a low-density plasma as merely an "ionized gas" is wrong and misleading, even though it is similar to the gas phase in that both assume no definite shape or volume. 
=== Ideal plasma === Three factors define an ideal plasma: The plasma approximation: The plasma approximation applies when the plasma parameter Λ, representing the number of charge carriers within the Debye sphere, is much higher than unity. It can be readily shown that this criterion is equivalent to smallness of the ratio of the plasma electrostatic and thermal energy densities. Such plasmas are called weakly coupled. Bulk interactions: The Debye length is much smaller than the physical size of the plasma. This criterion means that interactions in the bulk of the plasma are more important than those at its edges, where boundary effects may take place. When this criterion is satisfied, the plasma is quasineutral. Collisionlessness: The electron plasma frequency (measuring plasma oscillations of the electrons) is much larger than the electron–neutral collision frequency. When this condition is valid, electrostatic interactions dominate over the processes of ordinary gas kinetics. Such plasmas are called collisionless. === Non-neutral plasma === The strength and range of the electric force and the good conductivity of plasmas usually ensure that the densities of positive and negative charges in any sizeable region are equal ("quasineutrality"). A plasma with a significant excess of charge density or, in the extreme case, composed of a single species is called a non-neutral plasma. In such a plasma, electric fields play a dominant role. Examples are charged particle beams, an electron cloud in a Penning trap and positron plasmas. === Dusty plasma === A dusty plasma contains tiny charged particles of dust (typically found in space). The dust particles acquire high charges and interact with each other. A plasma that contains larger particles is called grain plasma. Under laboratory conditions, dusty plasmas are also called complex plasmas. == Properties and parameters == === Density and ionization degree === For plasma to exist, ionization is necessary. The term "plasma density" by itself usually refers to the electron density $n_{e}$, that is, the number of charge-contributing electrons per unit volume. The degree of ionization $\alpha$ is defined as the fraction of neutral particles that are ionized: $$\alpha=\frac{n_{i}}{n_{i}+n_{n}},$$ where $n_{i}$ is the ion density and $n_{n}$ the neutral density (in number of particles per unit volume). In the case of fully ionized matter, $\alpha=1$. Because of the quasineutrality of plasma, the electron and ion densities are related by $n_{e}=\langle Z_{i}\rangle n_{i}$, where $\langle Z_{i}\rangle$ is the average ion charge (in units of the elementary charge). === Temperature === Plasma temperature, commonly measured in kelvin or electronvolts, is a measure of the thermal kinetic energy per particle. High temperatures are usually needed to sustain ionization, which is a defining feature of a plasma. The degree of plasma ionization is determined by the electron temperature relative to the ionization energy (and more weakly by the density). In thermal equilibrium, the relationship is given by the Saha equation. At low temperatures, ions and electrons tend to recombine into bound states—atoms—and the plasma will eventually become a gas. 
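The ideal-plasma criteria and density definitions above can be made concrete with a short calculation of the Debye length, the number of electrons in a Debye sphere (the plasma parameter), and the electron plasma frequency. The density and temperature are assumed example values, loosely representative of a laboratory discharge.

```python
# Minimal sketch of the "ideal plasma" bookkeeping described above.  The density
# and temperature are assumed example values.
import math

e    = 1.602176634e-19     # C
m_e  = 9.1093837015e-31    # kg
eps0 = 8.8541878128e-12    # F/m
k_B  = 1.380649e-23        # J/K

n_e = 1e16                 # assumed electron density, m^-3
T_e = 2.0 * 11604.5        # assumed electron temperature: 2 eV expressed in kelvin

debye_length = math.sqrt(eps0 * k_B * T_e / (n_e * e**2))              # m
plasma_parameter = n_e * (4.0 / 3.0) * math.pi * debye_length**3       # electrons per Debye sphere
omega_pe = math.sqrt(n_e * e**2 / (eps0 * m_e))                        # rad/s

print(f"Debye length      ~ {debye_length*1e6:.1f} micrometres")
print(f"plasma parameter  ~ {plasma_parameter:.2e}  (>> 1: weakly coupled)")
print(f"plasma frequency  ~ {omega_pe/(2*math.pi)/1e9:.2f} GHz")
```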
In most cases, the electrons and heavy plasma particles (ions and neutral atoms) separately have a relatively well-defined temperature; that is, their energy distribution function is close to a Maxwellian even in the presence of strong electric or magnetic fields. However, because of the large difference in mass between electrons and ions, their temperatures may be different, sometimes significantly so. This is especially common in weakly ionized technological plasmas, where the ions are often near the ambient temperature while electrons reach thousands of kelvin. The opposite case is the z-pinch plasma where the ion temperature may exceed that of electrons. === Plasma potential === Since plasmas are very good electrical conductors, electric potentials play an important role. The average potential in the space between charged particles, independent of how it can be measured, is called the "plasma potential", or the "space potential". If an electrode is inserted into a plasma, its potential will generally lie considerably below the plasma potential due to what is termed a Debye sheath. The good electrical conductivity of plasmas makes their electric fields very small. This results in the important concept of "quasineutrality", which says the density of negative charges is approximately equal to the density of positive charges over large volumes of the plasma ($n_{e}=\langle Z\rangle n_{i}$), but on the scale of the Debye length, there can be charge imbalance. In the special case that double layers are formed, the charge separation can extend some tens of Debye lengths. The magnitude of the potentials and electric fields must be determined by means other than simply finding the net charge density. A common example is to assume that the electrons satisfy the Boltzmann relation: $$n_{e}\propto\exp(e\Phi/k_{\text{B}}T_{e}).$$ Differentiating this relation provides a means to calculate the electric field from the density: $$\vec{E}=\frac{k_{\text{B}}T_{e}}{e}\frac{\nabla n_{e}}{n_{e}}.$$ It is possible to produce a plasma that is not quasineutral. An electron beam, for example, has only negative charges. The density of a non-neutral plasma must generally be very low, or the plasma must be very small; otherwise, it will be dissipated by the repulsive electrostatic force. === Magnetization === The existence of charged particles causes the plasma to generate, and be affected by, magnetic fields. Plasma with a magnetic field strong enough to influence the motion of the charged particles is said to be magnetized. A common quantitative criterion is that a particle on average completes at least one gyration around the magnetic-field line before making a collision, i.e., $\nu_{\mathrm{ce}}/\nu_{\mathrm{coll}}>1$, where $\nu_{\mathrm{ce}}$ is the electron gyrofrequency and $\nu_{\mathrm{coll}}$ is the electron collision rate. It is often the case that the electrons are magnetized while the ions are not. Magnetized plasmas are anisotropic, meaning that their properties in the direction parallel to the magnetic field are different from those perpendicular to it. 
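The magnetization criterion just stated compares the electron gyrofrequency with the collision rate. The sketch below uses assumed example values for the field strength and the collision frequency; it is bookkeeping only.

```python
# Sketch of the magnetization criterion described above: a plasma is considered
# magnetized when an electron completes many gyrations between collisions.
# Field strength and collision rate below are assumed example values.
import math

e   = 1.602176634e-19    # C
m_e = 9.1093837015e-31   # kg

B = 0.1                  # assumed magnetic field, tesla
nu_collision = 5e7       # assumed electron collision frequency, 1/s

nu_ce = e * B / (m_e * 2 * math.pi)    # electron gyrofrequency (cyclotron frequency), Hz

print(f"electron gyrofrequency ~ {nu_ce:.2e} Hz")
print(f"gyrations per collision ~ {nu_ce / nu_collision:.0f}")
print("magnetized" if nu_ce / nu_collision > 1 else "unmagnetized")
```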
While electric fields in plasmas are usually small due to the plasma's high conductivity, the electric field associated with a plasma moving with velocity v {\displaystyle \mathbf {v} } in the magnetic field B {\displaystyle \mathbf {B} } is given by the usual Lorentz formula E = − v × B {\displaystyle \mathbf {E} =-\mathbf {v} \times \mathbf {B} } , and is not affected by Debye shielding. == Mathematical descriptions == To completely describe the state of a plasma, all of the particle positions and velocities, together with the electromagnetic field in the plasma region, would need to be written down. However, it is generally not practical or necessary to keep track of all the particles in a plasma. Therefore, plasma physicists commonly use less detailed descriptions, of which there are two main types: === Fluid model === Fluid models describe plasmas in terms of smoothed quantities, like density and averaged velocity around each position (see Plasma parameters). One simple fluid model, magnetohydrodynamics, treats the plasma as a single fluid governed by a combination of Maxwell's equations and the Navier–Stokes equations. A more general description is the two-fluid plasma, where the ions and electrons are described separately. Fluid models are often accurate when collisionality is sufficiently high to keep the plasma velocity distribution close to a Maxwell–Boltzmann distribution. Because fluid models usually describe the plasma in terms of a single flow at a certain temperature at each spatial location, they can neither capture velocity space structures like beams or double layers, nor resolve wave-particle effects. === Kinetic model === Kinetic models describe the particle velocity distribution function at each point in the plasma and therefore do not need to assume a Maxwell–Boltzmann distribution. A kinetic description is often necessary for collisionless plasmas. There are two common approaches to the kinetic description of a plasma. One is based on representing the smoothed distribution function on a grid in velocity and position. The other, known as the particle-in-cell (PIC) technique, includes kinetic information by following the trajectories of a large number of individual particles. Kinetic models are generally more computationally intensive than fluid models. The Vlasov equation may be used to describe the dynamics of a system of charged particles interacting with an electromagnetic field. In magnetized plasmas, a gyrokinetic approach can substantially reduce the computational expense of a fully kinetic simulation. == Plasma science and technology == Plasmas are studied by the vast academic field of plasma science or plasma physics, including several sub-disciplines such as space plasma physics. Plasmas appear in nature in a variety of forms and locations, a few examples of which are given in the following sections. === Space and astrophysics === Plasmas are by far the most common phase of ordinary matter in the universe, both by mass and by volume. Above the Earth's surface, the ionosphere is a plasma, and the magnetosphere contains plasma. Within our Solar System, interplanetary space is filled with the plasma expelled via the solar wind, extending from the Sun's surface out to the heliopause. Furthermore, all the distant stars, and much of interstellar and intergalactic space, are also filled with plasma, albeit at very low densities.
Astrophysical plasmas are also observed in accretion disks around stars or compact objects like white dwarfs, neutron stars, or black holes in close binary star systems. Plasma is associated with ejection of material in astrophysical jets, which have been observed with accreting black holes or in active galaxies like M87's jet that possibly extends out to 5,000 light-years. === Artificial plasmas === Most artificial plasmas are generated by the application of electric and/or magnetic fields through a gas. Plasma generated in a laboratory setting and for industrial use can be generally categorized by: The type of power source used to generate the plasma—DC, AC (typically with radio frequency (RF)) and microwave The pressure they operate at—vacuum pressure (< 10 mTorr or 1 Pa), moderate pressure (≈1 Torr or 100 Pa), atmospheric pressure (760 Torr or 100 kPa) The degree of ionization within the plasma—fully, partially, or weakly ionized The temperature relationships within the plasma—thermal plasma ( T e = T i = T gas {\displaystyle T_{e}=T_{i}=T_{\text{gas}}} ), non-thermal or "cold" plasma ( T e ≫ T i = T gas {\displaystyle T_{e}\gg T_{i}=T_{\text{gas}}} ) The electrode configuration used to generate the plasma The magnetization of the particles within the plasma—magnetized (both ion and electrons are trapped in Larmor orbits by the magnetic field), partially magnetized (the electrons but not the ions are trapped by the magnetic field), non-magnetized (the magnetic field is too weak to trap the particles in orbits but may generate Lorentz forces) ==== Generation of artificial plasma ==== Just like the many uses of plasma, there are several means for its generation. However, one principle is common to all of them: there must be energy input to produce and sustain it. For this case, plasma is generated when an electric current is applied across a dielectric gas or fluid (an electrically non-conducting material) as can be seen in the adjacent image, which shows a discharge tube as a simple example (DC used for simplicity). The potential difference and subsequent electric field pull the bound electrons (negative) toward the anode (positive electrode) while the cathode (negative electrode) pulls the nucleus. As the voltage increases, the current stresses the material (by electric polarization) beyond its dielectric limit (termed strength) into a stage of electrical breakdown, marked by an electric spark, where the material transforms from being an insulator into a conductor (as it becomes increasingly ionized). The underlying process is the Townsend avalanche, where collisions between electrons and neutral gas atoms create more ions and electrons (as can be seen in the figure on the right). The first impact of an electron on an atom results in one ion and two electrons. Therefore, the number of charged particles increases rapidly (in the millions) only "after about 20 successive sets of collisions", mainly due to a small mean free path (average distance travelled between collisions). ===== Electric arc ===== Electric arc is a continuous electric discharge between two electrodes, similar to lightning. With ample current density, the discharge forms a luminous arc, where the inter-electrode material (usually, a gas) undergoes various stages — saturation, breakdown, glow, transition, and thermal arc. The voltage rises to its maximum in the saturation stage, and thereafter it undergoes fluctuations of the various stages, while the current progressively increases throughout. 
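The Townsend avalanche described above can be illustrated with an idealized doubling model: if every ionizing collision turns one electron into two (plus an ion), roughly twenty collision generations are enough to reach electron numbers "in the millions". The sketch below ignores attachment, recombination and spatial effects, so it is only a counting argument, not a discharge model.

```python
# Electron multiplication in an idealized Townsend avalanche: each electron-neutral
# ionizing collision turns one electron into two (plus an ion), so the electron
# count doubles per "generation" of collisions.
electrons = 1
for generation in range(1, 21):
    electrons *= 2
    if generation in (10, 20):
        print(f"after {generation:2d} collision generations: {electrons:,} electrons")
# after 10 collision generations: 1,024 electrons
# after 20 collision generations: 1,048,576 electrons  -> "in the millions"
```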
Electrical resistance along the arc creates heat, which dissociates more gas molecules and ionizes the resulting atoms. Therefore, the electrical energy is given to electrons, which, due to their great mobility and large numbers, are able to disperse it rapidly by elastic collisions to the heavy particles. ==== Examples of industrial plasma ==== Plasmas find applications in many fields of research, technology and industry, for example, in industrial and extractive metallurgy, surface treatments such as plasma spraying (coating), etching in microelectronics, metal cutting and welding; as well as in everyday vehicle exhaust cleanup and fluorescent/luminescent lamps, fuel ignition, and even in supersonic combustion engines for aerospace engineering. ===== Low-pressure discharges ===== Glow discharge plasmas: non-thermal plasmas generated by the application of DC or low frequency RF (<100 kHz) electric field to the gap between two metal electrodes. Probably the most common plasma; this is the type of plasma generated within fluorescent light tubes. Capacitively coupled plasma (CCP): similar to glow discharge plasmas, but generated with high frequency RF electric fields, typically 13.56 MHz. These differ from glow discharges in that the sheaths are much less intense. These are widely used in the microfabrication and integrated circuit manufacturing industries for plasma etching and plasma enhanced chemical vapor deposition. Cascaded arc plasma source: a device to produce low temperature (≈1eV) high density plasmas (HDP). Inductively coupled plasma (ICP): similar to a CCP and with similar applications but the electrode consists of a coil wrapped around the chamber where plasma is formed. Wave heated plasma: similar to CCP and ICP in that it is typically RF (or microwave). Examples include helicon discharge and electron cyclotron resonance (ECR). ===== Atmospheric pressure ===== Arc discharge: this is a high power thermal discharge of very high temperature (≈10,000 K). It can be generated using various power supplies. It is commonly used in metallurgical processes. For example, it is used to smelt minerals containing Al2O3 to produce aluminium. Corona discharge: this is a non-thermal discharge generated by the application of high voltage to sharp electrode tips. It is commonly used in ozone generators and particle precipitators. Dielectric barrier discharge (DBD): this is a non-thermal discharge generated by the application of high voltages across small gaps wherein a non-conducting coating prevents the transition of the plasma discharge into an arc. It is often mislabeled "Corona" discharge in industry and has similar application to corona discharges. A common usage of this discharge is in a plasma actuator for vehicle drag reduction. It is also widely used in the web treatment of fabrics. The application of the discharge to synthetic fabrics and plastics functionalizes the surface and allows for paints, glues and similar materials to adhere. The dielectric barrier discharge was used in the mid-1990s to show that low temperature atmospheric pressure plasma is effective in inactivating bacterial cells. This work and later experiments using mammalian cells led to the establishment of a new field of research known as plasma medicine. The dielectric barrier discharge configuration was also used in the design of low temperature plasma jets. These plasma jets are produced by fast propagating guided ionization waves known as plasma bullets. 
Capacitive discharge: this is a nonthermal plasma generated by the application of RF power (e.g., 13.56 MHz) to one powered electrode, with a grounded electrode held at a small separation distance on the order of 1 cm. Such discharges are commonly stabilized using a noble gas such as helium or argon. Piezoelectric direct discharge plasma: this is a nonthermal plasma generated at the high side of a piezoelectric transformer (PT). This generation variant is particularly suited for highly efficient and compact devices where a separate high voltage power supply is not desired. ==== MHD converters ==== A world effort was triggered in the 1960s to study magnetohydrodynamic converters in order to bring MHD power conversion to market with commercial power plants of a new kind, converting the kinetic energy of a high velocity plasma into electricity with no moving parts at a high efficiency. Research was also conducted in the field of supersonic and hypersonic aerodynamics to study plasma interaction with magnetic fields to eventually achieve passive and even active flow control around vehicles or projectiles, in order to soften and mitigate shock waves, lower thermal transfer and reduce drag. Such ionized gases used in "plasma technology" ("technological" or "engineered" plasmas) are usually weakly ionized gases in the sense that only a tiny fraction of the gas molecules are ionized. These kinds of weakly ionized gases are also nonthermal "cold" plasmas. In the presence of magnetic fields, the study of such magnetized nonthermal weakly ionized gases involves resistive magnetohydrodynamics with a low magnetic Reynolds number, a challenging field of plasma physics where calculations require dyadic tensors in a 7-dimensional phase space. When combined with a high Hall parameter, a critical value triggers the problematic electrothermal instability, which has limited these technological developments. == Complex plasma phenomena == Although the underlying equations governing plasmas are relatively simple, plasma behaviour is extraordinarily varied and subtle: the emergence of unexpected behaviour from a simple model is a typical feature of a complex system. Such systems lie in some sense on the boundary between ordered and disordered behaviour and cannot typically be described either by simple, smooth, mathematical functions, or by pure randomness. The spontaneous formation of interesting spatial features on a wide range of length scales is one manifestation of plasma complexity. The features are interesting, for example, because they are very sharp, spatially intermittent (the distance between features is much larger than the features themselves), or have a fractal form. Many of these features were first studied in the laboratory, and have subsequently been recognized throughout the universe. Examples of complexity and complex structures in plasmas include: === Filamentation === Striations or string-like structures are seen in many plasmas, like the plasma ball, the aurora, lightning, electric arcs, solar flares, and supernova remnants. They are sometimes associated with larger current densities, and the interaction with the magnetic field can form a magnetic rope structure. (See also Plasma pinch) Filamentation also refers to the self-focusing of a high power laser pulse.
At high powers, the nonlinear part of the index of refraction becomes important and causes a higher index of refraction in the center of the laser beam, where the laser is brighter than at the edges, causing a feedback that focuses the laser even more. The tighter focused laser has a higher peak brightness (irradiance) that forms a plasma. The plasma has an index of refraction lower than one, and causes a defocusing of the laser beam. The interplay of the focusing index of refraction, and the defocusing plasma makes the formation of a long filament of plasma that can be micrometers to kilometers in length. One interesting aspect of the filamentation generated plasma is the relatively low ion density due to defocusing effects of the ionized electrons. (See also Filament propagation) === Impermeable plasma === Impermeable plasma is a type of thermal plasma which acts like an impermeable solid with respect to gas or cold plasma and can be physically pushed. Interaction of cold gas and thermal plasma was briefly studied by a group led by Hannes Alfvén in 1960s and 1970s for its possible applications in insulation of fusion plasma from the reactor walls. However, later it was found that the external magnetic fields in this configuration could induce kink instabilities in the plasma and subsequently lead to an unexpectedly high heat loss to the walls. In 2013, a group of materials scientists reported that they have successfully generated stable impermeable plasma with no magnetic confinement using only an ultrahigh-pressure blanket of cold gas. While spectroscopic data on the characteristics of plasma were claimed to be difficult to obtain due to the high pressure, the passive effect of plasma on synthesis of different nanostructures clearly suggested the effective confinement. They also showed that upon maintaining the impermeability for a few tens of seconds, screening of ions at the plasma-gas interface could give rise to a strong secondary mode of heating (known as viscous heating) leading to different kinetics of reactions and formation of complex nanomaterials. == Gallery == == See also == == References == == External links == Plasmas: the Fourth State of Matter Archived 30 September 2019 at the Wayback Machine Introduction to Plasma Physics: Graduate course given by Richard Fitzpatrick|M.I.T. Introduction by I.H.Hutchinson Plasma Material Interaction How to make a glowing ball of plasma in your microwave with a grape Archived 6 September 2005 at the Wayback Machine|More (Video) OpenPIC3D – 3D Hybrid Particle-In-Cell simulation of plasma dynamics Plasma Formulary Interactive
Wikipedia/Plasma_(physics)
A complex system is a system composed of many components which may interact with each other. Examples of complex systems are Earth's global climate, organisms, the human brain, infrastructure such as power grid, transportation or communication systems, complex software and electronic systems, social and economic organizations (like cities), an ecosystem, a living cell, and, ultimately, for some authors, the entire universe. The behavior of a complex system is intrinsically difficult to model due to the dependencies, competitions, relationships, and other types of interactions between their parts or between a given system and its environment. Systems that are "complex" have distinct properties that arise from these relationships, such as nonlinearity, emergence, spontaneous order, adaptation, and feedback loops, among others. Because such systems appear in a wide variety of fields, the commonalities among them have become the topic of their independent area of research. In many cases, it is useful to represent such a system as a network where the nodes represent the components and links represent their interactions. The term complex systems often refers to the study of complex systems, which is an approach to science that investigates how relationships between a system's parts give rise to its collective behaviors and how the system interacts and forms relationships with its environment. The study of complex systems regards collective, or system-wide, behaviors as the fundamental object of study; for this reason, complex systems can be understood as an alternative paradigm to reductionism, which attempts to explain systems in terms of their constituent parts and the individual interactions between them. As an interdisciplinary domain, complex systems draw contributions from many different fields, such as the study of self-organization and critical phenomena from physics, of spontaneous order from the social sciences, chaos from mathematics, adaptation from biology, and many others. Complex systems is therefore often used as a broad term encompassing a research approach to problems in many diverse disciplines, including statistical physics, information theory, nonlinear dynamics, anthropology, computer science, meteorology, sociology, economics, psychology, and biology. == Types of systems == Complex systems can be: Complex adaptive systems which have the capacity to change. Polycentric systems : “where many elements are capable of making mutual adjustments for ordering their relationships with one another within a general system of rules where each element acts with independence of other elements”. Disorganised systems involving localized interactions of multiple entities that do not form a coherent whole. Disorganised systems are linked to self-organisation processes. Hierarchic systems which are analyzable into successive sets of subsystems. They can also be called nested or embedded systems. Cybernetic systems involve information feedback loops. == Key concepts == === Adaptation === Complex adaptive systems are special cases of complex systems that are adaptive in that they have the capacity to change and learn from experience. Examples of complex adaptive systems include the international trade markets, social insect and ant colonies, the biosphere and the ecosystem, the brain and the immune system, the cell and the developing embryo, cities, manufacturing businesses and any human social group-based endeavor in a cultural and social system such as political parties or communities. 
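The network representation mentioned earlier can be sketched with the third-party networkx library; the components and interactions below are invented purely for illustration, and any real study would build the graph from observed data.

```python
import networkx as nx

# Toy interaction network: nodes are components, links are interactions.
# The components and links listed here are made up for illustration only.
G = nx.Graph()
G.add_edges_from([
    ("power grid", "communication"), ("communication", "transportation"),
    ("transportation", "power grid"), ("economy", "power grid"),
    ("economy", "transportation"), ("economy", "communication"),
    ("households", "economy"), ("households", "communication"),
])

print("degree of each component:", dict(G.degree()))
print("average clustering coefficient:", round(nx.average_clustering(G), 3))
print("average shortest path length:", round(nx.average_shortest_path_length(G), 3))
```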
=== Decomposability === A system is decomposable if the parts of the system (subsystems) are independent of each other; for example, the model of a perfect gas considers the relations among molecules negligible. In a nearly decomposable system, the interactions between subsystems are weak but not negligible; this is often the case in social systems. Conceptually, a system is nearly decomposable if the variables composing it can be separated into classes and subclasses, if these variables are independent for many functions but affect each other, and if the whole system is greater than the parts. == Features == Complex systems may have the following features: Complex systems may be open Complex systems are usually open systems – that is, they exist in a thermodynamic gradient and dissipate energy. In other words, complex systems are frequently far from energetic equilibrium: but despite this flux, there may be pattern stability (see synergetics). Complex systems may exhibit critical transitions Critical transitions are abrupt shifts in the state of ecosystems, the climate, financial and economic systems or other complex systems that may occur when changing conditions pass a critical or bifurcation point. The 'direction of critical slowing down' in a system's state space may be indicative of a system's future state after such transitions when delayed negative feedbacks leading to oscillatory or other complex dynamics are weak. Complex systems may be nested The components of a complex system may themselves be complex systems. For example, an economy is made up of organisations, which are made up of people, which are made up of cells – all of which are complex systems. The arrangement of interactions within complex bipartite networks may be nested as well. More specifically, bipartite ecological and organisational networks of mutually beneficial interactions were found to have a nested structure. This structure promotes indirect facilitation and a system's capacity to persist under increasingly harsh circumstances as well as the potential for large-scale systemic regime shifts. Dynamic network of multiplicity As well as coupling rules, the dynamic network of a complex system is important. Small-world or scale-free networks which have many local interactions and a smaller number of inter-area connections are often employed. Natural complex systems often exhibit such topologies. In the human cortex for example, we see dense local connectivity and a few very long axon projections between regions inside the cortex and to other brain regions. May produce emergent phenomena Complex systems may exhibit behaviors that are emergent, which is to say that while the results may be sufficiently determined by the activity of the systems' basic constituents, they may have properties that can only be studied at a higher level. For example, empirical food webs display regular, scale-invariant features across aquatic and terrestrial ecosystems when studied at the level of clustered 'trophic' species. Another example is offered by the termites in a mound which have physiology, biochemistry and biological development at one level of analysis, whereas their social behavior and mound building is a property that emerges from the collection of termites and needs to be analyzed at a different level. Relationships are non-linear In practical terms, this means a small perturbation may cause a large effect (see butterfly effect), a proportional effect, or even no effect at all.
In linear systems, the effect is always directly proportional to cause. See nonlinearity. Relationships contain feedback loops Both negative (damping) and positive (amplifying) feedback are always found in complex systems. The effects of an element's behavior are fed back in such a way that the element itself is altered. == History == In 1948, Dr. Warren Weaver published an essay on "Science and Complexity", exploring the diversity of problem types by contrasting problems of simplicity, disorganized complexity, and organized complexity. Weaver described these as "problems which involve dealing simultaneously with a sizable number of factors which are interrelated into an organic whole." While the explicit study of complex systems dates at least to the 1970s, the first research institute focused on complex systems, the Santa Fe Institute, was founded in 1984. Early Santa Fe Institute participants included physics Nobel laureates Murray Gell-Mann and Philip Anderson, economics Nobel laureate Kenneth Arrow, and Manhattan Project scientists George Cowan and Herb Anderson. Today, there are over 50 institutes and research centers focusing on complex systems. Since the late 1990s, the interest of mathematical physicists in researching economic phenomena has been on the rise. The proliferation of cross-disciplinary research with the application of solutions originated from the physics epistemology has entailed a gradual paradigm shift in the theoretical articulations and methodological approaches in economics, primarily in financial economics. The development has resulted in the emergence of a new branch of discipline, namely "econophysics", which is broadly defined as a cross-discipline that applies statistical physics methodologies which are mostly based on the complex systems theory and the chaos theory for economics analysis. The 2021 Nobel Prize in Physics was awarded to Syukuro Manabe, Klaus Hasselmann, and Giorgio Parisi for their work to understand complex systems. Their work was used to create more accurate computer models of the effect of global warming on the Earth's climate. == Applications == === Complexity in practice === The traditional approach to dealing with complexity is to reduce or constrain it. Typically, this involves compartmentalization: dividing a large system into separate parts. Organizations, for instance, divide their work into departments that each deal with separate issues. Engineering systems are often designed using modular components. However, modular designs become susceptible to failure when issues arise that bridge the divisions. === Complexity of cities === Jane Jacobs described cities as being a problem in organized complexity in 1961, citing Dr. Weaver's 1948 essay. As an example, she explains how an abundance of factors interplay into how various urban spaces lead to a diversity of interactions, and how changing those factors can change how the space is used, and how well the space supports the functions of the city. She further illustrates how cities have been severely damaged when approached as a problem in simplicity by replacing organized complexity with simple and predictable spaces, such as Le Corbusier's "Radiant City" and Ebenezer Howard's "Garden City". Since then, others have written at length on the complexity of cities. === Complexity economics === Over the last decades, within the emerging field of complexity economics, new predictive tools have been developed to explain economic growth. 
Such is the case with the models built by the Santa Fe Institute in 1989 and the more recent economic complexity index (ECI), introduced by the MIT physicist Cesar A. Hidalgo and the Harvard economist Ricardo Hausmann. Recurrence quantification analysis has been employed to detect the characteristics of business cycles and economic development. To this end, Orlando et al. developed the so-called recurrence quantification correlation index (RQCI) to test correlations of RQA on a sample signal and then investigated the application to business time series. The index has been shown to detect hidden changes in time series. Further, Orlando et al., using an extensive dataset, showed that recurrence quantification analysis may help in anticipating transitions from laminar (i.e. regular) to turbulent (i.e. chaotic) phases, such as those in US GDP in 1949, 1953, etc. Finally, it has been demonstrated that recurrence quantification analysis can detect differences between macroeconomic variables and highlight hidden features of economic dynamics. === Complexity and education === Focusing on issues of student persistence with their studies, Forsman, Moll and Linder explore the "viability of using complexity science as a frame to extend methodological applications for physics education research", finding that "framing a social network analysis within a complexity science perspective offers a new and powerful applicability across a broad range of PER topics". === Complexity in healthcare research and practice === Healthcare systems are prime examples of complex systems, characterized by interactions among diverse stakeholders, such as patients, providers, policymakers, and researchers, across various sectors like health, government, community, and education. These systems demonstrate properties like non-linearity, emergence, adaptation, and feedback loops. Complexity science in healthcare frames knowledge translation as a dynamic and interconnected network of processes—problem identification, knowledge creation, synthesis, implementation, and evaluation—rather than a linear or cyclical sequence. Such approaches emphasize the importance of understanding and leveraging the interactions within and between these processes and stakeholders to optimize the creation and movement of knowledge. By acknowledging the complex, adaptive nature of healthcare systems, complexity science advocates for continuous stakeholder engagement, transdisciplinary collaboration, and flexible strategies to effectively translate research into practice. === Complexity and biology === Complexity science has been applied to living organisms, and in particular to biological systems. Within the emerging field of fractal physiology, bodily signals, such as heart rate or brain activity, are characterized using entropy or fractal indices. The goal is often to assess the state and the health of the underlying system, and diagnose potential disorders and illnesses. === Complexity and chaos theory === Complex systems theory is related to chaos theory, which in turn has its origins more than a century ago in the work of the French mathematician Henri Poincaré. Chaos is sometimes viewed as extremely complicated information, rather than as an absence of order. Chaotic systems remain deterministic, though their long-term behavior can be difficult to predict with any accuracy.
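A minimal numerical illustration of this determinism-without-predictability uses the logistic map, a standard textbook example of deterministic chaos; the parameter value and initial conditions below are arbitrary choices made for the sketch.

```python
# Deterministic chaos in the logistic map x_{n+1} = r*x_n*(1-x_n) at r = 4:
# two trajectories that start almost identically diverge after a few dozen steps,
# which is why long-term prediction is hard even though the rule is deterministic.
r = 4.0
x_a, x_b = 0.2, 0.2 + 1e-10   # nearly identical initial conditions

for n in range(1, 61):
    x_a = r * x_a * (1.0 - x_a)
    x_b = r * x_b * (1.0 - x_b)
    if n % 20 == 0:
        print(f"step {n:2d}: |x_a - x_b| = {abs(x_a - x_b):.3e}")
# The separation grows roughly exponentially (a positive Lyapunov exponent)
# until it saturates at the size of the attractor.
```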
With perfect knowledge of the initial conditions and the relevant equations describing the chaotic system's behavior, one can theoretically make perfectly accurate predictions of the system, though in practice this is impossible to do with arbitrary accuracy. The emergence of complex systems theory shows a domain between deterministic order and randomness which is complex. This is referred to as the "edge of chaos". When one analyzes complex systems, sensitivity to initial conditions, for example, is not an issue as important as it is within chaos theory, in which it prevails. As stated by Colander, the study of complexity is the opposite of the study of chaos. Complexity is about how a huge number of extremely complicated and dynamic sets of relationships can generate some simple behavioral patterns, whereas chaotic behavior, in the sense of deterministic chaos, is the result of a relatively small number of non-linear interactions. For recent examples in economics and business see Stoop et al. who discussed Android's market position, Orlando who explained the corporate dynamics in terms of mutual synchronization and chaos regularization of bursts in a group of chaotically bursting cells and Orlando et al. who modelled financial data (Financial Stress Index, swap and equity, emerging and developed, corporate and government, short and long maturity) with a low-dimensional deterministic model. Therefore, the main difference between chaotic systems and complex systems is their history. Chaotic systems do not rely on their history as complex ones do. Chaotic behavior pushes a system in equilibrium into chaotic order, which means, in other words, out of what we traditionally define as 'order'. On the other hand, complex systems evolve far from equilibrium at the edge of chaos. They evolve at a critical state built up by a history of irreversible and unexpected events, which physicist Murray Gell-Mann called "an accumulation of frozen accidents". In a sense chaotic systems can be regarded as a subset of complex systems distinguished precisely by this absence of historical dependence. Many real complex systems are, in practice and over long but finite periods, robust. However, they do possess the potential for radical qualitative change of kind whilst retaining systemic integrity. Metamorphosis serves as perhaps more than a metaphor for such transformations. === Complexity and network science === A complex system is usually composed of many components and their interactions. Such a system can be represented by a network where nodes represent the components and links represent their interactions. For example, the Internet can be represented as a network composed of nodes (computers) and links (direct connections between computers). Other examples of complex networks include social networks, financial institution interdependencies, airline networks, and biological networks. == Notable scholars == == See also == == References == == Further reading == Complexity Explained. L.A.N. Amaral and J.M. Ottino, Complex networks – augmenting the framework for the study of complex system, 2004. Chu, D.; Strand, R.; Fjelland, R. (2003). "Theories of complexity". Complexity. 8 (3): 19–30. Bibcode:2003Cmplx...8c..19C. doi:10.1002/cplx.10059. Walter Clemens, Jr., Complexity Science and World Affairs, SUNY Press, 2013. Gell-Mann, Murray (1995). "Let's Call It Plectics". Complexity. 1 (5): 3–5. Bibcode:1996Cmplx...1e...3G. doi:10.1002/cplx.6130010502. A. Gogolin, A. Nersesyan and A. 
Tsvelik, Theory of strongly correlated systems , Cambridge University Press, 1999. Nigel Goldenfeld and Leo P. Kadanoff, Simple Lessons from Complexity Archived 2017-09-28 at the Wayback Machine, 1999 Kelly, K. (1995). Out of Control, Perseus Books Group. Orlando, Giuseppe Orlando; Pisarchick, Alexander; Stoop, Ruedi (2021). Nonlinearities in Economics. Dynamic Modeling and Econometrics in Economics and Finance. Vol. 29. doi:10.1007/978-3-030-70982-2. ISBN 978-3-030-70981-5. S2CID 239756912. Syed M. Mehmud (2011), A Healthcare Exchange Complexity Model Preiser-Kapeller, Johannes, "Calculating Byzantium. Social Network Analysis and Complexity Sciences as tools for the exploration of medieval social dynamics". August 2010 Donald Snooks, Graeme (2008). "A general theory of complex living systems: Exploring the demand side of dynamics". Complexity. 13 (6): 12–20. Bibcode:2008Cmplx..13f..12S. doi:10.1002/cplx.20225. Stefan Thurner, Peter Klimek, Rudolf Hanel: Introduction to the Theory of Complex Systems, Oxford University Press, 2018, ISBN 978-0198821939 SFI @30, Foundations & Frontiers (2014). == External links == "The Open Agent-Based Modeling Consortium". "Complexity Science Focus". Archived from the original on 2017-12-05. Retrieved 2017-09-22. "Santa Fe Institute". "The Center for the Study of Complex Systems, Univ. of Michigan Ann Arbor". Archived from the original on 2017-12-13. Retrieved 2017-09-22. "INDECS". (Interdisciplinary Description of Complex Systems) "Introduction to Complexity – Free online course by Melanie Mitchell". Archived from the original on 2018-08-30. Retrieved 2018-08-29. Jessie Henshaw (October 24, 2013). "Complex Systems". Encyclopedia of Earth. Complex systems in scholarpedia. Complex Systems Society (Australian) Complex systems research network. Complex Systems Modeling based on Luis M. Rocha, 1999. CRM Complex systems research group The Center for Complex Systems Research, Univ. of Illinois at Urbana-Champaign Institute for Cross-Disciplinary Physics and Complex Systems (IFISC)
Wikipedia/Complex_systems
Fermi liquid theory (also known as Landau's Fermi-liquid theory) is a theoretical model of interacting fermions that describes the normal state of the conduction electrons in most metals at sufficiently low temperatures. The theory describes the behavior of many-body systems of particles in which the interactions between particles may be strong. The phenomenological theory of Fermi liquids was introduced by the Soviet physicist Lev Davidovich Landau in 1956, and later developed by Alexei Abrikosov and Isaak Khalatnikov using diagrammatic perturbation theory. The theory explains why some of the properties of an interacting fermion system are very similar to those of the ideal Fermi gas (collection of non-interacting fermions), and why other properties differ. Fermi liquid theory applies most notably to conduction electrons in normal (non-superconducting) metals, and to liquid helium-3. Liquid helium-3 is a Fermi liquid at low temperatures (but not low enough to be in its superfluid phase). An atom of helium-3 has two protons, one neutron and two electrons, giving an odd number of fermions, so the atom itself is a fermion. Fermi liquid theory also describes the low-temperature behavior of electrons in heavy fermion materials, which are metallic rare-earth alloys having partially filled f orbitals. The effective mass of electrons in these materials is much larger than the free-electron mass because of interactions with other electrons, so these systems are known as heavy Fermi liquids. Strontium ruthenate displays some key properties of Fermi liquids, despite being a strongly correlated material that is similar to high temperature superconductors such as the cuprates. The low-momentum interactions of nucleons (protons and neutrons) in atomic nuclei are also described by Fermi liquid theory. == Description == The key ideas behind Landau's theory are the notion of adiabaticity and the Pauli exclusion principle. Consider a non-interacting fermion system (a Fermi gas), and suppose we "turn on" the interaction slowly. Landau argued that in this situation, the ground state of the Fermi gas would adiabatically transform into the ground state of the interacting system. By Pauli's exclusion principle, the ground state Ψ 0 {\displaystyle \Psi _{0}} of a Fermi gas consists of fermions occupying all momentum states corresponding to momentum p < p F {\displaystyle p<p_{\rm {F}}} with all higher momentum states unoccupied. As the interaction is turned on, the spin, charge and momentum of the fermions corresponding to the occupied states remain unchanged, while their dynamical properties, such as their mass, magnetic moment etc. are renormalized to new values. Thus, there is a one-to-one correspondence between the elementary excitations of a Fermi gas system and a Fermi liquid system. In the context of Fermi liquids, these excitations are called "quasiparticles". Landau quasiparticles are long-lived excitations with a lifetime τ {\displaystyle \tau } that satisfies ℏ / τ ≪ ε p {\displaystyle {\hbar }/{\tau }\ll \varepsilon _{\rm {p}}} where ε p {\displaystyle \varepsilon _{\rm {p}}} is the quasiparticle energy (measured from the Fermi energy). At finite temperature, ε p {\displaystyle \varepsilon _{\rm {p}}} is on the order of the thermal energy k B T {\displaystyle k_{\rm {B}}T} , and the condition for Landau quasiparticles can be reformulated as ℏ / τ ≪ k B T {\displaystyle {\hbar }/{\tau }\ll k_{\rm {B}}T} . 
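The quasiparticle criterion can be illustrated with a schematic phase-space estimate in which the electron–electron decay rate scales as the square of the excitation energy; the Fermi energy used below and the order-one prefactor are assumptions made only for this sketch.

```python
# Schematic check of the Landau quasiparticle criterion hbar/tau << eps_p.
# In a Fermi liquid the decay rate from electron-electron scattering scales as
# hbar/tau ~ eps_p**2 / eps_F (phase-space argument, order-one prefactor assumed).
eps_F = 5.0  # Fermi energy in eV (illustrative, metal-like value)

for eps_p in (1.0, 0.1, 0.01, 0.001):      # excitation energy above the Fermi energy, in eV
    decay = eps_p**2 / eps_F                # schematic hbar/tau in eV
    print(f"eps_p = {eps_p:6.3f} eV   hbar/tau ~ {decay:.1e} eV   ratio = {decay/eps_p:.1e}")
# The ratio (hbar/tau)/eps_p shrinks linearly as eps_p -> 0, so excitations ever
# closer to the Fermi surface become ever better-defined quasiparticles.
```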
For this system, the many-body Green's function can be written (near its poles) in the form G ( ω , p ) ≈ Z ω + μ − ε ( p ) {\displaystyle G(\omega ,\mathbf {p} )\approx {\frac {Z}{\omega +\mu -\varepsilon (\mathbf {p} )}}} where μ {\displaystyle \mu } is the chemical potential, ε ( p ) {\displaystyle \varepsilon (\mathbf {p} )} is the energy corresponding to the given momentum state and Z > 0 {\displaystyle Z>0} is called the quasiparticle residue or renormalisation constant which is very characteristic of Fermi liquid theory. The spectral function for the system can be directly observed via angle-resolved photoemission spectroscopy (ARPES), and can be written (in the limit of low-lying excitations) in the form: A ( k , ω ) = Z δ ( ω − v F k ‖ ) {\displaystyle A(\mathbf {k} ,\omega )=Z\delta (\omega -v_{\rm {F}}k_{\|})} where v F {\displaystyle v_{\rm {F}}} is the Fermi velocity. Physically, we can say that a propagating fermion interacts with its surrounding in such a way that the net effect of the interactions is to make the fermion behave as a "dressed" fermion, altering its effective mass and other dynamical properties. These "dressed" fermions are what we think of as "quasiparticles". Another important property of Fermi liquids is related to the scattering cross section for electrons. Suppose we have an electron with energy ε 1 {\displaystyle \varepsilon _{1}} above the Fermi surface, and suppose it scatters with a particle in the Fermi sea with energy ε 2 {\displaystyle \varepsilon _{2}} . By Pauli's exclusion principle, both the particles after scattering have to lie above the Fermi surface, with energies ε 3 , ε 4 > ε F {\displaystyle \varepsilon _{3},\varepsilon _{4}>\varepsilon _{\rm {F}}} . Now, suppose the initial electron has energy very close to the Fermi surface ε ≈ ε F {\displaystyle \varepsilon \approx \varepsilon _{\rm {F}}} Then, we have that ε 2 , ε 3 , ε 4 {\displaystyle \varepsilon _{2},\varepsilon _{3},\varepsilon _{4}} also have to be very close to the Fermi surface. This reduces the phase space volume of the possible states after scattering, and hence, by Fermi's golden rule, the scattering cross section goes to zero. Thus we can say that the lifetime of particles at the Fermi surface goes to infinity. == Similarities to Fermi gas == The Fermi liquid is qualitatively analogous to the non-interacting Fermi gas, in the following sense: The system's dynamics and thermodynamics at low excitation energies and temperatures may be described by substituting the non-interacting fermions with interacting quasiparticles, each of which carries the same spin, charge and momentum as the original particles. Physically these may be thought of as being particles whose motion is disturbed by the surrounding particles and which themselves perturb the particles in their vicinity. Each many-particle excited state of the interacting system may be described by listing all occupied momentum states, just as in the non-interacting system. As a consequence, quantities such as the heat capacity of the Fermi liquid behave qualitatively in the same way as in the Fermi gas (e.g. the heat capacity rises linearly with temperature). == Differences from Fermi gas == The following differences to the non-interacting Fermi gas arise: === Energy === The energy of a many-particle state is not simply a sum of the single-particle energies of all occupied states. 
Instead, the change in energy for a given change δ n k {\displaystyle \delta n_{k}} in occupation of states k {\displaystyle k} contains terms both linear and quadratic in δ n k {\displaystyle \delta n_{k}} (for the Fermi gas, it would only be linear, δ n k ε k {\displaystyle \delta n_{k}\varepsilon _{k}} , where ε k {\displaystyle \varepsilon _{k}} denotes the single-particle energies). The linear contribution corresponds to renormalized single-particle energies, which involve, e.g., a change in the effective mass of particles. The quadratic terms correspond to a sort of "mean-field" interaction between quasiparticles, which is parametrized by so-called Landau Fermi liquid parameters and determines the behaviour of density oscillations (and spin-density oscillations) in the Fermi liquid. Still, these mean-field interactions do not lead to a scattering of quasi-particles with a transfer of particles between different momentum states. The renormalization of the mass of a fluid of interacting fermions can be calculated from first principles using many-body computational techniques. For the two-dimensional homogeneous electron gas, GW calculations and quantum Monte Carlo methods have been used to calculate renormalized quasiparticle effective masses. === Specific heat and compressibility === Specific heat, compressibility, spin-susceptibility and other quantities show the same qualitative behaviour (e.g. dependence on temperature) as in the Fermi gas, but the magnitude is (sometimes strongly) changed. === Interactions === In addition to the mean-field interactions, some weak interactions between quasiparticles remain, which lead to scattering of quasiparticles off each other. Therefore, quasiparticles acquire a finite lifetime. However, at low enough energies above the Fermi surface, this lifetime becomes very long, such that the product of excitation energy (expressed in frequency) and lifetime is much larger than one. In this sense, the quasiparticle energy is still well-defined (in the opposite limit, Heisenberg's uncertainty relation would prevent an accurate definition of the energy). === Structure === The structure of the "bare" particles (as opposed to quasiparticle) many-body Green's function is similar to that in the Fermi gas (where, for a given momentum, the Green's function in frequency space is a delta peak at the respective single-particle energy). The delta peak in the density-of-states is broadened (with a width given by the quasiparticle lifetime). In addition (and in contrast to the quasiparticle Green's function), its weight (integral over frequency) is suppressed by a quasiparticle weight factor 0 < Z < 1 {\displaystyle 0<Z<1} . The remainder of the total weight is in a broad "incoherent background", corresponding to the strong effects of interactions on the fermions at short time scales. === Distribution === The distribution of particles (as opposed to quasiparticles) over momentum states at zero temperature still shows a discontinuous jump at the Fermi surface (as in the Fermi gas), but it does not drop from 1 to 0: the step is only of size Z {\displaystyle Z} . === Electrical resistivity === In a metal the resistivity at low temperatures is dominated by electron–electron scattering in combination with umklapp scattering. 
For a Fermi liquid, the resistivity from this mechanism varies as T 2 {\displaystyle T^{2}} , which is often taken as an experimental check for Fermi liquid behaviour (in addition to the linear temperature-dependence of the specific heat), although it only arises in combination with the lattice. In certain cases, umklapp scattering is not required. For example, the resistivity of compensated semimetals scales as T 2 {\displaystyle T^{2}} because of mutual scattering of electron and hole. This is known as the Baber mechanism. === Optical response === Fermi liquid theory predicts that the scattering rate, which governs the optical response of metals, not only depends quadratically on temperature (thus causing the T 2 {\displaystyle T^{2}} dependence of the DC resistance), but it also depends quadratically on frequency. This is in contrast to the Drude prediction for non-interacting metallic electrons, where the scattering rate is a constant as a function of frequency. One material in which optical Fermi liquid behavior was experimentally observed is the low-temperature metallic phase of Sr2RuO4. == Instabilities == The experimental observation of exotic phases in strongly correlated systems has triggered an enormous effort from the theoretical community to try to understand their microscopic origin. One possible route to detect instabilities of a Fermi liquid is precisely the analysis done by Isaak Pomeranchuk. Due to that, the Pomeranchuk instability has been studied by several authors with different techniques in the last few years and in particular, the instability of the Fermi liquid towards the nematic phase was investigated for several models. == Non-Fermi liquids == Non-Fermi liquids are systems in which the Fermi-liquid behaviour breaks down. The simplest example is a system of interacting fermions in one dimension, called the Luttinger liquid. Although Luttinger liquids are physically similar to Fermi liquids, the restriction to one dimension gives rise to several qualitative differences such as the absence of a quasiparticle peak in the momentum dependent spectral function, and the presence of spin-charge separation and of spin-density waves. One cannot ignore the existence of interactions in one dimension and has to describe the problem with a non-Fermi theory, where Luttinger liquid is one of them. At small finite spin temperatures in one dimension the ground state of the system is described by spin-incoherent Luttinger liquid (SILL). Another example of non-Fermi-liquid behaviour is observed at quantum critical points of certain second-order phase transitions, such as heavy fermion criticality, Mott criticality and high- T c {\displaystyle T_{\rm {c}}} cuprate phase transitions. The ground state of such transitions is characterized by the presence of a sharp Fermi surface, although there may not be well-defined quasiparticles. That is, on approaching the critical point, it is observed that the quasiparticle residue Z → 0 {\displaystyle Z\to 0} . In optimally doped cuprates and iron-based superconductors, the normal state above the critical temperature shows signs of non-Fermi liquid behaviour, and is often called a strange metal. In this region of phase diagram, resistivity increases linearly in temperature and the Hall coefficient is found to depend on temperature. Understanding the behaviour of non-Fermi liquids is an important problem in condensed matter physics. 
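As a sketch of how the quadratic temperature dependence is used as an experimental check, the example below fits synthetic (made-up) low-temperature resistivity data to the form ρ = ρ0 + A T2; a real analysis would use measured data and would also test alternative dependences, such as the linear-in-T resistivity of strange metals.

```python
import numpy as np

# Synthetic low-temperature resistivity data (made-up units and coefficients),
# illustrating the rho = rho_0 + A*T^2 form expected for a Fermi liquid.
T = np.linspace(2.0, 30.0, 50)            # temperature [K]
rho0_true, A_true = 1.0e-8, 5.0e-12       # residual resistivity and T^2 coefficient
rng = np.random.default_rng(0)
rho = rho0_true + A_true * T**2 + rng.normal(0.0, 2.0e-11, T.size)  # add small noise

# Linear least-squares fit of rho against T^2: the slope is A, the intercept rho_0.
A_fit, rho0_fit = np.polyfit(T**2, rho, 1)
print(f"A     fitted {A_fit:.2e}   (true {A_true:.2e})")
print(f"rho_0 fitted {rho0_fit:.2e}   (true {rho0_true:.2e})")
# A markedly better fit to a linear-in-T form than to rho_0 + A*T^2 would instead
# point toward non-Fermi-liquid ("strange metal") behaviour.
```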
Approaches towards explaining these phenomena include the treatment of marginal Fermi liquids; attempts to understand critical points and derive scaling relations; and descriptions using emergent gauge theories with techniques of holographic gauge/gravity duality. == See also == Classical fluid Fermionic condensate Luttinger liquid Luttinger's theorem Strongly correlated quantum spin liquid == References == == Further reading == Baym, Gordon; Pethick, Christopher (1991). Landau Fermi-Liquid Theory: Concepts and Applications (1 ed.). Wiley. doi:10.1002/9783527617159. ISBN 978-0-471-82418-3. Coleman, Piers (2015). "Landau Fermi-liquid theory". Introduction to Many-Body Physics. Cambridge, U.K.: Cambridge University Press. ISBN 9780521864886. Pines, David; Nozières, Philippe (1989). The Theory of Quantum Liquids: Normal Fermi Liquids. CRC Press. doi:10.4324/9780429492662. ISBN 978-0-429-49266-2. Vignale, Giovanni (2022). "Fermi Liquids" (PDF). In Pavarini, Eva; Koch, Erik; Lichtenstein, Alexander; Vollhardt, Dieter (eds.). Dynamical Mean-Field Theory of Correlated Electrons. Verlag des Forschungszentrum Jülich. ISBN 978-3-95806-619-9.
Wikipedia/Fermi_liquid_theory
In physics, electromagnetism is an interaction that occurs between particles with electric charge via electromagnetic fields. The electromagnetic force is one of the four fundamental forces of nature. It is the dominant force in the interactions of atoms and molecules. Electromagnetism can be thought of as a combination of electrostatics and magnetism, which are distinct but closely intertwined phenomena. Electromagnetic forces occur between any two charged particles. Electric forces cause an attraction between particles with opposite charges and repulsion between particles with the same charge, while magnetism is an interaction that occurs between charged particles in relative motion. These two forces are described in terms of electromagnetic fields. Macroscopic charged objects are described in terms of Coulomb's law for electricity and Ampère's force law for magnetism; the Lorentz force describes microscopic charged particles. The electromagnetic force is responsible for many of the chemical and physical phenomena observed in daily life. The electrostatic attraction between atomic nuclei and their electrons holds atoms together. Electric forces also allow different atoms to combine into molecules, including the macromolecules such as proteins that form the basis of life. Meanwhile, magnetic interactions between the spin and angular momentum magnetic moments of electrons also play a role in chemical reactivity; such relationships are studied in spin chemistry. Electromagnetism also plays several crucial roles in modern technology: electrical energy production, transformation and distribution; light, heat, and sound production and detection; fiber optic and wireless communication; sensors; computation; electrolysis; electroplating; and mechanical motors and actuators. Electromagnetism has been studied since ancient times. Many ancient civilizations, including the Greeks and the Mayans, created wide-ranging theories to explain lightning, static electricity, and the attraction between magnetized pieces of iron ore. However, it was not until the late 18th century that scientists began to develop a mathematical basis for understanding the nature of electromagnetic interactions. In the 18th and 19th centuries, prominent scientists and mathematicians such as Coulomb, Gauss and Faraday developed namesake laws which helped to explain the formation and interaction of electromagnetic fields. This process culminated in the 1860s with the discovery of Maxwell's equations, a set of four partial differential equations which provide a complete description of classical electromagnetic fields. Maxwell's equations provided a sound mathematical basis for the relationships between electricity and magnetism that scientists had been exploring for centuries, and predicted the existence of self-sustaining electromagnetic waves. Maxwell postulated that such waves make up visible light, which was later shown to be true. Gamma-rays, x-rays, ultraviolet, visible, infrared radiation, microwaves and radio waves were all determined to be electromagnetic radiation differing only in their range of frequencies. In the modern era, scientists continue to refine the theory of electromagnetism to account for the effects of modern physics, including quantum mechanics and relativity. 
The theoretical implications of electromagnetism, particularly the requirement that observations remain consistent when viewed from various moving frames of reference (relativistic electromagnetism) and the establishment of the speed of light based on properties of the medium of propagation (permeability and permittivity), helped inspire Einstein's theory of special relativity in 1905. Quantum electrodynamics (QED) modifies Maxwell's equations to be consistent with the quantized nature of matter. In QED, changes in the electromagnetic field are expressed in terms of discrete excitations, particles known as photons, the quanta of light. == History == === Ancient world === Investigation into electromagnetic phenomena began about 5,000 years ago. There is evidence that the ancient Chinese, Mayan, and potentially even Egyptian civilizations knew that the naturally magnetic mineral magnetite had attractive properties, and many incorporated it into their art and architecture. Ancient people were also aware of lightning and static electricity, although they had no idea of the mechanisms behind these phenomena. The Greek philosopher Thales of Miletus discovered around 600 B.C.E. that amber could acquire an electric charge when it was rubbed with cloth, which allowed it to pick up light objects such as pieces of straw. Thales also experimented with the ability of magnetic rocks to attract one other, and hypothesized that this phenomenon might be connected to the attractive power of amber, foreshadowing the deep connections between electricity and magnetism that would be discovered over 2,000 years later. Despite all this investigation, ancient civilizations had no understanding of the mathematical basis of electromagnetism, and often analyzed its impacts through the lens of religion rather than science (lightning, for instance, was considered to be a creation of the gods in many cultures). === 19th century === Electricity and magnetism were originally considered to be two separate forces. This view changed with the publication of James Clerk Maxwell's 1873 A Treatise on Electricity and Magnetism in which the interactions of positive and negative charges were shown to be mediated by one force. There are four main effects resulting from these interactions, all of which have been clearly demonstrated by experiments: Electric charges attract or repel one another with a force inversely proportional to the square of the distance between them: opposite charges attract, like charges repel. Magnetic poles (or states of polarization at individual points) attract or repel one another in a manner similar to positive and negative charges and always exist as pairs: every north pole is yoked to a south pole. An electric current inside a wire creates a corresponding circumferential magnetic field outside the wire. Its direction (clockwise or counter-clockwise) depends on the direction of the current in the wire. A current is induced in a loop of wire when it is moved toward or away from a magnetic field, or a magnet is moved towards or away from it; the direction of current depends on that of the movement. In April 1820, Hans Christian Ørsted observed that an electrical current in a wire caused a nearby compass needle to move. At the time of discovery, Ørsted did not suggest any satisfactory explanation of the phenomenon, nor did he try to represent the phenomenon in a mathematical framework. However, three months later he began more intensive investigations. 
Soon thereafter he published his findings, proving that an electric current produces a magnetic field as it flows through a wire. The CGS unit of magnetic field strength, the oersted, is named in honor of his contributions to the field of electromagnetism. His findings resulted in intensive research throughout the scientific community in electrodynamics. They influenced French physicist André-Marie Ampère's development of a single mathematical form to represent the magnetic forces between current-carrying conductors. Ørsted's discovery also represented a major step toward a unified concept of energy. This unification, which was observed by Michael Faraday, extended by James Clerk Maxwell, and partially reformulated by Oliver Heaviside and Heinrich Hertz, is one of the key accomplishments of 19th-century mathematical physics. It has had far-reaching consequences, one of which was the understanding of the nature of light. Unlike what was proposed by the electromagnetic theory of that time, light and other electromagnetic waves are at present seen as taking the form of quantized, self-propagating oscillatory electromagnetic field disturbances called photons. Different frequencies of oscillation give rise to the different forms of electromagnetic radiation, from radio waves at the lowest frequencies, to visible light at intermediate frequencies, to gamma rays at the highest frequencies. Ørsted was not the only person to examine the relationship between electricity and magnetism. In 1802, Gian Domenico Romagnosi, an Italian legal scholar, deflected a magnetic needle using a Voltaic pile. The exact setup of the experiment is not completely clear, nor is it known whether current flowed across the needle. An account of the discovery was published in 1802 in an Italian newspaper, but it was largely overlooked by the contemporary scientific community, because Romagnosi seemingly did not belong to this community. An earlier (1735), and often neglected, connection between electricity and magnetism was reported by a Dr. Cookson. The account stated: A tradesman at Wakefield in Yorkshire, having put up a great number of knives and forks in a large box ... and having placed the box in the corner of a large room, there happened a sudden storm of thunder, lightning, &c. ... The owner emptying the box on a counter where some nails lay, the persons who took up the knives, that lay on the nails, observed that the knives took up the nails. On this the whole number was tried, and found to do the same, and that, to such a degree as to take up large nails, packing needles, and other iron things of considerable weight ... E. T. Whittaker suggested in 1910 that this particular event was responsible for lightning being "credited with the power of magnetizing steel; and it was doubtless this which led Franklin in 1751 to attempt to magnetize a sewing-needle by means of the discharge of Leyden jars." == A fundamental force == The electromagnetic force is the second strongest of the four known fundamental forces and has unlimited range. All other forces, known as non-fundamental forces (e.g., friction, contact forces), are derived from the four fundamental forces. At high energy, the weak force and electromagnetic force are unified as a single interaction called the electroweak interaction. Most of the forces involved in interactions between atoms are explained by electromagnetic forces between electrically charged atomic nuclei and electrons. The electromagnetic force is also involved in all forms of chemical phenomena. 
Electromagnetism explains how materials carry momentum despite being composed of individual particles and empty space. The forces we experience when "pushing" or "pulling" ordinary material objects result from intermolecular forces between individual molecules in our bodies and in the objects. The effective forces generated by the momentum of electrons' movement are a necessary part of understanding atomic and intermolecular interactions. As electrons move between interacting atoms, they carry momentum with them. As a collection of electrons becomes more confined, their minimum momentum necessarily increases due to the Pauli exclusion principle. The behavior of matter at the molecular scale, including its density, is determined by the balance between the electromagnetic force and the force generated by the exchange of momentum carried by the electrons themselves. == Classical electrodynamics == In 1600, William Gilbert proposed, in his De Magnete, that electricity and magnetism, while both capable of causing attraction and repulsion of objects, were distinct effects. Mariners had noticed that lightning strikes had the ability to disturb a compass needle. The link between lightning and electricity was not confirmed until Benjamin Franklin's proposed experiments were conducted on 10 May 1752 by Thomas-François Dalibard of France, who used a 40-foot-tall (12 m) iron rod instead of a kite and successfully extracted electrical sparks from a cloud. One of the first to discover and publish a link between human-made electric current and magnetism was Gian Romagnosi, who in 1802 noticed that connecting a wire across a voltaic pile deflected a nearby compass needle. However, the effect did not become widely known until 1820, when Ørsted performed a similar experiment. Ørsted's work influenced Ampère to conduct further experiments, which eventually gave rise to a new area of physics: electrodynamics. By determining a force law for the interaction between elements of electric current, Ampère placed the subject on a solid mathematical foundation. A theory of electromagnetism, known as classical electromagnetism, was developed by several physicists during the period between 1820 and 1873, when James Clerk Maxwell's treatise was published, which unified previous developments into a single theory, proposing that light was an electromagnetic wave propagating in the luminiferous ether. In classical electromagnetism, the behavior of the electromagnetic field is described by a set of equations known as Maxwell's equations, and the electromagnetic force is given by the Lorentz force law. One of the peculiarities of classical electromagnetism is that it is difficult to reconcile with classical mechanics, but it is compatible with special relativity. According to Maxwell's equations, the speed of light in vacuum is a universal constant that is dependent only on the electrical permittivity and magnetic permeability of free space. This violates Galilean invariance, a long-standing cornerstone of classical mechanics. One way to reconcile the two theories (electromagnetism and classical mechanics) is to assume the existence of a luminiferous aether through which the light propagates. However, subsequent experimental efforts failed to detect the presence of the aether. 
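The statement above, that Maxwell's equations fix the speed of light in vacuum in terms of the permittivity and permeability of free space, is easy to check numerically. The short sketch below is an illustration only, using the standard SI values of these constants.

```python
from math import pi, sqrt

epsilon_0 = 8.8541878128e-12   # vacuum permittivity, F/m (CODATA value)
mu_0 = 4 * pi * 1e-7           # vacuum permeability, H/m (exact in the pre-2019 SI)

c = 1.0 / sqrt(mu_0 * epsilon_0)   # Maxwell's prediction for the electromagnetic wave speed
print(f"c = {c:.6e} m/s")          # ~2.997925e8 m/s, matching the measured speed of light
```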
After important contributions by Hendrik Lorentz and Henri Poincaré, in 1905, Albert Einstein solved the problem with the introduction of special relativity, which replaced classical kinematics with a new theory of kinematics compatible with classical electromagnetism. (For more information, see History of special relativity.) In addition, relativity theory implies that in moving frames of reference, a magnetic field transforms to a field with a nonzero electric component and conversely, a moving electric field transforms to a nonzero magnetic component, thus firmly showing that the phenomena are two sides of the same coin. Hence the term "electromagnetism". (For more information, see Classical electromagnetism and special relativity and Covariant formulation of classical electromagnetism.) Today, few problems in electromagnetism remain unsolved. These include: the lack of magnetic monopoles, the Abraham–Minkowski controversy, the location in space of the electromagnetic field energy, and the mechanism by which some organisms can sense electric and magnetic fields. == Extension to nonlinear phenomena == The Maxwell equations are linear, in that a change in the sources (the charges and currents) results in a proportional change of the fields. Nonlinear dynamics can occur when electromagnetic fields couple to matter that follows nonlinear dynamical laws. This is studied, for example, in the subject of magnetohydrodynamics, which combines Maxwell theory with the Navier–Stokes equations. Another branch of electromagnetism dealing with nonlinearity is nonlinear optics. == Quantities and units == Common units related to electromagnetism include the ampere (electric current), coulomb (electric charge), volt (electric potential), ohm (resistance), farad (capacitance), henry (inductance), tesla (magnetic flux density), and weber (magnetic flux). In the electromagnetic CGS system, electric current is a fundamental quantity defined via Ampère's law and takes the permeability as a dimensionless quantity (relative permeability) whose value in vacuum is unity. As a consequence, the square of the speed of light appears explicitly in some of the equations interrelating quantities in this system. Formulas for physical laws of electromagnetism (such as Maxwell's equations) need to be adjusted depending on what system of units one uses. This is because there is no one-to-one correspondence between electromagnetic units in SI and those in CGS, as is the case for mechanical units. Furthermore, within CGS, there are several plausible choices of electromagnetic units, leading to different unit "sub-systems", including Gaussian, "ESU", "EMU", and Heaviside–Lorentz. Among these choices, Gaussian units are the most common today, and in fact the phrase "CGS units" is often used to refer specifically to CGS-Gaussian units. == Applications == The study of electromagnetism informs the design of electric circuits, magnetic circuits, and semiconductor devices. == See also == == References == == Further reading == === Web sources === === Textbooks === === General coverage === == External links == Magnetic Field Strength Converter Electromagnetic Force – from Eric Weisstein's World of Physics
Wikipedia/Electromagnetic_force
Physics outreach encompasses facets of science outreach and physics education, and a variety of activities by schools, research institutes, universities, clubs and institutions such as science museums aimed at broadening the audience for physics and increasing awareness and understanding of it. While the general public may sometimes be the focus of such activities, physics outreach often centers on developing and providing resources and making presentations to students, educators in other disciplines, and in some cases researchers within different areas of physics. == History == Ongoing efforts to expand the understanding of physics to a wider audience have been undertaken by individuals and institutions since the early 19th century. Historic works, such as the Dialogue Concerning the Two Chief World Systems, and Two New Sciences by Galileo Galilei, sought, to great effect, to present revolutionary knowledge in astronomy, frames of reference, and kinematics in a manner that a general audience could understand. In the mid-1800s, the English physicist and chemist Michael Faraday gave a series of nineteen lectures aimed at young adults, with the hope of conveying scientific phenomena to them. His intention was to raise awareness, inspire his audience, and generate revenue for the Royal Institution. This series became known as the Christmas lectures, and still continues today. By the early 20th century, the public fame of physicists such as Albert Einstein and Marie Curie, together with inventions such as radio, led to a growing interest in physics. In 1921, in the United States, the establishment of the Sigma Pi Sigma physics honor society at universities was instrumental in expanding the number of physics presentations, and led to the creation of physics clubs open to all students. Museums were an important form of outreach, but most early science museums were generally focused on natural history. Some specialized museums, such as the Cavendish Museum at University of Cambridge, housed many of the historically important pieces of apparatus that contributed to the major discoveries by Maxwell, Thomson, Rutherford, etc. However, such venues provided little opportunity for hands-on learning or demonstrations. In August 1969, Frank Oppenheimer dedicated his new Exploratorium in San Francisco primarily to interactive science exhibits that demonstrated principles in physics. The Exploratorium published the details of their own exhibits in "Cookbooks" that served as an inspiration to many other museums around the world, and since then has diversified into many outreach programs. Oppenheimer had researched European science museums while on a Guggenheim Fellowship in 1965. He noted that three museums served as important influences on the Exploratorium: the Palais de la Découverte, which displayed models to teach scientific concepts and employed students as demonstrators, a practice that directly inspired the Exploratorium's much-lauded High School Explainer Program; the South Kensington Museum of Science and Art, which Oppenheimer and his wife visited frequently; and the Deutsches Museum in Munich, the world's largest science museum, which had a number of interactive displays that impressed the Oppenheimers. In the ensuing years, physics outreach, and science outreach more generally, continued to expand and took on new popular forms, including highly successful television shows such as Cosmos: A Personal Voyage, first broadcast in 1980. 
As a form of outreach within the physics education community for teachers and students, in 1997 the US National Science Foundation (NSF) and Department of Energy (USDOE) established QuarkNet, a professional teacher development program. In 2012, the University of Notre Dame received a $6.1M, five-year grant to support a nationwide expansion of the QuarkNet program. Also in 1997, the European Particle Physics Outreach Group, led by Christopher Llewellyn Smith, FRS, then Director General of CERN, was formed to create a community of scientists, science educators, and communication specialists in science education and public outreach for particle physics. This group became the International Particle Physics Outreach Group (IPPOG) in 2011 after the start-up of the LHC. == Innovation == Many contemporary initiatives in physics outreach have begun to shift focus, transcending traditional field boundaries, seeking to engage students and the public by integrating elements of aesthetic design and popular culture. The goal has been not only to push physics out of a strictly science education framework but also to draw in professionals and students from other fields to bring their perspectives on physical phenomena. Such work includes artists creating sculptures using ferrofluids, and art photography using high-speed and ultra-high-speed photography. Other efforts, such as the University of Cambridge's Physics at Work program and Senior Physics Challenge, have created annual events to demonstrate to secondary students the uses of physics in everyday life. Seeing the importance of these initiatives, Cambridge has established a full-time physics outreach organization, an Educational Outreach Office, and aspirations for a Center of Physics and expanded industrial partnerships that "would include a well equipped core team of outreach officers dedicated to demonstrating the real life applications of physics, showing that physics is an accessible and relevant subject". The French research group, La Physique Autrement (Physics Reimagined), of the Laboratoire de Physique des Solides, conducts research on new ways to present modern solid-state physics and engage the general public. In 2013, Physics Today covered this group in an article entitled "Quantum Physics For Everyone", which discussed how, with the help of designers and unconventional demonstrations, the project sought out and succeeded in engaging people who had never thought of themselves as interested in science. The Science & Entertainment Exchange was developed by the United States National Academy of Sciences (NAS) to increase public awareness, knowledge, and understanding of science and advanced science technology through its representation in television, film, and other media. It was officially launched in 2008 as a partnership between the NAS and Hollywood. The Exchange is based in Los Angeles, California. == Museums and public venues primarily focused on physical phenomena == === Canada === Montreal Science Centre (Montreal, Quebec) displays many hands-on activities involving various physics phenomena. === Finland === Heureka (Helsinki) is an NPO science center run by the Finnish Science Centre Foundation with a broad spectrum of physics-related exhibits. === France === Cité des Sciences et de l'Industrie (Paris) is the largest French science museum, and contains permanent exhibits and hands-on experiments. Palais de la Découverte (Paris) contains permanent exhibits and interactive experiments with commentaries by lecturers. 
It includes a Zeiss planetarium with a 15-metre dome. It was created in 1937 by the French Nobel Prize-winning physicist Jean Baptiste Perrin. Musée des Arts et Métiers (Paris) focuses on the preservation of scientific instruments and inventions. Other science museums that are part of the Cultural Center of Science, Technology and Industry (CCSTI) exist all across France: Espace des Sciences (Rennes), La Casemate (Grenoble), and the Cité de l'espace (Toulouse). === Germany === Deutsches Museum (Munich) is the world's largest science museum. One of the most popular events is the high-voltage demonstration of a Faraday cage as part of its series on electric power. === Islamic Republic of Iran === Iran Science and Technology Museum (Tehran) is the largest science museum in Iran. By holding varied scientific and educational programs, the museum aims to foster the creation and propagation of scientific thought in society. One of these programs is the "Physics Show". === Netherlands === NEMO (Amsterdam) is the largest science center in the Netherlands, with hands-on science exhibitions. === United States === Exploratorium (San Francisco) is one of the foremost interactive science and art museums in the United States; dedicated to exploring how the world works, it offers interactive exhibits and experiences that encourage curious exploration. The Exploratorium was opened in 1969, and now attracts over a million visitors annually. The American Museum of Natural History in New York City is both a museum and a research facility with a department in astrophysics. As a natural history museum, it focuses on educating the public about human cultures, the natural world, and the universe, and has many interactive programs and lectures all year round. The Franklin Institute in Philadelphia is one of the oldest centers for science education and research in the United States. == Scientific institutions and societies with physics outreach programs == === Canada === Perimeter Institute for Theoretical Physics, founded in 1999 in Waterloo, Ontario, Canada, is a center for scientific research, training and educational outreach in theoretical physics. Located in Vancouver, British Columbia, TRIUMF is Canada's national laboratory for particle and nuclear physics and accelerator-based science. In addition to its science mission, the laboratory is committed to physics outreach, offering public tours of its facilities, public talks, an artist in residence program, student fellowships, and other opportunities. The Canadian Association of Physicists (CAP), or in French Association canadienne des physiciens et physiciennes (ACP), is a Canadian professional society that focuses on creating awareness of physics issues amongst Canadians and Canadian legislators, sponsors physics-related events and outreach, and publishes Physics in Canada. === France === The French Physics Society has a specific section devoted to outreach and the popularization of science. The European Physical Society (EPS) is based in France, but works to promote physics and physicists in Europe. === Germany === Deutsche Physikalische Gesellschaft (DPG, German Physical Society) is the world's largest organization of physicists. The DPG actively participates in communication between physics and the general public with several popular scientific publications and events such as the "Highlights of Physics", an annual physics festival organized jointly by the DPG and the Federal Ministry of Education and Research. 
This festival is the largest of its kind in Germany and attracts about 30,000 visitors every year. === United Kingdom === The Institute of Physics is an international charitable institution that aims to advance physics education, research and application. === United States === American Association for the Advancement of Science American Association of Physics Teachers American Institute of Physics (AIP) has an outreach program focused on advocating science policy to the US Congress and the general public. American Physical Society (APS) has a program dedicated to "Communicating the excitement and importance of physics to everyone." Leonardo, the International Society for the Arts, Sciences and Technology (Leonardo/ISAST), is a nonprofit organization that serves the global network of distinguished scholars, artists, scientists, researchers and thinkers. The institution focuses on interdisciplinary work, creative output and innovation. Its journal Leonardo is published by MIT Press. == Media and Internet == === Media === The Big Bang Theory is an American sitcom created in 2007 that revolves around the lives of scientists at the California Institute of Technology. This show has been widely recognized for popularizing science and noted by the New York Times as "helping physics and fiction collide". In 2014, the program was the most popular sitcom and most popular non-sports program on American TV with an average of 20 million viewers. However, the show has been criticized for sometimes portraying the scientific community inaccurately. C'est pas sorcier is a French educational television program that first aired on November 5, 1994. Twenty episodes dealt with astronomy and space topics and thirteen with physics. Particle Fever is a 2013 documentary film that provides an intimate and accessible view of the first experiments at the Large Hadron Collider from the perspectives of the experimental physicists at CERN who run the experiments, as well as the theoretical physicists who attempt to provide a conceptual framework for the LHC's results. Reviewers praised the film for making theoretical arguments seem comprehensible, for making scientific experiments seem thrilling, and for making particle physicists seem human. Through the Wormhole is an American science documentary television series narrated and hosted by American actor Morgan Freeman and has featured physicists such as Michio Kaku and Brian Cox. === Internet === MinutePhysics is a series of educational videos created by Henry Reich and disseminated through its YouTube channel. It displays a series of pedagogical short videos about various physics phenomena and theories. The publication Physics World, run by the Institute of Physics, has started explaining scientific concepts through its YouTube channel. Palais de la Découverte in Paris hosts online videos that display various interviews about science, including physics. Unisciel, a French online university, hosts educational videos through its YouTube channel. Veritasium is a series of educational videos created by Derek Muller and disseminated through its YouTube channel. It displays a series of pedagogical short videos about science, including physics. Saint Mary's Physics Demonstrations is an online repository for physics classroom demonstrations. It shows teachers the experiments they can do in class while also hosting videos of those experiments. 
Periodic Videos is a portal of educational videos explaining the characteristics of each element and supporting topics such as nuclear reactions. The project is sponsored by the University of Nottingham and hosted by Prof. Sir Martyn Poliakoff. == Prominent individuals == === Austria === Fritjof Capra is an Austrian-born American physicist who attended the University of Vienna, where he earned his Ph.D. in theoretical physics in 1966. He is a founding director of the Center for Ecoliteracy in Berkeley, California, and is on the faculty of Schumacher College. Capra is the author of several books, including The Tao of Physics (1975), and has also done research in Paris and London. === France === Camille Flammarion was a French astronomer and author of many popular science books. Étienne Klein is a French physicist and philosopher of science involved in outreach efforts about particle and quantum physics. Roland Lehoucq is a French astrophysicist known for his outreach efforts, especially in relation to fiction and science fiction. Hubert Reeves is a French Canadian astrophysicist and popularizer of science. === United Kingdom === Brian Cox is a British physicist and musician best known to the public as the presenter of a number of science programs for the BBC. Wendy J. Sadler promotes science and engineering as part of popular culture through Science Made Simple, an educational spin-off company of Cardiff University that reaches students through live presentations. She also trains scientists and engineers to improve their communication skills so that they can bring their research to a broader audience. Sadler was the IoP Young Professional Physicist of the Year in 2005. Robert Matthews is a Fellow of the Royal Statistical Society, a Chartered Physicist, a Member of the Institute of Physics, and a Fellow of the Royal Astronomical Society. Matthews is a distinguished science journalist. He is currently anchorman for the science magazine BBC Focus, and a freelance columnist for the Financial Times. In the past, he has been science correspondent for the Sunday Telegraph. === United States === Richard Feynman was a Nobel Prize-winning theoretical physicist also known as a science popularizer through his books and lectures ranging from physics topics (quantum physics, nanophysics...) to autobiographical essays. George Gamow was a theoretical physicist and cosmologist who also wrote popular books on science, some of which are still in print more than a half-century after their original publication. Brian Greene is a theoretical physicist involved in various outreach activities (books, TV shows). He co-founded the World Science Festival in 2008. Clifford Victor Johnson is a theoretical physicist involved in various outreach activities (blog, TV shows...). Michio Kaku is a theoretical physicist who is a futurist, communicator, and popularizer of physics. He is best known for his three New York Times Best Sellers on physics: Physics of the Impossible (2008), Physics of the Future (2011), and The Future of the Mind (2014). Lawrence M. Krauss is an American theoretical physicist and cosmologist who is Foundation Professor of the School of Earth and Space Exploration at Arizona State University. He is known as an advocate of the public understanding of science, of public policy based on sound empirical data, of scientific skepticism, and of science education, and he works to reduce the impact of superstition and religious dogma in pop culture. 
Don Lincoln is a physicist at Fermi National Accelerator Laboratory. While his research focuses on the Large Hadron Collider, he is known for his efforts to spread public awareness of physics and cosmology. He is the face of the Fermilab YouTube channel, where he has made over 150 videos. He is also a frequent contributor to CNN, Forbes, and many other online journals. He is the author of several books, including "Understanding the Universe", published by World Scientific, and "The Large Hadron Collider: The Extraordinary Story of the Higgs Boson and Other Things That Will Blow Your Mind," published by Johns Hopkins University Press. Jennifer Ouellette is the former director of the Science & Entertainment Exchange, an initiative of the National Academy of Sciences (NAS) designed to connect entertainment industry professionals with top scientists and engineers to help the creators of television shows, films, video games, and other productions incorporate science into their work. She is currently a freelance writer contributing to a physics outreach dialogue with articles in a variety of publications such as Physics World, Discover magazine, New Scientist, Physics Today, and The Wall Street Journal. Carl Sagan was an astrophysicist and science popularizer, one of his important contributions being the 1980 television series Cosmos: A Personal Voyage. Neil deGrasse Tyson is an astrophysicist and science communicator who has participated in TV and radio shows and written various outreach books. Jearl Walker is a physics professor at Cleveland State University. He wrote the Amateur Scientist column in Scientific American from 1978 to 1988 and authored the popular science book The Flying Circus of Physics. == Funding sources == The American Physical Society awards grants of up to $10,000 to help APS members develop new physics outreach activities. Institute for Complex Adaptive Matter (ICAM) provides grants and fellowships for physics outreach. Wellcome Trust: while mostly focused on the biological sciences, the Wellcome Trust also touches on physics and encourages physics outreach; it aims to improve biology, chemistry, and physics A levels in the UK. Institute of Physics (IoP): The IoP aims to provide positive and compelling experiences of physics for public audiences through engaging and entertaining activities and events. The public engagement grant scheme is designed to give financial support of up to £1500 to individuals and organisations running physics-based events and activities in the UK and Ireland. == Awards == Kalinga Prize for the Popularization of Science is an award given by UNESCO for exceptional skill in presenting scientific ideas to lay people. Klopsteg Memorial Award is presented by the American Association of Physics Teachers and given in memory of the physicist Paul E. Klopsteg. Kelvin Prize is awarded by the Institute of Physics to acknowledge outstanding contributions to the public understanding of physics. The Michael Faraday Prize for communicating science to a UK audience is awarded by the Royal Society. The Prix Jean Perrin for the popularization of physics is awarded by the French Physics Society. == References ==
Wikipedia/Physics_outreach
In physics, phenomenology is the application of theoretical physics to experimental data by making quantitative predictions based upon known theories. It is related to the philosophical notion of the same name in that these predictions describe anticipated behaviors for the phenomena in reality. Phenomenology stands in contrast with experimentation in the scientific method, in which the goal of the experiment is to test a scientific hypothesis instead of making predictions. Phenomenology is commonly applied to the field of particle physics, where it forms a bridge between the mathematical models of theoretical physics (such as quantum field theories and theories of the structure of space-time) and the results of the high-energy particle experiments. It is sometimes used in other fields such as in condensed matter physics and plasma physics, when there are no existing theories for the observed experimental data. == Applications in particle physics == === Standard Model consequences === Within the well-tested and generally accepted Standard Model, phenomenology is the calculation of detailed predictions for experiments, usually at high precision (e.g., including radiative corrections). Examples include: Next-to-leading order calculations of particle production rates and distributions. Monte Carlo simulation studies of physics processes at colliders. Extraction of parton distribution functions from data. ==== CKM matrix calculations ==== The CKM matrix is useful in these predictions: Application of heavy quark effective field theory to extract CKM matrix elements. Using lattice QCD to extract quark masses and CKM matrix elements from experiment. === Theoretical models === In Physics beyond the Standard Model, phenomenology addresses the experimental consequences of new models: how their new particles could be searched for, how the model parameters could be measured, and how the model could be distinguished from other, competing models. ==== Phenomenological analysis ==== In phenomenological analyses, one studies the experimental consequences of adding the most general set of beyond-the-Standard-Model effects in a given sector of the Standard Model, usually parameterized in terms of anomalous couplings and higher-dimensional operators. In this case, the term "phenomenological" is being used more in its philosophy of science sense. == See also == Effective theory Phenomenological model Phenomenological quantum gravity == References == == External links == Papers on phenomenology are available on the hep-ph archive of the ArXiv.org e-print archive List of topics on phenomenology from IPPP, the Institute for Particle Physics Phenomenology at University of Durham, UK Collider Phenomenology: Basic knowledge and techniques, lectures by Tao Han Pheno '08 Symposium on particle physics phenomenology, including slides from the talks linked from the symposium program.
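Because Monte Carlo simulation is listed above among the standard tools of collider phenomenology, a deliberately simplified toy example may help convey the idea. The decay model, the acceptance cut, and the function name below are hypothetical choices made for illustration; they do not describe any real experiment or event generator.

```python
import random

def toy_acceptance(n_events=100_000, cos_theta_max=0.9, seed=1):
    """Toy Monte Carlo: fraction of isotropic two-body decays whose
    (back-to-back) products both satisfy |cos(theta)| < cos_theta_max."""
    random.seed(seed)
    accepted = 0
    for _ in range(n_events):
        cos_theta = random.uniform(-1.0, 1.0)    # isotropic decay axis in cos(theta)
        if abs(cos_theta) < cos_theta_max:       # the partner emerges at -cos(theta)
            accepted += 1
    return accepted / n_events

print(f"estimated acceptance = {toy_acceptance():.3f}")   # ~0.900 for this toy cut
```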
Wikipedia/Phenomenology_(physics)
In nuclear physics and particle physics, the weak interaction, weak force or the weak nuclear force, is one of the four known fundamental interactions, with the others being electromagnetism, the strong interaction, and gravitation. It is the mechanism of interaction between subatomic particles that is responsible for the radioactive decay of atoms: The weak interaction participates in nuclear fission and nuclear fusion. The theory describing its behaviour and effects is sometimes called quantum flavordynamics (QFD); however, the term QFD is rarely used, because the weak force is better understood by electroweak theory (EWT). The effective range of the weak force is limited to subatomic distances and is less than the diameter of a proton. == Background == The Standard Model of particle physics provides a uniform framework for understanding electromagnetic, weak, and strong interactions. An interaction occurs when two particles (typically, but not necessarily, half-integer spin fermions) exchange integer-spin, force-carrying bosons. The fermions involved in such exchanges can be either elementary (e.g., electrons or quarks) or composite (e.g., protons or neutrons), although at the deepest levels, all weak interactions ultimately are between elementary particles. In the weak interaction, fermions can exchange three types of force carriers, namely W+, W−, and Z bosons. The masses of these bosons are far greater than the mass of a proton or neutron, which is consistent with the short range of the weak force. In fact, the force is termed weak because its field strength over any set distance is typically several orders of magnitude less than that of the electromagnetic force, which itself is further orders of magnitude less than the strong nuclear force. The weak interaction is the only fundamental interaction that breaks parity symmetry, and similarly, but far more rarely, the only interaction to break charge–parity symmetry. Quarks, which make up composite particles like neutrons and protons, come in six "flavours" – up, down, charm, strange, top and bottom – which give those composite particles their properties. The weak interaction is unique in that it allows quarks to swap their flavour for another. The swapping of those properties is mediated by the force carrier bosons. For example, during beta-minus decay, a down quark within a neutron is changed into an up quark, thus converting the neutron to a proton and resulting in the emission of an electron and an electron antineutrino. The weak interaction is important in the fusion of hydrogen into helium in a star. This is because it can convert a proton (hydrogen) into a neutron to form deuterium, which is important for the continuation of nuclear fusion to form helium. The accumulation of neutrons facilitates the buildup of heavy nuclei in a star. Most fermions decay by a weak interaction over time. Such decay makes radiocarbon dating possible, as carbon-14 decays through the weak interaction to nitrogen-14. It can also create radioluminescence, commonly used in tritium luminescence, and in the related field of betavoltaics (but not similar to radium luminescence). The electroweak force is believed to have separated into the electromagnetic and weak forces during the quark epoch of the early universe. == History == In 1933, Enrico Fermi proposed the first theory of the weak interaction, known as Fermi's interaction. He suggested that beta decay could be explained by a four-fermion interaction, involving a contact force with no range. 
In the mid-1950s, Chen-Ning Yang and Tsung-Dao Lee first suggested that the weak interaction might violate parity, the symmetry between left- and right-handed particle spins. In 1957, the Wu experiment, carried out by Chien Shiung Wu and collaborators, confirmed the symmetry violation. In the 1960s, Sheldon Glashow, Abdus Salam and Steven Weinberg unified the electromagnetic force and the weak interaction by showing them to be two aspects of a single force, now termed the electroweak force. The existence of the W and Z bosons was not directly confirmed until 1983.(p8) == Properties == The electrically charged weak interaction is unique in a number of respects: It is the only interaction that can change the flavour of quarks and leptons (i.e., change one type of quark into another). It is the only interaction that violates P, or parity symmetry. It is also the only one that violates charge–parity (CP) symmetry. Both the electrically charged and the electrically neutral interactions are mediated (propagated) by force carrier particles that have significant masses, an unusual feature which is explained in the Standard Model by the Higgs mechanism. Due to their large mass (approximately 90 GeV/c2), these carrier particles, called the W and Z bosons, are short-lived with a lifetime of under 10−24 seconds. The weak interaction has a coupling constant (an indicator of how frequently interactions occur) between 10−7 and 10−6, compared to the electromagnetic coupling constant of about 10−2 and the strong interaction coupling constant of about 1; consequently the weak interaction is "weak" in terms of intensity. The weak interaction has a very short effective range (around 10−17 to 10−16 m (0.01 to 0.1 fm)). At distances around 10−18 meters (0.001 fm), the weak interaction has an intensity of a similar magnitude to the electromagnetic force, but this starts to decrease exponentially with increasing distance. Scaled up by just one and a half orders of magnitude, at distances of around 3×10−17 m, the weak interaction becomes 10,000 times weaker. The weak interaction affects all the fermions of the Standard Model, as well as the Higgs boson; neutrinos interact only through gravity and the weak interaction. The weak interaction does not produce bound states, nor does it involve binding energy – something that gravity does on an astronomical scale, the electromagnetic force does at the molecular and atomic levels, and the strong nuclear force does only at the subatomic level, inside of nuclei. Its most noticeable effect is due to its first unique feature: The charged weak interaction causes flavour change. For example, a neutron is heavier than a proton (its partner nucleon) and can decay into a proton by changing the flavour (type) of one of its two down quarks to an up quark. Neither the strong interaction nor electromagnetism permits flavour changing, so this can only proceed by weak decay; without weak decay, quark properties such as strangeness and charm (associated with the strange quark and charm quark, respectively) would also be conserved across all interactions. 
All mesons are unstable because of weak decay.(p29) In the process known as beta decay, a down quark in the neutron can change into an up quark by emitting a virtual W− boson, which then decays into an electron and an electron antineutrino.(p28) Another example is electron capture – a common variant of radioactive decay – wherein a proton and an electron within an atom interact and are changed to a neutron (an up quark is changed to a down quark), and an electron neutrino is emitted. Due to the large masses of the W bosons, particle transformations or decays (e.g., flavour change) that depend on the weak interaction typically occur much more slowly than transformations or decays that depend only on the strong or electromagnetic forces. For example, a neutral pion decays electromagnetically, and so has a lifetime of only about 10−16 seconds. In contrast, a charged pion can only decay through the weak interaction, and so lives about 10−8 seconds, or a hundred million times longer than a neutral pion.(p30) A particularly extreme example is the weak-force decay of a free neutron, which takes about 15 minutes.(p28) === Weak isospin and weak hypercharge === All particles have a property called weak isospin (symbol T3), which serves as an additive quantum number that restricts how the particle can interact with the W± of the weak force. Weak isospin plays the same role in the weak interaction with W± as electric charge does in electromagnetism, and color charge in the strong interaction; a different number with a similar name, weak charge, discussed below, is used for interactions with the Z0. All left-handed fermions have a weak isospin value of either +1/2 or −1/2; all right-handed fermions have 0 isospin. For example, the up quark has T3 = +1/2 and the down quark has T3 = −1/2. A quark never decays through the weak interaction into a quark of the same T3: Quarks with a T3 of +1/2 only decay into quarks with a T3 of −1/2 and conversely. In any given strong, electromagnetic, or weak interaction, weak isospin is conserved: The sum of the weak isospin numbers of the particles entering the interaction equals the sum of the weak isospin numbers of the particles exiting that interaction. For example, a (left-handed) π+, with a weak isospin of +1, normally decays into a νμ (with T3 = +1/2) and a μ+ (as a right-handed antiparticle, +1/2).(p30) For the development of the electroweak theory, another property, weak hypercharge, was invented, defined as YW = 2(Q − T3), where YW is the weak hypercharge of a particle with electrical charge Q (in elementary charge units) and weak isospin T3. Weak hypercharge is the generator of the U(1) component of the electroweak gauge group; whereas some particles have a weak isospin of zero, all known spin-1/2 particles have a non-zero weak hypercharge. == Interaction types == There are two types of weak interaction (called vertices). The first type is called the "charged-current interaction" because the weakly interacting fermions form a current with total electric charge that is nonzero. The second type is called the "neutral-current interaction" because the weakly interacting fermions form a current with total electric charge of zero. It is responsible for the (rare) deflection of neutrinos. The two types of interaction follow different selection rules. 
This naming convention is often misunderstood to label the electric charge of the W and Z bosons; however, the naming convention predates the concept of the mediator bosons, and clearly (at least in name) labels the charge of the current (formed from the fermions), not necessarily the bosons. === Charged-current interaction === In one type of charged current interaction, a charged lepton (such as an electron or a muon, having a charge of −1) can absorb a W+ boson (a particle with a charge of +1) and be thereby converted into a corresponding neutrino (with a charge of 0), where the type ("flavour") of neutrino (electron νe, muon νμ, or tau ντ) is the same as the type of lepton in the interaction, for example: μ− + W+ → νμ. Similarly, a down-type quark (d, s, or b, with a charge of −1/3) can be converted into an up-type quark (u, c, or t, with a charge of +2/3), by emitting a W− boson or by absorbing a W+ boson. More precisely, the down-type quark becomes a quantum superposition of up-type quarks: that is to say, it has a possibility of becoming any one of the three up-type quarks, with the probabilities given in the CKM matrix tables. Conversely, an up-type quark can emit a W+ boson, or absorb a W− boson, and thereby be converted into a down-type quark, for example: d → u + W−; d + W+ → u; c → s + W+; c + W− → s. The W boson is unstable, so it decays rapidly, with a very short lifetime. For example: W− → e− + ν̄e; W+ → e+ + νe. Decay of a W boson to other products can happen, with varying probabilities. In the so-called beta decay of a neutron, a down quark within the neutron emits a virtual W− boson and is thereby converted into an up quark, converting the neutron into a proton. Because of the limited energy involved in the process (i.e., the mass difference between the down quark and the up quark), the virtual W− boson can only carry sufficient energy to produce an electron and an electron-antineutrino – the two lowest-possible masses among its prospective decay products. At the quark level, the process can be represented as: d → u + e− + ν̄e. === Neutral-current interaction === In neutral current interactions, a quark or a lepton (e.g., an electron or a muon) emits or absorbs a neutral Z boson. For example: e− → e− + Z0. Like the W± bosons, the Z0 boson also decays rapidly, for example: Z0 → b + b̄. Unlike the charged-current interaction, whose selection rules are strictly limited by chirality, electric charge, and/or weak isospin, the neutral-current Z0 interaction can cause any two fermions in the standard model to deflect: Either particles or anti-particles, with any electric charge, and both left- and right-chirality, although the strength of the interaction differs. 
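As a quick sanity check on the charged-current examples above, the following sketch verifies that electric charge is conserved in the quark-level beta decay and in the W decays quoted in the text. It is purely illustrative; the particle labels and function name are choices made for this example.

```python
from fractions import Fraction as F

# Electric charges (in units of e) of the particles appearing in the examples above.
charge = {
    "d": F(-1, 3), "u": F(2, 3),
    "e-": F(-1), "e+": F(1),
    "nu_e": F(0), "nubar_e": F(0),
    "W-": F(-1), "W+": F(1),
}

def charge_conserved(initial, final):
    """True if the total electric charge is the same before and after."""
    return sum(charge[p] for p in initial) == sum(charge[p] for p in final)

print(charge_conserved(["d"], ["u", "e-", "nubar_e"]))   # True: quark-level beta decay
print(charge_conserved(["W-"], ["e-", "nubar_e"]))       # True
print(charge_conserved(["W+"], ["e+", "nu_e"]))          # True
```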
The quantum number weak charge (QW) serves the same role in the neutral current interaction with the Z0 that electric charge (Q, with no subscript) does in the electromagnetic interaction: It quantifies the vector part of the interaction. Its value is given by: Qw = 2T3 − 4Q sin²θw = 2T3 − Q + (1 − 4 sin²θw)Q. Since the weak mixing angle θw ≈ 29°, the parenthetic expression (1 − 4 sin²θw) ≈ 0.060, with its value varying slightly with the momentum difference (called "running") between the particles involved. Hence Qw ≈ 2T3 − Q = sgn(Q)(1 − |Q|), since by convention sgn T3 ≡ sgn Q, and for all fermions involved in the weak interaction T3 = ±1/2. The weak charge of charged leptons is then close to zero, so these mostly interact with the Z boson through the axial coupling. == Electroweak theory == The Standard Model of particle physics describes the electromagnetic interaction and the weak interaction as two different aspects of a single electroweak interaction. This theory was developed around 1968 by Sheldon Glashow, Abdus Salam, and Steven Weinberg, and they were awarded the 1979 Nobel Prize in Physics for their work. The Higgs mechanism provides an explanation for the presence of three massive gauge bosons (W+, W−, Z0, the three carriers of the weak interaction), and the photon (γ, the massless gauge boson that carries the electromagnetic interaction). According to the electroweak theory, at very high energies, the universe has four components of the Higgs field whose interactions are carried by four massless scalar bosons forming a complex scalar Higgs field doublet. Likewise, there are four massless electroweak vector bosons, each similar to the photon. However, at low energies, this gauge symmetry is spontaneously broken down to the U(1) symmetry of electromagnetism, since one of the Higgs fields acquires a vacuum expectation value. Naïvely, the symmetry-breaking would be expected to produce three massless bosons, but instead those "extra" three Higgs bosons become incorporated into the three weak bosons, which then acquire mass through the Higgs mechanism. These three composite bosons are the W+, W−, and Z0 bosons actually observed in the weak interaction. The fourth electroweak gauge boson is the photon (γ) of electromagnetism, which does not couple to any of the Higgs fields and so remains massless. This theory has made a number of predictions, including a prediction of the masses of the Z and W bosons before their discovery and detection in 1983. On 4 July 2012, the CMS and the ATLAS experimental teams at the Large Hadron Collider independently announced that they had confirmed the formal discovery of a previously unknown boson of mass between 125 and 127 GeV/c2, whose behaviour so far was "consistent with" a Higgs boson, while adding a cautious note that further data and analysis were needed before positively identifying the new boson as being a Higgs boson of some type. By 14 March 2013, a Higgs boson was tentatively confirmed to exist. 
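The weak-charge formula just given, together with its sign-times-(1 − |Q|) approximation, can be evaluated directly. The snippet below is an illustration only; it uses the weak mixing angle of roughly 29° quoted in the text and the standard weak-isospin and charge assignments.

```python
import math

sin2_theta_w = math.sin(math.radians(29.0)) ** 2   # weak mixing angle ~29 degrees, as in the text

def weak_charge(T3, Q):
    """Q_w = 2*T3 - 4*Q*sin^2(theta_w), the vector coupling to the Z0."""
    return 2 * T3 - 4 * Q * sin2_theta_w

def weak_charge_approx(T3, Q):
    """Approximation Q_w ~ 2*T3 - Q, valid since (1 - 4 sin^2 theta_w) is small."""
    return 2 * T3 - Q

for name, T3, Q in [("up quark",   +0.5, +2 / 3),
                    ("down quark", -0.5, -1 / 3),
                    ("electron",   -0.5, -1.0),
                    ("neutrino",   +0.5,  0.0)]:
    print(f"{name:<10}  Q_w = {weak_charge(T3, Q):+.3f}  (approx {weak_charge_approx(T3, Q):+.3f})")
# Note how the electron's weak charge comes out close to zero, as stated in the text.
```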
In a speculative case where the electroweak symmetry breaking scale were lowered, the unbroken SU(2) interaction would eventually become confining. Alternative models where SU(2) becomes confining above that scale appear quantitatively similar to the Standard Model at lower energies, but dramatically different above symmetry breaking. == Violation of symmetry == The laws of nature were long thought to remain the same under mirror reflection. The results of an experiment viewed via a mirror were expected to be identical to the results of a separately constructed, mirror-reflected copy of the experimental apparatus watched through the mirror. This so-called law of parity conservation was known to be respected by classical gravitation, electromagnetism and the strong interaction; it was assumed to be a universal law. However, in the mid-1950s Chen-Ning Yang and Tsung-Dao Lee suggested that the weak interaction might violate this law. Chien Shiung Wu and collaborators in 1957 discovered that the weak interaction violates parity, earning Yang and Lee the 1957 Nobel Prize in Physics. Although the weak interaction was once described by Fermi's theory, the discovery of parity violation and renormalization theory suggested that a new approach was needed. In 1957, Robert Marshak and George Sudarshan and, somewhat later, Richard Feynman and Murray Gell-Mann proposed a V − A (vector minus axial vector or left-handed) Lagrangian for weak interactions. In this theory, the weak interaction acts only on left-handed particles (and right-handed antiparticles). Since the mirror reflection of a left-handed particle is right-handed, this explains the maximal violation of parity. The V − A theory was developed before the discovery of the Z boson, so it did not include the right-handed fields that enter in the neutral current interaction. However, this theory allowed a compound symmetry CP to be conserved. CP combines parity P (switching left to right) with charge conjugation C (switching particles with antiparticles). Physicists were again surprised when in 1964, James Cronin and Val Fitch provided clear evidence in kaon decays that CP symmetry could be broken too, winning them the 1980 Nobel Prize in Physics. In 1973, Makoto Kobayashi and Toshihide Maskawa showed that CP violation in the weak interaction required more than two generations of particles, effectively predicting the existence of a then unknown third generation. This discovery earned them half of the 2008 Nobel Prize in Physics. Unlike parity violation, CP violation occurs only in rare circumstances. Despite its limited occurrence under present conditions, it is widely believed to be the reason that there is much more matter than antimatter in the universe, and thus forms one of Andrei Sakharov's three conditions for baryogenesis. == See also == Weakless universe – the postulate that weak interactions are not anthropically necessary Gravity Strong interaction Electromagnetism == Footnotes == == References == == Sources == === Technical === === For general readers === == External links == Harry Cheung, The Weak Force @Fermilab Fundamental Forces @Hyperphysics, Georgia State University. Brian Koberlein, What is the weak force?
Wikipedia/Weak_nuclear_force
Communication physics is one of the applied branches of physics. It deals with various kinds of communication systems, ranging from familiar technologies such as mobile phone communication to quantum communication via quantum entanglement. Communications Physics is also the name of a journal created in 2018 and published by Nature Research, which aims to publish research that involves a different way of thinking in the field. == Applications == Communication physics aims to study and explain how a communication system works. This can be studied in a hard-science sense, as in computer communication, or in terms of how people communicate. An example of communication physics is how computers can transmit and receive data through networks; this also involves explaining how these devices encode and decode messages. == See also == Electronic communication Optical communication Computer communication Telephone Telegraph Radio Television Mobile phone communication Nanoscale network == References ==
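Since the article notes that communication physics must explain how devices encode and decode messages, here is a minimal, purely illustrative sketch of one classic technique: appending a single even-parity bit to a block of bits so that a receiver can detect (though not correct) any single-bit error. The scheme and function names are examples only, not a description of any particular system.

```python
def encode_with_parity(bits):
    """Append one even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(codeword):
    """True if the received codeword still has even parity."""
    return sum(codeword) % 2 == 0

message = [1, 0, 1, 1, 0, 0, 1, 0]
codeword = encode_with_parity(message)
print(parity_ok(codeword))        # True: nothing was corrupted

corrupted = codeword.copy()
corrupted[3] ^= 1                 # flip one bit "in transit"
print(parity_ok(corrupted))       # False: the single-bit error is detected
```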
Wikipedia/Communication_physics
Space physics, also known as space plasma physics, is the study of naturally occurring plasmas within Earth's upper atmosphere and the rest of the Solar System. It includes the topics of aeronomy, aurorae, planetary ionospheres and magnetospheres, radiation belts, and space weather (collectively known as solar-terrestrial physics). It also encompasses the discipline of heliophysics, which studies the solar physics of the Sun, its solar wind, the coronal heating problem, solar energetic particles, and the heliosphere. Space physics is both a pure science and an applied science, with applications in radio transmission, spacecraft operations (particularly communications and weather satellites), and in meteorology. Important physical processes in space physics include magnetic reconnection, synchrotron radiation, ring currents, Alfvén waves and plasma instabilities. It is studied using direct in situ measurements by sounding rockets and spacecraft, indirect remote sensing of electromagnetic radiation produced by the plasmas, and theoretical magnetohydrodynamics. Closely related fields include plasma physics, which studies more fundamental physics and artificial plasmas; atmospheric physics, which investigates lower levels of Earth's atmosphere; and astrophysical plasmas, which are natural plasmas beyond the Solar System. == History == Space physics can be traced to the Chinese, who discovered the principle of the compass but did not understand how it worked. During the 16th century, in De Magnete, William Gilbert gave the first description of the Earth's magnetic field, showing that the Earth itself is a great magnet, which explained why a compass needle points north. Deviations of the compass needle, known as magnetic declination, were recorded on navigation charts, and a detailed study of the declination near London by watchmaker George Graham resulted in the discovery of irregular magnetic fluctuations that we now call magnetic storms, so named by Alexander von Humboldt. Gauss and Wilhelm Weber made very careful measurements of Earth's magnetic field which showed systematic variations and random fluctuations. This suggested that the Earth was not an isolated body, but was influenced by external forces – especially from the Sun and the appearance of sunspots. A relationship between individual aurorae and accompanying geomagnetic disturbances was noticed by Anders Celsius and Olof Peter Hiorter in 1747. In 1860, Elias Loomis (1811–1889) showed that the highest incidence of aurora is seen inside an oval of 20–25 degrees around the magnetic pole. In 1881, Hermann Fritz published a map of the "isochasms", or lines of equal auroral frequency. In the late 1870s, Henri Becquerel offered the first physical explanation for the statistical correlations that had been recorded: sunspots must be a source of fast protons. They are guided to the poles by the Earth's magnetic field. In the early twentieth century, these ideas led Kristian Birkeland to build a terrella, a laboratory device which simulates the Earth's magnetic field in a vacuum chamber, and which uses a cathode ray tube to simulate the energetic particles which compose the solar wind. A theory began to be formulated about the interaction between the Earth's magnetic field and the solar wind. Space physics began in earnest with the first in situ measurements in the early 1950s, when a team led by Van Allen launched the first rockets to a height of around 110 km. 
Geiger counters on board the second Soviet satellite, Sputnik 2, and the first US satellite, Explorer 1, detected the Earth's radiation belts, later named the Van Allen belts. The boundary between the Earth's magnetic field and interplanetary space was studied by Explorer 10. Later spacecraft would travel outside Earth orbit and study the composition and structure of the solar wind in much greater detail. These include Wind (1994), the Advanced Composition Explorer (ACE), Ulysses, the Interstellar Boundary Explorer (IBEX, launched in 2008), and the Parker Solar Probe. Other spacecraft, such as STEREO and the Solar and Heliospheric Observatory (SOHO), study the Sun. == See also == Effects of spaceflight on the human body Space environment Space science Weightlessness == References == == Further reading == Kallenrode, May-Britt (2004). Space Physics: An Introduction to Plasmas and Particles in the Heliosphere and Magnetospheres. Springer. ISBN 978-3-540-20617-0. Gombosi, Tamas (1998). Physics of the Space Environment. New York: Cambridge University Press. ISBN 978-0-521-59264-2. == External links == Media related to Space physics at Wikimedia Commons
Wikipedia/Space_physics
Physics beyond the Standard Model (BSM) refers to the theoretical developments needed to explain the deficiencies of the Standard Model, such as the inability to explain the fundamental dimensionless physical constants of the standard model, the strong CP problem, neutrino oscillations, matter–antimatter asymmetry, and the nature of dark matter and dark energy. Another problem lies within the mathematical framework of the Standard Model itself: the Standard Model is inconsistent with that of general relativity, and one or both theories break down under certain conditions, such as spacetime singularities like the Big Bang and black hole event horizons. Theories that lie beyond the Standard Model include various extensions of the standard model through supersymmetry, such as the Minimal Supersymmetric Standard Model (MSSM) and Next-to-Minimal Supersymmetric Standard Model (NMSSM), and entirely novel explanations, such as string theory, M-theory, and extra dimensions. As these theories tend to reproduce the entirety of current phenomena, the question of which theory is the right one, or at least the "best step" towards a Theory of Everything, can only be settled via experiments, and is one of the most active areas of research in both theoretical and experimental physics. == Problems with the Standard Model == Despite being the most successful theory of particle physics to date, the Standard Model is not perfect. A large share of the published output of theoretical physicists consists of proposals for various forms of "Beyond the Standard Model" new physics proposals that would modify the Standard Model in ways subtle enough to be consistent with existing data, yet address its imperfections materially enough to predict non-Standard Model outcomes of new experiments that can be proposed. === Phenomena not explained === The Standard Model is inherently an incomplete theory. There are fundamental physical phenomena in nature that the Standard Model does not adequately explain: Dimensionless physical constants. The standard model does not explain the masses of the elementary particles (as fractions of the Planck mass), their mixing angles and phases, the coupling constants, the cosmological constant (multiplied with the Planck length), and the number of spatial dimensions. Gravity. The standard model does not explain gravity. The approach of simply adding a graviton to the Standard Model does not recreate what is observed experimentally without other modifications, as yet undiscovered, to the Standard Model. Moreover, the Standard Model is widely considered to be incompatible with the most successful theory of gravity to date, general relativity. Dark matter. Assuming that general relativity and Lambda CDM are true, cosmological observations tell us the standard model explains about 5% of the mass-energy present in the universe. About 26% should be dark matter (the remaining 69% being dark energy) which would behave just like other matter, but which only interacts weakly (if at all) with the Standard Model fields. Yet, the Standard Model does not supply any fundamental particles that are good dark matter candidates. Dark energy. As mentioned, the remaining 69% of the universe's energy should consist of the so-called dark energy, a constant energy density for the vacuum. Attempts to explain dark energy in terms of vacuum energy of the standard model lead to a mismatch of 120 orders of magnitude. Neutrino oscillations. According to the Standard Model, neutrinos do not oscillate. 
However, experiments and astronomical observations have shown that neutrino oscillation does occur. These are typically explained by postulating that neutrinos have mass. Neutrinos do not have mass in the Standard Model, and mass terms for the neutrinos can be added to the Standard Model by hand, but these lead to new theoretical problems. For example, the mass terms need to be extraordinarily small and it is not clear if the neutrino masses would arise in the same way that the masses of other fundamental particles do in the Standard Model. There are also other extensions of the Standard Model for neutrino oscillations which do not assume massive neutrinos, such as Lorentz-violating neutrino oscillations. Matter–antimatter asymmetry. The universe is made out of mostly matter. However, the standard model predicts that matter and antimatter should have been created in (almost) equal amounts if the initial conditions of the universe did not involve disproportionate matter relative to antimatter. Yet, there is no mechanism in the Standard Model to sufficiently explain this asymmetry. ==== Experimental results not explained ==== No experimental result is accepted as definitively contradicting the Standard Model at the 5 σ level, widely considered to be the threshold of a discovery in particle physics. Because every experiment contains some degree of statistical and systemic uncertainty, and the theoretical predictions themselves are also almost never calculated exactly and are subject to uncertainties in measurements of the fundamental constants of the Standard Model (some of which are tiny and others of which are substantial), it is to be expected that some of the hundreds of experimental tests of the Standard Model will deviate from it to some extent, even if there were no new physics to be discovered. At any given moment there are several experimental results standing that significantly differ from a Standard Model-based prediction. In the past, many of these discrepancies have been found to be statistical flukes or experimental errors that vanish as more data has been collected, or when the same experiments were conducted more carefully. On the other hand, any physics beyond the Standard Model would necessarily first appear in experiments as a statistically significant difference between an experiment and the theoretical prediction. The task is to determine which is the case. In each case, physicists seek to determine if a result is merely a statistical fluke or experimental error on the one hand, or a sign of new physics on the other. More statistically significant results cannot be mere statistical flukes but can still result from experimental error or inaccurate estimates of experimental precision. Frequently, experiments are tailored to be more sensitive to experimental results that would distinguish the Standard Model from theoretical alternatives. Some of the most notable examples include the following: B meson decay etc. – results from a BaBar experiment may suggest a surplus over Standard Model predictions of a type of particle decay ( B → D(*) τ− ντ ). In this, an electron and positron collide, resulting in a B meson and an antimatter B meson, which then decays into a D meson and a tau lepton as well as a tau antineutrino. 
While the level of certainty of the excess (3.4 σ in statistical jargon) is not enough to declare a break from the Standard Model, the results are a potential sign of something amiss and are likely to affect existing theories, including those attempting to deduce the properties of Higgs bosons. In 2015, LHCb reported observing a 2.1 σ excess in the same ratio of branching fractions. The Belle experiment also reported an excess. In 2017, a meta-analysis of all available data reported a cumulative 5 σ deviation from the Standard Model. Neutron lifetime puzzle – Free neutrons are not stable but decay after some time. Currently there are two methods used to measure this lifetime ("bottle" versus "beam") that give different values not within each other's error margin. Currently the lifetime from the bottle method is $\tau_{n} = 877.75\ \mathrm{s}$, about 10 seconds below the beam-method value of $\tau_{n} = 887.7\ \mathrm{s}$. This problem may be resolved by taking into account neutron scattering, which decreases the measured lifetime of the neutrons involved. This effect occurs in the bottle method and depends on the shape of the bottle, so it may be a systematic error specific to the bottle method. === Theoretical predictions not observed === Observation at particle colliders of all of the fundamental particles predicted by the Standard Model has been confirmed. The Higgs boson is predicted by the Standard Model's explanation of the Higgs mechanism, which describes how the weak SU(2) gauge symmetry is broken and how fundamental particles obtain mass; it was the last particle predicted by the Standard Model to be observed. On July 4, 2012, CERN scientists using the Large Hadron Collider announced the discovery of a particle consistent with the Higgs boson, with a mass of about 126 GeV/c². A Higgs boson was confirmed to exist on March 14, 2013, although efforts to confirm that it has all of the properties predicted by the Standard Model are ongoing. A few hadrons (i.e. composite particles made of quarks) whose existence is predicted by the Standard Model, but which can be produced only at very high energies and only very infrequently, have not yet been definitively observed, and "glueballs" (i.e. composite particles made of gluons) have also not yet been definitively observed. Some very rare particle decays predicted by the Standard Model have also not yet been definitively observed because insufficient data is available to make a statistically significant observation. === Unexplained relations === Koide formula – an unexplained empirical equation remarked upon by Yoshio Koide in 1981, and later by others. It relates the masses of the three charged leptons: $Q = \frac{m_{e}+m_{\mu}+m_{\tau}}{\left(\sqrt{m_{e}}+\sqrt{m_{\mu}}+\sqrt{m_{\tau}}\right)^{2}} = 0.666661(7) \approx \frac{2}{3}$. The Standard Model does not predict lepton masses (they are free parameters of the theory). However, the value of the Koide formula being equal to 2/3 within experimental errors of the measured lepton masses suggests the existence of a theory which is able to predict lepton masses. 
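The Koide relation above is easy to verify numerically. Below is a minimal sketch; the charged-lepton masses are approximate reference values assumed for the example, not figures quoted in this article.

```python
import math

# Approximate charged-lepton masses in MeV/c^2 (illustrative inputs).
m_e, m_mu, m_tau = 0.51100, 105.658, 1776.86

def koide_q(masses):
    """Koide's Q = (sum of masses) / (sum of square roots of masses)^2."""
    return sum(masses) / sum(math.sqrt(m) for m in masses) ** 2

q = koide_q([m_e, m_mu, m_tau])
print(f"Q = {q:.6f}   (Koide's observation: Q is close to 2/3 = {2/3:.6f})")
```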
The CKM matrix, if interpreted as a rotation matrix in a 3-dimensional vector space, "rotates" a vector composed of square roots of down-type quark masses $\left(\sqrt{m_{d}}, \sqrt{m_{s}}, \sqrt{m_{b}}\right)$ into a vector of square roots of up-type quark masses $\left(\sqrt{m_{u}}, \sqrt{m_{c}}, \sqrt{m_{t}}\right)$, up to vector lengths, a result due to Kohzo Nishida. The sum of squares of the Yukawa couplings of all Standard Model fermions is approximately 0.984, which is very close to 1. To put it another way, the sum of squares of fermion masses is very close to half of the squared Higgs vacuum expectation value. This sum is dominated by the top quark. The sum of squares of boson masses (that is, W, Z, and Higgs bosons) is also very close to half of the squared Higgs vacuum expectation value; the ratio is approximately 1.004. Consequently, the sum of squared masses of all Standard Model particles is very close to the squared Higgs vacuum expectation value; the ratio is approximately 0.994. It is unclear if these empirical relationships represent any underlying physics; according to Koide, the rule he discovered "may be an accidental coincidence". === Theoretical problems === Some features of the standard model are added in an ad hoc way. These are not problems per se (i.e. the theory works fine with the ad hoc insertions), but they imply a lack of understanding. These contrived features have motivated theorists to look for more fundamental theories with fewer parameters. Some of the contrivances are: Hierarchy problem – the standard model introduces particle masses through a process known as spontaneous symmetry breaking caused by the Higgs field. Within the standard model, the mass of the Higgs particle gets some very large quantum corrections due to the presence of virtual particles (mostly virtual top quarks). These corrections are much larger than the actual mass of the Higgs. This means that the bare mass parameter of the Higgs in the standard model must be fine-tuned in such a way that almost completely cancels the quantum corrections. This level of fine-tuning is deemed unnatural by many theorists. Number of parameters – the standard model depends on 19 numerical parameters. Their values are known from experiment, but the origin of the values is unknown. Some theorists have tried to find relations between different parameters, for example, between the masses of particles in different generations or calculating particle masses, such as in asymptotic safety scenarios. Quantum triviality – suggests that it may not be possible to create a consistent quantum field theory involving elementary scalar Higgs particles. This is sometimes called the Landau pole problem. A possible solution is that the renormalized value could go to zero as the cut-off is removed, meaning that the bare value is completely screened by quantum fluctuations. Strong CP problem – it can be argued theoretically that the standard model should contain a term in the strong interaction that breaks CP symmetry, causing slightly different interaction rates for matter vs. antimatter. Experimentally, however, no such violation has been found, implying that the coefficient of this term – if any – would be suspiciously close to zero. 
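The mass-sum relations quoted above can likewise be checked with a few lines of arithmetic. The particle masses in the sketch below are approximate values assumed for illustration, not figures taken from the article.

```python
# Rough check of the empirical mass-sum relations described above.
# Masses in GeV/c^2; approximate values assumed for illustration only.
v = 246.22  # Higgs vacuum expectation value
fermions = {  # the sum is dominated by the top quark
    "top": 172.7, "bottom": 4.18, "charm": 1.27, "tau": 1.777,
    "strange": 0.095, "muon": 0.106, "down": 0.0047, "up": 0.0022,
    "electron": 0.000511,
}
bosons = {"W": 80.38, "Z": 91.19, "Higgs": 125.25}

sum_f = sum(m ** 2 for m in fermions.values())
sum_b = sum(m ** 2 for m in bosons.values())

print("fermion mass^2 sum / (v^2/2):", round(sum_f / (v ** 2 / 2), 3))      # ~0.98
print("boson   mass^2 sum / (v^2/2):", round(sum_b / (v ** 2 / 2), 3))      # ~1.00
print("total   mass^2 sum / v^2    :", round((sum_f + sum_b) / v ** 2, 3))  # ~0.99
```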
== Additional experimental results == Research from experimental data on the cosmological constant, LIGO noise, and pulsar timing, suggests it's very unlikely that there are any new particles with masses much higher than those which can be found in the standard model or the Large Hadron Collider. However, this research has also indicated that quantum gravity or perturbative quantum field theory will become strongly coupled before 1 PeV, leading to other new physics in the TeVs. == Grand unified theories == The standard model has three gauge symmetries; the colour SU(3), the weak isospin SU(2), and the weak hypercharge U(1) symmetry, corresponding to the three fundamental forces. Due to renormalization the coupling constants of each of these symmetries vary with the energy at which they are measured. Around 1016 GeV these couplings become approximately equal. This has led to speculation that above this energy the three gauge symmetries of the standard model are unified in one single gauge symmetry with a simple gauge group, and just one coupling constant. Below this energy the symmetry is spontaneously broken to the standard model symmetries. Popular choices for the unifying group are the special unitary group in five dimensions SU(5) and the special orthogonal group in ten dimensions SO(10). Theories that unify the standard model symmetries in this way are called Grand Unified Theories (or GUTs), and the energy scale at which the unified symmetry is broken is called the GUT scale. Generically, grand unified theories predict the creation of magnetic monopoles in the early universe, and instability of the proton. Neither of these have been observed, and this absence of observation puts limits on the possible GUTs. == Supersymmetry == Supersymmetry extends the Standard Model by adding another class of symmetries to the Lagrangian. These symmetries exchange fermionic particles with bosonic ones. Such a symmetry predicts the existence of supersymmetric particles, abbreviated as sparticles, which include the sleptons, squarks, neutralinos and charginos. Each particle in the Standard Model would have a superpartner whose spin differs by 1/2 from the ordinary particle. Due to the breaking of supersymmetry, the sparticles are much heavier than their ordinary counterparts; they are so heavy that existing particle colliders may not be powerful enough to produce them. == Neutrinos == In the standard model, neutrinos cannot spontaneously change flavor. Measurements however indicated that neutrinos do spontaneously change flavor, in what is called neutrino oscillations. Neutrino oscillations are usually explained using massive neutrinos. In the standard model, neutrinos have exactly zero mass, as the standard model only contains left-handed neutrinos. With no suitable right-handed partner, it is impossible to add a renormalizable mass term to the standard model. These measurements only give the mass differences between the different flavours. The best constraint on the absolute mass of the neutrinos comes from precision measurements of tritium decay, providing an upper limit 2 eV, which makes them at least five orders of magnitude lighter than the other particles in the standard model. This necessitates an extension of the standard model, which not only needs to explain how neutrinos get their mass, but also why the mass is so small. 
One approach to add masses to the neutrinos, the so-called seesaw mechanism, is to add right-handed neutrinos and have these couple to left-handed neutrinos with a Dirac mass term. The right-handed neutrinos have to be sterile, meaning that they do not participate in any of the standard model interactions. Because they have no charges, the right-handed neutrinos can act as their own anti-particles, and have a Majorana mass term. Like the other Dirac masses in the standard model, the neutrino Dirac mass is expected to be generated through the Higgs mechanism, and is therefore unpredictable. The standard model fermion masses differ by many orders of magnitude; the Dirac neutrino mass has at least the same uncertainty. On the other hand, the Majorana mass for the right-handed neutrinos does not arise from the Higgs mechanism, and is therefore expected to be tied to some energy scale of new physics beyond the standard model, for example the Planck scale. Therefore, any process involving right-handed neutrinos will be suppressed at low energies. The correction due to these suppressed processes effectively gives the left-handed neutrinos a mass that is inversely proportional to the right-handed Majorana mass, a mechanism known as the see-saw. The presence of heavy right-handed neutrinos thereby explains both the small mass of the left-handed neutrinos and the absence of the right-handed neutrinos in observations. However, due to the uncertainty in the Dirac neutrino masses, the right-handed neutrino masses can lie anywhere. For example, they could be as light as keV and be dark matter, they can have a mass in the LHC energy range and lead to observable lepton number violation, or they can be near the GUT scale, linking the right-handed neutrinos to the possibility of a grand unified theory. The mass terms mix neutrinos of different generations. This mixing is parameterized by the PMNS matrix, which is the neutrino analogue of the CKM quark mixing matrix. Unlike the quark mixing, which is almost minimal, the mixing of the neutrinos appears to be almost maximal. This has led to various speculations of symmetries between the various generations that could explain the mixing patterns. The mixing matrix could also contain several complex phases that break CP invariance, although there has been no experimental probe of these. These phases could potentially create a surplus of leptons over anti-leptons in the early universe, a process known as leptogenesis. This asymmetry could then at a later stage be converted in an excess of baryons over anti-baryons, and explain the matter-antimatter asymmetry in the universe. The light neutrinos are disfavored as an explanation for the observation of dark matter, based on considerations of large-scale structure formation in the early universe. Simulations of structure formation show that they are too hot – that is, their kinetic energy is large compared to their mass – while formation of structures similar to the galaxies in our universe requires cold dark matter. The simulations show that neutrinos can at best explain a few percent of the missing mass in dark matter. However, the heavy, sterile, right-handed neutrinos are a possible candidate for a dark matter WIMP. There are however other explanations for neutrino oscillations which do not necessarily require neutrinos to have masses, such as Lorentz-violating neutrino oscillations. 
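The inverse dependence on the right-handed mass described above can be illustrated with a one-line estimate, m_light ≈ m_D²/M_R. The Dirac and Majorana masses in the sketch below are illustrative assumptions, not values from the article.

```python
# Order-of-magnitude sketch of the seesaw relation: m_light ~ m_D**2 / M_R.
GeV_to_eV = 1e9
m_D = 100.0  # GeV, an assumed Dirac mass near the electroweak scale

for M_R in (1e12, 1e14, 1e15):  # GeV, assumed heavy right-handed neutrino masses
    m_light = m_D ** 2 / M_R    # GeV
    print(f"M_R = {M_R:.0e} GeV  ->  m_light ~ {m_light * GeV_to_eV:.3g} eV")

# A GUT-scale M_R naturally pushes m_light far below the ~2 eV
# tritium-decay bound quoted earlier.
```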
== Preon models == Several preon models have been proposed to address the unsolved problem concerning the fact that there are three generations of quarks and leptons. Preon models generally postulate some additional new particles which are further postulated to be able to combine to form the quarks and leptons of the standard model. One of the earliest preon models was the Rishon model. To date, no preon model is widely accepted or fully verified. == Theories of everything == Theoretical physics continues to strive toward a theory of everything, a theory that fully explains and links together all known physical phenomena, and predicts the outcome of any experiment that could be carried out in principle. In practical terms the immediate goal in this regard is to develop a theory which would unify the Standard Model with General Relativity in a theory of quantum gravity. Additional features, such as overcoming conceptual flaws in either theory or accurate prediction of particle masses, would be desired. The challenges in putting together such a theory are not just conceptual - they include the experimental aspects of the very high energies needed to probe exotic realms. Several notable attempts in this direction are supersymmetry, loop quantum gravity, and String theory. === Supersymmetry === === Loop quantum gravity === Theories of quantum gravity such as loop quantum gravity and others are thought by some to be promising candidates to the mathematical unification of quantum field theory and general relativity, requiring less drastic changes to existing theories. However recent work places stringent limits on the putative effects of quantum gravity on the speed of light, and disfavours some current models of quantum gravity. === String theory === Extensions, revisions, replacements, and reorganizations of the Standard Model exist in attempt to correct for these and other issues. String theory is one such reinvention, and many theoretical physicists think that such theories are the next theoretical step toward a true Theory of Everything. Among the numerous variants of string theory, M-theory, whose mathematical existence was first proposed at a String Conference in 1995 by Edward Witten, is believed by many to be a proper "ToE" candidate, notably by physicists Brian Greene and Stephen Hawking. Though a full mathematical description is not yet known, solutions to the theory exist for specific cases. Recent works have also proposed alternate string models, some of which lack the various harder-to-test features of M-theory (e.g. the existence of Calabi–Yau manifolds, many extra dimensions, etc.) including works by well-published physicists such as Lisa Randall. == See also == == Footnotes == == References == == Further reading == Lisa Randall (2005). Warped Passages: Unraveling the Mysteries of the Universe's Hidden Dimensions. HarperCollins. ISBN 978-0-06-053108-9. == External resources == Standard Model Theory @ SLAC Scientific American Apr 2006 LHC. Nature July 2007 Les Houches Conference, Summer 2005
Wikipedia/Physics_beyond_the_Standard_Model
In physics, the Bardeen–Cooper–Schrieffer (BCS) theory (named after John Bardeen, Leon Cooper, and John Robert Schrieffer) is the first microscopic theory of superconductivity since Heike Kamerlingh Onnes's 1911 discovery. The theory describes superconductivity as a microscopic effect caused by a condensation of Cooper pairs. The theory is also used in nuclear physics to describe the pairing interaction between nucleons in an atomic nucleus. It was proposed by Bardeen, Cooper, and Schrieffer in 1957; they received the Nobel Prize in Physics for this theory in 1972. == History == Rapid progress in the understanding of superconductivity gained momentum in the mid-1950s. It began with the 1948 paper, "On the Problem of the Molecular Theory of Superconductivity", where Fritz London proposed that the phenomenological London equations may be consequences of the coherence of a quantum state. In 1953, Brian Pippard, motivated by penetration experiments, proposed that this would modify the London equations via a new scale parameter called the coherence length. John Bardeen then argued in the 1955 paper, "Theory of the Meissner Effect in Superconductors", that such a modification naturally occurs in a theory with an energy gap. The key ingredient was Leon Cooper's calculation of the bound states of electrons subject to an attractive force in his 1956 paper, "Bound Electron Pairs in a Degenerate Fermi Gas". In 1957 Bardeen and Cooper assembled these ingredients and constructed such a theory, the BCS theory, with Robert Schrieffer. The theory was first published in April 1957 in the letter, "Microscopic theory of superconductivity". The demonstration that the phase transition is second order, that it reproduces the Meissner effect and the calculations of specific heats and penetration depths appeared in the December 1957 article, "Theory of superconductivity". They received the Nobel Prize in Physics in 1972 for this theory. In 1986, high-temperature superconductivity was discovered in La-Ba-Cu-O, at temperatures up to 30 K. Following experiments determined more materials with transition temperatures up to about 130 K, considerably above the previous limit of about 30 K. It is experimentally very well known that the transition temperature strongly depends on pressure. In general, it is believed that BCS theory alone cannot explain this phenomenon and that other effects are in play. These effects are still not yet fully understood; it is possible that they even control superconductivity at low temperatures for some materials. == Overview == At sufficiently low temperatures, electrons near the Fermi surface become unstable against the formation of Cooper pairs. Cooper showed such binding will occur in the presence of an attractive potential, no matter how weak. In conventional superconductors, an attraction is generally attributed to an electron-lattice interaction. The BCS theory, however, requires only that the potential be attractive, regardless of its origin. In the BCS framework, superconductivity is a macroscopic effect which results from the condensation of Cooper pairs. These have some bosonic properties, and bosons, at sufficiently low temperature, can form a large Bose–Einstein condensate. Superconductivity was simultaneously explained by Nikolay Bogolyubov, by means of the Bogoliubov transformations. 
In many superconductors, the attractive interaction between electrons (necessary for pairing) is brought about indirectly by the interaction between the electrons and the vibrating crystal lattice (the phonons). Roughly speaking the picture is the following: An electron moving through a conductor will attract nearby positive charges in the lattice. This deformation of the lattice causes another electron, with opposite spin, to move into the region of higher positive charge density. The two electrons then become correlated. Because there are a lot of such electron pairs in a superconductor, these pairs overlap very strongly and form a highly collective condensate. In this "condensed" state, the breaking of one pair will change the energy of the entire condensate - not just a single electron, or a single pair. Thus, the energy required to break any single pair is related to the energy required to break all of the pairs (or more than just two electrons). Because the pairing increases this energy barrier, kicks from oscillating atoms in the conductor (which are small at sufficiently low temperatures) are not enough to affect the condensate as a whole, or any individual "member pair" within the condensate. Thus the electrons stay paired together and resist all kicks, and the electron flow as a whole (the current through the superconductor) will not experience resistance. Thus, the collective behavior of the condensate is a crucial ingredient necessary for superconductivity. === Details === BCS theory starts from the assumption that there is some attraction between electrons, which can overcome the Coulomb repulsion. In most materials (in low temperature superconductors), this attraction is brought about indirectly by the coupling of electrons to the crystal lattice (as explained above). However, the results of BCS theory do not depend on the origin of the attractive interaction. For instance, Cooper pairs have been observed in ultracold gases of fermions where a homogeneous magnetic field has been tuned to their Feshbach resonance. The original results of BCS (discussed below) described an s-wave superconducting state, which is the rule among low-temperature superconductors but is not realized in many unconventional superconductors such as the d-wave high-temperature superconductors. Extensions of BCS theory exist to describe these other cases, although they are insufficient to completely describe the observed features of high-temperature superconductivity. BCS is able to give an approximation for the quantum-mechanical many-body state of the system of (attractively interacting) electrons inside the metal. This state is now known as the BCS state. In the normal state of a metal, electrons move independently, whereas in the BCS state, they are bound into Cooper pairs by the attractive interaction. The BCS formalism is based on the reduced potential for the electrons' attraction. Within this potential, a variational ansatz for the wave function is proposed. This ansatz was later shown to be exact in the dense limit of pairs. Note that the continuous crossover between the dilute and dense regimes of attracting pairs of fermions is still an open problem, which now attracts a lot of attention within the field of ultracold gases. 
=== Underlying evidence === The hyperphysics website pages at Georgia State University summarize some key background to BCS theory as follows: Evidence of a band gap at the Fermi level (described as "a key piece in the puzzle") the existence of a critical temperature and critical magnetic field implied a band gap, and suggested a phase transition, but single electrons are forbidden from condensing to the same energy level by the Pauli exclusion principle. The site comments that "a drastic change in conductivity demanded a drastic change in electron behavior". Conceivably, pairs of electrons might perhaps act like bosons instead, which are bound by different condensate rules and do not have the same limitation. Isotope effect on the critical temperature, suggesting lattice interactions The Debye frequency of phonons in a lattice is proportional to the inverse of the square root of the mass of lattice ions. It was shown that the superconducting transition temperature of mercury indeed showed the same dependence, by substituting the most abundant natural mercury isotope, 202Hg, with a different isotope, 198Hg. An exponential rise in heat capacity near the critical temperature for some superconductors An exponential increase in heat capacity near the critical temperature also suggests an energy bandgap for the superconducting material. As superconducting vanadium is warmed toward its critical temperature, its heat capacity increases greatly in a very few degrees; this suggests an energy gap being bridged by thermal energy. The lessening of the measured energy gap towards the critical temperature This suggests a type of situation where some kind of binding energy exists but it is gradually weakened as the temperature increases toward the critical temperature. A binding energy suggests two or more particles or other entities that are bound together in the superconducting state. This helped to support the idea of bound particles – specifically electron pairs – and together with the above helped to paint a general picture of paired electrons and their lattice interactions. == Implications == BCS derived several important theoretical predictions that are independent of the details of the interaction, since the quantitative predictions mentioned below hold for any sufficiently weak attraction between the electrons and this last condition is fulfilled for many low temperature superconductors - the so-called weak-coupling case. These have been confirmed in numerous experiments: The electrons are bound into Cooper pairs, and these pairs are correlated due to the Pauli exclusion principle for the electrons, from which they are constructed. Therefore, in order to break a pair, one has to change energies of all other pairs. This means there is an energy gap for single-particle excitation, unlike in the normal metal (where the state of an electron can be changed by adding an arbitrarily small amount of energy). This energy gap is highest at low temperatures but vanishes at the transition temperature when superconductivity ceases to exist. The BCS theory gives an expression that shows how the gap grows with the strength of the attractive interaction and the (normal phase) single particle density of states at the Fermi level. Furthermore, it describes how the density of states is changed on entering the superconducting state, where there are no electronic states any more at the Fermi level. The energy gap is most directly observed in tunneling experiments and in reflection of microwaves from superconductors. 
BCS theory predicts the dependence of the value of the energy gap Δ at temperature T on the critical temperature Tc. The ratio between the value of the energy gap at zero temperature and the value of the superconducting transition temperature (expressed in energy units) takes the universal value $\Delta(T=0) = 1.764\,k_{\mathrm{B}}T_{\mathrm{c}}$, independent of material. Near the critical temperature the relation asymptotes to $\Delta(T \to T_{\mathrm{c}}) \approx 3.06\,k_{\mathrm{B}}T_{\mathrm{c}}\sqrt{1-(T/T_{\mathrm{c}})}$, which is of the form suggested the previous year by M. J. Buckingham based on the fact that the superconducting phase transition is second order, that the superconducting phase has a mass gap and on Blevins, Gordy and Fairbank's experimental results the previous year on the absorption of millimeter waves by superconducting tin. Due to the energy gap, the specific heat of the superconductor is suppressed strongly (exponentially) at low temperatures, there being no thermal excitations left. However, before reaching the transition temperature, the specific heat of the superconductor becomes even higher than that of the normal conductor (measured immediately above the transition) and the ratio of these two values is found to be universally given by 2.5. BCS theory correctly predicts the Meissner effect, i.e. the expulsion of a magnetic field from the superconductor and the variation of the penetration depth (the extent of the screening currents flowing below the metal's surface) with temperature. It also describes the variation of the critical magnetic field (above which the superconductor can no longer expel the field but becomes normal conducting) with temperature. BCS theory relates the value of the critical field at zero temperature to the value of the transition temperature and the density of states at the Fermi level. In its simplest form, BCS gives the superconducting transition temperature Tc in terms of the electron-phonon coupling potential V and the Debye cutoff energy ED: $k_{\mathrm{B}}T_{\mathrm{c}} = 1.134\,E_{\mathrm{D}}\,e^{-1/N(0)V}$, where N(0) is the electronic density of states at the Fermi level. For more details, see Cooper pairs. The BCS theory reproduces the isotope effect, which is the experimental observation that for a given superconducting material, the critical temperature is inversely proportional to the square-root of the mass of the isotope used in the material. The isotope effect was reported by two groups on 24 March 1950, who discovered it independently working with different mercury isotopes, although a few days before publication they learned of each other's results at the ONR conference in Atlanta. The two groups are Emanuel Maxwell, and C. A. Reynolds, B. Serin, W. H. Wright, and L. B. Nesbitt. The choice of isotope ordinarily has little effect on the electrical properties of a material, but does affect the frequency of lattice vibrations. This effect suggests that superconductivity is related to vibrations of the lattice. This is incorporated into BCS theory, where lattice vibrations yield the binding energy of electrons in a Cooper pair. Little–Parks experiment – one of the first indications of the importance of the Cooper-pairing principle. 
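The relations above are simple enough to evaluate directly. The sketch below plugs in illustrative numbers; the transition temperature, Debye energy, and coupling strength are assumptions of the example, not values taken from the article.

```python
import math

k_B = 8.617e-5  # Boltzmann constant in eV/K

# Zero-temperature gap from the universal ratio Delta(0) = 1.764 k_B Tc,
# evaluated for an illustrative transition temperature (roughly that of lead).
Tc = 7.2  # K
delta0 = 1.764 * k_B * Tc
print(f"Delta(0) ~ {delta0 * 1e3:.2f} meV for Tc = {Tc} K")

# Gap near Tc from Delta(T) ~ 3.06 k_B Tc sqrt(1 - T/Tc).
for T in (0.90 * Tc, 0.99 * Tc):
    gap = 3.06 * k_B * Tc * math.sqrt(1 - T / Tc)
    print(f"Delta(T = {T:5.2f} K) ~ {gap * 1e3:.2f} meV")

# Weak-coupling Tc from k_B Tc = 1.134 E_D exp(-1 / (N(0) V)),
# with an assumed Debye energy and dimensionless coupling N(0)V.
E_D = 0.02   # eV
N0V = 0.25
Tc_est = 1.134 * E_D * math.exp(-1.0 / N0V) / k_B
print(f"Estimated Tc ~ {Tc_est:.1f} K for E_D = {E_D} eV, N(0)V = {N0V}")

# Isotope effect: Tc scales as M**(-1/2), so replacing 202Hg by 198Hg
# should raise Tc by about one percent.
print("Tc(198Hg) / Tc(202Hg) ~", round((202 / 198) ** 0.5, 4))
```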
== See also == Magnesium diboride, considered a BCS superconductor Quasiparticle Little–Parks effect, one of the first indications of the importance of the Cooper pairing principle. == References == === Primary sources === Cooper, Leon N. (1956). "Bound Electron Pairs in a Degenerate Fermi Gas". Physical Review. 104 (4): 1189–1190. Bibcode:1956PhRv..104.1189C. doi:10.1103/PhysRev.104.1189. Bardeen, J.; Cooper, L. N.; Schrieffer, J. R. (1957). "Microscopic Theory of Superconductivity". Physical Review. 106 (1): 162–164. Bibcode:1957PhRv..106..162B. doi:10.1103/PhysRev.106.162. Bardeen, J.; Cooper, L. N.; Schrieffer, J. R. (1957). "Theory of Superconductivity". Physical Review. 108 (5): 1175–1204. Bibcode:1957PhRv..108.1175B. doi:10.1103/PhysRev.108.1175. == Further reading == John Robert Schrieffer, Theory of Superconductivity, (1964), ISBN 0-7382-0120-0 Michael Tinkham, Introduction to Superconductivity, ISBN 0-486-43503-2 Pierre-Gilles de Gennes, Superconductivity of Metals and Alloys, ISBN 0-7382-0101-4. Cooper, Leon N; Feldman, Dmitri, eds. (2010). BCS: 50 Years (book). World Scientific. ISBN 978-981-4304-64-1. Schmidt, Vadim Vasil'evich. The physics of superconductors: Introduction to fundamentals and applications. Springer Science & Business Media, 2013. == External links == Hyperphysics page on BCS Dance analogy Archived 2011-06-29 at the Wayback Machine of BCS theory as explained by Bob Schrieffer (audio recording) Mean-Field Theory: Hartree-Fock and BCS in E. Pavarini, E. Koch, J. van den Brink, and G. Sawatzky: Quantum materials: Experiments and Theory, Jülich 2016, ISBN 978-3-95806-159-0
Wikipedia/BCS_theory
Causality is the relationship between causes and effects. While causality is also a topic studied from the perspectives of philosophy and physics, it is operationalized so that causes of an event must be in the past light cone of the event and ultimately reducible to fundamental interactions. Similarly, a cause cannot have an effect outside its future light cone. == Macroscopic vs microscopic causality == Causality can be defined macroscopically, at the level of human observers, or microscopically, for fundamental events at the atomic level. The strong causality principle forbids information transfer faster than the speed of light; the weak causality principle operates at the microscopic level and need not lead to information transfer. Physical models can obey the weak principle without obeying the strong version. == Macroscopic causality == In classical physics, an effect cannot occur before its cause which is why solutions such as the advanced time solutions of the Liénard–Wiechert potential are discarded as physically meaningless. In both Einstein's theory of special and general relativity, causality means that an effect cannot occur from a cause that is not in the back (past) light cone of that event. Similarly, a cause cannot have an effect outside its front (future) light cone. These restrictions are consistent with the constraint that mass and energy that act as causal influences cannot travel faster than the speed of light and/or backwards in time. In quantum field theory, observables of events with a spacelike relationship, "elsewhere", have to commute, so the order of observations or measurements of such observables do not impact each other. Another requirement of causality is that cause and effect be mediated across space and time (requirement of contiguity). This requirement has been very influential in the past, in the first place as a result of direct observation of causal processes (like pushing a cart), in the second place as a problematic aspect of Newton's theory of gravitation (attraction of the earth by the sun by means of action at a distance) replacing mechanistic proposals like Descartes' vortex theory; in the third place as an incentive to develop dynamic field theories (e.g., Maxwell's electrodynamics and Einstein's general theory of relativity) restoring contiguity in the transmission of influences in a more successful way than in Descartes' theory. == Simultaneity == In modern physics, the notion of causality had to be clarified. The word simultaneous is observer-dependent in special relativity. The principle is relativity of simultaneity. Consequently, the relativistic principle of causality says that the cause must precede its effect according to all inertial observers. This is equivalent to the statement that the cause and its effect are separated by a timelike interval, and the effect belongs to the future of its cause. If a timelike interval separates the two events, this means that a signal could be sent between them at less than the speed of light. On the other hand, if signals could move faster than the speed of light, this would violate causality because it would allow a signal to be sent across spacelike intervals, which means that at least to some inertial observers the signal would travel backward in time. For this reason, special relativity does not allow communication faster than the speed of light. 
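The light-cone condition described above can be made concrete with a small calculation that classifies the interval between two events; the event coordinates below are arbitrary illustrative values.

```python
# Classify the separation between two events (t, x, y, z), with t in seconds
# and spatial coordinates in metres.  A causal connection is possible only if
# the interval is timelike or lightlike, i.e. (c*dt)^2 >= |dx|^2.
c = 299_792_458.0  # speed of light in m/s

def separation(event_a, event_b):
    dt = event_b[0] - event_a[0]
    dx2 = sum((b - a) ** 2 for a, b in zip(event_a[1:], event_b[1:]))
    s2 = (c * dt) ** 2 - dx2
    if s2 > 0:
        return "timelike (causal connection possible)"
    if s2 == 0:
        return "lightlike"
    return "spacelike (no causal connection in any frame)"

print(separation((0.0, 0, 0, 0), (1.0, 1e8, 0, 0)))  # timelike
print(separation((0.0, 0, 0, 0), (1.0, 5e8, 0, 0)))  # spacelike
```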
In the theory of general relativity, the concept of causality is generalized in the most straightforward way: the effect must belong to the future light cone of its cause, even if the spacetime is curved. New subtleties must be taken into account when we investigate causality in quantum mechanics and relativistic quantum field theory in particular. In those two theories, causality is closely related to the principle of locality. Bell's Theorem shows that conditions of "local causality" in experiments involving quantum entanglement result in non-classical correlations predicted by quantum mechanics. Despite these subtleties, causality remains an important and valid concept in physical theories. For example, the notion that events can be ordered into causes and effects is necessary to prevent (or at least outline) causality paradoxes such as the grandfather paradox, which asks what happens if a time-traveler kills his own grandfather before he ever meets the time-traveler's grandmother. See also Chronology protection conjecture. == Determinism (or, what causality is not) == The word causality in this context means that all effects must have specific physical causes due to fundamental interactions. Causality in this context is not associated with definitional principles such as Newton's second law. As such, in the context of causality, a force does not cause a mass to accelerate nor vice versa. Rather, Newton's second law can be derived from the conservation of momentum, which itself is a consequence of the spatial homogeneity of physical laws. The empiricists' aversion to metaphysical explanations (like Descartes' vortex theory) meant that scholastic arguments about what caused phenomena were either rejected for being untestable or were just ignored. The complaint that physics does not explain the cause of phenomena has accordingly been dismissed as a problem that is philosophical or metaphysical rather than empirical (e.g., Newton's "Hypotheses non fingo"). According to Ernst Mach the notion of force in Newton's second law was pleonastic, tautological and superfluous and, as indicated above, is not considered a consequence of any principle of causality. Indeed, it is possible to consider the Newtonian equations of motion of the gravitational interaction of two bodies, $m_{1}\frac{d^{2}\mathbf{r}_{1}}{dt^{2}} = -\frac{m_{1}m_{2}G(\mathbf{r}_{1}-\mathbf{r}_{2})}{|\mathbf{r}_{1}-\mathbf{r}_{2}|^{3}}; \quad m_{2}\frac{d^{2}\mathbf{r}_{2}}{dt^{2}} = -\frac{m_{1}m_{2}G(\mathbf{r}_{2}-\mathbf{r}_{1})}{|\mathbf{r}_{2}-\mathbf{r}_{1}|^{3}},$ as two coupled equations describing the positions $\mathbf{r}_{1}(t)$ and $\mathbf{r}_{2}(t)$ of the two bodies, without interpreting the right hand sides of these equations as forces; the equations just describe a process of interaction, without any necessity to interpret one body as the cause of the motion of the other, and allow one to predict the states of the system at later (as well as earlier) times. 
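A minimal numerical sketch of the point just made: the same coupled equations can be stepped forward or backward in time, with neither body singled out as the cause of the other's motion. Masses, units, and initial conditions are illustrative assumptions (G is set to 1).

```python
import numpy as np

G, m1, m2 = 1.0, 1.0, 1.0  # gravitational constant and masses (arbitrary units)

def acceleration(r1, r2):
    """Accelerations of both bodies from the coupled equations of motion."""
    d = r1 - r2
    inv_d3 = 1.0 / np.linalg.norm(d) ** 3
    return -G * m2 * d * inv_d3, G * m1 * d * inv_d3

def step(state, dt):
    """One semi-implicit Euler step; dt may be negative (stepping into the past)."""
    r1, v1, r2, v2 = state
    a1, a2 = acceleration(r1, r2)
    v1, v2 = v1 + a1 * dt, v2 + a2 * dt
    return r1 + v1 * dt, v1, r2 + v2 * dt, v2

state = (np.array([1.0, 0.0]), np.array([0.0, 0.35]),
         np.array([-1.0, 0.0]), np.array([0.0, -0.35]))
for _ in range(1000):
    state = step(state, dt=1e-3)    # evolve forward in time
for _ in range(1000):
    state = step(state, dt=-1e-3)   # evolve backward: approximately recovers the start
print("r1 after forward and backward integration:", np.round(state[0], 3))
```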
The ordinary situations in which humans singled out some factors in a physical interaction as being prior and therefore supplying the "because" of the interaction were often ones in which humans decided to bring about some state of affairs and directed their energies to producing that state of affairs—a process that took time to establish and left a new state of affairs that persisted beyond the time of activity of the actor. It would be difficult and pointless, however, to explain the motions of binary stars with respect to each other in that way which, indeed, are time-reversible and agnostic to the arrow of time, but with such a direction of time established, the entire evolution system could then be completely determined. The possibility of such a time-independent view is at the basis of the deductive-nomological (D-N) view of scientific explanation, considering an event to be explained if it can be subsumed under a scientific law. In the D-N view, a physical state is considered to be explained if, applying the (deterministic) law, it can be derived from given initial conditions. (Such initial conditions could include the momenta and distance from each other of binary stars at any given moment.) Such 'explanation by determinism' is sometimes referred to as causal determinism. A disadvantage of the D-N view is that causality and determinism are more or less identified. Thus, in classical physics, it was assumed that all events are caused by earlier ones according to the known laws of nature, culminating in Pierre-Simon Laplace's claim that if the current state of the world were known with precision, it could be computed for any time in the future or the past (see Laplace's demon). However, this is usually referred to as Laplace determinism (rather than 'Laplace causality') because it hinges on determinism in mathematical models as dealt with in the mathematical Cauchy problem. Confusion between causality and determinism is particularly acute in quantum mechanics, this theory being acausal in the sense that it is unable in many cases to identify the causes of actually observed effects or to predict the effects of identical causes, but arguably deterministic in some interpretations (e.g. if the wave function is presumed not to actually collapse as in the many-worlds interpretation, or if its collapse is due to hidden variables, or simply redefining determinism as meaning that probabilities rather than specific effects are determined). == Distributed causality == Theories in physics like the butterfly effect from chaos theory open up the possibility of a type of distributed parameter systems in causality. The butterfly effect theory proposes: "Small variations of the initial condition of a nonlinear dynamical system may produce large variations in the long term behavior of the system." This opens up the opportunity to understand a distributed causality. A related way to interpret the butterfly effect is to see it as highlighting the difference between the application of the notion of causality in physics and a more general use of causality as represented by Mackie's INUS conditions. In classical (Newtonian) physics, in general, only those conditions are (explicitly) taken into account, that are both necessary and sufficient. 
For instance, when a massive sphere is caused to roll down a slope starting from a point of unstable equilibrium, then its velocity is assumed to be caused by the force of gravity accelerating it; the small push that was needed to set it into motion is not explicitly dealt with as a cause. In order to be a physical cause there must be a certain proportionality with the ensuing effect. A distinction is drawn between triggering and causation of the ball's motion. By the same token the butterfly can be seen as triggering a tornado, its cause being assumed to be seated in the atmospherical energies already present beforehand, rather than in the movements of a butterfly. == Causal sets == In causal set theory, causality takes an even more prominent place. The basis for this approach to quantum gravity is in a theorem by David Malament. This theorem states that the causal structure of a spacetime suffices to reconstruct its conformal class, so knowing the conformal factor and the causal structure is enough to know the spacetime. Based on this, Rafael Sorkin proposed the idea of Causal Set Theory, which is a fundamentally discrete approach to quantum gravity. The causal structure of the spacetime is represented as a poset, while the conformal factor can be reconstructed by identifying each poset element with a unit volume. == See also == == References == == Further reading == Bohm, David. (2005). Causality and Chance in Modern Physics. London: Taylor and Francis. Espinoza, Miguel (2006). Théorie du déterminisme causal. Paris: L'Harmattan. ISBN 2-296-01198-5. == External links == Causal Processes, Stanford Encyclopedia of Philosophy Caltech Tutorial on Relativity — A nice discussion of how observers moving relatively to each other see different slices of time. Faster-than-c signals, special relativity, and causality. This article explains that faster than light signals do not necessarily lead to a violation of causality.
Wikipedia/Causality_(physics)
The law of conservation of energy states that the total energy of an isolated system remains constant; it is said to be conserved over time. In the case of a closed system, the principle says that the total amount of energy within the system can only be changed through energy entering or leaving the system. Energy can neither be created nor destroyed; rather, it can only be transformed or transferred from one form to another. For instance, chemical energy is converted to kinetic energy when a stick of dynamite explodes. If one adds up all forms of energy that were released in the explosion, such as the kinetic energy and potential energy of the pieces, as well as heat and sound, one will get the exact decrease of chemical energy in the combustion of the dynamite. Classically, the conservation of energy was distinct from the conservation of mass. However, special relativity shows that mass is related to energy and vice versa by $E = mc^{2}$, the equation representing mass–energy equivalence, and science now takes the view that mass-energy as a whole is conserved. This implies that mass can be converted to energy, and vice versa. This is observed in the nuclear binding energy of atomic nuclei, where a mass defect is measured. It is believed that mass-energy equivalence becomes important in extreme physical conditions, such as those that likely existed in the universe very shortly after the Big Bang or when black holes emit Hawking radiation. Given the stationary-action principle, the conservation of energy can be rigorously proven by Noether's theorem as a consequence of continuous time translation symmetry; that is, from the fact that the laws of physics do not change over time. A consequence of the law of conservation of energy is that a perpetual motion machine of the first kind cannot exist; that is to say, no system without an external energy supply can deliver an unlimited amount of energy to its surroundings. Depending on the definition of energy, the conservation of energy can arguably be violated by general relativity on the cosmological scale. In quantum mechanics, Noether's theorem is known to apply to the expected value, making any consistent conservation violation provably impossible, but whether individual conservation-violating events could ever exist or be observed is subject to some debate. == History == Ancient philosophers as far back as Thales of Miletus c. 550 BCE had inklings of the conservation of some underlying substance of which everything is made. However, there is no particular reason to identify their theories with what we know today as "mass-energy" (for example, Thales thought it was water). Empedocles (490–430 BCE) wrote that in his universal system, composed of four roots (earth, air, water, fire), "nothing comes to be or perishes"; instead, these elements suffer continual rearrangement. Epicurus (c. 350 BCE) on the other hand believed everything in the universe to be composed of indivisible units of matter—the ancient precursor to 'atoms'—and he too had some idea of the necessity of conservation, stating that "the sum total of things was always such as it is now, and such it will ever remain." In 1605, the Flemish scientist Simon Stevin was able to solve a number of problems in statics based on the principle that perpetual motion was impossible. 
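The mass defect mentioned above can be illustrated with a short calculation for helium-4; the atomic masses used below are approximate reference values assumed for this example, not figures from the article.

```python
# Mass defect and binding energy of helium-4 (approximate input values).
u_to_MeV = 931.494   # energy equivalent of one unified atomic mass unit
m_H1 = 1.007825      # hydrogen-1 atom, in u
m_n = 1.008665       # neutron, in u
m_He4 = 4.002602     # helium-4 atom, in u

mass_defect = 2 * m_H1 + 2 * m_n - m_He4       # in u
binding_energy = mass_defect * u_to_MeV        # in MeV
print(f"mass defect    ~ {mass_defect:.6f} u")
print(f"binding energy ~ {binding_energy:.1f} MeV "
      f"(about {binding_energy / 4:.1f} MeV per nucleon)")
```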
In 1639, Galileo published his analysis of several situations—including the celebrated "interrupted pendulum"—which can be described (in modern language) as conservatively converting potential energy to kinetic energy and back again. Essentially, he pointed out that the height a moving body rises is equal to the height from which it falls, and used this observation to infer the idea of inertia. The remarkable aspect of this observation is that the height to which a moving body ascends on a frictionless surface does not depend on the shape of the surface. In 1669, Christiaan Huygens published a brief account on his laws of collision. Among the quantities he listed as being invariant before and after the collision of bodies were both the sum of their linear momenta as well as the sum of their kinetic energies. However, the difference between elastic and inelastic collision was not understood at the time. This led to the dispute among later researchers as to which of these conserved quantities was the more fundamental. In his Horologium Oscillatorium, Huygens gave a much clearer statement regarding the height of ascent of a moving body, and connected this idea with the impossibility of perpetual motion. His study of the dynamics of pendulum motion was based on a single principle, known as Torricelli's Principle: that the center of gravity of a heavy object, or collection of objects, cannot lift itself. Using this principle, Huygens was able to derive the formula for the center of oscillation by an "energy" method, without dealing with forces or torques. Between 1676 and 1689, Gottfried Leibniz first attempted a mathematical formulation of the kind of energy that is associated with motion (kinetic energy). Using Huygens's work on collision, Leibniz noticed that in many mechanical systems (of several masses $m_i$, each with velocity $v_i$), $\sum_{i} m_{i}v_{i}^{2}$ was conserved so long as the masses did not interact. He called this quantity the vis viva or living force of the system. The principle represents an accurate statement of the approximate conservation of kinetic energy in situations where there is no friction. Many physicists at that time, including Isaac Newton, held that the conservation of momentum, which holds even in systems with friction, as defined by the momentum $\sum_{i} m_{i}v_{i}$, was the conserved vis viva. It was later shown that both quantities are conserved simultaneously given the proper conditions, such as in an elastic collision. In 1687, Isaac Newton published his Principia, which set out his laws of motion. It was organized around the concept of force and momentum. However, the researchers were quick to recognize that the principles set out in the book, while fine for point masses, were not sufficient to tackle the motions of rigid and fluid bodies. Some other principles were also required. By the 1690s, Leibniz was arguing that conservation of vis viva and conservation of momentum undermined the then-popular philosophical doctrine of interactionist dualism. (During the 19th century, when conservation of energy was better understood, Leibniz's basic argument would gain widespread acceptance. Some modern scholars continue to champion specifically conservation-based attacks on dualism, while others subsume the argument into a more general argument about causal closure.) The law of conservation of vis viva was championed by the father and son duo, Johann and Daniel Bernoulli. 
The former enunciated the principle of virtual work as used in statics in its full generality in 1715, while the latter based his Hydrodynamica, published in 1738, on this single vis viva conservation principle. Daniel's study of loss of vis viva of flowing water led him to formulate the Bernoulli's principle, which asserts the loss to be proportional to the change in hydrodynamic pressure. Daniel also formulated the notion of work and efficiency for hydraulic machines; and he gave a kinetic theory of gases, and linked the kinetic energy of gas molecules with the temperature of the gas. This focus on the vis viva by the continental physicists eventually led to the discovery of stationarity principles governing mechanics, such as the D'Alembert's principle, Lagrangian, and Hamiltonian formulations of mechanics. Émilie du Châtelet (1706–1749) proposed and tested the hypothesis of the conservation of total energy, as distinct from momentum. Inspired by the theories of Gottfried Leibniz, she repeated and publicized an experiment originally devised by Willem 's Gravesande in 1722 in which balls were dropped from different heights into a sheet of soft clay. Each ball's kinetic energy—as indicated by the quantity of material displaced—was shown to be proportional to the square of the velocity. The deformation of the clay was found to be directly proportional to the height from which the balls were dropped, equal to the initial potential energy. Some earlier workers, including Newton and Voltaire, had believed that "energy" was not distinct from momentum and therefore proportional to velocity. According to this understanding, the deformation of the clay should have been proportional to the square root of the height from which the balls were dropped. In classical physics, the correct formula is E k = 1 2 m v 2 {\displaystyle E_{k}={\frac {1}{2}}mv^{2}} , where E k {\displaystyle E_{k}} is the kinetic energy of an object, m {\displaystyle m} its mass and v {\displaystyle v} its speed. On this basis, du Châtelet proposed that energy must always have the same dimensions in any form, which is necessary to be able to consider it in different forms (kinetic, potential, heat, ...). Engineers such as John Smeaton, Peter Ewart, Carl Holtzmann, Gustave-Adolphe Hirn, and Marc Seguin recognized that conservation of momentum alone was not adequate for practical calculation and made use of Leibniz's principle. The principle was also championed by some chemists such as William Hyde Wollaston. Academics such as John Playfair were quick to point out that kinetic energy is clearly not conserved. This is obvious to a modern analysis based on the second law of thermodynamics, but in the 18th and 19th centuries, the fate of the lost energy was still unknown. Gradually it came to be suspected that the heat inevitably generated by motion under friction was another form of vis viva. In 1783, Antoine Lavoisier and Pierre-Simon Laplace reviewed the two competing theories of vis viva and caloric theory. Count Rumford's 1798 observations of heat generation during the boring of cannons added more weight to the view that mechanical motion could be converted into heat and (that it was important) that the conversion was quantitative and could be predicted (allowing for a universal conversion constant between kinetic energy and heat). Vis viva then started to be known as energy, after the term was first used in that sense by Thomas Young in 1807. 
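The reasoning behind the 's Gravesande and du Châtelet experiment described above can be sketched numerically. Using only standard free-fall kinematics, the impact speed is v = √(2gh), so a quantity proportional to v² grows linearly with the drop height while one proportional to v grows only as its square root; clay indentation proportional to the height therefore singles out the v² quantity. The mass and heights below are arbitrary.

```python
import math

g = 9.81     # m/s^2, standard gravity
m = 0.5      # kg, an arbitrary ball mass

for h in (0.5, 1.0, 2.0, 4.0):             # drop heights in metres
    v = math.sqrt(2 * g * h)               # impact speed from free fall
    kinetic_energy = 0.5 * m * v**2        # equals m*g*h, proportional to h
    momentum = m * v                       # proportional to sqrt(h)
    print(f"h={h:4.1f} m   v={v:5.2f} m/s   "
          f"E_k={kinetic_energy:6.2f} J   p={momentum:5.2f} kg*m/s")
```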
The recalibration of vis viva to 1 2 ∑ i m i v i 2 {\displaystyle {\frac {1}{2}}\sum _{i}m_{i}v_{i}^{2}} which can be understood as converting kinetic energy to work, was largely the result of Gaspard-Gustave Coriolis and Jean-Victor Poncelet over the period 1819–1839. The former called the quantity quantité de travail (quantity of work) and the latter, travail mécanique (mechanical work), and both championed its use in engineering calculations. In the paper Über die Natur der Wärme (German "On the Nature of Heat/Warmth"), published in the Zeitschrift für Physik in 1837, Karl Friedrich Mohr gave one of the earliest general statements of the doctrine of the conservation of energy: "besides the 54 known chemical elements there is in the physical world one agent only, and this is called Kraft [energy or work]. It may appear, according to circumstances, as motion, chemical affinity, cohesion, electricity, light and magnetism; and from any one of these forms it can be transformed into any of the others." === Mechanical equivalent of heat === A key stage in the development of the modern conservation principle was the demonstration of the mechanical equivalent of heat. The caloric theory maintained that heat could neither be created nor destroyed, whereas conservation of energy entails the contrary principle that heat and mechanical work are interchangeable. In the middle of the eighteenth century, Mikhail Lomonosov, a Russian scientist, postulated his corpusculo-kinetic theory of heat, which rejected the idea of a caloric. Through the results of empirical studies, Lomonosov came to the conclusion that heat was not transferred through the particles of the caloric fluid. In 1798, Count Rumford (Benjamin Thompson) performed measurements of the frictional heat generated in boring cannons and developed the idea that heat is a form of kinetic energy; his measurements refuted caloric theory, but were imprecise enough to leave room for doubt. The mechanical equivalence principle was first stated in its modern form by the German surgeon Julius Robert von Mayer in 1842. Mayer reached his conclusion on a voyage to the Dutch East Indies, where he found that his patients' blood was a deeper red because they were consuming less oxygen, and therefore less energy, to maintain their body temperature in the hotter climate. He discovered that heat and mechanical work were both forms of energy, and in 1845, after improving his knowledge of physics, he published a monograph that stated a quantitative relationship between them. Meanwhile, in 1843, James Prescott Joule independently discovered the mechanical equivalent in a series of experiments. In one of them, now called the "Joule apparatus", a descending weight attached to a string caused a paddle immersed in water to rotate. He showed that the gravitational potential energy lost by the weight in descending was equal to the internal energy gained by the water through friction with the paddle. Over the period 1840–1843, similar work was carried out by engineer Ludwig A. Colding, although it was little known outside his native Denmark. Both Joule's and Mayer's work suffered from resistance and neglect but it was Joule's that eventually drew the wider recognition. In 1844, the Welsh scientist William Robert Grove postulated a relationship between mechanics, heat, light, electricity, and magnetism by treating them all as manifestations of a single "force" (energy in modern terms). In 1846, Grove published his theories in his book The Correlation of Physical Forces. 
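The paddle-wheel measurement described above admits a rough numerical sketch. The masses and height below are illustrative choices rather than Joule's own laboratory figures; the point is that the potential energy surrendered by the weight, divided by the heat capacity of the water, predicts only a very small temperature rise, which is why the experiment demanded careful thermometry.

```python
g = 9.81              # m/s^2
falling_mass = 10.0   # kg, the descending weight (illustrative)
drop_height = 2.0     # m, total distance descended (illustrative)
water_mass = 1.0      # kg of water in the calorimeter
c_water = 4186.0      # J/(kg*K), specific heat capacity of water

energy_in = falling_mass * g * drop_height      # potential energy released, joules
delta_T = energy_in / (water_mass * c_water)    # temperature rise if nothing is lost

print(f"energy delivered: {energy_in:.1f} J")           # 196.2 J
print(f"temperature rise: {delta_T * 1000:.1f} mK")     # about 47 mK
```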
In 1847, drawing on the earlier work of Joule, Sadi Carnot, and Émile Clapeyron, Hermann von Helmholtz arrived at conclusions similar to Grove's and published his theories in his book Über die Erhaltung der Kraft (On the Conservation of Force, 1847). The general modern acceptance of the principle stems from this publication. In 1850, the Scottish mathematician William Rankine first used the phrase the law of the conservation of energy for the principle. In 1877, Peter Guthrie Tait claimed that the principle originated with Sir Isaac Newton, based on a creative reading of propositions 40 and 41 of the Philosophiae Naturalis Principia Mathematica. This is now regarded as an example of Whig history. === Mass–energy equivalence === Matter is composed of atoms and what makes up atoms. Matter has intrinsic or rest mass. In the limited range of recognized experience of the nineteenth century, it was found that such rest mass is conserved. Einstein's 1905 theory of special relativity showed that rest mass corresponds to an equivalent amount of rest energy. This means that rest mass can be converted to or from equivalent amounts of (non-material) forms of energy, for example, kinetic energy, potential energy, and electromagnetic radiant energy. When this happens, as recognized in twentieth-century experience, rest mass is not conserved, unlike the total mass or total energy. All forms of energy contribute to the total mass and total energy. For example, an electron and a positron each have rest mass. They can perish together, converting their combined rest energy into photons which have electromagnetic radiant energy but no rest mass. If this occurs within an isolated system that does not release the photons or their energy into the external surroundings, then neither the total mass nor the total energy of the system will change. The produced electromagnetic radiant energy contributes just as much to the inertia (and to any weight) of the system as did the rest mass of the electron and positron before their demise. Likewise, non-material forms of energy can perish into matter, which has rest mass. Thus, conservation of energy (total, including material or rest energy) and conservation of mass (total, not just rest) are one (equivalent) law. In the 19th century, these had appeared as two seemingly-distinct laws. === Conservation of energy in beta decay === The discovery in 1911 that electrons emitted in beta decay have a continuous rather than a discrete spectrum appeared to contradict conservation of energy, under the then-current assumption that beta decay is the simple emission of an electron from a nucleus. This problem was eventually resolved in 1933 by Enrico Fermi who proposed the correct description of beta decay as the emission of both an electron and an antineutrino, which carries away the apparently missing energy. == First law of thermodynamics == For a closed thermodynamic system, the first law of thermodynamics may be stated as: δ Q = d U + δ W {\displaystyle \delta Q=\mathrm {d} U+\delta W} , or equivalently, d U = δ Q − δ W , {\displaystyle \mathrm {d} U=\delta Q-\delta W,} where δ Q {\displaystyle \delta Q} is the quantity of energy added to the system by a heating process, δ W {\displaystyle \delta W} is the quantity of energy lost by the system due to work done by the system on its surroundings, and d U {\displaystyle \mathrm {d} U} is the change in the internal energy of the system. 
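A one-line numerical check of the sign convention in the statement just given, with arbitrary figures: heat added to the system counts as positive δQ, work done by the system on its surroundings counts as positive δW, and their difference is the change in internal energy.

```python
heat_added = 500.0       # J, delta-Q > 0: energy enters the system by heating
work_by_system = 200.0   # J, delta-W > 0: the system does work on its surroundings

delta_U = heat_added - work_by_system
print(delta_U)           # 300.0 J increase in internal energy

# Describing the same process from the surroundings' side reverses both signs,
# so the energy of system plus surroundings is unchanged.
```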
The δ's before the heat and work terms are used to indicate that they describe an increment of energy which is to be interpreted somewhat differently than the d U {\displaystyle \mathrm {d} U} increment of internal energy (see Inexact differential). Work and heat refer to kinds of process which add or subtract energy to or from a system, while the internal energy U {\displaystyle U} is a property of a particular state of the system when it is in unchanging thermodynamic equilibrium. Thus the term "heat energy" for δ Q {\displaystyle \delta Q} means "that amount of energy added as a result of heating" rather than referring to a particular form of energy. Likewise, the term "work energy" for δ W {\displaystyle \delta W} means "that amount of energy lost as a result of work". Thus one can state the amount of internal energy possessed by a thermodynamic system that one knows is presently in a given state, but one cannot tell, just from knowledge of the given present state, how much energy has in the past flowed into or out of the system as a result of its being heated or cooled, nor as a result of work being performed on or by the system. Entropy is a function of the state of a system which tells of limitations of the possibility of conversion of heat into work. For a simple compressible system, the work performed by the system may be written: δ W = P d V , {\displaystyle \delta W=P\,\mathrm {d} V,} where P {\displaystyle P} is the pressure and d V {\displaystyle dV} is a small change in the volume of the system, each of which are system variables. In the fictive case in which the process is idealized and infinitely slow, so as to be called quasi-static, and regarded as reversible, the heat being transferred from a source with temperature infinitesimally above the system temperature, the heat energy may be written δ Q = T d S , {\displaystyle \delta Q=T\,\mathrm {d} S,} where T {\displaystyle T} is the temperature and d S {\displaystyle \mathrm {d} S} is a small change in the entropy of the system. Temperature and entropy are variables of the state of a system. If an open system (in which mass may be exchanged with the environment) has several walls such that the mass transfer is through rigid walls separate from the heat and work transfers, then the first law may be written as d U = δ Q − δ W + ∑ i h i d M i , {\displaystyle \mathrm {d} U=\delta Q-\delta W+\sum _{i}h_{i}\,dM_{i},} where d M i {\displaystyle dM_{i}} is the added mass of species i {\displaystyle i} and h i {\displaystyle h_{i}} is the corresponding enthalpy per unit mass. Note that generally d S ≠ δ Q / T {\displaystyle dS\neq \delta Q/T} in this case, as matter carries its own entropy. Instead, d S = δ Q / T + ∑ i s i d M i {\displaystyle dS=\delta Q/T+\textstyle {\sum _{i}}s_{i}\,dM_{i}} , where s i {\displaystyle s_{i}} is the entropy per unit mass of type i {\displaystyle i} , from which we recover the fundamental thermodynamic relation d U = T d S − P d V + ∑ i μ i d N i {\displaystyle \mathrm {d} U=T\,dS-P\,dV+\sum _{i}\mu _{i}\,dN_{i}} because the chemical potential μ i {\displaystyle \mu _{i}} is the partial molar Gibbs free energy of species i {\displaystyle i} and the Gibbs free energy G ≡ H − T S {\displaystyle G\equiv H-TS} . == Noether's theorem == The conservation of energy is a common feature in many physical theories. From a mathematical point of view it is understood as a consequence of Noether's theorem, developed by Emmy Noether in 1915 and first published in 1918. 
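The connection between time-translation symmetry and energy conservation can be illustrated (though of course not proved) with a small numerical experiment: a unit-mass oscillator with a fixed spring constant conserves its energy to within the accuracy of the integrator, while the same oscillator with an explicitly time-dependent spring constant does not, unless whatever drives the spring is included in the system. The integrator and parameters below are arbitrary choices made for the sketch.

```python
import math

def relative_energy_spread(spring_constant, t_end=20.0, dt=1e-3):
    """Integrate x'' = -k(t) x with velocity Verlet and return the relative
    spread of E(t) = v^2/2 + k(t) x^2/2 along the trajectory."""
    x, v, t = 1.0, 0.0, 0.0
    a = -spring_constant(t) * x
    energies = []
    while t < t_end:
        x += v * dt + 0.5 * a * dt * dt
        a_new = -spring_constant(t + dt) * x
        v += 0.5 * (a + a_new) * dt
        a, t = a_new, t + dt
        energies.append(0.5 * v * v + 0.5 * spring_constant(t) * x * x)
    return (max(energies) - min(energies)) / max(energies)

static = relative_energy_spread(lambda t: 1.0)                      # time-independent law
driven = relative_energy_spread(lambda t: 1.0 + 0.5 * math.sin(t))  # explicitly time-dependent

print(f"energy spread, static potential: {static:.1e}")   # tiny, set by integrator error
print(f"energy spread, driven potential: {driven:.1e}")   # order unity: not conserved
```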
In any physical theory that obeys the stationary-action principle, the theorem states that every continuous symmetry has an associated conserved quantity; if the theory's symmetry is time invariance, then the conserved quantity is called "energy". The energy conservation law is a consequence of the shift symmetry of time; energy conservation is implied by the empirical fact that the laws of physics do not change with time itself. Philosophically this can be stated as "nothing depends on time per se". In other words, if the physical system is invariant under the continuous symmetry of time translation, then its energy (which is the canonical conjugate quantity to time) is conserved. Conversely, systems that are not invariant under shifts in time (e.g. systems with time-dependent potential energy) do not exhibit conservation of energy – unless we consider them to exchange energy with another, external system so that the theory of the enlarged system becomes time-invariant again. Conservation of energy for finite systems is valid in physical theories such as special relativity and quantum theory (including QED) in the flat space-time. == Special relativity == With the discovery of special relativity by Henri Poincaré and Albert Einstein, the energy was proposed to be a component of an energy-momentum 4-vector. Each of the four components (one of energy and three of momentum) of this vector is separately conserved across time, in any closed system, as seen from any given inertial reference frame. Also conserved is the vector length (Minkowski norm), which is the rest mass for single particles, and the invariant mass for systems of particles (where momenta and energy are separately summed before the length is calculated). The relativistic energy of a single massive particle contains a term related to its rest mass in addition to its kinetic energy of motion. In the limit of zero kinetic energy (or equivalently in the rest frame) of a massive particle, or else in the center of momentum frame for objects or systems which retain kinetic energy, the total energy of a particle or object (including internal kinetic energy in systems) is proportional to the rest mass or invariant mass, as described by the equation E = m c 2 {\displaystyle E=mc^{2}} . Thus, the rule of conservation of energy over time in special relativity continues to hold, so long as the reference frame of the observer is unchanged. This applies to the total energy of systems, although different observers disagree as to the energy value. Also conserved, and invariant to all observers, is the invariant mass, which is the minimal system mass and energy that can be seen by any observer, and which is defined by the energy–momentum relation. == General relativity == General relativity introduces new phenomena. In an expanding universe, photons spontaneously redshift and tethers spontaneously gain tension; if vacuum energy is positive, the total vacuum energy of the universe appears to spontaneously increase as the volume of space increases. Some scholars claim that energy is no longer meaningfully conserved in any identifiable form. John Baez's view is that energy–momentum conservation is not well-defined except in certain special cases. Energy-momentum is typically expressed with the aid of a stress–energy–momentum pseudotensor. However, since pseudotensors are not tensors, they do not transform cleanly between reference frames. 
If the metric under consideration is static (that is, does not change with time) or asymptotically flat (that is, at an infinite distance away spacetime looks empty), then energy conservation holds without major pitfalls. In practice, some metrics, notably the Friedmann–Lemaître–Robertson–Walker metric that appears to govern the universe, do not satisfy these constraints and energy conservation is not well defined. Besides being dependent on the coordinate system, pseudotensor energy is dependent on the type of pseudotensor in use; for example, the energy exterior to a Kerr–Newman black hole is twice as large when calculated from Møller's pseudotensor as it is when calculated using the Einstein pseudotensor. For asymptotically flat universes, Einstein and others salvage conservation of energy by introducing a specific global gravitational potential energy that cancels out mass-energy changes triggered by spacetime expansion or contraction. This global energy has no well-defined density and cannot technically be applied to a non-asymptotically flat universe; however, for practical purposes this can be finessed, and so by this view, energy is conserved in our universe. Alan Guth stated that the universe might be "the ultimate free lunch", and theorized that, when accounting for gravitational potential energy, the net energy of the Universe is zero. == Quantum theory == In quantum mechanics, the energy of a quantum system is described by a self-adjoint (or Hermitian) operator called the Hamiltonian, which acts on the Hilbert space (or a space of wave functions) of the system. If the Hamiltonian is a time-independent operator, emergence probability of the measurement result does not change in time over the evolution of the system. Thus the expectation value of energy is also time independent. The local energy conservation in quantum field theory is ensured by the quantum Noether's theorem for the energy-momentum tensor operator. Thus energy is conserved by the normal unitary evolution of a quantum system. However, when the non-unitary Born rule is applied, the system's energy is measured with an energy that can be below or above the expectation value, if the system was not in an energy eigenstate. (For macroscopic systems, this effect is usually too small to measure.) The disposition of this energy gap is not well-understood; most physicists believe that the energy is transferred to or from the macroscopic environment in the course of the measurement process, while others believe that the observable energy is only conserved "on average". No experiment has been confirmed as definitive evidence of violations of the conservation of energy principle in quantum mechanics, but that does not rule out that some newer experiments, as proposed, may find evidence of violations of the conservation of energy principle in quantum mechanics. == Status == In the context of perpetual motion machines such as the Orbo, Professor Eric Ash has argued at the BBC: "Denying [conservation of energy] would undermine not just little bits of science - the whole edifice would be no more. All of the technology on which we built the modern world would lie in ruins". It is because of conservation of energy that "we know - without having to examine details of a particular device - that Orbo cannot work." Energy conservation has been a foundational physical principle for about two hundred years. 
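The statement in the Quantum theory section above, that unitary evolution under a time-independent Hamiltonian leaves the energy expectation value unchanged, can be checked with a two-level toy model. The particular Hermitian matrix and initial state below are arbitrary choices, and ħ is set to 1.

```python
import numpy as np

H = np.array([[1.0, 0.3 - 0.2j],
              [0.3 + 0.2j, -0.5]])     # an arbitrary Hermitian Hamiltonian

psi0 = np.array([0.8, 0.6 + 0.0j])
psi0 /= np.linalg.norm(psi0)           # normalized initial state

evals, evecs = np.linalg.eigh(H)       # diagonalize once to build exp(-iHt)

def evolve(psi, t):
    phases = np.exp(-1j * evals * t)
    return evecs @ (phases * (evecs.conj().T @ psi))

def energy_expectation(psi):
    return (psi.conj() @ H @ psi).real

for t in (0.0, 1.0, 5.0, 20.0):
    print(f"t = {t:5.1f}   <H> = {energy_expectation(evolve(psi0, t)):.12f}")
# The printed <H> is the same at every time, even though the state vector
# itself changes; only a non-unitary step such as measurement can alter it.
```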
From the point of view of modern general relativity, the lab environment can be well approximated by Minkowski spacetime, where energy is exactly conserved. The entire Earth can be well approximated by the Schwarzschild metric, where again energy is exactly conserved. Given all the experimental evidence, any new theory (such as quantum gravity), in order to be successful, will have to explain why energy has appeared to always be exactly conserved in terrestrial experiments. In some speculative theories, corrections to quantum mechanics are too small to be detected at anywhere near the current TeV level accessible through particle accelerators. Doubly special relativity models may argue for a breakdown in energy-momentum conservation for sufficiently energetic particles; such models are constrained by observations that cosmic rays appear to travel for billions of years without displaying anomalous non-conservation behavior. Some interpretations of quantum mechanics claim that observed energy tends to increase when the Born rule is applied due to localization of the wave function. If true, objects could be expected to spontaneously heat up; thus, such models are constrained by observations of large, cool astronomical objects as well as the observation of (often supercooled) laboratory experiments. Milton A. Rothman wrote that the law of conservation of energy has been verified by nuclear physics experiments to an accuracy of one part in a thousand million million (1015). He then defines its precision as "perfect for all practical purposes". == See also == == References == == Bibliography == === Modern accounts === Goldstein, Martin, and Inge F., (1993). The Refrigerator and the Universe. Harvard Univ. Press. A gentle introduction. Kroemer, Herbert; Kittel, Charles (1980). Thermal Physics (2nd ed.). W. H. Freeman Company. ISBN 978-0-7167-1088-2. Nolan, Peter J. (1996). Fundamentals of College Physics, 2nd ed. William C. Brown Publishers. Oxtoby & Nachtrieb (1996). Principles of Modern Chemistry, 3rd ed. Saunders College Publishing. Papineau, D. (2002). Thinking about Consciousness. Oxford: Oxford University Press. Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th ed.). Brooks/Cole. ISBN 978-0-534-40842-8. Stenger, Victor J. (2000). Timeless Reality. Prometheus Books. Especially chpt. 12. Nontechnical. Tipler, Paul (2004). Physics for Scientists and Engineers: Mechanics, Oscillations and Waves, Thermodynamics (5th ed.). W. H. Freeman. ISBN 978-0-7167-0809-4. Lanczos, Cornelius (1970). The Variational Principles of Mechanics. Toronto: University of Toronto Press. ISBN 978-0-8020-1743-7. === History of ideas === Brown, T.M. (1965). "Resource letter EEC-1 on the evolution of energy concepts from Galileo to Helmholtz". American Journal of Physics. 33 (10): 759–765. Bibcode:1965AmJPh..33..759B. doi:10.1119/1.1970980. Cardwell, D.S.L. (1971). From Watt to Clausius: The Rise of Thermodynamics in the Early Industrial Age. London: Heinemann. ISBN 978-0-435-54150-7. Guillen, M. (1999). Five Equations That Changed the World. New York: Abacus. ISBN 978-0-349-11064-6. Hiebert, E.N. (1981). Historical Roots of the Principle of Conservation of Energy. Madison, Wis.: Ayer Co Pub. ISBN 978-0-405-13880-5. Kuhn, T.S. (1957) "Energy conservation as an example of simultaneous discovery", in M. Clagett (ed.) Critical Problems in the History of Science pp.321–56 Sarton, G.; Joule, J. P.; Carnot, Sadi (1929). "The discovery of the law of conservation of energy". Isis. 13: 18–49. 
doi:10.1086/346430. S2CID 145585492. Smith, C. (1998). The Science of Energy: Cultural History of Energy Physics in Victorian Britain. London: Heinemann. ISBN 978-0-485-11431-7. Mach, E. (1872). History and Root of the Principles of the Conservation of Energy. Open Court Pub. Co., Illinois. Poincaré, H. (1905). Science and Hypothesis. Walter Scott Publishing Co. Ltd; Dover reprint, 1952. ISBN 978-0-486-60221-9. Chapter 8, "Energy and Thermo-dynamics" == External links == MISN-0-158 The First Law of Thermodynamics (PDF file) by Jerzy Borysowicz for Project PHYSNET.
Wikipedia/Conservation_of_energy
Solar physics is the branch of astrophysics that specializes in the study of the Sun. It intersects with many disciplines of pure physics and astrophysics. Because the Sun is uniquely situated for close-range observing (other stars cannot be resolved with anything like the spatial or temporal resolution that the Sun can), there is a split between the related discipline of observational astrophysics (of distant stars) and observational solar physics. The study of solar physics is also important as it provides a "physical laboratory" for the study of plasma physics. == History == === Ancient times === Babylonians were keeping a record of solar eclipses, with the oldest record originating from the ancient city of Ugarit, in modern-day Syria. This record dates to about 1300 BC. Ancient Chinese astronomers were also observing solar phenomena (such as solar eclipses and visible sunspots) with the purpose of keeping track of calendars, which were based on lunar and solar cycles. Unfortunately, records kept before 720 BC are very vague and offer no useful information. However, after 720 BC, 37 solar eclipses were noted over the course of 240 years. === Medieval times === Astronomical knowledge flourished in the Islamic world during medieval times. Many observatories were built in cities from Damascus to Baghdad, where detailed astronomical observations were taken. Particularly, a few solar parameters were measured and detailed observations of the Sun were taken. Solar observations were taken with the purpose of navigation, but mostly for timekeeping. Islam requires its followers to pray five times a day, at specific position of the Sun in the sky. As such, accurate observations of the Sun and its trajectory on the sky were needed. In the late 10th century, Iranian astronomer Abu-Mahmud Khojandi built a massive observatory near Tehran. There, he took accurate measurements of a series of meridian transits of the Sun, which he later used to calculate the obliquity of the ecliptic. Following the fall of the Western Roman Empire, Western Europe was cut from all sources of ancient scientific knowledge, especially those written in Greek. This, plus de-urbanisation and diseases such as the Black Death led to a decline in scientific knowledge in medieval Europe, especially in the early Middle Ages. During this period, observations of the Sun were taken either in relation to the zodiac, or to assist in building places of worship such as churches and cathedrals. === Renaissance period === In astronomy, the renaissance period started with the work of Nicolaus Copernicus. He proposed that planets revolve around the Sun and not around the Earth, as it was believed at the time. This model is known as the heliocentric model. His work was later expanded by Johannes Kepler and Galileo Galilei. Particularly, Galilei used his new telescope to look at the Sun. In 1610, he discovered sunspots on its surface. In the autumn of 1611, Johannes Fabricius wrote the first book on sunspots, De Maculis in Sole Observatis ("On the spots observed in the Sun"). === Modern times === Modern day solar physics is focused towards understanding the many phenomena observed with the help of modern telescopes and satellites. Of particular interest are the structure of the solar photosphere, the coronal heat problem and sunspots. == Research == The Solar Physics Division of the American Astronomical Society boasts 555 members (as of May 2007), compared to several thousand in the parent organization. 
A major thrust of current (2009) effort in the field of solar physics is integrated understanding of the entire Solar System including the Sun and its effects throughout interplanetary space within the heliosphere and on planets and planetary atmospheres. Studies of phenomena that affect multiple systems in the heliosphere, or that are considered to fit within a heliospheric context, are called heliophysics, a new coinage that entered usage in the early years of the current millennium. === Space based === ==== Helios ==== Helios-A and Helios-B are a pair of spacecraft launched in December 1974 and January 1976 from Cape Canaveral, as a joint venture between the German Aerospace Center and NASA. Their orbits approach the Sun closer than Mercury. They included instruments to measure the solar wind, magnetic fields, cosmic rays, and interplanetary dust. Helios-A continued to transmit data until 1986. ==== SOHO ==== The Solar and Heliospheric Observatory, SOHO, is a joint project between NASA and ESA that was launched in December 1995. It was launched to probe the interior of the Sun, make observations of the solar wind and phenomena associated with it and investigate the outer layers of the Sun. ==== HINODE ==== A publicly funded mission led by the Japanese Aerospace Exploration Agency, the HINODE satellite, launched in 2006, consists of a coordinated set of optical, extreme ultraviolet and X-ray instruments. These investigate the interaction between the solar corona and the Sun's magnetic field. ==== SDO ==== The Solar Dynamics Observatory (SDO) was launched by NASA in February 2010 from Cape Canaveral. The main goals of the mission are understanding how solar activity arises and how it affects life on Earth by determining how the Sun's magnetic field is generated and structured and how the stored magnetic energy is converted and released into space. ==== PSP ==== The Parker Solar Probe (PSP) was launched in 2018 with the mission of making detailed observations of the outer solar corona. It has made the closest approaches to the Sun of any artificial object. === Ground based === ==== ATST ==== The Advanced Technology Solar Telescope (ATST) is a solar telescope facility that is under construction in Maui. Twenty-two institutions are collaborating on the ATST project, with the main funding agency being the National Science Foundation. ==== SSO ==== Sunspot Solar Observatory (SSO) operates the Richard B. Dunn Solar Telescope (DST) on behalf of the NSF. ==== Big Bear ==== The Big Bear Solar Observatory in California houses several telescopes including the New Solar Telescope (NST), a 1.6-meter, clear-aperture, off-axis Gregorian telescope. The NST saw first light in December 2008. Until the ATST comes online, the NST remains the largest solar telescope in the world. The Big Bear Observatory is one of several facilities operated by the Center for Solar-Terrestrial Research at New Jersey Institute of Technology (NJIT). === Other === ==== EUNIS ==== The Extreme Ultraviolet Normal Incidence Spectrograph (EUNIS) is a two-channel imaging spectrograph that first flew in 2006. It observes the solar corona with high spectral resolution. So far, it has provided information on the nature of coronal bright points, cool transients and coronal loop arcades. Data from it have also helped calibrate SOHO and a few other telescopes. == See also == Aeronomy Helioseismology Heliophysics Institute for Solar Physics (in La Palma in the Canary Islands) == Further reading == Mullan, Dermott J. (2009). 
Physics of the Sun: A First Course. Taylor & Francis. ISBN 978-1-4200-8307-1. Zirin, Harold (1988). Astrophysics of the Sun. Cambridge University Press. ISBN 0-521-30268-4. == References == == External links == Living Reviews in Solar Physics NASA's Marshall Space Flight Center Solar Physics Page NASA's Goddard Space Flight Center Solar Physics Laboratory MPS Solar Physics Group SUPARCO Solar physics Page Center for Solar-Terrestrial Research
Wikipedia/Solar_physics
Biophysics is an interdisciplinary science that applies approaches and methods traditionally used in physics to study biological phenomena. Biophysics covers all scales of biological organization, from molecular to organismic and populations. Biophysical research shares significant overlap with biochemistry, molecular biology, physical chemistry, physiology, nanotechnology, bioengineering, computational biology, biomechanics, developmental biology and systems biology. The term biophysics was originally introduced by Karl Pearson in 1892. The term biophysics is also regularly used in academia to indicate the study of the physical quantities (e.g. electric current, temperature, stress, entropy) in biological systems. Other biological sciences also perform research on the biophysical properties of living organisms including molecular biology, cell biology, chemical biology, and biochemistry. == Overview == Molecular biophysics typically addresses biological questions similar to those in biochemistry and molecular biology, seeking to find the physical underpinnings of biomolecular phenomena. Scientists in this field conduct research concerned with understanding the interactions between the various systems of a cell, including the interactions between DNA, RNA and protein biosynthesis, as well as how these interactions are regulated. A great variety of techniques are used to answer these questions. Fluorescent imaging techniques, as well as electron microscopy, x-ray crystallography, NMR spectroscopy, atomic force microscopy (AFM) and small-angle scattering (SAS) both with X-rays and neutrons (SAXS/SANS) are often used to visualize structures of biological significance. Protein dynamics can be observed by neutron spin echo spectroscopy. Conformational change in structure can be measured using techniques such as dual polarisation interferometry, circular dichroism, SAXS and SANS. Direct manipulation of molecules using optical tweezers or AFM, can also be used to monitor biological events where forces and distances are at the nanoscale. Molecular biophysicists often consider complex biological events as systems of interacting entities which can be understood e.g. through statistical mechanics, thermodynamics and chemical kinetics. By drawing knowledge and experimental techniques from a wide variety of disciplines, biophysicists are often able to directly observe, model or even manipulate the structures and interactions of individual molecules or complexes of molecules. In addition to traditional (i.e. molecular and cellular) biophysical topics like structural biology or enzyme kinetics, modern biophysics encompasses an extraordinarily broad range of research, from bioelectronics to quantum biology involving both experimental and theoretical tools. It is becoming increasingly common for biophysicists to apply the models and experimental techniques derived from physics, as well as mathematics and statistics, to larger systems such as tissues, organs, populations and ecosystems. Biophysical models are used extensively in the study of electrical conduction in single neurons, as well as neural circuit analysis in both tissue and whole brain. Medical physics, a branch of biophysics, is any application of physics to medicine or healthcare, ranging from radiology to microscopy and nanomedicine. For example, physicist Richard Feynman theorized about the future of nanomedicine. He wrote about the idea of a medical use for biological machines (see nanomachines). 
Feynman and Albert Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would be possible to (as Feynman put it) "swallow the doctor". The idea was discussed in Feynman's 1959 essay There's Plenty of Room at the Bottom. == History == The studies of Luigi Galvani (1737–1798) laid groundwork for the later field of biophysics. Some of the earlier studies in biophysics were conducted in the 1840s by a group known as the Berlin school of physiologists. Among its members were pioneers such as Hermann von Helmholtz, Ernst Heinrich Weber, Carl F. W. Ludwig, and Johannes Peter Müller. William T. Bovie (1882–1958) is credited as a leader of the field's further development in the mid-20th century. He was a leader in developing electrosurgery. The popularity of the field rose when the book What Is Life? by Erwin Schrödinger was published. Since 1957, biophysicists have organized themselves into the Biophysical Society which now has about 9,000 members over the world. Some authors such as Robert Rosen criticize biophysics on the ground that the biophysical method does not take into account the specificity of biological phenomena. == Focus as a subfield == While some colleges and universities have dedicated departments of biophysics, usually at the graduate level, many do not have university-level biophysics departments, instead having groups in related departments such as biochemistry, cell biology, chemistry, computer science, engineering, mathematics, medicine, molecular biology, neuroscience, pharmacology, physics, and physiology. Depending on the strengths of a department at a university differing emphasis will be given to fields of biophysics. What follows is a list of examples of how each department applies its efforts toward the study of biophysics. This list is hardly all inclusive. Nor does each subject of study belong exclusively to any particular department. Each academic institution makes its own rules and there is much overlap between departments. Biology and molecular biology – Gene regulation, single protein dynamics, bioenergetics, patch clamping, biomechanics, virophysics. Structural biology – Ångstrom-resolution structures of proteins, nucleic acids, lipids, carbohydrates, and complexes thereof. Biochemistry and chemistry – biomolecular structure, siRNA, nucleic acid structure, structure-activity relationships. Computer science – Neural networks, biomolecular and drug databases. Computational chemistry – molecular dynamics simulation, molecular docking, quantum chemistry Bioinformatics – sequence alignment, structural alignment, protein structure prediction Mathematics – graph/network theory, population modeling, dynamical systems, phylogenetics. Medicine – biophysical research that emphasizes medicine. Medical biophysics is a field closely related to physiology. It explains various aspects and systems of the body from a physical and mathematical perspective. Examples are fluid dynamics of blood flow, gas physics of respiration, radiation in diagnostics/treatment and much more. Biophysics is taught as a preclinical subject in many medical schools, mainly in Europe. Neuroscience – studying neural networks experimentally (brain slicing) as well as theoretically (computer models), membrane permittivity. Pharmacology and physiology – channelomics, electrophysiology, biomolecular interactions, cellular membranes, polyketides. 
Physics – negentropy, stochastic processes, and the development of new physical techniques and instrumentation as well as their application. Quantum biology – The field of quantum biology applies quantum mechanics to biological objects and problems; one line of work examines how decoherence of isomeric (tautomeric) states of DNA bases could yield time-dependent base substitutions. These studies also suggest applications in quantum computing. Agronomy and agriculture – Many biophysical techniques are unique to this field. Research efforts in biophysics are often initiated by scientists who were biologists, chemists or physicists by training. == See also == == References == === Sources === == External links == Biophysical Society Journal of Physiology: 2012 virtual issue Biophysics and Beyond bio-physics-wiki Link archive of learning resources for students: biophysika.de (60% English, 40% German)
Wikipedia/Biological_physics
Spin is an intrinsic form of angular momentum carried by elementary particles, and thus by composite particles such as hadrons, atomic nuclei, and atoms.: 183–184  Spin is quantized, and accurate models for the interaction with spin require relativistic quantum mechanics or quantum field theory. The existence of electron spin angular momentum is inferred from experiments, such as the Stern–Gerlach experiment, in which silver atoms were observed to possess two possible discrete angular momenta despite having no orbital angular momentum. The relativistic spin–statistics theorem connects electron spin quantization to the Pauli exclusion principle: observations of exclusion imply half-integer spin, and observations of half-integer spin imply exclusion. Spin is described mathematically as a vector for some particles such as photons, and as a spinor or bispinor for other particles such as electrons. Spinors and bispinors behave similarly to vectors: they have definite magnitudes and change under rotations; however, they use an unconventional "direction". All elementary particles of a given kind have the same magnitude of spin angular momentum, though its direction may change. These are indicated by assigning the particle a spin quantum number.: 183–184  The SI units of spin are the same as classical angular momentum (i.e., N·m·s, J·s, or kg·m2·s−1). In quantum mechanics, angular momentum and spin angular momentum take discrete values proportional to the Planck constant. In practice, spin is usually given as a dimensionless spin quantum number by dividing the spin angular momentum by the reduced Planck constant ħ. Often, the "spin quantum number" is simply called "spin". == Models == === Rotating charged mass === The earliest models for electron spin imagined a rotating charged mass, but this model fails when examined in detail: the required space distribution does not match limits on the electron radius: the required rotation speed exceeds the speed of light. In the Standard Model, the fundamental particles are all considered "point-like": they have their effects through the field that surrounds them. Any model for spin based on mass rotation would need to be consistent with that model. === Pauli's "classically non-describable two-valuedness" === Wolfgang Pauli, a central figure in the history of quantum spin, initially rejected any idea that the "degree of freedom" he introduced to explain experimental observations was related to rotation. He called it "classically non-describable two-valuedness". Later, he allowed that it is related to angular momentum, but insisted on considering spin an abstract property. This approach allowed Pauli to develop a proof of his fundamental Pauli exclusion principle, a proof now called the spin-statistics theorem. In retrospect, this insistence and the style of his proof initiated the modern particle-physics era, where abstract quantum properties derived from symmetry properties dominate. Concrete interpretation became secondary and optional. === Circulation of classical fields === The first classical model for spin proposed a small rigid particle rotating about an axis, as ordinary use of the word may suggest. Angular momentum can be computed from a classical field as well.: 63  By applying Frederik Belinfante's approach to calculating the angular momentum of a field, Hans C. Ohanian showed that "spin is essentially a wave property ... generated by a circulating flow of charge in the wave field of the electron". 
This same concept of spin can be applied to gravity waves in water: "spin is generated by subwavelength circular motion of water particles". Unlike classical wavefield circulation, which allows continuous values of angular momentum, quantum wavefields allow only discrete values. Consequently, energy transfer to or from spin states always occurs in fixed quantum steps. Only a few steps are allowed: for many qualitative purposes, the complexity of the spin quantum wavefields can be ignored and the system properties can be discussed in terms of "integer" or "half-integer" spin models as discussed in quantum numbers below. === In Bohmian mechanics === Spin can be understood differently depending on the interpretations of quantum mechanics. In the de Broglie–Bohm interpretation, particles have definitive trajectories but their motion is driven by the wave function or pilot wave. In this interpretation, the spin is a property of the pilot wave and not of the particle themselves. === Dirac's relativistic electron === Quantitative calculations of spin properties for electrons requires the Dirac relativistic wave equation. == Relation to orbital angular momentum == As the name suggests, spin was originally conceived as the rotation of a particle around some axis. Historically orbital angular momentum related to particle orbits.: 131  While the names based on mechanical models have survived, the physical explanation has not. Quantization fundamentally alters the character of both spin and orbital angular momentum. Since elementary particles are point-like, self-rotation is not well-defined for them. However, spin implies that the phase of the particle depends on the angle as e i S θ , {\displaystyle e^{iS\theta }\ ,} for rotation of angle θ around the axis parallel to the spin S. This is equivalent to the quantum-mechanical interpretation of momentum as phase dependence in the position, and of orbital angular momentum as phase dependence in the angular position. For fermions, the picture is less clear: From the Ehrenfest theorem, the angular velocity is equal to the derivative of the Hamiltonian to its conjugate momentum, which is the total angular momentum operator J = L + S . Therefore, if the Hamiltonian H has any dependence on the spin S, then ⁠ ∂ H / ∂ S ⁠ must be non-zero; consequently, for classical mechanics, the existence of spin in the Hamiltonian will produce an actual angular velocity, and hence an actual physical rotation – that is, a change in the phase-angle, θ, over time. However, whether this holds true for free electrons is ambiguous, since for an electron, | S |² is a constant ⁠ 1 / 2 ⁠ ℏ , and one might decide that since it cannot change, no partial (∂) can exist. Therefore it is a matter of interpretation whether the Hamiltonian must include such a term, and whether this aspect of classical mechanics extends into quantum mechanics (any particle's intrinsic spin angular momentum, S, is a quantum number arising from a "spinor" in the mathematical solution to the Dirac equation, rather than being a more nearly physical quantity, like orbital angular momentum L). Nevertheless, spin appears in the Dirac equation, and thus the relativistic Hamiltonian of the electron, treated as a Dirac field, can be interpreted as including a dependence in the spin S. == Quantum number == Spin obeys the mathematical laws of angular momentum quantization. The specific properties of spin angular momenta include: Spin quantum numbers may take either half-integer or integer values. 
Although the direction of its spin can be changed, the magnitude of the spin of an elementary particle cannot be changed. The spin of a charged particle is associated with a magnetic dipole moment with a g-factor that differs from 1. (In the classical context, this would imply the internal charge and mass distributions differing for a rotating object.) The conventional definition of the spin quantum number is s = ⁠n/2⁠, where n can be any non-negative integer. Hence the allowed values of s are 0, ⁠1/2⁠, 1, ⁠3/2⁠, 2, etc. The value of s for an elementary particle depends only on the type of particle and cannot be altered in any known way (in contrast to the spin direction described below). The spin angular momentum S of any physical system is quantized. The allowed values of S are S = ℏ s ( s + 1 ) = h 2 π n 2 ( n + 2 ) 2 = h 4 π n ( n + 2 ) , {\displaystyle S=\hbar \,{\sqrt {s(s+1)}}={\frac {h}{2\pi }}\,{\sqrt {{\frac {n}{2}}{\frac {(n+2)}{2}}}}={\frac {h}{4\pi }}\,{\sqrt {n(n+2)}},} where h is the Planck constant, and ℏ = h 2 π {\textstyle \hbar ={\frac {h}{2\pi }}} is the reduced Planck constant. In contrast, orbital angular momentum can only take on integer values of s; i.e., even-numbered values of n. === Fermions and bosons === Those particles with half-integer spins, such as ⁠1/2⁠, ⁠3/2⁠, ⁠5/2⁠, are known as fermions, while those particles with integer spins, such as 0, 1, 2, are known as bosons. The two families of particles obey different rules and broadly have different roles in the world around us. A key distinction between the two families is that fermions obey the Pauli exclusion principle: that is, there cannot be two identical fermions simultaneously having the same quantum numbers (meaning, roughly, having the same position, velocity and spin direction). Fermions obey the rules of Fermi–Dirac statistics. In contrast, bosons obey the rules of Bose–Einstein statistics and have no such restriction, so they may "bunch together" in identical states. Also, composite particles can have spins different from their component particles. For example, a helium-4 atom in the ground state has spin 0 and behaves like a boson, even though the quarks and electrons which make it up are all fermions. This has some profound consequences: Quarks and leptons (including electrons and neutrinos), which make up what is classically known as matter, are all fermions with spin ⁠1/2⁠. The common idea that "matter takes up space" actually comes from the Pauli exclusion principle acting on these particles to prevent the fermions from being in the same quantum state. Further compaction would require electrons to occupy the same energy states, and therefore a kind of pressure (sometimes known as degeneracy pressure of electrons) acts to resist the fermions being overly close. Elementary fermions with other spins (⁠3/2⁠, ⁠5/2⁠, etc.) are not known to exist. Elementary particles which are thought of as carrying forces are all bosons with spin 1. They include the photon, which carries the electromagnetic force, the gluon (strong force), and the W and Z bosons (weak force). The ability of bosons to occupy the same quantum state is used in the laser, which aligns many photons having the same quantum number (the same direction and frequency), superfluid liquid helium resulting from helium-4 atoms being bosons, and superconductivity, where pairs of electrons (which individually are fermions) act as single composite bosons. Elementary bosons with other spins (0, 2, 3, etc.) 
were not historically known to exist, although they have received considerable theoretical treatment and are well established within their respective mainstream theories. In particular, theoreticians have proposed the graviton (predicted to exist by some quantum gravity theories) with spin 2, and the Higgs boson (explaining electroweak symmetry breaking) with spin 0. Since 2013, the Higgs boson with spin 0 has been considered proven to exist. It is the first scalar elementary particle (spin 0) known to exist in nature. Atomic nuclei have nuclear spin which may be either half-integer or integer, so that the nuclei may be either fermions or bosons. === Spin–statistics theorem === The spin–statistics theorem splits particles into two groups: bosons and fermions, where bosons obey Bose–Einstein statistics, and fermions obey Fermi–Dirac statistics (and therefore the Pauli exclusion principle). Specifically, the theorem requires that particles with half-integer spins obey the Pauli exclusion principle while particles with integer spin do not. As an example, electrons have half-integer spin and are fermions that obey the Pauli exclusion principle, while photons have integer spin and do not. The theorem was derived by Wolfgang Pauli in 1940; it relies on both quantum mechanics and the theory of special relativity. Pauli described this connection between spin and statistics as "one of the most important applications of the special relativity theory". == Magnetic moments == Particles with spin can possess a magnetic dipole moment, just like a rotating electrically charged body in classical electrodynamics. These magnetic moments can be experimentally observed in several ways, e.g. by the deflection of particles by inhomogeneous magnetic fields in a Stern–Gerlach experiment, or by measuring the magnetic fields generated by the particles themselves. The intrinsic magnetic moment μ of a spin-⁠1/2⁠ particle with charge q, mass m, and spin angular momentum S is μ = g s q 2 m S , {\displaystyle {\boldsymbol {\mu }}={\frac {g_{\text{s}}q}{2m}}\mathbf {S} ,} where the dimensionless quantity gs is called the spin g-factor. For exclusively orbital rotations, it would be 1 (assuming that the mass and the charge occupy spheres of equal radius). The electron, being a charged elementary particle, possesses a nonzero magnetic moment. One of the triumphs of the theory of quantum electrodynamics is its accurate prediction of the electron g-factor, which has been experimentally determined to have the value −2.00231930436092(36), with the digits in parentheses denoting measurement uncertainty in the last two digits at one standard deviation. The value of 2 arises from the Dirac equation, a fundamental equation connecting the electron's spin with its electromagnetic properties; and the deviation from −2 arises from the electron's interaction with the surrounding quantum fields, including its own electromagnetic field and virtual particles. Composite particles also possess magnetic moments associated with their spin. In particular, the neutron possesses a non-zero magnetic moment despite being electrically neutral. This fact was an early indication that the neutron is not an elementary particle. In fact, it is made up of quarks, which are electrically charged particles. The magnetic moment of the neutron comes from the spins of the individual quarks and their orbital motions. Neutrinos are both elementary and electrically neutral. 
The minimally extended Standard Model that takes into account non-zero neutrino masses predicts neutrino magnetic moments of: μ ν ≈ 3 × 10 − 19 μ B m ν c 2 eV , {\displaystyle \mu _{\nu }\approx 3\times 10^{-19}\mu _{\text{B}}{\frac {m_{\nu }c^{2}}{\text{eV}}},} where the μν are the neutrino magnetic moments, mν are the neutrino masses, and μB is the Bohr magneton. New physics above the electroweak scale could, however, lead to significantly higher neutrino magnetic moments. It can be shown in a model-independent way that neutrino magnetic moments larger than about 10−14 μB are "unnatural" because they would also lead to large radiative contributions to the neutrino mass. Since the neutrino masses are known to be at most about 1 eV/c2, fine-tuning would be necessary in order to prevent large contributions to the neutrino mass via radiative corrections. The measurement of neutrino magnetic moments is an active area of research. Experimental results have put the neutrino magnetic moment at less than 1.2×10−10 times the electron's magnetic moment. On the other hand, elementary particles with spin but without electric charge, such as the photon and Z boson, do not have a magnetic moment. == Direction == === Spin projection quantum number and multiplicity === In classical mechanics, the angular momentum of a particle possesses not only a magnitude (how fast the body is rotating), but also a direction (either up or down on the axis of rotation of the particle). Quantum-mechanical spin also contains information about direction, but in a more subtle form. Quantum mechanics states that the component of angular momentum for a spin-s particle measured along any direction can only take on the values S i = ℏ s i , s i ∈ { − s , − ( s − 1 ) , … , s − 1 , s } , {\displaystyle S_{i}=\hbar s_{i},\quad s_{i}\in \{-s,-(s-1),\dots ,s-1,s\},} where Si is the spin component along the i-th axis (either x, y, or z), si is the spin projection quantum number along the i-th axis, and s is the principal spin quantum number (discussed in the previous section). Conventionally the direction chosen is the z axis: S z = ℏ s z , s z ∈ { − s , − ( s − 1 ) , … , s − 1 , s } , {\displaystyle S_{z}=\hbar s_{z},\quad s_{z}\in \{-s,-(s-1),\dots ,s-1,s\},} where Sz is the spin component along the z axis, sz is the spin projection quantum number along the z axis. One can see that there are 2s + 1 possible values of sz. The number "2s + 1" is the multiplicity of the spin system. For example, there are only two possible values for a spin-⁠1/2⁠ particle: sz = +⁠1/2⁠ and sz = −⁠1/2⁠. These correspond to quantum states in which the spin component is pointing in the +z or −z directions respectively, and are often referred to as "spin up" and "spin down". For a spin-⁠3/2⁠ particle, like a delta baryon, the possible values are +⁠3/2⁠, +⁠1/2⁠, −⁠1/2⁠, −⁠3/2⁠. === Vector === For a given quantum state, one could think of a spin vector ⟨ S ⟩ {\textstyle \langle S\rangle } whose components are the expectation values of the spin components along each axis, i.e., ⟨ S ⟩ = [ ⟨ S x ⟩ , ⟨ S y ⟩ , ⟨ S z ⟩ ] {\textstyle \langle S\rangle =[\langle S_{x}\rangle ,\langle S_{y}\rangle ,\langle S_{z}\rangle ]} . This vector then would describe the "direction" in which the spin is pointing, corresponding to the classical concept of the axis of rotation. 
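The projection rule quoted above can be spelled out in a few lines of code: for spin quantum number s, the component along any chosen axis is ħ·s_z with s_z running from −s to s in unit steps, giving 2s + 1 possible outcomes. The helper function below is purely illustrative.

```python
from fractions import Fraction

def spin_projections(s):
    """Allowed s_z values for a spin-s particle (s integer or half-integer)."""
    s = Fraction(s)
    n = int(2 * s)          # 2s must be a non-negative integer
    return [-s + k for k in range(n + 1)]

for s in ("1/2", "1", "3/2", "2"):
    values = spin_projections(s)
    print(f"s = {s:>3}: multiplicity {len(values)}, s_z = {[str(v) for v in values]}")
# s = 1/2: multiplicity 2, s_z = ['-1/2', '1/2']
# s = 3/2: multiplicity 4, s_z = ['-3/2', '-1/2', '1/2', '3/2']
```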
It turns out that the spin vector is not very useful in actual quantum-mechanical calculations, because it cannot be measured directly: sx, sy and sz cannot possess simultaneous definite values, because of a quantum uncertainty relation between them. However, for statistically large collections of particles that have been placed in the same pure quantum state, such as through the use of a Stern–Gerlach apparatus, the spin vector does have a well-defined experimental meaning: It specifies the direction in ordinary space in which a subsequent detector must be oriented in order to achieve the maximum possible probability (100%) of detecting every particle in the collection. For spin-⁠1/2⁠ particles, this probability drops off smoothly as the angle between the spin vector and the detector increases, until at an angle of 180°—that is, for detectors oriented in the opposite direction to the spin vector—the expectation of detecting particles from the collection reaches a minimum of 0%. As a qualitative concept, the spin vector is often handy because it is easy to picture classically. For instance, quantum-mechanical spin can exhibit phenomena analogous to classical gyroscopic effects. For example, one can exert a kind of "torque" on an electron by putting it in a magnetic field (the field acts upon the electron's intrinsic magnetic dipole moment—see the following section). The result is that the spin vector undergoes precession, just like a classical gyroscope. This phenomenon is known as electron spin resonance (ESR). The equivalent behaviour of protons in atomic nuclei is used in nuclear magnetic resonance (NMR) spectroscopy and imaging. Mathematically, quantum-mechanical spin states are described by vector-like objects known as spinors. There are subtle differences between the behavior of spinors and vectors under coordinate rotations. For example, rotating a spin-⁠1/2⁠ particle by 360° does not bring it back to the same quantum state, but to the state with the opposite quantum phase; this is detectable, in principle, with interference experiments. To return the particle to its exact original state, one needs a 720° rotation. (The plate trick and Möbius strip give non-quantum analogies.) A spin-zero particle can only have a single quantum state, even after torque is applied. Rotating a spin-2 particle 180° can bring it back to the same quantum state, and a spin-4 particle should be rotated 90° to bring it back to the same quantum state. The spin-2 particle can be analogous to a straight stick that looks the same even after it is rotated 180°, and a spin-0 particle can be imagined as sphere, which looks the same after whatever angle it is turned through. == Mathematical formulation == === Operator === Spin obeys commutation relations analogous to those of the orbital angular momentum: [ S ^ j , S ^ k ] = i ℏ ε j k l S ^ l , {\displaystyle \left[{\hat {S}}_{j},{\hat {S}}_{k}\right]=i\hbar \varepsilon _{jkl}{\hat {S}}_{l},} where εjkl is the Levi-Civita symbol. It follows (as with angular momentum) that the eigenvectors of S ^ 2 {\displaystyle {\hat {S}}^{2}} and S ^ z {\displaystyle {\hat {S}}_{z}} (expressed as kets in the total S basis) are: 166  S ^ 2 | s , m s ⟩ = ℏ 2 s ( s + 1 ) | s , m s ⟩ , S ^ z | s , m s ⟩ = ℏ m s | s , m s ⟩ . 
{\displaystyle {\begin{aligned}{\hat {S}}^{2}|s,m_{s}\rangle &=\hbar ^{2}s(s+1)|s,m_{s}\rangle ,\\{\hat {S}}_{z}|s,m_{s}\rangle &=\hbar m_{s}|s,m_{s}\rangle .\end{aligned}}} The spin raising and lowering operators acting on these eigenvectors give S ^ ± | s , m s ⟩ = ℏ s ( s + 1 ) − m s ( m s ± 1 ) | s , m s ± 1 ⟩ , {\displaystyle {\hat {S}}_{\pm }|s,m_{s}\rangle =\hbar {\sqrt {s(s+1)-m_{s}(m_{s}\pm 1)}}|s,m_{s}\pm 1\rangle ,} where S ^ ± = S ^ x ± i S ^ y {\displaystyle {\hat {S}}_{\pm }={\hat {S}}_{x}\pm i{\hat {S}}_{y}} .: 166  But unlike orbital angular momentum, the eigenvectors are not spherical harmonics. They are not functions of θ and φ. There is also no reason to exclude half-integer values of s and ms. All quantum-mechanical particles possess an intrinsic spin s {\displaystyle s} (though this value may be equal to zero). The projection of the spin s {\displaystyle s} on any axis is quantized in units of the reduced Planck constant, such that the state function of the particle is, say, not ψ = ψ ( r ) {\displaystyle \psi =\psi (\mathbf {r} )} , but ψ = ψ ( r , s z ) {\displaystyle \psi =\psi (\mathbf {r} ,s_{z})} , where s z {\displaystyle s_{z}} can take only the values of the following discrete set: s z ∈ { − s ℏ , − ( s − 1 ) ℏ , … , + ( s − 1 ) ℏ , + s ℏ } . {\displaystyle s_{z}\in \{-s\hbar ,-(s-1)\hbar ,\dots ,+(s-1)\hbar ,+s\hbar \}.} One distinguishes bosons (integer spin) and fermions (half-integer spin). The total angular momentum conserved in interaction processes is then the sum of the orbital angular momentum and the spin. === Pauli matrices === The quantum-mechanical operators associated with spin-⁠1/2⁠ observables are S ^ = ℏ 2 σ , {\displaystyle {\hat {\mathbf {S} }}={\frac {\hbar }{2}}{\boldsymbol {\sigma }},} where in Cartesian components S x = ℏ 2 σ x , S y = ℏ 2 σ y , S z = ℏ 2 σ z . {\displaystyle S_{x}={\frac {\hbar }{2}}\sigma _{x},\quad S_{y}={\frac {\hbar }{2}}\sigma _{y},\quad S_{z}={\frac {\hbar }{2}}\sigma _{z}.} For the special case of spin-⁠1/2⁠ particles, σx, σy and σz are the three Pauli matrices: σ x = ( 0 1 1 0 ) , σ y = ( 0 − i i 0 ) , σ z = ( 1 0 0 − 1 ) . {\displaystyle \sigma _{x}={\begin{pmatrix}0&1\\1&0\end{pmatrix}},\quad \sigma _{y}={\begin{pmatrix}0&-i\\i&0\end{pmatrix}},\quad \sigma _{z}={\begin{pmatrix}1&0\\0&-1\end{pmatrix}}.} === Pauli exclusion principle === The Pauli exclusion principle states that the wavefunction ψ ( r 1 , σ 1 , … , r N , σ N ) {\displaystyle \psi (\mathbf {r} _{1},\sigma _{1},\dots ,\mathbf {r} _{N},\sigma _{N})} for a system of N identical particles having spin s must change upon interchanges of any two of the N particles as ψ ( … , r i , σ i , … , r j , σ j , … ) = ( − 1 ) 2 s ψ ( … , r j , σ j , … , r i , σ i , … ) . {\displaystyle \psi (\dots ,\mathbf {r} _{i},\sigma _{i},\dots ,\mathbf {r} _{j},\sigma _{j},\dots )=(-1)^{2s}\psi (\dots ,\mathbf {r} _{j},\sigma _{j},\dots ,\mathbf {r} _{i},\sigma _{i},\dots ).} Thus, for bosons the prefactor (−1)2s will reduce to +1, for fermions to −1. This permutation postulate for N-particle state functions has most important consequences in daily life, e.g. the periodic table of the chemical elements. === Rotations === As described above, quantum mechanics states that components of angular momentum measured along any direction can only take a number of discrete values. 
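The ladder-operator matrix elements quoted above are enough to build the spin matrices for any s, and they make this discreteness easy to check numerically. The following sketch is illustrative only (ħ = 1, basis ordered m = s, s−1, ..., −s; the direction vector is an arbitrary choice):

```python
import numpy as np

def spin_operators(s):
    dim = int(round(2 * s)) + 1
    m = s - np.arange(dim)                       # m = s, s-1, ..., -s
    Sz = np.diag(m).astype(complex)
    Sp = np.zeros((dim, dim), dtype=complex)     # raising operator S_+
    for k in range(1, dim):                      # <s, m+1 | S_+ | s, m> = sqrt(s(s+1) - m(m+1))
        Sp[k - 1, k] = np.sqrt(s * (s + 1) - m[k] * (m[k] + 1))
    Sm = Sp.conj().T                             # lowering operator S_-
    return (Sp + Sm) / 2, (Sp - Sm) / 2j, Sz     # S_x, S_y, S_z

Sx, Sy, Sz = spin_operators(1.5)                 # a spin-3/2 example
n = np.array([1.0, 2.0, 2.0]) / 3.0              # an arbitrary unit vector
S_n = n[0] * Sx + n[1] * Sy + n[2] * Sz          # spin component along n
print(np.round(np.linalg.eigvalsh(S_n), 6))      # [-1.5 -0.5  0.5  1.5]
```

The eigenvalues come out the same for any unit vector n, which is the quantized-projection property just stated.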
The most convenient quantum-mechanical description of particle's spin is therefore with a set of complex numbers corresponding to amplitudes of finding a given value of projection of its intrinsic angular momentum on a given axis. For instance, for a spin-⁠1/2⁠ particle, we would need two numbers a±1/2, giving amplitudes of finding it with projection of angular momentum equal to +⁠ħ/2⁠ and −⁠ħ/2⁠, satisfying the requirement | a + 1 / 2 | 2 + | a − 1 / 2 | 2 = 1. {\displaystyle |a_{+1/2}|^{2}+|a_{-1/2}|^{2}=1.} For a generic particle with spin s, we would need 2s + 1 such parameters. Since these numbers depend on the choice of the axis, they transform into each other non-trivially when this axis is rotated. It is clear that the transformation law must be linear, so we can represent it by associating a matrix with each rotation, and the product of two transformation matrices corresponding to rotations A and B must be equal (up to phase) to the matrix representing rotation AB. Further, rotations preserve the quantum-mechanical inner product, and so should our transformation matrices: ∑ m = − j j a m ∗ b m = ∑ m = − j j ( ∑ n = − j j U n m a n ) ∗ ( ∑ k = − j j U k m b k ) , {\displaystyle \sum _{m=-j}^{j}a_{m}^{*}b_{m}=\sum _{m=-j}^{j}\left(\sum _{n=-j}^{j}U_{nm}a_{n}\right)^{*}\left(\sum _{k=-j}^{j}U_{km}b_{k}\right),} ∑ n = − j j ∑ k = − j j U n p ∗ U k q = δ p q . {\displaystyle \sum _{n=-j}^{j}\sum _{k=-j}^{j}U_{np}^{*}U_{kq}=\delta _{pq}.} Mathematically speaking, these matrices furnish a unitary projective representation of the rotation group SO(3). Each such representation corresponds to a representation of the covering group of SO(3), which is SU(2). There is one n-dimensional irreducible representation of SU(2) for each dimension, though this representation is n-dimensional real for odd n and n-dimensional complex for even n (hence of real dimension 2n). For a rotation by angle θ in the plane with normal vector θ ^ {\textstyle {\hat {\boldsymbol {\theta }}}} , U = e − i ℏ θ ⋅ S , {\displaystyle U=e^{-{\frac {i}{\hbar }}{\boldsymbol {\theta }}\cdot \mathbf {S} },} where θ = θ θ ^ {\textstyle {\boldsymbol {\theta }}=\theta {\hat {\boldsymbol {\theta }}}} , and S is the vector of spin operators. A generic rotation in 3-dimensional space can be built by compounding operators of this type using Euler angles: R ( α , β , γ ) = e − i α S x e − i β S y e − i γ S z . {\displaystyle {\mathcal {R}}(\alpha ,\beta ,\gamma )=e^{-i\alpha S_{x}}e^{-i\beta S_{y}}e^{-i\gamma S_{z}}.} An irreducible representation of this group of operators is furnished by the Wigner D-matrix: D m ′ m s ( α , β , γ ) ≡ ⟨ s m ′ | R ( α , β , γ ) | s m ⟩ = e − i m ′ α d m ′ m s ( β ) e − i m γ , {\displaystyle D_{m'm}^{s}(\alpha ,\beta ,\gamma )\equiv \langle sm'|{\mathcal {R}}(\alpha ,\beta ,\gamma )|sm\rangle =e^{-im'\alpha }d_{m'm}^{s}(\beta )e^{-im\gamma },} where d m ′ m s ( β ) = ⟨ s m ′ | e − i β s y | s m ⟩ {\displaystyle d_{m'm}^{s}(\beta )=\langle sm'|e^{-i\beta s_{y}}|sm\rangle } is Wigner's small d-matrix. Note that for γ = 2π and α = β = 0; i.e., a full rotation about the z axis, the Wigner D-matrix elements become D m ′ m s ( 0 , 0 , 2 π ) = d m ′ m s ( 0 ) e − i m 2 π = δ m ′ m ( − 1 ) 2 m . {\displaystyle D_{m'm}^{s}(0,0,2\pi )=d_{m'm}^{s}(0)e^{-im2\pi }=\delta _{m'm}(-1)^{2m}.} Recalling that a generic spin state can be written as a superposition of states with definite m, we see that if s is an integer, the values of m are all integers, and this matrix corresponds to the identity operator. 
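A short numerical check of this full-rotation statement (illustrative only, ħ = 1): for an integer spin the 2π rotation operator reduces to the identity, while for the half-integer case, taken up next, it does not.

```python
import numpy as np
from scipy.linalg import expm

Sz_spin1 = np.diag([1.0, 0.0, -1.0]).astype(complex)   # S_z for s = 1 in the m-basis
Sz_spin_half = np.diag([0.5, -0.5]).astype(complex)    # S_z for s = 1/2

U1 = expm(-2j * np.pi * Sz_spin1)                      # rotation by 2*pi about the z axis
U_half = expm(-2j * np.pi * Sz_spin_half)

print(np.allclose(U1, np.eye(3)))        # True: integer spin returns to the same state
print(np.allclose(U_half, -np.eye(2)))   # True: half-integer spin picks up a minus sign
```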
However, if s is a half-integer, the values of m are also all half-integers, giving (−1)2m = −1 for all m, and hence upon rotation by 2π the state picks up a minus sign. This fact is a crucial element of the proof of the spin–statistics theorem. === Lorentz transformations === We could try the same approach to determine the behavior of spin under general Lorentz transformations, but we would immediately discover a major obstacle. Unlike SO(3), the group of Lorentz transformations SO(3,1) is non-compact and therefore does not have any faithful, unitary, finite-dimensional representations. In case of spin-⁠1/2⁠ particles, it is possible to find a construction that includes both a finite-dimensional representation and a scalar product that is preserved by this representation. We associate a 4-component Dirac spinor ψ with each particle. These spinors transform under Lorentz transformations according to the law ψ ′ = exp ⁡ ( 1 8 ω μ ν [ γ μ , γ ν ] ) ψ , {\displaystyle \psi '=\exp {\left({\tfrac {1}{8}}\omega _{\mu \nu }[\gamma _{\mu },\gamma _{\nu }]\right)}\psi ,} where γν are gamma matrices, and ωμν is an antisymmetric 4 × 4 matrix parametrizing the transformation. It can be shown that the scalar product ⟨ ψ | ϕ ⟩ = ψ ¯ ϕ = ψ † γ 0 ϕ {\displaystyle \langle \psi |\phi \rangle ={\bar {\psi }}\phi =\psi ^{\dagger }\gamma _{0}\phi } is preserved. It is not, however, positive-definite, so the representation is not unitary. === Measurement of spin along the x, y, or z axes === Each of the (Hermitian) Pauli matrices of spin-⁠1/2⁠ particles has two eigenvalues, +1 and −1. The corresponding normalized eigenvectors are ψ x + = | 1 2 , + 1 2 ⟩ x = 1 2 ( 1 1 ) , ψ x − = | 1 2 , − 1 2 ⟩ x = 1 2 ( 1 − 1 ) , ψ y + = | 1 2 , + 1 2 ⟩ y = 1 2 ( 1 i ) , ψ y − = | 1 2 , − 1 2 ⟩ y = 1 2 ( 1 − i ) , ψ z + = | 1 2 , + 1 2 ⟩ z = ( 1 0 ) , ψ z − = | 1 2 , − 1 2 ⟩ z = ( 0 1 ) . {\displaystyle {\begin{array}{lclc}\psi _{x+}=\left|{\frac {1}{2}},{\frac {+1}{2}}\right\rangle _{x}=\displaystyle {\frac {1}{\sqrt {2}}}\!\!\!\!\!&{\begin{pmatrix}{1}\\{1}\end{pmatrix}},&\psi _{x-}=\left|{\frac {1}{2}},{\frac {-1}{2}}\right\rangle _{x}=\displaystyle {\frac {1}{\sqrt {2}}}\!\!\!\!\!&{\begin{pmatrix}{1}\\{-1}\end{pmatrix}},\\\psi _{y+}=\left|{\frac {1}{2}},{\frac {+1}{2}}\right\rangle _{y}=\displaystyle {\frac {1}{\sqrt {2}}}\!\!\!\!\!&{\begin{pmatrix}{1}\\{i}\end{pmatrix}},&\psi _{y-}=\left|{\frac {1}{2}},{\frac {-1}{2}}\right\rangle _{y}=\displaystyle {\frac {1}{\sqrt {2}}}\!\!\!\!\!&{\begin{pmatrix}{1}\\{-i}\end{pmatrix}},\\\psi _{z+}=\left|{\frac {1}{2}},{\frac {+1}{2}}\right\rangle _{z}=&{\begin{pmatrix}1\\0\end{pmatrix}},&\psi _{z-}=\left|{\frac {1}{2}},{\frac {-1}{2}}\right\rangle _{z}=&{\begin{pmatrix}0\\1\end{pmatrix}}.\end{array}}} (Because any eigenvector multiplied by a constant is still an eigenvector, there is ambiguity about the overall sign. In this article, the convention is chosen to make the first element imaginary and negative if there is a sign ambiguity. The present convention is used by software such as SymPy; while many physics textbooks, such as Sakurai and Griffiths, prefer to make it real and positive.) By the postulates of quantum mechanics, an experiment designed to measure the electron spin on the x, y, or z axis can only yield an eigenvalue of the corresponding spin operator (Sx, Sy or Sz) on that axis, i.e. ⁠ħ/2⁠ or −⁠ħ/2⁠. The quantum state of a particle (with respect to spin), can be represented by a two-component spinor: ψ = ( a + b i c + d i ) . 
{\displaystyle \psi ={\begin{pmatrix}a+bi\\c+di\end{pmatrix}}.} When the spin of this particle is measured with respect to a given axis (in this example, the x axis), the probability that its spin will be measured as ⁠ħ/2⁠ is just | ⟨ ψ x + | ψ ⟩ | 2 {\displaystyle {\big |}\langle \psi _{x+}|\psi \rangle {\big |}^{2}} . Correspondingly, the probability that its spin will be measured as −⁠ħ/2⁠ is just | ⟨ ψ x − | ψ ⟩ | 2 {\displaystyle {\big |}\langle \psi _{x-}|\psi \rangle {\big |}^{2}} . Following the measurement, the spin state of the particle collapses into the corresponding eigenstate. As a result, if the particle's spin along a given axis has been measured to have a given eigenvalue, all measurements will yield the same eigenvalue (since | ⟨ ψ x + | ψ x + ⟩ | 2 = 1 {\displaystyle {\big |}\langle \psi _{x+}|\psi _{x+}\rangle {\big |}^{2}=1} , etc.), provided that no measurements of the spin are made along other axes. === Measurement of spin along an arbitrary axis === The operator to measure spin along an arbitrary axis direction is easily obtained from the Pauli spin matrices. Let u = (ux, uy, uz) be an arbitrary unit vector. Then the operator for spin in this direction is simply S u = ℏ 2 ( u x σ x + u y σ y + u z σ z ) . {\displaystyle S_{u}={\frac {\hbar }{2}}(u_{x}\sigma _{x}+u_{y}\sigma _{y}+u_{z}\sigma _{z}).} The operator Su has eigenvalues of ±⁠ħ/2⁠, just like the usual spin matrices. This method of finding the operator for spin in an arbitrary direction generalizes to higher spin states, one takes the dot product of the direction with a vector of the three operators for the three x-, y-, z-axis directions. A normalized spinor for spin-⁠1/2⁠ in the (ux, uy, uz) direction (which works for all spin states except spin down, where it will give ⁠0/0⁠) is 1 2 + 2 u z ( 1 + u z u x + i u y ) . {\displaystyle {\frac {1}{\sqrt {2+2u_{z}}}}{\begin{pmatrix}1+u_{z}\\u_{x}+iu_{y}\end{pmatrix}}.} The above spinor is obtained in the usual way by diagonalizing the σu matrix and finding the eigenstates corresponding to the eigenvalues. In quantum mechanics, vectors are termed "normalized" when multiplied by a normalizing factor, which results in the vector having a length of unity. === Compatibility of spin measurements === Since the Pauli matrices do not commute, measurements of spin along the different axes are incompatible. This means that if, for example, we know the spin along the x axis, and we then measure the spin along the y axis, we have invalidated our previous knowledge of the x axis spin. This can be seen from the property of the eigenvectors (i.e. eigenstates) of the Pauli matrices that | ⟨ ψ x ± | ψ y ± ⟩ | 2 = | ⟨ ψ x ± | ψ z ± ⟩ | 2 = | ⟨ ψ y ± | ψ z ± ⟩ | 2 = 1 2 . {\displaystyle {\big |}\langle \psi _{x\pm }|\psi _{y\pm }\rangle {\big |}^{2}={\big |}\langle \psi _{x\pm }|\psi _{z\pm }\rangle {\big |}^{2}={\big |}\langle \psi _{y\pm }|\psi _{z\pm }\rangle {\big |}^{2}={\tfrac {1}{2}}.} So when physicists measure the spin of a particle along the x axis as, for example, ⁠ħ/2⁠, the particle's spin state collapses into the eigenstate | ψ x + ⟩ {\displaystyle |\psi _{x+}\rangle } . When we then subsequently measure the particle's spin along the y axis, the spin state will now collapse into either | ψ y + ⟩ {\displaystyle |\psi _{y+}\rangle } or | ψ y − ⟩ {\displaystyle |\psi _{y-}\rangle } , each with probability ⁠1/2⁠. Let us say, in our example, that we measure −⁠ħ/2⁠. 
When we now return to measure the particle's spin along the x axis again, the probabilities that we will measure ⁠ħ/2⁠ or −⁠ħ/2⁠ are each ⁠1/2⁠ (i.e. they are | ⟨ ψ x + | ψ y − ⟩ | 2 {\displaystyle {\big |}\langle \psi _{x+}|\psi _{y-}\rangle {\big |}^{2}} and | ⟨ ψ x − | ψ y − ⟩ | 2 {\displaystyle {\big |}\langle \psi _{x-}|\psi _{y-}\rangle {\big |}^{2}} respectively). This implies that the original measurement of the spin along the x axis is no longer valid, since the spin along the x axis will now be measured to have either eigenvalue with equal probability. === Higher spins === The spin-⁠1/2⁠ operator S = ⁠ħ/2⁠σ forms the fundamental representation of SU(2). By taking Kronecker products of this representation with itself repeatedly, one may construct all higher irreducible representations. That is, the resulting spin operators for higher-spin systems in three spatial dimensions can be calculated for arbitrarily large s using this spin operator and ladder operators. For example, taking the Kronecker product of two spin-⁠1/2⁠ yields a four-dimensional representation, which is separable into a 3-dimensional spin-1 (triplet states) and a 1-dimensional spin-0 representation (singlet state). The resulting irreducible representations yield the following spin matrices and eigenvalues in the z-basis: Also useful in the quantum mechanics of multiparticle systems, the general Pauli group Gn is defined to consist of all n-fold tensor products of Pauli matrices. The analog formula of Euler's formula in terms of the Pauli matrices R ^ ( θ , n ^ ) = e i θ 2 n ^ ⋅ σ = I cos ⁡ θ 2 + i ( n ^ ⋅ σ ) sin ⁡ θ 2 {\displaystyle {\hat {R}}(\theta ,{\hat {\mathbf {n} }})=e^{i{\frac {\theta }{2}}{\hat {\mathbf {n} }}\cdot {\boldsymbol {\sigma }}}=I\cos {\frac {\theta }{2}}+i\left({\hat {\mathbf {n} }}\cdot {\boldsymbol {\sigma }}\right)\sin {\frac {\theta }{2}}} for higher spins is tractable, but less simple. == Parity == In tables of the spin quantum number s for nuclei or particles, the spin is often followed by a "+" or "−". This refers to the parity with "+" for even parity (wave function unchanged by spatial inversion) and "−" for odd parity (wave function negated by spatial inversion). For example, see the isotopes of bismuth, in which the list of isotopes includes the column nuclear spin and parity. For Bi-209, the longest-lived isotope, the entry 9/2− means that the nuclear spin is 9/2 and the parity is odd. == Measuring spin == The nuclear spin of atoms can be determined by sophisticated improvements to the original Stern-Gerlach experiment. A single-energy (monochromatic) molecular beam of atoms in an inhomogeneous magnetic field will split into beams representing each possible spin quantum state. For an atom with electronic spin S and nuclear spin I, there are (2S + 1)(2I + 1) spin states. For example, neutral Na atoms, which have S = 1/2, were passed through a series of inhomogeneous magnetic fields that selected one of the two electronic spin states and separated the nuclear spin states, from which four beams were observed. Thus, the nuclear spin for 23Na atoms was found to be I = 3/2. The spin of pions, a type of elementary particle, was determined by the principle of detailed balance applied to those collisions of protons that produced charged pions and deuterium. 
p + p → π + + d {\displaystyle p+p\rightarrow \pi ^{+}+d} The known spin values for protons and deuterium allows analysis of the collision cross-section to show that π + {\displaystyle \pi ^{+}} has spin s π = 0 {\displaystyle s_{\pi }=0} . A different approach is needed for neutral pions. In that case the decay produced two gamma ray photons with spin one: π 0 → 2 γ {\displaystyle \pi ^{0}\rightarrow 2\gamma } This result supplemented with additional analysis leads to the conclusion that the neutral pion also has spin zero.: 66  == Applications == Spin has important theoretical implications and practical applications. Well-established direct applications of spin include: Nuclear magnetic resonance (NMR) spectroscopy in chemistry; Electron spin resonance (ESR or EPR) spectroscopy in chemistry and physics; Magnetic resonance imaging (MRI) in medicine, a type of applied NMR, which relies on proton spin density; Giant magnetoresistive (GMR) drive-head technology in modern hard disks. Electron spin plays an important role in magnetism, with applications for instance in computer memories. The manipulation of nuclear spin by radio-frequency waves (nuclear magnetic resonance) is important in chemical spectroscopy and medical imaging. Spin–orbit coupling leads to the fine structure of atomic spectra, which is used in atomic clocks and in the modern definition of the second. Precise measurements of the g-factor of the electron have played an important role in the development and verification of quantum electrodynamics. Photon spin is associated with the polarization of light (photon polarization). An emerging application of spin is as a binary information carrier in spin transistors. The original concept, proposed in 1990, is known as Datta–Das spin transistor. Electronics based on spin transistors are referred to as spintronics. The manipulation of spin in dilute magnetic semiconductor materials, such as metal-doped ZnO or TiO2 imparts a further degree of freedom and has the potential to facilitate the fabrication of more efficient electronics. There are many indirect applications and manifestations of spin and the associated Pauli exclusion principle, starting with the periodic table of chemistry. == History == Spin was first discovered in the context of the emission spectrum of alkali metals. Starting around 1910, many experiments on different atoms produced a collection of relationships involving quantum numbers for atomic energy levels partially summarized in Bohr's model for the atom: 106  Transitions between levels obeyed selection rules and the rules were known to be correlated with even or odd atomic number. Additional information was known from changes to atomic spectra observed in strong magnetic fields, known as the Zeeman effect. In 1924, Wolfgang Pauli used this large collection of empirical observations to propose a new degree of freedom, introducing what he called a "two-valuedness not describable classically" associated with the electron in the outermost shell. The physical interpretation of Pauli's "degree of freedom" was initially unknown. Ralph Kronig, one of Alfred Landé's assistants, suggested in early 1925 that it was produced by the self-rotation of the electron. When Pauli heard about the idea, he criticized it severely, noting that the electron's hypothetical surface would have to be moving faster than the speed of light in order for it to rotate quickly enough to produce the necessary angular momentum. This would violate the theory of relativity. 
Largely due to Pauli's criticism, Kronig decided not to publish his idea. In the autumn of 1925, the same thought came to Dutch physicists George Uhlenbeck and Samuel Goudsmit at Leiden University. Under the advice of Paul Ehrenfest, they published their results. The young physicists immediately regretted the publication: Hendrik Lorentz and Werner Heisenberg both pointed out problems with the concept of a spinning electron. Pauli was especially unconvinced and continued to pursue his two-valued degree of freedom. This allowed him to formulate the Pauli exclusion principle, stating that no two electrons can have the same quantum state in the same quantum system. Fortunately, by February 1926, Llewellyn Thomas managed to resolve a factor-of-two discrepancy between experimental results for the fine structure in the hydrogen spectrum and calculations based on Uhlenbeck and Goudsmit's (and Kronig's unpublished) model.: 385  This discrepancy was due to a relativistic effect, the difference between the electron's rotating rest frame and the nuclear rest frame; the effect is now known as Thomas precession. Thomas' result convinced Pauli that electron spin was the correct interpretation of his two-valued degree of freedom, while he continued to insist that the classical rotating charge model is invalid. In 1927, Pauli formalized the theory of spin using the theory of quantum mechanics invented by Erwin Schrödinger and Werner Heisenberg. He pioneered the use of Pauli matrices as a representation of the spin operators and introduced a two-component spinor wave-function. Pauli's theory of spin was non-relativistic. In 1928, Paul Dirac published his relativistic electron equation, using a four-component spinor (known as a "Dirac spinor") for the electron wave-function. In 1940, Pauli proved the spin–statistics theorem, which states that fermions have half-integer spin, and bosons have integer spin. In retrospect, the first direct experimental evidence of the electron spin was the Stern–Gerlach experiment of 1922. However, the correct explanation of this experiment was only given in 1927. The original interpretation assumed the two spots observed in the experiment were due to quantized orbital angular momentum. However, in 1927 Ronald Fraser showed that Sodium atoms are isotropic with no orbital angular momentum and suggested that the observed magnetic properties were due to electron spin. In the same year, Phipps and Taylor applied the Stern-Gerlach technique to hydrogen atoms; the ground state of hydrogen has zero angular momentum but the measurements again showed two peaks. Once the quantum theory became established, it became clear that the original interpretation could not have been correct: the possible values of orbital angular momentum along one axis is always an odd number, unlike the observations. Hydrogen atoms have a single electron with two spin states giving the two spots observed; silver atoms have closed shells which do not contribute to the magnetic moment and only the unmatched outer electron's spin responds to the field. == See also == == References == == Further reading == == External links == Quotations related to Spin (physics) at Wikiquote Goudsmit on the discovery of electron spin Nature: "Milestones in 'spin' since 1896." ECE 495N Lecture 36: Spin Online lecture by S. Datta
Wikipedia/Spin_(particle_physics)
In physics, a fifth force refers to a hypothetical fundamental interaction (also known as fundamental force) beyond the four known interactions in nature: gravitational, electromagnetic, strong nuclear, and weak nuclear forces. Some speculative theories have proposed a fifth force to explain various anomalous observations that do not fit existing theories. The specific characteristics of a putative fifth force depend on which hypothesis is being advanced. No evidence to support these models has been found. The term is also used as "the Fifth force" when referring to a specific theory advanced by Ephraim Fischbach in 1986 to explain experimental deviations in the theory of gravity. Later analysis failed to reproduce those deviations. == History == The term fifth force originates in a 1986 paper by Ephraim Fischbach et al. who reanalyzed the data from the Eötvös experiment of Loránd Eötvös from earlier in the century; the reanalysis found a distance dependence to gravity that deviates from the inverse square law. : 57  The reanalysis was sparked by theoretical work in 1971 by Fujii : 3  proposing a model that changes distance dependence with a Yukawa potential-like term: V ( r ) = − G ∞ m i m j r i j ( 1 + α e − r / λ ) {\displaystyle V(r)=-G_{\infty }{\frac {m_{i}m_{j}}{r_{ij}}}(1+\alpha e^{-r/\lambda })} The parameter α {\displaystyle \alpha } characterizes the strength and λ {\displaystyle \lambda } the range of the interaction. Fischbach's paper found a strength around 1% of gravity and a range of a few hundred meters.: 26  The effect of this potential can be described equivalently as exchange of vector and/or scalar bosons, that is, as predicting as yet undetected new particles. However, many subsequent attempts to reproduce the deviations have failed. == Theory == Theoretical proposals for a fifth force are driven by inconsistencies between the existing models of general relativity and quantum field theory, and also by the hierarchy problem and the cosmological constant problem. Both issues suggest the possibility of corrections to the gravitational potential around 100 μ m {\displaystyle 100\mu {\text{m}}} .: 58  The accelerating expansion of the universe has been attributed to a form of energy called dark energy. Some physicists speculate that a form of dark energy called quintessence could be a fifth force. == Experimental approaches == There are at least three kinds of searches that can be undertaken, which depend on the kind of force being considered, and its range. === Equivalence principle === One way to search for a fifth force is with tests of the strong equivalence principle, one of the most powerful tests of general relativity, also known as Einstein's theory of gravity. Alternative theories of gravity, such as Brans–Dicke theory, postulate a fifth force — possibly one with infinite range. This is because gravitational interactions, in theories other than general relativity, have degrees of freedom other than the "metric", which dictates the curvature of space, and different kinds of degrees of freedom produce different effects. For example, a scalar field cannot produce the bending of light rays. The fifth force would manifest itself in an effect on solar system orbits, called the Nordtvedt effect. This is tested with the Lunar Laser Ranging experiment and very-long-baseline interferometry. 
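As a rough numerical illustration of the Yukawa-modified potential from the History section above (the parameter values are only the order-of-magnitude ones quoted there, not a fit), the fractional deviation from Newtonian gravity is simply α e^(−r/λ):

```python
import numpy as np

alpha, lam = 0.01, 200.0          # strength ~1% of gravity, range ~200 m

def fractional_deviation(r):
    """(V_modified - V_newton) / V_newton = alpha * exp(-r / lam)."""
    return alpha * np.exp(-r / lam)

for r in (1.0, 200.0, 2000.0):    # separations in metres
    print(f"r = {r:7.1f} m   deviation = {fractional_deviation(r):.2e}")
```

At separations much larger than λ the deviation is exponentially suppressed, which is why the experimental strategies in this section depend strongly on the assumed range.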
=== Extra dimensions === Another kind of fifth force, which arises in Kaluza–Klein theory, where the universe has extra dimensions, or in supergravity or string theory is the Yukawa force, which is transmitted by a light scalar field (i.e. a scalar field with a long Compton wavelength, which determines the range). This has prompted a much recent interest, as a theory of supersymmetric large extra dimensions — dimensions with size slightly less than a millimeter — has prompted an experimental effort to test gravity on very small scales. This requires extremely sensitive experiments which search for a deviation from the inverse-square law of gravity over a range of distances. Essentially, they are looking for signs that the Yukawa interaction is engaging at a certain length. Australian researchers, attempting to measure the gravitational constant deep in a mine shaft, found a discrepancy between the predicted and measured value, with the measured value being two percent too small. They concluded that the results may be explained by a repulsive fifth force with a range from a few centimetres to a kilometre. Similar experiments have been carried out on board a submarine, USS Dolphin (AGSS-555), while deeply submerged. A further experiment measuring the gravitational constant in a deep borehole in the Greenland ice sheet found discrepancies of a few percent, but it was not possible to eliminate a geological source for the observed signal. === Earth's mantle === Another experiment uses the Earth's mantle as a giant particle detector, focusing on geoelectrons. === Cepheid variables === Jain et al. (2012) examined existing data on the rate of pulsation of over a thousand cepheid variable stars in 25 galaxies. Theory suggests that the rate of cepheid pulsation in galaxies screened from a hypothetical fifth force by neighbouring clusters, would follow a different pattern from cepheids that are not screened. They were unable to find any variation from Einstein's theory of gravity. === Other approaches === Some experiments used a lake plus a tower that is 320 meters high. A comprehensive review by Ephraim Fischbach and Carrick Talmadge suggested there is no compelling evidence for the fifth force, though scientists still search for it. The Fischbach–Talmadge article was written in 1992, and since then, other evidence has come to light that may indicate a fifth force. The above experiments search for a fifth force that is, like gravity, independent of the composition of an object, so all objects experience the force in proportion to their masses. Forces that depend on the composition of an object can be very sensitively tested by torsion balance experiments of a type invented by Loránd Eötvös. Such forces may depend, for example, on the ratio of protons to neutrons in an atomic nucleus, nuclear spin, or the relative amount of different kinds of binding energy in a nucleus (see the semi-empirical mass formula). Searches have been done from very short ranges, to municipal scales, to the scale of the Earth, the Sun, and dark matter at the center of the galaxy. === Claims of new particles === In 2015, Attila Krasznahorkay at ATOMKI, the Hungarian Academy of Sciences's Institute for Nuclear Research in Debrecen, Hungary, and his colleagues posited the existence of a new, light boson only 34 times heavier than the electron (17 MeV). 
In an effort to find a dark photon, the Hungarian team fired protons at thin targets of lithium-7, which created unstable beryllium-8 nuclei that then decayed and ejected pairs of electrons and positrons. Excess decays were observed at an opening angle of 140° between the e+ and e−, and a combined energy of 17 MeV, which indicated that a small fraction of beryllium-8 will shed excess energy in the form of a new particle. In November 2019, Krasznahorkay announced that he and his team at ATOMKI had successfully observed the same anomalies in the decay of stable helium atoms as had been observed in beryllium-8, strengthening the case for the X17 particle's existence. Feng et al. (2016) proposed that a protophobic (i.e. "proton-ignoring") X-boson with a mass of 16.7 MeV, suppressed couplings to protons relative to neutrons and electrons, and a femtometer range could explain the data. The force may explain the muon g − 2 anomaly and provide a dark matter candidate. Several research experiments are underway to attempt to validate or refute these results. == See also == == References ==
Wikipedia/Fifth_force
In science, a field is a physical quantity, represented by a scalar, vector, or tensor, that has a value for each point in space and time. An example of a scalar field is a weather map, with the surface temperature described by assigning a number to each point on the map. A surface wind map, assigning an arrow to each point on a map that describes the wind speed and direction at that point, is an example of a vector field, i.e. a 1-dimensional (rank-1) tensor field. Field theories, mathematical descriptions of how field values change in space and time, are ubiquitous in physics. For instance, the electric field is another rank-1 tensor field, while electrodynamics can be formulated in terms of two interacting vector fields at each point in spacetime, or as a single-rank 2-tensor field. In the modern framework of the quantum field theory, even without referring to a test particle, a field occupies space, contains energy, and its presence precludes a classical "true vacuum". This has led physicists to consider electromagnetic fields to be a physical entity, making the field concept a supporting paradigm of the edifice of modern physics. Richard Feynman said, "The fact that the electromagnetic field can possess momentum and energy makes it very real, and [...] a particle makes a field, and a field acts on another particle, and the field has such familiar properties as energy content and momentum, just as particles can have." In practice, the strength of most fields diminishes with distance, eventually becoming undetectable. For instance the strength of many relevant classical fields, such as the gravitational field in Newton's theory of gravity or the electrostatic field in classical electromagnetism, is inversely proportional to the square of the distance from the source (i.e. they follow Gauss's law). A field can be classified as a scalar field, a vector field, a spinor field or a tensor field according to whether the represented physical quantity is a scalar, a vector, a spinor, or a tensor, respectively. A field has a consistent tensorial character wherever it is defined: i.e. a field cannot be a scalar field somewhere and a vector field somewhere else. For example, the Newtonian gravitational field is a vector field: specifying its value at a point in spacetime requires three numbers, the components of the gravitational field vector at that point. Moreover, within each category (scalar, vector, tensor), a field can be either a classical field or a quantum field, depending on whether it is characterized by numbers or quantum operators respectively. In this theory an equivalent representation of field is a field particle, for instance a boson. == History == To Isaac Newton, his law of universal gravitation simply expressed the gravitational force that acted between any pair of massive objects. When looking at the motion of many bodies all interacting with each other, such as the planets in the Solar System, dealing with the force between each pair of bodies separately rapidly becomes computationally inconvenient. In the eighteenth century, a new quantity was devised to simplify the bookkeeping of all these gravitational forces. This quantity, the gravitational field, gave at each point in space the total gravitational acceleration which would be felt by a small object at that point. 
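A toy sketch (the masses, positions and test mass below are invented purely for illustration) makes this bookkeeping point concrete: summing the pairwise Newtonian forces on a test mass gives the same answer as first summing the sources' field contributions and then applying the resulting field to the test mass.

```python
import numpy as np

G = 6.674e-11                                          # m^3 kg^-1 s^-2
sources = [(5.0e10, np.array([0.0, 0.0, 0.0])),        # (mass in kg, position in m)
           (2.0e10, np.array([30.0, 0.0, 0.0])),
           (1.0e10, np.array([0.0, 40.0, 0.0]))]
m_test = 7.0                                           # test mass, kg
r_test = np.array([10.0, 10.0, 0.0])                   # where the test mass sits

def g_of(mass, pos, r):
    d = r - pos
    return -G * mass / np.linalg.norm(d) ** 3 * d      # field of one point source at r

force_pairwise = sum(m_test * g_of(M, p, r_test) for M, p in sources)
g_total = sum(g_of(M, p, r_test) for M, p in sources)  # the gravitational field at r_test
force_from_field = m_test * g_total

print(np.allclose(force_pairwise, force_from_field))   # True: the two bookkeeping schemes agree
```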
This did not change the physics in any way: it did not matter if all the gravitational forces on an object were calculated individually and then added together, or if all the contributions were first added together as a gravitational field and then applied to an object. His idea in Opticks that optical reflection and refraction arise from interactions across the entire surface is arguably the beginning of the field theory of electric force. The development of the independent concept of a field truly began in the nineteenth century with the development of the theory of electromagnetism. In the early stages, André-Marie Ampère and Charles-Augustin de Coulomb could manage with Newton-style laws that expressed the forces between pairs of electric charges or electric currents. However, it became much more natural to take the field approach and express these laws in terms of electric and magnetic fields; in 1845 Michael Faraday became the first to coin the term "magnetic field". And Lord Kelvin provided a formal definition for a field in 1851. The independent nature of the field became more apparent with James Clerk Maxwell's discovery that waves in these fields, called electromagnetic waves, propagated at a finite speed. Consequently, the forces on charges and currents no longer just depended on the positions and velocities of other charges and currents at the same time, but also on their positions and velocities in the past. Maxwell, at first, did not adopt the modern concept of a field as a fundamental quantity that could independently exist. Instead, he supposed that the electromagnetic field expressed the deformation of some underlying medium—the luminiferous aether—much like the tension in a rubber membrane. If that were the case, the observed velocity of the electromagnetic waves should depend upon the velocity of the observer with respect to the aether. Despite much effort, no experimental evidence of such an effect was ever found; the situation was resolved by the introduction of the special theory of relativity by Albert Einstein in 1905. This theory changed the way the viewpoints of moving observers were related to each other. They became related to each other in such a way that velocity of electromagnetic waves in Maxwell's theory would be the same for all observers. By doing away with the need for a background medium, this development opened the way for physicists to start thinking about fields as truly independent entities. In the late 1920s, the new rules of quantum mechanics were first applied to the electromagnetic field. In 1927, Paul Dirac used quantum fields to successfully explain how the decay of an atom to a lower quantum state led to the spontaneous emission of a photon, the quantum of the electromagnetic field. This was soon followed by the realization (following the work of Pascual Jordan, Eugene Wigner, Werner Heisenberg, and Wolfgang Pauli) that all particles, including electrons and protons, could be understood as the quanta of some quantum field, elevating fields to the status of the most fundamental objects in nature. That said, John Wheeler and Richard Feynman seriously considered Newton's pre-field concept of action at a distance (although they set it aside because of the ongoing utility of the field concept for research in general relativity and quantum electrodynamics). == Classical fields == There are several examples of classical fields. Classical field theories remain useful wherever quantum properties do not arise, and can be active areas of research. 
Elasticity of materials, fluid dynamics and Maxwell's equations are cases in point. Some of the simplest physical fields are vector force fields. Historically, the first time that fields were taken seriously was with Faraday's lines of force when describing the electric field. The gravitational field was then similarly described. === Newtonian gravitation === A classical field theory describing gravity is Newtonian gravitation, which describes the gravitational force as a mutual interaction between two masses. Any body with mass M is associated with a gravitational field g which describes its influence on other bodies with mass. The gravitational field of M at a point r in space corresponds to the ratio between force F that M exerts on a small or negligible test mass m located at r and the test mass itself: g ( r ) = F ( r ) m . {\displaystyle \mathbf {g} (\mathbf {r} )={\frac {\mathbf {F} (\mathbf {r} )}{m}}.} Stipulating that m is much smaller than M ensures that the presence of m has a negligible influence on the behavior of M. According to Newton's law of universal gravitation, F(r) is given by F ( r ) = − G M m r 2 r ^ , {\displaystyle \mathbf {F} (\mathbf {r} )=-{\frac {GMm}{r^{2}}}{\hat {\mathbf {r} }},} where r ^ {\displaystyle {\hat {\mathbf {r} }}} is a unit vector lying along the line joining M and m and pointing from M to m. Therefore, the gravitational field of M is g ( r ) = F ( r ) m = − G M r 2 r ^ . {\displaystyle \mathbf {g} (\mathbf {r} )={\frac {\mathbf {F} (\mathbf {r} )}{m}}=-{\frac {GM}{r^{2}}}{\hat {\mathbf {r} }}.} The experimental observation that inertial mass and gravitational mass are equal to an unprecedented level of accuracy leads to the identity that gravitational field strength is identical to the acceleration experienced by a particle. This is the starting point of the equivalence principle, which leads to general relativity. Because the gravitational force F is conservative, the gravitational field g can be rewritten in terms of the gradient of a scalar function, the gravitational potential Φ(r): g ( r ) = − ∇ Φ ( r ) . {\displaystyle \mathbf {g} (\mathbf {r} )=-\nabla \Phi (\mathbf {r} ).} === Electromagnetism === Michael Faraday first realized the importance of a field as a physical quantity, during his investigations into magnetism. He realized that electric and magnetic fields are not only fields of force which dictate the motion of particles, but also have an independent physical reality because they carry energy. These ideas eventually led to the creation, by James Clerk Maxwell, of the first unified field theory in physics with the introduction of equations for the electromagnetic field. The modern versions of these equations are called Maxwell's equations. ==== Electrostatics ==== A charged test particle with charge q experiences a force F based solely on its charge. We can similarly describe the electric field E so that F = qE. Using this and Coulomb's law tells us that the electric field due to a single charged particle is E = 1 4 π ϵ 0 q r 2 r ^ . {\displaystyle \mathbf {E} ={\frac {1}{4\pi \epsilon _{0}}}{\frac {q}{r^{2}}}{\hat {\mathbf {r} }}.} The electric field is conservative, and hence can be described by a scalar potential, V(r): E ( r ) = − ∇ V ( r ) . 
{\displaystyle \mathbf {E} (\mathbf {r} )=-\nabla V(\mathbf {r} ).} ==== Magnetostatics ==== A steady current I flowing along a path ℓ will create a field B, that exerts a force on nearby moving charged particles that is quantitatively different from the electric field force described above. The force exerted by I on a nearby charge q with velocity v is F ( r ) = q v × B ( r ) , {\displaystyle \mathbf {F} (\mathbf {r} )=q\mathbf {v} \times \mathbf {B} (\mathbf {r} ),} where B(r) is the magnetic field, which is determined from I by the Biot–Savart law: B ( r ) = μ 0 4 π ∫ I d ℓ × r ^ r 2 . {\displaystyle \mathbf {B} (\mathbf {r} )={\frac {\mu _{0}}{4\pi }}\int {\frac {Id{\boldsymbol {\ell }}\times {\hat {\mathbf {r} }}}{r^{2}}}.} The magnetic field is not conservative in general, and hence cannot usually be written in terms of a scalar potential. However, it can be written in terms of a vector potential, A(r): B ( r ) = ∇ × A ( r ) {\displaystyle \mathbf {B} (\mathbf {r} )={\boldsymbol {\nabla }}\times \mathbf {A} (\mathbf {r} )} ==== Electrodynamics ==== In general, in the presence of both a charge density ρ(r, t) and current density J(r, t), there will be both an electric and a magnetic field, and both will vary in time. They are determined by Maxwell's equations, a set of differential equations which directly relate E and B to ρ and J. Alternatively, one can describe the system in terms of its scalar and vector potentials V and A. A set of integral equations known as retarded potentials allow one to calculate V and A from ρ and J, and from there the electric and magnetic fields are determined via the relations E = − ∇ V − ∂ A ∂ t {\displaystyle \mathbf {E} =-{\boldsymbol {\nabla }}V-{\frac {\partial \mathbf {A} }{\partial t}}} B = ∇ × A . {\displaystyle \mathbf {B} ={\boldsymbol {\nabla }}\times \mathbf {A} .} At the end of the 19th century, the electromagnetic field was understood as a collection of two vector fields in space. Nowadays, one recognizes this as a single antisymmetric 2nd-rank tensor field in spacetime. === Gravitation in general relativity === Einstein's theory of gravity, called general relativity, is another example of a field theory. Here the principal field is the metric tensor, a symmetric 2nd-rank tensor field in spacetime. This replaces Newton's law of universal gravitation. === Waves as fields === Waves can be constructed as physical fields, due to their finite propagation speed and causal nature when a simplified physical model of an isolated closed system is set . They are also subject to the inverse-square law. For electromagnetic waves, there are optical fields, and terms such as near- and far-field limits for diffraction. In practice though, the field theories of optics are superseded by the electromagnetic field theory of Maxwell Gravity waves are waves in the surface of water, defined by a height field. === Fluid dynamics === Fluid dynamics has fields of pressure, density, and flow rate that are connected by conservation laws for energy and momentum. 
The mass continuity equation is a continuity equation, representing the conservation of mass ∂ ρ ∂ t + ∇ ⋅ ( ρ u ) = 0 {\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot (\rho \mathbf {u} )=0} and the Navier–Stokes equations represent the conservation of momentum in the fluid, found from Newton's laws applied to the fluid, ∂ ∂ t ( ρ u ) + ∇ ⋅ ( ρ u ⊗ u + p I ) = ∇ ⋅ τ + ρ b {\displaystyle {\frac {\partial }{\partial t}}(\rho \mathbf {u} )+\nabla \cdot (\rho \mathbf {u} \otimes \mathbf {u} +p\mathbf {I} )=\nabla \cdot {\boldsymbol {\tau }}+\rho \mathbf {b} } if the density ρ, pressure p, deviatoric stress tensor τ of the fluid, as well as external body forces b, are all given. The flow velocity u is the vector field to solve for. === Elasticity === Linear elasticity is defined in terms of constitutive equations between tensor fields, σ i j = L i j k l ε k l {\displaystyle \sigma _{ij}=L_{ijkl}\varepsilon _{kl}} where σ i j {\displaystyle \sigma _{ij}} are the components of the 3x3 Cauchy stress tensor, ε i j {\displaystyle \varepsilon _{ij}} the components of the 3x3 infinitesimal strain and L i j k l {\displaystyle L_{ijkl}} is the elasticity tensor, a fourth-rank tensor with 81 components (usually 21 independent components). === Thermodynamics and transport equations === Assuming that the temperature T is an intensive quantity, i.e., a single-valued, differentiable function of three-dimensional space (a scalar field), i.e., that T = T ( r ) {\displaystyle T=T(\mathbf {r} )} , then the temperature gradient is a vector field defined as ∇ T {\displaystyle \nabla T} . In thermal conduction, the temperature field appears in Fourier's law, q = − k ∇ T {\displaystyle \mathbf {q} =-k\nabla T} where q is the heat flux field and k the thermal conductivity. Temperature and pressure gradients are also important for meteorology. == Quantum fields == It is now believed that quantum mechanics should underlie all physical phenomena, so that a classical field theory should, at least in principle, permit a recasting in quantum mechanical terms; success yields the corresponding quantum field theory. For example, quantizing classical electrodynamics gives quantum electrodynamics. Quantum electrodynamics is arguably the most successful scientific theory; experimental data confirm its predictions to a higher precision (to more significant digits) than any other theory. The two other fundamental quantum field theories are quantum chromodynamics and the electroweak theory. In quantum chromodynamics, the color field lines are coupled at short distances by gluons, which are polarized by the field and line up with it. This effect increases within a short distance (around 1 fm from the vicinity of the quarks) making the color force increase within a short distance, confining the quarks within hadrons. As the field lines are pulled together tightly by gluons, they do not "bow" outwards as much as an electric field between electric charges. These three quantum field theories can all be derived as special cases of the so-called standard model of particle physics. General relativity, the Einsteinian field theory of gravity, has yet to be successfully quantized. However an extension, thermal field theory, deals with quantum field theory at finite temperatures, something seldom considered in quantum field theory. In BRST theory one deals with odd fields, e.g. Faddeev–Popov ghosts. There are different descriptions of odd classical fields both on graded manifolds and supermanifolds. 
As above with classical fields, it is possible to approach their quantum counterparts from a purely mathematical view using similar techniques as before. The equations governing the quantum fields are in fact PDEs (specifically, relativistic wave equations (RWEs)). Thus one can speak of Yang–Mills, Dirac, Klein–Gordon and Schrödinger fields as being solutions to their respective equations. A possible problem is that these RWEs can deal with complicated mathematical objects with exotic algebraic properties (e.g. spinors are not tensors, so may need calculus for spinor fields), but these in theory can still be subjected to analytical methods given appropriate mathematical generalization. == Field theory == Field theory usually refers to a construction of the dynamics of a field, i.e. a specification of how a field changes with time or with respect to other independent physical variables on which the field depends. Usually this is done by writing a Lagrangian or a Hamiltonian of the field, and treating it as a classical or quantum mechanical system with an infinite number of degrees of freedom. The resulting field theories are referred to as classical or quantum field theories. The dynamics of a classical field are usually specified by the Lagrangian density in terms of the field components; the dynamics can be obtained by using the action principle. It is possible to construct simple fields without any prior knowledge of physics using only mathematics from multivariable calculus, potential theory and partial differential equations (PDEs). For example, scalar PDEs might consider quantities such as amplitude, density and pressure fields for the wave equation and fluid dynamics; temperature/concentration fields for the heat/diffusion equations. Outside of physics proper (e.g., radiometry and computer graphics), there are even light fields. All these previous examples are scalar fields. Similarly for vectors, there are vector PDEs for displacement, velocity and vorticity fields in (applied mathematical) fluid dynamics, but vector calculus may now be needed in addition, being calculus for vector fields (as are these three quantities, and those for vector PDEs in general). More generally problems in continuum mechanics may involve for example, directional elasticity (from which comes the term tensor, derived from the Latin word for stretch), complex fluid flows or anisotropic diffusion, which are framed as matrix-tensor PDEs, and then require matrices or tensor fields, hence matrix or tensor calculus. The scalars (and hence the vectors, matrices and tensors) can be real or complex as both are fields in the abstract-algebraic/ring-theoretic sense. In a general setting, classical fields are described by sections of fiber bundles and their dynamics is formulated in the terms of jet manifolds (covariant classical field theory). In modern physics, the most often studied fields are those that model the four fundamental forces which one day may lead to the Unified Field Theory. === Symmetries of fields === A convenient way of classifying a field (classical or quantum) is by the symmetries it possesses. Physical symmetries are usually of two types: ==== Spacetime symmetries ==== Fields are often classified by their behaviour under transformations of spacetime. The terms used in this classification are: scalar fields (such as temperature) whose values are given by a single variable at each point of space. This value does not change under transformations of space. 
vector fields (such as the magnitude and direction of the force at each point in a magnetic field) which are specified by attaching a vector to each point of space. The components of this vector transform between themselves contravariantly under rotations in space. Similarly, a dual (or co-) vector field attaches a dual vector to each point of space, and the components of each dual vector transform covariantly. tensor fields, (such as the stress tensor of a crystal) specified by a tensor at each point of space. Under rotations in space, the components of the tensor transform in a more general way which depends on the number of covariant indices and contravariant indices. spinor fields (such as the Dirac spinor) arise in quantum field theory to describe particles with spin which transform like vectors except for one of their components; in other words, when one rotates a vector field 360 degrees around a specific axis, the vector field turns to itself; however, spinors would turn to their negatives in the same case. ==== Internal symmetries ==== Fields may have internal symmetries in addition to spacetime symmetries. In many situations, one needs fields which are a list of spacetime scalars: (φ1, φ2, ... φN). For example, in weather prediction these may be temperature, pressure, humidity, etc. In particle physics, the color symmetry of the interaction of quarks is an example of an internal symmetry, that of the strong interaction. Other examples are isospin, weak isospin, strangeness and any other flavour symmetry. If there is a symmetry of the problem, not involving spacetime, under which these components transform into each other, then this set of symmetries is called an internal symmetry. One may also make a classification of the charges of the fields under internal symmetries. === Statistical field theory === Statistical field theory attempts to extend the field-theoretic paradigm toward many-body systems and statistical mechanics. As above, it can be approached by the usual infinite number of degrees of freedom argument. Much like statistical mechanics has some overlap between quantum and classical mechanics, statistical field theory has links to both quantum and classical field theories, especially the former with which it shares many methods. One important example is mean field theory. === Continuous random fields === Classical fields as above, such as the electromagnetic field, are usually infinitely differentiable functions, but they are in any case almost always twice differentiable. In contrast, generalized functions are not continuous. When dealing carefully with classical fields at finite temperature, the mathematical methods of continuous random fields are used, because thermally fluctuating classical fields are nowhere differentiable. Random fields are indexed sets of random variables; a continuous random field is a random field that has a set of functions as its index set. In particular, it is often mathematically convenient to take a continuous random field to have a Schwartz space of functions as its index set, in which case the continuous random field is a tempered distribution. We can think about a continuous random field, in a (very) rough way, as an ordinary function that is ± ∞ {\displaystyle \pm \infty } almost everywhere, but such that when we take a weighted average of all the infinities over any finite region, we get a finite result. 
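A rough numerical caricature of this idea (the discretization and test function are invented for illustration, and merely stand in for the genuinely distributional object): white noise sampled on a finer and finer grid has pointwise values that grow without bound, while its weighted average against a fixed smooth function stays finite.

```python
import numpy as np

rng = np.random.default_rng(0)

for n in (10**2, 10**4, 10**6):
    dx = 1.0 / n
    x = (np.arange(n) + 0.5) * dx
    noise = rng.normal(scale=1 / np.sqrt(dx), size=n)     # discretized white noise
    phi = np.exp(-((x - 0.5) ** 2) / 0.02)                # a smooth test (weight) function
    pairing = np.sum(noise * phi) * dx                    # the weighted average <W, phi>
    print(n, float(np.max(np.abs(noise))), float(pairing))
    # max |noise| grows as the grid is refined; the pairing stays of order one
```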
The infinities are not well-defined; but the finite values can be associated with the functions used as the weight functions to get the finite values, and that can be well-defined. We can define a continuous random field well enough as a linear map from a space of functions into the real numbers. == See also == == Notes == == References == == Further reading == "Fields". Principles of Physical Science. Vol. 25 (15th ed.). 1994. p. 815 – via Encyclopædia Britannica (Macropaedia). Landau, Lev D. and Lifshitz, Evgeny M. (1971). Classical Theory of Fields (3rd ed.). London: Pergamon. ISBN 0-08-016019-0. Vol. 2 of the Course of Theoretical Physics. Jepsen, Kathryn (July 18, 2013). "Real talk: Everything is made of fields" (PDF). Symmetry Magazine. Archived from the original (PDF) on March 4, 2016. Retrieved June 9, 2015. == External links == Particle and Polymer Field Theories
Wikipedia/Field_(physics)
Theoretical physics is a branch of physics that employs mathematical models and abstractions of physical objects and systems to rationalize, explain, and predict natural phenomena. This is in contrast to experimental physics, which uses experimental tools to probe these phenomena. The advancement of science generally depends on the interplay between experimental studies and theory. In some cases, theoretical physics adheres to standards of mathematical rigour while giving little weight to experiments and observations. For example, while developing special relativity, Albert Einstein was concerned with the Lorentz transformation which left Maxwell's equations invariant, but was apparently uninterested in the Michelson–Morley experiment on Earth's drift through a luminiferous aether. Conversely, Einstein was awarded the Nobel Prize for explaining the photoelectric effect, previously an experimental result lacking a theoretical formulation. == Overview == A physical theory is a model of physical events. It is judged by the extent to which its predictions agree with empirical observations. The quality of a physical theory is also judged on its ability to make new predictions which can be verified by new observations. A physical theory differs from a mathematical theorem in that while both are based on some form of axioms, judgment of mathematical applicability is not based on agreement with any experimental results. A physical theory similarly differs from a mathematical theory, in the sense that the word "theory" has a different meaning in mathematical terms. The equation R i c = k g {\displaystyle \mathrm {Ric} =kg} , which defines an Einstein manifold, is used in general relativity to describe the curvature of spacetime. A physical theory involves one or more relationships between various measurable quantities. Archimedes realized that a ship floats by displacing its mass of water, and Pythagoras understood the relation between the length of a vibrating string and the musical tone it produces. Other examples include entropy as a measure of the uncertainty regarding the positions and motions of unseen particles and the quantum mechanical idea that (action and) energy are not continuously variable. Theoretical physics consists of several different approaches. In this regard, theoretical particle physics forms a good example. For instance: "phenomenologists" might employ (semi-) empirical formulas and heuristics to agree with experimental results, often without deep physical understanding. "Modelers" (also called "model-builders") often appear much like phenomenologists, but try to model speculative theories that have certain desirable features (rather than building on experimental data), or apply the techniques of mathematical modeling to physics problems. Some attempt to create approximate theories, called effective theories, because fully developed theories may be regarded as unsolvable or too complicated. Other theorists may try to unify, formalise, reinterpret or generalise extant theories, or create completely new ones altogether. Sometimes the vision provided by pure mathematical systems can provide clues to how a physical system might be modeled; e.g., the notion, due to Riemann and others, that space itself might be curved. Theoretical problems that need computational investigation are often the concern of computational physics.
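A small, hedged illustration of the kind of problem handed off to computational physics: the period of a pendulum swinging at large amplitude has no elementary closed-form expression, but it is easy to obtain numerically. The Python sketch below (the pendulum length, amplitude and tolerances are arbitrary illustrative choices, not values taken from this article) integrates the full nonlinear equation of motion and compares the result with the small-angle formula:

```python
import numpy as np
from scipy.integrate import solve_ivp

g, L = 9.81, 1.0                      # illustrative values (SI units)
theta0 = np.radians(120.0)            # large amplitude, where the small-angle formula fails

def pendulum(t, y):
    theta, omega = y
    return [omega, -(g / L) * np.sin(theta)]   # full nonlinear equation of motion

def passes_vertical(t, y):            # event: pendulum swings through theta = 0
    return y[0]
passes_vertical.terminal = True
passes_vertical.direction = -1

# Released from rest at theta0, the pendulum reaches theta = 0 after a quarter period.
sol = solve_ivp(pendulum, (0.0, 20.0), [theta0, 0.0],
                events=passes_vertical, rtol=1e-10, atol=1e-12)
period = 4.0 * sol.t_events[0][0]

print(f"numerically computed period : {period:.6f} s")
print(f"small-angle 2*pi*sqrt(L/g)  : {2 * np.pi * np.sqrt(L / g):.6f} s")
```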
Theoretical advances may consist in setting aside old, incorrect paradigms (e.g., aether theory of light propagation, caloric theory of heat, burning consisting of evolving phlogiston, or astronomical bodies revolving around the Earth) or may be an alternative model that provides answers that are more accurate or that can be more widely applied. In the latter case, a correspondence principle will be required to recover the previously known result. Sometimes though, advances may proceed along different paths. For example, an essentially correct theory may need some conceptual or factual revisions; atomic theory, first postulated millennia ago (by several thinkers in Greece and India), and the two-fluid theory of electricity are two cases in point. However, an exception to all the above is the wave–particle duality, a theory combining aspects of different, opposing models via the Bohr complementarity principle. Physical theories become accepted if they are able to make correct predictions and no (or few) incorrect ones. The theory should have, at least as a secondary objective, a certain economy and elegance (compare to mathematical beauty), a notion sometimes called "Occam's razor" after the medieval English philosopher William of Occam (or Ockham), according to which the simpler of two theories that describe the same matter just as adequately is preferred (but conceptual simplicity may mean mathematical complexity). Theories are also more likely to be accepted if they connect a wide range of phenomena. Testing the consequences of a theory is part of the scientific method. Physical theories can be grouped into three categories: mainstream theories, proposed theories and fringe theories. == History == Theoretical physics began at least 2,300 years ago, with pre-Socratic philosophy, and was continued by Plato and Aristotle, whose views held sway for a millennium. During the rise of medieval universities, the only acknowledged intellectual disciplines were the seven liberal arts: the Trivium of grammar, logic, and rhetoric, and the Quadrivium of arithmetic, geometry, music and astronomy. During the Middle Ages and Renaissance, the concept of experimental science, the counterpoint to theory, began with scholars such as Ibn al-Haytham and Francis Bacon. As the Scientific Revolution gathered pace, the concepts of matter, energy, space, time and causality slowly began to acquire the form we know today, and other sciences spun off from the rubric of natural philosophy. Thus began the modern era of theory with the Copernican paradigm shift in astronomy, soon followed by Johannes Kepler's expressions for planetary orbits, which summarized the meticulous observations of Tycho Brahe; the works of these men (alongside Galileo's) can perhaps be considered to constitute the Scientific Revolution. The great push toward the modern concept of explanation started with Galileo, one of the few physicists who was both a consummate theoretician and a great experimentalist. The analytic geometry and mechanics of Descartes were incorporated into the calculus and mechanics of Isaac Newton, another theoretician/experimentalist of the highest order, writing Principia Mathematica. It contained a grand synthesis of the work of Copernicus, Galileo and Kepler, as well as Newton's theories of mechanics and gravitation, which held sway as worldviews until the early 20th century.
Simultaneously, progress was also made in optics (in particular colour theory and the ancient science of geometrical optics), courtesy of Newton, Descartes and the Dutchmen Snell and Huygens. In the 18th and 19th centuries Joseph-Louis Lagrange, Leonhard Euler and William Rowan Hamilton would extend the theory of classical mechanics considerably. They picked up the interactive intertwining of mathematics and physics begun two millennia earlier by Pythagoras. Among the great conceptual achievements of the 19th and 20th centuries were the consolidation of the idea of energy (as well as its global conservation) by the inclusion of heat, electricity and magnetism, and then light. The laws of thermodynamics, and most importantly the introduction of the singular concept of entropy, began to provide a macroscopic explanation for the properties of matter. Statistical mechanics (followed by statistical physics and quantum statistical mechanics) emerged as an offshoot of thermodynamics late in the 19th century. Another important event in the 19th century was the discovery of electromagnetic theory, unifying the previously separate phenomena of electricity, magnetism and light. The pillars of modern physics, and perhaps the most revolutionary theories in the history of physics, have been relativity theory and quantum mechanics. Newtonian mechanics was subsumed under special relativity, and Newton's gravity was given a kinematic explanation by general relativity. Quantum mechanics led to an understanding of blackbody radiation (which, indeed, was an original motivation for the theory) and of anomalies in the specific heats of solids, and finally to an understanding of the internal structures of atoms and molecules. Quantum mechanics soon gave way to the formulation of quantum field theory (QFT), begun in the late 1920s. In the aftermath of World War II, further progress brought much renewed interest in QFT, which had stagnated since the early efforts. The same period also saw fresh attacks on the problems of superconductivity and phase transitions, as well as the first applications of QFT in the area of theoretical condensed matter. The 1960s and 70s saw the formulation of the Standard Model of particle physics using QFT and progress in condensed matter physics (theoretical foundations of superconductivity and critical phenomena, among others), in parallel with the application of relativity to problems in astronomy and cosmology. All of these achievements depended on theoretical physics as a moving force both to suggest experiments and to consolidate results, often by ingenious application of existing mathematics, or, as in the case of Descartes and Newton (with Leibniz), by inventing new mathematics. Fourier's studies of heat conduction led to a new branch of mathematics: infinite, orthogonal series. Modern theoretical physics attempts to unify theories and explain phenomena in further attempts to understand the Universe, from the cosmological to the elementary particle scale. Where experimentation cannot be done, theoretical physics still tries to advance through the use of mathematical models. == Mainstream theories == Mainstream theories (sometimes referred to as central theories) are the body of knowledge of both factual and scientific views and meet the usual scientific tests of repeatability, consistency with existing well-established science, and experimentation.
There do exist mainstream theories that are generally accepted theories based solely upon their effects explaining a wide variety of data, although the detection, explanation, and possible composition are subjects of debate. === Examples === == Proposed theories == The proposed theories of physics are usually relatively new theories which deal with the study of physics which include scientific approaches, means for determining the validity of models and new types of reasoning used to arrive at the theory. However, some proposed theories include theories that have been around for decades and have eluded methods of discovery and testing. Proposed theories can include fringe theories in the process of becoming established (and, sometimes, gaining wider acceptance). Proposed theories usually have not been tested. In addition to the theories like those listed below, there are also different interpretations of quantum mechanics, which may or may not be considered different theories since it is debatable whether they yield different predictions for physical experiments, even in principle. For example, AdS/CFT correspondence, Chern–Simons theory, graviton, magnetic monopole, string theory, theory of everything. == Fringe theories == Fringe theories include any new area of scientific endeavor in the process of becoming established and some proposed theories. It can include speculative sciences. This includes physics fields and physical theories presented in accordance with known evidence, and a body of associated predictions have been made according to that theory. Some fringe theories go on to become a widely accepted part of physics. Other fringe theories end up being disproven. Some fringe theories are a form of protoscience and others are a form of pseudoscience. The falsification of the original theory sometimes leads to reformulation of the theory. === Examples === == Thought experiments vs real experiments == "Thought" experiments are situations created in one's mind, asking a question akin to "suppose you are in this situation, assuming such is true, what would follow?". They are usually created to investigate phenomena that are not readily experienced in every-day situations. Famous examples of such thought experiments are Schrödinger's cat, the EPR thought experiment, simple illustrations of time dilation, and so on. These usually lead to real experiments designed to verify that the conclusion (and therefore the assumptions) of the thought experiments are correct. The EPR thought experiment led to the Bell inequalities, which were then tested to various degrees of rigor, leading to the acceptance of the current formulation of quantum mechanics and probabilism as a working hypothesis. == See also == List of theoretical physicists Philosophy of physics Symmetry in quantum mechanics Timeline of developments in theoretical physics Double field theory == Notes == == References == == Further reading == Physical Sciences. Encyclopædia Britannica (Macropaedia). Vol. 25 (15th ed.). 1994. Duhem, Pierre. La théorie physique - Son objet, sa structure, (in French). 2nd edition - 1914. English translation: The physical theory - its purpose, its structure. Republished by Joseph Vrin philosophical bookstore (1981), ISBN 2711602214. Feynman, et al. The Feynman Lectures on Physics (3 vol.). First edition: Addison–Wesley, (1964, 1966). Bestselling three-volume textbook covering the span of physics. Reference for both (under)graduate student and professional researcher alike. Landau et al. 
Course of Theoretical Physics. Famous series of books dealing with theoretical concepts in physics covering 10 volumes, translated into many languages and reprinted over many editions. Often known simply as "Landau and Lifschits" or "Landau-Lifschits" in the literature. Longair, MS. Theoretical Concepts in Physics: An Alternative View of Theoretical Reasoning in Physics. Cambridge University Press; 2d edition (4 Dec 2003). ISBN 052152878X. ISBN 978-0521528788 Planck, Max (1909). Eight Lectures on theoretical physics. Library of Alexandria. ISBN 1465521887, ISBN 9781465521880. A set of lectures given in 1909 at Columbia University. Sommerfeld, Arnold. Vorlesungen über theoretische Physik (Lectures on Theoretical Physics); German, 6 volumes. A series of lessons from a master educator of theoretical physicists. == External links == MIT Center for Theoretical Physics How to become a GOOD Theoretical Physicist, a website made by Gerard 't Hooft
Wikipedia/Physical_theory
Physics is a natural science that studies matter and the forces that act upon it. Physics may also refer to: == Journals and magazines == Physics (American Physical Society journal), former name of the Journal of Applied Physics, published by the American Physical Society Physics (Chinese Physical Society journal), or Wuli , published by the Chinese Physical Society Physics (magazine), published by the American Physical Society Physics (MDPI journal), published by MDPI Physics Physique Физика, a small journal that ran from 1964 to 1968 published by Physics Publishing, often simply referred to as Physics == Other uses == Physics (Aristotle), a key text in the philosophy of Aristotle Physics (band), an American rock music group The Physics (group), an American hip hop group Aristotelian physics, the natural science described in the works of Aristotle Theoretical physics PhysX, a physics engine for computer games made by Nvidia == See also == All pages with titles beginning with physics All pages with titles containing physics Physic (disambiguation) Psychic (disambiguation)
Wikipedia/Physics_(disambiguation)
Superstring theory is an attempt to explain all of the particles and fundamental forces of nature in one theory by modeling them as vibrations of tiny supersymmetric strings. 'Superstring theory' is a shorthand for supersymmetric string theory because, unlike bosonic string theory, it is the version of string theory that accounts for both fermions and bosons and incorporates supersymmetry to model gravity. Since the second superstring revolution, the five superstring theories (Type I, Type IIA, Type IIB, HO and HE) are regarded as different limits of a single theory tentatively called M-theory. == Background == One of the deepest open problems in theoretical physics is formulating a theory of quantum gravity. Such a theory incorporates both the theory of general relativity, which describes gravitation and applies to large-scale structures, and quantum mechanics or more specifically quantum field theory, which describes the other three fundamental forces that act on the atomic scale. Quantum field theory, in particular the Standard Model, is currently the most successful theory to describe fundamental forces, but while computing physical quantities of interest, naïvely one obtains infinite values. Physicists developed the technique of renormalization to 'eliminate these infinities' and obtain finite values which can be experimentally tested. This technique works for three of the four fundamental forces: electromagnetism, the strong force and the weak force, but does not work for gravity, which is non-renormalizable. Development of a quantum theory of gravity therefore requires different means than those used for the other forces. According to superstring theory, or more generally string theory, the fundamental constituents of reality are strings with radius on the order of the Planck length (about 10^−33 cm). An appealing feature of string theory is that fundamental particles can be viewed as excitations of the string. The tension in a string is on the order of the Planck force (10^44 newtons). The graviton (the proposed messenger particle of the gravitational force) is predicted by the theory to be a string with wave amplitude zero. == History == Investigating how a string theory may include fermions in its spectrum led to the invention of supersymmetry (in the West) in 1971, a mathematical transformation between bosons and fermions. String theories that include fermionic vibrations are now known as "superstring theories". Since its beginnings in the seventies and through the combined efforts of many different researchers, superstring theory has developed into a broad and varied subject with connections to quantum gravity, particle and condensed matter physics, cosmology, and pure mathematics. == Absence of physical evidence == Superstring theory is based on supersymmetry. No supersymmetric particles have been discovered, and initial investigations, carried out in 2011 at the Large Hadron Collider (LHC) and in 2006 at the Tevatron, have excluded some of the possible mass ranges. For instance, the mass limits set on the squarks of the Minimal Supersymmetric Standard Model have reached up to 1.1 TeV, and those on gluinos up to 500 GeV. No report suggesting large extra dimensions has come from the LHC. So far, no principle has been found that limits the number of vacua in the concept of a landscape of vacua. Some particle physicists became disappointed by the lack of experimental verification of supersymmetry, and some have already discarded it.
Jon Butterworth at University College London said that we had no sign of supersymmetry, even in higher energy regions, excluding the superpartners of the top quark up to a few TeV. Ben Allanach at the University of Cambridge states that if we do not discover any new particles in the next run of the LHC, then we can say that supersymmetry is unlikely to be discovered at CERN in the foreseeable future. == Extra dimensions == Our physical space is observed to have three large spatial dimensions and, along with time, is a boundless 4-dimensional continuum known as spacetime. However, nothing prevents a theory from including more than 4 dimensions. In the case of string theory, consistency requires spacetime to have 10 dimensions (3D regular space + 1 time dimension + 6D hyperspace; according to F-theory, time need not be one-dimensional but may itself be multi-dimensional). The fact that we see only 3 dimensions of space can be explained by one of two mechanisms: either the extra dimensions are compactified on a very small scale, or else our world may live on a 3-dimensional submanifold corresponding to a brane, to which all known particles, apart from gravity, would be restricted. If the extra dimensions are compactified, then the extra six dimensions must be in the form of a Calabi–Yau manifold. Within the more complete framework of M-theory, they would have to take the form of a G2 manifold. A particular exact symmetry of string/M-theory called T-duality (which exchanges momentum modes for winding number and sends compact dimensions of radius R to radius 1/R) has led to the discovery of equivalences between different Calabi–Yau manifolds called mirror symmetry. Superstring theory is not the first theory to propose extra spatial dimensions. It can be seen as building upon the Kaluza–Klein theory, which proposed a 4+1 dimensional (5D) theory of gravity. When compactified on a circle, the gravity in the extra dimension precisely describes electromagnetism from the perspective of the 3 remaining large space dimensions. Thus the original Kaluza–Klein theory is a prototype for the unification of gauge and gravity interactions, at least at the classical level; however, it is known to be insufficient to describe nature for a variety of reasons (missing weak and strong forces, lack of parity violation, etc.). A more complex compact geometry is needed to reproduce the known gauge forces. Also, obtaining a consistent, fundamental quantum theory requires the upgrade to string theory, not just the extra dimensions. == Number of superstring theories == Theoretical physicists were troubled by the existence of five separate superstring theories. A possible solution to this dilemma was proposed at the beginning of what is called the second superstring revolution in the 1990s: the five string theories might be different limits of a single underlying theory, called M-theory. This remains a conjecture. The five consistent superstring theories are: The type I string has one supersymmetry in the ten-dimensional sense (16 supercharges). This theory is special in the sense that it is based on unoriented open and closed strings, while the rest are based on oriented closed strings. The type II string theories have two supersymmetries in the ten-dimensional sense (32 supercharges). There are actually two kinds of type II strings called type IIA and type IIB. They differ mainly in the fact that the IIA theory is non-chiral (parity conserving) while the IIB theory is chiral (parity violating).
The heterotic string theories are based on a peculiar hybrid of a type I superstring and a bosonic string. There are two kinds of heterotic strings differing in their ten-dimensional gauge groups: the heterotic E8×E8 string and the heterotic SO(32) string. (The name heterotic SO(32) is slightly inaccurate since among the SO(32) Lie groups, string theory singles out a quotient Spin(32)/Z2 that is not equivalent to SO(32).) Chiral gauge theories can be inconsistent due to anomalies. This happens when certain one-loop Feynman diagrams cause a quantum mechanical breakdown of the gauge symmetry. The anomalies were canceled out via the Green–Schwarz mechanism. Even though there are only five superstring theories, making detailed predictions for real experiments requires information about exactly what physical configuration the theory is in. This considerably complicates efforts to test string theory because there is an astronomically high number (10^500 or more) of configurations that meet some of the basic requirements to be consistent with our world. Along with the extreme remoteness of the Planck scale, this is the other major reason it is hard to test superstring theory. Another approach to the number of superstring theories refers to the mathematical structure called composition algebra. In the findings of abstract algebra there are just seven composition algebras over the field of real numbers. In 1990 physicists R. Foot and G.C. Joshi in Australia stated that "the seven classical superstring theories are in one-to-one correspondence to the seven composition algebras". == Integrating general relativity and quantum mechanics == General relativity typically deals with situations involving large mass objects in fairly large regions of spacetime (when it is applied to small distances it often conflicts with quantum mechanics) whereas quantum mechanics is generally reserved for scenarios at the atomic scale (small spacetime regions). The two are very rarely used together, and the most common case that combines them is in the study of black holes. Because black holes have peak density, or the maximum amount of matter possible in a space, packed into a very small area, the two theories must be used together to predict conditions in such places. Yet, when used together, the equations fall apart, spitting out impossible answers, such as imaginary distances and less than one dimension. The major problem with their incongruence is that, at lengths near the Planck scale (a fundamentally small unit of length), general relativity predicts a smooth, flowing surface, while quantum mechanics predicts a random, warped surface, and the two pictures are nowhere near compatible. Superstring theory resolves this issue, replacing the classical idea of point particles with strings. These strings have an average diameter of the Planck length, with extremely small variances, which completely ignores the quantum mechanical predictions of Planck-scale length dimensional warping. Also, these surfaces can be mapped as branes. These branes can be viewed as objects with a morphism between them. In this case, the morphism will be the state of a string that stretches between brane A and brane B. Singularities are avoided because the observed consequences of "Big Crunches" never reach zero size. In fact, should the universe begin a "big crunch" sort of process, string theory dictates that the universe could never be smaller than the size of one string, at which point it would actually begin expanding.
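To give a sense of the scales invoked above, the Planck length and the Planck force quoted earlier in this article follow directly from the fundamental constants ħ, G and c. A minimal Python sketch (the constant values are standard CODATA-style approximations; the rounding is purely illustrative):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # Newton's gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light, m/s

planck_length = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m, i.e. ~1.6e-33 cm
planck_force  = c**4 / G                     # ~1.2e44 N

print(f"Planck length: {planck_length:.3e} m  ({planck_length * 100:.1e} cm)")
print(f"Planck force : {planck_force:.3e} N")
```

The outputs match the order-of-magnitude figures quoted in the Background section (about 10^−33 cm for the string size and about 10^44 newtons for the string tension).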
== Mathematics == === D-branes === D-branes are membrane-like objects in 10D string theory. They can be thought of as occurring as a result of a Kaluza–Klein compactification of 11D M-theory that contains membranes. Because compactification of a geometric theory produces extra vector fields the D-branes can be included in the action by adding an extra U(1) vector field to the string action. ∂ z → ∂ z + i A z ( z , z ¯ ) {\displaystyle \partial _{z}\rightarrow \partial _{z}+iA_{z}(z,{\overline {z}})} In type I open string theory, the ends of open strings are always attached to D-brane surfaces. A string theory with more gauge fields such as SU(2) gauge fields would then correspond to the compactification of some higher-dimensional theory above 11 dimensions, which is not thought to be possible to date. Furthermore, the tachyons attached to the D-branes show the instability of those D-branes with respect to the annihilation. The tachyon total energy is (or reflects) the total energy of the D-branes. === Why five superstring theories? === For a 10 dimensional supersymmetric theory we are allowed a 32-component Majorana spinor. This can be decomposed into a pair of 16-component Majorana-Weyl (chiral) spinors. There are then various ways to construct an invariant depending on whether these two spinors have the same or opposite chiralities: The heterotic superstrings come in two types SO(32) and E8×E8 as indicated above and the type I superstrings include open strings. == Beyond superstring theory == It is conceivable that the five superstring theories are approximated to a theory in higher dimensions possibly involving membranes. Because the action for this involves quartic terms and higher so is not Gaussian, the functional integrals are very difficult to solve and so this has confounded the top theoretical physicists. Edward Witten has popularised the concept of a theory in 11 dimensions, called M-theory, involving membranes interpolating from the known symmetries of superstring theory. It may turn out that there exist membrane models or other non-membrane models in higher dimensions—which may become acceptable when we find new unknown symmetries of nature, such as noncommutative geometry. It is thought, however, that 16 is probably the maximum since SO(16) is a maximal subgroup of E8, the largest exceptional Lie group, and also is more than large enough to contain the Standard Model. Quartic integrals of the non-functional kind are easier to solve so there is hope for the future. This is the series solution, which is always convergent when a is non-zero and negative: ∫ − ∞ ∞ exp ⁡ ( a x 4 + b x 3 + c x 2 + d x + f ) d x = e f ∑ n , m , p = 0 ∞ b 4 n ( 4 n ) ! c 2 m ( 2 m ) ! d 4 p ( 4 p ) ! Γ ( 3 n + m + p + 1 4 ) a 3 n + m + p + 1 4 {\displaystyle \int _{-\infty }^{\infty }\exp({ax^{4}+bx^{3}+cx^{2}+dx+f})\,dx=e^{f}\sum _{n,m,p=0}^{\infty }{\frac {b^{4n}}{(4n)!}}{\frac {c^{2m}}{(2m)!}}{\frac {d^{4p}}{(4p)!}}{\frac {\Gamma (3n+m+p+{\frac {1}{4}})}{a^{3n+m+p+{\frac {1}{4}}}}}} In the case of membranes the series would correspond to sums of various membrane interactions that are not seen in string theory. === Compactification === Investigating theories of higher dimensions often involves looking at the 10 dimensional superstring theory and interpreting some of the more obscure results in terms of compactified dimensions. For example, D-branes are seen as compactified membranes from 11D M-theory. 
Theories of higher dimensions such as 12D F-theory and beyond produce other effects, such as gauge terms higher than U(1). The components of the extra vector fields (A) in the D-brane actions can be thought of as extra coordinates (X) in disguise. However, the known symmetries including supersymmetry currently restrict the spinors to 32-components—which limits the number of dimensions to 11 (or 12 if you include two time dimensions.) Some physicists (e.g., John Baez et al.) have speculated that the exceptional Lie groups E6, E7 and E8 having maximum orthogonal subgroups SO(10), SO(12) and SO(16) may be related to theories in 10, 12 and 16 dimensions; 10 dimensions corresponding to string theory and the 12 and 16 dimensional theories being yet undiscovered but would be theories based on 3-branes and 7-branes, respectively. However, this is a minority view within the string community. Since E7 is in some sense F4 quaternified and E8 is F4 octonified, the 12 and 16 dimensional theories, if they did exist, may involve the noncommutative geometry based on the quaternions and octonions, respectively. From the above discussion, it can be seen that physicists have many ideas for extending superstring theory beyond the current 10 dimensional theory, but so far all have been unsuccessful. === Kac–Moody algebras === Since strings can have an infinite number of modes, the symmetry used to describe string theory is based on infinite dimensional Lie algebras. Some Kac–Moody algebras that have been considered as symmetries for M-theory have been E10 and E11 and their supersymmetric extensions. == See also == AdS/CFT correspondence dS/CFT correspondence Grand unification theory List of string theory topics String field theory == References == == Cited sources == Polchinski, Joseph (1998a). String Theory Vol. 1: An Introduction to the Bosonic String. Cambridge University Press. ISBN 978-0-521-63303-1. Polchinski, Joseph (1998b). String Theory Vol. 2: Superstring Theory and Beyond. Cambridge University Press. ISBN 978-0-521-63304-8.
Wikipedia/Superstring_theory
Vacuum energy is an underlying background energy that exists in space throughout the entire universe. The vacuum energy is a special case of zero-point energy that relates to the quantum vacuum. The effects of vacuum energy can be experimentally observed in various phenomena such as spontaneous emission, the Casimir effect, and the Lamb shift, and are thought to influence the behavior of the Universe on cosmological scales. Using the upper limit of the cosmological constant, the vacuum energy of free space has been estimated to be 10^−9 joules (10^−2 ergs), or ~5 GeV per cubic meter. However, in quantum electrodynamics, consistency with the principle of Lorentz covariance and with the magnitude of the Planck constant suggests a much larger value of 10^113 joules per cubic meter. This huge discrepancy is known as the cosmological constant problem or, colloquially, the "vacuum catastrophe." == Origin == Quantum field theory states that all fundamental fields, such as the electromagnetic field, must be quantized at every point in space. A field in physics may be envisioned as if space were filled with interconnected vibrating balls and springs, and the strength of the field is like the displacement of a ball from its rest position. The theory requires "vibrations" in, or more accurately changes in the strength of, such a field to propagate as per the appropriate wave equation for the particular field in question. The second quantization of quantum field theory requires that each such ball–spring combination be quantized, that is, that the strength of the field be quantized at each point in space. Canonically, if the field at each point in space is a simple harmonic oscillator, its quantization places a quantum harmonic oscillator at each point. Excitations of the field correspond to the elementary particles of particle physics. Thus, according to the theory, even the vacuum has a vastly complex structure and all calculations of quantum field theory must be made in relation to this model of the vacuum. The theory considers the vacuum to implicitly have the same properties as a particle, such as spin or polarization in the case of light, energy, and so on. According to the theory, most of these properties cancel out on average leaving the vacuum empty in the literal sense of the word. One important exception, however, is the vacuum energy or the vacuum expectation value of the energy. The quantization of a simple harmonic oscillator requires the lowest possible energy, or zero-point energy, of such an oscillator to be E = 1 2 ℏ ω {\displaystyle {E}={\tfrac {1}{2}}\hbar \omega \ } Summing over all possible oscillators at all points in space gives an infinite quantity. To remove this infinity, one may argue that only differences in energy are physically measurable, much as the concept of potential energy has been treated in classical mechanics for centuries. This argument is the underpinning of the theory of renormalization. In all practical calculations, this is how the infinity is handled. Vacuum energy can also be thought of in terms of virtual particles (also known as vacuum fluctuations) which are created and destroyed out of the vacuum. These particles are always created out of the vacuum in particle–antiparticle pairs, which in most cases shortly annihilate each other and disappear. However, these particles and antiparticles may interact with others before disappearing, a process which can be mapped using Feynman diagrams.
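A rough order-of-magnitude sketch of the divergence described above: summing the zero-point energies ħω/2 of all field modes up to some maximum wavenumber gives an energy density that grows as the fourth power of the cutoff. The Python snippet below assumes a massless field and an arbitrary hard cutoff at the inverse Planck length; both the cutoff choice and the dropped factors of order unity are illustrative conventions, not results stated in this article:

```python
import math

hbar = 1.054571817e-34      # J*s
c    = 2.99792458e8         # m/s
G    = 6.67430e-11          # m^3 kg^-1 s^-2

l_planck = math.sqrt(hbar * G / c**3)      # Planck length, m
k_max = 1.0 / l_planck                     # hard momentum cutoff (a choice of convention)

# rho = integral of d^3k/(2*pi)^3 * (hbar*c*k/2) from 0 to k_max
#     = hbar * c * k_max**4 / (16 * pi**2)
rho = hbar * c * k_max**4 / (16 * math.pi**2)
print(f"naive vacuum energy density ~ {rho:.1e} J/m^3")
# Roughly 1e111 J/m^3 with this cutoff; other cutoff conventions push the
# estimate toward the ~1e113 figure quoted above, vastly larger than the
# observed ~1e-9 J/m^3.
```

Whatever the precise convention, the estimate sits dozens of orders of magnitude above the observed value, which is the discrepancy referred to as the vacuum catastrophe.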
Note that this virtual-particle method of computing vacuum energy is mathematically equivalent to having a quantum harmonic oscillator at each point and, therefore, suffers from the same renormalization problems. Additional contributions to the vacuum energy come from spontaneous symmetry breaking in quantum field theory. == Implications == Vacuum energy has a number of consequences. In 1948, Dutch physicists Hendrik B. G. Casimir and Dirk Polder predicted the existence of a tiny attractive force between closely placed metal plates due to resonances in the vacuum energy in the space between them. This is now known as the Casimir effect and has since been extensively experimentally verified. It is therefore believed that the vacuum energy is "real" in the same sense that more familiar conceptual objects such as electrons, magnetic fields, etc., are real. However, alternative explanations for the Casimir effect have since been proposed. Other predictions are harder to verify. Vacuum fluctuations are always created as particle–antiparticle pairs. The creation of these virtual particles near the event horizon of a black hole has been hypothesized by physicist Stephen Hawking to be a mechanism for the eventual "evaporation" of black holes. If one of the pair is pulled into the black hole before this, then the other particle becomes "real" and energy/mass is essentially radiated into space from the black hole. This loss is cumulative and could result in the black hole's disappearance over time. The time required is dependent on the mass of the black hole (the equations indicate that the smaller the black hole, the more rapidly it evaporates) but could be on the order of 10^60 years for large solar-mass black holes. The vacuum energy also has important consequences for physical cosmology. General relativity predicts that energy is equivalent to mass, and therefore, if the vacuum energy is "really there", it should exert a gravitational force. Essentially, a non-zero vacuum energy is expected to contribute to the cosmological constant, which affects the expansion of the universe. == Field strength of vacuum energy == The field strength of vacuum energy is a concept proposed in a theoretical study that explores the nature of the vacuum and its relationship to gravitational interactions. The study derived a mathematical framework that uses the field strength of vacuum energy as an indicator of the bulk (spacetime) resistance to localized curvature. It illustrates the association of the field strength of vacuum energy with the curvature of the background, a concept that challenges the traditional understanding of gravity and suggests that the gravitational constant, G, may not be a universal constant, but rather a parameter dependent on the field strength of vacuum energy. Determination of the value of G has been a topic of extensive research, with numerous experiments conducted over the years in an attempt to measure its precise value. These experiments, often employing high-precision techniques, have aimed to provide accurate measurements of G and establish a consensus on its exact value. However, the outcomes of these experiments have shown significant inconsistencies, making it difficult to reach a definitive conclusion regarding the value of G. This lack of consensus has puzzled scientists and has prompted calls for alternative explanations.
To test the theoretical predictions regarding the field strength of vacuum energy, specific experimental conditions involving the position of the moon are recommended in the theoretical study. These conditions aim to achieve consistent outcomes in precision measurements of G. The ultimate goal of such experiments is to either falsify or provide confirmations to the proposed theoretical framework. The significance of exploring the field strength of vacuum energy lies in its potential to revolutionize our understanding of gravity and its interactions. == History == In 1934, Georges Lemaître used an unusual perfect-fluid equation of state to interpret the cosmological constant as due to vacuum energy. In 1948, the Casimir effect provided an experimental method for a verification of the existence of vacuum energy; in 1955, however, Evgeny Lifshitz offered a different origin for the Casimir effect. In 1957, Lee and Yang proved the concepts of broken symmetry and parity violation, for which they won the Nobel prize. In 1973, Edward Tryon proposed the zero-energy universe hypothesis: that the Universe may be a large-scale quantum-mechanical vacuum fluctuation where positive mass–energy is balanced by negative gravitational potential energy. During the 1980s, there were many attempts to relate the fields that generate the vacuum energy to specific fields that were predicted by attempts at a Grand Unified Theory and to use observations of the Universe to confirm one or another version. However, the exact nature of the particles (or fields) that generate vacuum energy, with a density such as that required by inflation theory, remains a mystery. == Vacuum energy in fiction == Arthur C. Clarke's novel The Songs of Distant Earth features a starship powered by a "quantum drive" based on aspects of this theory. In the sci-fi television/film franchise Stargate, a Zero Point Module (ZPM) is a power source that extracts zero-point energy from a micro parallel universe. The book Star Trek: Deep Space Nine Technical Manual describes the operating principle of the so-called quantum torpedo. In this fictional weapon, an antimatter reaction is used to create a multi-dimensional membrane in a vacuum that releases at its decomposition more energy than was needed to produce it. The missing energy is removed from the vacuum. Usually about twice as much energy is released in the explosion as would correspond to the initial antimatter matter annihilation. In the video game Half-Life 2, the item generally known as the "gravity gun" is referred to as both the "zero point field energy manipulator" and the "zero point energy field manipulator." == See also == Cosmic microwave background Dark energy False vacuum Normal ordering Quantum fluctuation Sunyaev–Zeldovich effect Vacuum state == References == == External articles and references == Free PDF copy of The Structured Vacuum – thinking about nothing by Johann Rafelski and Berndt Muller (1985); ISBN 3-87144-889-3. Saunders, S., & Brown, H. R. (1991). The Philosophy of Vacuum. Oxford [England]: Clarendon Press. Poincaré Seminar, Duplantier, B., & Rivasseau, V. (2003). "Poincaré Seminar 2002: vacuum energy-renormalization". Progress in mathematical physics, v. 30. Basel: Birkhäuser Verlag. Futamase & Yoshida Possible measurement of vacuum energy. Study of Vacuum Energy Physics for Breakthrough Propulsion 2004, NASA Glenn Technical Reports Server (PDF, 57 pages, Retrieved 2013-09-18).
Wikipedia/Vacuum_energy
Nature Physics is a monthly peer-reviewed scientific journal published by Nature Portfolio. It was first published in October 2005 (volume 1, issue 1). The chief editor is David Abergel. == Scope == Nature Physics publishes both pure and applied research from all areas of physics. Subject areas covered by the journal include quantum mechanics, condensed-matter physics, optics, thermodynamics, particle physics, and biophysics. == Abstracting and indexing == The journal is indexed in the following databases: Chemical Abstracts Service – CASSI Science Citation Index Science Citation Index Expanded Current Contents – Physical, Chemical & Earth Sciences According to the Journal Citation Reports, the journal has a 2021 impact factor of 19.684, ranking it fourth out of 86 journals in the category "Physics, Multidisciplinary". == References == == External links == Official website
Wikipedia/Nature_Physics
In mathematics, the Hodge conjecture is a major unsolved problem in algebraic geometry and complex geometry that relates the algebraic topology of a non-singular complex algebraic variety to its subvarieties. In simple terms, the Hodge conjecture asserts that the basic topological information like the number of holes in certain geometric spaces, complex algebraic varieties, can be understood by studying the possible nice shapes sitting inside those spaces, which look like zero sets of polynomial equations. The latter objects can be studied using algebra and the calculus of analytic functions, and this allows one to indirectly understand the broad shape and structure of often higher-dimensional spaces which can not be otherwise easily visualized. More specifically, the conjecture states that certain de Rham cohomology classes are algebraic; that is, they are sums of Poincaré duals of the homology classes of subvarieties. It was formulated by the Scottish mathematician William Vallance Douglas Hodge as a result of a work in between 1930 and 1940 to enrich the description of de Rham cohomology to include extra structure that is present in the case of complex algebraic varieties. It received little attention before Hodge presented it in an address during the 1950 International Congress of Mathematicians, held in Cambridge, Massachusetts. The Hodge conjecture is one of the Clay Mathematics Institute's Millennium Prize Problems, with a prize of $1,000,000 US for whoever can prove or disprove the Hodge conjecture. == Motivation == Let X be a compact complex manifold of complex dimension n. Then X is an orientable smooth manifold of real dimension 2 n {\displaystyle 2n} , so its cohomology groups lie in degrees zero through 2 n {\displaystyle 2n} . Assume X is a Kähler manifold, so that there is a decomposition on its cohomology with complex coefficients H n ( X , C ) = ⨁ p + q = n H p , q ( X ) , {\displaystyle H^{n}(X,\mathbb {C} )=\bigoplus _{p+q=n}H^{p,q}(X),} where H p , q ( X ) {\displaystyle H^{p,q}(X)} is the subgroup of cohomology classes which are represented by harmonic forms of type ( p , q ) {\displaystyle (p,q)} . That is, these are the cohomology classes represented by differential forms which, in some choice of local coordinates z 1 , … , z n {\displaystyle z_{1},\ldots ,z_{n}} , can be written as a harmonic function times d z i 1 ∧ ⋯ ∧ d z i p ∧ d z ¯ j 1 ∧ ⋯ ∧ d z ¯ j q . {\displaystyle dz_{i_{1}}\wedge \cdots \wedge dz_{i_{p}}\wedge d{\bar {z}}_{j_{1}}\wedge \cdots \wedge d{\bar {z}}_{j_{q}}.} Since X is a compact oriented manifold, X has a fundamental class, and so X can be integrated over. Let Z be a complex submanifold of X of dimension k, and let i : Z → X {\displaystyle i\colon Z\to X} be the inclusion map. Choose a differential form α {\displaystyle \alpha } of type ( p , q ) {\displaystyle (p,q)} . We can integrate α {\displaystyle \alpha } over Z using the pullback function i ∗ {\displaystyle i^{*}} , ∫ Z i ∗ α . {\displaystyle \int _{Z}i^{*}\alpha .} To evaluate this integral, choose a point of Z and call it z = ( z 1 , … , z k ) {\displaystyle z=(z_{1},\ldots ,z_{k})} . The inclusion of Z in X means that we can choose a local basis on X and have z k + 1 = ⋯ = z n = 0 {\displaystyle z_{k+1}=\cdots =z_{n}=0} (rank-nullity theorem). If p > k {\displaystyle p>k} , then α {\displaystyle \alpha } must contain some d z i {\displaystyle dz_{i}} where z i {\displaystyle z_{i}} pulls back to zero on Z. 
The same is true for d z ¯ j {\displaystyle d{\bar {z}}_{j}} if q > k {\displaystyle q>k} . Consequently, this integral is zero if ( p , q ) ≠ ( k , k ) {\displaystyle (p,q)\neq (k,k)} . The Hodge conjecture then (loosely) asks: Which cohomology classes in H k , k ( X ) {\displaystyle H^{k,k}(X)} come from complex subvarieties Z? == Statement of the Hodge conjecture == Let Hdg k ⁡ ( X ) = H 2 k ( X , Q ) ∩ H k , k ( X ) . {\displaystyle \operatorname {Hdg} ^{k}(X)=H^{2k}(X,\mathbb {Q} )\cap H^{k,k}(X).} We call this the group of Hodge classes of degree 2k on X. The modern statement of the Hodge conjecture is Hodge conjecture. Let X be a non-singular complex projective manifold. Then every Hodge class on X is a linear combination with rational coefficients of the cohomology classes of complex subvarieties of X. A projective complex manifold is a complex manifold which can be embedded in complex projective space. Because projective space carries a Kähler metric, the Fubini–Study metric, such a manifold is always a Kähler manifold. By Chow's theorem, a projective complex manifold is also a smooth projective algebraic variety, that is, it is the zero set of a collection of homogeneous polynomials. === Reformulation in terms of algebraic cycles === Another way of phrasing the Hodge conjecture involves the idea of an algebraic cycle. An algebraic cycle on X is a formal combination of subvarieties of X; that is, it is something of the form ∑ i c i Z i . {\displaystyle \sum _{i}c_{i}Z_{i}.} The coefficients are usually taken to be integral or rational. We define the cohomology class of an algebraic cycle to be the sum of the cohomology classes of its components. This is an example of the cycle class map of de Rham cohomology, see Weil cohomology. For example, the cohomology class of the above cycle would be ∑ i c i [ Z i ] . {\displaystyle \sum _{i}c_{i}[Z_{i}].} Such a cohomology class is called algebraic. With this notation, the Hodge conjecture becomes Let X be a projective complex manifold. Then every Hodge class on X is algebraic. The assumption in the Hodge conjecture that X be algebraic (projective complex manifold) cannot be weakened. In 1977, Steven Zucker showed that it is possible to construct a counterexample to the Hodge conjecture as complex tori with analytic rational cohomology of type ( p , p ) {\displaystyle (p,p)} , which is not projective algebraic. (see appendix B of Zucker (1977)) == Known cases of the Hodge conjecture == See Theorem 1 of Bouali. === Low dimension and codimension === The first result on the Hodge conjecture is due to Lefschetz (1924). In fact, it predates the conjecture and provided some of Hodge's motivation. Theorem (Lefschetz theorem on (1,1)-classes) Any element of H 2 ( X , Z ) ∩ H 1 , 1 ( X ) {\displaystyle H^{2}(X,\mathbb {Z} )\cap H^{1,1}(X)} is the cohomology class of a divisor on X {\displaystyle X} . In particular, the Hodge conjecture is true for H 2 {\displaystyle H^{2}} . A very quick proof can be given using sheaf cohomology and the exponential exact sequence. (The cohomology class of a divisor turns out to equal to its first Chern class.) Lefschetz's original proof proceeded by normal functions, which were introduced by Henri Poincaré. However, the Griffiths transversality theorem shows that this approach cannot prove the Hodge conjecture for higher codimensional subvarieties. By the Hard Lefschetz theorem, one can prove: Theorem. 
If for some p < n 2 {\displaystyle p<{\frac {n}{2}}} the Hodge conjecture holds for Hodge classes of degree p {\displaystyle p} , then the Hodge conjecture holds for Hodge classes of degree 2 n − p {\displaystyle 2n-p} . Combining the above two theorems implies that Hodge conjecture is true for Hodge classes of degree 2 n − 2 {\displaystyle 2n-2} . This proves the Hodge conjecture when X {\displaystyle X} has dimension at most three. The Lefschetz theorem on (1,1)-classes also implies that if all Hodge classes are generated by the Hodge classes of divisors, then the Hodge conjecture is true: Corollary. If the algebra Hdg ∗ ⁡ ( X ) = ⨁ k Hdg k ⁡ ( X ) {\displaystyle \operatorname {Hdg} ^{*}(X)=\bigoplus \nolimits _{k}\operatorname {Hdg} ^{k}(X)} is generated by Hdg 1 ⁡ ( X ) {\displaystyle \operatorname {Hdg} ^{1}(X)} , then the Hodge conjecture holds for X {\displaystyle X} . === Hypersurfaces === By the strong and weak Lefschetz theorem, the only non-trivial part of the Hodge conjecture for hypersurfaces is the degree m part (i.e., the middle cohomology) of a 2m-dimensional hypersurface X ⊂ P 2 m + 1 {\displaystyle X\subset \mathbf {P} ^{2m+1}} . If the degree d is 2, i.e., X is a quadric, the Hodge conjecture holds for all m. For m = 2 {\displaystyle m=2} , i.e., fourfolds, the Hodge conjecture is known for d ≤ 5 {\displaystyle d\leq 5} . === Abelian varieties === For most abelian varieties, the algebra Hdg*(X) is generated in degree one, so the Hodge conjecture holds. In particular, the Hodge conjecture holds for sufficiently general abelian varieties, for products of elliptic curves, and for simple abelian varieties of prime dimension. However, Mumford (1969) constructed an example of an abelian variety where Hdg2(X) is not generated by products of divisor classes. Weil (1977) generalized this example by showing that whenever the variety has complex multiplication by an imaginary quadratic field, then Hdg2(X) is not generated by products of divisor classes. Moonen & Zarhin (1999) proved that in dimension less than 5, either Hdg*(X) is generated in degree one, or the variety has complex multiplication by an imaginary quadratic field. In the latter case, the Hodge conjecture is only known in special cases. == Generalizations == === The integral Hodge conjecture === Hodge's original conjecture was: Integral Hodge conjecture. Let X be a projective complex manifold. Then every cohomology class in H 2 k ( X , Z ) ∩ H k , k ( X ) {\displaystyle H^{2k}(X,\mathbb {Z} )\cap H^{k,k}(X)} is the cohomology class of an algebraic cycle with integral coefficients on X. This is now known to be false. The first counterexample was constructed by Atiyah & Hirzebruch (1961). Using K-theory, they constructed an example of a torsion cohomology class—that is, a cohomology class α such that nα = 0 for some positive integer n—which is not the class of an algebraic cycle. Such a class is necessarily a Hodge class. Totaro (1997) reinterpreted their result in the framework of cobordism and found many examples of such classes. The simplest adjustment of the integral Hodge conjecture is: Integral Hodge conjecture modulo torsion. Let X be a projective complex manifold. Then every cohomology class in H 2 k ( X , Z ) ∩ H k , k ( X ) {\displaystyle H^{2k}(X,\mathbb {Z} )\cap H^{k,k}(X)} is the sum of a torsion class and the cohomology class of an algebraic cycle with integral coefficients on X. 
Equivalently, after dividing H 2 k ( X , Z ) ∩ H k , k ( X ) {\displaystyle H^{2k}(X,\mathbb {Z} )\cap H^{k,k}(X)} by torsion classes, every class is the image of the cohomology class of an integral algebraic cycle. This is also false. Kollár (1992) found an example of a Hodge class α which is not algebraic, but which has an integral multiple which is algebraic. Rosenschon & Srinivas (2016) have shown that in order to obtain a correct integral Hodge conjecture, one needs to replace Chow groups, which can also be expressed as motivic cohomology groups, by a variant known as étale (or Lichtenbaum) motivic cohomology. They show that the rational Hodge conjecture is equivalent to an integral Hodge conjecture for this modified motivic cohomology. === The Hodge conjecture for Kähler varieties === A natural generalization of the Hodge conjecture would ask: Hodge conjecture for Kähler varieties, naive version. Let X be a complex Kähler manifold. Then every Hodge class on X is a linear combination with rational coefficients of the cohomology classes of complex subvarieties of X. This is too optimistic, because there are not enough subvarieties to make this work. A possible substitute is to ask instead one of the two following questions: Hodge conjecture for Kähler varieties, vector bundle version. Let X be a complex Kähler manifold. Then every Hodge class on X is a linear combination with rational coefficients of Chern classes of vector bundles on X. Hodge conjecture for Kähler varieties, coherent sheaf version. Let X be a complex Kähler manifold. Then every Hodge class on X is a linear combination with rational coefficients of Chern classes of coherent sheaves on X. Voisin (2002) proved that the Chern classes of coherent sheaves give strictly more Hodge classes than the Chern classes of vector bundles and that the Chern classes of coherent sheaves are insufficient to generate all the Hodge classes. Consequently, the only known formulations of the Hodge conjecture for Kähler varieties are false. === The generalized Hodge conjecture === Hodge made an additional, stronger conjecture than the integral Hodge conjecture. Say that a cohomology class on X is of co-level c (coniveau c) if it is the pushforward of a cohomology class on a c-codimensional subvariety of X. The cohomology classes of co-level at least c filter the cohomology of X, and it is easy to see that the cth step of the filtration NcHk(X, Z) satisfies N c H k ( X , Z ) ⊆ H k ( X , Z ) ∩ ( H k − c , c ( X ) ⊕ ⋯ ⊕ H c , k − c ( X ) ) . {\displaystyle N^{c}H^{k}(X,\mathbf {Z} )\subseteq H^{k}(X,\mathbf {Z} )\cap (H^{k-c,c}(X)\oplus \cdots \oplus H^{c,k-c}(X)).} Hodge's original statement was: Generalized Hodge conjecture, Hodge's version. N c H k ( X , Z ) = H k ( X , Z ) ∩ ( H k − c , c ( X ) ⊕ ⋯ ⊕ H c , k − c ( X ) ) . {\displaystyle N^{c}H^{k}(X,\mathbf {Z} )=H^{k}(X,\mathbf {Z} )\cap (H^{k-c,c}(X)\oplus \cdots \oplus H^{c,k-c}(X)).} Grothendieck (1969) observed that this cannot be true, even with rational coefficients, because the right-hand side is not always a Hodge structure. His corrected form of the Hodge conjecture is: Generalized Hodge conjecture. NcHk(X, Q) is the largest sub-Hodge structure of Hk(X, Z) contained in H k − c , c ( X ) ⊕ ⋯ ⊕ H c , k − c ( X ) . {\displaystyle H^{k-c,c}(X)\oplus \cdots \oplus H^{c,k-c}(X).} This version is open. == Algebraicity of Hodge loci == The strongest evidence in favor of the Hodge conjecture is the algebraicity result of Cattani, Deligne & Kaplan (1995). 
Suppose that we vary the complex structure of X over a simply connected base. Then the topological cohomology of X does not change, but the Hodge decomposition does change. It is known that if the Hodge conjecture is true, then the locus of all points on the base where the cohomology of a fiber is a Hodge class is in fact an algebraic subset, that is, it is cut out by polynomial equations. Cattani, Deligne & Kaplan (1995) proved that this is always true, without assuming the Hodge conjecture. == See also == Tate conjecture Hodge theory Hodge structure Period mapping == References == Atiyah, M. F.; Hirzebruch, F. (1961), "Analytic cycles on complex manifolds", Topology, 1: 25–45, doi:10.1016/0040-9383(62)90094-0 Available from the Hirzebruch collection (pdf). Cattani, Eduardo; Deligne, Pierre; Kaplan, Aroldo (1995), "On the locus of Hodge classes", Journal of the American Mathematical Society, 8 (2): 483–506, arXiv:alg-geom/9402009, doi:10.2307/2152824, JSTOR 2152824, MR 1273413. Grothendieck, A. (1969), "Hodge's general conjecture is false for trivial reasons", Topology, 8 (3): 299–303, doi:10.1016/0040-9383(69)90016-0. Hodge, W. V. D. (1950), "The topological invariants of algebraic varieties", Proceedings of the International Congress of Mathematicians, 1, Cambridge, MA: 181–192. Kollár, János (1992), "Trento examples", in Ballico, E.; Catanese, F.; Ciliberto, C. (eds.), Classification of irregular varieties, Lecture Notes in Math., vol. 1515, Springer, p. 134, ISBN 978-3-540-55295-6. Lefschetz, Solomon (1924), L'Analysis situs et la géométrie algébrique, Collection de Monographies publiée sous la Direction de M. Émile Borel (in French), Paris: Gauthier-Villars Reprinted in Lefschetz, Solomon (1971), Selected papers, New York: Chelsea Publishing Co., ISBN 978-0-8284-0234-7, MR 0299447. Moonen, Ben J. J.; Zarhin, Yuri G. (1999), "Hodge classes on abelian varieties of low dimension", Mathematische Annalen, 315 (4): 711–733, arXiv:math/9901113, doi:10.1007/s002080050333, MR 1731466, S2CID 119180172. Mumford, David (1969), "A Note of Shimura's paper "Discontinuous groups and abelian varieties"", Mathematische Annalen, 181 (4): 345–351, doi:10.1007/BF01350672, S2CID 122062924. Rosenschon, Andreas; Srinivas, V. (2016), "Étale motivic cohomology and algebraic cycles" (PDF), Journal of the Institute of Mathematics of Jussieu, 15 (3): 511–537, doi:10.1017/S1474748014000401, MR 3505657, S2CID 55560040, Zbl 1346.19004 Totaro, Burt (1997), "Torsion algebraic cycles and complex cobordism", Journal of the American Mathematical Society, 10 (2): 467–493, arXiv:alg-geom/9609016, doi:10.1090/S0894-0347-97-00232-4, JSTOR 2152859, S2CID 16965164. Voisin, Claire (2002), "A counterexample to the Hodge conjecture extended to Kähler varieties", International Mathematics Research Notices, 2002 (20): 1057–1075, doi:10.1155/S1073792802111135, MR 1902630, S2CID 55572794{{citation}}: CS1 maint: unflagged free DOI (link). Weil, André (1977), "Abelian varieties and the Hodge ring", Collected papers, vol. III, pp. 421–429 Zucker, Steven (1977), "The Hodge conjecture for cubic fourfolds", Compositio Mathematica, 34 (2): 199–209, MR 0453741 == External links == Deligne, Pierre. "The Hodge Conjecture" (PDF) (The Clay Math Institute official problem description). 
Popular lecture on Hodge Conjecture by Dan Freed (University of Texas) (Real Video) Archived 2015-12-22 at the Wayback Machine (Slides) Biswas, Indranil; Paranjape, Kapil Hari (2002), "The Hodge Conjecture for general Prym varieties", Journal of Algebraic Geometry, 11 (1): 33–39, arXiv:math/0007192, doi:10.1090/S1056-3911-01-00303-4, MR 1865912, S2CID 119139470 Burt Totaro, Why believe the Hodge Conjecture? Claire Voisin, Hodge loci
Wikipedia/Hodge_conjecture
Burnside's lemma, sometimes also called Burnside's counting theorem, the Cauchy–Frobenius lemma, or the orbit-counting theorem, is a result in group theory that is often useful in taking account of symmetry when counting mathematical objects. It was discovered by Augustin Louis Cauchy and Ferdinand Georg Frobenius, and became well known after William Burnside quoted it. The result enumerates orbits of a symmetry group acting on some objects: that is, it counts distinct objects, considering objects symmetric to each other as the same; or counting distinct objects up to a symmetry equivalence relation; or counting only objects in canonical form. For example, in describing possible organic compounds of certain type, one considers them up to spatial rotation symmetry: different rotated drawings of a given molecule are chemically identical. (However a mirror reflection might give a different compound.) Formally, let G {\displaystyle G} be a finite group that acts on a set X {\displaystyle X} . For each g {\displaystyle g} in G {\displaystyle G} , let X g {\displaystyle X^{g}} denote the set of elements in X {\displaystyle X} that are fixed by g {\displaystyle g} (left invariant by g {\displaystyle g} ): that is, X g = { x ∈ X : g ⋅ x = x } . {\displaystyle X^{g}=\{x\in X:g\cdot x=x\}.} Burnside's lemma asserts the following formula for the number of orbits, denoted | X / G | {\displaystyle |X/G|} : | X / G | = 1 | G | ∑ g ∈ G | X g | . {\displaystyle |X/G|={\frac {1}{|G|}}\sum _{g\in G}|X^{g}|.} Thus the number of orbits (a natural number or +∞) is equal to the average number of points fixed by an element of G. For an infinite group G {\displaystyle G} , there is still a bijection: G × X / G ⟷ ∐ g ∈ G X g . {\displaystyle G\times X/G\ \longleftrightarrow \ \coprod _{g\in G}X^{g}.} == Examples of applications to enumeration == === Necklaces === There are 8 possible bit strings of length 3, but tying together the string ends gives only four distinct 2-colored necklaces of length 3, given by the canonical forms 000, 001, 011, 111: the other strings 100 and 010 are equivalent to 001 by rotation, while 110 and 101 are equivalent to 011. That is, rotation equivalence splits the set X {\displaystyle X} of strings into four orbits: X = { 000 } ∪ { 001 , 010 , 100 } ∪ { 011 , 101 , 110 } ∪ { 111 } . {\displaystyle X=\{{000}\}\cup \{{001},{010},{100}\}\cup \{{011},{101},{110}\}\cup \{{111}\}.} The Burnside formula uses the number of rotations, which is 3 including the null rotation, and the number of bit strings left unchanged by each rotation. All 8 bit vectors are unchanged by the null rotation, and two (000 and 111) are unchanged by the other two rotations. Thus the number of orbits is: 4 = 1 3 ( 8 + 2 + 2 ) . {\displaystyle 4={\frac {1}{3}}(8+2+2).} For length 4, there are 16 possible bit strings; 4 rotations; the null rotation leaves all 16 strings unchanged; the 1-rotation and 3-rotation each leave two strings unchanged (0000 and 1111); the 2-rotation leaves 4 bit strings unchanged (0000, 0101, 1010, 1111). The number of distinct necklaces is thus: 6 = 1 4 ( 16 + 2 + 4 + 2 ) {\displaystyle 6={\tfrac {1}{4}}(16+2+4+2)} , represented by the canonical forms 0000, 0001, 0011, 0101, 0111, 1111. The general case of n bits and k colors is given by a necklace polynomial. === Colorings of a cube === Burnside's lemma can compute the number of rotationally distinct colourings of the faces of a cube using three colours. 
Let X {\displaystyle X} be the set of 3^6 = 729 possible face color combinations that can be applied to a fixed cube, and let the rotation group G of the cube act on X {\displaystyle X} by moving the colored faces: two colorings in X {\displaystyle X} belong to the same orbit precisely when one is a rotation of the other. Rotationally distinct colorings correspond to group orbits, and can be found by counting the sizes of the fixed sets for the 24 elements of G, the colorings left unchanged by each rotation: the identity element fixes all 3^6 = 729 colorings; six 90-degree face rotations each fix 3^3 = 27 colorings; three 180-degree face rotations each fix 3^4 = 81 colorings; eight 120-degree vertex rotations each fix 3^2 = 9 colorings; six 180-degree edge rotations each fix 3^3 = 27 colorings. The average fixed-set size is thus: | X / G | = 1 24 ( 3 6 + 6 ⋅ 3 3 + 3 ⋅ 3 4 + 8 ⋅ 3 2 + 6 ⋅ 3 3 ) = 57. {\displaystyle |X/G|={\frac {1}{24}}\left(3^{6}+6\cdot 3^{3}+3\cdot 3^{4}+8\cdot 3^{2}+6\cdot 3^{3}\right)=57.} There are 57 rotationally distinct colourings of the faces of a cube in three colours. In general, the number of rotationally distinct colorings of the faces of a cube in n colors is: 1 24 ( n 6 + 3 n 4 + 12 n 3 + 8 n 2 ) . {\displaystyle {\frac {1}{24}}\left(n^{6}+3n^{4}+12n^{3}+8n^{2}\right).}
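Both of the worked examples above can be checked mechanically. The following Python sketch is an illustration only (the face numbering and the two generator rotations are choices made here, not part of the article): it applies Burnside's lemma by averaging k raised to the number of cycles of each symmetry, once for the rotations of a necklace and once for the 24 rotations of the cube, generated by two perpendicular quarter turns.

from math import gcd

def necklaces(n, k):
    # Burnside's lemma for the cyclic group: rotation by i positions fixes
    # exactly k**gcd(i, n) of the k-colored strings of length n.
    return sum(k ** gcd(i, n) for i in range(n)) // n

# Cube faces numbered 0..5 (up, down, front, back, left, right); a rotation is
# the tuple p with p[i] the face to which face i is carried.
QUARTER_TURN_VERTICAL = (0, 1, 5, 4, 2, 3)   # front -> right -> back -> left -> front
QUARTER_TURN_SIDEWAYS = (2, 3, 1, 0, 4, 5)   # up -> front -> down -> back -> up

def compose(p, q):
    # Apply q first, then p.
    return tuple(p[q[i]] for i in range(6))

def cube_rotations():
    # Close the two generators under composition; this yields all 24 rotations.
    group, frontier = {tuple(range(6))}, [tuple(range(6))]
    while frontier:
        g = frontier.pop()
        for r in (QUARTER_TURN_VERTICAL, QUARTER_TURN_SIDEWAYS):
            h = compose(r, g)
            if h not in group:
                group.add(h)
                frontier.append(h)
    return group

def cycle_count(p):
    seen, cycles = set(), 0
    for start in range(6):
        if start not in seen:
            cycles += 1
            face = start
            while face not in seen:
                seen.add(face)
                face = p[face]
    return cycles

def cube_colorings(k):
    # A rotation with c cycles on the faces fixes exactly k**c colorings.
    rotations = cube_rotations()
    return sum(k ** cycle_count(p) for p in rotations) // len(rotations)

assert necklaces(3, 2) == 4 and necklaces(4, 2) == 6
assert len(cube_rotations()) == 24
assert cube_colorings(3) == 57

The same cycle count per class of rotations also yields the general formula quoted above, since the cycle structure of each rotation does not depend on the number of colors.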
== Proof == In the proof of Burnside's lemma, the first step is to re-express the sum over the group elements g ∈ G as an equivalent sum over the set of elements x ∈ X: ∑ g ∈ G | X g | = # { ( g , x ) ∈ G × X ∣ g ⋅ x = x } = ∑ x ∈ X | G x | . {\displaystyle \sum _{g\in G}|X^{g}|=\#\{(g,x)\in G\times X\mid g\cdot x=x\}=\sum _{x\in X}|G_{x}|.} (Here X g = { x ∈ X : g ⋅ x = x } {\displaystyle X^{g}=\{x\in X:g\cdot x=x\}} is the set of points of X {\displaystyle X} fixed by the element g {\displaystyle g} of G {\displaystyle G} , whereas G x = { g ∈ G : g ⋅ x = x } {\displaystyle G_{x}=\{g\in G:g\cdot x=x\}} is the stabilizer subgroup of G {\displaystyle G} , consisting of those symmetries that fix the point x ∈ X {\displaystyle x\in X} .) The orbit-stabilizer theorem says that for each x ∈ X {\displaystyle x\in X} there is a natural bijection between the orbit G ⋅ x = { g ⋅ x : g ∈ G } {\displaystyle G\cdot x=\{g\cdot x:g\in G\}} and the set of left cosets G / G x {\displaystyle G/G_{x}} . Lagrange's theorem implies that | G ⋅ x | = [ G : G x ] = | G | / | G x | . {\displaystyle |G\cdot x|=[G:G_{x}]=|G|/|G_{x}|.} The sum may therefore be rewritten as ∑ x ∈ X | G x | = ∑ x ∈ X | G | | G ⋅ x | = | G | ∑ x ∈ X 1 | G ⋅ x | . {\displaystyle \sum _{x\in X}|G_{x}|=\sum _{x\in X}{\frac {|G|}{|G\cdot x|}}=|G|\sum _{x\in X}{\frac {1}{|G\cdot x|}}.} Writing X {\displaystyle X} as the disjoint union of its orbits in X / G {\displaystyle X/G} gives | G | ∑ x ∈ X 1 | G ⋅ x | = | G | ∑ A ∈ X / G ∑ x ∈ A 1 | A | = | G | ∑ A ∈ X / G 1 = | G | ⋅ | X / G | . {\displaystyle |G|\sum _{x\in X}{\frac {1}{|G\cdot x|}}=|G|\sum _{A\in X/G}\sum _{x\in A}{\frac {1}{|A|}}=|G|\sum _{A\in X/G}1=|G|\cdot |X/G|.} Putting everything together gives the desired result: ∑ g ∈ G | X g | = | G | ⋅ | X / G | . {\displaystyle \sum _{g\in G}|X^{g}|=|G|\cdot |X/G|.} This is similar to the proof of the conjugacy class equation, which considers the conjugation action of G {\displaystyle G} on itself, that is, it is the case X = G {\displaystyle X=G} and g ⋅ x = g x g − 1 {\displaystyle g\cdot x=gxg^{-1}} , so that the stabilizer of x {\displaystyle x} is the centralizer G x = Z G ( x ) {\displaystyle G_{x}=Z_{G}(x)} . == Enumeration vs. generation == Burnside's lemma counts distinct objects, but it does not construct them. In general, combinatorial generation with isomorph rejection considers the symmetries g {\displaystyle g} acting on objects x {\displaystyle x} . But instead of checking that g ⋅ x = x {\displaystyle g\cdot x=x} , it checks that g ⋅ x {\displaystyle g\cdot x} has not already been generated. One way to accomplish this is by checking that g ⋅ x {\displaystyle g\cdot x} is not lexicographically less than x {\displaystyle x} , using the lexicographically least member of each equivalence class as the canonical form of the class. Counting the objects generated with such a technique can verify that Burnside's lemma was correctly applied. == History: the lemma that is not Burnside's == William Burnside stated and proved this lemma in his 1897 book on finite groups, attributing it to Frobenius 1887. But even prior to Frobenius, the formula was known to Cauchy in 1845. Consequently, this lemma is sometimes referred to as the lemma that is not Burnside's. Misnaming scientific discoveries is referred to as Stigler's law of eponymy. == See also == Pólya enumeration theorem Cycle index == Notes == == References == Burnside, William (1897). Theory of Groups of Finite Order. Cambridge University Press – via Project Gutenberg. Also available at Archive.org. (This is the first edition; the introduction to the second edition contains Burnside's famous volte face regarding the utility of representation theory.) Frobenius, Ferdinand Georg (1887), "Ueber die Congruenz nach einem aus zwei endlichen Gruppen gebildeten Doppelmodul", Crelle's Journal, 101 (4): 273–299, doi:10.3931/e-rara-18804. Cheng, Yuanyou (1986). "A generalization of Burnside's lemma to multiply transitive groups". Journal of Hubei University of Technology. ISSN 1003-4684. Rotman, Joseph (1995), An introduction to the theory of groups, Springer-Verlag, ISBN 0-387-94285-8.
Wikipedia/Burnside's_lemma
Musical set theory provides concepts for categorizing musical objects and describing their relationships. Howard Hanson first elaborated many of the concepts for analyzing tonal music. Other theorists, such as Allen Forte, further developed the theory for analyzing atonal music, drawing on the twelve-tone theory of Milton Babbitt. The concepts of musical set theory are very general and can be applied to tonal and atonal styles in any equal temperament tuning system, and to some extent more generally than that. One branch of musical set theory deals with collections (sets and permutations) of pitches and pitch classes (pitch-class set theory), which may be ordered or unordered, and can be related by musical operations such as transposition, melodic inversion, and complementation. Some theorists apply the methods of musical set theory to the analysis of rhythm as well. == Comparison with mathematical set theory == Although musical set theory is often thought to involve the application of mathematical set theory to music, there are numerous differences between the methods and terminology of the two. For example, musicians use the terms transposition and inversion where mathematicians would use translation and reflection. Furthermore, where musical set theory refers to ordered sets, mathematics would normally refer to tuples or sequences (though mathematics does speak of ordered sets, and although these can be seen to include the musical kind in some sense, they are far more involved). Moreover, musical set theory is more closely related to group theory and combinatorics than to mathematical set theory, which concerns itself with such matters as, for example, various sizes of infinitely large sets. In combinatorics, an unordered subset of n objects, such as pitch classes, is called a combination, and an ordered subset a permutation. Musical set theory is better regarded as an application of combinatorics to music theory than as a branch of mathematical set theory. Its main connection to mathematical set theory is the use of the vocabulary of set theory to talk about finite sets. == Types of sets == The fundamental concept of musical set theory is the (musical) set, which is an unordered collection of pitch classes. More exactly, a pitch-class set is a numerical representation consisting of distinct integers (i.e., without duplicates). The elements of a set may be manifested in music as simultaneous chords, successive tones (as in a melody), or both. Notational conventions vary from author to author, but sets are typically enclosed in curly braces: {}, or square brackets: []. Some theorists use angle brackets ⟨ ⟩ to denote ordered sequences, while others distinguish ordered sets by separating the numbers with spaces. Thus one might notate the unordered set of pitch classes 0, 1, and 2 (corresponding in this case to C, C♯, and D) as {0,1,2}. The ordered sequence C-C♯-D would be notated ⟨0,1,2⟩ or (0,1,2). Although C is considered zero in this example, this is not always the case. For example, a piece (whether tonal or atonal) with a clear pitch center of F might be most usefully analyzed with F set to zero (in which case {0,1,2} would represent F, F♯ and G. (For the use of numbers to represent notes, see pitch class.) Though set theorists usually consider sets of equal-tempered pitch classes, it is possible to consider sets of pitches, non-equal-tempered pitch classes, rhythmic onsets, or "beat classes". 
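The notational distinction made above between unordered and ordered collections, and the freedom to choose which pitch class counts as zero, can be made concrete in a few lines. The following Python fragment is only an illustration (the names and the choice of reference pitch are ad hoc, not part of the article):

# Pitch classes as integers 0..11, with C = 0 in this numbering.
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

unordered = {0, 1, 2}          # the pitch-class set {0,1,2}: C, C-sharp, D in any order
ordered = (0, 1, 2)            # the ordered sequence <0,1,2>: C, then C-sharp, then D

def renumber(pcs, new_zero):
    # Re-center the numbering so that new_zero becomes 0, for example around a
    # pitch center of F (pitch class 5), as described above.
    return {(p - new_zero) % 12 for p in pcs}

assert [NAMES[p] for p in sorted(unordered)] == ["C", "C#", "D"]
assert renumber({5, 6, 7}, 5) == {0, 1, 2}    # F, F-sharp, G written relative to F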
Two-element sets are called dyads, three-element sets trichords (occasionally "triads", though this is easily confused with the traditional meaning of the word triad). Sets of higher cardinalities are called tetrachords (or tetrads), pentachords (or pentads), hexachords (or hexads), heptachords (heptads or, sometimes, mixing Latin and Greek roots, "septachords"—e.g. Rahn), octachords (octads), nonachords (nonads), decachords (decads), undecachords, and, finally, the dodecachord. == Basic operations == The basic operations that may be performed on a set are transposition and inversion. Sets related by transposition or inversion are said to be transpositionally related or inversionally related, and to belong to the same set class. Since transposition and inversion are isometries of pitch-class space, they preserve the intervallic structure of a set, even if they do not preserve the musical character (i.e. the physical reality) of the elements of the set. This can be considered the central postulate of musical set theory. In practice, set-theoretic musical analysis often consists in the identification of non-obvious transpositional or inversional relationships between sets found in a piece. Some authors consider the operations of complementation and multiplication as well. The complement of set X is the set consisting of all the pitch classes not contained in X. The product of two pitch classes is the product of their pitch-class numbers modulo 12. Since complementation and multiplication are not isometries of pitch-class space, they do not necessarily preserve the musical character of the objects they transform. Other writers, such as Allen Forte, have emphasized the Z-relation, which obtains between two sets that share the same total interval content, or interval vector—but are not transpositionally or inversionally equivalent. Another name for this relationship, used by Hanson, is "isomeric". Operations on ordered sequences of pitch classes also include transposition and inversion, as well as retrograde and rotation. Retrograding an ordered sequence reverses the order of its elements. Rotation of an ordered sequence is equivalent to cyclic permutation. Transposition and inversion can be represented as elementary arithmetic operations. If x is a number representing a pitch class, its transposition by n semitones is written Tn = x + n mod 12. Inversion corresponds to reflection around some fixed point in pitch class space. If x is a pitch class, the inversion with index number n is written In = n - x mod 12. == Equivalence relation == "For a relation in set S to be an equivalence relation [in algebra], it has to satisfy three conditions: it has to be reflexive ..., symmetrical ..., and transitive ...". "Indeed, an informal notion of equivalence has always been part of music theory and analysis. PC set theory, however, has adhered to formal definitions of equivalence." == Transpositional and inversional set classes == Two transpositionally related sets are said to belong to the same transpositional set class (Tn). Two sets related by transposition or inversion are said to belong to the same transpositional/inversional set class (inversion being written TnI or In). Sets belonging to the same transpositional set class are very similar-sounding; while sets belonging to the same transpositional/inversional set class could include two chords of the same type but in different keys, which would be less similar in sound but obviously still a bounded category. 
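The operations T_n and T_nI that define these set classes, and the interval content behind the Z-relation, translate directly into code. The Python sketch below is an illustration rather than part of the article; the function names are ad hoc:

def transpose(pcs, n):
    # T_n: add n to every pitch class, modulo 12.
    return {(x + n) % 12 for x in pcs}

def invert(pcs, n=0):
    # Inversion with index number n: send every pitch class x to n - x, modulo 12.
    return {(n - x) % 12 for x in pcs}

def interval_vector(pcs):
    # Total interval content: how many pairs of notes lie each interval class 1..6 apart.
    notes = sorted(pcs)
    vector = [0] * 6
    for i in range(len(notes)):
        for j in range(i + 1, len(notes)):
            step = (notes[j] - notes[i]) % 12
            vector[min(step, 12 - step) - 1] += 1
    return vector

c_major = {0, 4, 7}
assert transpose(c_major, 2) == {2, 6, 9}      # the same chord type built on D
assert invert(c_major, 7) == {0, 3, 7}         # inversion sends {0,4,7} to {0,3,7}
# The classic Z-related pair of all-interval tetrachords shares one interval vector:
assert interval_vector({0, 1, 4, 6}) == interval_vector({0, 1, 3, 7}) == [1, 1, 1, 1, 1, 1]

Transposition and inversion both leave the interval vector unchanged, which is one reason sets related by these operations are grouped into a single set class.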
Because of this, music theorists often consider set classes basic objects of musical interest. There are two main conventions for naming equal-tempered set classes. One, known as the Forte number, derives from Allen Forte, whose The Structure of Atonal Music (1973), is one of the first works in musical set theory. Forte provided each set class with a number of the form c–d, where c indicates the cardinality of the set and d is the ordinal number. Thus the chromatic trichord {0, 1, 2} belongs to set-class 3–1, indicating that it is the first three-note set class in Forte's list. The augmented trichord {0, 4, 8}, receives the label 3–12, which happens to be the last trichord in Forte's list. The primary criticisms of Forte's nomenclature are: (1) Forte's labels are arbitrary and difficult to memorize, and it is in practice often easier simply to list an element of the set class; (2) Forte's system assumes equal temperament and cannot easily be extended to include diatonic sets, pitch sets (as opposed to pitch-class sets), multisets or sets in other tuning systems; (3) Forte's original system considers inversionally related sets to belong to the same set-class. This means that, for example a major triad and a minor triad are considered the same set. Western tonal music for centuries has regarded major and minor, as well as chord inversions, as significantly different. They generate indeed completely different physical objects. Ignoring the physical reality of sound is an obvious limitation of atonal theory. However, the defense has been made that theory was not created to fill a vacuum in which existing theories inadequately explained tonal music. Rather, Forte's theory is used to explain atonal music, where the composer has invented a system where the distinction between {0, 4, 7} (called 'major' in tonal theory) and its inversion {0, 3, 7} (called 'minor' in tonal theory) may not be relevant. The second notational system labels sets in terms of their normal form, which depends on the concept of normal order. To put a set in normal order, order it as an ascending scale in pitch-class space that spans less than an octave. Then permute it cyclically until its first and last notes are as close together as possible. In the case of ties, minimize the distance between the first and next-to-last note. (In case of ties here, minimize the distance between the first and next-to-next-to-last note, and so on.) Thus {0, 7, 4} in normal order is {0, 4, 7}, while {0, 2, 10} in normal order is {10, 0, 2}. To put a set in normal form, begin by putting it in normal order, and then transpose it so that its first pitch class is 0. Mathematicians and computer scientists most often order combinations using either alphabetical ordering, binary (base two) ordering, or Gray coding, each of which lead to differing but logical normal forms. Since transpositionally related sets share the same normal form, normal forms can be used to label the Tn set classes. To identify a set's Tn/In set class: Identify the set's Tn set class. Invert the set and find the inversion's Tn set class. Compare these two normal forms to see which is most "left packed." The resulting set labels the initial set's Tn/In set class. == Symmetries == The number of distinct operations in a system that map a set into itself is the set's degree of symmetry. 
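Both the normal-order procedure described in the previous section and this degree of symmetry are easy to compute. The sketch below is an illustration only (function names are ad hoc, and ties in the normal-order search are broken exactly as in the prose above):

def normal_order(pcs):
    notes = sorted(set(p % 12 for p in pcs))
    size = len(notes)
    best_key, best_rotation = None, None
    for r in range(size):
        rotation = notes[r:] + notes[:r]
        # Spans from the first note to the last, then to the next-to-last, and so on;
        # the rotation with the lexicographically smallest list of spans wins.
        key = [(rotation[i] - rotation[0]) % 12 for i in range(size - 1, 0, -1)]
        if best_key is None or key < best_key:
            best_key, best_rotation = key, rotation
    return best_rotation

def normal_form(pcs):
    order = normal_order(pcs)
    return [(x - order[0]) % 12 for x in order]

def degree_of_symmetry(pcs):
    # How many of the 24 operations T_n and T_nI map the unordered set onto itself.
    s = frozenset(p % 12 for p in pcs)
    hits = 0
    for n in range(12):
        if frozenset((x + n) % 12 for x in s) == s:
            hits += 1        # T_n fixes the set
        if frozenset((n - x) % 12 for x in s) == s:
            hits += 1        # T_nI fixes the set
    return hits

assert normal_order({0, 7, 4}) == [0, 4, 7]
assert normal_order({0, 2, 10}) == [10, 0, 2]
assert normal_form({0, 2, 10}) == [0, 2, 4]
assert degree_of_symmetry({0, 1, 2, 7}) == 2     # T0 and T2I only, as discussed below
assert degree_of_symmetry({0, 4, 8}) == 6        # the augmented triad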
The degree of symmetry, "specifies the number of operations that preserve the unordered pcsets of a partition; it tells the extent to which that partition's pitch-class sets map into (or onto) each other under transposition or inversion". Every set has at least one symmetry, as it maps onto itself under the identity operation T0. Transpositionally symmetric sets map onto themselves for Tn where n does not equal 0 (mod 12). Inversionally symmetric sets map onto themselves under TnI. For any given Tn/TnI type all sets have the same degree of symmetry. The number of distinct sets in a type is 24 (the total number of operations, transposition and inversion, for n = 0 through 11) divided by the degree of symmetry of Tn/TnI type. Transpositionally symmetrical sets either divide the octave evenly, or can be written as the union of equally sized sets that themselves divide the octave evenly. Inversionally symmetrical chords are invariant under reflections in pitch class space. This means that the chords can be ordered cyclically so that the series of intervals between successive notes is the same read forward or backward. For instance, in the cyclical ordering (0, 1, 2, 7), the interval between the first and second note is 1, the interval between the second and third note is 1, the interval between the third and fourth note is 5, and the interval between the fourth note and the first note is 5. One obtains the same sequence if one starts with the third element of the series and moves backward: the interval between the third element of the series and the second is 1; the interval between the second element of the series and the first is 1; the interval between the first element of the series and the fourth is 5; and the interval between the last element of the series and the third element is 5. Symmetry is therefore found between T0 and T2I, and there are 12 sets in the Tn/TnI equivalence class. == See also == Identity (music) Pitch interval Tonnetz Transformational theory == References == Sources == Further reading == == External links == Tucker, Gary (2001) "A Brief Introduction to Pitch-Class Set Analysis", Mount Allison University Department of Music. Nick Collins "Uniqueness of pitch class spaces, minimal bases and Z partners", Sonic Arts. "Twentieth Century Pitch Theory: Some Useful Terms and Techniques", Form and Analysis: A Virtual Textbook. Solomon, Larry (2005). "Set Theory Primer for Music", SolomonMusic.net. Kelley, Robert T (2001). "Introduction to Post-Functional Music Analysis: Post-Functional Theory Terminology", RobertKelleyPhd.com. Kelley, Robert T (2002). "Introduction to Post-Functional Music Analysis: Set Theory, The Matrix, and the Twelve-Tone Method". "SetClass View (SCv)", Flexatone.net. An athenaCL netTool for on-line, web-based pitch class analysis and reference. Tomlin, Jay. "All About Set Theory". JayTomlin.com. "Java Set Theory Machine" or Calculator Kaiser, Ulrich. "Pitch Class Set Calculator", musikanalyse.net. (in German) "Pitch-Class Set Theory and Perception", Ohio-State.edu. "Software Tools for Composers", ComposerTools.com. Javascript PC Set calculator, two-set relationship calculators, and theory tutorial. "PC Set Calculator", MtA.Ca. Taylor, Stephen Andrew. "SetFinder", stephenandrewtaylor.net. Pitch class set library and prime form calculator.
Wikipedia/Set_theory_(music)
In mathematics, the Birch and Swinnerton-Dyer conjecture (often called the Birch–Swinnerton-Dyer conjecture) describes the set of rational solutions to equations defining an elliptic curve. It is an open problem in the field of number theory and is widely recognized as one of the most challenging mathematical problems. It is named after mathematicians Bryan John Birch and Peter Swinnerton-Dyer, who developed the conjecture during the first half of the 1960s with the help of machine computation. Only special cases of the conjecture have been proven. The modern formulation of the conjecture relates arithmetic data associated with an elliptic curve E over a number field K to the behaviour of the Hasse–Weil L-function L(E, s) of E at s = 1. More specifically, it is conjectured that the rank of the abelian group E(K) of points of E is the order of the zero of L(E, s) at s = 1. The first non-zero coefficient in the Taylor expansion of L(E, s) at s = 1 is given by more refined arithmetic data attached to E over K (Wiles 2006). The conjecture was chosen as one of the seven Millennium Prize Problems listed by the Clay Mathematics Institute, which has offered a $1,000,000 prize for the first correct proof. == Background == Mordell (1922) proved Mordell's theorem: the group of rational points on an elliptic curve has a finite basis. This means that for any elliptic curve there is a finite subset of the rational points on the curve, from which all further rational points may be generated. If the number of rational points on a curve is infinite then some point in a finite basis must have infinite order. The number of independent basis points with infinite order is called the rank of the curve, and is an important invariant property of an elliptic curve. If the rank of an elliptic curve is 0, then the curve has only a finite number of rational points. On the other hand, if the rank of the curve is greater than 0, then the curve has an infinite number of rational points. Although Mordell's theorem shows that the rank of an elliptic curve is always finite, it does not give an effective method for calculating the rank of every curve. The rank of certain elliptic curves can be calculated using numerical methods but (in the current state of knowledge) it is unknown if these methods handle all curves. An L-function L(E, s) can be defined for an elliptic curve E by constructing an Euler product from the number of points on the curve modulo each prime p. This L-function is analogous to the Riemann zeta function and the Dirichlet L-series that is defined for a binary quadratic form. It is a special case of a Hasse–Weil L-function. The natural definition of L(E, s) only converges for values of s in the complex plane with Re(s) > 3/2. Helmut Hasse conjectured that L(E, s) could be extended by analytic continuation to the whole complex plane. This conjecture was first proved by Deuring (1941) for elliptic curves with complex multiplication. It was subsequently shown to be true for all elliptic curves over Q, as a consequence of the modularity theorem in 2001. Finding rational points on a general elliptic curve is a difficult problem. Finding the points on an elliptic curve modulo a given prime p is conceptually straightforward, as there are only a finite number of possibilities to check. However, for large primes it is computationally intensive.
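The naive point count that the last paragraph alludes to is easy to write down. The Python sketch below is only an illustration (the sample curves, and the neglect of the special handling that primes of bad reduction would require, are simplifications made here): it counts points modulo p on curves of the form y^2 = x^3 + ax + b and accumulates the product of Np/p that is examined in the next section.

def points_mod_p(a, b, p):
    # Points on y^2 = x^3 + a*x + b over the integers modulo a prime p,
    # counted by brute force and including the single point at infinity.
    square_roots = {}
    for y in range(p):
        square_roots.setdefault(y * y % p, []).append(y)
    count = 1
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        count += len(square_roots.get(rhs, []))
    return count

def primes_up_to(limit):
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

def np_product(a, b, x_max):
    # The product of N_p / p over primes p <= x_max, ignoring the corrections
    # a careful treatment would make at the finitely many primes of bad reduction.
    result = 1.0
    for p in primes_up_to(x_max):
        result *= points_mod_p(a, b, p) / p
    return result

# y^2 = x^3 - x has rank 0, while y^2 = x^3 - 25x has rank 1 (5 is a congruent
# number), so the second product should drift upward roughly like a multiple of log x.
for bound in (100, 1000, 5000):
    print(bound, np_product(-1, 0, bound), np_product(-25, 0, bound))

Computations of this kind, carried out over far larger ranges of primes, are what suggested the asymptotic law discussed in the next section.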
== History == In the early 1960s Peter Swinnerton-Dyer used the EDSAC-2 computer at the University of Cambridge Computer Laboratory to calculate the number of points modulo p (denoted by Np) for a large number of primes p on elliptic curves whose rank was known. From these numerical results Birch & Swinnerton-Dyer (1965) conjectured that Np for a curve E with rank r obeys an asymptotic law ∏ p ≤ x N p p ≈ C log ⁡ ( x ) r as x → ∞ {\displaystyle \prod _{p\leq x}{\frac {N_{p}}{p}}\approx C\log(x)^{r}{\mbox{ as }}x\rightarrow \infty } where C is a constant. Initially, this was based on somewhat tenuous trends in graphical plots; this induced a measure of skepticism in J. W. S. Cassels (Birch's Ph.D. advisor). Over time the numerical evidence stacked up. This in turn led them to make a general conjecture about the behavior of a curve's L-function L(E, s) at s = 1, namely that it would have a zero of order r at this point. This was a far-sighted conjecture for the time, given that the analytic continuation of L(E, s) was only established for curves with complex multiplication, which were also the main source of numerical examples. (NB that the reciprocal of the L-function is from some points of view a more natural object of study; on occasion, this means that one should consider poles rather than zeroes.) The conjecture was subsequently extended to include the prediction of the precise leading Taylor coefficient of the L-function at s = 1. It is conjecturally given by L ( r ) ( E , 1 ) r ! = # S h a ( E ) Ω E R E ∏ p | N c p ( # E t o r ) 2 {\displaystyle {\frac {L^{(r)}(E,1)}{r!}}={\frac {\#\mathrm {Sha} (E)\Omega _{E}R_{E}\prod _{p|N}c_{p}}{(\#E_{\mathrm {tor} })^{2}}}} where the quantities on the right-hand side are invariants of the curve, studied by Cassels, Tate, Shafarevich and others (Wiles 2006): # E t o r {\displaystyle \#E_{\mathrm {tor} }} is the order of the torsion group, # S h a ( E ) = {\displaystyle \#\mathrm {Sha} (E)=} #Ш(E) is the order of the Tate–Shafarevich group, Ω E {\displaystyle \Omega _{E}} is the real period of E multiplied by the number of connected components of E, R E {\displaystyle R_{E}} is the regulator of E which is defined via the canonical heights of a basis of rational points, c p {\displaystyle c_{p}} is the Tamagawa number of E at a prime p dividing the conductor N of E. It can be found by Tate's algorithm. At the time of the inception of the conjecture little was known, not even the well-definedness of the left side (referred to as analytic) or the right side (referred to as algebraic) of this equation. John Tate expressed this in 1974 in a famous quote.: 198  This remarkable conjecture relates the behavior of a function L {\displaystyle L} at a point where it is not at present known to be defined to the order of a group Ш which is not known to be finite! By the modularity theorem proved in 2001 for elliptic curves over Q {\displaystyle \mathbb {Q} } the left side is now known to be well-defined and the finiteness of Ш(E) is known when additionally the analytic rank is at most 1, i.e., if L ( E , s ) {\displaystyle L(E,s)} vanishes at most to order 1 at s = 1 {\displaystyle s=1} . Both parts remain open. == Current status == The Birch and Swinnerton-Dyer conjecture has been proved only in special cases: Coates & Wiles (1977) proved that if E is a curve over a number field F with complex multiplication by an imaginary quadratic field K of class number 1, F = K or Q, and L(E, 1) is not 0 then E(F) is a finite group. 
This was extended to the case where F is any finite abelian extension of K by Arthaud (1978). Gross & Zagier (1986) showed that if a modular elliptic curve has a first-order zero at s = 1 then it has a rational point of infinite order; see Gross–Zagier theorem. Kolyvagin (1989) showed that a modular elliptic curve E for which L(E, 1) is not zero has rank 0, and a modular elliptic curve E for which L(E, 1) has a first-order zero at s = 1 has rank 1. Rubin (1991) showed that for elliptic curves defined over an imaginary quadratic field K with complex multiplication by K, if the L-series of the elliptic curve was not zero at s = 1, then the p-part of the Tate–Shafarevich group had the order predicted by the Birch and Swinnerton-Dyer conjecture, for all primes p > 7. Breuil et al. (2001), extending work of Wiles (1995), proved that all elliptic curves defined over the rational numbers are modular, which extends results #2 and #3 to all elliptic curves over the rationals, and shows that the L-functions of all elliptic curves over Q are defined at s = 1. Bhargava & Shankar (2015) proved that the average rank of the Mordell–Weil group of an elliptic curve over Q is bounded above by 7/6. Combining this with the p-parity theorem of Nekovář (2009) and Dokchitser & Dokchitser (2010) and with the proof of the main conjecture of Iwasawa theory for GL(2) by Skinner & Urban (2014), they conclude that a positive proportion of elliptic curves over Q have analytic rank zero, and hence, by Kolyvagin (1989), satisfy the Birch and Swinnerton-Dyer conjecture. There are currently no proofs involving curves with a rank greater than 1. There is extensive numerical evidence for the truth of the conjecture. == Consequences == Much like the Riemann hypothesis, this conjecture has multiple consequences, including the following two: Let n be an odd square-free integer. Assuming the Birch and Swinnerton-Dyer conjecture, n is the area of a right triangle with rational side lengths (a congruent number) if and only if the number of triplets of integers (x, y, z) satisfying 2x2 + y2 + 8z2 = n is twice the number of triplets satisfying 2x2 + y2 + 32z2 = n. This statement, due to Tunnell's theorem (Tunnell 1983), is related to the fact that n is a congruent number if and only if the elliptic curve y2 = x3 − n2x has a rational point of infinite order (thus, under the Birch and Swinnerton-Dyer conjecture, its L-function has a zero at 1). The interest in this statement is that the condition is easily verified. In a different direction, certain analytic methods allow for an estimation of the order of zero in the center of the critical strip of families of L-functions. Admitting the BSD conjecture, these estimations correspond to information about the rank of families of elliptic curves in question. For example: suppose the generalized Riemann hypothesis and the BSD conjecture, the average rank of curves given by y2 = x3 + ax+ b is smaller than 2. Because of the existence of the functional equation of the L-function of an elliptic curve, BSD allows us to calculate the parity of the rank of an elliptic curve. This is a conjecture in its own right called the parity conjecture, and it relates the parity of the rank of an elliptic curve to its global root number. This leads to many explicit arithmetic phenomena which are yet to be proved unconditionally. For instance: Every positive integer n ≡ 5, 6 or 7 (mod 8) is a congruent number. 
The elliptic curve given by y2 = x3 + ax + b where a ≡ b (mod 2) has infinitely many solutions over Q ( ζ 8 ) {\displaystyle \mathbb {Q} (\zeta _{8})} . Every positive rational number d can be written in the form d = s2(t3 – 91t – 182) for s and t in Q {\displaystyle \mathbb {Q} } . For every rational number t, the elliptic curve given by y2 = x(x2 – 49(1 + t4)2) has rank at least 1. There are many more examples for elliptic curves over number fields. == Generalizations == There is a version of this conjecture for general abelian varieties over number fields. A version for abelian varieties over Q {\displaystyle \mathbb {Q} } is the following:: 462  lim s → 1 L ( A / Q , s ) ( s − 1 ) r = # S h a ( A ) Ω A R A ∏ p | N c p # A ( Q ) tors ⋅ # A ^ ( Q ) tors . {\displaystyle \lim _{s\to 1}{\frac {L(A/\mathbb {Q} ,s)}{(s-1)^{r}}}={\frac {\#\mathrm {Sha} (A)\Omega _{A}R_{A}\prod _{p|N}c_{p}}{\#A(\mathbb {Q} )_{\text{tors}}\cdot \#{\hat {A}}(\mathbb {Q} )_{\text{tors}}}}.} All of the terms have the same meaning as for elliptic curves, except that the square of the order of the torsion needs to be replaced by the product # A ( Q ) tors ⋅ # A ^ ( Q ) tors {\displaystyle \#A(\mathbb {Q} )_{\text{tors}}\cdot \#{\hat {A}}(\mathbb {Q} )_{\text{tors}}} involving the dual abelian variety A ^ {\displaystyle {\hat {A}}} . Elliptic curves as 1-dimensional abelian varieties are their own duals, i.e. E ^ = E {\displaystyle {\hat {E}}=E} , which simplifies the statement of the BSD conjecture. The regulator R A {\displaystyle R_{A}} needs to be understood for the pairing between a basis for the free parts of A ( Q ) {\displaystyle A(\mathbb {Q} )} and A ^ ( Q ) {\displaystyle {\hat {A}}(\mathbb {Q} )} relative to the Poincare bundle on the product A × A ^ {\displaystyle A\times {\hat {A}}} . The rank-one Birch-Swinnerton-Dyer conjecture for modular elliptic curves and modular abelian varieties of GL(2)-type over totally real number fields was proved by Shou-Wu Zhang in 2001. Another generalization is given by the Bloch-Kato conjecture. == Notes == == References == Shoeib, Maisara (26 May 2025). "A Topological Perspective on the Birch and Swinnerton–Dyer Conjecture". arXiv:2505.19796. == External links == Weisstein, Eric W. "Swinnerton-Dyer Conjecture". MathWorld. "Birch and Swinnerton-Dyer Conjecture". PlanetMath. The Birch and Swinnerton-Dyer Conjecture: An Interview with Professor Henri Darmon by Agnes F. Beaudry What is the Birch and Swinnerton-Dyer Conjecture? lecture by Manjul Bhargava (September 2016) given during the Clay Research Conference held at the University of Oxford
Wikipedia/Birch_and_Swinnerton-Dyer_conjecture
In mathematics, combinatorial group theory is the theory of free groups, and the concept of a presentation of a group by generators and relations. It is much used in geometric topology, the fundamental group of a simplicial complex having in a natural and geometric way such a presentation. A very closely related topic is geometric group theory, which today largely subsumes combinatorial group theory, using techniques from outside combinatorics besides. It also comprises a number of algorithmically insoluble problems, most notably the word problem for groups; and the classical Burnside problem. == History == See the book by Chandler and Magnus for a detailed history of combinatorial group theory. A proto-form is found in the 1856 icosian calculus of William Rowan Hamilton, where he studied the icosahedral symmetry group via the edge graph of the dodecahedron. The foundations of combinatorial group theory were laid by Walther von Dyck, student of Felix Klein, in the early 1880s, who gave the first systematic study of groups by generators and relations. == References ==
Wikipedia/Combinatorial_group_theory
In mathematics, a group is a set with a binary operation that is associative, has an identity element, and has an inverse element for every element of the set. Many mathematical structures are groups endowed with other properties. For example, the integers with the addition operation form an infinite group that is generated by a single element called ⁠ 1 {\displaystyle 1} ⁠ (these properties fully characterize the integers). The concept of a group was elaborated for handling, in a unified way, many mathematical structures such as numbers, geometric shapes and polynomial roots. Because the concept of groups is ubiquitous in numerous areas both within and outside mathematics, some authors consider it as a central organizing principle of contemporary mathematics. In geometry, groups arise naturally in the study of symmetries and geometric transformations: The symmetries of an object form a group, called the symmetry group of the object, and the transformations of a given type form a general group. Lie groups appear in symmetry groups in geometry, and also in the Standard Model of particle physics. The Poincaré group is a Lie group consisting of the symmetries of spacetime in special relativity. Point groups describe symmetry in molecular chemistry. The concept of a group arose in the study of polynomial equations, starting with Évariste Galois in the 1830s, who introduced the term group (French: groupe) for the symmetry group of the roots of an equation, now called a Galois group. After contributions from other fields such as number theory and geometry, the group notion was generalized and firmly established around 1870. Modern group theory—an active mathematical discipline—studies groups in their own right. To explore groups, mathematicians have devised various notions to break groups into smaller, better-understandable pieces, such as subgroups, quotient groups and simple groups. In addition to their abstract properties, group theorists also study the different ways in which a group can be expressed concretely, both from a point of view of representation theory (that is, through the representations of the group) and of computational group theory. A theory has been developed for finite groups, which culminated with the classification of finite simple groups, completed in 2004. Since the mid-1980s, geometric group theory, which studies finitely generated groups as geometric objects, has become an active area in group theory. == Definition and illustration == === First example: the integers === One of the more familiar groups is the set of integers Z = { … , − 4 , − 3 , − 2 , − 1 , 0 , 1 , 2 , 3 , 4 , … } {\displaystyle \mathbb {Z} =\{\ldots ,-4,-3,-2,-1,0,1,2,3,4,\ldots \}} together with addition. For any two integers a {\displaystyle a} and ⁠ b {\displaystyle b} ⁠, the sum a + b {\displaystyle a+b} is also an integer; this closure property says that + {\displaystyle +} is a binary operation on ⁠ Z {\displaystyle \mathbb {Z} } ⁠. The following properties of integer addition serve as a model for the group axioms in the definition below. For all integers ⁠ a {\displaystyle a} ⁠, b {\displaystyle b} and ⁠ c {\displaystyle c} ⁠, one has ⁠ ( a + b ) + c = a + ( b + c ) {\displaystyle (a+b)+c=a+(b+c)} ⁠. Expressed in words, adding a {\displaystyle a} to b {\displaystyle b} first, and then adding the result to c {\displaystyle c} gives the same final result as adding a {\displaystyle a} to the sum of b {\displaystyle b} and ⁠ c {\displaystyle c} ⁠. This property is known as associativity. 
If a {\displaystyle a} is any integer, then 0 + a = a {\displaystyle 0+a=a} and ⁠ a + 0 = a {\displaystyle a+0=a} ⁠. Zero is called the identity element of addition because adding it to any integer returns the same integer. For every integer ⁠ a {\displaystyle a} ⁠, there is an integer b {\displaystyle b} such that a + b = 0 {\displaystyle a+b=0} and ⁠ b + a = 0 {\displaystyle b+a=0} ⁠. The integer b {\displaystyle b} is called the inverse element of the integer a {\displaystyle a} and is denoted ⁠ − a {\displaystyle -a} ⁠. The integers, together with the operation ⁠ + {\displaystyle +} ⁠, form a mathematical object belonging to a broad class sharing similar structural aspects. To appropriately understand these structures as a collective, the following definition is developed. === Definition === A group is a non-empty set G {\displaystyle G} together with a binary operation on ⁠ G {\displaystyle G} ⁠, here denoted "⁠ ⋅ {\displaystyle \cdot } ⁠", that combines any two elements a {\displaystyle a} and b {\displaystyle b} of G {\displaystyle G} to form an element of ⁠ G {\displaystyle G} ⁠, denoted ⁠ a ⋅ b {\displaystyle a\cdot b} ⁠, such that the following three requirements, known as group axioms, are satisfied: Associativity For all ⁠ a {\displaystyle a} ⁠, ⁠ b {\displaystyle b} ⁠, ⁠ c {\displaystyle c} ⁠ in ⁠ G {\displaystyle G} ⁠, one has ⁠ ( a ⋅ b ) ⋅ c = a ⋅ ( b ⋅ c ) {\displaystyle (a\cdot b)\cdot c=a\cdot (b\cdot c)} ⁠. Identity element There exists an element e {\displaystyle e} in G {\displaystyle G} such that, for every a {\displaystyle a} in ⁠ G {\displaystyle G} ⁠, one has ⁠ e ⋅ a = a {\displaystyle e\cdot a=a} ⁠ and ⁠ a ⋅ e = a {\displaystyle a\cdot e=a} ⁠. Such an element is unique (see below). It is called the identity element (or sometimes neutral element) of the group. Inverse element For each a {\displaystyle a} in ⁠ G {\displaystyle G} ⁠, there exists an element b {\displaystyle b} in G {\displaystyle G} such that a ⋅ b = e {\displaystyle a\cdot b=e} and ⁠ b ⋅ a = e {\displaystyle b\cdot a=e} ⁠, where e {\displaystyle e} is the identity element. For each ⁠ a {\displaystyle a} ⁠, the element b {\displaystyle b} is unique (see below); it is called the inverse of a {\displaystyle a} and is commonly denoted ⁠ a − 1 {\displaystyle a^{-1}} ⁠. === Notation and terminology === Formally, a group is an ordered pair of a set and a binary operation on this set that satisfies the group axioms. The set is called the underlying set of the group, and the operation is called the group operation or the group law. A group and its underlying set are thus two different mathematical objects. To avoid cumbersome notation, it is common to abuse notation by using the same symbol to denote both. This reflects also an informal way of thinking: that the group is the same as the set except that it has been enriched by additional structure provided by the operation. For example, consider the set of real numbers ⁠ R {\displaystyle \mathbb {R} } ⁠, which has the operations of addition a + b {\displaystyle a+b} and multiplication ⁠ a b {\displaystyle ab} ⁠. Formally, R {\displaystyle \mathbb {R} } is a set, ( R , + ) {\displaystyle (\mathbb {R} ,+)} is a group, and ( R , + , ⋅ ) {\displaystyle (\mathbb {R} ,+,\cdot )} is a field. But it is common to write R {\displaystyle \mathbb {R} } to denote any of these three objects. The additive group of the field R {\displaystyle \mathbb {R} } is the group whose underlying set is R {\displaystyle \mathbb {R} } and whose operation is addition. 
The multiplicative group of the field R {\displaystyle \mathbb {R} } is the group R × {\displaystyle \mathbb {R} ^{\times }} whose underlying set is the set of nonzero real numbers R ∖ { 0 } {\displaystyle \mathbb {R} \smallsetminus \{0\}} and whose operation is multiplication. More generally, one speaks of an additive group whenever the group operation is notated as addition; in this case, the identity is typically denoted ⁠ 0 {\displaystyle 0} ⁠, and the inverse of an element x {\displaystyle x} is denoted ⁠ − x {\displaystyle -x} ⁠. Similarly, one speaks of a multiplicative group whenever the group operation is notated as multiplication; in this case, the identity is typically denoted ⁠ 1 {\displaystyle 1} ⁠, and the inverse of an element x {\displaystyle x} is denoted ⁠ x − 1 {\displaystyle x^{-1}} ⁠. In a multiplicative group, the operation symbol is usually omitted entirely, so that the operation is denoted by juxtaposition, a b {\displaystyle ab} instead of ⁠ a ⋅ b {\displaystyle a\cdot b} ⁠. The definition of a group does not require that a ⋅ b = b ⋅ a {\displaystyle a\cdot b=b\cdot a} for all elements a {\displaystyle a} and b {\displaystyle b} in ⁠ G {\displaystyle G} ⁠. If this additional condition holds, then the operation is said to be commutative, and the group is called an abelian group. It is a common convention that for an abelian group either additive or multiplicative notation may be used, but for a nonabelian group only multiplicative notation is used. Several other notations are commonly used for groups whose elements are not numbers. For a group whose elements are functions, the operation is often function composition ⁠ f ∘ g {\displaystyle f\circ g} ⁠; then the identity may be denoted id. In the more specific cases of geometric transformation groups, symmetry groups, permutation groups, and automorphism groups, the symbol ∘ {\displaystyle \circ } is often omitted, as for multiplicative groups. Many other variants of notation may be encountered. === Second example: a symmetry group === Two figures in the plane are congruent if one can be changed into the other using a combination of rotations, reflections, and translations. Any figure is congruent to itself. However, some figures are congruent to themselves in more than one way, and these extra congruences are called symmetries. A square has eight symmetries. These are: the identity operation leaving everything unchanged, denoted id; rotations of the square around its center by 90°, 180°, and 270° clockwise, denoted by ⁠ r 1 {\displaystyle r_{1}} ⁠, r 2 {\displaystyle r_{2}} and ⁠ r 3 {\displaystyle r_{3}} ⁠, respectively; reflections about the horizontal and vertical middle line (⁠ f v {\displaystyle f_{\mathrm {v} }} ⁠ and ⁠ f h {\displaystyle f_{\mathrm {h} }} ⁠), or through the two diagonals (⁠ f d {\displaystyle f_{\mathrm {d} }} ⁠ and ⁠ f c {\displaystyle f_{\mathrm {c} }} ⁠). These symmetries are functions. Each sends a point in the square to the corresponding point under the symmetry. For example, r 1 {\displaystyle r_{1}} sends a point to its rotation 90° clockwise around the square's center, and f h {\displaystyle f_{\mathrm {h} }} sends a point to its reflection across the square's vertical middle line. Composing two of these symmetries gives another symmetry. These symmetries determine a group called the dihedral group of degree four, denoted ⁠ D 4 {\displaystyle \mathrm {D} _{4}} ⁠. The underlying set of the group is the above set of symmetries, and the group operation is function composition. 
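The eight symmetries just listed can be encoded concretely. In the short Python sketch below (an illustration, not part of the article: the numbering of the corners 0 to 3 clockwise from the top left, and the labelling of the two diagonal reflections, are choices made here, fixed so that the products worked out in the following paragraphs come out the same way), each symmetry is the tuple of images of the four corners, and composition means applying the right-hand symmetry first:

# Each symmetry is written as (image of corner 0, image of 1, image of 2, image of 3),
# with the corners numbered 0..3 clockwise starting at the top left.
ID = (0, 1, 2, 3)
R1 = (1, 2, 3, 0)   # rotation by 90 degrees clockwise
R2 = (2, 3, 0, 1)   # rotation by 180 degrees
R3 = (3, 0, 1, 2)   # rotation by 270 degrees clockwise
FV = (1, 0, 3, 2)   # reflection in the vertical middle line
FH = (3, 2, 1, 0)   # reflection in the horizontal middle line
FD = (0, 3, 2, 1)   # reflection in the main diagonal
FC = (2, 1, 0, 3)   # reflection in the counter-diagonal
D4 = {ID, R1, R2, R3, FV, FH, FD, FC}

def compose(b, a):
    # "b after a": apply the symmetry a first, then the symmetry b.
    return tuple(b[a[i]] for i in range(4))

# The eight symmetries are distinct, and composing any two gives another of the eight.
assert len(D4) == 8
assert all(compose(b, a) in D4 for a in D4 for b in D4)
# The composition worked out in the Cayley-table discussion below:
assert compose(FH, R3) == FD and compose(R3, FH) == FC
# The order of composition matters, so this group is not abelian:
assert compose(FH, R1) != compose(R1, FH)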
Two symmetries are combined by composing them as functions, that is, applying the first one to the square, and the second one to the result of the first application. The result of performing first a {\displaystyle a} and then b {\displaystyle b} is written symbolically from right to left as b ∘ a {\displaystyle b\circ a} ("apply the symmetry b {\displaystyle b} after performing the symmetry ⁠ a {\displaystyle a} ⁠"). This is the usual notation for composition of functions. A Cayley table lists the results of all such compositions possible. For example, rotating by 270° clockwise (⁠ r 3 {\displaystyle r_{3}} ⁠) and then reflecting horizontally (⁠ f h {\displaystyle f_{\mathrm {h} }} ⁠) is the same as performing a reflection along the diagonal (⁠ f d {\displaystyle f_{\mathrm {d} }} ⁠). Using the above symbols, highlighted in blue in the Cayley table: f h ∘ r 3 = f d . {\displaystyle f_{\mathrm {h} }\circ r_{3}=f_{\mathrm {d} }.} Given this set of symmetries and the described operation, the group axioms can be understood as follows. Binary operation: Composition is a binary operation. That is, a ∘ b {\displaystyle a\circ b} is a symmetry for any two symmetries a {\displaystyle a} and ⁠ b {\displaystyle b} ⁠. For example, r 3 ∘ f h = f c , {\displaystyle r_{3}\circ f_{\mathrm {h} }=f_{\mathrm {c} },} that is, rotating 270° clockwise after reflecting horizontally equals reflecting along the counter-diagonal (⁠ f c {\displaystyle f_{\mathrm {c} }} ⁠). Indeed, every other combination of two symmetries still gives a symmetry, as can be checked using the Cayley table. Associativity: The associativity axiom deals with composing more than two symmetries: Starting with three elements ⁠ a {\displaystyle a} ⁠, ⁠ b {\displaystyle b} ⁠ and ⁠ c {\displaystyle c} ⁠ of ⁠ D 4 {\displaystyle \mathrm {D} _{4}} ⁠, there are two possible ways of using these three symmetries in this order to determine a symmetry of the square. One of these ways is to first compose a {\displaystyle a} and b {\displaystyle b} into a single symmetry, then to compose that symmetry with ⁠ c {\displaystyle c} ⁠. The other way is to first compose b {\displaystyle b} and ⁠ c {\displaystyle c} ⁠, then to compose the resulting symmetry with ⁠ a {\displaystyle a} ⁠. These two ways must give always the same result, that is, ( a ∘ b ) ∘ c = a ∘ ( b ∘ c ) , {\displaystyle (a\circ b)\circ c=a\circ (b\circ c),} For example, ( f d ∘ f v ) ∘ r 2 = f d ∘ ( f v ∘ r 2 ) {\displaystyle (f_{\mathrm {d} }\circ f_{\mathrm {v} })\circ r_{2}=f_{\mathrm {d} }\circ (f_{\mathrm {v} }\circ r_{2})} can be checked using the Cayley table: ( f d ∘ f v ) ∘ r 2 = r 3 ∘ r 2 = r 1 f d ∘ ( f v ∘ r 2 ) = f d ∘ f h = r 1 . {\displaystyle {\begin{aligned}(f_{\mathrm {d} }\circ f_{\mathrm {v} })\circ r_{2}&=r_{3}\circ r_{2}=r_{1}\\f_{\mathrm {d} }\circ (f_{\mathrm {v} }\circ r_{2})&=f_{\mathrm {d} }\circ f_{\mathrm {h} }=r_{1}.\end{aligned}}} Identity element: The identity element is ⁠ i d {\displaystyle \mathrm {id} } ⁠, as it does not change any symmetry a {\displaystyle a} when composed with it either on the left or on the right. Inverse element: Each symmetry has an inverse: ⁠ i d {\displaystyle \mathrm {id} } ⁠, the reflections ⁠ f h {\displaystyle f_{\mathrm {h} }} ⁠, ⁠ f v {\displaystyle f_{\mathrm {v} }} ⁠, ⁠ f d {\displaystyle f_{\mathrm {d} }} ⁠, ⁠ f c {\displaystyle f_{\mathrm {c} }} ⁠ and the 180° rotation r 2 {\displaystyle r_{2}} are their own inverse, because performing them twice brings the square back to its original orientation. 
The rotations r 3 {\displaystyle r_{3}} and r 1 {\displaystyle r_{1}} are each other's inverses, because rotating 90° and then rotation 270° (or vice versa) yields a rotation over 360° which leaves the square unchanged. This is easily verified on the table. In contrast to the group of integers above, where the order of the operation is immaterial, it does matter in ⁠ D 4 {\displaystyle \mathrm {D} _{4}} ⁠, as, for example, f h ∘ r 1 = f c {\displaystyle f_{\mathrm {h} }\circ r_{1}=f_{\mathrm {c} }} but ⁠ r 1 ∘ f h = f d {\displaystyle r_{1}\circ f_{\mathrm {h} }=f_{\mathrm {d} }} ⁠. In other words, D 4 {\displaystyle \mathrm {D} _{4}} is not abelian. == History == The modern concept of an abstract group developed out of several fields of mathematics. The original motivation for group theory was the quest for solutions of polynomial equations of degree higher than 4. The 19th-century French mathematician Évariste Galois, extending prior work of Paolo Ruffini and Joseph-Louis Lagrange, gave a criterion for the solvability of a particular polynomial equation in terms of the symmetry group of its roots (solutions). The elements of such a Galois group correspond to certain permutations of the roots. At first, Galois's ideas were rejected by his contemporaries, and published only posthumously. More general permutation groups were investigated in particular by Augustin Louis Cauchy. Arthur Cayley's On the theory of groups, as depending on the symbolic equation θ n = 1 {\displaystyle \theta ^{n}=1} (1854) gives the first abstract definition of a finite group. Geometry was a second field in which groups were used systematically, especially symmetry groups as part of Felix Klein's 1872 Erlangen program. After novel geometries such as hyperbolic and projective geometry had emerged, Klein used group theory to organize them in a more coherent way. Further advancing these ideas, Sophus Lie founded the study of Lie groups in 1884. The third field contributing to group theory was number theory. Certain abelian group structures had been used implicitly in Carl Friedrich Gauss's number-theoretical work Disquisitiones Arithmeticae (1798), and more explicitly by Leopold Kronecker. In 1847, Ernst Kummer made early attempts to prove Fermat's Last Theorem by developing groups describing factorization into prime numbers. The convergence of these various sources into a uniform theory of groups started with Camille Jordan's Traité des substitutions et des équations algébriques (1870). Walther von Dyck (1882) introduced the idea of specifying a group by means of generators and relations, and was also the first to give an axiomatic definition of an "abstract group", in the terminology of the time. As of the 20th century, groups gained wide recognition by the pioneering work of Ferdinand Georg Frobenius and William Burnside (who worked on representation theory of finite groups), Richard Brauer's modular representation theory and Issai Schur's papers. The theory of Lie groups, and more generally locally compact groups was studied by Hermann Weyl, Élie Cartan and many others. Its algebraic counterpart, the theory of algebraic groups, was first shaped by Claude Chevalley (from the late 1930s) and later by the work of Armand Borel and Jacques Tits. The University of Chicago's 1960–61 Group Theory Year brought together group theorists such as Daniel Gorenstein, John G. 
Thompson and Walter Feit, laying the foundation of a collaboration that, with input from numerous other mathematicians, led to the classification of finite simple groups, with the final step taken by Aschbacher and Smith in 2004. This project exceeded previous mathematical endeavours by its sheer size, in both length of proof and number of researchers. Research concerning this classification proof is ongoing. Group theory remains a highly active mathematical branch, impacting many other fields, as the examples below illustrate. == Elementary consequences of the group axioms == Basic facts about all groups that can be obtained directly from the group axioms are commonly subsumed under elementary group theory. For example, repeated applications of the associativity axiom show that the unambiguity of a ⋅ b ⋅ c = ( a ⋅ b ) ⋅ c = a ⋅ ( b ⋅ c ) {\displaystyle a\cdot b\cdot c=(a\cdot b)\cdot c=a\cdot (b\cdot c)} generalizes to more than three factors. Because this implies that parentheses can be inserted anywhere within such a series of terms, parentheses are usually omitted. === Uniqueness of identity element === The group axioms imply that the identity element is unique; that is, there exists only one identity element: any two identity elements e {\displaystyle e} and f {\displaystyle f} of a group are equal, because the group axioms imply ⁠ e = e ⋅ f = f {\displaystyle e=e\cdot f=f} ⁠. It is thus customary to speak of the identity element of the group. === Uniqueness of inverses === The group axioms also imply that the inverse of each element is unique. Let a group element a {\displaystyle a} have both b {\displaystyle b} and c {\displaystyle c} as inverses. Then b = b ⋅ e ( e is the identity element) = b ⋅ ( a ⋅ c ) ( c and a are inverses of each other) = ( b ⋅ a ) ⋅ c (associativity) = e ⋅ c ( b is an inverse of a ) = c ( e is the identity element and b = c ) {\displaystyle {\begin{aligned}b&=b\cdot e&&{\text{(}}e{\text{ is the identity element)}}\\&=b\cdot (a\cdot c)&&{\text{(}}c{\text{ and }}a{\text{ are inverses of each other)}}\\&=(b\cdot a)\cdot c&&{\text{(associativity)}}\\&=e\cdot c&&{\text{(}}b{\text{ is an inverse of }}a{\text{)}}\\&=c&&{\text{(}}e{\text{ is the identity element and }}b=c{\text{)}}\end{aligned}}} Therefore, it is customary to speak of the inverse of an element. === Division === Given elements a {\displaystyle a} and b {\displaystyle b} of a group ⁠ G {\displaystyle G} ⁠, there is a unique solution x {\displaystyle x} in G {\displaystyle G} to the equation ⁠ a ⋅ x = b {\displaystyle a\cdot x=b} ⁠, namely ⁠ a − 1 ⋅ b {\displaystyle a^{-1}\cdot b} ⁠. It follows that for each a {\displaystyle a} in ⁠ G {\displaystyle G} ⁠, the function G → G {\displaystyle G\to G} that maps each x {\displaystyle x} to a ⋅ x {\displaystyle a\cdot x} is a bijection; it is called left multiplication by a {\displaystyle a} or left translation by ⁠ a {\displaystyle a} ⁠. Similarly, given a {\displaystyle a} and ⁠ b {\displaystyle b} ⁠, the unique solution to x ⋅ a = b {\displaystyle x\cdot a=b} is ⁠ b ⋅ a − 1 {\displaystyle b\cdot a^{-1}} ⁠. For each ⁠ a {\displaystyle a} ⁠, the function G → G {\displaystyle G\to G} that maps each x {\displaystyle x} to x ⋅ a {\displaystyle x\cdot a} is a bijection called right multiplication by a {\displaystyle a} or right translation by ⁠ a {\displaystyle a} ⁠. === Equivalent definition with relaxed axioms === The group axioms for identity and inverses may be "weakened" to assert only the existence of a left identity and left inverses. 
From these one-sided axioms, one can prove that the left identity is also a right identity and a left inverse is also a right inverse for the same element. Since they define exactly the same structures as groups, collectively the axioms are not weaker. In particular, assuming associativity and the existence of a left identity e {\displaystyle e} (that is, ⁠ e ⋅ f = f {\displaystyle e\cdot f=f} ⁠) and a left inverse f − 1 {\displaystyle f^{-1}} for each element f {\displaystyle f} (that is, ⁠ f − 1 ⋅ f = e {\displaystyle f^{-1}\cdot f=e} ⁠), it follows that every left inverse is also a right inverse of the same element as follows. Indeed, one has f ⋅ f − 1 = e ⋅ ( f ⋅ f − 1 ) (left identity) = ( ( f − 1 ) − 1 ⋅ f − 1 ) ⋅ ( f ⋅ f − 1 ) (left inverse) = ( f − 1 ) − 1 ⋅ ( ( f − 1 ⋅ f ) ⋅ f − 1 ) (associativity) = ( f − 1 ) − 1 ⋅ ( e ⋅ f − 1 ) (left inverse) = ( f − 1 ) − 1 ⋅ f − 1 (left identity) = e (left inverse) {\displaystyle {\begin{aligned}f\cdot f^{-1}&=e\cdot (f\cdot f^{-1})&&{\text{(left identity)}}\\&=((f^{-1})^{-1}\cdot f^{-1})\cdot (f\cdot f^{-1})&&{\text{(left inverse)}}\\&=(f^{-1})^{-1}\cdot ((f^{-1}\cdot f)\cdot f^{-1})&&{\text{(associativity)}}\\&=(f^{-1})^{-1}\cdot (e\cdot f^{-1})&&{\text{(left inverse)}}\\&=(f^{-1})^{-1}\cdot f^{-1}&&{\text{(left identity)}}\\&=e&&{\text{(left inverse)}}\end{aligned}}} Similarly, the left identity is also a right identity: f ⋅ e = f ⋅ ( f − 1 ⋅ f ) (left inverse) = ( f ⋅ f − 1 ) ⋅ f (associativity) = e ⋅ f (right inverse) = f (left identity) {\displaystyle {\begin{aligned}f\cdot e&=f\cdot (f^{-1}\cdot f)&&{\text{(left inverse)}}\\&=(f\cdot f^{-1})\cdot f&&{\text{(associativity)}}\\&=e\cdot f&&{\text{(right inverse)}}\\&=f&&{\text{(left identity)}}\end{aligned}}} These results do not hold if any of these axioms (associativity, existence of left identity and existence of left inverse) is removed. For a structure with a looser definition (like a semigroup) one may have, for example, that a left identity is not necessarily a right identity. The same result can be obtained by only assuming the existence of a right identity and a right inverse. However, only assuming the existence of a left identity and a right inverse (or vice versa) is not sufficient to define a group. For example, consider the set G = { e , f } {\displaystyle G=\{e,f\}} with the operator ⋅ {\displaystyle \cdot } satisfying e ⋅ e = f ⋅ e = e {\displaystyle e\cdot e=f\cdot e=e} and ⁠ e ⋅ f = f ⋅ f = f {\displaystyle e\cdot f=f\cdot f=f} ⁠. This structure does have a left identity (namely, ⁠ e {\displaystyle e} ⁠), and each element has a right inverse (which is e {\displaystyle e} for both elements). Furthermore, this operation is associative (since the product of any number of elements is always equal to the rightmost element in that product, regardless of the order in which these operations are applied). However, ( G , ⋅ ) {\displaystyle (G,\cdot )} is not a group, since it lacks a right identity. == Basic concepts == When studying sets, one uses concepts such as subset, function, and quotient by an equivalence relation. When studying groups, one uses instead subgroups, homomorphisms, and quotient groups. These are the analogues that take the group structure into account. === Group homomorphisms === Group homomorphisms are functions that respect group structure; they may be used to relate two groups. 
A homomorphism from a group ( G , ⋅ ) {\displaystyle (G,\cdot )} to a group ( H , ∗ ) {\displaystyle (H,*)} is a function φ : G → H {\displaystyle \varphi :G\to H} such that It would be natural to require also that φ {\displaystyle \varphi } respect identities, ⁠ φ ( 1 G ) = 1 H {\displaystyle \varphi (1_{G})=1_{H}} ⁠, and inverses, φ ( a − 1 ) = φ ( a ) − 1 {\displaystyle \varphi (a^{-1})=\varphi (a)^{-1}} for all a {\displaystyle a} in ⁠ G {\displaystyle G} ⁠. However, these additional requirements need not be included in the definition of homomorphisms, because they are already implied by the requirement of respecting the group operation. The identity homomorphism of a group G {\displaystyle G} is the homomorphism ι G : G → G {\displaystyle \iota _{G}:G\to G} that maps each element of G {\displaystyle G} to itself. An inverse homomorphism of a homomorphism φ : G → H {\displaystyle \varphi :G\to H} is a homomorphism ψ : H → G {\displaystyle \psi :H\to G} such that ψ ∘ φ = ι G {\displaystyle \psi \circ \varphi =\iota _{G}} and ⁠ φ ∘ ψ = ι H {\displaystyle \varphi \circ \psi =\iota _{H}} ⁠, that is, such that ψ ( φ ( g ) ) = g {\displaystyle \psi {\bigl (}\varphi (g){\bigr )}=g} for all g {\displaystyle g} in G {\displaystyle G} and such that φ ( ψ ( h ) ) = h {\displaystyle \varphi {\bigl (}\psi (h){\bigr )}=h} for all h {\displaystyle h} in ⁠ H {\displaystyle H} ⁠. An isomorphism is a homomorphism that has an inverse homomorphism; equivalently, it is a bijective homomorphism. Groups G {\displaystyle G} and H {\displaystyle H} are called isomorphic if there exists an isomorphism ⁠ φ : G → H {\displaystyle \varphi :G\to H} ⁠. In this case, H {\displaystyle H} can be obtained from G {\displaystyle G} simply by renaming its elements according to the function ⁠ φ {\displaystyle \varphi } ⁠; then any statement true for G {\displaystyle G} is true for ⁠ H {\displaystyle H} ⁠, provided that any specific elements mentioned in the statement are also renamed. The collection of all groups, together with the homomorphisms between them, form a category, the category of groups. An injective homomorphism ϕ : G ′ → G {\displaystyle \phi :G'\to G} factors canonically as an isomorphism followed by an inclusion, G ′ → ∼ H ↪ G {\displaystyle G'\;{\stackrel {\sim }{\to }}\;H\hookrightarrow G} for some subgroup ⁠ H {\displaystyle H} ⁠ of ⁠ G {\displaystyle G} ⁠. Injective homomorphisms are the monomorphisms in the category of groups. === Subgroups === Informally, a subgroup is a group H {\displaystyle H} contained within a bigger one, ⁠ G {\displaystyle G} ⁠: it has a subset of the elements of ⁠ G {\displaystyle G} ⁠, with the same operation. Concretely, this means that the identity element of G {\displaystyle G} must be contained in ⁠ H {\displaystyle H} ⁠, and whenever h 1 {\displaystyle h_{1}} and h 2 {\displaystyle h_{2}} are both in ⁠ H {\displaystyle H} ⁠, then so are h 1 ⋅ h 2 {\displaystyle h_{1}\cdot h_{2}} and ⁠ h 1 − 1 {\displaystyle h_{1}^{-1}} ⁠, so the elements of ⁠ H {\displaystyle H} ⁠, equipped with the group operation on G {\displaystyle G} restricted to ⁠ H {\displaystyle H} ⁠, indeed form a group. In this case, the inclusion map H → G {\displaystyle H\to G} is a homomorphism. 
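The claim above, that a map respecting the group operation automatically respects identities and inverses, can be checked numerically on a small example. The sketch below uses reduction modulo 12 as a homomorphism from (Z, +) to (Z/12Z, +); the modulus and the brute-force checks are choices made here for illustration.

```python
import random

# Reduction modulo n as a homomorphism from (Z, +) to (Z/nZ, +).
n = 12

def phi(a):
    return a % n

random.seed(0)
for _ in range(1000):
    a, b = random.randint(-100, 100), random.randint(-100, 100)
    # phi respects the group operation: phi(a + b) = phi(a) + phi(b) in Z/nZ
    assert phi(a + b) == (phi(a) + phi(b)) % n

# The further conditions hold without being imposed separately:
assert phi(0) == 0                           # the identity maps to the identity
for a in range(-50, 50):
    assert (phi(a) + phi(-a)) % n == 0       # phi(-a) is the inverse of phi(a)
print("operation, identity and inverses are all respected")
```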
In the example of symmetries of a square, the identity and the rotations constitute a subgroup ⁠ R = { i d , r 1 , r 2 , r 3 } {\displaystyle R=\{\mathrm {id} ,r_{1},r_{2},r_{3}\}} ⁠, highlighted in red in the Cayley table of the example: any two rotations composed are still a rotation, and a rotation can be undone by (i.e., is inverse to) the complementary rotations 270° for 90°, 180° for 180°, and 90° for 270°. The subgroup test provides a necessary and sufficient condition for a nonempty subset ⁠ H {\displaystyle H} ⁠ of a group ⁠ G {\displaystyle G} ⁠ to be a subgroup: it is sufficient to check that g − 1 ⋅ h ∈ H {\displaystyle g^{-1}\cdot h\in H} for all elements g {\displaystyle g} and h {\displaystyle h} in ⁠ H {\displaystyle H} ⁠. Knowing a group's subgroups is important in understanding the group as a whole. Given any subset S {\displaystyle S} of a group ⁠ G {\displaystyle G} ⁠, the subgroup generated by S {\displaystyle S} consists of all products of elements of S {\displaystyle S} and their inverses. It is the smallest subgroup of G {\displaystyle G} containing ⁠ S {\displaystyle S} ⁠. In the example of symmetries of a square, the subgroup generated by r 2 {\displaystyle r_{2}} and f v {\displaystyle f_{\mathrm {v} }} consists of these two elements, the identity element ⁠ i d {\displaystyle \mathrm {id} } ⁠, and the element ⁠ f h = f v ⋅ r 2 {\displaystyle f_{\mathrm {h} }=f_{\mathrm {v} }\cdot r_{2}} ⁠. Again, this is a subgroup, because combining any two of these four elements or their inverses (which are, in this particular case, these same elements) yields an element of this subgroup. === Cosets === In many situations it is desirable to consider two group elements the same if they differ by an element of a given subgroup. For example, in the symmetry group of a square, once any reflection is performed, rotations alone cannot return the square to its original position, so one can think of the reflected positions of the square as all being equivalent to each other, and as inequivalent to the unreflected positions; the rotation operations are irrelevant to the question whether a reflection has been performed. Cosets are used to formalize this insight: a subgroup H {\displaystyle H} determines left and right cosets, which can be thought of as translations of H {\displaystyle H} by an arbitrary group element ⁠ g {\displaystyle g} ⁠. In symbolic terms, the left and right cosets of ⁠ H {\displaystyle H} ⁠, containing an element ⁠ g {\displaystyle g} ⁠, are The left cosets of any subgroup H {\displaystyle H} form a partition of ⁠ G {\displaystyle G} ⁠; that is, the union of all left cosets is equal to G {\displaystyle G} and two left cosets are either equal or have an empty intersection. The first case g 1 H = g 2 H {\displaystyle g_{1}H=g_{2}H} happens precisely when ⁠ g 1 − 1 ⋅ g 2 ∈ H {\displaystyle g_{1}^{-1}\cdot g_{2}\in H} ⁠, i.e., when the two elements differ by an element of ⁠ H {\displaystyle H} ⁠. Similar considerations apply to the right cosets of ⁠ H {\displaystyle H} ⁠. The left cosets of H {\displaystyle H} may or may not be the same as its right cosets. If they are (that is, if all g {\displaystyle g} in G {\displaystyle G} satisfy ⁠ g H = H g {\displaystyle gH=Hg} ⁠), then H {\displaystyle H} is said to be a normal subgroup. 
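A computational check of these notions for the symmetries of the square is sketched below; the corner labelling and the composition convention are choices made here, so individual products may differ from the article's Cayley table by a relabelling.

```python
from itertools import product

# The eight symmetries of the square as permutations of its corners 0, 1, 2, 3.
ID, R1, R2, R3 = (0, 1, 2, 3), (1, 2, 3, 0), (2, 3, 0, 1), (3, 0, 1, 2)   # rotations
FH, FV, FD, FC = (3, 2, 1, 0), (1, 0, 3, 2), (0, 3, 2, 1), (2, 1, 0, 3)   # reflections
D4 = [ID, R1, R2, R3, FH, FV, FD, FC]

def compose(g, h):
    """g after h: first apply h, then g."""
    return tuple(g[h[i]] for i in range(4))

# Closure: composing any two symmetries gives a symmetry.
assert all(compose(g, h) in D4 for g, h in product(D4, repeat=2))

# The rotations form a subgroup R, and its left cosets partition D4.
R = [ID, R1, R2, R3]
left_cosets = {frozenset(compose(g, h) for h in R) for g in D4}
assert len(left_cosets) == 2 and set().union(*left_cosets) == set(D4)

# R is normal: gR = Rg for every g.
assert all({compose(g, h) for h in R} == {compose(h, g) for h in R} for g in D4)

# A subgroup need not be normal: for H = {identity, one reflection},
# the left and right cosets of a rotation differ.
H = [ID, FH]
print({compose(R1, h) for h in H})   # left coset r1.H
print({compose(h, R1) for h in H})   # right coset H.r1, a different set
```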
In ⁠ D 4 {\displaystyle \mathrm {D} _{4}} ⁠, the group of symmetries of a square, with its subgroup R {\displaystyle R} of rotations, the left cosets g R {\displaystyle gR} are either equal to ⁠ R {\displaystyle R} ⁠, if g {\displaystyle g} is an element of R {\displaystyle R} itself, or otherwise equal to U = f c R = { f c , f d , f v , f h } {\displaystyle U=f_{\mathrm {c} }R=\{f_{\mathrm {c} },f_{\mathrm {d} },f_{\mathrm {v} },f_{\mathrm {h} }\}} (highlighted in green in the Cayley table of ⁠ D 4 {\displaystyle \mathrm {D} _{4}} ⁠). The subgroup R {\displaystyle R} is normal, because f c R = U = R f c {\displaystyle f_{\mathrm {c} }R=U=Rf_{\mathrm {c} }} and similarly for the other elements of the group. (In fact, in the case of ⁠ D 4 {\displaystyle \mathrm {D} _{4}} ⁠, the cosets generated by reflections are all equal: ⁠ f h R = f v R = f d R = f c R {\displaystyle f_{\mathrm {h} }R=f_{\mathrm {v} }R=f_{\mathrm {d} }R=f_{\mathrm {c} }R} ⁠.) === Quotient groups === Suppose that N {\displaystyle N} is a normal subgroup of a group ⁠ G {\displaystyle G} ⁠, and G / N = { g N ∣ g ∈ G } {\displaystyle G/N=\{gN\mid g\in G\}} denotes its set of cosets. Then there is a unique group law on G / N {\displaystyle G/N} for which the map G → G / N {\displaystyle G\to G/N} sending each element g {\displaystyle g} to g N {\displaystyle gN} is a homomorphism. Explicitly, the product of two cosets g N {\displaystyle gN} and h N {\displaystyle hN} is ⁠ ( g h ) N {\displaystyle (gh)N} ⁠, the coset e N = N {\displaystyle eN=N} serves as the identity of ⁠ G / N {\displaystyle G/N} ⁠, and the inverse of g N {\displaystyle gN} in the quotient group is ⁠ ( g N ) − 1 = ( g − 1 ) N {\displaystyle (gN)^{-1}=\left(g^{-1}\right)N} ⁠. The group ⁠ G / N {\displaystyle G/N} ⁠, read as "⁠ G {\displaystyle G} ⁠ modulo ⁠ N {\displaystyle N} ⁠", is called a quotient group or factor group. The quotient group can alternatively be characterized by a universal property. The elements of the quotient group D 4 / R {\displaystyle \mathrm {D} _{4}/R} are R {\displaystyle R} and ⁠ U = f v R {\displaystyle U=f_{\mathrm {v} }R} ⁠. The group operation on the quotient is shown in the table. For example, ⁠ U ⋅ U = f v R ⋅ f v R = ( f v ⋅ f v ) R = R {\displaystyle U\cdot U=f_{\mathrm {v} }R\cdot f_{\mathrm {v} }R=(f_{\mathrm {v} }\cdot f_{\mathrm {v} })R=R} ⁠. Both the subgroup R = { i d , r 1 , r 2 , r 3 } {\displaystyle R=\{\mathrm {id} ,r_{1},r_{2},r_{3}\}} and the quotient D 4 / R {\displaystyle \mathrm {D} _{4}/R} are abelian, but D 4 {\displaystyle \mathrm {D} _{4}} is not. Sometimes a group can be reconstructed from a subgroup and quotient (plus some additional data), by the semidirect product construction; D 4 {\displaystyle \mathrm {D} _{4}} is an example. The first isomorphism theorem implies that any surjective homomorphism ϕ : G → H {\displaystyle \phi :G\to H} factors canonically as a quotient homomorphism followed by an isomorphism: ⁠ G → G / ker ⁡ ϕ → ∼ H {\displaystyle G\to G/\ker \phi \;{\stackrel {\sim }{\to }}\;H} ⁠. Surjective homomorphisms are the epimorphisms in the category of groups. === Presentations === Every group is isomorphic to a quotient of a free group, in many ways. For example, the dihedral group D 4 {\displaystyle \mathrm {D} _{4}} is generated by the right rotation r 1 {\displaystyle r_{1}} and the reflection f v {\displaystyle f_{\mathrm {v} }} in a vertical line (every element of D 4 {\displaystyle \mathrm {D} _{4}} is a finite product of copies of these and their inverses). 
Hence there is a surjective homomorphism ϕ {\displaystyle \phi } from the free group ⟨ r , f ⟩ {\displaystyle \langle r,f\rangle } on two generators to D 4 {\displaystyle \mathrm {D} _{4}} sending r {\displaystyle r} to r 1 {\displaystyle r_{1}} and f {\displaystyle f} to f v {\displaystyle f_{\mathrm {v} }} . Elements in ker ϕ {\displaystyle \ker \phi } are called relations; examples include r 4 , f 2 , ( r ⋅ f ) 2 {\displaystyle r^{4},f^{2},(r\cdot f)^{2}} . In fact, it turns out that ker ϕ {\displaystyle \ker \phi } is the smallest normal subgroup of ⟨ r , f ⟩ {\displaystyle \langle r,f\rangle } containing these three elements; in other words, all relations are consequences of these three. The quotient of the free group by this normal subgroup is denoted ⟨ r , f ∣ r 4 = f 2 = ( r ⋅ f ) 2 = 1 ⟩ {\displaystyle \langle r,f\mid r^{4}=f^{2}=(r\cdot f)^{2}=1\rangle } . This is called a presentation of D 4 {\displaystyle \mathrm {D} _{4}} by generators and relations, because the first isomorphism theorem for ϕ {\displaystyle \phi } yields an isomorphism ⟨ r , f ∣ r 4 = f 2 = ( r ⋅ f ) 2 = 1 ⟩ → D 4 {\displaystyle \langle r,f\mid r^{4}=f^{2}=(r\cdot f)^{2}=1\rangle \to \mathrm {D} _{4}} . A presentation of a group can be used to construct the Cayley graph, a graphical depiction of a discrete group. == Examples and applications == Examples and applications of groups abound. A starting point is the group Z {\displaystyle \mathbb {Z} } of integers with addition as group operation, introduced above. If instead of addition multiplication is considered, one obtains multiplicative groups. These groups are predecessors of important constructions in abstract algebra. Groups are also applied in many other mathematical areas. Mathematical objects are often examined by associating groups to them and studying the properties of the corresponding groups. For example, Henri Poincaré founded what is now called algebraic topology by introducing the fundamental group. By means of this connection, topological properties such as proximity and continuity translate into properties of groups. Elements of the fundamental group of a topological space are equivalence classes of loops, where loops are considered equivalent if one can be smoothly deformed into another, and the group operation is "concatenation" (tracing one loop then the other). For example, as shown in the figure, if the topological space is the plane with one point removed, then loops which do not wrap around the missing point (blue) can be smoothly contracted to a single point and are the identity element of the fundamental group. A loop which wraps around the missing point k {\displaystyle k} times cannot be deformed into a loop which wraps m {\displaystyle m} times (with m ≠ k {\displaystyle m\neq k} ), because the loop cannot be smoothly deformed across the hole, so each class of loops is characterized by its winding number around the missing point. The resulting group is isomorphic to the integers under addition. In more recent applications, the influence has also been reversed to motivate geometric constructions by a group-theoretical background. In a similar vein, geometric group theory employs geometric concepts, for example in the study of hyperbolic groups. Further branches crucially applying groups include algebraic geometry and number theory. In addition to the above theoretical applications, many practical applications of groups exist.
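The winding number mentioned above can be computed directly for a discretised loop; the sketch below sums signed angle increments around the missing point (taken here to be the origin), with the sample loops chosen purely for illustration.

```python
import math

def winding_number(loop):
    """Winding number about the origin of a closed polygonal loop, given as a
    list of (x, y) points (the last point is joined back to the first).
    Computed by summing signed angle increments along the loop."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(loop, loop[1:] + loop[:1]):
        d = math.atan2(y1, x1) - math.atan2(y0, x0)
        # wrap each increment into (-pi, pi]
        while d <= -math.pi:
            d += 2 * math.pi
        while d > math.pi:
            d -= 2 * math.pi
        total += d
    return round(total / (2 * math.pi))

def circle(times, radius=1.0, steps=200):
    return [(radius * math.cos(2 * math.pi * times * t / steps),
             radius * math.sin(2 * math.pi * times * t / steps))
            for t in range(steps)]

print(winding_number(circle(1)))    # 1: wraps once around the missing point
print(winding_number(circle(2)))    # 2: wraps twice; not deformable into the first
print(winding_number([(2, 0), (3, 0), (3, 1), (2, 1)]))  # 0: contractible loop
```

Concatenating a loop of winding number k with one of winding number m yields winding number k + m, matching the statement that the resulting group is the integers under addition.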
Cryptography relies on the combination of the abstract group theory approach together with algorithmical knowledge obtained in computational group theory, in particular when implemented for finite groups. Applications of group theory are not restricted to mathematics; sciences such as physics, chemistry and computer science benefit from the concept. === Numbers === Many number systems, such as the integers and the rationals, enjoy a naturally given group structure. In some cases, such as with the rationals, both addition and multiplication operations give rise to group structures. Such number systems are predecessors to more general algebraic structures known as rings and fields. Further abstract algebraic concepts such as modules, vector spaces and algebras also form groups. ==== Integers ==== The group of integers Z {\displaystyle \mathbb {Z} } under addition, denoted ⁠ ( Z , + ) {\displaystyle \left(\mathbb {Z} ,+\right)} ⁠, has been described above. The integers, with the operation of multiplication instead of addition, ( Z , ⋅ ) {\displaystyle \left(\mathbb {Z} ,\cdot \right)} do not form a group. The associativity and identity axioms are satisfied, but inverses do not exist: for example, a = 2 {\displaystyle a=2} is an integer, but the only solution to the equation a ⋅ b = 1 {\displaystyle a\cdot b=1} in this case is ⁠ b = 1 2 {\displaystyle b={\tfrac {1}{2}}} ⁠, which is a rational number, but not an integer. Hence not every element of Z {\displaystyle \mathbb {Z} } has a (multiplicative) inverse. ==== Rationals ==== The desire for the existence of multiplicative inverses suggests considering fractions a b . {\displaystyle {\frac {a}{b}}.} Fractions of integers (with b {\displaystyle b} nonzero) are known as rational numbers. The set of all such irreducible fractions is commonly denoted ⁠ Q {\displaystyle \mathbb {Q} } ⁠. There is still a minor obstacle for ⁠ ( Q , ⋅ ) {\displaystyle \left(\mathbb {Q} ,\cdot \right)} ⁠, the rationals with multiplication, being a group: because zero does not have a multiplicative inverse (i.e., there is no x {\displaystyle x} such that ⁠ x ⋅ 0 = 1 {\displaystyle x\cdot 0=1} ⁠), ( Q , ⋅ ) {\displaystyle \left(\mathbb {Q} ,\cdot \right)} is still not a group. However, the set of all nonzero rational numbers Q ∖ { 0 } = { q ∈ Q ∣ q ≠ 0 } {\displaystyle \mathbb {Q} \smallsetminus \left\{0\right\}=\left\{q\in \mathbb {Q} \mid q\neq 0\right\}} does form an abelian group under multiplication, also denoted ⁠ Q × {\displaystyle \mathbb {Q} ^{\times }} ⁠. Associativity and identity element axioms follow from the properties of integers. The closure requirement still holds true after removing zero, because the product of two nonzero rationals is never zero. Finally, the inverse of a / b {\displaystyle a/b} is ⁠ b / a {\displaystyle b/a} ⁠, therefore the axiom of the inverse element is satisfied. The rational numbers (including zero) also form a group under addition. Intertwining addition and multiplication operations yields more complicated structures called rings and – if division by other than zero is possible, such as in Q {\displaystyle \mathbb {Q} } – fields, which occupy a central position in abstract algebra. Group theoretic arguments therefore underlie parts of the theory of those entities. 
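A quick check of these axioms for the nonzero rationals under multiplication, using Python's exact Fraction type (a tooling choice made here, not prescribed by the text):

```python
from fractions import Fraction

a, b, c = Fraction(3, 4), Fraction(-7, 5), Fraction(2, 9)

assert a * b != 0                    # closure: a product of nonzero rationals is nonzero
assert a * Fraction(1) == a          # 1 is the identity element
assert a * Fraction(4, 3) == 1       # the inverse of 3/4 is 4/3
assert (a * b) * c == a * (b * c)    # associativity (inherited from the integers)
assert a * b == b * a                # the group is abelian
```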
=== Modular arithmetic === Modular arithmetic for a modulus n {\displaystyle n} defines any two elements a {\displaystyle a} and b {\displaystyle b} that differ by a multiple of n {\displaystyle n} to be equivalent, denoted by ⁠ a ≡ b ( mod n ) {\displaystyle a\equiv b{\pmod {n}}} ⁠. Every integer is equivalent to one of the integers from 0 {\displaystyle 0} to ⁠ n − 1 {\displaystyle n-1} ⁠, and the operations of modular arithmetic modify normal arithmetic by replacing the result of any operation by its equivalent representative. Modular addition, defined in this way for the integers from 0 {\displaystyle 0} to ⁠ n − 1 {\displaystyle n-1} ⁠, forms a group, denoted as Z n {\displaystyle \mathrm {Z} _{n}} or ⁠ ( Z / n Z , + ) {\displaystyle (\mathbb {Z} /n\mathbb {Z} ,+)} ⁠, with 0 {\displaystyle 0} as the identity element and n − a {\displaystyle n-a} as the inverse element of ⁠ a {\displaystyle a} ⁠. A familiar example is addition of hours on the face of a clock, where 12 rather than 0 is chosen as the representative of the identity. If the hour hand is on 9 {\displaystyle 9} and is advanced 4 {\displaystyle 4} hours, it ends up on ⁠ 1 {\displaystyle 1} ⁠, as shown in the illustration. This is expressed by saying that 9 + 4 {\displaystyle 9+4} is congruent to 1 {\displaystyle 1} "modulo ⁠ 12 {\displaystyle 12} ⁠" or, in symbols, 9 + 4 ≡ 1 ( mod 12 ) . {\displaystyle 9+4\equiv 1{\pmod {12}}.} For any prime number ⁠ p {\displaystyle p} ⁠, there is also the multiplicative group of integers modulo ⁠ p {\displaystyle p} ⁠. Its elements can be represented by 1 {\displaystyle 1} to ⁠ p − 1 {\displaystyle p-1} ⁠. The group operation, multiplication modulo ⁠ p {\displaystyle p} ⁠, replaces the usual product by its representative, the remainder of division by ⁠ p {\displaystyle p} ⁠. For example, for ⁠ p = 5 {\displaystyle p=5} ⁠, the four group elements can be represented by ⁠ 1 , 2 , 3 , 4 {\displaystyle 1,2,3,4} ⁠. In this group, ⁠ 4 ⋅ 4 ≡ 1 mod 5 {\displaystyle 4\cdot 4\equiv 1{\bmod {5}}} ⁠, because the usual product 16 {\displaystyle 16} is equivalent to ⁠ 1 {\displaystyle 1} ⁠: when divided by 5 {\displaystyle 5} it yields a remainder of ⁠ 1 {\displaystyle 1} ⁠. The primality of p {\displaystyle p} ensures that the usual product of two representatives is not divisible by ⁠ p {\displaystyle p} ⁠, and therefore that the modular product is nonzero. The identity element is represented by ⁠ 1 {\displaystyle 1} ⁠, and associativity follows from the corresponding property of the integers. Finally, the inverse element axiom requires that given an integer a {\displaystyle a} not divisible by ⁠ p {\displaystyle p} ⁠, there exists an integer b {\displaystyle b} such that a ⋅ b ≡ 1 ( mod p ) , {\displaystyle a\cdot b\equiv 1{\pmod {p}},} that is, such that p {\displaystyle p} evenly divides ⁠ a ⋅ b − 1 {\displaystyle a\cdot b-1} ⁠. The inverse b {\displaystyle b} can be found by using Bézout's identity and the fact that the greatest common divisor gcd ( a , p ) {\displaystyle \gcd(a,p)} equals ⁠ 1 {\displaystyle 1} ⁠. In the case p = 5 {\displaystyle p=5} above, the inverse of the element represented by 4 {\displaystyle 4} is that represented by ⁠ 4 {\displaystyle 4} ⁠, and the inverse of the element represented by 3 {\displaystyle 3} is represented by ⁠ 2 {\displaystyle 2} ⁠, as ⁠ 3 ⋅ 2 = 6 ≡ 1 mod 5 {\displaystyle 3\cdot 2=6\equiv 1{\bmod {5}}} ⁠. Hence all group axioms are fulfilled. 
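The inverse modulo a prime can be computed exactly as described, from Bézout's identity via the extended Euclidean algorithm; a minimal sketch follows (Python's built-in pow(a, -1, p) would give the same result).

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)  (Bezout's identity)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def inverse_mod(a, p):
    g, x, _ = extended_gcd(a % p, p)
    assert g == 1, "a must not be divisible by the prime p"
    return x % p

p = 5
print(inverse_mod(4, p))   # 4, since 4*4 = 16 = 1 (mod 5)
print(inverse_mod(3, p))   # 2, since 3*2 = 6  = 1 (mod 5)

# Closure: the product of two nonzero residues modulo a prime is nonzero.
assert all((a * b) % p != 0 for a in range(1, p) for b in range(1, p))
```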
This example is similar to ( Q ∖ { 0 } , ⋅ ) {\displaystyle \left(\mathbb {Q} \smallsetminus \left\{0\right\},\cdot \right)} above: it consists of exactly those elements in the ring Z / p Z {\displaystyle \mathbb {Z} /p\mathbb {Z} } that have a multiplicative inverse. These groups, denoted ⁠ F p × {\displaystyle \mathbb {F} _{p}^{\times }} ⁠, are crucial to public-key cryptography. === Cyclic groups === A cyclic group is a group all of whose elements are powers of a particular element ⁠ a {\displaystyle a} ⁠. In multiplicative notation, the elements of the group are … , a − 3 , a − 2 , a − 1 , a 0 , a , a 2 , a 3 , … , {\displaystyle \dots ,a^{-3},a^{-2},a^{-1},a^{0},a,a^{2},a^{3},\dots ,} where a 2 {\displaystyle a^{2}} means ⁠ a ⋅ a {\displaystyle a\cdot a} ⁠, a − 3 {\displaystyle a^{-3}} stands for ⁠ a − 1 ⋅ a − 1 ⋅ a − 1 = ( a ⋅ a ⋅ a ) − 1 {\displaystyle a^{-1}\cdot a^{-1}\cdot a^{-1}=(a\cdot a\cdot a)^{-1}} ⁠, etc. Such an element a {\displaystyle a} is called a generator or a primitive element of the group. In additive notation, the requirement for an element to be primitive is that each element of the group can be written as … , ( − a ) + ( − a ) , − a , 0 , a , a + a , … . {\displaystyle \dots ,(-a)+(-a),-a,0,a,a+a,\dots .} In the groups ( Z / n Z , + ) {\displaystyle (\mathbb {Z} /n\mathbb {Z} ,+)} introduced above, the element 1 {\displaystyle 1} is primitive, so these groups are cyclic. Indeed, each element is expressible as a sum all of whose terms are ⁠ 1 {\displaystyle 1} ⁠. Any cyclic group with n {\displaystyle n} elements is isomorphic to this group. A second example for cyclic groups is the group of ⁠ n {\displaystyle n} ⁠th complex roots of unity, given by complex numbers z {\displaystyle z} satisfying ⁠ z n = 1 {\displaystyle z^{n}=1} ⁠. These numbers can be visualized as the vertices on a regular n {\displaystyle n} -gon, as shown in blue in the image for ⁠ n = 6 {\displaystyle n=6} ⁠. The group operation is multiplication of complex numbers. In the picture, multiplying with z {\displaystyle z} corresponds to a counter-clockwise rotation by 60°. From field theory, the group F p × {\displaystyle \mathbb {F} _{p}^{\times }} is cyclic for prime p {\displaystyle p} : for example, if ⁠ p = 5 {\displaystyle p=5} ⁠, 3 {\displaystyle 3} is a generator since ⁠ 3 1 = 3 {\displaystyle 3^{1}=3} ⁠, ⁠ 3 2 = 9 ≡ 4 {\displaystyle 3^{2}=9\equiv 4} ⁠, ⁠ 3 3 ≡ 2 {\displaystyle 3^{3}\equiv 2} ⁠, and ⁠ 3 4 ≡ 1 {\displaystyle 3^{4}\equiv 1} ⁠. Some cyclic groups have an infinite number of elements. In these groups, for every non-zero element ⁠ a {\displaystyle a} ⁠, all the powers of a {\displaystyle a} are distinct; despite the name "cyclic group", the powers of the elements do not cycle. An infinite cyclic group is isomorphic to ⁠ ( Z , + ) {\displaystyle (\mathbb {Z} ,+)} ⁠, the group of integers under addition introduced above. As these two prototypes are both abelian, so are all cyclic groups. The study of finitely generated abelian groups is quite mature, including the fundamental theorem of finitely generated abelian groups; and reflecting this state of affairs, many group-related notions, such as center and commutator, describe the extent to which a given group is not abelian. 
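Both cyclic groups mentioned here can be explored numerically; the sketch below checks the sixth roots of unity and the multiplicative group modulo 5, with floating-point tolerances used only for the complex case.

```python
import cmath

# Sixth roots of unity: the vertices of a regular hexagon, forming a cyclic
# group of order 6 under multiplication of complex numbers.
n = 6
z = cmath.exp(2j * cmath.pi / n)           # a generator; multiplying by z rotates by 60 degrees
roots = [z ** k for k in range(n)]
assert abs(z ** n - 1) < 1e-12             # z^n = 1
assert all(any(abs(a * b - c) < 1e-12 for c in roots)
           for a in roots for b in roots)  # closure under multiplication

# 3 is a generator of the multiplicative group modulo 5.
p = 5
powers = {pow(3, k, p) for k in range(1, p)}
print(sorted(powers))                      # [1, 2, 3, 4]: every element is a power of 3
```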
=== Symmetry groups === Symmetry groups are groups consisting of symmetries of given mathematical objects, principally geometric entities, such as the symmetry group of the square given as an introductory example above, although they also arise in algebra such as the symmetries among the roots of polynomial equations dealt with in Galois theory (see below). Conceptually, group theory can be thought of as the study of symmetry. Symmetries in mathematics greatly simplify the study of geometrical or analytical objects. A group is said to act on another mathematical object ⁠ X {\displaystyle X} ⁠ if every group element can be associated to some operation on ⁠ X {\displaystyle X} ⁠ and the composition of these operations follows the group law. For example, an element of the (2,3,7) triangle group acts on a triangular tiling of the hyperbolic plane by permuting the triangles. By a group action, the group pattern is connected to the structure of the object being acted on. In chemistry, point groups describe molecular symmetries, while space groups describe crystal symmetries in crystallography. These symmetries underlie the chemical and physical behavior of these systems, and group theory enables simplification of quantum mechanical analysis of these properties. For example, group theory is used to show that optical transitions between certain quantum levels cannot occur simply because of the symmetry of the states involved. Group theory helps predict the changes in physical properties that occur when a material undergoes a phase transition, for example, from a cubic to a tetrahedral crystalline form. An example is ferroelectric materials, where the change from a paraelectric to a ferroelectric state occurs at the Curie temperature and is related to a change from the high-symmetry paraelectric state to the lower symmetry ferroelectric state, accompanied by a so-called soft phonon mode, a vibrational lattice mode that goes to zero frequency at the transition. Such spontaneous symmetry breaking has found further application in elementary particle physics, where its occurrence is related to the appearance of Goldstone bosons. Finite symmetry groups such as the Mathieu groups are used in coding theory, which is in turn applied in error correction of transmitted data, and in CD players. Another application is differential Galois theory, which characterizes functions having antiderivatives of a prescribed form, giving group-theoretic criteria for when solutions of certain differential equations are well-behaved. Geometric properties that remain stable under group actions are investigated in (geometric) invariant theory. === General linear group and representation theory === Matrix groups consist of matrices together with matrix multiplication. The general linear group G L ( n , R ) {\displaystyle \mathrm {GL} (n,\mathbb {R} )} consists of all invertible ⁠ n {\displaystyle n} ⁠-by-⁠ n {\displaystyle n} ⁠ matrices with real entries. Its subgroups are referred to as matrix groups or linear groups. The dihedral group example mentioned above can be viewed as a (very small) matrix group. Another important matrix group is the special orthogonal group ⁠ S O ( n ) {\displaystyle \mathrm {SO} (n)} ⁠. It describes all possible rotations in n {\displaystyle n} dimensions. Rotation matrices in this group are used in computer graphics. Representation theory is both an application of the group concept and important for a deeper understanding of groups. It studies the group by its group actions on other spaces. 
A broad class of group representations are linear representations in which the group acts on a vector space, such as the three-dimensional Euclidean space ⁠ R 3 {\displaystyle \mathbb {R} ^{3}} ⁠. A representation of a group G {\displaystyle G} on an n {\displaystyle n} -dimensional real vector space is simply a group homomorphism ρ : G → G L ( n , R ) {\displaystyle \rho :G\to \mathrm {GL} (n,\mathbb {R} )} from the group to the general linear group. This way, the group operation, which may be abstractly given, translates to the multiplication of matrices making it accessible to explicit computations. A group action gives further means to study the object being acted on. On the other hand, it also yields information about the group. Group representations are an organizing principle in the theory of finite groups, Lie groups, algebraic groups and topological groups, especially (locally) compact groups. === Galois groups === Galois groups were developed to help solve polynomial equations by capturing their symmetry features. For example, the solutions of the quadratic equation a x 2 + b x + c = 0 {\displaystyle ax^{2}+bx+c=0} are given by x = − b ± b 2 − 4 a c 2 a . {\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}}.} Each solution can be obtained by replacing the ± {\displaystyle \pm } sign by + {\displaystyle +} or ⁠ − {\displaystyle -} ⁠; analogous formulae are known for cubic and quartic equations, but do not exist in general for degree 5 and higher. In the quadratic formula, changing the sign (permuting the resulting two solutions) can be viewed as a (very simple) group operation. Analogous Galois groups act on the solutions of higher-degree polynomial equations and are closely related to the existence of formulas for their solution. Abstract properties of these groups (in particular their solvability) give a criterion for the ability to express the solutions of these polynomials using solely addition, multiplication, and roots similar to the formula above. Modern Galois theory generalizes the above type of Galois groups by shifting to field theory and considering field extensions formed as the splitting field of a polynomial. This theory establishes—via the fundamental theorem of Galois theory—a precise relationship between fields and groups, underlining once again the ubiquity of groups in mathematics. == Finite groups == A group is called finite if it has a finite number of elements. The number of elements is called the order of the group. An important class is the symmetric groups ⁠ S N {\displaystyle \mathrm {S} _{N}} ⁠, the groups of permutations of N {\displaystyle N} objects. For example, the symmetric group on 3 letters S 3 {\displaystyle \mathrm {S} _{3}} is the group of all possible reorderings of the objects. The three letters ABC can be reordered into ABC, ACB, BAC, BCA, CAB, CBA, forming in total 6 (factorial of 3) elements. The group operation is composition of these reorderings, and the identity element is the reordering operation that leaves the order unchanged. This class is fundamental insofar as any finite group can be expressed as a subgroup of a symmetric group S N {\displaystyle \mathrm {S} _{N}} for a suitable integer ⁠ N {\displaystyle N} ⁠, according to Cayley's theorem. Parallel to the group of symmetries of the square above, S 3 {\displaystyle \mathrm {S} _{3}} can also be interpreted as the group of symmetries of an equilateral triangle. 
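The symmetric group on three letters described here is small enough to enumerate completely; the sketch below composes reorderings and computes element orders, all of which divide the group order 6. The composition convention is a choice made for this sketch.

```python
from itertools import permutations

# The symmetric group S_3: all reorderings of three objects,
# with composition of reorderings as the group operation.
elements = list(permutations(range(3)))
assert len(elements) == 6                 # 3! elements

def compose(g, h):
    """g after h: first reorder by h, then by g."""
    return tuple(g[h[i]] for i in range(3))

identity = (0, 1, 2)

def order(g):
    """Smallest n >= 1 with g^n = identity."""
    power, n = g, 1
    while power != identity:
        power, n = compose(g, power), n + 1
    return n

for g in elements:
    assert compose(g, identity) == g      # the identity reordering changes nothing
    assert order(g) in (1, 2, 3)          # every element order divides 6

# S_3 is not abelian: two transpositions that do not commute.
a, b = (1, 0, 2), (0, 2, 1)
print(compose(a, b), compose(b, a))       # different results
```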
The order of an element a {\displaystyle a} in a group G {\displaystyle G} is the least positive integer n {\displaystyle n} such that ⁠ a n = e {\displaystyle a^{n}=e} ⁠, where a n {\displaystyle a^{n}} represents a ⋯ a ⏟ n factors , {\displaystyle \underbrace {a\cdots a} _{n{\text{ factors}}},} that is, application of the operation "⁠ ⋅ {\displaystyle \cdot } ⁠" to n {\displaystyle n} copies of ⁠ a {\displaystyle a} ⁠. (If "⁠ ⋅ {\displaystyle \cdot } ⁠" represents multiplication, then a n {\displaystyle a^{n}} corresponds to the ⁠ n {\displaystyle n} ⁠th power of ⁠ a {\displaystyle a} ⁠.) In infinite groups, such an n {\displaystyle n} may not exist, in which case the order of a {\displaystyle a} is said to be infinity. The order of an element equals the order of the cyclic subgroup generated by this element. More sophisticated counting techniques, for example, counting cosets, yield more precise statements about finite groups: Lagrange's Theorem states that for a finite group G {\displaystyle G} the order of any finite subgroup H {\displaystyle H} divides the order of ⁠ G {\displaystyle G} ⁠. The Sylow theorems give a partial converse. The dihedral group D 4 {\displaystyle \mathrm {D} _{4}} of symmetries of a square is a finite group of order 8. In this group, the order of r 1 {\displaystyle r_{1}} is 4, as is the order of the subgroup R {\displaystyle R} that this element generates. The order of the reflection elements f v {\displaystyle f_{\mathrm {v} }} etc. is 2. Both orders divide 8, as predicted by Lagrange's theorem. The groups F p × {\displaystyle \mathbb {F} _{p}^{\times }} of multiplication modulo a prime p {\displaystyle p} have order ⁠ p − 1 {\displaystyle p-1} ⁠. === Finite abelian groups === Any finite abelian group is isomorphic to a product of finite cyclic groups; this statement is part of the fundamental theorem of finitely generated abelian groups. Any group of prime order p {\displaystyle p} is isomorphic to the cyclic group Z p {\displaystyle \mathrm {Z} _{p}} (a consequence of Lagrange's theorem). Any group of order p 2 {\displaystyle p^{2}} is abelian, isomorphic to Z p 2 {\displaystyle \mathrm {Z} _{p^{2}}} or ⁠ Z p × Z p {\displaystyle \mathrm {Z} _{p}\times \mathrm {Z} _{p}} ⁠. But there exist nonabelian groups of order ⁠ p 3 {\displaystyle p^{3}} ⁠; the dihedral group D 4 {\displaystyle \mathrm {D} _{4}} of order 2 3 {\displaystyle 2^{3}} above is an example. === Simple groups === When a group G {\displaystyle G} has a normal subgroup N {\displaystyle N} other than { 1 } {\displaystyle \{1\}} and G {\displaystyle G} itself, questions about G {\displaystyle G} can sometimes be reduced to questions about N {\displaystyle N} and ⁠ G / N {\displaystyle G/N} ⁠. A nontrivial group is called simple if it has no such normal subgroup. Finite simple groups are to finite groups as prime numbers are to positive integers: they serve as building blocks, in a sense made precise by the Jordan–Hölder theorem. === Classification of finite simple groups === Computer algebra systems have been used to list all groups of order up to 2000. But classifying all finite groups is a problem considered too hard to be solved. The classification of all finite simple groups was a major achievement in contemporary group theory. There are several infinite families of such groups, as well as 26 "sporadic groups" that do not belong to any of the families. The largest sporadic group is called the monster group. 
The monstrous moonshine conjectures, proved by Richard Borcherds, relate the monster group to certain modular functions. The gap between the classification of simple groups and the classification of all groups lies in the extension problem. == Groups with additional structure == An equivalent definition of group consists of replacing the "there exist" part of the group axioms by operations whose result is the element that must exist. So, a group is a set G {\displaystyle G} equipped with a binary operation G × G → G {\displaystyle G\times G\rightarrow G} (the group operation), a unary operation G → G {\displaystyle G\rightarrow G} (which provides the inverse) and a nullary operation, which has no operand and results in the identity element. Otherwise, the group axioms are exactly the same. This variant of the definition avoids existential quantifiers and is used in computing with groups and for computer-aided proofs. This way of defining groups lends itself to generalizations such as the notion of group object in a category. Briefly, this is an object with morphisms that mimic the group axioms. === Topological groups === Some topological spaces may be endowed with a group law. In order for the group law and the topology to interweave well, the group operations must be continuous functions; informally, g ⋅ h {\displaystyle g\cdot h} and g − 1 {\displaystyle g^{-1}} must not vary wildly if g {\displaystyle g} and h {\displaystyle h} vary only a little. Such groups are called topological groups, and they are the group objects in the category of topological spaces. The most basic examples are the group of real numbers under addition and the group of nonzero real numbers under multiplication. Similar examples can be formed from any other topological field, such as the field of complex numbers or the field of p-adic numbers. These examples are locally compact, so they have Haar measures and can be studied via harmonic analysis. Other locally compact topological groups include the group of points of an algebraic group over a local field or adele ring; these are basic to number theory. Galois groups of infinite algebraic field extensions are equipped with the Krull topology, which plays a role in infinite Galois theory. A generalization used in algebraic geometry is the étale fundamental group. === Lie groups === A Lie group is a group that also has the structure of a differentiable manifold; informally, this means that it looks locally like a Euclidean space of some fixed dimension. Again, the definition requires the additional structure, here the manifold structure, to be compatible: the multiplication and inverse maps are required to be smooth. A standard example is the general linear group introduced above: it is an open subset of the space of all n {\displaystyle n} -by- n {\displaystyle n} matrices, because it is given by the inequality det ( A ) ≠ 0 , {\displaystyle \det(A)\neq 0,} where A {\displaystyle A} denotes an n {\displaystyle n} -by- n {\displaystyle n} matrix. Lie groups are of fundamental importance in modern physics: Noether's theorem links continuous symmetries to conserved quantities. Rotation, as well as translations in space and time, are basic symmetries of the laws of mechanics. They can, for instance, be used to construct simple models—imposing, say, axial symmetry on a situation will typically lead to significant simplification in the equations one needs to solve to provide a physical description.
Another example is the group of Lorentz transformations, which relate measurements of time and velocity of two observers in motion relative to each other. They can be deduced in a purely group-theoretical way, by expressing the transformations as a rotational symmetry of Minkowski space. The latter serves—in the absence of significant gravitation—as a model of spacetime in special relativity. The full symmetry group of Minkowski space, i.e., including translations, is known as the Poincaré group. By the above, it plays a pivotal role in special relativity and, by implication, for quantum field theories. Symmetries that vary with location are central to the modern description of physical interactions with the help of gauge theory. An important example of a gauge theory is the Standard Model, which describes three of the four known fundamental forces and classifies all known elementary particles. == Generalizations == More general structures may be defined by relaxing some of the axioms defining a group. The table gives a list of several structures generalizing groups. For example, if the requirement that every element has an inverse is eliminated, the resulting algebraic structure is called a monoid. The natural numbers N {\displaystyle \mathbb {N} } (including zero) under addition form a monoid, as do the nonzero integers under multiplication ⁠ ( Z ∖ { 0 } , ⋅ ) {\displaystyle (\mathbb {Z} \smallsetminus \{0\},\cdot )} ⁠. Adjoining inverses of all elements of the monoid ( Z ∖ { 0 } , ⋅ ) {\displaystyle (\mathbb {Z} \smallsetminus \{0\},\cdot )} produces a group ⁠ ( Q ∖ { 0 } , ⋅ ) {\displaystyle (\mathbb {Q} \smallsetminus \{0\},\cdot )} ⁠, and likewise adjoining inverses to any (abelian) monoid ⁠ M {\displaystyle M} ⁠ produces a group known as the Grothendieck group of ⁠ M {\displaystyle M} ⁠. A group can be thought of as a small category with one object ⁠ x {\displaystyle x} ⁠ in which every morphism is an isomorphism: given such a category, the set Hom ⁡ ( x , x ) {\displaystyle \operatorname {Hom} (x,x)} is a group; conversely, given a group ⁠ G {\displaystyle G} ⁠, one can build a small category with one object ⁠ x {\displaystyle x} ⁠ in which ⁠ Hom ⁡ ( x , x ) ≃ G {\displaystyle \operatorname {Hom} (x,x)\simeq G} ⁠. More generally, a groupoid is any small category in which every morphism is an isomorphism. In a groupoid, the set of all morphisms in the category is usually not a group, because the composition is only partially defined: ⁠ f g {\displaystyle fg} ⁠ is defined only when the source of ⁠ f {\displaystyle f} ⁠ matches the target of ⁠ g {\displaystyle g} ⁠. Groupoids arise in topology (for instance, the fundamental groupoid) and in the theory of stacks. Finally, it is possible to generalize any of these concepts by replacing the binary operation with an n-ary operation (i.e., an operation taking n arguments, for some nonnegative integer n). With the proper generalization of the group axioms, this gives a notion of n-ary group. == See also == List of group theory topics == Notes == == Citations == == References == == External links == Weisstein, Eric W., "Group", MathWorld
Wikipedia/Elementary_group_theory
In mathematics, specifically group theory, Cauchy's theorem states that if G is a finite group and p is a prime number dividing the order of G (the number of elements in G), then G contains an element of order p. That is, there is x in G such that p is the smallest positive integer with x^p = e, where e is the identity element of G. It is named after Augustin-Louis Cauchy, who discovered it in 1845. The theorem is a partial converse to Lagrange's theorem, which states that the order of any subgroup of a finite group G divides the order of G. In general, not every divisor of | G | {\displaystyle |G|} arises as the order of a subgroup of G {\displaystyle G} . Cauchy's theorem states that for any prime divisor p of the order of G, there is a subgroup of G whose order is p—the cyclic group generated by the element in Cauchy's theorem. Cauchy's theorem is generalized by Sylow's first theorem, which implies that if p^n is the maximal power of p dividing the order of G, then G has a subgroup of order p^n (and using the fact that a p-group is solvable, one can show that G has subgroups of order p^r for any r less than or equal to n). == Statement and proof == Many texts prove the theorem with the use of strong induction and the class equation, though considerably less machinery is required to prove the theorem in the abelian case. One can also invoke group actions for the proof. === Proof 1 === We first prove the special case where G is abelian, and then the general case; both proofs are by induction on n = |G|, and have as starting case n = p, which is trivial because any non-identity element now has order p. Suppose first that G is abelian. Take any non-identity element a, and let H be the cyclic group it generates. If p divides |H|, then a^{|H|/p} is an element of order p. If p does not divide |H|, then it divides the order [G:H] of the quotient group G/H, which therefore contains an element of order p by the inductive hypothesis. That element is a class xH for some x in G, and if m is the order of x in G, then x^m = e in G gives (xH)^m = eH in G/H, so p divides m; as before x^{m/p} is now an element of order p in G, completing the proof for the abelian case. In the general case, let Z be the center of G, which is an abelian subgroup. If p divides |Z|, then Z contains an element of order p by the case of abelian groups, and this element works for G as well. So we may assume that p does not divide the order of Z. Since p does divide |G|, and G is the disjoint union of Z and of the conjugacy classes of non-central elements, there exists a conjugacy class of a non-central element a whose size is not divisible by p. But the class equation shows that size is [G : C_G(a)], so p divides the order of the centralizer C_G(a) of a in G, which is a proper subgroup because a is not central. This subgroup contains an element of order p by the inductive hypothesis, and we are done. === Proof 2 === This proof uses the fact that for any action of a (cyclic) group of prime order p, the only possible orbit sizes are 1 and p, which is immediate from the orbit stabilizer theorem. The set that our cyclic group shall act on is the set X = { ( x 1 , … , x p ) ∈ G p : x 1 x 2 ⋯ x p = e } {\displaystyle X=\{\,(x_{1},\ldots ,x_{p})\in G^{p}:x_{1}x_{2}\cdots x_{p}=e\,\}} of p-tuples of elements of G whose product (in order) gives the identity. Such a p-tuple is uniquely determined by all its components except the last one, as the last element must be the inverse of the product of those preceding elements.
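The set X can be enumerated directly for a small group; the sketch below takes G to be the cyclic group Z/6Z under addition and p = 3, a choice made here purely for illustration.

```python
from itertools import product

# The set X from Proof 2, for G = Z/6Z (addition mod 6) and p = 3:
# tuples (x1, x2, x3) whose sum is the identity 0.
n, p = 6, 3
G = range(n)
X = [t for t in product(G, repeat=p) if sum(t) % n == 0]

# |X| = |G|^(p-1), here 36: the first p-1 entries can be chosen freely.
assert len(X) == n ** (p - 1)

# Cyclic shifts act on X; the fixed points are the constant tuples (x, x, x),
# i.e. the elements with p*x = 0 in additive notation.
fixed = [x for x in G if (p * x) % n == 0]
print(fixed)   # [0, 2, 4]: their number, 3, is divisible by p,
               # and the non-identity ones, 2 and 4, have order p = 3
```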
One also sees that those p − 1 elements can be chosen freely, so X has |G|^{p−1} elements, which is divisible by p. Now from the fact that in a group if ab = e then ba = e, it follows that any cyclic permutation of the components of an element of X again gives an element of X. Therefore one can define an action of the cyclic group C_p of order p on X by cyclic permutations of components, in other words in which a chosen generator of C_p sends ( x 1 , x 2 , … , x p ) ↦ ( x 2 , … , x p , x 1 ) {\displaystyle (x_{1},x_{2},\ldots ,x_{p})\mapsto (x_{2},\ldots ,x_{p},x_{1})} . As remarked, orbits in X under this action either have size 1 or size p. The former happens precisely for those tuples ( x , x , … , x ) {\displaystyle (x,x,\ldots ,x)} for which x p = e {\displaystyle x^{p}=e} . Counting the elements of X by orbits, and reducing modulo p, one sees that the number of elements satisfying x p = e {\displaystyle x^{p}=e} is divisible by p. But x = e is one such element, so there must be at least p − 1 other solutions for x, and these solutions are elements of order p. This completes the proof. == Applications == Cauchy's theorem implies a rough classification of all elementary abelian groups (groups whose non-identity elements all have equal, finite order). If G {\displaystyle G} is such a group, and x ∈ G {\displaystyle x\in G} has order p {\displaystyle p} , then p {\displaystyle p} must be prime, since otherwise Cauchy's theorem applied to the (finite) subgroup generated by x {\displaystyle x} produces an element of order less than p {\displaystyle p} . Moreover, every finite subgroup of G {\displaystyle G} has order a power of p {\displaystyle p} (including G {\displaystyle G} itself, if it is finite). This argument applies equally to p-groups, where every element's order is a power of p {\displaystyle p} (but not necessarily every order is the same). One may use the abelian case of Cauchy's Theorem in an inductive proof of the first of Sylow's theorems, similar to the first proof above, although there are also proofs that avoid doing this special case separately. == Notes == == References == Cauchy, Augustin-Louis (1845), "Mémoire sur les arrangements que l'on peut former avec des lettres données, et sur les permutations ou substitutions à l'aide desquelles on passe d'un arrangement à un autre", Exercises d'analyse et de physique mathématique, 3, Paris: 151–252 Cauchy, Augustin-Louis (1932), "Oeuvres complètes" (PDF), Lilliad - Université de Lille - Sciences et Technologies, second series, 13 (reprinted ed.), Paris: Gauthier-Villars: 171–282 Jacobson, Nathan (2009) [1985], Basic Algebra, Dover Books on Mathematics, vol. I (Second ed.), Dover Publications, p. 80, ISBN 978-0-486-47189-1 McKay, James H. (1959), "Another proof of Cauchy's group theorem", American Mathematical Monthly, 66 (2): 119, CiteSeerX 10.1.1.434.3544, doi:10.2307/2310010, JSTOR 2310010, MR 0098777, Zbl 0082.02601 Meo, M. (2004), "The mathematical life of Cauchy's group theorem", Historia Mathematica, 31 (2): 196–221, doi:10.1016/S0315-0860(03)00003-X == External links == "Cauchy's theorem". PlanetMath. "Proof of Cauchy's theorem". PlanetMath.
Wikipedia/Cauchy's_theorem_(group_theory)
Group-based cryptography is a use of groups to construct cryptographic primitives. A group is a very general algebraic object and most cryptographic schemes use groups in some way. In particular Diffie–Hellman key exchange uses finite cyclic groups. So the term group-based cryptography refers mostly to cryptographic protocols that use infinite non-abelian groups such as a braid group. == Examples == Shpilrain–Zapata public-key protocols Magyarik–Wagner public key protocol Anshel–Anshel–Goldfeld key exchange Ko–Lee et al. key exchange protocol == See also == Non-commutative cryptography == References == == Further reading == Paul, Kamakhya; Goswami, Pinkimani; Singh, Madan Mohan. (2022). "ALGEBRAIC BRAID GROUP PUBLIC KEY CRYPTOGRAPHY", Jnanabha, Vol. 52(2) (2022), 218-223. ISSN 0304-9892 (Print) ISSN 2455-7463 (Online) == External links == Cryptography and Braid Groups page (archived version 7/17/2017)
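To make the remark about finite cyclic groups concrete, the sketch below shows a toy Diffie–Hellman exchange in the multiplicative group modulo a small prime. The modulus and generator are illustrative stand-ins only, far below cryptographic strength, and the braid-group protocols listed above are not sketched here.

```python
import secrets

# Toy Diffie-Hellman key exchange in the cyclic group generated by g modulo p.
p = 0xFFFFFFFB           # the prime 4294967291; real deployments use much larger groups
g = 2                    # assumed here to generate a large cyclic subgroup

a = secrets.randbelow(p - 2) + 1          # one party's secret exponent
b = secrets.randbelow(p - 2) + 1          # the other party's secret exponent

A = pow(g, a, p)                          # first party publishes g^a
B = pow(g, b, p)                          # second party publishes g^b

shared_1 = pow(B, a, p)                   # (g^b)^a
shared_2 = pow(A, b, p)                   # (g^a)^b
assert shared_1 == shared_2               # both arrive at the same element g^(a*b)
print(hex(shared_1))
```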
Wikipedia/Group-based_cryptography
Geometric group theory is an area in mathematics devoted to the study of finitely generated groups via exploring the connections between algebraic properties of such groups and topological and geometric properties of spaces on which these groups can act non-trivially (that is, when the groups in question are realized as geometric symmetries or continuous transformations of some spaces). Another important idea in geometric group theory is to consider finitely generated groups themselves as geometric objects. This is usually done by studying the Cayley graphs of groups, which, in addition to the graph structure, are endowed with the structure of a metric space, given by the so-called word metric. Geometric group theory, as a distinct area, is relatively new, and became a clearly identifiable branch of mathematics in the late 1980s and early 1990s. Geometric group theory closely interacts with low-dimensional topology, hyperbolic geometry, algebraic topology, computational group theory and differential geometry. There are also substantial connections with complexity theory, mathematical logic, the study of Lie groups and their discrete subgroups, dynamical systems, probability theory, K-theory, and other areas of mathematics. In the introduction to his book Topics in Geometric Group Theory, Pierre de la Harpe wrote: "One of my personal beliefs is that fascination with symmetries and groups is one way of coping with frustrations of life's limitations: we like to recognize symmetries which allow us to recognize more than what we can see. In this sense the study of geometric group theory is a part of culture, and reminds me of several things that Georges de Rham practiced on many occasions, such as teaching mathematics, reciting Mallarmé, or greeting a friend".: 3  == History == Geometric group theory grew out of combinatorial group theory that largely studied properties of discrete groups via analyzing group presentations, which describe groups as quotients of free groups; this field was first systematically studied by Walther von Dyck, student of Felix Klein, in the early 1880s, while an early form is found in the 1856 icosian calculus of William Rowan Hamilton, where he studied the icosahedral symmetry group via the edge graph of the dodecahedron. Currently combinatorial group theory as an area is largely subsumed by geometric group theory. Moreover, the term "geometric group theory" came to often include studying discrete groups using probabilistic, measure-theoretic, arithmetic, analytic and other approaches that lie outside of the traditional combinatorial group theory arsenal. In the first half of the 20th century, pioneering work of Max Dehn, Jakob Nielsen, Kurt Reidemeister and Otto Schreier, J. H. C. Whitehead, Egbert van Kampen, amongst others, introduced some topological and geometric ideas into the study of discrete groups. Other precursors of geometric group theory include small cancellation theory and Bass–Serre theory. Small cancellation theory was introduced by Martin Grindlinger in the 1960s and further developed by Roger Lyndon and Paul Schupp. It studies van Kampen diagrams, corresponding to finite group presentations, via combinatorial curvature conditions and derives algebraic and algorithmic properties of groups from such analysis. Bass–Serre theory, introduced in the 1977 book of Serre, derives structural algebraic information about groups by studying group actions on simplicial trees. 
External precursors of geometric group theory include the study of lattices in Lie groups, especially Mostow's rigidity theorem, the study of Kleinian groups, and the progress achieved in low-dimensional topology and hyperbolic geometry in the 1970s and early 1980s, spurred, in particular, by William Thurston's Geometrization program. The emergence of geometric group theory as a distinct area of mathematics is usually traced to the late 1980s and early 1990s. It was spurred by the 1987 monograph of Mikhail Gromov "Hyperbolic groups" that introduced the notion of a hyperbolic group (also known as word-hyperbolic or Gromov-hyperbolic or negatively curved group), which captures the idea of a finitely generated group having large-scale negative curvature, and by his subsequent monograph Asymptotic Invariants of Infinite Groups, that outlined Gromov's program of understanding discrete groups up to quasi-isometry. The work of Gromov had a transformative effect on the study of discrete groups and the phrase "geometric group theory" started appearing soon afterwards. (see e.g.). == Modern themes and developments == Notable themes and developments in geometric group theory in 1990s and 2000s include: Gromov's program to study quasi-isometric properties of groups. A particularly influential broad theme in the area is Gromov's program of classifying finitely generated groups according to their large scale geometry. Formally, this means classifying finitely generated groups with their word metric up to quasi-isometry. This program involves: The study of properties that are invariant under quasi-isometry. Examples of such properties of finitely generated groups include: the growth rate of a finitely generated group; the isoperimetric function or Dehn function of a finitely presented group; the number of ends of a group; hyperbolicity of a group; the homeomorphism type of the Gromov boundary of a hyperbolic group; asymptotic cones of finitely generated groups (see e.g.); amenability of a finitely generated group; being virtually abelian (that is, having an abelian subgroup of finite index); being virtually nilpotent; being virtually free; being finitely presentable; being a finitely presentable group with solvable Word Problem; and others. Theorems which use quasi-isometry invariants to prove algebraic results about groups, for example: Gromov's polynomial growth theorem; Stallings' ends theorem; Mostow rigidity theorem. Quasi-isometric rigidity theorems, in which one classifies algebraically all groups that are quasi-isometric to some given group or metric space. This direction was initiated by the work of Schwartz on quasi-isometric rigidity of rank-one lattices and the work of Benson Farb and Lee Mosher on quasi-isometric rigidity of Baumslag–Solitar groups. The theory of word-hyperbolic and relatively hyperbolic groups. A particularly important development here is the work of Zlil Sela in 1990s resulting in the solution of the isomorphism problem for word-hyperbolic groups. The notion of a relatively hyperbolic groups was originally introduced by Gromov in 1987 and refined by Farb and Brian Bowditch, in the 1990s. The study of relatively hyperbolic groups gained prominence in the 2000s. Interactions with mathematical logic and the study of the first-order theory of free groups. Particularly important progress occurred on the famous Tarski conjectures, due to the work of Sela as well as of Olga Kharlampovich and Alexei Myasnikov. 
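Among the quasi-isometry invariants listed above, the growth rate of a finitely generated group is perhaps the easiest to experiment with. The following sketch compares ball sizes in the Cayley graphs of Z2 (quadratic growth) and the free group F2 (exponential growth); the generating sets are the standard illustrative choices and are not taken from a particular reference.

```python
from collections import deque

# Ball sizes |B(n)| in the Cayley graph measure the growth of a group, one of the
# quasi-isometry invariants mentioned above.  Z^2 grows quadratically,
# while the free group F_2 grows exponentially.

def ball_sizes_Z2(radius):
    """|B(n)| in Z^2 with the standard generators (+-1, 0), (0, +-1)."""
    sizes = []
    seen = {(0, 0)}
    frontier = [(0, 0)]
    for n in range(1, radius + 1):
        new = []
        for (x, y) in frontier:
            for (dx, dy) in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                q = (x + dx, y + dy)
                if q not in seen:
                    seen.add(q)
                    new.append(q)
        frontier = new
        sizes.append(len(seen))
    return sizes

def ball_sizes_F2(radius):
    """|B(n)| in the free group F_2 = <a, b>: reduced words of length <= n."""
    # There are 4 * 3^(k-1) reduced words of length k, so
    # |B(n)| = 1 + sum_{k=1}^{n} 4 * 3^(k-1) = 2 * 3^n - 1.
    return [2 * 3**n - 1 for n in range(1, radius + 1)]

print("Z^2:", ball_sizes_Z2(6))   # equals 2n^2 + 2n + 1
print("F_2:", ball_sizes_F2(6))   # grows like 3^n
```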
The study of limit groups and introduction of the language and machinery of non-commutative algebraic geometry gained prominence. Interactions with computer science, complexity theory and the theory of formal languages. This theme is exemplified by the development of the theory of automatic groups, a notion that imposes certain geometric and language theoretic conditions on the multiplication operation in a finitely generated group. The study of isoperimetric inequalities, Dehn functions and their generalizations for finitely presented group. This includes, in particular, the work of Jean-Camille Birget, Aleksandr Olʹshanskiĭ, Eliyahu Rips and Mark Sapir essentially characterizing the possible Dehn functions of finitely presented groups, as well as results providing explicit constructions of groups with fractional Dehn functions. The theory of toral or JSJ-decompositions for 3-manifolds was originally brought into a group theoretic setting by Peter Kropholler. This notion has been developed by many authors for both finitely presented and finitely generated groups. Connections with geometric analysis, the study of C*-algebras associated with discrete groups and of the theory of free probability. This theme is represented, in particular, by considerable progress on the Novikov conjecture and the Baum–Connes conjecture and the development and study of related group-theoretic notions such as topological amenability, asymptotic dimension, uniform embeddability into Hilbert spaces, rapid decay property, and so on (see e.g.). Interactions with the theory of quasiconformal analysis on metric spaces, particularly in relation to Cannon's conjecture about characterization of hyperbolic groups with Gromov boundary homeomorphic to the 2-sphere. Finite subdivision rules, also in relation to Cannon's conjecture. Interactions with topological dynamics in the contexts of studying actions of discrete groups on various compact spaces and group compactifications, particularly convergence group methods Development of the theory of group actions on R {\displaystyle \mathbb {R} } -trees (particularly the Rips machine), and its applications. The study of group actions on CAT(0) spaces and CAT(0) cubical complexes, motivated by ideas from Alexandrov geometry. Interactions with low-dimensional topology and hyperbolic geometry, particularly the study of 3-manifold groups (see, e.g.,), mapping class groups of surfaces, braid groups and Kleinian groups. Introduction of probabilistic methods to study algebraic properties of "random" group theoretic objects (groups, group elements, subgroups, etc.). A particularly important development here is the work of Gromov who used probabilistic methods to prove the existence of a finitely generated group that is not uniformly embeddable into a Hilbert space. Other notable developments include introduction and study of the notion of generic-case complexity for group-theoretic and other mathematical algorithms and algebraic rigidity results for generic groups. The study of automata groups and iterated monodromy groups as groups of automorphisms of infinite rooted trees. In particular, Grigorchuk's groups of intermediate growth, and their generalizations, appear in this context. The study of measure-theoretic properties of group actions on measure spaces, particularly introduction and development of the notions of measure equivalence and orbit equivalence, as well as measure-theoretic generalizations of Mostow rigidity. 
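The automata groups mentioned above can be experimented with directly, since their generators are defined by finite recursions on the rooted binary tree. The sketch below implements the standard generators a, b, c, d of Grigorchuk's group via the usual wreath recursion and checks a few well-known relations on all tree vertices up to a fixed depth; checking finitely many levels is only a sanity check, not a proof of the relations.

```python
from itertools import product

# The first Grigorchuk group acts on the infinite rooted binary tree; here its
# standard generators a, b, c, d are implemented on finite binary strings
# (vertices of the tree) via the usual wreath recursion
#   a = swap the two subtrees,  b = (a, c),  c = (a, d),  d = (1, b).

def a(w):
    return ((1 - w[0],) + w[1:]) if w else w

def b(w):
    if not w:
        return w
    return (w[0],) + (a(w[1:]) if w[0] == 0 else c(w[1:]))

def c(w):
    if not w:
        return w
    return (w[0],) + (a(w[1:]) if w[0] == 0 else d(w[1:]))

def d(w):
    if not w:
        return w
    return (w[0],) + (w[1:] if w[0] == 0 else b(w[1:]))

def apply_word(word, w):
    for g in reversed(word):   # rightmost generator acts first
        w = g(w)
    return w

def is_identity(word, depth=8):
    """Check that `word` acts trivially on all binary strings up to `depth`."""
    return all(apply_word(word, w) == w
               for n in range(depth + 1) for w in product((0, 1), repeat=n))

# Some well-known relations of the Grigorchuk group, checked on finite levels:
assert is_identity([a, a]) and is_identity([b, b])
assert is_identity([c, c]) and is_identity([d, d])
assert is_identity([b, c, d])            # b, c, d (with 1) form a Klein four-group
assert is_identity([a, d] * 4)           # (ad)^4 = 1
print("relations hold on all vertices up to depth 8")
```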
The study of unitary representations of discrete groups and Kazhdan's property (T) The study of Out(Fn) (the outer automorphism group of a free group of rank n) and of individual automorphisms of free groups. Introduction and the study of Culler-Vogtmann's outer space and of the theory of train tracks for free group automorphisms played a particularly prominent role here. Development of Bass–Serre theory, particularly various accessibility results and the theory of tree lattices. Generalizations of Bass–Serre theory such as the theory of complexes of groups. The study of random walks on groups and related boundary theory, particularly the notion of Poisson boundary (see e.g.). The study of amenability and of groups whose amenability status is still unknown. Interactions with finite group theory, particularly progress in the study of subgroup growth. Studying subgroups and lattices in linear groups, such as S L ( n , R ) {\displaystyle SL(n,\mathbb {R} )} , and of other Lie groups, via geometric methods (e.g. buildings), algebro-geometric tools (e.g. algebraic groups and representation varieties), analytic methods (e.g. unitary representations on Hilbert spaces) and arithmetic methods. Group cohomology, using algebraic and topological methods, particularly involving interaction with algebraic topology and the use of morse-theoretic ideas in the combinatorial context; large-scale, or coarse (see e.g.) homological and cohomological methods. Progress on traditional combinatorial group theory topics, such as the Burnside problem, the study of Coxeter groups and Artin groups, and so on (the methods used to study these questions currently are often geometric and topological). == Examples == The following examples are often studied in geometric group theory: == See also == The ping-pong lemma, a useful way to exhibit a group as a free product Amenable group Nielsen transformation Tietze transformation == References == === Books and monographs === These texts cover geometric group theory and related topics. Bowditch, Brian H. (2006). A course on geometric group theory. MSJ Memoirs. Vol. 16. Tokyo: Mathematical Society of Japan. ISBN 4-931469-35-3. Bridson, Martin R.; Haefliger, André (1999). Metric spaces of non-positive curvature. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Vol. 319. Berlin: Springer-Verlag. ISBN 3-540-64324-9. Coornaert, Michel; Delzant, Thomas; Papadopoulos, Athanase (1990). Géométrie et théorie des groupes : les groupes hyperboliques de Gromov. Lecture Notes in Mathematics. Vol. 1441. Springer-Verlag. ISBN 3-540-52977-2. MR 1075994. Clay, Matt; Margalit, Dan (2017). Office Hours with a Geometric Group Theorist. Princeton University Press. ISBN 978-0-691-15866-2. Coornaert, Michel; Papadopoulos, Athanase (1993). Symbolic dynamics and hyperbolic groups. Lecture Notes in Mathematics. Vol. 1539. Springer-Verlag. ISBN 3-540-56499-3. de la Harpe, P. (2000). Topics in geometric group theory. Chicago Lectures in Mathematics. University of Chicago Press. ISBN 0-226-31719-6. Druţu, Cornelia; Kapovich, Michael (2018). Geometric Group Theory (PDF). American Mathematical Society Colloquium Publications. Vol. 63. American Mathematical Society. ISBN 978-1-4704-1104-6. MR 3753580. Epstein, D.B.A.; Cannon, J.W.; Holt, D.; Levy, S.; Paterson, M.; Thurston, W. (1992). Word Processing in Groups. Jones and Bartlett. ISBN 0-86720-244-0. Gromov, M. (1987). "Hyperbolic Groups". In Gersten, G.M. (ed.). Essays in Group Theory. Vol. 8. MSRI. pp. 
75–263. ISBN 0-387-96618-8. Gromov, Mikhael (1993). "Asymptotic invariants of infinite groups". In Niblo, G.A.; Roller, M.A. (eds.). Geometric Group Theory: Proceedings of the Symposium held in Sussex 1991. London Mathematical Society Lecture Note Series. Vol. 2. Cambridge University Press. pp. 1–295. ISBN 978-0-521-44680-8. Kapovich, M. (2001). Hyperbolic Manifolds and Discrete Groups. Progress in Mathematics. Vol. 183. Birkhäuser. ISBN 978-0-8176-3904-4. Lyndon, Roger C.; Schupp, Paul E. (2015) [1977]. Combinatorial Group Theory. Classics in mathematics. Springer. ISBN 978-3-642-61896-3. Ol'shanskii, A.Yu. (2012) [1991]. Geometry of Defining Relations in Groups. Springer. ISBN 978-94-011-3618-1. Roe, John (2003). Lectures on Coarse Geometry. University Lecture Series. Vol. 31. American Mathematical Society. ISBN 978-0-8218-3332-2. == External links == Jon McCammond's Geometric Group Theory Page What is Geometric Group Theory? By Daniel Wise Open Problems in combinatorial and geometric group theory Geometric group theory Theme on arxiv.org
Wikipedia/Geometric_group_theory
In mathematics, an n-dimensional differential structure (or differentiable structure) on a set M makes M into an n-dimensional differential manifold, which is a topological manifold with some additional structure that allows for differential calculus on the manifold. If M is already a topological manifold, it is required that the new topology be identical to the existing one. == Definition == For a natural number n and some k which may be a non-negative integer or infinity, an n-dimensional Ck differential structure is defined using a Ck-atlas, which is a set of bijections called charts between subsets of M (whose union is the whole of M) and open subsets of R n {\displaystyle \mathbb {R} ^{n}} : φ i : M ⊃ W i → U i ⊂ R n {\displaystyle \varphi _{i}:M\supset W_{i}\rightarrow U_{i}\subset \mathbb {R} ^{n}} which are Ck-compatible (in the sense defined below): Each chart allows a subset of the manifold to be viewed as an open subset of R n {\displaystyle \mathbb {R} ^{n}} , but the usefulness of this depends on how much the charts agree when their domains overlap. Consider two charts: φ i : W i → U i , {\displaystyle \varphi _{i}:W_{i}\rightarrow U_{i},} φ j : W j → U j . {\displaystyle \varphi _{j}:W_{j}\rightarrow U_{j}.} The intersection of their domains is W i j = W i ∩ W j {\displaystyle W_{ij}=W_{i}\cap W_{j}} whose images under the two charts are U i j = φ i ( W i j ) , {\displaystyle U_{ij}=\varphi _{i}\left(W_{ij}\right),} U j i = φ j ( W i j ) . {\displaystyle U_{ji}=\varphi _{j}\left(W_{ij}\right).} The transition map between the two charts translates between their images on their shared domain: φ i j : U i j → U j i {\displaystyle \varphi _{ij}:U_{ij}\rightarrow U_{ji}} φ i j ( x ) = φ j ( φ i − 1 ( x ) ) . {\displaystyle \varphi _{ij}(x)=\varphi _{j}\left(\varphi _{i}^{-1}\left(x\right)\right).} Two charts φ i , φ j {\displaystyle \varphi _{i},\,\varphi _{j}} are Ck-compatible if U i j , U j i {\displaystyle U_{ij},\,U_{ji}} are open, and the transition maps φ i j , φ j i {\displaystyle \varphi _{ij},\,\varphi _{ji}} have continuous partial derivatives of order k. If k = 0, we only require that the transition maps are continuous, consequently a C0-atlas is simply another way to define a topological manifold. If k = ∞, derivatives of all orders must be continuous. A family of Ck-compatible charts covering the whole manifold is a Ck-atlas defining a Ck differential manifold. Two atlases are Ck-equivalent if the union of their sets of charts forms a Ck-atlas. In particular, a Ck-atlas that is C0-compatible with a C0-atlas that defines a topological manifold is said to determine a Ck differential structure on the topological manifold. The Ck equivalence classes of such atlases are the distinct Ck differential structures of the manifold. Each distinct differential structure is determined by a unique maximal atlas, which is simply the union of all atlases in the equivalence class. For simplification of language, without any loss of precision, one might just call a maximal Ck−atlas on a given set a Ck−manifold. This maximal atlas then uniquely determines both the topology and the underlying set, the latter being the union of the domains of all charts, and the former having the set of all these domains as a basis. == Existence and uniqueness theorems == For any integer k > 0 and any n−dimensional Ck−manifold, the maximal atlas contains a C∞−atlas on the same underlying set by a theorem due to Hassler Whitney. 
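The compatibility condition in the definition above can be illustrated numerically. In the sketch below, two stereographic-projection charts on the unit circle S1 are compared on their overlap; the transition map works out to t ↦ 1/t, which has continuous derivatives of all orders away from 0, so the two charts are C∞-compatible. The specific charts are a standard textbook choice, not taken from this article.

```python
import math

# Two charts on the unit circle S^1 (a 1-dimensional manifold): stereographic
# projection from the north pole N = (0, 1) and from the south pole S = (0, -1).
# Their transition map on the overlap is t -> 1/t, which is smooth away from 0,
# so the two charts are C^k-compatible for every k.

def phi_N(p):            # chart defined on S^1 minus {N}
    x, y = p
    return x / (1.0 - y)

def phi_N_inverse(t):    # back to the circle
    return (2.0 * t / (t * t + 1.0), (t * t - 1.0) / (t * t + 1.0))

def phi_S(p):            # chart defined on S^1 minus {S}
    x, y = p
    return x / (1.0 + y)

def transition(t):
    """The transition map phi_S o phi_N^{-1} on the overlap (t != 0)."""
    return phi_S(phi_N_inverse(t))

for t in (0.3, -1.7, 2.5, 10.0):
    assert math.isclose(phi_N(phi_N_inverse(t)), t, rel_tol=1e-12)
    assert math.isclose(transition(t), 1.0 / t, rel_tol=1e-12)
    print(f"transition({t}) = {transition(t):.6f}  vs  1/t = {1.0 / t:.6f}")
```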
It has also been shown that any maximal Ck−atlas contains some number of distinct maximal C∞−atlases whenever n > 0, although for any pair of these distinct C∞−atlases there exists a C∞−diffeomorphism identifying the two. It follows that there is only one class of smooth structures (modulo pairwise smooth diffeomorphism) over any topological manifold which admits a differentiable structure; that is, the C∞−structures contained in a given Ck−manifold form a single class. A bit loosely, one might express this by saying that the smooth structure is (essentially) unique. The case k = 0 is different. Namely, there exist topological manifolds which admit no C1−structure, a result proved by Kervaire (1960), and later explained in the context of Donaldson's theorem (compare Hilbert's fifth problem). Smooth structures on an orientable manifold are usually counted modulo orientation-preserving smooth homeomorphisms. There then arises the question whether orientation-reversing diffeomorphisms exist. There is an "essentially unique" smooth structure for any topological manifold of dimension smaller than 4. For compact manifolds of dimension greater than 4, there are finitely many "smooth types", i.e. equivalence classes of pairwise smoothly diffeomorphic smooth structures. In the case of Rn with n ≠ 4, the number of these types is one, whereas for n = 4, there are uncountably many such types; these are referred to as exotic R4. == Differential structures on spheres of dimension 1 to 20 == The following table lists the number of smooth types of the topological m−sphere Sm for the values of the dimension m from 1 up to 20. Spheres with a smooth, i.e. C∞−differential structure not smoothly diffeomorphic to the usual one are known as exotic spheres. It is not currently known how many smooth types the topological 4-sphere S4 has, except that there is at least one. There may be one, a finite number, or an infinite number. The claim that there is just one is known as the smooth Poincaré conjecture (see Generalized Poincaré conjecture). Most mathematicians believe that this conjecture is false, i.e. that S4 has more than one smooth type. The problem is connected with the existence of more than one smooth type of the topological 4-disk (or 4-ball). == Differential structures on topological manifolds == As mentioned above, in dimensions smaller than 4, there is only one differential structure for each topological manifold. That was proved by Tibor Radó in dimensions 1 and 2, and by Edwin E. Moise in dimension 3. By using obstruction theory, Robion Kirby and Laurent C. Siebenmann were able to show that the number of PL structures for compact topological manifolds of dimension greater than 4 is finite. John Milnor, Michel Kervaire, and Morris Hirsch proved that the number of smooth structures on a compact PL manifold is finite and agrees with the number of differential structures on the sphere of the same dimension (see the book by Asselmeyer-Maluga and Brans, chapter 7). By combining these results, the number of smooth structures on a compact topological manifold of dimension not equal to 4 is finite. Dimension 4 is more complicated. For compact manifolds, results depend on the complexity of the manifold as measured by the second Betti number b2. For large Betti numbers b2 > 18 in a simply connected 4-manifold, one can use surgery along a knot or link to produce a new differential structure. With the help of this procedure one can produce countably infinitely many differential structures. But even for simple spaces such as S 4 , C P 2 , . . . {\displaystyle S^{4},{\mathbb {C} }P^{2},...} no construction of other differential structures is known. For non-compact 4-manifolds there are many examples like R 4 , S 3 × R , M 4 ∖ { ∗ } , . . . {\displaystyle {\mathbb {R} }^{4},S^{3}\times {\mathbb {R} },M^{4}\smallsetminus \{*\},...} having uncountably many differential structures. == See also == Mathematical structure Exotic R4 Exotic sphere == References ==
Wikipedia/Differential_structure
Elliptic-curve cryptography (ECC) is an approach to public-key cryptography based on the algebraic structure of elliptic curves over finite fields. ECC allows smaller keys to provide equivalent security, compared to cryptosystems based on modular exponentiation in Galois fields, such as the RSA cryptosystem and ElGamal cryptosystem. Elliptic curves are applicable for key agreement, digital signatures, pseudo-random generators and other tasks. Indirectly, they can be used for encryption by combining the key agreement with a symmetric encryption scheme. They are also used in several integer factorization algorithms that have applications in cryptography, such as Lenstra elliptic-curve factorization. == History == The use of elliptic curves in cryptography was suggested independently by Neal Koblitz and Victor S. Miller in 1985. Elliptic curve cryptography algorithms entered wide use in 2004 to 2005. In 1999, NIST recommended fifteen elliptic curves. Specifically, FIPS 186-4 has ten recommended finite fields: Five prime fields F p {\displaystyle \mathbb {F} _{p}} for certain primes p of sizes 192, 224, 256, 384, and 521 bits. For each of the prime fields, one elliptic curve is recommended. Five binary fields F 2 m {\displaystyle \mathbb {F} _{2^{m}}} for m equal 163, 233, 283, 409, and 571. For each of the binary fields, one elliptic curve and one Koblitz curve was selected. The NIST recommendation thus contains a total of five prime curves and ten binary curves. The curves were chosen for optimal security and implementation efficiency. At the RSA Conference 2005, the National Security Agency (NSA) announced Suite B, which exclusively uses ECC for digital signature generation and key exchange. The suite is intended to protect both classified and unclassified national security systems and information. National Institute of Standards and Technology (NIST) has endorsed elliptic curve cryptography in its Suite B set of recommended algorithms, specifically elliptic-curve Diffie–Hellman (ECDH) for key exchange and Elliptic Curve Digital Signature Algorithm (ECDSA) for digital signature. The NSA allows their use for protecting information classified up to top secret with 384-bit keys. Recently, a large number of cryptographic primitives based on bilinear mappings on various elliptic curve groups, such as the Weil and Tate pairings, have been introduced. Schemes based on these primitives provide efficient identity-based encryption as well as pairing-based signatures, signcryption, key agreement, and proxy re-encryption. Elliptic curve cryptography is used successfully in numerous popular protocols, such as Transport Layer Security and Bitcoin. === Security concerns === In 2013, The New York Times stated that Dual Elliptic Curve Deterministic Random Bit Generation (or Dual_EC_DRBG) had been included as a NIST national standard due to the influence of NSA, which had included a deliberate weakness in the algorithm and the recommended elliptic curve. RSA Security in September 2013 issued an advisory recommending that its customers discontinue using any software based on Dual_EC_DRBG. In the wake of the exposure of Dual_EC_DRBG as "an NSA undercover operation", cryptography experts have also expressed concern over the security of the NIST recommended elliptic curves, suggesting a return to encryption based on non-elliptic-curve groups. Additionally, in August 2015, the NSA announced that it plans to replace Suite B with a new cipher suite due to concerns about quantum computing attacks on ECC. 
=== Patents === While the RSA patent expired in 2000, there may be patents in force covering certain aspects of ECC technology, including at least one ECC scheme (ECMQV). However, RSA Laboratories and Daniel J. Bernstein have argued that the US government elliptic curve digital signature standard (ECDSA; NIST FIPS 186-3) and certain practical ECC-based key exchange schemes (including ECDH) can be implemented without infringing those patents. == Elliptic curve theory == For the purposes of this article, an elliptic curve is a plane curve over a finite field (rather than the real numbers) which consists of the points satisfying the equation y 2 = x 3 + a x + b , {\displaystyle y^{2}=x^{3}+ax+b,} along with a distinguished point at infinity, denoted ∞. The coordinates here are to be chosen from a fixed finite field of characteristic not equal to 2 or 3, or the curve equation would be somewhat more complicated. This set of points, together with the group operation of elliptic curves, is an abelian group, with the point at infinity as an identity element. The structure of the group is inherited from the divisor group of the underlying algebraic variety: Div 0 ⁡ ( E ) → Pic 0 ⁡ ( E ) ≃ E . {\displaystyle \operatorname {Div} ^{0}(E)\to \operatorname {Pic} ^{0}(E)\simeq E.} === Application to cryptography === Public-key cryptography is based on the intractability of certain mathematical problems. Early public-key systems, such as RSA's 1983 patent, based their security on the assumption that it is difficult to factor a large integer composed of two or more large prime factors which are far apart. For later elliptic-curve-based protocols, the base assumption is that finding the discrete logarithm of a random elliptic curve element with respect to a publicly known base point is infeasible (the computational Diffie–Hellman assumption): this is the "elliptic curve discrete logarithm problem" (ECDLP). The security of elliptic curve cryptography depends on the ability to compute a point multiplication and the inability to compute the multiplicand given the original point and product point. The size of the elliptic curve, measured by the total number of discrete integer pairs satisfying the curve equation, determines the difficulty of the problem. The primary benefit promised by elliptic curve cryptography over alternatives such as RSA is a smaller key size, reducing storage and transmission requirements. For example, a 256-bit elliptic curve public key should provide comparable security to a 3072-bit RSA public key. === Cryptographic schemes === Several discrete logarithm-based protocols have been adapted to elliptic curves, replacing the group ( Z p ) × {\displaystyle (\mathbb {Z} _{p})^{\times }} with an elliptic curve: The Elliptic-curve Diffie–Hellman (ECDH) key agreement scheme is based on the Diffie–Hellman scheme, The Elliptic Curve Integrated Encryption Scheme (ECIES), also known as Elliptic Curve Augmented Encryption Scheme or simply the Elliptic Curve Encryption Scheme, The Elliptic Curve Digital Signature Algorithm (ECDSA) is based on the Digital Signature Algorithm, The deformation scheme using Harrison's p-adic Manhattan metric, The Edwards-curve Digital Signature Algorithm (EdDSA) is based on Schnorr signature and uses twisted Edwards curves, The ECMQV key agreement scheme is based on the MQV key agreement scheme, The ECQV implicit certificate scheme. 
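As a rough illustration of the group law and of a Diffie–Hellman-style exchange on an elliptic curve, the sketch below implements affine point addition and double-and-add scalar multiplication on the toy curve y2 = x3 + 2x + 2 over F17, a common textbook example; the parameters and private keys are purely illustrative and provide no security. Real implementations use standardized curves, point validation and constant-time scalar multiplication, none of which is attempted here.

```python
# A toy short-Weierstrass curve y^2 = x^3 + a*x + b over a small prime field,
# illustrating the group law, double-and-add scalar multiplication, and an
# ECDH-style exchange.  The textbook-sized parameters are hopelessly insecure
# and serve only to make the algebra concrete.

p, a, b = 17, 2, 2            # curve y^2 = x^3 + 2x + 2 over F_17
G = (5, 1)                    # a point on the curve, used as base point
O = None                      # the point at infinity (group identity)

def add(P, Q):
    """Elliptic-curve point addition in affine coordinates."""
    if P is O:
        return Q
    if Q is O:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                  # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

def mul(k, P):
    """Scalar multiplication k*P by double-and-add."""
    R = O
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

# Toy ECDH: each party publishes priv * G; the shared point is priv_A * pub_B.
alice_priv, bob_priv = 3, 7
alice_pub, bob_pub = mul(alice_priv, G), mul(bob_priv, G)
assert mul(alice_priv, bob_pub) == mul(bob_priv, alice_pub)
print("shared point:", mul(alice_priv, bob_pub))
```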
== Implementation == Some common implementation considerations include: === Domain parameters === To use ECC, all parties must agree on all the elements defining the elliptic curve, that is, the domain parameters of the scheme. The size of the field used is typically either prime (and denoted as p) or is a power of two ( 2 m {\displaystyle 2^{m}} ); the latter case is called the binary case, and this case necessitates the choice of an auxiliary curve denoted by f. Thus the field is defined by p in the prime case and the pair of m and f in the binary case. The elliptic curve is defined by the constants a and b used in its defining equation. Finally, the cyclic subgroup is defined by its generator (a.k.a. base point) G. For cryptographic application, the order of G, that is the smallest positive number n such that n G = O {\displaystyle nG={\mathcal {O}}} (the point at infinity of the curve, and the identity element), is normally prime. Since n is the size of a subgroup of E ( F p ) {\displaystyle E(\mathbb {F} _{p})} it follows from Lagrange's theorem that the number h = 1 n | E ( F p ) | {\displaystyle h={\frac {1}{n}}|E(\mathbb {F} _{p})|} is an integer. In cryptographic applications, this number h, called the cofactor, must be small ( h ≤ 4 {\displaystyle h\leq 4} ) and, preferably, h = 1 {\displaystyle h=1} . To summarize: in the prime case, the domain parameters are ( p , a , b , G , n , h ) {\displaystyle (p,a,b,G,n,h)} ; in the binary case, they are ( m , f , a , b , G , n , h ) {\displaystyle (m,f,a,b,G,n,h)} . Unless there is an assurance that domain parameters were generated by a party trusted with respect to their use, the domain parameters must be validated before use. The generation of domain parameters is not usually done by each participant because this involves computing the number of points on a curve which is time-consuming and troublesome to implement. As a result, several standard bodies published domain parameters of elliptic curves for several common field sizes. Such domain parameters are commonly known as "standard curves" or "named curves"; a named curve can be referenced either by name or by the unique object identifier defined in the standard documents: NIST, Recommended Elliptic Curves for Government Use SECG, SEC 2: Recommended Elliptic Curve Domain Parameters ECC Brainpool (RFC 5639), ECC Brainpool Standard Curves and Curve Generation SECG test vectors are also available. NIST has approved many SECG curves, so there is a significant overlap between the specifications published by NIST and SECG. EC domain parameters may be specified either by value or by name. If, despite the preceding admonition, one decides to construct one's own domain parameters, one should select the underlying field and then use one of the following strategies to find a curve with appropriate (i.e., near prime) number of points using one of the following methods: Select a random curve and use a general point-counting algorithm, for example, Schoof's algorithm or the Schoof–Elkies–Atkin algorithm, Select a random curve from a family which allows easy calculation of the number of points (e.g., Koblitz curves), or Select the number of points and generate a curve with this number of points using the complex multiplication technique. Several classes of curves are weak and should be avoided: Curves over F 2 m {\displaystyle \mathbb {F} _{2^{m}}} with non-prime m are vulnerable to Weil descent attacks. 
Curves such that n divides p B − 1 {\displaystyle p^{B}-1} (where p is the characteristic of the field: q for a prime field, or 2 {\displaystyle 2} for a binary field) for sufficiently small B are vulnerable to Menezes–Okamoto–Vanstone (MOV) attack which applies usual discrete logarithm problem (DLP) in a small-degree extension field of F p {\displaystyle \mathbb {F} _{p}} to solve ECDLP. The bound B should be chosen so that discrete logarithms in the field F p B {\displaystyle \mathbb {F} _{p^{B}}} are at least as difficult to compute as discrete logs on the elliptic curve E ( F q ) {\displaystyle E(\mathbb {F} _{q})} . Curves such that | E ( F q ) | = q {\displaystyle |E(\mathbb {F} _{q})|=q} are vulnerable to the attack that maps the points on the curve to the additive group of F q {\displaystyle \mathbb {F} _{q}} . === Key sizes === Because all the fastest known algorithms that allow one to solve the ECDLP (baby-step giant-step, Pollard's rho, etc.), need O ( n ) {\displaystyle O({\sqrt {n}})} steps, it follows that the size of the underlying field should be roughly twice the security parameter. For example, for 128-bit security one needs a curve over F q {\displaystyle \mathbb {F} _{q}} , where q ≈ 2 256 {\displaystyle q\approx 2^{256}} . This can be contrasted with finite-field cryptography (e.g., DSA) which requires 3072-bit public keys and 256-bit private keys, and integer factorization cryptography (e.g., RSA) which requires a 3072-bit value of n, where the private key should be just as large. However, the public key may be smaller to accommodate efficient encryption, especially when processing power is limited. The hardest ECC scheme (publicly) broken to date had a 112-bit key for the prime field case and a 109-bit key for the binary field case. For the prime field case, this was broken in July 2009 using a cluster of over 200 PlayStation 3 game consoles and could have been finished in 3.5 months using this cluster when running continuously. The binary field case was broken in April 2004 using 2600 computers over 17 months. A current project is aiming at breaking the ECC2K-130 challenge by Certicom, by using a wide range of different hardware: CPUs, GPUs, FPGA. === Projective coordinates === A close examination of the addition rules shows that in order to add two points, one needs not only several additions and multiplications in F q {\displaystyle \mathbb {F} _{q}} but also an inversion operation. The inversion (for given x ∈ F q {\displaystyle x\in \mathbb {F} _{q}} find y ∈ F q {\displaystyle y\in \mathbb {F} _{q}} such that x y = 1 {\displaystyle xy=1} ) is one to two orders of magnitude slower than multiplication. However, points on a curve can be represented in different coordinate systems which do not require an inversion operation to add two points. 
Several such systems were proposed: in the projective system each point is represented by three coordinates ( X , Y , Z ) {\displaystyle (X,Y,Z)} using the following relation: x = X Z {\displaystyle x={\frac {X}{Z}}} , y = Y Z {\displaystyle y={\frac {Y}{Z}}} ; in the Jacobian system a point is also represented with three coordinates ( X , Y , Z ) {\displaystyle (X,Y,Z)} , but a different relation is used: x = X Z 2 {\displaystyle x={\frac {X}{Z^{2}}}} , y = Y Z 3 {\displaystyle y={\frac {Y}{Z^{3}}}} ; in the López–Dahab system the relation is x = X Z {\displaystyle x={\frac {X}{Z}}} , y = Y Z 2 {\displaystyle y={\frac {Y}{Z^{2}}}} ; in the modified Jacobian system the same relations are used but four coordinates are stored and used for calculations ( X , Y , Z , a Z 4 ) {\displaystyle (X,Y,Z,aZ^{4})} ; and in the Chudnovsky Jacobian system five coordinates are used ( X , Y , Z , Z 2 , Z 3 ) {\displaystyle (X,Y,Z,Z^{2},Z^{3})} . Note that there may be different naming conventions, for example, IEEE P1363-2000 standard uses "projective coordinates" to refer to what is commonly called Jacobian coordinates. An additional speed-up is possible if mixed coordinates are used. === Fast reduction (NIST curves) === Reduction modulo p (which is needed for addition and multiplication) can be executed much faster if the prime p is a pseudo-Mersenne prime, that is p ≈ 2 d {\displaystyle p\approx 2^{d}} ; for example, p = 2 521 − 1 {\displaystyle p=2^{521}-1} or p = 2 256 − 2 32 − 2 9 − 2 8 − 2 7 − 2 6 − 2 4 − 1. {\displaystyle p=2^{256}-2^{32}-2^{9}-2^{8}-2^{7}-2^{6}-2^{4}-1.} Compared to Barrett reduction, there can be an order of magnitude speed-up. The speed-up here is a practical rather than theoretical one, and derives from the fact that the moduli of numbers against numbers near powers of two can be performed efficiently by computers operating on binary numbers with bitwise operations. The curves over F p {\displaystyle \mathbb {F} _{p}} with pseudo-Mersenne p are recommended by NIST. Yet another advantage of the NIST curves is that they use a = −3, which improves addition in Jacobian coordinates. According to Bernstein and Lange, many of the efficiency-related decisions in NIST FIPS 186-2 are suboptimal. Other curves are more secure and run just as fast. == Security == === Side-channel attacks === Unlike most other DLP systems (where it is possible to use the same procedure for squaring and multiplication), the EC addition is significantly different for doubling (P = Q) and general addition (P ≠ Q) depending on the coordinate system used. Consequently, it is important to counteract side-channel attacks (e.g., timing or simple/differential power analysis attacks) using, for example, fixed pattern window (a.k.a. comb) methods (note that this does not increase computation time). Alternatively one can use an Edwards curve; this is a special family of elliptic curves for which doubling and addition can be done with the same operation. Another concern for ECC-systems is the danger of fault attacks, especially when running on smart cards. === Backdoors === Cryptographic experts have expressed concerns that the National Security Agency has inserted a kleptographic backdoor into at least one elliptic curve-based pseudo random generator. Internal memos leaked by former NSA contractor Edward Snowden suggest that the NSA put a backdoor in the Dual EC DRBG standard. 
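Referring back to the Jacobian coordinates described above (x = X/Z2, y = Y/Z3), the following sketch doubles a point without any field inversion and then converts back to affine coordinates with a single inversion, checking the result against the affine doubling formula. The doubling formulas used are one standard textbook variant, and the tiny curve is for illustration only.

```python
# Doubling a point in Jacobian coordinates (x = X/Z^2, y = Y/Z^3): no field
# inversion is needed during the doubling itself; a single inversion converts
# back to affine coordinates at the end.  Toy curve y^2 = x^3 + 2x + 2 over F_17.

p, a = 17, 2                          # field F_17 and curve coefficient a

def double_jacobian(X, Y, Z):
    S = (4 * X * Y * Y) % p
    M = (3 * X * X + a * Z**4) % p
    X3 = (M * M - 2 * S) % p
    Y3 = (M * (S - X3) - 8 * Y**4) % p
    Z3 = (2 * Y * Z) % p
    return X3, Y3, Z3

def to_affine(X, Y, Z):
    Zinv = pow(Z, -1, p)              # the only inversion, performed once
    return (X * Zinv * Zinv) % p, (Y * Zinv**3) % p

def double_affine(x, y):
    lam = (3 * x * x + a) * pow(2 * y, -1, p) % p
    x3 = (lam * lam - 2 * x) % p
    y3 = (lam * (x - x3) - y) % p
    return x3, y3

x, y = 5, 1                           # a point on the toy curve
X, Y, Z = x, y, 1                     # its Jacobian representative with Z = 1
assert to_affine(*double_jacobian(X, Y, Z)) == double_affine(x, y)
print("2*(5, 1) =", double_affine(x, y))
```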
One analysis of the possible backdoor concluded that an adversary in possession of the algorithm's secret key could obtain encryption keys given only 32 bytes of PRNG output. The SafeCurves project has been launched in order to catalog curves that are easy to implement securely and are designed in a fully publicly verifiable way to minimize the chance of a backdoor. === Quantum computing attack === Shor's algorithm can be used to break elliptic curve cryptography by computing discrete logarithms on a hypothetical quantum computer. The latest quantum resource estimates for breaking a curve with a 256-bit modulus (128-bit security level) are 2330 qubits and 126 billion Toffoli gates. For the binary elliptic curve case, 906 qubits are necessary (to break 128 bits of security). In comparison, using Shor's algorithm to break the RSA algorithm requires 4098 qubits and 5.2 trillion Toffoli gates for a 2048-bit RSA key, suggesting that ECC is an easier target for quantum computers than RSA. All of these figures vastly exceed any quantum computer that has ever been built, and estimates place the creation of such computers at a decade or more away. Supersingular Isogeny Diffie–Hellman Key Exchange claimed to provide a post-quantum secure form of elliptic curve cryptography by using isogenies to implement Diffie–Hellman key exchanges. This key exchange uses much of the same field arithmetic as existing elliptic curve cryptography and requires computational and transmission overhead similar to many currently used public key systems. However, new classical attacks undermined the security of this protocol. In August 2015, the NSA announced that it planned to transition "in the not distant future" to a new cipher suite that is resistant to quantum attacks. "Unfortunately, the growth of elliptic curve use has bumped up against the fact of continued progress in the research on quantum computing, necessitating a re-evaluation of our cryptographic strategy." === Invalid curve attack === When ECC is used in virtual machines, an attacker may use an invalid curve to get a complete PDH private key. == Alternative representations == Alternative representations of elliptic curves include: Hessian curves Edwards curves Twisted curves Twisted Hessian curves Twisted Edwards curve Doubling-oriented Doche–Icart–Kohel curve Tripling-oriented Doche–Icart–Kohel curve Jacobian curve Montgomery curves == See also == == Notes == == References == Jacques Vélu, Courbes elliptiques (...), Société Mathématique de France, 57, 1-152, Paris, 1978. == External links == Elliptic Curves at Stanford University Interactive introduction to elliptic curves and elliptic curve cryptography with Sage by Maike Massierer and the CrypTool team Media related to Elliptic curve at Wikimedia Commons
Wikipedia/Elliptic_curve_cryptography
Bass–Serre theory is a part of the mathematical subject of group theory that deals with analyzing the algebraic structure of groups acting by automorphisms on simplicial trees. The theory relates group actions on trees with decomposing groups as iterated applications of the operations of free product with amalgamation and HNN extension, via the notion of the fundamental group of a graph of groups. Bass–Serre theory can be regarded as one-dimensional version of the orbifold theory. == History == Bass–Serre theory was developed by Jean-Pierre Serre in the 1970s and formalized in Trees, Serre's 1977 monograph (developed in collaboration with Hyman Bass) on the subject. Serre's original motivation was to understand the structure of certain algebraic groups whose Bruhat–Tits buildings are trees. However, the theory quickly became a standard tool of geometric group theory and geometric topology, particularly the study of 3-manifolds. Subsequent work of Bass contributed substantially to the formalization and development of basic tools of the theory and currently the term "Bass–Serre theory" is widely used to describe the subject. Mathematically, Bass–Serre theory builds on exploiting and generalizing the properties of two older group-theoretic constructions: free product with amalgamation and HNN extension. However, unlike the traditional algebraic study of these two constructions, Bass–Serre theory uses the geometric language of covering theory and fundamental groups. Graphs of groups, which are the basic objects of Bass–Serre theory, can be viewed as one-dimensional versions of orbifolds. Apart from Serre's book, the basic treatment of Bass–Serre theory is available in the article of Bass, the article of G. Peter Scott and C. T. C. Wall and the books of Allen Hatcher, Gilbert Baumslag, Warren Dicks and Martin Dunwoody and Daniel E. Cohen. == Basic set-up == === Graphs in the sense of Serre === Serre's formalism of graphs is slightly different from the standard formalism from graph theory. Here a graph A consists of a vertex set V, an edge set E, an edge reversal map E → E , e ↦ e ¯ {\displaystyle E\to E,\ e\mapsto {\overline {e}}} such that e ≠ e and e ¯ ¯ = e {\displaystyle {\overline {\overline {e}}}=e} for every e in E, and an initial vertex map o : E → V {\displaystyle o\colon E\to V} . Thus in A every edge e comes equipped with its formal inverse e. The vertex o(e) is called the origin or the initial vertex of e and the vertex o(e) is called the terminus of e and is denoted t(e). Both loop-edges (that is, edges e such that o(e) = t(e)) and multiple edges are allowed. An orientation on A is a partition of E into the union of two disjoint subsets E+ and E− so that for every edge e exactly one of the edges from the pair e, e belongs to E+ and the other belongs to E−. === Graphs of groups === A graph of groups A consists of the following data: A connected graph A; An assignment of a vertex group Av to every vertex v of A. An assignment of an edge group Ae to every edge e of A so that we have A e = A e ¯ {\displaystyle A_{e}=A_{\overline {e}}} for every e ∈ E. Boundary monomorphisms α e : A e → A o ( e ) {\displaystyle \alpha _{e}:A_{e}\to A_{o(e)}} for all edges e of A, so that each α e {\displaystyle \alpha _{e}} is an injective group homomorphism. For every e ∈ E {\displaystyle e\in E} the map α e ¯ : A e → A t ( e ) {\displaystyle \alpha _{\overline {e}}\colon A_{e}\to A_{t(e)}} is also denoted by ω e {\displaystyle \omega _{e}} . 
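Serre's graph formalism is easy to encode directly. The minimal sketch below stores the edge-reversal involution sending each edge e to its formal inverse, together with the origin map o, defines the terminus by t(e) = o(reverse(e)), and checks the axioms on a small example with one non-loop edge and one loop; the vertex and edge names are arbitrary.

```python
from dataclasses import dataclass

# A minimal encoding of a graph in the sense of Serre: every edge e has a formal
# inverse reverse(e) with reverse(reverse(e)) = e and reverse(e) != e, an origin
# o(e), and a terminus defined by t(e) = o(reverse(e)).  Loops and multiple
# edges are allowed.

@dataclass
class SerreGraph:
    vertices: set
    origin: dict            # o : E -> V
    reverse: dict           # e -> formal inverse of e

    def terminus(self, e):
        return self.origin[self.reverse[e]]

    def check_axioms(self):
        for e in self.origin:
            assert self.reverse[self.reverse[e]] == e, "reversal must be an involution"
            assert self.reverse[e] != e, "an edge must differ from its formal inverse"
            assert self.origin[e] in self.vertices

# Example: one non-loop edge between u and v, plus a loop at v
# (each geometric edge is stored as a pair {e, ebar}).
A = SerreGraph(
    vertices={"u", "v"},
    origin={"f": "u", "fbar": "v", "l": "v", "lbar": "v"},
    reverse={"f": "fbar", "fbar": "f", "l": "lbar", "lbar": "l"},
)
A.check_axioms()
print("t(f) =", A.terminus("f"), " t(l) =", A.terminus("l"))
```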
=== Fundamental group of a graph of groups === There are two equivalent definitions of the notion of the fundamental group of a graph of groups: the first is a direct algebraic definition via an explicit group presentation (as a certain iterated application of amalgamated free products and HNN extensions), and the second using the language of groupoids. The algebraic definition is easier to state: First, choose a spanning tree T in A. The fundamental group of A with respect to T, denoted π1(A, T), is defined as the quotient of the free product ( ∗ v ∈ V A v ) ∗ F ( E ) {\displaystyle (\ast _{v\in V}A_{v})\ast F(E)} where F(E) is a free group with free basis E, subject to the following relations: e ¯ α e ( g ) e = α e ¯ ( g ) {\displaystyle {\overline {e}}\alpha _{e}(g)e=\alpha _{\overline {e}}(g)} for every e in E and every g ∈ A e {\displaystyle g\in A_{e}} . (The so-called Bass–Serre relation.) ee = 1 for every e in E. e = 1 for every edge e of the spanning tree T. There is also a notion of the fundamental group of A with respect to a base-vertex v in V, denoted π1(A, v), which is defined using the formalism of groupoids. It turns out that for every choice of a base-vertex v and every spanning tree T in A the groups π1(A, T) and π1(A, v) are naturally isomorphic. The fundamental group of a graph of groups has a natural topological interpretation as well: it is the fundamental group of a graph of spaces whose vertex spaces and edge spaces have the fundamental groups of the vertex groups and edge groups, respectively, and whose gluing maps induce the homomorphisms of the edge groups into the vertex groups. One can therefore take this as a third definition of the fundamental group of a graph of groups. ==== Fundamental groups of graphs of groups as iterations of amalgamated products and HNN-extensions ==== The group G = π1(A, T) defined above admits an algebraic description in terms of iterated amalgamated free products and HNN extensions. First, form a group B as a quotient of the free product ( ∗ v ∈ V A v ) ∗ F ( E + T ) {\displaystyle (\ast _{v\in V}A_{v})*F(E^{+}T)} subject to the relations e−1αe(g)e = ωe(g) for every e in E+T and every g ∈ A e {\displaystyle g\in A_{e}} . e = 1 for every e in E+T. This presentation can be rewritten as B = ∗ v ∈ V A v / n c l { α e ( g ) = ω e ( g ) , where e ∈ E + T , g ∈ G e } {\displaystyle B=\ast _{v\in V}A_{v}/{\rm {ncl}}\{\alpha _{e}(g)=\omega _{e}(g),{\text{ where }}e\in E^{+}T,g\in G_{e}\}} which shows that B is an iterated amalgamated free product of the vertex groups Av. Then the group G = π1(A, T) has the presentation ⟨ B , E + ( A − T ) | e − 1 α e ( g ) e = ω e ( g ) where e ∈ E + ( A − T ) , g ∈ G e ⟩ , {\displaystyle \langle B,E^{+}(A-T)|e^{-1}\alpha _{e}(g)e=\omega _{e}(g){\text{ where }}e\in E^{+}(A-T),g\in G_{e}\rangle ,} which shows that G = π1(A, T) is a multiple HNN extension of B with stable letters { e | e ∈ E + ( A − T ) } {\displaystyle \{e|e\in E^{+}(A-T)\}} . === Splittings === An isomorphism between a group G and the fundamental group of a graph of groups is called a splitting of G. If the edge groups in the splitting come from a particular class of groups (e.g. finite, cyclic, abelian, etc.), the splitting is said to be a splitting over that class. Thus a splitting where all edge groups are finite is called a splitting over finite groups. 
Algebraically, a splitting of G with trivial edge groups corresponds to a free product decomposition G = ( ∗ A v ) ∗ F ( X ) {\displaystyle G=(\ast A_{v})\ast F(X)} where F(X) is a free group with free basis X = E+(A−T) consisting of all positively oriented edges (with respect to some orientation on A) in the complement of some spanning tree T of A. === The normal forms theorem === Let g be an element of G = π1(A, T) represented as a product of the form g = a 0 e 1 a 1 … e n a n , {\displaystyle g=a_{0}e_{1}a_{1}\dots e_{n}a_{n},} where e1, ..., en is a closed edge-path in A with the vertex sequence v0, v1, ..., vn = v0 (that is v0=o(e1), vn = t(en) and vi = t(ei) = o(ei+1) for 0 < i < n) and where a i ∈ A v i {\displaystyle a_{i}\in A_{v_{i}}} for i = 0, ..., n. Suppose that g = 1 in G. Then either n = 0 and a0 = 1 in A v 0 {\displaystyle A_{v_{0}}} , or n > 0 and there is some 0 < i < n such that ei+1 = ei and a i ∈ ω e i ( A e i ) {\displaystyle a_{i}\in \omega _{e_{i}}(A_{e_{i}})} . The normal forms theorem immediately implies that the canonical homomorphisms Av → π1(A, T) are injective, so that we can think of the vertex groups Av as subgroups of G. Higgins has given a nice version of the normal form using the fundamental groupoid of a graph of groups. This avoids choosing a base point or tree, and has been exploited by Moore. == Bass–Serre covering trees == To every graph of groups A, with a specified choice of a base-vertex, one can associate a Bass–Serre covering tree A ~ {\displaystyle {\tilde {\mathbf {A} }}} , which is a tree that comes equipped with a natural group action of the fundamental group π1(A, v) without edge-inversions. Moreover, the quotient graph A ~ / π 1 ( A , v ) {\displaystyle {\tilde {\mathbf {A} }}/\pi _{1}(\mathbf {A} ,v)} is isomorphic to A. Similarly, if G is a group acting on a tree X without edge-inversions (that is, so that for every edge e of X and every g in G we have ge ≠ e), one can define the natural notion of a quotient graph of groups A. The underlying graph A of A is the quotient graph X/G. The vertex groups of A are isomorphic to vertex stabilizers in G of vertices of X and the edge groups of A are isomorphic to edge stabilizers in G of edges of X. Moreover, if X was the Bass–Serre covering tree of a graph of groups A and if G = π1(A, v) then the quotient graph of groups for the action of G on X can be chosen to be naturally isomorphic to A. == Fundamental theorem of Bass–Serre theory == Let G be a group acting on a tree X without inversions. Let A be the quotient graph of groups and let v be a base-vertex in A. Then G is isomorphic to the group π1(A, v) and there is an equivariant isomorphism between the tree X and the Bass–Serre covering tree A ~ {\displaystyle {\tilde {\mathbf {A} }}} . More precisely, there is a group isomorphism σ: G → π1(A, v) and a graph isomorphism j : X → A ~ {\displaystyle j:X\to {\tilde {\mathbf {A} }}} such that for every g in G, for every vertex x of X and for every edge e of X we have j(gx) = g j(x) and j(ge) = g j(e). This result is also known as the structure theorem. One of the immediate consequences is the classic Kurosh subgroup theorem describing the algebraic structure of subgroups of free products. 
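Before the worked examples in the next section, here is a small computational sanity check of an HNN extension: the Baumslag–Solitar group BS(1,2) = < a, t | t^-1 a t = a^2 > is an HNN extension of Z = <a> with associated subgroups <a> and <a^2>. A standard linear representation by 2x2 rational matrices (a well-known illustration, not taken from this article) lets one verify the defining relation and watch the stable letter conjugate <a> onto <a^2>.

```python
from fractions import Fraction

# BS(1,2) = < a, t | t^-1 a t = a^2 >, an HNN extension of Z = <a>.
# A standard matrix representation:  a -> [[1, 1], [0, 1]],  t -> [[1/2, 0], [0, 1]].

def mat(rows):
    return tuple(tuple(Fraction(x) for x in row) for row in rows)

def mul(A, B):
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def inv2(A):                       # inverse of a 2x2 matrix
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    return mat([[a22 / det, -a12 / det], [-a21 / det, a11 / det]])

a = mat([[1, 1], [0, 1]])
t = mat([[Fraction(1, 2), 0], [0, 1]])

# Defining relation t^-1 a t = a^2:
assert mul(mul(inv2(t), a), t) == mul(a, a)

# Conjugating by powers of t maps <a> onto the proper subgroups <a^2>, <a^4>, ...
x = a
for n in range(1, 4):
    x = mul(mul(inv2(t), x), t)            # t^-n a t^n
    assert x == mat([[1, 2**n], [0, 1]])   # i.e. a^(2^n), since a^k = [[1, k], [0, 1]]
    print(f"t^-{n} a t^{n} = a^{2 ** n}")
```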
== Examples == === Amalgamated free product === Consider a graph of groups A consisting of a single non-loop edge e (together with its formal inverse e) with two distinct end-vertices u = o(e) and v = t(e), vertex groups H = Au, K = Av, an edge group C = Ae and the boundary monomorphisms α = α e : C → H , ω = ω e : C → K {\displaystyle \alpha =\alpha _{e}:C\to H,\omega =\omega _{e}:C\to K} . Then T = A is a spanning tree in A and the fundamental group π1(A, T) is isomorphic to the amalgamated free product G = H ∗ C K = H ∗ K / n c l { α ( c ) = ω ( c ) , c ∈ C } . {\displaystyle G=H\ast _{C}K=H\ast K/{\rm {ncl}}\{\alpha (c)=\omega (c),c\in C\}.} In this case the Bass–Serre tree X = A ~ {\displaystyle X={\tilde {\mathbf {A} }}} can be described as follows. The vertex set of X is the set of cosets V X = { g K : g ∈ G } ⊔ { g H : g ∈ G } . {\displaystyle VX=\{gK:g\in G\}\sqcup \{gH:g\in G\}.} Two vertices gK and fH are adjacent in X whenever there exists k ∈ K such that fH = gkH (or, equivalently, whenever there is h ∈ H such that gK = fhK). The G-stabilizer of every vertex of X of type gK is equal to gKg−1 and the G-stabilizer of every vertex of X of type gH is equal to gHg−1. For an edge [gH, ghK] of X its G-stabilizer is equal to ghα(C)h−1g−1. For every c ∈ C and h ∈ 'k ∈ K' the edges [gH, ghK] and [gH, ghα(c)K] are equal and the degree of the vertex gH in X is equal to the index [H:α(C)]. Similarly, every vertex of type gK has degree [K:ω(C)] in X. === HNN extension === Let A be a graph of groups consisting of a single loop-edge e (together with its formal inverse e), a single vertex v = o(e) = t(e), a vertex group B = Av, an edge group C = Ae and the boundary monomorphisms α = α e : C → B , ω = ω e : C → B {\displaystyle \alpha =\alpha _{e}:C\to B,\omega =\omega _{e}:C\to B} . Then T = v is a spanning tree in A and the fundamental group π1(A, T) is isomorphic to the HNN extension G = ⟨ B , e | e − 1 α ( c ) e = ω ( c ) , c ∈ C ⟩ . {\displaystyle G=\langle B,e|e^{-1}\alpha (c)e=\omega (c),c\in C\rangle .} with the base group B, stable letter e and the associated subgroups H = α(C), K = ω(C) in B. The composition ϕ = ω ∘ α − 1 : H → K {\displaystyle \phi =\omega \circ \alpha ^{-1}:H\to K} is an isomorphism and the above HNN-extension presentation of G can be rewritten as G = ⟨ B , e | e − 1 h e = ϕ ( h ) , h ∈ H ⟩ . {\displaystyle G=\langle B,e|e^{-1}he=\phi (h),h\in H\rangle .\,} In this case the Bass–Serre tree X = A ~ {\displaystyle X={\tilde {\mathbf {A} }}} can be described as follows. The vertex set of X is the set of cosets VX = {gB : g ∈ G}. Two vertices gB and fB are adjacent in X whenever there exists b in B such that either fB = gbeB or fB = gbe−1B. The G-stabilizer of every vertex of X is conjugate to B in G and the stabilizer of every edge of X is conjugate to H in G. Every vertex of X has degree equal to [B : H] + [B : K]. === A graph with the trivial graph of groups structure === Let A be a graph of groups with underlying graph A such that all the vertex and edge groups in A are trivial. Let v be a base-vertex in A. Then π1(A,v) is equal to the fundamental group π1(A,v) of the underlying graph A in the standard sense of algebraic topology and the Bass–Serre covering tree A ~ {\displaystyle {\tilde {\mathbf {A} }}} is equal to the standard universal covering space A ~ {\displaystyle {\tilde {A}}} of A. 
Moreover, the action of π1(A,v) on A ~ {\displaystyle {\tilde {\mathbf {A} }}} is exactly the standard action of π1(A,v) on A ~ {\displaystyle {\tilde {A}}} by deck transformations. == Basic facts and properties == If A is a graph of groups with a spanning tree T and if G = π1(A, T), then for every vertex v of A the canonical homomorphism from Av to G is injective. If g ∈ G is an element of finite order then g is conjugate in G to an element of finite order in some vertex group Av. If F ≤ G is a finite subgroup then F is conjugate in G to a subgroup of some vertex group Av. If the graph A is finite and all vertex groups Av are finite then the group G is virtually free, that is, G contains a free subgroup of finite index. If A is finite and all the vertex groups Av are finitely generated then G is finitely generated. If A is finite and all the vertex groups Av are finitely presented and all the edge groups Ae are finitely generated then G is finitely presented. == Trivial and nontrivial actions == A graph of groups A is called trivial if A = T is already a tree and there is some vertex v of A such that Av = π1(A, A). This is equivalent to the condition that A is a tree and that for every edge e = [u, z] of A (with o(e) = u, t(e) = z) such that u is closer to v than z we have [Az : ωe(Ae)] = 1, that is Az = ωe(Ae). An action of a group G on a tree X without edge-inversions is called trivial if there exists a vertex x of X that is fixed by G, that is such that Gx = x. It is known that an action of G on X is trivial if and only if the quotient graph of groups for that action is trivial. Typically, only nontrivial actions on trees are studied in Bass–Serre theory since trivial graphs of groups do not carry any interesting algebraic information, although trivial actions in the above sense (e. g. actions of groups by automorphisms on rooted trees) may also be interesting for other mathematical reasons. One of the classic and still important results of the theory is a theorem of Stallings about ends of groups. The theorem states that a finitely generated group has more than one end if and only if this group admits a nontrivial splitting over finite subgroups that is, if and only if the group admits a nontrivial action without inversions on a tree with finite edge stabilizers. An important general result of the theory states that if G is a group with Kazhdan's property (T) then G does not admit any nontrivial splitting, that is, that any action of G on a tree X without edge-inversions has a global fixed vertex. == Hyperbolic length functions == Let G be a group acting on a tree X without edge-inversions. For every g∈G put ℓ X ( g ) = min { d ( x , g x ) | x ∈ V X } . {\displaystyle \ell _{X}(g)=\min\{d(x,gx)|x\in VX\}.} Then ℓX(g) is called the translation length of g on X. The function ℓ X : G → Z , g ∈ G ↦ ℓ X ( g ) {\displaystyle \ell _{X}:G\to \mathbf {Z} ,\quad g\in G\mapsto \ell _{X}(g)} is called the hyperbolic length function or the translation length function for the action of G on X. === Basic facts regarding hyperbolic length functions === For g ∈ G exactly one of the following holds: (a) ℓX(g) = 0 and g fixes a vertex of G. In this case g is called an elliptic element of G. (b) ℓX(g) > 0 and there is a unique bi-infinite embedded line in X, called the axis of g and denoted Lg which is g-invariant. In this case g acts on Lg by translation of magnitude ℓX(g) and the element g ∈ G is called hyperbolic. If ℓX(G) ≠ 0 then there exists a unique minimal G-invariant subtree XG of X. 
Moreover, XG is equal to the union of axes of hyperbolic elements of G. The length-function ℓX : G → Z is said to be abelian if it is a group homomorphism from G to Z and non-abelian otherwise. Similarly, the action of G on X is said to be abelian if the associated hyperbolic length function is abelian and is said to be non-abelian otherwise. In general, an action of G on a tree X without edge-inversions is said to be minimal if there are no proper G-invariant subtrees in X. An important fact in the theory says that minimal non-abelian tree actions are uniquely determined by their hyperbolic length functions: === Uniqueness theorem === Let G be a group with two nonabelian minimal actions without edge-inversions on trees X and Y. Suppose that the hyperbolic length functions ℓX and ℓY on G are equal, that is ℓX(g) = ℓY(g) for every g ∈ G. Then the actions of G on X and Y are equal in the sense that there exists a graph isomorphism f : X → Y which is G-equivariant, that is f(gx) = g f(x) for every g ∈ G and every x ∈ VX. == Important developments in Bass–Serre theory == Important developments in Bass–Serre theory in the last 30 years include: Various accessibility results for finitely presented groups that bound the complexity (that is, the number of edges) in a graph of groups decomposition of a finitely presented group, where some algebraic or geometric restrictions on the types of groups considered are imposed. These results include: Dunwoody's theorem about accessibility of finitely presented groups stating that for any finitely presented group G there exists a bound on the complexity of splittings of G over finite subgroups (the splittings are required to satisfy a technical assumption of being "reduced"); Bestvina–Feighn generalized accessibility theorem stating that for any finitely presented group G there is a bound on the complexity of reduced splittings of G over small subgroups (the class of small groups includes, in particular, all groups that do not contain non-abelian free subgroups); Acylindrical accessibility results for finitely presented (Sela, Delzant) and finitely generated (Weidmann) groups which bound the complexity of the so-called acylindrical splittings, that is splittings where for their Bass–Serre covering trees the diameters of fixed subsets of nontrivial elements of G are uniformly bounded. The theory of JSJ-decompositions for finitely presented groups. This theory was motivated by the classic notion of JSJ decomposition in 3-manifold topology and was initiated, in the context of word-hyperbolic groups, by the work of Sela. JSJ decompositions are splittings of finitely presented groups over some classes of small subgroups (cyclic, abelian, noetherian, etc., depending on the version of the theory) that provide a canonical descriptions, in terms of some standard moves, of all splittings of the group over subgroups of the class. There are a number of versions of JSJ-decomposition theories: The initial version of Sela for cyclic splittings of torsion-free word-hyperbolic groups. Bowditch's version of JSJ theory for word-hyperbolic groups (with possible torsion) encoding their splittings over virtually cyclic subgroups. The version of Rips and Sela of JSJ decompositions of torsion-free finitely presented groups encoding their splittings over free abelian subgroups. The version of Dunwoody and Sageev of JSJ decompositions of finitely presented groups over noetherian subgroups. 
The version of Fujiwara and Papasoglu, also of JSJ decompositions of finitely presented groups over noetherian subgroups. A version of JSJ decomposition theory for finitely presented groups developed by Scott and Swarup. The theory of lattices in automorphism groups of trees. The theory of tree lattices was developed by Bass, Kulkarni and Lubotzky by analogy with the theory of lattices in Lie groups (that is discrete subgroups of Lie groups of finite co-volume). For a discrete subgroup G of the automorphism group of a locally finite tree X one can define a natural notion of volume for the quotient graph of groups A as v o l ( A ) = ∑ v ∈ V 1 | A v | . {\displaystyle vol(\mathbf {A} )=\sum _{v\in V}{\frac {1}{|A_{v}|}}.} The group G is called an X-lattice if vol(A)< ∞. The theory of tree lattices turns out to be useful in the study of discrete subgroups of algebraic groups over non-archimedean local fields and in the study of Kac–Moody groups. Development of foldings and Nielsen methods for approximating group actions on trees and analyzing their subgroup structure. The theory of ends and relative ends of groups, particularly various generalizations of Stallings theorem about groups with more than one end. Quasi-isometric rigidity results for groups acting on trees. == Generalizations == There have been several generalizations of Bass–Serre theory: The theory of complexes of groups (see Haefliger, Corson Bridson-Haefliger) provides a higher-dimensional generalization of Bass–Serre theory. The notion of a graph of groups is replaced by that of a complex of groups, where groups are assigned to each cell in a simplicial complex, together with monomorphisms between these groups corresponding to face inclusions (these monomorphisms are required to satisfy certain compatibility conditions). One can then define an analog of the fundamental group of a graph of groups for a complex of groups. However, in order for this notion to have good algebraic properties (such as embeddability of the vertex groups in it) and in order for a good analog for the notion of the Bass–Serre covering tree to exist in this context, one needs to require some sort of "non-positive curvature" condition for the complex of groups in question (see, for example ). The theory of isometric group actions on real trees (or R-trees) which are metric spaces generalizing the graph-theoretic notion of a tree (graph theory). The theory was developed largely in the 1990s, where the Rips machine of Eliyahu Rips on the structure theory of stable group actions on R-trees played a key role (see Bestvina-Feighn). This structure theory assigns to a stable isometric action of a finitely generated group G a certain "normal form" approximation of that action by a stable action of G on a simplicial tree and hence a splitting of G in the sense of Bass–Serre theory. Group actions on real trees arise naturally in several contexts in geometric topology: for example as boundary points of the Teichmüller space (every point in the Thurston boundary of the Teichmüller space is represented by a measured geodesic lamination on the surface; this lamination lifts to the universal cover of the surface and a naturally dual object to that lift is an R-tree endowed with an isometric action of the fundamental group of the surface), as Gromov-Hausdorff limits of, appropriately rescaled, Kleinian group actions, and so on. The use of R-trees machinery provides substantial shortcuts in modern proofs of Thurston's Hyperbolization Theorem for Haken 3-manifolds. 
Similarly, R-trees play a key role in the study of Culler-Vogtmann's Outer space as well as in other areas of geometric group theory; for example, asymptotic cones of groups often have a tree-like structure and give rise to group actions on real trees. The use of R-trees, together with Bass–Serre theory, is a key tool in the work of Sela on solving the isomorphism problem for (torsion-free) word-hyperbolic groups, Sela's version of the JSJ-decomposition theory and the work of Sela on the Tarski Conjecture for free groups and the theory of limit groups. The theory of group actions on Λ-trees, where Λ is an ordered abelian group (such as R or Z) provides a further generalization of both the Bass–Serre theory and the theory of group actions on R-trees (see Morgan, Alperin-Bass, Chiswell). == See also == Geometric group theory == References ==
Wikipedia/Bass-Serre_theory
In mathematics, the mathematician Sophus Lie ( LEE) initiated lines of study involving integration of differential equations, transformation groups, and contact of spheres that have come to be called Lie theory. For instance, the latter subject is Lie sphere geometry. This article addresses his approach to transformation groups, which is one of the areas of mathematics, and was worked out by Wilhelm Killing and Élie Cartan. The foundation of Lie theory is the exponential map relating Lie algebras to Lie groups which is called the Lie group–Lie algebra correspondence. The subject is part of differential geometry since Lie groups are differentiable manifolds. Lie groups evolve out of the identity (1) and the tangent vectors to one-parameter subgroups generate the Lie algebra. The structure of a Lie group is implicit in its algebra, and the structure of the Lie algebra is expressed by root systems and root data. Lie theory has been particularly useful in mathematical physics since it describes the standard transformation groups: the Galilean group, the Lorentz group, the Poincaré group and the conformal group of spacetime. == Elementary Lie theory == The one-parameter groups are the first instance of Lie theory. The compact case arises through Euler's formula in the complex plane. Other one-parameter groups occur in the split-complex number plane as the unit hyperbola { exp ⁡ ( j t ) = cosh ⁡ ( t ) + j sinh ⁡ ( t ) : t ∈ R } , j 2 = + 1 {\displaystyle \lbrace \exp(jt)=\cosh(t)+j\sinh(t):t\in R\rbrace ,\quad j^{2}=+1} and in the dual number plane as the line { exp ⁡ ( ε t ) = 1 + ε t : t ∈ R } ε 2 = 0. {\displaystyle \lbrace \exp(\varepsilon t)=1+\varepsilon t:t\in R\rbrace \quad \varepsilon ^{2}=0.} In these cases the Lie algebra parameters have names: angle, hyperbolic angle, and slope. These species of angle are useful for providing polar decompositions which describe the planar subalgebras of 2 x 2 real matrices. There is a classical 3-parameter Lie group and algebra pair: the quaternions of unit length which can be identified with the 3-sphere. Its Lie algebra is the subspace of quaternion vectors. Since the commutator ij − ji = 2k, the Lie bracket in this algebra is twice the cross product of ordinary vector analysis. Another elementary 3-parameter example is given by the Heisenberg group and its Lie algebra. Standard treatments of Lie theory often begin with the classical groups. == History and scope == Early expressions of Lie theory are found in books composed by Sophus Lie with Friedrich Engel and Georg Scheffers from 1888 to 1896. In Lie's early work, the idea was to construct a theory of continuous groups, to complement the theory of discrete groups that had developed in the theory of modular forms, in the hands of Felix Klein and Henri Poincaré. The initial application that Lie had in mind was to the theory of differential equations. On the model of Galois theory and polynomial equations, the driving conception was of a theory capable of unifying, by the study of symmetry, the whole area of ordinary differential equations. According to Thomas W. Hawkins Jr., it was Élie Cartan that made Lie theory what it is: While Lie had many fertile ideas, Cartan was primarily responsible for the extensions and applications of his theory that have made it a basic component of modern mathematics. 
It was he who, with some help from Weyl, developed the seminal, essentially algebraic ideas of Killing into the theory of the structure and representation of semisimple Lie algebras that plays such a fundamental role in present-day Lie theory. And although Lie envisioned applications of his theory to geometry, it was Cartan who actually created them, for example through his theories of symmetric and generalized spaces, including all the attendant apparatus (moving frames, exterior differential forms, etc.) == Lie's three theorems == In his work on transformation groups, Sophus Lie proved three theorems relating the groups and algebras that bear his name. The first theorem exhibited the basis of an algebra through infinitesimal transformations.: 96  The second theorem exhibited structure constants of the algebra as the result of commutator products in the algebra.: 100  The third theorem showed these constants are anti-symmetric and satisfy the Jacobi identity.: 106  As Robert Gilmore wrote: Lie's three theorems provide a mechanism for constructing the Lie algebra associated with any Lie group. They also characterize the properties of a Lie algebra. ¶ The converses of Lie’s three theorems do the opposite: they supply a mechanism for associating a Lie group with any finite dimensional Lie algebra ... Taylor's theorem allows for the construction of a canonical analytic structure function φ(β,α) from the Lie algebra. ¶ These seven theorems – the three theorems of Lie and their converses, and Taylor's theorem – provide an essential equivalence between Lie groups and algebras. == Aspects of Lie theory == Lie theory is frequently built upon a study of the classical linear algebraic groups. Special branches include Weyl groups, Coxeter groups, and buildings. The classical subject has been extended to Groups of Lie type. In 1900 David Hilbert challenged Lie theorists with his Fifth Problem presented at the International Congress of Mathematicians in Paris. == See also == Baker–Campbell–Hausdorff formula Glossary of Lie groups and Lie algebras List of Lie groups topics Lie group integrator == Notes and references == John A. Coleman (1989) "The Greatest Mathematical Paper of All Time", The Mathematical Intelligencer 11(3): 29–38. == Further reading == M.A. Akivis & B.A. Rosenfeld (1993) Élie Cartan (1869–1951), translated from Russian original by V.V. Goldberg, chapter 2: Lie groups and Lie algebras, American Mathematical Society ISBN 0-8218-4587-X . P. M. Cohn (1957) Lie Groups, Cambridge Tracts in Mathematical Physics. Nijenhuis, Albert (1959). "Review: Lie groups, by P. M. Cohn". Bulletin of the American Mathematical Society. 65 (6): 338–341. doi:10.1090/s0002-9904-1959-10358-x. J. L. Coolidge (1940) A History of Geometrical Methods, pp 304–17, Oxford University Press (Dover Publications 2003). Robert Gilmore (2008) Lie groups, physics, and geometry: an introduction for physicists, engineers and chemists, Cambridge University Press ISBN 9780521884006 . F. Reese Harvey (1990) Spinors and calibrations, Academic Press, ISBN 0-12-329650-1 . Hall, Brian C. (2015), Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Graduate Texts in Mathematics, vol. 222 (2nd ed.), Springer, ISBN 978-3319134666. Hawkins, Thomas (2000). Emergence of the Theory of Lie Groups: an essay in the history of mathematics, 1869–1926. Springer. ISBN 0-387-98963-3. Sattinger, David H.; Weaver, O. L. (1986). Lie groups and algebras with applications to physics, geometry, and mechanics. Springer-Verlag. 
ISBN 3-540-96240-9. Stillwell, John (2008). Naive Lie Theory. Springer. ISBN 978-0-387-98289-2. Heldermann Verlag Journal of Lie Theory
Wikipedia/Lie_theory
In physics, a conservation law states that a particular measurable property of an isolated physical system does not change as the system evolves over time. Exact conservation laws include conservation of mass-energy, conservation of linear momentum, conservation of angular momentum, and conservation of electric charge. There are also many approximate conservation laws, which apply to such quantities as mass, parity, lepton number, baryon number, strangeness, hypercharge, etc. These quantities are conserved in certain classes of physics processes, but not in all. A local conservation law is usually expressed mathematically as a continuity equation, a partial differential equation which gives a relation between the amount of the quantity and the "transport" of that quantity. It states that the amount of the conserved quantity at a point or within a volume can only change by the amount of the quantity which flows in or out of the volume. From Noether's theorem, every differentiable symmetry leads to a local conservation law. Other conserved quantities can exist as well. == Conservation laws as fundamental laws of nature == Conservation laws are fundamental to our understanding of the physical world, in that they describe which processes can or cannot occur in nature. For example, the conservation law of energy states that the total quantity of energy in an isolated system does not change, though it may change form. In general, the total quantity of the property governed by that law remains unchanged during physical processes. With respect to classical physics, conservation laws include conservation of energy, mass (or matter), linear momentum, angular momentum, and electric charge. With respect to particle physics, particles cannot be created or destroyed except in pairs, where one is ordinary and the other is an antiparticle. With respect to symmetries and invariance principles, three special conservation laws have been described, associated with inversion or reversal of space, time, and charge. Conservation laws are considered to be fundamental laws of nature, with broad application in physics, as well as in other fields such as chemistry, biology, geology, and engineering. Most conservation laws are exact, or absolute, in the sense that they apply to all possible processes. Some conservation laws are partial, in that they hold for some processes but not for others. One particularly important result concerning local conservation laws is Noether's theorem, which states that there is a one-to-one correspondence between each one of them and a differentiable symmetry of the Universe. For example, the local conservation of energy follows from the uniformity of time and the local conservation of angular momentum arises from the isotropy of space, i.e. because there is no preferred direction of space. Notably, there is no conservation law associated with time-reversal, although more complex conservation laws combining time-reversal with other symmetries are known. == Exact laws == A partial listing of physical conservation equations due to symmetry that are said to be exact laws, or more precisely have never been proven to be violated: Another exact symmetry is CPT symmetry, the simultaneous inversion of space and time coordinates, together with swapping all particles with their antiparticles; however being a discrete symmetry Noether's theorem does not apply to it. Accordingly, the conserved quantity, CPT parity, can usually not be meaningfully calculated or determined. 
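To see how such laws constrain which processes can occur, consider a one-dimensional elastic collision of two point masses: requiring that total momentum and total kinetic energy be conserved already determines the outgoing velocities. The following short sketch (an illustration in plain Python; the masses and velocities are arbitrary sample values) applies the standard elastic-collision formulas and checks both conserved quantities before and after the collision.

# Illustrative sketch: conservation of momentum and kinetic energy
# in a 1-D elastic collision of two point masses.

def elastic_collision(m1, v1, m2, v2):
    """Final velocities from the standard 1-D elastic-collision formulas."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

m1, v1 = 2.0, 3.0    # mass in kg, velocity in m/s (sample values)
m2, v2 = 1.0, -1.0

u1, u2 = elastic_collision(m1, v1, m2, v2)

p_before = m1 * v1 + m2 * v2
p_after = m1 * u1 + m2 * u2
ke_before = 0.5 * (m1 * v1**2 + m2 * v2**2)
ke_after = 0.5 * (m1 * u1**2 + m2 * u2**2)

print(p_before, p_after)      # both 5.0, up to floating-point rounding
print(ke_before, ke_after)    # both 9.5, up to floating-point rounding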
== Approximate laws == There are also approximate conservation laws. These are approximately true in particular situations, such as low speeds, short time scales, or certain interactions. Conservation of (macroscopic) mechanical energy (approximately true for processes close to free of dissipative forces like friction) Conservation of (rest) mass (approximately true for nonrelativistic speeds) Conservation of baryon number (See chiral anomaly and sphaleron) Conservation of lepton number (In the Standard Model) Conservation of flavor (violated by the weak interaction) Conservation of strangeness (violated by the weak interaction) Conservation of space-parity (violated by the weak interaction) Conservation of charge-parity (violated by the weak interaction) Conservation of time-parity (violated by the weak interaction) Conservation of CP parity (violated by the weak interaction); by the CPT theorem, this is equivalent to conservation of time-parity. == Global and local conservation laws == The total amount of some conserved quantity in the universe could remain unchanged if an equal amount were to appear at one point A and simultaneously disappear from another separate point B. For example, an amount of energy could appear on Earth without changing the total amount in the Universe if the same amount of energy were to disappear from some other region of the Universe. This weak form of "global" conservation is really not a conservation law because it is not Lorentz invariant, so phenomena like the above do not occur in nature. Due to special relativity, if the appearance of the energy at A and disappearance of the energy at B are simultaneous in one inertial reference frame, they will not be simultaneous in other inertial reference frames moving with respect to the first. In a moving frame one will occur before the other; either the energy at A will appear before or after the energy at B disappears. In both cases, during the interval energy will not be conserved. A stronger form of conservation law requires that, for the amount of a conserved quantity at a point to change, there must be a flow, or flux of the quantity into or out of the point. For example, the amount of electric charge at a point is never found to change without an electric current into or out of the point that carries the difference in charge. Since it only involves continuous local changes, this stronger type of conservation law is Lorentz invariant; a quantity conserved in one reference frame is conserved in all moving reference frames. This is called a local conservation law. Local conservation also implies global conservation; that the total amount of the conserved quantity in the Universe remains constant. All of the conservation laws listed above are local conservation laws. A local conservation law is expressed mathematically by a continuity equation, which states that the change in the quantity in a volume is equal to the total net "flux" of the quantity through the surface of the volume. The following sections discuss continuity equations in general. == Differential forms == In continuum mechanics, the most general form of an exact conservation law is given by a continuity equation. For example, conservation of electric charge q is ∂ ρ ∂ t = − ∇ ⋅ j {\displaystyle {\frac {\partial \rho }{\partial t}}=-\nabla \cdot \mathbf {j} \,} where ∇⋅ is the divergence operator, ρ is the density of q (amount per unit volume), j is the flux of q (amount crossing a unit area in unit time), and t is time. 
If we assume that the motion u of the charge is a continuous function of position and time, then j = ρ u ∂ ρ ∂ t = − ∇ ⋅ ( ρ u ) . {\displaystyle {\begin{aligned}\mathbf {j} &=\rho \mathbf {u} \\{\frac {\partial \rho }{\partial t}}&=-\nabla \cdot (\rho \mathbf {u} )\,.\end{aligned}}} In one space dimension this can be put into the form of a homogeneous first-order quasilinear hyperbolic equation:: 43  y t + A ( y ) y x = 0 {\displaystyle y_{t}+A(y)y_{x}=0} where the dependent variable y is called the density of a conserved quantity, and A(y) is called the current Jacobian, and the subscript notation for partial derivatives has been employed. The more general inhomogeneous case: y t + A ( y ) y x = s {\displaystyle y_{t}+A(y)y_{x}=s} is not a conservation equation but the general kind of balance equation describing a dissipative system. The dependent variable y is called a nonconserved quantity, and the inhomogeneous term s(y,x,t) is the-source, or dissipation. For example, balance equations of this kind are the momentum and energy Navier-Stokes equations, or the entropy balance for a general isolated system. In the one-dimensional space a conservation equation is a first-order quasilinear hyperbolic equation that can be put into the advection form: y t + a ( y ) y x = 0 {\displaystyle y_{t}+a(y)y_{x}=0} where the dependent variable y(x,t) is called the density of the conserved (scalar) quantity, and a(y) is called the current coefficient, usually corresponding to the partial derivative in the conserved quantity of a current density of the conserved quantity j(y):: 43  a ( y ) = j y ( y ) {\displaystyle a(y)=j_{y}(y)} In this case since the chain rule applies: j x = j y ( y ) y x = a ( y ) y x {\displaystyle j_{x}=j_{y}(y)y_{x}=a(y)y_{x}} the conservation equation can be put into the current density form: y t + j x ( y ) = 0 {\displaystyle y_{t}+j_{x}(y)=0} In a space with more than one dimension the former definition can be extended to an equation that can be put into the form: y t + a ( y ) ⋅ ∇ y = 0 {\displaystyle y_{t}+\mathbf {a} (y)\cdot \nabla y=0} where the conserved quantity is y(r,t), ⋅ denotes the scalar product, ∇ is the nabla operator, here indicating a gradient, and a(y) is a vector of current coefficients, analogously corresponding to the divergence of a vector current density associated to the conserved quantity j(y): y t + ∇ ⋅ j ( y ) = 0 {\displaystyle y_{t}+\nabla \cdot \mathbf {j} (y)=0} This is the case for the continuity equation: ρ t + ∇ ⋅ ( ρ u ) = 0 {\displaystyle \rho _{t}+\nabla \cdot (\rho \mathbf {u} )=0} Here the conserved quantity is the mass, with density ρ(r,t) and current density ρu, identical to the momentum density, while u(r, t) is the flow velocity. In the general case a conservation equation can be also a system of this kind of equations (a vector equation) in the form:: 43  y t + A ( y ) ⋅ ∇ y = 0 {\displaystyle \mathbf {y} _{t}+\mathbf {A} (\mathbf {y} )\cdot \nabla \mathbf {y} =\mathbf {0} } where y is called the conserved (vector) quantity, ∇y is its gradient, 0 is the zero vector, and A(y) is called the Jacobian of the current density. 
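A small numerical sketch of the scalar one-dimensional case may make the conservation property concrete. The example below is an illustration only and makes several simplifying assumptions: it relies on the numpy package, a constant current coefficient a, a periodic grid, and a first-order upwind discretisation. Because the update is written in terms of the flux j(y) = a·y, the discrete total of y over the grid cannot change.

# Illustrative sketch (assumes numpy): 1-D advection y_t + a*y_x = 0
# with constant a > 0, discretised in flux (conservation) form on a
# periodic grid, so the discrete total sum(y)*dx is preserved.
import numpy as np

nx, a = 200, 1.0
dx = 1.0 / nx
dt = 0.5 * dx / a                      # CFL-stable time step
x = np.arange(nx) * dx
y = np.exp(-100 * (x - 0.5) ** 2)      # initial profile

total_before = y.sum() * dx
for _ in range(400):
    flux = a * y                                   # current density j(y) = a*y
    y = y - dt / dx * (flux - np.roll(flux, 1))    # upwind difference
total_after = y.sum() * dx

print(total_before, total_after)       # equal up to rounding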
In fact as in the former scalar case, also in the vector case A(y) usually corresponding to the Jacobian of a current density matrix J(y): A ( y ) = J y ( y ) {\displaystyle \mathbf {A} (\mathbf {y} )=\mathbf {J} _{\mathbf {y} }(\mathbf {y} )} and the conservation equation can be put into the form: y t + ∇ ⋅ J ( y ) = 0 {\displaystyle \mathbf {y} _{t}+\nabla \cdot \mathbf {J} (\mathbf {y} )=\mathbf {0} } For example, this the case for Euler equations (fluid dynamics). In the simple incompressible case they are: ∇ ⋅ u = 0 , ∂ u ∂ t + u ⋅ ∇ u + ∇ s = 0 , {\displaystyle \nabla \cdot \mathbf {u} =0\,,\qquad {\frac {\partial \mathbf {u} }{\partial t}}+\mathbf {u} \cdot \nabla \mathbf {u} +\nabla s=\mathbf {0} ,} where: u is the flow velocity vector, with components in a N-dimensional space u1, u2, ..., uN, s is the specific pressure (pressure per unit density) giving the source term, It can be shown that the conserved (vector) quantity and the current density matrix for these equations are respectively: y = ( 1 u ) ; J = ( u u ⊗ u + s I ) ; {\displaystyle {\mathbf {y} }={\begin{pmatrix}1\\\mathbf {u} \end{pmatrix}};\qquad {\mathbf {J} }={\begin{pmatrix}\mathbf {u} \\\mathbf {u} \otimes \mathbf {u} +s\mathbf {I} \end{pmatrix}};\qquad } where ⊗ {\displaystyle \otimes } denotes the outer product. == Integral and weak forms == Conservation equations can usually also be expressed in integral form: the advantage of the latter is substantially that it requires less smoothness of the solution, which paves the way to weak form, extending the class of admissible solutions to include discontinuous solutions.: 62–63  By integrating in any space-time domain the current density form in 1-D space: y t + j x ( y ) = 0 {\displaystyle y_{t}+j_{x}(y)=0} and by using Green's theorem, the integral form is: ∫ − ∞ ∞ y d x + ∫ 0 ∞ j ( y ) d t = 0 {\displaystyle \int _{-\infty }^{\infty }y\,dx+\int _{0}^{\infty }j(y)\,dt=0} In a similar fashion, for the scalar multidimensional space, the integral form is: ∮ [ y d N r + j ( y ) d t ] = 0 {\displaystyle \oint \left[y\,d^{N}r+j(y)\,dt\right]=0} where the line integration is performed along the boundary of the domain, in an anticlockwise manner.: 62–63  Moreover, by defining a test function φ(r,t) continuously differentiable both in time and space with compact support, the weak form can be obtained pivoting on the initial condition. In 1-D space it is: ∫ 0 ∞ ∫ − ∞ ∞ ϕ t y + ϕ x j ( y ) d x d t = − ∫ − ∞ ∞ ϕ ( x , 0 ) y ( x , 0 ) d x {\displaystyle \int _{0}^{\infty }\int _{-\infty }^{\infty }\phi _{t}y+\phi _{x}j(y)\,dx\,dt=-\int _{-\infty }^{\infty }\phi (x,0)y(x,0)\,dx} In the weak form all the partial derivatives of the density and current density have been passed on to the test function, which with the former hypothesis is sufficiently smooth to admit these derivatives.: 62–63  == See also == Invariant (physics) Momentum Cauchy momentum equation Energy Conservation of energy and the First law of thermodynamics Conservative system Conserved quantity Some kinds of helicity are conserved in dissipationless limit: hydrodynamical helicity, magnetic helicity, cross-helicity. 
Principle of mutability Conservation law of the Stress–energy tensor Riemann invariant Philosophy of physics Totalitarian principle Convection–diffusion equation Uniformity of nature === Examples and applications === Advection Mass conservation, or Continuity equation Charge conservation Euler equations (fluid dynamics) inviscid Burgers equation Kinematic wave Conservation of energy Traffic flow == Notes == == References == Philipson, Schuster, Modeling by Nonlinear Differential Equations: Dissipative and Conservative Processes, World Scientific Publishing Company 2009. Victor J. Stenger, 2000. Timeless Reality: Symmetry, Simplicity, and Multiple Universes. Buffalo NY: Prometheus Books. Chpt. 12 is a gentle introduction to symmetry, invariance, and conservation laws. E. Godlewski and P.A. Raviart, Hyperbolic systems of conservation laws, Ellipses, 1991. == External links == Media related to Conservation laws at Wikimedia Commons Conservation Laws – Ch. 11–15 in an online textbook
Wikipedia/Conservation_law_(physics)
Public-key cryptography, or asymmetric cryptography, is the field of cryptographic systems that use pairs of related keys. Each key pair consists of a public key and a corresponding private key. Key pairs are generated with cryptographic algorithms based on mathematical problems termed one-way functions. Security of public-key cryptography depends on keeping the private key secret; the public key can be openly distributed without compromising security. There are many kinds of public-key cryptosystems, with different security goals, including digital signature, Diffie–Hellman key exchange, public-key key encapsulation, and public-key encryption. Public key algorithms are fundamental security primitives in modern cryptosystems, including applications and protocols that offer assurance of the confidentiality and authenticity of electronic communications and data storage. They underpin numerous Internet standards, such as Transport Layer Security (TLS), SSH, S/MIME, and PGP. Compared to symmetric cryptography, public-key cryptography can be too slow for many purposes, so these protocols often combine symmetric cryptography with public-key cryptography in hybrid cryptosystems. == Description == Before the mid-1970s, all cipher systems used symmetric key algorithms, in which the same cryptographic key is used with the underlying algorithm by both the sender and the recipient, who must both keep it secret. Of necessity, the key in every such system had to be exchanged between the communicating parties in some secure way prior to any use of the system – for instance, via a secure channel. This requirement is never trivial and very rapidly becomes unmanageable as the number of participants increases, or when secure channels are not available, or when, (as is sensible cryptographic practice), keys are frequently changed. In particular, if messages are meant to be secure from other users, a separate key is required for each possible pair of users. By contrast, in a public-key cryptosystem, the public keys can be disseminated widely and openly, and only the corresponding private keys need be kept secret. The two best-known types of public key cryptography are digital signature and public-key encryption: In a digital signature system, a sender can use a private key together with a message to create a signature. Anyone with the corresponding public key can verify whether the signature matches the message, but a forger who does not know the private key cannot find any message/signature pair that will pass verification with the public key.For example, a software publisher can create a signature key pair and include the public key in software installed on computers. Later, the publisher can distribute an update to the software signed using the private key, and any computer receiving an update can confirm it is genuine by verifying the signature using the public key. As long as the software publisher keeps the private key secret, even if a forger can distribute malicious updates to computers, they cannot convince the computers that any malicious updates are genuine. 
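As a concrete sketch of this sign-and-verify workflow (assuming the third-party Python package cryptography, one of several libraries that could be used, and the Ed25519 signature scheme), the snippet below generates a key pair, signs a message with the private key, and verifies it with the public key; verification of an altered message fails.

# Sketch of the sign/verify workflow using the third-party Python
# "cryptography" package (assumed installed) and the Ed25519 scheme.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # kept secret by the signer
public_key = private_key.public_key()        # can be distributed openly

message = b"software update v1.2"
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)              # succeeds silently
    public_key.verify(signature, b"tampered update")   # raises InvalidSignature
except InvalidSignature:
    print("signature does not match the message")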
In a public-key encryption system, anyone with a public key can encrypt a message, yielding a ciphertext, but only those who know the corresponding private key can decrypt the ciphertext to obtain the original message.For example, a journalist can publish the public key of an encryption key pair on a web site so that sources can send secret messages to the news organization in ciphertext.Only the journalist who knows the corresponding private key can decrypt the ciphertexts to obtain the sources' messages—an eavesdropper reading email on its way to the journalist cannot decrypt the ciphertexts. However, public-key encryption does not conceal metadata like what computer a source used to send a message, when they sent it, or how long it is. Public-key encryption on its own also does not tell the recipient anything about who sent a message: 283 —it just conceals the content of the message. One important issue is confidence/proof that a particular public key is authentic, i.e. that it is correct and belongs to the person or entity claimed, and has not been tampered with or replaced by some (perhaps malicious) third party. There are several possible approaches, including: A public key infrastructure (PKI), in which one or more third parties – known as certificate authorities – certify ownership of key pairs. TLS relies upon this. This implies that the PKI system (software, hardware, and management) is trust-able by all involved. A "web of trust" decentralizes authentication by using individual endorsements of links between a user and the public key belonging to that user. PGP uses this approach, in addition to lookup in the domain name system (DNS). The DKIM system for digitally signing emails also uses this approach. == Applications == The most obvious application of a public key encryption system is for encrypting communication to provide confidentiality – a message that a sender encrypts using the recipient's public key, which can be decrypted only by the recipient's paired private key. Another application in public key cryptography is the digital signature. Digital signature schemes can be used for sender authentication. Non-repudiation systems use digital signatures to ensure that one party cannot successfully dispute its authorship of a document or communication. Further applications built on this foundation include: digital cash, password-authenticated key agreement, time-stamping services and non-repudiation protocols. == Hybrid cryptosystems == Because asymmetric key algorithms are nearly always much more computationally intensive than symmetric ones, it is common to use a public/private asymmetric key-exchange algorithm to encrypt and exchange a symmetric key, which is then used by symmetric-key cryptography to transmit data using the now-shared symmetric key for a symmetric key encryption algorithm. PGP, SSH, and the SSL/TLS family of schemes use this procedure; they are thus called hybrid cryptosystems. The initial asymmetric cryptography-based key exchange to share a server-generated symmetric key from the server to client has the advantage of not requiring that a symmetric key be pre-shared manually, such as on printed paper or discs transported by a courier, while providing the higher data throughput of symmetric key cryptography over asymmetric key cryptography for the remainder of the shared connection. == Weaknesses == As with all security-related systems, there are various potential weaknesses in public-key cryptography. 
Aside from poor choice of an asymmetric key algorithm (there are few that are widely regarded as satisfactory) or too short a key length, the chief security risk is that the private key of a pair becomes known. All security of messages, authentication, etc., will then be lost. Additionally, with the advent of quantum computing, many asymmetric key algorithms are considered vulnerable to attacks, and new quantum-resistant schemes are being developed to overcome the problem. === Algorithms === All public key schemes are in theory susceptible to a "brute-force key search attack". However, such an attack is impractical if the amount of computation needed to succeed – termed the "work factor" by Claude Shannon – is out of reach of all potential attackers. In many cases, the work factor can be increased by simply choosing a longer key. But other algorithms may inherently have much lower work factors, making resistance to a brute-force attack (e.g., from longer keys) irrelevant. Some special and specific algorithms have been developed to aid in attacking some public key encryption algorithms; both RSA and ElGamal encryption have known attacks that are much faster than the brute-force approach. None of these are sufficiently improved to be actually practical, however. Major weaknesses have been found for several formerly promising asymmetric key algorithms. The "knapsack packing" algorithm was found to be insecure after the development of a new attack. As with all cryptographic functions, public-key implementations may be vulnerable to side-channel attacks that exploit information leakage to simplify the search for a secret key. These are often independent of the algorithm being used. Research is underway to both discover, and to protect against, new attacks. === Alteration of public keys === Another potential security vulnerability in using asymmetric keys is the possibility of a "man-in-the-middle" attack, in which the communication of public keys is intercepted by a third party (the "man in the middle") and then modified to provide different public keys instead. Encrypted messages and responses must, in all instances, be intercepted, decrypted, and re-encrypted by the attacker using the correct public keys for the different communication segments so as to avoid suspicion. A communication is said to be insecure where data is transmitted in a manner that allows for interception (also called "sniffing"). These terms refer to reading the sender's private data in its entirety. A communication is particularly unsafe when interceptions can not be prevented or monitored by the sender. A man-in-the-middle attack can be difficult to implement due to the complexities of modern security protocols. However, the task becomes simpler when a sender is using insecure media such as public networks, the Internet, or wireless communication. In these cases an attacker can compromise the communications infrastructure rather than the data itself. A hypothetical malicious staff member at an Internet service provider (ISP) might find a man-in-the-middle attack relatively straightforward. Capturing the public key would only require searching for the key as it gets sent through the ISP's communications hardware; in properly implemented asymmetric key schemes, this is not a significant risk. In some advanced man-in-the-middle attacks, one side of the communication will see the original data while the other will receive a malicious variant. 
Asymmetric man-in-the-middle attacks can prevent users from realizing their connection is compromised. This remains so even when one user's data is known to be compromised because the data appears fine to the other user. This can lead to confusing disagreements between users such as "it must be on your end!" when neither user is at fault. Hence, man-in-the-middle attacks are only fully preventable when the communications infrastructure is physically controlled by one or both parties; such as via a wired route inside the sender's own building. In summation, public keys are easier to alter when the communications hardware used by a sender is controlled by an attacker. === Public key infrastructure === One approach to prevent such attacks involves the use of a public key infrastructure (PKI); a set of roles, policies, and procedures needed to create, manage, distribute, use, store and revoke digital certificates and manage public-key encryption. However, this has potential weaknesses. For example, the certificate authority issuing the certificate must be trusted by all participating parties to have properly checked the identity of the key-holder, to have ensured the correctness of the public key when it issues a certificate, to be secure from computer piracy, and to have made arrangements with all participants to check all their certificates before protected communications can begin. Web browsers, for instance, are supplied with a long list of "self-signed identity certificates" from PKI providers – these are used to check the bona fides of the certificate authority and then, in a second step, the certificates of potential communicators. An attacker who could subvert one of those certificate authorities into issuing a certificate for a bogus public key could then mount a "man-in-the-middle" attack as easily as if the certificate scheme were not used at all. An attacker who penetrates an authority's servers and obtains its store of certificates and keys (public and private) would be able to spoof, masquerade, decrypt, and forge transactions without limit, assuming that they were able to place themselves in the communication stream. Despite its theoretical and potential problems, Public key infrastructure is widely used. Examples include TLS and its predecessor SSL, which are commonly used to provide security for web browser transactions (for example, most websites utilize TLS for HTTPS). Aside from the resistance to attack of a particular key pair, the security of the certification hierarchy must be considered when deploying public key systems. Some certificate authority – usually a purpose-built program running on a server computer – vouches for the identities assigned to specific private keys by producing a digital certificate. Public key digital certificates are typically valid for several years at a time, so the associated private keys must be held securely over that time. When a private key used for certificate creation higher in the PKI server hierarchy is compromised, or accidentally disclosed, then a "man-in-the-middle attack" is possible, making any subordinate certificate wholly insecure. === Unencrypted metadata === Most of the available public-key encryption software does not conceal metadata in the message header, which might include the identities of the sender and recipient, the sending date, subject field, and the software they use etc. Rather, only the body of the message is concealed and can only be decrypted with the private key of the intended recipient. 
This means that a third party could construct quite a detailed model of participants in a communication network, along with the subjects being discussed, even if the message body itself is hidden. However, there has been a recent demonstration of messaging with encrypted headers, which obscures the identities of the sender and recipient, and significantly reduces the available metadata to a third party. The concept is based around an open repository containing separately encrypted metadata blocks and encrypted messages. Only the intended recipient is able to decrypt the metadata block, and having done so they can identify and download their messages and decrypt them. Such a messaging system is at present in an experimental phase and not yet deployed. Scaling this method would reveal to the third party only the inbox server being used by the recipient and the timestamp of sending and receiving. The server could be shared by thousands of users, making social network modelling much more challenging. == History == During the early history of cryptography, two parties would rely upon a key that they would exchange by means of a secure, but non-cryptographic, method such as a face-to-face meeting, or a trusted courier. This key, which both parties must then keep absolutely secret, could then be used to exchange encrypted messages. A number of significant practical difficulties arise with this approach to distributing keys. === Anticipation === In his 1874 book The Principles of Science, William Stanley Jevons wrote: Can the reader say what two numbers multiplied together will produce the number 8616460799? I think it unlikely that anyone but myself will ever know. Here he described the relationship of one-way functions to cryptography, and went on to discuss specifically the factorization problem used to create a trapdoor function. In July 1996, mathematician Solomon W. Golomb said: "Jevons anticipated a key feature of the RSA Algorithm for public key cryptography, although he certainly did not invent the concept of public key cryptography." === Classified discovery === In 1970, James H. Ellis, a British cryptographer at the UK Government Communications Headquarters (GCHQ), conceived of the possibility of "non-secret encryption", (now called public key cryptography), but could see no way to implement it. In 1973, his colleague Clifford Cocks implemented what has become known as the RSA encryption algorithm, giving a practical method of "non-secret encryption", and in 1974 another GCHQ mathematician and cryptographer, Malcolm J. Williamson, developed what is now known as Diffie–Hellman key exchange. The scheme was also passed to the US's National Security Agency. Both organisations had a military focus and only limited computing power was available in any case; the potential of public key cryptography remained unrealised by either organization: I judged it most important for military use ... if you can share your key rapidly and electronically, you have a major advantage over your opponent. Only at the end of the evolution from Berners-Lee designing an open internet architecture for CERN, its adaptation and adoption for the Arpanet ... did public key cryptography realise its full potential. —Ralph Benjamin These discoveries were not publicly acknowledged for 27 years, until the research was declassified by the British government in 1997. 
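Jevons's challenge number, quoted above, is far too small to resist modern computation: the sketch below (an illustration in plain Python) recovers the factorization usually cited in the literature, 8616460799 = 89681 × 96079, by simple trial division in a fraction of a second, one way of seeing why practical public-key systems must rely on far larger numbers.

# Illustrative sketch: factoring Jevons's challenge number by trial division.
def smallest_factor(n):
    if n % 2 == 0:
        return 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d
        d += 2
    return n

n = 8616460799
p = smallest_factor(n)
print(p, n // p, p * (n // p) == n)   # 89681 96079 True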
=== Public discovery === In 1976, an asymmetric key cryptosystem was published by Whitfield Diffie and Martin Hellman who, influenced by Ralph Merkle's work on public key distribution, disclosed a method of public key agreement. This method of key exchange, which uses exponentiation in a finite field, came to be known as Diffie–Hellman key exchange. This was the first published practical method for establishing a shared secret-key over an authenticated (but not confidential) communications channel without using a prior shared secret. Merkle's "public key-agreement technique" became known as Merkle's Puzzles, and was invented in 1974 and only published in 1978. This makes asymmetric encryption a rather new field in cryptography although cryptography itself dates back more than 2,000 years. In 1977, a generalization of Cocks's scheme was independently invented by Ron Rivest, Adi Shamir and Leonard Adleman, all then at MIT. The latter authors published their work in 1978 in Martin Gardner's Scientific American column, and the algorithm came to be known as RSA, from their initials. RSA uses exponentiation modulo a product of two very large primes, to encrypt and decrypt, performing both public key encryption and public key digital signatures. Its security is connected to the extreme difficulty of factoring large integers, a problem for which there is no known efficient general technique. A description of the algorithm was published in the Mathematical Games column in the August 1977 issue of Scientific American. Since the 1970s, a large number and variety of encryption, digital signature, key agreement, and other techniques have been developed, including the Rabin cryptosystem, ElGamal encryption, DSA and ECC. == Examples == Examples of well-regarded asymmetric key techniques for varied purposes include: Diffie–Hellman key exchange protocol DSS (Digital Signature Standard), which incorporates the Digital Signature Algorithm ElGamal Elliptic-curve cryptography Elliptic Curve Digital Signature Algorithm (ECDSA) Elliptic-curve Diffie–Hellman (ECDH) Ed25519 and Ed448 (EdDSA) X25519 and X448 (ECDH/EdDH) Various password-authenticated key agreement techniques Paillier cryptosystem RSA encryption algorithm (PKCS#1) Cramer–Shoup cryptosystem YAK authenticated key agreement protocol Examples of asymmetric key algorithms not yet widely adopted include: NTRUEncrypt cryptosystem Kyber McEliece cryptosystem Examples of notable – yet insecure – asymmetric key algorithms include: Merkle–Hellman knapsack cryptosystem Examples of protocols using asymmetric key algorithms include: S/MIME GPG, an implementation of OpenPGP, and an Internet Standard EMV, EMV Certificate Authority IPsec PGP ZRTP, a secure VoIP protocol Transport Layer Security standardized by IETF and its predecessor Secure Socket Layer SILC SSH Bitcoin Off-the-Record Messaging == See also == == Notes == == References == == External links == Oral history interview with Martin Hellman, Charles Babbage Institute, University of Minnesota. Leading cryptography scholar Martin Hellman discusses the circumstances and fundamental insights of his invention of public key cryptography with collaborators Whitfield Diffie and Ralph Merkle at Stanford University in the mid-1970s. An account of how GCHQ kept their invention of PKE secret until 1997
Wikipedia/Public_key_cryptography
The representation theory of groups is a part of mathematics which examines how groups act on given structures. Here the focus is in particular on operations of groups on vector spaces. Nevertheless, groups acting on other groups or on sets are also considered. For more details, please refer to the section on permutation representations. Other than a few marked exceptions, only finite groups will be considered in this article. We will also restrict ourselves to vector spaces over fields of characteristic zero. Because the theory of algebraically closed fields of characteristic zero is complete, a theory valid for a special algebraically closed field of characteristic zero is also valid for every other algebraically closed field of characteristic zero. Thus, without loss of generality, we can study vector spaces over C . {\displaystyle \mathbb {C} .} Representation theory is used in many parts of mathematics, as well as in quantum chemistry and physics. Among other things it is used in algebra to examine the structure of groups. There are also applications in harmonic analysis and number theory. For example, representation theory is used in the modern approach to gain new results about automorphic forms. == Definition == === Linear representations === Let V {\displaystyle V} be a K {\displaystyle K} –vector space and G {\displaystyle G} a finite group. A linear representation of G {\displaystyle G} is a group homomorphism ρ : G → GL ( V ) = Aut ( V ) . {\displaystyle \rho :G\to {\text{GL}}(V)={\text{Aut}}(V).} Here GL ( V ) {\displaystyle {\text{GL}}(V)} is notation for a general linear group, and Aut ( V ) {\displaystyle {\text{Aut}}(V)} for an automorphism group. This means that a linear representation is a map ρ : G → GL ( V ) {\displaystyle \rho :G\to {\text{GL}}(V)} which satisfies ρ ( s t ) = ρ ( s ) ρ ( t ) {\displaystyle \rho (st)=\rho (s)\rho (t)} for all s , t ∈ G . {\displaystyle s,t\in G.} The vector space V {\displaystyle V} is called a representation space of G . {\displaystyle G.} Often the term "representation of G {\displaystyle G} " is also used for the representation space V . {\displaystyle V.} The representation of a group in a module instead of a vector space is also called a linear representation. We write ( ρ , V ρ ) {\displaystyle (\rho ,V_{\rho })} for the representation ρ : G → GL ( V ρ ) {\displaystyle \rho :G\to {\text{GL}}(V_{\rho })} of G . {\displaystyle G.} Sometimes we use the notation ( ρ , V ) {\displaystyle (\rho ,V)} if it is clear to which representation the space V {\displaystyle V} belongs. In this article we will restrict ourselves to the study of finite-dimensional representation spaces, except for the last chapter. As in most cases only a finite number of vectors in V {\displaystyle V} is of interest, it is sufficient to study the subrepresentation generated by these vectors. The representation space of this subrepresentation is then finite-dimensional. The degree of a representation is the dimension of its representation space V . {\displaystyle V.} The notation dim ⁡ ( ρ ) {\displaystyle \dim(\rho )} is sometimes used to denote the degree of a representation ρ . {\displaystyle \rho .} === Examples === The trivial representation is given by ρ ( s ) = Id {\displaystyle \rho (s)={\text{Id}}} for all s ∈ G . {\displaystyle s\in G.} A representation of degree 1 {\displaystyle 1} of a group G {\displaystyle G} is a homomorphism into the multiplicative group ρ : G → GL 1 ( C ) = C × = C ∖ { 0 } . 
{\displaystyle \rho :G\to {\text{GL}}_{1}(\mathbb {C} )=\mathbb {C} ^{\times }=\mathbb {C} \setminus \{0\}.} As every element of G {\displaystyle G} is of finite order, the values of ρ ( s ) {\displaystyle \rho (s)} are roots of unity. For example, let ρ : G = Z / 4 Z → C × {\displaystyle \rho :G=\mathbb {Z} /4\mathbb {Z} \to \mathbb {C} ^{\times }} be a nontrivial linear representation. Since ρ {\displaystyle \rho } is a group homomorphism, it has to satisfy ρ ( 0 ) = 1. {\displaystyle \rho ({0})=1.} Because 1 {\displaystyle 1} generates G , ρ {\displaystyle G,\rho } is determined by its value on ρ ( 1 ) . {\displaystyle \rho (1).} And as ρ {\displaystyle \rho } is nontrivial, ρ ( 1 ) ∈ { i , − 1 , − i } . {\displaystyle \rho ({1})\in \{i,-1,-i\}.} Thus, we achieve the result that the image of G {\displaystyle G} under ρ {\displaystyle \rho } has to be a nontrivial subgroup of the group which consists of the fourth roots of unity. In other words, ρ {\displaystyle \rho } has to be one of the following three maps: { ρ 1 ( 0 ) = 1 ρ 1 ( 1 ) = i ρ 1 ( 2 ) = − 1 ρ 1 ( 3 ) = − i { ρ 2 ( 0 ) = 1 ρ 2 ( 1 ) = − 1 ρ 2 ( 2 ) = 1 ρ 2 ( 3 ) = − 1 { ρ 3 ( 0 ) = 1 ρ 3 ( 1 ) = − i ρ 3 ( 2 ) = − 1 ρ 3 ( 3 ) = i {\displaystyle {\begin{cases}\rho _{1}({0})=1\\\rho _{1}({1})=i\\\rho _{1}({2})=-1\\\rho _{1}({3})=-i\end{cases}}\qquad {\begin{cases}\rho _{2}({0})=1\\\rho _{2}({1})=-1\\\rho _{2}({2})=1\\\rho _{2}({3})=-1\end{cases}}\qquad {\begin{cases}\rho _{3}({0})=1\\\rho _{3}({1})=-i\\\rho _{3}({2})=-1\\\rho _{3}({3})=i\end{cases}}} Let G = Z / 2 Z × Z / 2 Z {\displaystyle G=\mathbb {Z} /2\mathbb {Z} \times \mathbb {Z} /2\mathbb {Z} } and let ρ : G → GL 2 ( C ) {\displaystyle \rho :G\to {\text{GL}}_{2}(\mathbb {C} )} be the group homomorphism defined by: ρ ( 0 , 0 ) = ( 1 0 0 1 ) , ρ ( 1 , 0 ) = ( − 1 0 0 − 1 ) , ρ ( 0 , 1 ) = ( 0 1 1 0 ) , ρ ( 1 , 1 ) = ( 0 − 1 − 1 0 ) . {\displaystyle \rho ({0},{0})={\begin{pmatrix}1&0\\0&1\end{pmatrix}},\quad \rho ({1},{0})={\begin{pmatrix}-1&0\\0&-1\end{pmatrix}},\quad \rho ({0},{1})={\begin{pmatrix}0&1\\1&0\end{pmatrix}},\quad \rho ({1},{1})={\begin{pmatrix}0&-1\\-1&0\end{pmatrix}}.} In this case ρ {\displaystyle \rho } is a linear representation of G {\displaystyle G} of degree 2. {\displaystyle 2.} ==== Permutation representation ==== Let X {\displaystyle X} be a finite set and let G {\displaystyle G} be a group acting on X . {\displaystyle X.} Denote by Aut ( X ) {\displaystyle {\text{Aut}}(X)} the group of all permutations on X {\displaystyle X} with the composition as group multiplication. A group acting on a finite set is sometimes considered sufficient for the definition of the permutation representation. However, since we want to construct examples for linear representations - where groups act on vector spaces instead of on arbitrary finite sets - we have to proceed in a different way. In order to construct the permutation representation, we need a vector space V {\displaystyle V} with dim ⁡ ( V ) = | X | . {\displaystyle \dim(V)=|X|.} A basis of V {\displaystyle V} can be indexed by the elements of X . {\displaystyle X.} The permutation representation is the group homomorphism ρ : G → GL ( V ) {\displaystyle \rho :G\to {\text{GL}}(V)} given by ρ ( s ) e x = e s . x {\displaystyle \rho (s)e_{x}=e_{s.x}} for all s ∈ G , x ∈ X . {\displaystyle s\in G,x\in X.} All linear maps ρ ( s ) {\displaystyle \rho (s)} are uniquely defined by this property. Example. Let X = { 1 , 2 , 3 } {\displaystyle X=\{1,2,3\}} and G = Sym ( 3 ) . 
{\displaystyle G={\text{Sym}}(3).} Then G {\displaystyle G} acts on X {\displaystyle X} via Aut ( X ) = G . {\displaystyle {\text{Aut}}(X)=G.} The associated linear representation is ρ : G → GL ( V ) ≅ GL 3 ( C ) {\displaystyle \rho :G\to {\text{GL}}(V)\cong {\text{GL}}_{3}(\mathbb {C} )} with ρ ( σ ) e x = e σ ( x ) {\displaystyle \rho (\sigma )e_{x}=e_{\sigma (x)}} for σ ∈ G , x ∈ X . {\displaystyle \sigma \in G,x\in X.} ==== Left- and right-regular representation ==== Let G {\displaystyle G} be a group and V {\displaystyle V} be a vector space of dimension | G | {\displaystyle |G|} with a basis ( e t ) t ∈ G {\displaystyle (e_{t})_{t\in G}} indexed by the elements of G . {\displaystyle G.} The left-regular representation is a special case of the permutation representation by choosing X = G . {\displaystyle X=G.} This means ρ ( s ) e t = e s t {\displaystyle \rho (s)e_{t}=e_{st}} for all s , t ∈ G . {\displaystyle s,t\in G.} Thus, the family ( ρ ( s ) e 1 ) s ∈ G {\displaystyle (\rho (s)e_{1})_{s\in G}} of images of e 1 {\displaystyle e_{1}} are a basis of V . {\displaystyle V.} The degree of the left-regular representation is equal to the order of the group. The right-regular representation is defined on the same vector space with a similar homomorphism: ρ ( s ) e t = e t s − 1 . {\displaystyle \rho (s)e_{t}=e_{ts^{-1}}.} In the same way as before ( ρ ( s ) e 1 ) s ∈ G {\displaystyle (\rho (s)e_{1})_{s\in G}} is a basis of V . {\displaystyle V.} Just as in the case of the left-regular representation, the degree of the right-regular representation is equal to the order of G . {\displaystyle G.} Both representations are isomorphic via e s ↦ e s − 1 . {\displaystyle e_{s}\mapsto e_{s^{-1}}.} For this reason they are not always set apart, and often referred to as "the" regular representation. A closer look provides the following result: A given linear representation ρ : G → GL ( W ) {\displaystyle \rho :G\to {\text{GL}}(W)} is isomorphic to the left-regular representation if and only if there exists a w ∈ W , {\displaystyle w\in W,} such that ( ρ ( s ) w ) s ∈ G {\displaystyle (\rho (s)w)_{s\in G}} is a basis of W . {\displaystyle W.} Example. Let G = Z / 5 Z {\displaystyle G=\mathbb {Z} /5\mathbb {Z} } and V = R 5 {\displaystyle V=\mathbb {R} ^{5}} with the basis { e 0 , … , e 4 } . {\displaystyle \{e_{0},\ldots ,e_{4}\}.} Then the left-regular representation L ρ : G → GL ( V ) {\displaystyle L_{\rho }:G\to {\text{GL}}(V)} is defined by L ρ ( k ) e l = e l + k {\displaystyle L_{\rho }(k)e_{l}=e_{l+k}} for k , l ∈ Z / 5 Z . {\displaystyle k,l\in \mathbb {Z} /5\mathbb {Z} .} The right-regular representation is defined analogously by R ρ ( k ) e l = e l − k {\displaystyle R_{\rho }(k)e_{l}=e_{l-k}} for k , l ∈ Z / 5 Z . {\displaystyle k,l\in \mathbb {Z} /5\mathbb {Z} .} === Representations, modules and the convolution algebra === Let G {\displaystyle G} be a finite group, let K {\displaystyle K} be a commutative ring and let K [ G ] {\displaystyle K[G]} be the group algebra of G {\displaystyle G} over K . {\displaystyle K.} This algebra is free and a basis can be indexed by the elements of G . {\displaystyle G.} Most often the basis is identified with G {\displaystyle G} . Every element f ∈ K [ G ] {\displaystyle f\in K[G]} can then be uniquely expressed as f = ∑ s ∈ G a s s {\displaystyle f=\sum _{s\in G}a_{s}s} with a s ∈ K {\displaystyle a_{s}\in K} . The multiplication in K [ G ] {\displaystyle K[G]} extends that in G {\displaystyle G} distributively. 
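As a small computational sketch (an illustration assuming the numpy package, not part of the formal development), the group algebra C[Z/5Z] can be modelled by storing an element Σ a_s e_s as its vector of coefficients; the product of two elements is then the cyclic convolution of the coefficient vectors, because e_k e_l = e_{k+l}.

# Illustrative sketch (assumes numpy): the group algebra C[Z/5Z],
# with an element sum_s a_s e_s stored as its coefficient vector a.
import numpy as np

n = 5

def multiply(a, b):
    """Product in C[Z/nZ]: cyclic convolution, since e_k * e_l = e_{k+l mod n}."""
    c = np.zeros(n, dtype=complex)
    for k in range(n):
        for l in range(n):
            c[(k + l) % n] += a[k] * b[l]
    return c

e = np.eye(n)                    # e[k] plays the role of the basis element e_k
print(multiply(e[2], e[4]))      # equals e_1, since 2 + 4 = 1 (mod 5)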
Now let V {\displaystyle V} be a K {\displaystyle K} –module and let ρ : G → GL ( V ) {\displaystyle \rho :G\to {\text{GL}}(V)} be a linear representation of G {\displaystyle G} in V . {\displaystyle V.} We define s v = ρ ( s ) v {\displaystyle sv=\rho (s)v} for all s ∈ G {\displaystyle s\in G} and v ∈ V {\displaystyle v\in V} . By linear extension V {\displaystyle V} is endowed with the structure of a left- K [ G ] {\displaystyle K[G]} –module. Vice versa we obtain a linear representation of G {\displaystyle G} starting from a K [ G ] {\displaystyle K[G]} –module V {\displaystyle V} . Additionally, homomorphisms of representations are in bijective correspondence with group algebra homomorphisms. Therefore, these terms may be used interchangeably. This is an example of an isomorphism of categories. Suppose K = C . {\displaystyle K=\mathbb {C} .} In this case the left C [ G ] {\displaystyle \mathbb {C} [G]} –module given by C [ G ] {\displaystyle \mathbb {C} [G]} itself corresponds to the left-regular representation. In the same way C [ G ] {\displaystyle \mathbb {C} [G]} as a right C [ G ] {\displaystyle \mathbb {C} [G]} –module corresponds to the right-regular representation. In the following we will define the convolution algebra: Let G {\displaystyle G} be a group, the set L 1 ( G ) := { f : G → C } {\displaystyle L^{1}(G):=\{f:G\to \mathbb {C} \}} is a C {\displaystyle \mathbb {C} } –vector space with the operations addition and scalar multiplication then this vector space is isomorphic to C | G | . {\displaystyle \mathbb {C} ^{|G|}.} The convolution of two elements f , h ∈ L 1 ( G ) {\displaystyle f,h\in L^{1}(G)} defined by f ∗ h ( s ) := ∑ t ∈ G f ( t ) h ( t − 1 s ) {\displaystyle f*h(s):=\sum _{t\in G}f(t)h(t^{-1}s)} makes L 1 ( G ) {\displaystyle L^{1}(G)} an algebra. The algebra L 1 ( G ) {\displaystyle L^{1}(G)} is called the convolution algebra. The convolution algebra is free and has a basis indexed by the group elements: ( δ s ) s ∈ G , {\displaystyle (\delta _{s})_{s\in G},} where δ s ( t ) = { 1 t = s 0 otherwise. {\displaystyle \delta _{s}(t)={\begin{cases}1&t=s\\0&{\text{otherwise.}}\end{cases}}} Using the properties of the convolution we obtain: δ s ∗ δ t = δ s t . {\displaystyle \delta _{s}*\delta _{t}=\delta _{st}.} We define a map between L 1 ( G ) {\displaystyle L^{1}(G)} and C [ G ] , {\displaystyle \mathbb {C} [G],} by defining δ s ↦ e s {\displaystyle \delta _{s}\mapsto e_{s}} on the basis ( δ s ) s ∈ G {\displaystyle (\delta _{s})_{s\in G}} and extending it linearly. Obviously the prior map is bijective. A closer inspection of the convolution of two basis elements as shown in the equation above reveals that the multiplication in L 1 ( G ) {\displaystyle L^{1}(G)} corresponds to that in C [ G ] . {\displaystyle \mathbb {C} [G].} Thus, the convolution algebra and the group algebra are isomorphic as algebras. The involution f ∗ ( s ) = f ( s − 1 ) ¯ {\displaystyle f^{*}(s)={\overline {f(s^{-1})}}} turns L 1 ( G ) {\displaystyle L^{1}(G)} into a ∗ {\displaystyle ^{*}} –algebra. We have δ s ∗ = δ s − 1 . {\displaystyle \delta _{s}^{*}=\delta _{s^{-1}}.} A representation ( π , V π ) {\displaystyle (\pi ,V_{\pi })} of a group G {\displaystyle G} extends to a ∗ {\displaystyle ^{*}} –algebra homomorphism π : L 1 ( G ) → End ( V π ) {\displaystyle \pi :L^{1}(G)\to {\text{End}}(V_{\pi })} by π ( δ s ) = π ( s ) . 
{\displaystyle \pi (\delta _{s})=\pi (s).} Since multiplicativity is a characteristic property of algebra homomorphisms, π {\displaystyle \pi } satisfies π ( f ∗ h ) = π ( f ) π ( h ) . {\displaystyle \pi (f*h)=\pi (f)\pi (h).} If π {\displaystyle \pi } is unitary, we also obtain π ( f ) ∗ = π ( f ∗ ) . {\displaystyle \pi (f)^{*}=\pi (f^{*}).} For the definition of a unitary representation, please refer to the chapter on properties. In that chapter we will see that (without loss of generality) every linear representation can be assumed to be unitary. Using the convolution algebra we can implement a Fourier transformation on a group G . {\displaystyle G.} In the area of harmonic analysis it is shown that the following definition is consistent with the definition of the Fourier transformation on R . {\displaystyle \mathbb {R} .} Let ρ : G → GL ( V ρ ) {\displaystyle \rho :G\to {\text{GL}}(V_{\rho })} be a representation and let f ∈ L 1 ( G ) {\displaystyle f\in L^{1}(G)} be a C {\displaystyle \mathbb {C} } -valued function on G {\displaystyle G} . The Fourier transform f ^ ( ρ ) ∈ End ( V ρ ) {\displaystyle {\hat {f}}(\rho )\in {\text{End}}(V_{\rho })} of f {\displaystyle f} is defined as f ^ ( ρ ) = ∑ s ∈ G f ( s ) ρ ( s ) . {\displaystyle {\hat {f}}(\rho )=\sum _{s\in G}f(s)\rho (s).} This transformation satisfies f ∗ g ^ ( ρ ) = f ^ ( ρ ) ⋅ g ^ ( ρ ) . {\displaystyle {\widehat {f*g}}(\rho )={\hat {f}}(\rho )\cdot {\hat {g}}(\rho ).} === Maps between representations === A map between two representations ( ρ , V ρ ) , ( τ , V τ ) {\displaystyle (\rho ,V_{\rho }),\,(\tau ,V_{\tau })} of the same group G {\displaystyle G} is a linear map T : V ρ → V τ , {\displaystyle T:V_{\rho }\to V_{\tau },} with the property that τ ( s ) ∘ T = T ∘ ρ ( s ) {\displaystyle \tau (s)\circ T=T\circ \rho (s)} holds for all s ∈ G . {\displaystyle s\in G.} In other words, the following diagram commutes for all s ∈ G {\displaystyle s\in G} : Such a map is also called G {\displaystyle G} –linear, or an equivariant map. The kernel, the image and the cokernel of T {\displaystyle T} are defined by default. The composition of equivariant maps is again an equivariant map. There is a category of representations with equivariant maps as its morphisms. They are again G {\displaystyle G} –modules. Thus, they provide representations of G {\displaystyle G} due to the correlation described in the previous section. == Irreducible representations and Schur's lemma == Let ρ : G → GL ( V ) {\displaystyle \rho :G\to {\text{GL}}(V)} be a linear representation of G . {\displaystyle G.} Let W {\displaystyle W} be a G {\displaystyle G} -invariant subspace of V , {\displaystyle V,} that is, ρ ( s ) w ∈ W {\displaystyle \rho (s)w\in W} for all s ∈ G {\displaystyle s\in G} and w ∈ W {\displaystyle w\in W} . The restriction ρ ( s ) | W {\displaystyle \rho (s)|_{W}} is an isomorphism of W {\displaystyle W} onto itself. Because ρ ( s ) | W ∘ ρ ( t ) | W = ρ ( s t ) | W {\displaystyle \rho (s)|_{W}\circ \rho (t)|_{W}=\rho (st)|_{W}} holds for all s , t ∈ G , {\displaystyle s,t\in G,} this construction is a representation of G {\displaystyle G} in W . {\displaystyle W.} It is called subrepresentation of V . {\displaystyle V.} Any representation V has at least two subrepresentations, namely the one consisting only of 0, and the one consisting of V itself. The representation is called an irreducible representation, if these two are the only subrepresentations. 
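The convolution and the Fourier transform defined above can be made concrete on a small abelian group, where every irreducible representation has degree one. The sketch below (again Python/NumPy, an illustration rather than part of the article) takes G = Z/6Z with the characters ρ_m(k) = exp(2πimk/6) and verifies the relation (f∗h)^(ρ_m) = f̂(ρ_m)·ĥ(ρ_m); in this abelian case the group Fourier transform reduces to a discrete Fourier transform, up to the sign convention in the exponent.

```python
import numpy as np

n = 6  # the cyclic group Z/6Z, written additively
rng = np.random.default_rng(0)
f = rng.standard_normal(n) + 1j * rng.standard_normal(n)
h = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def convolve(f, h):
    """(f*h)(s) = sum_t f(t) h(t^{-1} s); in additive notation t^{-1} s = s - t."""
    return np.array([sum(f[t] * h[(s - t) % n] for t in range(n)) for s in range(n)])

def fourier(f, m):
    """f_hat(rho_m) = sum_k f(k) rho_m(k) for the degree-one representation rho_m(k) = exp(2*pi*i*m*k/n)."""
    return sum(f[k] * np.exp(2j * np.pi * m * k / n) for k in range(n))

# convolution theorem: (f*h)^(rho_m) = f_hat(rho_m) * h_hat(rho_m) for every character rho_m
fh = convolve(f, h)
for m in range(n):
    assert np.isclose(fourier(fh, m), fourier(f, m) * fourier(h, m))
print("convolution theorem verified for all six characters of Z/6Z")
```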
Some authors also call these representations simple, given that they are precisely the simple modules over the group algebra C [ G ] {\displaystyle \mathbb {C} [G]} . Schur's lemma puts a strong constraint on maps between irreducible representations. If ρ 1 : G → GL ( V 1 ) {\displaystyle \rho _{1}:G\to {\text{GL}}(V_{1})} and ρ 2 : G → GL ( V 2 ) {\displaystyle \rho _{2}:G\to {\text{GL}}(V_{2})} are both irreducible, and F : V 1 → V 2 {\displaystyle F:V_{1}\to V_{2}} is a linear map such that ρ 2 ( s ) ∘ F = F ∘ ρ 1 ( s ) {\displaystyle \rho _{2}(s)\circ F=F\circ \rho _{1}(s)} for all s ∈ G . {\displaystyle s\in G.} , there is the following dichotomy: If V 1 = V 2 {\displaystyle V_{1}=V_{2}} and ρ 1 = ρ 2 , {\displaystyle \rho _{1}=\rho _{2},} F {\displaystyle F} is a homothety (i.e. F = λ Id {\displaystyle F=\lambda {\text{Id}}} for a λ ∈ C {\displaystyle \lambda \in \mathbb {C} } ). More generally, if ρ 1 {\displaystyle \rho _{1}} and ρ 2 {\displaystyle \rho _{2}} are isomorphic, the space of G-linear maps is one-dimensional. Otherwise, if the two representations are not isomorphic, F must be 0. == Properties == Two representations ( ρ , V ρ ) , ( π , V π ) {\displaystyle (\rho ,V_{\rho }),(\pi ,V_{\pi })} are called equivalent or isomorphic, if there exists a G {\displaystyle G} –linear vector space isomorphism between the representation spaces. In other words, they are isomorphic if there exists a bijective linear map T : V ρ → V π , {\displaystyle T:V_{\rho }\to V_{\pi },} such that T ∘ ρ ( s ) = π ( s ) ∘ T {\displaystyle T\circ \rho (s)=\pi (s)\circ T} for all s ∈ G . {\displaystyle s\in G.} In particular, equivalent representations have the same degree. A representation ( π , V π ) {\displaystyle (\pi ,V_{\pi })} is called faithful when π {\displaystyle \pi } is injective. In this case π {\displaystyle \pi } induces an isomorphism between G {\displaystyle G} and the image π ( G ) . {\displaystyle \pi (G).} As the latter is a subgroup of GL ( V π ) , {\displaystyle {\text{GL}}(V_{\pi }),} we can regard G {\displaystyle G} via π {\displaystyle \pi } as subgroup of Aut ( V π ) . {\displaystyle {\text{Aut}}(V_{\pi }).} We can restrict the range as well as the domain: Let H {\displaystyle H} be a subgroup of G . {\displaystyle G.} Let ρ {\displaystyle \rho } be a linear representation of G . {\displaystyle G.} We denote by Res H ( ρ ) {\displaystyle {\text{Res}}_{H}(\rho )} the restriction of ρ {\displaystyle \rho } to the subgroup H . {\displaystyle H.} If there is no danger of confusion, we might use only Res ( ρ ) {\displaystyle {\text{Res}}(\rho )} or in short Res ρ . {\displaystyle {\text{Res}}\rho .} The notation Res H ( V ) {\displaystyle {\text{Res}}_{H}(V)} or in short Res ( V ) {\displaystyle {\text{Res}}(V)} is also used to denote the restriction of the representation V {\displaystyle V} of G {\displaystyle G} onto H . {\displaystyle H.} Let f {\displaystyle f} be a function on G . {\displaystyle G.} We write Res H ( f ) {\displaystyle {\text{Res}}_{H}(f)} or shortly Res ( f ) {\displaystyle {\text{Res}}(f)} for the restriction to the subgroup H . {\displaystyle H.} It can be proven that the number of irreducible representations of a group G {\displaystyle G} (or correspondingly the number of simple C [ G ] {\displaystyle \mathbb {C} [G]} –modules) equals the number of conjugacy classes of G . {\displaystyle G.} A representation is called semisimple or completely reducible if it can be written as a direct sum of irreducible representations. 
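Schur's lemma can be observed numerically by averaging: for any linear map B, the average (1/|G|)·Σ_s ρ_2(s)∘B∘ρ_1(s)^{-1} is G-linear, so by the dichotomy above it is a homothety when ρ_1 = ρ_2 is irreducible and it vanishes when ρ_1 and ρ_2 are non-isomorphic irreducibles. The sketch below illustrates this for the symmetric group on three elements, using its two-dimensional irreducible representation (given here by explicit rotation and reflection matrices, which are an assumption of this sketch, not taken from the article) and the sign representation s ↦ det ρ(s).

```python
import numpy as np

# a two-dimensional irreducible representation of the symmetric group on 3 elements (~ dihedral group of order 6)
co, si = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[co, -si], [si, co]])       # a 3-cycle, realised as rotation by 120 degrees
m = np.array([[1.0, 0.0], [0.0, -1.0]])   # a transposition, realised as a reflection
group = [np.linalg.matrix_power(r, k) @ (m if j else np.eye(2)) for k in range(3) for j in range(2)]

rng = np.random.default_rng(1)
B = rng.standard_normal((2, 2))

# Averaging makes any linear map equivariant.  With rho_1 = rho_2 irreducible,
# Schur's lemma forces the result to be a homothety lambda * Id:
T = sum(g @ B @ np.linalg.inv(g) for g in group) / len(group)
assert np.allclose(T, T[0, 0] * np.eye(2))

# Between non-isomorphic irreducibles (here: the sign representation g -> det(g)
# and the two-dimensional one) the averaged map must be zero:
b = rng.standard_normal((2, 1))
F = sum(g @ b * (1.0 / np.linalg.det(g)) for g in group) / len(group)
assert np.allclose(F, 0)
print("Schur's lemma verified numerically")
```

Both outcomes are exactly what Schur's lemma predicts; complete reducibility, as just defined, is what allows such averaging arguments to be applied to arbitrary representations in the section on decompositions below.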
This is analogous to the corresponding definition for a semisimple algebra. For the definition of the direct sum of representations please refer to the section on direct sums of representations. A representation is called isotypic if it is a direct sum of pairwise isomorphic irreducible representations. Let ( ρ , V ρ ) {\displaystyle (\rho ,V_{\rho })} be a given representation of a group G . {\displaystyle G.} Let τ {\displaystyle \tau } be an irreducible representation of G . {\displaystyle G.} The τ {\displaystyle \tau } –isotype V ρ ( τ ) {\displaystyle V_{\rho }(\tau )} of G {\displaystyle G} is defined as the sum of all irreducible subrepresentations of V {\displaystyle V} isomorphic to τ . {\displaystyle \tau .} Every vector space over C {\displaystyle \mathbb {C} } can be provided with an inner product. A representation ρ {\displaystyle \rho } of a group G {\displaystyle G} in a vector space endowed with an inner product is called unitary if ρ ( s ) {\displaystyle \rho (s)} is unitary for every s ∈ G . {\displaystyle s\in G.} This means that in particular every ρ ( s ) {\displaystyle \rho (s)} is diagonalizable. For more details see the article on unitary representations. A representation is unitary with respect to a given inner product if and only if the inner product is invariant with regard to the induced operation of G , {\displaystyle G,} i.e. if and only if ( v | u ) = ( ρ ( s ) v | ρ ( s ) u ) {\displaystyle (v|u)=(\rho (s)v|\rho (s)u)} holds for all v , u ∈ V ρ , s ∈ G . {\displaystyle v,u\in V_{\rho },s\in G.} A given inner product ( ⋅ | ⋅ ) {\displaystyle (\cdot |\cdot )} can be replaced by an invariant inner product by exchanging ( v | u ) {\displaystyle (v|u)} with ∑ t ∈ G ( ρ ( t ) v | ρ ( t ) u ) . {\displaystyle \sum _{t\in G}(\rho (t)v|\rho (t)u).} Thus, without loss of generality we can assume that every further considered representation is unitary. Example. Let G = D 6 = { id , μ , μ 2 , ν , μ ν , μ 2 ν } {\displaystyle G=D_{6}=\{{\text{id}},\mu ,\mu ^{2},\nu ,\mu \nu ,\mu ^{2}\nu \}} be the dihedral group of order 6 {\displaystyle 6} generated by μ , ν {\displaystyle \mu ,\nu } which fulfil the properties ord ( ν ) = 2 , ord ( μ ) = 3 {\displaystyle {\text{ord}}(\nu )=2,{\text{ord}}(\mu )=3} and ν μ ν = μ 2 . {\displaystyle \nu \mu \nu =\mu ^{2}.} Let ρ : D 6 → GL 3 ( C ) {\displaystyle \rho :D_{6}\to {\text{GL}}_{3}(\mathbb {C} )} be a linear representation of D 6 {\displaystyle D_{6}} defined on the generators by: ρ ( μ ) = ( cos ⁡ ( 2 π 3 ) 0 − sin ⁡ ( 2 π 3 ) 0 1 0 sin ⁡ ( 2 π 3 ) 0 cos ⁡ ( 2 π 3 ) ) , ρ ( ν ) = ( − 1 0 0 0 − 1 0 0 0 1 ) . {\displaystyle \rho (\mu )=\left({\begin{array}{ccc}\cos({\frac {2\pi }{3}})&0&-\sin({\frac {2\pi }{3}})\\0&1&0\\\sin({\frac {2\pi }{3}})&0&\cos({\frac {2\pi }{3}})\end{array}}\right),\,\,\,\,\rho (\nu )=\left({\begin{array}{ccc}-1&0&0\\0&-1&0\\0&0&1\end{array}}\right).} This representation is faithful. The subspace C e 2 {\displaystyle \mathbb {C} e_{2}} is a D 6 {\displaystyle D_{6}} –invariant subspace. Thus, there exists a nontrivial subrepresentation ρ | C e 2 : D 6 → C × {\displaystyle \rho |_{\mathbb {C} e_{2}}:D_{6}\to \mathbb {C} ^{\times }} with ν ↦ − 1 , μ ↦ 1. {\displaystyle \nu \mapsto -1,\mu \mapsto 1.} Therefore, the representation is not irreducible. The mentioned subrepresentation is of degree one and irreducible. The complementary subspace of C e 2 {\displaystyle \mathbb {C} e_{2}} is D 6 {\displaystyle D_{6}} –invariant as well. 
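The claims made in this example can be verified mechanically. The following sketch (Python/NumPy, illustration only) checks that the two matrices satisfy the defining relations of D_6, that they are unitary, that Ce_2 is invariant with μ and ν acting by 1 and −1, and that the complementary subspace spanned by e_1 and e_3 is invariant as well.

```python
import numpy as np

t = 2 * np.pi / 3
mu = np.array([[np.cos(t), 0, -np.sin(t)],
               [0,         1,  0        ],
               [np.sin(t), 0,  np.cos(t)]])
nu = np.diag([-1.0, -1.0, 1.0])

# defining relations of D6: ord(nu) = 2, ord(mu) = 3, nu mu nu = mu^2
assert np.allclose(nu @ nu, np.eye(3))
assert np.allclose(np.linalg.matrix_power(mu, 3), np.eye(3))
assert np.allclose(nu @ mu @ nu, mu @ mu)

# both matrices are unitary (here: real orthogonal), so the representation is unitary
assert np.allclose(mu.T @ mu, np.eye(3)) and np.allclose(nu.T @ nu, np.eye(3))

# C e_2 is invariant: mu and nu act on it by 1 and -1 respectively
e2 = np.eye(3)[:, 1]
assert np.allclose(mu @ e2, e2) and np.allclose(nu @ e2, -e2)

# the complementary subspace spanned by e_1 and e_3 is invariant as well:
# with P the orthogonal projection onto span(e_1, e_3), invariance means P mu P = mu P
P = np.diag([1.0, 0.0, 1.0])
assert np.allclose(P @ mu @ P, mu @ P) and np.allclose(P @ nu @ P, nu @ P)
print("D6 example checks out")
```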
Therefore, we obtain the subrepresentation ρ | C e 1 ⊕ C e 3 {\displaystyle \rho |_{\mathbb {C} e_{1}\oplus \mathbb {C} e_{3}}} with ν ↦ ( − 1 0 0 1 ) , μ ↦ ( cos ⁡ ( 2 π 3 ) − sin ⁡ ( 2 π 3 ) sin ⁡ ( 2 π 3 ) cos ⁡ ( 2 π 3 ) ) . {\displaystyle \nu \mapsto {\begin{pmatrix}-1&0\\0&1\end{pmatrix}},\,\,\,\,\mu \mapsto {\begin{pmatrix}\cos({\frac {2\pi }{3}})&-\sin({\frac {2\pi }{3}})\\\sin({\frac {2\pi }{3}})&\cos({\frac {2\pi }{3}})\end{pmatrix}}.} This subrepresentation is also irreducible. That means, the original representation is completely reducible: ρ = ρ | C e 2 ⊕ ρ | C e 1 ⊕ C e 3 . {\displaystyle \rho =\rho |_{\mathbb {C} e_{2}}\oplus \rho |_{\mathbb {C} e_{1}\oplus \mathbb {C} e_{3}}.} Both subrepresentations are isotypic and are the two only non-zero isotypes of ρ . {\displaystyle \rho .} The representation ρ {\displaystyle \rho } is unitary with regard to the standard inner product on C 3 , {\displaystyle \mathbb {C} ^{3},} because ρ ( μ ) {\displaystyle \rho (\mu )} and ρ ( ν ) {\displaystyle \rho (\nu )} are unitary. Let T : C 3 → C 3 {\displaystyle T:\mathbb {C} ^{3}\to \mathbb {C} ^{3}} be any vector space isomorphism. Then η : D 6 → GL 3 ( C ) , {\displaystyle \eta :D_{6}\to {\text{GL}}_{3}(\mathbb {C} ),} which is defined by the equation η ( s ) := T ∘ ρ ( s ) ∘ T − 1 {\displaystyle \eta (s):=T\circ \rho (s)\circ T^{-1}} for all s ∈ D 6 , {\displaystyle s\in D_{6},} is a representation isomorphic to ρ . {\displaystyle \rho .} By restricting the domain of the representation to a subgroup, e.g. H = { id , μ , μ 2 } , {\displaystyle H=\{{\text{id}},\mu ,\mu ^{2}\},} we obtain the representation Res H ( ρ ) . {\displaystyle {\text{Res}}_{H}(\rho ).} This representation is defined by the image ρ ( μ ) , {\displaystyle \rho (\mu ),} whose explicit form is shown above. == Constructions == === The dual representation === Let ρ : G → GL ( V ) {\displaystyle \rho :G\to {\text{GL}}(V)} be a given representation. The dual representation or contragredient representation ρ ∗ : G → GL ( V ∗ ) {\displaystyle \rho ^{*}:G\to {\text{GL}}(V^{*})} is a representation of G {\displaystyle G} in the dual vector space of V . {\displaystyle V.} It is defined by the property ∀ s ∈ G , v ∈ V , α ∈ V ∗ : ( ρ ∗ ( s ) α ) ( v ) = α ( ρ ( s − 1 ) v ) . {\displaystyle \forall s\in G,v\in V,\alpha \in V^{*}:\qquad \left(\rho ^{*}(s)\alpha \right)(v)=\alpha \left(\rho \left(s^{-1}\right)v\right).} With regard to the natural pairing ⟨ α , v ⟩ := α ( v ) {\displaystyle \langle \alpha ,v\rangle :=\alpha (v)} between V ∗ {\displaystyle V^{*}} and V {\displaystyle V} the definition above provides the equation: ∀ s ∈ G , v ∈ V , α ∈ V ∗ : ⟨ ρ ∗ ( s ) ( α ) , ρ ( s ) ( v ) ⟩ = ⟨ α , v ⟩ . {\displaystyle \forall s\in G,v\in V,\alpha \in V^{*}:\qquad \langle \rho ^{*}(s)(\alpha ),\rho (s)(v)\rangle =\langle \alpha ,v\rangle .} For an example, see the main page on this topic: Dual representation. === Direct sum of representations === Let ( ρ 1 , V 1 ) {\displaystyle (\rho _{1},V_{1})} and ( ρ 2 , V 2 ) {\displaystyle (\rho _{2},V_{2})} be a representation of G 1 {\displaystyle G_{1}} and G 2 , {\displaystyle G_{2},} respectively. 
The direct sum of these representations is a linear representation and is defined as ∀ s 1 ∈ G 1 , s 2 ∈ G 2 , v 1 ∈ V 1 , v 2 ∈ V 2 : { ρ 1 ⊕ ρ 2 : G 1 × G 2 → GL ( V 1 ⊕ V 2 ) ( ρ 1 ⊕ ρ 2 ) ( s 1 , s 2 ) ( v 1 , v 2 ) := ρ 1 ( s 1 ) v 1 ⊕ ρ 2 ( s 2 ) v 2 {\displaystyle \forall s_{1}\in G_{1},s_{2}\in G_{2},v_{1}\in V_{1},v_{2}\in V_{2}:\qquad {\begin{cases}\rho _{1}\oplus \rho _{2}:G_{1}\times G_{2}\to {\text{GL}}(V_{1}\oplus V_{2})\\[4pt](\rho _{1}\oplus \rho _{2})(s_{1},s_{2})(v_{1},v_{2}):=\rho _{1}(s_{1})v_{1}\oplus \rho _{2}(s_{2})v_{2}\end{cases}}} Let ρ 1 , ρ 2 {\displaystyle \rho _{1},\rho _{2}} be representations of the same group G . {\displaystyle G.} For the sake of simplicity, the direct sum of these representations is defined as a representation of G , {\displaystyle G,} i.e. it is given as ρ 1 ⊕ ρ 2 : G → GL ( V 1 ⊕ V 2 ) , {\displaystyle \rho _{1}\oplus \rho _{2}:G\to {\text{GL}}(V_{1}\oplus V_{2}),} by viewing G {\displaystyle G} as the diagonal subgroup of G × G . {\displaystyle G\times G.} Example. Let (here i {\displaystyle i} and ω {\displaystyle \omega } are the imaginary unit and the primitive cube root of unity respectively): { ρ 1 : Z / 2 Z → GL 2 ( C ) ρ 1 ( 1 ) = ( 0 − i i 0 ) { ρ 2 : Z / 3 Z → GL 3 ( C ) ρ 2 ( 1 ) = ( 1 0 ω 0 ω 0 0 0 ω 2 ) {\displaystyle {\begin{cases}\rho _{1}:\mathbb {Z} /2\mathbb {Z} \to {\text{GL}}_{2}(\mathbb {C} )\\[4pt]\rho _{1}(1)={\begin{pmatrix}0&-i\\i&0\end{pmatrix}}\end{cases}}\qquad \qquad {\begin{cases}\rho _{2}:\mathbb {Z} /3\mathbb {Z} \to {\text{GL}}_{3}(\mathbb {C} )\\[6pt]\rho _{2}(1)={\begin{pmatrix}1&0&\omega \\0&\omega &0\\0&0&\omega ^{2}\end{pmatrix}}\end{cases}}} Then { ρ 1 ⊕ ρ 2 : Z / 2 Z × Z / 3 Z → GL ( C 2 ⊕ C 3 ) ( ρ 1 ⊕ ρ 2 ) ( k , l ) = ( ρ 1 ( k ) 0 0 ρ 2 ( l ) ) k ∈ Z / 2 Z , l ∈ Z / 3 Z {\displaystyle {\begin{cases}\rho _{1}\oplus \rho _{2}:\mathbb {Z} /2\mathbb {Z} \times \mathbb {Z} /3\mathbb {Z} \to {\text{GL}}\left(\mathbb {C} ^{2}\oplus \mathbb {C} ^{3}\right)\\[6pt]\left(\rho _{1}\oplus \rho _{2}\right)(k,l)={\begin{pmatrix}\rho _{1}(k)&0\\0&\rho _{2}(l)\end{pmatrix}}&k\in \mathbb {Z} /2\mathbb {Z} ,l\in \mathbb {Z} /3\mathbb {Z} \end{cases}}} As it is sufficient to consider the image of the generating element, we find that ( ρ 1 ⊕ ρ 2 ) ( 1 , 1 ) = ( 0 − i 0 0 0 i 0 0 0 0 0 0 1 0 ω 0 0 0 ω 0 0 0 0 0 ω 2 ) {\displaystyle (\rho _{1}\oplus \rho _{2})(1,1)={\begin{pmatrix}0&-i&0&0&0\\i&0&0&0&0\\0&0&1&0&\omega \\0&0&0&\omega &0\\0&0&0&0&\omega ^{2}\end{pmatrix}}} === Tensor product of representations === Let ρ 1 : G 1 → GL ( V 1 ) , ρ 2 : G 2 → GL ( V 2 ) {\displaystyle \rho _{1}:G_{1}\to {\text{GL}}(V_{1}),\rho _{2}:G_{2}\to {\text{GL}}(V_{2})} be linear representations. We define the linear representation ρ 1 ⊗ ρ 2 : G 1 × G 2 → GL ( V 1 ⊗ V 2 ) {\displaystyle \rho _{1}\otimes \rho _{2}:G_{1}\times G_{2}\to {\text{GL}}(V_{1}\otimes V_{2})} into the tensor product of V 1 {\displaystyle V_{1}} and V 2 {\displaystyle V_{2}} by ρ 1 ⊗ ρ 2 ( s 1 , s 2 ) = ρ 1 ( s 1 ) ⊗ ρ 2 ( s 2 ) , {\displaystyle \rho _{1}\otimes \rho _{2}(s_{1},s_{2})=\rho _{1}(s_{1})\otimes \rho _{2}(s_{2}),} in which s 1 ∈ G 1 , s 2 ∈ G 2 . {\displaystyle s_{1}\in G_{1},s_{2}\in G_{2}.} This representation is called outer tensor product of the representations ρ 1 {\displaystyle \rho _{1}} and ρ 2 . {\displaystyle \rho _{2}.} The existence and uniqueness is a consequence of the properties of the tensor product. Example. 
We reexamine the example provided for the direct sum: { ρ 1 : Z / 2 Z → GL 2 ( C ) ρ 1 ( 1 ) = ( 0 − i i 0 ) { ρ 2 : Z / 3 Z → GL 3 ( C ) ρ 2 ( 1 ) = ( 1 0 ω 0 ω 0 0 0 ω 2 ) {\displaystyle {\begin{cases}\rho _{1}:\mathbb {Z} /2\mathbb {Z} \to {\text{GL}}_{2}(\mathbb {C} )\\[4pt]\rho _{1}(1)={\begin{pmatrix}0&-i\\i&0\end{pmatrix}}\end{cases}}\qquad \qquad {\begin{cases}\rho _{2}:\mathbb {Z} /3\mathbb {Z} \to {\text{GL}}_{3}(\mathbb {C} )\\[6pt]\rho _{2}(1)={\begin{pmatrix}1&0&\omega \\0&\omega &0\\0&0&\omega ^{2}\end{pmatrix}}\end{cases}}} The outer tensor product { ρ 1 ⊗ ρ 2 : Z / 2 Z × Z / 3 Z → GL ( C 2 ⊗ C 3 ) ( ρ 1 ⊗ ρ 2 ) ( k , l ) = ρ 1 ( k ) ⊗ ρ 2 ( l ) k ∈ Z / 2 Z , l ∈ Z / 3 Z {\displaystyle {\begin{cases}\rho _{1}\otimes \rho _{2}:\mathbb {Z} /2\mathbb {Z} \times \mathbb {Z} /3\mathbb {Z} \to {\text{GL}}(\mathbb {C} ^{2}\otimes \mathbb {C} ^{3})\\(\rho _{1}\otimes \rho _{2})(k,l)=\rho _{1}(k)\otimes \rho _{2}(l)&k\in \mathbb {Z} /2\mathbb {Z} ,l\in \mathbb {Z} /3\mathbb {Z} \end{cases}}} Using the standard basis of C 2 ⊗ C 3 ≅ C 6 {\displaystyle \mathbb {C} ^{2}\otimes \mathbb {C} ^{3}\cong \mathbb {C} ^{6}} we have the following for the generating element: ρ 1 ⊗ ρ 2 ( 1 , 1 ) = ρ 1 ( 1 ) ⊗ ρ 2 ( 1 ) = ( 0 0 0 − i 0 − i ω 0 0 0 0 − i ω 0 0 0 0 0 0 − i ω 2 i 0 i ω 0 0 0 0 i ω 0 0 0 0 0 0 i ω 2 0 0 0 ) {\displaystyle \rho _{1}\otimes \rho _{2}(1,1)=\rho _{1}(1)\otimes \rho _{2}(1)={\begin{pmatrix}0&0&0&-i&0&-i\omega \\0&0&0&0&-i\omega &0\\0&0&0&0&0&-i\omega ^{2}\\i&0&i\omega &0&0&0\\0&i\omega &0&0&0&0\\0&0&i\omega ^{2}&0&0&0\end{pmatrix}}} Remark. Note that the direct sum and the tensor products have different degrees and hence are different representations. Let ρ 1 : G → GL ( V 1 ) , ρ 2 : G → GL ( V 2 ) {\displaystyle \rho _{1}:G\to {\text{GL}}(V_{1}),\rho _{2}:G\to {\text{GL}}(V_{2})} be two linear representations of the same group. Let s {\displaystyle s} be an element of G . {\displaystyle G.} Then ρ ( s ) ∈ GL ( V 1 ⊗ V 2 ) {\displaystyle \rho (s)\in {\text{GL}}(V_{1}\otimes V_{2})} is defined by ρ ( s ) ( v 1 ⊗ v 2 ) = ρ 1 ( s ) v 1 ⊗ ρ 2 ( s ) v 2 , {\displaystyle \rho (s)(v_{1}\otimes v_{2})=\rho _{1}(s)v_{1}\otimes \rho _{2}(s)v_{2},} for v 1 ∈ V 1 , v 2 ∈ V 2 , {\displaystyle v_{1}\in V_{1},v_{2}\in V_{2},} and we write ρ ( s ) = ρ 1 ( s ) ⊗ ρ 2 ( s ) . {\displaystyle \rho (s)=\rho _{1}(s)\otimes \rho _{2}(s).} Then the map s ↦ ρ ( s ) {\displaystyle s\mapsto \rho (s)} defines a linear representation of G , {\displaystyle G,} which is also called tensor product of the given representations. These two cases have to be strictly distinguished. The first case is a representation of the group product into the tensor product of the corresponding representation spaces. The second case is a representation of the group G {\displaystyle G} into the tensor product of two representation spaces of this one group. But this last case can be viewed as a special case of the first one by focusing on the diagonal subgroup G × G . {\displaystyle G\times G.} This definition can be iterated a finite number of times. Let V {\displaystyle V} and W {\displaystyle W} be representations of the group G . {\displaystyle G.} Then Hom ( V , W ) {\displaystyle {\text{Hom}}(V,W)} is a representation by virtue of the following identity: Hom ( V , W ) = V ∗ ⊗ W {\displaystyle {\text{Hom}}(V,W)=V^{*}\otimes W} . Let B ∈ Hom ( V , W ) {\displaystyle B\in {\text{Hom}}(V,W)} and let ρ {\displaystyle \rho } be the representation on Hom ( V , W ) . 
{\displaystyle {\text{Hom}}(V,W).} Let ρ V {\displaystyle \rho _{V}} be the representation on V {\displaystyle V} and ρ W {\displaystyle \rho _{W}} the representation on W . {\displaystyle W.} Then the identity above leads to the following result: ρ ( s ) ( B ) v = ρ W ( s ) ∘ B ∘ ρ V ( s − 1 ) v {\displaystyle \rho (s)(B)v=\rho _{W}(s)\circ B\circ \rho _{V}(s^{-1})v} for all s ∈ G , v ∈ V . {\displaystyle s\in G,v\in V.} Theorem. The irreducible representations of G 1 × G 2 {\displaystyle G_{1}\times G_{2}} up to isomorphism are exactly the representations ρ 1 ⊗ ρ 2 {\displaystyle \rho _{1}\otimes \rho _{2}} in which ρ 1 {\displaystyle \rho _{1}} and ρ 2 {\displaystyle \rho _{2}} are irreducible representations of G 1 {\displaystyle G_{1}} and G 2 , {\displaystyle G_{2},} respectively. ==== Symmetric and alternating square ==== Let ρ : G → V ⊗ V {\displaystyle \rho :G\to V\otimes V} be a linear representation of G . {\displaystyle G.} Let ( e k ) {\displaystyle (e_{k})} be a basis of V . {\displaystyle V.} Define ϑ : V ⊗ V → V ⊗ V {\displaystyle \vartheta :V\otimes V\to V\otimes V} by extending ϑ ( e k ⊗ e j ) = e j ⊗ e k {\displaystyle \vartheta (e_{k}\otimes e_{j})=e_{j}\otimes e_{k}} linearly. It then holds that ϑ 2 = 1 {\displaystyle \vartheta ^{2}=1} and therefore V ⊗ V {\displaystyle V\otimes V} splits up into V ⊗ V = Sym 2 ( V ) ⊕ Alt 2 ( V ) , {\displaystyle V\otimes V={\text{Sym}}^{2}(V)\oplus {\text{Alt}}^{2}(V),} in which Sym 2 ( V ) = { z ∈ V ⊗ V : ϑ ( z ) = z } {\displaystyle {\text{Sym}}^{2}(V)=\{z\in V\otimes V:\vartheta (z)=z\}} Alt 2 ( V ) = ⋀ 2 V = { z ∈ V ⊗ V : ϑ ( z ) = − z } . {\displaystyle {\text{Alt}}^{2}(V)=\bigwedge ^{2}V=\{z\in V\otimes V:\vartheta (z)=-z\}.} These subspaces are G {\displaystyle G} –invariant and by this define subrepresentations which are called the symmetric square and the alternating square, respectively. These subrepresentations are also defined in V ⊗ m , {\displaystyle V^{\otimes m},} although in this case they are denoted wedge product ⋀ m V {\displaystyle \bigwedge ^{m}V} and symmetric product Sym m ( V ) . {\displaystyle {\text{Sym}}^{m}(V).} In case that m > 2 , {\displaystyle m>2,} the vector space V ⊗ m {\displaystyle V^{\otimes m}} is in general not equal to the direct sum of these two products. == Decompositions == In order to understand representations more easily, a decomposition of the representation space into the direct sum of simpler subrepresentations would be desirable. This can be achieved for finite groups as we will see in the following results. More detailed explanations and proofs may be found in [1] and [2]. Theorem. (Maschke) Let ρ : G → GL ( V ) {\displaystyle \rho :G\to {\text{GL}}(V)} be a linear representation where V {\displaystyle V} is a vector space over a field of characteristic zero. Let W {\displaystyle W} be a G {\displaystyle G} -invariant subspace of V . {\displaystyle V.} Then the complement W 0 {\displaystyle W^{0}} of W {\displaystyle W} exists in V {\displaystyle V} and is G {\displaystyle G} -invariant. A subrepresentation and its complement determine a representation uniquely. The following theorem will be presented in a more general way, as it provides a very beautiful result about representations of compact – and therefore also of finite – groups: Theorem. Every linear representation of a compact group over a field of characteristic zero is a direct sum of irreducible representations. 
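The standard proof of Maschke's theorem is an averaging argument: starting from an arbitrary projection onto the invariant subspace W, the average over the group is an equivariant projection onto W, and its kernel is the desired invariant complement. A minimal sketch of this argument (Python/NumPy; the use of the permutation representation of the symmetric group on three elements acting on C^3 is an illustrative choice, not taken from the article):

```python
import numpy as np
from itertools import permutations

# permutation representation on C^3: rho(sigma) e_i = e_{sigma(i)}
def perm_matrix(sigma):
    M = np.zeros((3, 3))
    for i, j in enumerate(sigma):
        M[j, i] = 1
    return M

group = [perm_matrix(p) for p in permutations(range(3))]

# W = span(1,1,1) is invariant; start from an arbitrary (non-equivariant) projection onto W
w = np.ones((3, 1))
p0 = w @ np.array([[1.0, 0.0, 0.0]])     # projects onto W along span(e_2, e_3)

# Maschke-style averaging produces an equivariant projection onto W
p = sum(g @ p0 @ np.linalg.inv(g) for g in group) / len(group)
assert np.allclose(p @ p, p) and np.allclose(p @ w, w)        # projection, identity on W
assert all(np.allclose(g @ p, p @ g) for g in group)          # commutes with the group action

# its kernel W0 = ker p (the image of q = 1 - p) is a G-invariant complement
q = np.eye(3) - p
assert all(np.allclose(q @ g @ q, g @ q) for g in group)      # g maps W0 into W0
print("invariant complement found by averaging")
```

Here the invariant complement of W = span(e_1+e_2+e_3) is the zero-sum plane, so C^3 splits into two invariant summands, as the theorem above guarantees for any representation of a finite (or compact) group in characteristic zero.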
Or in the language of K [ G ] {\displaystyle K[G]} -modules: If char ( K ) = 0 , {\displaystyle {\text{char}}(K)=0,} the group algebra K [ G ] {\displaystyle K[G]} is semisimple, i.e. it is the direct sum of simple algebras. Note that this decomposition is not unique. However, the number of how many times a subrepresentation isomorphic to a given irreducible representation is occurring in this decomposition is independent of the choice of decomposition. The canonical decomposition To achieve a unique decomposition, one has to combine all the irreducible subrepresentations that are isomorphic to each other. That means, the representation space is decomposed into a direct sum of its isotypes. This decomposition is uniquely determined. It is called the canonical decomposition. Let ( τ j ) j ∈ I {\displaystyle (\tau _{j})_{j\in I}} be the set of all irreducible representations of a group G {\displaystyle G} up to isomorphism. Let V {\displaystyle V} be a representation of G {\displaystyle G} and let { V ( τ j ) | j ∈ I } {\displaystyle \{V(\tau _{j})|j\in I\}} be the set of all isotypes of V . {\displaystyle V.} The projection p j : V → V ( τ j ) {\displaystyle p_{j}:V\to V(\tau _{j})} corresponding to the canonical decomposition is given by p j = n j g ∑ t ∈ G χ τ j ( t ) ¯ ρ ( t ) , {\displaystyle p_{j}={\frac {n_{j}}{g}}\sum _{t\in G}{\overline {\chi _{\tau _{j}}(t)}}\rho (t),} where n j = dim ⁡ ( τ j ) , {\displaystyle n_{j}=\dim(\tau _{j}),} g = ord ( G ) {\displaystyle g={\text{ord}}(G)} and χ τ j {\displaystyle \chi _{\tau _{j}}} is the character belonging to τ j . {\displaystyle \tau _{j}.} In the following, we show how to determine the isotype to the trivial representation: Definition (Projection formula). For every representation ( ρ , V ) {\displaystyle (\rho ,V)} of a group G {\displaystyle G} we define V G := { v ∈ V : ρ ( s ) v = v ∀ s ∈ G } . {\displaystyle V^{G}:=\{v\in V:\rho (s)v=v\,\,\,\,\forall \,s\in G\}.} In general, ρ ( s ) : V → V {\displaystyle \rho (s):V\to V} is not G {\displaystyle G} -linear. We define P := 1 | G | ∑ s ∈ G ρ ( s ) ∈ End ( V ) . {\displaystyle P:={\frac {1}{|G|}}\sum _{s\in G}\rho (s)\in {\text{End}}(V).} Then P {\displaystyle P} is a G {\displaystyle G} -linear map, because ∀ t ∈ G : ∑ s ∈ G ρ ( s ) = ∑ s ∈ G ρ ( t s t − 1 ) . {\displaystyle \forall t\in G:\qquad \sum _{s\in G}\rho (s)=\sum _{s\in G}\rho (tst^{-1}).} Proposition. The map P {\displaystyle P} is a projection from V {\displaystyle V} to V G . {\displaystyle V^{G}.} This proposition enables us to determine the isotype to the trivial subrepresentation of a given representation explicitly. How often the trivial representation occurs in V {\displaystyle V} is given by Tr ( P ) . {\displaystyle {\text{Tr}}(P).} This result is a consequence of the fact that the eigenvalues of a projection are only 0 {\displaystyle 0} or 1 {\displaystyle 1} and that the eigenspace corresponding to the eigenvalue 1 {\displaystyle 1} is the image of the projection. Since the trace of the projection is the sum of all eigenvalues, we obtain the following result dim ⁡ ( V ( 1 ) ) = dim ⁡ ( V G ) = T r ( P ) = 1 | G | ∑ s ∈ G χ V ( s ) , {\displaystyle \dim(V(1))=\dim(V^{G})=Tr(P)={\frac {1}{|G|}}\sum _{s\in G}\chi _{V}(s),} in which V ( 1 ) {\displaystyle V(1)} denotes the isotype of the trivial representation. Let V π {\displaystyle V_{\pi }} be a nontrivial irreducible representation of G . {\displaystyle G.} Then the isotype to the trivial representation of π {\displaystyle \pi } is the null space. 
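The projection formula can be tried out directly. For the permutation representation of the symmetric group on three elements acting on C^3, the fixed space V^G is the line spanned by e_1+e_2+e_3, so the trivial representation occurs exactly once. The sketch below (Python/NumPy, illustration only) forms P = (1/|G|)Σ_s ρ(s), checks that it is a projection with image in V^G, and compares Tr(P) with the character average (1/|G|)Σ_s χ_V(s).

```python
import numpy as np
from itertools import permutations

def perm_matrix(sigma):
    M = np.zeros((3, 3))
    for i, j in enumerate(sigma):
        M[j, i] = 1
    return M

group = [perm_matrix(p) for p in permutations(range(3))]

# P = (1/|G|) * sum_s rho(s) projects onto the fixed space V^G
P = sum(group) / len(group)
assert np.allclose(P @ P, P)                        # P is a projection
assert all(np.allclose(g @ P, P) for g in group)    # its image lies in V^G

# Tr(P) counts how often the trivial representation occurs
multiplicity = np.trace(P)
characters = [np.trace(g) for g in group]           # chi_V(s) = number of fixed points of s
assert np.isclose(multiplicity, sum(characters) / len(group))
print("the trivial representation occurs", round(multiplicity), "time(s) in C^3")
```

For a nontrivial irreducible representation, by contrast, the fixed space is zero, so the same average must be the zero map.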
That means the following equation holds P = 1 | G | ∑ s ∈ G π ( s ) = 0. {\displaystyle P={\frac {1}{|G|}}\sum _{s\in G}\pi (s)=0.} Let e 1 , . . . , e n {\displaystyle e_{1},...,e_{n}} be an orthonormal basis of V π . {\displaystyle V_{\pi }.} Then we have: ∑ s ∈ G Tr ( π ( s ) ) = ∑ s ∈ G ∑ j = 1 n ⟨ π ( s ) e j , e j ⟩ = ∑ j = 1 n ⟨ ∑ s ∈ G π ( s ) e j , e j ⟩ = 0. {\displaystyle \sum _{s\in G}{\text{Tr}}(\pi (s))=\sum _{s\in G}\sum _{j=1}^{n}\langle \pi (s)e_{j},e_{j}\rangle =\sum _{j=1}^{n}\left\langle \sum _{s\in G}\pi (s)e_{j},e_{j}\right\rangle =0.} Therefore, the following is valid for a nontrivial irreducible representation V {\displaystyle V} : ∑ s ∈ G χ V ( s ) = 0. {\displaystyle \sum _{s\in G}\chi _{V}(s)=0.} Example. Let G = Per ( 3 ) {\displaystyle G={\text{Per}}(3)} be the permutation groups in three elements. Let ρ : Per ( 3 ) → GL 5 ( C ) {\displaystyle \rho :{\text{Per}}(3)\to {\text{GL}}_{5}(\mathbb {C} )} be a linear representation of Per ( 3 ) {\displaystyle {\text{Per}}(3)} defined on the generating elements as follows: ρ ( 1 , 2 ) = ( − 1 2 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 1 ) , ρ ( 1 , 3 ) = ( 1 2 1 2 0 0 0 1 2 − 1 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 ) , ρ ( 2 , 3 ) = ( 0 − 2 0 0 0 − 1 2 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 1 0 ) . {\displaystyle \rho (1,2)={\begin{pmatrix}-1&2&0&0&0\\0&1&0&0&0\\0&0&0&1&0\\0&0&1&0&0\\0&0&0&0&1\end{pmatrix}},\quad \rho (1,3)={\begin{pmatrix}{\frac {1}{2}}&{\frac {1}{2}}&0&0&0\\{\frac {1}{2}}&-1&0&0&0\\0&0&0&0&1\\0&0&0&1&0\\0&0&1&0&0\end{pmatrix}},\quad \rho (2,3)={\begin{pmatrix}0&-2&0&0&0\\-{\frac {1}{2}}&0&0&0&0\\0&0&1&0&0\\0&0&0&0&1\\0&0&0&1&0\end{pmatrix}}.} This representation can be decomposed on first look into the left-regular representation of Per ( 3 ) , {\displaystyle {\text{Per}}(3),} which is denoted by π {\displaystyle \pi } in the following, and the representation η : Per ( 3 ) → GL 2 ( C ) {\displaystyle \eta :{\text{Per}}(3)\to {\text{GL}}_{2}(\mathbb {C} )} with η ( 1 , 2 ) = ( − 1 2 0 1 ) , η ( 1 , 3 ) = ( 1 2 1 2 1 2 − 1 ) , η ( 2 , 3 ) = ( 0 − 2 − 1 2 0 ) . {\displaystyle \eta (1,2)={\begin{pmatrix}-1&2\\0&1\end{pmatrix}},\quad \eta (1,3)={\begin{pmatrix}{\frac {1}{2}}&{\frac {1}{2}}\\{\frac {1}{2}}&-1\end{pmatrix}},\quad \eta (2,3)={\begin{pmatrix}0&-2\\-{\frac {1}{2}}&0\end{pmatrix}}.} With the help of the irreducibility criterion taken from the next chapter, we could realize that η {\displaystyle \eta } is irreducible but π {\displaystyle \pi } is not. This is because (in terms of the inner product from ”Inner product and characters” below) we have ( η | η ) = 1 , ( π | π ) = 2. {\displaystyle (\eta |\eta )=1,(\pi |\pi )=2.} The subspace C ( e 1 + e 2 + e 3 ) {\displaystyle \mathbb {C} (e_{1}+e_{2}+e_{3})} of C 3 {\displaystyle \mathbb {C} ^{3}} is invariant with respect to the left-regular representation. Restricted to this subspace we obtain the trivial representation. The orthogonal complement of C ( e 1 + e 2 + e 3 ) {\displaystyle \mathbb {C} (e_{1}+e_{2}+e_{3})} is C ( e 1 − e 2 ) ⊕ C ( e 1 + e 2 − 2 e 3 ) . {\displaystyle \mathbb {C} (e_{1}-e_{2})\oplus \mathbb {C} (e_{1}+e_{2}-2e_{3}).} Restricted to this subspace, which is also G {\displaystyle G} –invariant as we have seen above, we obtain the representation τ {\displaystyle \tau } given by τ ( 1 , 2 ) = ( − 1 0 0 1 ) , τ ( 1 , 3 ) = ( 1 2 3 2 1 2 − 1 2 ) , τ ( 2 , 3 ) = ( 1 2 − 3 2 − 1 2 − 1 2 ) . 
{\displaystyle \tau (1,2)={\begin{pmatrix}-1&0\\0&1\end{pmatrix}},\quad \tau (1,3)={\begin{pmatrix}{\frac {1}{2}}&{\frac {3}{2}}\\{\frac {1}{2}}&-{\frac {1}{2}}\end{pmatrix}},\quad \tau (2,3)={\begin{pmatrix}{\frac {1}{2}}&-{\frac {3}{2}}\\-{\frac {1}{2}}&-{\frac {1}{2}}\end{pmatrix}}.} Again, we can use the irreducibility criterion of the next chapter to prove that τ {\displaystyle \tau } is irreducible. Now, η {\displaystyle \eta } and τ {\displaystyle \tau } are isomorphic because η ( s ) = B ∘ τ ( s ) ∘ B − 1 {\displaystyle \eta (s)=B\circ \tau (s)\circ B^{-1}} for all s ∈ Per ( 3 ) , {\displaystyle s\in {\text{Per}}(3),} in which B : C 2 → C 2 {\displaystyle B:\mathbb {C} ^{2}\to \mathbb {C} ^{2}} is given by the matrix M B = ( 2 2 0 2 ) . {\displaystyle M_{B}={\begin{pmatrix}2&2\\0&2\end{pmatrix}}.} A decomposition of ( ρ , C 5 ) {\displaystyle (\rho ,\mathbb {C} ^{5})} in irreducible subrepresentations is: ρ = τ ⊕ η ⊕ 1 {\displaystyle \rho =\tau \oplus \eta \oplus 1} where 1 {\displaystyle 1} denotes the trivial representation and C 5 = C ( e 1 , e 2 ) ⊕ C ( e 3 − e 4 , e 3 + e 4 − 2 e 5 ) ⊕ C ( e 3 + e 4 + e 5 ) {\displaystyle \mathbb {C} ^{5}=\mathbb {C} (e_{1},e_{2})\oplus \mathbb {C} (e_{3}-e_{4},e_{3}+e_{4}-2e_{5})\oplus \mathbb {C} (e_{3}+e_{4}+e_{5})} is the corresponding decomposition of the representation space. We obtain the canonical decomposition by combining all the isomorphic irreducible subrepresentations: ρ 1 := η ⊕ τ {\displaystyle \rho _{1}:=\eta \oplus \tau } is the τ {\displaystyle \tau } -isotype of ρ {\displaystyle \rho } and consequently the canonical decomposition is given by ρ = ρ 1 ⊕ 1 , C 5 = C ( e 1 , e 2 , e 3 − e 4 , e 3 + e 4 − 2 e 5 ) ⊕ C ( e 3 + e 4 + e 5 ) . {\displaystyle \rho =\rho _{1}\oplus 1,\qquad \mathbb {C} ^{5}=\mathbb {C} (e_{1},e_{2},e_{3}-e_{4},e_{3}+e_{4}-2e_{5})\oplus \mathbb {C} (e_{3}+e_{4}+e_{5}).} The theorems above are in general not valid for infinite groups. This will be demonstrated by the following example: let G = { A ∈ GL 2 ( C ) | A is an upper triangular matrix } . {\displaystyle G=\{A\in {\text{GL}}_{2}(\mathbb {C} )|\,A\,\,{\text{ is an upper triangular matrix}}\}.} Together with the matrix multiplication G {\displaystyle G} is an infinite group. G {\displaystyle G} acts on C 2 {\displaystyle \mathbb {C} ^{2}} by matrix-vector multiplication. We consider the representation ρ ( A ) = A {\displaystyle \rho (A)=A} for all A ∈ G . {\displaystyle A\in G.} The subspace C e 1 {\displaystyle \mathbb {C} e_{1}} is a G {\displaystyle G} -invariant subspace. However, there exists no G {\displaystyle G} -invariant complement to this subspace. The assumption that such a complement exists would entail that every matrix is diagonalizable over C . {\displaystyle \mathbb {C} .} This is known to be wrong and thus yields a contradiction. The moral of the story is that if we consider infinite groups, it is possible that a representation - even one that is not irreducible - can not be decomposed into a direct sum of irreducible subrepresentations. == Character theory == === Definitions === The character of a representation ρ : G → GL ( V ) {\displaystyle \rho :G\to {\text{GL}}(V)} is defined as the map χ ρ : G → C , χ ρ ( s ) := Tr ( ρ ( s ) ) , {\displaystyle \chi _{\rho }:G\to \mathbb {C} ,\chi _{\rho }(s):={\text{Tr}}(\rho (s)),} in which Tr ( ρ ( s ) ) {\displaystyle {\text{Tr}}(\rho (s))} denotes the trace of the linear map ρ ( s ) . 
{\displaystyle \rho (s).} Even though the character is a map between two groups, it is not in general a group homomorphism, as the following example shows. Let ρ : Z / 2 Z × Z / 2 Z → GL 2 ( C ) {\displaystyle \rho :\mathbb {Z} /2\mathbb {Z} \times \mathbb {Z} /2\mathbb {Z} \to {\text{GL}}_{2}(\mathbb {C} )} be the representation defined by: ρ ( 0 , 0 ) = ( 1 0 0 1 ) , ρ ( 1 , 0 ) = ( − 1 0 0 − 1 ) , ρ ( 0 , 1 ) = ( 0 1 1 0 ) , ρ ( 1 , 1 ) = ( 0 − 1 − 1 0 ) . {\displaystyle \rho (0,0)={\begin{pmatrix}1&0\\0&1\end{pmatrix}},\quad \rho (1,0)={\begin{pmatrix}-1&0\\0&-1\end{pmatrix}},\quad \rho (0,1)={\begin{pmatrix}0&1\\1&0\end{pmatrix}},\quad \rho (1,1)={\begin{pmatrix}0&-1\\-1&0\end{pmatrix}}.} The character χ ρ {\displaystyle \chi _{\rho }} is given by χ ρ ( 0 , 0 ) = 2 , χ ρ ( 1 , 0 ) = − 2 , χ ρ ( 0 , 1 ) = χ ρ ( 1 , 1 ) = 0. {\displaystyle \chi _{\rho }(0,0)=2,\quad \chi _{\rho }(1,0)=-2,\quad \chi _{\rho }(0,1)=\chi _{\rho }(1,1)=0.} Characters of permutation representations are particularly easy to compute. If V is the G-representation corresponding to the left action of G {\displaystyle G} on a finite set X {\displaystyle X} , then χ V ( s ) = | { x ∈ X | s ⋅ x = x } | . {\displaystyle \chi _{V}(s)=|\{x\in X|s\cdot x=x\}|.} For example, the character of the regular representation R {\displaystyle R} is given by χ R ( s ) = { 0 s ≠ e | G | s = e , {\displaystyle \chi _{R}(s)={\begin{cases}0&s\neq e\\|G|&s=e\end{cases}},} where e {\displaystyle e} denotes the neutral element of G . {\displaystyle G.} === Properties === A crucial property of characters is the formula χ ( t s t − 1 ) = χ ( s ) , ∀ s , t ∈ G . {\displaystyle \chi (tst^{-1})=\chi (s),\,\,\forall \,s,t\in G.} This formula follows from the fact that the trace of a product AB of two square matrices is the same as the trace of BA. Functions G → C {\displaystyle G\to \mathbb {C} } satisfying such a formula are called class functions. Put differently, class functions and in particular characters are constant on each conjugacy class C s = { t s t − 1 | t ∈ G } . {\displaystyle C_{s}=\{tst^{-1}|t\in G\}.} It also follows from elementary properties of the trace that χ ( s ) {\displaystyle \chi (s)} is the sum of the eigenvalues of ρ ( s ) {\displaystyle \rho (s)} with multiplicity. If the degree of the representation is n, then the sum is n long. If s has order m, these eigenvalues are all m-th roots of unity. This fact can be used to show that χ ( s − 1 ) = χ ( s ) ¯ , ∀ s ∈ G {\displaystyle \chi (s^{-1})={\overline {\chi (s)}},\,\,\,\forall \,s\in G} and it also implies | χ ( s ) | ⩽ n . {\displaystyle |\chi (s)|\leqslant n.} Since the trace of the identity matrix is the number of rows, χ ( e ) = n , {\displaystyle \chi (e)=n,} where e {\displaystyle e} is the neutral element of G {\displaystyle G} and n is the dimension of the representation. In general, { s ∈ G | χ ( s ) = n } {\displaystyle \{s\in G|\chi (s)=n\}} is a normal subgroup in G . {\displaystyle G.} The following table shows how the characters χ 1 , χ 2 {\displaystyle \chi _{1},\chi _{2}} of two given representations ρ 1 : G → GL ( V 1 ) , ρ 2 : G → GL ( V 2 ) {\displaystyle \rho _{1}:G\to {\text{GL}}(V_{1}),\rho _{2}:G\to {\text{GL}}(V_{2})} give rise to characters of related representations. By construction, there is a direct sum decomposition of V ⊗ V = S y m 2 ( V ) ⊕ ⋀ 2 V {\displaystyle V\otimes V=Sym^{2}(V)\oplus \bigwedge ^{2}V} . 
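The character formulas for these constructions are the standard ones: the direct sum has character χ_1(s)+χ_2(s), the tensor product χ_1(s)·χ_2(s), the dual representation s ↦ χ(s^{-1}), and the symmetric and alternating squares ½(χ(s)²+χ(s²)) and ½(χ(s)²−χ(s²)). The sketch below (Python/NumPy, illustration only; the two-dimensional representation of the symmetric group on three elements used here is an assumption of the sketch, not taken from the article) verifies the tensor, symmetric and alternating square formulas.

```python
import numpy as np

# a two-dimensional irreducible representation of the symmetric group on 3 elements
co, si = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[co, -si], [si, co]])
m = np.array([[1.0, 0.0], [0.0, -1.0]])
group = [np.linalg.matrix_power(r, k) @ (m if j else np.eye(2)) for k in range(3) for j in range(2)]

d = 2
# swap operator theta(v x w) = w x v on V x V, and the projectors onto Sym^2 and Alt^2
theta = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        theta[j * d + i, i * d + j] = 1
P_sym, P_alt = (np.eye(d * d) + theta) / 2, (np.eye(d * d) - theta) / 2

for g in group:
    chi, chi_sq = np.trace(g), np.trace(g @ g)          # chi(s) and chi(s^2)
    gg = np.kron(g, g)                                   # the action of s on V x V
    assert np.isclose(np.trace(gg), chi ** 2)            # character of V x V
    assert np.isclose(np.trace(P_sym @ gg), (chi ** 2 + chi_sq) / 2)   # symmetric square
    assert np.isclose(np.trace(P_alt @ gg), (chi ** 2 - chi_sq) / 2)   # alternating square
print("character formulas for the tensor, symmetric and alternating squares verified")
```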
On characters, this corresponds to the fact that the sum of the last two expressions in the table is χ ( s ) 2 {\displaystyle \chi (s)^{2}} , the character of V ⊗ V {\displaystyle V\otimes V} . === Inner product and characters === In order to show some particularly interesting results about characters, it is rewarding to consider a more general type of functions on groups: Definition (Class functions). A function φ : G → C {\displaystyle \varphi :G\to \mathbb {C} } is called a class function if it is constant on conjugacy classes of G {\displaystyle G} , i.e. ∀ s , t ∈ G : φ ( s t s − 1 ) = φ ( t ) . {\displaystyle \forall s,t\in G:\quad \varphi \left(sts^{-1}\right)=\varphi (t).} Note that every character is a class function, as the trace of a matrix is preserved under conjugation. The set of all class functions is a C {\displaystyle \mathbb {C} } –algebra and is denoted by C class ( G ) {\displaystyle \mathbb {C} _{\text{class}}(G)} . Its dimension is equal to the number of conjugacy classes of G . {\displaystyle G.} Proofs of the following results of this chapter may be found in [1], [2] and [3]. An inner product can be defined on the set of all class functions on a finite group: ( f | h ) G = 1 | G | ∑ t ∈ G f ( t ) h ( t ) ¯ {\displaystyle (f|h)_{G}={\frac {1}{|G|}}\sum _{t\in G}f(t){\overline {h(t)}}} Orthonormal property. If χ 1 , … , χ k {\displaystyle \chi _{1},\ldots ,\chi _{k}} are the distinct irreducible characters of G {\displaystyle G} , they form an orthonormal basis for the vector space of all class functions with respect to the inner product defined above, i.e. ( χ i | χ j ) = { 1 if i = j 0 otherwise . {\displaystyle (\chi _{i}|\chi _{j})={\begin{cases}1{\text{ if }}i=j\\0{\text{ otherwise }}\end{cases}}.} Every class function f {\displaystyle f} may be expressed as a unique linear combination of the irreducible characters χ 1 , … , χ k {\displaystyle \chi _{1},\ldots ,\chi _{k}} . One might verify that the irreducible characters generate C class ( G ) {\displaystyle \mathbb {C} _{\text{class}}(G)} by showing that there exists no nonzero class function which is orthogonal to all the irreducible characters. For ρ {\displaystyle \rho } a representation and f {\displaystyle f} a class function, denote ρ f = ∑ g f ( g ) ρ ( g ) . {\displaystyle \rho _{f}=\sum _{g}f(g)\rho (g).} Then for ρ {\displaystyle \rho } irreducible, we have ρ f = | G | n ⟨ f , χ V ∗ ⟩ ∈ E n d ( V ) {\displaystyle \rho _{f}={\frac {|G|}{n}}\langle f,\chi _{V}^{*}\rangle \in End(V)} from Schur's lemma. Suppose f {\displaystyle f} is a class function which is orthogonal to all the characters. Then by the above we have ρ f = 0 {\displaystyle \rho _{f}=0} whenever ρ {\displaystyle \rho } is irreducible. But then it follows that ρ f = 0 {\displaystyle \rho _{f}=0} for all ρ {\displaystyle \rho } , by decomposability. Take ρ {\displaystyle \rho } to be the regular representation. Applying ρ f {\displaystyle \rho _{f}} to some particular basis element g {\displaystyle g} , we get f ( g ) = 0 {\displaystyle f(g)=0} . Since this is true for all g {\displaystyle g} , we have f = 0. {\displaystyle f=0.} It follows from the orthonormal property that the number of non-isomorphic irreducible representations of a group G {\displaystyle G} is equal to the number of conjugacy classes of G . 
{\displaystyle G.} Furthermore, a class function on G {\displaystyle G} is a character of G {\displaystyle G} if and only if it can be written as a linear combination of the distinct irreducible characters χ j {\displaystyle \chi _{j}} with non-negative integer coefficients: if φ {\displaystyle \varphi } is a class function on G {\displaystyle G} such that φ = c 1 χ 1 + ⋯ + c k χ k {\displaystyle \varphi =c_{1}\chi _{1}+\cdots +c_{k}\chi _{k}} where c j {\displaystyle c_{j}} non-negative integers, then φ {\displaystyle \varphi } is the character of the direct sum c 1 τ 1 ⊕ ⋯ ⊕ c k τ k {\displaystyle c_{1}\tau _{1}\oplus \cdots \oplus c_{k}\tau _{k}} of the representations τ j {\displaystyle \tau _{j}} corresponding to χ j . {\displaystyle \chi _{j}.} Conversely, it is always possible to write any character as a sum of irreducible characters. The inner product defined above can be extended on the set of all C {\displaystyle \mathbb {C} } -valued functions L 1 ( G ) {\displaystyle L^{1}(G)} on a finite group: ( f | h ) G = 1 | G | ∑ t ∈ G f ( t ) h ( t ) ¯ {\displaystyle (f|h)_{G}={\frac {1}{|G|}}\sum _{t\in G}f(t){\overline {h(t)}}} A symmetric bilinear form can also be defined on L 1 ( G ) : {\displaystyle L^{1}(G):} ⟨ f , h ⟩ G = 1 | G | ∑ t ∈ G f ( t ) h ( t − 1 ) {\displaystyle \langle f,h\rangle _{G}={\frac {1}{|G|}}\sum _{t\in G}f(t)h(t^{-1})} These two forms match on the set of characters. If there is no danger of confusion the index of both forms ( ⋅ | ⋅ ) G {\displaystyle (\cdot |\cdot )_{G}} and ⟨ ⋅ | ⋅ ⟩ G {\displaystyle \langle \cdot |\cdot \rangle _{G}} will be omitted. Let V 1 , V 2 {\displaystyle V_{1},V_{2}} be two C [ G ] {\displaystyle \mathbb {C} [G]} –modules. Note that C [ G ] {\displaystyle \mathbb {C} [G]} –modules are simply representations of G {\displaystyle G} . Since the orthonormal property yields the number of irreducible representations of G {\displaystyle G} is exactly the number of its conjugacy classes, then there are exactly as many simple C [ G ] {\displaystyle \mathbb {C} [G]} –modules (up to isomorphism) as there are conjugacy classes of G . {\displaystyle G.} We define ⟨ V 1 , V 2 ⟩ G := dim ⁡ ( Hom G ( V 1 , V 2 ) ) , {\displaystyle \langle V_{1},V_{2}\rangle _{G}:=\dim({\text{Hom}}^{G}(V_{1},V_{2})),} in which Hom G ( V 1 , V 2 ) {\displaystyle {\text{Hom}}^{G}(V_{1},V_{2})} is the vector space of all G {\displaystyle G} –linear maps. This form is bilinear with respect to the direct sum. In the following, these bilinear forms will allow us to obtain some important results with respect to the decomposition and irreducibility of representations. For instance, let χ 1 {\displaystyle \chi _{1}} and χ 2 {\displaystyle \chi _{2}} be the characters of V 1 {\displaystyle V_{1}} and V 2 , {\displaystyle V_{2},} respectively. Then ⟨ χ 1 , χ 2 ⟩ G = ( χ 1 | χ 2 ) G = ⟨ V 1 , V 2 ⟩ G . {\displaystyle \langle \chi _{1},\chi _{2}\rangle _{G}=(\chi _{1}|\chi _{2})_{G}=\langle V_{1},V_{2}\rangle _{G}.} It is possible to derive the following theorem from the results above, along with Schur's lemma and the complete reducibility of representations. Theorem. Let V {\displaystyle V} be a linear representation of G {\displaystyle G} with character ξ . {\displaystyle \xi .} Let V = W 1 ⊕ ⋯ ⊕ W k , {\displaystyle V=W_{1}\oplus \cdots \oplus W_{k},} where W j {\displaystyle W_{j}} are irreducible. Let ( τ , W ) {\displaystyle (\tau ,W)} be an irreducible representation of G {\displaystyle G} with character χ . 
{\displaystyle \chi .} Then the number of subrepresentations W j {\displaystyle W_{j}} which are isomorphic to W {\displaystyle W} is independent of the given decomposition and is equal to the inner product ( ξ | χ ) , {\displaystyle (\xi |\chi ),} i.e. the τ {\displaystyle \tau } –isotype V ( τ ) {\displaystyle V(\tau )} of V {\displaystyle V} is independent of the choice of decomposition. We also get: ( ξ | χ ) = dim ⁡ ( V ( τ ) ) dim ⁡ ( τ ) = ⟨ V , W ⟩ {\displaystyle (\xi |\chi )={\frac {\dim(V(\tau ))}{\dim(\tau )}}=\langle V,W\rangle } and thus dim ⁡ ( V ( τ ) ) = dim ⁡ ( τ ) ( ξ | χ ) . {\displaystyle \dim(V(\tau ))=\dim(\tau )(\xi |\chi ).} Corollary. Two representations with the same character are isomorphic. This means that every representation is determined by its character. With this we obtain a very useful result to analyse representations: Irreducibility criterion. Let χ {\displaystyle \chi } be the character of the representation V , {\displaystyle V,} then we have ( χ | χ ) ∈ N 0 . {\displaystyle (\chi |\chi )\in \mathbb {N} _{0}.} The case ( χ | χ ) = 1 {\displaystyle (\chi |\chi )=1} holds if and only if V {\displaystyle V} is irreducible. Therefore, using the first theorem, the characters of irreducible representations of G {\displaystyle G} form an orthonormal set on C class ( G ) {\displaystyle \mathbb {C} _{\text{class}}(G)} with respect to this inner product. Corollary. Let V {\displaystyle V} be a vector space with dim ⁡ ( V ) = n . {\displaystyle \dim(V)=n.} A given irreducible representation V {\displaystyle V} of G {\displaystyle G} is contained n {\displaystyle n} –times in the regular representation. In other words, if R {\displaystyle R} denotes the regular representation of G {\displaystyle G} then we have: R ≅ ⊕ ( W j ) ⊕ dim ⁡ ( W j ) , {\displaystyle R\cong \oplus (W_{j})^{\oplus \dim(W_{j})},} in which { W j | j ∈ I } {\displaystyle \{W_{j}|j\in I\}} is the set of all irreducible representations of G {\displaystyle G} that are pairwise not isomorphic to each other. In terms of the group algebra, this means that C [ G ] ≅ ⊕ j End ( W j ) {\displaystyle \mathbb {C} [G]\cong \oplus _{j}{\text{End}}(W_{j})} as algebras. As a numerical result we get: | G | = χ R ( e ) = dim ⁡ ( R ) = ∑ j dim ⁡ ( ( W j ) ⊕ ( χ W j | χ R ) ) = ∑ j ( χ W j | χ R ) ⋅ dim ⁡ ( W j ) = ∑ j dim ⁡ ( W j ) 2 , {\displaystyle |G|=\chi _{R}(e)=\dim(R)=\sum _{j}\dim \left((W_{j})^{\oplus (\chi _{W_{j}}|\chi _{R})}\right)=\sum _{j}(\chi _{W_{j}}|\chi _{R})\cdot \dim(W_{j})=\sum _{j}\dim(W_{j})^{2},} in which R {\displaystyle R} is the regular representation and χ W j {\displaystyle \chi _{W_{j}}} and χ R {\displaystyle \chi _{R}} are corresponding characters to W j {\displaystyle W_{j}} and R , {\displaystyle R,} respectively. Recall that e {\displaystyle e} denotes the neutral element of the group. This formula is a "necessary and sufficient" condition for the problem of classifying the irreducible representations of a group up to isomorphism. It provides us with the means to check whether we found all the isomorphism classes of irreducible representations of a group. Similarly, by using the character of the regular representation evaluated at s ≠ e , {\displaystyle s\neq e,} we get the equation: 0 = χ R ( s ) = ∑ j dim ⁡ ( W j ) ⋅ χ W j ( s ) . 
{\displaystyle 0=\chi _{R}(s)=\sum _{j}\dim(W_{j})\cdot \chi _{W_{j}}(s).} Using the description of representations via the convolution algebra we achieve an equivalent formulation of these equations: The Fourier inversion formula: f ( s ) = 1 | G | ∑ ρ irr. rep. of G dim ⁡ ( V ρ ) ⋅ Tr ( ρ ( s − 1 ) ⋅ f ^ ( ρ ) ) . {\displaystyle f(s)={\frac {1}{|G|}}\sum _{\rho {\text{ irr. rep. of }}G}\dim(V_{\rho })\cdot {\text{Tr}}(\rho (s^{-1})\cdot {\hat {f}}(\rho )).} In addition, the Plancherel formula holds: ∑ s ∈ G f ( s − 1 ) h ( s ) = 1 | G | ∑ ρ irred. rep. of G dim ⁡ ( V ρ ) ⋅ Tr ( f ^ ( ρ ) h ^ ( ρ ) ) . {\displaystyle \sum _{s\in G}f(s^{-1})h(s)={\frac {1}{|G|}}\sum _{\rho \,\,{\text{ irred.}}{\text{ rep.}}{\text{ of }}G}\dim(V_{\rho })\cdot {\text{Tr}}({\hat {f}}(\rho ){\hat {h}}(\rho )).} In both formulas ( ρ , V ρ ) {\displaystyle (\rho ,V_{\rho })} is a linear representation of a group G , s ∈ G {\displaystyle G,s\in G} and f , h ∈ L 1 ( G ) . {\displaystyle f,h\in L^{1}(G).} The corollary above has an additional consequence: Lemma. Let G {\displaystyle G} be a group. Then the following is equivalent: G {\displaystyle G} is abelian. Every function on G {\displaystyle G} is a class function. All irreducible representations of G {\displaystyle G} have degree 1. {\displaystyle 1.} == The induced representation == As was shown in the section on properties of linear representations, we can - by restriction - obtain a representation of a subgroup starting from a representation of a group. Naturally we are interested in the reverse process: Is it possible to obtain the representation of a group starting from a representation of a subgroup? We will see that the induced representation defined below provides us with the necessary concept. Admittedly, this construction is not inverse but rather adjoint to the restriction. === Definitions === Let ρ : G → GL ( V ρ ) {\displaystyle \rho :G\to {\text{GL}}(V_{\rho })} be a linear representation of G . {\displaystyle G.} Let H {\displaystyle H} be a subgroup and ρ | H {\displaystyle \rho |_{H}} the restriction. Let W {\displaystyle W} be a subrepresentation of ρ H . {\displaystyle \rho _{H}.} We write θ : H → GL ( W ) {\displaystyle \theta :H\to {\text{GL}}(W)} to denote this representation. Let s ∈ G . {\displaystyle s\in G.} The vector space ρ ( s ) ( W ) {\displaystyle \rho (s)(W)} depends only on the left coset s H {\displaystyle sH} of s . {\displaystyle s.} Let R {\displaystyle R} be a representative system of G / H , {\displaystyle G/H,} then ∑ r ∈ R ρ ( r ) ( W ) {\displaystyle \sum _{r\in R}\rho (r)(W)} is a subrepresentation of V ρ . {\displaystyle V_{\rho }.} A representation ρ {\displaystyle \rho } of G {\displaystyle G} in V ρ {\displaystyle V_{\rho }} is called induced by the representation θ {\displaystyle \theta } of H {\displaystyle H} in W , {\displaystyle W,} if V ρ = ⨁ r ∈ R W r . {\displaystyle V_{\rho }=\bigoplus _{r\in R}W_{r}.} Here W r = ρ ( s ) ( W ) {\displaystyle W_{r}=\rho (s)(W)} for all s ∈ r H {\displaystyle s\in rH} and for all r ∈ R . {\displaystyle r\in R.} In other words: the representation ( ρ , V ρ ) {\displaystyle (\rho ,V_{\rho })} is induced by ( θ , W ) , {\displaystyle (\theta ,W),} if every v ∈ V ρ {\displaystyle v\in V_{\rho }} can be written uniquely as ∑ r ∈ R w r , {\displaystyle \sum _{r\in R}w_{r},} where w r ∈ W r {\displaystyle w_{r}\in W_{r}} for every r ∈ R . 
{\displaystyle r\in R.} We denote the representation ρ {\displaystyle \rho } of G {\displaystyle G} which is induced by the representation θ {\displaystyle \theta } of H {\displaystyle H} as ρ = Ind H G ( θ ) , {\displaystyle \rho ={\text{Ind}}_{H}^{G}(\theta ),} or in short ρ = Ind ( θ ) , {\displaystyle \rho ={\text{Ind}}(\theta ),} if there is no danger of confusion. The representation space itself is frequently used instead of the representation map, i.e. V = Ind H G ( W ) , {\displaystyle V={\text{Ind}}_{H}^{G}(W),} or V = Ind ( W ) , {\displaystyle V={\text{Ind}}(W),} if the representation V {\displaystyle V} is induced by W . {\displaystyle W.} ==== Alternative description of the induced representation ==== By using the group algebra we obtain an alternative description of the induced representation: Let G {\displaystyle G} be a group, V {\displaystyle V} a C [ G ] {\displaystyle \mathbb {C} [G]} –module and W {\displaystyle W} a C [ H ] {\displaystyle \mathbb {C} [H]} –submodule of V {\displaystyle V} corresponding to the subgroup H {\displaystyle H} of G . {\displaystyle G.} We say that V {\displaystyle V} is induced by W {\displaystyle W} if V = C [ G ] ⊗ C [ H ] W , {\displaystyle V=\mathbb {C} [G]\otimes _{\mathbb {C} [H]}W,} in which G {\displaystyle G} acts on the first factor: s ⋅ ( e t ⊗ w ) = e s t ⊗ w {\displaystyle s\cdot (e_{t}\otimes w)=e_{st}\otimes w} for all s , t ∈ G , w ∈ W . {\displaystyle s,t\in G,w\in W.} === Properties === The results introduced in this section will be presented without proof. These may be found in [1] and [2]. Uniqueness and existence of the induced representation. Let ( θ , W θ ) {\displaystyle (\theta ,W_{\theta })} be a linear representation of a subgroup H {\displaystyle H} of G . {\displaystyle G.} Then there exists a linear representation ( ρ , V ρ ) {\displaystyle (\rho ,V_{\rho })} of G , {\displaystyle G,} which is induced by ( θ , W θ ) . {\displaystyle (\theta ,W_{\theta }).} Note that this representation is unique up to isomorphism. Transitivity of induction. Let W {\displaystyle W} be a representation of H {\displaystyle H} and let H ≤ G ≤ K {\displaystyle H\leq G\leq K} be an ascending series of groups. Then we have Ind G K ( Ind H G ( W ) ) ≅ Ind H K ( W ) . {\displaystyle {\text{Ind}}_{G}^{K}({\text{Ind}}_{H}^{G}(W))\cong {\text{Ind}}_{H}^{K}(W).} Lemma. Let ( ρ , V ρ ) {\displaystyle (\rho ,V_{\rho })} be induced by ( θ , W θ ) {\displaystyle (\theta ,W_{\theta })} and let ρ ′ : G → GL ( V ′ ) {\displaystyle \rho ':G\to {\text{GL}}(V')} be a linear representation of G . {\displaystyle G.} Now let F : W θ → V ′ {\displaystyle F:W_{\theta }\to V'} be a linear map satisfying the property that F ∘ θ ( t ) = ρ ′ ( t ) ∘ F {\displaystyle F\circ \theta (t)=\rho '(t)\circ F} for all t ∈ G . {\displaystyle t\in G.} Then there exists a uniquely determined linear map F ′ : V ρ → V ′ , {\displaystyle F':V_{\rho }\to V',} which extends F {\displaystyle F} and for which F ′ ∘ ρ ( s ) = ρ ′ ( s ) ∘ F ′ {\displaystyle F'\circ \rho (s)=\rho '(s)\circ F'} is valid for all s ∈ G . {\displaystyle s\in G.} This means that if we interpret V ′ {\displaystyle V'} as a C [ G ] {\displaystyle \mathbb {C} [G]} –module, we have Hom H ( W θ , V ′ ) ≅ Hom G ( V ρ , V ′ ) , {\displaystyle {\text{Hom}}^{H}(W_{\theta },V')\cong {\text{Hom}}^{G}(V_{\rho },V'),} where Hom G ( V ρ , V ′ ) {\displaystyle {\text{Hom}}^{G}(V_{\rho },V')} is the vector space of all C [ G ] {\displaystyle \mathbb {C} [G]} –homomorphisms of V ρ {\displaystyle V_{\rho }} to V ′ . 
{\displaystyle V'.} The same is valid for Hom H ( W θ , V ′ ) . {\displaystyle {\text{Hom}}^{H}(W_{\theta },V').} Induction on class functions. In the same way as it was done with representations, we can - by induction - obtain a class function on the group from a class function on a subgroup. Let φ {\displaystyle \varphi } be a class function on H . {\displaystyle H.} We define a function φ ′ {\displaystyle \varphi '} on G {\displaystyle G} by φ ′ ( s ) = 1 | H | ∑ t ∈ G t − 1 s t ∈ H φ ( t − 1 s t ) . {\displaystyle \varphi '(s)={\frac {1}{|H|}}\sum _{t\in G \atop t^{-1}st\in H}^{}\varphi (t^{-1}st).} We say φ ′ {\displaystyle \varphi '} is induced by φ {\displaystyle \varphi } and write Ind H G ( φ ) = φ ′ {\displaystyle {\text{Ind}}_{H}^{G}(\varphi )=\varphi '} or Ind ( φ ) = φ ′ . {\displaystyle {\text{Ind}}(\varphi )=\varphi '.} Proposition. The function Ind ( φ ) {\displaystyle {\text{Ind}}(\varphi )} is a class function on G . {\displaystyle G.} If φ {\displaystyle \varphi } is the character of a representation W {\displaystyle W} of H , {\displaystyle H,} then Ind ( φ ) {\displaystyle {\text{Ind}}(\varphi )} is the character of the induced representation Ind ( W ) {\displaystyle {\text{Ind}}(W)} of G . {\displaystyle G.} Lemma. If ψ {\displaystyle \psi } is a class function on H {\displaystyle H} and φ {\displaystyle \varphi } is a class function on G , {\displaystyle G,} then we have: Ind ( ψ ⋅ Res φ ) = ( Ind ψ ) ⋅ φ . {\displaystyle {\text{Ind}}(\psi \cdot {\text{Res}}\varphi )=({\text{Ind}}\psi )\cdot \varphi .} Theorem. Let ( ρ , V ρ ) {\displaystyle (\rho ,V_{\rho })} be the representation of G {\displaystyle G} induced by the representation ( θ , W θ ) {\displaystyle (\theta ,W_{\theta })} of the subgroup H . {\displaystyle H.} Let χ ρ {\displaystyle \chi _{\rho }} and χ θ {\displaystyle \chi _{\theta }} be the corresponding characters. Let R {\displaystyle R} be a representative system of G / H . {\displaystyle G/H.} The induced character is given by ∀ t ∈ G : χ ρ ( t ) = ∑ r ∈ R , r − 1 t r ∈ H χ θ ( r − 1 t r ) = 1 | H | ∑ s ∈ G , s − 1 t s ∈ H χ θ ( s − 1 t s ) . {\displaystyle \forall t\in G:\qquad \chi _{\rho }(t)=\sum _{r\in R, \atop r^{-1}tr\in H}^{}\chi _{\theta }(r^{-1}tr)={\frac {1}{|H|}}\sum _{s\in G, \atop s^{-1}ts\in H}^{}\chi _{\theta }(s^{-1}ts).} === Frobenius reciprocity === As a preemptive summary, the lesson to take from Frobenius reciprocity is that the maps Res {\displaystyle {\text{Res}}} and Ind {\displaystyle {\text{Ind}}} are adjoint to each other. Let W {\displaystyle W} be an irreducible representation of H {\displaystyle H} and let V {\displaystyle V} be an irreducible representation of G , {\displaystyle G,} then the Frobenius reciprocity tells us that W {\displaystyle W} is contained in Res ( V ) {\displaystyle {\text{Res}}(V)} as often as Ind ( W ) {\displaystyle {\text{Ind}}(W)} is contained in V . {\displaystyle V.} Frobenius reciprocity. If ψ ∈ C class ( H ) {\displaystyle \psi \in \mathbb {C} _{\text{class}}(H)} and φ ∈ C class ( G ) {\displaystyle \varphi \in \mathbb {C} _{\text{class}}(G)} we have ⟨ ψ , Res ( φ ) ⟩ H = ⟨ Ind ( ψ ) , φ ⟩ G . {\displaystyle \langle \psi ,{\text{Res}}(\varphi )\rangle _{H}=\langle {\text{Ind}}(\psi ),\varphi \rangle _{G}.} This statement is also valid for the inner product. === Mackey's irreducibility criterion === George Mackey established a criterion to verify the irreducibility of induced representations. For this we will first need some definitions and some specifications with respect to the notation. 
Two representations V 1 {\displaystyle V_{1}} and V 2 {\displaystyle V_{2}} of a group G {\displaystyle G} are called disjoint, if they have no irreducible component in common, i.e. if ⟨ V 1 , V 2 ⟩ G = 0. {\displaystyle \langle V_{1},V_{2}\rangle _{G}=0.} Let G {\displaystyle G} be a group and let H {\displaystyle H} be a subgroup. We define H s = s H s − 1 ∩ H {\displaystyle H_{s}=sHs^{-1}\cap H} for s ∈ G . {\displaystyle s\in G.} Let ( ρ , W ) {\displaystyle (\rho ,W)} be a representation of the subgroup H . {\displaystyle H.} This defines by restriction a representation Res H s ( ρ ) {\displaystyle {\text{Res}}_{H_{s}}(\rho )} of H s . {\displaystyle H_{s}.} We write Res s ( ρ ) {\displaystyle {\text{Res}}_{s}(\rho )} for Res H s ( ρ ) . {\displaystyle {\text{Res}}_{H_{s}}(\rho ).} We also define another representation ρ s {\displaystyle \rho ^{s}} of H s {\displaystyle H_{s}} by ρ s ( t ) = ρ ( s − 1 t s ) . {\displaystyle \rho ^{s}(t)=\rho (s^{-1}ts).} These two representations are not to be confused. Mackey's irreducibility criterion. The induced representation V = Ind H G ( W ) {\displaystyle V={\text{Ind}}_{H}^{G}(W)} is irreducible if and only if the following conditions are satisfied: W {\displaystyle W} is irreducible For each s ∈ G ∖ H {\displaystyle s\in G\setminus H} the two representations ρ s {\displaystyle \rho ^{s}} and Res s ( ρ ) {\displaystyle {\text{Res}}_{s}(\rho )} of H s {\displaystyle H_{s}} are disjoint. For the case of H {\displaystyle H} normal, we have H s = H {\displaystyle H_{s}=H} and Res s ( ρ ) = ρ {\displaystyle {\text{Res}}_{s}(\rho )=\rho } . Thus we obtain the following: Corollary. Let H {\displaystyle H} be a normal subgroup of G . {\displaystyle G.} Then Ind H G ( ρ ) {\displaystyle {\text{Ind}}_{H}^{G}(\rho )} is irreducible if and only if ρ {\displaystyle \rho } is irreducible and not isomorphic to the conjugates ρ s {\displaystyle \rho ^{s}} for s ∉ H . {\displaystyle s\notin H.} === Applications to special groups === In this section we present some applications of the so far presented theory to normal subgroups and to a special group, the semidirect product of a subgroup with an abelian normal subgroup. Proposition. Let A {\displaystyle A} be a normal subgroup of the group G {\displaystyle G} and let ρ : G → GL ( V ) {\displaystyle \rho :G\to {\text{GL}}(V)} be an irreducible representation of G . {\displaystyle G.} Then one of the following statements has to be valid: either there exists a proper subgroup H {\displaystyle H} of G {\displaystyle G} containing A {\displaystyle A} , and an irreducible representation η {\displaystyle \eta } of H {\displaystyle H} which induces ρ {\displaystyle \rho } , or V {\displaystyle V} is an isotypic C A {\displaystyle \mathbb {C} A} -module. Proof. Consider V {\displaystyle V} as a C A {\displaystyle \mathbb {C} A} -module, and decompose it into isotypes as V = ⨁ j V j {\displaystyle V=\bigoplus _{j}{V_{j}}} . If this decomposition is trivial, we are in the second case. Otherwise, the larger G {\displaystyle G} -action permutes these isotypic modules; because V {\displaystyle V} is irreducible as a C G {\displaystyle \mathbb {C} G} -module, the permutation action is transitive (in fact primitive). Fix any j {\displaystyle j} ; the stabilizer in G {\displaystyle G} of V j {\displaystyle V_{j}} is elementarily seen to exhibit the claimed properties. 
◻ {\displaystyle \Box } Note that if A {\displaystyle A} is abelian, then the isotypic modules of A {\displaystyle A} are irreducible, of degree one, and all homotheties. We obtain also the following Corollary. Let A {\displaystyle A} be an abelian normal subgroup of G {\displaystyle G} and let τ {\displaystyle \tau } be any irreducible representation of G . {\displaystyle G.} We denote with ( G : A ) {\displaystyle (G:A)} the index of A {\displaystyle A} in G . {\displaystyle G.} Then deg ⁡ ( τ ) | ( G : A ) . {\displaystyle \deg(\tau )|(G:A).} [1] If A {\displaystyle A} is an abelian subgroup of G {\displaystyle G} (not necessarily normal), generally deg ⁡ ( τ ) | ( G : A ) {\displaystyle \deg(\tau )|(G:A)} is not satisfied, but nevertheless deg ⁡ ( τ ) ≤ ( G : A ) {\displaystyle \deg(\tau )\leq (G:A)} is still valid. ==== Classification of representations of a semidirect product ==== In the following, let G = A ⋊ H {\displaystyle G=A\rtimes H} be a semidirect product such that the normal semidirect factor, A {\displaystyle A} , is abelian. The irreducible representations of such a group G , {\displaystyle G,} can be classified by showing that all irreducible representations of G {\displaystyle G} can be constructed from certain subgroups of H {\displaystyle H} . This is the so-called method of “little groups” of Wigner and Mackey. Since A {\displaystyle A} is abelian, the irreducible characters of A {\displaystyle A} have degree one and form the group X = Hom ( A , C × ) . {\displaystyle \mathrm {X} ={\text{Hom}}(A,\mathbb {C} ^{\times }).} The group G {\displaystyle G} acts on X {\displaystyle \mathrm {X} } by ( s χ ) ( a ) = χ ( s − 1 a s ) {\displaystyle (s\chi )(a)=\chi (s^{-1}as)} for s ∈ G , χ ∈ X , a ∈ A . {\displaystyle s\in G,\chi \in \mathrm {X} ,a\in A.} Let ( χ j ) j ∈ X / H {\displaystyle (\chi _{j})_{j\in \mathrm {X} /H}} be a representative system of the orbit of H {\displaystyle H} in X . {\displaystyle \mathrm {X} .} For every j ∈ X / H {\displaystyle j\in \mathrm {X} /H} let H j = { t ∈ H : t χ j = χ j } . {\displaystyle H_{j}=\{t\in H:t\chi _{j}=\chi _{j}\}.} This is a subgroup of H . {\displaystyle H.} Let G j = A ⋅ H j {\displaystyle G_{j}=A\cdot H_{j}} be the corresponding subgroup of G . {\displaystyle G.} We now extend the function χ j {\displaystyle \chi _{j}} onto G j {\displaystyle G_{j}} by χ j ( a t ) = χ j ( a ) {\displaystyle \chi _{j}(at)=\chi _{j}(a)} for a ∈ A , t ∈ H j . {\displaystyle a\in A,t\in H_{j}.} Thus, χ j {\displaystyle \chi _{j}} is a class function on G j . {\displaystyle G_{j}.} Moreover, since t χ j = χ j {\displaystyle t\chi _{j}=\chi _{j}} for all t ∈ H j , {\displaystyle t\in H_{j},} it can be shown that χ j {\displaystyle \chi _{j}} is a group homomorphism from G j {\displaystyle G_{j}} to C × . {\displaystyle \mathbb {C} ^{\times }.} Therefore, we have a representation of G j {\displaystyle G_{j}} of degree one which is equal to its own character. Let now ρ {\displaystyle \rho } be an irreducible representation of H j . {\displaystyle H_{j}.} Then we obtain an irreducible representation ρ ~ {\displaystyle {\tilde {\rho }}} of G j , {\displaystyle G_{j},} by combining ρ {\displaystyle \rho } with the canonical projection G j → H j . {\displaystyle G_{j}\to H_{j}.} Finally, we construct the tensor product of χ j {\displaystyle \chi _{j}} and ρ ~ . {\displaystyle {\tilde {\rho }}.} Thus, we obtain an irreducible representation χ j ⊗ ρ ~ {\displaystyle \chi _{j}\otimes {\tilde {\rho }}} of G j . 
{\displaystyle G_{j}.} To finally obtain the classification of the irreducible representations of G {\displaystyle G} we use the representation θ j , ρ {\displaystyle \theta _{j,\rho }} of G , {\displaystyle G,} which is induced by the tensor product χ j ⊗ ρ ~ . {\displaystyle \chi _{j}\otimes {\tilde {\rho }}.} Thus, we achieve the following result: Proposition. θ j , ρ {\displaystyle \theta _{j,\rho }} is irreducible. If θ j , ρ {\displaystyle \theta _{j,\rho }} and θ j ′ , ρ ′ {\displaystyle \theta _{j',\rho '}} are isomorphic, then j = j ′ {\displaystyle j=j'} and additionally ρ {\displaystyle \rho } is isomorphic to ρ ′ . {\displaystyle \rho '.} Every irreducible representation of G {\displaystyle G} is isomorphic to one of the θ j , ρ . {\displaystyle \theta _{j,\rho }.} Amongst others, the criterion of Mackey and a conclusion based on the Frobenius reciprocity are needed for the proof of the proposition. Further details may be found in [1]. In other words, we classified all irreducible representations of G = A ⋊ H . {\displaystyle G=A\rtimes H.} == Representation ring == The representation ring of G {\displaystyle G} is defined as the abelian group R ( G ) = { ∑ j = 1 m a j τ j | τ 1 , … , τ m all irreducible representations of G up to isomorphism , a j ∈ Z } . {\displaystyle R(G)=\left\{\left.\sum _{j=1}^{m}a_{j}\tau _{j}\right|\tau _{1},\ldots ,\tau _{m}{\text{ all irreducible representations of }}G{\text{ up to isomorphism}},a_{j}\in \mathbb {Z} \right\}.} With the multiplication provided by the tensor product, R ( G ) {\displaystyle R(G)} becomes a ring. The elements of R ( G ) {\displaystyle R(G)} are called virtual representations. The character defines a ring homomorphism in the set of all class functions on G {\displaystyle G} with complex values { χ : R ( G ) → C class ( G ) ∑ a j τ j ↦ ∑ a j χ j {\displaystyle {\begin{cases}\chi :R(G)\to \mathbb {C} _{\text{class}}(G)\\\sum a_{j}\tau _{j}\mapsto \sum a_{j}\chi _{j}\end{cases}}} in which the χ j {\displaystyle \chi _{j}} are the irreducible characters corresponding to the τ j . {\displaystyle \tau _{j}.} Because a representation is determined by its character, χ {\displaystyle \chi } is injective. The images of χ {\displaystyle \chi } are called virtual characters. As the irreducible characters form an orthonormal basis of C class , χ {\displaystyle \mathbb {C} _{\text{class}},\chi } induces an isomorphism χ C : R ( G ) ⊗ C → C class ( G ) . {\displaystyle \chi _{\mathbb {C} }:R(G)\otimes \mathbb {C} \to \mathbb {C} _{\text{class}}(G).} This isomorphism is defined on a basis out of elementary tensors ( τ j ⊗ 1 ) j = 1 , … , m {\displaystyle (\tau _{j}\otimes 1)_{j=1,\ldots ,m}} by χ C ( τ j ⊗ 1 ) = χ j {\displaystyle \chi _{\mathbb {C} }(\tau _{j}\otimes 1)=\chi _{j}} respectively χ C ( τ j ⊗ z ) = z χ j , {\displaystyle \chi _{\mathbb {C} }(\tau _{j}\otimes z)=z\chi _{j},} and extended bilinearly. We write R + ( G ) {\displaystyle {\mathcal {R}}^{+}(G)} for the set of all characters of G {\displaystyle G} and R ( G ) {\displaystyle {\mathcal {R}}(G)} to denote the group generated by R + ( G ) , {\displaystyle {\mathcal {R}}^{+}(G),} i.e. the set of all differences of two characters. It then holds that R ( G ) = Z χ 1 ⊕ ⋯ ⊕ Z χ m {\displaystyle {\mathcal {R}}(G)=\mathbb {Z} \chi _{1}\oplus \cdots \oplus \mathbb {Z} \chi _{m}} and R ( G ) = Im ( χ ) = χ ( R ( G ) ) . 
{\displaystyle {\mathcal {R}}(G)={\text{Im}}(\chi )=\chi (R(G)).} Thus, we have R ( G ) ≅ R ( G ) {\displaystyle R(G)\cong {\mathcal {R}}(G)} and the virtual characters correspond to the virtual representations in an optimal manner. Since R ( G ) = Im ( χ ) {\displaystyle {\mathcal {R}}(G)={\text{Im}}(\chi )} holds, R ( G ) {\displaystyle {\mathcal {R}}(G)} is the set of all virtual characters. As the product of two characters provides another character, R ( G ) {\displaystyle {\mathcal {R}}(G)} is a subring of the ring C class ( G ) {\displaystyle \mathbb {C} _{\text{class}}(G)} of all class functions on G . {\displaystyle G.} Because the χ i {\displaystyle \chi _{i}} form a basis of C class ( G ) {\displaystyle \mathbb {C} _{\text{class}}(G)} we obtain, just as in the case of R ( G ) , {\displaystyle R(G),} an isomorphism C ⊗ R ( G ) ≅ C class ( G ) . {\displaystyle \mathbb {C} \otimes {\mathcal {R}}(G)\cong \mathbb {C} _{\text{class}}(G).} Let H {\displaystyle H} be a subgroup of G . {\displaystyle G.} The restriction thus defines a ring homomorphism R ( G ) → R ( H ) , ϕ ↦ ϕ | H , {\displaystyle {\mathcal {R}}(G)\to {\mathcal {R}}(H),\phi \mapsto \phi |_{H},} which will be denoted by Res H G {\displaystyle {\text{Res}}_{H}^{G}} or Res . {\displaystyle {\text{Res}}.} Likewise, the induction on class functions defines a homomorphism of abelian groups R ( H ) → R ( G ) , {\displaystyle {\mathcal {R}}(H)\to {\mathcal {R}}(G),} which will be written as Ind H G {\displaystyle {\text{Ind}}_{H}^{G}} or in short Ind . {\displaystyle {\text{Ind}}.} According to the Frobenius reciprocity, these two homomorphisms are adjoint with respect to the bilinear forms ⟨ ⋅ , ⋅ ⟩ H {\displaystyle \langle \cdot ,\cdot \rangle _{H}} and ⟨ ⋅ , ⋅ ⟩ G . {\displaystyle \langle \cdot ,\cdot \rangle _{G}.} Furthermore, the formula Ind ( φ ⋅ Res ( ψ ) ) = Ind ( φ ) ⋅ ψ {\displaystyle {\text{Ind}}(\varphi \cdot {\text{Res}}(\psi ))={\text{Ind}}(\varphi )\cdot \psi } shows that the image of Ind : R ( H ) → R ( G ) {\displaystyle {\text{Ind}}:{\mathcal {R}}(H)\to {\mathcal {R}}(G)} is an ideal of the ring R ( G ) . {\displaystyle {\mathcal {R}}(G).} By the restriction of representations, the map Res {\displaystyle {\text{Res}}} can be defined analogously for R ( G ) , {\displaystyle R(G),} and by the induction we obtain the map Ind {\displaystyle {\text{Ind}}} for R ( G ) . {\displaystyle R(G).} Due to the Frobenius reciprocity, we get the result that these maps are adjoint to each other and that the image Im ( Ind ) = Ind ( R ( H ) ) {\displaystyle {\text{Im}}({\text{Ind}})={\text{Ind}}(R(H))} is an ideal of the ring R ( G ) . {\displaystyle R(G).} If A {\displaystyle A} is a commutative ring, the homomorphisms Res {\displaystyle {\text{Res}}} and Ind {\displaystyle {\text{Ind}}} may be extended to A {\displaystyle A} –linear maps: { A ⊗ Res : A ⊗ R ( G ) → A ⊗ R ( H ) ( a ⊗ ∑ a j τ j ) ↦ ( a ⊗ ∑ a j Res ( τ j ) ) , { A ⊗ Ind : A ⊗ R ( H ) → A ⊗ R ( G ) ( a ⊗ ∑ a j η j ) ↦ ( a ⊗ ∑ a j Ind ( η j ) ) {\displaystyle {\begin{cases}A\otimes {\text{Res}}:A\otimes R(G)\to A\otimes R(H)\\\left(a\otimes \sum a_{j}\tau _{j}\right)\mapsto \left(a\otimes \sum a_{j}{\text{Res}}(\tau _{j})\right)\end{cases}},\qquad {\begin{cases}A\otimes {\text{Ind}}:A\otimes R(H)\to A\otimes R(G)\\\left(a\otimes \sum a_{j}\eta _{j}\right)\mapsto \left(a\otimes \sum a_{j}{\text{Ind}}(\eta _{j})\right)\end{cases}}} in which η j {\displaystyle \eta _{j}} are all the irreducible representations of H {\displaystyle H} up to isomorphism. 
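As a concrete illustration of the ring structure on R(G) described above, the multiplication (tensor product of representations, i.e. pointwise product of characters) can be computed directly from a character table. The minimal sketch below (plain Python; the character table of S3, with conjugacy classes of sizes 1, 3 and 2, is hard-coded as a standard example) decomposes the square of the two-dimensional character of S3 into irreducible characters.

```python
# Character table of S3. Columns are the conjugacy classes (identity, transpositions, 3-cycles)
# with class sizes 1, 3, 2; rows are the three irreducible characters.
class_sizes = [1, 3, 2]
group_order = sum(class_sizes)          # |S3| = 6

chars = {
    "trivial": [1,  1,  1],
    "sign":    [1, -1,  1],
    "std":     [2,  0, -1],
}

def product(a, b):
    # pointwise product of characters = character of the tensor product of the representations
    return [x * y for x, y in zip(a, b)]

def inner(a, b):
    # <a, b> = (1/|G|) * sum over classes of |class| * a * b  (all character values here are real)
    return sum(n * x * y for n, x, y in zip(class_sizes, a, b)) / group_order

# decompose std (x) std into irreducibles; the inner products are the multiplicities
square = product(chars["std"], chars["std"])            # [4, 0, 1]
multiplicities = {name: inner(square, chi) for name, chi in chars.items()}
print(multiplicities)    # {'trivial': 1.0, 'sign': 1.0, 'std': 1.0}: std*std = trivial + sign + std
```

The product of two characters again decomposes with non-negative integer coefficients, while general virtual characters are arbitrary integer combinations of the rows of the table, possibly with negative coefficients.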
With A = C {\displaystyle A=\mathbb {C} } we obtain in particular that Ind {\displaystyle {\text{Ind}}} and Res {\displaystyle {\text{Res}}} supply homomorphisms between C class ( G ) {\displaystyle \mathbb {C} _{\text{class}}(G)} and C class ( H ) . {\displaystyle \mathbb {C} _{\text{class}}(H).} Let G 1 {\displaystyle G_{1}} and G 2 {\displaystyle G_{2}} be two groups with respective representations ( ρ 1 , V 1 ) {\displaystyle (\rho _{1},V_{1})} and ( ρ 2 , V 2 ) . {\displaystyle (\rho _{2},V_{2}).} Then, ρ 1 ⊗ ρ 2 {\displaystyle \rho _{1}\otimes \rho _{2}} is the representation of the direct product G 1 × G 2 {\displaystyle G_{1}\times G_{2}} as was shown in a previous section. Another result of that section was that all irreducible representations of G 1 × G 2 {\displaystyle G_{1}\times G_{2}} are exactly the representations η 1 ⊗ η 2 , {\displaystyle \eta _{1}\otimes \eta _{2},} where η 1 {\displaystyle \eta _{1}} and η 2 {\displaystyle \eta _{2}} are irreducible representations of G 1 {\displaystyle G_{1}} and G 2 , {\displaystyle G_{2},} respectively. This passes over to the representation ring as the identity R ( G 1 × G 2 ) = R ( G 1 ) ⊗ Z R ( G 2 ) , {\displaystyle R(G_{1}\times G_{2})=R(G_{1})\otimes _{\mathbb {Z} }R(G_{2}),} in which R ( G 1 ) ⊗ Z R ( G 2 ) {\displaystyle R(G_{1})\otimes _{\mathbb {Z} }R(G_{2})} is the tensor product of the representation rings as Z {\displaystyle \mathbb {Z} } –modules. == Induction theorems == Induction theorems relate the representation ring of a given finite group G to representation rings of a family X consisting of some subsets H of G. More precisely, for such a collection of subgroups, the induction functor yields a map φ : Ind : ⨁ H ∈ X R ( H ) → R ( G ) {\displaystyle \varphi :{\text{Ind}}:\bigoplus _{H\in X}{\mathcal {R}}(H)\to {\mathcal {R}}(G)} ; induction theorems give criteria for the surjectivity of this map or closely related ones. Artin's induction theorem is the most elementary theorem in this group of results. It asserts that the following are equivalent: The cokernel of φ {\displaystyle \varphi } is finite. G {\displaystyle G} is the union of the conjugates of the subgroups belonging to X , {\displaystyle X,} i.e. G = ⋃ H ∈ X s ∈ G s H s − 1 . {\displaystyle G=\bigcup _{H\in X \atop s\in G}sHs^{-1}.} Since R ( G ) {\displaystyle {\mathcal {R}}(G)} is finitely generated as a group, the first point can be rephrased as follows: For each character χ {\displaystyle \chi } of G , {\displaystyle G,} there exist virtual characters χ H ∈ R ( H ) , H ∈ X {\displaystyle \chi _{H}\in {\mathcal {R}}(H),\,H\in X} and an integer d ≥ 1 , {\displaystyle d\geq 1,} such that d ⋅ χ = ∑ H ∈ X Ind H G ( χ H ) . {\displaystyle d\cdot \chi =\sum _{H\in X}{\text{Ind}}_{H}^{G}(\chi _{H}).} Serre (1977) gives two proofs of this theorem. For example, since G is the union of its cyclic subgroups, every character of G {\displaystyle G} is a linear combination with rational coefficients of characters induced by characters of cyclic subgroups of G . {\displaystyle G.} Since the representations of cyclic groups are well-understood, in particular the irreducible representations are one-dimensional, this gives a certain control over representations of G. Under the above circumstances, it is not in general true that φ {\displaystyle \varphi } is surjective. Brauer's induction theorem asserts that φ {\displaystyle \varphi } is surjective, provided that X is the family of all elementary subgroups. 
Here a group H is elementary if there is some prime p such that H is the direct product of a cyclic group of order prime to p {\displaystyle p} and a p {\displaystyle p} –group. In other words, every character of G {\displaystyle G} is a linear combination with integer coefficients of characters induced by characters of elementary subgroups. The elementary subgroups H arising in Brauer's theorem have a richer representation theory than cyclic groups; at the least, they have the property that any irreducible representation of such an H is induced by a one-dimensional representation of a (necessarily also elementary) subgroup K ⊂ H {\displaystyle K\subset H} . (This latter property can be shown to hold for any supersolvable group, which includes nilpotent groups and, in particular, elementary groups.) This ability to induce representations from degree 1 representations has some further consequences in the representation theory of finite groups. == Real representations == For proofs and more information about representations over general subfields of C {\displaystyle \mathbb {C} } please refer to [2]. If a group G {\displaystyle G} acts on a real vector space V 0 , {\displaystyle V_{0},} the corresponding representation on the complex vector space V = V 0 ⊗ R C {\displaystyle V=V_{0}\otimes _{\mathbb {R} }\mathbb {C} } is called real ( V {\displaystyle V} is called the complexification of V 0 {\displaystyle V_{0}} ). This representation is given by s ⋅ ( v 0 ⊗ z ) = ( s ⋅ v 0 ) ⊗ z {\displaystyle s\cdot (v_{0}\otimes z)=(s\cdot v_{0})\otimes z} for all s ∈ G , v 0 ∈ V 0 , z ∈ C . {\displaystyle s\in G,v_{0}\in V_{0},z\in \mathbb {C} .} Let ρ {\displaystyle \rho } be a real representation. In a basis of V {\displaystyle V} coming from V 0 , {\displaystyle V_{0},} the matrix of ρ ( s ) {\displaystyle \rho (s)} has real entries for all s ∈ G . {\displaystyle s\in G.} Thus, we can conclude that the character of a real representation is always real-valued. But not every representation with a real-valued character is real. To make this clear, let G {\displaystyle G} be a finite, non-abelian subgroup of the group SU ( 2 ) = { ( a b − b ¯ a ¯ ) : | a | 2 + | b | 2 = 1 } . {\displaystyle {\text{SU}}(2)=\left\{{\begin{pmatrix}a&b\\-{\overline {b}}&{\overline {a}}\end{pmatrix}}\ :\ |a|^{2}+|b|^{2}=1\right\}.} Then G ⊂ SU ( 2 ) {\displaystyle G\subset {\text{SU}}(2)} acts on V = C 2 . {\displaystyle V=\mathbb {C} ^{2}.} Since the trace of any matrix in SU ( 2 ) {\displaystyle {\text{SU}}(2)} is real, the character of the representation is real-valued. If ρ {\displaystyle \rho } were a real representation, then ρ ( G ) {\displaystyle \rho (G)} would consist only of real matrices. Thus, G ⊂ SU ( 2 ) ∩ GL 2 ( R ) = SO ( 2 ) = S 1 . {\displaystyle G\subset {\text{SU}}(2)\cap {\text{GL}}_{2}(\mathbb {R} )={\text{SO}}(2)=S^{1}.} However, the circle group is abelian, whereas G {\displaystyle G} was chosen to be non-abelian. Now we only need to exhibit a finite, non-abelian subgroup of SU ( 2 ) . {\displaystyle {\text{SU}}(2).} To find such a group, observe that SU ( 2 ) {\displaystyle {\text{SU}}(2)} can be identified with the group of unit quaternions (the quaternions of norm one). Now let G = { ± 1 , ± i , ± j , ± i j } .
{\displaystyle G=\{\pm 1,\pm i,\pm j,\pm ij\}.} The following two-dimensional representation of G {\displaystyle G} is not real-valued, but has a real-valued character: { ρ : G → GL 2 ( C ) ρ ( ± 1 ) = ( ± 1 0 0 ± 1 ) , ρ ( ± i ) = ( ± i 0 0 ∓ i ) , ρ ( ± j ) = ( 0 ± i ± i 0 ) {\displaystyle {\begin{cases}\rho :G\to {\text{GL}}_{2}(\mathbb {C} )\\[4pt]\rho (\pm 1)={\begin{pmatrix}\pm 1&0\\0&\pm 1\end{pmatrix}},\quad \rho (\pm i)={\begin{pmatrix}\pm i&0\\0&\mp i\end{pmatrix}},\quad \rho (\pm j)={\begin{pmatrix}0&\pm i\\\pm i&0\end{pmatrix}}\end{cases}}} Then the image of ρ {\displaystyle \rho } is not real-valued, but nevertheless it is a subset of SU ( 2 ) . {\displaystyle {\text{SU}}(2).} Thus, the character of the representation is real. Lemma. An irreducible representation V {\displaystyle V} of G {\displaystyle G} is real if and only if there exists a nondegenerate symmetric bilinear form B {\displaystyle B} on V {\displaystyle V} preserved by G . {\displaystyle G.} An irreducible representation of G {\displaystyle G} on a real vector space can become reducible when extending the field to C . {\displaystyle \mathbb {C} .} For example, the following real representation of the cyclic group is reducible when considered over C {\displaystyle \mathbb {C} } { ρ : Z / m Z → GL 2 ( R ) ρ ( k ) = ( cos ⁡ ( 2 π i k m ) sin ⁡ ( 2 π i k m ) − sin ⁡ ( 2 π i k m ) cos ⁡ ( 2 π i k m ) ) {\displaystyle {\begin{cases}\rho :\mathbb {Z} /m\mathbb {Z} \to {\text{GL}}_{2}(\mathbb {R} )\\[4pt]\rho (k)={\begin{pmatrix}\cos \left({\frac {2\pi ik}{m}}\right)&\sin \left({\frac {2\pi ik}{m}}\right)\\-\sin \left({\frac {2\pi ik}{m}}\right)&\cos \left({\frac {2\pi ik}{m}}\right)\end{pmatrix}}\end{cases}}} Therefore, by classifying all the irreducible representations that are real over C , {\displaystyle \mathbb {C} ,} we still haven't classified all the irreducible real representations. But we achieve the following: Let V 0 {\displaystyle V_{0}} be a real vector space. Let G {\displaystyle G} act irreducibly on V 0 {\displaystyle V_{0}} and let V = V 0 ⊗ C . {\displaystyle V=V_{0}\otimes \mathbb {C} .} If V {\displaystyle V} is not irreducible, there are exactly two irreducible factors which are complex conjugate representations of G . {\displaystyle G.} Definition. A quaternionic representation is a (complex) representation V , {\displaystyle V,} which possesses a G {\displaystyle G} –invariant anti-linear homomorphism J : V → V {\displaystyle J:V\to V} satisfying J 2 = − Id . {\displaystyle J^{2}=-{\text{Id}}.} Thus, a skew-symmetric, nondegenerate G {\displaystyle G} –invariant bilinear form defines a quaternionic structure on V . {\displaystyle V.} Theorem. An irreducible representation V {\displaystyle V} is one and only one of the following: (i) complex: χ V {\displaystyle \chi _{V}} is not real-valued and there exists no G {\displaystyle G} –invariant nondegenerate bilinear form on V . {\displaystyle V.} (ii) real: V = V 0 ⊗ C , {\displaystyle V=V_{0}\otimes \mathbb {C} ,} a real representation; V {\displaystyle V} has a G {\displaystyle G} –invariant nondegenerate symmetric bilinear form. (iii) quaternionic: χ V {\displaystyle \chi _{V}} is real, but V {\displaystyle V} is not real; V {\displaystyle V} has a G {\displaystyle G} –invariant skew-symmetric nondegenerate bilinear form. == Representations of particular groups == === Symmetric groups === Representation of the symmetric groups S n {\displaystyle S_{n}} have been intensely studied. 
Conjugacy classes in S n {\displaystyle S_{n}} (and therefore, by the above, irreducible representations) correspond to partitions of n. For example, S 3 {\displaystyle S_{3}} has three irreducible representations, corresponding to the partitions 3; 2+1; 1+1+1 of 3. For such a partition, a Young diagram is a graphical device depicting the partition. The irreducible representation corresponding to such a partition (or Young diagram) is called a Specht module. Representations of different symmetric groups are related: any representation of S n × S m {\displaystyle S_{n}\times S_{m}} yields a representation of S n + m {\displaystyle S_{n+m}} by induction, and vice versa by restriction. The direct sum of all these representation rings ⨁ n ≥ 0 R ( S n ) {\displaystyle \bigoplus _{n\geq 0}R(S_{n})} inherits from these constructions the structure of a Hopf algebra which, it turns out, is closely related to symmetric functions. === Finite groups of Lie type === To a certain extent, the representations of the G L n ( F q ) {\displaystyle GL_{n}(\mathbf {F} _{q})} , as n varies, have a flavor similar to that of the S n {\displaystyle S_{n}} ; the above-mentioned induction process is replaced by so-called parabolic induction. However, unlike for S n {\displaystyle S_{n}} , where all representations can be obtained by induction of trivial representations, this is not true for G L n ( F q ) {\displaystyle GL_{n}(\mathbf {F} _{q})} . Instead, new building blocks, known as cuspidal representations, are needed. Representations of G L n ( F q ) {\displaystyle GL_{n}(\mathbf {F} _{q})} and, more generally, representations of finite groups of Lie type have been thoroughly studied. Bonnafé (2010) describes the representations of S L 2 ( F q ) {\displaystyle SL_{2}(\mathbf {F} _{q})} . A geometric description of irreducible representations of such groups, including the above-mentioned cuspidal representations, is obtained by Deligne-Lusztig theory, which constructs such representations in the l-adic cohomology of Deligne-Lusztig varieties. The similarity of the representation theory of S n {\displaystyle S_{n}} and G L n ( F q ) {\displaystyle GL_{n}(\mathbf {F} _{q})} goes beyond finite groups. The philosophy of cusp forms highlights the kinship of representation theoretic aspects of these types of groups with general linear groups over local fields such as Qp and over the ring of adeles; see Bump (2004). == Outlook: Representations of compact groups == The theory of representations of compact groups may, to some degree, be extended to locally compact groups. In this context, representation theory is of great importance for harmonic analysis and the study of automorphic forms. For proofs, further information, and a more detailed treatment beyond the scope of this article, please consult [4] and [5]. === Definition and properties === A topological group is a group together with a topology with respect to which the group composition and the inversion are continuous. Such a group is called compact if every open cover of G {\displaystyle G} has a finite subcover. Closed subgroups of a compact group are compact again. Let G {\displaystyle G} be a compact group and let V {\displaystyle V} be a finite-dimensional C {\displaystyle \mathbb {C} } –vector space. A linear representation of G {\displaystyle G} on V {\displaystyle V} is a continuous group homomorphism ρ : G → GL ( V ) , {\displaystyle \rho :G\to {\text{GL}}(V),} i.e.
ρ ( s ) v {\displaystyle \rho (s)v} is a continuous function in the two variables s ∈ G {\displaystyle s\in G} and v ∈ V . {\displaystyle v\in V.} A linear representation of G {\displaystyle G} into a Banach space V {\displaystyle V} is defined to be a continuous group homomorphism of G {\displaystyle G} into the set of all bijective bounded linear operators on V {\displaystyle V} with a continuous inverse. Since π ( g ) − 1 = π ( g − 1 ) , {\displaystyle \pi (g)^{-1}=\pi (g^{-1}),} we can do without the last requirement. In the following, we will consider in particular representations of compact groups in Hilbert spaces. Just as with finite groups, we can define the group algebra and the convolution algebra. However, the group algebra provides no helpful information in the case of infinite groups, because the continuity condition gets lost during the construction. Instead the convolution algebra L 1 ( G ) {\displaystyle L^{1}(G)} takes its place. Most properties of representations of finite groups can be transferred with appropriate changes to compact groups. For this we need a counterpart to the summation over a finite group: === Existence and uniqueness of the Haar measure === On a compact group G {\displaystyle G} there exists exactly one measure d t , {\displaystyle dt,} such that: It is a left-translation-invariant measure ∀ s ∈ G : ∫ G f ( t ) d t = ∫ G f ( s t ) d t . {\displaystyle \forall s\in G:\quad \int _{G}f(t)dt=\int _{G}f(st)dt.} The whole group has unit measure: ∫ G d t = 1 , {\displaystyle \int _{G}dt=1,} Such a left-translation-invariant, normed measure is called Haar measure of the group G . {\displaystyle G.} Since G {\displaystyle G} is compact, it is possible to show that this measure is also right-translation-invariant, i.e. it also applies ∀ s ∈ G : ∫ G f ( t ) d t = ∫ G f ( t s ) d t . {\displaystyle \forall s\in G:\quad \int _{G}f(t)dt=\int _{G}f(ts)dt.} By the scaling above the Haar measure on a finite group is given by d t ( s ) = 1 | G | {\displaystyle dt(s)={\tfrac {1}{|G|}}} for all s ∈ G . {\displaystyle s\in G.} All the definitions to representations of finite groups that are mentioned in the section ”Properties”, also apply to representations of compact groups. But there are some modifications needed: To define a subrepresentation we now need a closed subspace. This was not necessary for finite-dimensional representation spaces, because in this case every subspace is already closed. Furthermore, two representations ρ , π {\displaystyle \rho ,\pi } of a compact group G {\displaystyle G} are called equivalent, if there exists a bijective, continuous, linear operator T {\displaystyle T} between the representation spaces whose inverse is also continuous and which satisfies T ∘ ρ ( s ) = π ( s ) ∘ T {\displaystyle T\circ \rho (s)=\pi (s)\circ T} for all s ∈ G . {\displaystyle s\in G.} If T {\displaystyle T} is unitary, the two representations are called unitary equivalent. To obtain a G {\displaystyle G} –invariant inner product from a not G {\displaystyle G} –invariant, we now have to use the integral over G {\displaystyle G} instead of the sum. 
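For a finite group, where the Haar integral is simply the average (1/|G|) over the group as noted above, this averaging can be carried out explicitly; the general compact-group formula follows in the next statement. The sketch below is a minimal illustration in plain Python (the particular non-orthogonal two-dimensional representation of the cyclic group of order three is chosen here only for demonstration): averaging the standard inner product over the group produces a bilinear form that is invariant under every group element.

```python
import math

# 2x2 real matrix helpers (matrices as tuples of rows)
def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)) for i in range(2))

def transpose(A):
    return tuple(tuple(A[j][i] for j in range(2)) for i in range(2))

def add_scaled(A, B, s):
    return tuple(tuple(A[i][j] + s * B[i][j] for j in range(2)) for i in range(2))

# rho(1) for the cyclic group Z/3: a rotation by 120 degrees conjugated by an invertible,
# non-orthogonal matrix S, so rho has order 3 but is not orthogonal (unitary) for the
# standard inner product.
c, s = math.cos(2 * math.pi / 3), math.sin(2 * math.pi / 3)
R = ((c, -s), (s, c))
S = ((1.0, 1.0), (0.0, 1.0))
S_inv = ((1.0, -1.0), (0.0, 1.0))
A = matmul(matmul(S, R), S_inv)

group = [((1.0, 0.0), (0.0, 1.0)), A, matmul(A, A)]     # rho(0), rho(1), rho(2)

# averaged Gram matrix  M = (1/|G|) * sum_t rho(t)^T rho(t)  (the finite-group "Haar integral")
M = ((0.0, 0.0), (0.0, 0.0))
for g in group:
    M = add_scaled(M, matmul(transpose(g), g), 1.0 / len(group))

# invariance check: rho(t)^T M rho(t) = M for every t, i.e. (v|u)_rho := v^T M u is G-invariant
for g in group:
    conjugated = matmul(matmul(transpose(g), M), g)
    assert all(abs(conjugated[i][j] - M[i][j]) < 1e-12 for i in range(2) for j in range(2))
print("averaged inner product is invariant under the whole group")
```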
If ( ⋅ | ⋅ ) {\displaystyle (\cdot |\cdot )} is an inner product on a Hilbert space V , {\displaystyle V,} which is not invariant with respect to the representation ρ {\displaystyle \rho } of G , {\displaystyle G,} then ( v | u ) ρ = ∫ G ( ρ ( t ) v | ρ ( t ) u ) d t {\displaystyle (v|u)_{\rho }=\int _{G}(\rho (t)v|\rho (t)u)dt} is a G {\displaystyle G} –invariant inner product on V {\displaystyle V} due to the properties of the Haar measure d t . {\displaystyle dt.} Thus, we can assume every representation on a Hilbert space to be unitary. Let G {\displaystyle G} be a compact group and let s ∈ G . {\displaystyle s\in G.} Let L 2 ( G ) {\displaystyle L^{2}(G)} be the Hilbert space of the square integrable functions on G . {\displaystyle G.} We define the operator L s {\displaystyle L_{s}} on this space by L s Φ ( t ) = Φ ( s − 1 t ) , {\displaystyle L_{s}\Phi (t)=\Phi (s^{-1}t),} where Φ ∈ L 2 ( G ) , t ∈ G . {\displaystyle \Phi \in L^{2}(G),t\in G.} The map s ↦ L s {\displaystyle s\mapsto L_{s}} is a unitary representation of G . {\displaystyle G.} It is called left-regular representation. The right-regular representation is defined similarly. As the Haar measure of G {\displaystyle G} is also right-translation-invariant, the operator R s {\displaystyle R_{s}} on L 2 ( G ) {\displaystyle L^{2}(G)} is given by R s Φ ( t ) = Φ ( t s ) . {\displaystyle R_{s}\Phi (t)=\Phi (ts).} The right-regular representation is then the unitary representation given by s ↦ R s . {\displaystyle s\mapsto R_{s}.} The two representations s ↦ L s {\displaystyle s\mapsto L_{s}} and s ↦ R s {\displaystyle s\mapsto R_{s}} are dual to each other. If G {\displaystyle G} is infinite, these representations have no finite degree. The left- and right-regular representation as defined at the beginning are isomorphic to the left- and right-regular representation as defined above, if the group G {\displaystyle G} is finite. This is due to the fact that in this case L 2 ( G ) ≅ L 1 ( G ) ≅ C [ G ] . {\displaystyle L^{2}(G)\cong L^{1}(G)\cong \mathbb {C} [G].} === Constructions and decompositions === The different ways of constructing new representations from given ones can be used for compact groups as well, except for the dual representation with which we will deal later. The direct sum and the tensor product with a finite number of summands/factors are defined in exactly the same way as for finite groups. This is also the case for the symmetric and alternating square. However, we need a Haar measure on the direct product of compact groups in order to extend the theorem saying that the irreducible representations of the product of two groups are (up to isomorphism) exactly the tensor product of the irreducible representations of the factor groups. First, we note that the direct product G 1 × G 2 {\displaystyle G_{1}\times G_{2}} of two compact groups is again a compact group when provided with the product topology. The Haar measure on the direct product is then given by the product of the Haar measures on the factor groups. For the dual representation on compact groups we require the topological dual V ′ {\displaystyle V'} of the vector space V . {\displaystyle V.} This is the vector space of all continuous linear functionals from the vector space V {\displaystyle V} into the base field. Let π {\displaystyle \pi } be a representation of a compact group G {\displaystyle G} in V . 
{\displaystyle V.} The dual representation π ′ : G → GL ( V ′ ) {\displaystyle \pi ':G\to {\text{GL}}(V')} is defined by the property ∀ v ∈ V , ∀ v ′ ∈ V ′ , ∀ s ∈ G : ⟨ π ′ ( s ) v ′ , π ( s ) v ⟩ = ⟨ v ′ , v ⟩ := v ′ ( v ) . {\displaystyle \forall v\in V,\forall v'\in V',\forall s\in G:\qquad \left\langle \pi '(s)v',\pi (s)v\right\rangle =\langle v',v\rangle :=v'(v).} Thus, we can conclude that the dual representation is given by π ′ ( s ) v ′ = v ′ ∘ π ( s − 1 ) {\displaystyle \pi '(s)v'=v'\circ \pi (s^{-1})} for all v ′ ∈ V ′ , s ∈ G . {\displaystyle v'\in V',s\in G.} The map π ′ {\displaystyle \pi '} is again a continuous group homomorphism and thus a representation. On Hilbert spaces: π {\displaystyle \pi } is irreducible if and only if π ′ {\displaystyle \pi '} is irreducible. By transferring the results of the section decompositions to compact groups, we obtain the following theorems: Theorem. Every irreducible representation ( τ , V τ ) {\displaystyle (\tau ,V_{\tau })} of a compact group into a Hilbert space is finite-dimensional and there exists an inner product on V τ {\displaystyle V_{\tau }} such that τ {\displaystyle \tau } is unitary. Since the Haar measure is normalized, this inner product is unique. Every representation of a compact group is isomorphic to a direct Hilbert sum of irreducible representations. Let ( ρ , V ρ ) {\displaystyle (\rho ,V_{\rho })} be a unitary representation of the compact group G . {\displaystyle G.} Just as for finite groups we define for an irreducible representation ( τ , V τ ) {\displaystyle (\tau ,V_{\tau })} the isotype or isotypic component in ρ {\displaystyle \rho } to be the subspace V ρ ( τ ) = ∑ V τ ≅ U ⊂ V ρ U . {\displaystyle V_{\rho }(\tau )=\sum _{V_{\tau }\cong U\subset V_{\rho }}U.} This is the sum of all invariant closed subspaces U , {\displaystyle U,} which are G {\displaystyle G} –isomorphic to V τ . {\displaystyle V_{\tau }.} Note that the isotypes of not equivalent irreducible representations are pairwise orthogonal. Theorem. (i) V ρ ( τ ) {\displaystyle V_{\rho }(\tau )} is a closed invariant subspace of V ρ . {\displaystyle V_{\rho }.} (ii) V ρ ( τ ) {\displaystyle V_{\rho }(\tau )} is G {\displaystyle G} –isomorphic to the direct sum of copies of V τ . {\displaystyle V_{\tau }.} (iii) Canonical decomposition: V ρ {\displaystyle V_{\rho }} is the direct Hilbert sum of the isotypes V ρ ( τ ) , {\displaystyle V_{\rho }(\tau ),} in which τ {\displaystyle \tau } passes through all the isomorphism classes of the irreducible representations. The corresponding projection to the canonical decomposition p τ : V → V ( τ ) , {\displaystyle p_{\tau }:V\to V(\tau ),} in which V ( τ ) {\displaystyle V(\tau )} is an isotype of V , {\displaystyle V,} is for compact groups given by p τ ( v ) = n τ ∫ G χ τ ( t ) ¯ ρ ( t ) ( v ) d t , {\displaystyle p_{\tau }(v)=n_{\tau }\int _{G}{\overline {\chi _{\tau }(t)}}\rho (t)(v)dt,} where n τ = dim ⁡ ( V ( τ ) ) {\displaystyle n_{\tau }=\dim(V(\tau ))} and χ τ {\displaystyle \chi _{\tau }} is the character corresponding to the irreducible representation τ . {\displaystyle \tau .} ==== Projection formula ==== For every representation ( ρ , V ) {\displaystyle (\rho ,V)} of a compact group G {\displaystyle G} we define V G = { v ∈ V : ρ ( s ) v = v ∀ s ∈ G } . {\displaystyle V^{G}=\{v\in V:\rho (s)v=v\,\,\,\forall s\in G\}.} In general ρ ( s ) : V → V {\displaystyle \rho (s):V\to V} is not G {\displaystyle G} –linear. Let P v := ∫ G ρ ( s ) v d s . 
{\displaystyle Pv:=\int _{G}\rho (s)vds.} The map P {\displaystyle P} is defined as endomorphism on V {\displaystyle V} by having the property ( ∫ G ρ ( s ) v d s | w ) = ∫ G ( ρ ( s ) v | w ) d s , {\displaystyle \left.\left(\int _{G}\rho (s)vds\right|w\right)=\int _{G}(\rho (s)v|w)ds,} which is valid for the inner product of the Hilbert space V . {\displaystyle V.} Then P {\displaystyle P} is G {\displaystyle G} –linear, because of ( ∫ G ρ ( s ) ( ρ ( t ) v ) d s | w ) = ∫ G ( ρ ( t s t − 1 ) ( ρ ( t ) v ) | w ) d s = ∫ G ( ρ ( t s ) v | w ) d s = ∫ ( ρ ( t ) ρ ( s ) v | w ) d s = ( ρ ( t ) ∫ G ρ ( s ) v d s | w ) , {\displaystyle {\begin{aligned}\left.\left(\int _{G}\rho (s)(\rho (t)v)ds\right|w\right)&=\int _{G}\left.\left(\rho \left(tst^{-1}\right)(\rho (t)v)\right|w\right)ds\\&=\int _{G}(\rho (ts)v|w)ds\\&=\int (\rho (t)\rho (s)v|w)ds\\&=\left.\left(\rho (t)\int _{G}\rho (s)vds\right|w\right),\end{aligned}}} where we used the invariance of the Haar measure. Proposition. The map P {\displaystyle P} is a projection from V {\displaystyle V} to V G . {\displaystyle V^{G}.} If the representation is finite-dimensional, it is possible to determine the direct sum of the trivial subrepresentation just as in the case of finite groups. === Characters, Schur's lemma and the inner product === Generally, representations of compact groups are investigated on Hilbert- and Banach spaces. In most cases they are not finite-dimensional. Therefore, it is not useful to refer to characters when speaking about representations of compact groups. Nevertheless, in most cases it is possible to restrict the study to the case of finite dimensions: Since irreducible representations of compact groups are finite-dimensional and unitary (see results from the first subsection), we can define irreducible characters in the same way as it was done for finite groups. As long as the constructed representations stay finite-dimensional, the characters of the newly constructed representations may be obtained in the same way as for finite groups. Schur's lemma is also valid for compact groups: Let ( π , V ) {\displaystyle (\pi ,V)} be an irreducible unitary representation of a compact group G . {\displaystyle G.} Then every bounded operator T : V → V {\displaystyle T:V\to V} satisfying the property T ∘ π ( s ) = π ( s ) ∘ T {\displaystyle T\circ \pi (s)=\pi (s)\circ T} for all s ∈ G , {\displaystyle s\in G,} is a scalar multiple of the identity, i.e. there exists λ ∈ C {\displaystyle \lambda \in \mathbb {C} } such that T = λ Id . {\displaystyle T=\lambda {\text{Id}}.} Definition. The formula ( Φ | Ψ ) = ∫ G Φ ( t ) Ψ ( t ) ¯ d t . {\displaystyle (\Phi |\Psi )=\int _{G}\Phi (t){\overline {\Psi (t)}}dt.} defines an inner product on the set of all square integrable functions L 2 ( G ) {\displaystyle L^{2}(G)} of a compact group G . {\displaystyle G.} Likewise ⟨ Φ , Ψ ⟩ = ∫ G Φ ( t ) Ψ ( t − 1 ) d t . {\displaystyle \langle \Phi ,\Psi \rangle =\int _{G}\Phi (t)\Psi (t^{-1})dt.} defines a bilinear form on L 2 ( G ) {\displaystyle L^{2}(G)} of a compact group G . {\displaystyle G.} The bilinear form on the representation spaces is defined exactly as it was for finite groups and analogous to finite groups the following results are therefore valid: Theorem. Let χ {\displaystyle \chi } and χ ′ {\displaystyle \chi '} be the characters of two non-isomorphic irreducible representations V {\displaystyle V} and V ′ , {\displaystyle V',} respectively. Then the following is valid ( χ | χ ′ ) = 0. 
{\displaystyle (\chi |\chi ')=0.} ( χ | χ ) = 1 , {\displaystyle (\chi |\chi )=1,} i.e. χ {\displaystyle \chi } has "norm" 1. {\displaystyle 1.} Theorem. Let V {\displaystyle V} be a representation of G {\displaystyle G} with character χ V . {\displaystyle \chi _{V}.} Suppose W {\displaystyle W} is an irreducible representation of G {\displaystyle G} with character χ W . {\displaystyle \chi _{W}.} The number of subrepresentations of V {\displaystyle V} equivalent to W {\displaystyle W} is independent of any given decomposition for V {\displaystyle V} and is equal to the inner product ( χ V | χ W ) . {\displaystyle (\chi _{V}|\chi _{W}).} Irreducibility Criterion. Let χ {\displaystyle \chi } be the character of the representation V , {\displaystyle V,} then ( χ | χ ) {\displaystyle (\chi |\chi )} is a positive integer. Moreover ( χ | χ ) = 1 {\displaystyle (\chi |\chi )=1} if and only if V {\displaystyle V} is irreducible. Therefore, using the first theorem, the characters of irreducible representations of G {\displaystyle G} form an orthonormal set on L 2 ( G ) {\displaystyle L^{2}(G)} with respect to this inner product. Corollary. Every irreducible representation V {\displaystyle V} of G {\displaystyle G} is contained dim ⁡ ( V ) {\displaystyle \dim(V)} –times in the left-regular representation. Lemma. Let G {\displaystyle G} be a compact group. Then the following statements are equivalent: G {\displaystyle G} is abelian. All the irreducible representations of G {\displaystyle G} have degree 1. {\displaystyle 1.} Orthonormal Property. Let G {\displaystyle G} be a group. The non-isomorphic irreducible representations of G {\displaystyle G} form an orthonormal basis in L 2 ( G ) {\displaystyle L^{2}(G)} with respect to this inner product. As we already know that the non-isomorphic irreducible representations are orthonormal, we only need to verify that they generate L 2 ( G ) . {\displaystyle L^{2}(G).} This may be done, by proving that there exists no non-zero square integrable function on G {\displaystyle G} orthogonal to all the irreducible characters. Just as in the case of finite groups, the number of the irreducible representations up to isomorphism of a group G {\displaystyle G} equals the number of conjugacy classes of G . {\displaystyle G.} However, because a compact group has in general infinitely many conjugacy classes, this does not provide any useful information. === The induced representation === If H {\displaystyle H} is a closed subgroup of finite index in a compact group G , {\displaystyle G,} the definition of the induced representation for finite groups may be adopted. However, the induced representation can be defined more generally, so that the definition is valid independent of the index of the subgroup H . {\displaystyle H.} For this purpose let ( η , V η ) {\displaystyle (\eta ,V_{\eta })} be a unitary representation of the closed subgroup H . {\displaystyle H.} The continuous induced representation Ind H G ( η ) = ( I , V I ) {\displaystyle {\text{Ind}}_{H}^{G}(\eta )=(I,V_{I})} is defined as follows: Let V I {\displaystyle V_{I}} denote the Hilbert space of all measurable, square integrable functions Φ : G → V η {\displaystyle \Phi :G\to V_{\eta }} with the property Φ ( l s ) = η ( l ) Φ ( s ) {\displaystyle \Phi (ls)=\eta (l)\Phi (s)} for all l ∈ H , s ∈ G . 
{\displaystyle l\in H,s\in G.} The norm is given by ‖ Φ ‖ G = sup s ∈ G ‖ Φ ( s ) ‖ {\displaystyle \|\Phi \|_{G}={\text{sup}}_{s\in G}\|\Phi (s)\|} and the representation I {\displaystyle I} is given as the right-translation: I ( s ) Φ ( k ) = Φ ( k s ) . {\displaystyle I(s)\Phi (k)=\Phi (ks).} The induced representation is then again a unitary representation. Since G {\displaystyle G} is compact, the induced representation can be decomposed into the direct sum of irreducible representations of G . {\displaystyle G.} Note that all irreducible representations belonging to the same isotype appear with a multiplicity equal to dim ⁡ ( Hom G ( V η , V I ) ) = ⟨ V η , V I ⟩ G . {\displaystyle \dim({\text{Hom}}_{G}(V_{\eta },V_{I}))=\langle V_{\eta },V_{I}\rangle _{G}.} Let ( ρ , V ρ ) {\displaystyle (\rho ,V_{\rho })} be a representation of G , {\displaystyle G,} then there exists a canonical isomorphism T : Hom G ( V ρ , I H G ( η ) ) → Hom H ( V ρ | H , V η ) . {\displaystyle T:{\text{Hom}}_{G}(V_{\rho },I_{H}^{G}(\eta ))\to {\text{Hom}}_{H}(V_{\rho }|_{H},V_{\eta }).} The Frobenius reciprocity transfers, together with the modified definitions of the inner product and of the bilinear form, to compact groups. The theorem now holds for square integrable functions on G {\displaystyle G} instead of class functions, but the subgroup H {\displaystyle H} must be closed. === The Peter-Weyl Theorem === Another important result in the representation theory of compact groups is the Peter-Weyl Theorem. It is usually presented and proven in harmonic analysis, as it represents one of its central and fundamental statements. The Peter-Weyl Theorem. Let G {\displaystyle G} be a compact group. For every irreducible representation ( τ , V τ ) {\displaystyle (\tau ,V_{\tau })} of G {\displaystyle G} let { e 1 , … , e dim ⁡ ( τ ) } {\displaystyle \{e_{1},\ldots ,e_{\dim(\tau )}\}} be an orthonormal basis of V τ . {\displaystyle V_{\tau }.} We define the matrix coefficients τ k , l ( s ) = ⟨ τ ( s ) e k , e l ⟩ {\displaystyle \tau _{k,l}(s)=\langle \tau (s)e_{k},e_{l}\rangle } for k , l ∈ { 1 , … , dim ⁡ ( τ ) } , s ∈ G . {\displaystyle k,l\in \{1,\ldots ,\dim(\tau )\},s\in G.} Then we have the following orthonormal basis of L 2 ( G ) {\displaystyle L^{2}(G)} : ( dim ⁡ ( τ ) τ k , l ) k , l {\displaystyle \left({\sqrt {\dim(\tau )}}\tau _{k,l}\right)_{k,l}} We can reformulate this theorem to obtain a generalization of the Fourier series for functions on compact groups: The Peter-Weyl Theorem (Second version). There exists a natural G × G {\displaystyle G\times G} –isomorphism L 2 ( G ) ≅ G × G ⨁ ^ τ ∈ G ^ End ( V τ ) ≅ G × G ⨁ ^ τ ∈ G ^ τ ⊗ τ ∗ {\displaystyle L^{2}(G)\cong _{G\times G}{\widehat {\bigoplus }}_{\tau \in {\widehat {G}}}{\text{End}}(V_{\tau })\cong _{G\times G}{\widehat {\bigoplus }}_{\tau \in {\widehat {G}}}\tau \otimes \tau ^{*}} in which G ^ {\displaystyle {\widehat {G}}} is the set of all irreducible representations of G {\displaystyle G} up to isomorphism and V τ {\displaystyle V_{\tau }} is the representation space corresponding to τ . {\displaystyle \tau .} More concretely: { Φ ↦ ∑ τ ∈ G ^ τ ( Φ ) τ ( Φ ) = ∫ G Φ ( t ) τ ( t ) d t ∈ End ( V τ ) {\displaystyle {\begin{cases}\Phi \mapsto \sum _{\tau \in {\widehat {G}}}\tau (\Phi )\\[5pt]\tau (\Phi )=\int _{G}\Phi (t)\tau (t)dt\in {\text{End}}(V_{\tau })\end{cases}}} == History == The general features of the representation theory of a finite group G, over the complex numbers, were discovered by Ferdinand Georg Frobenius in the years before 1900. 
Later the modular representation theory of Richard Brauer was developed. == See also == Character theory Real representation Schur orthogonality relations McKay conjecture Burnside ring == Literature == Bonnafé, Cedric (2010), Representations of SL2(Fq), Algebra and Applications, vol. 13, Springer, ISBN 9780857291578. Bump, Daniel (2004), Lie Groups, Graduate Texts in Mathematics, vol. 225, New York: Springer-Verlag, ISBN 0-387-21154-3. [1] Serre, Jean-Pierre (1977), Linear Representations of Finite Groups, New York: Springer-Verlag, ISBN 0-387-90190-6. [2] Fulton, William; Harris, Joe (1991), Representation Theory: A First Course, New York: Springer-Verlag, ISBN 0-387-97527-6. [3] Alperin, J. L.; Bell, Rowen B. (1995), Groups and Representations, New York: Springer-Verlag, ISBN 0-387-94525-3. [4] Deitmar, Anton (2010), Automorphe Formen, Springer-Verlag, ISBN 978-3-642-12389-4, pp. 89–93, 185–189. [5] Echterhoff, Siegfried; Deitmar, Anton (2009), Principles of Harmonic Analysis, Springer-Verlag, ISBN 978-0-387-85468-7, pp. 127–150. [6] Lang, Serge (2002), Algebra, New York: Springer-Verlag, ISBN 0-387-95385-X, pp. 663–729. [7] Sengupta, Ambar (2012), Representing Finite Groups: A Semisimple Introduction, New York: Springer, ISBN 9781461412311, OCLC 769756134. == References ==
Wikipedia/Representation_theory_of_finite_groups
In mathematics, the equivariant algebraic K-theory is an algebraic K-theory associated to the category Coh G ⁡ ( X ) {\displaystyle \operatorname {Coh} ^{G}(X)} of equivariant coherent sheaves on an algebraic scheme X with action of a linear algebraic group G, via Quillen's Q-construction; thus, by definition, K i G ( X ) = π i ( B + Coh G ⁡ ( X ) ) . {\displaystyle K_{i}^{G}(X)=\pi _{i}(B^{+}\operatorname {Coh} ^{G}(X)).} In particular, K 0 G ( C ) {\displaystyle K_{0}^{G}(C)} is the Grothendieck group of Coh G ⁡ ( X ) {\displaystyle \operatorname {Coh} ^{G}(X)} . The theory was developed by R. W. Thomason in 1980s. Specifically, he proved equivariant analogs of fundamental theorems such as the localization theorem. Equivalently, K i G ( X ) {\displaystyle K_{i}^{G}(X)} may be defined as the K i {\displaystyle K_{i}} of the category of coherent sheaves on the quotient stack [ X / G ] {\displaystyle [X/G]} . (Hence, the equivariant K-theory is a specific case of the K-theory of a stack.) A version of the Lefschetz fixed-point theorem holds in the setting of equivariant (algebraic) K-theory. == Fundamental theorems == Let X be an equivariant algebraic scheme. == Examples == One of the fundamental examples of equivariant K-theory groups are the equivariant K-groups of G {\displaystyle G} -equivariant coherent sheaves on a points, so K i G ( ∗ ) {\displaystyle K_{i}^{G}(*)} . Since Coh G ( ∗ ) {\displaystyle {\text{Coh}}^{G}(*)} is equivalent to the category Rep ( G ) {\displaystyle {\text{Rep}}(G)} of finite-dimensional representations of G {\displaystyle G} . Then, the Grothendieck group of Rep ( G ) {\displaystyle {\text{Rep}}(G)} , denoted R ( G ) {\displaystyle R(G)} is K 0 G ( ∗ ) {\displaystyle K_{0}^{G}(*)} . === Torus ring === Given an algebraic torus T ≅ G m k {\displaystyle \mathbb {T} \cong \mathbb {G} _{m}^{k}} a finite-dimensional representation V {\displaystyle V} is given by a direct sum of 1 {\displaystyle 1} -dimensional T {\displaystyle \mathbb {T} } -modules called the weights of V {\displaystyle V} . There is an explicit isomorphism between K T {\displaystyle K_{\mathbb {T} }} and Z [ t 1 , … , t k ] {\displaystyle \mathbb {Z} [t_{1},\ldots ,t_{k}]} given by sending [ V ] {\displaystyle [V]} to its associated character. == See also == Topological K-theory, the topological equivariant K-theory == References == N. Chris and V. Ginzburg, Representation Theory and Complex Geometry, Birkhäuser, 1997. Baum, Paul; Fulton, William; Quart, George (1979). "Lefschetz-riemann-roch for singular varieties". Acta Mathematica. 143: 193–211. doi:10.1007/BF02392092. Thomason, R.W.:Algebraic K-theory of group scheme actions. In: Browder, W. (ed.) Algebraic topology and algebraic K-theory. (Ann. Math. Stud., vol. 113, pp. 539 563) Princeton: Princeton University Press 1987 Thomason, R.W.: Lefschetz–Riemann–Roch theorem and coherent trace formula. Invent. Math. 85, 515–543 (1986) Thomason, R.W., Trobaugh, T.: Higher algebraic K-theory of schemes and of derived categories. In: Cartier, P., Illusie, L., Katz, N.M., Laumon, G., Manin, Y., Ribet, K.A. (eds.) The Grothendieck Festschrift, vol. III. (Prog. Math. vol. 88, pp. 247 435) Boston Basel Berlin: Birkhfiuser 1990 Thomason, R.W., Une formule de Lefschetz en K-théorie équivariante algébrique, Duke Math. J. 68 (1992), 447–462. == Further reading == Dan Edidin, Riemann–Roch for Deligne–Mumford stacks, 2012
Wikipedia/Equivariant_algebraic_K-theory
In mathematics, an algebraic extension is a field extension L/K such that every element of the larger field L is algebraic over the smaller field K; that is, every element of L is a root of a non-zero polynomial with coefficients in K. A field extension that is not algebraic, is said to be transcendental, and must contain transcendental elements, that is, elements that are not algebraic. The algebraic extensions of the field Q {\displaystyle \mathbb {Q} } of the rational numbers are called algebraic number fields and are the main objects of study of algebraic number theory. Another example of a common algebraic extension is the extension C / R {\displaystyle \mathbb {C} /\mathbb {R} } of the real numbers by the complex numbers. == Some properties == All transcendental extensions are of infinite degree. This in turn implies that all finite extensions are algebraic. The converse is not true however: there are infinite extensions which are algebraic. For instance, the field of all algebraic numbers is an infinite algebraic extension of the rational numbers. Let E be an extension field of K, and a ∈ E. The smallest subfield of E that contains K and a is commonly denoted K ( a ) . {\displaystyle K(a).} If a is algebraic over K, then the elements of K(a) can be expressed as polynomials in a with coefficients in K; that is, K ( a ) = K [ a ] {\displaystyle K(a)=K[a]} , the smallest ring containing K and a. In this case, K ( a ) {\displaystyle K(a)} is a finite extension of K and all its elements are algebraic over K. In particular, K ( a ) {\displaystyle K(a)} is a K-vector space with basis { 1 , a , . . . , a d − 1 } {\displaystyle \{1,a,...,a^{d-1}\}} , where d is the degree of the minimal polynomial of a. These properties do not hold if a is not algebraic. For example, Q ( π ) ≠ Q [ π ] , {\displaystyle \mathbb {Q} (\pi )\neq \mathbb {Q} [\pi ],} and they are both infinite dimensional vector spaces over Q . {\displaystyle \mathbb {Q} .} An algebraically closed field F has no proper algebraic extensions, that is, no algebraic extensions E with F < E. An example is the field of complex numbers. Every field has an algebraic extension which is algebraically closed (called its algebraic closure), but proving this in general requires some form of the axiom of choice. An extension L/K is algebraic if and only if every sub K-algebra of L is a field. == Properties == The following three properties hold: If E is an algebraic extension of F and F is an algebraic extension of K then E is an algebraic extension of K. If E and F are algebraic extensions of K in a common overfield C, then the compositum EF is an algebraic extension of K. If E is an algebraic extension of F and E > K > F then E is an algebraic extension of K. These finitary results can be generalized using transfinite induction: This fact, together with Zorn's lemma (applied to an appropriately chosen poset), establishes the existence of algebraic closures. == Generalizations == Model theory generalizes the notion of algebraic extension to arbitrary theories: an embedding of M into N is called an algebraic extension if for every x in N there is a formula p with parameters in M, such that p(x) is true and the set { y ∈ N ∣ p ( y ) } {\displaystyle \left\{y\in N\mid p(y)\right\}} is finite. It turns out that applying this definition to the theory of fields gives the usual definition of algebraic extension. 
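Returning to the field-theoretic setting described above: for an algebraic element such as √2 over Q, the statement that K(a) = K[a], with K-basis {1, a, ..., a^{d−1}}, can be checked concretely. The minimal sketch below (plain Python with exact rational arithmetic; the encoding of elements as coefficient pairs is just an illustrative choice) implements arithmetic in Q[√2] and verifies that every nonzero element is invertible, so that Q[√2] is already the field Q(√2).

```python
from fractions import Fraction

# Elements of Q(sqrt(2)) represented as pairs (a, b) standing for a + b*sqrt(2), with a, b rational.
# Since sqrt(2) is algebraic of degree 2 over Q, {1, sqrt(2)} is a Q-basis and Q(sqrt(2)) = Q[sqrt(2)].

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def mul(x, y):
    # (a + b*sqrt2)(c + d*sqrt2) = (ac + 2bd) + (ad + bc)*sqrt2 -- uses sqrt(2)^2 = 2, i.e.
    # polynomials in sqrt(2) are reduced modulo the minimal polynomial X^2 - 2.
    a, b = x
    c, d = y
    return (a * c + 2 * b * d, a * d + b * c)

def inv(x):
    # 1/(a + b*sqrt2) = (a - b*sqrt2)/(a^2 - 2b^2); the denominator is nonzero for x != 0
    # because sqrt(2) is irrational, so every nonzero element is invertible: Q[sqrt(2)] is a field.
    a, b = x
    n = a * a - 2 * b * b
    return (a / n, -b / n)

one = (Fraction(1), Fraction(0))
x = (Fraction(3), Fraction(5))          # the element 3 + 5*sqrt(2)
assert mul(x, inv(x)) == one            # every nonzero element has an inverse inside Q[sqrt(2)]
```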
The Galois group of N over M can again be defined as the group of automorphisms, and it turns out that most of the theory of Galois groups can be developed for the general case. == Relative algebraic closures == Given a field k and a field K containing k, one defines the relative algebraic closure of k in K to be the subfield of K consisting of all elements of K that are algebraic over k, that is all elements of K that are a root of some nonzero polynomial with coefficients in k. == See also == Integral element Lüroth's theorem Galois extension Separable extension Normal extension == Notes == == References == Fraleigh, John B. (2014), A First Course in Abstract Algebra, Pearson, ISBN 978-1-292-02496-7 Hazewinkel, Michiel; Gubareni, Nadiya; Gubareni, Nadezhda Mikhaĭlovna; Kirichenko, Vladimir V. (2004), Algebras, rings and modules, vol. 1, Springer, ISBN 1-4020-2690-0 Lang, Serge (1993), "V.1:Algebraic Extensions", Algebra (Third ed.), Reading, Mass.: Addison-Wesley, pp. 223ff, ISBN 978-0-201-55540-0, Zbl 0848.13001 Malik, D. B.; Mordeson, John N.; Sen, M. K. (1997), Fundamentals of Abstract Algebra, McGraw-Hill, ISBN 0-07-040035-0 McCarthy, Paul J. (1991) [corrected reprint of 2nd edition, 1976], Algebraic extensions of fields, New York: Dover Publications, ISBN 0-486-66651-4, Zbl 0768.12001 Roman, Steven (1995), Field Theory, GTM 158, Springer-Verlag, ISBN 9780387944081 Rotman, Joseph J. (2002), Advanced Modern Algebra, Prentice Hall, ISBN 9780130878687
Wikipedia/Algebraic_field_extension
Transformational theory is a branch of music theory developed by David Lewin in the 1980s, and formally introduced in his 1987 work Generalized Musical Intervals and Transformations. The theory—which models musical transformations as elements of a mathematical group—can be used to analyze both tonal and atonal music. The goal of transformational theory is to change the focus from musical objects—such as the "C major chord" or "G major chord"—to relations between musical objects (related by transformation). Thus, instead of saying that a C major chord is followed by G major, a transformational theorist might say that the first chord has been "transformed" into the second by the "Dominant operation." (Symbolically, one might write "Dominant(C major) = G major.") While traditional musical set theory focuses on the makeup of musical objects, transformational theory focuses on the intervals or types of musical motion that can occur. According to Lewin's description of this change in emphasis, "[The transformational] attitude does not ask for some observed measure of extension between reified 'points'; rather it asks: 'If I am at s and wish to get to t, what characteristic gesture should I perform in order to arrive there?'" (from Generalized Musical Intervals and Transformations (GMIT), p. 159) == Formalism == The formal setting for Lewin's theory is a set S (or "space") of musical objects, and a set T of transformations on that space. Transformations are modeled as functions acting on the entire space, meaning that every transformation must be applicable to every object. Lewin points out that this requirement significantly constrains the spaces and transformations that can be considered. For example, if the space S is the space of diatonic triads (represented by the Roman numerals I, ii, iii, IV, V, vi, and vii°), the "Dominant transformation" must be defined so as to apply to each of these triads. This means, for example, that some diatonic triad must be selected as the "dominant" of the diminished triad on vii. Ordinary musical discourse, however, typically holds that the "dominant" relationship is only between the I and V chords. (Certainly, no diatonic triad is ordinarily considered the dominant of the diminished triad.) In other words, "dominant," as used informally, is not a function that applies to all chords, but rather describes a particular relationship between two of them. There are, however, any number of situations in which "transformations" can extend to an entire space. Here, transformational theory provides a degree of abstraction that could be a significant music-theoretical asset. One transformational network can describe the relationships among musical events in more than one musical excerpt, thus offering an elegant way of relating them. For example, figure 7.9 in Lewin's GMIT can describe the first phrases of both the first and third movements of Beethoven's Symphony No. 1 in C Major, Op. 21. In this case, the transformation graph's objects are the same in both excerpts from the Beethoven Symphony, but this graph could apply to many more musical examples when the object labels are removed. Further, such a transformational network that gives only the intervals between pitch classes in an excerpt may also describe the differences in the relative durations of another excerpt in a piece, thus succinctly relating two different domains of music analysis. 
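The totality requirement described in the formalism above can be made concrete with a small sketch. The code below shows one hypothetical way to extend "dominant" to a total function on the seven diatonic triads, namely transposing the root up a diatonic fifth; this particular choice is only an illustration of the requirement, not a claim about Lewin's own definitions.

```python
# Space S of diatonic triads, and one (hypothetical) total "Dominant" transformation:
# send every triad to the triad whose root lies a diatonic fifth above it.
S = ["I", "ii", "iii", "IV", "V", "vi", "viio"]   # "viio" stands for the diminished triad on vii

def dominant(triad):
    return S[(S.index(triad) + 4) % 7]   # four scale steps up = a diatonic fifth

print(dominant("I"))      # V  -- the familiar case
print(dominant("viio"))   # IV -- the formalism forces *some* value even here
```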
Lewin's observation that only the transformations, and not the objects on which they act, are necessary to specify a transformational network is the main benefit of transformational analysis over traditional object-oriented analysis. == Transformations as functions == The "transformations" of transformational theory are typically modeled as functions that act over some musical space S, meaning that they are entirely defined by their inputs and outputs: for instance, the "ascending major third" might be modeled as a function that takes a particular pitch class as input and outputs the pitch class a major third above it. However, several theorists have pointed out that ordinary musical discourse often includes more information than functions. For example, a single pair of pitch classes (such as C and E) can stand in multiple relationships: E is both a major third above C and a minor sixth below it. (This is analogous to the fact that, on an ordinary clockface, the number 4 is both four steps clockwise from 12 and 8 steps counterclockwise from it.) For this reason, theorists such as Dmitri Tymoczko have proposed replacing Lewinnian "pitch class intervals" with "paths in pitch class space". More generally, this suggests that there are situations where it might not be useful to model musical motion ("transformations" in the intuitive sense) using functions ("transformations" in the strict sense of Lewinnian theory). Another issue concerns the role of "distance" in transformational theory. In the opening pages of GMIT, Lewin suggests that a subspecies of "transformations" (namely, musical intervals) can be used to model "directed measurements, distances, or motions". However, the mathematical formalism he uses—which models "transformations" by group elements—does not obviously represent distances, since group elements are not typically considered to have size. (Groups are typically individuated only up to isomorphism, and isomorphism does not necessarily preserve the "sizes" assigned to group elements.) Theorists such as Ed Gollin, Dmitri Tymoczko, and Rachel Hall, have all written about this subject, with Gollin attempting to incorporate "distances" into a broadly Lewinnian framework. Tymoczko's "Generalizing Musical Intervals" contains one of the few extended critiques of transformational theory, arguing (1) that intervals are sometimes "local" objects that, like vectors, cannot be transported around a musical space; (2) that musical spaces often have boundaries, or multiple paths between the same points, both prohibited by Lewin's formalism; and (3) that transformational theory implicitly relies on notions of distance extraneous to the formalism as such. == Reception == Although transformation theory is more than thirty years old, it did not become a widespread theoretical or analytical pursuit until the late 1990s. Following Lewin's revival (in GMIT) of Hugo Riemann's three contextual inversion operations on triads (parallel, relative, and Leittonwechsel) as formal transformations, the branch of transformation theory called Neo-Riemannian theory was popularized by Brian Hyer (1995), Michael Kevin Mooney (1996), Richard Cohn (1997), and an entire issue of the Journal of Music Theory (42/2, 1998). Transformation theory has received further treatment by Fred Lerdahl (2001), Julian Hook (2002), David Kopp (2002), and many others. The status of transformational theory is currently a topic of debate in music-theoretical circles. 
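The contrast between functions and paths discussed above can be summarized in a few lines of code. The sketch below assumes the common convention of numbering pitch classes 0 to 11 with C = 0; that numbering is a convention of the illustration, not part of the formalism.

```python
# Pitch classes as integers mod 12 (C = 0, E = 4, ...); transpositions are total functions.
def transpose(n):
    return lambda pitch_class: (pitch_class + n) % 12

up_major_third   = transpose(4)    # "ascending major third"
down_minor_sixth = transpose(-8)   # "descending minor sixth"

# As functions they are indistinguishable, which is exactly the information a "path" would keep:
print(all(up_major_third(pc) == down_minor_sixth(pc) for pc in range(12)))   # True
print(up_major_third(0))                                                     # 4: C maps to E
```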
Some authors, such as Ed Gollin, Dmitri Tymoczko and Julian Hook, have argued that Lewin's transformational formalism is too restrictive, and have called for extending the system in various ways. Others, such as Richard Cohn and Steven Rings, while acknowledging the validity of some of these criticisms, continue to use broadly Lewinnian techniques. == See also == Pitch space Interval vector == References == == Further reading == Cohn, Richard. "Neo-Riemannian Operations, Parsimonious Trichords, and their Tonnetz Representations", Journal of Music Theory, 41/1 (1997), 1–66 Hook, Julian. Uniform Triadic Transformations (Ph.D. dissertation, Indiana University, 2002) Hyer, Brian. "Reimag(in)ing Riemann", Journal of Music Theory, 39/1 (1995), 101–138 Kopp, David. Chromatic Transformations in Nineteenth-century Music (Cambridge University Press, 2002) Lerdahl, Fred. Tonal Pitch Space (Oxford University Press: New York, 2001) Lewin, David. "Transformational Techniques in Atonal and Other Music Theories", Perspectives of New Music, xxi (1982–83), 312–371 Lewin, David. Generalized Musical Intervals and Transformations (Yale University Press: New Haven, Connecticut, 1987) Lewin, David. Musical Form and Transformation: Four Analytic Essays (Yale University Press: New Haven, Connecticut, 1993) Mooney, Michael Kevin. The 'Table of Relations' and Music Psychology in Hugo Riemann's Chromatic Theory (Ph.D. dissertation, Columbia University, 1996) Rings, Steven. "Tonality and Transformation" (Oxford University Press: New York, 2011) Rehding, Alexander and Gollin, Edward. The Oxford Handbook of Neo-Riemannian Music Theories (Oxford University Press: New York 2011) Tsao, Ming (2010). Abstract Musical Intervals: Group Theory for Composition and Analysis. Berkeley, CA: Musurgia Universalis Press. ISBN 978-1430308355. == External links == Baez, John (June 12, 2006). "This Week's Finds in Mathematical Physics (Week 234)". University of California, Riverside.
Wikipedia/Transformational_theory
A cryptographic protocol is an abstract or concrete protocol that performs a security-related function and applies cryptographic methods, often as sequences of cryptographic primitives. A protocol describes how the algorithms should be used and includes details about data structures and representations, at which point it can be used to implement multiple, interoperable versions of a program. Cryptographic protocols are widely used for secure application-level data transport. A cryptographic protocol usually incorporates at least some of these aspects: Key agreement or establishment Entity authentication Symmetric encryption and message authentication material construction Secured application-level data transport Non-repudiation methods Secret sharing methods Secure multi-party computation For example, Transport Layer Security (TLS) is a cryptographic protocol that is used to secure web (HTTPS) connections. It has an entity authentication mechanism, based on the X.509 system; a key setup phase, where a symmetric encryption key is formed by employing public-key cryptography; and an application-level data transport function. These three aspects have important interconnections. Standard TLS does not have non-repudiation support. There are other types of cryptographic protocols as well, and even the term itself has various readings; Cryptographic application protocols often use one or more underlying key agreement methods, which are also sometimes themselves referred to as "cryptographic protocols". For instance, TLS employs what is known as the Diffie–Hellman key exchange, which although it is only a part of TLS per se, Diffie–Hellman may be seen as a complete cryptographic protocol in itself for other applications. == Advanced cryptographic protocols == A wide variety of cryptographic protocols go beyond the traditional goals of data confidentiality, integrity, and authentication to also secure a variety of other desired characteristics of computer-mediated collaboration. Blind signatures can be used for digital cash and digital credentials to prove that a person holds an attribute or right without revealing that person's identity or the identities of parties that person transacted with. Secure digital timestamping can be used to prove that data (even if confidential) existed at a certain time. Secure multiparty computation can be used to compute answers (such as determining the highest bid in an auction) based on confidential data (such as private bids), so that when the protocol is complete the participants know only their own input and the answer. End-to-end auditable voting systems provide sets of desirable privacy and auditability properties for conducting e-voting. Undeniable signatures include interactive protocols that allow the signer to prove a forgery and limit who can verify the signature. Deniable encryption augments standard encryption by making it impossible for an attacker to mathematically prove the existence of a plain text message. Digital mixes create hard-to-trace communications. == Formal verification == Cryptographic protocols can sometimes be verified formally on an abstract level. When it is done, there is a necessity to formalize the environment in which the protocol operates in order to identify threats. This is frequently done through the Dolev-Yao model. 
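As a toy illustration of the key-agreement aspect listed above (the role a Diffie–Hellman exchange plays inside TLS), the following sketch runs a textbook exchange with deliberately tiny parameters. The prime and generator are arbitrary choices for the example, and nothing here should be mistaken for a secure or complete protocol implementation.

```python
import secrets

# Textbook Diffie-Hellman with a tiny prime -- purely illustrative, NOT secure.
p, g = 227, 2                        # toy public parameters (chosen arbitrarily)
a = secrets.randbelow(p - 2) + 1     # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1     # Bob's secret exponent

A = pow(g, a, p)                     # Alice -> Bob (sent in the clear)
B = pow(g, b, p)                     # Bob -> Alice (sent in the clear)

shared_alice = pow(B, a, p)          # Alice's view of the shared secret
shared_bob   = pow(A, b, p)          # Bob's view of the shared secret
assert shared_alice == shared_bob    # both sides agree on the symmetric key material
```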
Logics, concepts and calculi used for formal reasoning of security protocols: Burrows–Abadi–Needham logic (BAN logic) Dolev–Yao model π-calculus Protocol composition logic (PCL) Strand space Research projects and tools used for formal verification of security protocols: Automated Validation of Internet Security Protocols and Applications (AVISPA) and follow-up project AVANTSSAR. Constraint Logic-based Attack Searcher (CL-AtSe) Open-Source Fixed-Point Model-Checker (OFMC) SAT-based Model-Checker (SATMC) Casper CryptoVerif Cryptographic Protocol Shapes Analyzer (CPSA) Knowledge In Security protocolS (KISS) Maude-NRL Protocol Analyzer (Maude-NPA) ProVerif Scyther Tamarin Prover Squirrel === Notion of abstract protocol === To formally verify a protocol it is often abstracted and modelled using Alice & Bob notation. A simple example is the following: A → B : { X } K A , B {\displaystyle A\rightarrow B:\{X\}_{K_{A,B}}} This states that Alice A {\displaystyle A} intends a message for Bob B {\displaystyle B} consisting of a message X {\displaystyle X} encrypted under shared key K A , B {\displaystyle K_{A,B}} . == Examples == Internet Key Exchange IPsec Kerberos Off-the-Record Messaging Point to Point Protocol Secure Shell (SSH) Signal Protocol Transport Layer Security ZRTP == See also == List of cryptosystems Secure channel Security Protocols Open Repository Comparison of cryptography libraries Quantum cryptographic protocol == References == == Further reading == Ermoshina, Ksenia; Musiani, Francesca; Halpin, Harry (September 2016). "End-to-End Encrypted Messaging Protocols: An Overview" (PDF). In Bagnoli, Franco; et al. (eds.). Internet Science. INSCI 2016. Florence, Italy: Springer. pp. 244–254. doi:10.1007/978-3-319-45982-0_22. ISBN 978-3-319-45982-0.
Wikipedia/Cryptographic_protocol
In mathematics, a partial function f from a set X to a set Y is a function from a subset S of X (possibly the whole X itself) to Y. The subset S, that is, the domain of f viewed as a function, is called the domain of definition or natural domain of f. If S equals X, that is, if f is defined on every element in X, then f is said to be a total function. In other words, a partial function is a binary relation over two sets that associates to every element of the first set at most one element of the second set; it is thus a univalent relation. This generalizes the concept of a (total) function by not requiring every element of the first set to be associated to an element of the second set. A partial function is often used when its exact domain of definition is not known, or is difficult to specify. However, even when the exact domain of definition is known, partial functions are often used for simplicity or brevity. This is the case in calculus, where, for example, the quotient of two functions is a partial function whose domain of definition cannot contain the zeros of the denominator; in this context, a partial function is generally simply called a function. In computability theory, a general recursive function is a partial function from the integers to the integers; no algorithm can exist for deciding whether an arbitrary such function is in fact total. When arrow notation is used for functions, a partial function f {\displaystyle f} from X {\displaystyle X} to Y {\displaystyle Y} is sometimes written as f : X ⇀ Y , {\displaystyle f:X\rightharpoonup Y,} f : X ↛ Y , {\displaystyle f:X\nrightarrow Y,} or f : X ↪ Y . {\displaystyle f:X\hookrightarrow Y.} However, there is no general convention, and the latter notation is more commonly used for inclusion maps or embeddings. Specifically, for a partial function f : X ⇀ Y , {\displaystyle f:X\rightharpoonup Y,} and any x ∈ X , {\displaystyle x\in X,} one has either: f ( x ) = y ∈ Y {\displaystyle f(x)=y\in Y} (it is a single element in Y), or f ( x ) {\displaystyle f(x)} is undefined. For example, if f {\displaystyle f} is the square root function restricted to the integers f : Z → N , {\displaystyle f:\mathbb {Z} \to \mathbb {N} ,} defined by: f ( n ) = m {\displaystyle f(n)=m} if, and only if, m 2 = n , {\displaystyle m^{2}=n,} m ∈ N , n ∈ Z , {\displaystyle m\in \mathbb {N} ,n\in \mathbb {Z} ,} then f ( n ) {\displaystyle f(n)} is only defined if n {\displaystyle n} is a perfect square (that is, 0 , 1 , 4 , 9 , 16 , … {\displaystyle 0,1,4,9,16,\ldots } ). So f ( 25 ) = 5 {\displaystyle f(25)=5} but f ( 26 ) {\displaystyle f(26)} is undefined. == Basic concepts == A partial function arises from the consideration of maps between two sets X and Y that may not be defined on the entire set X. A common example is the square root operation on the real numbers R {\displaystyle \mathbb {R} } : because negative real numbers do not have real square roots, the operation can be viewed as a partial function from R {\displaystyle \mathbb {R} } to R . {\displaystyle \mathbb {R} .} The domain of definition of a partial function is the subset S of X on which the partial function is defined; in this case, the partial function may also be viewed as a function from S to Y. In the example of the square root operation, the set S consists of the nonnegative real numbers [ 0 , + ∞ ) . {\displaystyle [0,+\infty ).} The notion of partial function is particularly convenient when the exact domain of definition is unknown or even unknowable. 
For a computer-science example of the latter, see Halting problem. In case the domain of definition S is equal to the whole set X, the partial function is said to be total. Thus, total partial functions from X to Y coincide with functions from X to Y. Many properties of functions can be extended in an appropriate sense of partial functions. A partial function is said to be injective, surjective, or bijective when the function given by the restriction of the partial function to its domain of definition is injective, surjective, bijective respectively. Because a function is trivially surjective when restricted to its image, the term partial bijection denotes a partial function which is injective. An injective partial function may be inverted to an injective partial function, and a partial function which is both injective and surjective has an injective function as inverse. Furthermore, a function which is injective may be inverted to a bijective partial function. The notion of transformation can be generalized to partial functions as well. A partial transformation is a function f : A ⇀ B , {\displaystyle f:A\rightharpoonup B,} where both A {\displaystyle A} and B {\displaystyle B} are subsets of some set X . {\displaystyle X.} == Function spaces == For convenience, denote the set of all partial functions f : X ⇀ Y {\displaystyle f:X\rightharpoonup Y} from a set X {\displaystyle X} to a set Y {\displaystyle Y} by [ X ⇀ Y ] . {\displaystyle [X\rightharpoonup Y].} This set is the union of the sets of functions defined on subsets of X {\displaystyle X} with same codomain Y {\displaystyle Y} : [ X ⇀ Y ] = ⋃ D ⊆ X [ D → Y ] , {\displaystyle [X\rightharpoonup Y]=\bigcup _{D\subseteq X}[D\to Y],} the latter also written as ⋃ D ⊆ X Y D . {\textstyle \bigcup _{D\subseteq {X}}Y^{D}.} In finite case, its cardinality is | [ X ⇀ Y ] | = ( | Y | + 1 ) | X | , {\displaystyle |[X\rightharpoonup Y]|=(|Y|+1)^{|X|},} because any partial function can be extended to a function by any fixed value c {\displaystyle c} not contained in Y , {\displaystyle Y,} so that the codomain is Y ∪ { c } , {\displaystyle Y\cup \{c\},} an operation which is injective (unique and invertible by restriction). == Discussion and examples == The first diagram at the top of the article represents a partial function that is not a function since the element 1 in the left-hand set is not associated with anything in the right-hand set. Whereas, the second diagram represents a function since every element on the left-hand set is associated with exactly one element in the right hand set. === Natural logarithm === Consider the natural logarithm function mapping the real numbers to themselves. The logarithm of a non-positive real is not a real number, so the natural logarithm function doesn't associate any real number in the codomain with any non-positive real number in the domain. Therefore, the natural logarithm function is not a function when viewed as a function from the reals to themselves, but it is a partial function. If the domain is restricted to only include the positive reals (that is, if the natural logarithm function is viewed as a function from the positive reals to the reals), then the natural logarithm is a function. === Subtraction of natural numbers === Subtraction of natural numbers (in which N {\displaystyle \mathbb {N} } is the non-negative integers) is a partial function: f : N × N ⇀ N {\displaystyle f:\mathbb {N} \times \mathbb {N} \rightharpoonup \mathbb {N} } f ( x , y ) = x − y . 
{\displaystyle f(x,y)=x-y.} It is defined only when x ≥ y . {\displaystyle x\geq y.} === Bottom element === In denotational semantics a partial function is considered as returning the bottom element when it is undefined. In computer science a partial function corresponds to a subroutine that raises an exception or loops forever. The IEEE floating point standard defines a not-a-number value which is returned when a floating point operation is undefined and exceptions are suppressed, e.g. when the square root of a negative number is requested. In a programming language where function parameters are statically typed, a function may be defined as a partial function because the language's type system cannot express the exact domain of the function, so the programmer instead gives it the smallest domain which is expressible as a type and contains the domain of definition of the function. === In category theory === In category theory, when considering the operation of morphism composition in concrete categories, the composition operation ∘ : hom ⁡ ( C ) × hom ⁡ ( C ) → hom ⁡ ( C ) {\displaystyle \circ \;:\;\hom(C)\times \hom(C)\to \hom(C)} is a total function if and only if ob ⁡ ( C ) {\displaystyle \operatorname {ob} (C)} has one element. The reason for this is that two morphisms f : X → Y {\displaystyle f:X\to Y} and g : U → V {\displaystyle g:U\to V} can only be composed as g ∘ f {\displaystyle g\circ f} if Y = U , {\displaystyle Y=U,} that is, the codomain of f {\displaystyle f} must equal the domain of g . {\displaystyle g.} The category of sets and partial functions is equivalent to but not isomorphic with the category of pointed sets and point-preserving maps. One textbook notes that "This formal completion of sets and partial maps by adding “improper,” “infinite” elements was reinvented many times, in particular, in topology (one-point compactification) and in theoretical computer science." The category of sets and partial bijections is equivalent to its dual. It is the prototypical inverse category. === In abstract algebra === Partial algebra generalizes the notion of universal algebra to partial operations. An example would be a field, in which the multiplicative inversion is the only proper partial operation (because division by zero is not defined). The set of all partial functions (partial transformations) on a given base set, X , {\displaystyle X,} forms a regular semigroup called the semigroup of all partial transformations (or the partial transformation semigroup on X {\displaystyle X} ), typically denoted by P T X . {\displaystyle {\mathcal {PT}}_{X}.} The set of all partial bijections on X {\displaystyle X} forms the symmetric inverse semigroup. === Charts and atlases for manifolds and fiber bundles === Charts in the atlases which specify the structure of manifolds and fiber bundles are partial functions. In the case of manifolds, the domain is the point set of the manifold. In the case of fiber bundles, the domain is the space of the fiber bundle. In these applications, the most important construction is the transition map, which is the composite of one chart with the inverse of another. The initial classification of manifolds and fiber bundles is largely expressed in terms of constraints on these transition maps. The reason for the use of partial functions instead of functions is to permit general global topologies to be represented by stitching together local patches to describe the global structure. The "patches" are the domains where the charts are defined. 
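The two conventions mentioned under "Bottom element" above, returning a bottom-like value versus raising an exception, can be sketched for the integer square-root partial function used as an example earlier in the article. The function names below are ad hoc choices for the illustration.

```python
import math

def isqrt_partial(n):
    """f : Z -> N (partial) with f(n) = m iff m*m == n; returns None where f is undefined."""
    if n >= 0:
        m = math.isqrt(n)
        if m * m == n:
            return m
    return None                      # a bottom-like value marking undefined inputs

def isqrt_or_raise(n):
    """The same partial function, with undefinedness modelled by raising an exception."""
    m = isqrt_partial(n)
    if m is None:
        raise ValueError(f"{n} is not a perfect square")
    return m

print(isqrt_partial(25), isqrt_partial(26))   # 5 None
```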
== See also == Analytic continuation – Extension of the domain of an analytic function (mathematics) Multivalued function – Generalized mathematical function Densely defined operator – Function that is defined almost everywhere (mathematics) == References == Martin Davis (1958), Computability and Unsolvability, McGraw–Hill Book Company, Inc, New York. Republished by Dover in 1982. ISBN 0-486-61471-9. Stephen Kleene (1952), Introduction to Meta-Mathematics, North-Holland Publishing Company, Amsterdam, Netherlands, 10th printing with corrections added on 7th printing (1974). ISBN 0-7204-2103-9. Harold S. Stone (1972), Introduction to Computer Organization and Data Structures, McGraw–Hill Book Company, New York. === Notes ===
Wikipedia/Partial_functions
In mathematics, to solve an equation is to find its solutions, which are the values (numbers, functions, sets, etc.) that fulfill the condition stated by the equation, consisting generally of two expressions related by an equals sign. When seeking a solution, one or more variables are designated as unknowns. A solution is an assignment of values to the unknown variables that makes the equality in the equation true. In other words, a solution is a value or a collection of values (one for each unknown) such that, when substituted for the unknowns, the equation becomes an equality. A solution of an equation is often called a root of the equation, particularly but not only for polynomial equations. The set of all solutions of an equation is its solution set. An equation may be solved either numerically or symbolically. Solving an equation numerically means that only numbers are admitted as solutions. Solving an equation symbolically means that expressions can be used for representing the solutions. For example, the equation x + y = 2x – 1 is solved for the unknown x by the expression x = y + 1, because substituting y + 1 for x in the equation results in (y + 1) + y = 2(y + 1) – 1, a true statement. It is also possible to take the variable y to be the unknown, and then the equation is solved by y = x – 1. Or x and y can both be treated as unknowns, and then there are many solutions to the equation; a symbolic solution is (x, y) = (a + 1, a), where the variable a may take any value. Instantiating a symbolic solution with specific numbers gives a numerical solution; for example, a = 0 gives (x, y) = (1, 0) (that is, x = 1, y = 0), and a = 1 gives (x, y) = (2, 1). The distinction between known variables and unknown variables is generally made in the statement of the problem, by phrases such as "an equation in x and y", or "solve for x and y", which indicate the unknowns, here x and y. However, it is common to reserve x, y, z, ... to denote the unknowns, and to use a, b, c, ... to denote the known variables, which are often called parameters. This is typically the case when considering polynomial equations, such as quadratic equations. However, for some problems, all variables may assume either role. Depending on the context, solving an equation may consist to find either any solution (finding a single solution is enough), all solutions, or a solution that satisfies further properties, such as belonging to a given interval. When the task is to find the solution that is the best under some criterion, this is an optimization problem. Solving an optimization problem is generally not referred to as "equation solving", as, generally, solving methods start from a particular solution for finding a better solution, and repeating the process until finding eventually the best solution. == Overview == One general form of an equation is f ( x 1 , … , x n ) = c , {\displaystyle f\left(x_{1},\dots ,x_{n}\right)=c,} where f is a function, x1, ..., xn are the unknowns, and c is a constant. Its solutions are the elements of the inverse image (fiber) f − 1 ( c ) = { ( a 1 , … , a n ) ∈ D ∣ f ( a 1 , … , a n ) = c } , {\displaystyle f^{-1}(c)={\bigl \{}(a_{1},\dots ,a_{n})\in D\mid f\left(a_{1},\dots ,a_{n}\right)=c{\bigr \}},} where D is the domain of the function f. The set of solutions can be the empty set (there are no solutions), a singleton (there is exactly one solution), finite, or infinite (there are infinitely many solutions). 
For example, an equation such as 3 x + 2 y = 21 z , {\displaystyle 3x+2y=21z,} with unknowns x, y and z, can be put in the above form by subtracting 21z from both sides of the equation, to obtain 3 x + 2 y − 21 z = 0 {\displaystyle 3x+2y-21z=0} In this particular case there is not just one solution, but an infinite set of solutions, which can be written using set builder notation as { ( x , y , z ) ∣ 3 x + 2 y − 21 z = 0 } . {\displaystyle {\bigl \{}(x,y,z)\mid 3x+2y-21z=0{\bigr \}}.} One particular solution is x = 0, y = 0, z = 0. Two other solutions are x = 3, y = 6, z = 1, and x = 8, y = 9, z = 2. There is a unique plane in three-dimensional space which passes through the three points with these coordinates, and this plane is the set of all points whose coordinates are solutions of the equation. == Solution sets == The solution set of a given set of equations or inequalities is the set of all its solutions, a solution being a tuple of values, one for each unknown, that satisfies all the equations or inequalities. If the solution set is empty, then there are no values of the unknowns that satisfy simultaneously all equations and inequalities. For a simple example, consider the equation x 2 = 2. {\displaystyle x^{2}=2.} This equation can be viewed as a Diophantine equation, that is, an equation for which only integer solutions are sought. In this case, the solution set is the empty set, since 2 is not the square of an integer. However, if one searches for real solutions, there are two solutions, √2 and –√2; in other words, the solution set is {√2, −√2}. When an equation contains several unknowns, and when one has several equations with more unknowns than equations, the solution set is often infinite. In this case, the solutions cannot be listed. For representing them, a parametrization is often useful, which consists of expressing the solutions in terms of some of the unknowns or auxiliary variables. This is always possible when all the equations are linear. Such infinite solution sets can naturally be interpreted as geometric shapes such as lines, curves (see picture), planes, and more generally algebraic varieties or manifolds. In particular, algebraic geometry may be viewed as the study of solution sets of algebraic equations. == Methods of solution == The methods for solving equations generally depend on the type of equation, both the kind of expressions in the equation and the kind of values that may be assumed by the unknowns. The variety in types of equations is large, and so are the corresponding methods. Only a few specific types are mentioned below. In general, given a class of equations, there may be no known systematic method (algorithm) that is guaranteed to work. This may be due to a lack of mathematical knowledge; some problems were only solved after centuries of effort. But this also reflects that, in general, no such method can exist: some problems are known to be unsolvable by an algorithm, such as Hilbert's tenth problem, which was proved unsolvable in 1970. For several classes of equations, algorithms have been found for solving them, some of which have been implemented and incorporated in computer algebra systems, but often require no more sophisticated technology than pencil and paper. In some other cases, heuristic methods are known that are often successful but that are not guaranteed to lead to success. 
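Before turning to the individual methods below, the following sketch shows how a computer algebra system (SymPy is used here purely as a convenient example) expresses both a symbolic solution and the dependence of the solution set on the admitted domain.

```python
from sympy import symbols, Eq, solve, solveset, S

x, y = symbols('x y')

# Symbolic solution of x + y = 2x - 1 for the chosen unknown
print(solve(Eq(x + y, 2*x - 1), x))   # [y + 1]
print(solve(Eq(x + y, 2*x - 1), y))   # [x - 1]

# The solution set of x**2 = 2 depends on where solutions are sought
print(solveset(x**2 - 2, x, domain=S.Integers))   # no integer solutions (empty set)
print(solveset(x**2 - 2, x, domain=S.Reals))      # {-sqrt(2), sqrt(2)}
```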
=== Brute force, trial and error, inspired guess === If the solution set of an equation is restricted to a finite set (as is the case for equations in modular arithmetic, for example), or can be limited to a finite number of possibilities (as is the case with some Diophantine equations), the solution set can be found by brute force, that is, by testing each of the possible values (candidate solutions). It may be the case, though, that the number of possibilities to be considered, although finite, is so huge that an exhaustive search is not practically feasible; this is, in fact, a requirement for strong encryption methods. As with all kinds of problem solving, trial and error may sometimes yield a solution, in particular where the form of the equation, or its similarity to another equation with a known solution, may lead to an "inspired guess" at the solution. If a guess, when tested, fails to be a solution, consideration of the way in which it fails may lead to a modified guess. === Elementary algebra === Equations involving linear or simple rational functions of a single real-valued unknown, say x, such as 8 x + 7 = 4 x + 35 or 4 x + 9 3 x + 4 = 2 , {\displaystyle 8x+7=4x+35\quad {\text{or}}\quad {\frac {4x+9}{3x+4}}=2\,,} can be solved using the methods of elementary algebra. === Systems of linear equations === Smaller systems of linear equations can be solved likewise by methods of elementary algebra. For solving larger systems, algorithms are used that are based on linear algebra. See Gaussian elimination and numerical solution of linear systems. === Polynomial equations === Polynomial equations of degree up to four can be solved exactly using algebraic methods, of which the quadratic formula is the simplest example. Polynomial equations with a degree of five or higher require in general numerical methods (see below) or special functions such as Bring radicals, although some specific cases may be solvable algebraically, for example 4 x 5 − x 3 − 3 = 0 {\displaystyle 4x^{5}-x^{3}-3=0} (by using the rational root theorem), and x 6 − 5 x 3 + 6 = 0 , {\displaystyle x^{6}-5x^{3}+6=0\,,} (by using the substitution x = z1⁄3, which simplifies this to a quadratic equation in z). === Diophantine equations === In Diophantine equations the solutions are required to be integers. In some cases a brute force approach can be used, as mentioned above. In some other cases, in particular if the equation is in one unknown, it is possible to solve the equation for rational-valued unknowns (see Rational root theorem), and then find solutions to the Diophantine equation by restricting the solution set to integer-valued solutions. For example, the polynomial equation 2 x 5 − 5 x 4 − x 3 − 7 x 2 + 2 x + 3 = 0 {\displaystyle 2x^{5}-5x^{4}-x^{3}-7x^{2}+2x+3=0\,} has as rational solutions x = −⁠1/2⁠ and x = 3, and so, viewed as a Diophantine equation, it has the unique solution x = 3. In general, however, Diophantine equations are among the most difficult equations to solve. === Inverse functions === In the simple case of a function of one variable, say, h(x), we can solve an equation of the form h(x) = c for some constant c by considering what is known as the inverse function of h. Given a function h : A → B, the inverse function, denoted h−1 and defined as h−1 : B → A, is a function such that h − 1 ( h ( x ) ) = h ( h − 1 ( x ) ) = x . 
{\displaystyle h^{-1}{\bigl (}h(x){\bigr )}=h{\bigl (}h^{-1}(x){\bigr )}=x\,.} Now, if we apply the inverse function to both sides of h(x) = c, where c is a constant value in B, we obtain h − 1 ( h ( x ) ) = h − 1 ( c ) x = h − 1 ( c ) {\displaystyle {\begin{aligned}h^{-1}{\bigl (}h(x){\bigr )}&=h^{-1}(c)\\x&=h^{-1}(c)\\\end{aligned}}} and we have found the solution to the equation. However, depending on the function, the inverse may be difficult to be defined, or may not be a function on all of the set B (only on some subset), and have many values at some point. If just one solution will do, instead of the full solution set, it is actually sufficient if only the functional identity h ( h − 1 ( x ) ) = x {\displaystyle h\left(h^{-1}(x)\right)=x} holds. For example, the projection π1 : R2 → R defined by π1(x, y) = x has no post-inverse, but it has a pre-inverse π−11 defined by π−11(x) = (x, 0). Indeed, the equation π1(x, y) = c is solved by ( x , y ) = π 1 − 1 ( c ) = ( c , 0 ) . {\displaystyle (x,y)=\pi _{1}^{-1}(c)=(c,0).} Examples of inverse functions include the nth root (inverse of xn); the logarithm (inverse of ax); the inverse trigonometric functions; and Lambert's W function (inverse of xex). === Factorization === If the left-hand side expression of an equation P = 0 can be factorized as P = QR, the solution set of the original solution consists of the union of the solution sets of the two equations Q = 0 and R = 0. For example, the equation tan ⁡ x + cot ⁡ x = 2 {\displaystyle \tan x+\cot x=2} can be rewritten, using the identity tan x cot x = 1 as tan 2 ⁡ x − 2 tan ⁡ x + 1 tan ⁡ x = 0 , {\displaystyle {\frac {\tan ^{2}x-2\tan x+1}{\tan x}}=0,} which can be factorized into ( tan ⁡ x − 1 ) 2 tan ⁡ x = 0. {\displaystyle {\frac {\left(\tan x-1\right)^{2}}{\tan x}}=0.} The solutions are thus the solutions of the equation tan x = 1, and are thus the set x = π 4 + k π , k = 0 , ± 1 , ± 2 , … . {\displaystyle x={\tfrac {\pi }{4}}+k\pi ,\quad k=0,\pm 1,\pm 2,\ldots .} === Numerical methods === With more complicated equations in real or complex numbers, simple methods to solve equations can fail. Often, root-finding algorithms like the Newton–Raphson method can be used to find a numerical solution to an equation, which, for some applications, can be entirely sufficient to solve some problem. There are also numerical methods for systems of linear equations. === Matrix equations === Equations involving matrices and vectors of real numbers can often be solved by using methods from linear algebra. === Differential equations === There is a vast body of methods for solving various kinds of differential equations, both numerically and analytically. A particular class of problem that can be considered to belong here is integration, and the analytic methods for solving this kind of problems are now called symbolic integration. Solutions of differential equations can be implicit or explicit. == See also == Extraneous and missing solutions Simultaneous equations Equating coefficients Solving the geodesic equations Unification (computer science) — solving equations involving symbolic expressions == References ==
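As a closing illustration of two of the methods described above, a brute-force check of the finitely many rational-root candidates and a Newton–Raphson iteration, here is a short sketch; the example polynomials, starting point and tolerance are arbitrary choices.

```python
from fractions import Fraction

# Rational root theorem for 2x^5 - 5x^4 - x^3 - 7x^2 + 2x + 3: candidates are p/q with p | 3, q | 2.
def poly(v):
    return 2*v**5 - 5*v**4 - v**3 - 7*v**2 + 2*v + 3

candidates = {Fraction(p, q) for p in (1, -1, 3, -3) for q in (1, 2)}
roots = sorted(r for r in candidates if poly(r) == 0)
print(roots)                                       # [Fraction(-1, 2), Fraction(3, 1)]
print([r for r in roots if r.denominator == 1])    # [Fraction(3, 1)], the Diophantine solution

# Newton-Raphson for x^5 - x - 1 = 0, a quintic usually handled numerically
def newton(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

root = newton(lambda v: v**5 - v - 1, lambda v: 5*v**4 - 1, x0=1.5)
print(root, root**5 - root - 1)                    # root near 1.1673, residual near 0
```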
Wikipedia/Solution_(equation)
Combinatorial design theory is the part of combinatorial mathematics that deals with the existence, construction and properties of systems of finite sets whose arrangements satisfy generalized concepts of balance and/or symmetry. These concepts are not made precise so that a wide range of objects can be thought of as being under the same umbrella. At times this might involve the numerical sizes of set intersections as in block designs, while at other times it could involve the spatial arrangement of entries in an array as in sudoku grids. Combinatorial design theory can be applied to the area of design of experiments. Some of the basic theory of combinatorial designs originated in the statistician Ronald Fisher's work on the design of biological experiments. Modern applications are also found in a wide gamut of areas including finite geometry, tournament scheduling, lotteries, mathematical chemistry, mathematical biology, algorithm design and analysis, networking, group testing and cryptography. == Example == Given a certain number n of people, is it possible to assign them to sets so that each person is in at least one set, each pair of people is in exactly one set together, every two sets have exactly one person in common, and no set contains everyone, all but one person, or exactly one person? The answer depends on n. This has a solution only if n has the form q2 + q + 1. It is less simple to prove that a solution exists if q is a prime power. It is conjectured that these are the only solutions. It has been further shown that if a solution exists for q congruent to 1 or 2 mod 4, then q is a sum of two square numbers. This last result, the Bruck–Ryser theorem, is proved by a combination of constructive methods based on finite fields and an application of quadratic forms. When such a structure does exist, it is called a finite projective plane; thus showing how finite geometry and combinatorics intersect. When q = 2, the projective plane is called the Fano plane. == History == Combinatorial designs date to antiquity, with the Lo Shu Square being an early magic square. One of the earliest datable application of combinatorial design is found in India in the book Brhat Samhita by Varahamihira, written around 587 AD, for the purpose of making perfumes using 4 substances selected from 16 different substances using a magic square. Combinatorial designs developed along with the general growth of combinatorics from the 18th century, for example with Latin squares in the 18th century and Steiner systems in the 19th century. Designs have also been popular in recreational mathematics, such as Kirkman's schoolgirl problem (1850), and in practical problems, such as the scheduling of round-robin tournaments (solution published 1880s). In the 20th century designs were applied to the design of experiments, notably Latin squares, finite geometry, and association schemes, yielding the field of algebraic statistics. == Fundamental combinatorial designs == The classical core of the subject of combinatorial designs is built around balanced incomplete block designs (BIBDs), Hadamard matrices and Hadamard designs, symmetric BIBDs, Latin squares, resolvable BIBDs, difference sets, and pairwise balanced designs (PBDs). Other combinatorial designs are related to or have been developed from the study of these fundamental ones. 
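For the smallest case of the example above (q = 2, so n = q^2 + q + 1 = 7), the resulting structure is the Fano plane, and its defining conditions can be checked directly. The particular labelling of points and lines below is one common choice, not canonical.

```python
from itertools import combinations

points = range(1, 8)
lines = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]

# each pair of people/points lies in exactly one set/line
assert all(sum({p, q} <= L for L in lines) == 1 for p, q in combinations(points, 2))
# every two sets/lines have exactly one person/point in common
assert all(len(L & M) == 1 for L, M in combinations(lines, 2))
# no set has exactly one person, all but one, or everyone
assert all(1 < len(L) < 6 for L in lines)
```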
A balanced incomplete block design or BIBD (usually called for short a block design) is a collection B of b subsets (called blocks) of a finite set X of v elements, such that any element of X is contained in the same number r of blocks, every block has the same number k of elements, and each pair of distinct elements appear together in the same number λ of blocks. BIBDs are also known as 2-designs and are often denoted as 2-(v,k,λ) designs. As an example, when λ = 1 and b = v, we have a projective plane: X is the point set of the plane and the blocks are the lines. A symmetric balanced incomplete block design or SBIBD is a BIBD in which v = b (the number of points equals the number of blocks). They are the single most important and well studied subclass of BIBDs. Projective planes, biplanes and Hadamard 2-designs are all SBIBDs. They are of particular interest since they are the extremal examples of Fisher's inequality (b ≥ v). A resolvable BIBD is a BIBD whose blocks can be partitioned into sets (called parallel classes), each of which forms a partition of the point set of the BIBD. The set of parallel classes is called a resolution of the design. A solution of the famous 15 schoolgirl problem is a resolution of a BIBD with v = 15, k = 3 and λ = 1. A Latin rectangle is an r × n matrix that has the numbers 1, 2, 3, ..., n as its entries (or any other set of n distinct symbols) with no number occurring more than once in any row or column where r ≤ n. An n × n Latin rectangle is called a Latin square. If r < n, then it is possible to append n − r rows to an r × n Latin rectangle to form a Latin square, using Hall's marriage theorem. Two Latin squares of order n are said to be orthogonal if the set of all ordered pairs consisting of the corresponding entries in the two squares has n2 distinct members (all possible ordered pairs occur). A set of Latin squares of the same order forms a set of mutually orthogonal Latin squares (MOLS) if every pair of Latin squares in the set are orthogonal. There can be at most n − 1 squares in a set of MOLS of order n. A set of n − 1 MOLS of order n can be used to construct a projective plane of order n (and conversely). A (v, k, λ) difference set is a subset D of a group G such that the order of G is v, the size of D is k, and every nonidentity element of G can be expressed as a product d1d2−1 of elements of D in exactly λ ways (when G is written with a multiplicative operation). If D is a difference set, and g in G, then g D = {gd: d in D} is also a difference set, and is called a translate of D. The set of all translates of a difference set D forms a symmetric BIBD. In such a design there are v elements and v blocks. Each block of the design consists of k points, each point is contained in k blocks. Any two blocks have exactly λ elements in common and any two points appear together in λ blocks. This SBIBD is called the development of D. In particular, if λ = 1, then the difference set gives rise to a projective plane. An example of a (7,3,1) difference set in the group Z / 7 Z {\displaystyle \mathbb {Z} /7\mathbb {Z} } (an abelian group written additively) is the subset {1,2,4}. The development of this difference set gives the Fano plane. Since every difference set gives an SBIBD, the parameter set must satisfy the Bruck–Ryser–Chowla theorem, but not every SBIBD gives a difference set. An Hadamard matrix of order m is an m × m matrix H whose entries are ±1 such that HH⊤ = mIm, where H⊤ is the transpose of H and Im is the m × m identity matrix. 
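As a quick check of the difference-set construction described above, the following sketch develops the (7,3,1) difference set {1, 2, 4} in Z/7Z and verifies the stated design parameters; the choice of representatives is arbitrary.

```python
from itertools import combinations

D = {1, 2, 4}                                                    # difference set in Z/7Z
blocks = [frozenset((d + g) % 7 for d in D) for g in range(7)]   # the translates of D

assert len(set(blocks)) == 7                          # v = b = 7: a symmetric design
assert all(len(B) == 3 for B in blocks)               # k = 3
for x, y in combinations(range(7), 2):                # lambda = 1: the Fano plane again
    assert sum({x, y} <= B for B in blocks) == 1
```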
An Hadamard matrix can be put into standardized form (that is, converted to an equivalent Hadamard matrix) where the first row and first column entries are all +1. If the order m > 2 then m must be a multiple of 4. Given an Hadamard matrix of order 4a in standardized form, remove the first row and first column and convert every −1 to a 0. The resulting 0–1 matrix M is the incidence matrix of a symmetric 2 − (4a − 1, 2a − 1, a − 1) design called an Hadamard 2-design. This construction is reversible, and the incidence matrix of a symmetric 2-design with these parameters can be used to form an Hadamard matrix of order 4a. When a = 2 we obtain the, by now familiar, Fano plane as an Hadamard 2-design. A pairwise balanced design (or PBD) is a set X together with a family of subsets of X (which need not have the same size and may contain repeats) such that every pair of distinct elements of X is contained in exactly λ (a positive integer) subsets. The set X is allowed to be one of the subsets, and if all the subsets are copies of X, the PBD is called trivial. The size of X is v and the number of subsets in the family (counted with multiplicity) is b. Fisher's inequality holds for PBDs: For any non-trivial PBD, v ≤ b. This result also generalizes the famous Erdős–De Bruijn theorem: For a PBD with λ = 1 having no blocks of size 1 or size v, v ≤ b, with equality if and only if the PBD is a projective plane or a near-pencil. == Other combinatorial designs == The Handbook of Combinatorial Designs (Colbourn & Dinitz 2007) has, amongst others, 65 chapters, each devoted to a combinatorial design other than those given above. A partial listing is given below: Association schemes A balanced ternary design BTD(V, B; ρ1, ρ2, R; K, Λ) is an arrangement of V elements into B multisets (blocks), each of cardinality K (K ≤ V), satisfying: Each element appears R = ρ1 + 2ρ2 times altogether, with multiplicity one in exactly ρ1 blocks and multiplicity two in exactly ρ2 blocks. Every pair of distinct elements appears Λ times (counted with multiplicity); that is, if mvb is the multiplicity of the element v in block b, then for every pair of distinct elements v and w, ∑ b = 1 B m v b m w b = Λ {\displaystyle \sum _{b=1}^{B}m_{vb}m_{wb}=\Lambda } . For example, one of the only two nonisomorphic BTD(4,8;2,3,8;4,6)s (blocks are columns) is: The incidence matrix of a BTD (where the entries are the multiplicities of the elements in the blocks) can be used to form a ternary error-correcting code analogous to the way binary codes are formed from the incidence matrices of BIBDs. A balanced tournament design of order n (a BTD(n)) is an arrangement of all the distinct unordered pairs of a 2n-set V into an n × (2n − 1) array such that every element of V appears precisely once in each column, and every element of V appears at most twice in each row. An example of a BTD(3) is given by The columns of a BTD(n) provide a 1-factorization of the complete graph on 2n vertices, K2n. BTD(n)s can be used to schedule round-robin tournaments: the rows represent the locations, the columns the rounds of play and the entries are the competing players or teams. Bent functions Costas arrays Factorial designs A frequency square (F-square) is a higher order generalization of a Latin square. Let S = {s1,s2, ..., sm} be a set of distinct symbols and (λ1, λ2, ...,λm) a frequency vector of positive integers. A frequency square of order n is an n × n array in which each symbol si occurs λi times, i = 1,2,...,m, in each row and column. 
The order n = λ1 + λ2 + ... + λm. An F-square is in standard form if in the first row and column, all occurrences of si precede those of sj whenever i < j. A frequency square F1 of order n based on the set {s1,s2, ..., sm} with frequency vector (λ1, λ2, ...,λm) and a frequency square F2, also of order n, based on the set {t1,t2, ..., tk} with frequency vector (μ1, μ2, ...,μk) are orthogonal if every ordered pair (si, tj) appears precisely λiμj times when F1 and F2 are superimposed. Hall triple systems (HTSs) are Steiner triple systems (STSs) (but the blocks are called lines) with the property that the substructure generated by two intersecting lines is isomorphic to the finite affine plane AG(2,3). Any affine space AG(n,3) gives an example of an HTS. Such an HTS is an affine HTS. Nonaffine HTSs also exist. The number of points of an HTS is 3m for some integer m ≥ 2. Nonaffine HTSs exist for any m ≥ 4 and do not exist for m = 2 or 3. Every Steiner triple system is equivalent to a Steiner quasigroup (idempotent, commutative and satisfying (xy)y = x for all x and y). A Hall triple system is equivalent to a Steiner quasigroup which is distributive, that is, satisfies a(xy) = (ax)(ay) for all a,x,y in the quasigroup. Let S be a set of 2n elements. A Howell design, H(s,2n) (on symbol set S) is an s × s array such that: Each cell of the array is either empty or contains an unordered pair from S, Each symbol occurs exactly once in each row and column of the array, and Every unordered pair of symbols occurs in at most one cell of the array. An example of an H(4,6) is An H(2n − 1, 2n) is a Room square of side 2n − 1, and thus the Howell designs generalize the concept of Room squares. The pairs of symbols in the cells of a Howell design can be thought of as the edges of an s regular graph on 2n vertices, called the underlying graph of the Howell design. Cyclic Howell designs are used as Howell movements in duplicate bridge tournaments. The rows of the design represent the rounds, the columns represent the boards, and the diagonals represent the tables. Linear spaces An (n,k,p,t)-lotto design is an n-set V of elements together with a set β of k-element subsets of V (blocks), so that for any p-subset P of V, there is a block B in β for which |P ∩ B | ≥ t. L(n,k,p,t) denotes the smallest number of blocks in any (n,k,p,t)-lotto design. The following is a (7,5,4,3)-lotto design with the smallest possible number of blocks: {1,2,3,4,7} {1,2,5,6,7} {3,4,5,6,7}. Lotto designs model any lottery that is run in the following way: Individuals purchase tickets consisting of k numbers chosen from a set of n numbers. At a certain point the sale of tickets is stopped and a set of p numbers is randomly selected from the n numbers. These are the winning numbers. If any sold ticket contains t or more of the winning numbers, a prize is given to the ticket holder. Larger prizes go to tickets with more matches. The value of L(n,k,p,t) is of interest to both gamblers and researchers, as this is the smallest number of tickets that are needed to be purchased in order to guarantee a prize. The Hungarian Lottery is a (90,5,5,t)-lotto design and it is known that L(90,5,5,2) = 100. Lotteries with parameters (49,6,6,t) are also popular worldwide and it is known that L(49,6,6,2) = 19. In general though, these numbers are hard to calculate and remain unknown. A geometric construction of one such design is given in Transylvanian lottery. 
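The three-block (7,5,4,3)-lotto design quoted above can be verified exhaustively, since every 4-subset of the seven numbers must meet some block in at least t = 3 elements; the short sketch below performs that check.

```python
from itertools import combinations

blocks = [{1, 2, 3, 4, 7}, {1, 2, 5, 6, 7}, {3, 4, 5, 6, 7}]    # the (7,5,4,3)-lotto design above

assert all(any(len(set(P) & B) >= 3 for B in blocks)
           for P in combinations(range(1, 8), 4))
print("every 4-subset is matched, so L(7,5,4,3) <=", len(blocks))
```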
Magic squares A (v,k,λ)-Mendelsohn design, or MD(v,k,λ), is a v-set V and a collection β of ordered k-tuples of distinct elements of V (called blocks), such that each ordered pair (x,y) with x ≠ y of elements of V is cyclically adjacent in λ blocks. The ordered pair (x,y) of distinct elements is cyclically adjacent in a block if the elements appear in the block as (...,x,y,...) or (y,...,x). An MD(v,3,λ) is a Mendelsohn triple system, MTS(v,λ). An example of an MTS(4,1) on V = {0,1,2,3} is: (0,1,2) (1,0,3) (2,1,3) (0,2,3) Any triple system can be made into a Mendelson triple system by replacing the unordered triple {a,b,c} with the pair of ordered triples (a,b,c) and (a,c,b), but as the example shows, the converse of this statement is not true. If (Q,∗) is an idempotent semisymmetric quasigroup, that is, x ∗ x = x (idempotent) and x ∗ (y ∗ x) = y (semisymmetric) for all x, y in Q, let β = {(x,y,x ∗ y): x, y in Q}. Then (Q, β) is a Mendelsohn triple system MTS(|Q|,1). This construction is reversible. Orthogonal arrays A quasi-3 design is a symmetric design (SBIBD) in which each triple of blocks intersect in either x or y points, for fixed x and y called the triple intersection numbers (x < y). Any symmetric design with λ ≤ 2 is a quasi-3 design with x = 0 and y = 1. The point-hyperplane design of PG(n,q) is a quasi-3 design with x = (qn−2 − 1)/(q − 1) and y = λ = (qn−1 − 1)/(q − 1). If y = λ for a quasi-3 design, the design is isomorphic to PG(n,q) or a projective plane. A t-(v,k,λ) design D is quasi-symmetric with intersection numbers x and y (x < y) if every two distinct blocks intersect in either x or y points. These designs naturally arise in the investigation of the duals of designs with λ = 1. A non-symmetric (b > v) 2-(v,k,1) design is quasisymmetric with x = 0 and y = 1. A multiple (repeat all blocks a certain number of times) of a symmetric 2-(v,k,λ) design is quasisymmetric with x = λ and y = k. Hadamard 3-designs (extensions of Hadamard 2-designs) are quasisymmetric. Every quasisymmetric block design gives rise to a strongly regular graph (as its block graph), but not all SRGs arise in this way. The incidence matrix of a quasisymmetric 2-(v,k,λ) design with k ≡ x ≡ y (mod 2) generates a binary self-orthogonal code (when bordered if k is odd). Room squares A spherical design is a finite set X of points in a (d − 1)-dimensional sphere such that, for some integer t, the average value on X of every polynomial f ( x 1 , … , x d ) {\displaystyle f(x_{1},\ldots ,x_{d})\ } of total degree at most t is equal to the average value of f on the whole sphere, i.e., the integral of f divided by the area of the sphere. Turán systems An r × n tuscan-k rectangle on n symbols has r rows and n columns such that: each row is a permutation of the n symbols and for any two distinct symbols a and b and for each m from 1 to k, there is at most one row in which b is m steps to the right of a. If r = n and k = 1 these are referred to as Tuscan squares, while if r = n and k = n − 1 they are Florentine squares. A Roman square is a Tuscan square which is also a latin square (these are also known as row complete Latin squares). A Vatican square is a Florentine square which is also a Latin square. The following example is a tuscan-1 square on 7 symbols which is not tuscan-2: A tuscan square on n symbols is equivalent to a decomposition of the complete graph with n vertices into n hamiltonian directed paths. 
In a sequence of visual impressions, one flash card may have some effect on the impression given by the next. This bias can be cancelled by using n sequences corresponding to the rows of an n × n tuscan-1 square. A t-wise balanced design (or t BD) of type t − (v,K,λ) is a v-set X together with a family of subsets of X (called blocks) whose sizes are in the set K, such that every t-subset of distinct elements of X is contained in exactly λ blocks. If K is a set of positive integers strictly between t and v, then the t BD is proper. If all the k-subsets of X for some k are blocks, the t BD is a trivial design. Notice that in the following example of a 3-{12,{4,6},1) design based on the set X = {1,2,...,12}, some pairs appear four times (such as 1,2) while others appear five times (6,12 for instance). 1 2 3 4 5 6 1 2 7 8 1 2 9 11 1 2 10 12 3 5 7 8 3 5 9 11 3 5 10 12 4 6 7 8 4 6 9 11 4 6 10 12 7 8 9 10 11 12 2 3 8 9 2 3 10 7 2 3 11 12 4 1 8 9 4 1 10 7 4 1 11 12 5 6 8 9 5 6 10 7 5 6 11 12 3 4 9 10 3 4 11 8 3 4 7 12 5 2 9 10 5 2 11 8 5 2 7 12 1 6 9 10 1 6 11 8 1 6 7 12 4 5 10 11 4 5 7 9 4 5 8 12 1 3 10 11 1 3 7 9 1 3 8 12 2 6 10 11 2 6 7 9 2 6 8 12 5 1 11 7 5 1 8 10 5 1 9 12 2 4 11 7 2 4 8 10 2 4 9 12 3 6 11 7 3 6 8 10 3 6 9 12 Weighing matrices, A generalization of Hadamard matrices that allows zero entries, are used in some combinatoric designs. In particular, the design of experiments for estimating the individual weights of multiple objects in few trials. A Youden square is a k × v rectangular array (k < v) of v symbols such that each symbol appears exactly once in each row and the symbols appearing in any column form a block of a symmetric (v, k, λ) design, all the blocks of which occur in this manner. A Youden square is a Latin rectangle. The term "square" in the name comes from an older definition which did use a square array. An example of a 4 × 7 Youden square is given by: The seven blocks (columns) form the order 2 biplane (a symmetric (7,4,2)-design). == See also == Algebraic statistics Hypergraph Williamson conjecture == Notes == == References ==
Wikipedia/Combinatorial_design
Journal of Algebraic Combinatorics is a peer-reviewed scientific journal covering algebraic combinatorics. It was established in 1992 and is published by Springer Science+Business Media. The editor-in-chief is Ilias S. Kotsireas (Wilfrid Laurier University). In 2017, the journal's four editors-in-chief and editorial board resigned to protest the publisher's high prices and limited accessibility. They criticized Springer for "double-dipping", that is, charging large subscription fees to libraries in addition to high fees for authors who wished to make their publications open access. The board subsequently started their own open access journal, Algebraic Combinatorics. == Abstracting and indexing == The journal is abstracted and indexed in: According to the Journal Citation Reports, the journal has a 2020 impact factor of 0.875. == References == == External links == Official website
Wikipedia/Journal_of_Algebraic_Combinatorics
In mathematics, Schubert calculus is a branch of algebraic geometry introduced in the nineteenth century by Hermann Schubert in order to solve various counting problems of projective geometry and, as such, is viewed as part of enumerative geometry. Giving it a more rigorous foundation was the aim of Hilbert's 15th problem. It is related to several more modern concepts, such as characteristic classes, and both its algorithmic aspects and applications remain of current interest. The term Schubert calculus is sometimes used to mean the enumerative geometry of linear subspaces of a vector space, which is roughly equivalent to describing the cohomology ring of Grassmannians. Sometimes it is used to mean the more general enumerative geometry of algebraic varieties that are homogenous spaces of simple Lie groups. Even more generally, Schubert calculus is sometimes understood as encompassing the study of analogous questions in generalized cohomology theories. The objects introduced by Schubert are the Schubert cells, which are locally closed sets in a Grassmannian defined by conditions of incidence of a linear subspace in projective space with a given flag. For further details see Schubert variety. The intersection theory of these cells, which can be seen as the product structure in the cohomology ring of the Grassmannian, consisting of associated cohomology classes, allows in particular the determination of cases in which the intersections of cells results in a finite set of points. A key result is that the Schubert cells (or rather, the classes of their Zariski closures, the Schubert cycles or Schubert varieties) span the whole cohomology ring. The combinatorial aspects mainly arise in relation to computing intersections of Schubert cycles. Lifted from the Grassmannian, which is a homogeneous space, to the general linear group that acts on it, similar questions are involved in the Bruhat decomposition and classification of parabolic subgroups (as block triangular matrices). == Construction == Schubert calculus can be constructed using the Chow ring of the Grassmannian, where the generating cycles are represented by geometrically defined data. Denote the Grassmannian of k {\displaystyle k} -planes in a fixed n {\displaystyle n} -dimensional vector space V {\displaystyle V} as G r ( k , V ) {\displaystyle \mathbf {Gr} (k,V)} , and its Chow ring as A ∗ ( G r ( k , V ) ) {\displaystyle A^{*}(\mathbf {Gr} (k,V))} . (Note that the Grassmannian is sometimes denoted G r ( k , n ) {\displaystyle \mathbf {Gr} (k,n)} if the vector space isn't explicitly given or as G ( k − 1 , n − 1 ) {\displaystyle \mathbb {G} (k-1,n-1)} if the ambient space V {\displaystyle V} and its k {\displaystyle k} -dimensional subspaces are replaced by their projectizations.) 
Choosing an (arbitrary) complete flag V = ( V 1 ⊂ ⋯ ⊂ V n − 1 ⊂ V n = V ) , dim ⁡ V i = i , i = 1 , … , n , {\displaystyle {\mathcal {V}}=(V_{1}\subset \cdots \subset V_{n-1}\subset V_{n}=V),\quad \dim {V}_{i}=i,\quad i=1,\dots ,n,} to each weakly decreasing k {\displaystyle k} -tuple of integers a = ( a 1 , … , a k ) {\displaystyle \mathbf {a} =(a_{1},\ldots ,a_{k})} , where n − k ≥ a 1 ≥ a 2 ≥ ⋯ ≥ a k ≥ 0 , {\displaystyle n-k\geq a_{1}\geq a_{2}\geq \cdots \geq a_{k}\geq 0,} i.e., to each partition of weight | a | = ∑ i = 1 k a i , {\displaystyle |\mathbf {a} |=\sum _{i=1}^{k}a_{i},} whose Young diagram fits into the k × ( n − k ) {\displaystyle k\times (n-k)} rectangular one for the partition ( n − k ) k {\displaystyle (n-k)^{k}} , we associate a Schubert variety (or Schubert cycle) Σ a ( V ) ⊂ G r ( k , V ) {\displaystyle \Sigma _{\mathbf {a} }({\mathcal {V}})\subset \mathbf {Gr} (k,V)} , defined as Σ a ( V ) = { w ∈ G r ( k , V ) : dim ⁡ ( V n − k + i − a i ∩ w ) ≥ i for i = 1 , … , k } . {\displaystyle \Sigma _{\mathbf {a} }({\mathcal {V}})=\{w\in \mathbf {Gr} (k,V):\dim(V_{n-k+i-a_{i}}\cap w)\geq i{\text{ for }}i=1,\dots ,k\}.} This is the closure, in the Zariski topology, of the Schubert cell X a ( V ) := { w ∈ G r ( k , V ) : dim ⁡ ( V j ∩ w ) = i for all n − k − a i + i ≤ j ≤ n − k − a i + 1 + i , 1 ≤ j ≤ n } ⊂ Σ a ( V ) , {\displaystyle X_{\mathbf {a} }({\mathcal {V}}):=\{w\in \mathbf {Gr} (k,V):\dim(V_{j}\cap w)=i{\text{ for all }}n-k-a_{i}+i\leq j\leq n-k-a_{i+1}+i,\quad 1\leq j\leq n\}\subset \Sigma _{\mathbf {a} }({\mathcal {V}}),} which is used when considering cellular homology instead of the Chow ring. The latter are disjoint affine spaces, of dimension | a | {\displaystyle |\mathbf {a} |} , whose union is G r ( k , V ) {\displaystyle \mathbf {Gr} (k,V)} . An equivalent characterization of the Schubert cell X a ( V ) {\displaystyle X_{\mathbf {a} }({\mathcal {V}})} may be given in terms of the dual complete flag V ~ = ( V ~ 1 ⊂ V ~ 2 ⋯ ⊂ V ~ n = V ) , {\displaystyle {\tilde {\mathcal {V}}}=({\tilde {V}}_{1}\subset {\tilde {V}}_{2}\cdots \subset {\tilde {V}}_{n}=V),} where V ~ i := V n ∖ V n − i , i = 1 , … , n ( V 0 := ∅ ) . {\displaystyle {\tilde {V}}_{i}:=V_{n}\backslash V_{n-i},\quad i=1,\dots ,n\quad (V_{0}:=\emptyset ).} Then X a ( V ) ⊂ G r ( k , V ) {\displaystyle X_{\mathbf {a} }({\mathcal {V}})\subset \mathbf {Gr} (k,V)} consists of those k {\displaystyle k} -dimensional subspaces w ⊂ V {\displaystyle w\subset V} that have a basis ( W ~ 1 , … , W ~ k ) {\displaystyle ({\tilde {W}}_{1},\dots ,{\tilde {W}}_{k})} consisting of elements W ~ i ∈ V ~ k + a i − i + 1 , i = 1 , … , k {\displaystyle {\tilde {W}}_{i}\in {\tilde {V}}_{k+a_{i}-i+1},\quad i=1,\dots ,k} of the subspaces { V ~ k + a i − i + 1 } i = 1 , … , k . {\displaystyle \{{\tilde {V}}_{k+a_{i}-i+1}\}_{i=1,\dots ,k}.} Since the homology class [ Σ a ( V ) ] ∈ A ∗ ( G r ( k , V ) ) {\displaystyle [\Sigma _{\mathbf {a} }({\mathcal {V}})]\in A^{*}(\mathbf {Gr} (k,V))} , called a Schubert class, does not depend on the choice of complete flag V {\displaystyle {\mathcal {V}}} , it can be written as σ a := [ Σ a ] ∈ A ∗ ( G r ( k , V ) ) . {\displaystyle \sigma _{\mathbf {a} }:=[\Sigma _{\mathbf {a} }]\in A^{*}(\mathbf {Gr} (k,V)).} It can be shown that these classes are linearly independent and generate the Chow ring as their linear span. The associated intersection theory is called Schubert calculus. 
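The partitions indexing the Schubert classes are exactly those whose Young diagram fits in the k × (n − k) rectangle, and a standard count (not spelled out above) is that there are C(n, k) of them, one for each Schubert cell. A short Python sketch, with hypothetical helper names, enumerates them for Gr(2,4):

```python
from math import comb

def partitions_in_box(k, m):
    """All weakly decreasing k-tuples a with m >= a_1 >= ... >= a_k >= 0,
    i.e. partitions whose Young diagram fits in a k x m rectangle."""
    def gen(remaining, bound):
        if remaining == 0:
            yield ()
            return
        for first in range(bound, -1, -1):
            for rest in gen(remaining - 1, first):
                yield (first,) + rest
    return list(gen(k, m))

# Schubert classes of Gr(k, n) are indexed by partitions inside the
# k x (n-k) box; there are C(n, k) of them, one per Schubert cell.
k, n = 2, 4
cells = partitions_in_box(k, n - k)
print(cells)                       # [(2, 2), (2, 1), (2, 0), (1, 1), (1, 0), (0, 0)]
assert len(cells) == comb(n, k)    # 6
```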
For a given sequence a = ( a 1 , … , a j , 0 , … , 0 ) {\displaystyle \mathbf {a} =(a_{1},\ldots ,a_{j},0,\ldots ,0)} with a j > 0 {\displaystyle a_{j}>0} the Schubert class σ ( a 1 , … , a j , 0 , … , 0 ) {\displaystyle \sigma _{(a_{1},\ldots ,a_{j},0,\ldots ,0)}} is usually just denoted σ ( a 1 , … , a j ) {\displaystyle \sigma _{(a_{1},\ldots ,a_{j})}} . The Schubert classes given by a single integer σ a 1 {\displaystyle \sigma _{a_{1}}} , (i.e., a horizontal partition), are called special classes. Using the Giambelli formula below, all the Schubert classes can be generated from these special classes. === Other notational conventions === In some sources, the Schubert cells X a {\displaystyle X_{\mathbf {a} }} and Schubert varieties Σ a {\displaystyle \Sigma _{\mathbf {a} }} are labelled differently, as S λ {\displaystyle S_{\lambda }} and S ¯ λ {\displaystyle {\bar {S}}_{\lambda }} , respectively, where λ {\displaystyle \lambda } is the complementary partition to a {\displaystyle \mathbf {a} } with parts λ i := n − k − a k − i + 1 {\displaystyle \lambda _{i}:=n-k-a_{k-i+1}} , whose Young diagram is the complement of the one for a {\displaystyle \mathbf {a} } within the k × ( n − k ) {\displaystyle k\times (n-k)} rectangular one (reversed, both horizontally and vertically). Another labelling convention for X a {\displaystyle X_{\mathbf {a} }} and Σ a {\displaystyle \Sigma _{\mathbf {a} }} is C L {\displaystyle C_{L}} and C ¯ L {\displaystyle {\bar {C}}_{L}} , respectively, where L = ( L 1 , … , L k ) ⊂ ( 1 , … , n ) {\displaystyle L=(L_{1},\dots ,L_{k})\subset (1,\dots ,n)} is the multi-index defined by L i := n − k − a i + i = λ k − i + 1 + i . {\displaystyle L_{i}:=n-k-a_{i}+i=\lambda _{k-i+1}+i.} The integers ( L 1 , … , L k ) {\displaystyle (L_{1},\dots ,L_{k})} are the pivot locations of the representations of elements of X a {\displaystyle X_{\mathbf {a} }} in reduced matricial echelon form. === Explanation === In order to explain the definition, consider a generic k {\displaystyle k} -plane w ⊂ V {\displaystyle w\subset V} . It will have only a zero intersection with V j {\displaystyle V_{j}} for j ≤ n − k {\displaystyle j\leq n-k} , whereas dim ⁡ ( V j ∩ w ) = i {\displaystyle \dim(V_{j}\cap w)=i} for j = n − k + i ≥ n − k . {\displaystyle j=n-k+i\geq n-k.} For example, in G r ( 4 , 9 ) {\displaystyle \mathbf {Gr} (4,9)} , a 4 {\displaystyle 4} -plane w {\displaystyle w} is the solution space of a system of five independent homogeneous linear equations. These equations will generically span when restricted to a subspace V j {\displaystyle V_{j}} with j = dim ⁡ V j ≤ 5 = 9 − 4 {\displaystyle j=\dim V_{j}\leq 5=9-4} , in which case the solution space (the intersection of V j {\displaystyle V_{j}} with w {\displaystyle w} ) will consist only of the zero vector. However, if dim ⁡ ( V j ) + dim ⁡ ( w ) > n = 9 {\displaystyle \dim(V_{j})+\dim(w)>n=9} , V j {\displaystyle V_{j}} and w {\displaystyle w} will necessarily have nonzero intersection. For example, the expected dimension of intersection of V 6 {\displaystyle V_{6}} and w {\displaystyle w} is 1 {\displaystyle 1} , the intersection of V 7 {\displaystyle V_{7}} and w {\displaystyle w} has expected dimension 2 {\displaystyle 2} , and so on. The definition of a Schubert variety states that the first value of j {\displaystyle j} with dim ⁡ ( V j ∩ w ) ≥ i {\displaystyle \dim(V_{j}\cap w)\geq i} is generically smaller than the expected value n − k + i {\displaystyle n-k+i} by the parameter a i {\displaystyle a_{i}} . 
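The passage between a partition a, its complementary partition λ, and the pivot multi-index L can be automated directly from the formulas above. A minimal Python sketch (the helper name is chosen here for illustration):

```python
def complement_and_pivots(a, k, n):
    """Complementary partition lambda and pivot multi-index L for a Schubert
    class sigma_a in Gr(k, n), following the conventions described above."""
    assert len(a) == k and all(0 <= x <= n - k for x in a)
    lam = tuple(n - k - a[k - 1 - i] for i in range(k))    # lambda_i = n - k - a_{k-i+1}
    L = tuple(n - k - a[i] + (i + 1) for i in range(k))    # L_i = n - k - a_i + i
    # consistency check: L_i = lambda_{k-i+1} + i
    assert all(L[i] == lam[k - 1 - i] + (i + 1) for i in range(k))
    return lam, L

print(complement_and_pivots((1, 0), k=2, n=4))   # ((2, 1), (2, 4))
```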
The k {\displaystyle k} -planes w ⊂ V {\displaystyle w\subset V} given by these constraints then define special subvarieties of G r ( k , n ) {\displaystyle \mathbf {Gr} (k,n)} . === Properties === ==== Inclusion ==== There is a partial ordering on all k {\displaystyle k} -tuples where a ≥ b {\displaystyle \mathbf {a} \geq \mathbf {b} } if a i ≥ b i {\displaystyle a_{i}\geq b_{i}} for every i {\displaystyle i} . This gives the inclusion of Schubert varieties Σ a ⊂ Σ b ⟺ a ≥ b , {\displaystyle \Sigma _{\mathbf {a} }\subset \Sigma _{\mathbf {b} }\iff \mathbf {a} \geq \mathbf {b} ,} showing an increase of the indices corresponds to an even greater specialization of subvarieties. ==== Dimension formula ==== A Schubert variety Σ a {\displaystyle \Sigma _{\mathbf {a} }} has codimension equal to the weight | a | = ∑ a i {\displaystyle |\mathbf {a} |=\sum a_{i}} of the partition a {\displaystyle \mathbf {a} } . Alternatively, in the notational convention S λ {\displaystyle S_{\lambda }} indicated above, its dimension in G r ( k , n ) {\displaystyle \mathbf {Gr} (k,n)} is the weight | λ | = ∑ i = 1 k λ i = k ( n − k ) − | a | . {\displaystyle |\lambda |=\sum _{i=1}^{k}\lambda _{i}=k(n-k)-|\mathbf {a} |.} of the complementary partition λ ⊂ ( n − k ) k {\displaystyle \lambda \subset (n-k)^{k}} in the k × ( n − k ) {\displaystyle k\times (n-k)} dimensional rectangular Young diagram. This is stable under inclusions of Grassmannians. That is, the inclusion i ( k , n ) : G r ( k , C n ) ↪ G r ( k , C n + 1 ) , C n = span { e 1 , … , e n } {\displaystyle i_{(k,n)}:\mathbf {Gr} (k,\mathbf {C} ^{n})\hookrightarrow \mathbf {Gr} (k,\mathbf {C} ^{n+1}),\quad \mathbf {C} ^{n}={\text{span}}\{e_{1},\dots ,e_{n}\}} defined, for w ∈ G r ( k , C n ) {\displaystyle w\in \mathbf {Gr} (k,\mathbf {C} ^{n})} , by i ( k , n ) : w ⊂ C n ↦ w ⊂ C n ⊕ C e n + 1 = C n + 1 {\displaystyle i_{(k,n)}:w\subset \mathbf {C} ^{n}\mapsto w\subset \mathbf {C} ^{n}\oplus \mathbf {C} e_{n+1}=\mathbf {C} ^{n+1}} has the property i ( k , n ) ∗ ( σ a ) = σ a , {\displaystyle i_{(k,n)}^{*}(\sigma _{\mathbf {a} })=\sigma _{\mathbf {a} },} and the inclusion i ~ ( k , n ) : G r ( k , n ) ↪ G r ( k + 1 , n + 1 ) {\displaystyle {\tilde {i}}_{(k,n)}:\mathbf {Gr} (k,n)\hookrightarrow \mathbf {Gr} (k+1,n+1)} defined by adding the extra basis element e n + 1 {\displaystyle e_{n+1}} to each k {\displaystyle k} -plane, giving a ( k + 1 ) {\displaystyle (k+1)} -plane, i ~ ( k , n ) : w ↦ w ⊕ C e n + 1 ⊂ C n ⊕ C e n + 1 = C n + 1 {\displaystyle {\tilde {i}}_{(k,n)}:w\mapsto w\oplus \mathbf {C} e_{n+1}\subset \mathbf {C} ^{n}\oplus \mathbf {C} e_{n+1}=\mathbf {C} ^{n+1}} does as well i ~ ( k , n ) ∗ ( σ a ) = σ a . 
{\displaystyle {\tilde {i}}_{(k,n)}^{*}(\sigma _{\mathbf {a} })=\sigma _{\mathbf {a} }.} Thus, if X a ⊂ G r k ( n ) {\displaystyle X_{\mathbf {a} }\subset \mathbf {Gr} _{k}(n)} and Σ a ⊂ G r k ( n ) {\displaystyle \Sigma _{\mathbf {a} }\subset \mathbf {Gr} _{k}(n)} are a cell and a subvariety in the Grassmannian G r k ( n ) {\displaystyle \mathbf {Gr} _{k}(n)} , they may also be viewed as a cell X a ⊂ G r k ~ ( n ~ ) {\displaystyle X_{\mathbf {a} }\subset \mathbf {Gr} _{\tilde {k}}({\tilde {n}})} and a subvariety Σ a ⊂ G r k ~ ( n ~ ) {\displaystyle \Sigma _{\mathbf {a} }\subset \mathbf {Gr} _{\tilde {k}}({\tilde {n}})} within the Grassmannian G r k ~ ( n ~ ) {\displaystyle \mathbf {Gr} _{\tilde {k}}({\tilde {n}})} for any pair ( k ~ , n ~ ) {\displaystyle ({\tilde {k}},{\tilde {n}})} with k ~ ≥ k {\displaystyle {\tilde {k}}\geq k} and n ~ − k ~ ≥ n − k {\displaystyle {\tilde {n}}-{\tilde {k}}\geq n-k} . === Intersection product === The intersection product was first established using the Pieri and Giambelli formulas. ==== Pieri formula ==== In the special case b = ( b , 0 , … , 0 ) {\displaystyle \mathbf {b} =(b,0,\ldots ,0)} , there is an explicit formula of the product of σ b {\displaystyle \sigma _{b}} with an arbitrary Schubert class σ a 1 , … , a k {\displaystyle \sigma _{a_{1},\ldots ,a_{k}}} given by σ b ⋅ σ a 1 , … , a k = ∑ | c | = | a | + b a i ≤ c i ≤ a i − 1 σ c , {\displaystyle \sigma _{b}\cdot \sigma _{a_{1},\ldots ,a_{k}}=\sum _{\begin{matrix}|c|=|a|+b\\a_{i}\leq c_{i}\leq a_{i-1}\end{matrix}}\sigma _{\mathbf {c} },} where | a | = a 1 + ⋯ + a k {\displaystyle |\mathbf {a} |=a_{1}+\cdots +a_{k}} , | c | = c 1 + ⋯ + c k {\displaystyle |\mathbf {c} |=c_{1}+\cdots +c_{k}} are the weights of the partitions. This is called the Pieri formula, and can be used to determine the intersection product of any two Schubert classes when combined with the Giambelli formula. For example, σ 1 ⋅ σ 4 , 2 , 1 = σ 5 , 2 , 1 + σ 4 , 3 , 1 + σ 4 , 2 , 1 , 1 . {\displaystyle \sigma _{1}\cdot \sigma _{4,2,1}=\sigma _{5,2,1}+\sigma _{4,3,1}+\sigma _{4,2,1,1}.} and σ 2 ⋅ σ 4 , 3 = σ 4 , 3 , 2 + σ 4 , 4 , 1 + σ 5 , 3 , 1 + σ 5 , 4 + σ 6 , 3 {\displaystyle \sigma _{2}\cdot \sigma _{4,3}=\sigma _{4,3,2}+\sigma _{4,4,1}+\sigma _{5,3,1}+\sigma _{5,4}+\sigma _{6,3}} ==== Giambelli formula ==== Schubert classes σ a {\displaystyle \sigma _{\mathbf {a} }} for partitions of any length ℓ ( a ) ≤ k {\displaystyle \ell (\mathbf {a} )\leq k} can be expressed as the determinant of a ( k × k ) {\displaystyle (k\times k)} matrix having the special classes as entries. σ ( a 1 , … , a k ) = | σ a 1 σ a 1 + 1 σ a 1 + 2 ⋯ σ a 1 + k − 1 σ a 2 − 1 σ a 2 σ a 2 + 1 ⋯ σ a 2 + k − 2 σ a 3 − 2 σ a 3 − 1 σ a 3 ⋯ σ a 3 + k − 3 ⋮ ⋮ ⋮ ⋱ ⋮ σ a k − k + 1 σ a k − k + 2 σ a k − k + 3 ⋯ σ a k | {\displaystyle \sigma _{(a_{1},\ldots ,a_{k})}={\begin{vmatrix}\sigma _{a_{1}}&\sigma _{a_{1}+1}&\sigma _{a_{1}+2}&\cdots &\sigma _{a_{1}+k-1}\\\sigma _{a_{2}-1}&\sigma _{a_{2}}&\sigma _{a_{2}+1}&\cdots &\sigma _{a_{2}+k-2}\\\sigma _{a_{3}-2}&\sigma _{a_{3}-1}&\sigma _{a_{3}}&\cdots &\sigma _{a_{3}+k-3}\\\vdots &\vdots &\vdots &\ddots &\vdots \\\sigma _{a_{k}-k+1}&\sigma _{a_{k}-k+2}&\sigma _{a_{k}-k+3}&\cdots &\sigma _{a_{k}}\end{vmatrix}}} This is known as the Giambelli formula. It has the same form as the first Jacobi-Trudi identity, expressing arbitrary Schur functions s a {\displaystyle s_{\mathbf {a} }} as determinants in terms of the complete symmetric functions { h j := s ( j ) } {\displaystyle \{h_{j}:=s_{(j)}\}} . 
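The Pieri formula is easy to implement by enumerating the interlacing partitions c. The sketch below (plain Python, illustrative only) reproduces the second example above and also computes σ1 · σ1, which is used for Gr(2,4) later:

```python
from itertools import product

def pieri(b, a):
    """Pieri rule: sigma_b * sigma_a = sum of sigma_c over partitions c with
    |c| = |a| + b and a_i <= c_i <= a_{i-1} (a_0 treated as unbounded)."""
    a = list(a) + [0]                        # the product may add one new row
    upper = [a[0] + b] + a[:-1]              # c_1 can exceed a_1 by at most b
    target, out = sum(a) + b, []
    for c in product(*(range(lo, hi + 1) for lo, hi in zip(a, upper))):
        if sum(c) == target:
            out.append(tuple(x for x in c if x > 0))
    return out

print(pieri(2, (4, 3)))
# [(4, 3, 2), (4, 4, 1), (5, 3, 1), (5, 4), (6, 3)], matching the example above
print(pieri(1, (1,)))
# [(1, 1), (2,)], i.e. sigma_1 * sigma_1 = sigma_{1,1} + sigma_2
```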
For example, σ 2 , 2 = | σ 2 σ 3 σ 1 σ 2 | = σ 2 2 − σ 1 ⋅ σ 3 {\displaystyle \sigma _{2,2}={\begin{vmatrix}\sigma _{2}&\sigma _{3}\\\sigma _{1}&\sigma _{2}\end{vmatrix}}=\sigma _{2}^{2}-\sigma _{1}\cdot \sigma _{3}} and σ 2 , 1 , 1 = | σ 2 σ 3 σ 4 σ 0 σ 1 σ 2 0 σ 0 σ 1 | . {\displaystyle \sigma _{2,1,1}={\begin{vmatrix}\sigma _{2}&\sigma _{3}&\sigma _{4}\\\sigma _{0}&\sigma _{1}&\sigma _{2}\\0&\sigma _{0}&\sigma _{1}\end{vmatrix}}.} ==== General case ==== The intersection product between any pair of Schubert classes σ a , σ b {\displaystyle \sigma _{\mathbf {a} },\sigma _{\mathbf {b} }} is given by σ a σ b = ∑ c c a b c σ c , {\displaystyle \sigma _{\mathbf {a} }\sigma _{\mathbf {b} }=\sum _{\mathbf {c} }c_{\mathbf {a} \mathbf {b} }^{\mathbf {c} }\sigma _{\mathbf {c} },} where { c a b c } {\displaystyle \{c_{\mathbf {a} \mathbf {b} }^{\mathbf {c} }\}} are the Littlewood-Richardson coefficients. The Pieri formula is a special case of this, when b = ( b , 0 , … , 0 ) {\displaystyle \mathbf {b} =(b,0,\dots ,0)} has length ℓ ( b ) = 1 {\displaystyle \ell (\mathbf {b} )=1} . == Relation with Chern classes == There is an easy description of the cohomology ring, or the Chow ring, of the Grassmannian G r ( k , V ) {\displaystyle \mathbf {Gr} (k,V)} using the Chern classes of two natural vector bundles over G r ( k , V ) {\displaystyle \mathbf {Gr} (k,V)} . We have the exact sequence of vector bundles over G r ( k , V ) {\displaystyle \mathbf {Gr} (k,V)} 0 → T → V _ → Q → 0 {\displaystyle 0\to T\to {\underline {V}}\to Q\to 0} where T {\displaystyle T} is the tautological bundle whose fiber, over any element w ∈ G r ( k , V ) {\displaystyle w\in \mathbf {Gr} (k,V)} is the subspace w ⊂ V {\displaystyle w\subset V} itself, V _ := G r ( k , V ) × V {\displaystyle \,{\underline {V}}:=\mathbf {Gr} (k,V)\times V} is the trivial vector bundle of rank n {\displaystyle n} , with V {\displaystyle V} as fiber and Q {\displaystyle Q} is the quotient vector bundle of rank n − k {\displaystyle n-k} , with V / w {\displaystyle V/w} as fiber. The Chern classes of the bundles T {\displaystyle T} and Q {\displaystyle Q} are c i ( T ) = ( − 1 ) i σ ( 1 ) i , {\displaystyle c_{i}(T)=(-1)^{i}\sigma _{(1)^{i}},} where ( 1 ) i {\displaystyle (1)^{i}} is the partition whose Young diagram consists of a single column of length i {\displaystyle i} and c i ( Q ) = σ i . {\displaystyle c_{i}(Q)=\sigma _{i}.} The tautological sequence then gives the presentation of the Chow ring as A ∗ ( G r ( k , V ) ) = Z [ c 1 ( T ) , … , c k ( T ) , c 1 ( Q ) , … , c n − k ( Q ) ] ( c ( T ) c ( Q ) − 1 ) . {\displaystyle A^{*}(\mathbf {Gr} (k,V))={\frac {\mathbb {Z} [c_{1}(T),\ldots ,c_{k}(T),c_{1}(Q),\ldots ,c_{n-k}(Q)]}{(c(T)c(Q)-1)}}.} == Gr(2,4) == One of the classical examples analyzed is the Grassmannian G r ( 2 , 4 ) {\displaystyle \mathbf {Gr} (2,4)} since it parameterizes lines in P 3 {\displaystyle \mathbb {P} ^{3}} . Using the Chow ring A ∗ ( G r ( 2 , 4 ) ) {\displaystyle A^{*}(\mathbf {Gr} (2,4))} , Schubert calculus can be used to compute the number of lines on a cubic surface. 
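Since the Giambelli determinant has the same shape as the Jacobi–Trudi identity, the displayed example σ2,2 = σ2² − σ1 · σ3 can be checked at the level of symmetric polynomials in two variables (two variables suffice because the partition has two rows). A small sketch assuming SymPy is available:

```python
import sympy as sp

x, y = sp.symbols('x y')

def h(k):
    """Complete homogeneous symmetric polynomial h_k in the two variables x, y."""
    return sp.expand(sum(x**i * y**(k - i) for i in range(k + 1)))

# Giambelli / Jacobi-Trudi for lambda = (2,2):  s_{2,2} = h_2*h_2 - h_1*h_3
lhs = sp.expand(h(2)**2 - h(1)*h(3))

# Independent computation of s_{2,2}(x,y) from the bialternant formula:
# s_{(2,2)} = det([[x^3, x^2], [y^3, y^2]]) / (x - y)
rhs = sp.cancel(sp.Matrix([[x**3, x**2], [y**3, y**2]]).det() / (x - y))

print(sp.simplify(lhs - rhs) == 0)   # True; both equal x**2*y**2
```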
=== Chow ring === The Chow ring has the presentation A ∗ ( G r ( 2 , 4 ) ) = Z [ σ 1 , σ 1 , 1 , σ 2 ] ( ( 1 − σ 1 + σ 1 , 1 ) ( 1 + σ 1 + σ 2 ) − 1 ) {\displaystyle A^{*}(\mathbf {Gr} (2,4))={\frac {\mathbb {Z} [\sigma _{1},\sigma _{1,1},\sigma _{2}]}{((1-\sigma _{1}+\sigma _{1,1})(1+\sigma _{1}+\sigma _{2})-1)}}} and as a graded Abelian group it is given by A 0 ( G r ( 2 , 4 ) ) = Z ⋅ 1 A 2 ( G r ( 2 , 4 ) ) = Z ⋅ σ 1 A 4 ( G r ( 2 , 4 ) ) = Z ⋅ σ 2 ⊕ Z ⋅ σ 1 , 1 A 6 ( G r ( 2 , 4 ) ) = Z ⋅ σ 2 , 1 A 8 ( G r ( 2 , 4 ) ) = Z ⋅ σ 2 , 2 {\displaystyle {\begin{aligned}A^{0}(\mathbf {Gr} (2,4))&=\mathbb {Z} \cdot 1\\A^{2}(\mathbf {Gr} (2,4))&=\mathbb {Z} \cdot \sigma _{1}\\A^{4}(\mathbf {Gr} (2,4))&=\mathbb {Z} \cdot \sigma _{2}\oplus \mathbb {Z} \cdot \sigma _{1,1}\\A^{6}(\mathbf {Gr} (2,4))&=\mathbb {Z} \cdot \sigma _{2,1}\\A^{8}(\mathbf {Gr} (2,4))&=\mathbb {Z} \cdot \sigma _{2,2}\\\end{aligned}}} === Lines on a cubic surface === Recall that a line in P 3 {\displaystyle \mathbb {P} ^{3}} gives a dimension 2 {\displaystyle 2} subspace of A 4 {\displaystyle \mathbb {A} ^{4}} , hence an element of G ( 1 , 3 ) ≅ G r ( 2 , 4 ) {\displaystyle \mathbb {G} (1,3)\cong \mathbf {Gr} (2,4)} . Also, the equation of a line can be given as a section of Γ ( G ( 1 , 3 ) , T ∗ ) {\displaystyle \Gamma (\mathbb {G} (1,3),T^{*})} . Since a cubic surface X {\displaystyle X} is given as a generic homogeneous cubic polynomial, this is given as a generic section s ∈ Γ ( G ( 1 , 3 ) , Sym 3 ( T ∗ ) ) {\displaystyle s\in \Gamma (\mathbb {G} (1,3),{\text{Sym}}^{3}(T^{*}))} . A line L ⊂ P 3 {\displaystyle L\subset \mathbb {P} ^{3}} is a subvariety of X {\displaystyle X} if and only if the section vanishes on [ L ] ∈ G ( 1 , 3 ) {\displaystyle [L]\in \mathbb {G} (1,3)} . Therefore, the Euler class of Sym 3 ( T ∗ ) {\displaystyle {\text{Sym}}^{3}(T^{*})} can be integrated over G ( 1 , 3 ) {\displaystyle \mathbb {G} (1,3)} to get the number of points where the generic section vanishes on G ( 1 , 3 ) {\displaystyle \mathbb {G} (1,3)} . In order to get the Euler class, the total Chern class of T ∗ {\displaystyle T^{*}} must be computed, which is given as c ( T ∗ ) = 1 + σ 1 + σ 1 , 1 {\displaystyle c(T^{*})=1+\sigma _{1}+\sigma _{1,1}} The splitting formula then reads as the formal equation c ( T ∗ ) = ( 1 + α ) ( 1 + β ) = 1 + α + β + α ⋅ β , {\displaystyle {\begin{aligned}c(T^{*})&=(1+\alpha )(1+\beta )\\&=1+\alpha +\beta +\alpha \cdot \beta \end{aligned}},} where c ( L ) = 1 + α {\displaystyle c({\mathcal {L}})=1+\alpha } and c ( M ) = 1 + β {\displaystyle c({\mathcal {M}})=1+\beta } for formal line bundles L , M {\displaystyle {\mathcal {L}},{\mathcal {M}}} . The splitting equation gives the relations σ 1 = α + β {\displaystyle \sigma _{1}=\alpha +\beta } and σ 1 , 1 = α ⋅ β {\displaystyle \sigma _{1,1}=\alpha \cdot \beta } . 
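As a check on the splitting computation completed just below, the expansion of c4(Sym³(T*)) and the resulting count of 27 can be verified symbolically. A sketch assuming SymPy; the rewriting in σ1, σ1,1 and the final integration step use the relations just stated:

```python
import sympy as sp

alpha, beta, s1, s11 = sp.symbols('alpha beta sigma1 sigma11')

# Top Chern class of Sym^3(T*) under the splitting c(T*) = (1 + alpha)(1 + beta)
c4 = sp.expand(3*alpha * (2*alpha + beta) * (alpha + 2*beta) * 3*beta)

# Rewrite in terms of sigma_1 = alpha + beta and sigma_{1,1} = alpha*beta
candidate = (9*s11*(2*s1**2 + s11)).subs({s1: alpha + beta, s11: alpha*beta})
print(sp.expand(c4 - candidate) == 0)   # True: c4 = 9*sigma_{1,1}*(2*sigma_1^2 + sigma_{1,1})

# In A*(Gr(2,4)): sigma_{1,1}*sigma_1^2 = sigma_{2,2} and sigma_{1,1}^2 = sigma_{2,2},
# and the point class sigma_{2,2} integrates to 1, so the number of lines is
print(9*(2*1 + 1))                      # 27
```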
Since Sym 3 ( T ∗ ) {\displaystyle {\text{Sym}}^{3}(T^{*})} can be viewed as the direct sum of formal line bundles Sym 3 ( T ∗ ) = L ⊗ 3 ⊕ ( L ⊗ 2 ⊗ M ) ⊕ ( L ⊗ M ⊗ 2 ) ⊕ M ⊗ 3 {\displaystyle {\text{Sym}}^{3}(T^{*})={\mathcal {L}}^{\otimes 3}\oplus ({\mathcal {L}}^{\otimes 2}\otimes {\mathcal {M}})\oplus ({\mathcal {L}}\otimes {\mathcal {M}}^{\otimes 2})\oplus {\mathcal {M}}^{\otimes 3}} whose total Chern class is c ( Sym 3 ( T ∗ ) ) = ( 1 + 3 α ) ( 1 + 2 α + β ) ( 1 + α + 2 β ) ( 1 + 3 β ) , {\displaystyle c({\text{Sym}}^{3}(T^{*}))=(1+3\alpha )(1+2\alpha +\beta )(1+\alpha +2\beta )(1+3\beta ),} it follows that c 4 ( Sym 3 ( T ∗ ) ) = 3 α ( 2 α + β ) ( α + 2 β ) 3 β = 9 α β ( 2 ( α + β ) 2 + α β ) = 9 σ 1 , 1 ( 2 σ 1 2 + σ 1 , 1 ) = 27 σ 2 , 2 , {\displaystyle {\begin{aligned}c_{4}({\text{Sym}}^{3}(T^{*}))&=3\alpha (2\alpha +\beta )(\alpha +2\beta )3\beta \\&=9\alpha \beta (2(\alpha +\beta )^{2}+\alpha \beta )\\&=9\sigma _{1,1}(2\sigma _{1}^{2}+\sigma _{1,1})\\&=27\sigma _{2,2}\,,\end{aligned}}} using the fact that σ 1 , 1 ⋅ σ 1 2 = σ 2 , 1 σ 1 = σ 2 , 2 {\displaystyle \sigma _{1,1}\cdot \sigma _{1}^{2}=\sigma _{2,1}\sigma _{1}=\sigma _{2,2}} and σ 1 , 1 ⋅ σ 1 , 1 = σ 2 , 2 . {\displaystyle \sigma _{1,1}\cdot \sigma _{1,1}=\sigma _{2,2}.} Since σ 2 , 2 {\displaystyle \sigma _{2,2}} is the top class, the integral is then ∫ G ( 1 , 3 ) 27 σ 2 , 2 = 27. {\displaystyle \int _{\mathbb {G} (1,3)}27\sigma _{2,2}=27.} Therefore, there are 27 {\displaystyle 27} lines on a cubic surface. == See also == Enumerative geometry Chow ring Intersection theory Grassmannian Giambelli's formula Pieri's formula Chern class Quintic threefold Mirror symmetry conjecture == References == Summer school notes http://homepages.math.uic.edu/~coskun/poland.html Phillip Griffiths and Joseph Harris (1978), Principles of Algebraic Geometry, Chapter 1.5 Kleiman, Steven (1976). "Rigorous foundations of Schubert's enumerative calculus". In Felix E. Browder (ed.). Mathematical Developments Arising from Hilbert Problems. Proceedings of Symposia in Pure Mathematics. Vol. XXVIII.2. American Mathematical Society. pp. 445–482. ISBN 0-8218-1428-1. Steven Kleiman and Dan Laksov (1972). "Schubert calculus" (PDF). American Mathematical Monthly. 79 (10): 1061–1082. doi:10.2307/2317421. JSTOR 2317421. Sottile, Frank (2001) [1994], "Schubert calculus", Encyclopedia of Mathematics, EMS Press David Eisenbud and Joseph Harris (2016), "3264 and All That: A Second Course in Algebraic Geometry". Fulton, William (1997). Young Tableaux. With Applications to Representation Theory and Geometry, Chapts. 5 and 9.4. London Mathematical Society Student Texts. Vol. 35. Cambridge, U.K.: Cambridge University Press. doi:10.1017/CBO9780511626241. ISBN 9780521567244. Fulton, William (1998). Intersection Theory. Berlin, New York: Springer-Verlag. ISBN 978-0-387-98549-7. MR 1644323.
Wikipedia/Schubert_calculus
The Turán graph, denoted by T ( n , r ) {\displaystyle T(n,r)} , is a complete multipartite graph; it is formed by partitioning a set of n {\displaystyle n} vertices into r {\displaystyle r} subsets, with sizes as equal as possible, and then connecting two vertices by an edge if and only if they belong to different subsets. Where q {\displaystyle q} and s {\displaystyle s} are the quotient and remainder of dividing n {\displaystyle n} by r {\displaystyle r} (so n = q r + s {\displaystyle n=qr+s} ), the graph is of the form K q + 1 , q + 1 , … , q , q {\displaystyle K_{q+1,q+1,\ldots ,q,q}} , and the number of edges is ( 1 − 1 r ) n 2 − s 2 2 + ( s 2 ) {\displaystyle \left(1-{\frac {1}{r}}\right){\frac {n^{2}-s^{2}}{2}}+{s \choose 2}} . For r ≤ 7 {\displaystyle r\leq 7} , this edge count can be more succinctly stated as ⌊ ( 1 − 1 r ) n 2 2 ⌋ {\displaystyle \left\lfloor \left(1-{\frac {1}{r}}\right){\frac {n^{2}}{2}}\right\rfloor } . The graph has s {\displaystyle s} subsets of size q + 1 {\displaystyle q+1} , and r − s {\displaystyle r-s} subsets of size q {\displaystyle q} ; each vertex has degree n − q − 1 {\displaystyle n-q-1} or n − q {\displaystyle n-q} . It is a regular graph if n {\displaystyle n} is divisible by r {\displaystyle r} (i.e. when s = 0 {\displaystyle s=0} ). == Turán's theorem == Turán graphs are named after Pál Turán, who used them to prove Turán's theorem, an important result in extremal graph theory. By the pigeonhole principle, every set of r + 1 vertices in the Turán graph includes two vertices in the same partition subset; therefore, the Turán graph does not contain a clique of size r + 1. According to Turán's theorem, the Turán graph has the maximum possible number of edges among all (r + 1)-clique-free graphs with n vertices. Keevash & Sudakov (2003) show that the Turán graph is also the only (r + 1)-clique-free graph of order n in which every subset of αn vertices spans at least r − 1 3 r ( 2 α − 1 ) n 2 {\displaystyle {\frac {r\,{-}\,1}{3r}}(2\alpha -1)n^{2}} edges, if α is sufficiently close to 1. The Erdős–Stone theorem extends Turán's theorem by bounding the number of edges in a graph that does not have a fixed Turán graph as a subgraph. Via this theorem, similar bounds in extremal graph theory can be proven for any excluded subgraph, depending on the chromatic number of the subgraph. == Special cases == Several choices of the parameter r in a Turán graph lead to notable graphs that have been independently studied. The Turán graph T(2n,n) can be formed by removing a perfect matching from a complete graph K2n. As Roberts (1969) showed, this graph has boxicity exactly n; it is sometimes known as the Roberts graph. This graph is also the 1-skeleton of an n-dimensional cross-polytope; for instance, the graph T(6,3) = K2,2,2 is the octahedral graph, the graph of the regular octahedron. If n couples go to a party, and each person shakes hands with every person except his or her partner, then this graph describes the set of handshakes that take place; for this reason, it is also called the cocktail party graph. The Turán graph T(n,2) is a complete bipartite graph and, when n is even, a Moore graph. When r is a divisor of n, the Turán graph is symmetric and strongly regular, although some authors consider Turán graphs to be a trivial case of strong regularity and therefore exclude them from the definition of a strongly regular graph. The class of Turán graphs can have exponentially many maximal cliques, meaning this class does not have few cliques. 
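The edge count formula can be confirmed by constructing the graph directly. A small Python sketch (helper names are illustrative, not from a graph library):

```python
from itertools import combinations
from math import comb

def turan_graph(n, r):
    """Vertex parts and edge set of the Turán graph T(n, r)."""
    q, s = divmod(n, r)
    sizes = [q + 1] * s + [q] * (r - s)          # s parts of size q+1, r-s of size q
    parts, v = [], 0
    for size in sizes:
        parts.append(list(range(v, v + size)))
        v += size
    # every pair of vertices in different parts is joined by an edge
    edges = [(a, b) for p1, p2 in combinations(parts, 2) for a in p1 for b in p2]
    return parts, edges

n, r = 13, 4
parts, edges = turan_graph(n, r)
q, s = divmod(n, r)
formula = (1 - 1/r) * (n**2 - s**2) / 2 + comb(s, 2)   # the displayed edge-count formula
print(len(edges), formula)    # 63 63.0
```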
For example, the Turán graph T ( n , ⌈ n / 3 ⌉ ) {\displaystyle T(n,\lceil n/3\rceil )} has 3^a 2^b maximal cliques, where 3a + 2b = n and b ≤ 2; each maximal clique is formed by choosing one vertex from each partition subset. This is the largest number of maximal cliques possible among all n-vertex graphs regardless of the number of edges in the graph; these graphs are sometimes called Moon–Moser graphs. == Other properties == Every Turán graph is a cograph; that is, it can be formed from individual vertices by a sequence of disjoint union and complement operations. Specifically, such a sequence can begin by forming each of the independent sets of the Turán graph as a disjoint union of isolated vertices. Then, the overall graph is the complement of the disjoint union of the complements of these independent sets. Chao & Novacky (1982) show that the Turán graphs are chromatically unique: no other graphs have the same chromatic polynomials. Nikiforov (2005) uses Turán graphs to supply a lower bound for the sum of the kth eigenvalues of a graph and its complement. Falls, Powell & Snoeyink (2003) develop an efficient algorithm for finding clusters of orthologous groups of genes in genome data, by representing the data as a graph and searching for large Turán subgraphs. Turán graphs also have some interesting properties related to geometric graph theory. Pór & Wood (2005) give a lower bound of Ω((rn)^{3/4}) on the volume of any three-dimensional grid embedding of the Turán graph. Witsenhausen (1974) conjectures that the maximum sum of squared distances, among n points with unit diameter in R^d, is attained for a configuration formed by embedding a Turán graph onto the vertices of a regular simplex. An n-vertex graph G is a subgraph of a Turán graph T(n,r) if and only if G admits an equitable coloring with r colors. The partition of the Turán graph into independent sets corresponds to the partition of G into color classes. In particular, the Turán graph is the unique maximal n-vertex graph with an r-color equitable coloring. == Notes == == References == == External links == Weisstein, Eric W. "Cocktail Party Graph". MathWorld. Weisstein, Eric W. "Octahedral Graph". MathWorld. Weisstein, Eric W. "Turán Graph". MathWorld.
Wikipedia/Turán_graph
In mathematics, the representation theory of the symmetric group is a particular case of the representation theory of finite groups, for which a concrete and detailed theory can be obtained. This has a large area of potential applications, from symmetric function theory to quantum chemistry studies of atoms, molecules and solids. The symmetric group Sn has order n!. Its conjugacy classes are labeled by partitions of n. Therefore according to the representation theory of a finite group, the number of inequivalent irreducible representations, over the complex numbers, is equal to the number of partitions of n. Unlike the general situation for finite groups, there is in fact a natural way to parametrize irreducible representations by the same set that parametrizes conjugacy classes, namely by partitions of n or equivalently Young diagrams of size n. Each such irreducible representation can in fact be realized over the integers (every permutation acting by a matrix with integer coefficients); it can be explicitly constructed by computing the Young symmetrizers acting on a space generated by the Young tableaux of shape given by the Young diagram. The dimension d λ {\displaystyle d_{\lambda }} of the representation that corresponds to the Young diagram λ {\displaystyle \lambda } is given by the hook length formula. To each irreducible representation ρ we can associate an irreducible character, χρ. To compute χρ(π) where π is a permutation, one can use the combinatorial Murnaghan–Nakayama rule . Note that χρ is constant on conjugacy classes, that is, χρ(π) = χρ(σ−1πσ) for all permutations σ. Over other fields the situation can become much more complicated. If the field K has characteristic equal to zero or greater than n then by Maschke's theorem the group algebra KSn is semisimple. In these cases the irreducible representations defined over the integers give the complete set of irreducible representations (after reduction modulo the characteristic if necessary). However, the irreducible representations of the symmetric group are not known in arbitrary characteristic. In this context it is more usual to use the language of modules rather than representations. The representation obtained from an irreducible representation defined over the integers by reducing modulo the characteristic will not in general be irreducible. The modules so constructed are called Specht modules, and every irreducible does arise inside some such module. There are now fewer irreducibles, and although they can be classified they are very poorly understood. For example, even their dimensions are not known in general. The determination of the irreducible modules for the symmetric group over an arbitrary field is widely regarded as one of the most important open problems in representation theory. == Low-dimensional representations == === Symmetric groups === The lowest-dimensional representations of the symmetric groups can be described explicitly, and over arbitrary fields. The smallest two degrees in characteristic zero are described here: Every symmetric group has a one-dimensional representation called the trivial representation, where every element acts as the one by one identity matrix. For n ≥ 2, there is another irreducible representation of degree 1, called the sign representation or alternating character, which takes a permutation to the one by one matrix with entry ±1 based on the sign of the permutation. 
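The hook length formula mentioned above is short enough to implement directly, and summing the squares of the resulting dimensions recovers n!, as it must for a full set of irreducible representations. A minimal Python sketch (illustrative helper names):

```python
from math import factorial

def hook_length_dimension(lam):
    """Dimension of the irreducible S_n representation labelled by the
    partition lam, via the hook length formula d = n! / prod(hooks)."""
    n = sum(lam)
    prod = 1
    for i, row in enumerate(lam):
        for j in range(row):
            arm = row - j - 1
            leg = sum(1 for r in lam[i + 1:] if r > j)
            prod *= arm + leg + 1
    return factorial(n) // prod

def partitions(n, maxpart=None):
    """All partitions of n in reverse lexicographic order."""
    maxpart = n if maxpart is None else maxpart
    if n == 0:
        yield ()
        return
    for first in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

n = 5
dims = {lam: hook_length_dimension(lam) for lam in partitions(n)}
print(dims)
# {(5,): 1, (4, 1): 4, (3, 2): 5, (3, 1, 1): 6, (2, 2, 1): 5, (2, 1, 1, 1): 4, (1, 1, 1, 1, 1): 1}
assert sum(d**2 for d in dims.values()) == factorial(n)   # sum of squares = 5! = 120
```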
These are the only one-dimensional representations of the symmetric groups, as one-dimensional representations are abelian, and the abelianization of the symmetric group is C2, the cyclic group of order 2. For all n, there is an n-dimensional representation of the symmetric group of order n!, called the natural permutation representation, which consists of permuting n coordinates. This has the trivial subrepresentation consisting of vectors whose coordinates are all equal. The orthogonal complement consists of those vectors whose coordinates sum to zero, and when n ≥ 2, the representation on this subspace is an (n − 1)-dimensional irreducible representation, called the standard representation. Another (n − 1)-dimensional irreducible representation is found by tensoring with the sign representation. An exterior power Λ k V {\displaystyle \Lambda ^{k}V} of the standard representation V {\displaystyle V} is irreducible provided 0 ≤ k ≤ n − 1 {\displaystyle 0\leq k\leq n-1} (Fulton & Harris 2004). For n ≥ 7, these are the lowest-dimensional irreducible representations of Sn – all other irreducible representations have dimension at least n. However for n = 4, the surjection from S4 to S3 allows S4 to inherit a two-dimensional irreducible representation. For n = 6, the exceptional transitive embedding of S5 into S6 produces another pair of five-dimensional irreducible representations. === Alternating groups === The representation theory of the alternating groups is similar, though the sign representation disappears. For n ≥ 7, the lowest-dimensional irreducible representations are the trivial representation in dimension one, and the (n − 1)-dimensional representation from the other summand of the permutation representation, with all other irreducible representations having higher dimension, but there are exceptions for smaller n. The alternating groups for n ≥ 5 have only one one-dimensional irreducible representation, the trivial representation. For n = 3, 4 there are two additional one-dimensional irreducible representations, corresponding to maps to the cyclic group of order 3: A3 ≅ C3 and A4 → A4/V ≅ C3. For n ≥ 7, there is just one irreducible representation of degree n − 1, and this is the smallest degree of a non-trivial irreducible representation. For n = 3 the obvious analogue of the (n − 1)-dimensional representation is reducible – the permutation representation coincides with the regular representation, and thus breaks up into the three one-dimensional representations, as A3 ≅ C3 is abelian; see the discrete Fourier transform for representation theory of cyclic groups. For n = 4, there is just one n − 1 irreducible representation, but there are the exceptional irreducible representations of dimension 1. For n = 5, there are two dual irreducible representations of dimension 3, corresponding to its action as icosahedral symmetry. For n = 6, there is an extra irreducible representation of dimension 5 corresponding to the exceptional transitive embedding of A5 in A6. 
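The decomposition of the natural permutation representation into the trivial and standard representations can be confirmed by a standard character computation, not carried out in the text: the character of the permutation representation counts fixed points, its inner product with itself is 2, and its inner product with the trivial character is 1. A short Python sketch for n = 4:

```python
from itertools import permutations
from math import factorial

def fixed_points(p):
    """Trace of the permutation matrix of p, i.e. its number of fixed points."""
    return sum(1 for i, x in enumerate(p) if i == x)

n = 4
perms = list(permutations(range(n)))

inner_self = sum(fixed_points(p)**2 for p in perms) / factorial(n)
inner_triv = sum(fixed_points(p) for p in perms) / factorial(n)
print(inner_self, inner_triv)   # 2.0 1.0
# <chi, chi> = 2 and <chi, trivial> = 1, so chi = trivial + one other irreducible:
# the (n-1)-dimensional standard representation described above.
```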
== Tensor products of representations == === Kronecker coefficients === The tensor product of two representations of S n {\displaystyle S_{n}} corresponding to the Young diagrams λ , μ {\displaystyle \lambda ,\mu } is a combination of irreducible representations of S n {\displaystyle S_{n}} , V λ ⊗ V μ ≅ ∑ ν C λ , μ , ν V ν {\displaystyle V_{\lambda }\otimes V_{\mu }\cong \sum _{\nu }C_{\lambda ,\mu ,\nu }V_{\nu }} The coefficients C λ μ ν ∈ N {\displaystyle C_{\lambda \mu \nu }\in \mathbb {N} } are called the Kronecker coefficients of the symmetric group. They can be computed from the characters of the representations (Fulton & Harris 2004): C λ , μ , ν = ∑ ρ 1 z ρ χ λ ( C ρ ) χ μ ( C ρ ) χ ν ( C ρ ) {\displaystyle C_{\lambda ,\mu ,\nu }=\sum _{\rho }{\frac {1}{z_{\rho }}}\chi _{\lambda }(C_{\rho })\chi _{\mu }(C_{\rho })\chi _{\nu }(C_{\rho })} The sum is over partitions ρ {\displaystyle \rho } of n {\displaystyle n} , with C ρ {\displaystyle C_{\rho }} the corresponding conjugacy classes. The values of the characters χ λ ( C ρ ) {\displaystyle \chi _{\lambda }(C_{\rho })} can be computed using the Frobenius formula. The coefficients z ρ {\displaystyle z_{\rho }} are z ρ = ∏ j = 0 n j i j i j ! = n ! | C ρ | {\displaystyle z_{\rho }=\prod _{j=0}^{n}j^{i_{j}}i_{j}!={\frac {n!}{|C_{\rho }|}}} where i j {\displaystyle i_{j}} is the number of times j {\displaystyle j} appears in ρ {\displaystyle \rho } , so that ∑ i j j = n {\displaystyle \sum i_{j}j=n} . A few examples, written in terms of Young diagrams (Hamermesh 1989): ( n − 1 , 1 ) ⊗ ( n − 1 , 1 ) ≅ ( n ) + ( n − 1 , 1 ) + ( n − 2 , 2 ) + ( n − 2 , 1 , 1 ) {\displaystyle (n-1,1)\otimes (n-1,1)\cong (n)+(n-1,1)+(n-2,2)+(n-2,1,1)} ( n − 1 , 1 ) ⊗ ( n − 2 , 2 ) ≅ n > 4 ( n − 1 , 1 ) + ( n − 2 , 2 ) + ( n − 2 , 1 , 1 ) + ( n − 3 , 3 ) + ( n − 3 , 2 , 1 ) {\displaystyle (n-1,1)\otimes (n-2,2){\underset {n>4}{\cong }}(n-1,1)+(n-2,2)+(n-2,1,1)+(n-3,3)+(n-3,2,1)} ( n − 1 , 1 ) ⊗ ( n − 2 , 1 , 1 ) ≅ ( n − 1 , 1 ) + ( n − 2 , 2 ) + ( n − 2 , 1 , 1 ) + ( n − 3 , 2 , 1 ) + ( n − 3 , 1 , 1 , 1 ) {\displaystyle (n-1,1)\otimes (n-2,1,1)\cong (n-1,1)+(n-2,2)+(n-2,1,1)+(n-3,2,1)+(n-3,1,1,1)} ( n − 2 , 2 ) ⊗ ( n − 2 , 2 ) ≅ ( n ) + ( n − 1 , 1 ) + 2 ( n − 2 , 2 ) + ( n − 2 , 1 , 1 ) + ( n − 3 , 3 ) + 2 ( n − 3 , 2 , 1 ) + ( n − 3 , 1 , 1 , 1 ) + ( n − 4 , 4 ) + ( n − 4 , 3 , 1 ) + ( n − 4 , 2 , 2 ) {\displaystyle {\begin{aligned}(n-2,2)\otimes (n-2,2)\cong &(n)+(n-1,1)+2(n-2,2)+(n-2,1,1)+(n-3,3)\\&+2(n-3,2,1)+(n-3,1,1,1)+(n-4,4)+(n-4,3,1)+(n-4,2,2)\end{aligned}}} There is a simple rule for computing ( n − 1 , 1 ) ⊗ λ {\displaystyle (n-1,1)\otimes \lambda } for any Young diagram λ {\displaystyle \lambda } (Hamermesh 1989): the result is the sum of all Young diagrams that are obtained from λ {\displaystyle \lambda } by removing one box and then adding one box, where the coefficients are one except for λ {\displaystyle \lambda } itself, whose coefficient is # { λ i } − 1 {\displaystyle \#\{\lambda _{i}\}-1} , i.e., the number of different row lengths minus one. A constraint on the irreducible constituents of V λ ⊗ V μ {\displaystyle V_{\lambda }\otimes V_{\mu }} is (James & Kerber 1981) C λ , μ , ν > 0 ⟹ | d λ − d μ | ≤ d ν ≤ d λ + d μ {\displaystyle C_{\lambda ,\mu ,\nu }>0\implies |d_{\lambda }-d_{\mu }|\leq d_{\nu }\leq d_{\lambda }+d_{\mu }} where the depth d λ = n − λ 1 {\displaystyle d_{\lambda }=n-\lambda _{1}} of a Young diagram is the number of boxes that do not belong to the first row. 
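The character formula for the Kronecker coefficients can be exercised on S3, whose character table is small enough to write down by hand (the table in the sketch is the standard one and is supplied here as an assumption, not taken from the text). For n = 3 the first example above reads (2,1) ⊗ (2,1) ≅ (3) + (2,1) + (1,1,1), since the (n − 2, 2) term does not exist. A Python sketch:

```python
from fractions import Fraction

# Character table of S_3; classes indexed by cycle type, with sizes |C_rho|.
# rho:              (1,1,1)  (2,1)  (3,)
class_size =       [1,       3,     2]
chi = {
    (3,):      [1,  1,  1],   # trivial
    (2, 1):    [2,  0, -1],   # standard (n-1,1)
    (1, 1, 1): [1, -1,  1],   # sign
}
n_fact = 6

def kronecker(lam, mu, nu):
    """C_{lam,mu,nu} = sum over rho of (1/z_rho) chi_lam chi_mu chi_nu,
    using 1/z_rho = |C_rho|/n! as in the formula above."""
    return sum(Fraction(s, n_fact) * a * b * c
               for s, a, b, c in zip(class_size, chi[lam], chi[mu], chi[nu]))

for nu in chi:
    print(nu, kronecker((2, 1), (2, 1), nu))
# (3,) 1    (2, 1) 1    (1, 1, 1) 1
```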
=== Reduced Kronecker coefficients === For λ {\displaystyle \lambda } a Young diagram and n ≥ λ 1 {\displaystyle n\geq \lambda _{1}} , λ [ n ] = ( n − | λ | , λ ) {\displaystyle \lambda [n]=(n-|\lambda |,\lambda )} is a Young diagram of size n {\displaystyle n} . Then C λ [ n ] , μ [ n ] , ν [ n ] {\displaystyle C_{\lambda [n],\mu [n],\nu [n]}} is a bounded, non-decreasing function of n {\displaystyle n} , and C ¯ λ , μ , ν = lim n → ∞ C λ [ n ] , μ [ n ] , ν [ n ] {\displaystyle {\bar {C}}_{\lambda ,\mu ,\nu }=\lim _{n\to \infty }C_{\lambda [n],\mu [n],\nu [n]}} is called a reduced Kronecker coefficient or stable Kronecker coefficient. There are known bounds on the value of n {\displaystyle n} where C λ [ n ] , μ [ n ] , ν [ n ] {\displaystyle C_{\lambda [n],\mu [n],\nu [n]}} reaches its limit. The reduced Kronecker coefficients are structure constants of Deligne categories of representations of S n {\displaystyle S_{n}} with n ∈ C − N {\displaystyle n\in \mathbb {C} -\mathbb {N} } . In contrast to Kronecker coefficients, reduced Kronecker coefficients are defined for any triple of Young diagrams, not necessarily of the same size. If | ν | = | λ | + | μ | {\displaystyle |\nu |=|\lambda |+|\mu |} , then C ¯ λ , μ , ν {\displaystyle {\bar {C}}_{\lambda ,\mu ,\nu }} coincides with the Littlewood-Richardson coefficient c λ , μ ν {\displaystyle c_{\lambda ,\mu }^{\nu }} . Reduced Kronecker coefficients can be written as linear combinations of Littlewood-Richardson coefficients via a change of bases in the space of symmetric functions, giving rise to expressions that are manifestly integral although not manifestly positive. Reduced Kronecker coefficients can also be written in terms of Kronecker and Littlewood-Richardson coefficients c α β γ λ {\displaystyle c_{\alpha \beta \gamma }^{\lambda }} via Littlewood's formula C ¯ λ , μ , ν = ∑ λ ′ , μ ′ , ν ′ , α , β , γ C λ ′ , μ ′ , ν ′ c λ ′ β γ λ c μ ′ α γ μ c ν ′ α β ν {\displaystyle {\bar {C}}_{\lambda ,\mu ,\nu }=\sum _{\lambda ',\mu ',\nu ',\alpha ,\beta ,\gamma }C_{\lambda ',\mu ',\nu '}c_{\lambda '\beta \gamma }^{\lambda }c_{\mu '\alpha \gamma }^{\mu }c_{\nu '\alpha \beta }^{\nu }} Conversely, it is possible to recover the Kronecker coefficients as linear combinations of reduced Kronecker coefficients. Reduced Kronecker coefficients are implemented in the computer algebra system SageMath. == Eigenvalues of complex representations == Given an element w ∈ S n {\displaystyle w\in S_{n}} of cycle-type μ = ( μ 1 , μ 2 , … , μ k ) {\displaystyle \mu =(\mu _{1},\mu _{2},\dots ,\mu _{k})} and order m = lcm ( μ i ) {\displaystyle m={\text{lcm}}(\mu _{i})} , the eigenvalues of w {\displaystyle w} in a complex representation of S n {\displaystyle S_{n}} are of the type ω e j {\displaystyle \omega ^{e_{j}}} with ω = e 2 π i m {\displaystyle \omega =e^{\frac {2\pi i}{m}}} , where the integers e j ∈ Z m Z {\displaystyle e_{j}\in {\frac {\mathbb {Z} }{m\mathbb {Z} }}} are called the cyclic exponents of w {\displaystyle w} with respect to the representation. There is a combinatorial description of the cyclic exponents of the symmetric group (and wreath products thereof). 
Defining ( b μ ( 1 ) , … , b μ ( n ) ) = ( m μ 1 , 2 m μ 1 , … , m , m μ 2 , 2 m μ 2 , … , m , … ) {\displaystyle \left(b_{\mu }(1),\dots ,b_{\mu }(n)\right)=\left({\frac {m}{\mu _{1}}},2{\frac {m}{\mu _{1}}},\dots ,m,{\frac {m}{\mu _{2}}},2{\frac {m}{\mu _{2}}},\dots ,m,\dots \right)} , let the μ {\displaystyle \mu } -index of a standard Young tableau be the sum of the values of b μ {\displaystyle b_{\mu }} over the tableau's descents, ind μ ( T ) = ∑ k ∈ { descents ( T ) } b μ ( k ) mod m {\displaystyle {\text{ind}}_{\mu }(T)=\sum _{k\in \{{\text{descents}}(T)\}}b_{\mu }(k){\bmod {m}}} . Then the cyclic exponents of the representation of S n {\displaystyle S_{n}} described by the Young diagram λ {\displaystyle \lambda } are the μ {\displaystyle \mu } -indices of the corresponding Young tableaux. In particular, if w {\displaystyle w} is of order n {\displaystyle n} , then b μ ( k ) = k {\displaystyle b_{\mu }(k)=k} , and ind μ ( T ) {\displaystyle {\text{ind}}_{\mu }(T)} coincides with the major index of T {\displaystyle T} (the sum of the descents). The cyclic exponents of an irreducible representation of S n {\displaystyle S_{n}} then describe how it decomposes into representations of the cyclic group Z n Z {\displaystyle {\frac {\mathbb {Z} }{n\mathbb {Z} }}} , with ω e j {\displaystyle \omega ^{e_{j}}} being interpreted as the image of w {\displaystyle w} in the (one-dimensional) representation characterized by e j {\displaystyle e_{j}} . == See also == Alternating polynomials Symmetric polynomials Schur functor Robinson–Schensted correspondence Schur–Weyl duality Jucys–Murphy element Garnir relations == References == == Cited Publications == Fulton, William; Harris, Joe (2004). "Representation Theory". Graduate Texts in Mathematics. New York, NY: Springer New York. doi:10.1007/978-1-4612-0979-9. ISBN 978-3-540-00539-1. ISSN 0072-5285. Hamermesh, M (1989). Group theory and its application to physical problems. New York: Dover Publications. ISBN 0-486-66181-4. OCLC 20218471. James, Gordon; Kerber, Adalbert (1981), The representation theory of the symmetric group, Encyclopedia of Mathematics and its Applications, vol. 16, Addison-Wesley Publishing Co., Reading, Mass., ISBN 978-0-201-13515-2, MR 0644144 James, G. D. (1983), "On the minimal dimensions of irreducible representations of symmetric groups", Mathematical Proceedings of the Cambridge Philosophical Society, 94 (3): 417–424, Bibcode:1983MPCPS..94..417J, doi:10.1017/S0305004100000803, ISSN 0305-0041, MR 0720791, S2CID 123113210
Wikipedia/Representation_theory_of_the_symmetric_group
In algebra and in particular in algebraic combinatorics, the ring of symmetric functions is a specific limit of the rings of symmetric polynomials in n indeterminates, as n goes to infinity. This ring serves as universal structure in which relations between symmetric polynomials can be expressed in a way independent of the number n of indeterminates (but its elements are neither polynomials nor functions). Among other things, this ring plays an important role in the representation theory of the symmetric group. The ring of symmetric functions can be given a coproduct and a bilinear form making it into a positive selfadjoint graded Hopf algebra that is both commutative and cocommutative. == Symmetric polynomials == The study of symmetric functions is based on that of symmetric polynomials. In a polynomial ring in some finite set of indeterminates, a polynomial is called symmetric if it stays the same whenever the indeterminates are permuted in any way. More formally, there is an action by ring automorphisms of the symmetric group Sn on the polynomial ring in n indeterminates, where a permutation acts on a polynomial by simultaneously substituting each of the indeterminates for another according to the permutation used. The invariants for this action form the subring of symmetric polynomials. If the indeterminates are X1, ..., Xn, then examples of such symmetric polynomials are X 1 + X 2 + ⋯ + X n , {\displaystyle X_{1}+X_{2}+\cdots +X_{n},\,} X 1 3 + X 2 3 + ⋯ + X n 3 , {\displaystyle X_{1}^{3}+X_{2}^{3}+\cdots +X_{n}^{3},\,} and X 1 X 2 ⋯ X n . {\displaystyle X_{1}X_{2}\cdots X_{n}.\,} A somewhat more complicated example is X13X2X3 + X1X23X3 + X1X2X33 + X13X2X4 + X1X23X4 + X1X2X43 + ... where the summation goes on to include all products of the third power of some variable and two other variables. There are many specific kinds of symmetric polynomials, such as elementary symmetric polynomials, power sum symmetric polynomials, monomial symmetric polynomials, complete homogeneous symmetric polynomials, and Schur polynomials. == The ring of symmetric functions == Most relations between symmetric polynomials do not depend on the number n of indeterminates, other than that some polynomials in the relation might require n to be large enough in order to be defined. For instance the Newton's identity for the third power sum polynomial p3 leads to p 3 ( X 1 , … , X n ) = e 1 ( X 1 , … , X n ) 3 − 3 e 2 ( X 1 , … , X n ) e 1 ( X 1 , … , X n ) + 3 e 3 ( X 1 , … , X n ) , {\displaystyle p_{3}(X_{1},\ldots ,X_{n})=e_{1}(X_{1},\ldots ,X_{n})^{3}-3e_{2}(X_{1},\ldots ,X_{n})e_{1}(X_{1},\ldots ,X_{n})+3e_{3}(X_{1},\ldots ,X_{n}),} where the e i {\displaystyle e_{i}} denote elementary symmetric polynomials; this formula is valid for all natural numbers n, and the only notable dependency on it is that ek(X1,...,Xn) = 0 whenever n < k. One would like to write this as an identity p 3 = e 1 3 − 3 e 2 e 1 + 3 e 3 {\displaystyle p_{3}=e_{1}^{3}-3e_{2}e_{1}+3e_{3}} that does not depend on n at all, and this can be done in the ring of symmetric functions. In that ring there are nonzero elements ek for all integers k ≥ 1, and any element of the ring can be given by a polynomial expression in the elements ek. === Definitions === A ring of symmetric functions can be defined over any commutative ring R, and will be denoted ΛR; the basic case is for R = Z. The ring ΛR is in fact a graded R-algebra. 
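The Newton identity displayed above, p3 = e1³ − 3e2e1 + 3e3, can be checked in any fixed number of indeterminates. A minimal sketch assuming SymPy, here with n = 4:

```python
from itertools import combinations
import sympy as sp

n = 4                                    # the identity holds for every n
X = sp.symbols(f'x1:{n + 1}')            # x1, x2, x3, x4

e = lambda k: sum(sp.Mul(*c) for c in combinations(X, k))   # elementary symmetric
p = lambda k: sum(x**k for x in X)                          # power sum

# p3 = e1^3 - 3 e2 e1 + 3 e3, as displayed above
print(sp.expand(p(3) - (e(1)**3 - 3*e(2)*e(1) + 3*e(3))) == 0)   # True
```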
There are two main constructions for it; the first one given below can be found in (Stanley, 1999), and the second is essentially the one given in (Macdonald, 1979). ==== As a ring of formal power series ==== The easiest (though somewhat heavy) construction starts with the ring of formal power series R [ [ X 1 , X 2 , . . . ] ] {\displaystyle R[[X_{1},X_{2},...]]} over R in infinitely (countably) many indeterminates; the elements of this power series ring are formal infinite sums of terms, each of which consists of a coefficient from R multiplied by a monomial, where each monomial is a product of finitely many finite powers of indeterminates. One defines ΛR as its subring consisting of those power series S that satisfy S is invariant under any permutation of the indeterminates, and the degrees of the monomials occurring in S are bounded. Note that because of the second condition, power series are used here only to allow infinitely many terms of a fixed degree, rather than to sum terms of all possible degrees. Allowing this is necessary because an element that contains for instance a term X1 should also contain a term Xi for every i > 1 in order to be symmetric. Unlike the whole power series ring, the subring ΛR is graded by the total degree of monomials: due to condition 2, every element of ΛR is a finite sum of homogeneous elements of ΛR (which are themselves infinite sums of terms of equal degree). For every k ≥ 0, the element ek ∈ ΛR is defined as the formal sum of all products of k distinct indeterminates, which is clearly homogeneous of degree k. ==== As an algebraic limit ==== Another construction of ΛR takes somewhat longer to describe, but better indicates the relationship with the rings R[X1,...,Xn]Sn of symmetric polynomials in n indeterminates. For every n there is a surjective ring homomorphism ρn from the analogous ring R[X1,...,Xn+1]Sn+1 with one more indeterminate onto R[X1,...,Xn]Sn, defined by setting the last indeterminate Xn+1 to 0. Although ρn has a non-trivial kernel, the nonzero elements of that kernel have degree at least n + 1 {\displaystyle n+1} (they are multiples of X1X2...Xn+1). This means that the restriction of ρn to elements of degree at most n is a bijective linear map, and ρn(ek(X1,...,Xn+1)) = ek(X1,...,Xn) for all k ≤ n. The inverse of this restriction can be extended uniquely to a ring homomorphism φn from R[X1,...,Xn]Sn to R[X1,...,Xn+1]Sn+1, as follows for instance from the fundamental theorem of symmetric polynomials. Since the images φn(ek(X1,...,Xn)) = ek(X1,...,Xn+1) for k = 1,...,n are still algebraically independent over R, the homomorphism φn is injective and can be viewed as a (somewhat unusual) inclusion of rings; applying φn to a polynomial amounts to adding all monomials containing the new indeterminate obtained by symmetry from monomials already present. The ring ΛR is then the "union" (direct limit) of all these rings subject to these inclusions. Since all φn are compatible with the grading by total degree of the rings involved, ΛR obtains the structure of a graded ring. This construction differs slightly from the one in (Macdonald, 1979). That construction only uses the surjective morphisms ρn without mentioning the injective morphisms φn: it constructs the homogeneous components of ΛR separately, and equips their direct sum with a ring structure using the ρn. It is also observed that the result can be described as an inverse limit in the category of graded rings. 
That description however somewhat obscures an important property typical for a direct limit of injective morphisms, namely that every individual element (symmetric function) is already faithfully represented in some object used in the limit construction, here a ring R[X1,...,Xd]Sd. It suffices to take for d the degree of the symmetric function, since the part in degree d of that ring is mapped isomorphically to rings with more indeterminates by φn for all n ≥ d. This implies that for studying relations between individual elements, there is no fundamental difference between symmetric polynomials and symmetric functions. === Defining individual symmetric functions === The name "symmetric function" for elements of ΛR is a misnomer: in neither construction are the elements functions, and in fact, unlike symmetric polynomials, no function of independent variables can be associated to such elements (for instance e1 would be the sum of all infinitely many variables, which is not defined unless restrictions are imposed on the variables). However the name is traditional and well established; it can be found both in (Macdonald, 1979), which says (footnote on p. 12) The elements of Λ (unlike those of Λn) are no longer polynomials: they are formal infinite sums of monomials. We have therefore reverted to the older terminology of symmetric functions. (here Λn denotes the ring of symmetric polynomials in n indeterminates), and also in (Stanley, 1999). To define a symmetric function one must either indicate directly a power series as in the first construction, or give a symmetric polynomial in n indeterminates for every natural number n in a way compatible with the second construction. An expression in an unspecified number of indeterminates may do both, for instance e 2 = ∑ i < j X i X j {\displaystyle e_{2}=\sum _{i<j}X_{i}X_{j}\,} can be taken as the definition of an elementary symmetric function if the number of indeterminates is infinite, or as the definition of an elementary symmetric polynomial in any finite number of indeterminates. Symmetric polynomials for the same symmetric function should be compatible with the homomorphisms ρn (decreasing the number of indeterminates is obtained by setting some of them to zero, so that the coefficients of any monomial in the remaining indeterminates is unchanged), and their degree should remain bounded. (An example of a family of symmetric polynomials that fails both conditions is ∏ i = 1 n X i {\displaystyle \textstyle \prod _{i=1}^{n}X_{i}} ; the family ∏ i = 1 n ( X i + 1 ) {\displaystyle \textstyle \prod _{i=1}^{n}(X_{i}+1)} fails only the second condition.) Any symmetric polynomial in n indeterminates can be used to construct a compatible family of symmetric polynomials, using the homomorphisms ρi for i < n to decrease the number of indeterminates, and φi for i ≥ n to increase the number of indeterminates (which amounts to adding all monomials in new indeterminates obtained by symmetry from monomials already present). The following are fundamental examples of symmetric functions. The monomial symmetric functions mα. Suppose α = (α1,α2,...) is a sequence of non-negative integers, only finitely many of which are non-zero. Then we can consider the monomial defined by α: Xα = X1α1X2α2X3α3.... Then mα is the symmetric function determined by Xα, i.e. the sum of all monomials obtained from Xα by symmetry. For a formal definition, define β ~ α to mean that the sequence β is a permutation of the sequence α and set m α = ∑ β ∼ α X β . 
{\displaystyle m_{\alpha }=\sum \nolimits _{\beta \sim \alpha }X^{\beta }.} This symmetric function corresponds to the monomial symmetric polynomial mα(X1,...,Xn) for any n large enough to have the monomial Xα. The distinct monomial symmetric functions are parametrized by the integer partitions (each mα has a unique representative monomial Xλ with the parts λi in weakly decreasing order). Since any symmetric function containing any of the monomials of some mα must contain all of them with the same coefficient, each symmetric function can be written as an R-linear combination of monomial symmetric functions, and the distinct monomial symmetric functions therefore form a basis of ΛR as an R-module. The elementary symmetric functions ek, for any natural number k; one has ek = mα where X α = ∏ i = 1 k X i {\displaystyle \textstyle X^{\alpha }=\prod _{i=1}^{k}X_{i}} . As a power series, this is the sum of all distinct products of k distinct indeterminates. This symmetric function corresponds to the elementary symmetric polynomial ek(X1,...,Xn) for any n ≥ k. The power sum symmetric functions pk, for any positive integer k; one has pk = m(k), the monomial symmetric function for the monomial X1k. This symmetric function corresponds to the power sum symmetric polynomial pk(X1,...,Xn) = X1k + ... + Xnk for any n ≥ 1. The complete homogeneous symmetric functions hk, for any natural number k; hk is the sum of all monomial symmetric functions mα where α is a partition of k. As a power series, this is the sum of all monomials of degree k, which is what motivates its name. This symmetric function corresponds to the complete homogeneous symmetric polynomial hk(X1,...,Xn) for any n ≥ k. The Schur functions sλ for any partition λ, which corresponds to the Schur polynomial sλ(X1,...,Xn) for any n large enough to have the monomial Xλ. There is no power sum symmetric function p0: although it is possible (and in some contexts natural) to define p 0 ( X 1 , … , X n ) = ∑ i = 1 n X i 0 = n {\displaystyle \textstyle p_{0}(X_{1},\ldots ,X_{n})=\sum _{i=1}^{n}X_{i}^{0}=n} as a symmetric polynomial in n variables, these values are not compatible with the morphisms ρn. The "discriminant" ( ∏ i < j ( X i − X j ) ) 2 {\displaystyle \textstyle (\prod _{i<j}(X_{i}-X_{j}))^{2}} is another example of an expression giving a symmetric polynomial for all n, but not defining any symmetric function. The expressions defining Schur polynomials as a quotient of alternating polynomials are somewhat similar to that for the discriminant, but the polynomials sλ(X1,...,Xn) turn out to be compatible for varying n, and therefore do define a symmetric function. === A principle relating symmetric polynomials and symmetric functions === For any symmetric function P, the corresponding symmetric polynomials in n indeterminates for any natural number n may be designated by P(X1,...,Xn). The second definition of the ring of symmetric functions implies the following fundamental principle: If P and Q are symmetric functions of degree d, then one has the identity P = Q {\displaystyle P=Q} of symmetric functions if and only if one has the identity P(X1,...,Xd) = Q(X1,...,Xd) of symmetric polynomials in d indeterminates. In this case one has in fact P(X1,...,Xn) = Q(X1,...,Xn) for any number n of indeterminates. 
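The families just listed can be experimented with directly in a fixed finite number of indeterminates, which is enough as long as the number of variables is at least the degree involved. The following minimal sketch (plain Python with sympy; the helper names m, e, h, p and the choice of four or five variables are ad hoc conventions for this illustration, not notation from the text) checks a few degree-2 identities and one instance of compatibility under ρn, i.e. that setting the last variable to zero carries h2 in five variables to h2 in four variables.

```python
# Sketch (illustrative helpers, not standard library API): the monomial, elementary,
# power-sum and complete homogeneous families as symmetric polynomials in n variables,
# plus a check of compatibility under rho_n (setting the last variable to zero).
from itertools import combinations, combinations_with_replacement, permutations
import sympy as sp

def variables(n):
    return sp.symbols('X1:%d' % (n + 1))          # X1, ..., Xn

def m(alpha, X):
    """Monomial symmetric polynomial m_alpha in the variables X."""
    padded = tuple(alpha) + (0,) * (len(X) - len(alpha))
    return sum(sp.Mul(*[x**a for x, a in zip(X, expo)]) for expo in set(permutations(padded)))

def e(k, X):
    """Elementary symmetric polynomial e_k: sum of products of k distinct variables."""
    return sum(sp.Mul(*c) for c in combinations(X, k))

def h(k, X):
    """Complete homogeneous symmetric polynomial h_k: sum of all monomials of degree k."""
    return sum(sp.Mul(*c) for c in combinations_with_replacement(X, k))

def p(k, X):
    """Power sum symmetric polynomial p_k."""
    return sum(x**k for x in X)

X4, X5 = variables(4), variables(5)

# Degree-2 identities, valid because at least 2 variables are used:
assert sp.expand(e(2, X4) - m((1, 1), X4)) == 0                # e_2 = m_(1,1)
assert sp.expand(h(2, X4) - m((2,), X4) - m((1, 1), X4)) == 0  # h_2 = m_(2) + m_(1,1)
assert sp.expand(p(2, X4) - m((2,), X4)) == 0                  # p_2 = m_(2)

# Compatibility with rho_4: setting X5 = 0 in h_2(X1,...,X5) gives h_2(X1,...,X4).
assert sp.expand(h(2, X5).subs(X5[-1], 0) - h(2, X4)) == 0
print("e_2 = m_(1,1), h_2 = m_(2) + m_(1,1), p_2 = m_(2), and rho_4-compatibility all hold")
```

The same helpers can be reused to test any other low-degree relation; by the principle stated below, an identity of symmetric functions of degree d only needs to be verified in d variables.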
This is because one can always reduce the number of variables by substituting zero for some variables, and one can increase the number of variables by applying the homomorphisms φn; the definition of those homomorphisms assures that φn(P(X1,...,Xn)) = P(X1,...,Xn+1) (and similarly for Q) whenever n ≥ d. See a proof of Newton's identities for an effective application of this principle. == Properties of the ring of symmetric functions == === Identities === The ring of symmetric functions is a convenient tool for writing identities between symmetric polynomials that are independent of the number of indeterminates: in ΛR there is no such number, yet by the above principle any identity in ΛR automatically gives identities in the rings of symmetric polynomials over R in any number of indeterminates. Some fundamental identities are ∑ i = 0 k ( − 1 ) i e i h k − i = 0 = ∑ i = 0 k ( − 1 ) i h i e k − i for all k > 0 , {\displaystyle \sum _{i=0}^{k}(-1)^{i}e_{i}h_{k-i}=0=\sum _{i=0}^{k}(-1)^{i}h_{i}e_{k-i}\quad {\mbox{for all }}k>0,} which shows a symmetry between elementary and complete homogeneous symmetric functions; these relations are explained under complete homogeneous symmetric polynomial. k e k = ∑ i = 1 k ( − 1 ) i − 1 p i e k − i for all k ≥ 0 , {\displaystyle ke_{k}=\sum _{i=1}^{k}(-1)^{i-1}p_{i}e_{k-i}\quad {\mbox{for all }}k\geq 0,} the Newton identities, which also have a variant for complete homogeneous symmetric functions: k h k = ∑ i = 1 k p i h k − i for all k ≥ 0. {\displaystyle kh_{k}=\sum _{i=1}^{k}p_{i}h_{k-i}\quad {\mbox{for all }}k\geq 0.} === Structural properties of ΛR === Important properties of ΛR include the following. (1) The set of monomial symmetric functions parametrized by partitions forms a basis of ΛR as a graded R-module, those parametrized by partitions of d being homogeneous of degree d; the same is true for the set of Schur functions (also parametrized by partitions). (2) ΛR is isomorphic as a graded R-algebra to a polynomial ring R[Y1,Y2, ...] in infinitely many variables, where Yi is given degree i for all i > 0, one isomorphism being the one that sends Yi to ei ∈ ΛR for every i. (3) There is an involutory automorphism ω of ΛR that interchanges the elementary symmetric functions ei and the complete homogeneous symmetric functions hi for all i. It also sends each power sum symmetric function pi to (−1)i−1pi, and it permutes the Schur functions among each other, interchanging sλ and sλt where λt is the transpose partition of λ. Property 2 is the essence of the fundamental theorem of symmetric polynomials. It immediately implies some other properties: The subring of ΛR generated by its elements of degree at most n is isomorphic to the ring of symmetric polynomials over R in n variables; The Hilbert–Poincaré series of ΛR is ∏ i = 1 ∞ 1 1 − t i {\displaystyle \textstyle \prod _{i=1}^{\infty }{\frac {1}{1-t^{i}}}} , the generating function of the integer partitions (this also follows from property 1); For every n > 0, the R-module formed by the homogeneous part of ΛR of degree n, modulo its intersection with the subring generated by its elements of degree strictly less than n, is free of rank 1, and (the image of) en is a generator of this R-module; For every family of symmetric functions (fi)i>0 in which fi is homogeneous of degree i and gives a generator of the free R-module of the previous point (for all i), there is an alternative isomorphism of graded R-algebras from R[Y1,Y2, ...] 
as above to ΛR that sends Yi to fi; in other words, the family (fi)i>0 forms a set of free polynomial generators of ΛR. This final point applies in particular to the family (hi)i>0 of complete homogeneous symmetric functions. If R contains the field Q {\displaystyle \mathbb {Q} } of rational numbers, it applies also to the family (pi)i>0 of power sum symmetric functions. This explains why the first n elements of each of these families define sets of symmetric polynomials in n variables that are free polynomial generators of that ring of symmetric polynomials. The fact that the complete homogeneous symmetric functions form a set of free polynomial generators of ΛR already shows the existence of an automorphism ω sending the elementary symmetric functions to the complete homogeneous ones, as mentioned in property 3. The fact that ω is an involution of ΛR follows from the symmetry between elementary and complete homogeneous symmetric functions expressed by the first set of relations given above. The ring of symmetric functions ΛZ is the Exp ring of the integers Z. It is also a lambda-ring in a natural fashion; in fact it is the universal lambda-ring in one generator. === Generating functions === The first definition of ΛR as a subring of R [ [ X 1 , X 2 , . . . ] ] {\displaystyle R[[X_{1},X_{2},...]]} allows the generating functions of several sequences of symmetric functions to be elegantly expressed. Contrary to the relations mentioned earlier, which are internal to ΛR, these expressions involve operations taking place in R[[X1,X2,...;t]] but outside its subring ΛR[[t]], so they are meaningful only if symmetric functions are viewed as formal power series in indeterminates Xi. We shall write "(X)" after the symmetric functions to stress this interpretation. The generating function for the elementary symmetric functions is E ( t ) = ∑ k ≥ 0 e k ( X ) t k = ∏ i = 1 ∞ ( 1 + X i t ) . {\displaystyle E(t)=\sum _{k\geq 0}e_{k}(X)t^{k}=\prod _{i=1}^{\infty }(1+X_{i}t).} Similarly one has for complete homogeneous symmetric functions H ( t ) = ∑ k ≥ 0 h k ( X ) t k = ∏ i = 1 ∞ ( ∑ k ≥ 0 ( X i t ) k ) = ∏ i = 1 ∞ 1 1 − X i t . {\displaystyle H(t)=\sum _{k\geq 0}h_{k}(X)t^{k}=\prod _{i=1}^{\infty }\left(\sum _{k\geq 0}(X_{i}t)^{k}\right)=\prod _{i=1}^{\infty }{\frac {1}{1-X_{i}t}}.} The obvious fact that E ( − t ) H ( t ) = 1 = E ( t ) H ( − t ) {\displaystyle E(-t)H(t)=1=E(t)H(-t)} explains the symmetry between elementary and complete homogeneous symmetric functions. The generating function for the power sum symmetric functions can be expressed as P ( t ) = ∑ k > 0 p k ( X ) t k = ∑ k > 0 ∑ i = 1 ∞ ( X i t ) k = ∑ i = 1 ∞ X i t 1 − X i t = t E ′ ( − t ) E ( − t ) = t H ′ ( t ) H ( t ) {\displaystyle P(t)=\sum _{k>0}p_{k}(X)t^{k}=\sum _{k>0}\sum _{i=1}^{\infty }(X_{i}t)^{k}=\sum _{i=1}^{\infty }{\frac {X_{i}t}{1-X_{i}t}}={\frac {tE'(-t)}{E(-t)}}={\frac {tH'(t)}{H(t)}}} ((Macdonald, 1979) defines P(t) as Σk>0 pk(X)tk−1, and its expressions therefore lack a factor t with respect to those given here). The two final expressions, involving the formal derivatives of the generating functions E(t) and H(t), imply Newton's identities and their variants for the complete homogeneous symmetric functions. 
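Both the relation E(−t)H(t) = 1 and the Newton identity k·hk = Σ pi hk−i can be checked by brute force in finitely many indeterminates, in line with the principle relating symmetric polynomials and symmetric functions stated earlier. The sketch below (sympy again, with ad hoc helper names chosen for this illustration) performs the check in n = 4 variables, truncating in t at degree n, which is exactly the range in which the identities are visible with that many variables.

```python
# Sketch: checking E(-t)H(t) = 1 (through degree n in t) and Newton's identity
# k*h_k = sum_{i=1..k} p_i h_{k-i}, by brute force in n = 4 indeterminates.
# Helper names are ad hoc choices for this illustration.
from itertools import combinations, combinations_with_replacement
import sympy as sp

n = 4
X = sp.symbols('X1:%d' % (n + 1))
t = sp.Symbol('t')

def e(k): return sum(sp.Mul(*c) for c in combinations(X, k))                   # e_k
def h(k): return sum(sp.Mul(*c) for c in combinations_with_replacement(X, k))  # h_k
def p(k): return sum(x**k for x in X)                                          # p_k

# E(-t) * H(t) = 1: with n variables the coefficient of t^j vanishes for 1 <= j <= n.
E = sum(e(k) * (-t)**k for k in range(n + 1))
H = sum(h(k) * t**k for k in range(n + 1))
prod_EH = sp.expand(E * H)
assert prod_EH.coeff(t, 0) == 1
for j in range(1, n + 1):
    assert sp.expand(prod_EH.coeff(t, j)) == 0

# Newton's identity for the complete homogeneous symmetric polynomials, k = 1..n.
for k in range(1, n + 1):
    assert sp.expand(k * h(k) - sum(p(i) * h(k - i) for i in range(1, k + 1))) == 0

print("E(-t)H(t) = 1 mod t^%d and k*h_k = sum_i p_i*h_(k-i) verified for n = %d" % (n + 1, n))
```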
These expressions are sometimes written as P ( t ) = − t d d t log ⁡ ( E ( − t ) ) = t d d t log ⁡ ( H ( t ) ) , {\displaystyle P(t)=-t{\frac {d}{dt}}\log(E(-t))=t{\frac {d}{dt}}\log(H(t)),} which amounts to the same, but requires that R contain the rational numbers, so that the logarithm of power series with constant term 1 is defined (by log ⁡ ( 1 − t S ) = − ∑ i > 0 1 i ( t S ) i {\displaystyle \textstyle \log(1-tS)=-\sum _{i>0}{\frac {1}{i}}(tS)^{i}} ). == Specializations == Let Λ {\displaystyle \Lambda } be the ring of symmetric functions and R {\displaystyle R} a commutative algebra with unit element. An algebra homomorphism φ : Λ → R , f ↦ f ( φ ) {\displaystyle \varphi :\Lambda \to R,\quad f\mapsto f(\varphi )} is called a specialization. Example: Given some real numbers a 1 , … , a k {\displaystyle a_{1},\dots ,a_{k}} and f ( x 1 , x 2 , … , ) ∈ Λ {\displaystyle f(x_{1},x_{2},\dots ,)\in \Lambda } , then the substitution x 1 = a 1 , … , x k = a k {\displaystyle x_{1}=a_{1},\dots ,x_{k}=a_{k}} and x j = 0 , ∀ j > k {\displaystyle x_{j}=0,\forall j>k} is a specialization. Let f ∈ Λ {\displaystyle f\in \Lambda } , then ps ⁡ ( f ) := f ( 1 , q , q 2 , q 3 , … ) {\displaystyle \operatorname {ps} (f):=f(1,q,q^{2},q^{3},\dots )} is called principal specialization. == See also == Newton's identities Quasisymmetric function == References == Macdonald, I. G. Symmetric functions and Hall polynomials. Oxford Mathematical Monographs. The Clarendon Press, Oxford University Press, Oxford, 1979. viii+180 pp. ISBN 0-19-853530-9 MR553598 Macdonald, I. G. Symmetric functions and Hall polynomials. Second edition. Oxford Mathematical Monographs. Oxford Science Publications. The Clarendon Press, Oxford University Press, New York, 1995. x+475 pp. ISBN 0-19-853489-2 MR1354144 Stanley, Richard P. Enumerative Combinatorics, Vol. 2, Cambridge University Press, 1999. ISBN 0-521-56069-1 (hardback) ISBN 0-521-78987-7 (paperback).
Wikipedia/Ring_of_symmetric_functions
Algebraic Combinatorics is a peer-reviewed diamond open access mathematical journal specializing in the field of algebraic combinatorics. Established in 2018, the journal is published by the Centre Mersenne. It is a member of the Free Journal Network. == History == The journal was established in 2018, when the editorial board of the Springer Science+Business Media Journal of Algebraic Combinatorics resigned to protest the publisher's high prices and limited accessibility. The board criticized Springer for "double-dipping", that is, charging large subscription fees to libraries in addition to high fees for authors who wished to make their publications open access. == Operations == Algebraic Combinatorics operates on a diamond open access model, in which publication costs are underwritten by voluntary contributions from universities, foundations, and other organizations. Authors do not pay submission fees or article processing charges. All content is published under a Creative Commons license. The journal's editors-in-chief are Akihiro Munemasa (Tohoku University), Satoshi Murai (Waseda University), Hendrik Van Maldeghem (Ghent University), Brendon Rhoades (University of California, San Diego), and David Speyer (University of Michigan). == Abstracting and indexing == The journal is abstracted and indexed in the Web of Science, Directory of Open Access Journals, Scopus, Mathematical Reviews, and zbMath. == References == == External links == Official website
Wikipedia/Algebraic_Combinatorics_(journal)
In theoretical physics, the Einstein–Cartan theory, also known as the Einstein–Cartan–Sciama–Kibble theory, is a classical theory of gravitation, one of several alternatives to general relativity. The theory was first proposed by Élie Cartan in 1922. == Overview == Einstein–Cartan theory differs from general relativity in two ways: (1) it is formulated within the framework of Riemann–Cartan geometry, which possesses a locally gauged Lorentz symmetry, while general relativity is formulated within the framework of Riemannian geometry, which does not; (2) an additional set of equations are posed that relate torsion to spin. This difference can be factored into general relativity (Einstein–Hilbert) → general relativity (Palatini) → Einstein–Cartan by first reformulating general relativity onto a Riemann–Cartan geometry, replacing the Einstein–Hilbert action over Riemannian geometry by the Palatini action over Riemann–Cartan geometry; and second, removing the zero torsion constraint from the Palatini action, which results in the additional set of equations for spin and torsion, as well as the addition of extra spin-related terms in the Einstein field equations themselves. The theory of general relativity was originally formulated in the setting of Riemannian geometry by the Einstein–Hilbert action, out of which arise the Einstein field equations. At the time of its original formulation, there was no concept of Riemann–Cartan geometry. Nor was there a sufficient awareness of the concept of gauge symmetry to understand that Riemannian geometries do not possess the requisite structure to embody a locally gauged Lorentz symmetry, such as would be required to be able to express continuity equations and conservation laws for rotational and boost symmetries, or to describe spinors in curved spacetime geometries. The result of adding this infrastructure is a Riemann–Cartan geometry. In particular, to be able to describe spinors requires the inclusion of a spin structure, which suffices to produce such a geometry. The chief difference between a Riemann–Cartan geometry and Riemannian geometry is that in the former, the affine connection is independent of the metric, while in the latter it is derived from the metric as the Levi-Civita connection, the difference between the two being referred to as the contorsion. In particular, the antisymmetric part of the connection (referred to as the torsion) is zero for Levi-Civita connections, as one of the defining conditions for such connections. Because the contorsion can be expressed linearly in terms of the torsion, it is also possible to directly translate the Einstein–Hilbert action into a Riemann–Cartan geometry, the result being the Palatini action (see also Palatini variation). It is derived by rewriting the Einstein–Hilbert action in terms of the affine connection and then separately posing a constraint that forces both the torsion and contorsion to be zero, which thus forces the affine connection to be equal to the Levi-Civita connection. Because it is a direct translation of the action and field equations of general relativity, expressed in terms of the Levi-Civita connection, this may be regarded as the theory of general relativity, itself, transposed into the framework of Riemann–Cartan geometry. Einstein–Cartan theory relaxes this condition and, correspondingly, relaxes general relativity's assumption that the affine connection have a vanishing antisymmetric part (torsion tensor). 
The action used is the same as the Palatini action, except that the constraint on the torsion is removed. This results in two differences from general relativity: (1) the field equations are now expressed in terms of affine connection, rather than the Levi-Civita connection, and so have additional terms in Einstein's field equations involving the contorsion that are not present in the field equations derived from the Palatini formulation; (2) an additional set of equations are now present which couple the torsion to the intrinsic angular momentum (spin) of matter, much in the same way in which the affine connection is coupled to the energy and momentum of matter. In Einstein–Cartan theory, the torsion is now a variable in the principle of stationary action that is coupled to a curved spacetime formulation of spin (the spin tensor). These extra equations express the torsion linearly in terms of the spin tensor associated with the matter source, which entails that the torsion generally be non-zero inside matter. A consequence of the linearity is that outside of matter there is zero torsion, so that the exterior geometry remains the same as what would be described in general relativity. The differences between Einstein–Cartan theory and general relativity (formulated either in terms of the Einstein–Hilbert action on Riemannian geometry or the Palatini action on Riemann–Cartan geometry) rest solely on what happens to the geometry inside matter sources. That is: "torsion does not propagate". Generalizations of the Einstein–Cartan action have been considered which allow for propagating torsion. Because Riemann–Cartan geometries have Lorentz symmetry as a local gauge symmetry, it is possible to formulate the associated conservation laws. In particular, regarding the metric and torsion tensors as independent variables gives the correct generalization of the conservation law for the total (orbital plus intrinsic) angular momentum to the presence of the gravitational field. == History == The theory was first proposed by Élie Cartan, who was inspired by Cosserat elasticity theory, in 1922 and expounded in the following few years. Albert Einstein became affiliated with the theory in 1928 during his unsuccessful attempt to match torsion to the electromagnetic field tensor as part of a unified field theory. This line of thought led him to the related but different theory of teleparallelism. Dennis Sciama and Tom Kibble independently revisited the theory in the 1960s. Einstein–Cartan theory has been historically overshadowed by its torsion-free counterpart and other alternatives like Brans–Dicke theory because torsion seemed to add little predictive benefit at the expense of the tractability of its equations. Since the Einstein–Cartan theory is purely classical, it also does not fully address the issue of quantum gravity. In the Einstein–Cartan theory, the Dirac equation becomes nonlinear when it is expressed in terms of the Levi-Civita connection, though it remains linear when expressed in terms of the connection native to the geometry. Because the torsion does not 'propagate', its relation to the spin tensor of the matter source is algebraic and it is possible to solve in terms of the spin tensor. In turn, the difference between the connection and Levi-Civita connection (the contorsion) can be solved in terms of the torsion. When the contorsion is back-substituted for in the Dirac equation, to reduce the connection to the Levi-Civita connection (e.g. 
in passing from equation (4.1) to equation (4.2) in ), this results in non-linear contributions arising, ultimately, from the Dirac field itself. If two or more Dirac fields are present, or other fields that carry spin, the non-linear additions to the Dirac equation of each field would include contributions from all of the other fields, as well. Even though renowned physicists such as Steven Weinberg "never understood what is so important physically about the possibility of torsion in differential geometry", other physicists claim that theories with torsion are valuable. == Field equations == The Einstein field equations of general relativity can be derived by postulating the Einstein–Hilbert action to be the true action of spacetime and then varying that action with respect to the metric tensor. The field equations of Einstein–Cartan theory come from exactly the same approach, except that a general asymmetric affine connection is assumed rather than the symmetric Levi-Civita connection (i.e., spacetime is assumed to have torsion in addition to curvature), and then the metric and torsion are varied independently. Let L M {\displaystyle {\mathcal {L}}_{\mathrm {M} }} represent the Lagrangian density of matter and L G {\displaystyle {\mathcal {L}}_{\mathrm {G} }} represent the Lagrangian density of the gravitational field. The Lagrangian density for the gravitational field in the Einstein–Cartan theory is proportional to the Ricci scalar: L G = 1 2 κ R | g | {\displaystyle {\mathcal {L}}_{\mathrm {G} }={\frac {1}{2\kappa }}R{\sqrt {|g|}}} S = ∫ ( L G + L M ) d 4 x , {\displaystyle S=\int \left({\mathcal {L}}_{\mathrm {G} }+{\mathcal {L}}_{\mathrm {M} }\right)\,d^{4}x,} where g {\displaystyle g} is the determinant of the metric tensor, and κ {\displaystyle \kappa } is a physical constant 8 π G / c 4 {\displaystyle 8\pi G/c^{4}} involving the gravitational constant and the speed of light. By Hamilton's principle, the variation of the total action S {\displaystyle S} for the gravitational field and matter vanishes: δ S = 0. {\displaystyle \delta S=0.} The variation with respect to the metric tensor g a b {\displaystyle g^{ab}} yields the Einstein equations: δ L G δ g a b − 1 2 P a b = 0 {\displaystyle {\frac {\delta {\mathcal {L}}_{\mathrm {G} }}{\delta g^{ab}}}-{\frac {1}{2}}P_{ab}=0} where R a b {\displaystyle R_{ab}} is the Ricci tensor and P a b {\displaystyle P_{ab}} is the canonical stress–energy–momentum tensor. The Ricci tensor is no longer symmetric because the connection contains a nonzero torsion tensor; therefore, the right-hand side of the equation cannot be symmetric either, implying that P a b {\displaystyle P_{ab}} must include an asymmetric contribution that can be shown to be related to the spin tensor. This canonical energy–momentum tensor is related to the more familiar symmetric energy–momentum tensor by the Belinfante–Rosenfeld procedure. The variation with respect to the torsion tensor T a b c {\displaystyle {T^{ab}}_{c}} yields the Cartan spin connection equations δ L G δ T a b c − 1 2 σ a b c = 0 {\displaystyle {\frac {\delta {\mathcal {L}}_{\mathrm {G} }}{\delta {T^{ab}}_{c}}}-{\frac {1}{2}}{\sigma _{ab}}^{c}=0} where σ a b c {\displaystyle {\sigma _{ab}}^{c}} is the spin tensor. Because the torsion equation is an algebraic constraint rather than a partial differential equation, the torsion field does not propagate as a wave, and vanishes outside of matter. 
Therefore, in principle the torsion can be algebraically eliminated from the theory in favor of the spin tensor, which generates an effective "spin–spin" nonlinear self-interaction inside matter. Torsion is equal to its source term and can be replaced by a boundary or a topological structure with a throat such as a "wormhole". == Avoidance of singularities == Recently, interest in Einstein–Cartan theory has been driven toward nonsingular black hole models and cosmological implications, most importantly, the avoidance of a gravitational singularity at the beginning of the universe, such as in the black hole cosmology, quantum cosmology, static universe, and cyclic model. Singularity theorems which are premised on and formulated within the setting of Riemannian geometry (e.g. Penrose–Hawking singularity theorems) need not hold in Riemann–Cartan geometry. Consequently, Einstein–Cartan theory is able to avoid the general-relativistic problem of the singularity at the Big Bang. The minimal coupling between torsion and Dirac spinors generates an effective nonlinear spin–spin self-interaction, which becomes significant inside fermionic matter at extremely high densities. Such an interaction is conjectured to replace the singular Big Bang with a cusp-like Big Bounce at a minimum but finite scale factor, before which the observable universe was contracting. This scenario also explains why the present Universe at largest scales appears spatially flat, homogeneous and isotropic, providing a physical alternative to cosmic inflation. Torsion allows fermions to be spatially extended instead of "pointlike", which helps to avoid the formation of singularities such as black holes, removes the ultraviolet divergence in quantum field theory, and leads to the toroidal ring model of electrons. According to general relativity, the gravitational collapse of a sufficiently compact mass forms a singular black hole. In the Einstein–Cartan theory, instead, the collapse reaches a bounce and forms a regular Einstein–Rosen bridge (wormhole) to a new, growing universe on the other side of the event horizon; pair production by the gravitational field after the bounce, when torsion is still strong, generates a finite period of inflation. == Other == Einstein–Cartan theory seems to allow gravitational shielding and the oscillation of massless neutrinos without violating the equivalence principle. In addition, the Einstein–Cartan theory is also related to geometrodynamics and the vortex theory of the atom. == See also == Alternatives to general relativity Metric-affine gravitation theory Gauge theory gravity Loop quantum gravity == References == == Further reading == Gronwald, F.; Hehl, F. W. (1996). "On the Gauge Aspects of Gravity". arXiv:gr-qc/9602013. Hammond, Richard T (2002-03-27). "Torsion gravity". Reports on Progress in Physics. 65 (5): 599–649. Bibcode:2002RPPh...65..599H. doi:10.1088/0034-4885/65/5/201. ISSN 0034-4885. S2CID 250831296. Hehl, F. W. (1973). "Spin and torsion in general relativity: I. Foundations". General Relativity and Gravitation. 4 (4): 333–349. Bibcode:1973GReGr...4..333H. doi:10.1007/bf00759853. ISSN 0001-7701. S2CID 120910420. Hehl, F. W. (1974). "Spin and torsion in general relativity II: Geometry and field equations". General Relativity and Gravitation. 5 (5): 491–516. Bibcode:1974GReGr...5..491H. doi:10.1007/bf02451393. ISSN 0001-7701. S2CID 120844152. Hehl, Friedrich W.; von der Heyde, Paul; Kerlick, G. David (1974-08-15). 
"General relativity with spin and torsion and its deviations from Einstein's theory". Physical Review D. 10 (4): 1066–1069. Bibcode:1974PhRvD..10.1066H. doi:10.1103/physrevd.10.1066. ISSN 0556-2821. Kleinert, Hagen (2000). "Nonholonomic Mapping Principle for Classical and Quantum Mechanics in Spaces with Curvature and Torsion". General Relativity and Gravitation. 32 (5): 769–839. arXiv:gr-qc/9801003. Bibcode:2000GReGr..32..769K. doi:10.1023/a:1001962922592. ISSN 0001-7701. S2CID 14846186. Kuchowicz, Bronisław (1978). "Friedmann-like cosmological models without singularity". General Relativity and Gravitation. 9 (6): 511–517. Bibcode:1978GReGr...9..511K. doi:10.1007/bf00759545. ISSN 0001-7701. S2CID 118380177. Lord, E. A. (1976). "Tensors, Relativity and Cosmology" (McGraw-Hill). Petti, R. J. (1976). "Some aspects of the geometry of first-quantized theories". General Relativity and Gravitation. 7 (11): 869–883. Bibcode:1976GReGr...7..869P. doi:10.1007/bf00771019. ISSN 0001-7701. S2CID 189851295. Petti, R J (2006-01-12). "Translational spacetime symmetries in gravitational theories". Classical and Quantum Gravity. 23 (3): 737–751. arXiv:1804.06730. Bibcode:2006CQGra..23..737P. doi:10.1088/0264-9381/23/3/012. ISSN 0264-9381. S2CID 118897253. Petti, R. J. (2021). "Derivation of Einstein–Cartan theory from general relativity". International Journal of Geometric Methods in Modern Physics. 18 (6): 2150083–2151205. arXiv:1301.1588. Bibcode:2021IJGMM..1850083P. doi:10.1142/S0219887821500833. S2CID 119218875. Poplawski, Nikodem J. (2009). "Spacetime and fields". arXiv:0911.0334 [gr-qc]. de Sabbata, V. and Gasperini, M. (1985). "Introduction to Gravitation" (World Scientific). de Sabbata, V. and Sivaram, C. (1994). "Spin and Torsion in Gravitation" (World Scientific). Shapiro, I.L. (2002). "Physical aspects of the space–time torsion". Physics Reports. 357 (2): 113–213. arXiv:hep-th/0103093. Bibcode:2002PhR...357..113S. doi:10.1016/s0370-1573(01)00030-8. ISSN 0370-1573. S2CID 119356912. Trautman, Andrzej (1973). "Spin and Torsion May Avert Gravitational Singularities". Nature Physical Science. 242 (114): 7–8. Bibcode:1973NPhS..242....7T. doi:10.1038/physci242007a0. ISSN 0300-8746. Trautman, Andrzej (2006). "Einstein–Cartan Theory". arXiv:gr-qc/0606062.
Wikipedia/Einstein–Cartan_theory
General relativity, also known as the general theory of relativity, and as Einstein's theory of gravity, is the geometric theory of gravitation published by Albert Einstein in 1915 and is the current description of gravitation in modern physics. General relativity generalizes special relativity and refines Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time, or four-dimensional spacetime. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever is present, including matter and radiation. The relation is specified by the Einstein field equations, a system of second-order partial differential equations. Newton's law of universal gravitation, which describes gravity in classical mechanics, can be seen as a prediction of general relativity for the almost flat spacetime geometry around stationary mass distributions. Some predictions of general relativity, however, are beyond Newton's law of universal gravitation in classical physics. These predictions concern the passage of time, the geometry of space, the motion of bodies in free fall, and the propagation of light, and include gravitational time dilation, gravitational lensing, the gravitational redshift of light, the Shapiro time delay and singularities/black holes. So far, all tests of general relativity have been shown to be in agreement with the theory. The time-dependent solutions of general relativity enable us to talk about the history of the universe and have provided the modern framework for cosmology, thus leading to the discovery of the Big Bang and cosmic microwave background radiation. Despite the introduction of a number of alternative theories, general relativity continues to be the simplest theory consistent with experimental data. Reconciliation of general relativity with the laws of quantum physics remains a problem, however, as there is a lack of a self-consistent theory of quantum gravity. It is not yet known how gravity can be unified with the three non-gravitational forces: strong, weak and electromagnetic. Einstein's theory has astrophysical implications, including the prediction of black holes—regions of space in which space and time are distorted in such a way that nothing, not even light, can escape from them. Black holes are the end-state for massive stars. Microquasars and active galactic nuclei are believed to be stellar black holes and supermassive black holes. It also predicts gravitational lensing, where the bending of light results in multiple images of the same distant astronomical phenomenon. Other predictions include the existence of gravitational waves, which have been observed directly by the physics collaboration LIGO and other observatories. In addition, general relativity has provided the base of cosmological models of an expanding universe. Widely acknowledged as a theory of extraordinary beauty, general relativity has often been described as the most beautiful of all existing physical theories. == History == Henri Poincaré's 1905 theory of the dynamics of the electron was a relativistic theory which he applied to all forces, including gravity. While others thought that gravity was instantaneous or of electromagnetic origin, he suggested that relativity was "something due to our methods of measurement". In his theory, he showed that gravitational waves propagate at the speed of light. 
Soon afterwards, Einstein started thinking about how to incorporate gravity into his relativistic framework. In 1907, beginning with a simple thought experiment involving an observer in free fall (FFO), he embarked on what would be an eight-year search for a relativistic theory of gravity. After numerous detours and false starts, his work culminated in the presentation to the Prussian Academy of Science in November 1915 of what are now known as the Einstein field equations, which form the core of Einstein's general theory of relativity. These equations specify how the geometry of space and time is influenced by whatever matter and radiation are present. A version of non-Euclidean geometry, called Riemannian geometry, enabled Einstein to develop general relativity by providing the key mathematical framework on which he fit his physical ideas of gravity. This idea was pointed out by mathematician Marcel Grossmann and published by Grossmann and Einstein in 1913. The Einstein field equations are nonlinear and considered difficult to solve. Einstein used approximation methods in working out initial predictions of the theory. But in 1916, the astrophysicist Karl Schwarzschild found the first non-trivial exact solution to the Einstein field equations, the Schwarzschild metric. This solution laid the groundwork for the description of the final stages of gravitational collapse, and the objects known today as black holes. In the same year, the first steps towards generalizing Schwarzschild's solution to electrically charged objects were taken, eventually resulting in the Reissner–Nordström solution, which is now associated with electrically charged black holes. In 1917, Einstein applied his theory to the universe as a whole, initiating the field of relativistic cosmology. In line with contemporary thinking, he assumed a static universe, adding a new parameter to his original field equations—the cosmological constant—to match that observational presumption. By 1929, however, the work of Hubble and others had shown that the universe is expanding. This is readily described by the expanding cosmological solutions found by Friedmann in 1922, which do not require a cosmological constant. Lemaître used these solutions to formulate the earliest version of the Big Bang models, in which the universe has evolved from an extremely hot and dense earlier state. Einstein later declared the cosmological constant the biggest blunder of his life. During that period, general relativity remained something of a curiosity among physical theories. It was clearly superior to Newtonian gravity, being consistent with special relativity and accounting for several effects unexplained by the Newtonian theory. Einstein showed in 1915 how his theory explained the anomalous perihelion advance of the planet Mercury without any arbitrary parameters ("fudge factors"), and in 1919 an expedition led by Eddington confirmed general relativity's prediction for the deflection of starlight by the Sun during the total solar eclipse of 29 May 1919, instantly making Einstein famous. Yet the theory remained outside the mainstream of theoretical physics and astrophysics until developments between approximately 1960 and 1975, now known as the golden age of general relativity. Physicists began to understand the concept of a black hole, and to identify quasars as one of these objects' astrophysical manifestations. 
Ever more precise solar system tests confirmed the theory's predictive power, and relativistic cosmology also became amenable to direct observational tests. General relativity has acquired a reputation as a theory of extraordinary beauty. Subrahmanyan Chandrasekhar has noted that at multiple levels, general relativity exhibits what Francis Bacon has termed a "strangeness in the proportion" (i.e. elements that excite wonderment and surprise). It juxtaposes fundamental concepts (space and time versus matter and motion) which had previously been considered as entirely independent. Chandrasekhar also noted that Einstein's only guides in his search for an exact theory were the principle of equivalence and his sense that a proper description of gravity should be geometrical at its basis, so that there was an "element of revelation" in the manner in which Einstein arrived at his theory. Other elements of beauty associated with the general theory of relativity are its simplicity and symmetry, the manner in which it incorporates invariance and unification, and its perfect logical consistency. In the preface to Relativity: The Special and the General Theory, Einstein said "The present book is intended, as far as possible, to give an exact insight into the theory of Relativity to those readers who, from a general scientific and philosophical point of view, are interested in the theory, but who are not conversant with the mathematical apparatus of theoretical physics. The work presumes a standard of education corresponding to that of a university matriculation examination, and, despite the shortness of the book, a fair amount of patience and force of will on the part of the reader. The author has spared himself no pains in his endeavour to present the main ideas in the simplest and most intelligible form, and on the whole, in the sequence and connection in which they actually originated." == From classical mechanics to general relativity == General relativity can be understood by examining its similarities with and departures from classical physics. The first step is the realization that classical mechanics and Newton's law of gravity admit a geometric description. The combination of this description with the laws of special relativity results in a heuristic derivation of general relativity. === Geometry of Newtonian gravity === At the base of classical mechanics is the notion that a body's motion can be described as a combination of free (or inertial) motion, and deviations from this free motion. Such deviations are caused by external forces acting on a body in accordance with Newton's second law of motion, which states that the net force acting on a body is equal to that body's (inertial) mass multiplied by its acceleration. The preferred inertial motions are related to the geometry of space and time: in the standard reference frames of classical mechanics, objects in free motion move along straight lines at constant speed. In modern parlance, their paths are geodesics, straight world lines in curved spacetime. Conversely, one might expect that inertial motions, once identified by observing the actual motions of bodies and making allowances for the external forces (such as electromagnetism or friction), can be used to define the geometry of space, as well as a time coordinate. However, there is an ambiguity once gravity comes into play. 
According to Newton's law of gravity, and independently verified by experiments such as that of Eötvös and its successors (see Eötvös experiment), there is a universality of free fall (also known as the weak equivalence principle, or the universal equality of inertial and passive-gravitational mass): the trajectory of a test body in free fall depends only on its position and initial speed, but not on any of its material properties. A simplified version of this is embodied in Einstein's elevator experiment, illustrated in the figure on the right: for an observer in an enclosed room, it is impossible to decide, by mapping the trajectory of bodies such as a dropped ball, whether the room is stationary in a gravitational field and the ball accelerating, or in free space aboard a rocket that is accelerating at a rate equal to that of the gravitational field versus the ball which upon release has nil acceleration. Given the universality of free fall, there is no observable distinction between inertial motion and motion under the influence of the gravitational force. This suggests the definition of a new class of inertial motion, namely that of objects in free fall under the influence of gravity. This new class of preferred motions, too, defines a geometry of space and time—in mathematical terms, it is the geodesic motion associated with a specific connection which depends on the gradient of the gravitational potential. Space, in this construction, still has the ordinary Euclidean geometry. However, spacetime as a whole is more complicated. As can be shown using simple thought experiments following the free-fall trajectories of different test particles, the result of transporting spacetime vectors that can denote a particle's velocity (time-like vectors) will vary with the particle's trajectory; mathematically speaking, the Newtonian connection is not integrable. From this, one can deduce that spacetime is curved. The resulting Newton–Cartan theory is a geometric formulation of Newtonian gravity using only covariant concepts, i.e. a description which is valid in any desired coordinate system. In this geometric description, tidal effects—the relative acceleration of bodies in free fall—are related to the derivative of the connection, showing how the modified geometry is caused by the presence of mass. === Relativistic generalization === As intriguing as geometric Newtonian gravity may be, its basis, classical mechanics, is merely a limiting case of (special) relativistic mechanics. In the language of symmetry: where gravity can be neglected, physics is Lorentz invariant as in special relativity rather than Galilei invariant as in classical mechanics. (The defining symmetry of special relativity is the Poincaré group, which includes translations, rotations, boosts and reflections.) The differences between the two become significant when dealing with speeds approaching the speed of light, and with high-energy phenomena. With Lorentz symmetry, additional structures come into play. They are defined by the set of light cones (see image). The light-cones define a causal structure: for each event A, there is a set of events that can, in principle, either influence or be influenced by A via signals or interactions that do not need to travel faster than light (such as event B in the image), and a set of events for which such an influence is impossible (such as event C in the image). These sets are observer-independent. 
In conjunction with the world-lines of freely falling particles, the light-cones can be used to reconstruct the spacetime's semi-Riemannian metric, at least up to a positive scalar factor. In mathematical terms, this defines a conformal structure or conformal geometry. Special relativity is defined in the absence of gravity. For practical applications, it is a suitable model whenever gravity can be neglected. Bringing gravity into play, and assuming the universality of free fall motion, an analogous reasoning as in the previous section applies: there are no global inertial frames. Instead there are approximate inertial frames moving alongside freely falling particles. Translated into the language of spacetime: the straight time-like lines that define a gravity-free inertial frame are deformed to lines that are curved relative to each other, suggesting that the inclusion of gravity necessitates a change in spacetime geometry. A priori, it is not clear whether the new local frames in free fall coincide with the reference frames in which the laws of special relativity hold—that theory is based on the propagation of light, and thus on electromagnetism, which could have a different set of preferred frames. But using different assumptions about the special-relativistic frames (such as their being earth-fixed, or in free fall), one can derive different predictions for the gravitational redshift, that is, the way in which the frequency of light shifts as the light propagates through a gravitational field (cf. below). The actual measurements show that free-falling frames are the ones in which light propagates as it does in special relativity. The generalization of this statement, namely that the laws of special relativity hold to good approximation in freely falling (and non-rotating) reference frames, is known as the Einstein equivalence principle, a crucial guiding principle for generalizing special-relativistic physics to include gravity. The same experimental data shows that time as measured by clocks in a gravitational field—proper time, to give the technical term—does not follow the rules of special relativity. In the language of spacetime geometry, it is not measured by the Minkowski metric. As in the Newtonian case, this is suggestive of a more general geometry. At small scales, all reference frames that are in free fall are equivalent, and approximately Minkowskian. Consequently, we are now dealing with a curved generalization of Minkowski space. The metric tensor that defines the geometry—in particular, how lengths and angles are measured—is not the Minkowski metric of special relativity, it is a generalization known as a semi- or pseudo-Riemannian metric. Furthermore, each Riemannian metric is naturally associated with one particular kind of connection, the Levi-Civita connection, and this is, in fact, the connection that satisfies the equivalence principle and makes space locally Minkowskian (that is, in suitable locally inertial coordinates, the metric is Minkowskian, and its first partial derivatives and the connection coefficients vanish). === Einstein's equations === Having formulated the relativistic, geometric version of the effects of gravity, the question of gravity's source remains. In Newtonian gravity, the source is mass. In special relativity, mass turns out to be part of a more general quantity called the energy–momentum tensor, which includes both energy and momentum densities as well as stress: pressure and shear. 
Using the equivalence principle, this tensor is readily generalized to curved spacetime. Drawing further upon the analogy with geometric Newtonian gravity, it is natural to assume that the field equation for gravity relates this tensor and the Ricci tensor, which describes a particular class of tidal effects: the change in volume for a small cloud of test particles that are initially at rest, and then fall freely. In special relativity, conservation of energy–momentum corresponds to the statement that the energy–momentum tensor is divergence-free. This formula, too, is readily generalized to curved spacetime by replacing partial derivatives with their curved-manifold counterparts, covariant derivatives studied in differential geometry. With this additional condition—the covariant divergence of the energy–momentum tensor, and hence of whatever is on the other side of the equation, is zero—the simplest nontrivial set of equations are what are called Einstein's (field) equations: On the left-hand side is the Einstein tensor, G μ ν {\displaystyle G_{\mu \nu }} , which is symmetric and a specific divergence-free combination of the Ricci tensor R μ ν {\displaystyle R_{\mu \nu }} and the metric. In particular, R = g μ ν R μ ν {\displaystyle R=g^{\mu \nu }R_{\mu \nu }} is the curvature scalar. The Ricci tensor itself is related to the more general Riemann curvature tensor as R μ ν = R α μ α ν . {\displaystyle R_{\mu \nu }={R^{\alpha }}_{\mu \alpha \nu }.} On the right-hand side, κ {\displaystyle \kappa } is a constant and T μ ν {\displaystyle T_{\mu \nu }} is the energy–momentum tensor. All tensors are written in abstract index notation. Matching the theory's prediction to observational results for planetary orbits or, equivalently, assuring that the weak-gravity, low-speed limit is Newtonian mechanics, the proportionality constant κ {\displaystyle \kappa } is found to be κ = 8 π G c 4 {\textstyle \kappa ={\frac {8\pi G}{c^{4}}}} , where G {\displaystyle G} is the Newtonian constant of gravitation and c {\displaystyle c} the speed of light in vacuum. When there is no matter present, so that the energy–momentum tensor vanishes, the results are the vacuum Einstein equations, R μ ν = 0. {\displaystyle R_{\mu \nu }=0.} In general relativity, the world line of a particle free from all external, non-gravitational force is a particular type of geodesic in curved spacetime. In other words, a freely moving or falling particle always moves along a geodesic. The geodesic equation is: d 2 x μ d s 2 + Γ μ α β d x α d s d x β d s = 0 , {\displaystyle {d^{2}x^{\mu } \over ds^{2}}+\Gamma ^{\mu }{}_{\alpha \beta }{dx^{\alpha } \over ds}{dx^{\beta } \over ds}=0,} where s {\displaystyle s} is a scalar parameter of motion (e.g. the proper time), and Γ μ α β {\displaystyle \Gamma ^{\mu }{}_{\alpha \beta }} are Christoffel symbols (sometimes called the affine connection coefficients or Levi-Civita connection coefficients) which is symmetric in the two lower indices. Greek indices may take the values: 0, 1, 2, 3 and the summation convention is used for repeated indices α {\displaystyle \alpha } and β {\displaystyle \beta } . The quantity on the left-hand-side of this equation is the acceleration of a particle, and so this equation is analogous to Newton's laws of motion which likewise provide formulae for the acceleration of a particle. This equation of motion employs the Einstein notation, meaning that repeated indices are summed (i.e. from zero to three). 
The Christoffel symbols are functions of the four spacetime coordinates, and so are independent of the velocity or acceleration or other characteristics of a test particle whose motion is described by the geodesic equation. === Total force in general relativity === In general relativity, the effective gravitational potential energy of an object of mass m revolving around a massive central body M is given by U f ( r ) = − G M m r + L 2 2 m r 2 − G M L 2 m c 2 r 3 {\displaystyle U_{f}(r)=-{\frac {GMm}{r}}+{\frac {L^{2}}{2mr^{2}}}-{\frac {GML^{2}}{mc^{2}r^{3}}}} A conservative total force can then be obtained as its negative gradient F f ( r ) = − G M m r 2 + L 2 m r 3 − 3 G M L 2 m c 2 r 4 {\displaystyle F_{f}(r)=-{\frac {GMm}{r^{2}}}+{\frac {L^{2}}{mr^{3}}}-{\frac {3GML^{2}}{mc^{2}r^{4}}}} where L is the angular momentum. The first term represents the force of Newtonian gravity, which is described by the inverse-square law. The second term represents the centrifugal force in the circular motion. The third term represents the relativistic effect. === Alternatives to general relativity === There are alternatives to general relativity built upon the same premises, which include additional rules and/or constraints, leading to different field equations. Examples are Whitehead's theory, Brans–Dicke theory, teleparallelism, f(R) gravity and Einstein–Cartan theory. == Definition and basic applications == The derivation outlined in the previous section contains all the information needed to define general relativity, describe its key properties, and address a question of crucial importance in physics, namely how the theory can be used for model-building. === Definition and basic properties === General relativity is a metric theory of gravitation. At its core are Einstein's equations, which describe the relation between the geometry of a four-dimensional pseudo-Riemannian manifold representing spacetime, and the energy–momentum contained in that spacetime. Phenomena that in classical mechanics are ascribed to the action of the force of gravity (such as free-fall, orbital motion, and spacecraft trajectories), correspond to inertial motion within a curved geometry of spacetime in general relativity; there is no gravitational force deflecting objects from their natural, straight paths. Instead, gravity corresponds to changes in the properties of space and time, which in turn changes the straightest-possible paths that objects will naturally follow. The curvature is, in turn, caused by the energy–momentum of matter. Paraphrasing the relativist John Archibald Wheeler, spacetime tells matter how to move; matter tells spacetime how to curve. While general relativity replaces the scalar gravitational potential of classical physics by a symmetric rank-two tensor, the latter reduces to the former in certain limiting cases. For weak gravitational fields and slow speed relative to the speed of light, the theory's predictions converge on those of Newton's law of universal gravitation. As it is constructed using tensors, general relativity exhibits general covariance: its laws—and further laws formulated within the general relativistic framework—take on the same form in all coordinate systems. Furthermore, the theory does not contain any invariant geometric background structures, i.e. it is background independent. It thus satisfies a more stringent general principle of relativity, namely that the laws of physics are the same for all observers. 
Locally, as expressed in the equivalence principle, spacetime is Minkowskian, and the laws of physics exhibit local Lorentz invariance. === Model-building === The core concept of general-relativistic model-building is that of a solution of Einstein's equations. Given both Einstein's equations and suitable equations for the properties of matter, such a solution consists of a specific semi-Riemannian manifold (usually defined by giving the metric in specific coordinates), and specific matter fields defined on that manifold. Matter and geometry must satisfy Einstein's equations, so in particular, the matter's energy–momentum tensor must be divergence-free. The matter must, of course, also satisfy whatever additional equations were imposed on its properties. In short, such a solution is a model universe that satisfies the laws of general relativity, and possibly additional laws governing whatever matter might be present. Einstein's equations are nonlinear partial differential equations and, as such, difficult to solve exactly. Nevertheless, a number of exact solutions are known, although only a few have direct physical applications. The best-known exact solutions, and also those most interesting from a physics point of view, are the Schwarzschild solution, the Reissner–Nordström solution and the Kerr metric, each corresponding to a certain type of black hole in an otherwise empty universe, and the Friedmann–Lemaître–Robertson–Walker and de Sitter universes, each describing an expanding cosmos. Exact solutions of great theoretical interest include the Gödel universe (which opens up the intriguing possibility of time travel in curved spacetimes), the Taub–NUT solution (a model universe that is homogeneous, but anisotropic), and anti-de Sitter space (which has recently come to prominence in the context of what is called the Maldacena conjecture). Given the difficulty of finding exact solutions, Einstein's field equations are also solved frequently by numerical integration on a computer, or by considering small perturbations of exact solutions. In the field of numerical relativity, powerful computers are employed to simulate the geometry of spacetime and to solve Einstein's equations for interesting situations such as two colliding black holes. In principle, such methods may be applied to any system, given sufficient computer resources, and may address fundamental questions such as naked singularities. Approximate solutions may also be found by perturbation theories such as linearized gravity and its generalization, the post-Newtonian expansion, both of which were developed by Einstein. The latter provides a systematic approach to solving for the geometry of a spacetime that contains a distribution of matter that moves slowly compared with the speed of light. The expansion involves a series of terms; the first terms represent Newtonian gravity, whereas the later terms represent ever smaller corrections to Newton's theory due to general relativity. An extension of this expansion is the parametrized post-Newtonian (PPN) formalism, which allows quantitative comparisons between the predictions of general relativity and alternative theories. == Consequences of Einstein's theory == General relativity has a number of physical consequences. Some follow directly from the theory's axioms, whereas others have become clear only in the course of many years of research that followed Einstein's initial publication. 
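Before turning to these consequences, a standard computational exercise makes the discussion of exact solutions above concrete. The sketch below (sympy; the symbol r_s for the Schwarzschild radius and the helper names are choices made for this illustration) assembles the Christoffel symbols, the Riemann tensor and the Ricci tensor from the Schwarzschild metric components and confirms that every Ricci component vanishes, i.e. that this metric satisfies the vacuum field equations Rμν = 0 quoted earlier.

```python
# Sketch: symbolic verification that the Schwarzschild metric satisfies the vacuum
# field equations R_mu_nu = 0. The symbol r_s (Schwarzschild radius) and the helper
# names are choices made for this illustration. Requires sympy.
import sympy as sp

t, th, ph = sp.symbols('t theta phi')
r, rs = sp.symbols('r r_s', positive=True)
x = [t, r, th, ph]

f = 1 - rs / r
g = sp.diag(-f, 1 / f, r**2, r**2 * sp.sin(th)**2)   # metric components g_{mu nu}
ginv = g.inv()                                        # inverse metric g^{mu nu}

# Christoffel symbols: Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
Gamma = [[[sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, c], x[b])
                                         + sp.diff(g[d, b], x[c])
                                         - sp.diff(g[b, c], x[d]))
                           for d in range(4)) / 2)
           for c in range(4)]
          for b in range(4)]
         for a in range(4)]

# Riemann tensor R^a_{bcd} = d_c Gamma^a_{bd} - d_d Gamma^a_{bc}
#                            + Gamma^a_{ck} Gamma^k_{bd} - Gamma^a_{dk} Gamma^k_{bc}
def riemann(a, b, c, d):
    return (sp.diff(Gamma[a][b][d], x[c]) - sp.diff(Gamma[a][b][c], x[d])
            + sum(Gamma[a][c][k] * Gamma[k][b][d] - Gamma[a][d][k] * Gamma[k][b][c]
                  for k in range(4)))

# Ricci tensor R_{bd} = R^a_{bad}; every component should simplify to zero.
ricci = sp.Matrix(4, 4, lambda b, d: sp.simplify(sum(riemann(a, b, a, d) for a in range(4))))
assert ricci == sp.zeros(4, 4)
print("All components of the Ricci tensor vanish: the vacuum equations hold.")
```

The same loop structure can be pointed at any other metric given in explicit coordinates, at the cost of heavier symbolic simplification; this is how exact solutions are commonly checked with computer algebra.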
=== Gravitational time dilation and frequency shift === Assuming that the equivalence principle holds, gravity influences the passage of time. Light sent down into a gravity well is blueshifted, whereas light sent in the opposite direction (i.e., climbing out of the gravity well) is redshifted; collectively, these two effects are known as the gravitational frequency shift. More generally, processes close to a massive body run more slowly when compared with processes taking place farther away; this effect is known as gravitational time dilation. Gravitational redshift has been measured in the laboratory and using astronomical observations. Gravitational time dilation in the Earth's gravitational field has been measured numerous times using atomic clocks, while ongoing validation is provided as a side effect of the operation of the Global Positioning System (GPS). Tests in stronger gravitational fields are provided by the observation of binary pulsars. All results are in agreement with general relativity. However, at the current level of accuracy, these observations cannot distinguish between general relativity and other theories in which the equivalence principle is valid. === Light deflection and gravitational time delay === General relativity predicts that the path of light will follow the curvature of spacetime as it passes near a massive object. This effect was initially confirmed by observing the light of stars or distant quasars being deflected as it passes the Sun. This and related predictions follow from the fact that light follows what is called a light-like or null geodesic—a generalization of the straight lines along which light travels in classical physics. Such geodesics are the generalization of the invariance of lightspeed in special relativity. As one examines suitable model spacetimes (either the exterior Schwarzschild solution or, for more than a single mass, the post-Newtonian expansion), several effects of gravity on light propagation emerge. Although the bending of light can also be derived by extending the universality of free fall to light, the angle of deflection resulting from such calculations is only half the value given by general relativity. Closely related to light deflection is the Shapiro time delay, the phenomenon that light signals take longer to move through a gravitational field than they would in the absence of that field. There have been numerous successful tests of this prediction. In the parameterized post-Newtonian formalism (PPN), measurements of both the deflection of light and the gravitational time delay determine a parameter called γ, which encodes the influence of gravity on the geometry of space. === Gravitational waves === In 1916, Albert Einstein predicted the existence of gravitational waves: ripples in the metric of spacetime that propagate at the speed of light. They represent one of several analogies between weak-field gravity and electromagnetism, in that they are analogous to electromagnetic waves. On 11 February 2016, the Advanced LIGO team announced that they had directly detected gravitational waves from a pair of merging black holes. The simplest type of such a wave can be visualized by its action on a ring of freely floating particles. A sine wave propagating through such a ring towards the reader distorts the ring in a characteristic, rhythmic fashion. Since Einstein's equations are non-linear, arbitrarily strong gravitational waves do not obey linear superposition, making their description difficult. 
However, linear approximations of gravitational waves are sufficiently accurate to describe the exceedingly weak waves that are expected to arrive here on Earth from far-off cosmic events, which typically result in relative distances increasing and decreasing by 10⁻²¹ or less. Data analysis methods routinely make use of the fact that these linearized waves can be Fourier decomposed. Some exact solutions describe gravitational waves without any approximation, e.g., a wave train traveling through empty space or Gowdy universes, varieties of an expanding cosmos filled with gravitational waves. But for gravitational waves produced in astrophysically relevant situations, such as the merger of two black holes, numerical methods are presently the only way to construct appropriate models. === Orbital effects and the relativity of direction === General relativity differs from classical mechanics in a number of predictions concerning orbiting bodies. It predicts an overall rotation (precession) of planetary orbits, as well as orbital decay caused by the emission of gravitational waves and effects related to the relativity of direction. ==== Precession of apsides ==== In general relativity, the apsides of any orbit (the point of the orbiting body's closest approach to the system's center of mass) will precess; the orbit is not an ellipse, but akin to an ellipse that rotates about its focus, resulting in a rose curve-like shape. Einstein first derived this result by using an approximate metric representing the Newtonian limit and treating the orbiting body as a test particle. For him, the fact that his theory gave a straightforward explanation of Mercury's anomalous perihelion shift, discovered earlier by Urbain Le Verrier in 1859, was important evidence that he had at last identified the correct form of the gravitational field equations. The effect can also be derived by using either the exact Schwarzschild metric (describing spacetime around a spherical mass) or the much more general post-Newtonian formalism. It is due to the influence of gravity on the geometry of space and to the contribution of self-energy to a body's gravity (encoded in the nonlinearity of Einstein's equations). Relativistic precession has been observed for all planets that allow for accurate precession measurements (Mercury, Venus, and Earth), as well as in binary pulsar systems, where it is larger by five orders of magnitude. In general relativity the perihelion shift σ, expressed in radians per revolution, is approximately given by {\displaystyle \sigma ={\frac {24\pi ^{3}L^{2}}{T^{2}c^{2}(1-e^{2})}}\ ,} where L is the semi-major axis, T is the orbital period, c is the speed of light in vacuum, and e is the orbital eccentricity. ==== Orbital decay ==== According to general relativity, a binary system will emit gravitational waves, thereby losing energy. Due to this loss, the distance between the two orbiting bodies decreases, and so does their orbital period. Within the Solar System or for ordinary double stars, the effect is too small to be observable. This is not the case for a close binary pulsar, a system of two orbiting neutron stars, one of which is a pulsar: from the pulsar, observers on Earth receive a regular series of radio pulses that can serve as a highly accurate clock, which allows precise measurements of the orbital period. 
Because neutron stars are immensely compact, significant amounts of energy are emitted in the form of gravitational radiation. The first observation of a decrease in orbital period due to the emission of gravitational waves was made by Hulse and Taylor, using the binary pulsar PSR B1913+16 they had discovered in 1974. This was the first detection of gravitational waves, albeit indirect, for which they were awarded the 1993 Nobel Prize in Physics. Since then, several other binary pulsars have been found, in particular the double pulsar PSR J0737−3039, where both stars are pulsars and which was last reported to also be in agreement with general relativity in 2021 after 16 years of observations. ==== Geodetic precession and frame-dragging ==== Several relativistic effects are directly related to the relativity of direction. One is geodetic precession: the axis direction of a gyroscope in free fall in curved spacetime will change when compared, for instance, with the direction of light received from distant stars—even though such a gyroscope represents the way of keeping a direction as stable as possible ("parallel transport"). For the Moon–Earth system, this effect has been measured with the help of lunar laser ranging. More recently, it has been measured for test masses aboard the satellite Gravity Probe B to a precision of better than 0.3%. Near a rotating mass, there are gravitomagnetic or frame-dragging effects. A distant observer will determine that objects close to the mass get "dragged around". This is most extreme for rotating black holes where, for any object entering a zone known as the ergosphere, rotation is inevitable. Such effects can again be tested through their influence on the orientation of gyroscopes in free fall. Somewhat controversial tests have been performed using the LAGEOS satellites, confirming the relativistic prediction. The Mars Global Surveyor probe orbiting Mars has also been used for such tests. == Astrophysical applications == === Gravitational lensing === The deflection of light by gravity is responsible for a new class of astronomical phenomena. If a massive object is situated between the astronomer and a distant target object with appropriate mass and relative distances, the astronomer will see multiple distorted images of the target. Such effects are known as gravitational lensing. Depending on the configuration, scale, and mass distribution, there can be two or more images, a bright ring known as an Einstein ring, or partial rings called arcs. The earliest example was discovered in 1979; since then, more than a hundred gravitational lenses have been observed. Even if the multiple images are too close to each other to be resolved, the effect can still be measured, e.g., as an overall brightening of the target object; a number of such "microlensing events" have been observed. Gravitational lensing has developed into a tool of observational astronomy. It is used to detect the presence and distribution of dark matter, to provide a "natural telescope" for observing distant galaxies, and to obtain an independent estimate of the Hubble constant. Statistical evaluations of lensing data provide valuable insight into the structural evolution of galaxies. === Gravitational-wave astronomy === Observations of binary pulsars provide strong indirect evidence for the existence of gravitational waves (see Orbital decay, above). Detection of these waves is a major goal of current relativity-related research. 
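The indirect evidence mentioned above can be made quantitative at leading order with the quadrupole formula. The sketch below is illustrative only: the Peters–Mathews expression and the rounded system parameters for PSR B1913+16 (masses, period, eccentricity) are assumptions taken from the standard literature rather than values quoted in this article, and the result comes out close to the measured period decay of about -2.4 × 10⁻¹² seconds per second.

```python
# Illustrative sketch: leading-order (quadrupole-formula) estimate of the orbital
# period decay of the Hulse-Taylor binary pulsar. The formula and the rounded
# parameter values below are assumed from the published literature, not quoted
# in this article.

import math

G = 6.674e-11            # m^3 kg^-1 s^-2
c = 2.998e8              # m/s
M_sun = 1.989e30         # kg

m_p = 1.441 * M_sun      # pulsar mass (approximate)
m_c = 1.387 * M_sun      # companion mass (approximate)
P_b = 27906.98           # orbital period in seconds (about 7.75 hours)
e = 0.617                # orbital eccentricity

# Eccentricity enhancement factor of the Peters-Mathews quadrupole formula
f_e = (1 + (73 / 24) * e**2 + (37 / 96) * e**4) / (1 - e**2)**3.5

dP_dt = (-(192 * math.pi / 5)
         * G**(5 / 3) / c**5
         * (P_b / (2 * math.pi))**(-5 / 3)
         * f_e
         * m_p * m_c / (m_p + m_c)**(1 / 3))

print(f"predicted dP_b/dt = {dP_dt:.2e} (seconds per second)")
# prints a value close to -2.4e-12, in agreement with the observed decay
```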
Several land-based gravitational wave detectors are currently in operation, most notably the interferometric detectors GEO 600, LIGO (two detectors), TAMA 300 and VIRGO. Various pulsar timing arrays are using millisecond pulsars to detect gravitational waves in the 10⁻⁹ to 10⁻⁶ hertz frequency range, which originate from binary supermassive black holes. A European space-based detector, eLISA / NGO, is currently under development, with a precursor mission (LISA Pathfinder) having launched in December 2015. Observations of gravitational waves promise to complement observations in the electromagnetic spectrum. They are expected to yield information about black holes and other dense objects such as neutron stars and white dwarfs, about certain kinds of supernova implosions, and about processes in the very early universe, including the signature of certain types of hypothetical cosmic strings. In February 2016, the Advanced LIGO team announced that they had detected gravitational waves from a black hole merger. === Black holes and other compact objects === Whenever the ratio of an object's mass to its radius becomes sufficiently large, general relativity predicts the formation of a black hole, a region of space from which nothing, not even light, can escape. In the currently accepted models of stellar evolution, neutron stars of around 1.4 solar masses, and stellar black holes with a few to a few dozen solar masses, are thought to be the final state for the evolution of massive stars. Usually a galaxy has one supermassive black hole with a few million to a few billion solar masses in its center, and its presence is thought to have played an important role in the formation of the galaxy and larger cosmic structures. Astronomically, the most important property of compact objects is that they provide a supremely efficient mechanism for converting gravitational energy into electromagnetic radiation. Accretion, the falling of dust or gaseous matter onto stellar or supermassive black holes, is thought to be responsible for some spectacularly luminous astronomical objects, notably diverse kinds of active galactic nuclei on galactic scales and stellar-size objects such as microquasars. In particular, accretion can lead to relativistic jets, focused beams of highly energetic particles that are being flung into space at almost light speed. General relativity plays a central role in modelling all these phenomena, and observations provide strong evidence for the existence of black holes with the properties predicted by the theory. Black holes are also sought-after targets in the search for gravitational waves (cf. Gravitational waves, above). Merging black hole binaries should lead to some of the strongest gravitational wave signals reaching detectors here on Earth, and the phase directly before the merger ("chirp") could be used as a "standard candle" to deduce the distance to the merger events–and hence serve as a probe of cosmic expansion at large distances. The gravitational waves produced as a stellar black hole plunges into a supermassive one should provide direct information about the supermassive black hole's geometry. 
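The mass-to-radius criterion mentioned at the start of the black-hole subsection is conveniently expressed through the Schwarzschild radius r_s = 2GM/c². The short sketch below is illustrative; the formula and the rounded masses are standard textbook values rather than numbers quoted in this article.

```python
# Illustrative sketch: the Schwarzschild radius r_s = 2GM/c^2, the size to which
# a mass M would have to be compressed to form a (non-rotating) black hole.
# Constants and masses are rounded standard values assumed for this example.

G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
M_sun = 1.989e30    # kg

def schwarzschild_radius(mass_kg):
    """Event-horizon radius of a non-rotating black hole of the given mass."""
    return 2 * G * mass_kg / c**2

for name, mass in [("Earth", 5.972e24),
                   ("Sun", M_sun),
                   ("Sagittarius A* (about 4e6 solar masses)", 4.0e6 * M_sun)]:
    print(f"{name}: r_s = {schwarzschild_radius(mass):.3e} m")
# Earth: roughly 9 mm; Sun: roughly 3 km; Sagittarius A*: roughly 1.2e10 m
```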
=== Cosmology === The current models of cosmology are based on Einstein's field equations, which include the cosmological constant Λ since it has an important influence on the large-scale dynamics of the cosmos, {\displaystyle R_{\mu \nu }-{\textstyle 1 \over 2}R\,g_{\mu \nu }+\Lambda \ g_{\mu \nu }={\frac {8\pi G}{c^{4}}}\,T_{\mu \nu }} where {\displaystyle g_{\mu \nu }} is the spacetime metric. Isotropic and homogeneous solutions of these enhanced equations, the Friedmann–Lemaître–Robertson–Walker solutions, allow physicists to model a universe that has evolved over the past 14 billion years from a hot, early Big Bang phase. Once a small number of parameters (for example the universe's mean matter density) have been fixed by astronomical observation, further observational data can be used to put the models to the test. Predictions, all successful, include the initial abundance of chemical elements formed in a period of primordial nucleosynthesis, the large-scale structure of the universe, and the existence and properties of a "thermal echo" from the early cosmos, the cosmic background radiation. Astronomical observations of the cosmological expansion rate allow the total amount of matter in the universe to be estimated, although the nature of that matter remains mysterious in part. About 90% of all matter appears to be dark matter, which has mass (or, equivalently, gravitational influence), but does not interact electromagnetically and, hence, cannot be observed directly. There is no generally accepted description of this new kind of matter, within the framework of known particle physics or otherwise. Observational evidence from redshift surveys of distant supernovae and measurements of the cosmic background radiation also show that the evolution of our universe is significantly influenced by a cosmological constant resulting in an acceleration of cosmic expansion or, equivalently, by a form of energy with an unusual equation of state, known as dark energy, the nature of which remains unclear. An inflationary phase, an additional phase of strongly accelerated expansion at cosmic times of around 10⁻³³ seconds, was hypothesized in 1980 to account for several puzzling observations that were unexplained by classical cosmological models, such as the nearly perfect homogeneity of the cosmic background radiation. Recent measurements of the cosmic background radiation have resulted in the first evidence for this scenario. However, there is a bewildering variety of possible inflationary scenarios, which cannot be restricted by current observations. An even larger question is the physics of the earliest universe, prior to the inflationary phase and close to where the classical models predict the Big Bang singularity. An authoritative answer would require a complete theory of quantum gravity, which has not yet been developed (cf. the section on quantum gravity, below). === Exotic solutions: time travel, warp drives === Kurt Gödel showed that solutions to Einstein's equations exist that contain closed timelike curves (CTCs), which allow for loops in time. The solutions require extreme physical conditions unlikely ever to occur in practice, and it remains an open question whether further laws of physics will eliminate them completely. Since then, other—similarly impractical—GR solutions containing CTCs have been found, such as the Tipler cylinder and traversable wormholes. 
Stephen Hawking introduced the chronology protection conjecture, an assumption beyond those of standard general relativity that would prevent time travel. Some exact solutions of general relativity, such as the Alcubierre drive, provide examples of a warp drive, but these solutions require an exotic matter distribution and generally suffer from semiclassical instability. == Advanced concepts == === Asymptotic symmetries === The spacetime symmetry group for special relativity is the Poincaré group, which is a ten-dimensional group of three Lorentz boosts, three rotations, and four spacetime translations. It is logical to ask what symmetries, if any, might apply in General Relativity. A tractable case might be to consider the symmetries of spacetime as seen by observers located far away from all sources of the gravitational field. The naive expectation for asymptotically flat spacetime symmetries might be simply to extend and reproduce the symmetries of flat spacetime of special relativity, viz., the Poincaré group. In 1962 Hermann Bondi, M. G. van der Burg, A. W. Metzner and Rainer K. Sachs addressed this asymptotic symmetry problem in order to investigate the flow of energy at infinity due to propagating gravitational waves. Their first step was to decide on some physically sensible boundary conditions to place on the gravitational field at light-like infinity to characterize what it means to say a metric is asymptotically flat, making no a priori assumptions about the nature of the asymptotic symmetry group—not even the assumption that such a group exists. Then after designing what they considered to be the most sensible boundary conditions, they investigated the nature of the resulting asymptotic symmetry transformations that leave invariant the form of the boundary conditions appropriate for asymptotically flat gravitational fields. What they found was that the asymptotic symmetry transformations actually do form a group and the structure of this group does not depend on the particular gravitational field that happens to be present. This means that, as expected, one can separate the kinematics of spacetime from the dynamics of the gravitational field at least at spatial infinity. The puzzling surprise in 1962 was their discovery of a rich infinite-dimensional group (the so-called BMS group) as the asymptotic symmetry group, instead of the finite-dimensional Poincaré group, which is a subgroup of the BMS group. Not only are the Lorentz transformations asymptotic symmetry transformations, there are also additional transformations that are not Lorentz transformations but are asymptotic symmetry transformations. In fact, they found an additional infinity of transformation generators known as supertranslations. This implies the conclusion that General Relativity (GR) does not reduce to special relativity in the case of weak fields at long distances. It turns out that the BMS symmetry, suitably modified, could be seen as a restatement of the universal soft graviton theorem in quantum field theory (QFT), which relates universal infrared (soft) QFT with GR asymptotic spacetime symmetries. === Causal structure and global geometry === In general relativity, no material body can catch up with or overtake a light pulse. No influence from an event A can reach any other location X before light sent out at A reaches X. In consequence, an exploration of all light worldlines (null geodesics) yields key information about the spacetime's causal structure. 
This structure can be displayed using Penrose–Carter diagrams in which infinitely large regions of space and infinite time intervals are shrunk ("compactified") so as to fit onto a finite map, while light still travels along diagonals as in standard spacetime diagrams. Aware of the importance of causal structure, Roger Penrose and others developed what is known as global geometry. In global geometry, the object of study is not one particular solution (or family of solutions) to Einstein's equations. Rather, relations that hold true for all geodesics, such as the Raychaudhuri equation, and additional non-specific assumptions about the nature of matter (usually in the form of energy conditions) are used to derive general results. === Horizons === Using global geometry, some spacetimes can be shown to contain boundaries called horizons, which demarcate one region from the rest of spacetime. The best-known examples are black holes: if mass is compressed into a sufficiently compact region of space (as specified in the hoop conjecture, the relevant length scale is the Schwarzschild radius), no light from inside can escape to the outside. Since no object can overtake a light pulse, all interior matter is imprisoned as well. Passage from the exterior to the interior is still possible, showing that the boundary, the black hole's horizon, is not a physical barrier. Early studies of black holes relied on explicit solutions of Einstein's equations, notably the spherically symmetric Schwarzschild solution (used to describe a static black hole) and the axisymmetric Kerr solution (used to describe a rotating, stationary black hole, and introducing interesting features such as the ergosphere). Using global geometry, later studies have revealed more general properties of black holes. With time they become rather simple objects characterized by eleven parameters specifying: electric charge, mass–energy, linear momentum, angular momentum, and location at a specified time. This is stated by the black hole uniqueness theorem: "black holes have no hair", that is, no distinguishing marks like the hairstyles of humans. Irrespective of the complexity of a gravitating object collapsing to form a black hole, the object that results (having emitted gravitational waves) is very simple. Even more remarkably, there is a general set of laws known as black hole mechanics, which is analogous to the laws of thermodynamics. For instance, by the second law of black hole mechanics, the area of the event horizon of a general black hole will never decrease with time, analogous to the entropy of a thermodynamic system. This limits the energy that can be extracted by classical means from a rotating black hole (e.g. by the Penrose process). There is strong evidence that the laws of black hole mechanics are, in fact, a subset of the laws of thermodynamics, and that the black hole area is proportional to its entropy. This leads to a modification of the original laws of black hole mechanics: for instance, as the second law of black hole mechanics becomes part of the second law of thermodynamics, it is possible for the black hole area to decrease as long as other processes ensure that entropy increases overall. As thermodynamical objects with nonzero temperature, black holes should emit thermal radiation. Semiclassical calculations indicate that indeed they do, with the surface gravity playing the role of temperature in Planck's law. This radiation is known as Hawking radiation (cf. the quantum theory section, below). 
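The connection between surface gravity and temperature can be made explicit for the simplest case. The sketch below assumes the standard semiclassical formulas for a non-rotating black hole, T_H = ħc³/(8πGMk_B) and S = k_B c³ A/(4Għ); these expressions are not quoted in the article itself and are included only as an illustration of the thermodynamic analogy.

```python
# Illustrative sketch: Hawking temperature and Bekenstein-Hawking entropy of a
# non-rotating (Schwarzschild) black hole. The formulas below are the standard
# semiclassical results alluded to in the text; constants are rounded SI values
# assumed for this example.

import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
hbar = 1.055e-34     # reduced Planck constant, J s
k_B = 1.381e-23      # Boltzmann constant, J/K
M_sun = 1.989e30     # kg

def hawking_temperature(mass_kg):
    """Temperature set by the horizon's surface gravity, in kelvin."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

def horizon_entropy(mass_kg):
    """Bekenstein-Hawking entropy, proportional to the horizon area, in J/K."""
    r_s = 2 * G * mass_kg / c**2          # Schwarzschild radius
    area = 4 * math.pi * r_s**2           # horizon area
    return k_B * c**3 * area / (4 * G * hbar)

M = M_sun
print(f"T_H   = {hawking_temperature(M):.2e} K")       # about 6e-8 K for one solar mass
print(f"S/k_B = {horizon_entropy(M) / k_B:.2e}")       # about 1e77: enormous entropy
```

The tiny temperature and the enormous entropy obtained for a solar-mass black hole illustrate why Hawking radiation is negligible for astrophysical black holes even though the thermodynamic analogy is taken seriously.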
There are many other types of horizons. In an expanding universe, an observer may find that some regions of the past cannot be observed ("particle horizon"), and some regions of the future cannot be influenced (event horizon). Even in flat Minkowski space, when described by an accelerated observer (Rindler space), there will be horizons associated with a semiclassical radiation known as Unruh radiation. === Singularities === Another general feature of general relativity is the appearance of spacetime boundaries known as singularities. Spacetime can be explored by following up on timelike and lightlike geodesics—all possible ways that light and particles in free fall can travel. But some solutions of Einstein's equations have "ragged edges"—regions known as spacetime singularities, where the paths of light and falling particles come to an abrupt end, and geometry becomes ill-defined. In the more interesting cases, these are "curvature singularities", where geometrical quantities characterizing spacetime curvature, such as the Ricci scalar, take on infinite values. Well-known examples of spacetimes with future singularities—where worldlines end—are the Schwarzschild solution, which describes a singularity inside an eternal static black hole, or the Kerr solution with its ring-shaped singularity inside an eternal rotating black hole. The Friedmann–Lemaître–Robertson–Walker solutions and other spacetimes describing universes have past singularities on which worldlines begin, namely Big Bang singularities, and some have future singularities (Big Crunch) as well. Given that these examples are all highly symmetric—and thus simplified—it is tempting to conclude that the occurrence of singularities is an artifact of idealization. The famous singularity theorems, proved using the methods of global geometry, say otherwise: singularities are a generic feature of general relativity, and unavoidable once the collapse of an object with realistic matter properties has proceeded beyond a certain stage and also at the beginning of a wide class of expanding universes. However, the theorems say little about the properties of singularities, and much of current research is devoted to characterizing these entities' generic structure (hypothesized e.g. by the BKL conjecture). The cosmic censorship hypothesis states that all realistic future singularities (no perfect symmetries, matter with realistic properties) are safely hidden away behind a horizon, and thus invisible to all distant observers. While no formal proof yet exists, numerical simulations offer supporting evidence of its validity. === Evolution equations === Each solution of Einstein's equation encompasses the whole history of a universe—it is not just some snapshot of how things are, but a whole, possibly matter-filled, spacetime. It describes the state of matter and geometry everywhere and at every moment in that particular universe. Due to its general covariance, Einstein's theory is not sufficient by itself to determine the time evolution of the metric tensor. It must be combined with a coordinate condition, which is analogous to gauge fixing in other field theories. To understand Einstein's equations as partial differential equations, it is helpful to formulate them in a way that describes the evolution of the universe over time. This is done in "3+1" formulations, where spacetime is split into three space dimensions and one time dimension. The best-known example is the ADM formalism. 
These decompositions show that the spacetime evolution equations of general relativity are well-behaved: solutions always exist, and are uniquely defined, once suitable initial conditions have been specified. Such formulations of Einstein's field equations are the basis of numerical relativity. === Global and quasi-local quantities === The notion of evolution equations is intimately tied in with another aspect of general relativistic physics. In Einstein's theory, it turns out to be impossible to find a general definition for a seemingly simple property such as a system's total mass (or energy). The main reason is that the gravitational field—like any physical field—must be ascribed a certain energy, but that it proves to be fundamentally impossible to localize that energy. Nevertheless, there are possibilities to define a system's total mass, either using a hypothetical "infinitely distant observer" (ADM mass) or suitable symmetries (Komar mass). If one excludes from the system's total mass the energy being carried away to infinity by gravitational waves, the result is the Bondi mass at null infinity. Just as in classical physics, it can be shown that these masses are positive. Corresponding global definitions exist for momentum and angular momentum. There have also been a number of attempts to define quasi-local quantities, such as the mass of an isolated system formulated using only quantities defined within a finite region of space containing that system. The hope is to obtain a quantity useful for general statements about isolated systems, such as a more precise formulation of the hoop conjecture. == Relationship with quantum theory == If general relativity were considered to be one of the two pillars of modern physics, then quantum theory, the basis of understanding matter from elementary particles to solid-state physics, would be the other. However, how to reconcile quantum theory with general relativity is still an open question. === Quantum field theory in curved spacetime === Ordinary quantum field theories, which form the basis of modern elementary particle physics, are defined in flat Minkowski space, which is an excellent approximation when it comes to describing the behavior of microscopic particles in weak gravitational fields like those found on Earth. In order to describe situations in which gravity is strong enough to influence (quantum) matter, yet not strong enough to require quantization itself, physicists have formulated quantum field theories in curved spacetime. These theories rely on general relativity to describe a curved background spacetime, and define a generalized quantum field theory to describe the behavior of quantum matter within that spacetime. Using this formalism, it can be shown that black holes emit a blackbody spectrum of particles known as Hawking radiation leading to the possibility that they evaporate over time. As briefly mentioned above, this radiation plays an important role for the thermodynamics of black holes. === Quantum gravity === The demand for consistency between a quantum description of matter and a geometric description of spacetime, as well as the appearance of singularities (where curvature length scales become microscopic), indicate the need for a full theory of quantum gravity: for an adequate description of the interior of black holes, and of the very early universe, a theory is required in which gravity and the associated geometry of spacetime are described in the language of quantum physics. 
Despite major efforts, no complete and consistent theory of quantum gravity is currently known, even though a number of promising candidates exist. Attempts to generalize ordinary quantum field theories, used in elementary particle physics to describe fundamental interactions, so as to include gravity have led to serious problems. Some have argued that at low energies, this approach proves successful, in that it results in an acceptable effective (quantum) field theory of gravity. At very high energies, however, the perturbative results are badly divergent and lead to models devoid of predictive power ("perturbative non-renormalizability"). One attempt to overcome these limitations is string theory, a quantum theory not of point particles, but of minute one-dimensional extended objects. The theory promises to be a unified description of all particles and interactions, including gravity; the price to pay is unusual features such as six extra dimensions of space in addition to the usual three. In what is called the second superstring revolution, it was conjectured that both string theory and a unification of general relativity and supersymmetry known as supergravity form part of a hypothesized eleven-dimensional model known as M-theory, which would constitute a uniquely defined and consistent theory of quantum gravity. Another approach starts with the canonical quantization procedures of quantum theory. Using the initial-value formulation of general relativity (cf. evolution equations above), the result is the Wheeler–DeWitt equation (an analogue of the Schrödinger equation) which, regrettably, turns out to be ill-defined without a proper ultraviolet (lattice) cutoff. However, with the introduction of what are now known as Ashtekar variables, this leads to a promising model known as loop quantum gravity. Space is represented by a web-like structure called a spin network, evolving over time in discrete steps. Depending on which features of general relativity and quantum theory are accepted unchanged, and on what level changes are introduced, there are numerous other attempts to arrive at a viable theory of quantum gravity, some examples being the lattice theory of gravity based on the Feynman path-integral approach and Regge calculus, dynamical triangulations, causal sets, twistor models or the path-integral-based models of quantum cosmology. All candidate theories still have major formal and conceptual problems to overcome. They also face the common problem that, as yet, there is no way to put quantum gravity predictions to experimental tests (and thus to decide between the candidates where their predictions vary), although there is hope for this to change as future data from cosmological observations and particle physics experiments becomes available. == Current status == General relativity has emerged as a highly successful model of gravitation and cosmology, which has so far passed many unambiguous observational and experimental tests. However, there are strong indications that the theory is incomplete. The problem of quantum gravity and the question of the reality of spacetime singularities remain open. Observational data that is taken as evidence for dark energy and dark matter could indicate the need for new physics. Even taken as is, general relativity is rich with possibilities for further exploration. 
Mathematical relativists seek to understand the nature of singularities and the fundamental properties of Einstein's equations, while numerical relativists run increasingly powerful computer simulations (such as those describing merging black holes). In February 2016, it was announced that the existence of gravitational waves was directly detected by the Advanced LIGO team on 14 September 2015. A century after its introduction, general relativity remains a highly active area of research. == See also == Alcubierre drive – Hypothetical FTL transportation by warping space (warp drive) Alternatives to general relativity – Proposed theories of gravity Contributors to general relativity Derivations of the Lorentz transformations Ehrenfest paradox – Paradox in special relativity Einstein–Hilbert action – Concept in general relativity Einstein's thought experiments – Albert Einstein's hypothetical situations to argue scientific points General relativity priority dispute – Debate about credit for general relativity Introduction to the mathematics of general relativity Nordström's theory of gravitation – Predecessor to the theory of relativity Ricci calculus – Tensor index notation for tensor-based calculations Timeline of gravitational physics and relativity == References == == Bibliography == == Further reading == === Popular books === Einstein, A. (1916), Relativity: The Special and the General Theory, Berlin, ISBN 978-3-528-06059-6 Geroch, R. (1981), General Relativity from A to B, Chicago: University of Chicago Press, ISBN 978-0-226-28864-2 Lieber, Lillian (2008), The Einstein Theory of Relativity: A Trip to the Fourth Dimension, Philadelphia: Paul Dry Books, Inc., ISBN 978-1-58988-044-3 Schutz, Bernard F. (2001), "Gravitational radiation", in Murdin, Paul (ed.), Encyclopedia of Astronomy and Astrophysics, Institute of Physics Pub., ISBN 978-1-56159-268-5 Thorne, Kip; Hawking, Stephen (1994). Black Holes and Time Warps: Einstein's Outrageous Legacy. New York: W. W. Norton. ISBN 0-393-03505-0. Wald, Robert M. (1992), Space, Time, and Gravity: the Theory of the Big Bang and Black Holes, Chicago: University of Chicago Press, ISBN 978-0-226-87029-8 Wheeler, John; Ford, Kenneth (1998), Geons, Black Holes, & Quantum Foam: a life in physics, New York: W. W. Norton, ISBN 978-0-393-31991-0 === Beginning undergraduate textbooks === Yvonne Choquet-Bruhat (2014). Introduction to General Relativity, Black Holes, and Cosmology. Oxford University Press. ISBN 9780191936500. Taylor, Edwin F.; Wheeler, John Archibald (2000), Exploring Black Holes: Introduction to General Relativity, Addison Wesley, ISBN 978-0-201-38423-9 === Advanced undergraduate textbooks === Crowell, Ben (2020). General Relativity. Dirac, Paul (1996), General Theory of Relativity, Princeton University Press, ISBN 978-0-691-01146-2 Gron, O.; Hervik, S. (2007), Einstein's General theory of Relativity, Springer, ISBN 978-0-387-69199-2 Hartle, James B. (2003), Gravity: an Introduction to Einstein's General Relativity, San Francisco: Addison-Wesley, ISBN 978-0-8053-8662-2 Hughston, L. P.; Tod, K. P. (1991), Introduction to General Relativity, Cambridge: Cambridge University Press, ISBN 978-0-521-33943-8 d'Inverno, Ray (1992), Introducing Einstein's Relativity, Oxford: Oxford University Press, ISBN 978-0-19-859686-8 Ludyk, Günter (2013). Einstein in Matrix Form (1st ed.). Berlin: Springer. ISBN 978-3-642-35797-8. 
Møller, Christian (1955) [1952], The Theory of Relativity, Oxford University Press, OCLC 7644624 Moore, Thomas A (2012), A General Relativity Workbook, University Science Books, ISBN 978-1-891389-82-5 Schutz, B. F. (2009), A First Course in General Relativity (Second ed.), Cambridge University Press, Bibcode:2009fcgr.book.....S, ISBN 978-0-521-88705-2 === Graduate textbooks === Carroll, Sean M. (2004), Spacetime and Geometry: An Introduction to General Relativity, San Francisco: Addison-Wesley, Bibcode:2004sgig.book.....C, ISBN 978-0-8053-8732-2 Grøn, Øyvind; Hervik, Sigbjørn (2007), Einstein's General Theory of Relativity, New York: Springer, ISBN 978-0-387-69199-2 Landau, Lev D.; Lifshitz, Evgeny F. (1980), The Classical Theory of Fields (4th ed.), London: Butterworth-Heinemann, ISBN 978-0-7506-2768-9 Landsman, Klaas (2021). Foundations of General Relativity: From Einstein to Black Holes. Radboud University Press. ISBN 9789083178929. Stephani, Hans (1990), General Relativity: An Introduction to the Theory of the Gravitational Field, Cambridge: Cambridge University Press, Bibcode:1990grit.book.....S, ISBN 978-0-521-37941-0 Charles W. Misner; Kip S. Thorne; John Archibald Wheeler (1973), Gravitation, W. H. Freeman, Princeton University Press, ISBN 0-7167-0344-0 R.K. Sachs; H. Wu (1977), General Relativity for Mathematicians, Springer-Verlag, Bibcode:1977grm..book.....S, ISBN 1-4612-9905-5 Wald, Robert M. (1984). General Relativity. Chicago: University of Chicago Press. ISBN 0-226-87032-4. OCLC 10018614. === Specialists' books === Hawking, Stephen; Ellis, George (1975). The Large Scale Structure of Space-time. Cambridge University Press. ISBN 978-0-521-09906-6. Poisson, Eric (2007). A Relativist's Toolkit: The Mathematics of Black-Hole Mechanics. Cambridge University Press. ISBN 978-0-521-53780-3. === Journal articles === Einstein, Albert (1916), "Die Grundlage der allgemeinen Relativitätstheorie", Annalen der Physik, 49 (7): 769–822, Bibcode:1916AnP...354..769E, doi:10.1002/andp.19163540702 See also English translation at Einstein Papers Project Flanagan, Éanna É.; Hughes, Scott A. (2005), "The basics of gravitational wave theory", New J. Phys., 7 (1): 204, arXiv:gr-qc/0501041, Bibcode:2005NJPh....7..204F, doi:10.1088/1367-2630/7/1/204 Landgraf, M.; Hechler, M.; Kemble, S. (2005), "Mission design for LISA Pathfinder", Class. Quantum Grav., 22 (10): S487 – S492, arXiv:gr-qc/0411071, Bibcode:2005CQGra..22S.487L, doi:10.1088/0264-9381/22/10/048, S2CID 119476595 Nieto, Michael Martin (2006), "The quest to understand the Pioneer anomaly" (PDF), Europhysics News, 37 (6): 30–34, arXiv:gr-qc/0702017, Bibcode:2006ENews..37f..30N, doi:10.1051/epn:2006604, archived (PDF) from the original on 24 September 2015 Shapiro, I. I.; Pettengill, Gordon; Ash, Michael; Stone, Melvin; Smith, William; Ingalls, Richard; Brockelman, Richard (1968), "Fourth test of general relativity: preliminary results", Phys. Rev. Lett., 20 (22): 1265–1269, Bibcode:1968PhRvL..20.1265S, doi:10.1103/PhysRevLett.20.1265 Valtonen, M. J.; Lehto, H. J.; Nilsson, K.; Heidt, J.; Takalo, L. O.; Sillanpää, A.; Villforth, C.; Kidger, M.; et al. 
(2008), "A massive binary black-hole system in OJ 287 and a test of general relativity", Nature, 452 (7189): 851–853, arXiv:0809.1280, Bibcode:2008Natur.452..851V, doi:10.1038/nature06896, PMID 18421348, S2CID 4412396 == External links == Einstein Online Archived 1 June 2014 at the Wayback Machine – Articles on a variety of aspects of relativistic physics for a general audience; hosted by the Max Planck Institute for Gravitational Physics GEO600 home page, the official website of the GEO600 project. LIGO Laboratory NCSA Spacetime Wrinkles – produced by the numerical relativity group at the NCSA, with an elementary introduction to general relativity Einstein's General Theory of Relativity on YouTube (lecture by Leonard Susskind recorded 22 September 2008 at Stanford University). Series of lectures on General Relativity given in 2006 at the Institut Henri Poincaré (introductory/advanced). General Relativity Tutorials by John Baez. Brown, Kevin. "Reflections on relativity". Mathpages.com. Archived from the original on 18 December 2015. Retrieved 29 May 2005. Carroll, Sean M. (1997). "Lecture Notes on General Relativity". arXiv:gr-qc/9712019. Moor, Rafi. "Understanding General Relativity". Retrieved 11 July 2006. Waner, Stefan. "Introduction to Differential Geometry and General Relativity". Retrieved 5 April 2015. The Feynman Lectures on Physics Vol. II Ch. 42: Curved Space
Wikipedia/Theory_of_general_relativity