In particle physics and physical cosmology, Planck units are a system of units of measurement defined exclusively in terms of four universal physical constants: c, G, ħ, and kB (described further below). Expressing one of these physical constants in terms of Planck units yields a numerical value of 1. They are a system of natural units, defined using fundamental properties of nature (specifically, properties of free space) rather than properties of a chosen prototype object. Originally proposed in 1899 by German physicist Max Planck, they are relevant in research on unified theories such as quantum gravity. The term Planck scale refers to quantities of space, time, energy and other units that are similar in magnitude to corresponding Planck units. This region may be characterized by particle energies of around 10^19 GeV or 10^9 J, time intervals of around 5×10^−44 s and lengths of around 10^−35 m (approximately the energy equivalent of the Planck mass, the Planck time and the Planck length, respectively). At the Planck scale, the predictions of the Standard Model, quantum field theory and general relativity are not expected to apply, and quantum effects of gravity are expected to dominate. One example is represented by the conditions in the first 10^−43 seconds of our universe after the Big Bang, approximately 13.8 billion years ago. The four universal constants that, by definition, have a numerical value of 1 when expressed in these units are: c, the speed of light in vacuum, G, the gravitational constant, ħ, the reduced Planck constant, and kB, the Boltzmann constant. Variants of the basic idea of Planck units exist, such as alternate choices of normalization that give other numeric values to one or more of the four constants above. == Introduction == Any system of measurement may be assigned a mutually independent set of base quantities and associated base units, from which all other quantities and units may be derived. In the International System of Units, for example, the SI base quantities include length with the associated unit of the metre. In the system of Planck units, a similar set of base quantities and associated units may be selected, in terms of which other quantities and coherent units may be expressed.: 1215  The Planck unit of length has become known as the Planck length, and the Planck unit of time is known as the Planck time, but this nomenclature has not been established as extending to all quantities. All Planck units are derived from the dimensional universal physical constants that define the system, and in a convention in which these units are omitted (i.e. treated as having the dimensionless value 1), these constants are then eliminated from equations of physics in which they appear. For example, Newton's law of universal gravitation, {\displaystyle F=G{\frac {m_{1}m_{2}}{r^{2}}}=\left({\frac {F_{\text{P}}l_{\text{P}}^{2}}{m_{\text{P}}^{2}}}\right){\frac {m_{1}m_{2}}{r^{2}}},} can be expressed as: {\displaystyle {\frac {F}{F_{\text{P}}}}={\frac {\left({\dfrac {m_{1}}{m_{\text{P}}}}\right)\left({\dfrac {m_{2}}{m_{\text{P}}}}\right)}{\left({\dfrac {r}{l_{\text{P}}}}\right)^{2}}}.} Both equations are dimensionally consistent and equally valid in any system of quantities, but the second equation, with G absent, relates only dimensionless quantities, since any ratio of two like-dimensioned quantities is a dimensionless quantity.
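To make the construction above concrete, the following sketch computes the base Planck units directly from c, G, ħ and kB and then checks that Newton's law written in Planck-unit ratios needs no factor of G. The numerical constants are approximate CODATA values assumed here purely for illustration (G in particular carries a relative uncertainty of roughly 2.2×10^−5), and the specific masses and distance are arbitrary.

import math

# Approximate SI values of the defining constants (assumed for illustration)
c    = 2.99792458e8       # speed of light, m/s (exact in SI)
G    = 6.67430e-11        # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.054571817e-34    # reduced Planck constant, J s
kB   = 1.380649e-23       # Boltzmann constant, J/K (exact in SI)

# Base and derived Planck units
l_P = math.sqrt(hbar * G / c**3)       # Planck length, m        (~1.62e-35)
t_P = math.sqrt(hbar * G / c**5)       # Planck time, s          (~5.39e-44)
m_P = math.sqrt(hbar * c / G)          # Planck mass, kg         (~2.18e-8)
T_P = math.sqrt(hbar * c**5 / G) / kB  # Planck temperature, K   (~1.42e32)
E_P = m_P * c**2                       # Planck energy, J        (~1.96e9)
F_P = c**4 / G                         # Planck force, N         (~1.21e44)

# Newton's law, once in SI and once as the dimensionless Planck-unit form
m1, m2, r = 5.0, 3.0, 2.0              # arbitrary SI values (kg, kg, m)
F_SI = G * m1 * m2 / r**2
F_ratio = (m1 / m_P) * (m2 / m_P) / (r / l_P)**2   # note: no G appears here

# Both routes give the same force once the ratio is rescaled by F_P
assert math.isclose(F_SI, F_ratio * F_P, rel_tol=1e-12)
print(l_P, t_P, m_P, T_P, E_P, F_P)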
If, by a shorthand convention, it is understood that each physical quantity is the corresponding ratio with a coherent Planck unit (or "expressed in Planck units"), the ratios above may be expressed simply with the symbols of physical quantity, without being scaled explicitly by their corresponding unit: F ′ = m 1 ′ m 2 ′ r ′ 2 . {\displaystyle F'={\frac {m_{1}'m_{2}'}{r'^{2}}}.} This last equation (without G) is valid with F′, m1′, m2′, and r′ being the dimensionless ratio quantities corresponding to the standard quantities, written e.g. F′ ≘ F or F′ = F/FP, but not as a direct equality of quantities. This may seem to be "setting the constants c, G, etc., to 1" if the correspondence of the quantities is thought of as equality. For this reason, Planck or other natural units should be employed with care. Referring to "G = c = 1", Paul S. Wesson wrote that, "Mathematically it is an acceptable trick which saves labour. Physically it represents a loss of information and can lead to confusion." == History and definition == The concept of natural units was introduced in 1874, when George Johnstone Stoney, noting that electric charge is quantized, derived units of length, time, and mass, now named Stoney units in his honor. Stoney chose his units so that G, c, and the electron charge e would be numerically equal to 1. In 1899, one year before the advent of quantum theory, Max Planck introduced what became later known as the Planck constant. At the end of the paper, he proposed the base units that were later named in his honor. The Planck units are based on the quantum of action, now usually known as the Planck constant, which appeared in the Wien approximation for black-body radiation. Planck underlined the universality of the new unit system, writing: ... die Möglichkeit gegeben ist, Einheiten für Länge, Masse, Zeit und Temperatur aufzustellen, welche, unabhängig von speciellen Körpern oder Substanzen, ihre Bedeutung für alle Zeiten und für alle, auch ausserirdische und aussermenschliche Culturen nothwendig behalten und welche daher als »natürliche Maasseinheiten« bezeichnet werden können. ... it is possible to set up units for length, mass, time and temperature, which are independent of special bodies or substances, necessarily retaining their meaning for all times and for all civilizations, including extraterrestrial and non-human ones, which can be called "natural units of measure". Planck considered only the units based on the universal constants G {\displaystyle G} , h {\displaystyle h} , c {\displaystyle c} , and k B {\displaystyle k_{\rm {B}}} to arrive at natural units for length, time, mass, and temperature. His definitions differ from the modern ones by a factor of 2 π {\displaystyle {\sqrt {2\pi }}} , because the modern definitions use ℏ {\displaystyle \hbar } rather than h {\displaystyle h} . Unlike the case with the International System of Units, there is no official entity that establishes a definition of a Planck unit system. Some authors define the base Planck units to be those of mass, length and time, regarding an additional unit for temperature to be redundant. Other tabulations add, in addition to a unit for temperature, a unit for electric charge, so that either the Coulomb constant k e {\displaystyle k_{\text{e}}} or the vacuum permittivity ϵ 0 {\displaystyle \epsilon _{0}} is normalized to 1. 
Thus, depending on the author's choice, this charge unit is given by q P = 4 π ϵ 0 ℏ c ≈ 1.875546 × 10 − 18 C ≈ 11.7 e {\displaystyle q_{\text{P}}={\sqrt {4\pi \epsilon _{0}\hbar c}}\approx 1.875546\times 10^{-18}{\text{ C}}\approx 11.7\ e} for k e = 1 {\displaystyle k_{\text{e}}=1} , or q P = ϵ 0 ℏ c ≈ 5.290818 × 10 − 19 C ≈ 3.3 e {\displaystyle q_{\text{P}}={\sqrt {\epsilon _{0}\hbar c}}\approx 5.290818\times 10^{-19}{\text{ C}}\approx 3.3\ e} for ε 0 = 1 {\displaystyle \varepsilon _{0}=1} . Some of these tabulations also replace mass with energy when doing so. In SI units, the values of c, h, e and kB are exact and the values of ε0 and G in SI units respectively have relative uncertainties of 1.6×10−10‍ and 2.2×10−5. Hence, the uncertainties in the SI values of the Planck units derive almost entirely from uncertainty in the SI value of G. Compared to Stoney units, Planck base units are all larger by a factor 1 / α ≈ 11.7 {\textstyle {\sqrt {{1}/{\alpha }}}\approx 11.7} , where α {\displaystyle \alpha } is the fine-structure constant. == Derived units == In any system of measurement, units for many physical quantities can be derived from base units. Table 2 offers a sample of derived Planck units, some of which are seldom used. As with the base units, their use is mostly confined to theoretical physics because most of them are too large or too small for empirical or practical use and there are large uncertainties in their values. Some Planck units, such as of time and length, are many orders of magnitude too large or too small to be of practical use, so that Planck units as a system are typically only relevant to theoretical physics. In some cases, a Planck unit may suggest a limit to a range of a physical quantity where present-day theories of physics apply. For example, our understanding of the Big Bang does not extend to the Planck epoch, i.e., when the universe was less than one Planck time old. Describing the universe during the Planck epoch requires a theory of quantum gravity that would incorporate quantum effects into general relativity. Such a theory does not yet exist. Several quantities are not "extreme" in magnitude, such as the Planck mass, which is about 22 micrograms: very large in comparison with subatomic particles, and within the mass range of living organisms.: 872  Similarly, the related units of energy and of momentum are in the range of some everyday phenomena. == Significance == Planck units have little anthropocentric arbitrariness, but do still involve some arbitrary choices in terms of the defining constants. Unlike the metre and second, which exist as base units in the SI system for historical reasons, the Planck length and Planck time are conceptually linked at a fundamental physical level. Consequently, natural units help physicists to reframe questions. Frank Wilczek puts it succinctly: We see that the question [posed] is not, "Why is gravity so feeble?" but rather, "Why is the proton's mass so small?" For in natural (Planck) units, the strength of gravity simply is what it is, a primary quantity, while the proton's mass is the tiny number 1/13 quintillion. While it is true that the electrostatic repulsive force between two protons (alone in free space) greatly exceeds the gravitational attractive force between the same two protons, this is not about the relative strengths of the two fundamental forces. 
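Wilczek's point can be illustrated numerically. The sketch below (using the same assumed approximate constants as above, not authoritative values) expresses the Planck mass in micrograms and the proton mass in Planck units; the latter is the "tiny number" of roughly 1/13 quintillion quoted in the text.

import math

# Assumed approximate SI values (illustrative only)
hbar = 1.054571817e-34     # reduced Planck constant, J s
c    = 2.99792458e8        # speed of light, m/s
G    = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
m_proton = 1.67262192e-27  # proton mass, kg

m_P = math.sqrt(hbar * c / G)                       # Planck mass, kg
print(f"Planck mass ~ {m_P * 1e9:.1f} micrograms")  # ~21.8 micrograms

ratio = m_proton / m_P
print(f"proton mass in Planck units ~ {ratio:.3e}") # ~7.7e-20
print(f"i.e. roughly 1 / {1/ratio:.2e}")            # ~1 / 1.3e19, about 1/13 quintillion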
When Planck proposed his units, the goal was only that of establishing a universal ("natural") way of measuring objects, without giving any special meaning to quantities that measured one single unit. During the 1950s, multiple authors including Lev Landau and Oskar Klein argued that quantities on the order of the Planck scale indicated the limits of the validity of quantum field theory. John Archibald Wheeler proposed in 1955 that quantum fluctuations of spacetime become significant at the Planck scale, though at the time he was unaware of the Planck units. == Planck scale == In particle physics and physical cosmology, the Planck scale is an energy scale around 1.22×10^28 eV (the Planck energy, corresponding to the energy equivalent of the Planck mass, 2.17645×10^−8 kg) at which quantum effects of gravity become significant. At this scale, present descriptions and theories of sub-atomic particle interactions in terms of quantum field theory break down and become inadequate, due to the impact of the apparent non-renormalizability of gravity within current theories. === Relationship to gravity === At the Planck length scale, the strength of gravity is expected to become comparable with the other forces, and it has been theorized that all the fundamental forces are unified at that scale, but the exact mechanism of this unification remains unknown. The Planck scale is therefore the point at which the effects of quantum gravity can no longer be ignored in other fundamental interactions, where current calculations and approaches begin to break down, and a means to take account of its impact is necessary. On these grounds, it has been speculated that it may be an approximate lower limit at which a black hole could be formed by collapse. While physicists have a fairly good understanding of the other fundamental interactions at the quantum level, gravity is problematic, and cannot be integrated with quantum mechanics at very high energies using the usual framework of quantum field theory. At lower energies it is usually ignored, while for energies approaching or exceeding the Planck scale, a new theory of quantum gravity is necessary. Approaches to this problem include string theory and M-theory, loop quantum gravity, noncommutative geometry, and causal set theory. === In cosmology === In Big Bang cosmology, the Planck epoch or Planck era is the earliest stage of the Big Bang, before the elapsed time was equal to the Planck time, tP, or approximately 10^−43 seconds. There is no currently available physical theory to describe such short times, and it is not clear in what sense the concept of time is meaningful for values smaller than the Planck time. It is generally assumed that quantum effects of gravity dominate physical interactions at this time scale. At this scale, the unified force of the Standard Model is assumed to be unified with gravitation. Immeasurably hot and dense, the state of the Planck epoch was succeeded by the grand unification epoch, where gravitation is separated from the unified force of the Standard Model, in turn followed by the inflationary epoch, which ended after about 10^−32 seconds (or about 10^11 tP). Table 3 lists properties of the observable universe today expressed in Planck units. After the measurement of the cosmological constant (Λ) in 1998, estimated at 10^−122 in Planck units, it was noted that this is suggestively close to the reciprocal of the age of the universe (T) squared.
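The coincidence just mentioned is easy to check at the order-of-magnitude level. The short sketch below uses round assumed values (good only to an order of magnitude) to express the age of the universe in Planck times and compare 1/T^2 with the observed Λ ≈ 10^−122 in Planck units.

# Order-of-magnitude check: is 1/T^2 close to Lambda ~ 1e-122 in Planck units?
t_P = 5.39e-44                       # Planck time, s (assumed round value)
age_universe_s = 13.8e9 * 3.156e7    # ~13.8 billion years, in seconds

T = age_universe_s / t_P             # age of the universe in Planck times
print(f"T      ~ {T:.1e} t_P")       # ~8e60
print(f"1/T^2  ~ {1/T**2:.1e}")      # ~1.5e-122, the same order as Lambda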
Barrow and Shaw proposed a modified theory in which Λ is a field evolving in such a way that its value remains Λ ~ T−2 throughout the history of the universe. === Analysis of the units === ==== Planck length ==== The Planck length, denoted ℓP, is a unit of length defined as: ℓ P = ℏ G c 3 {\displaystyle \ell _{\mathrm {P} }={\sqrt {\frac {\hbar G}{c^{3}}}}} It is equal to 1.616255(18)×10−35 m‍ (the two digits enclosed by parentheses are the estimated standard error associated with the reported numerical value) or about 10−20 times the diameter of a proton. It can be motivated in various ways, such as considering a particle whose reduced Compton wavelength is comparable to its Schwarzschild radius, though whether those concepts are in fact simultaneously applicable is open to debate. (The same heuristic argument simultaneously motivates the Planck mass.) The Planck length is a distance scale of interest in speculations about quantum gravity. The Bekenstein–Hawking entropy of a black hole is one-fourth the area of its event horizon in units of Planck length squared.: 370  Since the 1950s, it has been conjectured that quantum fluctuations of the spacetime metric might make the familiar notion of distance inapplicable below the Planck length. This is sometimes expressed by saying that "spacetime becomes a foam at the Planck scale". It is possible that the Planck length is the shortest physically measurable distance, since any attempt to investigate the possible existence of shorter distances, by performing higher-energy collisions, would result in black hole production. Higher-energy collisions, rather than splitting matter into finer pieces, would simply produce bigger black holes. The strings of string theory are modeled to be on the order of the Planck length. In theories with large extra dimensions, the Planck length calculated from the observed value of G {\displaystyle G} can be smaller than the true, fundamental Planck length.: 61  ==== Planck time ==== The Planck time, denoted tP, is defined as: t P = ℓ P c = ℏ G c 5 {\displaystyle t_{\mathrm {P} }={\frac {\ell _{\mathrm {P} }}{c}}={\sqrt {\frac {\hbar G}{c^{5}}}}} This is the time required for light to travel a distance of 1 Planck length in vacuum, which is a time interval of approximately 5.39×10−44 s. No current physical theory can describe timescales shorter than the Planck time, such as the earliest events after the Big Bang. Some conjectures state that the structure of time need not remain smooth on intervals comparable to the Planck time. ==== Planck energy ==== The Planck energy EP is approximately equal to the energy released in the combustion of the fuel in an automobile fuel tank (57.2 L at 34.2 MJ/L of chemical energy). The ultra-high-energy cosmic ray observed in 1991 had a measured energy of about 50 J, equivalent to about 2.5×10−8 EP. Proposals for theories of doubly special relativity posit that, in addition to the speed of light, an energy scale is also invariant for all inertial observers. Typically, this energy scale is chosen to be the Planck energy. ==== Planck unit of force ==== The Planck unit of force may be thought of as the derived unit of force in the Planck system if the Planck units of time, length, and mass are considered to be base units. 
{\displaystyle F_{\text{P}}={\frac {m_{\text{P}}c}{t_{\text{P}}}}={\frac {c^{4}}{G}}\approx \mathrm {1.2103\times 10^{44}~N} } It is the gravitational attractive force of two bodies of 1 Planck mass each that are held 1 Planck length apart. One convention for the Planck charge is to choose it so that the electrostatic repulsion of two objects with Planck charge and mass that are held 1 Planck length apart balances the Newtonian attraction between them. Some authors have argued that the Planck force is on the order of the maximum force that can occur between two bodies. However, the validity of these conjectures has been disputed. ==== Planck temperature ==== The Planck temperature TP is 1.416784(16)×10^32 K. At this temperature, the wavelength of light emitted by thermal radiation reaches the Planck length. There are no known physical models able to describe temperatures greater than TP; a quantum theory of gravity would be required to model the extreme energies attained. Hypothetically, a system in thermal equilibrium at the Planck temperature might contain Planck-scale black holes, constantly being formed from thermal radiation and decaying via Hawking evaporation. Adding energy to such a system might decrease its temperature by creating larger black holes, whose Hawking temperature is lower. == Nondimensionalized equations == Physical quantities that have different dimensions (such as time and length) cannot be equated even if they are numerically equal (e.g., 1 second is not the same as 1 metre). In theoretical physics, however, this scruple may be set aside by a process called nondimensionalization. The effective result is that many fundamental equations of physics, which often include some of the constants used to define Planck units, become equations where these constants are replaced by a 1. Examples include the energy–momentum relation {\displaystyle E^{2}=(mc^{2})^{2}+(pc)^{2}} (which becomes {\displaystyle E^{2}=m^{2}+p^{2}} ) and the Dirac equation {\displaystyle (i\hbar \gamma ^{\mu }\partial _{\mu }-mc)\psi =0} (which becomes {\displaystyle (i\gamma ^{\mu }\partial _{\mu }-m)\psi =0} ). == Alternative choices of normalization == As already stated above, Planck units are derived by "normalizing" the numerical values of certain fundamental constants to 1. These normalizations are neither the only ones possible nor necessarily the best. Moreover, the choice of what factors to normalize, among the factors appearing in the fundamental equations of physics, is not evident, and the values of the Planck units are sensitive to this choice. The factor 4π is ubiquitous in theoretical physics because in three-dimensional space, the surface area of a sphere of radius r is 4πr^2. This, along with the concept of flux, is the basis for the inverse-square law, Gauss's law, and the divergence operator applied to flux density. For example, gravitational and electrostatic fields produced by point objects have spherical symmetry, and so the electric flux through a sphere of radius r around a point charge will be distributed uniformly over that sphere.
From this, it follows that a factor of 4πr2 will appear in the denominator of Coulomb's law in rationalized form.: 214–15  (Both the numerical factor and the power of the dependence on r would change if space were higher-dimensional; the correct expressions can be deduced from the geometry of higher-dimensional spheres.: 51 ) Likewise for Newton's law of universal gravitation: a factor of 4π naturally appears in Poisson's equation when relating the gravitational potential to the distribution of matter.: 56  Hence a substantial body of physical theory developed since Planck's 1899 paper suggests normalizing not G but 4πG (or 8πG) to 1. Doing so would introduce a factor of ⁠1/4π⁠ (or ⁠1/8π⁠) into the nondimensionalized form of the law of universal gravitation, consistent with the modern rationalized formulation of Coulomb's law in terms of the vacuum permittivity. In fact, alternative normalizations frequently preserve the factor of ⁠1/4π⁠ in the nondimensionalized form of Coulomb's law as well, so that the nondimensionalized Maxwell's equations for electromagnetism and gravitoelectromagnetism both take the same form as those for electromagnetism in SI, which do not have any factors of 4π. When this is applied to electromagnetic constants, ε0, this unit system is called "rationalized". When applied additionally to gravitation and Planck units, these are called rationalized Planck units and are seen in high-energy physics. The rationalized Planck units are defined so that c = 4πG = ħ = ε0 = kB = 1. There are several possible alternative normalizations. === Gravitational constant === In 1899, Newton's law of universal gravitation was still seen as exact, rather than as a convenient approximation holding for "small" velocities and masses (the approximate nature of Newton's law was shown following the development of general relativity in 1915). Hence Planck normalized to 1 the gravitational constant G in Newton's law. In theories emerging after 1899, G nearly always appears in formulae multiplied by 4π or a small integer multiple thereof. Hence, a choice to be made when designing a system of natural units is which, if any, instances of 4π appearing in the equations of physics are to be eliminated via the normalization. Normalizing 4πG to 1 (and therefore setting G = ⁠1/4π⁠): Gauss's law for gravity becomes Φg = −M (rather than Φg = −4πM in Planck units). Eliminates 4πG from the Poisson equation. Eliminates 4πG in the gravitoelectromagnetic (GEM) equations, which hold in weak gravitational fields or locally flat spacetime. These equations have the same form as Maxwell's equations (and the Lorentz force equation) of electromagnetism, with mass density replacing charge density, and with ⁠1/4πG⁠ replacing ε0. Normalizes the characteristic impedance Zg of gravitational radiation in free space to 1 (normally expressed as ⁠4πG/c⁠). Eliminates 4πG from the Bekenstein–Hawking formula (for the entropy of a black hole in terms of its mass mBH and the area of its event horizon ABH) which is simplified to SBH = πABH = (mBH)2. Setting 8πG = 1 (and therefore setting G = ⁠1/8π⁠). This would eliminate 8πG from the Einstein field equations, Einstein–Hilbert action, and the Friedmann equations, for gravitation. Planck units modified so that 8πG = 1 are known as reduced Planck units, because the Planck mass is divided by 8 π {\displaystyle {\sqrt {8\pi }}} . Also, the Bekenstein–Hawking formula for the entropy of a black hole simplifies to SBH = (mBH)2/2 = 2πABH. 
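As a small numerical illustration of the normalization choices just described, the sketch below (again using assumed approximate constants, not official values) compares the standard Planck mass, obtained with G = 1, against the "reduced" Planck mass obtained when 8πG is set to 1 instead.

import math

# Assumed approximate SI values (illustrative only)
hbar = 1.054571817e-34   # reduced Planck constant, J s
c    = 2.99792458e8      # speed of light, m/s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
eV   = 1.602176634e-19   # joules per electronvolt

m_P = math.sqrt(hbar * c / G)                # standard Planck mass (G = 1 convention)
m_P_reduced = m_P / math.sqrt(8 * math.pi)   # reduced Planck mass (8*pi*G = 1 convention)

to_GeV = c**2 / eV / 1e9                     # kg -> GeV conversion via E = m c^2
print(f"Planck mass:         {m_P:.3e} kg ~ {m_P * to_GeV:.3e} GeV")          # ~1.22e19 GeV
print(f"reduced Planck mass: {m_P_reduced:.3e} kg ~ {m_P_reduced * to_GeV:.3e} GeV")  # ~2.43e18 GeV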
== See also == cGh physics Dimensional analysis Doubly special relativity Trans-Planckian problem Zero-point energy == Explanatory notes == == References == == External links == Value of the fundamental constants, including the Planck units, as reported by the National Institute of Standards and Technology (NIST). The Planck scale: relativity meets quantum mechanics meets gravity from 'Einstein Light' at UNSW
Wikipedia/Planck_energy
Physics beyond the Standard Model (BSM) refers to the theoretical developments needed to explain the deficiencies of the Standard Model, such as the inability to explain the fundamental dimensionless physical constants of the standard model, the strong CP problem, neutrino oscillations, matter–antimatter asymmetry, and the nature of dark matter and dark energy. Another problem lies within the mathematical framework of the Standard Model itself: the Standard Model is inconsistent with that of general relativity, and one or both theories break down under certain conditions, such as spacetime singularities like the Big Bang and black hole event horizons. Theories that lie beyond the Standard Model include various extensions of the standard model through supersymmetry, such as the Minimal Supersymmetric Standard Model (MSSM) and Next-to-Minimal Supersymmetric Standard Model (NMSSM), and entirely novel explanations, such as string theory, M-theory, and extra dimensions. As these theories tend to reproduce the entirety of current phenomena, the question of which theory is the right one, or at least the "best step" towards a Theory of Everything, can only be settled via experiments, and is one of the most active areas of research in both theoretical and experimental physics. == Problems with the Standard Model == Despite being the most successful theory of particle physics to date, the Standard Model is not perfect. A large share of the published output of theoretical physicists consists of proposals for various forms of "Beyond the Standard Model" new physics proposals that would modify the Standard Model in ways subtle enough to be consistent with existing data, yet address its imperfections materially enough to predict non-Standard Model outcomes of new experiments that can be proposed. === Phenomena not explained === The Standard Model is inherently an incomplete theory. There are fundamental physical phenomena in nature that the Standard Model does not adequately explain: Dimensionless physical constants. The standard model does not explain the masses of the elementary particles (as fractions of the Planck mass), their mixing angles and phases, the coupling constants, the cosmological constant (multiplied with the Planck length), and the number of spatial dimensions. Gravity. The standard model does not explain gravity. The approach of simply adding a graviton to the Standard Model does not recreate what is observed experimentally without other modifications, as yet undiscovered, to the Standard Model. Moreover, the Standard Model is widely considered to be incompatible with the most successful theory of gravity to date, general relativity. Dark matter. Assuming that general relativity and Lambda CDM are true, cosmological observations tell us the standard model explains about 5% of the mass-energy present in the universe. About 26% should be dark matter (the remaining 69% being dark energy) which would behave just like other matter, but which only interacts weakly (if at all) with the Standard Model fields. Yet, the Standard Model does not supply any fundamental particles that are good dark matter candidates. Dark energy. As mentioned, the remaining 69% of the universe's energy should consist of the so-called dark energy, a constant energy density for the vacuum. Attempts to explain dark energy in terms of vacuum energy of the standard model lead to a mismatch of 120 orders of magnitude. Neutrino oscillations. According to the Standard Model, neutrinos do not oscillate. 
However, experiments and astronomical observations have shown that neutrino oscillation does occur. These are typically explained by postulating that neutrinos have mass. Neutrinos do not have mass in the Standard Model, and mass terms for the neutrinos can be added to the Standard Model by hand, but these lead to new theoretical problems. For example, the mass terms need to be extraordinarily small and it is not clear if the neutrino masses would arise in the same way that the masses of other fundamental particles do in the Standard Model. There are also other extensions of the Standard Model for neutrino oscillations which do not assume massive neutrinos, such as Lorentz-violating neutrino oscillations. Matter–antimatter asymmetry. The universe is made out of mostly matter. However, the standard model predicts that matter and antimatter should have been created in (almost) equal amounts if the initial conditions of the universe did not involve disproportionate matter relative to antimatter. Yet, there is no mechanism in the Standard Model to sufficiently explain this asymmetry. ==== Experimental results not explained ==== No experimental result is accepted as definitively contradicting the Standard Model at the 5 σ level, widely considered to be the threshold of a discovery in particle physics. Because every experiment contains some degree of statistical and systemic uncertainty, and the theoretical predictions themselves are also almost never calculated exactly and are subject to uncertainties in measurements of the fundamental constants of the Standard Model (some of which are tiny and others of which are substantial), it is to be expected that some of the hundreds of experimental tests of the Standard Model will deviate from it to some extent, even if there were no new physics to be discovered. At any given moment there are several experimental results standing that significantly differ from a Standard Model-based prediction. In the past, many of these discrepancies have been found to be statistical flukes or experimental errors that vanish as more data has been collected, or when the same experiments were conducted more carefully. On the other hand, any physics beyond the Standard Model would necessarily first appear in experiments as a statistically significant difference between an experiment and the theoretical prediction. The task is to determine which is the case. In each case, physicists seek to determine if a result is merely a statistical fluke or experimental error on the one hand, or a sign of new physics on the other. More statistically significant results cannot be mere statistical flukes but can still result from experimental error or inaccurate estimates of experimental precision. Frequently, experiments are tailored to be more sensitive to experimental results that would distinguish the Standard Model from theoretical alternatives. Some of the most notable examples include the following: B meson decay etc. – results from a BaBar experiment may suggest a surplus over Standard Model predictions of a type of particle decay ( B → D(*) τ− ντ ). In this, an electron and positron collide, resulting in a B meson and an antimatter B meson, which then decays into a D meson and a tau lepton as well as a tau antineutrino. 
While the level of certainty of the excess (3.4 σ in statistical jargon) is not enough to declare a break from the Standard Model, the results are a potential sign of something amiss and are likely to affect existing theories, including those attempting to deduce the properties of Higgs bosons. In 2015, LHCb reported observing a 2.1 σ excess in the same ratio of branching fractions. The Belle experiment also reported an excess. In 2017 a meta analysis of all available data reported a cumulative 5 σ deviation from SM. Neutron lifetime puzzle - Free neutrons are not stable but decay after some time. Currently there are two methods used to measure this lifetime ("bottle" versus "beam") that give different values not within each other's error margin. Currently the lifetime from the bottle method is at τ n = 877.75 s {\displaystyle \tau _{n}=877.75s} with a difference of 10 seconds below the beam method value of τ n = 887.7 s {\displaystyle \tau _{n}=887.7s} . This problem may be solved by taking into account neutron scattering which decreases the lifetime of the involved neutrons. This error occurs in the bottle method and the effect depends on the shape of the bottle – thus this might be a bottle method only systematic error. === Theoretical predictions not observed === Observation at particle colliders of all of the fundamental particles predicted by the Standard Model has been confirmed. The Higgs boson is predicted by the Standard Model's explanation of the Higgs mechanism, which describes how the weak SU(2) gauge symmetry is broken and how fundamental particles obtain mass; it was the last particle predicted by the Standard Model to be observed. On July 4, 2012, CERN scientists using the Large Hadron Collider announced the discovery of a particle consistent with the Higgs boson, with a mass of about 126 GeV/c2. A Higgs boson was confirmed to exist on March 14, 2013, although efforts to confirm that it has all of the properties predicted by the Standard Model are ongoing. A few hadrons (i.e. composite particles made of quarks) whose existence is predicted by the Standard Model, which can be produced only at very high energies in very low frequencies have not yet been definitively observed, and "glueballs" (i.e. composite particles made of gluons) have also not yet been definitively observed. Some very low frequency particle decays predicted by the Standard Model have also not yet been definitively observed because insufficient data is available to make a statistically significant observation. === Unexplained relations === Koide formula – an unexplained empirical equation remarked upon by Yoshio Koide in 1981, and later by others. It relates the masses of the three charged leptons: Q = m e + m μ + m τ ( m e + m μ + m τ ) 2 = 0.666661 ( 7 ) ≈ 2 3 {\displaystyle Q={\frac {m_{e}+m_{\mu }+m_{\tau }}{{\big (}{\sqrt {m_{e}}}+{\sqrt {m_{\mu }}}+{\sqrt {m_{\tau }}}{\big )}^{2}}}=0.666661(7)\approx {\frac {2}{3}}} . The Standard Model does not predict lepton masses (they are free parameters of the theory). However, the value of the Koide formula being equal to 2/3 within experimental errors of the measured lepton masses suggests the existence of a theory which is able to predict lepton masses. 
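A quick numerical check of the Koide relation is straightforward. The sketch below uses approximate PDG charged-lepton masses in MeV (assumed values quoted for illustration) and reproduces the value of about 2/3 stated above; because the ratio is dimensionless, the choice of mass unit cancels out.

import math

# Approximate charged-lepton masses in MeV (assumed, PDG-like values)
m_e   = 0.51099895
m_mu  = 105.6583755
m_tau = 1776.86

Q = (m_e + m_mu + m_tau) / (math.sqrt(m_e) + math.sqrt(m_mu) + math.sqrt(m_tau))**2
print(f"Q = {Q:.6f}  (Koide's relation suggests 2/3 = {2/3:.6f})")
# Q comes out near 0.66666, equal to 2/3 within the experimental error on the tau mass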
The CKM matrix, if interpreted as a rotation matrix in a 3-dimensional vector space, "rotates" a vector composed of square roots of down-type quark masses ( m d , m s , m b ) {\displaystyle ({\sqrt {m_{d}}},{\sqrt {m_{s}}},{\sqrt {m_{b}}}{\big )}} into a vector of square roots of up-type quark masses ( m u , m c , m t ) {\displaystyle ({\sqrt {m_{u}}},{\sqrt {m_{c}}},{\sqrt {m_{t}}}{\big )}} , up to vector lengths, a result due to Kohzo Nishida. The sum of squares of the Yukawa couplings of all Standard Model fermions is approximately 0.984, which is very close to 1. To put it another way, the sum of squares of fermion masses is very close to half of squared Higgs vacuum expectation value. This sum is dominated by the top quark. The sum of squares of boson masses (that is, W, Z, and Higgs bosons) is also very close to half of squared Higgs vacuum expectation value, the ratio is approximately 1.004. Consequently, the sum of squared masses of all Standard Model particles is very close to the squared Higgs vacuum expectation value, the ratio is approximately 0.994. It is unclear if these empirical relationships represent any underlying physics; according to Koide, the rule he discovered "may be an accidental coincidence". === Theoretical problems === Some features of the standard model are added in an ad hoc way. These are not problems per se (i.e. the theory works fine with the ad hoc insertions), but they imply a lack of understanding. These contrived features have motivated theorists to look for more fundamental theories with fewer parameters. Some of the contrivances are: Hierarchy problem – the standard model introduces particle masses through a process known as spontaneous symmetry breaking caused by the Higgs field. Within the standard model, the mass of the Higgs particle gets some very large quantum corrections due to the presence of virtual particles (mostly virtual top quarks). These corrections are much larger than the actual mass of the Higgs. This means that the bare mass parameter of the Higgs in the standard model must be fine tuned in such a way that almost completely cancels the quantum corrections. This level of fine-tuning is deemed unnatural by many theorists. Number of parameters – the standard model depends on 19 parameter numbers. Their values are known from experiment, but the origin of the values is unknown. Some theorists have tried to find relations between different parameters, for example, between the masses of particles in different generations or calculating particle masses, such as in asymptotic safety scenarios. Quantum triviality – suggests that it may not be possible to create a consistent quantum field theory involving elementary scalar Higgs particles. This is sometimes called the Landau pole problem. A possible solution is that the renormalized value could go to zero as the cut-off is removed, meaning that the bare value is completely screened by quantum fluctuations. Strong CP problem – it can be argued theoretically that the standard model should contain a term in the strong interaction that breaks CP symmetry, causing slightly different interaction rates for matter vs. antimatter. Experimentally, however, no such violation has been found, implying that the coefficient of this term – if any – would be suspiciously close to zero. 
== Additional experimental results == Research drawing on experimental data on the cosmological constant, LIGO noise, and pulsar timing suggests it is very unlikely that there are any new particles with masses much higher than those that can be found in the standard model or at the Large Hadron Collider. However, this research has also indicated that quantum gravity or perturbative quantum field theory will become strongly coupled before 1 PeV, leading to other new physics at TeV scales. == Grand unified theories == The standard model has three gauge symmetries: the colour SU(3), the weak isospin SU(2), and the weak hypercharge U(1) symmetry, corresponding to the three fundamental forces. Due to renormalization, the coupling constants of each of these symmetries vary with the energy at which they are measured. Around 10^16 GeV these couplings become approximately equal. This has led to speculation that above this energy the three gauge symmetries of the standard model are unified in a single gauge symmetry with a simple gauge group and just one coupling constant. Below this energy the symmetry is spontaneously broken to the standard model symmetries. Popular choices for the unifying group are the special unitary group in five dimensions SU(5) and the special orthogonal group in ten dimensions SO(10). Theories that unify the standard model symmetries in this way are called Grand Unified Theories (or GUTs), and the energy scale at which the unified symmetry is broken is called the GUT scale. Generically, grand unified theories predict the creation of magnetic monopoles in the early universe, and instability of the proton. Neither of these has been observed, and this absence of observation puts limits on the possible GUTs. == Supersymmetry == Supersymmetry extends the Standard Model by adding another class of symmetries to the Lagrangian. These symmetries exchange fermionic particles with bosonic ones. Such a symmetry predicts the existence of supersymmetric particles, abbreviated as sparticles, which include the sleptons, squarks, neutralinos and charginos. Each particle in the Standard Model would have a superpartner whose spin differs by 1/2 from the ordinary particle. Due to the breaking of supersymmetry, the sparticles are much heavier than their ordinary counterparts; they are so heavy that existing particle colliders may not be powerful enough to produce them. == Neutrinos == In the standard model, neutrinos cannot spontaneously change flavor. Measurements, however, indicate that neutrinos do spontaneously change flavor, in what are called neutrino oscillations. Neutrino oscillations are usually explained using massive neutrinos. In the standard model, neutrinos have exactly zero mass, as the standard model only contains left-handed neutrinos. With no suitable right-handed partner, it is impossible to add a renormalizable mass term to the standard model. These oscillation measurements only give the mass differences between the different flavours. The best constraint on the absolute mass of the neutrinos comes from precision measurements of tritium decay, providing an upper limit of 2 eV, which makes them at least five orders of magnitude lighter than the other particles in the standard model. This necessitates an extension of the standard model, which not only needs to explain how neutrinos get their mass, but also why the mass is so small.
One approach to add masses to the neutrinos, the so-called seesaw mechanism, is to add right-handed neutrinos and have these couple to left-handed neutrinos with a Dirac mass term. The right-handed neutrinos have to be sterile, meaning that they do not participate in any of the standard model interactions. Because they have no charges, the right-handed neutrinos can act as their own anti-particles, and have a Majorana mass term. Like the other Dirac masses in the standard model, the neutrino Dirac mass is expected to be generated through the Higgs mechanism, and is therefore unpredictable. The standard model fermion masses differ by many orders of magnitude; the Dirac neutrino mass has at least the same uncertainty. On the other hand, the Majorana mass for the right-handed neutrinos does not arise from the Higgs mechanism, and is therefore expected to be tied to some energy scale of new physics beyond the standard model, for example the Planck scale. Therefore, any process involving right-handed neutrinos will be suppressed at low energies. The correction due to these suppressed processes effectively gives the left-handed neutrinos a mass that is inversely proportional to the right-handed Majorana mass, a mechanism known as the see-saw. The presence of heavy right-handed neutrinos thereby explains both the small mass of the left-handed neutrinos and the absence of the right-handed neutrinos in observations. However, due to the uncertainty in the Dirac neutrino masses, the right-handed neutrino masses can lie anywhere. For example, they could be as light as keV and be dark matter, they can have a mass in the LHC energy range and lead to observable lepton number violation, or they can be near the GUT scale, linking the right-handed neutrinos to the possibility of a grand unified theory. The mass terms mix neutrinos of different generations. This mixing is parameterized by the PMNS matrix, which is the neutrino analogue of the CKM quark mixing matrix. Unlike the quark mixing, which is almost minimal, the mixing of the neutrinos appears to be almost maximal. This has led to various speculations of symmetries between the various generations that could explain the mixing patterns. The mixing matrix could also contain several complex phases that break CP invariance, although there has been no experimental probe of these. These phases could potentially create a surplus of leptons over anti-leptons in the early universe, a process known as leptogenesis. This asymmetry could then at a later stage be converted in an excess of baryons over anti-baryons, and explain the matter-antimatter asymmetry in the universe. The light neutrinos are disfavored as an explanation for the observation of dark matter, based on considerations of large-scale structure formation in the early universe. Simulations of structure formation show that they are too hot – that is, their kinetic energy is large compared to their mass – while formation of structures similar to the galaxies in our universe requires cold dark matter. The simulations show that neutrinos can at best explain a few percent of the missing mass in dark matter. However, the heavy, sterile, right-handed neutrinos are a possible candidate for a dark matter WIMP. There are however other explanations for neutrino oscillations which do not necessarily require neutrinos to have masses, such as Lorentz-violating neutrino oscillations. 
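The scaling behind the seesaw argument can be shown with a one-line estimate. In the sketch below the Dirac mass is taken, purely as an assumption for illustration, to be of the order of the electroweak scale, and the right-handed Majorana mass is placed near a GUT-like scale; the resulting left-handed neutrino mass comes out far below an eV, as the text describes, and moving the Majorana mass down or up slides the light mass correspondingly up or down.

# Order-of-magnitude seesaw estimate: m_nu ~ m_D**2 / M_R
# All values in eV; both scales below are assumptions chosen only to illustrate the scaling.
m_D = 100e9     # Dirac mass ~ electroweak scale (100 GeV)
M_R = 1e24      # right-handed Majorana mass ~ 1e15 GeV (GUT-like scale)

m_nu = m_D**2 / M_R
print(f"light neutrino mass ~ {m_nu:.2g} eV")   # ~0.01 eV, well below the ~2 eV bound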
== Preon models == Several preon models have been proposed to address the unsolved problem concerning the fact that there are three generations of quarks and leptons. Preon models generally postulate some additional new particles which are further postulated to be able to combine to form the quarks and leptons of the standard model. One of the earliest preon models was the Rishon model. To date, no preon model is widely accepted or fully verified. == Theories of everything == Theoretical physics continues to strive toward a theory of everything, a theory that fully explains and links together all known physical phenomena, and predicts the outcome of any experiment that could be carried out in principle. In practical terms the immediate goal in this regard is to develop a theory which would unify the Standard Model with General Relativity in a theory of quantum gravity. Additional features, such as overcoming conceptual flaws in either theory or accurate prediction of particle masses, would be desired. The challenges in putting together such a theory are not just conceptual - they include the experimental aspects of the very high energies needed to probe exotic realms. Several notable attempts in this direction are supersymmetry, loop quantum gravity, and String theory. === Supersymmetry === === Loop quantum gravity === Theories of quantum gravity such as loop quantum gravity and others are thought by some to be promising candidates to the mathematical unification of quantum field theory and general relativity, requiring less drastic changes to existing theories. However recent work places stringent limits on the putative effects of quantum gravity on the speed of light, and disfavours some current models of quantum gravity. === String theory === Extensions, revisions, replacements, and reorganizations of the Standard Model exist in attempt to correct for these and other issues. String theory is one such reinvention, and many theoretical physicists think that such theories are the next theoretical step toward a true Theory of Everything. Among the numerous variants of string theory, M-theory, whose mathematical existence was first proposed at a String Conference in 1995 by Edward Witten, is believed by many to be a proper "ToE" candidate, notably by physicists Brian Greene and Stephen Hawking. Though a full mathematical description is not yet known, solutions to the theory exist for specific cases. Recent works have also proposed alternate string models, some of which lack the various harder-to-test features of M-theory (e.g. the existence of Calabi–Yau manifolds, many extra dimensions, etc.) including works by well-published physicists such as Lisa Randall. == See also == == Footnotes == == References == == Further reading == Lisa Randall (2005). Warped Passages: Unraveling the Mysteries of the Universe's Hidden Dimensions. HarperCollins. ISBN 978-0-06-053108-9. == External resources == Standard Model Theory @ SLAC Scientific American Apr 2006 LHC. Nature July 2007 Les Houches Conference, Summer 2005
Wikipedia/Beyond_the_standard_model
ScienceAlert is an independently run online publication and news source that publishes articles featuring scientific research, discoveries, and outcomes. The site was founded in 2004 by Julian Cribb, a science writer, to aggregate research findings from Australian universities, and it expanded in 2006 when ex-Microsoft programmer Chris Cassella took on the project of developing the website. It has readership that ranges from 11.5m to 26.5m per month. Science journalist Fiona MacDonald has been CEO since 2017. == History == Science communicator Julian Cribb founded ScienceAlert in 2004. The website was born out of his "concern at the lack of information available about what Australians and New Zealanders achieve in science". Chris Cassella, a former programmer for Microsoft, joined the site in order to develop new web tools. He took on this work as part of a master's degree thesis in science communication at Australia National University, where Cribb was a professor. Initially, the focus of ScienceAlert was twofold: "to both publicise Australasian scientific outcomes more widely and to encourage Australasian research institutions and funding agencies to share more of their achievements by providing a free outlet for them to do so". Cassella is credited with bringing the site to social media, starting the ScienceAlert Facebook page in 2007. By 2011, the page had attracted a significant following among young people, reaching one million followers by 2012. By 2020, the page had slightly more than nine million followers. In 2012, ScienceAlert received a grant from Inspiring Australia, a government initiative aimed at engaging "people who may not have had previous access to or interest in science-communication activities". Although the website began as a project to aggregate research findings and outcomes from Australian universities, by 2019 the focus of the site had shifted toward presenting popular science to a wider audience. The shift toward mass appeal news on social media has met with some criticism. (See Controversy and criticism section, below) In July 2019, reinforcing the site's commitment to fact-checking, ScienceAlert announced a joint partnership with Metafact. ScienceAlert republishes selected expert answers from the Metafact community across the site's multiple digital channels. ScienceAlert is owned by ScienceAlert Pty Ltd., a privately held company owned by Chris Cassella. According to its site, ScienceAlert does not run sponsored articles nor is it affiliated with other companies or institutions. As of 2020, ScienceAlert engages more than 11 million readers per month. == Editorial staff == In addition to Cassella and MacDonald, ScienceAlert's editorial staff is headed by Peter Dockrill, who now manages more than half a dozen contributing science journalists to produce the site's news. Cribb concluded his role as editor at ScienceAlert in 2015. In August 2017, Fiona MacDonald was named CEO of ScienceAlert, with Cassella acting as COO/CFO. Prior to this role, MacDonald had worked with the news site for more than a decade as an editor and then the director of content. According to The Brilliant, the editorial team has doubled since 2017. == Format == As of August 2023, ScienceAlert had the following sections: Space, Environment, Tech, Physics, Opinion, Health, Humans, Nature and Society. Readers could read the trending news or the latest news from the homepage. 
== Controversy and criticism == In May 2019, ScienceAlert joined the debate surrounding publications, such as The Guardian, shifting their style guide to prioritize terms such as "climate crisis or breakdown" over "climate change". ScienceAlert then shared updated definitions for the site's climate science-related terminology. Later, ScienceAlert noted that this decision led to an increase in negative comments on their Facebook page. The page comprises a small portion of the readers of the publication. The editors said that when they post articles about climate news, "with astonishing speed and ferocity the comment section becomes a hot-pot of climate denialism". The editors developed a policy of dealing with the social media issue by asking that, rather than adding fuel to the onslaught, readers of the page cooperate in a reporting scheme that could enable quick blocking of the disruptive sources and the alternative accounts the "climate trolls" create to appear numerous as well as to evade the blocks. Accusations of "censorship" followed, but the editors stood by the policy and noted its relative effectiveness. The broadening of the scope of topics covered (noted above) has drawn criticism from those opposed to the change to an international science news perspective. Those objecting prefer the original exposure for scientific research and developments solely in Australia that had determined the content of ScienceAlert when founded. The site also has come under criticism for issues related to sensationalism, hyperbole, misleading or naive headlines, and even sexism to attract readers. In a social media post from 2014, STEM blogger Zuleyka Zevallos criticized superficial explanations about a powdered coffee product that vaguely referred to "researchers" without evidence. She also pointed out sexist imagery of unclothed women used by the webzine to attract attention. == References ==
Wikipedia/ScienceAlert
Science fiction (often shortened to sci-fi or abbreviated SF) is a genre of speculative fiction that deals with imaginative and futuristic concepts. These concepts may include information technology and robotics, biological manipulations, space exploration, time travel, parallel universes, and extraterrestrial life. The genre often explores human responses to the consequences of projected or imagined scientific advances. Science fiction is related to fantasy (together abbreviated SF&F), horror, and superhero fiction, and it contains many subgenres. The genre's precise definition has long been disputed among authors, critics, scholars, and readers. Major subgenres include hard science fiction, which emphasizes scientific accuracy, and soft science fiction, which focuses on social sciences. Other notable subgenres are cyberpunk, which explores the interface between technology and society, and climate fiction, which addresses environmental issues. Precedents for science fiction are claimed to exist as far back as antiquity, but the modern genre arose primarily in the 19th and early 20th centuries, when popular writers began looking to technological progress for inspiration and speculation. Mary Shelley's Frankenstein, written in 1818, is often credited as the first true science fiction novel. Jules Verne and H. G. Wells are pivotal figures in the genre's development. In the 20th century, the genre grew during the Golden Age of Science Fiction; it expanded with the introduction of space operas, dystopian literature, and pulp magazines. Science fiction has come to influence not only literature, but also film, television, and culture at large. Science fiction can criticize present-day society and explore alternatives, as well as provide entertainment and inspire a sense of wonder. == Definitions == According to American writer and professor of biochemistry Isaac Asimov, "Science fiction can be defined as that branch of literature which deals with the reaction of human beings to changes in science and technology." Science fiction writer Robert A. Heinlein stated that "A handy short definition of almost all science fiction might read: realistic speculation about possible future events, based solidly on adequate knowledge of the real world, past and present, and on a thorough understanding of the nature and significance of the scientific method." American science fiction author and editor Lester del Rey wrote, "Even the devoted aficionado or fan—has a hard time trying to explain what science fiction is," and no "full satisfactory definition" exists because "there are no easily delineated limits to science fiction." Another definition is provided in The Literature Book by the publisher DK: "scenarios that are at the time of writing technologically impossible, extrapolating from present-day science...[,]...or that deal with some form of speculative science-based conceit, such as a society (on Earth or another planet) that has developed in wholly different ways from our own." There is a tendency among science fiction enthusiasts to be their own arbiters in deciding what constitutes science fiction. David Seed says that it may be more useful to talk about science fiction as the intersection of other more concrete subgenres. American science fiction author, editor, and critic Damon Knight summed up the difficulty, saying "Science fiction is what we point to when we say it." 
=== Alternative terms === American magazine editor, science fiction writer, and literary agent Forrest J Ackerman has been credited with first using the term sci-fi (reminiscent of the then-trendy term hi-fi) in about 1954. The first known use in print was a description of Donovan's Brain by movie critic Jesse Zunser in January 1954. As science fiction entered popular culture, writers and fans in the field came to associate the term with low-quality pulp science fiction and with low-budget, low-tech B movies. By the 1970s, critics in the field, such as Damon Knight and Terry Carr, were using sci fi to distinguish hack-work from serious science fiction. Australian literary scholar and critic Peter Nicholls writes that SF (or sf) is "the preferred abbreviation within the community of sf writers and readers." Robert Heinlein found the term science fiction insufficient to describe certain types of works in this genre, and he suggested that the term speculative fiction be used instead for works that are more "serious" or "thoughtful". == History == Some scholars assert that science fiction had its beginnings in ancient times, when the distinction between myth and fact was blurred. Written in the 2nd century CE by the satirist Lucian, the novel A True Story contains many themes and tropes that are characteristic of modern science fiction, including travel to other worlds, extraterrestrial lifeforms, interplanetary warfare, and artificial life. Some consider it to be the first science fiction novel. Some stories from the folktale collection The Arabian Nights, along with the 10th-century fiction The Tale of the Bamboo Cutter and Ibn al-Nafis's 13th-century novel Theologus Autodidactus, are also argued to contain elements of science fiction. Several books written during the Scientific Revolution and later the Age of Enlightenment are considered true works of science-fantasy. Francis Bacon's New Atlantis (1627), Johannes Kepler's Somnium (1634), Athanasius Kircher's Itinerarium extaticum (1656), Cyrano de Bergerac's Comical History of the States and Empires of the Moon (1657) and The States and Empires of the Sun (1662), Margaret Cavendish's "The Blazing World" (1666), Jonathan Swift's Gulliver's Travels (1726), Ludvig Holberg's Nicolai Klimii Iter Subterraneum (1741) and Voltaire's Micromégas (1752). Isaac Asimov and Carl Sagan considered Johannes Kepler's novel Somnium to be the first science fiction story; it depicts a journey to the Moon and how the Earth's motion is seen from there. Kepler has been called the "father of science fiction". Following the 17th-century development of the novel as a literary form, Mary Shelley's Frankenstein (1818) and The Last Man (1826) helped to define the form of the science fiction novel. Brian Aldiss has argued that Frankenstein was the first work of science fiction. Edgar Allan Poe wrote several stories considered to be science fiction, including "The Unparalleled Adventure of One Hans Pfaall" (1835) about a trip to the Moon. Jules Verne was noted for his attention to detail and scientific accuracy, especially in the novel Twenty Thousand Leagues Under the Seas (1870). In 1887, the novel El anacronópete by Spanish author Enrique Gaspar y Rimbau introduced the first time machine. An early French/Belgian science fiction writer was J.-H. Rosny aîné (1856–1940). Rosny's masterpiece is Les Navigateurs de l'Infini (The Navigators of Infinity) (1925) in which the word astronaut (astronautique in French) was used for the first time. Many critics consider H. G. 
Wells to be one of science fiction's most important authors, or even "the Shakespeare of science fiction". His novels include The Time Machine (1895), The Island of Doctor Moreau (1896), The Invisible Man (1897), and The War of the Worlds (1898). His science fiction imagined alien invasion, biological engineering, invisibility, and time travel. In his non-fiction futurologist works, he predicted the advent of airplanes, military tanks, nuclear weapons, satellite television, space travel, and something like the World Wide Web. Edgar Rice Burroughs's novel A Princess of Mars, published in 1912, was the first of his thirty-year planetary romance series about the fictional Barsoom; the novels were set on Mars and featured John Carter as the hero. These novels were predecessors to young-adult fiction, and they drew inspiration from European science fiction and American Western fiction. One of the first dystopian novels, We, was written by the Russian author Yevgeny Zamyatin and published in 1924. It describes a world of harmony and conformity within a united totalitarian state. The novel influenced the emergence of dystopia as a literary genre. In 1926, Hugo Gernsback published the first American science fiction magazine, Amazing Stories. In its first issue, he provided the following definition: By 'scientifiction' I mean the Jules Verne, H. G. Wells and Edgar Allan Poe type of story—a charming romance intermingled with scientific fact and prophetic vision... Not only do these amazing tales make tremendously interesting reading—they are always instructive. They supply knowledge... in a very palatable form... New adventures pictured for us in the scientifiction of today are not at all impossible of realization tomorrow... Many great science stories destined to be of historical interest are still to be written... Posterity will point to them as having blazed a new trail, not only in literature and fiction, but progress as well. In 1928, E. E. "Doc" Smith's first published novel, The Skylark of Space (co-authored with Lee Hawkins Garby), appeared in Amazing Stories. It is often described as the first great space opera. That same year, Philip Francis Nowlan's original story about Buck Rogers, Armageddon 2419, also appeared in Amazing Stories. This story was followed by a Buck Rogers comic strip, the first serious science fiction comic. Last and First Men: A Story of the Near and Far Future is a future history novel written in 1930 by the British author Olaf Stapledon. A work of innovative scale in the science fiction genre, it describes the fictional history of humanity from the present forward across two billion years. In 1937, John W. Campbell became the editor of Astounding Science Fiction magazine; this event is sometimes considered the beginning of the Golden Age of Science Fiction, which was characterized by stories celebrating scientific achievement and progress. The "Golden Age" is often said to have ended in 1946, but sometimes the late 1940s and the 1950s are included in this period. In 1942, Isaac Asimov began the Foundation series of novels, which chronicles the rise and fall of galactic empires, and also introduces the concept of psychohistory. The series was later awarded a one-time Hugo Award for "Best All-Time Series". Theodore Sturgeon's novel More Than Human (1953) explored possible future human evolution. 
In 1957, the novel Andromeda: A Space-Age Tale by the Russian writer and paleontologist Ivan Yefremov presented a view of a future interstellar communist civilization; it is considered one of the most important Soviet science fiction novels. In 1959, Robert A. Heinlein's novel Starship Troopers marked a departure from his earlier juvenile stories and novels. It is one of the first and most influential examples of military science fiction, and it introduced the concept of powered armor exoskeletons. The German space opera series Perry Rhodan, written by various authors, started in 1961 with an account of the first Moon landing; the series has since expanded in space to multiple universes and in time by billions of years. It has become the most popular book series in science fiction to date. During the 1960s and 1970s, New Wave science fiction was known for embracing a high degree of experimentation (in both form and content), as well as a highbrow and self-consciously "literary" or "artistic" sensibility. In 1961, Stanisław Lem's novel Solaris was published in Poland. The novel dealt with the theme of human limitations, as its characters attempted to study a seemingly intelligent ocean on a newly discovered planet. Lem's work anticipated the creation of microrobots and micromachinery, nanotechnology, smartdust, virtual reality, and artificial intelligence (including swarm intelligence); his work also developed the ideas of necroevolution and artificial worlds. In 1965, the novel Dune by Frank Herbert imagined a more complex and detailed future society than had most previous science fiction. In 1967, Anne McCaffrey began a science fantasy series called Dragonriders of Pern. Two novellas included in the series' first novel, Dragonflight, led McCaffrey to win the first Hugo or Nebula award given to a female author. In 1968, Philip K. Dick's novel Do Androids Dream of Electric Sheep? was published. It is the literary source of the Blade Runner movie franchise. Published in 1969, the novel The Left Hand of Darkness by Ursula K. Le Guin is set on a planet where the inhabitants have no fixed gender. The novel is one of the most influential examples of social, feminist, or anthropological science fiction. In 1979, Science Fiction World magazine began publication in the People's Republic of China. It dominates the Chinese science fiction magazine market, at one time claiming a circulation of 300,000 copies per issue and an estimated 3–5 readers per copy, giving it a total readership of at least 1 million people—making it the world's most popular science fiction periodical. In 1984, William Gibson's first novel, Neuromancer, helped to popularize cyberpunk and the word cyberspace, a term he originally coined in the 1982 short story Burning Chrome. In the same year, Octavia Butler's short story "Speech Sounds" won the Hugo Award for Best Short Story. She went on to explore themes of racial injustice, global warming, women's rights, and political conflict. In 1995, she became the first science fiction author to receive a MacArthur Fellowship. In 1986, the novel Shards of Honor by Lois McMaster Bujold began her Vorkosigan Saga. 1992's novel Snow Crash by Neal Stephenson predicted immense social upheaval due to the information revolution. In 2007, Liu Cixin's novel The Three-Body Problem was published in China. It was translated into English by Ken Liu and published by Tor Books in 2014; it won the Hugo Award for Best Novel in 2015, making Liu the first Asian writer to win the award. 
Emerging themes in late 20th- and early 21st-century science fiction include the following: environmental issues; the implications of the Internet and the expanding information universe; questions about biotechnology and nanotechnology; and post-scarcity societies. Recent trends and subgenres include steampunk, biopunk, and mundane science fiction. === Film === One of the first recorded science fiction films is A Trip to the Moon from 1902, directed by French filmmaker Georges Méliès. It influenced later filmmakers, offering a different kind of creativity and fantasy. Méliès's innovative editing and special effects techniques were widely imitated, and they became important elements of the cinematic medium. The 1927 film Metropolis, directed by Fritz Lang, is the first feature-length science fiction film. Though not well received in its time, it is now ranked as one of the best films ever made. In 1954, Godzilla, directed by Ishirō Honda, started the kaiju subgenre of science fiction film; this subgenre features large creatures in any form, usually attacking a major city or engaging other monsters in battle. The 1968 film 2001: A Space Odyssey was directed by Stanley Kubrick and based on a novel by Arthur C. Clarke. The film improved on the largely B-movie offerings to date in both scope and quality, and it influenced later science fiction films. The original Planet of the Apes movie, directed by Franklin J. Schaffner and based on the 1963 French novel La Planète des Singes by Pierre Boulle, was also released in 1968. The film vividly depicts a post-apocalyptic world in which intelligent apes dominate humans. The film received both popular and critical acclaim. In 1977, George Lucas began the Star Wars series with the film later called "Star Wars: Episode IV – A New Hope." The series, often called a space opera, became a worldwide popular culture phenomenon and the third-highest-grossing film series of all time. Since the 1980s, science fiction films, along with fantasy, horror, and superhero films, have dominated Hollywood's big-budget productions. Science fiction films often cross over with other genres. Some examples include film noir (Blade Runner, 1982), family (E.T. the Extra-Terrestrial, 1982), war (Enemy Mine, 1985), comedy (Spaceballs, 1987; Galaxy Quest, 1999), animation (WALL-E, 2008; Big Hero 6, 2014), Western (Serenity, 2005), action (Edge of Tomorrow, 2014; The Matrix, 1999), adventure (Jupiter Ascending, 2015; Interstellar, 2014), mystery (Minority Report, 2002), thriller (Ex Machina, 2014), drama (Melancholia, 2011; Predestination, 2014), and romance (Eternal Sunshine of the Spotless Mind, 2004; Her, 2013). === Television === Science fiction and television have consistently had a close relationship. Television or similar technology often appeared in science fiction long before television itself became widely available in the late 1940s and early 1950s. The first known science fiction television program was a 35-minute adapted excerpt of the play RUR, written by the Czech playwright Karel Čapek, broadcast live from the BBC's Alexandra Palace studios on 11 February 1938. The first popular science fiction program on American television was the children's adventure serial Captain Video and His Video Rangers, which ran from June 1949 to April 1955. The original The Twilight Zone series, produced and narrated by Rod Serling, ran from 1959 to 1964. (Serling also wrote or co-wrote most of the episodes.) 
The series featured fantasy, suspense, and horror as well as science fiction, with each episode being a complete story. Critics have ranked it as one of the best TV programs of any genre. The animated series The Jetsons, while intended as comedy and only running for one season (1962–1963), predicted many inventions now in common use: flat-screen televisions, newspapers on a computer-like screen, computer viruses, video chat, tanning beds, home treadmills, and more. In 1963, the series Doctor Who premiered on BBC Television with a time-travel theme. The original series ran until 1989 and was revived in 2005. It has been popular globally and has significantly influenced later science fiction TV. Other notable programs during the 1960s included The Outer Limits (1963–1965), Lost in Space (1965–1968), and The Prisoner (1967). The original Star Trek series, created by Gene Roddenberry, premiered in 1966 on NBC Television and ran for three seasons. It combined elements of space opera and Space Western. Only mildly successful at first, the series gained popularity through syndication and strong fan interest. It became a popular and influential franchise with many films, television shows, novels, and other works and products. The series Star Trek: The Next Generation (1987–1994) led to six additional live action Star Trek shows: Deep Space Nine (1993–1999), Voyager (1995–2001), Enterprise (2001–2005), Discovery (2017–2024), Picard (2020–2023), and Strange New Worlds (2022–present); additional shows are in some stage of development. The miniseries V premiered in 1983 on NBC. It depicted an attempted conquest of Earth by reptilian aliens. Red Dwarf, a comic science fiction series, aired on BBC Two between 1988 and 1999, and on Dave since 2009. The X-Files, which featured UFOs and conspiracy theories, was created by Chris Carter and broadcast by Fox Broadcasting Company from 1993 to 2002, and again from 2016 to 2018. Stargate, a film about ancient astronauts and interstellar teleportation, was released in 1994. The series Stargate SG-1 premiered in 1997 and ran for 10 seasons (1997–2007). Spin-off series included Stargate Infinity (2002–2003), Stargate Atlantis (2004–2009), and Stargate Universe (2009–2011). Other 1990s series included Quantum Leap (1989–1993) and Babylon 5 (1994–1999). The Syfy channel, launched in 1992 as The Sci-Fi Channel, specializes in science fiction, supernatural horror, and fantasy. The space-Western series Firefly premiered in 2002 on Fox. It is set in the year 2517, after humans arrive in a new star system, and it follows the adventures of the renegade crew of Serenity, a "Firefly-class" spaceship. The series Orphan Black began a five-season run in 2013, focusing on a woman who takes on the identity of one of her genetically identical clones. In late 2015, Syfy premiered the series The Expanse to great critical acclaim—an American show about humanity's colonization of the Solar System. Its later seasons were aired through Amazon Prime Video. == Social influence == Science fiction's rapid increase in popularity during the first half of the 20th century was closely tied to public respect for science during that era, as well as the rapid pace of technological innovation and new inventions. Science fiction has often predicted scientific and technological progress. Some works imagine that this progress will tend to improve human life and society, for instance, the stories of Arthur C. Clarke and Star Trek. Other works, such as H.G. 
Wells's The Time Machine and Aldous Huxley's Brave New World, warn of possible negative consequences. In 2001, the National Science Foundation conducted a survey of "Public Attitudes and Public Understanding: Science Fiction and Pseudoscience". The survey found that people who read or prefer science fiction may think about or relate to science differently than other people. Such people also tend to support the space program and efforts to contact extraterrestrial civilizations. Carl Sagan wrote that "Many scientists deeply involved in the exploration of the solar system (myself among them) were first turned in that direction by science fiction." Science fiction has predicted several existing inventions, such as the atomic bomb, robots, and borazon. In the 2020 TV series Away, astronauts use a Mars rover called InSight to listen intently for a landing on Mars. In 2022, scientists actually used InSight to listen for the landing of a spacecraft. Science fiction can act as a vehicle for analyzing and recognizing a society's past, present, and potential future social relationships with the other. Science fiction offers a medium for and a representation of alterity and differences in social identity. Brian Aldiss described science fiction as "cultural wallpaper". This broad influence can be seen in the trend for writers to use science fiction as a tool for advocacy and generating cultural insights, as well as for educators who teach across a range of academic disciplines beyond the natural sciences. Scholar and science fiction critic George Edgar Slusser said that science fiction "is the one real international literary form we have today, and as such has branched out to visual media, interactive media and on to whatever new media the world will invent in the 21st century. Crossover issues between the sciences and the humanities are crucial for the century to come." === As protest literature === Science fiction has sometimes been used as a means of social protest. George Orwell's novel Nineteen Eighty-Four (1949) is an important work of dystopian science fiction. The novel is often invoked in protests against governments and leaders who are seen as totalitarian. James Cameron's film Avatar (2009) was intended as a protest against imperialism, specifically the European colonization of the Americas. Science fiction in Latin America and Spain explores the concept of authoritarianism. Robots, artificial humans, human clones, intelligent computers, and their possible conflicts with human society have all been major themes of science fiction since the publication of Shelley's novel Frankenstein (or earlier). Some critics have seen this tendency as reflecting authors' concerns over the social alienation seen in modern society. Feminist science fiction poses questions about social issues such as how society constructs gender roles, the role reproduction plays in defining gender, and the inequitable political or personal power of one gender over others. Some works have illustrated these themes using utopias in which gender differences or gender power imbalances do not exist, or dystopias in which gender inequalities are intensified, thus asserting a need for feminist work to continue. Climate fiction (or cli-fi) deals with issues of climate change and global warming. University courses on literature and environmental issues may include climate change fiction in their syllabi, and these issues are often discussed by other media beyond science fiction fandom. 
Libertarian science fiction focuses on the politics and social order implied by right libertarian philosophies with an emphasis on individualism and private property, and in some cases anti-statism. Robert A. Heinlein is one of the most popular authors of this subgenre, including his novels The Moon is a Harsh Mistress and Stranger in a Strange Land. Science fiction comedy often satirizes and criticizes present-day society, and it sometimes makes fun of the conventions and clichés of more serious science fiction. === Sense of wonder === Science fiction is often said to inspire a sense of wonder. Science fiction editor, publisher, and critic David Hartwell wrote that "Science fiction's appeal lies in combination of the rational, the believable, with the miraculous. It is an appeal to the sense of wonder." Carl Sagan wrote about growing up with science fiction: One of the great benefits of science fiction is that it can convey bits and pieces, hints, and phrases, of knowledge unknown or inaccessible to the reader . . . works you ponder over as the water is running out of the bathtub or as you walk through the woods in an early winter snowfall. In 1967, Isaac Asimov commented on changes occurring in the science fiction community: And because today's real life so resembles day-before-yesterday's fantasy, the old-time fans are restless. Deep within, whether they admit it or not, is a feeling of disappointment and even outrage that the outer world has invaded their private domain. They feel the loss of a 'sense of wonder' because what was once truly confined to 'wonder' has now become prosaic and mundane. == Study == The field of science fiction studies involves the critical assessment, interpretation, and discussion of science fiction literature, film, TV shows, new media, fandom, and fan fiction. Science fiction scholars study the genre to better understand it and its relationship to science, technology, politics, other genres, and culture at large. Science fiction studies began around the turn of the 20th century, but it was not until later that science fiction studies solidified as a discipline with the publication of the academic journals Extrapolation (1959), Foundation: The International Review of Science Fiction (1972), and Science Fiction Studies (1973), and the establishment of the oldest organizations devoted to the study of science fiction in 1970, the Science Fiction Research Association and the Science Fiction Foundation. The field has grown considerably since the 1970s with the establishment of more journals, organizations, and conferences, as well as science fiction degree-granting programs such as those offered by the University of Liverpool. === Classification === Science fiction has historically been subdivided into hard and soft categories, with the division centering on the feasibility of the science. However, this distinction has come under increased scrutiny in the 21st century. Some authors, such as Tade Thompson and Jeff VanderMeer, have observed that stories focusing explicitly on physics, astronomy, mathematics, and engineering tend to be considered hard science fiction, while stories focusing on botany, mycology, zoology, and the social sciences tend to be considered soft science fiction (regardless of the relative rigor of the science). Max Gladstone defined hard science fiction as stories "where the math works", but he pointed out that this definition identifies stories that often seem "weirdly dated", as scientific paradigms shift over time. 
Michael Swanwick dismissed the traditional definition of hard science fiction altogether, instead stating that it was defined by characters striving to solve problems "in the right way–with determination, a touch of stoicism, and the consciousness that the universe is not on his or her side." Ursula K. Le Guin also criticized the traditional contrast between hard and soft science fiction: "The 'hard' science fiction writers dismiss everything except, well, physics, astronomy, and maybe chemistry. Biology, sociology, anthropology—that's not science to them, that's soft stuff. They're not that interested in what human beings do, really. But I am. I draw on the social sciences a great deal." === Literary merit === Many critics remain skeptical of the literary value of science fiction and other forms of genre fiction, though some mainstream authors have written works claimed by opponents to be science fiction. Mary Shelley wrote a number of scientific romance novels in the Gothic literature tradition, including Frankenstein; or, The Modern Prometheus (1818). Kurt Vonnegut was a respected American author whose works have been argued by some to contain science fiction premises or themes. Other science fiction authors whose works are widely considered to be "serious" literature include Ray Bradbury (especially Fahrenheit 451 and The Martian Chronicles), Arthur C. Clarke (especially Childhood's End), and Paul Myron Anthony Linebarger (using the pseudonym Cordwainer Smith). Doris Lessing, who was later awarded the Nobel Prize in Literature, wrote a series of five science fiction novels, Canopus in Argos: Archives (1979–1983); these novels depict the efforts of more advanced species and civilizations to influence less advanced ones, including humans on Earth. David Barnett has indicated that some novels use recognizable science fiction tropes, but they are not classified by their authors and publishers as science fiction; such novels include The Road (2006) by Cormac McCarthy, Cloud Atlas (2004) by David Mitchell, The Gone-Away World (2008) by Nick Harkaway, The Stone Gods (2007) by Jeanette Winterson, and Oryx and Crake (2003) by Margaret Atwood. Atwood in particular argued against categorizing works such as the Handmaid's Tale as science fiction; instead she labeled this novel, Oryx and Crake, and The Testaments as speculative fiction, and she criticized science fiction as "talking squids in outer space." In his book The Western Canon, literary critic Harold Bloom includes the novels Brave New World, Stanisław Lem's Solaris, Kurt Vonnegut's Cat's Cradle, and The Left Hand of Darkness as culturally and aesthetically significant works of Western literature, though Lem actively spurned the label science fiction. In her 1976 essay "Science Fiction and Mrs Brown", Ursula K. Le Guin was asked, "Can a science fiction writer write a novel?" She answered that "I believe that all novels ... deal with character... The great novelists have brought us to see whatever they wish us to see through some character. Otherwise, they would not be novelists, but poets, historians, or pamphleteers." Orson Scott Card is best known for his 1985 science fiction novel Ender's Game; he has postulated that in science fiction, the message and intellectual significance of the work are contained within the story itself—therefore the genre can omit accepted literary devices and techniques that he characterized as gimmicks or literary games. 
In 1998, Jonathan Lethem wrote an essay titled "Close Encounters: The Squandered Promise of Science Fiction" in the Village Voice. In this essay, he recalled the time in 1973 when Thomas Pynchon's novel Gravity's Rainbow was nominated for the Nebula Award and was passed over in favor of Arthur C. Clarke's novel Rendezvous with Rama; Lethem suggests that this point stands as "a hidden tombstone marking the death of the hope that SF was about to merge with the mainstream." In the same year, science fiction author and physicist Gregory Benford wrote that "SF is perhaps the defining genre of the twentieth century, although its conquering armies are still camped outside the Rome of the literary citadels." == Community == === Authors === Science fiction has been written by authors from diverse cultural and geographical backgrounds. Among submissions to the science fiction publisher Tor Books, men account for 78% and women account for 22% (according to 2013 statistics from the publisher). A controversy about voting slates for the 2015 Hugo Awards highlighted a tension in the science fiction community between two things: a trend toward increasingly diverse works and authors being honored by awards, and a reaction by groups of authors and fans who preferred more "traditional" science fiction. === Awards === Among the most significant and well-known awards for science fiction are the Hugo Award for literature, presented by the World Science Fiction Society at Worldcon, and voted on by fans; the Nebula Award for literature, presented by the Science Fiction and Fantasy Writers of America, and voted on by the community of authors; the John W. Campbell Memorial Award for Best Science Fiction Novel, presented by a jury of writers; and the Theodore Sturgeon Memorial Award for short fiction, presented by a jury. One notable award for science fiction films and TV programs is the Saturn Award, which is presented annually by The Academy of Science Fiction, Fantasy, and Horror Films. There are other national awards, like Canada's Prix Aurora Awards, regional awards, like the Endeavour Award presented at Orycon for works from the U.S. Pacific Northwest, and special interest or subgenre awards such as the Chesley Award for art, presented by the Association of Science Fiction & Fantasy Artists, or the World Fantasy Award for fantasy. Magazines may organize reader polls, notably the Locus Award. === Conventions === Conventions (often abbreviated by fans as cons, such as Comic-con) are held in cities around the world; these cater to a local, regional, national, or international membership. General-interest conventions cover all aspects of science fiction, while others focus on a particular interest such as media fandom or filk music. Most science fiction conventions are organized by volunteers in non-profit groups, though most media-oriented events are organized by commercial promoters. === Fandom and fanzines === Science fiction fandom emerged from the letters column in Amazing Stories magazine. Fans began writing letters to each other, and then assembling their comments in informal publications that became known as fanzines. Once in regular communication, these fans wanted to meet in person, so they organized local clubs. During the 1930s, the first science fiction conventions gathered fans from a larger area. The earliest organized online fandom was the SF Lovers Community, originally a mailing list in the late 1970s, with a text archive file that was updated regularly. 
In the 1980s, Usenet groups greatly expanded the circle of fans online. In the 1990s, the development of the World-Wide Web increased online fandom through websites devoted to science fiction and related genres in all media. The first science fiction fanzine, The Comet, was published in 1930 by the Science Correspondence Club in Chicago, Illinois. As of 2025, one of the best known fanzines is Ansible, edited by David Langford, winner of numerous Hugo awards. Other notable fanzines to win one or more Hugo awards include File 770, Mimosa, and Plokta. Artists working for fanzines have often risen to prominence in the field, including Brad W. Foster, Teddy Harvia, and Joe Mayhew; the Hugo Awards include a category for Best Fan Artists. == Elements == Science fiction elements can include the following: Temporal settings in the future or in alternative histories; Predicted or speculative technology such as brain-computer interface, bio-engineering, superintelligent computers, robots, ray guns, and other advanced weapons; Space travel, or settings in outer space, on other worlds, in subterranean earth, or in parallel universes; Fictional concepts in biology such as aliens, mutants, and enhanced humans; Undiscovered scientific possibilities such as teleportation, time travel, and faster-than-light travel or communication; Social/political systems and situations that are new and different, including utopian, dystopian, post-apocalyptic, or post-scarcity; Future history and speculative evolution of humans on Earth or other planets; Paranormal abilities such as mind control, telepathy, and telekinesis. == International examples == == Subgenres == While science fiction is a genre of fiction, a science fiction genre is a subgenre within science fiction. Science fiction may be divided along any number of overlapping axes. Gary K. Wolfe's Critical Terms for Science Fiction and Fantasy identifies over 30 subdivisions of science fiction, not including science fantasy (which is a mixed genre). == Related genres == == See also == == References == == General and cited sources == == External links == Science Fiction Bookshelf at Project Gutenberg Science fiction fanzines (current and historical) online SFWA "Suggested Reading" list Science fiction at standardebooks.org Science Fiction Research Association A selection of articles written by Mike Ashley, Iain Sinclair and others, exploring 19th-century visions of the future. Archived 18 June 2023 at the Wayback Machine from the British Library's Discovering Literature website. Merril Collection of Science Fiction, Speculation and Fantasy at Toronto Public Library Science Fiction Studies' Chronological Bibliography of Science Fiction History, Theory, and Criticism Best 50 sci-fi novels of all time (Esquire; 21 March 2022)
Wikipedia/Science_fiction
In theoretical physics, unparticle physics is a speculative theory that conjectures a form of matter that cannot be explained in terms of particles using the Standard Model of particle physics, because its components are scale invariant. Howard Georgi proposed this theory in two 2007 papers, "Unparticle Physics" and "Another Odd Thing About Unparticle Physics". His papers were followed by further work by other researchers into the properties and phenomenology of unparticle physics and its potential impact on particle physics, astrophysics, cosmology, CP violation, lepton flavour violation, muon decay, neutrino oscillations, and supersymmetry. == Background == All particles exist in states that may be characterized by a certain energy, momentum and mass. In most of the Standard Model of particle physics, particles of the same type cannot exist in another state with all these properties scaled up or down by a common factor – electrons, for example, always have the same mass regardless of their energy or momentum. But this is not always the case: massless particles, such as photons, can exist with their properties scaled equally. This immunity to scaling is called "scale invariance". The idea of unparticles comes from conjecturing that there may be "stuff" that does not necessarily have zero mass but is still scale-invariant, with the same physics regardless of a change of length (or equivalently energy). This stuff is unlike particles, and described as unparticle. The unparticle stuff is equivalent to particles with a continuous spectrum of mass. Such unparticle stuff has not been observed, which suggests that if it exists, it must couple with normal matter weakly at observable energies. Since the Large Hadron Collider (LHC) team announced it will begin probing a higher energy frontier in 2009, some theoretical physicists have begun to consider the properties of unparticle stuff and how it may appear in LHC experiments. One of the great hopes for the LHC is that it might come up with some discoveries that will help us update or replace our best description of the particles that make up matter and the forces that glue them together. == Properties == Unparticles would have properties in common with neutrinos, which have almost zero mass and are therefore nearly scale invariant. Neutrinos barely interact with matter – most of the time physicists can infer their presence only by calculating the "missing" energy and momentum after an interaction. By looking at the same interaction many times, a probability distribution is built up that tells more specifically how many and what sort of neutrinos are involved. They couple very weakly to ordinary matter at low energies, and the effect of the coupling increases as the energy increases. A similar technique could be used to search for evidence of unparticles. According to scale invariance, a distribution containing unparticles would become apparent because it would resemble a distribution for a fractional number of massless particles. This scale invariant sector would interact very weakly with the rest of the Standard Model, making it possible to observe evidence for unparticle stuff, if it exists. The unparticle theory is a high-energy theory that contains both Standard Model fields and Banks–Zaks fields, which have scale-invariant behavior at an infrared point. The two fields can interact through the interactions of ordinary particles if the energy of the interaction is sufficiently high. 
These particle interactions would appear to have "missing" energy and momentum that would not be detected by the experimental apparatus. Certain distinct distributions of missing energy would signify the production of unparticle stuff. If such signatures are not observed, bounds on the model can be set and refined. == Experimental indications == Unparticle physics has been proposed as an explanation for anomalies in superconducting cuprate materials, where the charge measured by ARPES appears to exceed predictions from Luttinger's theorem for the quantity of electrons. == References == == External links == Zyga, Lisa. "Professor proposes theory of unparticle physics". PhysOrg.com. Zyga, Lisa. "Physicists Build Unparticle Models Guided by Big Bang and Supernovae". PhysOrg.com. "Weird Physics Theory: Unparticle Stuff". ScienceDaily.com. Siegfried, Tom. "'Unparticle' Matter may be the stuff that glues physics together". whyfiles.org. Archived from the original on 2008-05-12. Retrieved 2008-01-29. Feng, Jonathan. "Unparticle Physics" (PDF). hep.ps.uci.edu. Cheung, Kingman; Wai-Yee Keung; Tzu-Chiang Yuan (2007). "Collider Phenomenology of Unparticle Physics". Physical Review D. 76 (5): 055003. arXiv:0706.3155. Bibcode:2007PhRvD..76e5003C. doi:10.1103/PhysRevD.76.055003. S2CID 119612474.
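The "fractional number of massless particles" behaviour described in the Properties section above can be made concrete with the phase-space measure proposed in Georgi's original paper for an unparticle operator of scaling dimension d_U. The expression below is a sketch quoted from that proposal (the normalization A_{d_U} should be checked against the paper itself); in the limit d_U → 1 it reduces to the phase space of a single massless particle.

```latex
% Sketch of the unparticle phase-space measure for scaling dimension d_U,
% as proposed by Georgi (2007); normalization worth checking against the paper.
\mathrm{d}\Phi_{\mathcal U}(P) = A_{d_{\mathcal U}}\,\theta(P^{0})\,\theta(P^{2})\,\bigl(P^{2}\bigr)^{d_{\mathcal U}-2}\,\frac{\mathrm{d}^{4}P}{(2\pi)^{4}},
\qquad
A_{d_{\mathcal U}} = \frac{16\pi^{5/2}}{(2\pi)^{2d_{\mathcal U}}}\,
\frac{\Gamma\!\left(d_{\mathcal U}+\tfrac{1}{2}\right)}{\Gamma\!\left(d_{\mathcal U}-1\right)\,\Gamma\!\left(2d_{\mathcal U}\right)}
```

Non-integer values of d_U therefore interpolate between integer numbers of massless particles, which is the signature a missing-energy distribution produced by unparticle stuff would reflect.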
Wikipedia/Unparticle_physics
Unification of theories about observable fundamental phenomena of nature is one of the primary goals of physics. The two great unifications to date are Isaac Newton's unification of gravity and astronomy, and James Clerk Maxwell's unification of electromagnetism; the latter has been further unified with the concept of electroweak interaction. This process of "unifying" forces continues today, with the ultimate goal of finding a theory of everything. == Unification of gravity and astronomy == The "first great unification" was Isaac Newton's 17th century unification of gravity, which brought together the understandings of the observable phenomena of gravity on Earth with the observable behaviour of celestial bodies in space. His work is credited with laying the foundations of future endeavors for a grand unified theory. For example, it has been stated that "If we have to take any single individual as the originator of the quest for a unified theory of physics, and, by implication, the whole of knowledge, it has to be Newton." Physicist Steven Weinberg stated that "It is with Isaac Newton that the modern dream of a final theory really begins". == Unification of magnetism, electricity, light and related radiation == The ancient Chinese people observed that certain rocks such as lodestone and magnetite were attracted to one another by an invisible force. This effect was later called magnetism, which was first rigorously studied in the 17th century. However, prior to ancient Chinese observations of magnetism, the ancient Greeks knew of other objects, such as amber, that when rubbed with fur would cause a similar invisible attraction between the two. This was also studied rigorously in the 17th century and came to be called electricity. Thus, physics had come to understand two observations of nature in terms of some root cause (electricity and magnetism). However, work in the 19th century revealed that these two forces were just two different aspects of one force – electromagnetism. The "second great unification" was James Clerk Maxwell's 19th century unification of electromagnetism. It brought together the understandings of the observable phenomena of magnetism, electricity and light (and more broadly, the spectrum of electromagnetic radiation). This was followed in the 20th century by Albert Einstein's unification of space and time, and of mass and energy through his theory of special relativity. Later, Paul Dirac developed quantum field theory, unifying quantum mechanics and special relativity. A relatively recent unification of electromagnetism and the weak nuclear force treats them as two aspects of a single electroweak interaction. == Unification of the remaining fundamental forces: theory of everything == This process of "unifying" forces continues today, with the ultimate goal of finding a theory of everything – it remains perhaps the most prominent of the unsolved problems in physics. There remain four fundamental forces which have not been decisively unified: the gravitational and electromagnetic interactions, which produce significant long-range forces whose effects can be seen directly in everyday life, and the strong and weak interactions, which produce forces at minuscule, subatomic distances and govern nuclear interactions. Electromagnetism and the weak interactions are widely considered to be two aspects of the electroweak interaction. 
Attempts to unify quantum mechanics and general relativity into a single theory of quantum gravity, a program ongoing for over half a century, have not yet been decisively resolved; current leading candidates are M-theory, superstring theory and loop quantum gravity. == References ==
Wikipedia/Unification_(physics)
A consensus theory of truth is the process of taking statements to be true simply because people generally agree upon them.: 134  == Varieties of consensus == === Consensus gentium === An ancient criterion of truth, the consensus gentium (Latin for agreement of the people), states "that which is universal among men carries the weight of truth" (Ferm, 64). A number of consensus theories of truth are based on variations of this principle. In some criteria the notion of universal consent is taken strictly, while others qualify the terms of consensus in various ways. There are versions of consensus theory in which the specific population weighing in on a given question, the proportion of the population required for consent, and the period of time needed to declare consensus vary from the classical norm. === Consensus as a regulative ideal === A descriptive theory is one that tells how things are, while a normative theory tells how things ought to be. Expressed in practical terms, a normative theory, more properly called a policy, tells agents how they ought to act. A policy can be an absolute imperative, telling agents how they ought to act in any case, or it can be a contingent directive, telling agents how they ought to act if they want to achieve a particular goal. A policy is frequently stated in the form of a piece of advice called a heuristic, a maxim, a norm, a rule, a slogan, and so on. Other names for a policy are a recommendation and a regulative principle. A regulative ideal can be expressed in the form of a description, but what it describes is an ideal state of affairs, a condition of being that constitutes its aim, end, goal, intention, or objective. It is not the usual case for the actual case to be the ideal case, or else there would hardly be much call for a policy aimed at achieving an ideal. Corresponding to the distinction between actual conditions and ideal conditions there is a distinction between actual consensus and ideal consensus. A theory of truth founded on a notion of actual consensus is a very different thing from a theory of truth founded on a notion of ideal consensus. Moreover, an ideal consensus may be ideal in several different ways. The state of consensus may be ideal in its own nature, conceived in the matrix of actual experience by way of intellectual operations like abstraction, extrapolation, and limit formation. Or the conditions under which the consensus is conceived to be possible may be formulated as idealizations of actual conditions. A very common type of ideal consensus theory refers to a community that is an idealization of actual communities in one or more respects. == Critiques == It is very difficult to find any philosopher of note who asserts a bare, naive, or pure consensus theory of truth, in other words, a treatment of truth that is based on actual consensus in an actual community without further qualification. One obvious critique is that not everyone agrees to consensus theory, implying that it may not be true by its own criteria. Another problem is defining how we know that consensus is achieved without falling prey to an infinite regress. Even if everyone agrees to a particular proposition, we may not know that it is true until everyone agrees that everyone agrees to it. Bare consensus theories are frequent topics of discussion, however, evidently because they serve the function of reference points for the discussion of alternative theories. 
If consensus equals truth, then truth can be made by forcing or organizing a consensus, rather than being discovered through experiment or observation, or existing separately from consensus. The principles of mathematics also do not hold under consensus truth because mathematical propositions build on each other. If the consensus declared 2+2=5 it would render the practice of mathematics where 2+2=4 impossible. Imre Lakatos characterizes it as a "watered down" form of provable truth propounded by some sociologists of knowledge, particularly Thomas Kuhn and Michael Polanyi. Philosopher Nigel Warburton argues that the truth by consensus process is not reliable, general agreement upon something does not make it true. Warburton says that one reason for the unreliability of the consensus theory of truth, is that people are gullible, easily misled, and prone to wishful thinking—they believe an assertion and espouse it as truth in the face of overwhelming evidence and facts to the contrary, simply because they wish that things were so.: 135  == See also == Argumentum ad populum – Fallacy of claiming the majority is always correct Coherentism – Theory in philosophical epistemology Common knowledge – Statement widely known to be true Confirmation holism – Idea in the philosophy of science Consensus reality – Notion of reality based on consensus view Conventional wisdom – Ideas generally accepted by experts or the public Jury trial – Type of legal trial Philosophy of history § History as propaganda: Is history always written by the victors? Philosophy of history – The philosophical study of history and its discipline Truthiness – Quality of preferring concepts or facts one wishes to be true, rather than actual truth Wikiality – Neologism combining Wiki and reality === Related topics === Belief – Subjective attitude that something is true Conventionalism – Philosophical belief that principles depend on societal agreements, not external reality Epistemology – Philosophical study of knowledge Information – Facts provided or learned about something or someone Inquiry – Any process that has the aim of augmenting knowledge, resolving doubt, or solving a problem Knowledge – Awareness of facts or being competent Pragmatism – Philosophical tradition Pragmaticism – Branch of pragmatic philosophy Pragmatic maxim – Maxim of logic formulated by Charles Sanders Peirce. Reproducibility – Aspect of scientific research Scientific method – Interplay between observation, experiment, and theory in science Testability – Extent to which truthness or falseness of a hypothesis/declaration can be tested Verifiability theory of meaning – Philosophical doctrine == References == == Sources == Ferm, Vergilius (1962), "Consensus Gentium", p. 64 in Runes (1962). Haack, Susan (1993), Evidence and Inquiry: Towards Reconstruction in Epistemology, Blackwell Publishers, Oxford, UK. Habermas, Jürgen (1976), "What Is Universal Pragmatics?", 1st published, "Was heißt Universalpragmatik?", Sprachpragmatik und Philosophie, Karl-Otto Apel (ed.), Suhrkamp Verlag, Frankfurt am Main. Reprinted, pp. 1–68 in Jürgen Habermas, Communication and the Evolution of Society, Thomas McCarthy (trans.), Beacon Press, Boston, Massachusetts, 1979. Habermas, Jürgen (1979), Communication and the Evolution of Society, Thomas McCarthy (trans.), Beacon Press, Boston, Massachusetts. 
Habermas, Jürgen (1990), Moral Consciousness and Communicative Action, Christian Lenhardt and Shierry Weber Nicholsen (trans.), Thomas McCarthy (intro.), MIT Press, Cambridge, Massachusetts. Habermas, Jürgen (2003), Truth and Justification, Barbara Fultner (trans.), MIT Press, Cambridge, Massachusetts. James, William (1907), Pragmatism, A New Name for Some Old Ways of Thinking, Popular Lectures on Philosophy, Longmans, Green, and Company, New York, New York. James, William (1909), The Meaning of Truth, A Sequel to 'Pragmatism', Longmans, Green, and Company, New York, New York. Kant, Immanuel (1800), Introduction to Logic. Reprinted, Thomas Kingsmill Abbott (trans.), Dennis Sweet (intro.), Barnes and Noble, New York, New York, 2005. Kirkham, Richard L. (1992), Theories of Truth: A Critical Introduction, MIT Press, Cambridge, Massachusetts. Rescher, Nicholas (1995), Pluralism: Against the Demand for Consensus, Oxford University Press, Oxford, UK. Runes, Dagobert D. (ed., 1962), Dictionary of Philosophy, Littlefield, Adams, and Company, Totowa, New Jersey.
Wikipedia/Consensus_theory_of_truth
A cyclic model (or oscillating model) is any of several cosmological models in which the universe follows infinite, or indefinite, self-sustaining cycles. For example, the oscillating universe theory briefly considered by Albert Einstein in 1930 theorized a universe following an eternal series of oscillations, each beginning with a Big Bang and ending with a Big Crunch; in the interim, the universe would expand for a period of time before the gravitational attraction of matter causes it to collapse back in and undergo a bounce. == Overview == In the 1920s, theoretical physicists, most notably Albert Einstein, noted the possibility of a cyclic model for the universe as an (everlasting) alternative to the model of an expanding universe. In 1922, Alexander Friedmann introduced the Oscillating Universe Theory. However, work by Richard C. Tolman in 1934 showed that these early attempts failed because of the cyclic problem: according to the second law of thermodynamics, entropy can only increase. This implies that successive cycles grow longer and larger. Extrapolating back in time, cycles before the present one become shorter and smaller culminating again in a Big Bang and thus not replacing it. This puzzling situation remained for many decades until the early 21st century when the recently discovered dark energy component provided new hope for a consistent cyclic cosmology. In 2011, a five-year survey of 200,000 galaxies and spanning 7 billion years of cosmic time confirmed that "dark energy is driving our universe apart at accelerating speeds." One new cyclic model is the brane cosmology model of the creation of the universe, derived from the earlier ekpyrotic model. It was proposed in 2001 by Paul Steinhardt of Princeton University and Neil Turok of Cambridge University. The theory describes a universe exploding into existence not just once, but repeatedly over time. The theory could potentially explain why a repulsive form of energy known as the cosmological constant, which is accelerating the expansion of the universe, is several orders of magnitude smaller than predicted by the standard Big Bang model. A different cyclic model relying on the notion of phantom energy was proposed in 2007 by Lauris Baum and Paul Frampton of the University of North Carolina at Chapel Hill. Other cyclic models include conformal cyclic cosmology and loop quantum cosmology. == The Steinhardt–Turok model == In this cyclic model, two parallel orbifold planes or M-branes collide periodically in a higher-dimensional space. The visible four-dimensional universe lies on one of these branes. The collisions correspond to a reversal from contraction to expansion, or a Big Crunch followed immediately by a Big Bang. The matter and radiation we see today were generated during the most recent collision in a pattern dictated by quantum fluctuations created before the branes. After billions of years the universe reached the state we observe today; after additional billions of years it will ultimately begin to contract again. Dark energy corresponds to a force between the branes, and serves the crucial role of solving the monopole, horizon, and flatness problems. Moreover, the cycles can continue indefinitely into the past and the future, and the solution is an attractor, so it can provide a complete history of the universe. As Richard C. Tolman showed, the earlier cyclic model failed because the universe would undergo inevitable thermodynamic heat death. 
However, the newer cyclic model evades this by having a net expansion each cycle, preventing entropy from building up. However, there remain major open issues in the model. Foremost among them is that colliding branes are not understood by string theorists, and nobody knows if the scale invariant spectrum will be destroyed by the big crunch. Moreover, as with cosmic inflation, while the general character of the forces (in the ekpyrotic scenario, a force between branes) required to create the vacuum fluctuations is known, there is no candidate from particle physics. == The Baum–Frampton model == This more recent cyclic model of 2007 assumes an exotic form of dark energy called phantom energy, which possesses negative kinetic energy and would usually cause the universe to end in a Big Rip. This condition is achieved if the universe is dominated by dark energy with a cosmological equation of state parameter w {\displaystyle w} satisfying the condition w ≡ p ρ < − 1 {\displaystyle w\equiv {\frac {p}{\rho }}<-1} , for energy density ρ {\displaystyle {\rho }} and pressure p. By contrast, Steinhardt–Turok assume w ≥ − 1 {\displaystyle w{\geq }-1} . In the Baum–Frampton model, a septillionth (or less) of a second (i.e. 10−24 seconds or less) before the would-be Big Rip, a turnaround occurs and only one causal patch is retained as our universe. The generic patch contains no quark, lepton or force carrier; only dark energy – and its entropy thereby vanishes. The adiabatic process of contraction of this much smaller universe takes place with constant vanishing entropy and with no matter including no black holes which disintegrated before turnaround. The idea that the universe "comes back empty" is a central new idea of this cyclic model, and avoids many difficulties confronting matter in a contracting phase such as excessive structure formation, proliferation and expansion of black holes, as well as going through phase transitions such as those of QCD and electroweak symmetry restoration. Any of these would tend strongly to produce an unwanted premature bounce, simply to avoid violation of the second law of thermodynamics. The condition of w < − 1 {\displaystyle w<-1} may be logically inevitable in a truly infinitely cyclic cosmology because of the entropy problem. Nevertheless, many technical back up calculations are necessary to confirm consistency of the approach. Although the model borrows ideas from string theory, it is not necessarily committed to strings, or to higher dimensions, yet such speculative devices may provide the most expeditious methods to investigate the internal consistency. The value of w {\displaystyle w} in the Baum–Frampton model can be made arbitrarily close to, but must be less than, −1. == Other cyclic models == Conformal cyclic cosmology—a general relativity based theory by Roger Penrose in which the universe expands until all the matter decays and is turned to light—so there is nothing in the universe that has any time or distance scale associated with it. This permits it to become identical with the Big Bang, so starting the next cycle. Loop quantum cosmology which predicts a "quantum bridge" between contracting and expanding cosmological branches. == See also == Physical cosmologies: Big Bounce Conformal cyclic cosmology Religion: Bhavacakra Cycles of time in Hinduism Eternal return Historic recurrence Jainism and non-creationism Kalachakra Wheel of time == References == == Further reading == Steinhardt, P. J.; Turok, N. (2007). Endless Universe. 
New York, New York: Doubleday. ISBN 978-0-385-50964-0. Tolman, R. C. (1987) [1934]. Relativity, Thermodynamics, and Cosmology. New York: Dover. ISBN 978-0-486-65383-9. LCCN 34032023. Baum, L.; Frampton, P. H. (2007). "Turnaround in Cyclic Cosmology". Physical Review Letters. 98 (7): 071301. arXiv:hep-th/0610213. Bibcode:2007PhRvL..98g1301B. doi:10.1103/PhysRevLett.98.071301. PMID 17359014. S2CID 17698158. Dicke, R. H.; Peebles, P. J. E.; Roll, P. G.; Wilkinson, D. T. (1965). "Cosmic Black-Body Radiation". The Astrophysical Journal. 142: 414. Bibcode:1965ApJ...142..414D. doi:10.1086/148306. ISSN 0004-637X. S. W. Hawking and G. F. R. Ellis, The large-scale structure of space-time (Cambridge, 1973). Penrose, Roger (2010). Cycles of Time: an extraordinary new view of the universe. London: The Bodley Head. ISBN 978-0-224-08036-1. == External links == Paul J. Steinhardt, Department of Physics, Princeton University Paul H. Frampton, Department of Physics and Astronomy, The University of North Carolina at Chapel Hill "The Cyclic Universe": A Talk with Neil Turok Roger Penrose—Cyclical Universe Model
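As a rough numerical illustration of the condition w < −1 used in the Baum–Frampton model above, the following minimal Python sketch evaluates how long a flat, phantom-energy-dominated universe would take to reach a Big Rip. The Hubble time used here is an assumed round value, and the closed-form rip time follows from the Friedmann equation for a single component with constant w; the numbers are illustrative only.

```python
# Sketch: time remaining until a Big Rip in a flat universe dominated by
# phantom dark energy with constant equation of state w < -1.
# Friedmann equation: (da/dt / a)^2 = H0^2 * a**(-3*(1+w)), with a = 1 today.
# Integrating gives a finite rip time: t_rip - t_now = 2 / (3 * |1 + w| * H0).

H0_INV_GYR = 14.4  # assumed Hubble time 1/H0 in gigayears (H0 ~ 68 km/s/Mpc)

for w in (-1.1, -1.5, -2.0):
    t_rip = 2.0 / (3.0 * abs(1.0 + w)) * H0_INV_GYR
    print(f"w = {w:+.1f}: Big Rip after roughly {t_rip:5.1f} Gyr")
```

In the Baum–Frampton picture the turnaround occurs a tiny fraction of a second before this would-be rip, so the figures above only set the timescale on which a cycle would end if no turnaround took place.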
Wikipedia/Cyclic_model
The Type 0 string theory is a less well-known model of string theory. It is a superstring theory in the sense that the worldsheet theory is supersymmetric. However, the spacetime spectrum is not supersymmetric and, in fact, does not contain any fermions at all. In dimensions greater than two, the ground state is a tachyon so the theory is unstable. These properties make it similar to the bosonic string and an unsuitable proposal for describing the world as we observe it, although a GSO projection does get rid of the tachyon and the even G-parity sector of the theory defines a stable string theory. The theory is used sometimes as a toy model for exploring concepts in string theory, notably closed string tachyon condensation. Some other recent interest has involved the two-dimensional Type 0 string which has a non-perturbatively stable matrix model description. Like the Type II string, different GSO projections result in slightly different theories, Type 0A and Type 0B. The difference lies in which types of Ramond–Ramond fields lie in the massless spectrum. == References == Polchinski, Joseph (1998). String Theory, Cambridge University Press. A modern textbook. Vol. 2: Superstring theory and beyond. ISBN 0-521-63304-4.
Wikipedia/Type_0_string_theory
In physics, Randall–Sundrum models (RS) (also called 5-dimensional warped geometry theory) are models that describe the world in terms of a warped-geometry higher-dimensional universe, or more concretely as a 5-dimensional anti-de Sitter space where the elementary particles (except the graviton) are localized on a (3 + 1)-dimensional brane or branes. The two models were proposed in two articles in 1999 by Lisa Randall and Raman Sundrum because they were dissatisfied with the universal extra-dimensional models then in vogue. Such models require two fine tunings; one for the value of the bulk cosmological constant and the other for the brane tensions. Later, while studying RS models in the context of the anti-de Sitter / conformal field theory (AdS/CFT) correspondence, they showed how it can be dual to technicolor models. The first of the two models, called RS1, has a finite size for the extra dimension with two branes, one at each end. The second, RS2, is similar to the first, but one brane has been placed infinitely far away, so that there is only one brane left in the model. == Overview == The model is a braneworld theory developed while trying to solve the hierarchy problem of the Standard Model. It involves a finite five-dimensional bulk that is extremely warped and contains two branes: the Planckbrane (where gravity is a relatively strong force; also called "Gravitybrane") and the Tevbrane (our home with the Standard Model particles; also called "Weakbrane"). In this model, the two branes are separated in the not-necessarily large fifth dimension by approximately 16 units (the units based on the brane and bulk energies). The Planckbrane has positive brane energy, and the Tevbrane has negative brane energy. These energies are the cause of the extremely warped spacetime. == Graviton probability function == In this warped spacetime that is only warped along the fifth dimension, the graviton's probability function is extremely high at the Planckbrane, but it drops exponentially as it moves closer towards the Tevbrane. In this, gravity would be much weaker on the Tevbrane than on the Planckbrane. == RS1 model == The RS1 model attempts to address the hierarchy problem. The warping of the extra dimension is analogous to the warping of spacetime in the vicinity of a massive object, such as a black hole. This warping, or red-shifting, generates a large ratio of energy scales, so that the natural energy scale at one end of the extra dimension is much larger than at the other end: d s 2 = 1 k 2 y 2 ( d y 2 + η μ ν d x μ d x ν ) , {\displaystyle \mathrm {d} s^{2}={\frac {1}{k^{2}y^{2}}}(\mathrm {d} y^{2}+\eta _{\mu \nu }\,\mathrm {d} x^{\mu }\,\mathrm {d} x^{\nu }),} where k is some constant, and η has "−+++" metric signature. This space has boundaries at y = 1/k and y = 1/(Wk), with 0 ≤ 1 / k ≤ 1 / ( W k ) {\displaystyle 0\leq 1/k\leq 1/(Wk)} , where k is around the Planck scale, W is the warp factor, and Wk is around a TeV. The boundary at y = 1/k is called the Planck brane, and the boundary at y = 1/(Wk) is called the TeV brane. The particles of the Standard Model reside on the TeV brane. The distance between both branes is only −ln(W)/k, though. In another coordinate system, φ = d e f − π ln ⁡ ( k y ) ln ⁡ ( W ) , {\displaystyle \varphi \ {\stackrel {\mathrm {def} }{=}}\ -{\frac {\pi \ln(ky)}{\ln(W)}},} so that 0 ≤ φ ≤ π , {\displaystyle 0\leq \varphi \leq \pi ,} and d s 2 = ( ln ⁡ ( W ) π k ) 2 d φ 2 + e 2 ln ⁡ ( W ) φ π η μ ν d x μ d x ν . 
{\displaystyle \mathrm {d} s^{2}=\left({\frac {\ln(W)}{\pi k}}\right)^{2}\,\mathrm {d} \varphi ^{2}+e^{\frac {2\ln(W)\varphi }{\pi }}\eta _{\mu \nu }\,\mathrm {d} x^{\mu }\,\mathrm {d} x^{\nu }.} == RS2 model == The RS2 model uses the same geometry as RS1, but there is no TeV brane. The particles of the Standard Model are presumed to be on the Planck brane. This model was originally of interest because it represented an infinite 5-dimensional model, which, in many respects, behaved as a 4-dimensional model. This setup may also be of interest for studies of the AdS/CFT conjecture. == Prior models == In 1998/99 Merab Gogberashvili published on arXiv a number of articles on a very similar theme. He showed that if the Universe is considered as a thin shell (a mathematical synonym for "brane") expanding in 5-dimensional space, then there is a possibility to obtain one scale for particle theory corresponding to the 5-dimensional cosmological constant and Universe thickness, and thus to solve the hierarchy problem. It was also shown that the four-dimensionality of the Universe is the result of a stability requirement, since the extra component of the Einstein field equations giving the localized solution for matter fields coincides with one of the conditions of stability. == Experimental results == In August 2016, experimental results from the LHC excluded RS gravitons with masses below 3.85 and 4.45 TeV for k̃ = 0.1 and 0.2 respectively, and, for k̃ = 0.01, graviton masses below 1.95 TeV, except for the region between 1.75 TeV and 1.85 TeV. These are currently the most stringent limits on RS graviton production. == See also == DGP model Goldberger–Wise mechanism Kaluza–Klein theory ADD model Scientific importance of GW170817, a neutron star merger == References == == Sources == Randall, Lisa (2005). Warped Passages: Unraveling the Mysteries of the Universe's Hidden Dimensions. New York: HarperCollins. ISBN 978-0-06-053108-9. == External links == Lisa Randall's web page at Harvard University Archived 2013-04-13 at the Wayback Machine Raman Sundrum's web page at the University of Maryland
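To get a feel for the size of the warping in the RS1 construction above, here is a rough numerical sketch; the Planck-scale and TeV-scale inputs are round, illustrative values, and the estimate W ≈ TeV / M_Planck simply restates that k is taken near the Planck scale while Wk is taken near a TeV.

```python
import math

# Rough illustration of the RS1 warp factor (a sketch, not a derivation).
M_planck_GeV = 1.2e19    # Planck scale, in GeV (order of magnitude, assumed)
TeV_GeV = 1.0e3          # TeV scale, in GeV

# In the conventions above, k is near the Planck scale and W*k is near a TeV,
# so the warp factor is of order W ~ TeV / M_Planck.
W = TeV_GeV / M_planck_GeV
print(f"warp factor W ~ {W:.1e}")                    # ~ 1e-16

# Proper separation of the branes, -ln(W)/k, expressed in units of 1/k:
separation = -math.log(W)
print(f"brane separation ~ {separation:.0f} / k")    # ~ 37 / k

# exp(-37) ~ 1e-16: roughly 16 orders of magnitude of warping are generated by
# a brane separation of only a few tens of 1/k, which is the point of the model.
```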
Wikipedia/Randall–Sundrum_model
In physics, the fundamental interactions or fundamental forces are interactions in nature that appear not to be reducible to more basic interactions. There are four fundamental interactions known to exist: gravity, electromagnetism, the weak interaction, and the strong interaction. The gravitational and electromagnetic interactions produce long-range forces whose effects can be seen directly in everyday life. The strong and weak interactions produce forces at subatomic scales and govern nuclear interactions inside atoms. Some scientists hypothesize that a fifth force might exist, but these hypotheses remain speculative. Each of the known fundamental interactions can be described mathematically as a field. The gravitational interaction is attributed to the curvature of spacetime, described by Einstein's general theory of relativity. The other three are discrete quantum fields, and their interactions are mediated by elementary particles described by the Standard Model of particle physics. Within the Standard Model, the strong interaction is carried by a particle called the gluon and is responsible for quarks binding together to form hadrons, such as protons and neutrons. As a residual effect, it creates the nuclear force that binds the latter particles to form atomic nuclei. The weak interaction is carried by particles called W and Z bosons, and also acts on the nucleus of atoms, mediating radioactive decay. The electromagnetic force, carried by the photon, creates electric and magnetic fields, which are responsible for the attraction between orbital electrons and atomic nuclei which holds atoms together, as well as chemical bonding and electromagnetic waves, including visible light, and forms the basis for electrical technology. Although the electromagnetic force is far stronger than gravity, it tends to cancel itself out within large objects, so over large (astronomical) distances gravity tends to be the dominant force, and is responsible for holding together the large scale structures in the universe, such as planets, stars, and galaxies. The historical success of models that show relationships between fundamental interactions has led to efforts to go beyond the Standard Model and combine all four forces into a theory of everything. == History == === Classical theory === In his 1687 theory, Isaac Newton postulated space as an infinite and unalterable physical structure existing before, within, and around all objects while their states and relations unfold at a constant pace everywhere, thus absolute space and time. Observing that all objects bearing mass approach at a constant rate, but collide by impact proportional to their masses, Newton inferred that matter exhibits an attractive force. His law of universal gravitation implied there to be instant interaction among all objects. As conventionally interpreted, Newton's theory of motion modelled a central force without a communicating medium. Thus Newton's theory violated the tradition, going back to Descartes, that there should be no action at a distance. Conversely, during the 1820s, when explaining magnetism, Michael Faraday inferred a field filling space and transmitting that force. Faraday conjectured that ultimately, all forces unified into one. In 1873, James Clerk Maxwell unified electricity and magnetism as effects of an electromagnetic field whose third consequence was light, travelling at constant speed in vacuum. 
If his electromagnetic field theory held true in all inertial frames of reference, this would contradict Newton's theory of motion, which relied on Galilean relativity. If, instead, his field theory only applied to reference frames at rest relative to a mechanical luminiferous aether—presumed to fill all space whether within matter or in vacuum and to manifest the electromagnetic field—then it could be reconciled with Galilean relativity and Newton's laws. (However, such a "Maxwell aether" was later disproven; Newton's laws did, in fact, have to be replaced.) === Standard Model === The Standard Model of particle physics was developed throughout the latter half of the 20th century. In the Standard Model, the electromagnetic, strong, and weak interactions associate with elementary particles, whose behaviours are modelled in quantum mechanics (QM). For predictive success with QM's probabilistic outcomes, particle physics conventionally models QM events across a field set to special relativity, altogether relativistic quantum field theory (QFT). Force particles, called gauge bosons—force carriers or messenger particles of underlying fields—interact with matter particles, called fermions. Everyday matter is atoms, composed of three fermion types: up-quarks and down-quarks constituting, as well as electrons orbiting, the atom's nucleus. Atoms interact, form molecules, and manifest further properties through electromagnetic interactions among their electrons absorbing and emitting photons, the electromagnetic field's force carrier, which if unimpeded traverse potentially infinite distance. Electromagnetism's QFT is quantum electrodynamics (QED). The force carriers of the weak interaction are the massive W and Z bosons. Electroweak theory (EWT) covers both electromagnetism and the weak interaction. At the high temperatures shortly after the Big Bang, the weak interaction, the electromagnetic interaction, and the Higgs boson were originally mixed components of a different set of ancient pre-symmetry-breaking fields. As the early universe cooled, these fields split into the long-range electromagnetic interaction, the short-range weak interaction, and the Higgs boson. In the Higgs mechanism, the Higgs field manifests Higgs bosons that interact with some quantum particles in a way that endows those particles with mass. The strong interaction, whose force carrier is the gluon, traversing minuscule distance among quarks, is modeled in quantum chromodynamics (QCD). EWT, QCD, and the Higgs mechanism comprise particle physics' Standard Model (SM). Predictions are usually made using calculational approximation methods, although such perturbation theory is inadequate to model some experimental observations (for instance bound states and solitons). Still, physicists widely accept the Standard Model as science's most experimentally confirmed theory. Beyond the Standard Model, some theorists work to unite the electroweak and strong interactions within a Grand Unified Theory (GUT). Some attempts at GUTs hypothesize "shadow" particles, such that every known matter particle associates with an undiscovered force particle, and vice versa, altogether supersymmetry (SUSY). Other theorists seek to quantize the gravitational field by modelling the behaviour of its hypothetical force carrier, the graviton, and so achieve quantum gravity (QG). One approach to QG is loop quantum gravity (LQG). 
Still other theorists seek both QG and GUT within one framework, reducing all four fundamental interactions to a Theory of Everything (ToE). The most prevalent aim at a ToE is string theory, although to model matter particles, it added SUSY to force particles—and so, strictly speaking, became superstring theory. Multiple, seemingly disparate superstring theories were unified on a backbone, M-theory. Theories beyond the Standard Model remain highly speculative, lacking great experimental support. == Overview of the fundamental interactions == In the conceptual model of fundamental interactions, matter consists of fermions, which carry properties called charges and spin ±1⁄2 (intrinsic angular momentum ±ħ⁄2, where ħ is the reduced Planck constant). They attract or repel each other by exchanging bosons. The interaction of any pair of fermions in perturbation theory can then be modelled thus: Two fermions go in → interaction by boson exchange → two changed fermions go out. The exchange of bosons always carries energy and momentum between the fermions, thereby changing their speed and direction. The exchange may also transport a charge between the fermions, changing the charges of the fermions in the process (e.g., turn them from one type of fermion to another). Since bosons carry one unit of angular momentum, the fermion's spin direction will flip from +1⁄2 to −1⁄2 (or vice versa) during such an exchange (in units of the reduced Planck constant). Since such interactions result in a change in momentum, they can give rise to classical Newtonian forces. In quantum mechanics, physicists often use the terms "force" and "interaction" interchangeably; for example, the weak interaction is sometimes referred to as the "weak force". According to the present understanding, there are four fundamental interactions or forces: gravitation, electromagnetism, the weak interaction, and the strong interaction. Their magnitude and behaviour vary greatly, as described in the table below. Modern physics attempts to explain every observed physical phenomenon by these fundamental interactions. Moreover, reducing the number of different interaction types is seen as desirable. Two cases in point are the unification of: Electric and magnetic force into electromagnetism; The electromagnetic interaction and the weak interaction into the electroweak interaction; see below. Both magnitude ("relative strength") and "range" of the associated potential, as given in the table, are meaningful only within a rather complex theoretical framework. The table below lists properties of a conceptual scheme that remains the subject of ongoing research. The modern (perturbative) quantum mechanical view of the fundamental forces other than gravity is that particles of matter (fermions) do not directly interact with each other, but rather carry a charge, and exchange virtual particles (gauge bosons), which are the interaction carriers or force mediators. For example, photons mediate the interaction of electric charges, and gluons mediate the interaction of color charges. The full theory includes perturbations beyond simply fermions exchanging bosons; these additional perturbations can involve bosons that exchange fermions, as well as the creation or destruction of particles: see Feynman diagrams for examples. == Interactions == === Gravity === Gravitation is the weakest of the four interactions at the atomic scale, where electromagnetic interactions dominate. 
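To make this concrete, one can compare the electrostatic attraction and the gravitational attraction between the electron and the proton in a hydrogen atom. The back-of-envelope sketch below uses standard approximate SI values for the constants; the specific numbers are illustrative rather than taken from the text.

```python
# Back-of-envelope comparison of electrostatic vs gravitational attraction
# between an electron and a proton (approximate SI constants).
k_e = 8.988e9        # Coulomb constant, N m^2 / C^2
G = 6.674e-11        # Newtonian gravitational constant, N m^2 / kg^2
e = 1.602e-19        # elementary charge, C
m_e = 9.109e-31      # electron mass, kg
m_p = 1.673e-27      # proton mass, kg

# The separation r cancels in the ratio, since both forces scale as 1/r^2.
ratio = (k_e * e**2) / (G * m_e * m_p)
print(f"F_coulomb / F_gravity ~ {ratio:.1e}")   # roughly 2e39
```

Whatever the separation, the electrical attraction exceeds the gravitational one by roughly 39 orders of magnitude, which is why gravity is negligible in atomic physics.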
Gravitation is the most important of the four fundamental forces for astronomical objects over astronomical distances for two reasons. First, gravitation has an infinite effective range, like electromagnetism but unlike the strong and weak interactions. Second, gravity always attracts and never repels; in contrast, astronomical bodies tend toward a near-neutral net electric charge, such that the attraction to one type of charge and the repulsion from the opposite charge mostly cancel each other out. Even though electromagnetism is far stronger than gravitation, electrostatic attraction is not relevant for large celestial bodies, such as planets, stars, and galaxies, simply because such bodies contain equal numbers of protons and electrons and so have a net electric charge of zero. Nothing "cancels" gravity, since it is only attractive, unlike electric forces which can be attractive or repulsive. On the other hand, all objects having mass are subject to the gravitational force, which only attracts. Therefore, only gravitation matters for the large-scale structure of the universe. The long range of gravitation makes it responsible for such large-scale phenomena as the structure of galaxies and black holes and, being only attractive, it slows down the expansion of the universe. Gravitation also explains astronomical phenomena on more modest scales, such as planetary orbits, as well as everyday experience: objects fall; heavy objects act as if they were glued to the ground, and animals can only jump so high. Gravitation was the first interaction to be described mathematically. In ancient times, Aristotle hypothesized that objects of different masses fall at different rates. During the Scientific Revolution, Galileo Galilei experimentally determined that this hypothesis was wrong under certain circumstances—neglecting the friction due to air resistance and buoyancy forces if an atmosphere is present (e.g. the case of a dropped air-filled balloon vs a water-filled balloon), all objects accelerate toward the Earth at the same rate. Isaac Newton's law of Universal Gravitation (1687) was a good approximation of the behaviour of gravitation. Present-day understanding of gravitation stems from Einstein's General Theory of Relativity of 1915, a more accurate (especially for cosmological masses and distances) description of gravitation in terms of the geometry of spacetime. Merging general relativity and quantum mechanics (or quantum field theory) into a more general theory of quantum gravity is an area of active research. It is hypothesized that gravitation is mediated by a massless spin-2 particle called the graviton. Although general relativity has been experimentally confirmed (at least for weak fields, i.e. not black holes) on all but the smallest scales, there are alternatives to general relativity. These theories must reduce to general relativity in some limit, and the focus of observational work is to establish limits on what deviations from general relativity are possible. Proposed extra dimensions could explain why the gravitational force is so weak. === Electroweak interaction === Electromagnetism and the weak interaction appear to be very different at everyday low energies. They can be modeled using two different theories. However, above unification energy, on the order of 100 GeV, they would merge into a single electroweak force. The electroweak theory is very important for modern cosmology, particularly on how the universe evolved. 
This is because shortly after the Big Bang, when the temperature was still above approximately 1015 K, the electromagnetic force and the weak force were still merged as a combined electroweak force. For contributions to the unification of the weak and electromagnetic interaction between elementary particles, Abdus Salam, Sheldon Glashow and Steven Weinberg were awarded the Nobel Prize in Physics in 1979. ==== Electromagnetism ==== Electromagnetism is the force that acts between electrically charged particles. This phenomenon includes the electrostatic force acting between charged particles at rest, and the combined effect of electric and magnetic forces acting between charged particles moving relative to each other. Electromagnetism has an infinite range, as gravity does, but is vastly stronger. It is the force that binds electrons to atoms, and it holds molecules together. It is responsible for everyday phenomena like light, magnets, electricity, and friction. Electromagnetism fundamentally determines all macroscopic, and many atomic-level, properties of the chemical elements. In a four kilogram (~1 gallon) jug of water, there is an enormous total electron charge. Thus, if we place two such jugs a meter apart, the electrons in one of the jugs repel those in the other jug with a force many times larger than the weight of the planet Earth. The atomic nuclei in one jug also repel those in the other with the same force. However, these repulsive forces are canceled by the attraction of the electrons in jug A with the nuclei in jug B and the attraction of the nuclei in jug A with the electrons in jug B, resulting in no net force. Electromagnetic forces are tremendously stronger than gravity, but tend to cancel out so that for astronomical-scale bodies, gravity dominates. Electrical and magnetic phenomena have been observed since ancient times, but it was only in the 19th century that James Clerk Maxwell discovered that electricity and magnetism are two aspects of the same fundamental interaction. By 1864, Maxwell's equations had rigorously quantified this unified interaction. Maxwell's theory, restated using vector calculus, is the classical theory of electromagnetism, suitable for most technological purposes. The constant speed of light in vacuum (customarily denoted with a lowercase letter c) can be derived from Maxwell's equations, which are consistent with the theory of special relativity. Albert Einstein's 1905 theory of special relativity, however, which follows from the observation that the speed of light is constant no matter how fast the observer is moving, showed that the theoretical result implied by Maxwell's equations has profound implications far beyond electromagnetism on the very nature of time and space. In another work that departed from classical electromagnetism, Einstein also explained the photoelectric effect by utilizing Max Planck's discovery that light was transmitted in 'quanta' of specific energy content based on the frequency, which we now call photons. Starting around 1927, Paul Dirac combined quantum mechanics with the relativistic theory of electromagnetism. Further work in the 1940s, by Richard Feynman, Freeman Dyson, Julian Schwinger, and Sin-Itiro Tomonaga, completed this theory, which is now called quantum electrodynamics, the revised theory of electromagnetism. 
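The jug-of-water comparison above can be made quantitative with a rough estimate. In the sketch below, the four-kilogram mass and the one-metre separation come from the comparison above; the constants are standard approximate values, and the result is an order-of-magnitude illustration rather than a precise figure.

```python
# Rough estimate of the total electron charge in 4 kg of water and of the
# Coulomb repulsion between two such charges 1 m apart (illustrative only).
avogadro = 6.022e23          # molecules per mole
molar_mass_water = 18.0e-3   # kg/mol
electrons_per_molecule = 10  # H2O: 8 from oxygen + 2 from hydrogen
e = 1.602e-19                # elementary charge, C
k_e = 8.988e9                # Coulomb constant, N m^2 / C^2

mass = 4.0                                            # kg of water
moles = mass / molar_mass_water
charge = moles * avogadro * electrons_per_molecule * e
print(f"total electron charge ~ {charge:.1e} C")      # ~ 2e8 C

force = k_e * charge**2 / 1.0**2                      # separation of 1 m
print(f"repulsive force at 1 m ~ {force:.1e} N")      # ~ 4e26 N

# For comparison, the weight of the Earth in its own surface gravity is about
# 6e24 kg * 9.8 m/s^2 ~ 6e25 N, so the repulsion is indeed several times larger.
```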
Quantum electrodynamics and quantum mechanics provide a theoretical basis for electromagnetic behavior such as quantum tunneling, in which a certain percentage of electrically charged particles move in ways that would be impossible under the classical electromagnetic theory, that is necessary for everyday electronic devices such as transistors to function. ==== Weak interaction ==== The weak interaction or weak nuclear force is responsible for some nuclear phenomena such as beta decay. Electromagnetism and the weak force are now understood to be two aspects of a unified electroweak interaction — this discovery was the first step toward the unified theory known as the Standard Model. In the theory of the electroweak interaction, the carriers of the weak force are the massive gauge bosons called the W and Z bosons. The weak interaction is the only known interaction that does not conserve parity; it is left–right asymmetric. The weak interaction even violates CP symmetry but does conserve CPT. === Strong interaction === The strong interaction, or strong nuclear force, is the most complicated interaction, mainly because of the way it varies with distance. The nuclear force is powerfully attractive between nucleons at distances of about 1 femtometre (fm, or 10−15 metres), but it rapidly decreases to insignificance at distances beyond about 2.5 fm. At distances less than 0.7 fm, the nuclear force becomes repulsive. This repulsive component is responsible for the physical size of nuclei, since the nucleons can come no closer than the force allows. After the nucleus was discovered in 1908, it was clear that a new force, today known as the nuclear force, was needed to overcome the electrostatic repulsion, a manifestation of electromagnetism, of the positively charged protons. Otherwise, the nucleus could not exist. Moreover, the force had to be strong enough to squeeze the protons into a volume whose diameter is about 10−15 m, much smaller than that of the entire atom. From the short range of this force, Hideki Yukawa predicted that it was associated with a massive force particle, whose mass is approximately 100 MeV. The 1947 discovery of the pion ushered in the modern era of particle physics. Hundreds of hadrons were discovered from the 1940s to 1960s, and an extremely complicated theory of hadrons as strongly interacting particles was developed. Most notably: The pions were understood to be oscillations of vacuum condensates; Jun John Sakurai proposed the rho and omega vector bosons to be force carrying particles for approximate symmetries of isospin and hypercharge; Geoffrey Chew, Edward K. Burdett and Steven Frautschi grouped the heavier hadrons into families that could be understood as vibrational and rotational excitations of strings. While each of these approaches offered insights, no approach led directly to a fundamental theory. Murray Gell-Mann along with George Zweig first proposed fractionally charged quarks in 1961. Throughout the 1960s, different authors considered theories similar to the modern fundamental theory of quantum chromodynamics (QCD) as simple models for the interactions of quarks. The first to hypothesize the gluons of QCD were Moo-Young Han and Yoichiro Nambu, who introduced the quark color charge. Han and Nambu hypothesized that it might be associated with a force-carrying field. At that time, however, it was difficult to see how such a model could permanently confine quarks. 
Han and Nambu also assigned each quark color an integer electrical charge, so that the quarks were fractionally charged only on average, and they did not expect the quarks in their model to be permanently confined. In 1971, Murray Gell-Mann and Harald Fritzsch proposed that the Han/Nambu color gauge field was the correct theory of the short-distance interactions of fractionally charged quarks. A little later, David Gross, Frank Wilczek, and David Politzer discovered that this theory had the property of asymptotic freedom, allowing them to make contact with experimental evidence. They concluded that QCD was the complete theory of the strong interactions, correct at all distance scales. The discovery of asymptotic freedom led most physicists to accept QCD since it became clear that even the long-distance properties of the strong interactions could be consistent with experiment if the quarks are permanently confined: the strong force increases indefinitely with distance, trapping quarks inside the hadrons. Assuming that quarks are confined, Mikhail Shifman, Arkady Vainshtein and Valentine Zakharov were able to compute the properties of many low-lying hadrons directly from QCD, with only a few extra parameters to describe the vacuum. In 1980, Kenneth G. Wilson published computer calculations based on the first principles of QCD, establishing, to a level of confidence tantamount to certainty, that QCD will confine quarks. Since then, QCD has been the established theory of strong interactions. QCD is a theory of fractionally charged quarks interacting by means of 8 bosonic particles called gluons. The gluons also interact with each other, not just with the quarks, and at long distances the lines of force collimate into strings, loosely modeled by a linear potential, a constant attractive force. In this way, the mathematical theory of QCD not only explains how quarks interact over short distances but also the string-like behavior, discovered by Chew and Frautschi, which they manifest over longer distances. === Higgs interaction === Conventionally, the Higgs interaction is not counted among the four fundamental forces. Nonetheless, although not a gauge interaction nor generated by any diffeomorphism symmetry, the Higgs field's cubic Yukawa coupling produces a weakly attractive fifth interaction. After spontaneous symmetry breaking via the Higgs mechanism, Yukawa terms remain of the form λ i 2 ψ ¯ ϕ ′ ψ = m i ν ψ ¯ ϕ ′ ψ {\displaystyle {\frac {\lambda _{i}}{\sqrt {2}}}{\bar {\psi }}\phi '\psi ={\frac {m_{i}}{\nu }}{\bar {\psi }}\phi '\psi } , with Yukawa coupling λ i {\displaystyle \lambda _{i}} , particle mass m i {\displaystyle m_{i}} (in eV), and Higgs vacuum expectation value 246.22 GeV. Hence coupled particles can exchange a virtual Higgs boson, yielding classical potentials of the form V ( r ) = − m i m j m H 2 1 4 π r e − m H c r / ℏ {\displaystyle V(r)=-{\frac {m_{i}m_{j}}{m_{\rm {H}}^{2}}}{\frac {1}{4\pi r}}e^{-m_{\rm {H}}\,c\,r/\hbar }} , with Higgs mass 125.18 GeV. Because the reduced Compton wavelength of the Higgs boson is so small (1.576×10−18 m, comparable to the W and Z bosons), this potential has an effective range of a few attometers. Between two electrons, it begins roughly 1011 times weaker than the weak interaction, and grows exponentially weaker at non-zero distances. === Beyond the Standard Model === The fundamental forces may become unified into a single force at very high energies and on a minuscule scale, the Planck scale. 
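Returning briefly to the Higgs-mediated potential above, the quoted effective range follows from the reduced Compton wavelength ħ/(m_H c). A quick numerical check, using the Higgs mass given above and the standard value ħc ≈ 197.33 MeV·fm (an assumed input, not stated in the text):

```python
# Effective range of the residual Higgs-mediated interaction: the reduced
# Compton wavelength lambda_bar = hbar / (m_H c) = (hbar c) / (m_H c^2).
hbar_c_MeV_fm = 197.327      # hbar * c in MeV * fm (standard value, assumed)
m_H_MeV = 125.18e3           # Higgs mass quoted above, in MeV

lambda_bar_fm = hbar_c_MeV_fm / m_H_MeV
lambda_bar_m = lambda_bar_fm * 1e-15                           # 1 fm = 1e-15 m
print(f"reduced Compton wavelength ~ {lambda_bar_m:.3e} m")    # ~ 1.58e-18 m

# The Yukawa factor exp(-m_H c r / hbar) = exp(-r / lambda_bar) suppresses the
# potential strongly beyond a few attometres (1 am = 1e-18 m), as stated above.
```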
Particle accelerators cannot produce the enormous energies required to experimentally probe this regime. The weak and electromagnetic forces have already been unified with the electroweak theory of Sheldon Glashow, Abdus Salam, and Steven Weinberg, for which they received the 1979 Nobel Prize in physics. Numerous theoretical efforts have been made to systematize the existing four fundamental interactions on the model of electroweak unification. Grand Unified Theories (GUTs) are proposals to show that each of the three fundamental interactions described by the Standard Model is a different manifestation of a single interaction with symmetries that break down and create separate interactions below some extremely high level of energy. GUTs are also expected to predict some of the relationships between constants of nature that the Standard Model treats as unrelated, as well as gauge coupling unification for the relative strengths of the electromagnetic, weak, and strong forces. A so-called theory of everything, which would integrate GUTs with a quantum gravity theory, faces a greater barrier because no quantum gravity theory (e.g., string theory, loop quantum gravity, and twistor theory) has secured wide acceptance. Some theories look for a graviton to complete the Standard Model list of force-carrying particles, while others, like loop quantum gravity, emphasize the possibility that spacetime itself may have a quantum aspect to it. Some theories beyond the Standard Model include a hypothetical fifth force, and the search for such a force is an ongoing line of experimental physics research. In supersymmetric theories, some particles, known as moduli, acquire their masses only through supersymmetry breaking effects and can mediate new forces. Another reason to look for new forces is the discovery that the expansion of the universe is accelerating (also known as dark energy), creating a need to explain a nonzero cosmological constant and possibly other modifications of general relativity. Fifth forces have also been suggested to explain phenomena such as CP violations, dark matter, and dark flow. == See also == Quintessence, a hypothesized fifth force Gerardus 't Hooft Edward Witten Howard Georgi == References == == Bibliography == Davies, Paul (1986), The Forces of Nature, Cambridge Univ. Press, 2nd ed. Feynman, Richard (1967), The Character of Physical Law, MIT Press, ISBN 978-0-262-56003-0 Schumm, Bruce A. (2004), Deep Down Things, Johns Hopkins University Press. While all interactions are discussed, discussion is especially thorough on the weak. Weinberg, Steven (1993), The First Three Minutes: A Modern View of the Origin of the Universe, Basic Books, ISBN 978-0-465-02437-7 Weinberg, Steven (1994), Dreams of a Final Theory, Basic Books, ISBN 978-0-679-74408-5 Padmanabhan, T. (1998), After The First Three Minutes: The Story of Our Universe, Cambridge Univ. Press, ISBN 978-0-521-62972-0 Perkins, Donald H. (2000), Introduction to High Energy Physics (4th ed.), Cambridge Univ. Press, ISBN 978-0-521-62196-0 Riazuddin (December 29, 2009). "Non-standard interactions" (PDF). NCP 5th Particle Physics Synopsis. 1 (1): 1–25. Archived from the original (PDF) on March 3, 2016. Retrieved March 19, 2011.
Wikipedia/Four_fundamental_forces
In quantum field theory, gauge gravitation theory is the effort to extend Yang–Mills theory, which provides a universal description of the fundamental interactions, to describe gravity. Gauge gravitation theory should not be confused with the similarly named gauge theory gravity, which is a formulation of (classical) gravitation in the language of geometric algebra. Nor should it be confused with Kaluza–Klein theory, where the gauge fields are used to describe particle fields, but not gravity itself. == Overview == The first gauge model of gravity was suggested by Ryoyu Utiyama (1916–1990) in 1956 just two years after birth of the gauge theory itself. However, the initial attempts to construct the gauge theory of gravity by analogy with the gauge models of internal symmetries encountered a problem of treating general covariant transformations and establishing the gauge status of a pseudo-Riemannian metric (a tetrad field). In order to overcome this drawback, representing tetrad fields as gauge fields of the translation group was attempted. Infinitesimal generators of general covariant transformations were considered as those of the translation gauge group, and a tetrad (coframe) field was identified with the translation part of an affine connection on a world manifold X {\displaystyle X} . Any such connection is a sum K = Γ + Θ {\displaystyle K=\Gamma +\Theta } of a linear world connection Γ {\displaystyle \Gamma } and a soldering form Θ = Θ μ a d x μ ⊗ ϑ a {\displaystyle \Theta =\Theta _{\mu }^{a}dx^{\mu }\otimes \vartheta _{a}} where ϑ a = ϑ a λ ∂ λ {\displaystyle \vartheta _{a}=\vartheta _{a}^{\lambda }\partial _{\lambda }} is a non-holonomic frame. For instance, if K {\displaystyle K} is the Cartan connection, then Θ = θ = d x μ ⊗ ∂ μ {\displaystyle \Theta =\theta =dx^{\mu }\otimes \partial _{\mu }} is the canonical soldering form on X {\displaystyle X} . There are different physical interpretations of the translation part Θ {\displaystyle \Theta } of affine connections. In gauge theory of dislocations, a field Θ {\displaystyle \Theta } describes a distortion. At the same time, given a linear frame ϑ a {\displaystyle \vartheta _{a}} , the decomposition θ = ϑ a ⊗ ϑ a {\displaystyle \theta =\vartheta ^{a}\otimes \vartheta _{a}} motivates many authors to treat a coframe ϑ a {\displaystyle \vartheta ^{a}} as a translation gauge field. Difficulties of constructing gauge gravitation theory by analogy with the Yang–Mills one result from the gauge transformations in these theories belonging to different classes. In the case of internal symmetries, the gauge transformations are just vertical automorphisms of a principal bundle P → X {\displaystyle P\to X} leaving its base X {\displaystyle X} fixed. On the other hand, gravitation theory is built on the principal bundle F X {\displaystyle FX} of the tangent frames to X {\displaystyle X} . It belongs to the category of natural bundles T → X {\displaystyle T\to X} for which diffeomorphisms of the base X {\displaystyle X} canonically give rise to automorphisms of T. These automorphisms are called general covariant transformations. General covariant transformations are sufficient in order to restate Einstein's general relativity and metric-affine gravitation theory as the gauge ones. 
In terms of gauge theory on natural bundles, gauge fields are linear connections on a world manifold X {\displaystyle X} , defined as principal connections on the linear frame bundle F X {\displaystyle FX} , and a metric (tetrad) gravitational field plays the role of a Higgs field responsible for spontaneous symmetry breaking of general covariant transformations. Spontaneous symmetry breaking is a quantum effect when the vacuum is not invariant under the transformation group. In classical gauge theory, spontaneous symmetry breaking occurs if the structure group G {\displaystyle G} of a principal bundle P → X {\displaystyle P\to X} is reducible to a closed subgroup H {\displaystyle H} , i.e., there exists a principal subbundle of P {\displaystyle P} with the structure group H {\displaystyle H} . By virtue of the well-known theorem, there exists one-to-one correspondence between the reduced principal subbundles of P {\displaystyle P} with the structure group H {\displaystyle H} and the global sections of the quotient bundle P / H → X. These sections are treated as classical Higgs fields. The idea of the pseudo-Riemannian metric as a Higgs field appeared while constructing non-linear (induced) representations of the general linear group GL(4, R), of which the Lorentz group is a Cartan subgroup. The geometric equivalence principle postulating the existence of a reference frame in which Lorentz invariants are defined on the whole world manifold is the theoretical justification for the reduction of the structure group GL(4, R) of the linear frame bundle FX to the Lorentz group. Then the very definition of a pseudo-Riemannian metric on a manifold X {\displaystyle X} as a global section of the quotient bundle FX / O(1, 3) → X leads to its physical interpretation as a Higgs field. The physical reason for world symmetry breaking is the existence of Dirac fermion matter, whose symmetry group is the universal two-sheeted covering SL(2, C) of the restricted Lorentz group, SO+(1, 3). == See also == == References == == Bibliography == Kirsch, I. (2005). "A Higgs mechanism for gravity". Phys. Rev. D. 72: 024001. arXiv:hep-th/0503024. Sardanashvily, G. (2011). "Classical gauge gravitation theory". Int. J. Geom. Methods Mod. Phys. 8: 1869–1895. arXiv:1110.1176. Obukhov, Yu. (2006). "Poincaré gauge gravity: Selected topics". Int. J. Geom. Methods Mod. Phys. 3: 95–138. arXiv:gr-qc/0601090.
Wikipedia/Gauge_gravitation_theory
In theoretical physics, a scalar–tensor theory is a field theory that includes both a scalar field and a tensor field to represent a certain interaction. For example, the Brans–Dicke theory of gravitation uses both a scalar field and a tensor field to mediate the gravitational interaction. == Tensor fields and field theory == Modern physics tries to derive all physical theories from as few principles as possible. In this way, Newtonian mechanics as well as quantum mechanics are derived from Hamilton's principle of least action. In this approach, the behavior of a system is not described via forces, but by functions which describe the energy of the system. Most important are the energetic quantities known as the Hamiltonian function and the Lagrangian function. The corresponding quantities per unit volume are known as the Hamiltonian density and the Lagrangian density. Working with these densities leads to field theories. Modern physics uses field theories to explain reality. These fields can be scalar, vectorial or tensorial. An example of a scalar field is the temperature field. An example of a vector field is the wind velocity field. An example of a tensor field is the stress tensor field in a stressed body, used in continuum mechanics. == Gravity as field theory == In physics, forces (as vectorial quantities) are given as the derivative (gradient) of scalar quantities named potentials. In classical physics before Einstein, gravitation was given in the same way, as a consequence of a gravitational force (vectorial), given through a scalar potential field dependent on the mass of the particles. Thus, Newtonian gravity is called a scalar theory. The gravitational force is dependent on the distance r of the massive objects from each other (more exactly, their centres of mass). Mass is a parameter and space and time are unchangeable. Einstein's theory of gravity, general relativity (GR), is of another nature. It unifies space and time in a 4-dimensional manifold called space-time. In GR there is no gravitational force; instead, the effects we ascribe to a force are the consequence of the local curvature of space-time. That curvature is defined mathematically by the so-called metric, which is a function of the total energy, including mass, in the area. The derivative of the metric is a function that approximates the classical Newtonian force in most cases. The metric is a tensorial quantity of degree 2 (it can be given as a 4x4 matrix, an object carrying 2 indices). Another possibility to explain gravitation in this context is to use both tensor (of degree n>1) and scalar fields, so that gravitation is given neither solely through a scalar field nor solely through a metric. These are scalar–tensor theories of gravitation. The field-theoretical starting point of General Relativity is its Lagrangian density. It is a scalar and gauge-invariant (see gauge theories) quantity dependent on the curvature scalar R. This Lagrangian, following Hamilton's principle, leads to the field equations of Hilbert and Einstein. If, in the Lagrangian, the curvature (or a quantity related to it) is multiplied by the square of a scalar field, scalar–tensor theories of gravitation are obtained. In them, the gravitational constant of Newton is no longer a real constant but a quantity dependent on the scalar field. 
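As a minimal illustration of the earlier statement that forces are derivatives (gradients) of scalar potentials, the inverse-square law can be recovered symbolically from the Newtonian potential. The sketch below uses sympy purely for illustration; the symbols are generic placeholders.

```python
import sympy as sp

# The Newtonian force per unit mass is (minus) the gradient of the scalar
# potential Phi = -G M / r, and its magnitude falls off as 1/r^2.
G, M, x, y, z = sp.symbols('G M x y z', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
Phi = -G * M / r                                          # scalar potential field

force = [-sp.diff(Phi, coord) for coord in (x, y, z)]     # F/m = -grad(Phi)
magnitude = sp.simplify(sp.sqrt(sum(f**2 for f in force)))
print(magnitude)    # G*M/(x**2 + y**2 + z**2), i.e. G M / r**2
```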
=== Mathematical formulation === An action of such a gravitational scalar–tensor theory can be written as follows: S = 1 c ∫ d 4 x − g 1 2 μ × [ Φ R − ω ( Φ ) Φ ( ∂ σ Φ ) 2 − V ( Φ ) + 2 μ L m ( g μ ν , Ψ ) ] , {\displaystyle S={\frac {1}{c}}\int {d^{4}x{\sqrt {-g}}{\frac {1}{2\mu }}}\times \left[\Phi R-{\frac {\omega (\Phi )}{\Phi }}(\partial _{\sigma }\Phi )^{2}-V(\Phi )+2\mu ~{\mathcal {L}}_{m}(g_{\mu \nu },\Psi )\right],} where g {\displaystyle g} is the metric determinant, R {\displaystyle R} is the Ricci scalar constructed from the metric g μ ν {\displaystyle g_{\mu \nu }} , μ {\displaystyle \mu } is a coupling constant with the dimensions L − 1 M − 1 T 2 {\displaystyle L^{-1}M^{-1}T^{2}} , V ( Φ ) {\displaystyle V(\Phi )} is the scalar-field potential, L m {\displaystyle {\mathcal {L}}_{m}} is the material Lagrangian and Ψ {\displaystyle \Psi } represents the non-gravitational fields. Here, the Brans–Dicke parameter ω {\displaystyle \omega } has been generalized to a function. Although μ {\displaystyle \mu } is often written as being 8 π G / c 4 {\displaystyle 8\pi G/c^{4}} , one has to keep in mind that the fundamental constant G {\displaystyle G} there, is not the constant of gravitation that can be measured with, for instance, Cavendish type experiments. Indeed, the empirical gravitational constant is generally no longer a constant in scalar–tensor theories, but a function of the scalar field Φ {\displaystyle \Phi } . The metric and scalar-field equations respectively write: R μ ν − 1 2 g μ ν R = μ Φ T μ ν + 1 Φ [ ∇ μ ∇ ν − g μ ν ◻ ] Φ + ω ( Φ ) Φ 2 ( ∂ μ Φ ∂ ν Φ − 1 2 g μ ν | ∂ α Φ | 2 ) − g μ ν V ( Φ ) 2 Φ , {\displaystyle R_{\mu \nu }-{\frac {1}{2}}g_{\mu \nu }R={\frac {\mu }{\Phi }}T_{\mu \nu }+{\frac {1}{\Phi }}[\nabla _{\mu }\nabla _{\nu }-g_{\mu \nu }\Box ]\Phi +{\frac {\omega (\Phi )}{\Phi ^{2}}}(\partial _{\mu }\Phi \partial _{\nu }\Phi -{\frac {1}{2}}g_{\mu \nu }\left|\partial _{\alpha }\Phi \right|^{2})-g_{\mu \nu }{\frac {V(\Phi )}{2\Phi }},} and 2 ω ( Φ ) + 3 Φ ◻ Φ = μ Φ T − ω ′ ( Φ ) Φ ( ∂ σ Φ ) 2 + V ′ ( Φ ) − 2 V ( Φ ) Φ . {\displaystyle {\frac {2\omega (\Phi )+3}{\Phi }}\Box \Phi ={\frac {\mu }{\Phi }}T-{\frac {\omega '(\Phi )}{\Phi }}(\partial _{\sigma }\Phi )^{2}+V'(\Phi )-2{\frac {V(\Phi )}{\Phi }}.} Also, the theory satisfies the following conservation equation, implying that test-particles follow space-time geodesics such as in general relativity: ∇ σ T μ σ = 0 , {\displaystyle \nabla _{\sigma }T^{\mu \sigma }=0,} where T μ σ {\displaystyle T^{\mu \sigma }} is the stress-energy tensor defined as T μ ν = − 2 − g δ ( − g L m ) δ g μ ν . {\displaystyle T_{\mu \nu }=-{\frac {2}{\sqrt {-g}}}{\frac {\delta ({\sqrt {-g}}{\mathcal {L}}_{m})}{\delta g^{\mu \nu }}}.} ==== The Newtonian approximation of the theory ==== Developing perturbatively the theory defined by the previous action around a Minkowskian background, and assuming non-relativistic gravitational sources, the first order gives the Newtonian approximation of the theory. 
In this approximation, and for a theory without potential, the metric writes g 00 = − 1 + 2 U c 2 + O ( c − 3 ) , g 0 i = O ( c − 2 ) , g i j = δ i j + O ( c − 1 ) , {\displaystyle g_{00}=-1+2{\frac {U}{c^{2}}}+{\mathcal {O}}(c^{-3}),~g_{0i}={\mathcal {O}}(c^{-2}),~g_{ij}=\delta _{ij}+{\mathcal {O}}(c^{-1}),} with U {\displaystyle U} satisfying the following usual Poisson equation at the lowest order of the approximation: △ U = 8 π G e f f ρ + O ( c − 1 ) , {\displaystyle \triangle U=8\pi G_{\mathrm {eff} }~\rho +{\mathcal {O}}(c^{-1}),} where ρ {\displaystyle \rho } is the density of the gravitational source and G e f f = 2 ω 0 + 4 2 ω 0 + 3 G Φ 0 {\displaystyle G_{\mathrm {eff} }={\frac {2\omega _{0}+4}{2\omega _{0}+3}}{\frac {G}{\Phi _{0}}}} (the subscript 0 {\displaystyle _{0}} indicates that the corresponding value is taken at present cosmological time and location). Therefore, the empirical gravitational constant is a function of the present value of the scalar-field background Φ 0 {\displaystyle \Phi _{0}} and therefore theoretically depends on time and location. However, no deviation from the constancy of the Newtonian gravitational constant has been measured, implying that the scalar-field background Φ 0 {\displaystyle \Phi _{0}} is pretty stable over time. Such a stability is not theoretically generally expected but can be theoretically explained by several mechanisms. ==== The first post-Newtonian approximation of the theory ==== Developing the theory at the next level leads to the so-called first post-Newtonian order. For a theory without potential and in a system of coordinates respecting the weak isotropy condition (i.e., g i j ∝ δ i j + O ( c − 3 ) {\displaystyle g_{ij}\propto \delta _{ij}+{\mathcal {O}}(c^{-3})\,} ), the metric takes the following form: g 00 = − 1 + 2 W c 2 − β 2 W 2 c 4 + O ( c − 5 ) {\displaystyle g_{00}=-1+{\frac {2W}{c^{2}}}-\beta {\frac {2W^{2}}{c^{4}}}+{\mathcal {O}}(c^{-5})} g 0 i = − ( γ + 1 ) 2 W i c 3 + O ( c − 4 ) {\displaystyle g_{0i}=-(\gamma +1){\frac {2W_{i}}{c^{3}}}+{\mathcal {O}}(c^{-4})} g i j = δ i j ( 1 + γ 2 W c 2 ) + O ( c − 3 ) {\displaystyle g_{ij}=\delta _{ij}\left(1+\gamma {\frac {2W}{c^{2}}}\right)+{\mathcal {O}}(c^{-3})} with ◻ W + 1 + 2 β − 3 γ c 2 W △ W + 2 c 2 ( 1 + γ ) ∂ t J = − 4 π G e f f Σ + O ( c − 3 ) , {\displaystyle \Box W+{\frac {1+2\beta -3\gamma }{c^{2}}}W\triangle W+{\frac {2}{c^{2}}}(1+\gamma )\partial _{t}J=-4\pi G_{\mathrm {eff} }\Sigma +{\mathcal {O}}(c^{-3})~,} △ W i − ∂ x i J = − 4 π G e f f Σ i + O ( c − 1 ) , {\displaystyle \triangle W_{i}-\partial x_{i}J=-4\pi G_{\mathrm {eff} }\Sigma ^{i}+{\mathcal {O}}(c^{-1})~,} where J {\displaystyle J} is a function depending on the coordinate gauge J = ∂ t W + ∂ k W k + O ( c − 1 ) . {\displaystyle J=\partial _{t}W+\partial _{k}W_{k}+{\mathcal {O}}(c^{-1})~.} It corresponds to the remaining diffeomorphism degree of freedom that is not fixed by the weak isotropy condition. 
The sources are defined as Σ = 1 c 2 ( T 00 + γ T k k ) , Σ i = 1 c T 0 i , {\displaystyle \Sigma ={\frac {1}{c^{2}}}(T^{00}+\gamma T^{kk})~,\qquad \Sigma ^{i}={\frac {1}{c}}T^{0i}~,} the so-called post-Newtonian parameters are γ = ω 0 + 1 ω 0 + 2 , {\displaystyle \gamma ={\frac {\omega _{0}+1}{\omega _{0}+2}}~,} β = 1 + ω 0 ′ ( 2 ω 0 + 3 ) ( 2 ω 0 + 4 ) 2 , {\displaystyle \beta =1+{\frac {\omega _{0}^{\prime }}{(2\omega _{0}+3)(2\omega _{0}+4)^{2}}}~,} and finally the empirical gravitational constant G e f f {\displaystyle G_{\mathrm {eff} }} is given by G e f f = 2 ω 0 + 4 2 ω 0 + 3 G , {\displaystyle G_{\mathrm {eff} }={\frac {2\omega _{0}+4}{~2\omega _{0}+3~}}\,G~,} where G {\displaystyle G} is the (true) constant that appears in the coupling constant μ {\displaystyle \mu } defined previously. == Observational constraints on the theory == Current observations indicate that γ − 1 = ( 2.1 ± 2.3 ) × 10 − 5 {\displaystyle \gamma -1=(2.1\pm 2.3)\times 10^{-5}} , which means that ω 0 > 40000 {\displaystyle \omega _{0}>40000} . Although explaining such a value in the context of the original Brans–Dicke theory is impossible, Damour and Nordtvedt found that the field equations of the general theory often lead to an evolution of the function ω {\displaystyle \omega } toward infinity during the evolution of the universe. Hence, according to them, the current high value of the function ω {\displaystyle \omega } could be a simple consequence of the evolution of the universe. Seven years of data from the NASA MESSENGER mission constrain the post-Newtonian parameter β {\displaystyle \beta } for Mercury's perihelion shift to | β − 1 | < 1.6 × 10 − 5 {\displaystyle |\beta -1|<1.6\times 10^{-5}} . Both constraints show that while the theory is still a potential candidate to replace general relativity, the scalar field must be very weakly coupled in order to explain current observations. Generalized scalar-tensor theories have also been proposed as an explanation for the accelerated expansion of the universe, but the measurement of the speed of gravity with the gravitational wave event GW170817 has ruled this out. == Higher-dimensional relativity and scalar–tensor theories == After the postulation of the General Relativity of Einstein and Hilbert, Theodor Kaluza and Oskar Klein proposed in 1917 a generalization in a 5-dimensional manifold: Kaluza–Klein theory. This theory possesses a 5-dimensional metric (with a compactified and constant 5th metric component, dependent on the gauge potential) and unifies gravitation and electromagnetism, i.e. there is a geometrization of electrodynamics. This theory was modified in 1955 by P. Jordan in his Projective Relativity theory, in which, following group-theoretical reasonings, Jordan took a functional 5th metric component that led to a variable gravitational constant G. In his original work, he introduced coupling parameters of the scalar field, to change energy conservation as well, according to the ideas of Dirac. Owing to conformal equivalence, multidimensional theories of gravity are conformally equivalent to theories of usual General Relativity in 4 dimensions with an additional scalar field. One case of this is given by Jordan's theory, which, without breaking energy conservation (as should hold, given that the microwave background radiation is that of a black body), is equivalent to the theory of C. Brans and Robert H. Dicke of 1961, so that one usually speaks of the Brans–Dicke theory. 
The Brans–Dicke theory follows the idea of modifying the Hilbert-Einstein theory to be compatible with Mach's principle. For this, Newton's gravitational constant had to be variable, dependent on the mass distribution in the universe, as a function of a scalar variable, coupled as a field in the Lagrangian. It uses a scalar field of infinite length scale (i.e. long-ranged), so, in the language of Yukawa's theory of nuclear physics, this scalar field is a massless field. This theory becomes Einsteinian for high values of the parameter of the scalar field. In 1979, R. Wagoner proposed a generalization of scalar–tensor theories using more than one scalar field coupled to the scalar curvature. JBD theories, although not changing the geodesic equation for test particles, change the motion of composite bodies to a more complex one. The coupling of a universal scalar field directly to the gravitational field gives rise to potentially observable effects for the motion of matter configurations to which gravitational energy contributes significantly. This is known as the "Dicke–Nordtvedt" effect, which leads to possible violations of the Strong as well as the Weak Equivalence Principle for extended masses. JBD-type theories with short-ranged scalar fields use, according to Yukawa's theory, massive scalar fields. The first of these theories was proposed by A. Zee in 1979. He proposed a Broken-Symmetric Theory of Gravitation, combining the idea of Brans and Dicke with that of Symmetry Breakdown, which is essential within the Standard Model SM of elementary particles, where the so-called Symmetry Breakdown leads to mass generation (as a consequence of particles interacting with the Higgs field). Zee proposed the Higgs field of the SM as the scalar field, so that the Higgs field generates the gravitational constant. The interaction of the Higgs field with the particles that achieve mass through it is short-ranged (i.e. of Yukawa-type) and gravitational-like (one can get a Poisson equation from it), even within the SM, so that Zee's idea was taken up in 1992 for a scalar–tensor theory with the Higgs field as the scalar field, with its Higgs mechanism. There, the massive scalar field couples to the masses, which are at the same time the source of the scalar Higgs field, which generates the mass of the elementary particles through Symmetry Breakdown. For a vanishing scalar field, these theories usually reduce to standard General Relativity, and because of the nature of the massive field, it is possible in such theories that the parameter of the scalar field (the coupling constant) does not have to be as large as in standard JBD theories. However, it is not yet clear which of these models better explains the phenomenology found in nature, nor whether such scalar fields really exist or are necessary in nature. Nevertheless, JBD theories are used to explain inflation after the Big Bang (for massless scalar fields one then speaks of the inflaton field) as well as quintessence. Further, they are an option for explaining dynamics usually attributed to the standard cold dark matter models, as well as to MOND, axions (also from the breaking of a symmetry), MACHOs, and so on. == Connection to string theory == A generic prediction of all string theory models is that the spin-2 graviton has a spin-0 partner called the dilaton. Hence, string theory predicts that the actual theory of gravity is a scalar–tensor theory rather than general relativity. 
However, the precise form of such a theory is not currently known because one does not have the mathematical tools needed to address the corresponding non-perturbative calculations. Besides, the precise effective 4-dimensional form of the theory is also confronted with the so-called landscape issue. == See also == Degenerate Higher-Order Scalar-Tensor theories – Theory of gravity Dilaton – Hypothetical particle Chameleon particle – Hypothetical scalar particle that couples to matter more weakly than gravity Pressuron – Hypothetical gravitational particle Horndeski's theory – Generalized theory of gravity == References == P. Jordan, Schwerkraft und Weltall, Vieweg (Braunschweig) 1955: Projective Relativity. First paper on JBD theories. C.H. Brans and R.H. Dicke, Phys. Rev. 124: 925, 1961: Brans–Dicke theory starting from Mach's principle. R. Wagoner, Phys. Rev. D1(12): 3209, 1970: JBD theories with more than one scalar field. A. Zee, Phys. Rev. Lett. 42(7): 417, 1979: Broken-Symmetric scalar-tensor theory. H. Dehnen and H. Frommert, Int. J. Theor. Phys. 30(7): 985, 1991: Gravitative-like and short-ranged interaction of Higgs fields within the Standard Model of elementary particles. H. Dehnen et al., Int. J. Theor. Phys. 31(1): 109, 1992: Scalar-tensor-theory with Higgs field. C.H. Brans, June 2005: Roots of scalar-tensor theories. arXiv:gr-qc/0506063. Discusses the history of attempts to construct gravity theories with a scalar field and the relation to the equivalence principle and Mach's principle. P. G. Bergmann (1968). "Comments on the scalar-tensor theory". Int. J. Theor. Phys. 1 (1): 25–36. Bibcode:1968IJTP....1...25B. doi:10.1007/BF00668828. S2CID 119985328. R. V. Wagoner (1970). "Scalar-tensor theory and gravitational waves". Phys. Rev. D1 (12): 3209–3216. Bibcode:1970PhRvD...1.3209W. doi:10.1103/physrevd.1.3209.
Wikipedia/Scalar–tensor_theory
The Dvali–Gabadadze–Porrati (DGP) model is a model of gravity proposed by Gia Dvali, Gregory Gabadadze, and Massimo Porrati in 2000. The model is popular among some model builders, but has resisted being embedded into string theory. == Overview == The DGP model assumes the existence of a 4+1-dimensional Minkowski space, within which ordinary 3+1-dimensional Minkowski space is embedded. The model assumes an action consisting of two terms: One term is the usual Einstein–Hilbert action, which involves only the 4-D spacetime dimensions. The other term is the equivalent of the Einstein–Hilbert action, as extended to all 5 dimensions. The 4-D term dominates at short distances, and the 5-D term dominates at long distances. The model was proposed in part in order to reproduce the cosmic acceleration of dark energy without any need for a small but non-zero vacuum energy density. But critics argue that this branch of the theory (the self-accelerating branch) is unstable. However, the theory remains interesting because of Dvali's claim that the unusual structure of the graviton propagator makes non-perturbative effects important in a seemingly linear regime, such as the Solar System. Because there is no four-dimensional, linearized effective theory that reproduces the DGP model for weak-field gravity, the theory avoids the vDVZ discontinuity that otherwise plagues attempts to write down a theory of massive gravity. In 2008, Fang et al. argued that recent cosmological observations (including measurements of baryon acoustic oscillations by the Sloan Digital Sky Survey, and measurements of the cosmic microwave background and Type Ia supernovae) are in direct conflict with the DGP cosmology unless a cosmological constant or some other form of dark energy is added. However, this negates the appeal of the DGP cosmology, which accelerates without needing to add dark energy. == See also == Kaluza–Klein theory Randall–Sundrum model Large extra dimensions == References ==
Wikipedia/DGP_model
In physics, the Brans–Dicke theory of gravitation (sometimes called the Jordan–Brans–Dicke theory) is a competitor to Einstein's general theory of relativity. It is an example of a scalar–tensor theory, a gravitational theory in which the gravitational interaction is mediated by a scalar field as well as the tensor field of general relativity. The gravitational constant G {\displaystyle G} is not presumed to be constant but instead 1 / G {\displaystyle 1/G} is replaced by a scalar field ϕ {\displaystyle \phi } which can vary from place to place and with time. The theory was developed in 1961 by Robert H. Dicke and Carl H. Brans building upon, among others, the earlier 1959 work of Pascual Jordan. At present, both Brans–Dicke theory and general relativity are generally held to be in agreement with observation. Brans–Dicke theory represents a minority viewpoint in physics. == Comparison with general relativity == Both Brans–Dicke theory and general relativity are examples of a class of relativistic classical field theories of gravitation, called metric theories. In these theories, spacetime is equipped with a metric tensor, g a b {\displaystyle g_{ab}} , and the gravitational field is represented (in whole or in part) by the Riemann curvature tensor R a b c d {\displaystyle R_{abcd}} , which is determined by the metric tensor. All metric theories satisfy the Einstein equivalence principle, which in modern geometric language states that in a very small region (too small to exhibit measurable curvature effects), all the laws of physics known in special relativity are valid in local Lorentz frames. This implies in turn that metric theories all exhibit the gravitational redshift effect. As in general relativity, the source of the gravitational field is considered to be the stress–energy tensor or matter tensor. However, the way in which the immediate presence of mass-energy in some region affects the gravitational field in that region differs from general relativity. So does the way in which spacetime curvature affects the motion of matter. In the Brans–Dicke theory, in addition to the metric, which is a rank two tensor field, there is a scalar field, ϕ {\displaystyle \phi } , which has the physical effect of changing the effective gravitational constant from place to place. (This feature was actually a key desideratum of Dicke and Brans; see the paper by Brans cited below, which sketches the origins of the theory.) The field equations of Brans–Dicke theory contain a parameter, ω {\displaystyle \omega } , called the Brans–Dicke coupling constant. This is a true dimensionless constant which must be chosen once and for all. However, it can be chosen to fit observations. Such parameters are often called tunable parameters. In addition, the present ambient value of the effective gravitational constant must be chosen as a boundary condition. General relativity contains no dimensionless parameters whatsoever, and therefore is easier to falsify (show whether false) than Brans–Dicke theory. Theories with tunable parameters are sometimes deprecated on the principle that, of two theories which both agree with observation, the more parsimonious is preferable. On the other hand, it seems as though they are a necessary feature of some theories, such as the weak mixing angle of the Standard Model. Brans–Dicke theory is "less stringent" than general relativity in another sense: it admits more solutions. 
In particular, exact vacuum solutions to the Einstein field equation of general relativity, augmented by the trivial scalar field ϕ = 1 {\displaystyle \phi =1} , become exact vacuum solutions in Brans–Dicke theory, but some spacetimes which are not vacuum solutions to the Einstein field equation become, with the appropriate choice of scalar field, vacuum solutions of Brans–Dicke theory. Similarly, an important class of spacetimes, the pp-wave metrics, are also exact null dust solutions of both general relativity and Brans–Dicke theory, but here too, Brans–Dicke theory allows additional wave solutions having geometries which are incompatible with general relativity. Like general relativity, Brans–Dicke theory predicts light deflection and the precession of perihelia of planets orbiting the Sun. However, the precise formulas which govern these effects, according to Brans–Dicke theory, depend upon the value of the coupling constant ω {\displaystyle \omega } . This means that it is possible to set an observational lower bound on the possible value of ω {\displaystyle \omega } from observations of the Solar System and other gravitational systems. The value of ω {\displaystyle \omega } consistent with experiment has risen with time. In 1973 ω > 5 {\displaystyle \omega >5} was consistent with known data. By 1981 ω > 30 {\displaystyle \omega >30} was consistent with known data. In 2003 evidence – derived from the Cassini–Huygens experiment – showed that the value of ω {\displaystyle \omega } must exceed 40,000. It is also often taught that general relativity is obtained from the Brans–Dicke theory in the limit ω → ∞ {\displaystyle \omega \rightarrow \infty } . But Faraoni claims that this breaks down when the trace of the stress–energy tensor vanishes, i.e. T μ μ = 0 {\displaystyle T_{\mu }^{\mu }=0} , an example of which is the Campanelli–Lousto wormhole solution. Some have argued that only general relativity satisfies the strong equivalence principle. 
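The quoted lower bounds on ω can be connected to Solar System measurements through the parametrized post-Newtonian (PPN) framework. The following textbook relation for Brans–Dicke theory with a massless (or negligible) scalar potential is added here as an illustrative aside rather than being part of the text above:

\gamma_{\mathrm{PPN}} = \frac{1+\omega}{2+\omega}, \qquad |\gamma_{\mathrm{PPN}} - 1| = \frac{1}{2+\omega}.

The Cassini–Huygens radio tracking experiment constrained |\gamma_{\mathrm{PPN}} - 1| to a few parts in 10^{5}; requiring 1/(2+\omega) \lesssim 2.5\times 10^{-5} gives \omega \gtrsim 4\times 10^{4}, consistent with the figure of 40,000 quoted above.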
== The field equations == The field equations of the Brans–Dicke theory are G a b = 8 π ϕ T a b + ω ϕ 2 ( ∂ a ϕ ∂ b ϕ − 1 2 g a b ∂ c ϕ ∂ c ϕ ) + 1 ϕ ( ∇ a ∇ b ϕ − g a b ◻ ϕ ) − g a b V ( ϕ ) 2 ϕ , {\displaystyle G_{ab}={\frac {8\pi }{\phi }}T_{ab}+{\frac {\omega }{\phi ^{2}}}\left(\partial _{a}\phi \partial _{b}\phi -{\frac {1}{2}}g_{ab}\partial _{c}\phi \partial ^{c}\phi \right)+{\frac {1}{\phi }}(\nabla _{a}\nabla _{b}\phi -g_{ab}\Box \phi )-g_{ab}{\frac {V(\phi )}{2\phi }},} ◻ ϕ = 8 π 3 + 2 ω T + 2 V ( ϕ ) − ϕ V ′ ( ϕ ) 3 + 2 ω {\displaystyle \Box \phi ={\frac {8\pi }{3+2\omega }}T+{\frac {2V(\phi )-\phi V'(\phi )}{3+2\omega }}} where ω {\displaystyle \omega } is the dimensionless Dicke coupling constant; g a b {\displaystyle g_{ab}} is the metric tensor; G a b = R a b − 1 2 R g a b {\displaystyle G_{ab}=R_{ab}-{\tfrac {1}{2}}Rg_{ab}} is the Einstein tensor, a kind of average curvature; R a b = R m a m b {\displaystyle R_{ab}=R^{m}{}_{amb}} is the Ricci tensor, a kind of trace of the curvature tensor; R = R m m {\displaystyle R=R^{m}{}_{m}} is the Ricci scalar, the trace of the Ricci tensor; T a b {\displaystyle T_{ab}} is the stress–energy tensor; T = T a a {\displaystyle T=T_{a}^{a}} is the trace of the stress–energy tensor; ϕ {\displaystyle \phi } is the scalar field; V ( ϕ ) {\displaystyle V(\phi )} is the scalar potential; V ′ ( ϕ ) {\displaystyle V'(\phi )} is the derivative of the scalar potential with respect to ϕ {\displaystyle \phi } ; ◻ {\displaystyle \Box } is the Laplace–Beltrami operator or covariant wave operator, ◻ ϕ = ϕ ; a ; a {\displaystyle \Box \phi =\phi ^{;a}{}_{;a}} . The first equation describes how the stress–energy tensor and scalar field ϕ {\displaystyle \phi } together affect spacetime curvature. The left-hand side, the Einstein tensor, can be thought of as a kind of average curvature. It is a matter of pure mathematics that, in any metric theory, the Riemann tensor can always be written as the sum of the Weyl curvature (or conformal curvature tensor) and a piece constructed from the Einstein tensor. The second equation says that the trace of the stress–energy tensor acts as the source for the scalar field ϕ {\displaystyle \phi } . Since electromagnetic fields contribute only a traceless term to the stress–energy tensor, this implies that in a region of spacetime containing only an electromagnetic field (plus the gravitational field), the right-hand side vanishes, and ϕ {\displaystyle \phi } obeys the (curved spacetime) wave equation. Therefore, changes in ϕ {\displaystyle \phi } propagate through electrovacuum regions; in this sense, we say that ϕ {\displaystyle \phi } is a long-range field. For comparison, the field equation of general relativity is simply G a b = 8 π T a b . {\displaystyle G_{ab}=8\pi T_{ab}.} This means that in general relativity, the Einstein curvature at some event is entirely determined by the stress–energy tensor at that event; the other piece, the Weyl curvature, is the part of the gravitational field which can propagate as a gravitational wave across a vacuum region. But in the Brans–Dicke theory, the Einstein tensor is determined partly by the immediate presence of mass–energy and momentum, and partly by the long-range scalar field ϕ {\displaystyle \phi } . The vacuum field equations of both theories are obtained when the stress–energy tensor vanishes. This models situations in which no non-gravitational fields are present. 
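A quick consistency check that is not spelled out above: if the scalar field is frozen at the constant value ϕ = 1/G and the potential V is set to zero, every derivative of ϕ in the first field equation vanishes and the equation reduces to the Einstein field equation

G_{ab} = 8\pi G\, T_{ab},

while the second field equation collapses to 0 = 8\pi T/(3+2\omega), which can hold for generic matter only if T = 0 or ω → ∞. This is one way of seeing why general relativity is usually said to be recovered from Brans–Dicke theory only in the large-ω limit, as discussed in the previous section.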
== The action principle == The following Lagrangian contains the complete description of the Brans–Dicke theory: S = 1 16 π ∫ d 4 x − g ( ϕ R − ω ϕ ∂ a ϕ ∂ a ϕ ) + ∫ d 4 x − g L M , {\displaystyle S={\frac {1}{16\pi }}\int d^{4}x{\sqrt {-g}}\left(\phi R-{\frac {\omega }{\phi }}\partial _{a}\phi \partial ^{a}\phi \right)+\int d^{4}x{\sqrt {-g}}\,{\mathcal {L}}_{\mathrm {M} },} where g {\displaystyle g} is the determinant of the metric, − g d 4 x {\displaystyle {\sqrt {-g}}\,d^{4}x} is the four-dimensional volume form, and L M {\displaystyle {\mathcal {L}}_{\mathrm {M} }} is the matter term, or matter Lagrangian density. The matter term includes the contribution of ordinary matter (e.g. gaseous matter) and also electromagnetic fields. In a vacuum region, the matter term vanishes identically; the remaining term is the gravitational term. To obtain the vacuum field equations, we must vary the gravitational term in the Lagrangian with respect to the metric g a b {\displaystyle g_{ab}} ; this gives the first field equation above. When we vary with respect to the scalar field ϕ {\displaystyle \phi } , we obtain the second field equation. Note that, unlike for the General Relativity field equations, the δ R a b / δ g c d {\displaystyle \delta R_{ab}/\delta g_{cd}} term does not vanish, as the result is not a total derivative. It can be shown that δ ( ϕ R ) δ g a b = ϕ R a b + g a b g c d ϕ ; c ; d − ϕ ; a ; b . {\displaystyle {\frac {\delta (\phi R)}{\delta g^{ab}}}=\phi R_{ab}+g_{ab}g^{cd}\phi _{;c;d}-\phi _{;a;b}.} To prove this result, use δ ( ϕ R ) = R δ ϕ + ϕ R m n δ g m n + ϕ ∇ s ( g m n δ Γ n m s − g m s δ Γ r m r ) . {\displaystyle \delta (\phi R)=R\delta \phi +\phi R_{mn}\delta g^{mn}+\phi \nabla _{s}(g^{mn}\delta \Gamma _{nm}^{s}-g^{ms}\delta \Gamma _{rm}^{r}).} By evaluating the δ Γ {\displaystyle \delta \Gamma } s in Riemann normal coordinates, 6 individual terms vanish. 6 further terms combine when manipulated using Stokes' theorem to provide the desired ( g a b g c d ϕ ; c ; d − ϕ ; a ; b ) δ g a b {\displaystyle (g_{ab}g^{cd}\phi _{;c;d}-\phi _{;a;b})\delta g^{ab}} . For comparison, the Lagrangian defining general relativity is S = ∫ d 4 x − g ( R 16 π G + L M ) . {\displaystyle S=\int d^{4}x{\sqrt {-g}}\,\left({\frac {R}{16\pi G}}+{\mathcal {L}}_{\mathrm {M} }\right).} Varying the gravitational term with respect to g a b {\displaystyle g_{ab}} gives the vacuum Einstein field equation. In both theories, the full field equations can be obtained by variations of the full Lagrangian. == See also == Classical theories of gravitation Dilaton General relativity Mach's principle Scientific importance of GW170817 == Notes == == References == Bergmann, Peter G. (May 1968). "Comments on the Scalar-Tensor Theory". Int. J. Theor. Phys. 1 (1): 25–36. Bibcode:1968IJTP....1...25B. doi:10.1007/BF00668828. ISSN 0020-7748. S2CID 119985328. Wagoner, Robert V. (June 1970). "Scalar-Tensor Theory and Gravitational Waves". Phys. Rev. D. 1 (12). American Physical Society: 3209–3216. Bibcode:1970PhRvD...1.3209W. doi:10.1103/PhysRevD.1.3209. Misner, Charles W.; Thorne, Kip S.; Wheeler, John Archibald (1973). Gravitation. San Francisco: W. H. Freeman. ISBN 0-7167-0344-0. See Box 39.1. Will, Clifford M. (1986). "Chapter 8: The Rise and Fall of the Brans–Dicke Theory". Was Einstein Right?: Putting General Relativity to the Test. NY: Basic Books. ISBN 0-19-282203-9. Faraoni, Valerio (2004). Cosmology in Scalar-Tensor Gravity. Dordrecht, The Netherlands: Kluwer Academic. ISBN 1-4020-1988-2. 
== External links == Scholarpedia article on the subject by Carl H. Brans Brans, Carl H. (2005). "The roots of scalar-tensor theory: an approximate history". arXiv:gr-qc/0506063.
Wikipedia/Brans–Dicke_theory
In theoretical physics, Whitehead's theory of gravitation was introduced by the mathematician and philosopher Alfred North Whitehead in 1922. While never broadly accepted, at one time it was a scientifically plausible alternative to general relativity. However, after further experimental and theoretical consideration, the theory is now generally regarded as obsolete. == Principal features == Whitehead developed his theory of gravitation by considering how the world line of a particle is affected by those of nearby particles. He arrived at an expression for what he called the "potential impetus" of one particle due to another, which modified Newton's law of universal gravitation by including a time delay for the propagation of gravitational influences. Whitehead's formula for the potential impetus involves the Minkowski metric, which is used to determine which events are causally related and to calculate how gravitational influences are delayed by distance. The potential impetus calculated by means of the Minkowski metric is then used to compute a physical spacetime metric g μ ν {\displaystyle g_{\mu \nu }} , and the motion of a test particle is given by a geodesic with respect to the metric g μ ν {\displaystyle g_{\mu \nu }} . Unlike the Einstein field equations, Whitehead's theory is linear, in that the superposition of two solutions is again a solution. This implies that Einstein's and Whitehead's theories will generally make different predictions when more than two massive bodies are involved. Following the notation of Chiang and Hamity, introduce a Minkowski spacetime with metric tensor η a b = d i a g ( 1 , − 1 , − 1 , − 1 ) {\displaystyle \eta _{ab}=\mathrm {diag} (1,-1,-1,-1)} , where the indices a , b {\displaystyle a,b} run from 0 through 3, and let the masses of a set of gravitating particles be m a {\displaystyle m_{a}} . The Minkowski arc length of particle A {\displaystyle A} is denoted by τ A {\displaystyle \tau _{A}} . Consider an event p {\displaystyle p} with co-ordinates χ a {\displaystyle \chi ^{a}} . A retarded event p A {\displaystyle p_{A}} with co-ordinates χ A a {\displaystyle \chi _{A}^{a}} on the world-line of particle A {\displaystyle A} is defined by the relations ( y A a = χ a − χ A a , y A a y A a = 0 , y A 0 > 0 ) {\displaystyle (y_{A}^{a}=\chi ^{a}-\chi _{A}^{a},y_{A}^{a}y_{Aa}=0,y_{A}^{0}>0)} . The unit tangent vector at p A {\displaystyle p_{A}} is λ A a = ( d x A a / d τ A ) p A {\displaystyle \lambda _{A}^{a}=(dx_{A}^{a}/d\tau _{A})p_{A}} . We also need the invariants w A = y A a λ A a {\displaystyle w_{A}=y_{A}^{a}\lambda _{Aa}} . Then, a gravitational tensor potential is defined by g a b = η a b − h a b , {\displaystyle g_{ab}=\eta _{ab}-h_{ab},} where h a b = 2 ∑ A m A w A 3 y A a y A b . {\displaystyle h_{ab}=2\sum _{A}{\frac {m_{A}}{w_{A}^{3}}}y_{Aa}y_{Ab}.} It is the metric g {\displaystyle g} that appears in the geodesic equation. == Experimental tests == Whitehead's theory reproduces the Schwarzschild metric for a single static point mass and makes the same predictions as general relativity regarding the four classical solar system tests (gravitational red shift, light bending, perihelion shift, Shapiro time delay), and was regarded as a viable competitor of general relativity for several decades. In 1971, Will argued that Whitehead's theory predicts a periodic variation in local gravitational acceleration 200 times larger than the bound established by experiment. 
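As a worked illustration of the potential defined above (not given in the source, and using geometric units with G = c = 1): for a single particle of mass m at rest at the spatial origin, the retarded condition gives y^{a} = (r, \vec{x}) with r = |\vec{x}|, the unit tangent vector is λ^{a} = (1,0,0,0), and the invariant is w = r. The formula for h_{ab} then yields

h_{00} = \frac{2m}{w^{3}}\,y_{0}y_{0} = \frac{2m}{r^{3}}\,r^{2} = \frac{2m}{r}, \qquad g_{00} = 1 - \frac{2m}{r},

which is the familiar weak-field form of the Schwarzschild metric. This is consistent with the statement above that the theory passes the classical solar-system tests for a single static mass; the experimentally problematic predictions discussed next arise when additional masses, such as the Galaxy, are included.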
Misner, Thorne and Wheeler's textbook Gravitation states that Will demonstrated "Whitehead's theory predicts a time-dependence for the ebb and flow of ocean tides that is completely contradicted by everyday experience".: 1067  Fowler argued that different tidal predictions can be obtained by a more realistic model of the galaxy. Reinhardt and Rosenblum claimed that the disproof of Whitehead's theory by tidal effects was "unsubstantiated". Chiang and Hamity argued that Reinhardt and Rosenblum's approach "does not provide a unique space-time geometry for a general gravitation system", and they confirmed Will's calculations by a different method. In 1989, a modification of Whitehead's theory was proposed that eliminated the unobserved sidereal tide effects. However, the modified theory did not allow the existence of black holes. Subrahmanyan Chandrasekhar wrote, "Whitehead's philosophical acumen has not served him well in his criticisms of Einstein." == Philosophical disputes == Clifford M. Will argued that Whitehead's theory features a prior geometry. Under Will's presentation (which was inspired by John Lighton Synge's interpretation of the theory), Whitehead's theory has the curious feature that electromagnetic waves propagate along null geodesics of the physical spacetime (as defined by the metric determined from geometrical measurements and timing experiments), while gravitational waves propagate along null geodesics of a flat background represented by the metric tensor of Minkowski spacetime. The gravitational potential can be expressed entirely in terms of waves retarded along the background metric, like the Liénard–Wiechert potential in electromagnetic theory. A cosmological constant can be introduced by changing the background metric to a de Sitter or anti-de Sitter metric. This was first suggested by G. Temple in 1923. Temple's suggestions on how to do this were criticized by C. B. Rayner in 1955. Will's work was disputed by Dean R. Fowler, who argued that Will's presentation of Whitehead's theory contradicts Whitehead's philosophy of nature. For Whitehead, the geometric structure of nature grows out of the relations among what he termed "actual occasions". Fowler claimed that a philosophically consistent interpretation of Whitehead's theory makes it an alternate, mathematically equivalent, presentation of general relativity. In turn, Jonathan Bain argued that Fowler's criticism of Will was in error. == See also == Classical theories of gravitation Eddington–Finkelstein coordinates == References == == Further reading == Will, Clifford M. (1993). Was Einstein Right?: Putting General Relativity to the Test (2nd ed.). Basic Books. ISBN 978-0-465-09086-0.
Wikipedia/Whitehead's_theory_of_gravitation
From Eternity to Here: The Quest for the Ultimate Theory of Time is a nonfiction book by American theoretical physicist Sean M. Carroll, published on January 7, 2010, by Dutton. == Background == In the book, Carroll explores the nature of the arrow of time, that goes forward from the past to the future, and posits that the arrow owes its existence to conditions before the Big Bang. However, reasoning about what was there before the Big Bang has traditionally been dismissed as meaningless, for space and time are considered to be created exactly at the Big Bang. Carroll argues that "understanding the arrow of time is a matter of understanding the origin of the universe" and in his explanations relies on the second law of thermodynamics, which states that all systems in the Universe tend to become more and more disorganized (increase in entropy). His proposed explanation for the arrow of time is based on ideas that go back to Ludwig Boltzmann, an Austrian physicist of the 1870s. == Book organization == The book is divided into four parts and 15 chapters and has an appendix for the relevant math. Part one is entitled, "Time, Experience, and the Universe." Part two is named, "Time in Einstein’s Universe." Part three is called, "Entropy and Time’s Arrow." Part four is entitled, "From the Kitchen to the Multiverse." == Reception == Manjit Kumar in his review for the Daily Telegraph called the book "a rewarding read" that was "not for the faint hearted". Writing for The A.V. Club, Donna Bowman commented, "Its appeal lies in Carroll's gift for leading readers through the train of thought that connects black holes, light cones, event horizons, Laplace's demon (or Maxwell’s), dark energy, and entropy with the question of time... Like all great teachers, he makes his subject irresistible, and makes his students feel smarter." A reviewer of Kirkus Reviews added, "Not for the scientifically disinclined, but determined readers will come away with a rewarding grasp of a complex subject." Andreas Albrecht, writing for Physics Today, gave the book a generally positive review, while noting that Carroll's attempts to provide material for both lay and expert readers might at times leave both dissatisfied. In his review for New Scientist, philosopher Craig Callender wrote that "Carroll seems slightly embarrassed by the many leaps of faith he asks of his reader" in explaining his hypothesis for the origin of the arrow of time. Eric Winsberg's evaluation of Carroll's proposal concluded by saying that its conceptual costs "seem high, and the benefits few." == References == == External links == From Eternity to Here: The Quest for the Ultimate Theory of Time on YouTube
Wikipedia/From_Eternity_to_Here:_The_Quest_for_the_Ultimate_Theory_of_Time
Scalar–tensor–vector gravity (STVG) is a modified theory of gravity developed by John Moffat, a researcher at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario. The theory is also often referred to by the acronym MOG (MOdified Gravity). == Overview == Scalar–tensor–vector gravity theory, also known as MOdified Gravity (MOG), is based on an action principle and postulates the existence of a vector field, while elevating the three constants of the theory to scalar fields. In the weak-field approximation, STVG produces a Yukawa-like modification of the gravitational force due to a point source. Intuitively, this result can be described as follows: far from a source gravity is stronger than the Newtonian prediction, but at shorter distances, it is counteracted by a repulsive fifth force due to the vector field. STVG has been used successfully to explain galaxy rotation curves, the mass profiles of galaxy clusters, gravitational lensing in the Bullet Cluster, and cosmological observations without the need for dark matter. On a smaller scale, in the Solar System, STVG predicts no observable deviation from general relativity. The theory may also offer an explanation for the origin of inertia. == Mathematical details == STVG is formulated using the action principle. In the following discussion, a metric signature of [ + , − , − , − ] {\displaystyle [+,-,-,-]} will be used; the speed of light is set to c = 1 {\displaystyle c=1} , and we are using the following definition for the Ricci tensor: R α β = ∂ γ Γ α β γ − ∂ β Γ α γ γ + Γ α β γ Γ γ δ δ − Γ α δ γ Γ γ β δ . {\displaystyle R_{\alpha \beta }=\partial _{\gamma }\Gamma _{\alpha \beta }^{\gamma }-\partial _{\beta }\Gamma _{\alpha \gamma }^{\gamma }+\Gamma _{\alpha \beta }^{\gamma }\Gamma _{\gamma \delta }^{\delta }-\Gamma _{\alpha \delta }^{\gamma }\Gamma _{\gamma \beta }^{\delta }.} We begin with the Einstein–Hilbert Lagrangian: L G = − 1 16 π G ( R + 2 Λ ) − g , {\displaystyle {\mathcal {L}}_{G}=-{\frac {1}{16\pi G}}(R+2\Lambda ){\sqrt {-g}},} where R {\displaystyle R} is the trace of the Ricci tensor, G {\displaystyle G} is the gravitational constant, g {\displaystyle g} is the determinant of the metric tensor g α β {\displaystyle g_{\alpha \beta }} , while Λ {\displaystyle \Lambda } is the cosmological constant. We introduce the Maxwell-Proca Lagrangian for the STVG covector field ϕ α {\displaystyle \phi _{\alpha }} : L ϕ = − 1 4 π ω [ 1 4 B α β B α β − 1 2 μ 2 ϕ α ϕ α + V ϕ ( ϕ ) ] − g , {\displaystyle {\mathcal {L}}_{\phi }=-{\frac {1}{4\pi }}\omega \left[{\frac {1}{4}}B^{\alpha \beta }B_{\alpha \beta }-{\frac {1}{2}}\mu ^{2}\phi _{\alpha }\phi ^{\alpha }+V_{\phi }(\phi )\right]{\sqrt {-g}},} where B α β = ∂ α ϕ β − ∂ β ϕ α = ( d ϕ ) α β {\displaystyle B_{\alpha \beta }=\partial _{\alpha }\phi _{\beta }-\partial _{\beta }\phi _{\alpha }=(\mathrm {d} \phi )_{\alpha \beta }} is the field strength of ϕ α {\displaystyle \phi _{\alpha }} (given by the exterior derivative), μ {\displaystyle \mu } is the mass of the vector field, ω {\displaystyle \omega } characterizes the strength of the coupling between the fifth force and matter, and V ϕ {\displaystyle V_{\phi }} is a self-interaction potential. 
The three constants of the theory, G , μ , {\displaystyle G,\mu ,} and ω , {\displaystyle \omega ,} are promoted to scalar fields by introducing associated kinetic and potential terms in the Lagrangian density: L S = − 1 G [ 1 2 g α β ( ∂ α G ∂ β G G 2 + ∂ α μ ∂ β μ μ 2 − ∂ α ω ∂ β ω ) + V G ( G ) G 2 + V μ ( μ ) μ 2 + V ω ( ω ) ] − g , {\displaystyle {\mathcal {L}}_{S}=-{\frac {1}{G}}\left[{\frac {1}{2}}g^{\alpha \beta }\left({\frac {\partial _{\alpha }G\partial _{\beta }G}{G^{2}}}+{\frac {\partial _{\alpha }\mu \partial _{\beta }\mu }{\mu ^{2}}}-\partial _{\alpha }\omega \partial _{\beta }\omega \right)+{\frac {V_{G}(G)}{G^{2}}}+{\frac {V_{\mu }(\mu )}{\mu ^{2}}}+V_{\omega }(\omega )\right]{\sqrt {-g}},} where V G , V μ , {\displaystyle V_{G},V_{\mu },} and V ω {\displaystyle V_{\omega }} are the self-interaction potentials associated with the scalar fields. The STVG action integral takes the form S = ∫ ( L G + L ϕ + L S + L M ) d 4 x , {\displaystyle S=\int {({\mathcal {L}}_{G}+{\mathcal {L}}_{\phi }+{\mathcal {L}}_{S}+{\mathcal {L}}_{M})}~\mathrm {d^{4}} x,} where L M {\displaystyle {\mathcal {L}}_{M}} is the ordinary matter Lagrangian density. == Spherically symmetric, static vacuum solution == The field equations of STVG can be developed from the action integral using the variational principle. First a test particle Lagrangian is postulated in the form L T P = − m + α ω q 5 ϕ μ u μ , {\displaystyle {\mathcal {L}}_{\mathrm {TP} }=-m+\alpha \omega q_{5}\phi _{\mu }u^{\mu },} where m {\displaystyle m} is the test particle mass, α {\displaystyle \alpha } is a factor representing the nonlinearity of the theory, q 5 {\displaystyle q_{5}} is the test particle's fifth-force charge, and u μ = d x μ / d s {\displaystyle u^{\mu }=dx^{\mu }/ds} is its four-velocity. Assuming that the fifth-force charge is proportional to mass, i.e., q 5 = κ m , {\displaystyle q_{5}=\kappa m,} the value of κ = G N / ω {\displaystyle \kappa ={\sqrt {G_{N}/\omega }}} is determined and the following equation of motion is obtained in the spherically symmetric, static gravitational field of a point mass of mass M {\displaystyle M} : r ¨ = − G N M r 2 [ 1 + α − α ( 1 + μ r ) e − μ r ] , {\displaystyle {\ddot {r}}=-{\frac {G_{N}M}{r^{2}}}\left[1+\alpha -\alpha (1+\mu r)e^{-\mu r}\right],} where G N {\displaystyle G_{N}} is Newton's constant of gravitation. Further study of the field equations allows a determination of α {\displaystyle \alpha } and μ {\displaystyle \mu } for a point gravitational source of mass M {\displaystyle M} in the form μ = D M , {\displaystyle \mu ={\frac {D}{\sqrt {M}}},} α = G ∞ − G N G N M ( M + E ) 2 , {\displaystyle \alpha ={\frac {G_{\infty }-G_{N}}{G_{N}}}{\frac {M}{({\sqrt {M}}+E)^{2}}},} where G ∞ ≃ 20 G N {\displaystyle G_{\infty }\simeq 20G_{N}} is determined from cosmological observations, while for the constants D {\displaystyle D} and E {\displaystyle E} galaxy rotation curves yield the following values: D ≃ 25 2 ⋅ 10 M ⊙ 1 / 2 k p c − 1 , {\displaystyle D\simeq 25^{2}\cdot \,10M_{\odot }^{1/2}\mathrm {kpc} ^{-1},} E ≃ 50 2 ⋅ 10 M ⊙ 1 / 2 , {\displaystyle E\simeq 50^{2}\cdot \,10M_{\odot }^{1/2},} where M ⊙ {\displaystyle M_{\odot }} is the mass of the Sun. These results form the basis of a series of calculations that are used to confront the theory with observation. == Agreement with observations == STVG/MOG has been applied successfully to a range of astronomical, astrophysical, and cosmological phenomena. 
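The point-source formulas above are simple enough to evaluate numerically. The following Python sketch is an illustration based only on the expressions and rough constants quoted in this section (the function names are introduced here, and the values of D, E and G∞/G_N are the approximate figures given above, not precise fits); it computes the ratio of the STVG acceleration to the Newtonian one, which is dimensionless and so independent of the unit chosen for G_N:

import math

# Rough parameter values quoted in the section above (illustrative only):
G_INF_OVER_GN = 20.0     # G_infinity ≈ 20 G_N
D = 6.25e3               # ≈ 25^2 · 10, in Msun^(1/2) kpc^(-1)
E = 2.5e4                # ≈ 50^2 · 10, in Msun^(1/2)

def alpha(mass_msun):
    # alpha = (G_inf - G_N)/G_N * M / (sqrt(M) + E)^2
    return (G_INF_OVER_GN - 1.0) * mass_msun / (math.sqrt(mass_msun) + E) ** 2

def mu(mass_msun):
    # mu = D / sqrt(M), in kpc^-1
    return D / math.sqrt(mass_msun)

def stvg_over_newton(mass_msun, r_kpc):
    # Ratio of the STVG acceleration to the Newtonian value G_N M / r^2
    a, m = alpha(mass_msun), mu(mass_msun)
    return 1.0 + a - a * (1.0 + m * r_kpc) * math.exp(-m * r_kpc)

# Example: a point source with the mass of a large spiral galaxy (~1e11 Msun).
for r in (1.0, 5.0, 20.0, 100.0):          # galactocentric radii in kpc
    print(f"r = {r:6.1f} kpc: a_STVG / a_Newton = {stvg_over_newton(1e11, r):.2f}")

For small μr the bracket expands to 1 + O(α μ² r²), recovering the Newtonian result, while far from the source the ratio approaches 1 + α, matching the qualitative description above of gravity that is stronger than the Newtonian prediction at large distances.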
On the scale of the Solar System, the theory predicts no deviation from the results of Newton and Einstein. This is also true for star clusters containing no more than a few million solar masses. The theory accounts for the rotation curves of spiral galaxies, correctly reproducing the Tully–Fisher law. STVG is in good agreement with the mass profiles of galaxy clusters. STVG can also account for key cosmological observations, including: The acoustic peaks in the cosmic microwave background radiation; The accelerating expansion of the universe that is apparent from type Ia supernova observations; The matter power spectrum of the universe that is observed in the form of galaxy-galaxy correlations. == Problems and criticism == A 2017 article on Forbes by Ethan Siegel states that the Bullet Cluster still "proves dark matter exists, but not for the reason most physicists think". There he argues in favor of dark matter over non-local gravity theories, such as STVG/MOG. Observations show that in "undisturbed" galaxy clusters the reconstructed mass from gravitational lensing is located where matter is distributed, and a separation of matter from gravitation only seems to appear after a collision or interaction has taken place. According to Ethan Siegel: "Adding dark matter makes this work, but non-local gravity would make differing before-and-after predictions that can't both match up, simultaneously, with what we observe." == See also == Modified Newtonian dynamics Nonsymmetric gravitational theory Tensor–vector–scalar gravity Reinventing Gravity == References ==
Wikipedia/Scalar–tensor–vector_gravity
The zero-energy universe hypothesis proposes that the total amount of energy in the universe is exactly zero: its amount of positive energy in the form of matter is exactly canceled out by its negative energy in the form of gravity. Some physicists, such as Lawrence Krauss, Stephen Hawking or Alexander Vilenkin, call or called this state "a universe from nothingness", although the zero-energy universe model requires both a matter field with positive energy and a gravitational field with negative energy to exist. The hypothesis is broadly discussed in popular sources. Other cancellation examples include the expected symmetric prevalence of right- and left-handed angular momenta of objects ("spin" in the common sense), the observed flatness of the universe, the equal prevalence of positive and negative charges, opposing particle spin in quantum mechanics, as well as the crests and troughs of electromagnetic waves, among other possible examples in nature. == History == During World War II, Pascual Jordan first suggested that since the positive energy of a star's mass and the negative energy of its gravitational field together may have zero total energy, conservation of energy would not prevent a star being created by a quantum transition of the vacuum. George Gamow recounted putting this idea to Albert Einstein: "Einstein stopped in his tracks and, since we were crossing a street, several cars had to stop to avoid running us down". Elaboration of the concept was slow, with the first notable calculation being performed by Richard Feynman in 1962. The first known publication on the topic was in 1973, when Edward Tryon proposed in the journal Nature that the universe emerged from a large-scale quantum fluctuation of vacuum energy, resulting in its positive mass-energy being exactly balanced by its negative gravitational potential energy. In the subsequent decades, development of the concept was constantly plagued by the dependence of the calculated masses on the selection of the coordinate systems. In particular, a problem arises due to energy associated with coordinate systems co-rotating with the entire universe. A first constraint was derived in 1987 when Alan Guth published a proof of gravitational energy being negative. The question of the mechanism permitting generation of both positive and negative energy from a null initial solution was not understood, and an ad hoc solution with cyclic time was proposed by Stephen Hawking in 1988. In 1994, development of the theory resumed following the publication of a work by Nathan Rosen, in which Rosen described a special case of a closed universe. In 1995, J.V. Johri demonstrated that the total energy of Rosen's universe is zero in any universe compliant with a Friedmann–Lemaître–Robertson–Walker metric, and proposed a mechanism of inflation-driven generation of matter in a young universe. The zero-energy solution for a Minkowski space representing an observable universe was provided in 2009. In his book Brief Answers to the Big Questions, Hawking explains: The laws of physics demand the existence of something called 'negative energy'. To help you get your head around this weird but crucial concept, let me draw on a simple analogy. Imagine a man wants to build a hill on a flat piece of land. The hill will represent the universe. To make this hill he digs a hole in the ground and uses that soil to dig his hill. But of course he's not just making a hill—he's also making a hole, in effect a negative version of the hill. 
The stuff that was in the hole has now become the hill, so it all perfectly balances out. This is the principle behind what happened at the beginning of the universe. When the Big Bang produced a massive amount of positive energy, it simultaneously produced the same amount of negative energy. In this way, the positive and the negative add up to zero, always. It's another law of nature. So where is all this negative energy today? It's in the third ingredient in our cosmic cookbook: it's in space. This may sound odd, but according to the laws of nature concerning gravity and motion—laws that are among the oldest in science—space itself is a vast store of negative energy. Enough to ensure that everything adds up to zero. Some research in quantum cosmology provides a concrete realization of the zero energy universe hypothesis. == Experimental constraints == Experimental evidence for the observable universe being a "zero-energy universe" is currently inconclusive. Gravitational energy from visible matter accounts for 26–37% of the observed total mass–energy density. Therefore, to fit the concept of a "zero-energy universe" to the observed universe, other negative energy reservoirs besides gravity from baryonic matter are necessary. These reservoirs are frequently assumed to be dark matter. == See also == A Universe from Nothing False vacuum Heat death of the universe List of cosmology paradoxes – List of statements that appear to contradict themselves Ultimate fate of the universe == References ==
Wikipedia/Zero-energy_universe
The Theory of Everything is a 2014 British biographical drama film produced by Working Title Films and directed by James Marsh. Set at the University of Cambridge, it details three decades of the life of the theoretical physicist Stephen Hawking. It was adapted by Anthony McCarten from the 2007 memoir Travelling to Infinity: My Life with Stephen by Jane Hawking, which deals with her relationship with her ex-husband Stephen Hawking, his diagnosis of motor neurone disease — also known as amyotrophic lateral sclerosis (ALS) — and his success in the field of physics. The film stars Eddie Redmayne and Felicity Jones, with Charlie Cox, Emily Watson, Simon McBurney, Christian McKay, Harry Lloyd, and David Thewlis featured in supporting roles. The film had its world premiere at the 2014 Toronto International Film Festival on 7 September 2014. It had its UK premiere on 1 January 2015. The film received positive reviews, with praise for the musical score by Jóhann Jóhannsson, the cinematography by Benoît Delhomme, and the performances of Jones and especially Redmayne. It was also a global box office success, grossing US$123 million against a US$15 million production budget. The film gained numerous awards and nominations, including five Academy Award nominations: Best Picture, Best Actor (Redmayne), Best Actress (Jones), Best Adapted Screenplay, and Best Original Score (Jóhannsson), with Redmayne winning Best Actor. The film received 10 British Academy Film Awards (BAFTA) nominations, and won Outstanding British Film, Best Leading Actor for Redmayne, and Best Adapted Screenplay for McCarten. It received four Golden Globe Award nominations, winning the Golden Globe Award for Best Actor – Motion Picture Drama for Redmayne, and Best Original Score for Jóhannsson. It also received three Screen Actors Guild Awards nominations, and won the Screen Actors Guild Award for Outstanding Performance by a Male Actor in a Leading Role for Redmayne. == Plot == In 1962, Stephen Hawking, a post-graduate astrophysics student at the University of Cambridge, begins a relationship with literature student Jane Wilde. Although Stephen is intelligent, both his friends and academics are concerned about his lack of a thesis topic. After attending a lecture by Roger Penrose on black holes with his supervisor, Prof. Dennis Sciama, Stephen speculates that these may have been part of the creation of the universe and so decides upon his thesis. However, soon Stephen's muscles begin to fail, causing his coordination to deteriorate. After a bad fall, he is diagnosed with early-onset progressive degenerative motor neurone disease (MND) that will eventually leave him unable to move, swallow, and even breathe. With no treatment options, he is given approximately two years to live. The doctor assures Stephen that his brain will not be affected, so his thoughts and intelligence will remain intact, but eventually, he will be unable to communicate them. Stephen develops severe depression, becoming reclusive and focusing on his work. Jane confesses she loves him and that she intends to stay even as his condition worsens. They marry and have their first son, Robert. Once his walking ability deteriorates, he begins using a wheelchair. Inspired by Penrose’s work on spacetime singularities at the centre of black holes, Stephen presents his doctoral thesis at his viva, arguing that a black hole created the universe in a Big Bang and that it will end in a Big Crunch. 
After the Hawkings have their daughter Lucy, Jane becomes frustrated at having to focus on the children and on Stephen's slowly degenerating health while his fame increases, all at the expense of her academic work. Stephen tells her he will understand if she needs help. In the 1970s, Jane joins a church choir, where she meets and becomes close friends with Jonathan, a widower. She employs him as a piano teacher for Robert, and Jonathan befriends the entire family, helping Stephen with his illness, supporting Jane, and playing with the children. When Jane gives birth to another son, Timothy, Stephen's mother asks her if the baby is Jonathan's. This causes outrage and Jonathan is appalled, but when he and Jane are alone, they admit the depth of their feelings for one another. He distances himself from the family, but Stephen tells him that Jane needs him. As the Lucasian Professor of Mathematics at Cambridge, Stephen goes on to develop the theory that black holes are not completely black but emit radiation, and he becomes a world-renowned physicist. In the 1980s, while attending an opera performance in Bordeaux on holiday, Stephen falls ill and is rushed to a hospital. The doctor informs Jane that he has pneumonia and the tracheotomy he needs to survive will leave him mute. She agrees to the surgery. Stephen learns to use a spelling board and uses it to communicate with his new nurse, Elaine Mason. He receives a computer with a built-in voice synthesizer and uses it to write a book, A Brief History of Time, which becomes an international best-seller. In the late 1980s, Stephen tells Jane he has been invited to the United States to accept an award and will take Elaine with him. Jane faces the fact that the marriage has not been working, saying she "did her best", and they agree to divorce. While Stephen has fallen in love with Elaine, Jane and Jonathan reunite. Stephen goes to deliver a public lecture where he sees a student drop a pen. He imagines getting up to return it, almost crying at the reminder of how his disease has affected him. He then gives a speech telling audiences to pursue their ambitions despite the harsh reality of life: "While there's life, there is hope." On being made a member of the Order of the Companions of Honour in 1989, Stephen invites Jane to go with him to meet Queen Elizabeth II, where they share a happy day together with their three children. An extended closing sequence shows select moments from the film in reverse, back to the moment Stephen first saw Jane — the reversal is reminiscent of Stephen's research methodology of reversing time to understand the beginning of the universe. A written epilogue reveals that A Brief History of Time has sold over ten million copies worldwide; Stephen declined an offer of a knighthood and has no plans to retire; Jane earned her PhD in Medieval Spanish Poetry and married Jonathan; and both Stephen and Jane remain friends, sharing three grandchildren. == Cast == == Production == === Development === Screenwriter Anthony McCarten had been interested in Hawking since reading his seminal 1988 book A Brief History of Time. In 2004, McCarten read Jane Hawking's first memoir, Music to Move the Stars: A Life with Stephen of 1999, and subsequently began writing a screenplay adaptation of the book, with no guarantees in place. He met with Jane at her home numerous times to discuss the project. 
After multiple redrafts, incorporating details from her second memoir Travelling to Infinity: My Life with Stephen of 2007, he was introduced to producer Lisa Bruce via their mutual ICM agent, Craig Bernstein in 2009. Bruce spent three years with McCarten, further convincing Jane Hawking to agree to a film adaptation of her book, with Bruce stating, "It was a lot of conversation, many glasses of sherry, and many pots of tea". On 18 April 2013, James Marsh was confirmed to direct the film, with the shooting being based in Cambridge, and at other locations in the United Kingdom, with Eddie Redmayne courted to fill the male lead of the piece. On 23 June 2013, it was revealed that Felicity Jones was confirmed to play the film's female lead role opposite Redmayne. On 8 October 2013, it was confirmed that Emily Watson and David Thewlis had joined the cast and that Working Title's Tim Bevan, Eric Fellner, Lisa Bruce, and Anthony McCarten would be producing the piece. Marsh had studied archival images to give the film its authenticity, stating, "When we had photographs and documentary footage of Stephen that related to our story, we tried to reproduce them as best we could". Redmayne met with Hawking himself, commenting, "Even now, when he's unable to move, you can still see such effervescence in his eyes". He described portraying Hawking on-screen as a "hefty" challenge, adding that, "The real problem with making a film is of course you don't shoot chronologically. So it was about having to really try and chart his physical deterioration [so] you can jump into it day-to-day, whilst at the same time keeping this spark and wit and humour that he has". Redmayne spent six months researching Hawking's life, watching every piece of interview footage he could find of him. He studied Hawking's accent and speech patterns under dialect coach Julia Wilson-Dickson to prepare for the role. Marsh stated that what Redmayne had to do was not easy. "He had to take on enormous amounts of difficult preparation, as well as embracing the difficult physicality of the role. It's not just doing a disability. It's actually charting the course of an illness that erodes the body, and the mind has to project out from that erosion", he said. He added that Hawking gave him his blessing, and also revealed that, "[Hawking's] response was very positive, so much so that he offered to lend his voice, the real voice that he uses. The voice you hear in the latter part of the story is in fact Stephen's actual electronic voice as he uses it", he said. It was revealed to the Toronto International Film Festival (TIFF) audience that as the lights came up at a recent screening, a nurse had wiped a tear from Hawking's cheek. Jane Hawking, speaking on BBC Radio 4's Woman's Hour, talked of meeting Jones several times while the latter prepared for the role. When Hawking saw the finished film, she was amazed to see that Jones had incorporated her mannerisms and speech patterns into her performance. === Filming === Prior to the start of principal photography, Working Title had begun shooting on the lawn in front of the New Court building from 23 September 2013 to 27 September 2013; they filmed the Cambridge May Ball scene, set in 1963. On 24 September 2013, scenes were filmed at St John's College, The Backs in Queen's Road, and Queen's Green. The New Court lawn and Kitchen Bridge were featured places included in the location filming. 
Principal photography began on 8 October 2013, with the location filming at the University of Cambridge, and at other places in Cambridgeshire and across the United Kingdom. The May Ball scene was also the last of the outdoor shoots, with filming in a lecture theatre the following day, and the remaining filming completed in the studio over the final five weeks of production. The pyrotechnic specialists Titanium Fireworks, who developed the displays for the London 2012 Olympic Games, provided three identical firework displays for the May Ball scene at Trinity College, Cambridge. === Music === Composer Jóhann Jóhannsson scored The Theory of Everything. His score in the film has been described as including "[Jóhannsson's] signature blend of acoustic instruments and electronics". Jóhannsson commented that "it always involves the layers of live recordings, whether it's orchestra or a band or solo instrument, with electronics and more 'soundscapey' elements which can come from various sources". Jóhannsson's score was highly praised, being nominated for an Academy Award for Best Original Score, a BAFTA Award for Best Film Music, a Critics' Choice Movie Award for Best Score and a Grammy Award for Best Score Soundtrack for Visual Media, winning the Golden Globe Award for Best Original Score. The soundtrack was recorded at Abbey Road Studios. The music that plays over the final scene of Hawking and his family in the garden and the reverse-flashback is "The Arrival of the Birds", composed and played by The Cinematic Orchestra, originally from the soundtrack to the 2008 nature documentary The Crimson Wing: Mystery of the Flamingos. === Post-production === During editing, filmmakers tried to remake Hawking's synthesised voice, but it did not turn out as they wanted. Hawking enjoyed the film enough that he granted them permission to use his own synthesised voice, which is heard in the final film. == Historical accuracy == The film takes various dramatic liberties with the history it portrays. Writing for the film blog of UK daily newspaper The Guardian, Michelle Dean noted: The Theory of Everything's marketing materials will tell you it is based on Jane Hawking's memoir of her marriage, a book published in the UK as Music to Move the Stars: A Life with Stephen, and then re-issued as Travelling to Infinity. But the screenwriters rearranged the facts to suit certain dramatic conventions. And while that always happens in these based-on-a-true-story films, the scale of the departure in The Theory of Everything is unusually wide. The film becomes almost dishonest — in a way that feels unfair to both parties, and oddly, particularly Jane Hawking herself. In Slate, L.V. Anderson wrote that "the Stephen played by Eddie Redmayne is far gentler and more sensitive" than suggested in Travelling to Infinity. The Slate article further noted that the character Brian, Hawking's closest friend at Cambridge in the film, is not based on a real individual, but rather a composite of several of his real-life friends. The film alters some of the details surrounding the beginning of Stephen and Jane's relationship, including how they met, as well as the fact that Jane knew about Stephen's disease before they started dating. Slate also comments that the film underplays Hawking's stubbornness and refusal to accept outside assistance for his disorder. For The Guardian, Dean concluded by saying: The movie presents the demise of their relationship as a beautiful, tear-soaked, mutually respectful conversation. 
Of course that didn't actually happen either. Jane's book describes a protracted breakup that comes to a head in a screaming fight on vacation. She also described devastation when Hawking announced by letter he was leaving her for his second wife, Elaine Mason. He ended up married to Mason for 10 years before that fell apart, and then he and Jane mended fences. Which, as it happens, the movie fudges too. It tries to present the rapprochement as coming when Hawking was made a Companion of Honour in 1989, but that actually happened before the couple separated. Physicist Adrian Melott, a former student of Prof. Dennis Sciama, Hawking's doctoral supervisor portrayed in the film, strongly criticised the portrayal of Sciama in the film. In the film, Stephen attends the opera in Bordeaux; in reality, his companion on that trip was Raymond LaFlamme, his PhD student. In the film, it is explained that Stephen's voice is taken from an answering machine. It is actually the voice of Dr Dennis H. Klatt. == Release == On 8 October 2013, Universal Pictures International acquired the rights to distribute the film internationally. On 10 April 2014, Focus Features acquired the distribution rights to The Theory of Everything in the United States, with the plan of a 2014 limited theatrical release. Shortly after, Entertainment One Films picked up the Canadian distribution rights. The first trailer of the film was released on 7 August 2014. The Theory of Everything premiered at the Toronto International Film Festival (TIFF) on 7 September 2014, where it opened in the official sidebar section, Special Presentations. The film had a limited release in the United States on 7 November 2014, expanded in successive weeks to Taiwan, Austria, and Germany, ahead of a United Kingdom release on 1 January 2015, before being released throughout Europe. == Reception == === Box office === The Theory of Everything earned US$122,873,310 worldwide, with its biggest markets coming from North America (US$35.9 million), and the United Kingdom (US$31.9 million). The film had a North American limited release on 7 November 2014; it was released in five theatres, and earned $207,000 on its opening weekend, for an average of $41,400 per theatre. The film was then widely released on 26 November across 802 cinemas, earning US$5 million, and debuting at No. 7 at the box office. During its five-day US Thanksgiving week, the film earned $6.4 million. === Critical response === Film review aggregator Rotten Tomatoes reports an approval rating of 80% based on 273 reviews, with an average rating of 7.3/10. The site's critical consensus reads, "Part biopic, part love story, The Theory of Everything rises on James Marsh's polished direction and the strength of its two leads." Metacritic assigned the film a weighted average score of 71 out of 100, based on 47 critics, indicating "generally favorable reviews". Catherine Shoard of The Guardian wrote, "Redmayne towers: this is an astonishing, genuinely visceral performance which bears comparison with Daniel Day-Lewis in My Left Foot". Justin Chang of Variety remarked, "A stirring and bittersweet love story, inflected with tasteful good humor...." He continued by praising the "superb performances" from Redmayne and Jones, as well as commenting very positively on Jóhannsson's score, "whose arpeggio-like repetitions and progressions at times evoke the compositions of Philip Glass", whilst praising John Paul Kelly's production design, and Steven Noble's costumes. 
Leslie Felperin of The Hollywood Reporter remarked, "A solid, duly moving account of their complicated relationship, spanning roughly 25 years, and made with impeccable professional polish", praising Delhomme's cinematography as having "lush, intricately lit compositions", and adding "a splendor that keeps the film consistently watchable", and Jóhannsson's score as "dainty precision with an ineffable scientific quality about it". The Daily Telegraph's Tim Robey granted the film a positive review, stating that, "In its potted appraisal of Hawking's cosmology, The Theory of Everything bends over backwards to speak to the layman, and relies on plenty of second-hand inspiration. But it borrows from the right sources, this theory. And that's something", while praising Redmayne's performance, McCarten's script, and Delhomme's cinematography. Deadline Hollywood's Pete Hammond singled out McCarten's script and Marsh's direction for praise, and of the film's Toronto reception, wrote: "To say the response here was rapturous would not be understating the enthusiasm I heard — not just from pundits, but also Academy voters with whom I spoke. One told me he came in with high expectations for a quality movie, and this one exceeded them". The film was not without its detractors. Some criticised Marsh's focus on Hawking's romantic life over his scientific achievements. Alonso Duralde of The Wrap stated that "Hawking's innovations and refusal to subscribe to outdated modes of thinking merely underscore the utter conventionality of his film biography". Eric Kohn of Indiewire added that "James Marsh's biopic salutes the famous physicist's commitment, but falls short of exploring his brilliant ideas". Dennis Overbye of the New York Times noted: The movie doesn't deserve any prizes for its drive-by muddling of Dr. Hawking's scientific work, leaving viewers in the dark about exactly why he is so famous. Instead of showing how he undermined traditional notions of space and time, it panders to religious sensibilities about what his work does or does not say about the existence of God, which in fact is very little. Writing for The Guardian's film blog, Michelle Dean argues that the film does a disservice to Jane Wilde Hawking, by "rearrang[ing] the facts to suit certain dramatic conventions... The Theory of Everything is hell-bent on preserving the cliche". The film's producers, writer, director Marsh, and actors Redmayne and Jones were widely favoured for award season success. === Accolades === The Theory of Everything received several awards and nominations following its release. At the 87th Academy Awards, it was nominated in the categories of Best Picture, Best Actor for Eddie Redmayne, Best Actress for Jones, Best Adapted Screenplay for McCarten, and Best Original Score for Jóhann Jóhannsson, with Eddie Redmayne winning the film's sole Academy Award for his performance. The film was nominated for ten British Academy Film Awards (winning for Best Adapted Screenplay, Best British Film, and Best Actor), five Critics' Choice Movie Awards, and three Screen Actors Guild Awards. At the 72nd Golden Globe Awards, Redmayne won Best Actor – Motion Picture Drama, and Jóhannsson won Best Original Score. The film and Jones were also nominated. Production designer John Paul Kelly earned a nomination for Excellence in Production Design for a Period Film from the Art Directors Guild, while the producers were nominated for Best Theatrical Motion Picture by the Producers Guild of America. 
== See also == List of films about mathematicians == References == == External links == The Theory of Everything — official website at FocusFeatures.com The Theory of Everything at the British Board of Film Classification The Theory of Everything at the British Film Institute The Theory of Everything at IMDb The Theory of Everything at Rotten Tomatoes The Theory of Everything at Box Office Mojo
Wikipedia/The_Theory_of_Everything_(2014_film)
In theoretical physics, little string theory is a non-gravitational non-local theory in six spacetime dimensions that can be obtained as an effective theory of NS5-branes in the limit in which gravity decouples. Little string theories exhibit T-duality, much like the full string theory. == References == Aharony, Ofer (2000). "A brief review of "little string theories"". Classical and Quantum Gravity. 17 (5): 929–938. arXiv:hep-th/9911147. Bibcode:2000CQGra..17..929A. doi:10.1088/0264-9381/17/5/302. S2CID 14143964. David Kutasov (2001). Introduction to Little String Theory (PDF). Spring School on Superstrings and Related Matters. Archived from the original on 2024-02-21. Retrieved 2024-02-21.{{cite conference}}: CS1 maint: bot: original URL status unknown (link)
Wikipedia/Little_string_theory
cGh physics refers to the historical attempts in physics to unify relativity, gravitation, and quantum mechanics, in particular following the ideas of Matvei Petrovich Bronstein and George Gamow. The letters are the standard symbols for the speed of light (c), the gravitational constant (G), and the Planck constant (h). If one considers these three universal constants as the basis for a 3-D coordinate system and envisions a cube, then this pedagogic construction provides a framework, which is referred to as the cGh cube, or physics cube, or cube of theoretical physics (CTP). This cube can be used for organizing major subjects within physics as occupying each of the eight corners. The eight corners of the cGh physics cube are: Classical mechanics (_, _, _); Special relativity (c, _, _), gravitation (_, G, _), quantum mechanics (_, _, h); General relativity (c, G, _), quantum field theory (c, _, h), non-relativistic quantum theory with gravity (_, G, h); Theory of everything, or relativistic quantum gravity (c, G, h). Other cGh physics topics include Hawking radiation and black-hole thermodynamics. While there are several other physical constants, these three are given special consideration because they can be used to define all Planck units and thus all physical quantities. The three constants are therefore sometimes used as a framework for philosophical study and as a pedagogical device. == Overview == Before the first successful estimate of the speed of light in 1676, it was not known whether light was transmitted instantaneously or not. Because of the tremendously large value of the speed of light—c (i.e. 299,792,458 metres per second in vacuum)—compared to the range of human perceptual response and visual processing, the propagation of light is normally perceived as instantaneous. Hence, the ratio 1/c is sufficiently close to zero that all subsequent differences of calculations in relativistic mechanics are similarly 'invisible' relative to human perception. However, at speeds comparable to the speed of light (c), the Lorentz transformation (as per special relativity) produces substantially different results which agree more accurately with (sufficiently precise) experimental measurement. Non-relativistic theory can then be derived by taking the limit as the speed of light tends to infinity—i.e. ignoring terms (in the Taylor expansion) with a factor of 1/c—recovering the non-relativistic formulae as the leading approximation. The gravitational constant (G) is irrelevant for a system where gravitational forces are negligible. For example, the special theory of relativity is the special case of general relativity in the limit G → 0. Similarly, in theories where the effects of quantum mechanics are irrelevant, the value of the Planck constant (h) can be neglected. For example, if h → 0 in the commutation relation of quantum mechanics, the uncertainty in the simultaneous measurement of two conjugate variables tends to zero, and quantum mechanics reduces to classical mechanics. == In popular culture == George Gamow chose "C. G. H." as the initials of his fictitious character, Mr C. G. H. Tompkins. == References ==
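Because the organising role of the three constants ultimately rests on the fact that c, G, and h suffice to define the Planck units, a short numerical illustration may be useful. The following Python sketch is an illustrative aside rather than part of the article: it uses approximate SI values and the conventional definitions in terms of the reduced constant ħ = h/2π (the choice of ħ rather than h is a convention, not something stated above).

import math

c = 299_792_458.0        # speed of light, m/s (exact in SI)
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2 (approximate)
h = 6.626_070_15e-34     # Planck constant, J s (exact in SI)
hbar = h / (2.0 * math.pi)

planck_length = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m
planck_time = math.sqrt(hbar * G / c**5)     # ~5.4e-44 s
planck_mass = math.sqrt(hbar * c / G)        # ~2.2e-8 kg

print(f"Planck length: {planck_length:.3e} m")
print(f"Planck time:   {planck_time:.3e} s")
print(f"Planck mass:   {planck_mass:.3e} kg")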
Wikipedia/CGh_physics
A conscience is a cognitive process that elicits emotion and rational associations based on an individual's moral philosophy or value system. Conscience is not an elicited emotion or thought produced by associations based on immediate sensory perceptions and reflexive responses, as in sympathetic central nervous system responses. In common terms, conscience is often described as leading to feelings of remorse when a person commits an act that conflicts with their moral values. The extent to which conscience informs moral judgment before an action and whether such moral judgments are or should be based on reason has occasioned debate through much of modern history between theories of basics in ethic of human life in juxtaposition to the theories of romanticism and other reactionary movements after the end of the Middle Ages. Religious views of conscience usually see it as linked to a morality inherent in all humans, to a beneficent universe and/or to divinity. The diverse ritualistic, mythical, doctrinal, legal, institutional and material features of religion may not necessarily cohere with experiential, emotive, spiritual or contemplative considerations about the origin and operation of conscience. Common secular or scientific views regard the capacity for conscience as probably genetically determined, with its subject probably learned or imprinted as part of a culture. Commonly used metaphors for conscience include the "voice within", the "inner light", or even Socrates' reliance on what the Greeks called his "daimōnic sign", an averting (ἀποτρεπτικός apotreptikos) inner voice heard only when he was about to make a mistake. Conscience, as is detailed in sections below, is a concept in national and international law, is increasingly conceived of as applying to the world as a whole, has motivated numerous notable acts for the public good and been the subject of many prominent examples of literature, music and film. == Views == Although humanity has no generally accepted definition of conscience or universal agreement about its role in ethical decision-making, three approaches have addressed it: Religious views Secular views Philosophical views === Religious === In the literary traditions of the Upanishads, Brahma Sutras and the Bhagavad Gita, conscience is the label given to attributes composing knowledge about good and evil, that a soul acquires from the completion of acts and consequent accretion of karma over many lifetimes. According to Adi Shankara in his Vivekachudamani morally right action (characterised as humbly and compassionately performing the primary duty of good to others without expectation of material or spiritual reward), helps "purify the heart" and provide mental tranquility but it alone does not give us "direct perception of the Reality". This knowledge requires discrimination between the eternal and non-eternal and eventually a realization in contemplation that the true self merges in a universe of pure consciousness. In the Zoroastrian faith, after death a soul must face judgment at the Bridge of the Separator; there, evil people are tormented by prior denial of their own higher nature, or conscience, and "to all time will they be guests for the House of the Lie." The Chinese concept of Ren, indicates that conscience, along with social etiquette and correct relationships, assist humans to follow The Way (Tao) a mode of life reflecting the implicit human capacity for goodness and harmony. Conscience also features prominently in Buddhism. 
In the Pali scriptures, for example, Buddha links the positive aspect of conscience to a pure heart and a calm, well-directed mind. It is regarded as a spiritual power, and one of the "Guardians of the World". The Buddha also associated conscience with compassion for those who must endure cravings and suffering in the world until right conduct culminates in right mindfulness and right contemplation. Santideva (685–763 CE) wrote in the Bodhicaryavatara (which he composed and delivered in the great northern Indian Buddhist university of Nalanda) of the spiritual importance of perfecting virtues such as generosity, forbearance and training the awareness to be like a "block of wood" when attracted by vices such as pride or lust; so one can continue advancing towards right understanding in meditative absorption. Conscience thus manifests in Buddhism as unselfish love for all living beings which gradually intensifies and awakens to a purer awareness where the mind withdraws from sensory interests and becomes aware of itself as a single whole. The Roman Emperor Marcus Aurelius wrote in his Meditations that conscience was the human capacity to live by rational principles that were congruent with the true, tranquil and harmonious nature of our mind and thereby that of the Universe: "To move from one unselfish action to another with God in mind. Only there, delight and stillness ... the only rewards of our existence here are an unstained character and unselfish acts." The Islamic concept of Taqwa is closely related to conscience. In the Qur’ān verses 2:197 & 22:37 Taqwa refers to "right conduct" or "piety", "guarding of oneself" or "guarding against evil". Qur’ān verse 47:17 says that God is the ultimate source of the believer's taqwā which is not simply the product of individual will but requires inspiration from God. In Qur’ān verses 91:7–8, God the Almighty talks about how He has perfected the soul, the conscience and has taught it the wrong (fujūr) and right (taqwā). Hence, the awareness of vice and virtue is inherent in the soul, allowing it to be tested fairly in the life of this world and tried, held accountable on the day of judgment for responsibilities to God and all humans. Qur’ān verse 49:13 states: "O humankind! We have created you out of male and female and constituted you into different groups and societies, so that you may come to know each other-the noblest of you, in the sight of God, are the ones possessing taqwā." In Islam, according to eminent theologians such as Al-Ghazali, although events are ordained (and written by God in al-Lawh al-Mahfūz, the Preserved Tablet), humans possess free will to choose between wrong and right and are thus responsible for their actions; the conscience being a dynamic personal connection to God enhanced by knowledge and practise of the Five Pillars of Islam, deeds of piety, repentance, self-discipline, and prayer; and disintegrated and metaphorically covered in blackness through sinful acts. Marshall Hodgson wrote the three-volume work: The Venture of Islam: Conscience and History in a World Civilization. In the Protestant Christian tradition, Martin Luther insisted at the Diet of Worms that his conscience was captive to the Word of God, and it was neither safe nor right to go against conscience. To Luther, conscience falls within the ethical, rather than the religious, sphere. 
John Calvin saw conscience as a battleground: "the enemies who rise up in our conscience against his Kingdom and hinder his decrees prove that God's throne is not firmly established therein". Many Christians regard following one's conscience as important as, or even more important than, obeying human authority. According to the Bible, as enunciated in Romans 2:15, conscience bears witness, accusing or excusing us, so that we know when we break the law written in our hearts; the guilt we feel when we do something wrong tells us that we need to repent. This can sometimes (as with the conflict between William Tyndale and Thomas More over the translation of the Bible into English) lead to moral quandaries: "Do I unreservedly obey my Church/priest/military/political leader or do I follow my own inner feeling of right and wrong as instructed by prayer and a personal reading of scripture?" Some contemporary Christian churches and religious groups hold the moral teachings of the Ten Commandments or of Jesus as the highest authority in any situation, regardless of the extent to which it involves responsibilities in law. In the Gospel of John (7:53–8:11, King James Version), Jesus challenges those accusing a woman of adultery: "'He that is without sin among you, let him first cast a stone at her.' And again he stooped down, and wrote on the ground. And they which heard it, being convicted by their own conscience, went out one by one" (see Jesus and the woman taken in adultery). Of note, however, the phrase about conscience is not found in the earliest Greek manuscripts of this passage and does not appear in the vast majority of Bible versions. In the Gospel of Luke (10:25–37), Jesus tells the story of how a despised and heretical Samaritan (see Parable of the Good Samaritan) who (out of compassion or pity; the word 'conscience' is not used) helps an injured stranger beside a road qualifies better for eternal life by loving his neighbor than a priest who passes by on the other side. This dilemma of obedience in conscience to divine or state law was demonstrated dramatically in Antigone's defiance of King Creon's order against burying her brother, an alleged traitor, appealing to the "unwritten law" and to a "longer allegiance to the dead than to the living". Catholic theology sees conscience as the last practical "judgment of reason which at the appropriate moment enjoins [a person] to do good and to avoid evil". The Second Vatican Council (1962–65) states: "Deep within his conscience man discovers a law which he has not laid upon himself but which he must obey. Its voice, ever calling him to love and to do what is good and to avoid evil, tells him inwardly at the right moment: do this, shun that. For man has in his heart a law inscribed by God. His dignity lies in observing this law, and by it he will be judged. His conscience is man’s most secret core, and his sanctuary. There he is alone with God whose voice echoes in his depths." Thus, conscience is not like the will, nor a habit like prudence, but "the interior space in which we can listen to and hear the truth, the good, the voice of God.
It is the inner place of our relationship with Him, who speaks to our heart and helps us to discern, to understand the path we ought to take, and once the decision is made, to move forward, to remain faithful". In terms of logic, conscience can be viewed as the practical conclusion of a moral syllogism whose major premise is an objective norm and whose minor premise is a particular case or situation to which the norm is applied. Thus, Catholics are taught to carefully educate themselves as to revealed norms and norms derived therefrom, so as to form a correct conscience. Catholics are also to examine their conscience daily and with special care before confession. Catholic teaching holds that "Man has the right to act according to his conscience and in freedom so as personally to make moral decisions. He must not be forced to act contrary to his conscience. Nor must he be prevented from acting according to his conscience, especially in religious matters". This right of conscience allows one to form one's morality from sincere and traditional sources and to form one's opinions accordingly. Thus, the Church teaches that one must form one's morality and then follow it to the best of one's ability. Nevertheless it is taught in more than one area that the conscience can, and sometimes should, stand against the teaching of the Church. The Church thus teaches that conscience is a supreme authority, even above that of popes, bishops, and priests. While conscience does grant man a great degree of freedom, if one is going to disagree with conventional morality or with the teachings of the Church, it is absolutely necessary to make sure that one's conscience is well formed and certain of what it is claiming or not claiming. A sincere conscience presumes one is diligently seeking moral truth from authentic sources, whether that be from the Church, or from Scripture, or from the numerous Church Fathers. Nevertheless, despite one's best effort, "[i]t can happen that moral conscience remains in ignorance and makes erroneous judgments about acts to be performed or already committed ... This ignorance can, but not always, be imputed to personal responsibility. This is the case when a man "takes little trouble to find out what is true and good", or, in other words, puts forth very little effort and does not take the forming of the conscience seriously. In such cases, the person is culpable for the wrong he commits." This is not necessarily because of the error itself, but because of the bad faith or minuscule effort put forth by the one whose conscience is in question. The Catholic Church has warned that "rejection of the Church's authority and her teaching ... can sometimes be at the source of errors in judgment in moral conduct". An example of someone following his conscience to the point of accepting the consequence of being condemned to death is Sir Thomas More (1478–1535). A theologian who wrote on the distinction between the 'sense of duty' and the 'moral sense', as two aspects of conscience, and who saw the former as a feeling that can only be explained by a divine Lawgiver, was John Henry Cardinal Newman. A well-known saying of his is that he would drink a toast first to conscience and only then to the pope, since it was his conscience that brought him to acknowledge the authority of the pope. This relates to the concept of the different types of heresy as understood within Church teaching. The Church distinguishes between Material Heresy and Formal Heresy.
Material Heresy occurs when an individual, after sincere and thorough study of the Church’s moral teachings and a genuine effort to form their conscience in accordance with those teachings, concludes—respectfully and in good faith—that the Church is mistaken on one or more moral issues. In such cases, if the individual maintains their personal belief despite their best efforts to understand and accept Church doctrine, they are considered a Material Heretic. However, because their error stems from a well-intentioned and conscientious process, no sin is imputed to them. Formal Heresy, by contrast, involves a willful and culpable rejection of Church teaching despite recognizing its truth. In this case, the individual acknowledges that the Church's doctrine is correct but chooses to reject it knowingly, often out of pride, defiance, malice, or other forms of vice. This rejection constitutes a grave moral fault because it entails acting against one’s own conscience and embracing falsehood knowingly. As such, Formal Heresy is considered a sin, as it reflects both an intentional departure from truth and a deliberate act of dishonesty. One must maintain the separation between Material Heresy and Formal Heresy, simply because one is sinful and the other is not. Judaism arguably does not require uncompromising obedience to religious authority; the case has been made that throughout Jewish history, rabbis have circumvented laws they found unconscionable, such as capital punishment. Similarly, although a preoccupation with national destiny has been central to the Jewish faith (see Zionism), many scholars (including Moses Mendelssohn) stated that conscience as a personal revelation of scriptural truth was an important adjunct to the Talmudic tradition. The concept of inner light in the Religious Society of Friends or Quakers is associated with conscience. Freemasonry describes itself as providing an adjunct to religion, and key symbols found in a Freemason Lodge are the square and compasses, explained as providing lessons that Masons should "square their actions by the square of conscience" and learn to "circumscribe their desires and keep their passions within due bounds toward all mankind." The historian Manning Clark viewed conscience as one of the comforters that religion placed between man and death, but also as a crucial part of the quest for grace encouraged by the Book of Job and the Book of Ecclesiastes, leading us to be paradoxically closest to the truth when we suspect that what matters most in life ("being there when everyone suddenly understands what it has all been for") can never happen. Leo Tolstoy, after a decade studying the issue (1877–1887), held that the only power capable of resisting the evil associated with materialism and the drive for social power of religious institutions was the capacity of humans to reach an individual spiritual truth through reason and conscience. Many prominent religious works about conscience also have a significant philosophical component: examples are the works of Al-Ghazali, Avicenna, Aquinas, Joseph Butler and Dietrich Bonhoeffer (all discussed in the philosophical views section). === Secular === The secular approach to conscience includes psychological, physiological, sociological, humanitarian, and authoritarian views.
Lawrence Kohlberg considered critical conscience to be an important psychological stage in the proper moral development of humans, associated with the capacity to rationally weigh principles of responsibility, being best encouraged in the very young by linkage with humorous personifications (such as Jiminy Cricket) and later in adolescents by debates about individually pertinent moral dilemmas. Erik Erikson placed the development of conscience in the 'pre-schooler' phase of his eight stages of normal human personality development. The psychologist Martha Stout terms conscience "an intervening sense of obligation based in our emotional attachments." Thus a good conscience is associated with feelings of integrity, psychological wholeness and peacefulness and is often described using adjectives such as "quiet", "clear" and "easy". Sigmund Freud regarded conscience as originating psychologically from the growth of civilisation, which periodically frustrated the external expression of aggression: this destructive impulse being forced to seek an alternative, healthy outlet, directed its energy as a superego against the person's own "ego" or selfishness (often taking its cue in this regard from parents during childhood). According to Freud, the consequence of not obeying our conscience is guilt, which can be a factor in the development of neurosis; Freud claimed that both the cultural and individual super-ego set up strict ideal demands with regard to the moral aspects of certain decisions, disobedience to which provokes a 'fear of conscience'. Antonio Damasio considers conscience an aspect of extended consciousness beyond survival-related dispositions and incorporating the search for truth and desire to build norms and ideals for behavior. ==== Conscience as a society-forming instinct ==== Michel Glautier argues that conscience is one of the instincts and drives which enable people to form societies: groups of humans without these drives or in whom they are insufficient cannot form societies and do not reproduce their kind as successfully as those that do. Charles Darwin considered that conscience evolved in humans to resolve conflicts between competing natural impulses-some about self-preservation but others about safety of a family or community; the claim of conscience to moral authority emerged from the "greater duration of impression of social instincts" in the struggle for survival. In such a view, behavior destructive to a person's society (either to its structures or to the persons it comprises) is bad or "evil". Thus, conscience can be viewed as an outcome of those biological drives that prompt humans to avoid provoking fear or contempt in others; being experienced as guilt and shame in differing ways from society to society and person to person. A requirement of conscience in this view is the capacity to see ourselves from the point of view of another person. Persons unable to do this (psychopaths, sociopaths, narcissists) therefore often act in ways which are "evil". Fundamental in this view of conscience is that humans consider some "other" as being in a social relationship. Thus, nationalism is invoked in conscience to quell tribal conflict and the notion of a Brotherhood of Man is invoked to quell national conflicts. Yet such crowd drives may not only overwhelm but redefine individual conscience. 
Friedrich Nietzsche stated: "communal solidarity is annihilated by the highest and strongest drives that, when they break out passionately, whip the individual far past the average low level of the 'herd-conscience.'" Jeremy Bentham noted that: "fanaticism never sleeps ... it is never stopped by conscience; for it has pressed conscience into its service." Hannah Arendt in her study of the trial of Adolf Eichmann in Jerusalem, notes that the accused, as with almost all his fellow Germans, had lost track of his conscience to the point where they hardly remembered it; this wasn't caused by familiarity with atrocities or by psychologically redirecting any resultant natural pity to themselves for having to bear such an unpleasant duty, so much as by the fact that anyone whose conscience did develop doubts could see no one who shared them: "Eichmann did not need to close his ears to the voice of conscience ... not because he had none, but because his conscience spoke with a "respectable voice", with the voice of the respectable society around him". Sir Arthur Keith in 1948 developed the Amity-enmity complex. We evolved as tribal groups surrounded by enemies; thus conscience evolved a dual role; the duty to save and protect members of the in-group, and the duty to show hatred and aggression towards any out-group. An interesting area of research in this context concerns the similarities between our relationships and those of animals, whether animals in human society (pets, working animals, even animals grown for food) or in the wild. One idea is that as people or animals perceive a social relationship as important to preserve, their conscience begins to respect that former "other", and urge actions that protect it. Similarly, in complex territorial and cooperative breeding bird communities (such as the Australian magpie) that have a high degree of etiquettes, rules, hierarchies, play, songs and negotiations, rule-breaking seems tolerated on occasions not obviously related to survival of the individual or group; behaviour often appearing to exhibit a touching gentleness and tenderness. ==== Evolutionary biology ==== Contemporary scientists in evolutionary biology seek to explain conscience as a function of the brain that evolved to facilitate altruism within societies. In his book The God Delusion, Richard Dawkins states that he agrees with Robert Hinde's Why Good is Good, Michael Shermer's The Science of Good and Evil, Robert Buckman's Can We Be Good Without God? and Marc Hauser's Moral Minds, that our sense of right and wrong can be derived from our Darwinian past. He subsequently reinforced this idea through the lens of the gene-centered view of evolution, since the unit of natural selection is neither an individual organism nor a group, but rather the "selfish" gene, and these genes could ensure their own "selfish" survival by, inter alia, pushing individuals to act altruistically towards its kin. ==== Neuroscience and artificial conscience ==== Numerous case studies of brain damage have shown that damage to areas of the brain (such as the anterior prefrontal cortex) results in the reduction or elimination of inhibitions, with a corresponding radical change in behaviour. When the damage occurs to adults, they may still be able to perform moral reasoning; but when it occurs to children, they may never develop that ability. 
Attempts have been made by neuroscientists to locate the free will necessary for what is termed the 'veto' of conscience over unconscious mental processes (see Neuroscience of free will and Benjamin Libet) in a scientifically measurable awareness of an intention to carry out an act, occurring 350–400 milliseconds after the onset of the electrical activity known as the 'readiness potential.' Jacques Pitrat claims that some kind of artificial conscience is beneficial in artificial intelligence systems to improve their long-term performance and direct their introspective processing. === Philosophical === The word "conscience" derives etymologically from the Latin conscientia, meaning "privity of knowledge" or "with-knowledge". The English word implies internal awareness of a moral standard in the mind concerning the quality of one's motives, as well as a consciousness of our own actions. Thus conscience considered philosophically may be first, and perhaps most commonly, a largely unexamined "gut feeling" or "vague sense of guilt" about what ought to be or should have been done. Conscience in this sense is not necessarily the product of a process of rational consideration of the moral features of a situation (or the applicable normative principles, rules or laws) and can arise from parental, peer group, religious, state or corporate indoctrination, which may or may not be presently consciously acceptable to the person ("traditional conscience"). Conscience may also be defined as the practical reason employed when applying moral convictions to a situation ("critical conscience"). In purportedly morally mature mystical people who have developed this capacity through daily contemplation or meditation combined with selfless service to others, critical conscience can be aided by a "spark" of intuitive insight or revelation (called marifa in Islamic Sufi philosophy and synderesis in medieval Christian scholastic moral philosophy). Conscience is accompanied in each case by an internal awareness of 'inner light' and approbation or 'inner darkness' and condemnation, as well as a resulting conviction of right or duty either followed or declined. ==== Medieval ==== The medieval Islamic scholar and mystic Al-Ghazali divided the concept of Nafs (soul or self) into three categories based on the Qur’an: Nafs Ammarah (12:53), which "exhorts one to freely indulge in gratifying passions and instigates to do evil"; Nafs Lawammah (75:2), which is "the conscience that directs man towards right or wrong"; and Nafs Mutmainnah (89:27), which is "a self that reaches the ultimate peace". The medieval Persian philosopher and physician Muhammad ibn Zakariya al-Razi believed in a close relationship between conscience or spiritual integrity and physical health; rather than being self-indulgent, man should pursue knowledge, use his intellect and apply justice in his life. The medieval Islamic philosopher Avicenna, whilst imprisoned in the castle of Fardajan near Hamadhan, wrote his famous isolated-but-awake "Floating Man" sensory deprivation thought experiment to explore the ideas of human self-awareness and the substantiality of the soul; his hypothesis being that it is through intelligence, particularly the active intellect, that God communicates truth to the human mind or conscience.
According to the Islamic Sufis conscience allows Allah to guide people to the marifa, the peace or "light upon light" experienced where a Muslim's prayers lead to a melting away of the self in the inner knowledge of God; this foreshadowing the eternal Paradise depicted in the Qur’ān. Some medieval Christian scholastics such as Bonaventure made a distinction between conscience as a rational faculty of the mind (practical reason) and inner awareness, an intuitive "spark" to do good, called synderesis arising from a remnant appreciation of absolute good and when consciously denied (for example to perform an evil act), becoming a source of inner torment. Early modern theologians such as William Perkins and William Ames developed a syllogistic understanding of the conscience, where God's law made the first term, the act to be judged the second and the action of the conscience (as a rational faculty) produced the judgement. By debating test cases applying such understanding conscience was trained and refined (i.e. casuistry). In the 13th century, St. Thomas Aquinas regarded conscience as the application of moral knowledge to a particular case (S.T. I, q. 79, a. 13). Thus, conscience was considered an act or judgment of practical reason that began with synderesis, the structured development of our innate remnant awareness of absolute good (which he categorised as involving the five primary precepts proposed in his theory of Natural Law) into an acquired habit of applying moral principles. According to Singer, Aquinas held that conscience, or conscientia was an imperfect process of judgment applied to activity because knowledge of the natural law (and all acts of natural virtue implicit therein) was obscured in most people by education and custom that promoted selfishness rather than fellow-feeling (Summa Theologiae, I–II, I). Aquinas also discussed conscience in relation to the virtue of prudence to explain why some people appear to be less "morally enlightened" than others, their weak will being incapable of adequately balancing their own needs with those of others. Aquinas reasoned that acting contrary to conscience is an evil action but an errant conscience is only blameworthy if it is the result of culpable or vincible ignorance of factors that one has a duty to have knowledge of. Aquinas also argued that conscience should be educated to act towards real goods (from God) which encouraged human flourishing, rather than the apparent goods of sensory pleasures. In his Commentary on Aristotle's Nicomachean Ethics Aquinas claimed it was weak will that allowed a non-virtuous man to choose a principle allowing pleasure ahead of one requiring moral constraint. Thomas A Kempis in the medieval contemplative classic The Imitation of Christ (ca 1418) stated that the glory of a good man is the witness of a good conscience. "Preserve a quiet conscience and you will always have joy. A quiet conscience can endure much, and remains joyful in all trouble, but an evil conscience is always fearful and uneasy." The anonymous medieval author of the Christian mystical work The Cloud of Unknowing similarly expressed the view that in profound and prolonged contemplation a soul dries up the "root and ground" of the sin that is always there, even after one's confession and however busy one is in holy things: "therefore, whoever would work at becoming a contemplative must first cleanse his [or her] conscience." 
The medieval Flemish mystic John of Ruysbroeck likewise held that true conscience has four aspects that are necessary to render a man just in the active and contemplative life: "a free spirit, attracting itself through love"; "an intellect enlightened by grace", "a delight yielding propension or inclination" and "an outflowing losing of oneself in the abyss of ... that eternal object which is the highest and chief blessedness ... those lofty amongst men, are absorbed in it, and immersed in a certain boundless thing." ==== Modern ==== Benedict de Spinoza in his Ethics, published after his death in 1677, argued that most people, even those that consider themselves to exercise free will, make moral decisions on the basis of imperfect sensory information, inadequate understanding of their mind and will, as well as emotions which are both outcomes of their contingent physical existence and forms of thought defective from being chiefly impelled by self-preservation. The solution, according to Spinoza, was to gradually increase the capacity of our reason to change the forms of thought produced by emotions and to fall in love with viewing problems requiring moral decision from the perspective of eternity. Thus, living a life of peaceful conscience means to Spinoza that reason is used to generate adequate ideas where the mind increasingly sees the world and its conflicts, our desires and passions sub specie aeternitatis, that is without reference to time. Hegel's obscure and mystical Philosophy of Mind held that the absolute right of freedom of conscience facilitates human understanding of an all-embracing unity, an absolute which was rational, real and true. Nevertheless, Hegel thought that a functioning State would always be tempted not to recognize conscience in its form of subjective knowledge, just as similar non-objective opinions are generally rejected in science. A similar idealist notion was expressed in the writings of Joseph Butler who argued that conscience is God-given, should always be obeyed, is intuitive, and should be considered the "constitutional monarch" and the "universal moral faculty": "conscience does not only offer itself to show us the way we should walk in, but it likewise carries its own authority with it." Butler advanced ethical speculation by referring to a duality of regulative principles in human nature: first, "self-love" (seeking individual happiness) and second, "benevolence" (compassion and seeking good for another) in conscience (also linked to the agape of situational ethics). Conscience tended to be more authoritative in questions of moral judgment, thought Butler, because it was more likely to be clear and certain (whereas calculations of self-interest tended to probable and changing conclusions). John Selden in his Table Talk expressed the view that an awake but excessively scrupulous or ill-trained conscience could hinder resolve and practical action; it being "like a horse that is not well wayed, he starts at every bird that flies out of the hedge". 
As the sacred texts of ancient Hindu and Buddhist philosophy became available in German translations in the 18th and 19th centuries, they influenced philosophers such as Schopenhauer to hold that in a healthy mind only deeds oppress our conscience, not wishes and thoughts; "for it is only our deeds that hold us up to the mirror of our will"; the good conscience, thought Schopenhauer, we experience after every disinterested deed arises from direct recognition of our own inner being in the phenomenon of another, it affords us the verification "that our true self exists not only in our own person, this particular manifestation, but in everything that lives. By this the heart feels itself enlarged, as by egotism it is contracted." Immanuel Kant, a central figure of the Age of Enlightenment, likewise claimed that two things filled his mind with ever new and increasing admiration and awe, the oftener and more steadily they were reflected on: "the starry heavens above me and the moral law within me ... the latter begins from my invisible self, my personality, and exhibits me in a world which has true infinity but which I recognise myself as existing in a universal and necessary (and not only, as in the first case, contingent) connection." The 'universal connection' referred to here is Kant's categorical imperative: "act only according to that maxim by which you can at the same time will that it should become a universal law." Kant considered critical conscience to be an internal court in which our thoughts accuse or excuse one another; he acknowledged that morally mature people do often describe contentment or peace in the soul after following conscience to perform a duty, but argued that for such acts to produce virtue their primary motivation should simply be duty, not expectation of any such bliss. Rousseau expressed a similar view that conscience somehow connected man to a greater metaphysical unity. John Plamenatz in his critical examination of Rousseau's work considered that conscience was there defined as the feeling that urges us, in spite of contrary passions, towards two harmonies: the one within our minds and between our passions, and the other within society and between its members; "the weakest can appeal to it in the strongest, and the appeal, though often unsuccessful, is always disturbing. However, corrupted by power or wealth we may be, either as possessors of them or as victims, there is something in us serving to remind us that this corruption is against nature." Other philosophers expressed a more sceptical and pragmatic view of the operation of "conscience" in society. John Locke in his Essays on the Law of Nature argued that the widespread fact of human conscience allowed a philosopher to infer the necessary existence of objective moral laws that occasionally might contradict those of the state. Locke highlighted the metaethics problem of whether accepting a statement like "follow your conscience" supports subjectivist or objectivist conceptions of conscience as a guide in concrete morality, or as a spontaneous revelation of eternal and immutable principles to the individual: "if conscience be a proof of innate principles, contraries may be innate principles; since some men with the same bent of conscience prosecute what others avoid." 
Thomas Hobbes likewise pragmatically noted that opinions formed on the basis of conscience with full and honest conviction should nevertheless always be accepted with humility as potentially erroneous and not necessarily indicating absolute knowledge or truth. William Godwin expressed the view that conscience was a memorable consequence of the "perception by men of every creed when they descend into the scene of busy life" that they possess free will. Adam Smith considered that it was only by developing a critical conscience that we can ever see what relates to ourselves in its proper shape and dimensions, or that we can ever make any proper comparison between our own interests and those of other people. John Stuart Mill believed that idealism about the role of conscience in government should be tempered with a practical realisation that few men in society are capable of directing their minds or purposes towards distant or unobvious interests, of disinterested regard for others, and especially for what comes after them, for the idea of posterity, of their country, or of humanity, whether grounded on sympathy or on a conscientious feeling. Mill held that a certain amount of conscience, and of disinterested public spirit, may fairly be calculated on in the citizens of any community ripe for representative government, but that "it would be ridiculous to expect such a degree of it, combined with such intellectual discernment, as would be proof against any plausible fallacy tending to make that which was for their class interest appear the dictate of justice and of the general good." Josiah Royce (1855–1916) built on the transcendental idealist view of conscience, viewing it as the ideal of life which constitutes our moral personality, our plan of being ourselves, of making common-sense ethical decisions. But, he thought, this was only true insofar as our conscience also required loyalty to "a mysterious higher or deeper self". In the modern Christian tradition this approach achieved expression with Dietrich Bonhoeffer, who stated during his imprisonment by the Nazis in World War II that conscience for him was more than practical reason; indeed it came from a "depth which lies beyond a man's own will and his own reason and it makes itself heard as the call of human existence to unity with itself." For Bonhoeffer a guilty conscience arose as an indictment of the loss of this unity and as a warning against the loss of one's self; primarily, he thought, it is directed not towards a particular kind of doing but towards a particular mode of being. It protests against a doing which imperils the unity of this being with itself. Conscience for Bonhoeffer did not, like shame, embrace or pass judgment on the morality of the whole of its owner's life; it reacted only to certain definite actions: "it recalls what is long past and represents this disunion as something which is already accomplished and irreparable". The man with a conscience, he believed, fights a lonely battle against the "overwhelming forces of inescapable situations" which demand moral decisions despite the likelihood of adverse consequences. Simon Soloveychik has similarly claimed that the truth distributed in the world, as the statement about human dignity, as the affirmation of the line between good and evil, lives in people as conscience.
As Hannah Arendt pointed out, however, (following the utilitarian John Stuart Mill on this point): a bad conscience does not necessarily signify a bad character; in fact only those who affirm a commitment to applying moral standards will be troubled with remorse, guilt or shame by a bad conscience and their need to regain integrity and wholeness of the self. Representing our soul or true self by analogy as our house, Arendt wrote that "conscience is the anticipation of the fellow who awaits you if and when you come home." Arendt believed that people who are unfamiliar with the process of silent critical reflection about what they say and do will not mind contradicting themselves by an immoral act or crime, since they can "count on its being forgotten the next moment;" bad people are not full of regrets. Arendt also wrote eloquently on the problem of languages distinguishing the word consciousness from conscience. One reason, she held, was that conscience, as we understand it in moral or legal matters, is supposedly always present within us, just like consciousness: "and this conscience is also supposed to tell us what to do and what to repent; before it became the lumen naturale or Kant's practical reason, it was the voice of God." Albert Einstein, as a self-professed adherent of humanism and rationalism, likewise viewed an enlightened religious person as one whose conscience reflects that he "has, to the best of his ability, liberated himself from the fetters of his selfish desires and is preoccupied with thoughts, feelings and aspirations to which he clings because of their super-personal value." Einstein often referred to the "inner voice" as a source of both moral and physical knowledge: "Quantum mechanics is very impressive. But an inner voice tells me that it is not the real thing. The theory produces a good deal but hardly brings one closer to the secrets of the Old One. I am at all events convinced that He does not play dice." Simone Weil who fought for the French resistance (the Maquis) argued in her final book The Need for Roots: Prelude to a Declaration of Duties Towards Mankind that for society to become more just and protective of liberty, obligations should take precedence over rights in moral and political philosophy and a spiritual awakening should occur in the conscience of most citizens, so that social obligations are viewed as fundamentally having a transcendent origin and a beneficent impact on human character when fulfilled. Simone Weil also in that work provided a psychological explanation for the mental peace associated with a good conscience: "the liberty of men of goodwill, though limited in the sphere of action, is complete in that of conscience. For, having incorporated the rules into their own being, the prohibited possibilities no longer present themselves to the mind, and have not to be rejected." Alternatives to such metaphysical and idealist opinions about conscience arose from realist and materialist perspectives such as those of Charles Darwin. Darwin suggested that "any animal whatever, endowed with well-marked social instincts, the parental and filial affections being here included, would inevitably acquire a moral sense or conscience, as soon as its intellectual powers had become as well, or as nearly as well developed, as in man." Émile Durkheim held that the soul and conscience were particular forms of an impersonal principle diffused in the relevant group and communicated by totemic ceremonies. A. J. 
Ayer was a more recent realist who held that the existence of conscience was an empirical question to be answered by sociological research into the moral habits of a given person or group of people, and what causes them to have precisely those habits and feelings. Such an inquiry, he believed, fell wholly within the scope of the existing social sciences. George Edward Moore bridged the idealistic and sociological views of 'critical' and 'traditional' conscience in stating that the idea of abstract 'rightness' and the various degrees of the specific emotion excited by it are what constitute, for many persons, the specifically 'moral sentiment' or conscience. For others, however, an action seems to be properly termed 'internally right', merely because they have previously regarded it as right, the idea of 'rightness' being present in some way to his or her mind, but not necessarily among his or her deliberately constructed motives. The French philosopher Simone de Beauvoir in A Very Easy Death (Une mort très douce, 1964) reflects within her own conscience about her mother's attempts to develop such a moral sympathy and understanding of others. Michael Walzer claimed that the growth of religious toleration in Western nations arose amongst other things, from the general recognition that private conscience signified some inner divine presence regardless of the religious faith professed and from the general respectability, piety, self-limitation, and sectarian discipline which marked most of the men who claimed the rights of conscience. Walzer also argued that attempts by courts to define conscience as a merely personal moral code or as sincere belief, risked encouraging an anarchy of moral egotisms, unless such a code and motive was necessarily tempered with shared moral knowledge: derived either from the connection of the individual to a universal spiritual order, or from the common principles and mutual engagements of unselfish people. Ronald Dworkin maintains that constitutional protection of freedom of conscience is central to democracy but creates personal duties to live up to it: "Freedom of conscience presupposes a personal responsibility of reflection, and it loses much of its meaning when that responsibility is ignored. A good life need not be an especially reflective one; most of the best lives are just lived rather than studied. But there are moments that cry out for self-assertion, when a passive bowing to fate or a mechanical decision out of deference or convenience is treachery, because it forfeits dignity for ease." Edward Conze stated it is important for individual and collective moral growth that we recognise the illusion of our conscience being wholly located in our body; indeed both our conscience and wisdom expand when we act in an unselfish way and conversely "repressed compassion results in an unconscious sense of guilt." The philosopher Peter Singer considers that usually when we describe an action as conscientious in the critical sense we do so in order to deny either that the relevant agent was motivated by selfish desires, like greed or ambition, or that he acted on whim or impulse. Moral anti-realists debate whether the moral facts necessary to activate conscience supervene on natural facts with a posteriori necessity; or arise a priori because moral facts have a primary intension and naturally identical worlds may be presumed morally identical. 
It has also been argued that there is a measure of moral luck in how circumstances create the obstacles which conscience must overcome to apply moral principles or human rights and that with the benefit of enforceable property rights and the rule of law, access to universal health care plus the absence of high adult and infant mortality from conditions such as malaria, tuberculosis, HIV/AIDS and famine, people in relatively prosperous developed countries have been spared pangs of conscience associated with the physical necessity to steal scraps of food, bribe tax inspectors or police officers, and commit murder in guerrilla wars against corrupt government forces or rebel armies. Roger Scruton has claimed that true understanding of conscience and its relationship with morality has been hampered by an "impetuous" belief that philosophical questions are solved through the analysis of language in an area where clarity threatens vested interests. Susan Sontag similarly argued that it was a symptom of psychological immaturity not to recognise that many morally immature people willingly experience a form of delight, in some an erotic breaking of taboo, when witnessing violence, suffering and pain being inflicted on others. Jonathan Glover wrote that most of us "do not spend our lives on endless landscape gardening of our self" and our conscience is likely shaped not so much by heroic struggles, as by choice of partner, friends and job, as well as where we choose to live. Garrett Hardin, in a famous article called "The Tragedy of the Commons", argues that any instance in which society appeals to an individual exploiting a commons to restrain himself or herself for the general good—by means of his or her conscience—merely sets up a system which, by selectively diverting societal power and physical resources to those lacking in conscience, while fostering guilt (including anxiety about his or her individual contribution to over-population) in people acting upon it, actually works toward the elimination of conscience from the race. John Ralston Saul expressed the view in The Unconscious Civilization that in contemporary developed nations many people have acquiesced in turning over their sense of right and wrong, their critical conscience, to technical experts; willingly restricting their moral freedom of choice to limited consumer actions ruled by the ideology of the free market, while citizen participation in public affairs is limited to the isolated act of voting and private-interest lobbying turns even elected representatives against the public interest. Some argue on religious or philosophical grounds that it is blameworthy to act against conscience, even if the judgement of conscience is likely to be erroneous (say because it is inadequately informed about the facts, or prevailing moral (humanist or religious), professional ethical, legal and human rights norms). Failure to acknowledge and accept that conscientious judgements can be seriously mistaken, may only promote situations where one's conscience is manipulated by others to provide unwarranted justifications for non-virtuous and selfish acts; indeed, insofar as it is appealed to as glorifying ideological content, and an associated extreme level of devotion, without adequate constraint of external, altruistic, normative justification, conscience may be considered morally blind and dangerous both to the individual concerned and humanity as a whole. 
Langston argues that philosophers of virtue ethics have unnecessarily neglected conscience, for once conscience is trained so that the principles and rules it applies are those one would want all others to live by, its practice cultivates and sustains the virtues; indeed, amongst people in what each society considers to be the highest state of moral development there is little disagreement about how to act. Emmanuel Levinas viewed conscience as a revelatory encountering of resistance to our selfish powers, developing morality by calling into question our naive sense of freedom of will to use such powers arbitrarily, or with violence, this process being more severe the more rigorously the goal of our self was to obtain control. In other words, the welcoming of the Other, to Levinas, was the very essence of conscience properly conceived; it encouraged our ego to accept the fallibility of assuming things about other people, that selfish freedom of will "does not have the last word" and that realising this has a transcendent purpose: "I am not alone ... in conscience I have an experience that is not commensurate with any a priori [see a priori and a posteriori] framework, a conceptless experience." == Conscientious acts and the law == In the late 13th and early 14th centuries, English litigants began to petition the Lord Chancellor of England for relief from unjust judgments. As Keeper of the King's Conscience, the Chancellor intervened to allow for "merciful exceptions" to the King's laws, "to ensure that the King's conscience was right before God". The Chancellor's office evolved into the Court of Chancery and the Chancellor's decisions evolved into the body of law known as equity. English humanist lawyers in the 16th and 17th centuries interpreted conscience as a collection of universal principles given to man by God at creation to be applied by reason; this gradually reformed the medieval Roman law-based system with forms of action, written pleadings, use of juries and patterns of litigation such as demurrer and assumpsit that displayed an increased concern for elements of right and wrong on the actual facts. A conscience vote in a parliament allows legislators to vote without restrictions from any political party to which they may belong. In his trial in Jerusalem, Nazi war criminal Adolf Eichmann claimed he was simply following legal orders under paragraph 48 of the German Military Code, which provided: "punishability of an action or omission is not excused on the ground that the person considered his behaviour required by his conscience or the prescripts of his religion". The United Nations Universal Declaration of Human Rights (UDHR), which is part of international customary law, specifically refers to conscience in Articles 1 and 18. Likewise, the United Nations International Covenant on Civil and Political Rights (ICCPR) mentions conscience in Article 18.1. Article 1 of the UDHR states: "All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood." Article 18 of the UDHR states: "Everyone has the right to freedom of thought, conscience and religion; this right includes freedom to change his religion or belief, and freedom, either alone or in community with others and in public or private, to manifest his religion or belief in teaching, practice, worship and observance." Article 18.1 of the ICCPR states: "Everyone shall have the right to freedom of thought, conscience and religion.
This right shall include freedom to have or to adopt a religion or belief of his choice, and freedom, either individually or in community with others and in public or private, to manifest his religion or belief in worship, observance, practice and teaching." It has been argued that these articles provide international legal obligations protecting conscientious objectors from service in the military. John Rawls in his A Theory of Justice defines a conscientious objector as an individual prepared to undertake, in public (and often despite widespread condemnation), an action of civil disobedience to a legal rule, justifying it (also in public) by reference to contrary foundational social virtues (such as justice as liberty or fairness) and the principles of morality and law derived from them. Rawls considered that civil disobedience should be viewed as an appeal, warning or admonishment (showing general respect and fidelity to the rule of law by the non-violence and transparency of the methods adopted) that a law breaches a community's fundamental virtue of justice. Objections to Rawls' theory include, first, its inability to accommodate conscientious objections to the society's basic appreciation of justice or to emerging moral or ethical principles (such as respect for the rights of the natural environment) which are not yet part of it and, second, the difficulty of predictably and consistently determining that a majority decision is just or unjust. Conscientious objection (also called conscientious refusal or evasion) to obeying a law should not arise from unreasoning, naive "traditional conscience", for to do so merely encourages infantile abdication of responsibility to calibrate the law against moral or human rights norms and disrespect for democratic institutions. Instead it should be based on "critical conscience": seriously thought out, conceptually mature, personal moral or religious beliefs held to be fundamentally incompatible (that is, not merely inconsistent on the basis of selfish desires, whim or impulse), for example, either with all laws requiring conscription for military service, or with legal compulsion to fight for or financially support the State in a particular war. A famous example arose when Henry David Thoreau, the author of Walden, was willingly jailed for refusing to pay a tax because he profoundly disagreed with a government policy and was frustrated by the corruption and injustice of the democratic machinery of the state. A more recent case concerned Kimberly Rivera, a private in the US Army and mother of four children who, having served three months in the Iraq War, decided the conflict was immoral and sought refugee status in Canada in 2012 (see List of Iraq War resisters), but was deported and arrested in the US. In the Second World War, Great Britain granted conscientious-objection status not just to complete pacifists, but to those who objected to fighting in that particular war; this was done partly out of genuine respect, but also to avoid the disgraceful and futile persecutions of conscientious objectors that occurred during the First World War. Amnesty International organises campaigns to protect those arrested and/or incarcerated as prisoners of conscience because of their conscientious beliefs, particularly concerning intellectual, political and artistic freedom of expression and association. Aung San Suu Kyi of Burma was the winner of the 2009 Amnesty International Ambassador of Conscience Award.
In legislation, a conscience clause is a provision in a statute that excuses a health professional from complying with a law (for example, one legalising surgical or pharmaceutical abortion) if it is incompatible with religious or conscientious beliefs. Expressed justifications for refusing to obey laws because of conscience vary. Many conscientious objectors are so for religious reasons; notably, members of the historic peace churches are pacifist by doctrine. Other objections can stem from a deep sense of responsibility toward humanity as a whole, or from the conviction that even acceptance of work under military orders acknowledges the principle of conscription, which should be everywhere condemned before the world can ever become safe for real democracy. A conscientious objector, however, does not have a primary aim of changing the law. John Dewey considered that conscientious objectors were often the victims of "moral innocency" and inexpertness in moral training: "the moving force of events is always too much for conscience". The remedy was not to deplore the wickedness of those who manipulate world power, but to connect conscience with forces moving in another direction, such as building institutions and social environments predicated on the rule of law: "then will conscience itself have compulsive power instead of being forever the martyred and the coerced." As an example, Albert Einstein, who had advocated conscientious objection during the First World War and had been a long-term supporter of War Resisters' International, reasoned that "radical pacifism" could not be justified in the face of Nazi rearmament and advocated a world federalist organization with its own professional army. Samuel Johnson pointed out that an appeal to conscience should not allow the law to bring unjust suffering upon another. Conscience, according to Johnson, was nothing more than a conviction felt by ourselves of something to be done or something to be avoided; in questions of simple unperplexed morality, conscience is very often a guide that may be trusted. But before conscience can conclusively determine what morally should be done, he thought that the state of the question should be thoroughly known. "No man's conscience", said Johnson, "can tell him the right of another man ... it is a conscience very ill informed that violates the rights of one man, for the convenience of another." Acts of civil disobedience, nonviolent protest or civil resistance are also acts of conscience, but are designed by those who undertake them chiefly to change, by appealing to the majority and democratic processes, laws or government policies perceived to be incoherent with fundamental social virtues and principles (such as justice, equality or respect for intrinsic human dignity). Civil disobedience, in a properly functioning democracy, allows a minority who feel strongly that a law infringes their sense of justice (but have no capacity to obtain legislative amendments or a referendum on the issue) to make a potentially apathetic or uninformed majority take account of the intensity of opposing views. A notable example of civil resistance or satyagraha ("satya" in Sanskrit means "truth and compassion", "agraha" means "firmness of will") involved Mahatma Gandhi making salt in India when that act was prohibited by a British statute, in order to create moral pressure for law reform.
Rosa Parks similarly acted on conscience in 1955 in Montgomery, Alabama, refusing a legal order to give up her seat to make room for a white passenger; her action (and the similar earlier act of 15-year-old Claudette Colvin) led to the Montgomery bus boycott. Rachel Corrie was a US citizen allegedly killed by a bulldozer operated by the Israel Defense Forces (IDF) while involved in direct action (based on the nonviolent principles of Martin Luther King Jr. and Mahatma Gandhi) to prevent demolition of the home of local Palestinian pharmacist Samir Nasrallah. Al Gore has argued: "If you're a young person looking at the future of this planet and looking at what is being done right now, and not done, I believe we have reached the stage where it is time for civil disobedience to prevent the construction of new coal plants that do not have carbon capture and sequestration." In 2011, NASA climate scientist James E. Hansen, environmental leader Phil Radford and Professor Bill McKibben were arrested for opposing a tar sands oil pipeline, and Canadian renewable energy professor Mark Jaccard was arrested for opposing mountain-top coal mining. In his book Storms of my Grandchildren, Hansen calls for similar civil resistance on a global scale to help replace the 'business-as-usual' Kyoto Protocol cap-and-trade system with a progressive carbon tax at the emission source on the oil, gas and coal industries – the revenue being paid as dividends to low-carbon-footprint families. Notable historical examples of conscientious noncompliance in a different professional context included the manipulation of the visa process in 1939 by Japanese Consul-General Chiune Sugihara in Kaunas (the temporary capital of Lithuania, situated between Germany and the Soviet Union) and by Raoul Wallenberg in Hungary in 1944 to allow Jews to escape almost certain death. Ho Feng-Shan, the Chinese Consul-General in Vienna, in 1939 defied orders from the Chinese ambassador in Berlin by issuing Jews with visas for Shanghai. John Rabe, a German member of the Nazi Party, likewise saved thousands of Chinese from massacre by the Japanese military at Nanjing. The White Rose German student movement against the Nazis declared in its fourth leaflet: "We will not be silent. We are your bad conscience. The White Rose will not leave you in peace!" Conscientious noncompliance may be the only practical option for citizens wishing to affirm the existence of an international moral order or 'core' historical rights (such as the right to life, right to a fair trial and freedom of opinion) in states where non-violent protest or civil disobedience are met with prolonged arbitrary detention, torture, forced disappearance, murder or persecution. Stanley Milgram's controversial experiment into obedience showed that many people lack the psychological resources to openly resist authority, even when they are directed to act callously and inhumanely against an innocent victim. == World conscience == World conscience is the universalist idea that with ready global communication, all people on earth will no longer be morally estranged from one another, whether it be culturally, ethnically, or geographically; instead they will conceive ethics from the utopian point of view of the universe, eternity or infinity, rather than have their duties and obligations defined by forces arising solely within the restrictive boundaries of "blood and territory". 
Often this derives from a spiritual or natural-law perspective holding that, for world peace to be achieved, conscience, properly understood, should generally be considered not as necessarily linked (often destructively) to fundamentalist religious ideologies, but as an aspect of universal consciousness, access to which is the common heritage of humanity. Thinking predicated on the development of world conscience is common to members of the Global Ecovillage Network such as the Findhorn Foundation, to international conservation organisations like Fauna and Flora International, and to performers of world music such as Alan Stivell. Non-government organizations, particularly through their work in agenda-setting, policy-making and implementation of human rights-related policy, have been referred to as the conscience of the world. Edward O. Wilson has developed the idea of consilience to encourage coherence of global moral and scientific knowledge, supporting the premise that "only unified learning, universally shared, makes accurate foresight and wise choice possible". Thus, world conscience is a concept that overlaps with the Gaia hypothesis in advocating a balance of moral, legal, scientific and economic solutions to modern transnational problems such as global poverty and global warming, through strategies such as environmental ethics, climate ethics, natural conservation, ecology, cosmopolitanism, sustainability and sustainable development, biosequestration and legal protection of the biosphere and biodiversity. The NGO 350.org, for example, seeks to attract world conscience to the problems associated with elevation in atmospheric greenhouse gas concentrations. The microcredit initiatives of Nobel Peace Prize winner Muhammad Yunus have been described as inspiring a "war on poverty that blends social conscience and business savvy". The Green party politician Bob Brown (who was arrested by the Tasmanian state police for a conscientious act of civil disobedience during the Franklin Dam protest) expresses world conscience in these terms: "the universe, through us, is evolving towards experiencing, understanding and making choices about its future"; one example of a policy outcome from such thinking is a global tax (see Tobin tax) of 1/10 of 1% on the worldwide speculative currency market, to alleviate global poverty and protect the biosphere. Such an approach sees world conscience best expressing itself through political reforms promoting democratically based globalisation or planetary democracy (for example, internet voting for global governance organisations (see world government) based on the model of "one person, one vote, one value"), which gradually will replace contemporary market-based globalisation. The American cardiologist Bernard Lown and the Russian cardiologist Yevgeniy Chazov were motivated in conscience, through studying the catastrophic public health consequences of nuclear war, to establish International Physicians for the Prevention of Nuclear War (IPPNW), which was awarded the Nobel Peace Prize in 1985 and continues to work to "heal an ailing planet". Worldwide expressions of conscience contributed to the decision of the French government to halt atmospheric nuclear tests at Mururoa in the Pacific in 1974 after 41 such explosions (although below-ground nuclear tests continued there into the 1990s). 
A challenge to world conscience was provided by an influential 1968 article by Garrett Hardin that critically analyzed the dilemma in which multiple individuals, acting independently after rationally consulting self-interest (and, he claimed, the apparently low 'survival-of-the-fittest' value of conscience-led actions), ultimately destroy a shared limited resource, even though each acknowledges that such an outcome is not in anyone's long-term interest. Hardin's conclusion that commons areas are practicably achievable only in conditions of low population density (and so their continuance requires state restriction on the freedom to breed) created controversy, as did his direct deprecation of the role of conscience in achieving individual decisions, policies and laws that facilitate global justice and peace, as well as sustainability and sustainable development of world commons areas, for example those officially designated as such under United Nations treaties (see common heritage of humanity). Areas designated common heritage of humanity under international law include the Moon, Outer Space, the deep sea bed, Antarctica, the world cultural and natural heritage (see World Heritage Convention) and the human genome. It will be a significant challenge for world conscience that, as world oil, coal, mineral, timber, agricultural and water reserves are depleted, there will be increasing pressure to commercially exploit common heritage of mankind areas. The philosopher Peter Singer has argued that the United Nations Millennium Development Goals represent the emergence of an ethics based not on national boundaries but on the idea of one world. Ninian Smart has similarly predicted that the increase in global travel and communication will gradually draw the world's religions towards a pluralistic and transcendental humanism characterized by an "open spirit" of empathy and compassion. Noam Chomsky has argued that forces opposing the development of such a world conscience include free market ideologies that valorise corporate greed in nominal electoral democracies, where advertising, shopping malls and indebtedness shape citizens into consumers apathetic towards the information and access necessary for democratic participation. John Passmore has argued that mystical considerations about the global expansion of all human consciousness should take into account that, if as a species we do become something much superior to what we are now, it will be as a consequence of conscience not only implanting a goal of moral perfectibility, but also assisting us to remain periodically anxious, passionate and discontented, for these are necessary components of care and compassion. The Committee on Conscience of the US Holocaust Memorial Museum has targeted genocides such as those in Rwanda, Bosnia, Darfur, the Congo and Chechnya as challenges to the world's conscience. Oscar Arias Sanchez has criticised global arms industry spending as a failure of conscience by nation states: "When a country decides to invest in arms, rather than in education, housing, the environment, and health services for its people, it is depriving a whole generation of its right to prosperity and happiness. We have produced one firearm for every ten inhabitants of this planet, and yet we have not bothered to end hunger when such a feat is well within our reach. This is not a necessary or inevitable state of affairs. It is a deliberate choice" (see Campaign Against Arms Trade). 
US House of Representatives Speaker Nancy Pelosi, after meeting with the 14th Dalai Lama during the violent 2008 protests in Tibet and their aftermath, said: "The situation in Tibet is a challenge to the conscience of the world." Nelson Mandela, through his example and words, has been described as having shaped the conscience of the world. The Right Livelihood Award is awarded yearly in Sweden to those people, mostly strongly motivated by conscience, who have made exemplary practical contributions to resolving the great challenges facing our planet and its people. In 2009, for example, along with Catherine Hamlin (obstetric fistula; see Fistula Foundation), David Suzuki (promoting awareness of climate change) and Alyn Ware (nuclear disarmament), René Ngongo shared the Right Livelihood Award "for his courage in confronting the forces that are destroying the Congo Basin's rainforests and building political support for their conservation and sustainable use". Avaaz, launched in January 2007, is one of the largest global online organizations promoting conscience-driven activism on issues such as climate change, human rights, animal rights, corruption, poverty, and conflict, thus "closing the gap between the world we have and the world most people everywhere want". == Notable examples of modern acts based on conscience == In a notable contemporary act of conscience, Christian bushwalker Brenda Hean protested against the flooding of Lake Pedder despite threats, a campaign that ultimately led to her death. Another was the campaign by Ken Saro-Wiwa against oil extraction by multinational corporations in Nigeria that led to his execution. So too was the act of the Tank Man, or Unknown Rebel, photographed holding his shopping bag in the path of tanks during the protests at Beijing's Tiananmen Square on 5 June 1989. The actions of United Nations Secretary General Dag Hammarskjöld to try to achieve peace in the Congo despite the (eventuating) threat to his life were strongly motivated by conscience, as is reflected in his diary, Vägmärken (Markings). Another example involved the actions of Warrant Officer Hugh Thompson Jr. to try to prevent the My Lai massacre in the Vietnam War. Evan Pederick voluntarily confessed and was convicted of the Sydney Hilton bombing, stating that his conscience could not tolerate the guilt and that "I guess I was quite unique in the prison system in that I had to keep proving my guilt, whereas everyone else said they were innocent." Vasili Arkhipov was a Russian naval officer on the out-of-radio-contact Soviet submarine B-59, which was being depth-charged by US warships during the Cuban Missile Crisis; his dissent when two other officers decided to launch a nuclear torpedo (unanimous agreement to launch was required) may have averted a nuclear war. In 1963, the Buddhist monk Thich Quang Duc performed a famous act of self-immolation to protest against alleged persecution of his faith by the South Vietnamese regime of Ngo Dinh Diem. Conscience played a major role in the actions of anaesthetist Stephen Bolsin to whistleblow (see list of whistleblowers) on incompetent paediatric cardiac surgeons at the Bristol Royal Infirmary. Jeffrey Wigand was motivated by conscience to expose the Big Tobacco scandal, revealing that executives of the companies knew that cigarettes were addictive and approved the addition of carcinogenic ingredients to the cigarettes. 
David Graham, a Food and Drug Administration employee, was motivated by conscience to blow the whistle on the finding that the arthritis pain-reliever Vioxx increased the risk of cardiovascular deaths, although the manufacturer had suppressed this information. Rick Piltz, from the U.S. global warming Science Program, blew the whistle on a White House official who ignored majority scientific opinion to edit a climate change report ("Our Changing Planet") to reflect the Bush administration's view that the problem was unlikely to exist. Muntadhar al-Zaidi, an Iraqi journalist, was imprisoned and allegedly tortured for his act of conscience in throwing his shoes at George W. Bush. Mordechai Vanunu, an Israeli former nuclear technician, acted on conscience to reveal details of Israel's nuclear weapons program to the British press in 1986; he was kidnapped by Israeli agents, transported to Israel, convicted of treason and spent 18 years in prison, including more than 11 years in solitary confinement. At the awards ceremony for the 200 metres at the 1968 Summer Olympics in Mexico City, John Carlos, Tommie Smith and Peter Norman ignored death threats and official warnings in order to take part in an anti-racism protest that destroyed their respective careers. W. Mark Felt, an agent of the United States Federal Bureau of Investigation who retired in 1973 as the Bureau's Associate Director, acted on conscience to provide reporters Bob Woodward and Carl Bernstein with information that resulted in the Watergate scandal. Conscience was a major factor in US Public Health Service officer Peter Buxtun revealing the Tuskegee syphilis experiment to the public. The 2008 attack by the Israeli military on civilian areas of Palestinian Gaza was described as a "stain on the world's conscience". Conscience was a major factor in the refusal of Aung San Suu Kyi to leave Burma despite house arrest and persecution by the military dictatorship in that country. Conscience was a factor in Peter Galbraith's criticism of fraud in the 2009 Afghanistan election despite it costing him his United Nations job. Conscience motivated Bunnatine Greenhouse to expose irregularities in the contracting of the Halliburton company for work in Iraq. Naji al-Ali, a popular cartoon artist in the Arab world loved for his defense of ordinary people and for his criticism of repression and despotism by both the Israeli military and Yasser Arafat's PLO, was murdered for refusing to compromise with his conscience. The journalist Anna Politkovskaya provided (prior to her murder) an example of conscience in her opposition to the Second Chechen War and then-Russian President Vladimir Putin. Conscience motivated the Russian human rights activist Natalia Estemirova, who was abducted and murdered in Grozny, Chechnya in 2009. The death of Neda Agha-Soltan arose from conscience-driven protests against the 2009 Iranian presidential election. Muslim lawyer Shirin Ebadi (winner of the 2003 Nobel Peace Prize) has been described as the 'conscience of the Islamic Republic' for her work in protecting the human rights of women and children in Iran. The human rights lawyer Gao Zhisheng, often referred to as the 'conscience of China', who had previously been arrested and allegedly tortured after calling for respect for human rights and for constitutional reform, was abducted by Chinese security agents in February 2009. 
2010 Nobel Peace Prize winner Liu Xiaobo, in his final statement before being sentenced by a closed Chinese court to over a decade in jail as a political prisoner of conscience, stated: "For hatred is corrosive of a person’s wisdom and conscience; the mentality of enmity can poison a nation’s spirit." Sergei Magnitsky, a lawyer in Russia, was arrested, held without trial for almost a year and died in custody as a result of exposing corruption. On 6 October 2001, Laura Whittle was a naval gunner on HMAS Adelaide (FFG 01), under orders to implement a new border protection policy, when the ship encountered the SIEV-4 (Suspected Illegal Entry Vessel-4) refugee boat in choppy seas. After being ordered to fire warning shots from her .50 calibre machine gun to make the boat turn back, she saw it beginning to break up and sink, with a father on board holding out his young daughter so that she might be saved (see Children Overboard Affair). Whittle jumped 12 metres into the sea without a life vest to help save the refugees from drowning, thinking "this isn't right; this isn't how things should be." In February 2012, journalist Marie Colvin was deliberately targeted and killed by the Syrian Army in Homs during the Syrian uprising and Siege of Homs, after she decided to stay at the "epicentre of the storm" in order to "expose what is happening". In October 2012, the Taliban organised the attempted murder of Malala Yousafzai, a teenage girl who had been campaigning, despite their threats, for female education in Pakistan. In December 2012, the 2012 Delhi gang rape case was said to have stirred the collective conscience of India to civil disobedience and public protest at the lack of legal action against rapists in that country (see Rape in India). In June 2013, Edward Snowden revealed details of PRISM, a US National Security Agency internet and electronic communications surveillance program, because of a conscience-felt obligation to the freedom of humanity greater than obedience to the laws that bound his employment. == In literature, art, film, and music == The ancient epic of the Indian subcontinent, the Mahabharata of Vyasa, contains two pivotal moments of conscience. The first occurs when the warrior Arjuna, overcome with compassion at the prospect of killing his relatives on the opposing side in war, receives counsel (see Bhagavad-Gita) from Krishna about his spiritual duty ("work as though you are performing a sacrifice for the general good"). The second, at the end of the saga, is when King Yudhishthira, having alone survived the moral tests of life, is offered eternal bliss, only to refuse it because a faithful dog is prevented by purported divine rules and laws from coming with him. The French author Montaigne (1533–1592), in one of the most celebrated of his essays ("On experience"), expressed the benefits of living with a clear conscience: "Our duty is to compose our character, not to compose books, to win not battles and provinces, but order and tranquillity in our conduct. Our great and glorious masterpiece is to live properly". In his famous Japanese travel journal Oku no Hosomichi (Narrow Road to the Deep North), composed of mixed haiku poetry and prose, Matsuo Bashō (1644–94), in attempting to describe the eternal in this perishable world, is often moved in conscience; for example, by a thicket of summer grass being all that remains of the dreams and ambitions of ancient warriors. 
Chaucer's "Franklin's Tale" in The Canterbury Tales recounts how a young suitor releases a wife from a rash promise because of the respect in his conscience for the freedom to be truthful, gentle and generous. The critic A. C. Bradley discusses the central problem of Shakespeare's tragic character Hamlet as one where conscience in the form of moral scruples deters the young Prince with his "great anxiety to do right" from obeying his father's hell-bound ghost and murdering the usurping King ("is't not perfect conscience to quit him with this arm?" (v.ii.67)). Bradley develops a theory about Hamlet's moral agony relating to a conflict between "traditional" and "critical" conscience: "The conventional moral ideas of his time, which he shared with the Ghost, told him plainly that he ought to avenge his father; but a deeper conscience in him, which was in advance of his time, contended with these explicit conventional ideas. It is because this deeper conscience remains below the surface that he fails to recognise it, and fancies he is hindered by cowardice or sloth or passion or what not; but it emerges into light in that speech to Horatio. And it is just because he has this nobler moral nature in him that we admire and love him". The opening words of Shakespeare's Sonnet 94 ("They that have pow'r to hurt, and will do none") have been admired as a description of conscience. So has John Donne's commencement of his poem "Goodfriday, 1613. Riding Westward": "Let man's soul be a sphere, and then, in this, Th' intelligence that moves, devotion is;" Anton Chekhov, in his plays The Seagull, Uncle Vanya and Three Sisters, describes the tortured emotional states of doctors who at some point in their careers have turned their back on conscience. In his short stories, Chekhov also explored how people misunderstood the voice of a tortured conscience. A promiscuous student, for example, in The Fit describes it as a "dull pain, indefinite, vague; it was like anguish and the most acute fear and despair ... in his breast, under the heart", and the young doctor examining the misunderstood agony of compassion experienced by the factory owner's daughter in From a Case Book calls it an "unknown, mysterious power ... in fact close at hand and watching him." Characteristically, Chekhov's own conscience drove him on the long journey to Sakhalin to record and alleviate the harsh conditions of the prisoners at that remote outpost. As Irina Ratushinskaya writes in the introduction to that work: "Abandoning everything, he travelled to the distant island of Sakhalin, the most feared place of exile and forced labour in Russia at that time. One cannot help but wonder why? Simply, because the lot of the people there was a bitter one, because nobody really knew about the lives and deaths of the exiles, because he felt that they stood in greater need of help than anyone else. A strange reason, maybe, but not for a writer who was the epitome of all the best traditions of a Russian man of letters. Russian literature has always focused on questions of conscience and was, therefore, a powerful force in the moulding of public opinion." E. H. 
Carr writes of Dostoevsky's character Raskolnikov, the young student in the novel Crime and Punishment who decides to murder a 'vile and loathsome' old woman moneylender on the principle of transcending conventional morals: "the sequel reveals to us not the pangs of a stricken conscience (which a less subtle writer would have given us) but the tragic and fruitless struggle of a powerful intellect to maintain a conviction which is incompatible with the essential nature of man." Hermann Hesse wrote his Siddhartha to describe how a young man in the time of the Buddha follows his conscience on a journey to discover a transcendent inner space where all things could be unified and simply understood, ultimately discovering that personal truth through selfless service as a ferryman. J. R. R. Tolkien in his epic The Lord of the Rings describes how only the hobbit Frodo is pure enough in conscience to carry the ring of power through war-torn Middle-earth to destruction in the Cracks of Doom, Frodo determining at the end to journey without weapons, and being saved from failure by his earlier decision to spare the life of the creature Gollum. Conor Cruise O'Brien wrote that Albert Camus was the writer most representative of the Western consciousness and conscience in its relation to the non-Western world. Harper Lee's 1960 novel To Kill a Mockingbird portrays Atticus Finch (played by Gregory Peck in the classic film adaptation of the book) as a lawyer true to his conscience who sets an example to his children and community. The Robert Bolt play A Man For All Seasons focuses on the conscience of Catholic lawyer Thomas More in his struggle with King Henry VIII ("the loyal subject is more bounden to be loyal to his conscience than to any other thing"). George Orwell wrote his novel Nineteen Eighty-Four on the isolated island of Jura, Scotland, to describe how a man (Winston Smith) attempts to develop critical conscience in a totalitarian state which watches every action of the people and manipulates their thinking with a mixture of propaganda, endless war and thought control through language control (doublethink and Newspeak) to the point where prisoners look up to and even love their torturers. In the Ministry of Love, Winston's torturer (O'Brien) states: "You are imagining that there is something called human nature which will be outraged by what we do and will turn against us. But we create human nature. Men are infinitely malleable". A tapestry copy of Picasso's Guernica depicting a massacre of innocent women and children during the Spanish Civil War is displayed on the wall of the United Nations building in New York City, at the entrance to the Security Council room, demonstrably as a spur to the conscience of representatives from the nation states. Albert Tucker painted Man's Head to capture the moral disintegration, and lack of conscience, of a man convicted of kicking a dog to death. The Post-Impressionist painter Vincent van Gogh wrote in a letter to his brother Theo in 1878 that "one must never let the fire in one's soul die, for the time will inevitably come when it will be needed. And he who chooses poverty for himself and loves it possesses a great treasure and will hear the voice of his conscience address him ever more clearly. He who hears that voice, which is God's greatest gift, in his innermost being and follows it, finds in it a friend at last, and he is never alone! ... 
That is what all great men have acknowledged in their works, all those who have thought a little more deeply and searched and worked and loved a little more than the rest, who have plumbed the depths of the sea of life." The 1957 Ingmar Bergman film The Seventh Seal portrays the journey of a medieval knight (Max von Sydow) returning disillusioned from the crusades ("what is going to happen to those of us who want to believe, but aren't able to?") across a plague-ridden landscape, undertaking a game of chess with the personification of Death until he can perform one meaningful altruistic act of conscience (overturning the chess board to distract Death long enough for a family of jugglers to escape in their wagon). The 1942 film Casablanca centers on the development of conscience in the cynical American Rick Blaine (Humphrey Bogart) in the face of oppression by the Nazis and the example of the resistance leader Victor Laszlo. The David Lean and Robert Bolt screenplay for Doctor Zhivago (an adaptation of Boris Pasternak's novel) focuses strongly on the conscience of a doctor-poet in the midst of the Russian Revolution (in the end "the walls of his heart were like paper"). The 1982 Ridley Scott film Blade Runner focuses on the struggles of conscience between and within a bounty hunter (Rick Deckard (Harrison Ford)) and a renegade replicant android (Roy Batty (Rutger Hauer)) in a future society which refuses to accept that forms of artificial intelligence can have aspects of being such as conscience. Johann Sebastian Bach wrote his last great choral composition, the Mass in B minor (BWV 232), to express the alternating emotions of loneliness, despair, joy and rapture that arise as conscience reflects on a departed human life. Here J. S. Bach's use of counterpoint and contrapuntal settings, his dynamic discourse of melodically and rhythmically distinct voices seeking forgiveness of sins ("Qui tollis peccata mundi, miserere nobis"), evokes a spiraling moral conversation of all humanity, expressing his belief that "with devotional music, God is always present in his grace". Ludwig van Beethoven's meditations on illness, conscience and mortality in the Late String Quartets led to his dedicating the third movement of the String Quartet in A minor, Op. 132 (1825) (see String Quartet No. 15) as a "Hymn of Thanksgiving to God of a convalescent". John Lennon's work "Imagine" owes much of its popular appeal to its evocation of conscience against the atrocities created by war, religious fundamentalism and politics. The Beatles' George Harrison-written track "The Inner Light" sets to Indian raga music a verse from the Tao Te Ching: "without going out of your door you can know the ways of heaven". In the 1986 movie The Mission, the guilty conscience and penance of the slave trader Mendoza are made more poignant by the haunting oboe music of Ennio Morricone ("On Earth as it is in Heaven"). The song Sweet Lullaby by Deep Forest is based on a traditional Baegu lullaby from the Solomon Islands called "Rorogwela" in which a young orphan is comforted as an act of conscience by his older brother. The Dream Academy song 'Forest Fire' provided an early warning of the moral dangers of our 'black cloud' 'bringing down a different kind of weather ... letting the sunshine in, that's how the end begins'. 
The American Society of Journalists and Authors (ASJA) presents the Conscience-in-Media Award to journalists whom the society deems worthy of recognition for demonstrating "singular commitment to the highest principles of journalism at notable personal cost or sacrifice". The Ambassador of Conscience Award, Amnesty International's most prestigious human rights award, takes its inspiration from a poem written by Irish Nobel prize-winning poet Seamus Heaney called "The Republic of Conscience". == See also == == Further reading == Slater S.J., Thomas (1925). "Book 2: On Conscience". A manual of moral theology for English-speaking countries. Burns Oates & Washbourne Ltd. == References == == External links == The dictionary definition of conscience at Wiktionary. Quotations related to Conscience at Wikiquote.
Wikipedia/Conscience
Rainbow gravity (or "gravity's rainbow") is a theory that different wavelengths of light experience different gravity levels and are separated in the same way that a prism splits white light into a rainbow. This phenomenon would be imperceptible in areas of relatively low gravity, such as Earth, but would be significant in areas of extremely high gravity, such as a black hole. As such, the theory claims to disprove that the universe had a beginning or Big Bang, since Big Bang cosmology requires all wavelengths of light to be affected by gravity to the same extent. The theory was first proposed in 2003 by physicists Lee Smolin and João Magueijo, and claims to bridge the gap between general relativity and quantum mechanics. Scientists are currently attempting to detect rainbow gravity using the Large Hadron Collider. == Background == Rainbow gravity theory's origin is largely the product of the disparity between general relativity and quantum mechanics. More specifically, "locality," or the concept of cause and effect that drives the principles of general relativity, is mathematically irreconcilable with quantum mechanics. This issue stems from the incompatible mathematical frameworks of the two fields; in particular, the fields apply radically different mathematical approaches in describing the concept of curvature in four-dimensional space-time. Historically, this mathematical split begins with the disparity between Einstein's theories of relativity, which saw physics through the lens of causality, and quantum physics, which treats underlying physical processes as inherently probabilistic. The prevailing notion about cosmic change is that the universe is expanding at a constantly accelerating rate; moreover, it is understood that as one traces the universe's history backwards one finds that it was, at one point, far denser. If rainbow gravity is correct, it prohibits a singularity such as that postulated in the Big Bang. This indicates that, when viewed in reverse, the universe slowly approaches a point of terminal density without ever reaching it, implying that the universe does not possess a point of origin. == Criticism == There are stringent constraints on energy-dependent speed-of-light scenarios. Based on these, Sabine Hossenfelder has strongly criticised the rainbow gravity concept, stating that "It is neither a theory nor a model, it is just an idea that, despite more than a decade of work, never developed into a proper model. Rainbow gravity has not been shown to be compatible with the standard model. There is no known quantization of this approach and one cannot describe interactions in this framework at all. Moreover, it is known to lead to non-localities which are ruled out already. For what I am concerned, no papers should get published on the topic until these issues have been resolved." == See also == Steady-state model Eternal inflation Cyclic model == References ==
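As a sketch of how this energy dependence is commonly written in the rainbow-gravity literature (an illustrative formulation along the lines of the Magueijo–Smolin proposal, assumed here rather than quoted from the article above), the special-relativistic dispersion relation is modified by energy-dependent "rainbow functions" f and g, {\displaystyle E^{2}f^{2}(E/E_{\text{P}})-p^{2}c^{2}g^{2}(E/E_{\text{P}})=m^{2}c^{4},} where E_P is the Planck energy. Both functions approach 1 for energies far below E_P, so that ordinary special relativity is recovered at low energies, while particles of different energies effectively see different space-time geometries as E approaches the Planck scale.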
Wikipedia/Rainbow_gravity_theory
Subtle is the Lord: The Science and the Life of Albert Einstein is a biography of Albert Einstein written by Abraham Pais. First published in 1982 by Oxford University Press, the book is one of the most acclaimed biographies of the scientist. This was not the first popular biography of Einstein, but it was the first to focus on his scientific research as opposed to his life as a popular figure. Pais, renowned for his work in theoretical particle physics, was a friend of Einstein's at the Institute for Advanced Study. Originally published in English in the United States and the United Kingdom, the book has translations in over a dozen languages. Pais later released a sequel to the book in 1994 titled Einstein Lived Here and, after his death in 2000, the University Press released a posthumous reprint of the biography in 2005, with a new foreword by Roger Penrose. Considered very popular for a science book, the biography sold tens of thousands of copies of both paperback and hardcover versions in its first year. The book has received many reviews and, the year after its initial publication, it won both the 1983 National Book Award for Nonfiction, in Science (Hardcover), and the 1983 Science Writing Award. == Background == Before becoming a science historian, Pais was a theoretical physicist and is said to be one of the founders of theoretical particle physics. Pais knew Einstein and they developed a friendship over the last decade of Einstein's life, particularly while they were colleagues at the Institute for Advanced Study in Princeton. He drew from this experience when writing the book, which includes several vignettes of their interactions, including a story of his final visit to see Einstein, who was ill and would die a few months later. The Quantum Theory portion of the book was previously published, in similar form, in a 1979 article Pais coauthored in Reviews of Modern Physics. The book draws its title from a quote by Einstein that translates to "Subtle is the Lord, but malicious he is not". The quote is inscribed in stone at Princeton University, where Einstein made the statement during a 1921 visit to deliver the lectures that would later be published as The Meaning of Relativity. When asked later in life to elaborate on the statement, Einstein said in 1930: "Nature hides its secret because of its essential loftiness, but not by means of ruse." Isaac Asimov summarized this as meaning "the laws of nature were not easy to uncover, but once uncovered, they would not give uncertain result", comparing to another famous Einstein quote: "I cannot believe that God would play dice with the universe". == Themes == The book serves as both a biography of Albert Einstein and a catalog of his works and scientific achievements. Though there were several well-known biographies of Einstein prior to the book's publication, this was the first which focused on his scientific research, as opposed to his life as a popular figure. Einstein himself, in 1946 at the age of 67, expressed a desire to be remembered for his work and not his doings, stating "the essential in the being of a man of my type lies precisely in what he thinks and how he thinks, not in what he does or suffers." 
Beyond the biography, the book serves as the first full-scale exposition of Albert Einstein's scientific contributions; one reviewer noted that, although literature on Einstein is not lacking, prior to this book, someone trying to research Einstein's scientific contributions "faced a choice between reading one or more popularizations of limited scope (and often even more limited depth) and trying to read and digest the almost 300 scientific papers he produced." == Content == Pais explains in the book's introduction that an illustration of Einstein's biography would have his work in special relativity building toward general relativity, his work in statistical physics building toward his work in quantum theory, and all of them building toward his work in unified field theory; the book's organization represents his attempt to respect that outline. The book has 31 chapters that are divided into eight major sections, with purely biographical chapters set in italics. These italicized chapters present a non-technical overview of Einstein's life, while the bulk of the book explores Einstein's contributions in mathematical detail. The first part of the book, titled Introductory, serves as a quick summary and outline of the book's contents. The second section, on statistical physics, includes Einstein's contributions to the field between 1900 and 1910 as well as a discussion of the probabilistic interpretation of thermodynamics. The third section, on special relativity, describes the history of special relativity and Einstein's early contributions to relativity theory as well as their relation to the work of Henri Poincaré and Hendrik Lorentz. The next section, on general relativity, covers Einstein's development of the theory from around 1907 to 1915 and the development of the universally covariant gravitation field equations. The chapter also includes discussion on the development of general relativity by other scientists from 1915 to 1980. Section six includes a biography chapter on Einstein's later life and a discussion of his work in unified field theory. The final section, on quantum theory, covers Einstein's work in the field extending over his entire career. == Reception == === Reviews === The book received critical acclaim upon its initial release and was subsequently translated into fifteen languages, establishing Pais as an internationally renowned scholar in the subject. There were many reviews of the book, including articles published in magazines such as Scientific American, The Christian Science Monitor, and The New York Review of Books, as well as newspaper articles published in The New York Times, The Los Angeles Times, The Leader-Post, The Observer, The Age, The Philadelphia Inquirer, The Santa Cruz Sentinel, and The Arizona Republic. The book has received favorable mentions in reviews of other works and papers discussing the history of Einstein's contributions. Of the reviews of the 1994 sequel, Einstein Lived Here, Engelbert Schücking wrote that the original biography was "magisterial", and Roger Highfield opened his review of the new volume with: "Among my collection of books on Einstein, there is a dog-eared copy of... Subtle is the Lord. Its poor condition pays tribute to the value of this brilliantly researched book". Bruno Augenstein wrote in 1994 that the book was a "definitive" scientific biography of Einstein. 
Schücking, in a 2007 review of the book Einstein: His Life and Universe by Walter Isaacson, stated that "the wonderful book by Pais, which was republished by Oxford University Press in 2005, with a preface by Roger Penrose, is still the best introduction to Einstein’s physics." Similarly, a 2005 article discussing "Einstein's quest for unification" by John Ellis opened by stating that the book is the "definitive scientific biography of Einstein" and that it "delivered an unequivocal verdict on Einstein's quest for a unified field theory". On the book's release in 1982, John Stachel wrote a positive review stating that it gives a detailed account of nearly all of Einstein's significant scientific contributions along with historical context from an "eminent physicist's perspective". Stachel went on to say that the biography sections "constitutes the most accurate account of Einstein's life yet written" and that the book is "both unique and indispensable for any serious Einstein scholar". He closed the review by saying the book would "serve not only as a source of profound insight and pleasure to many readers but as a further spur to the current renaissance of Einstein studies". In a second 1982 review, John Allen Paulos wrote that it "is a superb book". Banesh Hoffmann reviewed the book in 1983, calling it "outstanding", a "lively book" and a "major contribution to Einstein scholarship". Isaac Asimov wrote that the book gives a "concise history of the physics involved" and that it is "engagingly written". William Hunter McCrea wrote a critical review of the book in 1983, taking issue with several of Pais' statements, but concluded that, overall, it was "a major work on Einstein" and that "[f]or those who know well what Einstein achieved, but who may have wondered how he did it, this book should tell them almost all they can ever hope to learn." A third 1983 review stated that the book is a "superb biography of Einstein" and was likely to "become required reading for anyone interested in the history of 20th century physics". The book was also reviewed in German that same year. Among newspaper coverage, the book's review was the lead article in the issue of The New York Times Book Review that carried it. The article, written by Timothy Ferris in 1982, stated that "anyone with an interest in Einstein should give this splendid book a try". After reviewing the book, Ferris closed by saying that "[of] all the biographies of Einstein, this, I think, is the one he himself would have liked the best." Another newspaper review, by Peter Mason, stated that the book's blending of a popular biography into a technical account of Einstein's scientific work was "so skillfully done that the flavor of the complicated arguments can generally be savored by those with little mathematical background." A third newspaper review, written by John Naughton, argued that the book provides an "uncompromising chronological account of Einstein’s theoretical work, a technical story written by a physicist for physicists", but that a non-technical biography is woven throughout, which he describes as a "book within a book". In a 1984 review of the book, Michael Redhead wrote that there had been "many biographies of Einstein but none of them can even begin to compete with the work of Pais." He praised the book for its completeness, stating that it goes much further than previous works in discussing Einstein's contributions as a whole. 
Redhead noted one "significant omission", relating to Erich Kretschmann's critique of universal covariance, but went on to close the review by writing "I wholeheartedly recommend anyone interested in the history of modern physics to read Pais's extraordinarily able book". In a second 1984 review, Martin J. Klein wrote that the book is "rich and rewarding" and that it "is written in a lively and effective style". Felix Gilbert, in a third 1984 review, wrote that the book is "both sensitive and thorough" and that he is "inclined to regard" it as Einstein's "definitive biography". The book was also reviewed in French the same year by Jean Largeault. Among other 1984 reviews, one stated that it was a "monumental biography" and that it "does full justice to the title, the Science and the Life of Einstein", written with "tremendous erudition and sensibility". A 1986 review of the book stated that the "book, despite its blindspots, shortcomings, and difficulties for the uninitiated reader, will remain an indispensable source for anyone interested" in Einstein's life. Among criticisms of various aspects of the book, several reviews noted that understanding many parts of the book would require a background in physics. Some reviewers also noted that Pais did not expound on Einstein's political and social views beyond a brief presentation. The reference system used in the book was also criticized by some reviewers as "unnecessarily complicated". Timothy Ferris noted several other problems in his New York Times review and pointed out that Pais has a tendency to be "overly reticent". In his 1983 review, Banesh Hoffmann wrote that the book contained "some strange omissions" relating to some of Einstein's shortcomings and statements he made. Peter Mason wrote: "One deficiency is the failure to relate Einstein’s development to the social conditions of the time." In his 1982 review, John Stachel wrote that, while the order of Einstein's contributions was sound for the first four chapters, the part on quantum mechanics backtracks to the beginning of Einstein's career once again and so overlaps with the other parts of the book. He went on to praise the book's translations of quotes from Einstein and others. In reference to the biography sections, he went on to state that "[t]he only issue on which I would seriously disagree is his effort to play down or even deny the rebellious element in Einstein's personality." Stachel wrote that the Statistical Physics and The Quantum Theory parts of the book were the "most successful", stating "[n]ot only does Pais give an excellent presentation of Einstein's contributions to the development of quantum theory, he explains why Einstein felt that it never became a fundamental theory in his sense, even after the development of quantum mechanics". He also criticized the book's evaluation of the paper on the EPR paradox for neglecting certain counter-arguments. In a critical 1984 review, Paul Forman wrote that much of the information in the biography sections of the book was previously unpublished and that Pais gave a better account of Einstein's childhood than had previously been available, but that by "allotting so little space to so large a life, Pais perforce omits far more than he includes, and these few pages, dense with ill-considered detail, fail to convey any sense of the man and his situation". 
He went on to note that the book does not include any details on Einstein's experimental and technological designs, apart from a single account of a 1915 experiment with Wander Johannes de Haas. Forman claimed that Pais rushed the book through development, writing that despite Pais' "mastery of the sources" and the book's scientific insights, "the account which he has hastily put together shows everywhere the marks of unpolished and unreflective work." He went on to write that Pais' observations of Einstein's philosophy were "quite superficial, though not wholly unoriginal". Forman closed the review by taking issue with Pais' statement that "the tour ends here" at the first chapter, which he felt was a "patronizing, self-congratulatory distinction between the soft, talky stuff and the real stuff" akin to saying "then the physics begins". Forman argued that the physics is "conflated" with "another creation of the physicists: a Parnassian world of apotheosized 'founders' and 'major figures'", which he states is "a fantasy world of no greater intrinsic importance than the ancestral myths of more primitive tribes and clans". === Development of relativity === As part of the relativity priority dispute, Pais dismissed E. T. Whittaker's views on the history of special relativity, as expressed in the 1953 book A History of the Theories of Aether and Electricity: The Modern Theories. In that book, Whittaker claimed that Henri Poincaré and Hendrik Lorentz developed the theory of special relativity before Albert Einstein. In a chapter titled "the edge of history", Pais stated that Whittaker's treatment shows "how well the author's lack of physical insight matches his ignorance of the literature". One reviewer wrote, in agreement with the statement, that "Pais correctly dismisses" Whittaker's point of view in the "controversy concerning priority" with an "apt sentence". Another reviewer, William Hunter McCrea in 1983, stated that the dismissal was put "in terms that can only be called scurrilous" and that "[t]o one who knew Whittaker and his regard for historical accuracy the opinion is lamentable." Outside of the priority dispute, several reviewers noted that, at the time of publication, there was no consensus among scientific scholars on some questions in the history of special and general relativity, and that Pais makes multiple assertions that are based on disputable evidence. Cited as evidence for this charge was Pais' claim that the Michelson–Morley experiment did not play a major role in Einstein's development of the special theory. Noting the potential controversies, Timothy Ferris wrote that Pais "is less to be blamed for having reached arguable conclusions in matters of intense scholarly debate than praised for having had the grit to confront them." In his 1982 review, John Stachel criticized the book for not discussing the Fizeau experiment and for using an archaic explanation of the twin paradox of special relativity. Stachel also noted that Pais misattributed a quote to Einstein related to the paradox. He went on to state his belief that Pais "missed the mark" in his presentation of the postulates of special relativity, writing that the book neglects evidence that Einstein had considered alternative formulations before adopting his second postulate. 
Stachel also noted that Pais seemed to not have studied the notebooks Einstein wrote during the development of general relativity and stated that one of them makes Pais' version of the development of general relativity "untenable". Other reviewers brought up specific issues with the development as well, including William Hunter McCrea, who criticized the book for not including Sir Arthur Eddington's book The Mathematical Theory of Relativity in his list of books on the development of general relativity. McCrea went on to state that Pais included details of a non-existent woman who fainted from excitement upon Einstein's arrival and that the woman was later randomly transformed into a man. McCrea claimed that "[s]uch indications make one uncertain about the judgements and historical details in the book". In his 1983 review, Banesh Hoffmann noted that Pais fails to mention "Einstein's long-held erroneous belief that if one went from Minkowski coordinates to more general coordinates, one would no longer be dealing with the special theory of relativity", but that he "makes ample amends" by including a quote from Einstein on the topic, stating that "[o]ne could hardly want a clearer indication of the extraordinary power of Einstein's intuition". == Awards == The New York Times listed the volume as one of its "Notable Books of the Year" in 1982 with a caption that read: "The first biography to emphasize the physicist's scientific research rather than his life is 'splendid,' if 'written in a rigorous vocabulary.'" The book won 1983's National Book Award for Nonfiction in the category of hardcover science books. After his death in 2000, Pais' obituary in The Los Angeles Times noted that his book was "considered a definitive work" on Einstein. In recognition of Pais' contributions to the history of science, the American Physical Society and American Institute of Physics established the Abraham Pais Prize for History of Physics in 2005. == Publication history == The book was originally published in English in 1982 by Oxford University Press with ISBN 0-19-853907-X. The initial publication of the book was very popular; over 30,000 hardcover copies and a similar number of paperback copies were sold worldwide during its first year. The book performed particularly well in the United States, with 25,000 of the 30,000 copies of the hardcover edition sold there while another 2,500 were sold in Great Britain. It was reprinted in 2005, also by Oxford University Press, with ISBN 978-0-19-152402-8 with a new introduction by Roger Penrose. As of 2011, the book had been translated into fifteen languages. Among others, it has translations in Chinese, French, German, Italian, Japanese, Portuguese, and Russian. === English editions === — (1982). "Subtle is the Lord-- " : the science and the life of Albert Einstein. Oxford: Oxford University Press. ISBN 0-19-853907-X. OCLC 8195995. — (2005). "Subtle is the Lord-- " : the science and the life of Albert Einstein. Oxford: Oxford University Press. ISBN 978-0-19-152402-8. OCLC 646798828. === Foreign translations === — (1988). Shang di shi wei miao de ... : Aiyinsitan de ke xue yu sheng ping (in Chinese). Chen, Chongguang., 陈崇光. (Di 1 ban ed.). Beijing: Ke xue ji shu wen xian chu ban she. ISBN 7-5023-0578-5. OCLC 276319557. — (1998). "Shang di nan yi zhuo mo ..." : Aiyinsitan de ke xue yu sheng huo (in Chinese). Guangzhou Shi: Guangdong jiao yu chu ban she. ISBN 7-5406-4088-X. OCLC 85836939. — (2005). Albert Einstein : la vie et l'oeuvre (in French). 
Jeanmougin, Christian., Seyrès, Hélène. Paris. ISBN 2-10-049389-2. OCLC 492653013. — (1986). "Raffiniert ist der Herrgott ..." Albert Einstein; e. wiss. Biographi (in German). Braunschweig. ISBN 978-3-528-08560-5. OCLC 241815350. — (2009). "Raffiniert ist der Herrgott ..." Albert Einstein, eine wissenschaftliche Biographie (in German). Heidelberg. ISBN 978-3-8274-2437-2. OCLC 430531000. — (2008). La scienza e la vita di Albert Einstein (in Italian). Torino: Boringhieri. ISBN 978-88-339-1927-0. OCLC 799604290. — (1987). Kami wa rōkai ni shite (in Japanese). Kaneko, Tsutomu, 1933-, 金子, 務, 1933-. 産業図書. ISBN 4-7828-0035-5. OCLC 674588332. — (1993). Subtil é o Senhor vida e pensamento de Albert Einstein (in Portuguese) (1. ed.). Lisboa. ISBN 972-662-290-5. OCLC 722488610. — (1995). "Sutil e o senhor - " : a ciencia e a vida de Albert Einstein (in Portuguese). Rio de Janeiro (RJ): Nova Fronteira. ISBN 85-209-0631-1. OCLC 816725702. — (1989). Naučnaâ dejatel'nost' i žizn' Al'berta Ejnštejna (in Russian). Logunov, Anatolij Alekseevič., Macarskij, V. I., Macarskij, O. I. Moskva. ISBN 5-02-014028-7. OCLC 751047039. == See also == List of scientific publications by Albert Einstein List of winners of the National Book Award Albert Einstein: Creator and Rebel Einstein and Religion Einstein for Beginners I Am Albert Einstein Introducing Relativity == References == == Cited sources == Stachel, John (3 December 1982). "Einstein". Science. 218 (4576): 989–990. doi:10.1126/science.218.4576.989. ISSN 0036-8075. JSTOR 1688704. PMID 17790583. Hoffmann, Banesh (January 1983). "Subtle Is the Lord: The Science and the Life of Albert Einstein". Physics Today. 36 (1): 81–82. Bibcode:1983PhT....36a..81P. doi:10.1063/1.2915451. ISSN 0031-9228. Morrison, Philip (February 1983). "Review of Subtle Is the Lord... : The Science and the Life of albert Einstein, PaisAbraham". Scientific American. 248 (2): 30–37. doi:10.1038/scientificamerican0283-30. ISSN 0036-8733. JSTOR 24968823. Peierls, Rudolf (28 April 1983). "What Einstein Did". The New York Review of Books. New York. ISSN 0028-7504. Asimov, Isaac (June 1983). "Review of "Subtle is the Lord..." : The Science and the Life of Albert Einstein". American Jewish History. 72 (4): 531–534. ISSN 0164-0178. JSTOR 23882512. McCrea, W.H. (August 1983). "'Subtle is the Lord …' The science and life of Albert Einstein". Physics of the Earth and Planetary Interiors. 33 (1): 64–65. doi:10.1016/0031-9201(83)90008-0. Gilbert, Felix (March 1984). "Albert Einstein, Historical and Cultural Perspectives: The Centennial Symposium in Jerusalem . Gerald Holton, Yehuda Elkana "Subtle is the Lord...": The Science and the Life of Albert Einstein . Abraham Pais". The Journal of Modern History. 56 (1): 129–133. doi:10.1086/242630. ISSN 0022-2801. JSTOR 1878191. Klein, Martin J. (June 1984). "On Unified Biographies". Isis. 75 (2): 377–379. ISSN 0021-1753. JSTOR 231838. Forman, Paul (July 1984). "'Subtle Is the Lord ...': The Science and the Life of Albert Einstein". Technology and Culture. 25 (3): 697. doi:10.2307/3104242. JSTOR 3104242. S2CID 112342312. Redhead, Michael L. G. (July 1984). 
"Physics and its Concepts - Abraham Pais, 'Subtle is the Lord': the science and the life of Albert Einstein. Oxford: Clarendon Press, 1982. Pp. xvi + 552. ISBN 0-19-853-907-X. £15". The British Journal for the History of Science. 17 (2): 226–227. doi:10.1017/S0007087400021002. ISSN 0007-0874. JSTOR 4026560. S2CID 122655012. == External links == Book website by publisher. Oxford University Press. 3 November 2005. ISBN 978-0-19-280672-7. Retrieved 6 November 2020.
Wikipedia/Subtle_is_the_Lord:_The_Science_and_the_Life_of_Albert_Einstein
Group dynamics is a system of behaviors and psychological processes occurring within a social group (intragroup dynamics), or between social groups (intergroup dynamics). The study of group dynamics can be useful in understanding decision-making behaviour, tracking the spread of diseases in society, creating effective therapy techniques, and following the emergence and popularity of new ideas and technologies. These applications of the field are studied in psychology, sociology, anthropology, political science, epidemiology, education, social work, leadership studies, business and managerial studies, as well as communication studies. == History == The history of group dynamics (or group processes) has a consistent, underlying premise: "the whole is greater than the sum of its parts." A social group is an entity that has qualities which cannot be understood just by studying the individuals that make up the group. In 1924, Gestalt psychologist Max Wertheimer proposed "There are entities where the behaviour of the whole cannot be derived from its individual elements nor from the way these elements fit together; rather the opposite is true: the properties of any of the parts are determined by the intrinsic structural laws of the whole". As a field of study, group dynamics has roots in both psychology and sociology. Wilhelm Wundt (1832–1920), credited as the founder of experimental psychology, had a particular interest in the psychology of communities, which he believed possessed phenomena (human language, customs, and religion) that could not be described through a study of the individual. On the sociological side, Émile Durkheim (1858–1917), who was influenced by Wundt, also recognized collective phenomena, such as public knowledge. Other key theorists include Gustave Le Bon (1841–1931), who believed that crowds possessed a 'racial unconscious' with primitive, aggressive, and antisocial instincts, and the psychologist William McDougall, who believed in a 'group mind,' which had a distinct existence born from the interaction of individuals. Eventually, the social psychologist Kurt Lewin (1890–1947) coined the term group dynamics to describe the positive and negative forces within groups of people. In 1945, he established The Group Dynamics Research Center at the Massachusetts Institute of Technology, the first institute devoted explicitly to the study of group dynamics. Throughout his career, Lewin focused on how the study of group dynamics could be applied to real-world social issues. Increasingly, research has applied evolutionary psychology principles to group dynamics. As humans' social environments became more complex, they acquired adaptations by way of group dynamics that enhance survival. Examples include mechanisms for dealing with status, reciprocity, identifying cheaters, ostracism, altruism, group decision, leadership, and intergroup relations. == Key theorists == === Gustave Le Bon === Gustave Le Bon was a French social psychologist whose seminal study, The Crowd: A Study of the Popular Mind (1896), led to the development of group psychology. === William McDougall === The British psychologist William McDougall in his work The Group Mind (1920) researched the dynamics of groups of various sizes and degrees of organization.
=== Sigmund Freud === In Group Psychology and the Analysis of the Ego (1922), Sigmund Freud based his preliminary description of group psychology on Le Bon's work, but went on to develop his own, original theory, related to what he had begun to elaborate in Totem and Taboo. Theodor Adorno reprised Freud's essay in 1951 with his Freudian Theory and the Pattern of Fascist Propaganda, and said that "It is not an overstatement if we say that Freud, though he was hardly interested in the political phase of the problem, clearly foresaw the rise and nature of fascist mass movements in purely psychological categories." === Jacob L. Moreno === Jacob L. Moreno was a psychiatrist, dramatist, philosopher and theoretician who coined the term "group psychotherapy" in the early 1930s and was highly influential at the time. === Kurt Lewin === Kurt Lewin (1943, 1948, 1951) is commonly identified as the founder of the movement to study groups scientifically. He coined the term group dynamics to describe the way groups and individuals act and react to changing circumstances. === William Schutz === William Schutz (1958, 1966) viewed interpersonal relations as developing through stages: inclusion (am I included?), control (who is top dog here?), and affection (do I belong here?). Schutz sees groups resolving each issue in turn in order to be able to progress to the next stage. Conversely, a struggling group can devolve to an earlier stage if unable to resolve outstanding issues at its present stage. Schutz referred to these group dynamics as "the interpersonal underworld," group processes which are largely unseen and unacknowledged, as opposed to "content" issues, which are nominally the agenda of group meetings. === Wilfred Bion === Wilfred Bion (1961) studied group dynamics from a psychoanalytic perspective, and stated that he was much influenced by Wilfred Trotter, for whom he worked at University College Hospital London, as did another key figure in the psychoanalytic movement, Ernest Jones. He identified several mass group processes in which the group as a whole adopted an orientation which, in his opinion, interfered with the ability of a group to accomplish the work it was nominally engaged in. Bion's experiences are reported in his published books, especially Experiences in Groups. The Tavistock Institute has further developed and applied the theory and practices developed by Bion. === Bruce Tuckman === Bruce Tuckman (1965) proposed the four-stage model called Tuckman's Stages for a group. Tuckman's model states that the ideal group decision-making process should occur in four stages: Forming (pretending to get on or get along with others); Storming (letting down the politeness barrier and trying to get down to the issues, even if tempers flare up); Norming (getting used to each other and developing trust and productivity); and Performing (working in a group towards a common goal on a highly efficient and cooperative basis). Tuckman later added a fifth stage for the dissolution of a group, called adjourning. (Adjourning may also be referred to as mourning, i.e. mourning the adjournment of the group.) This model refers to the overall pattern of the group, but of course individuals within a group work in different ways. If distrust persists, a group may never even get to the norming stage. === M. Scott Peck === M. Scott Peck developed stages for larger-scale groups (i.e., communities) which are similar to Tuckman's stages of group development.
Peck describes the stages of a community as: pseudo-community, chaos, emptiness, and true community. Communities may be distinguished from other types of groups, in Peck's view, by the need for members to eliminate barriers to communication in order to be able to form true community. Examples of common barriers are: expectations and preconceptions; prejudices; ideology, counterproductive norms, theology and solutions; the need to heal, convert, fix or solve; and the need to control. A community is born when its members reach a stage of "emptiness" or peace. === Richard Hackman === Richard Hackman developed a synthetic, research-based model for designing and managing work groups. Hackman suggested that groups are successful when they satisfy internal and external clients, develop capabilities to perform in the future, and when members find meaning and satisfaction in the group. Hackman proposed five conditions that increase the chance that groups will be successful. These include: Being a real team, which results from having a shared task, clear boundaries which clarify who is inside or outside of the group, and stability in group membership. Compelling direction, which results from a clear, challenging, and consequential goal. Enabling structure, which results from having tasks which have variety, a group size that is not too large, talented group members who have at least moderate social skill, and strong norms that specify appropriate behaviour. Supportive context, which occurs in groups nested in larger groups (e.g. companies); in companies, a supportive context involves a) reward systems that reward performance and cooperation (e.g. group-based rewards linked to group performance), b) an educational system that develops member skills, and c) an information and materials system that provides the needed information and raw materials (e.g. computers). Expert coaching, which occurs on the rare occasions when group members feel they need help with task or interpersonal issues. Hackman emphasizes that many team leaders are overbearing and undermine group effectiveness. == Intragroup dynamics == Intragroup dynamics (also referred to as in-group dynamics, within-group dynamics, or commonly just 'group dynamics') are the underlying processes that give rise to a set of norms, roles, relations, and common goals that characterize a particular social group. Examples of groups include religious, political, military, and environmental groups, sports teams, work groups, and therapy groups. Amongst the members of a group, there is a state of interdependence, through which the behaviours, attitudes, opinions, and experiences of each member are collectively influenced by the other group members. In many fields of research, there is an interest in understanding how group dynamics influence individual behaviour, attitudes, and opinions. The dynamics of a particular group depend on how one defines the boundaries of the group. Often, there are distinct subgroups within a more broadly defined group. For example, one could define U.S. residents ('Americans') as a group, but could also define a more specific set of U.S. residents (for example, 'Americans in the South'). For each of these groups, there are distinct dynamics that can be discussed. Notably, on this very broad level, the study of group dynamics is similar to the study of culture. For example, there are group dynamics in the U.S. South that sustain a culture of honor, which is associated with norms of toughness, honour-related violence, and self-defence.
=== Group formation === Group formation starts with a psychological bond between individuals. The social cohesion approach suggests that group formation comes out of bonds of interpersonal attraction. In contrast, the social identity approach suggests that a group starts when a collection of individuals perceive that they share some social category ('smokers', 'nurses,' 'students,' 'hockey players'), and that interpersonal attraction only secondarily enhances the connection between individuals. Additionally, from the social identity approach, group formation involves both identifying with some individuals and explicitly not identifying with others. That is to say, a level of psychological distinctiveness is necessary for group formation. Through interaction, individuals begin to develop group norms, roles, and attitudes which define the group, and are internalized to influence behaviour. Emergent groups arise from a relatively spontaneous process of group formation. For example, in response to a natural disaster, an emergent response group may form. These groups are characterized as having no preexisting structure (e.g. group membership, allocated roles) or prior experience working together. Yet, these groups still express high levels of interdependence and coordinate knowledge, resources, and tasks. === Joining groups === Joining a group is determined by a number of different factors, including an individual's personal traits; gender; social motives such as need for affiliation, need for power, and need for intimacy; attachment style; and prior group experiences. Groups can offer some advantages to their members that would not be possible if an individual decided to remain alone, including gaining social support in the forms of emotional support, instrumental support, and informational support. Group membership also offers friendship, potential new interests, the learning of new skills, and enhanced self-esteem. However, joining a group may also cost an individual time, effort, and personal resources, as they may conform to social pressures and strive to reap the benefits that may be offered by the group. The Minimax Principle is a part of social exchange theory that states that people will join and remain in a group that can provide them with the maximum amount of valuable rewards while, at the same time, ensuring the minimum amount of costs to themselves. However, this does not necessarily mean that a person will join a group simply because the reward/cost ratio seems attractive. According to Howard Kelley and John Thibaut, a group may be attractive to us in terms of costs and benefits, but that attractiveness alone does not determine whether or not we will join the group. Instead, our decision is based on two factors: our comparison level, and our comparison level for alternatives. In John Thibaut and Harold Kelley's social exchange theory, the comparison level is the standard by which an individual will evaluate the desirability of becoming a member of the group and forming new social relationships within the group. This comparison level is influenced by previous relationships and membership in different groups. Those individuals who have experienced positive rewards with few costs in previous relationships and groups will have a higher comparison level than a person who experienced more negative costs and fewer rewards in previous relationships and group memberships.
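As a loose illustration of how these comparison standards combine into a join-or-stay decision (the comparison level for alternatives is elaborated in the paragraph that follows), here is a minimal sketch. The numeric scales, threshold logic, and names are invented for this example and are not part of Thibaut and Kelley's own formulation.

```python
def evaluate_membership(outcome, comparison_level, comparison_level_alt):
    """Sketch of the social-exchange view of group membership.

    outcome: net rewards minus costs that the group offers.
    comparison_level: standard built from past relationships (predicts satisfaction).
    comparison_level_alt: best outcome available elsewhere (predicts joining/leaving).
    """
    satisfied = outcome > comparison_level
    will_join = outcome > comparison_level_alt
    return satisfied, will_join

# Example: an attractive group (outcome 7) for someone with modest past
# experiences (comparison level 4) and a weaker alternative (3).
print(evaluate_membership(7, 4, 3))  # (True, True): satisfying, likely to join
print(evaluate_membership(2, 4, 1))  # (False, True): unsatisfying, but joins for lack of alternatives
```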
According to the social exchange theory, group membership will be more satisfying to a new prospective member if the group's outcomes, in terms of costs and rewards, are above the individual's comparison level. As well, group membership will be unsatisfying to a new member if the outcomes are below the individual's comparison level. Comparison level only predicts how satisfied a new member will be with the social relationships within the group. To determine whether people will actually join or leave a group, the value of other, alternative groups needs to be taken into account. This is called the comparison level for alternatives. The comparison level for alternatives is the standard by which an individual will evaluate the quality of the group in comparison to other groups the individual has the opportunity to join. Thibaut and Kelley stated that the "comparison level for alternatives can be defined informally as the lowest level of outcomes a member will accept in the light of available alternative opportunities". Joining and leaving groups is ultimately dependent on the comparison level for alternatives, whereas member satisfaction within a group depends on the comparison level. To summarize, if membership in the group is above the comparison level for alternatives and above the comparison level, the membership within the group will be satisfying and an individual will be more likely to join the group. If membership in the group is above the comparison level for alternatives but below the comparison level, membership will not be satisfactory; however, the individual will likely join the group since no other desirable options are available. When group membership is below the comparison level for alternatives but above the comparison level, membership is satisfying but an individual will be unlikely to join. If group membership is below both the comparison and alternative comparison levels, membership will be dissatisfying and the individual will be less likely to join the group. === Types of groups === Groups can vary drastically from one another. For example, three best friends who interact every day as well as a collection of people watching a movie in a theater both constitute a group. Past research has identified four basic types of groups which include, but are not limited to: primary groups, social groups, collective groups, and categories. It is important to define these four types of groups because they are intuitive to most lay people. For example, in an experiment, participants were asked to sort a number of groups into categories based on their own criteria. Examples of groups to be sorted were a sports team, a family, people at a bus stop, and women. It was found that participants consistently sorted groups into four categories: intimacy groups, task groups, loose associations, and social categories. These categories are conceptually similar to the four basic types to be discussed. Therefore, it seems that individuals intuitively define aggregations of individuals in this way. ==== Primary groups ==== Primary groups are characterized by relatively small, long-lasting groups of individuals who share personally meaningful relationships. Since the members of these groups often interact face-to-face, they know each other very well and are unified. Individuals that are a part of primary groups consider the group to be an important part of their lives. Consequently, members strongly identify with their group, even without regular meetings.
Cooley believed that primary groups were essential for integrating individuals into their society, since this is often their first experience with a group. For example, individuals are born into a primary group, their family, which creates a foundation on which they base their future relationships. Individuals can be born into a primary group; however, primary groups can also form when individuals interact for extended periods of time in meaningful ways. Examples of primary groups include family, close friends, and gangs. ==== Social groups ==== A social group is characterized by a formally organized group of individuals who are not as emotionally involved with each other as those in a primary group. These groups tend to be larger, with shorter memberships compared to primary groups. Further, the memberships of social groups are not as stable, since members are able to leave their social group and join new groups. The goals of social groups are often task-oriented as opposed to relationship-oriented. Examples of social groups include coworkers, clubs, and sports teams. ==== Collectives ==== Collectives are characterized by large groups of individuals who display similar actions or outlooks. They are loosely formed, spontaneous, and brief. Examples of collectives include a flash mob, an audience at a movie, and a crowd watching a building burn. ==== Categories ==== Categories are characterized by a collection of individuals who are similar in some way. Categories become groups when their similarities have social implications. For example, when people treat others differently because of certain aspects of their appearance or heritage, this creates groups of different races. For this reason, categories can appear to be higher in entitativity and essentialism than primary, social, and collective groups. Entitativity is defined by Campbell as the extent to which collections of individuals are perceived to be a group. The degree of entitativity that a group has is influenced by whether a collection of individuals experience the same fate, display similarities, and are close in proximity. If individuals believe that a group is high in entitativity, then they are likely to believe that the group has unchanging characteristics that are essential to the group, known as essentialism. Examples of categories are New Yorkers, gamblers, and women. === Group membership and social identity === The social group is a critical source of information about individual identity. We naturally make comparisons between our own group and other groups, but we do not necessarily make objective comparisons. Instead, we make evaluations that are self-enhancing, emphasizing the positive qualities of our own group (see ingroup bias). In this way, these comparisons give us a distinct and valued social identity that benefits our self-esteem. Our social identity and group membership also satisfy a need to belong. Of course, individuals belong to multiple groups. Therefore, one's social identity can have several, qualitatively distinct parts (for example, one's ethnic identity, religious identity, and political identity). Optimal distinctiveness theory suggests that individuals have a desire to be similar to others, but also a desire to differentiate themselves, ultimately seeking some balance of these two desires (to obtain optimal distinctiveness).
For example, one might imagine a young teenager in the United States who tries to balance these desires, not wanting to be 'just like everyone else,' but also wanting to 'fit in' and be similar to others. One's collective self may offer a balance between these two desires. That is, to be similar to others (those who you share group membership with), but also to be different from others (those who are outside of your group). === Group cohesion === In the social sciences, group cohesion refers to the processes that keep members of a social group connected. Terms such as attraction, solidarity, and morale are often used to describe group cohesion. It is thought to be one of the most important characteristics of a group, and has been linked to group performance, intergroup conflict and therapeutic change. Group cohesion, as a scientifically studied property of groups, is commonly associated with Kurt Lewin and his student, Leon Festinger. Lewin defined group cohesion as the willingness of individuals to stick together, and believed that without cohesiveness a group could not exist. As an extension of Lewin's work, Festinger (along with Stanley Schachter and Kurt Back) described cohesion as "the total field of forces which act on members to remain in the group" (Festinger, Schachter, & Back, 1950, p. 37). Later, this definition was modified to describe the forces acting on individual members to remain in the group, termed attraction to the group. Since then, several models for understanding the concept of group cohesion have been developed, including Albert Carron's hierarchical model and several bi-dimensional models (vertical v. horizontal cohesion, task v. social cohesion, belongingness and morale, and personal v. social attraction). Before Lewin and Festinger, there were, of course, descriptions of a very similar group property. For example, Émile Durkheim described two forms of solidarity (mechanical and organic), which created a sense of collective conscience and an emotion-based sense of community. === Black sheep effect === Beliefs within the ingroup are based on how individuals in the group see their other members. Individuals tend to upgrade likeable in-group members and to derogate unlikeable in-group members, treating them as a separate outgroup. This is called the black sheep effect. The way a person judges socially desirable and socially undesirable individuals depends upon whether they are part of the ingroup or outgroup. This phenomenon was later accounted for by subjective group dynamics theory. According to this theory, people derogate socially undesirable (deviant) ingroup members relative to outgroup members, because they give a bad image of the ingroup and jeopardize people's social identity. In more recent studies, Marques and colleagues have shown that this occurs more strongly with regard to ingroup full members than other members. Whereas new members of a group must prove themselves to the full members to become accepted, full members have undergone socialization and are already accepted within the group. They have more privilege than newcomers but more responsibility to help the group achieve its goals. Marginal members were once full members but lost membership because they failed to live up to the group's expectations. They can rejoin the group if they go through re-socialization. Therefore, full members' behavior is paramount in defining the ingroup's image.
Bogart and Ryan surveyed the development of new members' stereotypes about in-groups and out-groups during socialization. Results showed that the new members judged themselves as consistent with the stereotypes of their in-groups, even when they had only recently committed to join those groups or remained marginal members. They also tended to judge the group as a whole in an increasingly less positive manner after they became full members. However, there is no evidence that this affects the way they are judged by other members. Nevertheless, depending on the self-esteem of an individual, members of the in-group may hold different private beliefs about the group's activities but will publicly express the opposite, namely that they actually share these beliefs. One member may not personally agree with something the group does, but to avoid the black sheep effect, they will publicly agree with the group and keep their private beliefs to themselves. If the person is privately self-aware, he or she is more likely to comply with the group even while holding different personal beliefs about the situation. In situations of hazing within fraternities and sororities on college campuses, pledges may encounter this type of situation and may outwardly comply with the tasks they are forced to do regardless of their personal feelings about the Greek institution they are joining. This is done in an effort to avoid becoming an outcast of the group. Outcasts who behave in a way that might jeopardize the group tend to be treated more harshly than the likeable ones in a group, creating a black sheep effect. Full members of a fraternity might treat the incoming new members harshly, causing the pledges to decide whether they approve of the situation and whether they will voice their dissenting opinions about it. === Group influence on individual behaviour === Individual behaviour is influenced by the presence of others. For example, studies have found that individuals work harder and faster when others are present (see social facilitation), and that an individual's performance is reduced when others in the situation create distraction or conflict. Groups also influence individuals' decision-making processes. These include decisions related to ingroup bias, persuasion (see Asch conformity experiments), obedience (see Milgram Experiment), and groupthink. There are both positive and negative implications of group influence on individual behaviour. This type of influence is often useful in the context of work settings, team sports, and political activism. However, the influence of groups on the individual can also generate extremely negative behaviours, evident in Nazi Germany, the My Lai massacre, and in the Abu Ghraib prison (also see Abu Ghraib torture and prisoner abuse). === Group structure === A group's structure is the internal framework that defines members' relations to one another over time. Frequently studied elements of group structure include roles, norms, values, communication patterns, and status differentials. Group structure has also been defined as the underlying pattern of roles, norms, and networks of relations among members that define and organize the group. Roles can be defined as a tendency to behave, contribute and interrelate with others in a particular way. Roles may be assigned formally, but more often are defined through the process of role differentiation. Role differentiation is the degree to which different group members have specialized functions.
A group with a high level of role differentiation would be categorized as having many different roles that are specialized and narrowly defined. A key role in a group is the leader, but there are other important roles as well, including task roles, relationship roles, and individual roles. Functional (task) roles are generally defined in relation to the tasks the team is expected to perform. Individuals engaged in task roles focus on the goals of the group and on enabling the work that members do; examples of task roles include coordinator, recorder, critic, or technician. A group member engaged in a relationship role (or socioemotional role) is focused on maintaining the interpersonal and emotional needs of the group's members; examples of relationship roles include encourager, harmonizer, or compromiser. Norms are the informal rules that groups adopt to regulate members' behaviour. Norms refer to what should be done and represent value judgments about appropriate behaviour in social situations. Although they are infrequently written down or even discussed, norms have powerful influence on group behaviour. They are a fundamental aspect of group structure as they provide direction and motivation, and organize the social interactions of members. Norms are said to be emergent, as they develop gradually through interactions between group members. While many norms are widespread throughout society, groups may develop their own norms that members must learn when they join the group. There are various types of norms, including prescriptive, proscriptive, descriptive, and injunctive. Prescriptive norms describe the socially appropriate way to respond in a social situation, or what group members are supposed to do (e.g. saying thank you after someone does a favour for you). Proscriptive norms describe actions that group members should not do; they are prohibitive (e.g. not belching in public). Descriptive norms describe what people usually do (e.g. clapping after a speech). Injunctive norms describe behaviours that people ought to do; they are more evaluative in nature than descriptive norms. Intermember relations are the connections among the members of a group, or the social network within a group. Group members are linked to one another at varying levels. Examining the intermember relations of a group can highlight a group's density (how many members are linked to one another), or the degree centrality of members (number of ties between members). Analysing the intermember relations aspect of a group can highlight the degree centrality of each member in the group, which can lead to a better understanding of the roles of certain group members (e.g. an individual who is a 'go-between' in a group will have closer ties to numerous group members, which can aid in communication, etc.). Values are goals or ideas that serve as guiding principles for the group. Like norms, values may be communicated either explicitly or on an ad hoc basis. Values can serve as a rallying point for the team. However, some values (such as conformity) can also be dysfunctional and lead to poor decisions by the team. Communication patterns describe the flow of information within the group and they are typically described as either centralized or decentralized. With a centralized pattern, communications tend to flow from one source to all group members. Centralized communications allow standardization of information, but may restrict the free flow of information. Decentralized communications make it easy to share information directly between group members.
When decentralized, communications tend to flow more freely, but the delivery of information may not be as fast or accurate as with centralized communications. Another potential downside of decentralized communications is the sheer volume of information that can be generated, particularly with electronic media. Status differentials are the relative differences in status among group members. When a group is first formed, the members may all be on an equal level, but over time certain members may acquire status and authority within the group; this can create what is known as a pecking order within a group. Status can be determined by a variety of factors and characteristics, including specific status characteristics (e.g. task-specific behavioural and personal characteristics, such as experience) or diffuse status characteristics (e.g. age, race, ethnicity). It is important that other group members perceive an individual's status to be warranted and deserved, as otherwise that individual may not have authority within the group. Status differentials may affect the relative amount of pay among group members and they may also affect the group's tolerance of violations of group norms (e.g. people with higher status may be given more freedom to violate group norms). === Group performance === Forsyth suggests that while many daily tasks undertaken by individuals could be performed in isolation, the preference is to perform with other people. ==== Social facilitation and performance gains ==== In an 1898 study of dynamogenic stimulation, undertaken to explain pacemaking and competition, Norman Triplett theorized that "the bodily presence of another rider is a stimulus to the racer in arousing the competitive instinct...". This dynamogenic factor is believed to have laid the groundwork for what is now known as social facilitation, an "improvement in task performance that occurs when people work in the presence of other people". Further to Triplett's observation, in 1920, Floyd Allport found that although people in groups were more productive than individuals, the quality of their product/effort was inferior. In 1965, Robert Zajonc expanded the study of arousal response (originated by Triplett) with further research in the area of social facilitation. In his study, Zajonc considered two experimental paradigms. In the first, audience effects, Zajonc observed behaviour in the presence of passive spectators; in the second, co-action effects, he examined behaviour in the presence of another individual engaged in the same activity. Zajonc observed two categories of behaviours: dominant responses to tasks that are easier to learn and which dominate other potential responses, and nondominant responses to tasks that are less likely to be performed. In his theory of social facilitation, Zajonc concluded that when action is required in the presence of others, either social facilitation or social interference will affect the outcome of the task, depending on the task requirement. If social facilitation occurs, the task will have required a dominant response from the individual, resulting in better performance in the presence of others, whereas if social interference occurs, the task will have elicited a nondominant response from the individual, resulting in subpar performance of the task. Several theories analysing performance gains in groups via drive, motivational, cognitive and personality processes explain why social facilitation occurs.
Zajonc hypothesized that compresence (the state of responding in the presence of others) elevates an individual's drive level, which in turn triggers social facilitation when tasks are simple and easy to execute, but impedes performance when tasks are challenging. Nickolas Cottrell (1972) proposed the evaluation apprehension model, whereby he suggested that people associate social situations with an evaluative process. Cottrell argued that this situation is met with apprehension and that it is this motivational response, not arousal/elevated drive, that is responsible for increased productivity on simple tasks and decreased productivity on complex tasks in the presence of others. In The Presentation of Self in Everyday Life (1959), Erving Goffman assumes that individuals can control how they are perceived by others. He suggests that people fear being perceived as having negative, undesirable qualities and characteristics by other people, and that it is this fear that compels individuals to portray a positive self-presentation/social image of themselves. In relation to performance gains, Goffman's self-presentation theory predicts that, in situations where they may be evaluated, individuals will consequently increase their efforts in order to project/preserve/maintain a positive image. Distraction-conflict theory contends that when a person is working in the presence of other people, an interference effect occurs, splitting the individual's attention between the task and the other person. On simple tasks, where the individual is not challenged by the task, the interference effect is negligible and performance, therefore, is facilitated. On more complex tasks, where drive is not strong enough to effectively compete against the effects of distraction, there is no performance gain. The Stroop task (Stroop effect) demonstrated that, by narrowing a person's focus of attention on certain tasks, distractions can improve performance. Social orientation theory considers the way a person approaches social situations. It predicts that self-confident individuals with a positive outlook will show performance gains through social facilitation, whereas a self-conscious individual approaching social situations with apprehension is less likely to perform well due to social interference effects. == Intergroup dynamics == Intergroup dynamics (or intergroup relations) refers to the behavioural and psychological relationship between two or more groups. This includes perceptions, attitudes, opinions, and behaviours towards one's own group, as well as those towards another group. In some cases, intergroup dynamics is prosocial, positive, and beneficial (for example, when multiple research teams work together to accomplish a task or goal). In other cases, intergroup dynamics can create conflict. For example, Fischer & Ferlie found that initially positive dynamics between a clinical institution and its external authorities changed dramatically to a 'hot' and intractable conflict when the authorities interfered with its embedded clinical model. Similarly, intergroup dynamics underlay the 1999 Columbine High School shooting in Littleton, Colorado, United States, playing a significant role in Eric Harris' and Dylan Klebold's decision to kill a teacher and 14 students (including themselves). === Intergroup conflict === According to social identity theory, intergroup conflict starts with a process of comparison between individuals in one group (the ingroup) and those of another group (the outgroup).
This comparison process is not unbiased and objective. Instead, it is a mechanism for enhancing one's self-esteem. In the process of such comparisons, an individual tends to: favour the ingroup over the outgroup; exaggerate and overgeneralize the differences between the ingroup and the outgroup (to enhance group distinctiveness); minimize the perception of differences between ingroup members; and remember more detailed and positive information about the ingroup, and more negative information about the outgroup. Even without any intergroup interaction (as in the minimal group paradigm), individuals begin to show favouritism towards their own group, and negative reactions towards the outgroup. This conflict can result in prejudice, stereotypes, and discrimination. Intergroup conflict can be highly competitive, especially for social groups with a long history of conflict (for example, the 1994 Rwandan genocide, rooted in group conflict between the ethnic Hutu and Tutsi). In contrast, intergroup competition can sometimes be relatively harmless, particularly in situations where there is little history of conflict (for example, between students of different universities), leading to relatively harmless generalizations and mild competitive behaviours. Intergroup conflict is commonly recognized amidst racial, ethnic, religious, and political groups. The formation of intergroup conflict was investigated in a popular series of studies by Muzafer Sherif and colleagues in 1961, called the Robbers Cave Experiment. The Robbers Cave Experiment was later used to support realistic conflict theory. Other prominent theories relating to intergroup conflict include social dominance theory and social-/self-categorization theory. === Intergroup conflict reduction === There have been several strategies developed for reducing the tension, bias, prejudice, and conflict between social groups. These include the contact hypothesis, the jigsaw classroom, and several categorization-based strategies. ==== Contact hypothesis (intergroup contact theory) ==== In 1954, Gordon Allport suggested that by promoting contact between groups, prejudice could be reduced. Further, he suggested four optimal conditions for contact: equal status between the groups in the situation; common goals; intergroup cooperation; and the support of authorities, law, or customs. Since then, over 500 studies have been done on prejudice reduction under variations of the contact hypothesis, and a meta-analytic review suggests overall support for its efficacy. In some cases, even without the four optimal conditions outlined by Allport, prejudice between groups can be reduced. ==== Superordinate identities ==== Under the contact hypothesis, several models have been developed. A number of these models utilize a superordinate identity to reduce prejudice: that is, a more broadly defined, 'umbrella' group/identity that includes the groups that are in conflict. By emphasizing this superordinate identity, individuals in both subgroups can share a common social identity. For example, if there is conflict between White, Black, and Latino students in a high school, one might try to emphasize the 'high school' group/identity that students share to reduce conflict between the groups. Models utilizing superordinate identities include the common ingroup identity model, the ingroup projection model, the mutual intergroup differentiation model, and the ingroup identity model. Similarly, "recategorization" is a broader term used by Gaertner et al.
to describe the aforementioned strategies. ==== Interdependence ==== There are techniques that utilize interdependence between two or more groups with the aim of reducing prejudice. That is, members across groups have to rely on one another to accomplish some goal or task. In the Robbers Cave Experiment, Sherif used this strategy to reduce conflict between groups. Elliot Aronson's Jigsaw Classroom also uses this strategy of interdependence. In 1971, racial tensions were running high in Austin, Texas. Aronson was brought in to examine the nature of this tension within schools, and to devise a strategy for reducing it (so as to improve the process of school integration, mandated under Brown v. Board of Education in 1954). Despite strong evidence for the effectiveness of the jigsaw classroom, the strategy was not widely used (arguably because of strong attitudes existing outside of the schools, which still resisted the notion that racial and ethnic minority groups are equal to Whites and, similarly, should be integrated into schools). == Selected academic journals == Group Processes & Intergroup Relations Group Dynamics: Theory, Research, and Practice Small Group Research Group Analysis International Journal of Group Psychotherapy The Journal for Specialists in Group Work Social Work With Groups International Journal on Minority and Group Rights Group Facilitation: A Research and Applications Journal Organizational and Social Dynamics == See also == == References ==
Wikipedia/Group_dynamics
The relationship between chemistry and physics is a topic of debate in the philosophy of science. The issue is a complicated one, since both physics and chemistry are divided into multiple subfields, each with their own goals. A major theme is whether, and in what sense, chemistry can be said to "reduce" to physics. == Background == Although physics and chemistry are branches of science that both study matter, they differ in the scopes of their respective subjects. While physics focuses on phenomena such as force, motion, electromagnetism, elementary particles, and spacetime, chemistry is concerned mainly with the structure and reactions of atoms and molecules, but does not necessarily deal with non-baryonic matter. However, the two disciplines overlap in subjects concerning the behaviour of fluids, the thermodynamics of chemical reactions, the magnetic forces between atoms and molecules, and quantum chemistry. Moreover, the laws of chemistry depend heavily on the laws of quantum mechanics. In some respects the two sciences have developed independently, though less so towards the end of the twentieth century. There are many areas of major overlap: for instance, both chemical physics and physical chemistry combine the two, while materials science is an interdisciplinary area which combines both, as well as some elements of engineering. This overlap is deliberate: as recognized by the National Academies of Sciences, Engineering, and Medicine, there are limitations to trying to force science into categories rather than focusing on the issues of importance, an approach now common in materials science. == Historical views == In the 19th century, Auguste Comte, in his hierarchy of the sciences, classified chemistry as more dependent than physics, as chemistry requires physics. In 1958, Paul Oppenheim and Hilary Putnam put forward the idea that, in the 20th century, chemistry had been reduced to physics, as evidence for the unity of science. == References ==
Wikipedia/Relationship_between_chemistry_and_physics
In contemporary astronomy, 88 constellations are recognized by the International Astronomical Union (IAU). Each constellation is a region of the sky bordered by arcs of right ascension and declination, together covering the entire celestial sphere. Their boundaries were officially adopted by the International Astronomical Union in 1928 and published in 1930. The ancient Mesopotamians and later the Greeks established most of the northern constellations in international use today, listed by the Roman-Egyptian astronomer Ptolemy. The constellations along the ecliptic are called the zodiac. When explorers mapped the stars of the southern skies, European astronomers proposed new constellations for that region, as well as ones to fill gaps between the traditional constellations. Because of their Roman and European origins, every constellation has a Latin name. In 1922, the International Astronomical Union adopted three-letter abbreviations for 89 constellations, the modern list of 88 plus Argo. After this, Eugène Joseph Delporte drew up boundaries for each of the 88 constellations so that every point in the sky belonged to one constellation. When astronomers say that an object lies in a particular constellation, they mean that it is positioned within these specified boundaries. == History == Some constellations are no longer recognized by the IAU, but may appear in older star charts and other references. Most notable is Argo Navis, which was one of Ptolemy's original 48 constellations. In the 1750s the French astronomer Nicolas Louis de Lacaille divided this into three separate constellations: Carina, Puppis, and Vela. == Modern constellations == The 88 constellations depict 42 animals, 29 inanimate objects, and 17 humans or mythological characters. === Abbreviations === Each IAU constellation has an official three-letter abbreviation based on the genitive form of the constellation name. As the genitive is similar to the base name, the majority of the abbreviations are just the first three letters of the constellation name: Ori for Orion/Orionis, Ara for Ara/Arae, and Com for Coma Berenices/Comae Berenices. In some cases, the abbreviation contains letters from the genitive not appearing in the base name (as in Hyi for Hydrus/Hydri, to avoid confusion with Hydra, abbreviated Hya; and Sge for Sagitta/Sagittae, to avoid confusion with Sagittarius, abbreviated Sgr). Some abbreviations use letters beyond the initial three to unambiguously identify the constellation (for example when the name and its genitive differ in the first three letters): Aps for Apus/Apodis, CrA for Corona Australis, CrB for Corona Borealis, Crv for Corvus. (Crater is abbreviated Crt to prevent confusion with CrA.) When letters are taken from the second word of a two-word name, the first letter from the second word is capitalised: CMa for Canis Major, CMi for Canis Minor. Two cases are ambiguous: Leo for the constellation Leo could be mistaken for Leo Minor (abbreviated LMi), and Tri for Triangulum could be mistaken for Triangulum Australe (abbreviated TrA). In addition to the three-letter abbreviations used today, the IAU also introduced four-letter abbreviations in 1932. The four-letter abbreviations were repealed in 1955 and are now obsolete, but were included in the NASA Dictionary of Technical Terms for Aerospace Use (NASA SP-7) published in 1965. These are labeled "NASA" in the table below and are included here for reference only. === List === For help with the literary English pronunciations, see the pronunciation key. 
There is considerable diversity in how Latinate names are pronounced in English. For traditions closer to the original, see Latin spelling and pronunciation. == Asterisms == Various other unofficial patterns exist alongside the constellations. These are known as "asterisms". Some are part of one larger constellation, while others consist of stars in two adjoining constellations. Examples include the Big Dipper/Plough in Ursa Major; the Teapot in Sagittarius; the Square of Pegasus in Pegasus and Andromeda; and the False Cross in Carina and Vela. == See also == Lists of astronomical objects List of constellations by area Biblical names of stars Lists of stars by constellation Constellation family Galactic quadrant == Notes == == References == == External links == The Constellations 1 – Ian Ridpath's list of constellations. Ian Ridpath's Star Tales: Constellation Mythology and History – Ian Ridpath's Star Tales. VizieR – CDS's archive of constellation boundaries. The text file constbnd.dat gives the 1875.0 coordinates of the vertices of the constellation regions, together with the constellations adjacent to each boundary segment.
Wikipedia/IAU_designated_constellations
A jet is a narrow cone of hadrons and other particles produced by the hadronization of quarks and gluons in a particle physics or heavy ion experiment. Particles carrying a color charge, i.e. quarks and gluons, cannot exist in free form because of quantum chromodynamics (QCD) confinement, which only allows for colorless states. When protons collide at high energies, their color-charged components each carry away some of the color charge. In accordance with confinement, these fragments create other colored objects around them to form colorless hadrons. The ensemble of these objects is called a jet, since the fragments all tend to travel in the same direction, forming a narrow "jet" of particles. Jets are measured in particle detectors and studied in order to determine the properties of the original quarks. A jet definition includes a jet algorithm and a recombination scheme. The former defines how some inputs, e.g. particles or detector objects, are grouped into jets, while the latter specifies how a momentum is assigned to a jet. For example, jets can be characterized by the thrust. The jet direction (jet axis) can be defined as the thrust axis. In particle physics experiments, jets are usually built from clusters of energy depositions in the detector calorimeter. When studying simulated processes, the calorimeter jets can be reconstructed based on a simulated detector response. However, in simulated samples, jets can also be reconstructed directly from stable particles emerging from fragmentation processes. Particle-level jets are often referred to as truth jets. A good jet algorithm usually allows for obtaining similar sets of jets at different levels in the event evolution. Typical jet reconstruction algorithms include the anti-kT algorithm, the kT algorithm, and the cone algorithm. A typical recombination scheme is the E-scheme, or 4-vector scheme, in which the 4-vector of a jet is defined as the sum of the 4-vectors of all its constituents. In relativistic heavy ion physics, jets are important because the originating hard scattering is a natural probe for the QCD matter created in the collision and can indicate its phase. When the QCD matter undergoes a phase crossover into quark-gluon plasma, the energy loss in the medium grows significantly, effectively quenching (reducing the intensity of) the outgoing jet. Examples of jet analysis techniques are jet correlations, flavor tagging (e.g., b-tagging), and jet substructure. The Lund string model is an example of a jet fragmentation model. == Jet production == Jets are produced in QCD hard scattering processes that create quarks or gluons with high transverse momentum, collectively called partons in the partonic picture. The probability of creating a certain set of jets is described by the jet production cross section, which is an average of elementary perturbative QCD quark, antiquark, and gluon processes, weighted by the parton distribution functions.
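Before turning to the production cross section, here is a minimal, illustrative Python sketch of the jet reconstruction step described above: sequential-recombination clustering in the anti-kT scheme with E-scheme (four-vector sum) recombination. It is a toy sketch under simplifying assumptions: the function and variable names are invented for this example and do not follow the interface of any standard package, and inputs are assumed to have nonzero transverse momentum; real analyses typically use dedicated software such as the FastJet program listed under the external links.

```python
import math

def rapidity(E, pz):
    # Assumes E > |pz|, i.e. the particle is not exactly along the beam axis.
    return 0.5 * math.log((E + pz) / (E - pz))

def antikt_cluster(particles, R=0.4):
    """Toy anti-kT clustering with E-scheme recombination.

    particles: list of (E, px, py, pz) tuples.
    Returns a list of jet four-momenta (E, px, py, pz).
    """
    pseudojets = [list(p) for p in particles]
    jets = []

    def kin(p):
        # Kinematics used by the distance measure (recomputed each time for simplicity).
        E, px, py, pz = p
        pt2 = px * px + py * py
        return pt2, rapidity(E, pz), math.atan2(py, px)

    while pseudojets:
        best = None  # (distance, i, j); j = None means "distance to the beam"
        for i, pi in enumerate(pseudojets):
            pt2_i, y_i, phi_i = kin(pi)
            diB = 1.0 / pt2_i  # anti-kT beam distance (p = -1)
            if best is None or diB < best[0]:
                best = (diB, i, None)
            for j in range(i + 1, len(pseudojets)):
                pt2_j, y_j, phi_j = kin(pseudojets[j])
                dphi = abs(phi_i - phi_j)
                if dphi > math.pi:
                    dphi = 2.0 * math.pi - dphi
                dR2 = (y_i - y_j) ** 2 + dphi ** 2
                dij = min(1.0 / pt2_i, 1.0 / pt2_j) * dR2 / (R * R)
                if dij < best[0]:
                    best = (dij, i, j)

        _, i, j = best
        if j is None:
            # Smallest distance is to the beam: promote pseudojet i to a final jet.
            jets.append(tuple(pseudojets.pop(i)))
        else:
            # E-scheme recombination: sum the two four-vectors component-wise.
            merged = [a + b for a, b in zip(pseudojets[i], pseudojets[j])]
            for k in sorted((i, j), reverse=True):
                pseudojets.pop(k)
            pseudojets.append(merged)

    return jets
```

Because the anti-kT distance gives the smallest values to pairs involving a hard particle, soft fragments are clustered onto the hard cores first, which is why jets reconstructed this way tend to be roughly circular in the rapidity-azimuth plane.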
For the most frequent jet pair production process, two-particle scattering, the jet production cross section in a hadronic collision is given by {\displaystyle \sigma _{ij\rightarrow k}=\sum _{i,j}\int dx_{1}\,dx_{2}\,d{\hat {t}}\,f_{i}^{1}(x_{1},Q^{2})\,f_{j}^{2}(x_{2},Q^{2})\,{\frac {d{\hat {\sigma }}_{ij\rightarrow k}}{d{\hat {t}}}},} with x, Q2 the longitudinal momentum fractions and the momentum transfer, {\displaystyle {\hat {\sigma }}_{ij\rightarrow k}} the perturbative QCD cross section for the reaction ij → k, and {\displaystyle f_{i}^{a}(x,Q^{2})} the parton distribution function for finding particle species i in beam a. Elementary cross sections {\displaystyle {\hat {\sigma }}} are calculated, for example, to leading order of perturbation theory in Peskin & Schroeder (1995), section 17.4. A review of various parameterizations of parton distribution functions and the calculation in the context of Monte Carlo event generators is discussed in T. Sjöstrand et al. (2003), section 7.4.1. == Jet fragmentation == Perturbative QCD calculations may have colored partons in the final state, but only the colorless hadrons that are ultimately produced are observed experimentally. Thus, to describe what is observed in a detector as a result of a given process, all outgoing colored partons must first undergo parton showering and then combination of the produced partons into hadrons. The terms fragmentation and hadronization are often used interchangeably in the literature to describe soft QCD radiation, the formation of hadrons, or both processes together. As a parton produced in a hard scatter exits the interaction, the strong coupling constant increases with its separation. This increases the probability for QCD radiation, which is predominantly emitted at shallow angles with respect to the progenitor parton. Thus, one parton will radiate gluons, which will in turn radiate quark-antiquark pairs and so on, with each new parton nearly collinear with its parent. This showering can be described by convolving splitting functions {\displaystyle P_{ji}\!\left({\frac {x}{z}},Q^{2}\right)} with the fragmentation functions, in a manner similar to the evolution of parton density functions, and is governed by an equation of the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) type: {\displaystyle {\frac {\partial }{\partial \ln Q^{2}}}D_{i}^{h}(x,Q^{2})=\sum _{j}\int _{x}^{1}{\frac {dz}{z}}\,{\frac {\alpha _{S}}{4\pi }}\,P_{ji}\!\left({\frac {x}{z}},Q^{2}\right)D_{j}^{h}(z,Q^{2}).} Parton showering produces partons of successively lower energy, which must therefore exit the region of validity of perturbative QCD. Phenomenological models must then be applied to describe the length of time over which showering occurs, and then the combination of colored partons into bound states of colorless hadrons, which is inherently non-perturbative. One example is the Lund string model, which is implemented in many modern event generators. == Infrared and collinear safety == A jet algorithm is infrared safe if it yields the same set of jets after an event is modified by adding soft radiation. Similarly, a jet algorithm is collinear safe if the final set of jets is not changed after introducing a collinear splitting of one of the inputs. There are several reasons why a jet algorithm must fulfill these two requirements.
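As a rough numerical illustration of these two requirements, one can probe a clustering routine, such as the toy anti-kT sketch given earlier (assumed to be in scope as antikt_cluster), by adding an extremely soft particle and by splitting one input collinearly, then checking that the hard jets are essentially unchanged. This is a schematic check on invented toy four-momenta, not a proof of infrared and collinear safety; the event content and tolerances are assumptions made for the example.

```python
import math

def jet_pts(jets):
    # Transverse momenta of the jets, hardest first.
    return sorted((math.hypot(px, py) for _, px, py, _ in jets), reverse=True)

# A toy event: two hard particles in roughly opposite azimuthal directions.
event = [
    (100.0, 60.0, 10.0, 20.0),
    (90.0, -55.0, -5.0, -15.0),
]
baseline = jet_pts(antikt_cluster(event, R=0.4))

# Infrared test: add an extremely soft particle.
# It forms its own negligible jet; zip() compares only the hard jets.
soft_event = event + [(1e-6, 1e-7, 1e-7, 1e-7)]
ir_test = jet_pts(antikt_cluster(soft_event, R=0.4))

# Collinear test: split the first particle into two exactly collinear halves.
E, px, py, pz = event[0]
collinear_event = [(0.4 * E, 0.4 * px, 0.4 * py, 0.4 * pz),
                   (0.6 * E, 0.6 * px, 0.6 * py, 0.6 * pz),
                   event[1]]
coll_test = jet_pts(antikt_cluster(collinear_event, R=0.4))

assert all(abs(a - b) < 1e-3 for a, b in zip(baseline, ir_test))
assert all(abs(a - b) < 1e-9 for a, b in zip(baseline, coll_test))
```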
Experimentally, jets are useful if they carry information about the seed parton. When produced, the seed parton is expected to undergo a parton shower, which may include a series of nearly-collinear splittings before the hadronization starts. Furthermore, the jet algorithm must be robust when it comes to fluctuations in the detector response. Theoretically, if a jet algorithm is not infrared and collinear safe, it cannot be guaranteed that a finite cross section can be obtained at any order of perturbation theory. == See also == Dijet event == References == Andersson, B.; Gustafson, G.; Ingelman, G.; Sjöstrand, T. (1983). "Parton fragmentation and string dynamics". Physics Reports. 97 (2–3). Elsevier BV: 31–145. Bibcode:1983PhR....97...31A. doi:10.1016/0370-1573(83)90080-7. ISSN 0370-1573. Ellis, Stephen D.; Soper, Davison E. (1993-10-01). "Successive combination jet algorithm for hadron collisions". Physical Review D. 48 (7). American Physical Society (APS): 3160–3166. arXiv:hep-ph/9305266. Bibcode:1993PhRvD..48.3160E. doi:10.1103/physrevd.48.3160. ISSN 0556-2821. S2CID 2667115. M. Gyulassy et al., "Jet Quenching and Radiative Energy Loss in Dense Nuclear Matter", in R.C. Hwa & X.-N. Wang (eds.), Quark Gluon Plasma 3 (World Scientific, Singapore, 2003). J. E. Huth et al., in E. L. Berger (ed.), Proceedings of Research Directions For The Decade: Snowmass 1990 (World Scientific, Singapore, 1992), 134. (Preprint at Fermilab Library Server) M. E. Peskin, D. V. Schroeder, "An Introduction to Quantum Field Theory" (Westview, Boulder, CO, 1995). T. Sjöstrand et al., "Pythia 6.3 Physics and Manual", Report LU TP 03-38 (2003). G. Sterman, "QCD and Jets", Report YITP-SB-04-59 (2004). == External links == The Pythia/Jetset Monte Carlo event generator The FastJet jet clustering program
Wikipedia/Jet_(particle_physics)
In physics, the eightfold way is an organizational scheme for a class of subatomic particles known as hadrons that led to the development of the quark model. Both the American physicist Murray Gell-Mann and the Israeli physicist Yuval Ne'eman independently and simultaneously proposed the idea in 1961. The name comes from Gell-Mann's (1961) paper and is an allusion to the Noble Eightfold Path of Buddhism. == Background == By 1947, physicists believed that they had a good understanding of what the smallest bits of matter were. There were electrons, protons, neutrons, and photons (the components that make up the vast part of everyday experience such as visible matter and light) along with a handful of unstable (i.e., they undergo radioactive decay) exotic particles needed to explain cosmic rays observations such as pions, muons and the hypothesized neutrinos. In addition, the discovery of the positron suggested there could be anti-particles for each of them. It was known a "strong interaction" must exist to overcome electrostatic repulsion in atomic nuclei. Not all particles are influenced by this strong force; but those that are, are dubbed "hadrons"; these are now further classified as mesons (from the Greek for "intermediate") and baryons (from the Greek for "heavy"). But the discovery of the neutral kaon in late 1947 and the subsequent discovery of a positively charged kaon in 1949 extended the meson family in an unexpected way, and in 1950 the lambda particle did the same thing for the baryon family. These particles decay much more slowly than they are produced, a hint that there are two different physical processes involved. This was first suggested by Abraham Pais in 1952. In 1953, Murray Gell-Mann and a collaboration in Japan, Tadao Nakano with Kazuhiko Nishijima, independently suggested a new conserved value now known as "strangeness" during their attempts to understand the growing collection of known particles. The discovery of new mesons and baryons continued through the 1950s; the number of known "elementary" particles ballooned. Physicists were interested in understanding hadron-hadron interactions via the strong interaction. The concept of isospin, introduced in 1932 by Werner Heisenberg shortly after the discovery of the neutron, was used to group some hadrons together into "multiplets" but no successful scientific theory as yet covered the hadrons as a whole. This was the beginning of a chaotic period in particle physics that has become known as the "particle zoo" era. The eightfold way represented a step out of this confusion and towards the quark model, which proved to be the solution. == Organization == Group representation theory is the mathematical underpinning of the eightfold way, but that rather technical mathematics is not needed to understand how it helps organize particles. Particles are sorted into groups as mesons or baryons. Within each group, they are further separated by their spin angular momentum. Symmetrical patterns appear when these groups of particles have their strangeness plotted against their electric charge. (This is the most common way to make these plots today, but originally physicists used an equivalent pair of properties called hypercharge and isotopic spin, the latter of which is now known as isospin.) The symmetry in these patterns is a hint of the underlying symmetry of the strong interaction between the particles themselves. 
In the diagrams used to display these patterns, points representing particles that lie along the same horizontal line share the same strangeness, s, while those on the same left-leaning diagonals share the same electric charge, q (given as multiples of the elementary charge). === Mesons === In the original eightfold way, the mesons were organized into octets and singlets. This is one of the finer points of difference between the eightfold way and the quark model it inspired, which suggests the mesons should be grouped into nonets (groups of nine). ==== Meson octet ==== The eightfold way organizes eight of the lowest spin-0 mesons into an octet. They are: K0, K+, K− and K̄0 kaons π+, π0, and π− pions η, the eta meson Diametrically opposite particles in such a diagram are anti-particles of one another, while particles in the center are their own anti-particle. ==== Meson singlet ==== The chargeless, strangeless eta prime meson was originally classified by itself as a singlet: η′ Under the quark model later developed, it is better viewed as part of a meson nonet, as previously mentioned. === Baryons === ==== Baryon octet ==== The eightfold way organizes the spin-1/2 baryons into an octet. They consist of neutron (n) and proton (p) Σ−, Σ0, and Σ+ sigma baryons Λ0, the strange lambda baryon Ξ− and Ξ0 xi baryons ==== Baryon decuplet ==== The organizational principles of the eightfold way also apply to the spin-3/2 baryons, forming a decuplet. Δ−, Δ0, Δ+, and Δ++ delta baryons Σ∗−, Σ∗0, and Σ∗+ sigma baryons Ξ∗− and Ξ∗0 xi baryons Ω− omega baryon However, one of the particles of this decuplet had not yet been observed when the eightfold way was proposed. Gell-Mann called this particle the Ω− and predicted in 1962 that it would have a strangeness −3, electric charge −1 and a mass near 1680 MeV/c2. In 1964, a particle closely matching these predictions was discovered by a particle accelerator group at Brookhaven. Gell-Mann received the 1969 Nobel Prize in Physics for his work on the theory of elementary particles. == Historical development == === Development === Historically, quarks were motivated by an understanding of flavour symmetry. First, it was noticed (1961) that groups of particles were related to each other in a way that matched the representation theory of SU(3). From that, it was inferred that there is an approximate symmetry of the universe which is represented by the group SU(3). Finally (1964), this led to the discovery of three light quarks (up, down, and strange) interchanged by these SU(3) transformations. === Modern interpretation === The eightfold way may be understood in modern terms as a consequence of flavor symmetries between various kinds of quarks. Since the strong nuclear force affects quarks the same way regardless of their flavor, replacing one flavor of quark with another in a hadron should not alter its mass very much, provided the respective quark masses are smaller than the strong interaction scale, which holds for the three light quarks. Mathematically, this replacement may be described by elements of the SU(3) group. The octets and other hadron arrangements are representations of this group.
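The Ω− mass prediction quoted above can be reproduced with the equal-spacing (Gell-Mann–Okubo) rule for the decuplet, since each step down the decuplet adds one unit of strangeness and an approximately constant mass increment. A minimal arithmetic sketch, taking approximate measured masses (in MeV/c2) as assumed inputs:

```python
# Equal-spacing estimate of the omega-minus mass from the rest of the decuplet.
# Input masses are approximate measured values (MeV/c^2), taken here as assumptions.
masses = {"Delta(1232)": 1232.0, "Sigma*(1385)": 1385.0, "Xi*(1530)": 1533.0}

delta, sigma_star, xi_star = masses.values()
spacings = [sigma_star - delta, xi_star - sigma_star]   # ~153 and ~148 MeV
mean_spacing = sum(spacings) / len(spacings)            # ~150 MeV per unit of strangeness

omega_prediction = xi_star + mean_spacing
print(f"Predicted Omega- mass: {omega_prediction:.0f} MeV/c^2")  # ~1680; observed ~1672
```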
== Flavor symmetry == === SU(3) === There is an abstract three-dimensional vector space: up quark → ( 1 0 0 ) , down quark → ( 0 1 0 ) , strange quark → ( 0 0 1 ) , {\displaystyle {\text{up quark}}\to {\begin{pmatrix}1\\0\\0\end{pmatrix}},\qquad {\text{down quark}}\to {\begin{pmatrix}0\\1\\0\end{pmatrix}},\qquad {\text{strange quark}}\to {\begin{pmatrix}0\\0\\1\end{pmatrix}},} and the laws of physics are approximately invariant under a determinant-1 unitary transformation to this space (sometimes called a flavour rotation): ( x y z ) ↦ A ( x y z ) , where A is in S U ( 3 ) . {\displaystyle {\begin{pmatrix}x\\y\\z\end{pmatrix}}\mapsto A{\begin{pmatrix}x\\y\\z\end{pmatrix}},\quad {\text{where}}\ A\ {\text{is in}}\ SU(3).} Here, SU(3) refers to the Lie group of 3×3 unitary matrices with determinant 1 (special unitary group). For example, the flavour rotation A = ( − 0 1 0 − 1 0 0 − 0 0 1 ) {\displaystyle A={\begin{pmatrix}{\phantom {-}}0&1&0\\-1&0&0\\{\phantom {-}}0&0&1\end{pmatrix}}} is a transformation that simultaneously turns all the up quarks in the universe into down quarks and conversely. More specifically, these flavour rotations are exact symmetries if only strong force interactions are looked at, but they are not truly exact symmetries of the universe because the three quarks have different masses and different electroweak interactions. This approximate symmetry is called flavour symmetry, or more specifically flavour SU(3) symmetry. === Connection to representation theory === Assume we have a certain particle—for example, a proton—in a quantum state | ψ ⟩ {\displaystyle |\psi \rangle } . If we apply one of the flavour rotations A to our particle, it enters a new quantum state which we can call A | ψ ⟩ {\displaystyle A|\psi \rangle } . Depending on A, this new state might be a proton, or a neutron, or a superposition of a proton and a neutron, or various other possibilities. The set of all possible quantum states spans a vector space. Representation theory is a mathematical theory that describes the situation where elements of a group (here, the flavour rotations A in the group SU(3)) are automorphisms of a vector space (here, the set of all possible quantum states that you get from flavour-rotating a proton). Therefore, by studying the representation theory of SU(3), we can learn the possibilities for what the vector space is and how it is affected by flavour symmetry. Since the flavour rotations A are approximate, not exact, symmetries, each orthogonal state in the vector space corresponds to a different particle species. In the example above, when a proton is transformed by every possible flavour rotation A, it turns out that it moves around an 8 dimensional vector space. Those 8 dimensions correspond to the 8 particles in the so-called "baryon octet" (proton, neutron, Σ+, Σ0, Σ−, Ξ−, Ξ0, Λ). This corresponds to an 8-dimensional ("octet") representation of the group SU(3). Since A is an approximate symmetry, all the particles in this octet have similar mass. Every Lie group has a corresponding Lie algebra, and each group representation of the Lie group can be mapped to a corresponding Lie algebra representation on the same vector space. The Lie algebra s u {\displaystyle {\mathfrak {su}}} (3) can be written as the set of 3×3 traceless Hermitian matrices. Physicists generally discuss the representation theory of the Lie algebra s u {\displaystyle {\mathfrak {su}}} (3) instead of the Lie group SU(3), since the former is simpler and the two are ultimately equivalent. 
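The flavour rotation given above is easy to check explicitly. A minimal numpy sketch (purely illustrative) verifying that the matrix is special unitary, i.e. an element of SU(3), and that it exchanges the up- and down-quark basis vectors up to a sign while leaving the strange quark fixed:

```python
import numpy as np

# The example flavour rotation from the text.
A = np.array([[ 0, 1, 0],
              [-1, 0, 0],
              [ 0, 0, 1]], dtype=complex)

up, down, strange = np.eye(3, dtype=complex)

# Membership in SU(3): unitary (A^dagger A = 1) and determinant 1.
assert np.allclose(A.conj().T @ A, np.eye(3))
assert np.isclose(np.linalg.det(A), 1.0)

# Action on the quark basis: up -> -down, down -> up, strange -> strange.
print(A @ up)       # [ 0. -1.  0.]  i.e. -down (an overall phase; physically still "down")
print(A @ down)     # up
print(A @ strange)  # strange
```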
== Notes == == References == == Further reading == M. Gell-Mann; Y. Ne'eman, eds. (1964). The Eightfold Way. W. A. Benjamin. LCCN 65013009. (contains most historical papers on the eightfold way and related topics, including the Gell-Mann–Okubo mass formula.)
Wikipedia/Eightfold_way_(physics)
In particle physics, strangeness (symbol S) is a property of particles, expressed as a quantum number, for describing the decay of particles in strong and electromagnetic interactions, which occur in a short period of time. The strangeness of a particle is defined as: S = − ( n s − n s ¯ ) {\displaystyle S=-(n_{\text{s}}-n_{\bar {\text{s}}})} where ns represents the number of strange quarks (s) and ns̄ represents the number of strange antiquarks (s̄). Evaluation of strangeness production has become an important tool in the search for, discovery, observation and interpretation of quark–gluon plasma (QGP). Strange matter is an excited state of matter and its decay is governed by CKM mixing. The terms strange and strangeness predate the discovery of the quark, and were adopted after its discovery in order to preserve the continuity of the phrase: the strangeness of particles as −1 and of anti-particles as +1, per the original definition. For all the quark flavour quantum numbers (strangeness, charm, topness and bottomness) the convention is that the flavour charge and the electric charge of a quark have the same sign. With this, any flavour carried by a charged meson has the same sign as its charge. == Conservation == Strangeness was introduced by Murray Gell-Mann, Abraham Pais, Tadao Nakano and Kazuhiko Nishijima to explain the fact that certain particles, such as the kaons or the hyperons Σ and Λ, were created easily in particle collisions, yet decayed much more slowly than expected for their large masses and large production cross sections. Noting that collisions seemed to always produce pairs of these particles, it was postulated that a new conserved quantity, dubbed "strangeness", was preserved during their creation, but not conserved in their decay. In our modern understanding, strangeness is conserved during the strong and the electromagnetic interactions, but not during the weak interactions. Consequently, the lightest particles containing a strange quark cannot decay by the strong interaction, and must instead decay via the much slower weak interaction. In most cases these decays change the value of the strangeness by one unit. This doesn't necessarily hold in second-order weak reactions, however, where there is mixing of K0 and K̄0 mesons. All in all, the amount of strangeness can change in a weak interaction reaction by +1, 0 or −1 (depending on the reaction). For example, the interaction of a K− meson with a proton is represented as: K − + p → Ξ 0 + K 0 {\displaystyle K^{-}+p\rightarrow \Xi ^{0}+K^{0}} ( − 1 ) + ( 0 ) → ( − 2 ) + ( 1 ) {\displaystyle (-1)+(0)\rightarrow (-2)+(1)} Here strangeness is conserved and the interaction proceeds via the strong nuclear force. However, strangeness is not conserved in reactions like the decay of the positive kaon: K + → π + + π 0 {\displaystyle K^{+}\rightarrow \pi ^{+}+\pi ^{0}} + 1 → ( 0 ) + ( 0 ) {\displaystyle +1\rightarrow (0)+(0)} Since both pions have a strangeness of 0, this decay violates conservation of strangeness and must therefore proceed via the weak force. == See also == Strangeness and quark–gluon plasma Strange particles == References ==
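The bookkeeping in the two reactions above is simple enough to automate. A minimal sketch (the quark contents are standard assignments, hard-coded here as explicit assumptions) that computes S = −(ns − ns̄) for each particle and the change in strangeness for a reaction:

```python
# Strangeness bookkeeping for the two reactions discussed in the text.
quark_content = {
    "K-":  ["s", "ubar"],
    "K0":  ["d", "sbar"],
    "K+":  ["u", "sbar"],
    "p":   ["u", "u", "d"],
    "Xi0": ["u", "s", "s"],
    "pi+": ["u", "dbar"],
    "pi0": ["u", "ubar"],   # schematically; the physical pi0 is a u/d superposition
}

def strangeness(particle):
    quarks = quark_content[particle]
    return -(quarks.count("s") - quarks.count("sbar"))

def delta_S(initial, final):
    return sum(map(strangeness, final)) - sum(map(strangeness, initial))

print(delta_S(["K-", "p"], ["Xi0", "K0"]))   # 0  -> can proceed via the strong interaction
print(delta_S(["K+"], ["pi+", "pi0"]))       # -1 -> strangeness-violating, weak decay only
```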
Wikipedia/Strangeness_(particle_physics)
In particle physics, the parton model is a model of hadrons, such as protons and neutrons, proposed by Richard Feynman. It is useful for interpreting the cascades of radiation (a parton shower) produced from quantum chromodynamics (QCD) processes and interactions in high-energy particle collisions. == History == The parton model was proposed by Richard Feynman in 1969, used originally for analysis of high-energy hadron collisions. It was applied to electron-proton deep inelastic scattering by James Bjorken and Emmanuel Anthony Paschos. Later, with the experimental observation of Bjorken scaling, the validation of the quark model, and the confirmation of asymptotic freedom in quantum chromodynamics, partons were matched to quarks and gluons. The parton model remains a justifiable approximation at high energies, and others have extended the theory over the years. Murray Gell-Mann preferred to use the term "put-ons" to refer to partons. In 1994, partons were used by Leonard Susskind to model holography. == Model == Any hadron (for example, a proton) can be considered as a composition of a number of point-like constituents, termed "partons". === Component particles === Just as accelerated electric charges emit QED radiation (photons), the accelerated coloured partons will emit QCD radiation in the form of gluons. Unlike the uncharged photons, the gluons themselves carry colour charges and can therefore emit further radiation, leading to parton showers. === Reference frame === The hadron is defined in a reference frame where it has infinite momentum – a valid approximation at high energies. Thus, parton motion is slowed by time dilation, and the hadron charge distribution is Lorentz-contracted, so incoming particles will be scattered "instantaneously and incoherently". Partons are defined with respect to a physical scale (as probed by the inverse of the momentum transfer). For instance, a quark parton at one length scale can turn out to be a superposition of a quark parton state with a quark parton and a gluon parton state together with other states with more partons at a smaller length scale. Similarly, a gluon parton at one scale can resolve into a superposition of a gluon parton state, a gluon parton and quark-antiquark partons state and other multiparton states. Because of this, the number of partons in a hadron actually goes up with momentum transfer. At low energies (i.e. large length scales), a baryon contains three valence partons (quarks) and a meson contains two valence partons (a quark and an antiquark parton). At higher energies, however, observations show sea partons (nonvalence partons) in addition to valence partons. == Parton distribution functions == A parton distribution function (PDF) within so called collinear factorization is defined as the probability density for finding a particle with a certain longitudinal momentum fraction x at resolution scale Q2. Because of the inherent non-perturbative nature of partons which cannot be observed as free particles, parton densities cannot be calculated using perturbative QCD. Within QCD one can, however, study variation of parton density with resolution scale provided by external probe. Such a scale is for instance provided by a virtual photon with virtuality Q2 or by a jet. The scale can be calculated from the energy and the momentum of the virtual photon or jet; the larger the momentum and energy, the smaller the resolution scale—this is a consequence of Heisenberg's uncertainty principle. 
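As a concrete illustration of what a parton distribution function encodes, the sketch below builds a deliberately crude toy proton at a single fixed scale, with assumed functional forms (not a fitted PDF set), and checks the valence-number and momentum sum rules numerically.

```python
# Toy parton distribution functions at one fixed scale Q^2 (shapes are assumptions, not fits);
# illustrates the valence-number and momentum sum rules.
from scipy.integrate import quad

def shape_uv(x): return x**0.5 * (1 - x)**3        # up-valence shape
def shape_dv(x): return x**0.5 * (1 - x)**4        # down-valence shape
def shape_g(x):  return (1 - x)**5 / x             # gluon shape (singular as x -> 0)

# Normalize valence distributions to 2 up and 1 down valence quark.
N_u = 2.0 / quad(shape_uv, 0, 1)[0]
N_d = 1.0 / quad(shape_dv, 0, 1)[0]
u_v = lambda x: N_u * shape_uv(x)
d_v = lambda x: N_d * shape_dv(x)

# Give the gluon whatever momentum fraction the valence quarks do not carry.
valence_momentum = quad(lambda x: x * (u_v(x) + d_v(x)), 0, 1)[0]
N_g = (1.0 - valence_momentum) / quad(lambda x: x * shape_g(x), 0, 1)[0]
g = lambda x: N_g * shape_g(x)

print(quad(u_v, 0, 1)[0])                                   # 2.0 valence up quarks
print(quad(d_v, 0, 1)[0])                                   # 1.0 valence down quark
print(quad(lambda x: x*(u_v(x)+d_v(x)+g(x)), 0, 1)[0])      # 1.0: total momentum fraction
```

Realistic PDF sets, such as those listed below, are multi-flavour, scale-dependent fits to data rather than fixed shapes like these.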
The variation of parton density with resolution scale has been found to agree well with experiment; this is an important test of QCD. Parton distribution functions are obtained by fitting observables to experimental data; they cannot be calculated using perturbative QCD. Recently, it has been found that they can be calculated directly in lattice QCD using large-momentum effective field theory. Experimentally determined parton distribution functions are available from various groups worldwide. The major unpolarized data sets are: ABM Archived 2022-01-19 at the Wayback Machine by S. Alekhin, J. Bluemlein, S. Moch CTEQ, from the CTEQ Collaboration GRV/GJR, from M. Glück, P. Jimenez-Delgado, E. Reya, and A. Vogt HERA PDFs, by H1 and ZEUS collaborations from the Deutsches Elektronen-Synchrotron center (DESY) in Germany MSHT/MRST/MSTW/MMHT, from A. D. Martin, R. G. Roberts, W. J. Stirling, R. S. Thorne, and collaborators NNPDF, from the NNPDF Collaboration The LHAPDF library provides a unified and easy-to-use Fortran/C++ interface to all major PDF sets. Generalized parton distributions (GPDs) are a more recent approach to better understand hadron structure by representing the parton distributions as functions of more variables, such as the transverse momentum and spin of the parton. They can be used to study the spin structure of the proton, in particular, the Ji sum rule relates the integral of GPDs to angular momentum carried by quarks and gluons. Early names included "non-forward", "non-diagonal" or "skewed" parton distributions. They are accessed through a new class of exclusive processes for which all particles are detected in the final state, such as the deeply virtual Compton scattering. Ordinary parton distribution functions are recovered by setting to zero (forward limit) the extra variables in the generalized parton distributions. Other rules show that the electric form factor, the magnetic form factor, or even the form factors associated to the energy-momentum tensor are also included in the GPDs. A full 3-dimensional image of partons inside hadrons can also be obtained from GPDs. == Simulation == Parton showers simulations are of use in computational particle physics either in automatic calculation of particle interaction or decay or event generators, in order to calibrate and interpret (and thus understand) processes in collider experiments. They are particularly important in large hadron collider (LHC) phenomenology, where they are usually explored using Monte Carlo simulation. The scale at which partons are given to hadronization is fixed by the Shower Monte Carlo program. Common choices of Shower Monte Carlo are PYTHIA and HERWIG. == See also == == References == This article contains material from Scholarpedia. == Further reading == Glück, M.; Reya, E.; Vogt, A. (1998). "Dynamical Parton Distributions Revisited". European Physical Journal C. 5 (3): 461–470. arXiv:hep-ph/9806404. Bibcode:1998EPJC....5..461G. doi:10.1007/s100529800978. S2CID 119842774. Hoodbhoy, P. A. (2006). "Generalized Parton Distributions" (PDF). National Center for Physics and Quaid-e-Azam University. Archived from the original (PDF) on 2017-03-31. Retrieved 2011-04-06. Ji, X. (2004). "Generalized Parton Distributions". Annual Review of Nuclear and Particle Science. 54: 413–450. arXiv:hep-ph/9807358. Bibcode:2004ARNPS..54..413J. doi:10.1146/annurev.nucl.54.070103.181302. Kretzer, S.; Lai, H.; Olness, F.; Tung, W. (2004). "CTEQ6 Parton Distributions with Heavy Quark Mass Effects". Physical Review D. 
69 (11): 114005. arXiv:hep-ph/0307022. Bibcode:2004PhRvD..69k4005K. doi:10.1103/PhysRevD.69.114005. S2CID 119379329. Martin, A. D.; Roberts, R. G.; Stirling, W. J.; Thorne, R. S. (2005). "Parton distributions incorporating QED contributions". European Physical Journal C. 39 (2): 155–161. arXiv:hep-ph/0411040. Bibcode:2005EPJC...39..155M. doi:10.1140/epjc/s2004-02088-7. S2CID 14743824. == External links == Feltesse, Joël (2010). "Introduction to Parton Distribution Functions". Scholarpedia. 5 (11): 10160. Bibcode:2010SchpJ...510160F. doi:10.4249/scholarpedia.10160. ISSN 1941-6016. Event Generator Physics (http://www.hep.phy.cam.ac.uk/theory/webber/MCnet/MClecture2.pdf) "Introduction to QCD". people.phys.ethz.ch. Retrieved 2022-08-04. http://www.kceta.kit.edu/grk1694/img/2013_10_01_Hangst.pdf http://d-nb.info/1008230227/34 Marcantonini, Claudio (2010). Applying SCET to parton showers (Thesis). Massachusetts Institute of Technology. hdl:1721.1/62649.
Wikipedia/Parton_(particle_physics)
In quantum field theory, soft-collinear effective theory (or SCET) is a theoretical framework for doing calculations that involve interacting particles carrying widely different energies. The motivation for developing SCET was to control the infrared divergences that occur in quantum chromodynamics (QCD) calculations that involve particles that are soft (carrying much lower energy or momentum than other particles in the process) or collinear (traveling in the same direction as another particle in the process). SCET is an effective theory for highly energetic quarks interacting with collinear and/or soft gluons. It has been used for calculations of the decays of B mesons (quark-antiquark bound states involving a bottom quark) and the properties of jets (sprays of hadrons that emerge from particle collisions when a quark or gluon is produced). SCET has also been used to calculate electroweak interactions in Higgs boson production. The new feature of SCET is its ability to handle more than one soft energy scale. For example, processes involving quarks carrying a high energy Q interacting with gluons have two soft scales: the transverse momentum pT of the collinear particles, plus the even softer scale pT²/Q. SCET provides a power-counting formalism for doing perturbation theory in the small parameter ΛQCD/Q. == External links == The original papers are by Christian Bauer, Sean Fleming, Michael Luke, Dan Pirjol, and Iain Stewart: Bauer, Christian W.; Fleming, Sean; Luke, Michael (2000-12-01). "Summing Sudakov logarithms in B→Xsγ in effective field theory". Physical Review D. 63 (1). American Physical Society (APS): 014006. arXiv:hep-ph/0005275. doi:10.1103/physrevd.63.014006. ISSN 0556-2821. Bauer, Christian W.; Fleming, Sean; Pirjol, Dan; Stewart, Iain W. (2001-05-07). "An effective field theory for collinear and soft gluons: Heavy to light decays". Physical Review D. 63 (11). American Physical Society (APS): 114020. arXiv:hep-ph/0011336. doi:10.1103/physrevd.63.114020. ISSN 0556-2821. Bauer, Christian W.; Stewart, Iain W. (2001). "Invariant operators in collinear effective theory". Physics Letters B. 516 (1–2). Elsevier BV: 134–142. arXiv:hep-ph/0107001. doi:10.1016/s0370-2693(01)00902-9. ISSN 0370-2693. Bauer, Christian W.; Pirjol, Dan; Stewart, Iain W. (2002-02-12). "Soft-collinear factorization in effective field theory". Physical Review D. 65 (5). American Physical Society (APS): 054022. arXiv:hep-ph/0109045. Bibcode:2002PhRvD..65e4022B. doi:10.1103/physrevd.65.054022. ISSN 0556-2821. Bauer, Christian W.; Pirjol, Dan; Stewart, Iain W. (2002-09-17). "Power counting in the soft-collinear effective theory". Physical Review D. 66 (5). American Physical Society (APS): 054005. arXiv:hep-ph/0205289. Bibcode:2002PhRvD..66e4005B. doi:10.1103/physrevd.66.054005. ISSN 0556-2821. == References ==
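To make the multi-scale statement above concrete, here is a tiny numerical illustration with assumed example values of Q and pT (not taken from any specific measurement), showing the separation between the hard, collinear, and ultrasoft scales:

```python
# Illustrative SCET scale hierarchy; Q and pT are assumed example values, not data.
Q = 100.0                      # hard scale (GeV)
pT = 10.0                      # transverse momentum of collinear radiation (GeV)
lambda_qcd = 0.2               # approximate QCD scale (GeV)

print(Q, pT, pT**2 / Q)        # 100.0 10.0 1.0 -> three widely separated scales
print(pT / Q, lambda_qcd / Q)  # 0.1 0.002 -> small parameters used in power counting
```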
Wikipedia/Soft-collinear_effective_theory
In physics and chemistry, a nucleon is either a proton or a neutron, considered in its role as a component of an atomic nucleus. The number of nucleons in a nucleus defines the atom's mass number. Until the 1960s, nucleons were thought to be elementary particles, not made up of smaller parts. Now they are understood as composite particles, made of three quarks bound together by the strong interaction. The interaction between two or more nucleons is called internucleon interaction or nuclear force, which is also ultimately caused by the strong interaction. (Before the discovery of quarks, the term "strong interaction" referred to just internucleon interactions.) Nucleons sit at the boundary where particle physics and nuclear physics overlap. Particle physics, particularly quantum chromodynamics, provides the fundamental equations that describe the properties of quarks and of the strong interaction. These equations describe quantitatively how quarks can bind together into protons and neutrons (and all the other hadrons). However, when multiple nucleons are assembled into an atomic nucleus (nuclide), these fundamental equations become too difficult to solve directly (see lattice QCD). Instead, nuclides are studied within nuclear physics, which studies nucleons and their interactions by approximations and models, such as the nuclear shell model. These models can successfully describe nuclide properties, as for example, whether or not a particular nuclide undergoes radioactive decay. The proton and neutron fall into several categories at once: they are fermions, hadrons and baryons. The proton carries a positive net charge, and the neutron carries a zero net charge; the proton's mass is only about 0.13% less than the neutron's. Thus, they can be viewed as two states of the same nucleon, and together form an isospin doublet (I = 1/2). In isospin space, neutrons can be transformed into protons and conversely by SU(2) symmetries. These nucleons are acted upon equally by the strong interaction, which is invariant under rotation in isospin space. According to Noether's theorem, isospin is conserved with respect to the strong interaction.: 129–130  == Overview == === Properties === Protons and neutrons are best known in their role as nucleons, i.e., as the components of atomic nuclei, but they also exist as free particles. Free neutrons are unstable, with a half-life of around 10 minutes, but they have important applications (see neutron radiation and neutron scattering). Protons not bound to other nucleons are the nuclei of hydrogen atoms when bound with an electron or – if not bound to anything – are ions or cosmic rays. Both the proton and the neutron are composite particles, meaning that each is composed of smaller parts, namely three quarks each; although once thought to be so, neither is an elementary particle. A proton is composed of two up quarks and one down quark, while the neutron has one up quark and two down quarks. Quarks are held together by the strong force, or equivalently, by gluons, which mediate the strong force at the quark level. An up quark has electric charge +2/3 e, and a down quark has charge −1/3 e, so the summed electric charges of proton and neutron are +e and 0, respectively. Thus, the neutron has a charge of 0 (zero), and therefore is electrically neutral; indeed, the term "neutron" comes from the fact that a neutron is electrically neutral.
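The charge arithmetic in the previous paragraph can be written out explicitly; a minimal sketch using exact fractions:

```python
# Net electric charge of the nucleons from their quark content, in units of e.
from fractions import Fraction

charge = {"u": Fraction(2, 3), "d": Fraction(-1, 3)}

proton  = ["u", "u", "d"]
neutron = ["u", "d", "d"]

print(sum(charge[q] for q in proton))    # 1  -> +e
print(sum(charge[q] for q in neutron))   # 0  -> electrically neutral
```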
The masses of the proton and neutron are similar: for the proton it is 1.6726×10−27 kg (938.27 MeV/c2), while for the neutron it is 1.6749×10−27 kg (939.57 MeV/c2); the neutron is roughly 0.13% heavier. The similarity in mass can be explained roughly by the slight difference in masses of up quarks and down quarks composing the nucleons. However, a detailed description remains an unsolved problem in particle physics.: 135–136  The spin of the nucleon is ⁠1/2⁠, which means that they are fermions and, like electrons, are subject to the Pauli exclusion principle: no more than one nucleon, e.g. in an atomic nucleus, may occupy the same quantum state. The isospin and spin quantum numbers of the nucleon have two states each, resulting in four combinations in total. An alpha particle is composed of four nucleons occupying all four combinations, namely, it has two protons (having opposite spin) and two neutrons (also having opposite spin), and its net nuclear spin is zero. In larger nuclei constituent nucleons, by Pauli exclusion, are compelled to have relative motion, which may also contribute to nuclear spin via the orbital quantum number. They spread out into nuclear shells analogous to electron shells known from chemistry. Both the proton and neutron have magnetic moments, though the nucleon magnetic moments are anomalous and were unexpected when they were discovered in the 1930s. The proton's magnetic moment, symbol μp, is 2.79 μN, whereas, if the proton were an elementary Dirac particle, it should have a magnetic moment of 1.0 μN. Here the unit for the magnetic moments is the nuclear magneton, symbol μN, an atomic-scale unit of measure. The neutron's magnetic moment is μn = −1.91 μN, whereas, since the neutron lacks an electric charge, it should have no magnetic moment. The value of the neutron's magnetic moment is negative because the direction of the moment is opposite to the neutron's spin. The nucleon magnetic moments arise from the quark substructure of the nucleons. The proton magnetic moment is exploited for NMR / MRI scanning. === Stability === A neutron in free state is an unstable particle, with a half-life around ten minutes. It undergoes β− decay (a type of radioactive decay) by turning into a proton while emitting an electron and an electron antineutrino. This reaction can occur because the mass of the neutron is slightly greater than that of the proton. (See the Neutron article for more discussion of neutron decay.) A proton by itself is thought to be stable, or at least its lifetime is too long to measure. This is an important discussion in particle physics (see Proton decay). Inside a nucleus, on the other hand, combined protons and neutrons (nucleons) can be stable or unstable depending on the nuclide, or nuclear species. Inside some nuclides, a neutron can turn into a proton (producing other particles) as described above; the reverse can happen inside other nuclides, where a proton turns into a neutron (producing other particles) through β+ decay or electron capture. And inside still other nuclides, both protons and neutrons are stable and do not change form. === Antinucleons === Both nucleons have corresponding antiparticles: the antiproton and the antineutron, which have the same mass and opposite charge as the proton and neutron respectively, and they interact in the same way. (This is generally believed to be exactly true, due to CPT symmetry. If there is a difference, it is too small to measure in all experiments to date.) 
In particular, antinucleons can bind into an "antinucleus". So far, scientists have created antideuterium and antihelium-3 nuclei. == Tables of detailed properties == === Nucleons === ^a The masses of the proton and neutron are known with far greater precision in daltons (Da) than in MeV/c2 due to the way in which these are defined. The conversion factor used is 1 Da = 931.494028(23) MeV/c2. ^b At least 1035 years. See proton decay. ^c For free neutrons; in most common nuclei, neutrons are stable. The masses of their antiparticles are assumed to be identical, and no experiments have refuted this to date. Current experiments show any relative difference between the masses of the proton and antiproton must be less than 2×10−9 and the difference between the neutron and antineutron masses is on the order of (9±6)×10−5 MeV/c2. === Nucleon resonances === Nucleon resonances are excited states of nucleon particles, often corresponding to one of the quarks having a flipped spin state, or with different orbital angular momentum when the particle decays. Only resonances with a 3- or 4-star rating at the Particle Data Group (PDG) are included in this table. Due to their extraordinarily short lifetimes, many properties of these particles are still under investigation. The symbol format is given as N(m) LIJ, where m is the particle's approximate mass, L is the orbital angular momentum (in the spectroscopic notation) of the nucleon–meson pair, produced when it decays, and I and J are the particle's isospin and total angular momentum respectively. Since nucleons are defined as having ⁠1/2⁠ isospin, the first number will always be 1, and the second number will always be odd. When discussing nucleon resonances, sometimes the N is omitted and the order is reversed, in the form LIJ (m); for example, a proton can be denoted as "N(939) S11" or "S11 (939)". The table below lists only the base resonance; each individual entry represents 4 baryons: 2 nucleon resonances particles and their 2 antiparticles. Each resonance exists in a form with a positive electric charge (Q), with a quark composition of uud like the proton, and a neutral form, with a quark composition of udd like the neutron, as well as the corresponding antiparticles with antiquark compositions of uud and udd respectively. Since they contain no strange, charm, bottom, or top quarks, these particles do not possess strangeness, etc. The table only lists the resonances with an isospin = ⁠1/2⁠. For resonances with isospin = ⁠3/2⁠, see the article on Delta baryons. † The P11(939) nucleon represents the excited state of a normal proton or neutron. Such a particle may be stable when in an atomic nucleus, e.g. in lithium-6. == Quark model classification == In the quark model with SU(2) flavour, the two nucleons are part of the ground-state doublet. The proton has quark content of uud, and the neutron, udd. In SU(3) flavour, they are part of the ground-state octet (8) of spin-⁠1/2⁠ baryons, known as the Eightfold way. The other members of this octet are the hyperons strange isotriplet Σ+, Σ0, Σ−, the Λ and the strange isodoublet Ξ0, Ξ−. One can extend this multiplet in SU(4) flavour (with the inclusion of the charm quark) to the ground-state 20-plet, or to SU(6) flavour (with the inclusion of the top and bottom quarks) to the ground-state 56-plet. The article on isospin provides an explicit expression for the nucleon wave functions in terms of the quark flavour eigenstates. 
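The quark-model wave functions mentioned above also yield the textbook estimate for the nucleon magnetic moments discussed earlier. A minimal sketch of the SU(6) result, μp = (4μu − μd)/3 and μn = (4μd − μu)/3, under the simplifying assumption of equal up and down constituent masses (so μd = −μu/2); this is an approximation for illustration, not a fit:

```python
# Naive constituent-quark (SU(6)) estimate of the nucleon magnetic moment ratio.
# Assumes m_u = m_d, so mu_d = -mu_u/2; mu_u itself drops out of the ratio.
mu_u = 1.0              # arbitrary units; only the ratio below is meaningful
mu_d = -mu_u / 2

mu_p = (4 * mu_u - mu_d) / 3
mu_n = (4 * mu_d - mu_u) / 3

print(mu_p / mu_n)           # -1.5 from the quark model
print(2.79 / -1.91)          # ~ -1.46 from the measured moments quoted above
```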
== Models == Although it is known that the nucleon is made from three quarks, as of 2006, it is not known how to solve the equations of motion for quantum chromodynamics. Thus, the study of the low-energy properties of the nucleon are performed by means of models. The only first-principles approach available is to attempt to solve the equations of QCD numerically, using lattice QCD. This requires complicated algorithms and very powerful supercomputers. However, several analytic models also exist: === Skyrmion models === The skyrmion models the nucleon as a topological soliton in a nonlinear SU(2) pion field. The topological stability of the skyrmion is interpreted as the conservation of baryon number, that is, the non-decay of the nucleon. The local topological winding number density is identified with the local baryon number density of the nucleon. With the pion isospin vector field oriented in the shape of a hedgehog space, the model is readily solvable, and is thus sometimes called the hedgehog model. The hedgehog model is able to predict low-energy parameters, such as the nucleon mass, radius and axial coupling constant, to approximately 30% of experimental values. === MIT bag model === The MIT bag model confines quarks and gluons interacting through quantum chromodynamics to a region of space determined by balancing the pressure exerted by the quarks and gluons against a hypothetical pressure exerted by the vacuum on all colored quantum fields. The simplest approximation to the model confines three non-interacting quarks to a spherical cavity, with the boundary condition that the quark vector current vanish on the boundary. The non-interacting treatment of the quarks is justified by appealing to the idea of asymptotic freedom, whereas the hard-boundary condition is justified by quark confinement. Mathematically, the model vaguely resembles that of a radar cavity, with solutions to the Dirac equation standing in for solutions to the Maxwell equations, and the vanishing vector current boundary condition standing for the conducting metal walls of the radar cavity. If the radius of the bag is set to the radius of the nucleon, the bag model predicts a nucleon mass that is within 30% of the actual mass. Although the basic bag model does not provide a pion-mediated interaction, it describes excellently the nucleon–nucleon forces through the 6 quark bag s-channel mechanism using the P-matrix. === Chiral bag model === The chiral bag model merges the MIT bag model and the skyrmion model. In this model, a hole is punched out of the middle of the skyrmion and replaced with a bag model. The boundary condition is provided by the requirement of continuity of the axial vector current across the bag boundary. Very curiously, the missing part of the topological winding number (the baryon number) of the hole punched into the skyrmion is exactly made up by the non-zero vacuum expectation value (or spectral asymmetry) of the quark fields inside the bag. As of 2017, this remarkable trade-off between topology and the spectrum of an operator does not have any grounding or explanation in the mathematical theory of Hilbert spaces and their relationship to geometry. Several other properties of the chiral bag are notable: It provides a better fit to the low-energy nucleon properties, to within 5–10%, and these are almost completely independent of the chiral-bag radius, as long as the radius is less than the nucleon radius. 
This independence of radius is referred to as the Cheshire Cat principle, after the fading of Lewis Carroll's Cheshire Cat to just its smile. It is expected that a first-principles solution of the equations of QCD will demonstrate a similar duality of quark–meson descriptions. == See also == SLAC bag model Hadrons Electroweak interaction == Footnotes == == References == === Particle listings === == Further reading == Thomas, A. W.; Weise, W. (2001). The Structure of the Nucleon. Berlin, DE: Wiley-WCH. ISBN 3-527-40297-7. Brown, G .E.; Jackson, A. D. (1976). The Nucleon–Nucleon Interaction. North-Holland Publishing. ISBN 978-0-7204-0335-0. Nakamura, N.; Particle Data Group; et al. (2011). "Review of Particle Physics". Journal of Physics G. 37 (7): 075021. Bibcode:2010JPhG...37g5021N. doi:10.1088/0954-3899/37/7A/075021. hdl:10481/34593.
Wikipedia/Bag_model
In quantum chromodynamics, heavy quark effective theory (HQET) is an effective field theory describing the physics of heavy (that is, of mass far greater than the QCD scale) quarks. It is used in studying the properties of hadrons containing a single charm or bottom quark. The effective theory was formalised in 1990 by Howard Georgi, Estia Eichten and Christopher Hill, building upon the works of Nathan Isgur and Mark Wise, Voloshin and Shifman, and others. Quantum chromodynamics (QCD) is the theory of strong force, through which quarks and gluons interact. HQET is the limit of QCD with the quark mass taken to infinity while its four-velocity is held fixed. This approximation enables non-perturbative (in the strong interaction coupling) treatment of quarks that are much heavier than the QCD mass scale. The mass scale is of order 200 MeV. Hence the heavy quarks include charm, bottom and top quarks, whereas up, down and strange quarks are considered light. Since the top quark is extremely short-lived, only the charm and bottom quarks are of significant interest to HQET, of which only the latter has mass sufficiently high that the effective theory can be applied without major perturbative corrections. == References == == Further reading == Shifman, M. A. (1999). "Lectures on Heavy Quarks in Quantum Chromodynamics". ITEP Lectures on Particle Physics and Field Theory. World Scientific Lecture Notes in Physics. Vol. 62. pp. 1–109. arXiv:hep-ph/9510377. doi:10.1142/9789812798961_0001. ISBN 978-981-02-3947-3. ISSN 1793-1436. S2CID 18892623. Sommer, Rainer (2015). "Non-perturbative Heavy Quark Effective Theory: Introduction and Status". Nuclear and Particle Physics Proceedings. 261–262: 338–367. arXiv:1501.03060. Bibcode:2015NPPP..261..338S. doi:10.1016/j.nuclphysbps.2015.03.022. ISSN 2405-6014. S2CID 53354994.
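As a rough numerical illustration of why the bottom quark is the cleaner case, the sketch below compares the size of the power corrections ΛQCD/mQ that HQET expands in, using an assumed ΛQCD of about 0.2 GeV and approximate quark masses:

```python
# Size of HQET power corrections Lambda_QCD / m_Q (all values are approximate assumptions).
lambda_qcd = 0.2                         # GeV
masses = {"charm": 1.3, "bottom": 4.2}   # GeV

for quark, m in masses.items():
    print(quark, round(lambda_qcd / m, 2))
# charm  ~0.15 -> sizeable power corrections
# bottom ~0.05 -> the expansion works well, as noted above
```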
Wikipedia/Heavy_quark_effective_theory
In physics, a gauge theory is a type of field theory in which the Lagrangian, and hence the dynamics of the system itself, does not change under local transformations according to certain smooth families of operations (Lie groups). Formally, the Lagrangian is invariant under these transformations. The term "gauge" refers to any specific mathematical formalism to regulate redundant degrees of freedom in the Lagrangian of a physical system. The transformations between possible gauges, called gauge transformations, form a Lie group—referred to as the symmetry group or the gauge group of the theory. Associated with any Lie group is the Lie algebra of group generators. For each group generator there necessarily arises a corresponding field (usually a vector field) called the gauge field. Gauge fields are included in the Lagrangian to ensure its invariance under the local group transformations (called gauge invariance). When such a theory is quantized, the quanta of the gauge fields are called gauge bosons. If the symmetry group is non-commutative, then the gauge theory is referred to as non-abelian gauge theory, the usual example being the Yang–Mills theory. Many powerful theories in physics are described by Lagrangians that are invariant under some symmetry transformation groups. When they are invariant under a transformation identically performed at every point in the spacetime in which the physical processes occur, they are said to have a global symmetry. Local symmetry, the cornerstone of gauge theories, is a stronger constraint. In fact, a global symmetry is just a local symmetry whose group's parameters are fixed in spacetime (the same way a constant value can be understood as a function of a certain parameter, the output of which is always the same). Gauge theories are important as the successful field theories explaining the dynamics of elementary particles. Quantum electrodynamics is an abelian gauge theory with the symmetry group U(1) and has one gauge field, the electromagnetic four-potential, with the photon being the gauge boson. The Standard Model is a non-abelian gauge theory with the symmetry group U(1) × SU(2) × SU(3) and has a total of twelve gauge bosons: the photon, three weak bosons and eight gluons. Gauge theories are also important in explaining gravitation in the theory of general relativity. Its case is somewhat unusual in that the gauge field is a tensor, the Lanczos tensor. Theories of quantum gravity, beginning with gauge gravitation theory, also postulate the existence of a gauge boson known as the graviton. Gauge symmetries can be viewed as analogues of the principle of general covariance of general relativity in which the coordinate system can be chosen freely under arbitrary diffeomorphisms of spacetime. Both gauge invariance and diffeomorphism invariance reflect a redundancy in the description of the system. An alternative theory of gravitation, gauge theory gravity, replaces the principle of general covariance with a true gauge principle with new gauge fields. Historically, these ideas were first stated in the context of classical electromagnetism and later in general relativity. However, the modern importance of gauge symmetries appeared first in the relativistic quantum mechanics of electrons – quantum electrodynamics, elaborated on below. Today, gauge theories are useful in condensed matter, nuclear and high energy physics among other subfields. == History == The concept and the name of gauge theory derives from the work of Hermann Weyl in 1918. 
Weyl, in an attempt to generalize the geometrical ideas of general relativity to include electromagnetism, conjectured that Eichinvarianz or invariance under the change of scale (or "gauge") might also be a local symmetry of general relativity. After the development of quantum mechanics, Weyl, Vladimir Fock and Fritz London replaced the simple scale factor with a complex quantity and turned the scale transformation into a change of phase, which is a U(1) gauge symmetry. This explained the electromagnetic field effect on the wave function of a charged quantum mechanical particle. Weyl's 1929 paper introduced the modern concept of gauge invariance subsequently popularized by Wolfgang Pauli in his 1941 review. In retrospect, James Clerk Maxwell's formulation, in 1864–65, of electrodynamics in "A Dynamical Theory of the Electromagnetic Field" suggested the possibility of invariance, when he stated that any vector field whose curl vanishes—and can therefore normally be written as a gradient of a function—could be added to the vector potential without affecting the magnetic field. Similarly unnoticed, David Hilbert had derived the Einstein field equations by postulating the invariance of the action under a general coordinate transformation. The importance of these symmetry invariances remained unnoticed until Weyl's work. Inspired by Pauli's descriptions of connection between charge conservation and field theory driven by invariance, Chen Ning Yang sought a field theory for atomic nuclei binding based on conservation of nuclear isospin.: 202  In 1954, Yang and Robert Mills generalized the gauge invariance of electromagnetism, constructing a theory based on the action of the (non-abelian) SU(2) symmetry group on the isospin doublet of protons and neutrons. This is similar to the action of the U(1) group on the spinor fields of quantum electrodynamics. The Yang–Mills theory became the prototype theory to resolve some of the confusion in elementary particle physics. This idea later found application in the quantum field theory of the weak force, and its unification with electromagnetism in the electroweak theory. Gauge theories became even more attractive when it was realized that non-abelian gauge theories reproduced a feature called asymptotic freedom. Asymptotic freedom was believed to be an important characteristic of strong interactions. This motivated searching for a strong force gauge theory. This theory, now known as quantum chromodynamics, is a gauge theory with the action of the SU(3) group on the color triplet of quarks. The Standard Model unifies the description of electromagnetism, weak interactions and strong interactions in the language of gauge theory. In the 1970s, Michael Atiyah began studying the mathematics of solutions to the classical Yang–Mills equations. In 1983, Atiyah's student Simon Donaldson built on this work to show that the differentiable classification of smooth 4-manifolds is very different from their classification up to homeomorphism. Michael Freedman used Donaldson's work to exhibit exotic R4s, that is, exotic differentiable structures on Euclidean 4-dimensional space. This led to an increasing interest in gauge theory for its own sake, independent of its successes in fundamental physics. In 1994, Edward Witten and Nathan Seiberg invented gauge-theoretic techniques based on supersymmetry that enabled the calculation of certain topological invariants (the Seiberg–Witten invariants). 
These contributions to mathematics from gauge theory have led to a renewed interest in this area. The importance of gauge theories in physics is exemplified in the success of the mathematical formalism in providing a unified framework to describe the quantum field theories of electromagnetism, the weak force and the strong force. This theory, known as the Standard Model, accurately describes experimental predictions regarding three of the four fundamental forces of nature, and is a gauge theory with the gauge group SU(3) × SU(2) × U(1). Modern theories like string theory, as well as general relativity, are, in one way or another, gauge theories. See Jackson and Okun for early history of gauge and Pickering for more about the history of gauge and quantum field theories. == Description == === Global and local symmetries === ==== Global symmetry ==== In physics, the mathematical description of any physical situation usually contains excess degrees of freedom; the same physical situation is equally well described by many equivalent mathematical configurations. For instance, in Newtonian dynamics, if two configurations are related by a Galilean transformation (an inertial change of reference frame) they represent the same physical situation. These transformations form a group of "symmetries" of the theory, and a physical situation corresponds not to an individual mathematical configuration but to a class of configurations related to one another by this symmetry group. This idea can be generalized to include local as well as global symmetries, analogous to much more abstract "changes of coordinates" in a situation where there is no preferred "inertial" coordinate system that covers the entire physical system. A gauge theory is a mathematical model that has symmetries of this kind, together with a set of techniques for making physical predictions consistent with the symmetries of the model. ==== Example of global symmetry ==== When a quantity occurring in the mathematical configuration is not just a number but has some geometrical significance, such as a velocity or an axis of rotation, its representation as numbers arranged in a vector or matrix is also changed by a coordinate transformation. For instance, if one description of a pattern of fluid flow states that the fluid velocity in the neighborhood of (x = 1, y = 0) is 1 m/s in the positive x direction, then a description of the same situation in which the coordinate system has been rotated clockwise by 90 degrees states that the fluid velocity in the neighborhood of (x = 0, y= −1) is 1 m/s in the negative y direction. The coordinate transformation has affected both the coordinate system used to identify the location of the measurement and the basis in which its value is expressed. As long as this transformation is performed globally (affecting the coordinate basis in the same way at every point), the effect on values that represent the rate of change of some quantity along some path in space and time as it passes through point P is the same as the effect on values that are truly local to P. ==== Local symmetry ==== ===== Use of fiber bundles to describe local symmetries ===== In order to adequately describe physical situations in more complex theories, it is often necessary to introduce a "coordinate basis" for some of the objects of the theory that do not have this simple relationship to the coordinates used to label points in space and time. 
(In mathematical terms, the theory involves a fiber bundle in which the fiber at each point of the base space consists of possible coordinate bases for use when describing the values of objects at that point.) In order to spell out a mathematical configuration, one must choose a particular coordinate basis at each point (a local section of the fiber bundle) and express the values of the objects of the theory (usually "fields" in the physicist's sense) using this basis. Two such mathematical configurations are equivalent (describe the same physical situation) if they are related by a transformation of this abstract coordinate basis (a change of local section, or gauge transformation). In most gauge theories, the set of possible transformations of the abstract gauge basis at an individual point in space and time is a finite-dimensional Lie group. The simplest such group is U(1), which appears in the modern formulation of quantum electrodynamics (QED) via its use of complex numbers. QED is generally regarded as the first, and simplest, physical gauge theory. The set of possible gauge transformations of the entire configuration of a given gauge theory also forms a group, the gauge group of the theory. An element of the gauge group can be parameterized by a smoothly varying function from the points of spacetime to the (finite-dimensional) Lie group, such that the value of the function and its derivatives at each point represents the action of the gauge transformation on the fiber over that point. A gauge transformation with constant parameter at every point in space and time is analogous to a rigid rotation of the geometric coordinate system; it represents a global symmetry of the gauge representation. As in the case of a rigid rotation, this gauge transformation affects expressions that represent the rate of change along a path of some gauge-dependent quantity in the same way as those that represent a truly local quantity. A gauge transformation whose parameter is not a constant function is referred to as a local symmetry; its effect on expressions that involve a derivative is qualitatively different from that on expressions that do not. (This is analogous to a non-inertial change of reference frame, which can produce a Coriolis effect.) === Gauge fields === The "gauge covariant" version of a gauge theory accounts for this effect by introducing a gauge field (in mathematical language, an Ehresmann connection) and formulating all rates of change in terms of the covariant derivative with respect to this connection. The gauge field becomes an essential part of the description of a mathematical configuration. A configuration in which the gauge field can be eliminated by a gauge transformation has the property that its field strength (in mathematical language, its curvature) is zero everywhere; a gauge theory is not limited to these configurations. In other words, the distinguishing characteristic of a gauge theory is that the gauge field does not merely compensate for a poor choice of coordinate system; there is generally no gauge transformation that makes the gauge field vanish. When analyzing the dynamics of a gauge theory, the gauge field must be treated as a dynamical variable, similar to other objects in the description of a physical situation. In addition to its interaction with other objects via the covariant derivative, the gauge field typically contributes energy in the form of a "self-energy" term. 
One can obtain the equations for the gauge theory by: starting from a naïve ansatz without the gauge field (in which the derivatives appear in a "bare" form); listing those global symmetries of the theory that can be characterized by a continuous parameter (generally an abstract equivalent of a rotation angle); computing the correction terms that result from allowing the symmetry parameter to vary from place to place; and reinterpreting these correction terms as couplings to one or more gauge fields, and giving these fields appropriate self-energy terms and dynamical behavior. This is the sense in which a gauge theory "extends" a global symmetry to a local symmetry, and closely resembles the historical development of the gauge theory of gravity known as general relativity. === Physical experiments === Gauge theories used to model the results of physical experiments engage in: limiting the universe of possible configurations to those consistent with the information used to set up the experiment, and then computing the probability distribution of the possible outcomes that the experiment is designed to measure. We cannot express the mathematical descriptions of the "setup information" and the "possible measurement outcomes", or the "boundary conditions" of the experiment, without reference to a particular coordinate system, including a choice of gauge. One assumes an adequate experiment isolated from "external" influence that is itself a gauge-dependent statement. Mishandling gauge dependence calculations in boundary conditions is a frequent source of anomalies, and approaches to anomaly avoidance classifies gauge theories. === Continuum theories === The two gauge theories mentioned above, continuum electrodynamics and general relativity, are continuum field theories. The techniques of calculation in a continuum theory implicitly assume that: given a completely fixed choice of gauge, the boundary conditions of an individual configuration are completely described given a completely fixed gauge and a complete set of boundary conditions, the least action determines a unique mathematical configuration and therefore a unique physical situation consistent with these bounds fixing the gauge introduces no anomalies in the calculation, due either to gauge dependence in describing partial information about boundary conditions or to incompleteness of the theory. Determination of the likelihood of possible measurement outcomes proceed by: establishing a probability distribution over all physical situations determined by boundary conditions consistent with the setup information establishing a probability distribution of measurement outcomes for each possible physical situation convolving these two probability distributions to get a distribution of possible measurement outcomes consistent with the setup information These assumptions have enough validity across a wide range of energy scales and experimental conditions to allow these theories to make accurate predictions about almost all of the phenomena encountered in daily life: light, heat, and electricity, eclipses, spaceflight, etc. They fail only at the smallest and largest scales due to omissions in the theories themselves, and when the mathematical techniques themselves break down, most notably in the case of turbulence and other chaotic phenomena. 
=== Quantum field theories === Other than these classical continuum field theories, the most widely known gauge theories are quantum field theories, including quantum electrodynamics and the Standard Model of elementary particle physics. The starting point of a quantum field theory is much like that of its continuum analog: a gauge-covariant action integral that characterizes "allowable" physical situations according to the principle of least action. However, continuum and quantum theories differ significantly in how they handle the excess degrees of freedom represented by gauge transformations. Continuum theories, and most pedagogical treatments of the simplest quantum field theories, use a gauge fixing prescription to reduce the orbit of mathematical configurations that represent a given physical situation to a smaller orbit related by a smaller gauge group (the global symmetry group, or perhaps even the trivial group). More sophisticated quantum field theories, in particular those that involve a non-abelian gauge group, break the gauge symmetry within the techniques of perturbation theory by introducing additional fields (the Faddeev–Popov ghosts) and counterterms motivated by anomaly cancellation, in an approach known as BRST quantization. While these concerns are in one sense highly technical, they are also closely related to the nature of measurement, the limits on knowledge of a physical situation, and the interactions between incompletely specified experimental conditions and incompletely understood physical theory. The mathematical techniques that have been developed in order to make gauge theories tractable have found many other applications, from solid-state physics and crystallography to low-dimensional topology. == Classical gauge theory == === Classical electromagnetism === In electrostatics, one can either discuss the electric field, E, or its corresponding electric potential, V. Knowledge of one makes it possible to find the other, except that potentials differing by a constant, V ↦ V + C {\displaystyle V\mapsto V+C} , correspond to the same electric field. This is because the electric field relates to changes in the potential from one point in space to another, and the constant C would cancel out when subtracting to find the change in potential. In terms of vector calculus, the electric field is the gradient of the potential, E = − ∇ V {\displaystyle \mathbf {E} =-\nabla V} . Generalizing from static electricity to electromagnetism, we have a second potential, the vector potential A, with E = − ∇ V − ∂ A ∂ t B = ∇ × A {\displaystyle {\begin{aligned}\mathbf {E} &=-\nabla V-{\frac {\partial \mathbf {A} }{\partial t}}\\\mathbf {B} &=\nabla \times \mathbf {A} \end{aligned}}} The general gauge transformations now become not just V ↦ V + C {\displaystyle V\mapsto V+C} but A ↦ A + ∇ f V ↦ V − ∂ f ∂ t {\displaystyle {\begin{aligned}\mathbf {A} &\mapsto \mathbf {A} +\nabla f\\V&\mapsto V-{\frac {\partial f}{\partial t}}\end{aligned}}} where f is any twice continuously differentiable function that depends on position and time. The electromagnetic fields remain the same under the gauge transformation. === Example: scalar O(n) gauge theory === The remainder of this section requires some familiarity with classical or quantum field theory, and the use of Lagrangians. Definitions in this section: gauge group, gauge field, interaction Lagrangian, gauge boson. 
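Before the scalar O(n) example that follows, the four-step procedure described in the previous subsection can be sketched for the simplest possible case: a single complex scalar field with a global U(1) phase symmetry. This is a compact illustrative sketch in natural units; sign and normalization conventions for the charge e and the gauge field differ between texts. The ungauged Lagrangian

\mathcal{L} = \partial_\mu \varphi^{*}\,\partial^\mu \varphi - m^{2}\varphi^{*}\varphi

is invariant under the global transformation \varphi \mapsto e^{i\theta}\varphi with constant \theta. Promoting \theta to a function \theta(x) spoils the invariance, since

\partial_\mu\!\left(e^{i\theta(x)}\varphi\right) = e^{i\theta(x)}\left(\partial_\mu\varphi + i(\partial_\mu\theta)\varphi\right).

Introducing a gauge field A_\mu with

D_\mu = \partial_\mu - i e A_\mu, \qquad A_\mu \mapsto A_\mu + \tfrac{1}{e}\,\partial_\mu\theta,

restores the symmetry, because D_\mu\varphi \mapsto e^{i\theta(x)} D_\mu\varphi. The locally invariant Lagrangian, including a self-energy (kinetic) term for the new field, is then

\mathcal{L} = (D_\mu\varphi)^{*} D^\mu\varphi - m^{2}\varphi^{*}\varphi - \tfrac{1}{4}F_{\mu\nu}F^{\mu\nu}, \qquad F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu.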
The following illustrates how local gauge invariance can be "motivated" heuristically starting from global symmetry properties, and how it leads to an interaction between originally non-interacting fields. Consider a set of n {\displaystyle n} non-interacting real scalar fields, with equal masses m. This system is described by an action that is the sum of the (usual) action for each scalar field φ i {\displaystyle \varphi _{i}} S = ∫ d 4 x ∑ i = 1 n [ 1 2 ∂ μ φ i ∂ μ φ i − 1 2 m 2 φ i 2 ] {\displaystyle {\mathcal {S}}=\int \,\mathrm {d} ^{4}x\sum _{i=1}^{n}\left[{\frac {1}{2}}\partial _{\mu }\varphi _{i}\partial ^{\mu }\varphi _{i}-{\frac {1}{2}}m^{2}\varphi _{i}^{2}\right]} The Lagrangian (density) can be compactly written as L = 1 2 ( ∂ μ Φ ) T ∂ μ Φ − 1 2 m 2 Φ T Φ {\displaystyle \ {\mathcal {L}}={\frac {1}{2}}(\partial _{\mu }\Phi )^{\mathsf {T}}\partial ^{\mu }\Phi -{\frac {1}{2}}m^{2}\Phi ^{\mathsf {T}}\Phi } by introducing a vector of fields Φ T = ( φ 1 , φ 2 , … , φ n ) {\displaystyle \ \Phi ^{\mathsf {T}}=(\varphi _{1},\varphi _{2},\ldots ,\varphi _{n})} The term ∂ μ Φ {\displaystyle \partial _{\mu }\Phi } is the partial derivative of Φ {\displaystyle \Phi } along dimension μ {\displaystyle \mu } . It is now transparent that the Lagrangian is invariant under the transformation Φ ↦ Φ ′ = G Φ {\displaystyle \ \Phi \mapsto \Phi '=G\Phi } whenever G is a constant matrix belonging to the n-by-n orthogonal group O(n). This is seen to preserve the Lagrangian, since the derivative of Φ ′ {\displaystyle \Phi '} transforms identically to Φ {\displaystyle \Phi } and both quantities appear inside dot products in the Lagrangian (orthogonal transformations preserve the dot product). ( ∂ μ Φ ) ↦ ( ∂ μ Φ ) ′ = G ∂ μ Φ {\displaystyle \ (\partial _{\mu }\Phi )\mapsto (\partial _{\mu }\Phi )'=G\partial _{\mu }\Phi } This characterizes the global symmetry of this particular Lagrangian, and the symmetry group is often called the gauge group; the mathematical term is structure group, especially in the theory of G-structures. Incidentally, Noether's theorem implies that invariance under this group of transformations leads to the conservation of the currents J μ a = i ∂ μ Φ T T a Φ {\displaystyle \ J_{\mu }^{a}=i\partial _{\mu }\Phi ^{\mathsf {T}}T^{a}\Phi } where the Ta matrices are generators of the SO(n) group. There is one conserved current for every generator. Now, demanding that this Lagrangian should have local O(n)-invariance requires that the G matrices (which were earlier constant) should be allowed to become functions of the spacetime coordinates x. In this case, the G matrices do not "pass through" the derivatives, when G = G(x), ∂ μ ( G Φ ) ≠ G ( ∂ μ Φ ) {\displaystyle \ \partial _{\mu }(G\Phi )\neq G(\partial _{\mu }\Phi )} The failure of the derivative to commute with "G" introduces an additional term (in keeping with the product rule), which spoils the invariance of the Lagrangian. In order to rectify this we define a new derivative operator such that the derivative of Φ ′ {\displaystyle \Phi '} again transforms identically with Φ {\displaystyle \Phi } ( D μ Φ ) ′ = G D μ Φ {\displaystyle \ (D_{\mu }\Phi )'=GD_{\mu }\Phi } This new "derivative" is called a (gauge) covariant derivative and takes the form D μ = ∂ μ − i g A μ {\displaystyle \ D_{\mu }=\partial _{\mu }-igA_{\mu }} where g is called the coupling constant; a quantity defining the strength of an interaction. 
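The "additional term" mentioned above can be made explicit. When G depends on x, the product rule gives

\partial_\mu\bigl(G(x)\Phi\bigr) = G(x)\,\partial_\mu\Phi + \bigl(\partial_\mu G(x)\bigr)\Phi,

so the transformed kinetic term (\partial_\mu\Phi')^{\mathsf{T}}\partial^\mu\Phi' contains, besides the invariant piece (\partial_\mu\Phi)^{\mathsf{T}}\partial^\mu\Phi, extra contributions proportional to \partial_\mu G. It is exactly these contributions that the covariant derivative D_\mu introduced above is designed to absorb.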
After a simple calculation we can see that the gauge field A(x) must transform as follows A μ ′ = G A μ G − 1 − i g ( ∂ μ G ) G − 1 {\displaystyle \ A'_{\mu }=GA_{\mu }G^{-1}-{\frac {i}{g}}(\partial _{\mu }G)G^{-1}} The gauge field is an element of the Lie algebra, and can therefore be expanded as A μ = ∑ a A μ a T a {\displaystyle \ A_{\mu }=\sum _{a}A_{\mu }^{a}T^{a}} There are therefore as many gauge fields as there are generators of the Lie algebra. Finally, we now have a locally gauge invariant Lagrangian L l o c = 1 2 ( D μ Φ ) T D μ Φ − 1 2 m 2 Φ T Φ {\displaystyle \ {\mathcal {L}}_{\mathrm {loc} }={\frac {1}{2}}(D_{\mu }\Phi )^{\mathsf {T}}D^{\mu }\Phi -{\frac {1}{2}}m^{2}\Phi ^{\mathsf {T}}\Phi } Pauli uses the term gauge transformation of the first type to mean the transformation of Φ {\displaystyle \Phi } , while the compensating transformation in A {\displaystyle A} is called a gauge transformation of the second type. The difference between this Lagrangian and the original globally gauge-invariant Lagrangian is seen to be the interaction Lagrangian L i n t = i g 2 Φ T A μ T ∂ μ Φ + i g 2 ( ∂ μ Φ ) T A μ Φ − g 2 2 ( A μ Φ ) T A μ Φ {\displaystyle \ {\mathcal {L}}_{\mathrm {int} }=i{\frac {g}{2}}\Phi ^{\mathsf {T}}A_{\mu }^{\mathsf {T}}\partial ^{\mu }\Phi +i{\frac {g}{2}}(\partial _{\mu }\Phi )^{\mathsf {T}}A^{\mu }\Phi -{\frac {g^{2}}{2}}(A_{\mu }\Phi )^{\mathsf {T}}A^{\mu }\Phi } This term introduces interactions between the n scalar fields just as a consequence of the demand for local gauge invariance. However, to make this interaction physical and not completely arbitrary, the mediator A(x) needs to propagate in space. That is dealt with in the next section by adding yet another term, L g f {\displaystyle {\mathcal {L}}_{\mathrm {gf} }} , to the Lagrangian. In the quantized version of the obtained classical field theory, the quanta of the gauge field A(x) are called gauge bosons. The interpretation of the interaction Lagrangian in quantum field theory is of scalar bosons interacting by the exchange of these gauge bosons. === Yang–Mills Lagrangian for the gauge field === The picture of a classical gauge theory developed in the previous section is almost complete, except for the fact that to define the covariant derivatives D, one needs to know the value of the gauge field A ( x ) {\displaystyle A(x)} at all spacetime points. Instead of manually specifying the values of this field, it can be given as the solution to a field equation. Further requiring that the Lagrangian that generates this field equation is locally gauge invariant as well, one possible form for the gauge field Lagrangian is L gf = − 1 2 tr ⁡ ( F μ ν F μ ν ) = − 1 4 F a μ ν F μ ν a {\displaystyle {\mathcal {L}}_{\text{gf}}=-{\frac {1}{2}}\operatorname {tr} \left(F^{\mu \nu }F_{\mu \nu }\right)=-{\frac {1}{4}}F^{a\mu \nu }F_{\mu \nu }^{a}} where the F μ ν a {\displaystyle F_{\mu \nu }^{a}} are obtained from potentials A μ a {\displaystyle A_{\mu }^{a}} , being the components of A ( x ) {\displaystyle A(x)} , by F μ ν a = ∂ μ A ν a − ∂ ν A μ a + g ∑ b , c f a b c A μ b A ν c {\displaystyle F_{\mu \nu }^{a}=\partial _{\mu }A_{\nu }^{a}-\partial _{\nu }A_{\mu }^{a}+g\sum _{b,c}f^{abc}A_{\mu }^{b}A_{\nu }^{c}} and the f a b c {\displaystyle f^{abc}} are the structure constants of the Lie algebra of the generators of the gauge group. This formulation of the Lagrangian is called a Yang–Mills action. 
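As a quick consistency check, the abelian case reduces to Maxwell theory: for a one-dimensional gauge group such as U(1) there is a single generator, the structure constants f^{abc} vanish, and the field strength collapses to

F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu,

so that \mathcal{L}_{\text{gf}} = -\tfrac{1}{4}F_{\mu\nu}F^{\mu\nu} is, up to unit conventions, the Lagrangian of the free electromagnetic field. The term quadratic in the gauge field inside F_{\mu\nu}^{a}, which is responsible for gauge-boson self-interaction in the non-abelian case, is absent.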
Other gauge invariant actions also exist (e.g., nonlinear electrodynamics, Born–Infeld action, Chern–Simons model, theta term, etc.). In this Lagrangian term there is no field whose transformation counterweighs the one of A {\displaystyle A} . Invariance of this term under gauge transformations is a particular case of a priori classical (geometrical) symmetry. This symmetry must be restricted in order to perform quantization, the procedure being denominated gauge fixing, but even after restriction, gauge transformations may be possible. The complete Lagrangian for the gauge theory is now L = L loc + L gf = L global + L int + L gf {\displaystyle {\mathcal {L}}={\mathcal {L}}_{\text{loc}}+{\mathcal {L}}_{\text{gf}}={\mathcal {L}}_{\text{global}}+{\mathcal {L}}_{\text{int}}+{\mathcal {L}}_{\text{gf}}} === Example: electrodynamics === As a simple application of the formalism developed in the previous sections, consider the case of electrodynamics, with only the electron field. The bare-bones action that generates the electron field's Dirac equation is S = ∫ ψ ¯ ( i ℏ c γ μ ∂ μ − m c 2 ) ψ d 4 x {\displaystyle {\mathcal {S}}=\int {\bar {\psi }}\left(i\hbar c\,\gamma ^{\mu }\partial _{\mu }-mc^{2}\right)\psi \,\mathrm {d} ^{4}x} The global symmetry for this system is ψ ↦ e i θ ψ {\displaystyle \psi \mapsto e^{i\theta }\psi } The gauge group here is U(1), just rotations of the phase angle of the field, with the particular rotation determined by the constant θ. "Localising" this symmetry implies the replacement of θ by θ(x). An appropriate covariant derivative is then D μ = ∂ μ − i e ℏ A μ {\displaystyle D_{\mu }=\partial _{\mu }-i{\frac {e}{\hbar }}A_{\mu }} Identifying the "charge" e (not to be confused with the mathematical constant e in the symmetry description) with the usual electric charge (this is the origin of the usage of the term in gauge theories), and the gauge field A(x) with the four-vector potential of the electromagnetic field results in an interaction Lagrangian L int = e ℏ ψ ¯ ( x ) γ μ ψ ( x ) A μ ( x ) = J μ ( x ) A μ ( x ) {\displaystyle {\mathcal {L}}_{\text{int}}={\frac {e}{\hbar }}{\bar {\psi }}(x)\gamma ^{\mu }\psi (x)A_{\mu }(x)=J^{\mu }(x)A_{\mu }(x)} where J μ ( x ) = e ℏ ψ ¯ ( x ) γ μ ψ ( x ) {\displaystyle J^{\mu }(x)={\frac {e}{\hbar }}{\bar {\psi }}(x)\gamma ^{\mu }\psi (x)} is the electric current four vector in the Dirac field. The gauge principle is therefore seen to naturally introduce the so-called minimal coupling of the electromagnetic field to the electron field. Adding a Lagrangian for the gauge field A μ ( x ) {\displaystyle A_{\mu }(x)} in terms of the field strength tensor exactly as in electrodynamics, one obtains the Lagrangian used as the starting point in quantum electrodynamics. L QED = ψ ¯ ( i ℏ c γ μ D μ − m c 2 ) ψ − 1 4 μ 0 F μ ν F μ ν {\displaystyle {\mathcal {L}}_{\text{QED}}={\bar {\psi }}\left(i\hbar c\,\gamma ^{\mu }D_{\mu }-mc^{2}\right)\psi -{\frac {1}{4\mu _{0}}}F_{\mu \nu }F^{\mu \nu }} == Mathematical formalism == Gauge theories are usually discussed in the language of differential geometry. Mathematically, a gauge is just a choice of a (local) section of some principal bundle. A gauge transformation is just a transformation between two such sections. Although gauge theory is dominated by the study of connections (primarily because it's mainly studied by high-energy physicists), the idea of a connection is not central to gauge theory in general. 
In fact, a result in general gauge theory shows that affine representations (i.e., affine modules) of the gauge transformations can be classified as sections of a jet bundle satisfying certain properties. There are representations that transform covariantly pointwise (called by physicists gauge transformations of the first kind), representations that transform as a connection form (called by physicists gauge transformations of the second kind, an affine representation)—and other more general representations, such as the B field in BF theory. There are more general nonlinear representations (realizations), but these are extremely complicated. Still, nonlinear sigma models transform nonlinearly, so there are applications. If there is a principal bundle P whose base space is space or spacetime and structure group is a Lie group, then the sections of P form a principal homogeneous space of the group of gauge transformations. Connections (gauge connection) define this principal bundle, yielding a covariant derivative ∇ in each associated vector bundle. If a local frame is chosen (a local basis of sections), then this covariant derivative is represented by the connection form A, a Lie algebra-valued 1-form, which is called the gauge potential in physics. This is evidently not an intrinsic but a frame-dependent quantity. The curvature form F, a Lie algebra-valued 2-form that is an intrinsic quantity, is constructed from a connection form by F = d A + A ∧ A {\displaystyle \mathbf {F} =\mathrm {d} \mathbf {A} +\mathbf {A} \wedge \mathbf {A} } where d stands for the exterior derivative and ∧ {\displaystyle \wedge } stands for the wedge product. ( A {\displaystyle \mathbf {A} } is an element of the vector space spanned by the generators T a {\displaystyle T^{a}} , and so the components of A {\displaystyle \mathbf {A} } do not commute with one another. Hence the wedge product A ∧ A {\displaystyle \mathbf {A} \wedge \mathbf {A} } does not vanish.) Infinitesimal gauge transformations form a Lie algebra, which is characterized by a smooth Lie-algebra-valued scalar, ε. Under such an infinitesimal gauge transformation, δ ε A = [ ε , A ] − d ε {\displaystyle \delta _{\varepsilon }\mathbf {A} =[\varepsilon ,\mathbf {A} ]-\mathrm {d} \varepsilon } where [ ⋅ , ⋅ ] {\displaystyle [\cdot ,\cdot ]} is the Lie bracket. One nice thing is that if δ ε X = ε X {\displaystyle \delta _{\varepsilon }X=\varepsilon X} , then δ ε D X = ε D X {\displaystyle \delta _{\varepsilon }DX=\varepsilon DX} where D is the covariant derivative D X = d e f d X + A X {\displaystyle DX\ {\stackrel {\mathrm {def} }{=}}\ \mathrm {d} X+\mathbf {A} X} Also, δ ε F = [ ε , F ] {\displaystyle \delta _{\varepsilon }\mathbf {F} =[\varepsilon ,\mathbf {F} ]} , which means F {\displaystyle \mathbf {F} } transforms covariantly. Not all gauge transformations can be generated by infinitesimal gauge transformations in general. An example is when the base manifold is a compact manifold without boundary such that the homotopy class of mappings from that manifold to the Lie group is nontrivial. See instanton for an example. The Yang–Mills action is now given by 1 4 g 2 ∫ Tr ⁡ [ ⋆ F ∧ F ] {\displaystyle {\frac {1}{4g^{2}}}\int \operatorname {Tr} [{\star }F\wedge F]} where ⋆ {\displaystyle {\star }} is the Hodge star operator and the integral is defined as in differential geometry. 
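A short calculation shows that the curvature form defined above automatically satisfies a Bianchi identity. Using d^2 = 0 and, for a Lie-algebra-valued 1-form, \mathrm{d}(\mathbf{A}\wedge\mathbf{A}) = \mathrm{d}\mathbf{A}\wedge\mathbf{A} - \mathbf{A}\wedge\mathrm{d}\mathbf{A}, one finds

\mathrm{d}\mathbf{F} = \mathrm{d}\mathbf{A}\wedge\mathbf{A} - \mathbf{A}\wedge\mathrm{d}\mathbf{A} = (\mathbf{F} - \mathbf{A}\wedge\mathbf{A})\wedge\mathbf{A} - \mathbf{A}\wedge(\mathbf{F} - \mathbf{A}\wedge\mathbf{A}) = \mathbf{F}\wedge\mathbf{A} - \mathbf{A}\wedge\mathbf{F},

so that \mathrm{d}\mathbf{F} + \mathbf{A}\wedge\mathbf{F} - \mathbf{F}\wedge\mathbf{A} = 0. This is the statement D\mathbf{F} = 0 for the covariant exterior derivative, the non-abelian analogue of the homogeneous Maxwell equations.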
A quantity which is gauge-invariant (i.e., invariant under gauge transformations) is the Wilson loop, which is defined over any closed path, γ, as follows: χ ( ρ ) ( P { e ∫ γ A } ) {\displaystyle \chi ^{(\rho )}\left({\mathcal {P}}\left\{e^{\int _{\gamma }A}\right\}\right)} where χ is the character of a complex representation ρ and P {\displaystyle {\mathcal {P}}} represents the path-ordered operator. The formalism of gauge theory carries over to a general setting. For example, it is sufficient to ask that a vector bundle have a metric connection; when one does so, one finds that the metric connection satisfies the Yang–Mills equations of motion. == Quantization of gauge theories == Gauge theories may be quantized by specialization of methods which are applicable to any quantum field theory. However, because of the subtleties imposed by the gauge constraints (see section on Mathematical formalism, above), there are many technical problems to be solved which do not arise in other field theories. At the same time, the richer structure of gauge theories allows simplification of some computations: for example, Ward identities connect different renormalization constants. === Methods and aims === The first gauge theory quantized was quantum electrodynamics (QED). The first methods developed for this involved gauge fixing and then applying canonical quantization. The Gupta–Bleuler method was also developed to handle this problem. Non-abelian gauge theories are now handled by a variety of means. Methods for quantization are covered in the article on quantization. The main point of quantization is to be able to compute quantum amplitudes for various processes allowed by the theory. Technically, these reduce to the computation of certain correlation functions in the vacuum state. This involves a renormalization of the theory. When the running coupling of the theory is small enough, then all required quantities may be computed in perturbation theory. Quantization schemes intended to simplify such computations (such as canonical quantization) may be called perturbative quantization schemes. At present some of these methods lead to the most precise experimental tests of gauge theories. However, in most gauge theories, there are many interesting questions which are non-perturbative. Quantization schemes suited to these problems (such as lattice gauge theory) may be called non-perturbative quantization schemes. Precise computations in such schemes often require supercomputing, and are therefore less well-developed currently than other schemes. === Anomalies === Some of the symmetries of the classical theory are then seen not to hold in the quantum theory, a phenomenon called an anomaly. Among the most well known are: the scale anomaly, which gives rise to a running coupling constant (in QED this gives rise to the phenomenon of the Landau pole, while in quantum chromodynamics (QCD) it leads to asymptotic freedom); the chiral anomaly in either chiral or vector field theories with fermions, which has a close connection with topology through the notion of instantons (in QCD this anomaly causes the decay of a pion to two photons); and the gauge anomaly, which must cancel in any consistent physical theory (in the electroweak theory this cancellation requires an equal number of quarks and leptons). == Pure gauge == A pure gauge is the set of field configurations obtained by a gauge transformation on the null-field configuration, i.e., a gauge transform of zero.
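As a concrete toy illustration of the Wilson loop and of the lattice (non-perturbative) formulation mentioned above, the following minimal sketch builds a random U(1) gauge configuration on a small two-dimensional periodic lattice and checks numerically that the elementary Wilson loop, the plaquette, is unchanged by an arbitrary gauge transformation. The lattice size and variable names are arbitrary choices for the example; this demonstrates gauge invariance of the observable, and is not a simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 4  # small periodic L x L lattice

# U(1) link variables U_mu(x) = exp(i*theta), one per site and direction.
theta = rng.uniform(0, 2 * np.pi, size=(2, L, L))
U = np.exp(1j * theta)  # U[mu, x, y], mu = 0 (x-direction), 1 (y-direction)

def plaquette(U):
    """Elementary Wilson loop U_x(x) U_y(x+xhat) U_x(x+yhat)* U_y(x)* at every site."""
    Ux, Uy = U[0], U[1]
    return (Ux
            * np.roll(Uy, -1, axis=0)           # U_y at x + xhat
            * np.conj(np.roll(Ux, -1, axis=1))  # U_x at x + yhat, traversed backwards
            * np.conj(Uy))                      # U_y at x, traversed backwards

# Random gauge transformation g(x) = exp(i*alpha(x)):
# U_mu(x) -> g(x) U_mu(x) g(x + muhat)*
alpha = rng.uniform(0, 2 * np.pi, size=(L, L))
g = np.exp(1j * alpha)
U_t = np.empty_like(U)
U_t[0] = g * U[0] * np.conj(np.roll(g, -1, axis=0))
U_t[1] = g * U[1] * np.conj(np.roll(g, -1, axis=1))

# The plaquette (and hence any closed Wilson loop) is gauge invariant.
assert np.allclose(plaquette(U), plaquette(U_t))
print("average plaquette:", plaquette(U).mean().real)
```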
Such a pure gauge is a particular "gauge orbit" in the space of field configurations. In the abelian case, where A μ ( x ) → A μ ′ ( x ) = A μ ( x ) + ∂ μ f ( x ) {\displaystyle A_{\mu }(x)\rightarrow A'_{\mu }(x)=A_{\mu }(x)+\partial _{\mu }f(x)} , the pure gauge is just the set of field configurations A μ ′ ( x ) = ∂ μ f ( x ) {\displaystyle A'_{\mu }(x)=\partial _{\mu }f(x)} for all f(x). == See also == == References == == Bibliography == General readers Schumm, Bruce (2004). Deep Down Things. Johns Hopkins University Press. Esp. chap. 8. A serious attempt by a physicist to explain gauge theory and the Standard Model with little formal mathematics. Carroll, Sean (2024). The Biggest Ideas in the Universe: Quanta and Fields. Dutton. pp. 193–234 (chap. 9: Gauge Theory; chap. 10: Phases). ISBN 978-0-5931-8660-2. Texts Bailin, David; Love, Alexander (2019). Introduction to Gauge Field Theory. Taylor & Francis. ISBN 9780203750100. Cheng, T.-P.; Li, L.-F. (1983). Gauge Theory of Elementary Particle Physics. Oxford University Press. ISBN 0-19-851961-3. Frampton, P. (2008). Gauge Field Theories (3rd ed.). Wiley-VCH. Kane, G.L. (1987). Modern Elementary Particle Physics. Perseus Books. ISBN 0-201-11749-5. Quigg, Chris (1983). Gauge Theories of the Strong, Weak and Electromagnetic Interactions. Addison-Wesley. ISBN 0-8053-6021-2. Articles Becchi, C. (1997). "Introduction to Gauge Theories". arXiv:hep-ph/9705211. Gross, D. (1992). "Gauge theory – Past, Present and Future". Retrieved 2009-04-23. Jackson, J.D. (2002). "From Lorenz to Coulomb and other explicit gauge transformations". Am. J. Phys. 70 (9): 917–928. arXiv:physics/0204034. Bibcode:2002AmJPh..70..917J. doi:10.1119/1.1491265. S2CID 119652556. Svetlichny, George (1999). "Preparation for Gauge Theory". arXiv:math-ph/9902027. == External links == "Gauge transformation", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Yang–Mills equations on DispersiveWiki Gauge theories on Scholarpedia
Wikipedia/Quantum_gauge_theory
Chiral perturbation theory (ChPT) is an effective field theory constructed with a Lagrangian consistent with the (approximate) chiral symmetry of quantum chromodynamics (QCD), as well as the other symmetries of parity and charge conjugation. ChPT is a theory which allows one to study the low-energy dynamics of QCD on the basis of this underlying chiral symmetry. == Goals == In the theory of the strong interaction of the Standard Model, we describe the interactions between quarks and gluons. Due to the running of the strong coupling constant, we can apply perturbation theory in the coupling constant only at high energies. But in the low-energy regime of QCD, the degrees of freedom are no longer quarks and gluons, but rather hadrons. This is a result of confinement. If one could "solve" the QCD partition function (such that the degrees of freedom in the Lagrangian are replaced by hadrons), then one could extract information about low-energy physics. To date this has not been accomplished. Because QCD becomes non-perturbative at low energy, it is impossible to use perturbative methods to extract information from the partition function of QCD. Lattice QCD is an alternative method that has proved successful in extracting non-perturbative information. == Method == When using different degrees of freedom, we have to ensure that observables calculated in the EFT are related to those of the underlying theory. This is achieved by using the most general Lagrangian that is consistent with the symmetries of the underlying theory, as this yields the "most general possible S-matrix consistent with analyticity, perturbative unitarity, cluster decomposition and the assumed symmetry". In general there is an infinite number of terms which meet this requirement. Therefore, in order to make any physical predictions, one assigns to the theory a power-ordering scheme which organizes terms by some pre-determined degree of importance. The ordering allows one to keep some terms and omit all other, higher-order corrections, which can safely be ignored temporarily. There are several power counting schemes in ChPT. The most widely used one is the p {\displaystyle p} -expansion, where p {\displaystyle p} stands for momentum. However, there also exist the ϵ {\displaystyle \epsilon } , δ , {\displaystyle \delta ,} and ϵ ′ {\displaystyle \epsilon ^{\prime }} expansions. All of these expansions are valid in finite volume (though the p {\displaystyle p} -expansion is the only one valid in infinite volume). Particular choices of finite volumes require one to use different reorganizations of the chiral theory in order to correctly understand the physics. These different reorganizations correspond to the different power counting schemes. In addition to the ordering scheme, most terms in the approximate Lagrangian will be multiplied by coupling constants which represent the relative strengths of the force represented by each term. Values of these constants – also called low-energy constants or Ls – are usually not known. The constants can be determined by fitting to experimental data or be derived from underlying theory. === The model Lagrangian === The Lagrangian of the p {\displaystyle p} -expansion is constructed by writing down all interactions which are not excluded by symmetry, and then ordering them based on the number of momentum and mass powers.
The order is chosen so that ( ∂ π ) 2 + m π 2 π 2 {\displaystyle (\partial \pi )^{2}+m_{\pi }^{2}\pi ^{2}} is considered in the first-order approximation, where π {\displaystyle \pi } is the pion field and m π {\displaystyle m_{\pi }} the pion mass, which breaks the underlying chiral symmetry explicitly (PCAC). Terms like m π 4 π 2 + ( ∂ π ) 6 {\displaystyle m_{\pi }^{4}\pi ^{2}+(\partial \pi )^{6}} are part of other, higher-order corrections. It is also customary to compress the Lagrangian by replacing the single pion fields in each term with an infinite series of all possible combinations of pion fields. One of the most common choices is U = exp ⁡ { i F ( π 0 2 π + 2 π − − π 0 ) } {\displaystyle U=\exp \left\{{\frac {i}{F}}{\begin{pmatrix}\pi ^{0}&{\sqrt {2}}\pi ^{+}\\{\sqrt {2}}\pi ^{-}&-\pi ^{0}\end{pmatrix}}\right\}} where F {\displaystyle F} is called the pion decay constant, which is 93 MeV. In general, different choices of the normalization for F {\displaystyle F} exist, so that one must choose the value that is consistent with the charged pion decay rate. === Renormalization === The effective theory in general is non-renormalizable; however, given a particular power counting scheme in ChPT, the effective theory is renormalizable at a given order in the chiral expansion. For example, if one wishes to compute an observable to O ( p 4 ) {\displaystyle {\mathcal {O}}(p^{4})} , then one must compute the contact terms that come from the O ( p 4 ) {\displaystyle {\mathcal {O}}(p^{4})} Lagrangian (this is different for an SU(2) vs. SU(3) theory) at tree-level and the one-loop contributions from the O ( p 2 ) {\displaystyle {\mathcal {O}}(p^{2})} Lagrangian. One can easily see that a one-loop contribution from the O ( p 2 ) {\displaystyle {\mathcal {O}}(p^{2})} Lagrangian counts as O ( p 4 ) {\displaystyle {\mathcal {O}}(p^{4})} by noting that the integration measure counts as p 4 {\displaystyle p^{4}} , the propagator counts as p − 2 {\displaystyle p^{-2}} , while the derivative contributions count as p 2 {\displaystyle p^{2}} . Therefore, since the calculation is valid to O ( p 4 ) {\displaystyle {\mathcal {O}}(p^{4})} , one removes the divergences in the calculation with the renormalization of the low-energy constants (LECs) from the O ( p 4 ) {\displaystyle {\mathcal {O}}(p^{4})} Lagrangian. So if one wishes to remove all the divergences in the computation of a given observable to O ( p n ) {\displaystyle {\mathcal {O}}(p^{n})} , one uses the coupling constants in the expression for the O ( p n ) {\displaystyle {\mathcal {O}}(p^{n})} Lagrangian to remove those divergences. == Successful application == === Mesons and nucleons === The theory allows the description of interactions between pions, and between pions and nucleons (or other matter fields). SU(3) ChPT can also describe interactions of kaons and eta mesons, while similar theories can be used to describe the vector mesons. Since chiral perturbation theory assumes chiral symmetry, and therefore massless quarks, it cannot be used to model interactions of the heavier quarks. For an SU(2) theory the leading-order chiral Lagrangian is given by L 2 = F 2 4 t r ( ∂ μ U ∂ μ U † ) + λ F 3 4 t r ( m q U + m q † U † ) {\displaystyle {\mathcal {L}}_{2}={\frac {F^{2}}{4}}{\rm {tr}}(\partial _{\mu }U\partial ^{\mu }U^{\dagger })+{\frac {\lambda F^{3}}{4}}{\rm {tr}}(m_{q}U+m_{q}^{\dagger }U^{\dagger })} where F = 93 {\displaystyle F=93} MeV and m q {\displaystyle m_{q}} is the quark mass matrix.
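To see how this Lagrangian reproduces ordinary pion physics, one can expand U = \exp(i\Pi/F), with \Pi the pion matrix displayed above, to quadratic order in the pion fields. This is a sketch using the normalizations as written here; conventions for F and \lambda differ between references:

\frac{F^{2}}{4}\,\mathrm{tr}\!\left(\partial_\mu U\,\partial^\mu U^{\dagger}\right) = \frac{1}{2}\,\partial_\mu\pi^{0}\partial^\mu\pi^{0} + \partial_\mu\pi^{+}\partial^\mu\pi^{-} + \cdots

\frac{\lambda F^{3}}{4}\,\mathrm{tr}\!\left(m_{q}U + m_{q}^{\dagger}U^{\dagger}\right) = \text{const} - \frac{\lambda F\,(m_{u}+m_{d})}{4}\left((\pi^{0})^{2} + 2\pi^{+}\pi^{-}\right) + \cdots

Reading off the coefficients, the pions acquire canonical kinetic terms and a common squared mass m_{\pi}^{2} = \tfrac{1}{2}\lambda F (m_{u}+m_{d}), which in the isospin limit m_{u} = m_{d} = m_{q} reduces to m_{\pi}^{2} = \lambda\, m_{q} F, the leading-order statement that the squared pion mass is linear in the quark mass.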
In the p {\displaystyle p} -expansion of ChPT, the small expansion parameters are p Λ χ , m π Λ χ . {\displaystyle {\frac {p}{\Lambda _{\chi }}},{\frac {m_{\pi }}{\Lambda _{\chi }}}.} where Λ χ {\displaystyle \Lambda _{\chi }} is the chiral symmetry breaking scale, of order 1 GeV (sometimes estimated as Λ χ = 4 π F {\displaystyle \Lambda _{\chi }=4\pi F} ). In this expansion, m q {\displaystyle m_{q}} counts as O ( p 2 ) {\displaystyle {\mathcal {O}}(p^{2})} because m π 2 = λ m q F {\displaystyle m_{\pi }^{2}=\lambda m_{q}F} to leading order in the chiral expansion. === Hadron-hadron interactions === In some cases, chiral perturbation theory has been successful in describing the interactions between hadrons in the non-perturbative regime of the strong interaction. For instance, it can be applied to few-nucleon systems, and at next-to-next-to-leading order in the perturbative expansion, it can account for three-nucleon forces in a natural way. == References == == External links == Howard Georgi, Weak Interactions and Modern Particle Theory, Benjamin Cummings, 1984; revised version 2008 H Leutwyler, On the foundations of chiral perturbation theory, Annals of Physics, 235, 1994, p 165-203. Stefan Scherer, Introduction to Chiral Perturbation Theory, Adv. Nucl. Phys. 27 (2003) 277. Gerhard Ecker, Chiral perturbation theory, Prog. Part. Nucl. Phys. 35 (1995), pp. 1–80.
Wikipedia/Chiral_perturbation_theory
In physics, a gauge theory is a type of field theory in which the Lagrangian, and hence the dynamics of the system itself, does not change under local transformations according to certain smooth families of operations (Lie groups). Formally, the Lagrangian is invariant under these transformations. The term "gauge" refers to any specific mathematical formalism to regulate redundant degrees of freedom in the Lagrangian of a physical system. The transformations between possible gauges, called gauge transformations, form a Lie group—referred to as the symmetry group or the gauge group of the theory. Associated with any Lie group is the Lie algebra of group generators. For each group generator there necessarily arises a corresponding field (usually a vector field) called the gauge field. Gauge fields are included in the Lagrangian to ensure its invariance under the local group transformations (called gauge invariance). When such a theory is quantized, the quanta of the gauge fields are called gauge bosons. If the symmetry group is non-commutative, then the gauge theory is referred to as non-abelian gauge theory, the usual example being the Yang–Mills theory. Many powerful theories in physics are described by Lagrangians that are invariant under some symmetry transformation groups. When they are invariant under a transformation identically performed at every point in the spacetime in which the physical processes occur, they are said to have a global symmetry. Local symmetry, the cornerstone of gauge theories, is a stronger constraint. In fact, a global symmetry is just a local symmetry whose group's parameters are fixed in spacetime (the same way a constant value can be understood as a function of a certain parameter, the output of which is always the same). Gauge theories are important as the successful field theories explaining the dynamics of elementary particles. Quantum electrodynamics is an abelian gauge theory with the symmetry group U(1) and has one gauge field, the electromagnetic four-potential, with the photon being the gauge boson. The Standard Model is a non-abelian gauge theory with the symmetry group U(1) × SU(2) × SU(3) and has a total of twelve gauge bosons: the photon, three weak bosons and eight gluons. Gauge theories are also important in explaining gravitation in the theory of general relativity. Its case is somewhat unusual in that the gauge field is a tensor, the Lanczos tensor. Theories of quantum gravity, beginning with gauge gravitation theory, also postulate the existence of a gauge boson known as the graviton. Gauge symmetries can be viewed as analogues of the principle of general covariance of general relativity in which the coordinate system can be chosen freely under arbitrary diffeomorphisms of spacetime. Both gauge invariance and diffeomorphism invariance reflect a redundancy in the description of the system. An alternative theory of gravitation, gauge theory gravity, replaces the principle of general covariance with a true gauge principle with new gauge fields. Historically, these ideas were first stated in the context of classical electromagnetism and later in general relativity. However, the modern importance of gauge symmetries appeared first in the relativistic quantum mechanics of electrons – quantum electrodynamics, elaborated on below. Today, gauge theories are useful in condensed matter, nuclear and high energy physics among other subfields. == History == The concept and the name of gauge theory derives from the work of Hermann Weyl in 1918. 
Weyl, in an attempt to generalize the geometrical ideas of general relativity to include electromagnetism, conjectured that Eichinvarianz or invariance under the change of scale (or "gauge") might also be a local symmetry of general relativity. After the development of quantum mechanics, Weyl, Vladimir Fock and Fritz London replaced the simple scale factor with a complex quantity and turned the scale transformation into a change of phase, which is a U(1) gauge symmetry. This explained the electromagnetic field effect on the wave function of a charged quantum mechanical particle. Weyl's 1929 paper introduced the modern concept of gauge invariance subsequently popularized by Wolfgang Pauli in his 1941 review. In retrospect, James Clerk Maxwell's formulation, in 1864–65, of electrodynamics in "A Dynamical Theory of the Electromagnetic Field" suggested the possibility of invariance, when he stated that any vector field whose curl vanishes—and can therefore normally be written as a gradient of a function—could be added to the vector potential without affecting the magnetic field. Similarly unnoticed, David Hilbert had derived the Einstein field equations by postulating the invariance of the action under a general coordinate transformation. The importance of these symmetry invariances remained unnoticed until Weyl's work. Inspired by Pauli's descriptions of connection between charge conservation and field theory driven by invariance, Chen Ning Yang sought a field theory for atomic nuclei binding based on conservation of nuclear isospin.: 202  In 1954, Yang and Robert Mills generalized the gauge invariance of electromagnetism, constructing a theory based on the action of the (non-abelian) SU(2) symmetry group on the isospin doublet of protons and neutrons. This is similar to the action of the U(1) group on the spinor fields of quantum electrodynamics. The Yang–Mills theory became the prototype theory to resolve some of the confusion in elementary particle physics. This idea later found application in the quantum field theory of the weak force, and its unification with electromagnetism in the electroweak theory. Gauge theories became even more attractive when it was realized that non-abelian gauge theories reproduced a feature called asymptotic freedom. Asymptotic freedom was believed to be an important characteristic of strong interactions. This motivated searching for a strong force gauge theory. This theory, now known as quantum chromodynamics, is a gauge theory with the action of the SU(3) group on the color triplet of quarks. The Standard Model unifies the description of electromagnetism, weak interactions and strong interactions in the language of gauge theory. In the 1970s, Michael Atiyah began studying the mathematics of solutions to the classical Yang–Mills equations. In 1983, Atiyah's student Simon Donaldson built on this work to show that the differentiable classification of smooth 4-manifolds is very different from their classification up to homeomorphism. Michael Freedman used Donaldson's work to exhibit exotic R4s, that is, exotic differentiable structures on Euclidean 4-dimensional space. This led to an increasing interest in gauge theory for its own sake, independent of its successes in fundamental physics. In 1994, Edward Witten and Nathan Seiberg invented gauge-theoretic techniques based on supersymmetry that enabled the calculation of certain topological invariants (the Seiberg–Witten invariants). 
These contributions to mathematics from gauge theory have led to a renewed interest in this area. The importance of gauge theories in physics is exemplified in the success of the mathematical formalism in providing a unified framework to describe the quantum field theories of electromagnetism, the weak force and the strong force. This theory, known as the Standard Model, accurately describes experimental predictions regarding three of the four fundamental forces of nature, and is a gauge theory with the gauge group SU(3) × SU(2) × U(1). Modern theories like string theory, as well as general relativity, are, in one way or another, gauge theories. See Jackson and Okun for the early history of gauge invariance, and Pickering for more about the history of gauge and quantum field theories. == Description == === Global and local symmetries === ==== Global symmetry ==== In physics, the mathematical description of any physical situation usually contains excess degrees of freedom; the same physical situation is equally well described by many equivalent mathematical configurations. For instance, in Newtonian dynamics, if two configurations are related by a Galilean transformation (an inertial change of reference frame), they represent the same physical situation. These transformations form a group of "symmetries" of the theory, and a physical situation corresponds not to an individual mathematical configuration but to a class of configurations related to one another by this symmetry group. This idea can be generalized to include local as well as global symmetries, analogous to much more abstract "changes of coordinates" in a situation where there is no preferred "inertial" coordinate system that covers the entire physical system. A gauge theory is a mathematical model that has symmetries of this kind, together with a set of techniques for making physical predictions consistent with the symmetries of the model. ==== Example of global symmetry ==== When a quantity occurring in the mathematical configuration is not just a number but has some geometrical significance, such as a velocity or an axis of rotation, its representation as numbers arranged in a vector or matrix is also changed by a coordinate transformation. For instance, if one description of a pattern of fluid flow states that the fluid velocity in the neighborhood of (x = 1, y = 0) is 1 m/s in the positive x direction, then a description of the same situation in which the coordinate system has been rotated counterclockwise by 90 degrees states that the fluid velocity in the neighborhood of (x = 0, y = −1) is 1 m/s in the negative y direction. The coordinate transformation has affected both the coordinate system used to identify the location of the measurement and the basis in which its value is expressed. As long as this transformation is performed globally (affecting the coordinate basis in the same way at every point), the effect on values that represent the rate of change of some quantity along some path in space and time as it passes through point P is the same as the effect on values that are truly local to P. ==== Local symmetry ==== ===== Use of fiber bundles to describe local symmetries ===== In order to adequately describe physical situations in more complex theories, it is often necessary to introduce a "coordinate basis" for some of the objects of the theory that do not have this simple relationship to the coordinates used to label points in space and time.
(In mathematical terms, the theory involves a fiber bundle in which the fiber at each point of the base space consists of possible coordinate bases for use when describing the values of objects at that point.) In order to spell out a mathematical configuration, one must choose a particular coordinate basis at each point (a local section of the fiber bundle) and express the values of the objects of the theory (usually "fields" in the physicist's sense) using this basis. Two such mathematical configurations are equivalent (describe the same physical situation) if they are related by a transformation of this abstract coordinate basis (a change of local section, or gauge transformation). In most gauge theories, the set of possible transformations of the abstract gauge basis at an individual point in space and time is a finite-dimensional Lie group. The simplest such group is U(1), which appears in the modern formulation of quantum electrodynamics (QED) via its use of complex numbers. QED is generally regarded as the first, and simplest, physical gauge theory. The set of possible gauge transformations of the entire configuration of a given gauge theory also forms a group, the gauge group of the theory. An element of the gauge group can be parameterized by a smoothly varying function from the points of spacetime to the (finite-dimensional) Lie group, such that the value of the function and its derivatives at each point represents the action of the gauge transformation on the fiber over that point. A gauge transformation with constant parameter at every point in space and time is analogous to a rigid rotation of the geometric coordinate system; it represents a global symmetry of the gauge representation. As in the case of a rigid rotation, this gauge transformation affects expressions that represent the rate of change along a path of some gauge-dependent quantity in the same way as those that represent a truly local quantity. A gauge transformation whose parameter is not a constant function is referred to as a local symmetry; its effect on expressions that involve a derivative is qualitatively different from that on expressions that do not. (This is analogous to a non-inertial change of reference frame, which can produce a Coriolis effect.) === Gauge fields === The "gauge covariant" version of a gauge theory accounts for this effect by introducing a gauge field (in mathematical language, an Ehresmann connection) and formulating all rates of change in terms of the covariant derivative with respect to this connection. The gauge field becomes an essential part of the description of a mathematical configuration. A configuration in which the gauge field can be eliminated by a gauge transformation has the property that its field strength (in mathematical language, its curvature) is zero everywhere; a gauge theory is not limited to these configurations. In other words, the distinguishing characteristic of a gauge theory is that the gauge field does not merely compensate for a poor choice of coordinate system; there is generally no gauge transformation that makes the gauge field vanish. When analyzing the dynamics of a gauge theory, the gauge field must be treated as a dynamical variable, similar to other objects in the description of a physical situation. In addition to its interaction with other objects via the covariant derivative, the gauge field typically contributes energy in the form of a "self-energy" term. 
One can obtain the equations for the gauge theory by: starting from a naïve ansatz without the gauge field (in which the derivatives appear in a "bare" form); listing those global symmetries of the theory that can be characterized by a continuous parameter (generally an abstract equivalent of a rotation angle); computing the correction terms that result from allowing the symmetry parameter to vary from place to place; and reinterpreting these correction terms as couplings to one or more gauge fields, and giving these fields appropriate self-energy terms and dynamical behavior. This is the sense in which a gauge theory "extends" a global symmetry to a local symmetry, and closely resembles the historical development of the gauge theory of gravity known as general relativity. === Physical experiments === Gauge theories used to model the results of physical experiments proceed by: limiting the universe of possible configurations to those consistent with the information used to set up the experiment, and then computing the probability distribution of the possible outcomes that the experiment is designed to measure. We cannot express the mathematical descriptions of the "setup information" and the "possible measurement outcomes", or the "boundary conditions" of the experiment, without reference to a particular coordinate system, including a choice of gauge. The assumption that an experiment is adequately isolated from "external" influence is itself a gauge-dependent statement. Mishandling the gauge dependence of calculations involving boundary conditions is a frequent source of anomalies, and approaches to anomaly avoidance classify gauge theories. === Continuum theories === The two gauge theories mentioned above, continuum electrodynamics and general relativity, are continuum field theories. The techniques of calculation in a continuum theory implicitly assume that: (1) given a completely fixed choice of gauge, the boundary conditions of an individual configuration are completely described; (2) given a completely fixed gauge and a complete set of boundary conditions, the principle of least action determines a unique mathematical configuration and therefore a unique physical situation consistent with these bounds; and (3) fixing the gauge introduces no anomalies in the calculation, due either to gauge dependence in describing partial information about boundary conditions or to incompleteness of the theory. Determination of the likelihood of possible measurement outcomes proceeds by: (1) establishing a probability distribution over all physical situations determined by boundary conditions consistent with the setup information; (2) establishing a probability distribution of measurement outcomes for each possible physical situation; and (3) convolving these two probability distributions to get a distribution of possible measurement outcomes consistent with the setup information. These assumptions have enough validity across a wide range of energy scales and experimental conditions to allow these theories to make accurate predictions about almost all of the phenomena encountered in daily life: light, heat, and electricity, eclipses, spaceflight, etc. They fail only at the smallest and largest scales due to omissions in the theories themselves, and when the mathematical techniques themselves break down, most notably in the case of turbulence and other chaotic phenomena.
=== Quantum field theories === Other than these classical continuum field theories, the most widely known gauge theories are quantum field theories, including quantum electrodynamics and the Standard Model of elementary particle physics. The starting point of a quantum field theory is much like that of its continuum analog: a gauge-covariant action integral that characterizes "allowable" physical situations according to the principle of least action. However, continuum and quantum theories differ significantly in how they handle the excess degrees of freedom represented by gauge transformations. Continuum theories, and most pedagogical treatments of the simplest quantum field theories, use a gauge fixing prescription to reduce the orbit of mathematical configurations that represent a given physical situation to a smaller orbit related by a smaller gauge group (the global symmetry group, or perhaps even the trivial group). More sophisticated quantum field theories, in particular those that involve a non-abelian gauge group, break the gauge symmetry within the techniques of perturbation theory by introducing additional fields (the Faddeev–Popov ghosts) and counterterms motivated by anomaly cancellation, in an approach known as BRST quantization. While these concerns are in one sense highly technical, they are also closely related to the nature of measurement, the limits on knowledge of a physical situation, and the interactions between incompletely specified experimental conditions and incompletely understood physical theory. The mathematical techniques that have been developed in order to make gauge theories tractable have found many other applications, from solid-state physics and crystallography to low-dimensional topology. == Classical gauge theory == === Classical electromagnetism === In electrostatics, one can either discuss the electric field, E, or its corresponding electric potential, V. Knowledge of one makes it possible to find the other, except that potentials differing by a constant, V ↦ V + C {\displaystyle V\mapsto V+C} , correspond to the same electric field. This is because the electric field relates to changes in the potential from one point in space to another, and the constant C would cancel out when subtracting to find the change in potential. In terms of vector calculus, the electric field is the gradient of the potential, E = − ∇ V {\displaystyle \mathbf {E} =-\nabla V} . Generalizing from static electricity to electromagnetism, we have a second potential, the vector potential A, with E = − ∇ V − ∂ A ∂ t B = ∇ × A {\displaystyle {\begin{aligned}\mathbf {E} &=-\nabla V-{\frac {\partial \mathbf {A} }{\partial t}}\\\mathbf {B} &=\nabla \times \mathbf {A} \end{aligned}}} The general gauge transformations now become not just V ↦ V + C {\displaystyle V\mapsto V+C} but A ↦ A + ∇ f V ↦ V − ∂ f ∂ t {\displaystyle {\begin{aligned}\mathbf {A} &\mapsto \mathbf {A} +\nabla f\\V&\mapsto V-{\frac {\partial f}{\partial t}}\end{aligned}}} where f is any twice continuously differentiable function that depends on position and time. The electromagnetic fields remain the same under the gauge transformation. === Example: scalar O(n) gauge theory === The remainder of this section requires some familiarity with classical or quantum field theory, and the use of Lagrangians. Definitions in this section: gauge group, gauge field, interaction Lagrangian, gauge boson. 
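Before turning to the scalar example below, the statement above that the electromagnetic fields remain the same under the gauge transformation can be verified symbolically. The following minimal sketch (the helper names are arbitrary) checks that E and B computed from (V, A) and from the transformed potentials (V − ∂f/∂t, A + ∇f) agree, which rests only on the equality of mixed partial derivatives.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = (x, y, z)

V = sp.Function('V')(x, y, z, t)                                   # scalar potential
A = sp.Matrix([sp.Function(f'A{i}')(x, y, z, t) for i in 'xyz'])   # vector potential
f = sp.Function('f')(x, y, z, t)                                   # arbitrary gauge function

def grad(s):
    return sp.Matrix([sp.diff(s, c) for c in coords])

def curl(v):
    return sp.Matrix([sp.diff(v[2], y) - sp.diff(v[1], z),
                      sp.diff(v[0], z) - sp.diff(v[2], x),
                      sp.diff(v[1], x) - sp.diff(v[0], y)])

def fields(V, A):
    # E = -grad V - dA/dt,  B = curl A
    return -grad(V) - sp.diff(A, t), curl(A)

E1, B1 = fields(V, A)
E2, B2 = fields(V - sp.diff(f, t), A + grad(f))   # gauge-transformed potentials

print((E1 - E2).applyfunc(sp.simplify))   # zero vector: E is gauge invariant
print((B1 - B2).applyfunc(sp.simplify))   # zero vector: B is gauge invariant
```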
The following illustrates how local gauge invariance can be "motivated" heuristically starting from global symmetry properties, and how it leads to an interaction between originally non-interacting fields. Consider a set of n {\displaystyle n} non-interacting real scalar fields, with equal masses m. This system is described by an action that is the sum of the (usual) action for each scalar field φ i {\displaystyle \varphi _{i}} S = ∫ d 4 x ∑ i = 1 n [ 1 2 ∂ μ φ i ∂ μ φ i − 1 2 m 2 φ i 2 ] {\displaystyle {\mathcal {S}}=\int \,\mathrm {d} ^{4}x\sum _{i=1}^{n}\left[{\frac {1}{2}}\partial _{\mu }\varphi _{i}\partial ^{\mu }\varphi _{i}-{\frac {1}{2}}m^{2}\varphi _{i}^{2}\right]} The Lagrangian (density) can be compactly written as L = 1 2 ( ∂ μ Φ ) T ∂ μ Φ − 1 2 m 2 Φ T Φ {\displaystyle \ {\mathcal {L}}={\frac {1}{2}}(\partial _{\mu }\Phi )^{\mathsf {T}}\partial ^{\mu }\Phi -{\frac {1}{2}}m^{2}\Phi ^{\mathsf {T}}\Phi } by introducing a vector of fields Φ T = ( φ 1 , φ 2 , … , φ n ) {\displaystyle \ \Phi ^{\mathsf {T}}=(\varphi _{1},\varphi _{2},\ldots ,\varphi _{n})} The term ∂ μ Φ {\displaystyle \partial _{\mu }\Phi } is the partial derivative of Φ {\displaystyle \Phi } along dimension μ {\displaystyle \mu } . It is now transparent that the Lagrangian is invariant under the transformation Φ ↦ Φ ′ = G Φ {\displaystyle \ \Phi \mapsto \Phi '=G\Phi } whenever G is a constant matrix belonging to the n-by-n orthogonal group O(n). This is seen to preserve the Lagrangian, since the derivative of Φ ′ {\displaystyle \Phi '} transforms identically to Φ {\displaystyle \Phi } and both quantities appear inside dot products in the Lagrangian (orthogonal transformations preserve the dot product). ( ∂ μ Φ ) ↦ ( ∂ μ Φ ) ′ = G ∂ μ Φ {\displaystyle \ (\partial _{\mu }\Phi )\mapsto (\partial _{\mu }\Phi )'=G\partial _{\mu }\Phi } This characterizes the global symmetry of this particular Lagrangian, and the symmetry group is often called the gauge group; the mathematical term is structure group, especially in the theory of G-structures. Incidentally, Noether's theorem implies that invariance under this group of transformations leads to the conservation of the currents J μ a = i ∂ μ Φ T T a Φ {\displaystyle \ J_{\mu }^{a}=i\partial _{\mu }\Phi ^{\mathsf {T}}T^{a}\Phi } where the Ta matrices are generators of the SO(n) group. There is one conserved current for every generator. Now, demanding that this Lagrangian should have local O(n)-invariance requires that the G matrices (which were earlier constant) should be allowed to become functions of the spacetime coordinates x. In this case, the G matrices do not "pass through" the derivatives, when G = G(x), ∂ μ ( G Φ ) ≠ G ( ∂ μ Φ ) {\displaystyle \ \partial _{\mu }(G\Phi )\neq G(\partial _{\mu }\Phi )} The failure of the derivative to commute with "G" introduces an additional term (in keeping with the product rule), which spoils the invariance of the Lagrangian. In order to rectify this we define a new derivative operator such that the derivative of Φ ′ {\displaystyle \Phi '} again transforms identically with Φ {\displaystyle \Phi } ( D μ Φ ) ′ = G D μ Φ {\displaystyle \ (D_{\mu }\Phi )'=GD_{\mu }\Phi } This new "derivative" is called a (gauge) covariant derivative and takes the form D μ = ∂ μ − i g A μ {\displaystyle \ D_{\mu }=\partial _{\mu }-igA_{\mu }} where g is called the coupling constant; a quantity defining the strength of an interaction. 
After a simple calculation we can see that the gauge field A(x) must transform as follows A μ ′ = G A μ G − 1 − i g ( ∂ μ G ) G − 1 {\displaystyle \ A'_{\mu }=GA_{\mu }G^{-1}-{\frac {i}{g}}(\partial _{\mu }G)G^{-1}} The gauge field is an element of the Lie algebra, and can therefore be expanded as A μ = ∑ a A μ a T a {\displaystyle \ A_{\mu }=\sum _{a}A_{\mu }^{a}T^{a}} There are therefore as many gauge fields as there are generators of the Lie algebra. Finally, we now have a locally gauge invariant Lagrangian L l o c = 1 2 ( D μ Φ ) T D μ Φ − 1 2 m 2 Φ T Φ {\displaystyle \ {\mathcal {L}}_{\mathrm {loc} }={\frac {1}{2}}(D_{\mu }\Phi )^{\mathsf {T}}D^{\mu }\Phi -{\frac {1}{2}}m^{2}\Phi ^{\mathsf {T}}\Phi } Pauli uses the term gauge transformation of the first type to mean the transformation of Φ {\displaystyle \Phi } , while the compensating transformation in A {\displaystyle A} is called a gauge transformation of the second type. The difference between this Lagrangian and the original globally gauge-invariant Lagrangian is seen to be the interaction Lagrangian L i n t = i g 2 Φ T A μ T ∂ μ Φ + i g 2 ( ∂ μ Φ ) T A μ Φ − g 2 2 ( A μ Φ ) T A μ Φ {\displaystyle \ {\mathcal {L}}_{\mathrm {int} }=i{\frac {g}{2}}\Phi ^{\mathsf {T}}A_{\mu }^{\mathsf {T}}\partial ^{\mu }\Phi +i{\frac {g}{2}}(\partial _{\mu }\Phi )^{\mathsf {T}}A^{\mu }\Phi -{\frac {g^{2}}{2}}(A_{\mu }\Phi )^{\mathsf {T}}A^{\mu }\Phi } This term introduces interactions between the n scalar fields just as a consequence of the demand for local gauge invariance. However, to make this interaction physical and not completely arbitrary, the mediator A(x) needs to propagate in space. That is dealt with in the next section by adding yet another term, L g f {\displaystyle {\mathcal {L}}_{\mathrm {gf} }} , to the Lagrangian. In the quantized version of the obtained classical field theory, the quanta of the gauge field A(x) are called gauge bosons. The interpretation of the interaction Lagrangian in quantum field theory is of scalar bosons interacting by the exchange of these gauge bosons. === Yang–Mills Lagrangian for the gauge field === The picture of a classical gauge theory developed in the previous section is almost complete, except for the fact that to define the covariant derivatives D, one needs to know the value of the gauge field A ( x ) {\displaystyle A(x)} at all spacetime points. Instead of manually specifying the values of this field, it can be given as the solution to a field equation. Further requiring that the Lagrangian that generates this field equation is locally gauge invariant as well, one possible form for the gauge field Lagrangian is L gf = − 1 2 tr ⁡ ( F μ ν F μ ν ) = − 1 4 F a μ ν F μ ν a {\displaystyle {\mathcal {L}}_{\text{gf}}=-{\frac {1}{2}}\operatorname {tr} \left(F^{\mu \nu }F_{\mu \nu }\right)=-{\frac {1}{4}}F^{a\mu \nu }F_{\mu \nu }^{a}} where the F μ ν a {\displaystyle F_{\mu \nu }^{a}} are obtained from potentials A μ a {\displaystyle A_{\mu }^{a}} , being the components of A ( x ) {\displaystyle A(x)} , by F μ ν a = ∂ μ A ν a − ∂ ν A μ a + g ∑ b , c f a b c A μ b A ν c {\displaystyle F_{\mu \nu }^{a}=\partial _{\mu }A_{\nu }^{a}-\partial _{\nu }A_{\mu }^{a}+g\sum _{b,c}f^{abc}A_{\mu }^{b}A_{\nu }^{c}} and the f a b c {\displaystyle f^{abc}} are the structure constants of the Lie algebra of the generators of the gauge group. This formulation of the Lagrangian is called a Yang–Mills action. 
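For orientation, the number of generators, and hence of gauge fields, is dim SO(n) = n(n−1)/2 and dim SU(N) = N² − 1. An O(3) version of the scalar theory above therefore carries three gauge fields A_\mu^{a}, the SU(3) colour group of quantum chromodynamics carries eight (the gluons), and U(1) electrodynamics carries a single one (the photon field).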
Other gauge invariant actions also exist (e.g., nonlinear electrodynamics, Born–Infeld action, Chern–Simons model, theta term, etc.). In this Lagrangian term there is no field whose transformation counterweighs the one of A {\displaystyle A} . Invariance of this term under gauge transformations is a particular case of a priori classical (geometrical) symmetry. This symmetry must be restricted in order to perform quantization, the procedure being denominated gauge fixing, but even after restriction, gauge transformations may be possible. The complete Lagrangian for the gauge theory is now L = L loc + L gf = L global + L int + L gf {\displaystyle {\mathcal {L}}={\mathcal {L}}_{\text{loc}}+{\mathcal {L}}_{\text{gf}}={\mathcal {L}}_{\text{global}}+{\mathcal {L}}_{\text{int}}+{\mathcal {L}}_{\text{gf}}} === Example: electrodynamics === As a simple application of the formalism developed in the previous sections, consider the case of electrodynamics, with only the electron field. The bare-bones action that generates the electron field's Dirac equation is S = ∫ ψ ¯ ( i ℏ c γ μ ∂ μ − m c 2 ) ψ d 4 x {\displaystyle {\mathcal {S}}=\int {\bar {\psi }}\left(i\hbar c\,\gamma ^{\mu }\partial _{\mu }-mc^{2}\right)\psi \,\mathrm {d} ^{4}x} The global symmetry for this system is ψ ↦ e i θ ψ {\displaystyle \psi \mapsto e^{i\theta }\psi } The gauge group here is U(1), just rotations of the phase angle of the field, with the particular rotation determined by the constant θ. "Localising" this symmetry implies the replacement of θ by θ(x). An appropriate covariant derivative is then D μ = ∂ μ − i e ℏ A μ {\displaystyle D_{\mu }=\partial _{\mu }-i{\frac {e}{\hbar }}A_{\mu }} Identifying the "charge" e (not to be confused with the mathematical constant e in the symmetry description) with the usual electric charge (this is the origin of the usage of the term in gauge theories), and the gauge field A(x) with the four-vector potential of the electromagnetic field results in an interaction Lagrangian L int = e ℏ ψ ¯ ( x ) γ μ ψ ( x ) A μ ( x ) = J μ ( x ) A μ ( x ) {\displaystyle {\mathcal {L}}_{\text{int}}={\frac {e}{\hbar }}{\bar {\psi }}(x)\gamma ^{\mu }\psi (x)A_{\mu }(x)=J^{\mu }(x)A_{\mu }(x)} where J μ ( x ) = e ℏ ψ ¯ ( x ) γ μ ψ ( x ) {\displaystyle J^{\mu }(x)={\frac {e}{\hbar }}{\bar {\psi }}(x)\gamma ^{\mu }\psi (x)} is the electric current four vector in the Dirac field. The gauge principle is therefore seen to naturally introduce the so-called minimal coupling of the electromagnetic field to the electron field. Adding a Lagrangian for the gauge field A μ ( x ) {\displaystyle A_{\mu }(x)} in terms of the field strength tensor exactly as in electrodynamics, one obtains the Lagrangian used as the starting point in quantum electrodynamics. L QED = ψ ¯ ( i ℏ c γ μ D μ − m c 2 ) ψ − 1 4 μ 0 F μ ν F μ ν {\displaystyle {\mathcal {L}}_{\text{QED}}={\bar {\psi }}\left(i\hbar c\,\gamma ^{\mu }D_{\mu }-mc^{2}\right)\psi -{\frac {1}{4\mu _{0}}}F_{\mu \nu }F^{\mu \nu }} == Mathematical formalism == Gauge theories are usually discussed in the language of differential geometry. Mathematically, a gauge is just a choice of a (local) section of some principal bundle. A gauge transformation is just a transformation between two such sections. Although gauge theory is dominated by the study of connections (primarily because it's mainly studied by high-energy physicists), the idea of a connection is not central to gauge theory in general. 
In fact, a result in general gauge theory shows that affine representations (i.e., affine modules) of the gauge transformations can be classified as sections of a jet bundle satisfying certain properties. There are representations that transform covariantly pointwise (called by physicists gauge transformations of the first kind), representations that transform as a connection form (called by physicists gauge transformations of the second kind, an affine representation)—and other more general representations, such as the B field in BF theory. There are more general nonlinear representations (realizations), but these are extremely complicated. Still, nonlinear sigma models transform nonlinearly, so there are applications. If there is a principal bundle P whose base space is space or spacetime and structure group is a Lie group, then the sections of P form a principal homogeneous space of the group of gauge transformations. Connections (gauge connection) define this principal bundle, yielding a covariant derivative ∇ in each associated vector bundle. If a local frame is chosen (a local basis of sections), then this covariant derivative is represented by the connection form A, a Lie algebra-valued 1-form, which is called the gauge potential in physics. This is evidently not an intrinsic but a frame-dependent quantity. The curvature form F, a Lie algebra-valued 2-form that is an intrinsic quantity, is constructed from a connection form by F = d A + A ∧ A {\displaystyle \mathbf {F} =\mathrm {d} \mathbf {A} +\mathbf {A} \wedge \mathbf {A} } where d stands for the exterior derivative and ∧ {\displaystyle \wedge } stands for the wedge product. ( A {\displaystyle \mathbf {A} } is an element of the vector space spanned by the generators T a {\displaystyle T^{a}} , and so the components of A {\displaystyle \mathbf {A} } do not commute with one another. Hence the wedge product A ∧ A {\displaystyle \mathbf {A} \wedge \mathbf {A} } does not vanish.) Infinitesimal gauge transformations form a Lie algebra, which is characterized by a smooth Lie-algebra-valued scalar, ε. Under such an infinitesimal gauge transformation, δ ε A = [ ε , A ] − d ε {\displaystyle \delta _{\varepsilon }\mathbf {A} =[\varepsilon ,\mathbf {A} ]-\mathrm {d} \varepsilon } where [ ⋅ , ⋅ ] {\displaystyle [\cdot ,\cdot ]} is the Lie bracket. One nice thing is that if δ ε X = ε X {\displaystyle \delta _{\varepsilon }X=\varepsilon X} , then δ ε D X = ε D X {\displaystyle \delta _{\varepsilon }DX=\varepsilon DX} where D is the covariant derivative D X = d e f d X + A X {\displaystyle DX\ {\stackrel {\mathrm {def} }{=}}\ \mathrm {d} X+\mathbf {A} X} Also, δ ε F = [ ε , F ] {\displaystyle \delta _{\varepsilon }\mathbf {F} =[\varepsilon ,\mathbf {F} ]} , which means F {\displaystyle \mathbf {F} } transforms covariantly. Not all gauge transformations can be generated by infinitesimal gauge transformations in general. An example is when the base manifold is a compact manifold without boundary such that the homotopy class of mappings from that manifold to the Lie group is nontrivial. See instanton for an example. The Yang–Mills action is now given by 1 4 g 2 ∫ Tr ⁡ [ ⋆ F ∧ F ] {\displaystyle {\frac {1}{4g^{2}}}\int \operatorname {Tr} [{\star }F\wedge F]} where ⋆ {\displaystyle {\star }} is the Hodge star operator and the integral is defined as in differential geometry. 
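The remark above that A ∧ A does not vanish can be checked directly in components, where (A ∧ A)_{μν} = A_μ A_ν − A_ν A_μ = [A_μ, A_ν]. The following minimal sketch (illustrative, with su(2)-valued sample components) shows that this commutator is non-zero in the non-abelian case and vanishes in the abelian one, where F reduces to dA.

```python
import numpy as np

# Sketch (not from the article): for a Lie-algebra-valued 1-form A = A_mu dx^mu,
# the wedge product has components (A ^ A)_{mu nu} = A_mu A_nu - A_nu A_mu,
# i.e. the matrix commutator [A_mu, A_nu].  It vanishes for a commuting (abelian)
# gauge field but not for a non-abelian one, which is why F = dA + A ^ A carries
# the extra quadratic term.
sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)
sigma2 = np.array([[0, -1j], [1j, 0]], dtype=complex)

# Two sample su(2)-valued components A_0 and A_1 (anti-Hermitian matrices).
A0 = 1j * 0.3 * sigma1
A1 = 1j * 0.7 * sigma2

wedge_01 = A0 @ A1 - A1 @ A0
print(np.allclose(wedge_01, 0))            # False: A ^ A does not vanish

# Abelian comparison: components proportional to the identity commute.
B0, B1 = 0.3j * np.eye(2), 0.7j * np.eye(2)
print(np.allclose(B0 @ B1 - B1 @ B0, 0))   # True: for U(1), F reduces to dA
```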
A quantity which is gauge-invariant (i.e., invariant under gauge transformations) is the Wilson loop, which is defined over any closed path, γ, as follows: χ ( ρ ) ( P { e ∫ γ A } ) {\displaystyle \chi ^{(\rho )}\left({\mathcal {P}}\left\{e^{\int _{\gamma }A}\right\}\right)} where χ is the character of a complex representation ρ and P {\displaystyle {\mathcal {P}}} represents the path-ordered operator. The formalism of gauge theory carries over to a general setting. For example, it is sufficient to ask that a vector bundle have a metric connection; when one does so, one finds that the metric connection satisfies the Yang–Mills equations of motion. == Quantization of gauge theories == Gauge theories may be quantized by specialization of methods which are applicable to any quantum field theory. However, because of the subtleties imposed by the gauge constraints (see section on Mathematical formalism, above) there are many technical problems to be solved which do not arise in other field theories. At the same time, the richer structure of gauge theories allows simplification of some computations: for example Ward identities connect different renormalization constants. === Methods and aims === The first gauge theory quantized was quantum electrodynamics (QED). The first methods developed for this involved gauge fixing and then applying canonical quantization. The Gupta–Bleuler method was also developed to handle this problem. Non-abelian gauge theories are now handled by a variety of means. Methods for quantization are covered in the article on quantization. The main point to quantization is to be able to compute quantum amplitudes for various processes allowed by the theory. Technically, they reduce to the computations of certain correlation functions in the vacuum state. This involves a renormalization of the theory. When the running coupling of the theory is small enough, then all required quantities may be computed in perturbation theory. Quantization schemes intended to simplify such computations (such as canonical quantization) may be called perturbative quantization schemes. At present some of these methods lead to the most precise experimental tests of gauge theories. However, in most gauge theories, there are many interesting questions which are non-perturbative. Quantization schemes suited to these problems (such as lattice gauge theory) may be called non-perturbative quantization schemes. Precise computations in such schemes often require supercomputing, and are therefore less well-developed currently than other schemes. === Anomalies === Some of the symmetries of the classical theory are then seen not to hold in the quantum theory; a phenomenon called an anomaly. Among the most well known are: The scale anomaly, which gives rise to a running coupling constant. In QED this gives rise to the phenomenon of the Landau pole. In quantum chromodynamics (QCD) this leads to asymptotic freedom. The chiral anomaly in either chiral or vector field theories with fermions. This has close connection with topology through the notion of instantons. In QCD this anomaly causes the decay of a pion to two photons. The gauge anomaly, which must cancel in any consistent physical theory. In the electroweak theory this cancellation requires an equal number of quarks and leptons. == Pure gauge == A pure gauge is the set of field configurations obtained by a gauge transformation on the null-field configuration, i.e., a gauge transform of zero. 
So it is a particular "gauge orbit" in the field configuration's space. Thus, in the abelian case, where A μ ( x ) → A μ ′ ( x ) = A μ ( x ) + ∂ μ f ( x ) {\displaystyle A_{\mu }(x)\rightarrow A'_{\mu }(x)=A_{\mu }(x)+\partial _{\mu }f(x)} , the pure gauge is just the set of field configurations A μ ′ ( x ) = ∂ μ f ( x ) {\displaystyle A'_{\mu }(x)=\partial _{\mu }f(x)} for all f(x). == See also == == References == == Bibliography == General readers Schumm, Bruce (2004) Deep Down Things. Johns Hopkins University Press. Esp. chpt. 8. A serious attempt by a physicist to explain gauge theory and the Standard Model with little formal mathematics. Carroll, Sean (2024). The Biggest Ideas in the Universe : Quanta and Fields. Dutton. p. 193-234 (chap 9 : Gauge Theory, and chap 10 : Phases). ISBN 978-0-5931-8660-2. Texts Bailin, David; Love, Alexander (2019). Introduction to Gauge Field Theory. Taylor & Francis. ISBN 9780203750100. Cheng, T.-P.; Li, L.-F. (1983). Gauge Theory of Elementary Particle Physics. Oxford University Press. ISBN 0-19-851961-3. Frampton, P. (2008). Gauge Field Theories (3rd ed.). Wiley-VCH. Kane, G.L. (1987). Modern Elementary Particle Physics. Perseus Books. ISBN 0-201-11749-5. Quigg, Chris (1983). Gauge Theories of the Strong, Weak and Electromagnetic Interactions. Addison-Wesley. ISBN 0-8053-6021-2. Articles Becchi, C. (1997). "Introduction to Gauge Theories". arXiv:hep-ph/9705211. Gross, D. (1992). "Gauge theory – Past, Present and Future". Retrieved 2009-04-23. Jackson, J.D. (2002). "From Lorenz to Coulomb and other explicit gauge transformations". Am. J. Phys. 70 (9): 917–928. arXiv:physics/0204034. Bibcode:2002AmJPh..70..917J. doi:10.1119/1.1491265. S2CID 119652556. Svetlichny, George (1999). "Preparation for Gauge Theory". arXiv:math-ph/9902027. == External links == "Gauge transformation", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Yang–Mills equations on DispersiveWiki Gauge theories on Scholarpedia
Wikipedia/Non-abelian_gauge_theory
In particle physics, flavour or flavor refers to the species of an elementary particle. The Standard Model counts six flavours of quarks and six flavours of leptons. They are conventionally parameterized with flavour quantum numbers that are assigned to all subatomic particles. They can also be described by some of the family symmetries proposed for the quark-lepton generations. == Quantum numbers == In classical mechanics, a force acting on a point-like particle can only alter the particle's dynamical state, i.e., its momentum, angular momentum, etc. Quantum field theory, however, allows interactions that can alter other facets of a particle's nature described by non-dynamical, discrete quantum numbers. In particular, the action of the weak force is such that it allows the conversion of quantum numbers describing mass and electric charge of both quarks and leptons from one discrete type to another. This is known as a flavour change, or flavour transmutation. Due to their quantum description, flavour states may also undergo quantum superposition. In atomic physics the principal quantum number of an electron specifies the electron shell in which it resides, which determines the energy level of the whole atom. Analogously, the five flavour quantum numbers (isospin, strangeness, charm, bottomness or topness) can characterize the quantum state of quarks, by the degree to which it exhibits six distinct flavours (u, d, c, s, t, b). Composite particles can be created from multiple quarks, forming hadrons, such as mesons and baryons, each possessing unique aggregate characteristics, such as different masses, electric charges, and decay modes. A hadron's overall flavour quantum numbers depend on the numbers of constituent quarks of each particular flavour. === Conservation laws === All of the various charges discussed above are conserved by the fact that the corresponding charge operators can be understood as generators of symmetries that commute with the Hamiltonian. Thus, the eigenvalues of the various charge operators are conserved. Absolutely conserved quantum numbers in the Standard Model are: electric charge (Q) weak isospin (T3) baryon number (B) lepton number (L) In some theories, such as the grand unified theory, the individual baryon and lepton number conservation can be violated, if the difference between them (B − L) is conserved (see Chiral anomaly). Strong interactions conserve all flavours, but all flavour quantum numbers are violated (changed, non-conserved) by electroweak interactions. == Flavour symmetry == If there are two or more particles which have identical interactions, then they may be interchanged without affecting the physics. All (complex) linear combinations of these two particles give the same physics, as long as the combinations are orthogonal, or perpendicular, to each other. In other words, the theory possesses symmetry transformations such as M ( u d ) {\displaystyle M\left({u \atop d}\right)} , where u and d are the two fields (representing the various generations of leptons and quarks, see below), and M is any 2×2 unitary matrix with a unit determinant. Such matrices form a Lie group called SU(2) (see special unitary group). This is an example of flavour symmetry. In quantum chromodynamics, flavour is a conserved global symmetry. In the electroweak theory, on the other hand, this symmetry is broken, and flavour changing processes exist, such as quark decay or neutrino oscillations. 
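The following short sketch (illustrative only; the angle and axis are arbitrary sample values) constructs such a 2×2 unitary matrix with unit determinant and verifies that it preserves the orthogonality of the two combined fields, which is the defining property of the flavour-symmetry transformations just described.

```python
import numpy as np

# Illustrative sketch: an SU(2) flavour rotation
# M = exp(i theta n.sigma/2) = cos(theta/2) I + i sin(theta/2) n.sigma is unitary with
# unit determinant, and it mixes the doublet (u, d) while preserving inner products,
# so orthogonal combinations of the two fields stay orthogonal.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

theta = 1.2                                      # arbitrary "rotation angle"
n = np.array([0.4, -0.3, 0.7])
n = n / np.linalg.norm(n)                        # unit "rotation axis" in flavour space
n_dot_sigma = sum(ni * si for ni, si in zip(n, sigma))

M = np.cos(theta / 2) * np.eye(2) + 1j * np.sin(theta / 2) * n_dot_sigma

print(np.allclose(M.conj().T @ M, np.eye(2)))    # unitary: True
print(np.isclose(np.linalg.det(M), 1.0))         # unit determinant: True

u = np.array([1.0, 0.0], dtype=complex)          # "up" field
d = np.array([0.0, 1.0], dtype=complex)          # "down" field
print(np.isclose(np.vdot(M @ u, M @ d), 0.0))    # orthogonality preserved: True
```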
== Flavour quantum numbers == === Leptons === All leptons carry a lepton number L = 1. In addition, leptons carry weak isospin, T3, which is −1/2 for the three charged leptons (i.e. electron, muon and tau) and +1/2 for the three associated neutrinos. Each doublet of a charged lepton and a neutrino with opposite T3 is said to constitute one generation of leptons. In addition, one defines a quantum number called weak hypercharge, YW, which is −1 for all left-handed leptons. Weak isospin and weak hypercharge are gauged in the Standard Model. Leptons may be assigned the six flavour quantum numbers: electron number, muon number, tau number, and corresponding numbers for the neutrinos (electron neutrino, muon neutrino and tau neutrino). These are conserved in strong and electromagnetic interactions, but violated by weak interactions. Therefore, such flavour quantum numbers are not of great use. A separate quantum number for each generation is more useful: electronic lepton number (+1 for electrons and electron neutrinos), muonic lepton number (+1 for muons and muon neutrinos), and tauonic lepton number (+1 for tau leptons and tau neutrinos). However, even these numbers are not absolutely conserved, as neutrinos of different generations can mix; that is, a neutrino of one flavour can transform into another flavour. The strength of such mixings is specified by a matrix called the Pontecorvo–Maki–Nakagawa–Sakata matrix (PMNS matrix). === Quarks === All quarks carry a baryon number B = +1/3, and all anti-quarks have B = −1/3. They also all carry weak isospin, T3 = ±1/2. The positively charged quarks (up, charm, and top quarks) are called up-type quarks and have T3 = +1/2; the negatively charged quarks (down, strange, and bottom quarks) are called down-type quarks and have T3 = −1/2. Each doublet of up and down type quarks constitutes one generation of quarks. For all the quark flavour quantum numbers listed below, the convention is that the flavour charge and the electric charge of a quark have the same sign. Thus any flavour carried by a charged meson has the same sign as its charge. Quarks have the following flavour quantum numbers: The third component of isospin (usually just "isospin") (I3), which has value I3 = +1/2 for the up quark and I3 = −1/2 for the down quark. Strangeness (S): Defined as S = −ns + ns̅, where ns represents the number of strange quarks (s) and ns̅ represents the number of strange antiquarks (s̅). This quantum number was introduced by Murray Gell-Mann. This definition gives the strange quark a strangeness of −1 for the above-mentioned reason. Charm (C): Defined as C = nc − nc̅, where nc represents the number of charm quarks (c) and nc̅ represents the number of charm antiquarks. The charm quark's value is +1. Bottomness (or beauty) (B′): Defined as B′ = −nb + nb̅, where nb represents the number of bottom quarks (b) and nb̅ represents the number of bottom antiquarks. Topness (or truth) (T): Defined as T = nt − nt̅, where nt represents the number of top quarks (t) and nt̅ represents the number of top antiquarks. However, because of the extremely short lifetime of the top quark (predicted to be only 5×10−25 s), by the time it can interact strongly it has already decayed to another flavour of quark (usually to a bottom quark). For that reason the top quark doesn't hadronize, that is, it never forms any meson or baryon. 
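Because these quantum numbers are additive (a point made explicit just below), the flavour content of a hadron can be read off from its valence quarks. The following sketch (illustrative, not from the article) tallies the numbers for a few familiar hadrons; the per-quark assignments follow the definitions above, with antiquarks contributing the opposite sign.

```python
from fractions import Fraction as F

# Illustrative sketch: the flavour quantum numbers defined above are additive, so a
# hadron's values are simply sums over its valence quarks, with antiquarks (written
# with a leading "~") contributing the opposite sign.
# Per-quark assignments: third isospin component I3, strangeness S, charm C,
# bottomness Bp (B'), topness T, baryon number B, and electric charge Q.
QUARKS = {
    "u": dict(I3=F(1, 2),  S=0,  C=0,  Bp=0,  T=0,  B=F(1, 3), Q=F(2, 3)),
    "d": dict(I3=F(-1, 2), S=0,  C=0,  Bp=0,  T=0,  B=F(1, 3), Q=F(-1, 3)),
    "s": dict(I3=0,        S=-1, C=0,  Bp=0,  T=0,  B=F(1, 3), Q=F(-1, 3)),
    "c": dict(I3=0,        S=0,  C=+1, Bp=0,  T=0,  B=F(1, 3), Q=F(2, 3)),
    "b": dict(I3=0,        S=0,  C=0,  Bp=-1, T=0,  B=F(1, 3), Q=F(-1, 3)),
    "t": dict(I3=0,        S=0,  C=0,  Bp=0,  T=+1, B=F(1, 3), Q=F(2, 3)),
}

def flavour_numbers(valence):
    """Sum the additive quantum numbers over a valence-quark list like ["u", "~s"]."""
    totals = {key: F(0) for key in ("I3", "S", "C", "Bp", "T", "B", "Q")}
    for q in valence:
        sign = -1 if q.startswith("~") else +1
        for key, val in QUARKS[q.lstrip("~")].items():
            totals[key] += sign * val
    return {k: str(v) for k, v in totals.items()}

# K+ = u s~ has strangeness +1; D+ = c d~ has charm +1; the proton has baryon number 1.
print("K+     ", flavour_numbers(["u", "~s"]))
print("D+     ", flavour_numbers(["c", "~d"]))
print("proton ", flavour_numbers(["u", "u", "d"]))
# The resulting electric charges (+1 in each case) are consistent with the
# Gell-Mann-Nishijima relation Q = I3 + Y/2 discussed below.
```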
These five quantum numbers, together with baryon number (which is not a flavour quantum number), completely specify numbers of all 6 quark flavours separately (as n q − n q̅ , i.e. an antiquark is counted with the minus sign). They are conserved by both the electromagnetic and strong interactions (but not the weak interaction). From them can be built the derived quantum numbers: Hypercharge (Y): Y = B + S + C + B′ + T Electric charge (Q): Q = I3 + ⁠1/2⁠Y (see Gell-Mann–Nishijima formula) The terms "strange" and "strangeness" predate the discovery of the quark, but continued to be used after its discovery for the sake of continuity (i.e. the strangeness of each type of hadron remained the same); strangeness of anti-particles being referred to as +1, and particles as −1 as per the original definition. Strangeness was introduced to explain the rate of decay of newly discovered particles, such as the kaon, and was used in the Eightfold Way classification of hadrons and in subsequent quark models. These quantum numbers are preserved under strong and electromagnetic interactions, but not under weak interactions. For first-order weak decays, that is processes involving only one quark decay, these quantum numbers (e.g. charm) can only vary by 1, that is, for a decay involving a charmed quark or antiquark either as the incident particle or as a decay byproduct, ΔC = ±1 ; likewise, for a decay involving a bottom quark or antiquark ΔB′ = ±1 . Since first-order processes are more common than second-order processes (involving two quark decays), this can be used as an approximate "selection rule" for weak decays. A special mixture of quark flavours is an eigenstate of the weak interaction part of the Hamiltonian, so will interact in a particularly simple way with the W bosons (charged weak interactions violate flavour). On the other hand, a fermion of a fixed mass (an eigenstate of the kinetic and strong interaction parts of the Hamiltonian) is an eigenstate of flavour. The transformation from the former basis to the flavour-eigenstate/mass-eigenstate basis for quarks underlies the Cabibbo–Kobayashi–Maskawa matrix (CKM matrix). This matrix is analogous to the PMNS matrix for neutrinos, and quantifies flavour changes under charged weak interactions of quarks. The CKM matrix allows for CP violation if there are at least three generations. === Antiparticles and hadrons === Flavour quantum numbers are additive. Hence antiparticles have flavour equal in magnitude to the particle but opposite in sign. Hadrons inherit their flavour quantum number from their valence quarks: this is the basis of the classification in the quark model. The relations between the hypercharge, electric charge and other flavour quantum numbers hold for hadrons as well as quarks. == Flavour problem == The flavour problem (also known as the flavour puzzle) is the inability of current Standard Model flavour physics to explain why the free parameters of particles in the Standard Model have the values they have, and why there are specified values for mixing angles in the PMNS and CKM matrices. These free parameters - the fermion masses and their mixing angles - appear to be specifically tuned. Understanding the reason for such tuning would be the solution to the flavor puzzle. 
There are very fundamental questions involved in this puzzle such as why there are three generations of quarks (up-down, charm-strange, and top-bottom quarks) and leptons (electron, muon and tau, each with its neutrino), as well as how and why the mass and mixing hierarchy arises among different flavours of these fermions. == Quantum chromodynamics == Quantum chromodynamics (QCD) contains six flavours of quarks. However, their masses differ and as a result they are not strictly interchangeable with each other. The up and down flavours are close to having equal masses, and the theory of these two quarks possesses an approximate SU(2) symmetry (isospin symmetry). === Chiral symmetry description === Under some circumstances (for instance when the quark masses are much smaller than the chiral symmetry breaking scale of 250 MeV), the masses of quarks do not substantially contribute to the system's behavior, and to zeroth approximation the masses of the lightest quarks can be ignored for most purposes, as if they had zero mass. The simplified behavior of flavour transformations can then be successfully modeled as acting independently on the left- and right-handed parts of each quark field. This approximate description of the flavour symmetry is described by a chiral group SUL(Nf) × SUR(Nf). === Vector symmetry description === If all quarks had non-zero but equal masses, then this chiral symmetry is broken to the vector symmetry of the "diagonal flavour group" SU(Nf), which applies the same transformation to both helicities of the quarks. This reduction of symmetry is a form of explicit symmetry breaking. The strength of explicit symmetry breaking is controlled by the current quark masses in QCD. Even if quarks are massless, chiral flavour symmetry can be spontaneously broken if the vacuum of the theory contains a chiral condensate (as it does in low-energy QCD). This gives rise to an effective mass for the quarks, often identified with the valence quark mass in QCD. === Symmetries of QCD === Analysis of experiments indicates that the current quark masses of the lighter flavours of quarks are much smaller than the QCD scale, ΛQCD, hence chiral flavour symmetry is a good approximation to QCD for the up, down and strange quarks. The success of chiral perturbation theory and the even more naive chiral models springs from this fact. The valence quark masses extracted from the quark model are much larger than the current quark masses. This indicates that QCD has spontaneous chiral symmetry breaking with the formation of a chiral condensate. Other phases of QCD may break the chiral flavour symmetries in other ways. == History == === Isospin === Isospin, strangeness and hypercharge predate the quark model. The first of those quantum numbers, isospin, was introduced as a concept in 1932 by Werner Heisenberg, to explain symmetries of the then newly discovered neutron (symbol n): The masses of the neutron and the proton (symbol p) are almost identical: They are nearly degenerate, and both are thus often referred to as “nucleons”, a term that ignores their intrinsic differences. Although the proton has a positive electric charge, and the neutron is neutral, they are almost identical in all other aspects, and their nuclear binding-force interactions (old name for the residual color force) are so strong compared to the electrical force between them that there is very little point in paying much attention to their differences. 
The strength of the strong interaction between any pair of nucleons is the same, independent of whether they are interacting as protons or as neutrons. Protons and neutrons were grouped together as nucleons and treated as different states of the same particle, because they both have nearly the same mass and interact in nearly the same way, if the (much weaker) electromagnetic interaction is neglected. Heisenberg noted that the mathematical formulation of this symmetry was in certain respects similar to the mathematical formulation of non-relativistic spin, whence the name "isospin" derives. The neutron and the proton are assigned to the doublet (the spin-1⁄2, 2, or fundamental representation) of SU(2), with the proton and neutron being then associated with different isospin projections I3 = +1⁄2 and −1⁄2 respectively. The pions are assigned to the triplet (the spin-1, 3, or adjoint representation) of SU(2). Though there is a difference from the theory of spin: The group action does not preserve flavour (in fact, the group action is specifically an exchange of flavour). When constructing a physical theory of nuclear forces, one could simply assume that it does not depend on isospin, although the total isospin should be conserved. The concept of isospin proved useful in classifying hadrons discovered in the 1950s and 1960s (see particle zoo), where particles with similar mass are assigned an SU(2) isospin multiplet. === Strangeness and hypercharge === The discovery of strange particles like the kaon led to a new quantum number that was conserved by the strong interaction: strangeness (or equivalently hypercharge). The Gell-Mann–Nishijima formula was identified in 1953, which relates strangeness and hypercharge with isospin and electric charge. === The eightfold way and quark model === Once the kaons and their property of strangeness became better understood, it started to become clear that these, too, seemed to be a part of an enlarged symmetry that contained isospin as a subgroup. The larger symmetry was named the Eightfold Way by Murray Gell-Mann, and was promptly recognized to correspond to the adjoint representation of SU(3). To better understand the origin of this symmetry, Gell-Mann proposed the existence of up, down and strange quarks which would belong to the fundamental representation of the SU(3) flavor symmetry. === GIM-Mechanism and charm === To explain the observed absence of flavor-changing neutral currents, the GIM mechanism was proposed in 1970, which introduced the charm quark and predicted the J/psi meson. The J/psi meson was indeed found in 1974, which confirmed the existence of charm quarks. This discovery is known as the November Revolution. The flavor quantum number associated with the charm quark became known as charm. === Bottomness and topness === The bottom and top quarks were predicted in 1973 in order to explain CP violation, which also implied two new flavor quantum numbers: bottomness and topness. == See also == Standard Model (mathematical formulation) Cabibbo–Kobayashi–Maskawa matrix Strong CP problem and chirality (physics) Chiral symmetry breaking and quark matter Quark flavour tagging, such as B-tagging, is an example of particle identification in experimental particle physics. == References == == Further reading == Lessons in Particle Physics Luis Anchordoqui and Francis Halzen, University of Wisconsin, 18th Dec. 2009 == External links == The particle data group.
Wikipedia/Flavor_(particle_physics)
In particle physics, the Georgi–Glashow model is a particular Grand Unified Theory (GUT) proposed by Howard Georgi and Sheldon Glashow in 1974. In this model, the Standard Model gauge groups SU(3) × SU(2) × U(1) are combined into a single simple gauge group SU(5). The unified group SU(5) is then thought to be spontaneously broken into the Standard Model subgroup below a very high energy scale called the grand unification scale. Since the Georgi–Glashow model combines leptons and quarks into single irreducible representations, there exist interactions which do not conserve baryon number, although they still conserve the quantum number B – L associated with the symmetry of the common representation. This yields a mechanism for proton decay, and the rate of proton decay can be predicted from the dynamics of the model. However, proton decay has not yet been observed experimentally, and the resulting lower limit on the lifetime of the proton contradicts the predictions of this model. Nevertheless, the elegance of the model has led particle physicists to use it as the foundation for more complex models which yield longer proton lifetimes, particularly SO(10) in basic and SUSY variants. (For a more elementary introduction to how the representation theory of Lie algebras are related to particle physics, see the article Particle physics and representation theory.) Also, this model suffers from the doublet–triplet splitting problem. == Construction == SU(5) acts on C 5 {\displaystyle \mathbb {C} ^{5}} and hence on its exterior algebra ∧ C 5 {\displaystyle \wedge \mathbb {C} ^{5}} . Choosing a C 2 ⊕ C 3 {\displaystyle \mathbb {C} ^{2}\oplus \mathbb {C} ^{3}} splitting restricts SU(5) to S(U(2)×U(3)), yielding matrices of the form ϕ : U ( 1 ) × S U ( 2 ) × S U ( 3 ) ⟶ S ( U ( 2 ) × U ( 3 ) ) ⊂ S U ( 5 ) ( α , g , h ) ⟼ ( α 3 g 0 0 α − 2 h ) {\displaystyle {\begin{matrix}\phi :&U(1)\times SU(2)\times SU(3)&\longrightarrow &S(U(2)\times U(3))\subset SU(5)\\&(\alpha ,g,h)&\longmapsto &{\begin{pmatrix}\alpha ^{3}g&0\\0&\alpha ^{-2}h\end{pmatrix}}\\\end{matrix}}} with kernel { ( α , α − 3 I d 2 , α 2 I d 3 ) | α ∈ C , α 6 = 1 } ≅ Z 6 {\displaystyle \{(\alpha ,\alpha ^{-3}\mathrm {Id} _{2},\alpha ^{2}\mathrm {Id} _{3})|\alpha \in \mathbb {C} ,\alpha ^{6}=1\}\cong \mathbb {Z} _{6}} , hence isomorphic to the Standard Model's true gauge group S U ( 3 ) × S U ( 2 ) × U ( 1 ) / Z 6 {\displaystyle SU(3)\times SU(2)\times U(1)/\mathbb {Z} _{6}} . For the zeroth power ⋀ 0 C 5 {\displaystyle {\textstyle \bigwedge }^{0}\mathbb {C} ^{5}} , this acts trivially to match a left-handed neutrino, C 0 ⊗ C ⊗ C {\displaystyle \mathbb {C} _{0}\otimes \mathbb {C} \otimes \mathbb {C} } . For the first exterior power ⋀ 1 C 5 ≅ C 5 {\displaystyle {\textstyle \bigwedge }^{1}\mathbb {C} ^{5}\cong \mathbb {C} ^{5}} , the Standard Model's group action preserves the splitting C 5 ≅ C 2 ⊕ C 3 {\displaystyle \mathbb {C} ^{5}\cong \mathbb {C} ^{2}\oplus \mathbb {C} ^{3}} . The C 2 {\displaystyle \mathbb {C} ^{2}} transforms trivially in SU(3), as a doublet in SU(2), and under the Y = ⁠1/2⁠ representation of U(1) (as weak hypercharge is conventionally normalized as α3 = α6Y); this matches a right-handed anti-lepton, C 1 2 ⊗ C 2 ∗ ⊗ C {\displaystyle \mathbb {C} _{\frac {1}{2}}\otimes \mathbb {C} ^{2*}\otimes \mathbb {C} } (as C 2 ≅ C 2 ∗ {\displaystyle \mathbb {C} ^{2}\cong \mathbb {C} ^{2*}} in SU(2)). 
The C 3 {\displaystyle \mathbb {C} ^{3}} transforms as a triplet in SU(3), a singlet in SU(2), and under the Y = −1/3 representation of U(1) (as α−2 = α6Y); this matches a right-handed down quark, C − 1 3 ⊗ C ⊗ C 3 {\displaystyle \mathbb {C} _{-{\frac {1}{3}}}\otimes \mathbb {C} \otimes \mathbb {C} ^{3}} . The second power ⋀ 2 C 5 {\displaystyle {\textstyle \bigwedge }^{2}\mathbb {C} ^{5}} is obtained via the formula ⋀ 2 ( V ⊕ W ) = ⋀ 2 V ⊕ ( V ⊗ W ) ⊕ ⋀ 2 W {\displaystyle {\textstyle \bigwedge }^{2}(V\oplus W)={\textstyle \bigwedge }^{2}V\oplus (V\otimes W)\oplus {\textstyle \bigwedge }^{2}W} . As SU(5) preserves the canonical volume form of C 5 {\displaystyle \mathbb {C} ^{5}} , Hodge duals give the upper three powers by ⋀ p C 5 ≅ ( ⋀ 5 − p C 5 ) ∗ {\displaystyle {\textstyle \bigwedge }^{p}\mathbb {C} ^{5}\cong ({\textstyle \bigwedge }^{5-p}\mathbb {C} ^{5})^{*}} . Thus the Standard Model's representation F ⊕ F* of one generation of fermions and antifermions lies within ∧ C 5 {\displaystyle \wedge \mathbb {C} ^{5}} . Similar motivations apply to the Pati–Salam model, and to SO(10), E6, and other supergroups of SU(5). == Explicit Embedding of the Standard Model (SM) == Owing to its relatively simple gauge group S U ( 5 ) {\displaystyle SU(5)} , the model can be written in terms of vectors and matrices, which allows for an intuitive understanding of the Georgi–Glashow model. The fermion sector is then composed of an antifundamental 5 ¯ {\displaystyle {\overline {\mathbf {5} }}} and an antisymmetric 10 {\displaystyle \mathbf {10} } . In terms of SM degrees of freedom, this can be written as 5 ¯ F = ( d 1 c d 2 c d 3 c e − ν ) {\displaystyle {\overline {\mathbf {5} }}_{F}={\begin{pmatrix}d_{1}^{c}\\d_{2}^{c}\\d_{3}^{c}\\e\\-\nu \end{pmatrix}}} and 10 F = ( 0 u 3 c − u 2 c u 1 d 1 − u 3 c 0 u 1 c u 2 d 2 u 2 c − u 1 c 0 u 3 d 3 − u 1 − u 2 − u 3 0 e R − d 1 − d 2 − d 3 − e R 0 ) {\displaystyle \mathbf {10} _{F}={\begin{pmatrix}0&u_{3}^{c}&-u_{2}^{c}&u_{1}&d_{1}\\-u_{3}^{c}&0&u_{1}^{c}&u_{2}&d_{2}\\u_{2}^{c}&-u_{1}^{c}&0&u_{3}&d_{3}\\-u_{1}&-u_{2}&-u_{3}&0&e_{R}\\-d_{1}&-d_{2}&-d_{3}&-e_{R}&0\end{pmatrix}}} with d i {\displaystyle d_{i}} and u i {\displaystyle u_{i}} the left-handed down- and up-type quarks, d i c {\displaystyle d_{i}^{c}} and u i c {\displaystyle u_{i}^{c}} their right-handed counterparts, ν {\displaystyle \nu } the neutrino, e {\displaystyle e} and e R {\displaystyle e_{R}} the left- and right-handed electron, respectively. In addition to the fermions, we need to break S U ( 3 ) × S U L ( 2 ) × U Y ( 1 ) → S U ( 3 ) × U E M ( 1 ) {\displaystyle SU(3)\times SU_{L}(2)\times U_{Y}(1)\rightarrow SU(3)\times U_{EM}(1)} ; this is achieved in the Georgi–Glashow model via a fundamental 5 {\displaystyle \mathbf {5} } which contains the SM Higgs, 5 H = ( T 1 , T 2 , T 3 , H + , H 0 ) T {\displaystyle \mathbf {5} _{H}=(T_{1},T_{2},T_{3},H^{+},H^{0})^{T}} with H + {\displaystyle H^{+}} and H 0 {\displaystyle H^{0}} the charged and neutral components of the SM Higgs, respectively. Note that the T i {\displaystyle T_{i}} are not SM particles and are thus a prediction of the Georgi–Glashow model. The SM gauge fields can be embedded explicitly as well. For that, we recall that a gauge field transforms in the adjoint representation, and thus can be written as A μ a T a {\displaystyle A_{\mu }^{a}T^{a}} with T a {\displaystyle T^{a}} the S U ( 5 ) {\displaystyle SU(5)} generators. 
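Before identifying the SM gauge fields inside SU(5) (done next), a quick numerical cross-check may help. The sketch below is illustrative only: it assumes the hypercharge normalization Q = T3 + Y, takes the hypercharge direction in su(5) as diag(−1/3, −1/3, −1/3, 1/2, 1/2), and uses the fact that a generator acts on the antifundamental as minus its transpose. It verifies that this direction is a traceless Hermitian generator and that it reproduces the expected hypercharges of the 5̄_F components listed above.

```python
import numpy as np

# Illustrative cross-check (conventions assumed: Q = T3 + Y, hypercharge direction
# inside su(5) taken as Y = diag(-1/3,-1/3,-1/3,1/2,1/2), acting on the antifundamental
# as -Y^T).  Components of 5bar_F are ordered (d1^c, d2^c, d3^c, e, -nu) as above.
Y = np.diag([-1/3, -1/3, -1/3, 1/2, 1/2])

# A valid su(5) generator must be Hermitian and traceless.
print(np.isclose(np.trace(Y), 0.0), np.allclose(Y, Y.conj().T))   # True True

# Hypercharges of the 5bar components: eigenvalues of -Y^T.
labels = ["d1^c", "d2^c", "d3^c", "e", "-nu"]
for name, y in zip(labels, np.diag(-Y.T)):
    print(f"{name:5s}  Y = {y:+.3f}")
# Expected: the d^c entries get +1/3 (colour antitriplet, SU(2) singlet) and the
# lepton-doublet entries get -1/2, matching the usual SM assignments.
```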
Now, if we restrict ourselves to generators with non-zero entries only in the upper 3 × 3 {\displaystyle 3\times 3} block, in the lower 2 × 2 {\displaystyle 2\times 2} block, or on the diagonal, we can identify ( G μ a T 3 a 0 0 0 ) {\displaystyle {\begin{pmatrix}G_{\mu }^{a}T_{3}^{a}&0\\0&0\end{pmatrix}}} with the S U ( 3 ) {\displaystyle SU(3)} colour gauge fields, ( 0 0 0 σ a 2 W μ a ) {\displaystyle {\begin{pmatrix}0&0\\0&{\frac {\sigma ^{a}}{2}}W_{\mu }^{a}\end{pmatrix}}} with the weak S U ( 2 ) {\displaystyle SU(2)} fields, and N B μ 0 diag ⁡ ( − 1 / 3 , − 1 / 3 , − 1 / 3 , 1 / 2 , 1 / 2 ) {\displaystyle N\,B_{\mu }^{0}\operatorname {diag} \left(-1/3,-1/3,-1/3,1/2,1/2\right)} with the U ( 1 ) {\displaystyle U(1)} hypercharge (up to some normalization N {\displaystyle N} ). Using the embedding, we can explicitly check that the fermionic fields transform as they should. This explicit embedding can be found in the literature, for example in the original paper by Georgi and Glashow. == Breaking SU(5) == SU(5) breaking occurs when a scalar field (which we will denote as 24 H {\displaystyle \mathbf {24} _{H}} ), analogous to the Higgs field and transforming in the adjoint of SU(5), acquires a vacuum expectation value (VEV) proportional to the weak hypercharge generator ⟨ 24 H ⟩ = v 24 diag ⁡ ( − 1 / 3 , − 1 / 3 , − 1 / 3 , 1 / 2 , 1 / 2 ) {\displaystyle \langle \mathbf {24} _{H}\rangle =v_{24}\operatorname {diag} \left(-1/3,-1/3,-1/3,1/2,1/2\right)} . When this occurs, SU(5) is spontaneously broken to the subgroup of SU(5) commuting with the group generated by Y. Using the embedding from the previous section, we can explicitly check that the unbroken subgroup is indeed S U ( 3 ) × S U ( 2 ) × U ( 1 ) {\displaystyle SU(3)\times SU(2)\times U(1)} by noting that [ ⟨ 24 H ⟩ , G μ ] = [ ⟨ 24 H ⟩ , W μ ] = [ ⟨ 24 H ⟩ , B μ ] = 0 {\displaystyle [\langle \mathbf {24} _{H}\rangle ,G_{\mu }]=[\langle \mathbf {24} _{H}\rangle ,W_{\mu }]=[\langle \mathbf {24} _{H}\rangle ,B_{\mu }]=0} . Computation of similar commutators further shows that all other S U ( 5 ) {\displaystyle SU(5)} gauge fields acquire masses. To be precise, the unbroken subgroup is actually [ S U ( 3 ) × S U ( 2 ) × U ( 1 ) Y ] / Z 6 . {\displaystyle [SU(3)\times SU(2)\times U(1)_{Y}]/\mathbb {Z} _{6}.} Under this unbroken subgroup, the adjoint 24 transforms as 24 → ( 8 , 1 ) 0 ⊕ ( 1 , 3 ) 0 ⊕ ( 1 , 1 ) 0 ⊕ ( 3 , 2 ) − 5 6 ⊕ ( 3 ¯ , 2 ) 5 6 {\displaystyle \mathbf {24} \rightarrow (8,1)_{0}\oplus (1,3)_{0}\oplus (1,1)_{0}\oplus (3,2)_{-{\frac {5}{6}}}\oplus ({\bar {3}},2)_{\frac {5}{6}}} to yield the gauge bosons of the Standard Model plus the new X and Y bosons. See restricted representation. The Standard Model's quarks and leptons fit neatly into representations of SU(5). Specifically, the left-handed fermions combine into 3 generations of 5 ¯ ⊕ 10 ⊕ 1 . 
{\displaystyle \ {\overline {\mathbf {5} }}\oplus \mathbf {10} \oplus \mathbf {1} ~.} Under the unbroken subgroup these transform as 5 ¯ → ( 3 ¯ , 1 ) 1 3 ⊕ ( 1 , 2 ) − 1 2 d c a n d ℓ 10 → ( 3 , 2 ) 1 6 ⊕ ( 3 ¯ , 1 ) − 2 3 ⊕ ( 1 , 1 ) 1 q , u c a n d e c 1 → ( 1 , 1 ) 0 ν c {\displaystyle {\begin{aligned}{\overline {\mathbf {5} }}&\to ({\bar {3}},1)_{\tfrac {1}{3}}\oplus (1,2)_{-{\tfrac {1}{2}}}&&\mathrm {d} ^{\mathsf {c}}{\mathsf {~and~}}\ell \\\mathbf {10} &\to (3,2)_{\tfrac {1}{6}}\oplus ({\bar {3}},1)_{-{\tfrac {2}{3}}}\oplus (1,1)_{1}&&q,\mathrm {u} ^{\mathsf {c}}{\mathsf {~and~}}\mathrm {e} ^{\mathsf {c}}\\\mathbf {1} &\to (1,1)_{0}&&\nu ^{\mathsf {c}}\end{aligned}}} to yield precisely the left-handed fermionic content of the Standard Model where every generation dc, uc, ec, and νc correspond to anti-down-type quark, anti-up-type quark, anti-down-type lepton, and anti-up-type lepton, respectively. Also, q and ℓ {\displaystyle \ell } correspond to quark and lepton. Fermions transforming as 1 under SU(5) are now thought to be necessary because of the evidence for neutrino oscillations, unless a way is found to introduce an infinitesimal Majorana coupling for the left-handed neutrinos. Since the homotopy group is π 2 ( S U ( 5 ) [ S U ( 3 ) × S U ( 2 ) × U ( 1 ) Y ] / Z 6 ) = Z {\displaystyle \pi _{2}\left({\frac {SU(5)}{[SU(3)\times SU(2)\times U(1)_{Y}]/\mathbb {Z} _{6}}}\right)=\mathbb {Z} } , this model predicts 't Hooft–Polyakov monopoles. Because the electromagnetic charge Q is a linear combination of some SU(2) generator with ⁠Y/2⁠, these monopoles also have quantized magnetic charges Y, where by magnetic, here we mean magnetic electromagnetic charges. == Minimal supersymmetric SU(5) == The minimal supersymmetric SU(5) model assigns a Z 2 {\displaystyle \mathbb {Z} _{2}} matter parity to the chiral superfields with the matter fields having odd parity and the Higgs having even parity to protect the electroweak Higgs from quadratic radiative mass corrections (the hierarchy problem). In the non-supersymmetric version the action is invariant under a similar Z 2 {\displaystyle \mathbb {Z} _{2}} symmetry because the matter fields are all fermionic and thus must appear in the action in pairs, while the Higgs fields are bosonic. === Chiral superfields === As complex representations: === Superpotential === A generic invariant renormalizable superpotential is a (complex) S U ( 5 ) × Z 2 {\displaystyle SU(5)\times \mathbb {Z} _{2}} invariant cubic polynomial in the superfields. 
It is a linear combination of the following terms: Φ 2 Φ B A Φ A B Φ 3 Φ B A Φ C B Φ A C H d H u H d A H u A H d Φ H u H d A Φ B A H u B H u 10 i 10 j ϵ A B C D E H u A 10 i B C 10 j D E H d 5 ¯ i 10 j H d A 5 ¯ B i 10 j A B H u 5 ¯ i N c j H u A 5 ¯ A i N c j N c i N c j N c i N c j {\displaystyle {\begin{matrix}\Phi ^{2}&&\Phi _{B}^{A}\Phi _{A}^{B}\\[4pt]\Phi ^{3}&&\Phi _{B}^{A}\Phi _{C}^{B}\Phi _{A}^{C}\\[4pt]\mathrm {H} _{\mathsf {d}}\ \mathrm {H} _{\mathsf {u}}&&{\mathrm {H} _{\mathsf {d}}}_{A}\ {\mathrm {H} _{\mathsf {u}}}^{A}\\[4pt]\mathrm {H} _{\mathsf {d}}\ \Phi \ \mathrm {H} _{\mathsf {u}}&&{\mathrm {H} _{\mathsf {d}}}_{A}\ \Phi _{B}^{A}\ {\mathrm {H} _{\mathsf {u}}}^{B}\\[4pt]\mathrm {H} _{\mathsf {u}}\ \mathbf {10} _{i}\mathbf {10} _{j}&&\epsilon _{ABCDE}\ {\mathrm {H} _{\mathsf {u}}}^{A}\ \mathbf {10} _{i}^{BC}\ \mathbf {10} _{j}^{DE}\\[4pt]\mathrm {H} _{\mathsf {d}}\ {\overline {\mathbf {5} }}_{i}\mathbf {10} _{j}&&{\mathrm {H} _{\mathsf {d}}}_{A}\ {\overline {\mathbf {5} }}_{Bi}\ \mathbf {10} _{j}^{AB}\\[4pt]\mathrm {H} _{\mathsf {u}}\ {\overline {\mathbf {5} }}_{i}\ {\mathrm {N} ^{\mathsf {c}}}_{j}&&{\mathrm {H} _{\mathsf {u}}}^{A}\ {\overline {\mathbf {5} }}_{Ai}\ {\mathrm {N} ^{\mathsf {c}}}_{j}\\[4pt]{\mathrm {N} ^{\mathsf {c}}}_{i}\ {\mathrm {N} ^{\mathsf {c}}}_{j}&&{\mathrm {N} ^{\mathsf {c}}}_{i}\ {\mathrm {N} ^{\mathsf {c}}}_{j}\\\end{matrix}}} The first column is an Abbreviation of the second column (neglecting proper normalization factors), where capital indices are SU(5) indices, and i and j are the generation indices. The last two rows presupposes the multiplicity of N c {\displaystyle \ \mathrm {N} ^{\mathsf {c}}\ } is not zero (i.e. that a sterile neutrino exists). The coupling H u 10 i 10 j {\displaystyle \ \mathrm {H} _{\mathsf {u}}\ \mathbf {10} _{i}\ \mathbf {10} _{j}\ } has coefficients which are symmetric in i and j. The coupling N i c N j c {\displaystyle \ \mathrm {N} _{i}^{\mathsf {c}}\ \mathrm {N} _{j}^{\mathsf {c}}\ } has coefficients which are symmetric in i and j. The number of sterile neutrino generations need not be three, unless the SU(5) is embedded in a higher unification scheme such as SO(10). === Vacua === The vacua correspond to the mutual zeros of the F and D terms. Let's first look at the case where the VEVs of all the chiral fields are zero except for Φ. ==== The Φ sector ==== W = T r [ a Φ 2 + b Φ 3 ] {\displaystyle \ W=Tr\left[a\Phi ^{2}+b\Phi ^{3}\right]\ } The F zeros corresponds to finding the stationary points of W subject to the traceless constraint T r [ Φ ] = 0 . {\displaystyle \ Tr[\Phi ]=0~.} So, 2 a Φ + 3 b Φ 2 = λ 1 , {\displaystyle \ 2a\Phi +3b\Phi ^{2}=\lambda \mathbf {1} \ ,} where λ is a Lagrange multiplier. 
Up to an SU(5) (unitary) transformation, Φ = { diag ⁡ ( 0 , 0 , 0 , 0 , 0 ) diag ⁡ ( 2 a 9 b , 2 a 9 b , 2 a 9 b , 2 a 9 b , − 8 a 9 b ) diag ⁡ ( 4 a 3 b , 4 a 3 b , 4 a 3 b , − 2 a b , − 2 a b ) {\displaystyle \Phi ={\begin{cases}\operatorname {diag} (0,0,0,0,0)\\\operatorname {diag} ({\frac {2a}{9b}},{\frac {2a}{9b}},{\frac {2a}{9b}},{\frac {2a}{9b}},-{\frac {8a}{9b}})\\\operatorname {diag} ({\frac {4a}{3b}},{\frac {4a}{3b}},{\frac {4a}{3b}},-{\frac {2a}{b}},-{\frac {2a}{b}})\end{cases}}} The three cases are called case I, II, and III and they break the gauge symmetry into S U ( 5 ) , [ S U ( 4 ) × U ( 1 ) ] / Z 4 {\displaystyle \ SU(5),\ \left[SU(4)\times U(1)\right]/\mathbb {Z} _{4}\ } and [ S U ( 3 ) × S U ( 2 ) × U ( 1 ) ] / Z 6 {\displaystyle \ \left[SU(3)\times SU(2)\times U(1)\right]/\mathbb {Z} _{6}} respectively (the stabilizer of the VEV). In other words, there are at least three different superselection sectors, which is typical for supersymmetric theories. Only case III makes any phenomenological sense, and so we will focus on this case from now on. It can be verified that this solution together with zero VEVs for all the other chiral multiplets is a zero of the F-terms and D-terms. The matter parity remains unbroken (right up to the TeV scale). ==== Decomposition ==== The gauge algebra 24 decomposes as ( ( 8 , 1 ) 0 ( 1 , 3 ) 0 ( 1 , 1 ) 0 ( 3 , 2 ) − 5 6 ( 3 ¯ , 2 ) 5 6 ) . {\displaystyle {\begin{pmatrix}(8,1)_{0}\\(1,3)_{0}\\(1,1)_{0}\\(3,2)_{-{\frac {5}{6}}}\\({\bar {3}},2)_{\frac {5}{6}}\end{pmatrix}}~.} This 24 is a real representation, so the last two terms need explanation. Both ( 3 , 2 ) − 5 6 {\displaystyle (3,2)_{-{\frac {5}{6}}}} and ( 3 ¯ , 2 ) 5 6 {\displaystyle \ ({\bar {3}},2)_{\frac {5}{6}}\ } are complex representations. However, the direct sum of both representations decomposes into two irreducible real representations and we only take half of the direct sum, i.e. one of the two real irreducible copies. The first three components are left unbroken. The adjoint Higgs also has a similar decomposition, except that it is complex. The Higgs mechanism causes one real half of the ( 3 , 2 ) − 5 6 {\displaystyle \ (3,2)_{-{\frac {5}{6}}}\ } and ( 3 ¯ , 2 ) 5 6 {\displaystyle \ ({\bar {3}},2)_{\frac {5}{6}}\ } of the adjoint Higgs to be absorbed. The other real half acquires a mass coming from the D-terms. The other three components of the adjoint Higgs, ( 8 , 1 ) 0 , ( 1 , 3 ) 0 {\displaystyle \ (8,1)_{0},(1,3)_{0}\ } and ( 1 , 1 ) 0 {\displaystyle \ (1,1)_{0}\ } , acquire GUT-scale masses coming from self-pairings of the superpotential, a Φ 2 + b ⟨ Φ ⟩ Φ 2 . {\displaystyle \ a\Phi ^{2}+b\langle \Phi \rangle \Phi ^{2}~.} The sterile neutrinos, if any exist, would also acquire a GUT-scale Majorana mass coming from the superpotential coupling (νc)2 . Because of matter parity, the matter representations 5 ¯ {\displaystyle \ {\overline {\mathbf {5} }}\ } and 10 remain chiral. It is the Higgs fields 5H and 5 ¯ H {\displaystyle \ {\overline {\mathbf {5} }}_{\mathrm {H} }\ } which are interesting. The two relevant superpotential terms here are 5 H 5 ¯ H {\displaystyle \ 5_{\mathrm {H} }\ {\bar {5}}_{\mathrm {H} }\ } and ⟨ 24 ⟩ 5 H 5 ¯ H . {\displaystyle \ \langle 24\rangle 5_{\mathrm {H} }\ {\bar {5}}_{\mathrm {H} }~.} Unless there happens to be some fine tuning, we would expect both the triplet terms and the doublet terms to pair up, leaving us with no light electroweak doublets. This is in complete disagreement with phenomenology. 
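A rough numerical sketch of this tension is given below (illustrative only; the coefficients depend on how the ⟨24⟩ coupling is normalized, so only the orders of magnitude are meaningful). For generic parameters both the triplet and the doublet inside 5_H end up near the GUT scale, and keeping the doublet light requires cancelling two GUT-scale numbers against each other to roughly one part in 10^13.

```python
# Rough illustration of doublet-triplet fine-tuning (a sketch; the numerical
# coefficients below depend on how the <24> coupling is normalized).  With mass
# terms of the form
#     mu * 5bar_H 5_H + lam * 5bar_H <24> 5_H,   <24> = v * diag(-1/3,-1/3,-1/3,1/2,1/2),
# the colour triplet T and the weak doublet H inside 5_H receive
#     m_T = mu - lam*v/3   and   m_H = mu + lam*v/2.
v = 1.0e16          # GUT-scale VEV in GeV (representative order of magnitude)
lam = 0.5           # sample dimensionless coupling

def masses(mu):
    return mu - lam * v / 3, mu + lam * v / 2      # (triplet, doublet)

# Generic parameters: both masses come out near the GUT scale, i.e. no light doublet.
m_T, m_H = masses(mu=2.0e15)
print(f"generic: |m_T| = {abs(m_T):.2e} GeV, |m_H| = {abs(m_H):.2e} GeV")

# Keeping the doublet near the electroweak scale while the triplet stays heavy
# requires mu to cancel lam*v/2 to roughly one part in 10^13-10^14.
mu_tuned = -lam * v / 2 + 100.0
m_T, m_H = masses(mu_tuned)
print(f"tuned:   |m_T| = {abs(m_T):.2e} GeV, |m_H| = {abs(m_H):.2e} GeV,"
      f" tuning ~ {abs(m_H / mu_tuned):.1e}")
```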
See doublet-triplet splitting problem for more details. ==== Fermion masses ==== == Problems of the Georgi–Glashow model == === Proton decay in SU(5) === Unification of the Standard Model via an SU(5) group has significant phenomenological implications. Most notable of these is proton decay which is present in SU(5) with and without supersymmetry. This is allowed by the new vector bosons introduced from the adjoint representation of SU(5) which also contains the gauge bosons of the Standard Model forces. Since these new gauge bosons are in (3,2)−5/6 bifundamental representations, they violated baryon and lepton number. As a result, the new operators should cause protons to decay at a rate inversely proportional to their masses. This process is called dimension 6 proton decay and is an issue for the model, since the proton is experimentally determined to have a lifetime greater than the age of the universe. This means that an SU(5) model is severely constrained by this process. As well as these new gauge bosons, in SU(5) models, the Higgs field is usually embedded in a 5 representation of the GUT group. The caveat of this is that since the Higgs field is an SU(2) doublet, the remaining part, an SU(3) triplet, must be some new field - usually called D or T. This new scalar would be able to generate proton decay as well and, assuming the most basic Higgs vacuum alignment, would be massless so allowing the process at very high rates. While not an issue in the Georgi–Glashow model, a supersymmeterised SU(5) model would have additional proton decay operators due to the superpartners of the Standard Model fermions. The lack of detection of proton decay (in any form) brings into question the veracity of SU(5) GUTs of all types; however, while the models are highly constrained by this result, they are not in general ruled out. ==== Mechanism ==== In the lowest-order Feynman diagram corresponding to the simplest source of proton decay in SU(5), a left-handed and a right-handed up quark annihilate yielding an X+ boson which decays to a right-handed (or left-handed) positron and a left-handed (or right-handed) anti-down quark: u L + u R → X + → e R + + d ¯ L , {\displaystyle \mathrm {u} _{\mathsf {L}}+\mathrm {u} _{\mathsf {R}}\to X^{+}\to \mathrm {e} _{\mathsf {R}}^{+}+\mathrm {\bar {d}} _{\mathsf {L}}\ ,} u L + u R → X + → e L + + d ¯ R . {\displaystyle \mathrm {u} _{\mathsf {L}}+\mathrm {u} _{\mathsf {R}}\to X^{+}\to \mathrm {e} _{\mathsf {L}}^{+}+\mathrm {\bar {d}} _{\mathsf {R}}~.} This process conserves weak isospin, weak hypercharge, and color. GUTs equate anti-color with having two colors, g ¯ ≡ r b , {\displaystyle \ {\bar {g}}\equiv rb\ ,} and SU(5) defines left-handed normal leptons as "white" and right-handed antileptons as "black". The first vertex only involves fermions of the 10 representation, while the second only involves fermions in the 5̅ (or 10), demonstrating the preservation of SU(5) symmetry. === Mass relations === Since SM states are regrouped into S U ( 5 ) {\displaystyle SU(5)} representations their Yukawa matrices have the following relations: Y d = Y e T a n d Y u = Y u T {\displaystyle Y_{\mathrm {d} }=Y_{\mathrm {e} }^{\mathsf {T}}\quad {\mathsf {and}}\quad Y_{\mathrm {u} }=Y_{\mathrm {u} }^{\mathsf {T}}} In particular this predicts m e , μ τ ≈ m d , s , b {\displaystyle m_{e,\mu \tau }\approx m_{d,s,b}} at energies close to the scale of unification. This is however not realized in nature. 
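To get a feeling for the numbers behind the proton-decay constraint discussed above, the following order-of-magnitude sketch (dimensional analysis only; the values of M_X and α_GUT are representative assumptions, not figures from the article) estimates the lifetime from dimension-6 gauge-boson exchange, Γ ~ α_GUT² m_p⁵ / M_X⁴.

```python
# Order-of-magnitude sketch of dimension-6 proton decay (dimensional analysis only;
# M_X and alpha_gut below are representative assumed values, not article numbers).
# Heavy gauge-boson exchange gives an effective four-fermion operator ~ g^2/M_X^2, so
#     Gamma ~ alpha_gut^2 * m_p^5 / M_X^4    (natural units),   tau = 1/Gamma.
GEV_INV_TO_SEC = 6.582e-25          # hbar in GeV*s
SEC_PER_YEAR = 3.156e7

m_p = 0.938                          # proton mass in GeV
alpha_gut = 1 / 40                   # assumed unified coupling
for M_X in (1e14, 1e15, 1e16):       # heavy gauge-boson mass in GeV
    gamma = alpha_gut**2 * m_p**5 / M_X**4          # decay rate in GeV
    tau_years = (1 / gamma) * GEV_INV_TO_SEC / SEC_PER_YEAR
    print(f"M_X = {M_X:.0e} GeV  ->  tau ~ {tau_years:.1e} yr")
# Experimental bounds on the proton lifetime (of order 1e34 yr for p -> e+ pi0)
# therefore disfavour unification scales near 1e14-1e15 GeV in this rough estimate.
```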
=== Doublet-triplet splitting === As mentioned in the above section, the colour triplet of the 5 {\displaystyle {\mathbf {5} }} that contains the SM Higgs can mediate dimension-6 proton decay. Since protons seem to be quite stable, such a triplet has to acquire a very large mass in order to suppress the decay. This is, however, problematic. To see why, consider the scalar part of the Georgi–Glashow Lagrangian: L ⊃ 5 H † ( a + b 24 H ) 5 H ⟶ S S B ( a + 2 b v 24 ) T † T + ( a − 3 b v 24 ) H † H = m T 2 T † T − μ 2 H † H {\displaystyle {\mathcal {L}}\supset {\mathbf {5} }_{\mathrm {H} }^{\dagger }(a+b\mathbf {24} _{\mathrm {H} }){\mathbf {5} }_{\mathrm {H} }{\overset {SSB}{\longrightarrow }}(a+2bv_{24})T^{\dagger }T+(a-3bv_{24})H^{\dagger }H=m_{\mathrm {T} }^{2}T^{\dagger }T-\mu ^{2}H^{\dagger }H} Here we have denoted by 24 H {\displaystyle \ \mathbf {24} _{H}\ } the adjoint used to break S U ( 5 ) {\displaystyle \ SU(5)\ } to the SM, by v 24 {\displaystyle \ v_{24}\ } its VEV, and by 5 H = ( T , H ) T {\displaystyle \ {\mathbf {5} }_{\mathrm {H} }=(T,H)^{\mathsf {T}}\ } the defining representation, which contains the SM Higgs H {\displaystyle \ H\ } and the colour triplet T {\displaystyle T} that can induce proton decay. As mentioned, we require m T > 10 12 G e V {\displaystyle \ m_{\mathrm {T} }>10^{12}\ \mathrm {GeV} \ } in order to sufficiently suppress proton decay. On the other hand, the μ {\displaystyle \ \mu \ } is typically of order 100 G e V {\displaystyle \ 100\ \mathrm {GeV} \ } in order to be consistent with observations. Looking at the above equation, it becomes clear that one has to be very precise in choosing the parameters a {\displaystyle \ a\ } and b : {\displaystyle \ b\ :} any two random parameters will not do, since then μ {\displaystyle \ \mu \ } and m T {\displaystyle \ m_{\mathrm {T} }\ } could be of the same order! This is known as the doublet–triplet (DT) splitting problem: In order to be consistent we have to 'split' the 'masses' of T {\displaystyle \ T\ } and H , {\displaystyle \ H\ ,} but for that we need to fine-tune a {\displaystyle \ a\ } and b . {\displaystyle \ b~.} There are, however, some solutions to this problem which can work quite well in SUSY models, and reviews of the DT splitting problem are available in the literature. === Neutrino masses === Like the SM, the original Georgi–Glashow model does not include neutrino masses. However, since neutrino oscillations have been observed, such masses are required. The solutions to this problem follow the same ideas that have been applied to the SM: On the one hand, one can include an S U ( 5 ) {\displaystyle SU(5)} singlet which can then generate either Dirac masses or Majorana masses. As in the SM, one can also implement the type-I seesaw mechanism, which then generates naturally light masses. On the other hand, one can just parametrize the ignorance about neutrinos using the dimension-5 Weinberg operator: O W = ( 5 ¯ F 5 H ) Y ν Λ ( 5 ¯ F 5 H ) + h . c . {\displaystyle {\mathcal {O}}_{W}=({\overline {\mathbf {5} }}_{F}\mathbf {5} _{H}){\frac {Y_{\nu }}{\Lambda }}({\overline {\mathbf {5} }}_{F}\mathbf {5} _{H})+h.c.} with Y ν {\displaystyle Y_{\nu }} the 3 × 3 {\displaystyle 3\times 3} Yukawa matrix required for the mixing between flavours. == References ==
Wikipedia/Georgi–Glashow_model
Nuclear Physics A, Nuclear Physics B, Nuclear Physics B: Proceedings Supplements and the discontinued Nuclear Physics are peer-reviewed scientific journals published by Elsevier. The scope of Nuclear Physics A is nuclear and hadronic physics, and that of Nuclear Physics B is high energy physics, quantum field theory, statistical systems, and mathematical physics. Nuclear Physics was established in 1956, and then split into Nuclear Physics A and Nuclear Physics B in 1967. A supplement series to Nuclear Physics B, called Nuclear Physics B: Proceedings Supplements, was published from 1987 until 2015 and continues as Nuclear and Particle Physics Proceedings. Nuclear Physics B is part of the SCOAP3 initiative. == Abstracting and indexing == === Nuclear Physics A === Current Contents/Physics, Chemical, & Earth Sciences === Nuclear Physics B === Current Contents/Physics, Chemical, & Earth Sciences == References == == External links == Nuclear Physics Nuclear Physics A Nuclear Physics B Nuclear Physics B: Proceedings Supplements
Wikipedia/Nuclear_Physics_B
In physics, the Pati–Salam model is a Grand Unified Theory (GUT) proposed in 1974 by Abdus Salam and Jogesh Pati. Like other GUTs, its goal is to explain the seeming arbitrariness and complexity of the Standard Model in terms of a simpler, more fundamental theory that unifies what are in the Standard Model disparate particles and forces. The Pati–Salam unification is based on there being four quark color charges, dubbed red, green, blue and violet (or originally lilac), instead of the conventional three, with the new "violet" quark being identified with the leptons. The model also has left–right symmetry and predicts the existence of a high-energy right-handed weak interaction with heavy W' and Z' bosons and right-handed neutrinos. Originally the fourth color was labelled "lilac" to alliterate with "lepton". Pati–Salam is an alternative to the Georgi–Glashow SU(5) unification also proposed in 1974. Both can be embedded within an SO(10) unification model. == Core theory == The Pati–Salam model states that the gauge group is either SU(4) × SU(2)L × SU(2)R or (SU(4) × SU(2)L × SU(2)R)/Z2 and the fermions form three families, each consisting of the representations (4, 2, 1) and (4̄, 1, 2); these include the right-handed neutrino (see neutrino oscillations). This needs some explanation. The center of SU(4) × SU(2)L × SU(2)R is Z4 × Z2L × Z2R. The Z2 in the quotient refers to the two-element subgroup generated by the element of the center corresponding to the order-two element of Z4 and the non-trivial elements of Z2L and Z2R. There is also a (4, 1, 2) and/or a (4̄, 1, 2) scalar field, called the Higgs field, which acquires a non-zero vacuum expectation value (VEV). This results in a spontaneous symmetry breaking from SU(4) × SU(2)L × SU(2)R to (SU(3) × SU(2) × U(1)Y)/Z3 or from (SU(4) × SU(2)L × SU(2)R)/Z2 to (SU(3) × SU(2) × U(1)Y)/Z6 and also, (4, 2, 1) → (3, 2)1/6 ⊕ (1, 2)−1/2 (q & l) (4̄, 1, 2) → (3̄, 1)1/3 ⊕ (3̄, 1)−2/3 ⊕ (1, 1)1 ⊕ (1, 1)0 (dc, uc, ec & νc) (6, 1, 1) → (3, 1)−1/3 ⊕ (3̄, 1)1/3 (1, 3, 1) → (1, 3)0 (1, 1, 3) → (1, 1)1 ⊕ (1, 1)0 ⊕ (1, 1)−1 See restricted representation. Of course, calling the representations things like (4, 1, 2) and (6, 1, 1) is purely a physicist's convention, not a mathematician's convention, where representations are either labelled by Young tableaux or Dynkin diagrams with numbers on their vertices, but still, it is standard among GUT theorists. The weak hypercharge, Y, is the sum of the two matrices: ( 1 3 0 0 0 0 1 3 0 0 0 0 1 3 0 0 0 0 − 1 ) ∈ SU ( 4 ) , ( 1 0 0 − 1 ) ∈ SU ( 2 ) R {\displaystyle {\begin{pmatrix}{\frac {1}{3}}&0&0&0\\0&{\frac {1}{3}}&0&0\\0&0&{\frac {1}{3}}&0\\0&0&0&-1\end{pmatrix}}\in {\text{SU}}(4),\qquad {\begin{pmatrix}1&0\\0&-1\end{pmatrix}}\in {\text{SU}}(2)_{\text{R}}} It is possible to extend the Pati–Salam group so that it has two connected components. The relevant group is now the semidirect product ( [ S U ( 4 ) × S U ( 2 ) L × S U ( 2 ) R ] / Z 2 ) ⋊ Z 2 {\displaystyle \left([\mathrm {SU} (4)\times \mathrm {SU} (2)_{\mathrm {L} }\times \mathrm {SU} (2)_{\mathrm {R} }]/\mathbf {Z} _{2}\right)\rtimes \mathbf {Z} _{2}} . The last Z2 also needs explaining. It corresponds to an automorphism of the (unextended) Pati–Salam group which is the composition of an involutive outer automorphism of SU(4) (one that is not an inner automorphism) with the interchange of the left and right copies of SU(2). This explains the name left and right and is one of the main motivations for originally studying this model. 
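Returning to the hypercharge assignment quoted above, the following small sketch is illustrative only: it uses the normalization Q = T3L + Y/2, in which Y is literally the sum of the two diagonal matrices (equivalently Y = (B − L) + 2 T3R), whereas the branching subscripts earlier in this section follow the convention in which the quark doublet has Y = 1/6; the pairing of SU(2)R doublet slots with the conjugate fields is an assumed choice made for illustration. It recovers the familiar electric charges of one fermion generation from B − L and T3R.

```python
from fractions import Fraction as F

# Sketch of the hypercharge assignment above (conventions assumed: Q = T3L + Y/2 and
# Y = (B-L) + 2*T3R, i.e. the sum of the diag(1/3,1/3,1/3,-1) and diag(1,-1) matrices;
# the pairing of SU(2)_R doublet slots with the conjugate fields is chosen for
# illustration).  Each entry is (B-L, T3L, T3R).
fields = {
    # left-handed doublet (4, 2, 1): SU(2)_R singlet, so T3R = 0
    "u_L":  (F(1, 3),  F(1, 2),  F(0)),
    "d_L":  (F(1, 3),  F(-1, 2), F(0)),
    "nu_L": (F(-1),    F(1, 2),  F(0)),
    "e_L":  (F(-1),    F(-1, 2), F(0)),
    # conjugate fields in the (4bar, 1, 2): SU(2)_L singlets, so T3L = 0
    "d^c":  (F(-1, 3), F(0), F(1, 2)),
    "u^c":  (F(-1, 3), F(0), F(-1, 2)),
    "e^c":  (F(1),     F(0), F(1, 2)),
    "nu^c": (F(1),     F(0), F(-1, 2)),
}

for name, (bml, t3l, t3r) in fields.items():
    Y = bml + 2 * t3r          # weak hypercharge (sum of the two matrices above)
    Q = t3l + Y / 2            # electric charge
    print(f"{name:5s}  Y = {str(Y):>5s}   Q = {str(Q):>5s}")
# Expected electric charges: u_L +2/3, d_L -1/3, nu_L 0, e_L -1, d^c +1/3,
# u^c -2/3, e^c +1, nu^c 0.
```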
This extra "left-right symmetry" restores the concept of parity which had been shown not to hold at low energy scales for the weak interaction. In this extended model, (4, 2, 1) ⊕ (4, 1, 2) is an irrep and so is (4, 1, 2) ⊕ (4, 2, 1). This is the simplest extension of the minimal left-right model unifying QCD with B−L. Since the homotopy group π 2 ( S U ( 4 ) × S U ( 2 ) [ S U ( 3 ) × U ( 1 ) ] / Z 3 ) = Z , {\displaystyle \pi _{2}\left({\frac {\mathrm {SU} (4)\times \mathrm {SU} (2)}{[\mathrm {SU} (3)\times \mathrm {U} (1)]/\mathbf {Z} _{3}}}\right)=\mathbf {Z} ,} this model predicts monopoles. See 't Hooft–Polyakov monopole. This model was invented by Jogesh Pati and Abdus Salam. This model doesn't predict gauge mediated proton decay (unless it is embedded within an even larger GUT group). == Differences from the SU(5) unification == As mentioned above, both the Pati–Salam and Georgi–Glashow SU(5) unification models can be embedded in a SO(10) unification. The difference between the two models then lies in the way that the SO(10) symmetry is broken, generating different particles that may or may not be important at low scales and accessible by current experiments. If we look at the individual models, the most important difference is in the origin of the weak hypercharge. In the SU(5) model by itself there is no left-right symmetry (although there could be one in a larger unification in which the model is embedded), and the weak hypercharge is treated separately from the color charge. In the Pati–Salam model, part of the weak hypercharge (often called U(1)B-L) starts being unified with the color charge in the SU(4)C group, while the other part of the weak hypercharge is in the SU(2)R. When those two groups break then the two parts together eventually unify into the usual weak hypercharge U(1)Y. == Minimal supersymmetric Pati–Salam == Spacetime: The N = 1 superspace extension of 3 + 1 Minkowski spacetime Spatial symmetry: N=1 SUSY over 3 + 1 Minkowski spacetime with R-symmetry Gauge symmetry group: (SU(4) × SU(2)L × SU(2)R)/Z2 Global internal symmetry: U(1)A Vector superfields: Those associated with the SU(4) × SU(2)L × SU(2)R gauge symmetry Left-right extension: We can extend this model to include left-right symmetry. For that, we need the additional chiral multiplets (4, 2, 1)H and (4, 2, 1)H. === Chiral superfields === As complex representations: === Superpotential === A generic invariant renormalizable superpotential is a (complex) SU(4) × SU(2)L × SU(2)R and U(1)R invariant cubic polynomial in the superfields. It is a linear combination of the following terms: S S ( 4 , 1 , 2 ) H ( 4 ¯ , 1 , 2 ) H S ( 1 , 2 , 2 ) H ( 1 , 2 , 2 ) H ( 6 , 1 , 1 ) H ( 4 , 1 , 2 ) H ( 4 , 1 , 2 ) H ( 6 , 1 , 1 ) H ( 4 ¯ , 1 , 2 ) H ( 4 ¯ , 1 , 2 ) H ( 1 , 2 , 2 ) H ( 4 , 2 , 1 ) i ( 4 ¯ , 1 , 2 ) j ( 4 , 1 , 2 ) H ( 4 ¯ , 1 , 2 ) i ϕ j {\displaystyle {\begin{matrix}S\\S(4,1,2)_{H}({\bar {4}},1,2)_{H}\\S(1,2,2)_{H}(1,2,2)_{H}\\(6,1,1)_{H}(4,1,2)_{H}(4,1,2)_{H}\\(6,1,1)_{H}({\bar {4}},1,2)_{H}({\bar {4}},1,2)_{H}\\(1,2,2)_{H}(4,2,1)_{i}({\bar {4}},1,2)_{j}\\(4,1,2)_{H}({\bar {4}},1,2)_{i}\phi _{j}\\\end{matrix}}} i {\displaystyle i} and j {\displaystyle j} are the generation indices. == Sources == Graham G. Ross, Grand Unified Theories, Benjamin/Cummings, 1985, ISBN 0-8053-6968-6 Anthony Zee, Quantum Field Theory in a Nutshell, Princeton U. Press, Princeton, 2003, ISBN 0-691-01019-6 == References == Pati, Jogesh C.; Salam, Abdus (1 June 1974). "Lepton number as the fourth "color"". Physical Review D. 
10 (1): 275–289. Bibcode:1974PhRvD..10..275P. doi:10.1103/physrevd.10.275. ISSN 0556-2821. Baez, John C.; Huerta, J. (2010). "The Algebra of Grand Unified Theories". Bulletin of the American Mathematical Society. 47 (3): 483–552. arXiv:0904.1556. doi:10.1090/S0273-0979-10-01294-2. S2CID 2941843. == External links == Wu, Dan-di; Li, Tie-Zhong (1985). "Proton decay, annihilation or fusion?". Zeitschrift für Physik C. 27 (2): 321–323. Bibcode:1985ZPhyC..27..321W. doi:10.1007/BF01556623. S2CID 121868029. – Fusion of all three quarks is the only decay mechanism mediated by the Higgs particle, not the gauge bosons, in the Pati–Salam model. The Algebra of Grand Unified Theories, John Huerta. Slide show: contains an overview of the Pati–Salam model. Motivation for the Pati–Salam model.
Wikipedia/Pati–Salam_model
In string theory, a domain wall is a theoretical (d−1)-dimensional singularity. A domain wall is meant to represent an object of codimension one embedded into space (a defect in space localized in one spatial dimension). For example, D8-branes are domain walls in type II string theory. In M-theory, the existence of Horava–Witten domain walls, "ends of the world" that carry an E8 gauge theory, is important for various relations between superstring theory and M-theory. If domain walls exist, their interactions are hypothesized to emit gravitational waves that would be detectable by LIGO and similar experiments. == See also == Topological defect Cosmic string Membrane (M-theory) Gravitational singularity == References ==
Wikipedia/Domain_wall_(string_theory)
The 331 model in particle physics is an extension of the electroweak gauge symmetry which offers an explanation of why there must be three families of quarks and leptons. The name "331" comes from the full gauge symmetry group S U ( 3 ) C × S U ( 3 ) L × U ( 1 ) X {\displaystyle SU(3)_{C}\times SU(3)_{L}\times U(1)_{X}\,} . == Details == The 331 model in particle physics is an extension of the electroweak gauge symmetry from S U ( 2 ) L × U ( 1 ) Y {\displaystyle SU(2)_{L}\times U(1)_{Y}} to S U ( 3 ) L × U ( 1 ) X {\displaystyle \,SU(3)_{L}\times U(1)_{X}\,} with S U ( 2 ) L ⊂ S U ( 3 ) L {\displaystyle SU(2)_{L}\subset SU(3)_{L}} . In the 331 model, hypercharge is given by Y = β T 8 + I X {\displaystyle Y=\beta \,T_{8}+I\,X} and electric charge is given by Q = Y + T 3 2 {\displaystyle Q={\frac {Y+T_{3}}{2}}} where T 3 {\displaystyle T_{3}} and T 8 {\displaystyle T_{8}} are the Gell-Mann matrices of SU(3)L and β {\displaystyle \beta } and I {\displaystyle I} are parameters of the model. == Motivation == The 331 model offers an explanation of why there must be three families of quarks and leptons. One curious feature of the Standard Model is that the gauge anomalies independently exactly cancel for each of the three known quark-lepton families. The Standard Model thus offers no explanation of why there are three families, or indeed why there is more than one family. The idea behind the 331 model is to extend the Standard Model such that all three families are required for anomaly cancellation. More specifically, in this model the three families transform differently under an extended gauge group. The perfect cancellation of the anomalies within each family is ruined, but the anomalies of the extended gauge group cancel when all three families are present. The cancellation would also persist for 6, 9, ... families, so the three families observed in nature constitute the smallest possible matter content. Such a construction necessarily requires the addition of further gauge bosons and chiral fermions, which then provide testable predictions of the model in the form of elementary particles. These particles could be found experimentally at masses above the electroweak scale, which is on the order of 10²–10³ GeV. The minimal 331 model predicts singly and doubly charged spin-one bosons, called bileptons, which could show up in electron–electron scattering when it is studied at TeV energy scales and may also be produced in multi-TeV proton–proton scattering at the Large Hadron Collider, which can reach 10⁴ GeV. == See also == Physics beyond the Standard Model == References == Frampton, P. H. (1992). "Chiral dilepton model and the flavor question". Physical Review Letters. 69 (20): 2889–2891. Bibcode:1992PhRvL..69.2889F. doi:10.1103/PhysRevLett.69.2889. PMID 10046667. Pisano, F.; Pleitez, V. (1992). "An SU(3) × U(1) model for electroweak interactions". Physical Review D. 46 (1): 410–417. arXiv:hep-ph/9206242. Bibcode:1992PhRvD..46..410P. doi:10.1103/PhysRevD.46.410. PMID 10014771. S2CID 116855787. Foot, R.; Hernandez, O.F.; Pisano, F.; Pleitez, V. (1993). "Lepton masses in an SU(3)L × U(1)N gauge model". Physical Review D. 47 (9): 4158–4161. arXiv:hep-ph/9207264. Bibcode:1993PhRvD..47.4158F. doi:10.1103/PhysRevD.47.4158. PMID 10016045. S2CID 10314356.
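As a small illustration of the charge formula quoted in the Details section above, the sketch below (an aside, not part of the article) evaluates Q = (Y + T3)/2 with Y = βT8 + IX for a lepton triplet. Taking T3 and T8 to be the diagonal Gell-Mann matrices and choosing β = −√3 with X = 0 are assumptions corresponding to one common variant of the minimal model.

```python
import numpy as np

T3 = np.diag([1.0, -1.0, 0.0])                  # Gell-Mann matrix lambda_3
T8 = np.diag([1.0, 1.0, -2.0]) / np.sqrt(3.0)   # Gell-Mann matrix lambda_8
I3 = np.eye(3)

beta, X = -np.sqrt(3.0), 0.0                    # assumed minimal-model choice for a lepton triplet
Y = beta * T8 + I3 * X
Q = (Y + T3) / 2

print(np.round(np.diag(Q), 6))                  # expected charges: 0, -1, +1
```

With this choice the diagonal of Q comes out as (0, −1, +1), the charges of a (ν, ℓ⁻, ℓ⁺) triplet; other choices of β and X rearrange the charges of the exotic states.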
Wikipedia/331_model
Large Apparatus studying Grand Unification and Neutrino Astrophysics or LAGUNA was a European project aimed at developing the next-generation, very large volume underground neutrino observatory. The detector was to be much bigger and more sensitive than any previous detector, and to make new discoveries in the field of particle and astroparticle physics. The project involved 21 European institutions in 10 European countries, and brought together over 100 scientists. The project assessed the feasibility of developing the observatory infrastructure and the observatory particle detectors themselves, as well as looking for a deployment site (seven candidates) in Europe. There were two design studies, LAGUNA and LAGUNA/LBNO, which were finished in 2008 and 2011, respectively. The total price of the studies was €17 million, of which €7 million was direct funding from the EU, and the rest came from the participating universities and other organizations. In 2016, the LAGUNA project was in practice cancelled, although no official decision was made. A similar DUSEL project in the United States was also cancelled. However, the neutrino component of the DUSEL project (the Long Baseline Neutrino Experiment, LBNE) was rebooted as the DUNE project and enlarged from a USA-only project into an international project. Many leading researchers from LAGUNA moved to DUNE. The construction of DUNE started in 2017 at the Sanford Lab in South Dakota, USA, with expected completion in 2027. == Detectors == Three possible detector technologies were being studied, the MEMPHYS, GLACIER and LENA detectors: MEMPHYS is water-based, GLACIER liquid-argon-based and LENA liquid-scintillator-based. All the detectors work by observing the faint light and electric charge produced when a neutrino interacts with a nucleus of the liquid inside the detector. The detectors were to be located deep underground (as deep as 1.4 km) to filter out the background produced by the atmospheric and cosmic particles that bombard everything at the surface of the Earth. These background particles do not penetrate the Earth to that depth, but neutrinos, which interact only weakly with normal matter, do. The detectors were to be huge in size, with the liquid target mass being of order 100 000 – 1 000 000 tons. === LENA === LENA (Low Energy Neutrino Astronomy) is a liquid scintillator detector with a mass of about 50 kton. Its cylindrical tank would be about 100 meters in height and 30 meters in diameter. The actual scintillation volume is surrounded by a nylon barrier and a buffer volume. Additionally, the buffer volume is surrounded by a pure water volume. Detection in LENA would be performed by photomultiplier tubes, designed to partly cover the walls between the buffer volume and the water volume. The scintillation light produced in the scintillation volume would be detected with those photomultiplier tubes. LENA's aim is to study low energy neutrinos originating from supernova explosions, the Sun and the Earth's interior.
== Scientific goals == The goals of the project were to: study the unification of all forces by observing proton decay (a very rare phenomenon expected to occur according to some Grand Unified Theory (GUT) models but never observed), study galactic supernovae through neutrino observations, study terrestrial and solar neutrinos (neutrinos are formed in nuclear processes), and study the excess of matter over antimatter in the universe by observing neutrino oscillations in collaboration with CERN (which provides the neutrino beams for the experiment; neutrinos are produced at CERN and then sent as an underground beam for hundreds of kilometers through the Earth to the detectors). == Sites == The candidate sites for the observatory were: Callio at Pyhäsalmi Mine (Finland), Fréjus Road Tunnel (France), Boulby Mine (United Kingdom), Umbria (Italy; this site would require a new cavern to be excavated, since, in contrast to the other sites, it is not an old mine), SUNLAB (Sieroszowice UNderground LABoratory) in the Polkowice-Sieroszowice mine (Poland), Unirea mine in Slănic (Romania), and Canfranc Underground Laboratory (Spain). From these candidates, the observatory location was to be chosen (see the project website for more information about the sites). == References == == External links == LAGUNA and LAGUNA-LBNO Design Studies
Wikipedia/Large_Apparatus_studying_Grand_Unification_and_Neutrino_Astrophysics
Maxwell's equations, or Maxwell–Heaviside equations, are a set of coupled partial differential equations that, together with the Lorentz force law, form the foundation of classical electromagnetism, classical optics, electric and magnetic circuits. The equations provide a mathematical model for electric, optical, and radio technologies, such as power generation, electric motors, wireless communication, lenses, radar, etc. They describe how electric and magnetic fields are generated by charges, currents, and changes of the fields. The equations are named after the physicist and mathematician James Clerk Maxwell, who, in 1861 and 1862, published an early form of the equations that included the Lorentz force law. Maxwell first used the equations to propose that light is an electromagnetic phenomenon. The modern form of the equations in their most common formulation is credited to Oliver Heaviside. Maxwell's equations may be combined to demonstrate how fluctuations in electromagnetic fields (waves) propagate at a constant speed in vacuum, c (299792458 m/s). Known as electromagnetic radiation, these waves occur at various wavelengths to produce a spectrum of radiation from radio waves to gamma rays. In partial differential equation form and a coherent system of units, Maxwell's microscopic equations can be written as (top to bottom: Gauss's law, Gauss's law for magnetism, Faraday's law, Ampère-Maxwell law) ∇ ⋅ E = ρ ε 0 ∇ ⋅ B = 0 ∇ × E = − ∂ B ∂ t ∇ × B = μ 0 ( J + ε 0 ∂ E ∂ t ) {\displaystyle {\begin{aligned}\nabla \cdot \mathbf {E} \,\,\,&={\frac {\rho }{\varepsilon _{0}}}\\\nabla \cdot \mathbf {B} \,\,\,&=0\\\nabla \times \mathbf {E} &=-{\frac {\partial \mathbf {B} }{\partial t}}\\\nabla \times \mathbf {B} &=\mu _{0}\left(\mathbf {J} +\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}\right)\end{aligned}}} With E {\displaystyle \mathbf {E} } the electric field, B {\displaystyle \mathbf {B} } the magnetic field, ρ {\displaystyle \rho } the electric charge density and J {\displaystyle \mathbf {J} } the current density. ε 0 {\displaystyle \varepsilon _{0}} is the vacuum permittivity and μ 0 {\displaystyle \mu _{0}} the vacuum permeability. The equations have two major variants: The microscopic equations have universal applicability but are unwieldy for common calculations. They relate the electric and magnetic fields to total charge and total current, including the complicated charges and currents in materials at the atomic scale. The macroscopic equations define two new auxiliary fields that describe the large-scale behaviour of matter without having to consider atomic-scale charges and quantum phenomena like spins. However, their use requires experimentally determined parameters for a phenomenological description of the electromagnetic response of materials. The term "Maxwell's equations" is often also used for equivalent alternative formulations. Versions of Maxwell's equations based on the electric and magnetic scalar potentials are preferred for explicitly solving the equations as a boundary value problem, analytical mechanics, or for use in quantum mechanics. The covariant formulation (on spacetime rather than space and time separately) makes the compatibility of Maxwell's equations with special relativity manifest. Maxwell's equations in curved spacetime, commonly used in high-energy and gravitational physics, are compatible with general relativity. 
In fact, Albert Einstein developed special and general relativity to accommodate the invariant speed of light, a consequence of Maxwell's equations, with the principle that only relative movement has physical consequences. The publication of the equations marked the unification of a theory for previously separately described phenomena: magnetism, electricity, light, and associated radiation. Since the mid-20th century, it has been understood that Maxwell's equations do not give an exact description of electromagnetic phenomena, but are instead a classical limit of the more precise theory of quantum electrodynamics. == History of the equations == == Conceptual descriptions == === Gauss's law === Gauss's law describes the relationship between an electric field and electric charges: an electric field points away from positive charges and towards negative charges, and the net outflow of the electric field through a closed surface is proportional to the enclosed charge, including bound charge due to polarization of material. The coefficient of the proportion is the permittivity of free space. === Gauss's law for magnetism === Gauss's law for magnetism states that electric charges have no magnetic analogues, called magnetic monopoles; no north or south magnetic poles exist in isolation. Instead, the magnetic field of a material is attributed to a dipole, and the net outflow of the magnetic field through a closed surface is zero. Magnetic dipoles may be represented as loops of current or inseparable pairs of equal and opposite "magnetic charges". Precisely, the total magnetic flux through a Gaussian surface is zero, and the magnetic field is a solenoidal vector field. === Faraday's law === The Maxwell–Faraday version of Faraday's law of induction describes how a time-varying magnetic field corresponds to the negative curl of an electric field. In integral form, it states that the work per unit charge required to move a charge around a closed loop equals the rate of change of the magnetic flux through the enclosed surface. The electromagnetic induction is the operating principle behind many electric generators: for example, a rotating bar magnet creates a changing magnetic field and generates an electric field in a nearby wire. === Ampère–Maxwell law === The original law of Ampère states that magnetic fields relate to electric current. Maxwell's addition states that magnetic fields also relate to changing electric fields, which Maxwell called displacement current. The integral form states that electric and displacement currents are associated with a proportional magnetic field along any enclosing curve. Maxwell's modification of Ampère's circuital law is important because the laws of Ampère and Gauss must otherwise be adjusted for static fields. As a consequence, it predicts that a rotating magnetic field occurs with a changing electric field. A further consequence is the existence of self-sustaining electromagnetic waves which travel through empty space. The speed calculated for electromagnetic waves, which could be predicted from experiments on charges and currents, matches the speed of light; indeed, light is one form of electromagnetic radiation (as are X-rays, radio waves, and others). Maxwell understood the connection between electromagnetic waves and light in 1861, thereby unifying the theories of electromagnetism and optics. 
== Formulation in terms of electric and magnetic fields (microscopic or in vacuum version) == In the electric and magnetic field formulation there are four equations that determine the fields for given charge and current distribution. A separate law of nature, the Lorentz force law, describes how the electric and magnetic fields act on charged particles and currents. By convention, a version of this law in the original equations by Maxwell is no longer included. The vector calculus formalism below, the work of Oliver Heaviside, has become standard. It is rotationally invariant, and therefore mathematically more transparent than Maxwell's original 20 equations in x, y and z components. The relativistic formulations are more symmetric and Lorentz invariant. For the same equations expressed using tensor calculus or differential forms (see § Alternative formulations). The differential and integral formulations are mathematically equivalent; both are useful. The integral formulation relates fields within a region of space to fields on the boundary and can often be used to simplify and directly calculate fields from symmetric distributions of charges and currents. On the other hand, the differential equations are purely local and are a more natural starting point for calculating the fields in more complicated (less symmetric) situations, for example using finite element analysis. === Key to the notation === Symbols in bold represent vector quantities, and symbols in italics represent scalar quantities, unless otherwise indicated. The equations introduce the electric field, E, a vector field, and the magnetic field, B, a pseudovector field, each generally having a time and location dependence. The sources are the total electric charge density (total charge per unit volume), ρ, and the total electric current density (total current per unit area), J. The universal constants appearing in the equations (the first two ones explicitly only in the SI formulation) are: the permittivity of free space, ε0, and the permeability of free space, μ0, and the speed of light, c = ( ε 0 μ 0 ) − 1 / 2 {\displaystyle c=({\varepsilon _{0}\mu _{0}})^{-1/2}} ==== Differential equations ==== In the differential equations, the nabla symbol, ∇, denotes the three-dimensional gradient operator, del, the ∇⋅ symbol (pronounced "del dot") denotes the divergence operator, the ∇× symbol (pronounced "del cross") denotes the curl operator. ==== Integral equations ==== In the integral equations, Ω is any volume with closed boundary surface ∂Ω, and Σ is any surface with closed boundary curve ∂Σ, The equations are a little easier to interpret with time-independent surfaces and volumes. Time-independent surfaces and volumes are "fixed" and do not change over a given time interval. For example, since the surface is time-independent, we can bring the differentiation under the integral sign in Faraday's law: d d t ∬ Σ B ⋅ d S = ∬ Σ ∂ B ∂ t ⋅ d S , {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\iint _{\Sigma }\mathbf {B} \cdot \mathrm {d} \mathbf {S} =\iint _{\Sigma }{\frac {\partial \mathbf {B} }{\partial t}}\cdot \mathrm {d} \mathbf {S} \,,} Maxwell's equations can be formulated with possibly time-dependent surfaces and volumes by using the differential version and using Gauss' and Stokes' theorems as appropriate. 
∫ ∂ Ω {\displaystyle {\vphantom {\int }}_{\scriptstyle \partial \Omega }} is a surface integral over the boundary surface ∂Ω, with the loop indicating the surface is closed ∭ Ω {\displaystyle \iiint _{\Omega }} is a volume integral over the volume Ω, ∮ ∂ Σ {\displaystyle \oint _{\partial \Sigma }} is a line integral around the boundary curve ∂Σ, with the loop indicating the curve is closed. ∬ Σ {\displaystyle \iint _{\Sigma }} is a surface integral over the surface Σ, The total electric charge Q enclosed in Ω is the volume integral over Ω of the charge density ρ (see the "macroscopic formulation" section below): Q = ∭ Ω ρ d V , {\displaystyle Q=\iiint _{\Omega }\rho \ \mathrm {d} V,} where dV is the volume element. The net magnetic flux ΦB is the surface integral of the magnetic field B passing through a fixed surface, Σ: Φ B = ∬ Σ B ⋅ d S , {\displaystyle \Phi _{B}=\iint _{\Sigma }\mathbf {B} \cdot \mathrm {d} \mathbf {S} ,} The net electric flux ΦE is the surface integral of the electric field E passing through Σ: Φ E = ∬ Σ E ⋅ d S , {\displaystyle \Phi _{E}=\iint _{\Sigma }\mathbf {E} \cdot \mathrm {d} \mathbf {S} ,} The net electric current I is the surface integral of the electric current density J passing through Σ: I = ∬ Σ J ⋅ d S , {\displaystyle I=\iint _{\Sigma }\mathbf {J} \cdot \mathrm {d} \mathbf {S} ,} where dS denotes the differential vector element of surface area S, normal to surface Σ. (Vector area is sometimes denoted by A rather than S, but this conflicts with the notation for magnetic vector potential). === Formulation in the SI === === Formulation in the Gaussian system === The definitions of charge, electric field, and magnetic field can be altered to simplify theoretical calculation, by absorbing dimensioned factors of ε0 and μ0 into the units (and thus redefining these). With a corresponding change in the values of the quantities for the Lorentz force law this yields the same physics, i.e. trajectories of charged particles, or work done by an electric motor. These definitions are often preferred in theoretical and high energy physics where it is natural to take the electric and magnetic field with the same units, to simplify the appearance of the electromagnetic tensor: the Lorentz covariant object unifying electric and magnetic field would then contain components with uniform unit and dimension.: vii  Such modified definitions are conventionally used with the Gaussian (CGS) units. Using these definitions, colloquially "in Gaussian units", the Maxwell equations become: The equations simplify slightly when a system of quantities is chosen in the speed of light, c, is used for nondimensionalization, so that, for example, seconds and lightseconds are interchangeable, and c = 1. Further changes are possible by absorbing factors of 4π. This process, called rationalization, affects whether Coulomb's law or Gauss's law includes such a factor (see Heaviside–Lorentz units, used mainly in particle physics). == Relationship between differential and integral formulations == The equivalence of the differential and integral formulations are a consequence of the Gauss divergence theorem and the Kelvin–Stokes theorem. 
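As an aside (not part of the article), the integral form of Gauss's law can be checked numerically with the quantities just defined: integrating the Coulomb field of a point charge over a sphere should return Qenc/ε0. The charge value and radius below are arbitrary test inputs.

```python
import numpy as np

eps0 = 8.8541878128e-12          # vacuum permittivity, F/m
Q, R = 1.0e-9, 0.5               # hypothetical 1 nC point charge, sphere of radius 0.5 m

n = 400
dtheta = np.pi / n
theta = (np.arange(n) + 0.5) * dtheta          # midpoint grid in the polar angle

E = Q / (4.0 * np.pi * eps0 * R**2)            # Coulomb field magnitude on the sphere
# E is radial, hence parallel to dS; integrate E dA over the sphere (2*pi from azimuthal symmetry)
flux = 2.0 * np.pi * np.sum(E * R**2 * np.sin(theta) * dtheta)

print(flux, Q / eps0)                          # both come out near 112.94 V*m, as Gauss's law requires
```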
=== Flux and divergence === According to the (purely mathematical) Gauss divergence theorem, the electric flux through the boundary surface ∂Ω can be rewritten as ∮ ∂ Ω E ⋅ d S = ∭ Ω ∇ ⋅ E d V {\displaystyle {\vphantom {\oint }}_{\scriptstyle \partial \Omega }\mathbf {E} \cdot \mathrm {d} \mathbf {S} =\iiint _{\Omega }\nabla \cdot \mathbf {E} \,\mathrm {d} V} The integral version of Gauss's equation can thus be rewritten as ∭ Ω ( ∇ ⋅ E − ρ ε 0 ) d V = 0 {\displaystyle \iiint _{\Omega }\left(\nabla \cdot \mathbf {E} -{\frac {\rho }{\varepsilon _{0}}}\right)\,\mathrm {d} V=0} Since Ω is arbitrary (e.g. an arbitrary small ball with arbitrary center), this is satisfied if and only if the integrand is zero everywhere. This is the differential equations formulation of Gauss equation up to a trivial rearrangement. Similarly rewriting the magnetic flux in Gauss's law for magnetism in integral form gives ∮ ∂ Ω B ⋅ d S = ∭ Ω ∇ ⋅ B d V = 0. {\displaystyle {\vphantom {\oint }}_{\scriptstyle \partial \Omega }\mathbf {B} \cdot \mathrm {d} \mathbf {S} =\iiint _{\Omega }\nabla \cdot \mathbf {B} \,\mathrm {d} V=0.} which is satisfied for all Ω if and only if ∇ ⋅ B = 0 {\displaystyle \nabla \cdot \mathbf {B} =0} everywhere. === Circulation and curl === By the Kelvin–Stokes theorem we can rewrite the line integrals of the fields around the closed boundary curve ∂Σ to an integral of the "circulation of the fields" (i.e. their curls) over a surface it bounds, i.e. ∮ ∂ Σ B ⋅ d ℓ = ∬ Σ ( ∇ × B ) ⋅ d S , {\displaystyle \oint _{\partial \Sigma }\mathbf {B} \cdot \mathrm {d} {\boldsymbol {\ell }}=\iint _{\Sigma }(\nabla \times \mathbf {B} )\cdot \mathrm {d} \mathbf {S} ,} Hence the Ampère–Maxwell law, the modified version of Ampère's circuital law, in integral form can be rewritten as ∬ Σ ( ∇ × B − μ 0 ( J + ε 0 ∂ E ∂ t ) ) ⋅ d S = 0. {\displaystyle \iint _{\Sigma }\left(\nabla \times \mathbf {B} -\mu _{0}\left(\mathbf {J} +\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}\right)\right)\cdot \mathrm {d} \mathbf {S} =0.} Since Σ can be chosen arbitrarily, e.g. as an arbitrary small, arbitrary oriented, and arbitrary centered disk, we conclude that the integrand is zero if and only if the Ampère–Maxwell law in differential equations form is satisfied. The equivalence of Faraday's law in differential and integral form follows likewise. The line integrals and curls are analogous to quantities in classical fluid dynamics: the circulation of a fluid is the line integral of the fluid's flow velocity field around a closed loop, and the vorticity of the fluid is the curl of the velocity field. == Charge conservation == The invariance of charge can be derived as a corollary of Maxwell's equations. The left-hand side of the Ampère–Maxwell law has zero divergence by the div–curl identity. Expanding the divergence of the right-hand side, interchanging derivatives, and applying Gauss's law gives: 0 = ∇ ⋅ ( ∇ × B ) = ∇ ⋅ ( μ 0 ( J + ε 0 ∂ E ∂ t ) ) = μ 0 ( ∇ ⋅ J + ε 0 ∂ ∂ t ∇ ⋅ E ) = μ 0 ( ∇ ⋅ J + ∂ ρ ∂ t ) {\displaystyle 0=\nabla \cdot (\nabla \times \mathbf {B} )=\nabla \cdot \left(\mu _{0}\left(\mathbf {J} +\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}\right)\right)=\mu _{0}\left(\nabla \cdot \mathbf {J} +\varepsilon _{0}{\frac {\partial }{\partial t}}\nabla \cdot \mathbf {E} \right)=\mu _{0}\left(\nabla \cdot \mathbf {J} +{\frac {\partial \rho }{\partial t}}\right)} i.e., ∂ ρ ∂ t + ∇ ⋅ J = 0. 
{\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot \mathbf {J} =0.} By the Gauss divergence theorem, this means the rate of change of charge in a fixed volume equals the net current flowing through the boundary: d d t Q Ω = d d t ∭ Ω ρ d V = − {\displaystyle {\frac {d}{dt}}Q_{\Omega }={\frac {d}{dt}}\iiint _{\Omega }\rho \mathrm {d} V=-} ∮ ∂ Ω J ⋅ d S = − I ∂ Ω . {\displaystyle {\vphantom {\oint }}_{\scriptstyle \partial \Omega }\mathbf {J} \cdot {\rm {d}}\mathbf {S} =-I_{\partial \Omega }.} In particular, in an isolated system the total charge is conserved. == Vacuum equations, electromagnetic waves and speed of light == In a region with no charges (ρ = 0) and no currents (J = 0), such as in vacuum, Maxwell's equations reduce to: ∇ ⋅ E = 0 , ∇ × E + ∂ B ∂ t = 0 , ∇ ⋅ B = 0 , ∇ × B − μ 0 ε 0 ∂ E ∂ t = 0. {\displaystyle {\begin{aligned}\nabla \cdot \mathbf {E} &=0,&\nabla \times \mathbf {E} +{\frac {\partial \mathbf {B} }{\partial t}}=0,\\\nabla \cdot \mathbf {B} &=0,&\nabla \times \mathbf {B} -\mu _{0}\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}=0.\end{aligned}}} Taking the curl (∇×) of the curl equations, and using the curl of the curl identity we obtain μ 0 ε 0 ∂ 2 E ∂ t 2 − ∇ 2 E = 0 , μ 0 ε 0 ∂ 2 B ∂ t 2 − ∇ 2 B = 0. {\displaystyle {\begin{aligned}\mu _{0}\varepsilon _{0}{\frac {\partial ^{2}\mathbf {E} }{\partial t^{2}}}-\nabla ^{2}\mathbf {E} =0,\\\mu _{0}\varepsilon _{0}{\frac {\partial ^{2}\mathbf {B} }{\partial t^{2}}}-\nabla ^{2}\mathbf {B} =0.\end{aligned}}} The quantity μ 0 ε 0 {\displaystyle \mu _{0}\varepsilon _{0}} has the dimension (T/L)2. Defining c = ( μ 0 ε 0 ) − 1 / 2 {\displaystyle c=(\mu _{0}\varepsilon _{0})^{-1/2}} , the equations above have the form of the standard wave equations 1 c 2 ∂ 2 E ∂ t 2 − ∇ 2 E = 0 , 1 c 2 ∂ 2 B ∂ t 2 − ∇ 2 B = 0. {\displaystyle {\begin{aligned}{\frac {1}{c^{2}}}{\frac {\partial ^{2}\mathbf {E} }{\partial t^{2}}}-\nabla ^{2}\mathbf {E} =0,\\{\frac {1}{c^{2}}}{\frac {\partial ^{2}\mathbf {B} }{\partial t^{2}}}-\nabla ^{2}\mathbf {B} =0.\end{aligned}}} Already during Maxwell's lifetime, it was found that the known values for ε 0 {\displaystyle \varepsilon _{0}} and μ 0 {\displaystyle \mu _{0}} give c ≈ 2.998 × 10 8 m/s {\displaystyle c\approx 2.998\times 10^{8}~{\text{m/s}}} , then already known to be the speed of light in free space. This led him to propose that light and radio waves were propagating electromagnetic waves, since amply confirmed. In the old SI system of units, the values of μ 0 = 4 π × 10 − 7 {\displaystyle \mu _{0}=4\pi \times 10^{-7}} and c = 299 792 458 m/s {\displaystyle c=299\,792\,458~{\text{m/s}}} are defined constants, (which means that by definition ε 0 = 8.854 187 8... × 10 − 12 F/m {\displaystyle \varepsilon _{0}=8.854\,187\,8...\times 10^{-12}~{\text{F/m}}} ) that define the ampere and the metre. In the new SI system, only c keeps its defined value, and the electron charge gets a defined value. In materials with relative permittivity, εr, and relative permeability, μr, the phase velocity of light becomes v p = 1 μ 0 μ r ε 0 ε r , {\displaystyle v_{\text{p}}={\frac {1}{\sqrt {\mu _{0}\mu _{\text{r}}\varepsilon _{0}\varepsilon _{\text{r}}}}},} which is usually less than c. In addition, E and B are perpendicular to each other and to the direction of wave propagation, and are in phase with each other. A sinusoidal plane wave is one special solution of these equations. Maxwell's equations explain how these waves can physically propagate through space. 
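A quick numerical illustration (an aside, not from the article) of the relations just quoted: computing c from μ0 and ε0, and the phase velocity in a hypothetical non-magnetic dielectric with εr = 2.25.

```python
import math

mu0 = 4e-7 * math.pi            # vacuum permeability in H/m (the old defined SI value)
eps0 = 8.8541878128e-12         # vacuum permittivity in F/m

c = 1.0 / math.sqrt(mu0 * eps0)
print(f"c   = {c:,.0f} m/s")    # ~299,792,458 m/s

# phase velocity in a hypothetical non-magnetic dielectric (eps_r = 2.25, mu_r = 1)
eps_r, mu_r = 2.25, 1.0
v_p = 1.0 / math.sqrt(mu0 * mu_r * eps0 * eps_r)
print(f"v_p = {v_p:,.0f} m/s  (c / {c / v_p:.2f})")
```

With εr = 2.25 the phase velocity comes out as c/1.5, illustrating the statement that vp is usually less than c in a material.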
The changing magnetic field creates a changing electric field through Faraday's law. In turn, that electric field creates a changing magnetic field through Maxwell's modification of Ampère's circuital law. This perpetual cycle allows these waves, now known as electromagnetic radiation, to move through space at velocity c. == Macroscopic formulation == The above equations are the microscopic version of Maxwell's equations, expressing the electric and the magnetic fields in terms of the (possibly atomic-level) charges and currents present. This is sometimes called the "general" form, but the macroscopic version below is equally general, the difference being one of bookkeeping. The microscopic version is sometimes called "Maxwell's equations in vacuum": this refers to the fact that the material medium is not built into the structure of the equations, but appears only in the charge and current terms. The microscopic version was introduced by Lorentz, who tried to use it to derive the macroscopic properties of bulk matter from its microscopic constituents.: 5  "Maxwell's macroscopic equations", also known as Maxwell's equations in matter, are more similar to those that Maxwell introduced himself. In the macroscopic equations, the influence of bound charge Qb and bound current Ib is incorporated into the displacement field D and the magnetizing field H, while the equations depend only on the free charges Qf and free currents If. This reflects a splitting of the total electric charge Q and current I (and their densities ρ and J) into free and bound parts: Q = Q f + Q b = ∭ Ω ( ρ f + ρ b ) d V = ∭ Ω ρ d V , I = I f + I b = ∬ Σ ( J f + J b ) ⋅ d S = ∬ Σ J ⋅ d S . {\displaystyle {\begin{aligned}Q&=Q_{\text{f}}+Q_{\text{b}}=\iiint _{\Omega }\left(\rho _{\text{f}}+\rho _{\text{b}}\right)\,\mathrm {d} V=\iiint _{\Omega }\rho \,\mathrm {d} V,\\I&=I_{\text{f}}+I_{\text{b}}=\iint _{\Sigma }\left(\mathbf {J} _{\text{f}}+\mathbf {J} _{\text{b}}\right)\cdot \mathrm {d} \mathbf {S} =\iint _{\Sigma }\mathbf {J} \cdot \mathrm {d} \mathbf {S} .\end{aligned}}} The cost of this splitting is that the additional fields D and H need to be determined through phenomenological constituent equations relating these fields to the electric field E and the magnetic field B, together with the bound charge and current. See below for a detailed description of the differences between the microscopic equations, dealing with total charge and current including material contributions, useful in air/vacuum; and the macroscopic equations, dealing with free charge and current, practical to use within materials. === Bound charge and current === When an electric field is applied to a dielectric material its molecules respond by forming microscopic electric dipoles – their atomic nuclei move a tiny distance in the direction of the field, while their electrons move a tiny distance in the opposite direction. This produces a macroscopic bound charge in the material even though all of the charges involved are bound to individual molecules. For example, if every molecule responds the same, similar to that shown in the figure, these tiny movements of charge combine to produce a layer of positive bound charge on one side of the material and a layer of negative charge on the other side. The bound charge is most conveniently described in terms of the polarization P of the material, its dipole moment per unit volume. If P is uniform, a macroscopic separation of charge is produced only at the surfaces where P enters and leaves the material. 
For non-uniform P, a charge is also produced in the bulk. Somewhat similarly, in all materials the constituent atoms exhibit magnetic moments that are intrinsically linked to the angular momentum of the components of the atoms, most notably their electrons. The connection to angular momentum suggests the picture of an assembly of microscopic current loops. Outside the material, an assembly of such microscopic current loops is not different from a macroscopic current circulating around the material's surface, despite the fact that no individual charge is traveling a large distance. These bound currents can be described using the magnetization M. The very complicated and granular bound charges and bound currents, therefore, can be represented on the macroscopic scale in terms of P and M, which average these charges and currents on a sufficiently large scale so as not to see the granularity of individual atoms, but also sufficiently small that they vary with location in the material. As such, Maxwell's macroscopic equations ignore many details on a fine scale that can be unimportant to understanding matters on a gross scale by calculating fields that are averaged over some suitable volume. === Auxiliary fields, polarization and magnetization === The definitions of the auxiliary fields are: D ( r , t ) = ε 0 E ( r , t ) + P ( r , t ) , H ( r , t ) = 1 μ 0 B ( r , t ) − M ( r , t ) , {\displaystyle {\begin{aligned}\mathbf {D} (\mathbf {r} ,t)&=\varepsilon _{0}\mathbf {E} (\mathbf {r} ,t)+\mathbf {P} (\mathbf {r} ,t),\\\mathbf {H} (\mathbf {r} ,t)&={\frac {1}{\mu _{0}}}\mathbf {B} (\mathbf {r} ,t)-\mathbf {M} (\mathbf {r} ,t),\end{aligned}}} where P is the polarization field and M is the magnetization field, which are defined in terms of microscopic bound charges and bound currents respectively. The macroscopic bound charge density ρb and bound current density Jb in terms of polarization P and magnetization M are then defined as ρ b = − ∇ ⋅ P , J b = ∇ × M + ∂ P ∂ t . {\displaystyle {\begin{aligned}\rho _{\text{b}}&=-\nabla \cdot \mathbf {P} ,\\\mathbf {J} _{\text{b}}&=\nabla \times \mathbf {M} +{\frac {\partial \mathbf {P} }{\partial t}}.\end{aligned}}} If we define the total, bound, and free charge and current density by ρ = ρ b + ρ f , J = J b + J f , {\displaystyle {\begin{aligned}\rho &=\rho _{\text{b}}+\rho _{\text{f}},\\\mathbf {J} &=\mathbf {J} _{\text{b}}+\mathbf {J} _{\text{f}},\end{aligned}}} and use the defining relations above to eliminate D, and H, the "macroscopic" Maxwell's equations reproduce the "microscopic" equations. === Constitutive relations === In order to apply 'Maxwell's macroscopic equations', it is necessary to specify the relations between displacement field D and the electric field E, as well as the magnetizing field H and the magnetic field B. Equivalently, we have to specify the dependence of the polarization P (hence the bound charge) and the magnetization M (hence the bound current) on the applied electric and magnetic field. The equations specifying this response are called constitutive relations. For real-world materials, the constitutive relations are rarely simple, except approximately, and usually determined by experiment. 
See the main article on constitutive relations for a fuller description.: 44–45  For materials without polarization and magnetization, the constitutive relations are (by definition): 2  D = ε 0 E , H = 1 μ 0 B , {\displaystyle \mathbf {D} =\varepsilon _{0}\mathbf {E} ,\quad \mathbf {H} ={\frac {1}{\mu _{0}}}\mathbf {B} ,} where ε0 is the permittivity of free space and μ0 the permeability of free space. Since there is no bound charge, the total and the free charge and current are equal. An alternative viewpoint on the microscopic equations is that they are the macroscopic equations together with the statement that vacuum behaves like a perfect linear "material" without additional polarization and magnetization. More generally, for linear materials the constitutive relations are: 44–45  D = ε E , H = 1 μ B , {\displaystyle \mathbf {D} =\varepsilon \mathbf {E} ,\quad \mathbf {H} ={\frac {1}{\mu }}\mathbf {B} ,} where ε is the permittivity and μ the permeability of the material. For the displacement field D the linear approximation is usually excellent because for all but the most extreme electric fields or temperatures obtainable in the laboratory (high power pulsed lasers) the interatomic electric fields of materials of the order of 1011 V/m are much higher than the external field. For the magnetizing field H {\displaystyle \mathbf {H} } , however, the linear approximation can break down in common materials like iron leading to phenomena like hysteresis. Even the linear case can have various complications, however. For homogeneous materials, ε and μ are constant throughout the material, while for inhomogeneous materials they depend on location within the material (and perhaps time).: 463  For isotropic materials, ε and μ are scalars, while for anisotropic materials (e.g. due to crystal structure) they are tensors.: 421 : 463  Materials are generally dispersive, so ε and μ depend on the frequency of any incident EM waves.: 625 : 397  Even more generally, in the case of non-linear materials (see for example nonlinear optics), D and P are not necessarily proportional to E, similarly H or M is not necessarily proportional to B. In general D and H depend on both E and B, on location and time, and possibly other physical quantities. In applications one also has to describe how the free currents and charge density behave in terms of E and B possibly coupled to other physical quantities like pressure, and the mass, number density, and velocity of charge-carrying particles. E.g., the original equations given by Maxwell (see History of Maxwell's equations) included Ohm's law in the form J f = σ E . {\displaystyle \mathbf {J} _{\text{f}}=\sigma \mathbf {E} .} == Alternative formulations == Following are some of the several other mathematical formalisms of Maxwell's equations, with the columns separating the two homogeneous Maxwell equations from the two inhomogeneous ones. Each formulation has versions directly in terms of the electric and magnetic fields, and indirectly in terms of the electrical potential φ and the vector potential A. Potentials were introduced as a convenient way to solve the homogeneous equations, but it was thought that all observable physics was contained in the electric and magnetic fields (or relativistically, the Faraday tensor). The potentials play a central role in quantum mechanics, however, and act quantum mechanically with observable consequences even when the electric and magnetic fields vanish (Aharonov–Bohm effect). Each table describes one formalism. 
See the main article for details of each formulation. The direct spacetime formulations make manifest that the Maxwell equations are relativistically invariant, where space and time are treated on equal footing. Because of this symmetry, the electric and magnetic fields are treated on equal footing and are recognized as components of the Faraday tensor. This reduces the four Maxwell equations to two, which simplifies the equations, although we can no longer use the familiar vector formulation. Maxwell equations in formulation that do not treat space and time manifestly on the same footing have Lorentz invariance as a hidden symmetry. This was a major source of inspiration for the development of relativity theory. Indeed, even the formulation that treats space and time separately is not a non-relativistic approximation and describes the same physics by simply renaming variables. For this reason the relativistic invariant equations are usually called the Maxwell equations as well. Each table below describes one formalism. In the tensor calculus formulation, the electromagnetic tensor Fαβ is an antisymmetric covariant order 2 tensor; the four-potential, Aα, is a covariant vector; the current, Jα, is a vector; the square brackets, [ ], denote antisymmetrization of indices; ∂α is the partial derivative with respect to the coordinate, xα. In Minkowski space coordinates are chosen with respect to an inertial frame; (xα) = (ct, x, y, z), so that the metric tensor used to raise and lower indices is ηαβ = diag(1, −1, −1, −1). The d'Alembert operator on Minkowski space is ◻ = ∂α∂α as in the vector formulation. In general spacetimes, the coordinate system xα is arbitrary, the covariant derivative ∇α, the Ricci tensor, Rαβ and raising and lowering of indices are defined by the Lorentzian metric, gαβ and the d'Alembert operator is defined as ◻ = ∇α∇α. The topological restriction is that the second real cohomology group of the space vanishes (see the differential form formulation for an explanation). This is violated for Minkowski space with a line removed, which can model a (flat) spacetime with a point-like monopole on the complement of the line. In the differential form formulation on arbitrary space times, F = ⁠1/2⁠Fαβ‍dxα ∧ dxβ is the electromagnetic tensor considered as a 2-form, A = Aαdxα is the potential 1-form, J = − J α ⋆ d x α {\displaystyle J=-J_{\alpha }{\star }\mathrm {d} x^{\alpha }} is the current 3-form, d is the exterior derivative, and ⋆ {\displaystyle {\star }} is the Hodge star on forms defined (up to its orientation, i.e. its sign) by the Lorentzian metric of spacetime. In the special case of 2-forms such as F, the Hodge star ⋆ {\displaystyle {\star }} depends on the metric tensor only for its local scale. This means that, as formulated, the differential form field equations are conformally invariant, but the Lorenz gauge condition breaks conformal invariance. The operator ◻ = ( − ⋆ d ⋆ d − d ⋆ d ⋆ ) {\displaystyle \Box =(-{\star }\mathrm {d} {\star }\mathrm {d} -\mathrm {d} {\star }\mathrm {d} {\star })} is the d'Alembert–Laplace–Beltrami operator on 1-forms on an arbitrary Lorentzian spacetime. The topological condition is again that the second real cohomology group is 'trivial' (meaning that its form follows from a definition). By the isomorphism with the second de Rham cohomology this condition means that every closed 2-form is exact. Other formalisms include the geometric algebra formulation and a matrix representation of Maxwell's equations. 
Historically, a quaternionic formulation was used. == Solutions == Maxwell's equations are partial differential equations that relate the electric and magnetic fields to each other and to the electric charges and currents. Often, the charges and currents are themselves dependent on the electric and magnetic fields via the Lorentz force equation and the constitutive relations. These all form a set of coupled partial differential equations which are often very difficult to solve: the solutions encompass all the diverse phenomena of classical electromagnetism. Some general remarks follow. As for any differential equation, boundary conditions and initial conditions are necessary for a unique solution. For example, even with no charges and no currents anywhere in spacetime, there are the obvious solutions for which E and B are zero or constant, but there are also non-trivial solutions corresponding to electromagnetic waves. In some cases, Maxwell's equations are solved over the whole of space, and boundary conditions are given as asymptotic limits at infinity. In other cases, Maxwell's equations are solved in a finite region of space, with appropriate conditions on the boundary of that region, for example an artificial absorbing boundary representing the rest of the universe, or periodic boundary conditions, or walls that isolate a small region from the outside world (as with a waveguide or cavity resonator). Jefimenko's equations (or the closely related Liénard–Wiechert potentials) are the explicit solution to Maxwell's equations for the electric and magnetic fields created by any given distribution of charges and currents. It assumes specific initial conditions to obtain the so-called "retarded solution", where the only fields present are the ones created by the charges. However, Jefimenko's equations are unhelpful in situations when the charges and currents are themselves affected by the fields they create. Numerical methods for differential equations can be used to compute approximate solutions of Maxwell's equations when exact solutions are impossible. These include the finite element method and finite-difference time-domain method. For more details, see Computational electromagnetics. == Overdetermination of Maxwell's equations == Maxwell's equations seem overdetermined, in that they involve six unknowns (the three components of E and B) but eight equations (one for each of the two Gauss's laws, three vector components each for Faraday's and Ampère's circuital laws). (The currents and charges are not unknowns, being freely specifiable subject to charge conservation.) This is related to a certain limited kind of redundancy in Maxwell's equations: It can be proven that any system satisfying Faraday's law and Ampère's circuital law automatically also satisfies the two Gauss's laws, as long as the system's initial condition does, and assuming conservation of charge and the nonexistence of magnetic monopoles. This explanation was first introduced by Julius Adams Stratton in 1941. Although it is possible to simply ignore the two Gauss's laws in a numerical algorithm (apart from the initial conditions), the imperfect precision of the calculations can lead to ever-increasing violations of those laws. By introducing dummy variables characterizing these violations, the four equations become not overdetermined after all. The resulting formulation can lead to more accurate algorithms that take all four laws into account. 
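As an illustrative aside (not from the article), the sketch below implements a toy one-dimensional finite-difference time-domain (FDTD) update of the two curl equations in normalized units; the grid size, source shape and Courant number are arbitrary choices. Only Faraday's and the Ampère–Maxwell laws are stepped explicitly, echoing the remark above that the Gauss laws can be carried by the initial conditions in a numerical algorithm.

```python
import numpy as np

# toy 1D FDTD (Yee) scheme in normalized units: c = 1, dx = 1, Courant number 0.5
nx, nt = 400, 300
courant = 0.5

Ez = np.zeros(nx)        # electric field at integer grid points
Hy = np.zeros(nx - 1)    # magnetic field at half-integer points between them

for n in range(nt):
    # Faraday's law: update H from the spatial difference of E
    Hy += courant * (Ez[1:] - Ez[:-1])
    # Ampere-Maxwell law (no currents): update E from the spatial difference of H
    Ez[1:-1] += courant * (Hy[1:] - Hy[:-1])
    # soft Gaussian source injected at the centre of the grid
    Ez[nx // 2] += np.exp(-((n - 30.0) / 10.0) ** 2)

# two counter-propagating pulses should now sit symmetrically about the centre
left_peak = int(np.argmax(np.abs(Ez[: nx // 2])))
right_peak = int(np.argmax(np.abs(Ez[nx // 2 :]))) + nx // 2
print("pulse peaks near cells:", left_peak, right_peak)
```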
Both identities ∇ ⋅ ∇ × B ≡ 0 , ∇ ⋅ ∇ × E ≡ 0 {\displaystyle \nabla \cdot \nabla \times \mathbf {B} \equiv 0,\nabla \cdot \nabla \times \mathbf {E} \equiv 0} , which reduce eight equations to six independent ones, are the true reason of overdetermination. Equivalently, the overdetermination can be viewed as implying conservation of electric and magnetic charge, as they are required in the derivation described above but implied by the two Gauss's laws. For linear algebraic equations, one can make 'nice' rules to rewrite the equations and unknowns. The equations can be linearly dependent. But in differential equations, and especially partial differential equations (PDEs), one needs appropriate boundary conditions, which depend in not so obvious ways on the equations. Even more, if one rewrites them in terms of vector and scalar potential, then the equations are underdetermined because of gauge fixing. == Maxwell's equations as the classical limit of QED == Maxwell's equations and the Lorentz force law (along with the rest of classical electromagnetism) are extraordinarily successful at explaining and predicting a variety of phenomena. However, they do not account for quantum effects, and so their domain of applicability is limited. Maxwell's equations are thought of as the classical limit of quantum electrodynamics (QED). Some observed electromagnetic phenomena cannot be explained with Maxwell's equations if the source of the electromagnetic fields are the classical distributions of charge and current. These include photon–photon scattering and many other phenomena related to photons or virtual photons, "nonclassical light" and quantum entanglement of electromagnetic fields (see Quantum optics). E.g. quantum cryptography cannot be described by Maxwell theory, not even approximately. The approximate nature of Maxwell's equations becomes more and more apparent when going into the extremely strong field regime (see Euler–Heisenberg Lagrangian) or to extremely small distances. Finally, Maxwell's equations cannot explain any phenomenon involving individual photons interacting with quantum matter, such as the photoelectric effect, Planck's law, the Duane–Hunt law, and single-photon light detectors. However, many such phenomena may be explained using a halfway theory of quantum matter coupled to a classical electromagnetic field, either as external field or with the expected value of the charge current and density on the right hand side of Maxwell's equations. This is known as semiclassical theory or self-field QED and was initially discovered by de Broglie and Schrödinger and later fully developed by E.T. Jaynes and A.O. Barut. == Variations == Popular variations on the Maxwell equations as a classical theory of electromagnetic fields are relatively scarce because the standard equations have stood the test of time remarkably well. === Magnetic monopoles === Maxwell's equations posit that there is electric charge, but no magnetic charge (also called magnetic monopoles), in the universe. Indeed, magnetic charge has never been observed, despite extensive searches, and may not exist. If they did exist, both Gauss's law for magnetism and Faraday's law would need to be modified, and the resulting four equations would be fully symmetric under the interchange of electric and magnetic fields.: 273–275  == See also == == Explanatory notes == == References == == Further reading == Imaeda, K. 
(1995), "Biquaternionic Formulation of Maxwell's Equations and their Solutions", in Ablamowicz, Rafał; Lounesto, Pertti (eds.), Clifford Algebras and Spinor Structures, Springer, pp. 265–280, doi:10.1007/978-94-015-8422-7_16, ISBN 978-90-481-4525-6 === Historical publications === On Faraday's Lines of Force – 1855/56. Maxwell's first paper (Part 1 & 2) – Compiled by Blaze Labs Research (PDF). On Physical Lines of Force – 1861. Maxwell's 1861 paper describing magnetic lines of force – Predecessor to 1873 Treatise. James Clerk Maxwell, "A Dynamical Theory of the Electromagnetic Field", Philosophical Transactions of the Royal Society of London 155, 459–512 (1865). (This article accompanied a December 8, 1864 presentation by Maxwell to the Royal Society.) A Dynamical Theory Of The Electromagnetic Field – 1865. Maxwell's 1865 paper describing his 20 equations, link from Google Books. J. Clerk Maxwell (1873), "A Treatise on Electricity and Magnetism": Maxwell, J. C., "A Treatise on Electricity And Magnetism" – Volume 1 – 1873 – Posner Memorial Collection – Carnegie Mellon University. Maxwell, J. C., "A Treatise on Electricity And Magnetism" – Volume 2 – 1873 – Posner Memorial Collection – Carnegie Mellon University. Developments before the theory of relativity Larmor Joseph (1897). "On a dynamical theory of the electric and luminiferous medium. Part 3, Relations with material media" . Phil. Trans. R. Soc. 190: 205–300. Lorentz Hendrik (1899). "Simplified theory of electrical and optical phenomena in moving systems" . Proc. Acad. Science Amsterdam. I: 427–443. Lorentz Hendrik (1904). "Electromagnetic phenomena in a system moving with any velocity less than that of light" . Proc. Acad. Science Amsterdam. IV: 669–678. Henri Poincaré (1900) "La théorie de Lorentz et le Principe de Réaction" (in French), Archives Néerlandaises, V, 253–278. Henri Poincaré (1902) "La Science et l'Hypothèse" (in French). Henri Poincaré (1905) "Sur la dynamique de l'électron" (in French), Comptes Rendus de l'Académie des Sciences, 140, 1504–1508. Catt, Walton and Davidson. "The History of Displacement Current" Archived 2008-05-06 at the Wayback Machine. Wireless World, March 1979. == External links == "Maxwell equations", Encyclopedia of Mathematics, EMS Press, 2001 [1994] maxwells-equations.com — An intuitive tutorial of Maxwell's equations. The Feynman Lectures on Physics Vol. II Ch. 18: The Maxwell Equations Wikiversity Page on Maxwell's Equations === Modern treatments === Electromagnetism (ch. 11), B. Crowell, Fullerton College Lecture series: Relativity and electromagnetism, R. Fitzpatrick, University of Texas at Austin Electromagnetic waves from Maxwell's equations on Project PHYSNET. MIT Video Lecture Series (36 × 50 minute lectures) (in .mp4 format) – Electricity and Magnetism Taught by Professor Walter Lewin. === Other === Silagadze, Z. K. (2002). "Feynman's derivation of Maxwell equations and extra dimensions". Annales de la Fondation Louis de Broglie. 27: 241–256. arXiv:hep-ph/0106235. Nature Milestones: Photons – Milestone 2 (1861) Maxwell's equations
Wikipedia/Maxwell_equations
In particle physics, a generation or family is a division of the elementary particles. Between generations, particles differ by their flavour quantum number and mass, but their electric and strong interactions are identical. There are three generations according to the Standard Model of particle physics. Each generation contains two types of leptons and two types of quarks. The two leptons may be classified into one with electric charge −1 (electron-like) and one that is electrically neutral (the neutrino); the two quarks may be classified into one with charge −1⁄3 (down-type) and one with charge +2⁄3 (up-type). The basic features of the quark–lepton generations or families, such as their masses and mixings, can be described by some of the proposed family symmetries. == Overview == Each member of a higher generation has greater mass than the corresponding particle of the previous generation, with the possible exception of the neutrinos (whose small but non-zero masses have not been accurately determined). For example, the first-generation electron has a mass of only 0.511 MeV/c², the second-generation muon has a mass of 106 MeV/c², and the third-generation tau has a mass of 1777 MeV/c² (almost twice as heavy as a proton). This mass hierarchy causes particles of higher generations to decay to the first generation, which explains why everyday matter (atoms) is made of particles from the first generation only. Electrons surround a nucleus made of protons and neutrons, which contain up and down quarks. The second and third generations of charged particles do not occur in normal matter and are only seen in extremely high-energy environments such as cosmic rays or particle accelerators. The term generation was first introduced by Haim Harari at the Les Houches Summer School in 1976. Neutrinos of all generations stream throughout the universe but rarely interact with other matter. It is hoped that a comprehensive understanding of the relationship between the generations of the leptons may eventually explain the ratio of masses of the fundamental particles, and shed further light on the nature of mass generally, from a quantum perspective. == Fourth generation == Fourth and further generations are considered unlikely by many (but not all) theoretical physicists. Some arguments against the possibility of a fourth generation are based on the subtle modifications of precision electroweak observables that extra generations would induce; such modifications are strongly disfavored by measurements. Some proposals instead generalize the fermion content by introducing a new quark that is an isosinglet; such a quark would generate flavour-changing neutral currents (FCNC) at tree level in the electroweak sector. Nonetheless, searches at high-energy colliders for particles from a fourth generation continue, but as yet no evidence has been observed. In such searches, fourth-generation particles are denoted by the same symbols as third-generation ones with an added prime (e.g. b′ and t′). A fourth generation with a 'light' neutrino (one with a mass less than about 45 GeV/c²) was ruled out by measurements of the decay widths of the Z boson at CERN's Large Electron–Positron Collider (LEP) as early as 1989. The lower bound for a fourth-generation neutrino (ν'τ) mass as of 2010 was about 60 GeV (millions of times larger than the upper bound for the other three neutrino masses). As of 2024, no evidence of a fourth-generation neutrino has ever been observed in neutrino oscillation studies either.
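The LEP constraint mentioned above can be illustrated with a rough counting argument: the invisible decay width of the Z boson, divided by the Standard Model partial width for a single light neutrino flavour, gives the number of light neutrino species. The figures below (an invisible width of roughly 499 MeV and about 167 MeV per flavour) are approximate values used purely for illustration.

# Approximate widths in MeV, for illustration only
gamma_invisible = 499.0       # measured invisible width of the Z boson
gamma_per_neutrino = 167.2    # Standard Model width for Z -> nu nu-bar, per light flavour

n_light_neutrinos = gamma_invisible / gamma_per_neutrino
print(round(n_light_neutrinos, 2))  # ~2.98: consistent with exactly three light neutrino generations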
Because the mass of even the third-generation (tau) neutrino ντ is extremely small (making ντ the only third-generation particle that will not readily decay outside the most energetic conditions), a fourth-generation neutrino ν'τ that follows the general rules of the known three neutrino generations should be both well within the energy reach of current particle accelerators and observable in the regular, highly predictable switching-of-generations (oscillation) that neutrinos perform. If the Koide formula continues to hold, the mass of the fourth-generation charged lepton would be 44 GeV (ruled out), and b′ and t′ should be 3.6 TeV and 84 TeV respectively (the maximum possible energy for protons in the LHC is about 6 TeV). The lower bound for fourth-generation quark (b′, t′) masses as of 2019 was 1.4 TeV from experiments at the LHC. The lower bound for a fourth-generation charged lepton (τ′) mass in 2012 was 100 GeV, with a proposed upper bound of 1.2 TeV from unitarity considerations. == Origin == The origin of multiple generations of fermions, and the particular count of 3, is an unsolved problem of physics. String theory provides a cause for multiple generations, but the particular number depends on the details of the compactification of the D-brane intersections. Additionally, E8 grand unified theories in 10 dimensions compactified on certain orbifolds down to 4D naturally contain 3 generations of matter. This includes many heterotic string theory models. In standard quantum field theory, under certain assumptions, a single fermion field can give rise to multiple fermion poles with mass ratios of around e^π ≈ 23 and e^(2π) ≈ 535, potentially explaining the large ratios of fermion masses between successive generations and their origin. The existence of precisely three generations with the correct structure was at least tentatively derived from first principles through a connection with gravity. The result implies a unification of gauge forces into SU(5). The question regarding the masses is unsolved, but this is a logically separate question, related to the Higgs sector of the theory. == See also == Grand Unified Theory Koide formula Neutrino mass hierarchy == References ==
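The Koide formula mentioned above relates the three charged-lepton masses through Q = (me + mμ + mτ) / (√me + √mμ + √mτ)², which empirically comes out very close to 2/3. A minimal sketch with rounded mass values, for illustration only:

from math import sqrt

# Charged-lepton masses in MeV/c^2 (rounded)
m_e, m_mu, m_tau = 0.511, 105.66, 1776.86

Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau)) ** 2
print(Q)             # ~0.6666..., within a few parts in 10^5 of 2/3
print(abs(Q - 2 / 3))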
Wikipedia/Generation_(particle_physics)
A chiral phenomenon is one that is not identical to its mirror image (see the article on mathematical chirality). The spin of a particle may be used to define a handedness, or helicity, for that particle, which, in the case of a massless particle, is the same as chirality. A symmetry transformation between the two is called parity transformation. Invariance under parity transformation by a Dirac fermion is called chiral symmetry. == Chirality and helicity == The helicity of a particle is positive ("right-handed") if the direction of its spin is the same as the direction of its motion. It is negative ("left-handed") if the directions of spin and motion are opposite. So a standard clock, with its spin vector defined by the rotation of its hands, has left-handed helicity if tossed with its face directed forwards. Mathematically, helicity is the sign of the projection of the spin vector onto the momentum vector: "left" is negative, "right" is positive. The chirality of a particle is more abstract: It is determined by whether the particle transforms in a right- or left-handed representation of the Poincaré group. For massless particles – photons, gluons, and (hypothetical) gravitons – chirality is the same as helicity; a given massless particle appears to spin in the same direction along its axis of motion regardless of point of view of the observer. For massive particles – such as electrons, quarks, and neutrinos – chirality and helicity must be distinguished: In the case of these particles, it is possible for an observer to change to a reference frame that is moving faster than the spinning particle is, in which case the particle will then appear to move backwards, and its helicity (which may be thought of as "apparent chirality") will be reversed. Helicity is a constant of motion, but it is not Lorentz invariant. Chirality is Lorentz invariant, but is not a constant of motion: a massive left-handed spinor, when propagating, will evolve into a right handed spinor over time, and vice versa. A massless particle moves with the speed of light, so no real observer (who must always travel at less than the speed of light) can be in any reference frame in which the particle appears to reverse its relative direction of spin, meaning that all real observers see the same helicity. Because of this, the direction of spin of massless particles is not affected by a change of inertial reference frame (a Lorentz boost) in the direction of motion of the particle, and the sign of the projection (helicity) is fixed for all reference frames: The helicity of massless particles is a relativistic invariant (a quantity whose value is the same in all inertial reference frames) and always matches the massless particle's chirality. The discovery of neutrino oscillation implies that neutrinos have mass, leaving the photon as the only confirmed massless particle; gluons are expected to also be massless, although this has not been conclusively tested. Hence, these are the only two particles now known for which helicity could be identical to chirality, of which only the photon has been confirmed by measurement. All other observed particles have mass and thus may have different helicities in different reference frames. == Chiral theories == Particle physicists have only observed or inferred left-chiral fermions and right-chiral antifermions engaging in the charged weak interaction. 
In the case of the weak interaction, which can in principle engage with both left- and right-chiral fermions, only two left-handed fermions interact. Interactions involving right-handed or opposite-handed fermions have not been shown to occur, implying that the universe has a preference for left-handed chirality. This preferential treatment of one chiral realization over another violates parity, as first noted by Chien Shiung Wu in her famous experiment known as the Wu experiment. This is a striking observation, since parity is a symmetry that holds for all other fundamental interactions. Chirality for a Dirac fermion ψ is defined through the operator γ5, which has eigenvalues ±1; the eigenvalue's sign is equal to the particle's chirality: +1 for right-handed, −1 for left-handed. Any Dirac field can thus be projected into its left- or right-handed component by acting with the projection operators ⁠1/2⁠(1 − γ5) or ⁠1/2⁠(1 + γ5) on ψ. The coupling of the charged weak interaction to fermions is proportional to the first projection operator, which is responsible for this interaction's parity symmetry violation. A common source of confusion is due to conflating the γ5, chirality operator with the helicity operator. Since the helicity of massive particles is frame-dependent, it might seem that the same particle would interact with the weak force according to one frame of reference, but not another. The resolution to this paradox is that the chirality operator is equivalent to helicity for massless fields only, for which helicity is not frame-dependent. By contrast, for massive particles, chirality is not the same as helicity, or, alternatively, helicity is not Lorentz invariant, so there is no frame dependence of the weak interaction: a particle that couples to the weak force in one frame does so in every frame. A theory that is asymmetric with respect to chiralities is called a chiral theory, while a non-chiral (i.e., parity-symmetric) theory is sometimes called a vector theory. Many pieces of the Standard Model of physics are non-chiral, which is traceable to anomaly cancellation in chiral theories. Quantum chromodynamics is an example of a vector theory, since both chiralities of all quarks appear in the theory, and couple to gluons in the same way. The electroweak theory, developed in the mid 20th century, is an example of a chiral theory. Originally, it assumed that neutrinos were massless, and assumed the existence of only left-handed neutrinos and right-handed antineutrinos. After the observation of neutrino oscillations, which implies that no fewer than two of the three neutrinos are massive, the revised theories of the electroweak interaction now include both right- and left-handed neutrinos. However, it is still a chiral theory, as it does not respect parity symmetry. The exact nature of the neutrino is still unsettled and so the electroweak theories that have been proposed are somewhat different, but most accommodate the chirality of neutrinos in the same way as was already done for all other fermions. == Chiral symmetry == Vector gauge theories with massless Dirac fermion fields ψ exhibit chiral symmetry, i.e., rotating the left-handed and the right-handed components independently makes no difference to the theory. 
We can write this as the action of rotation on the fields: ψ L → e i θ L ψ L {\displaystyle \psi _{\rm {L}}\rightarrow e^{i\theta _{\rm {L}}}\psi _{\rm {L}}} and ψ R → ψ R {\displaystyle \psi _{\rm {R}}\rightarrow \psi _{\rm {R}}} or ψ L → ψ L {\displaystyle \psi _{\rm {L}}\rightarrow \psi _{\rm {L}}} and ψ R → e i θ R ψ R . {\displaystyle \psi _{\rm {R}}\rightarrow e^{i\theta _{\rm {R}}}\psi _{\rm {R}}.} With N flavors, we have unitary rotations instead: U(N)L × U(N)R. More generally, we write the right-handed and left-handed states as a projection operator acting on a spinor. The right-handed and left-handed projection operators are P R = 1 + γ 5 2 {\displaystyle P_{\rm {R}}={\frac {1+\gamma ^{5}}{2}}} and P L = 1 − γ 5 2 {\displaystyle P_{\rm {L}}={\frac {1-\gamma ^{5}}{2}}} Massive fermions do not exhibit chiral symmetry, as the mass term in the Lagrangian, mψψ, breaks chiral symmetry explicitly. Spontaneous chiral symmetry breaking may also occur in some theories, as it most notably does in quantum chromodynamics. The chiral symmetry transformation can be divided into a component that treats the left-handed and the right-handed parts equally, known as vector symmetry, and a component that actually treats them differently, known as axial symmetry. (cf. Current algebra.) A scalar field model encoding chiral symmetry and its breaking is the chiral model. The most common application is expressed as equal treatment of clockwise and counter-clockwise rotations from a fixed frame of reference. The general principle is often referred to by the name chiral symmetry. The rule is absolutely valid in the classical mechanics of Newton and Einstein, but results from quantum mechanical experiments show a difference in the behavior of left-chiral versus right-chiral subatomic particles. === Example: u and d quarks in QCD === Consider quantum chromodynamics (QCD) with two massless quarks u and d (massive fermions do not exhibit chiral symmetry). The Lagrangian reads L = u ¯ i ⧸ D u + d ¯ i ⧸ D d + L g l u o n s . {\displaystyle {\mathcal {L}}={\overline {u}}\,i\displaystyle {\not }D\,u+{\overline {d}}\,i\displaystyle {\not }D\,d+{\mathcal {L}}_{\mathrm {gluons} }~.} In terms of left-handed and right-handed spinors, it reads L = u ¯ L i ⧸ D u L + u ¯ R i ⧸ D u R + d ¯ L i ⧸ D d L + d ¯ R i ⧸ D d R + L g l u o n s . {\displaystyle {\mathcal {L}}={\overline {u}}_{\rm {L}}\,i\displaystyle {\not }D\,u_{\rm {L}}+{\overline {u}}_{\rm {R}}\,i\displaystyle {\not }D\,u_{\rm {R}}+{\overline {d}}_{\rm {L}}\,i\displaystyle {\not }D\,d_{\rm {L}}+{\overline {d}}_{\rm {R}}\,i\displaystyle {\not }D\,d_{\rm {R}}+{\mathcal {L}}_{\mathrm {gluons} }~.} (Here, i is the imaginary unit and ⧸ D {\displaystyle \displaystyle {\not }D} the Dirac operator.) Defining q = [ u d ] , {\displaystyle q={\begin{bmatrix}u\\d\end{bmatrix}},} it can be written as L = q ¯ L i ⧸ D q L + q ¯ R i ⧸ D q R + L g l u o n s . {\displaystyle {\mathcal {L}}={\overline {q}}_{\rm {L}}\,i\displaystyle {\not }D\,q_{\rm {L}}+{\overline {q}}_{\rm {R}}\,i\displaystyle {\not }D\,q_{\rm {R}}+{\mathcal {L}}_{\mathrm {gluons} }~.} The Lagrangian is unchanged under a rotation of qL by any 2×2 unitary matrix L, and qR by any 2×2 unitary matrix R. This symmetry of the Lagrangian is called flavor chiral symmetry, and denoted as U(2)L × U(2)R. It decomposes into S U ( 2 ) L × S U ( 2 ) R × U ( 1 ) V × U ( 1 ) A . 
{\displaystyle \mathrm {SU} (2)_{\text{L}}\times \mathrm {SU} (2)_{\text{R}}\times \mathrm {U} (1)_{V}\times \mathrm {U} (1)_{A}~.} The singlet vector symmetry, U(1)V, acts as q L → e i θ ( x ) q L q R → e i θ ( x ) q R , {\displaystyle q_{\text{L}}\rightarrow e^{i\theta (x)}q_{\text{L}}\qquad q_{\text{R}}\rightarrow e^{i\theta (x)}q_{\text{R}}~,} and thus invariant under U(1) gauge symmetry. This corresponds to baryon number conservation. The singlet axial group U(1)A transforms as the following global transformation q L → e i θ q L q R → e − i θ q R . {\displaystyle q_{\text{L}}\rightarrow e^{i\theta }q_{\text{L}}\qquad q_{\text{R}}\rightarrow e^{-i\theta }q_{\text{R}}~.} However, it does not correspond to a conserved quantity, because the associated axial current is not conserved. It is explicitly violated by a quantum anomaly. The remaining chiral symmetry SU(2)L × SU(2)R turns out to be spontaneously broken by a quark condensate ⟨ q ¯ R a q L b ⟩ = v δ a b {\displaystyle \textstyle \langle {\bar {q}}_{\text{R}}^{a}q_{\text{L}}^{b}\rangle =v\delta ^{ab}} formed through nonperturbative action of QCD gluons, into the diagonal vector subgroup SU(2)V known as isospin. The Goldstone bosons corresponding to the three broken generators are the three pions. As a consequence, the effective theory of QCD bound states like the baryons, must now include mass terms for them, ostensibly disallowed by unbroken chiral symmetry. Thus, this chiral symmetry breaking induces the bulk of hadron masses, such as those for the nucleons — in effect, the bulk of the mass of all visible matter. In the real world, because of the nonvanishing and differing masses of the quarks, SU(2)L × SU(2)R is only an approximate symmetry to begin with, and therefore the pions are not massless, but have small masses: they are pseudo-Goldstone bosons. === More flavors === For more "light" quark species, N flavors in general, the corresponding chiral symmetries are U(N)L × U(N)R′, decomposing into S U ( N ) L × S U ( N ) R × U ( 1 ) V × U ( 1 ) A , {\displaystyle \mathrm {SU} (N)_{\text{L}}\times \mathrm {SU} (N)_{\text{R}}\times \mathrm {U} (1)_{V}\times \mathrm {U} (1)_{A}~,} and exhibiting a very analogous chiral symmetry breaking pattern. Most usually, N = 3 is taken, the u, d, and s quarks taken to be light (the eightfold way), so then approximately massless for the symmetry to be meaningful to a lowest order, while the other three quarks are sufficiently heavy to barely have a residual chiral symmetry be visible for practical purposes. === An application in particle physics === In theoretical physics, the electroweak model breaks parity maximally. All its fermions are chiral Weyl fermions, which means that the charged weak gauge bosons W+ and W− only couple to left-handed quarks and leptons. Some theorists found this objectionable, and so conjectured a GUT extension of the weak force which has new, high energy W′ and Z′ bosons, which do couple with right handed quarks and leptons: S U ( 2 ) W × U ( 1 ) Y Z 2 {\displaystyle {\frac {\mathrm {SU} (2)_{\text{W}}\times \mathrm {U} (1)_{Y}}{\mathbb {Z} _{2}}}} to S U ( 2 ) L × S U ( 2 ) R × U ( 1 ) B − L Z 2 . {\displaystyle {\frac {\mathrm {SU} (2)_{\text{L}}\times \mathrm {SU} (2)_{\text{R}}\times \mathrm {U} (1)_{B-L}}{\mathbb {Z} _{2}}}.} Here, SU(2)L (pronounced "SU(2) left") is SU(2)W from above, while B−L is the baryon number minus the lepton number. 
The electric charge formula in this model is given by Q = T 3 L + T 3 R + B − L 2 ; {\displaystyle Q=T_{\rm {3L}}+T_{\rm {3R}}+{\frac {B-L}{2}}\,;} where T 3 L {\displaystyle \ T_{\rm {3L}}\ } and T 3 R {\displaystyle \ T_{\rm {3R}}\ } are the left and right weak isospin values of the fields in the theory. There is also the chromodynamic SU(3)C. The idea was to restore parity by introducing a left-right symmetry. This is a group extension of Z 2 {\displaystyle \mathbb {Z} _{2}} (the left-right symmetry) by S U ( 3 ) C × S U ( 2 ) L × S U ( 2 ) R × U ( 1 ) B − L Z 6 {\displaystyle {\frac {\mathrm {SU} (3)_{\text{C}}\times \mathrm {SU} (2)_{\text{L}}\times \mathrm {SU} (2)_{\text{R}}\times \mathrm {U} (1)_{B-L}}{\mathbb {Z} _{6}}}} to the semidirect product S U ( 3 ) C × S U ( 2 ) L × S U ( 2 ) R × U ( 1 ) B − L Z 6 ⋊ Z 2 . {\displaystyle {\frac {\mathrm {SU} (3)_{\text{C}}\times \mathrm {SU} (2)_{\text{L}}\times \mathrm {SU} (2)_{\text{R}}\times \mathrm {U} (1)_{B-L}}{\mathbb {Z} _{6}}}\rtimes \mathbb {Z} _{2}\ .} This has two connected components where Z 2 {\displaystyle \mathbb {Z} _{2}} acts as an automorphism, which is the composition of an involutive outer automorphism of SU(3)C with the interchange of the left and right copies of SU(2) with the reversal of U(1)B−L. It was shown by Mohapatra & Senjanovic (1975) that left-right symmetry can be spontaneously broken to give a chiral low energy theory, which is the Standard Model of Glashow, Weinberg, and Salam, and also connects the small observed neutrino masses to the breaking of left-right symmetry via the seesaw mechanism. In this setting, the chiral quarks ( 3 , 2 , 1 ) + 1 3 {\displaystyle (3,2,1)_{+{1 \over 3}}} and ( 3 ¯ , 1 , 2 ) − 1 3 {\displaystyle \left({\bar {3}},1,2\right)_{-{1 \over 3}}} are unified into an irreducible representation ("irrep") ( 3 , 2 , 1 ) + 1 3 ⊕ ( 3 ¯ , 1 , 2 ) − 1 3 . {\displaystyle (3,2,1)_{+{1 \over 3}}\oplus \left({\bar {3}},1,2\right)_{-{1 \over 3}}\ .} The leptons are also unified into an irreducible representation ( 1 , 2 , 1 ) − 1 ⊕ ( 1 , 1 , 2 ) + 1 . {\displaystyle (1,2,1)_{-1}\oplus (1,1,2)_{+1}\ .} The Higgs bosons needed to implement the breaking of left-right symmetry down to the Standard Model are ( 1 , 3 , 1 ) 2 ⊕ ( 1 , 1 , 3 ) 2 . {\displaystyle (1,3,1)_{2}\oplus (1,1,3)_{2}\ .} This then provides three sterile neutrinos which are perfectly consistent with current neutrino oscillation data. Within the seesaw mechanism, the sterile neutrinos become superheavy without affecting physics at low energies. Because the left–right symmetry is spontaneously broken, left–right models predict domain walls. This left-right symmetry idea first appeared in the Pati–Salam model (1974) and Mohapatra–Pati models (1975). == Chirality in materials science == Chirality in other branches of physics is often used for classifying and studying the properties of bodies and materials under external influences. Classification by chirality, as a special case of symmetry classification, allows for a better understanding of first-principles construction of molecules, crystals, quasicrystals, and more. An example is the homochirality of amino acids in all known forms of life, which can be reproduced in physical experiments under external influence. Optical activity (including circular dichroism and magnetic circular dichroism) of materials is determined by their chirality. Chiral physical systems are characterized by the absence of invariance under the parity operator. 
An ambiguity arises in defining chirality in physics depending on whether one compares directions of motion using the reflection or spatial inversion operation. Accordingly, one distinguishes between "true" chirality (which is invariant under the time-reversal operation) and "false" chirality (non-invariant under time reversal). Many physical quantities change sign under the time-reversal operation (e.g., velocity, power, electric current, magnetization). Accordingly, "false" chirality is so typical in physics that the term can be misleading, and it is clearer to speak of T-invariant and T-non-invariant chirality. Effects related to chirality are described using pseudoscalar or axial vector physical quantities in general, and particularly, in magnetically ordered media, are described using time-direction-dependent chirality. This approach is formalized using dichromatic symmetry groups. T-invariant chirality corresponds to the absence in the symmetry group of any symmetry operations that include spatial inversion 1 ¯ {\displaystyle {\bar {1}}} or reflection m, according to international notation. The criterion for T-non-invariant chirality is the presence of these symmetry operations, but only when combined with time reversal 1 ′ {\displaystyle 1'} , such as operations m′ or 1 ¯ ′ {\displaystyle {\bar {1}}'} . At the level of atomic structure of materials, one distinguishes vector, scalar, and other types of chirality depending on the direction/sign of triple and vector products of spins. == See also == Electroweak theory Chirality (chemistry) Chirality (mathematics) Chiral symmetry breaking Handedness Spinors Fermionic field § Dirac fields Sigma model Chiral model == Notes == == References == Walter Greiner; Berndt Müller (2000). Gauge Theory of Weak Interactions. Springer. ISBN 3-540-67672-4. Gordon L. Kane (1987). Modern Elementary Particle Physics. Perseus Books. ISBN 0-201-11749-5. Kondepudi, Dilip K.; Hegstrom, Roger A. (January 1990). "The Handedness of the Universe". Scientific American. 262 (1): 108–115. Bibcode:1990SciAm.262a.108H. doi:10.1038/scientificamerican0190-108. Winters, Jeffrey (November 1995). "Looking for the Right Hand". Discover. Retrieved 12 September 2015. == External links == History of science: parity violation Helicity, Chirality, Mass, and the Higgs (Quantum Diaries blog) Chirality vs helicity chart (Robert D. Klauber)
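The statement above that a Dirac mass term breaks chiral symmetry explicitly can be made concrete in one line. Writing ψ = ψL + ψR with ψL = PLψ and ψR = PRψ, and using the fact that γ5 anticommutes with γ0 (so that the conjugate of a right-handed field carries the left-handed projector), the mass term mixes the two chiralities: m ψ̄ ψ = m ψ̄ ( P L + P R ) ψ = m ( ψ̄ R ψ L + ψ̄ L ψ R ) {\displaystyle m{\bar {\psi }}\psi =m{\bar {\psi }}\left(P_{\rm {L}}+P_{\rm {R}}\right)\psi =m\left({\bar {\psi }}_{\rm {R}}\psi _{\rm {L}}+{\bar {\psi }}_{\rm {L}}\psi _{\rm {R}}\right)} so any nonzero mass couples the left- and right-handed components, which is why they can no longer be rotated independently.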
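A small numerical sketch can also make the helicity and chirality definitions used in this article concrete: helicity is the sign of the spin–momentum projection, while chirality is the γ5 eigenvalue, with (1 ± γ5)/2 acting as projectors. The Dirac representation of γ5 and the use of numpy are assumptions of this illustration.

import numpy as np

def helicity_sign(spin, momentum):
    # Sign of the projection of the spin vector onto the momentum vector
    return np.sign(np.dot(spin, momentum))

p = np.array([0.0, 0.0, 1.0])                          # motion along +z
print(helicity_sign(np.array([0.0, 0.0, 0.5]), p))     #  1.0: right-handed helicity
print(helicity_sign(np.array([0.0, 0.0, -0.5]), p))    # -1.0: left-handed helicity

# Chirality projectors in the Dirac representation, where gamma5 has off-diagonal identity blocks
I2, Z2 = np.eye(2), np.zeros((2, 2))
gamma5 = np.block([[Z2, I2], [I2, Z2]])
P_R = (np.eye(4) + gamma5) / 2
P_L = (np.eye(4) - gamma5) / 2

assert np.allclose(P_L @ P_L, P_L) and np.allclose(P_R @ P_R, P_R)        # idempotent
assert np.allclose(P_L @ P_R, np.zeros((4, 4)))                            # mutually orthogonal
assert np.allclose(P_L + P_R, np.eye(4))                                   # complete
assert np.allclose(gamma5 @ P_R, P_R) and np.allclose(gamma5 @ P_L, -P_L)  # chirality eigenvalues +1 / -1
print("helicity and chirality checks passed")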
Wikipedia/Left-right_model
The grand unification energy Λ G U T {\displaystyle \Lambda _{GUT}} , or the GUT scale, is the energy level above which, it is believed, the electromagnetic force, weak force, and strong force become equal in strength and unify to one force governed by a simple Lie group. The exact value of the grand unification energy (if grand unification is indeed realized in nature) depends on the precise physics present at shorter distance scales not yet explored by experiments. If one assumes the Desert and supersymmetry, it is at around 10^25 eV or 10 16 {\displaystyle 10^{16}} GeV (≈ 1.6 megajoules). Some Grand Unified Theories (GUTs) can predict the grand unification energy but, usually, with large uncertainties due to model-dependent details such as the choice of the gauge group, the Higgs sector, the matter content or further free parameters. Furthermore, there is at present no agreed-upon minimal GUT. The unification of the electroweak force and the strong force with the gravitational force in a so-called "Theory of Everything" requires an even higher energy level which is generally assumed to be close to the Planck scale of 10 19 {\displaystyle 10^{19}} GeV. In theory, at such short distances, gravity becomes comparable in strength to the other three forces of nature known to date. This statement is modified if there exist additional dimensions of space at intermediate scales. In this case, the strength of gravitational interactions increases faster at smaller distances, and the energy scale at which all known forces of nature unify can be considerably lower. This effect is exploited in models of large extra dimensions. The most powerful collider to date, the Large Hadron Collider (LHC), is designed to reach about 10^4 GeV in proton–proton collisions. The scale of 10^16 GeV is only a few orders of magnitude below the Planck energy of 10^19 GeV, and thus not within reach of man-made, Earth-bound colliders. == See also == Desert (particle physics) Standard Model Timeline of the Big Bang == References ==
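The energy equivalences quoted above are easy to check directly: 10^16 GeV corresponds to roughly 1.6 megajoules, and the GUT scale sits about three orders of magnitude below the Planck energy. A minimal sketch; the Planck energy value of about 1.22×10^19 GeV is an assumed round figure.

eV_to_J = 1.602176634e-19        # joules per electronvolt (exact by definition)

gut_scale_GeV = 1e16
planck_energy_GeV = 1.22e19      # approximate Planck energy

print(gut_scale_GeV * 1e9 * eV_to_J)      # ~1.6e6 J, i.e. about 1.6 megajoules
print(planck_energy_GeV / gut_scale_GeV)  # ~1.2e3: roughly three orders of magnitude apart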
Wikipedia/Grand_unification_energy
In particle physics, SO(10) refers to a grand unified theory (GUT) based on the spin group Spin(10). The shortened name SO(10) is conventional among physicists, and derives from the Lie algebra or less precisely the Lie group of SO(10), which is a special orthogonal group that is double covered by Spin(10). SO(10) subsumes the Georgi–Glashow and Pati–Salam models, and unifies all fermions in a generation into a single field. This requires 12 new gauge bosons, in addition to the 12 of SU(5) and 9 of SU(4)×SU(2)×SU(2). == History == Before the SU(5) theory behind the Georgi–Glashow model, Harald Fritzsch and Peter Minkowski, and independently Howard Georgi, found that all the matter contents are incorporated into a single representation, spinorial 16 of SO(10). However, Georgi found the SO(10) theory just a few hours before finding SU(5) at the end of 1973. == Important subgroups == It has the branching rules to [SU(5)×U(1)χ]/Z5. 45 → 24 0 ⊕ 10 − 4 ⊕ 10 ¯ 4 ⊕ 1 0 {\displaystyle 45\rightarrow 24_{0}\oplus 10_{-4}\oplus {\overline {10}}_{4}\oplus 1_{0}} 16 → 10 1 ⊕ 5 ¯ − 3 ⊕ 1 5 {\displaystyle 16\rightarrow 10_{1}\oplus {\bar {5}}_{-3}\oplus 1_{5}} 10 → 5 − 2 ⊕ 5 ¯ 2 . {\displaystyle 10\rightarrow 5_{-2}\oplus {\bar {5}}_{2}.} If the hypercharge is contained within SU(5), this is the conventional Georgi–Glashow model, with the 16 as the matter fields, the 10 as the electroweak Higgs field and the 24 within the 45 as the GUT Higgs field. The superpotential may then include renormalizable terms of the form Tr(45 ⋅ 45); Tr(45 ⋅ 45 ⋅ 45); 10 ⋅ 45 ⋅ 10, 10 ⋅ 16* ⋅ 16 and 16* ⋅ 16. The first three are responsible to the gauge symmetry breaking at low energies and give the Higgs mass, and the latter two give the matter particles masses and their Yukawa couplings to the Higgs. There is another possible branching, under which the hypercharge is a linear combination of an SU(5) generator and χ. This is known as flipped SU(5). Another important subgroup is either [SU(4) × SU(2)L × SU(2)R]/Z2 or Z2 ⋊ [SU(4) × SU(2)L × SU(2)R]/Z2 depending upon whether or not the left-right symmetry is broken, yielding the Pati–Salam model, whose branching rule is 45 → ( 15 , 1 , 1 ) ⊕ ( 6 , 2 , 2 ) ⊕ ( 1 , 3 , 1 ) ⊕ ( 1 , 1 , 3 ) {\displaystyle 45\rightarrow (15,1,1)\oplus (6,2,2)\oplus (1,3,1)\oplus (1,1,3)} 16 → ( 4 , 2 , 1 ) ⊕ ( 4 ¯ , 1 , 2 ) . {\displaystyle 16\rightarrow (4,2,1)\oplus ({\bar {4}},1,2).} == Spontaneous symmetry breaking == The symmetry breaking of SO(10) is usually done with a combination of (( a 45H OR a 54H) AND ((a 16H AND a 16 ¯ H {\displaystyle {\overline {16}}_{H}} ) OR (a 126H AND a 126 ¯ H {\displaystyle {\overline {126}}_{H}} )) ). Let's say we choose a 54H. When this Higgs field acquires a GUT scale VEV, we have a symmetry breaking to Z2 ⋊ [SU(4) × SU(2)L × SU(2)R]/Z2, i.e. the Pati–Salam model with a Z2 left-right symmetry. If we have a 45H instead, this Higgs field can acquire any VEV in a two dimensional subspace without breaking the standard model. Depending on the direction of this linear combination, we can break the symmetry to SU(5)×U(1), the Georgi–Glashow model with a U(1) (diag(1,1,1,1,1,-1,-1,-1,-1,-1)), flipped SU(5) (diag(1,1,1,-1,-1,-1,-1,-1,1,1)), SU(4)×SU(2)×U(1) (diag(0,0,0,1,1,0,0,0,-1,-1)), the minimal left-right model (diag(1,1,1,0,0,-1,-1,-1,0,0)) or SU(3)×SU(2)×U(1)×U(1) for any other nonzero VEV. The choice diag(1,1,1,0,0,-1,-1,-1,0,0) is called the Dimopoulos-Wilczek mechanism aka the "missing VEV mechanism" and it is proportional to B−L. 
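A quick bookkeeping check of the [SU(5)×U(1)χ]/Z5 branching rules listed above (see the sketch below): the dimensions of the pieces on the right-hand side must add up to the dimension of the parent SO(10) representation. This is only a dimension count, not a derivation of the branchings themselves.

# Dimensions of the SU(5) pieces in the branchings quoted above
branchings = {
    45: [24, 10, 10, 1],   # 24_0 + 10_-4 + 10bar_4 + 1_0
    16: [10, 5, 1],        # 10_1 + 5bar_-3 + 1_5
    10: [5, 5],            # 5_-2 + 5bar_2
}

for parent_dim, pieces in branchings.items():
    assert sum(pieces) == parent_dim
    print(parent_dim, "=", " + ".join(map(str, pieces)))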
The choice of a 16H and a 16 ¯ H {\displaystyle {\overline {16}}_{H}} breaks the gauge group down to the Georgi–Glashow SU(5). The same comment applies to the choice of a 126H and a 126 ¯ H {\displaystyle {\overline {126}}_{H}} . It is the combination of BOTH a 45/54 and a 16/ 16 ¯ {\displaystyle {\overline {16}}} or 126/ 126 ¯ {\displaystyle {\overline {126}}} which breaks SO(10) down to the Standard Model. == The electroweak Higgs and the doublet-triplet splitting problem == The electroweak Higgs doublets come from an SO(10) 10H. Unfortunately, this same 10 also contains triplets. The masses of the doublets have to be stabilized at the electroweak scale, which is many orders of magnitude smaller than the GUT scale whereas the triplets have to be really heavy in order to prevent triplet-mediated proton decays. See doublet-triplet splitting problem. Among the solutions for it is the Dimopoulos-Wilczek mechanism, or the choice of diag(1,1,1,0,0,-1,-1,-1,0,0) of <45>. Unfortunately, this is not stable once the 16/ 16 ¯ {\displaystyle {\overline {16}}} or 126/ 126 ¯ {\displaystyle {\overline {126}}} sector interacts with the 45 sector. == Content == === Matter === The matter representations come in three copies (generations) of the 16 representation. The Yukawa coupling is 10H 16f 16f. This includes a right-handed neutrino. One may either include three copies of singlet representations φ and a Yukawa coupling < 16 ¯ H > 16 f ϕ {\displaystyle <{\overline {16}}_{H}>16_{f}\phi } (the "double seesaw mechanism"); or else, add the Yukawa interaction < 126 ¯ H > 16 f 16 f {\displaystyle <{\overline {126}}_{H}>16_{f}16_{f}} or add the nonrenormalizable coupling < 16 ¯ H >< 16 ¯ H > 16 f 16 f {\displaystyle <{\overline {16}}_{H}><{\overline {16}}_{H}>16_{f}16_{f}} . See seesaw mechanism. The 16f field branches to [SU(5)×U(1)χ]/Z5 and SU(4) × SU(2)L × SU(2)R as 16 → 10 1 ⊕ 5 ¯ − 3 ⊕ 1 5 {\displaystyle 16\rightarrow 10_{1}\oplus {\bar {5}}_{-3}\oplus 1_{5}} 16 → ( 4 , 2 , 1 ) ⊕ ( 4 ¯ , 1 , 2 ) . {\displaystyle 16\rightarrow (4,2,1)\oplus ({\bar {4}},1,2).} === Gauge fields === The 45 field branches to [SU(5)×U(1)χ]/Z5 and SU(4) × SU(2)L × SU(2)R as 45 → 24 0 ⊕ 10 − 4 ⊕ 10 ¯ 4 ⊕ 1 0 {\displaystyle 45\rightarrow 24_{0}\oplus 10_{-4}\oplus {\overline {10}}_{4}\oplus 1_{0}} 45 → ( 15 , 1 , 1 ) ⊕ ( 6 , 2 , 2 ) ⊕ ( 1 , 3 , 1 ) ⊕ ( 1 , 1 , 3 ) {\displaystyle 45\rightarrow (15,1,1)\oplus (6,2,2)\oplus (1,3,1)\oplus (1,1,3)} and to the standard model [SU(3)C × SU(2)L × U(1)Y]/Z6 as 45 → ( 8 , 1 ) 0 ⊕ ( 1 , 3 ) 0 ⊕ ( 1 , 1 ) 0 ⊕ ( 3 , 2 ) − 5 6 ⊕ ( 3 ¯ , 2 ) 5 6 ⊕ ( 3 , 1 ) 2 3 ⊕ ( 3 ¯ , 1 ) − 2 3 ⊕ ( 1 , 1 ) 1 ⊕ ( 1 , 1 ) − 1 ⊕ ( 1 , 1 ) 0 ⊕ ( 3 , 2 ) 1 6 ⊕ ( 3 ¯ , 2 ) − 1 6 . {\displaystyle {\begin{aligned}45\rightarrow &(8,1)_{0}\oplus (1,3)_{0}\oplus (1,1)_{0}\oplus \\&(3,2)_{-{\frac {5}{6}}}\oplus ({\bar {3}},2)_{\frac {5}{6}}\oplus \\&(3,1)_{\frac {2}{3}}\oplus ({\bar {3}},1)_{-{\frac {2}{3}}}\oplus (1,1)_{1}\oplus (1,1)_{-1}\oplus (1,1)_{0}\oplus \\&(3,2)_{\frac {1}{6}}\oplus ({\bar {3}},2)_{-{\frac {1}{6}}}.\\\end{aligned}}} The four lines are the SU(3)C, SU(2)L, and U(1)B−L bosons; the SU(5) leptoquarks which don't mutate X charge; the Pati-Salam leptoquarks and SU(2)R bosons; and the new SO(10) leptoquarks. (The standard electroweak U(1)Y is a linear combination of the (1,1)0 bosons.) == Proton decay == These graphics refer to the X bosons and Higgs bosons. Note that SO(10) contains both the Georgi–Glashow SU(5) and flipped SU(5). 
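To make concrete the statement that a single spinorial 16 holds one full generation, one can count Standard Model Weyl states, including the right-handed neutrino that SO(10) requires. The multiplicities below are the usual colour and weak-isospin dimensions; this is an illustrative tally, not material from the article's sources.

# (field, colour multiplicity, weak-isospin multiplicity) for one generation,
# written in terms of left-handed Weyl fermions (conjugates used for right-handed fields)
generation = [
    ("Q = (u_L, d_L)",  3, 2),   # quark doublet
    ("u^c",             3, 1),
    ("d^c",             3, 1),
    ("L = (nu_L, e_L)", 1, 2),   # lepton doublet
    ("e^c",             1, 1),
    ("nu^c",            1, 1),   # right-handed neutrino, required to fill the 16
]

print(sum(colour * isospin for _, colour, isospin in generation))  # 16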
== Freedom from local and global anomalies == It has long been known that the SO(10) model is free from all perturbative local anomalies, computable by Feynman diagrams. However, it only became clear in 2018 that the SO(10) model is also free from all nonperturbative global anomalies on non-spin manifolds, an important check for confirming the consistency of the SO(10) grand unified theory, with a Spin(10) gauge group and chiral fermions in the 16-dimensional spinor representations, defined on non-spin manifolds. == See also == Flipped SO(10) == Notes ==
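The anomaly freedom referred to above can be illustrated at the simplest perturbative level: for one Standard Model generation (with hypercharge normalized so that Q = T3 + Y), both the mixed gravitational–U(1)Y and the cubic U(1)Y anomaly coefficients sum to zero. This small check is illustrative only and is not the nonperturbative global-anomaly analysis discussed in the text.

from fractions import Fraction as F

# (hypercharge Y, number of Weyl states) per left-handed fermion multiplet in one generation
fermions = [
    (F(1, 6),  6),   # quark doublet Q: 3 colours x 2 isospin states
    (F(-2, 3), 3),   # u^c
    (F(1, 3),  3),   # d^c
    (F(-1, 2), 2),   # lepton doublet L
    (F(1, 1),  1),   # e^c
    (F(0, 1),  1),   # nu^c (hypercharge zero, does not contribute)
]

print(sum(n * y for y, n in fermions))     # 0: mixed gravitational-U(1)_Y anomaly cancels
print(sum(n * y**3 for y, n in fermions))  # 0: cubic U(1)_Y anomaly cancels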
Wikipedia/SO(10)_(physics)
The Islamic sciences (Arabic: علوم الدين, romanized: ʿulūm al-dīn, lit. 'the sciences of religion') are a set of traditionally defined religious sciences practiced by Islamic scholars (ʿulamāʾ), aimed at the construction and interpretation of Islamic religious knowledge. == Different sciences == These sciences include: ʿIlm al-fiqh: Islamic jurisprudence ʿIlm al-ḥadīth: the study of the authenticity of Prophetic traditions or hadith ʿIlm al-rijāl: the biographical study of hadith transmitters with the purpose of evaluating their trustworthiness ʿIlm al-kalām (sometimes also called uṣūl al-dīn, "the roots of religion"): speculative theology / and some reasoning ʿIlm al-lugha: Arabic grammar ʿIlm al-tafsīr: interpretation of the Qur'an ʿIlm al-naskh: the study of abrogation (parts of the Qur'an which supersede or cancel other parts) ʿIlm al-tajwīd: rules for the proper recitation of the Qur'an ʿIlm al-qirāʾāt: on the various ways in which the Qur'an can be recited ʿIlm ākhir al-zamān: Islamic eschatology (on the end times and the Day of Resurrection (yawm al-qiyāma)) ʿIlm al-akhlaq: moral ethics was an important subject for Muslim intellectuals in medieval Islam. == In Shiʿi Islam == Shiʿi Islam Many of the same subjects are studied at Shiʿi seminaries (known as hawza), but there are some differences: Falsafa (Islamic philosophy) Fiqh (jurisprudence) 'Ilm al-Hadith (traditions) Ilm al-Kalam (theology) 'Ilm ar-Rijal (evaluation of biographies) ʿIrfān (Islamic mysticism) Manṭiq (Logic) Lugha (language studies) Tafsir al-Qur'an (interpretation of the Qur'an) Tarikh (history) Ulum al-Qur'an (Qur'an sciences) Usul al-Fiqh (principles of jurisprudence) == According to Abu Hamid Al-Ghazali == The celebrated Islamic scholar Abu Hamid Al-Ghazali wrote on Islamic sciences in his well known book The Revival of Religious Sciences (Ihya `ulum al‑din). He argued that a Muslim has a religious obligation (wajib) to know whatever aspects of religious science are necessary for them to obey Shari'ah in doing whatever work it is they do. So, for example, someone working in animal husbandry should know rules concerning zakat; a merchant "doing business in an usurious environment", should learn rules about riba so as "to effectively avoid it". Sciences whose knowledge is wajib kifa'i (must be known by some people in society, although once enough people have met the obligation, the rest of the population is relieved of it). Al‑Ghazali considers wajib kifa'i religious sciences to be classified into four groups: Usul (principles; i.e. the Qur’an, the sunnah, ijma` or consensus and the traditions of the Prophet's companions) Furu` (secondary matters; i.e. problems of jurisprudence, ethics and mystical experience) Introductory studies (Arabic grammar, syntax, etc.) Complementary studies (recitation and interpretation of the Qur’an, study of the principles of jurisprudence, `ilm al‑rijal or biographical research about narrators of Islamic traditions etc.) Al‑Ghazzali aserts that not all religious sciences are "praiseworthy" (mahmud), as some proport to be "oriented towards the Shari'ah but actually deviate from its teachings". These are known as "undesirable" (madhmum). == See also == List of contemporary Islamic scholars Ulama Islamic advice literature == References == === Works cited === Campo, Juan E. (2009). "Ethics and morality". Encyclopedia of Islam. pp. 214–216. ISBN 9781438126968. Retrieved 21 February 2022. Gilliot, Cl.; Repp, R.C; Nizami, K.A.; Hooker, M.B.; Lin, Chang-Kuan; Hunwick, J.O. (1960–2007). 
"ʿUlamāʾ". In Bearman, P.; Bianquis, Th.; Bosworth, C.E.; van Donzel, E.; Heinrichs, W.P. (eds.). Encyclopaedia of Islam, Second Edition. doi:10.1163/1573-3912_islam_COM_1278. Gimaret, D. (1960–2007). "Uṣūl al-Dīn". In Bearman, P.; Bianquis, Th.; Bosworth, C.E.; van Donzel, E.; Heinrichs, W.P. (eds.). Encyclopaedia of Islam, Second Edition. doi:10.1163/1573-3912_islam_SIM_7760. Schmidtke, Sabine (2016). "Introduction". In Schmidtke, Sabine (ed.). The Oxford Handbook of Islamic Theology. Oxford: Oxford University Press. pp. 1–26. doi:10.1093/oxfordhb/9780199696703.013.48.
Wikipedia/Islamic_sciences
A drug is any chemical substance other than a nutrient or an essential dietary ingredient, which, when administered to a living organism, produces a biological effect. Consumption of drugs can be via inhalation, injection, smoking, ingestion, absorption via a patch on the skin, suppository, or dissolution under the tongue. In pharmacology, a drug is a chemical substance, typically of known structure, which, when administered to a living organism, produces a biological effect. A pharmaceutical drug, also called a medication or medicine, is a chemical substance used to treat, cure, prevent, or diagnose a disease or to promote well-being. Traditionally drugs were obtained through extraction from medicinal plants, but more recently also by organic synthesis. Pharmaceutical drugs may be used for a limited duration, or on a regular basis for chronic disorders. == Classification == Pharmaceutical drugs are often classified into drug classes—groups of related drugs that have similar chemical structures, the same mechanism of action (binding to the same biological target), a related mode of action, and that are used to treat the same disease. The Anatomical Therapeutic Chemical Classification System (ATC), the most widely used drug classification system, assigns drugs a unique ATC code, which is an alphanumeric code that assigns it to specific drug classes within the ATC system. Another major classification system is the Biopharmaceutics Classification System. This classifies drugs according to their solubility and permeability or absorption properties. Psychoactive drugs are substances that affect the function of the central nervous system, altering perception, mood or consciousness. These drugs are divided into different groups such as: stimulants, depressants, antidepressants, anxiolytics, antipsychotics, and hallucinogens. These psychoactive drugs have been proven useful in treating a wide range of medical conditions including mental disorders around the world. The most widely used drugs in the world include caffeine, nicotine and alcohol, which are also considered recreational drugs, since they are used for pleasure rather than medicinal purposes. All drugs can have potential side effects. Abuse of several psychoactive drugs can cause addiction or physical dependence. Excessive use of stimulants can promote stimulant psychosis. Many recreational drugs are illicit; international treaties such as the Single Convention on Narcotic Drugs exist for the purpose of their prohibition. == Etymology == In English, the noun "drug" is thought to originate from Old French "drogue", possibly deriving from "droge (vate)" from Middle Dutch meaning "dry (barrels)", referring to medicinal plants preserved as dry matter in barrels. In the 1990s however, Spanish lexicographer Federico Corriente Córdoba documented the possible origin of the word in {ḥṭr} an early romanized form of the Al-Andalus language from the northwestern part of the Iberian peninsula. The term could approximately be transcribed as حطروكة or hatruka. The term "drug" has become a skunked term with negative connotation, being used as a synonym for illegal substances like cocaine or heroin or for drugs used recreationally. In other contexts the terms "drug" and "medicine" are used interchangeably. == Efficacy == Drug action is highly specific and their effects may only be detected in certain individuals. For instance, the 10 highest-grossing drugs in the US may help only 4-25% of people. 
Often, the activity of a drug depends on the genotype of a patient. For example, Erbitux (cetuximab) increases the survival rate of colorectal cancer patients if they carry a particular mutation in the EGFR gene. Some drugs are specifically approved for certain genotypes. Vemurafenib is such a case which is used for melanoma patients who carry a mutation in the BRAF gene. The number of people who benefit from a drug determines if drug trials are worth carrying out, given that phase III trials may cost between $100 million and $700 million per drug. This is the motivation behind personalized medicine, that is, to develop drugs that are adapted to individual patients. == Medication == A medication or medicine is a drug taken to cure or ameliorate any symptoms of an illness or medical condition. The use may also be as preventive medicine that has future benefits but does not treat any existing or pre-existing diseases or symptoms. Dispensing of medication is often regulated by governments into three categories—over-the-counter medications, which are available in pharmacies and supermarkets without special restrictions; behind-the-counter medicines, which are dispensed by a pharmacist without needing a doctor's prescription, and prescription only medicines, which must be prescribed by a licensed medical professional, usually a physician. In the United Kingdom, behind-the-counter medicines are called pharmacy medicines which can only be sold in registered pharmacies, by or under the supervision of a pharmacist. These medications are designated by the letter P on the label. The range of medicines available without a prescription varies from country to country. Medications are typically produced by pharmaceutical companies and are often patented to give the developer exclusive rights to produce them. Those that are not patented (or with expired patents) are called generic drugs since they can be produced by other companies without restrictions or licenses from the patent holder. Pharmaceutical drugs are usually categorised into drug classes. A group of drugs will share a similar chemical structure, have the same mechanism of action or the same related mode of action, or target the same illness or related illnesses. The Anatomical Therapeutic Chemical Classification System (ATC), the most widely used drug classification system, assigns drugs a unique ATC code, which is an alphanumeric code that assigns it to specific drug classes within the ATC system. Another major classification system is the Biopharmaceutics Classification System. This groups drugs according to their solubility and permeability or absorption properties. == Spiritual and religious use == Some religions, particularly ethnic religions, are based completely on the use of certain drugs, known as entheogens, which are mostly hallucinogens—psychedelics, dissociatives, or deliriants. Some entheogens include kava which can act as a stimulant, a sedative, a euphoriant and an anesthetic. The roots of the kava plant are used to produce a drink consumed throughout the cultures of the Pacific Ocean. Some shamans from different cultures use entheogens, defined as "generating the divine within," to achieve religious ecstasy. Amazonian shamans use ayahuasca (yagé), a hallucinogenic brew, for this purpose. Mazatec shamans have a long and continuous tradition of religious use of Salvia divinorum, a psychoactive plant. Its use is to facilitate visionary states of consciousness during spiritual healing sessions. 
Silene undulata is regarded by the Xhosa people as a sacred plant and used as an entheogen. Its roots are traditionally used to induce vivid (and according to the Xhosa, prophetic) lucid dreams during the initiation process of shamans, classifying it a naturally occurring oneirogen similar to the more well-known dream herb Calea ternifolia. Peyote, a small spineless cactus, has been a major source of psychedelic mescaline and has probably been used by Native Americans for at least five thousand years. Most mescaline is now obtained from a few species of columnar cacti in particular from San Pedro and not from the vulnerable peyote. The entheogenic use of cannabis has also been widely practised for centuries. Rastafari use marijuana (ganja) as a sacrament in their religious ceremonies. Psychedelic mushrooms (psilocybin mushrooms), commonly called magic mushrooms or shrooms have also long been used as entheogens. == Smart drugs and designer drugs == Nootropics, also commonly referred to as "smart drugs", are drugs that are claimed to improve human cognitive abilities. Nootropics are used to improve memory, concentration, thought, mood, and learning. An increasingly used nootropic among students, also known as a study drug, is methylphenidate branded commonly as Ritalin and used for the treatment of attention deficit hyperactivity disorder (ADHD) and narcolepsy. At high doses methylphenidate can become highly addictive. Serious addiction can lead to psychosis, anxiety and heart problems, and the use of this drug is related to a rise in suicides, and overdoses. Evidence for use outside of student settings is limited but suggests that it is commonplace. Intravenous use of methylphenidate can lead to emphysematous damage to the lungs, known as Ritalin lung. Other drugs known as designer drugs are produced. An early example of what today would be labelled a 'designer drug' was LSD, which was synthesised from ergot. Other examples include analogs of performance-enhancing drugs such as designer steroids taken to improve physical capabilities; these are sometimes used (legally or not) for this purpose, often by professional athletes. Other designer drugs mimic the effects of psychoactive drugs. Since the late 1990s there has been the identification of many of these synthesised drugs. In Japan and the United Kingdom this has spurred the addition of many designer drugs into a newer class of controlled substances known as a temporary class drug. Synthetic cannabinoids have been produced for a longer period of time and are used in the designer drug synthetic cannabis. == Recreational drug use == Recreational drug use is the use of a drug (legal, controlled, or illegal) with the primary intention of altering the state of consciousness through alteration of the central nervous system in order to create positive emotions and feelings. The hallucinogen LSD is a psychoactive drug commonly used as a recreational drug. Ketamine is a drug used for anesthesia, and is also used as a recreational drug, both in powder and liquid form, for its hallucinogenic and dissociative effects. Some national laws prohibit the use of different recreational drugs; medicinal drugs that have the potential for recreational use are often heavily regulated. However, there are many recreational drugs that are legal in many jurisdictions and widely culturally accepted. Cannabis is the most commonly consumed controlled recreational drug in the world (as of 2012). 
Its use in many countries is illegal but is legally used in several countries usually with the proviso that it can only be used for personal use. It can be used in the leaf form of marijuana (grass), or in the resin form of hashish. Marijuana is a more mild form of cannabis than hashish. There may be an age restriction on the consumption and purchase of legal recreational drugs. Some recreational drugs that are legal and accepted in many places include alcohol, tobacco, betel nut, and caffeine products, and in some areas of the world the legal use of drugs such as khat is common. There are a number of legal intoxicants commonly called legal highs that are used recreationally. The most widely used of these is alcohol. == Administration of drugs == All drugs have a route of administration, and many can be administered by more than one. A bolus is the administration of a medication, drug or other compound that is given to raise its concentration in blood rapidly to an effective level, regardless of the route of administration. == Control of drugs == Numerous governmental offices in many countries deal with the control and supervision of drug manufacture and use, and the implementation of various drug laws. The Single Convention on Narcotic Drugs is an international treaty brought about in 1961 to prohibit the use of narcotics save for those used in medical research and treatment. In 1971, a second treaty the Convention on Psychotropic Substances had to be introduced to deal with newer recreational psychoactive and psychedelic drugs. The legal status of Salvia divinorum varies in many countries and even in states within the United States. Where it is legislated against, the degree of prohibition also varies. The Food and Drug Administration (FDA) in the United States is a federal agency responsible for protecting and promoting public health through the regulation and supervision of food safety, tobacco products, dietary supplements, prescription and over-the-counter medications, vaccines, biopharmaceuticals, blood transfusions, medical devices, electromagnetic radiation emitting devices, cosmetics, animal foods and veterinary drugs. In India, the Narcotics Control Bureau (NCB), an Indian federal law enforcement and intelligence agency under the Ministry of Home Affairs, is tasked with combating drug trafficking and assisting international use of illegal substances under the provisions of Narcotic Drugs and Psychotropic Substances Act. == See also == === Lists of drugs === List of drugs List of pharmaceutical companies List of psychoactive plants List of Schedule I drugs (US) == References == == Further reading == Richard J. Miller (2014). Drugged: the science and culture behind psychotropic drugs. Oxford University Press. ISBN 978-0-19-995797-2. == External links == DrugBank, a database of 13,400 drugs and 5,100 protein drug targets "Drugs", BBC Radio 4 discussion with Richard Davenport-Hines, Sadie Plant and Mike Jay (In Our Time, May 23, 2002)
Wikipedia/Drug
This timeline of science and engineering in the Muslim world covers the time period from the eighth century AD to the introduction of European science to the Muslim world in the nineteenth century. All year dates are given according to the Gregorian calendar except where noted. == Eighth century == Astronomers and astrologers d 777 CE Ibrāhīm al-Fazārī Ibrahim ibn Habib ibn Sulayman ibn Samura ibn Jundab al-Fazari (Arabic: إبراهيم بن حبيب بن سليمان بن سمرة بن جندب الفزاري‎) (died 777 CE) was an 8th-century Muslim mathematician and astronomer at the Abbasid court of the Caliph Al-Mansur (r. 754–775). He should not be confused with his son Muḥammad ibn Ibrāhīm al-Fazārī, also an astronomer. He composed various astronomical writings ("on the astrolabe", "on the armillary spheres", "on the calendar"). d 796 Muhammad ibn Ibrahim ibn Habib ibn Sulayman ibn Samra ibn Jundab al-Fazari (Arabic: إبراهيم بن حبيب بن سليمان بن سمرة بن جندب الفزاري‎) (died 796 or 806) was a Muslim philosopher, mathematician and astronomer. He is not to be confused with his father Ibrāhīm al-Fazārī, also an astronomer and mathematician. Some sources refer to him as an Arab, other sources state that he was a Persian. Al-Fazārī translated many scientific books into Arabic and Persian. He is credited to have built the first astrolabe in the Islamic world. Along with Yaʿqūb ibn Ṭāriq and his father he helped translate the Indian astronomical text by Brahmagupta (fl. 7th century), the Brāhmasphuṭasiddhānta, into Arabic as Az-Zīj ‛alā Sinī al-‛Arab., or the Sindhind. This translation was possibly the vehicle by means of which the Hindu numerals were transmitted from India to Islam. Biologists, neuroscientists, and psychologists (654–728) Ibn Sirin Muhammad Ibn Sirin (Arabic: محمد بن سيرين‎) (born in Basra) was a Muslim mystic and interpreter of dreams who lived in the 8th century. He was a contemporary of Anas ibn Malik. Once regarded as the same person as Achmet son of Seirim, this is no longer believed to be true, as shown by Maria Mavroudi. Mathematics 780 – 850: al-Khwarizmi Developed the "calculus of resolution and juxtaposition" (hisab al-jabr w'al-muqabala), more briefly referred to as al-jabr, or algebra. == Ninth century == Chemistry 801 – 873: al-Kindi writes on the distillation of wine as that of rose water and gives 107 recipes for perfumes, in his book Kitab Kimia al-'otoor wa al-tas`eedat (Book of the Chemistry of Perfumes and Distillations.) 865 – 925: al-Razi wrote on Naft (naphta or petroleum) and its distillates in his book "Kitab sirr al-asrar" (book of the secret of secrets.) When choosing a site to build Baghdad's hospital, he hung pieces of fresh meat in different parts of the city. The location where the meat took the longest to rot was the one he chose for building the hospital. Advocated that patients not be told their real condition so that fear or despair do not affect the healing process. Wrote on alkali, caustic soda, soap and glycerine. Gave descriptions of equipment processes and methods in his book Kitab al-Asrar (Book of Secrets). Mathematics 826 – 901: Thabit ibn Qurra (Latinized, Thebit.) Studied at Baghdad's House of Wisdom under the Banu Musa brothers. Discovered a theorem that enables pairs of amicable numbers to be found. Later, al-Baghdadi (b. 980) developed a variant of the theorem. Miscellaneous c. 810: Bayt al-Hikma (House of Wisdom) set up in Baghdad. There Greek and Indian mathematical and astronomy works are translated into Arabic. 810 – 887: Abbas ibn Firnas. 
Planetarium, artificial crystals. According to one account that was written seven centuries after his death, Ibn Firnas was injured during an elevated winged trial flight. == Tenth century == By this century, three systems of counting are used in the Arab world. Finger-reckoning arithmetic, with numerals written entirely in words, used by the business community; the sexagesimal system, a remnant originating with the Babylonians, with numerals denoted by letters of the arabic alphabet and used by Arab mathematicians in astronomical work; and the Indian numeral system, which was used with various sets of symbols. Its arithmetic at first required the use of a dust board (a sort of handheld blackboard) because "the methods required moving the numbers around in the calculation and rubbing some out as the calculation proceeded." Chemistry 957: Abul Hasan Ali Al-Masudi, wrote on the reaction of alkali water with zaj (vitriol) water giving sulfuric acid. Mathematics 920: al-Uqlidisi. Modified arithmetic methods for the Indian numeral system to make it possible for pen and paper use. Hitherto, doing calculations with the Indian numerals necessitated the use of a dust board as noted earlier. 940: Born Abu'l-Wafa al-Buzjani. Wrote several treatises using the finger-counting system of arithmetic and was also an expert on the Indian numerals system. About the Indian system, he wrote: "[It] did not find application in business circles and among the population of the Eastern Caliphate for a long time." Using the Indian numeral system, abu'l Wafa was able to extract roots. 980: al-Baghdadi Studied a slight variant of Thabit ibn Qurra's theorem on amicable numbers. Al-Baghdadi also wrote about and compared the three systems of counting and arithmetic used in the region during this period. == Eleventh century == Mathematics 1048 – 1131: Omar Khayyam. Persian mathematician and poet. "Gave a complete classification of cubic equations with geometric solutions found by means of intersecting conic sections." Extracted roots using the decimal system (the Indian numeral system). == Twelfth century == Cartography 1100–1165: Muhammad al-Idrisi, aka Idris al-Saqalli aka al-sharif al-idrissi of Andalusia and Sicily. Known for having drawn some of the most advanced ancient world maps. Mathematics 1130–1180: Al-Samawal. An important member of al-Karaji's school of algebra. Gave this definition of algebra: "[it is concerned] with operating on unknowns using all the arithmetical tools, in the same way as the arithmetician operates on the known." 1135: Sharaf al-Din al-Tusi. Follows al-Khayyam's application of algebra of geometry, rather than follow the general development that came through al-Karaji's school of algebra. Wrote a treatise on cubic equations which describes thus: "[the treatise] represents an essential contribution to another algebra which aimed to study curves by means of equations, thus inaugurating the beginning of algebraic geometry." (quoted in ). == Thirteenth century == Chemistry Al-Jawbari describes the preparation of rose water in the work "Book of Selected Disclosure of Secrets" (Kitab kashf al-Asrar). Materials; glassmaking: Arabic manuscript on the manufacture of false gemstones and diamonds. Also describes spirits of alum, spirits of saltpetre and spirits of salts (hydrochloric acid). An Arabic manuscript written in Syriac script gives description of various chemical materials and their properties such as sulfuric acid, sal-ammoniac, saltpetre and zaj (vitriol). Mathematics 1260: al-Farisi. 
Gave a new proof of Thabit ibn Qurra's theorem, introducing important new ideas concerning factorization and combinatorial methods. He also gave the pair of amicable numbers 17296 and 18416, which has also been attributed jointly to Fermat as well as to Thabit ibn Qurra. Astronomy Jaghmini completed the al-Mulakhkhas fi al-Hay’ah ("Epitome of plain theoretical astronomy"), an astronomical textbook which spawned many commentaries and whose educational use lasted until the 18th century. Miscellaneous Mechanical engineering: Ismail al-Jazari described 100 mechanical devices, some 80 of which are trick vessels of various kinds, along with instructions on how to construct them. Medicine; Scientific method: Ibn al-Nafis (1213–1288), Damascene physician and anatomist. Discovered the lesser circulatory system (the cycle involving the ventricles of the heart and the lungs) and described the mechanism of breathing, its relation to the blood, and how the blood is nourished by air in the lungs. Followed a "constructivist" account of the lesser circulatory system: "blood is purified in the lungs for the continuance of life and providing the body with the ability to work". During his time, the common view was that blood originates in the liver, then travels to the right ventricle, then on to the organs of the body; another contemporary view was that blood is filtered through the diaphragm where it mixes with the air coming from the lungs. Ibn al-Nafis discredited all these views, including ones by Galen and Avicenna (ibn Sina). At least one illustration of his manuscript is still extant. William Harvey explained the circulatory system without reference to ibn al-Nafis in 1628. Ibn al-Nafis extolled the study of comparative anatomy in his "Explaining the dissection of [Avicenna's] Al-Qanoon", which includes a preface and citations of sources. Emphasized the rigours of verification by measurement, observation and experiment. Subjected the conventional wisdom of his time to critical review and verified it with experiment and observation, discarding errors. == Fourteenth century == Astronomy 1393–1449: Ulugh Beg commissions an observatory at Samarqand in present-day Uzbekistan. Mathematics 1380–1429: al-Kashi. According to one assessment, he "contributed to the development of decimal fractions not only for approximating algebraic numbers, but also for real numbers such as pi. His contribution to decimal fractions is so major that for many years he was considered as their inventor. Although not the first to do so, al-Kashi gave an algorithm for calculating nth roots which is a special case of the methods given many centuries later by Ruffini and Horner." == Fifteenth century == Mathematics Ibn al-Banna and al-Qalasadi used symbols for mathematics "and, although we do not know exactly when their use began, we know that symbols were used at least a century before this." == Seventeenth century == Mathematics The Persian mathematician Muhammad Baqir Yazdi discovered the pair of amicable numbers 9,363,584 and 9,437,056, for which he is jointly credited with Descartes (both this pair and al-Farisi's pair above are verified in the short sketch following this section). A seventeenth-century celestial globe was made by Diya’ ad-din Muhammad in Lahore (now in Pakistan) in 1663. It is now housed at the National Museum of Scotland. It is encircled by a meridian ring and a horizon ring. The latitude angle of 32° indicates that the globe was made in the Lahore workshop. This specific workshop "claims 21 signed globes—the largest number from a single shop", making this globe a good example of celestial globe production at its peak. 
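The amicable pairs named in this timeline can be checked directly from the definition. The following is a minimal, illustrative Python sketch (not part of the original timeline; the helper names are chosen only for readability). It implements the defining property of an amicable pair (the sum of the proper divisors of each number equals the other) and, for context, the rule attributed to Thabit ibn Qurra, which produces such a pair whenever the three quantities it constructs are prime.

def sum_proper_divisors(n):
    # Sum of the divisors of n that are strictly smaller than n.
    if n <= 1:
        return 0
    total = 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

def is_amicable(a, b):
    # An amicable pair: each number equals the sum of the other's proper divisors.
    return a != b and sum_proper_divisors(a) == b and sum_proper_divisors(b) == a

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def thabit_pair(n):
    # Thabit ibn Qurra's rule: for n > 1, if p = 3*2**(n-1) - 1, q = 3*2**n - 1
    # and r = 9*2**(2*n-1) - 1 are all prime, then (2**n * p * q, 2**n * r)
    # is an amicable pair.
    p = 3 * 2 ** (n - 1) - 1
    q = 3 * 2 ** n - 1
    r = 9 * 2 ** (2 * n - 1) - 1
    if n > 1 and is_prime(p) and is_prime(q) and is_prime(r):
        return (2 ** n * p * q, 2 ** n * r)
    return None

print(is_amicable(17296, 18416))      # al-Farisi / Fermat pair -> True
print(is_amicable(9363584, 9437056))  # Muhammad Baqir Yazdi / Descartes pair -> True
print(thabit_pair(2))                 # -> (220, 284)
print(thabit_pair(4))                 # -> (17296, 18416)
print(thabit_pair(7))                 # -> (9363584, 9437056)

Thabit's rule with n = 2, 4 and 7 reproduces, respectively, the classical pair (220, 284), the pair credited above to al-Farisi and Fermat, and the pair credited to Yazdi and Descartes; not every amicable pair arises from this rule.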
== Modern science == Muslim scientists made significant contributions to modern science. These include the development of the electroweak unification theory by Abdus Salam, the development of femtochemistry by Ahmed Zewail, the invention of quantum dots by Moungi Bawendi, and the development of fuzzy set theory by Lotfi A. Zadeh. Other major contributions include the introduction of the Kardar–Parisi–Zhang equation by Mehran Kardar, the development of circuit topology by Alireza Mashaghi, and the first description of Behçet's disease by Hulusi Behçet. Contributions of Muslim scientists have been recognized by four Nobel Prizes. Abdus Salam was the first Muslim to win a Nobel Prize in science. == See also == Arab Agricultural Revolution Islamic Golden Age Science in the medieval Islamic world Ibn Sina Academy of Medieval Medicine and Sciences List of inventions in the medieval Islamic world == References == === Citations === === Sources === == External links == Qatar Digital Library - an online portal providing access to previously undigitised British Library archive materials relating to Gulf history and Arabic science "How Greek Science Passed to the Arabs" by De Lacy O'Leary St Andrews chronology of mathematics
Wikipedia/Timeline_of_science_and_engineering_in_the_Muslim_world
Abū Mūsā Jābir ibn Ḥayyān (Arabic: أَبو موسى جابِر بِن حَيّان, variously called al-Ṣūfī, al-Azdī, al-Kūfī, or al-Ṭūsī), died c. 806–816, is the purported author of a large number of works in Arabic, often called the Jabirian corpus. The c. 215 treatises that survive today mainly deal with alchemy and chemistry, magic, and Shi'ite religious philosophy. However, the original scope of the corpus was vast, covering a wide range of topics, from cosmology, astronomy and astrology, through medicine, pharmacology, zoology and botany, to metaphysics, logic, and grammar. The works attributed to Jabir, which are tentatively dated to c. 850 – c. 950, contain the oldest known systematic classification of chemical substances, and the oldest known instructions for deriving an inorganic compound (sal ammoniac or ammonium chloride) from organic substances (such as plants, blood, and hair) by chemical means. His works also contain one of the earliest known versions of the sulfur-mercury theory of metals, a mineralogical theory that would remain dominant until the 18th century. A significant part of Jabir's writings deals with a philosophical theory known as "the science of the balance" (Arabic: ʿilm al-mīzān), which was aimed at reducing all phenomena (including material substances and their elements) to a system of measures and quantitative proportions. The Jabirian works also contain some of the earliest preserved Shi'ite imamological doctrines, which Jabir presented as deriving from his purported master, the Shi'ite Imam Jaʿfar al-Ṣādiq (died 765). As early as the 10th century, the identity and exact corpus of works of Jabir were in dispute in Islamic scholarly circles. The authorship of all these works by a single figure, and even the existence of a historical Jabir, are also doubted by modern scholars. Instead, Jabir ibn Hayyan is generally thought to have been a pseudonym used by an anonymous school of Shi'ite alchemists writing in the late 9th and early 10th centuries. Some Arabic Jabirian works (e.g., The Great Book of Mercy and The Book of Seventy) were translated into Latin under the Latinized name Geber, and in 13th-century Europe an anonymous writer, usually referred to as pseudo-Geber, started to produce alchemical and metallurgical writings under this name. == Biography == === Historicity === It is not clear whether Jabir ibn Hayyan ever existed as a historical person. He is purported to have lived in the 8th century, and to have been a disciple of the Shi'ite Imam Jaʿfar al-Ṣādiq (died 765). However, he is not mentioned in any historical source before c. 900, and the first known author to write about Jabir from a biographical point of view was the Baghdadi bibliographer Ibn al-Nadīm (c. 932–995). In his Fihrist ("The Book Catalogue", written in 987), Ibn al-Nadīm compiled a list of Jabir's works, adding a short notice on the various claims that were then circulating about Jabir. Already in Ibn al-Nadīm's time, there were some people who explicitly asserted that Jabir had never existed, although Ibn al-Nadīm himself disagreed with this claim. Jabir was often ignored by later medieval Islamic biographers and historians, but even early Shi'ite biographers such as Aḥmad al-Barqī (died c. 893), Abū ʿAmr al-Kashshī (first half of the 10th century), Aḥmad ibn ʿAlī al-Najāshī (983–1058), and Abū Jaʿfar al-Ṭūsī (995–1067), who wrote long volumes on the companions of the Shi'ite Imams (including the many companions of Jaʿfar al-Ṣādiq), did not mention Jabir at all. 
=== Dating of the Jabirian corpus === Apart from outright denying his existence, there were also some who, already in Ibn al-Nadīm's time, questioned whether the writings attributed to Jabir were really written by him. The authenticity of these writings was expressly denied by the Baghdadi philosopher Abū Sulaymān al-Sijistānī (c. 912–985) and his pupil Abū Ḥayyān al-Tawḥīdī (c. 932–1023), though this may have been related to the hostility of both these thinkers to alchemy in general. Modern scholarly analysis has tended to confirm the inauthenticity of the writings attributed to Jabir. Much of the philosophical terminology used in the Jabirian treatises was only coined around the middle of the 9th century, and some of the Greek philosophical texts cited in the Jabirian writings are known to have been translated into Arabic towards the end of the 9th century. Moreover, an important part of the corpus deals with early Shi'ite religious philosophy that is elsewhere only attested in late 9th-century and early 10th-century sources. As a result, the dating of the Jabirian corpus to c. 850–950 has been widely accepted in modern scholarship. However, it has also been noted that many Jabirian treatises show clear signs of having been redacted multiple times, and the writings as we now have them may well have been based on an earlier 8th-century core. Despite the obscurity involved, it is not impossible that some of these writings, in their earliest form, were written by a real Jabir ibn Hayyan. In any case, it is clear that Jabir's name was used as a pseudonym by one or more anonymous Shi'ite alchemists writing in the late 9th and early 10th centuries, who also redacted the corpus as we now know it. === Biographical clues and legend === Jabir was generally known by the kunya Abū Mūsā ("Father of Mūsā"), or sometimes Abū ʿAbd Allāh ("Father of ʿAbd Allāh"), and by the nisbas (attributive names) al-Ṣūfī, al-Azdī, al-Kūfī, or al-Ṭūsī. His grandfather's name is mentioned by Ibn al-Nadim as ʿAbd Allāh. If the attribution of the name al-Azdī to Jabir is authentic, this would point to his affiliation with the Southern-Arabian (Yemenite) tribe of the Azd. However, it is not clear whether Jabir was an Arab belonging to the Azd tribe, or a non-Arab Muslim client (mawlā) of the Azd. If he was a non-Arab Muslim client of the Azd, he is most likely to have been Persian, given his ties with eastern Iran (his nisba al-Ṭūsī also points to Tus, a city in Khurasan). According to Ibn al-Nadīm, Jabir hailed from Khurasan (eastern Iran), but spent most of his life in Kufa (Iraq), both regions where the Azd tribe was well-settled. Various late reports put his date of death between 806 (190 AH) and 816 (200 AH). Given the lack of independent biographical sources, most of the biographical information about Jabir can be traced back to the Jabirian writings themselves. There are references throughout the Jabirian corpus to the Shi'ite Imam Jaʿfar al-Ṣādiq (died 765), whom Jabir generally calls "my master" (Arabic: sayyidī), and whom he represents as the original source of all his knowledge. In one work, Jabir is also represented as an associate of the Bactrian vizier family of the Barmakids, whereas Ibn al-Nadīm reports that some claimed Jabir to have been especially devoted to Jaʿfar ibn Yaḥyā al-Barmakī (767–803), the Abbasid vizier of One Thousand and One Nights fame. Jabir's links with the Abbasids were stressed even more by later tradition, which turned him into a favorite of the Abbasid caliph Hārūn al-Rashīd (c. 
763–809, also appearing in One Thousand and One Nights), for whom Jabir would have composed a treatise on alchemy, and who is supposed to have commanded the translation of Greek works into Arabic on Jabir's instigation. Given Jabir's purported ties with both the Shi'ite Imam Jaʿfar al-Ṣādiq and the Barmakid family (who served the Abbasids as viziers), or with the Abbasid caliphs themselves, it has sometimes been thought plausible that Ḥayyān al-ʿAṭṭār ("Hayyan the Druggist"), a proto-Shi'ite activist who was fighting for the Abbasid cause in the early 8th century, may have been Jabir's father (Jabir's name "Ibn Hayyan" literally means "The Son of Hayyan"). Although there is no direct evidence supporting this hypothesis, it fits very well in the historical context, and it allows one to think of Jabir, however obscure, as a historical figure. Because Ḥayyān al-ʿAṭṭār was supposedly executed not long after 721, the hypothesis even made it possible to estimate Jabir's date of birth at c. 721. However, it has recently been argued that Ḥayyān al-ʿAṭṭār probably lived at least until c. 744, and that as a client (mawlā) of the Nakhaʿ tribe he is highly unlikely to have been the father of Jabir (who is supposed to have been a client/member of the Azd). == The Jabirian corpus == There are about 600 Arabic works attributed to Jabir ibn Hayyan that are known by name, approximately 215 of which are still extant today. Though some of these are full-length works (e.g., The Great Book on Specific Properties), most of them are relatively short treatises and belong to larger collections (The One Hundred and Twelve Books, The Five Hundred Books, etc.) in which they function rather more like chapters. When the individual chapters of some full-length works are counted as separate treatises too, the total length of the corpus may be estimated at 3000 treatises/chapters. The overwhelming majority of Jabirian treatises that are still extant today deal with alchemy or chemistry (though these may also contain religious speculations, and discuss a wide range of other topics ranging from cosmology to grammar). Nevertheless, there are also a few extant treatises which deal with magic, i.e., "the science of talismans" (ʿilm al-ṭilasmāt, a form of theurgy) and "the science of specific properties" (ʿilm al-khawāṣṣ, the science dealing with the hidden powers of mineral, vegetable and animal substances, and with their practical applications in medical and various other pursuits). Other writings dealing with a great variety of subjects were also attributed to Jabir (this includes such subjects as engineering, medicine, pharmacology, zoology, botany, logic, metaphysics, mathematics, astronomy and astrology), but almost all of these are lost today. === Alchemical writings === Note that Paul Kraus, who first catalogued the Jabirian writings and whose numbering is followed here, conceived of his division of Jabir's alchemical writings (Kr. nos. 5–1149) as roughly chronological in order. The Great Book of Mercy (Kitāb al-Raḥma al-kabīr, Kr. no. 5): This was considered by Kraus to be the oldest work in the corpus, from which it may have been relatively independent. Some 10th-century skeptics considered it to be the only authentic work written by Jabir himself. The Persian physician, alchemist and philosopher Abū Bakr al-Rāzī (c. 865–925) appears to have written a (lost) commentary on it. It was translated into Latin in the 13th century under the title Liber Misericordiae. 
The One Hundred and Twelve Books (al-Kutub al-miʾa wa-l-ithnā ʿashar, Kr. nos. 6–122): This collection consists of relatively independent treatises dealing with different practical aspects of alchemy, often framed as an explanation of the symbolic allusions of the 'ancients'. An important role is played by organic alchemy. Its theoretical foundations are similar to those of The Seventy Books (i.e., the reduction of bodies to the elements fire, air, water and earth, and of the elements to the 'natures' hot, cold, moist, and dry), though their exposition is less systematic. Just like in The Seventy Books, the quantitative directions in The One Hundred and Twelve Books are still of a practical and 'experimental' rather than of a theoretical and speculative nature, such as will be the case in The Books of the Balances. The first four treatises in this collection, i.e., the three-part Book of the Element of the Foundation (Kitāb Usṭuqus al-uss, Kr. nos. 6–8, the second part of which contains an early version of the famous Emerald Tablet attributed to Hermes Trismegistus) and a commentary on it (Tafsīr kitāb al-usṭuqus, Kr. no. 9), have been translated into English. The Seventy Books (al-Kutub al-sabʿūn, Kr. nos. 123–192) (also called The Book of Seventy, Kitāb al-Sabʿīn): This contains a systematic exposition of Jabirian alchemy, in which the several treatises form a much more unified whole as compared to The One Hundred and Twelve Books. It is organized into seven parts, containing ten treatises each: three parts dealing with the preparation of the elixir from animal, vegetable, and mineral substances, respectively; two parts dealing with the four elements from a theoretical and practical point of view, respectively; one part focusing on the alchemical use of animal substances, and one part focusing on minerals and metals. It was translated into Latin by Gerard of Cremona (c. 1114–1187) under the title Liber de Septuaginta. Ten books added to the Seventy (ʿasharat kutub muḍāfa ilā l-sabʿīn, Kr. nos. 193–202): The sole surviving treatise from this small collection (The Book of Clarification, Kitāb al-Īḍāḥ, Kr. no. 195) briefly discusses the different methods for preparing the elixir, criticizing the philosophers who have only expounded the method of preparing the elixir starting from mineral substances, to the exclusion of vegetable and animal substances. The Ten Books of Rectifications (al-Muṣaḥḥaḥāt al-ʿashara, Kr. nos. 203–212): Relates the successive improvements (“rectifications”, muṣaḥḥaḥāt) brought to the art by such 'alchemists' as 'Pythagoras' (Kr. no. 203), 'Socrates' (Kr. no. 204), 'Plato' (Kr. no. 205), 'Aristotle' (Kr. no. 206), 'Archigenes' (Kr. nos. 207–208), 'Homer' (Kr. no. 209), 'Democritus' (Kr. no. 210), Ḥarbī al-Ḥimyarī (Kr. no. 211), and Jabir himself (Kr. no. 212). The only surviving treatise from this small collection (The Book of the Rectifications of Plato, Kitāb Muṣaḥḥaḥāt Iflāṭūn, Kr. no. 205) is divided into 90 chapters: 20 chapters on processes using only mercury, 10 chapters on processes using mercury and one additional 'medicine' (dawāʾ), 30 chapters on processes using mercury and two additional 'medicines', and 30 chapters on processes using mercury and three additional 'medicines'. All of these are preceded by an introduction describing the laboratory equipment mentioned in the treatise. The Twenty Books (al-Kutub al-ʿishrūn, Kr. nos. 213–232): Only one treatise (The Book of the Crystal, Kitāb al-Billawra, Kr. no. 
220) and a long extract from another one (The Book of the Inner Consciousness, Kitāb al-Ḍamīr, Kr. no. 230) survive. The Book of the Inner Consciousness appears to deal with the subject of specific properties (khawāṣṣ) and with talismans (ṭilasmāt). The Seventeen Books (Kr. nos. 233–249); three treatises added to the Seventeen Books (Kr. nos. 250–252); thirty unnamed books (Kr. nos. 253–282); The Four Treatises and some related treatises (Kr. nos. 283–286, 287–292); The Ten Books According to the Opinion of Balīnās, the Master of Talismans (Kr. nos. 293–302): Of these, only three treatises appear to be extant, i.e., the Kitāb al-Mawāzīn (Kr. no. 242), the Kitāb al-Istiqṣāʾ (Kr. no. 248), and the Kitāb al-Kāmil (Kr. no. 291). The Books of the Balances (Kutub al-Mawāzīn, Kr. nos. 303–446): This collection appears to have consisted of 144 treatises of medium length, 79 of which are known by name and 44 of which are still extant. Though relatively independent from each other and devoted to a very wide range of topics (cosmology, grammar, music theory, medicine, logic, metaphysics, mathematics, astronomy, astrology, etc.), they all approach their subject matter from the perspective of "the science of the balance" (ʿilm al-mīzān, a theory which aims at reducing all phenomena to a system of measures and quantitative proportions). The Books of the Balances are also an important source for Jabir's speculations regarding the apparition of the "two brothers" (al-akhawān), a doctrine which was later to become of great significance to the Egyptian alchemist Ibn Umayl (c. 900–960). The Five Hundred Books (al-Kutub al-Khamsumiʾa, Kr. nos. 447–946): Only 29 treatises in this collection are known by name, 15 of which are extant. Its contents appear to have been mainly religious in nature, with moral exhortations and alchemical allegories occupying an important place. Among the extant treatises, The Book of the Glorious (Kitāb al-Mājid, Kr. no. 706) and The Book of Explication (Kitāb al-Bayān, Kr. no. 785) are notable for containing some of the earliest preserved Shi'ite eschatological, soteriological and imamological doctrines. Intermittent extracts from The Book of Kingship (Kitāb al-Mulk, Kr. no. 454) exist in a Latin translation under the title Liber regni. The Books on the Seven Metals (Kr. nos. 947–956): Seven treatises which are closely related to The Books of the Balances, each one dealing with one of Jabir's seven metals (respectively gold, silver, copper, iron, tin, lead, and khārṣīnī or "chinese metal"). In one manuscript, these are followed by the related three-part Book of Concision (Kitāb al-Ījāz, Kr. nos. 954–956). Diverse alchemical treatises (Kr. nos. 957–1149): In this category, Kraus placed a large number of named treatises which he could not with any confidence attribute to one of the alchemical collections of the corpus. According to Kraus, some of them may actually have been part of The Five Hundred Books. === Writings on magic (talismans, specific properties) === Among the surviving Jabirian treatises, there are also a number of relatively independent treatises dealing with "the science of talismans" (ʿilm al-ṭilasmāt, a form of theurgy) and with "the science of specific properties" (ʿilm al-khawāṣṣ, i.e., the science dealing with the hidden powers of mineral, vegetable and animal substances, and with their practical applications in medical and various other pursuits). These are: The Book of the Search (Kitāb al-Baḥth, also known as The Book of Extracts, Kitāb al-Nukhab, Kr. no. 
1800): This long work deals with the philosophical foundations of theurgy or "the science of talismans" (ʿilm al-ṭilasmāt). It is also notable for citing a significant number of Greek authors: there are references to (the works of) Plato, Aristotle, Archimedes, Galen, Alexander of Aphrodisias, Porphyry, Themistius, (pseudo-)Apollonius of Tyana, and others. The Book of Fifty (Kitāb al-Khamsīn, perhaps identical to The Great Book on Talismans, Kitāb al-Ṭilasmāt al-kabīr, Kr. nos. 1825–1874): This work, only extracts of which are extant, deals with subjects such as the theoretical basis of theurgy, specific properties, astrology, and demonology. The Great Book on Specific Properties (Kitāb al-Khawāṣṣ al-kabīr, Kr. nos. 1900–1970): This is Jabir's main work on "the science of specific properties" (ʿilm al-khawāṣṣ), i.e., the science dealing with the hidden powers of mineral, vegetable and animal substances, and with their practical applications in medical and various other pursuits. However, it also contains a number of chapters on "the science of the balance" (ʿilm al-mīzān, a theory which aims at reducing all phenomena to a system of measures and quantitative proportions). The Book of the King (Kitāb al-Malik, kr. no. 1985): Short treatise on the effectiveness of talismans. The Book of Black Magic (Kitāb al-Jafr al-aswad, Kr. no. 1996): This treatise is not mentioned in any other Jabirian work. === Other extant writings === Writings on a wide variety of other topics were also attributed to Jabir. Most of these are lost (see below), except for: The Book on Poisons and on the Repelling of their Harmful Effects (Kitāb al-Sumūm wa-dafʿ maḍārrihā, Kr. no. 2145): on pharmacology. The Book of Comprehensiveness (Kitāb al-Ishtimāl, Kr. no. 2715): a long extract of this philosophical treatise is preserved by the poet and alchemist al-Ṭughrāʾī (1061–c. 1121). === Lost writings === Although a significant number of the Jabirian treatises on alchemy and magic do survive, many of them are also lost. Apart from two surviving treatises (see immediately above), Jabir's many writings on other topics are all lost: Catalogues (Kr. nos. 1–4): There are three catalogues which Jabir is said to have written of his own works (Kr. nos. 1–3), and one Book on the Order of Reading our Books (Kitāb Tartīb qirāʾat kutubinā, Kr. no. 4). They are all lost. The Books on Stratagems (Kutub al-Ḥiyal, Kr. nos. 1150–1449) and The Books on Military Stratagems and Tricks (Kutub al-Ḥiyal al-ḥurūbiyya wa-l-makāyid, Kr. nos. 1450–1749): Two large collections on 'mechanical tricks' (the Arabic word ḥiyal translates Greek μηχαναί, mēchanai) and military engineering, both lost. Medical and pharmacological writings (Kr. nos. 2000–2499): Seven treatises are known by name, the only one extant being The Book on Poisons and on the Repelling of their Harmful Effects (Kitāb al-Sumūm wa-dafʿ maḍārrihā, Kr. no. 2145). Kraus also included into this category a lost treatise on zoology (The Book of Animals, Kitāb al-Ḥayawān, Kr. no. 2458) and a lost treatise on botany (The Book of Plants or The Book of Herbs, Kitāb al-Nabāt or Kitāb al-Ḥashāʾish, Kr. no. 2459). Philosophical writings (Kutub al-falsafa, Kr. nos. 2500–2799): Under this heading, Kraus mentioned 23 works, most of which appear to deal with Aristotelian philosophy (titles include, e.g., The Books of Logic According to the Opinion of Aristotle, Kr. no. 2580; The Book of Categories, Kr. no. 2582; The Book on Interpretation, Kr. no. 2583; The Book of Metaphysics, Kr. no. 
2681; The Book of the Refutation of Aristotle in his Book On the Soul, Kr. no. 2734). Of one treatise (The Book of Comprehensiveness, Kitāb al-Ishtimāl, Kr. no. 2715) a long extract is preserved by the poet and alchemist al-Ṭughrāʾī (1061–c. 1121), but all other treatises in this group are lost. Mathematical, astronomical and astrological writings (Kr. nos. 2800–2899): Thirteen treatises in this category are known by name, all of which are lost. Notable titles include a Book of Commentary on Euclid (Kitāb Sharḥ Uqlīdiyas, Kr. no. 2813), a Commentary on the Book of the Weight of the Crown by Archimedes (Sharḥ kitāb wazn al-tāj li-Arshamīdas, Kr. no. 2821), a Book of Commentary on the Almagest (Kitāb Sharḥ al-Majisṭī, Kr. no. 2834), a Subtle Book on Astronomical Tables (Kitāb al-Zāj al-laṭīf, Kr. no. 2839), a Compendium on the Astrolabe from a Theoretical and Practical Point of View (Kitāb al-jāmiʿ fī l-asṭurlāb ʿilman wa-ʿamalan, Kr. no. 2845), and a Book of the Explanation of the Figures of the Zodiac and Their Activities (Kitāb Sharḥ ṣuwar al-burūj wa-afʿālihā, Kr. no. 2856). Religious writings (Kr. nos. 2900–3000): Apart from those known to belong to The Five Hundred Books (see above), there are a number of religious treatises whose exact place in the corpus is uncertain, all of which are lost. Notable titles include Books on the Shi'ite Schools of Thought (Kutub fī madhāhib al-shīʿa, Kr. no. 2914), Our Books on the Transmigration of the Soul (Kutubunā fī l-tanāsukh, Kr. no. 2947), The Book of the Imamate (Kitāb al-Imāma, Kr. no. 2958), and The Book in Which I Explained the Torah (Kitābī alladhī fassartu fīhi al-tawrāt, Kr. no. 2982). == Historical background == === Greco-Egyptian, Byzantine and Persian alchemy === The Jabirian writings contain a number of references to Greco-Egyptian alchemists such as pseudo-Democritus (fl. c. 60), Mary the Jewess (fl. c. 0–300), Agathodaemon (fl. c. 300), and Zosimos of Panopolis (fl. c. 300), as well as to legendary figures such as Hermes Trismegistus and Ostanes, and to scriptural figures such as Moses and Jesus (to whom a number of alchemical writings were also ascribed). However, these references may have been meant as an appeal to ancient authority rather than as an acknowledgement of any intellectual borrowing, and in any case Jabirian alchemy was very different from what is found in the extant Greek alchemical treatises: it was much more systematic and coherent, it made much less use of allegory and symbols, and a much more important place was occupied by philosophical speculations and their application to laboratory experiments. Furthermore, whereas Greek alchemical texts had been almost exclusively focused on the use of mineral substances (i.e., on 'inorganic chemistry'), Jabirian alchemy pioneered the use of vegetable and animal substances, and so represented an innovative shift towards 'organic chemistry'. Nevertheless, there are some important theoretical similarities between Jabirian alchemy and contemporary Byzantine alchemy, and even though the Jabirian authors do not seem to have known Byzantine works that are extant today such as the alchemical works attributed to the Neoplatonic philosophers Olympiodorus (c. 495–570) and Stephanus of Alexandria (fl. c. 580–640), it seems that they were at least partly drawing on a parallel tradition of theoretical and philosophical alchemy. 
In any case, the writings actually used by the Jabirian authors appear to have mainly consisted of alchemical works falsely attributed to ancient philosophers like Socrates, Plato, and Apollonius of Tyana, only some of which are still extant today, and whose philosophical content still needs to be determined. One of the innovations in Jabirian alchemy was the addition of sal ammoniac (ammonium chloride) to the category of chemical substances known as 'spirits' (i.e., strongly volatile substances). This included both naturally occurring sal ammoniac and synthetic ammonium chloride as produced from organic substances, and so the addition of sal ammoniac to the list of 'spirits' is likely a product of the new focus on organic chemistry. Since the word for sal ammoniac used in the Jabirian corpus (nošāder) is Iranian in origin, it has been suggested that the direct precursors of Jabirian alchemy may have been active in the Hellenizing and Syriacizing schools of the Sassanid Empire. == Chemical philosophy == === Elements and natures === According to Aristotelian physics, each element is composed of two qualities: fire is hot and dry, earth is cold and dry, water is cold and moist, and air is hot and moist. In the Jabirian corpus, these qualities came to be called "natures" (Arabic: ṭabāʾiʿ), and elements are said to be composed of these 'natures', plus an underlying "substance" (jawhar). In metals two of these 'natures' were interior and two were exterior. For example, lead was predominantly cold and dry and gold was predominantly hot and moist. Thus, Jabir theorized, by rearranging the natures of one metal, a different metal would result. Like Zosimos, Jabir believed this would require a catalyst, an al-iksir, the elusive elixir that would make this transformation possible – which in European alchemy became known as the philosopher's stone. === The sulfur–mercury theory of metals === The sulfur–mercury theory of metals, though first attested in pseudo-Apollonius of Tyana's The Secret of Creation (Sirr al-khalīqa, late 8th or early 9th century, but largely based on older sources), was also adopted by the Jabirian authors. According to the Jabirian version of this theory, metals form in the earth through the mixing of sulfur and mercury. Depending on the quality of the sulfur, different metals are formed, with gold being formed by the most subtle and well-balanced sulfur. This theory, which is ultimately based on ancient meteorological speculations such as those found in Aristotle's Meteorology, formed the basis of all theories of metallic composition until the 18th century. == See also == History of chemistry Timeline of chemistry Abū Bakr al-Rāzī (c. 865–925, famous contemporary chemist) Pseudo-Geber (13th–14th century Latin authors writing under Jabir's name) Science in medieval Islam == References == == Bibliography == === Tertiary sources === De Smet, Daniel (2008–2012). "Jaʿfar al-Ṣādeq iv. Esoteric Sciences". Encyclopaedia Iranica. Forster, Regula (2018). "Jābir b. Ḥayyān". In Fleet, Kate; Krämer, Gudrun; Matringe, Denis; Nawas, John; Rowson, Everett (eds.). Encyclopaedia of Islam, Three. doi:10.1163/1573-3912_ei3_COM_32665. Kraus, Paul; Plessner, Martin (1960–2007). "Djābir B. Ḥayyān". In Bearman, P.; Bianquis, Th.; Bosworth, C.E.; van Donzel, E.; Heinrichs, W.P. (eds.). Encyclopaedia of Islam, Second Edition. doi:10.1163/1573-3912_islam_SIM_1898. Lory, Pierre (2008a). "Jābir Ibn Hayyān". In Koertge, Noretta (ed.). New Dictionary of Scientific Biography. Vol. 4. Detroit: Thomson Gale. 
pp. 19–20. ISBN 978-0-684-31320-7. Lory, Pierre (2008b). "Kimiā". Encyclopaedia Iranica. Plessner, Martin (1981). "Jābir Ibn Hayyān". In Gillispie, Charles C. (ed.). Dictionary of Scientific Biography. Vol. 7. New York: Charles Scribners’s Sons. pp. 39–43. === Secondary sources === al-Hassan, Ahmad Y. (2009). Studies in al-Kimya': Critical Issues in Latin and Arabic Alchemy and Chemistry. Hildesheim: Georg Olms Verlag. ISBN 978-3-487-14273-9. (the same content and more is also available online) (argues against the great majority of scholars that the Latin Geber works were translated from the Arabic and that ethanol and mineral acids were known in early Arabic alchemy) Burnett, Charles (2001). "The Coherence of the Arabic-Latin Translation Program in Toledo in the Twelfth Century". Science in Context. 14 (1–2): 249–288. doi:10.1017/S0269889701000096. S2CID 143006568. Capezzone, Leonardo (1997). "Jābir ibn Ḥayyān nella città cortese. Materiali eterodossi per una storia del pensiero della scienza nell'Islam medievale". Rivista degli Studi Orientali. LXXI (1/4): 97–144. JSTOR 41880991. Capezzone, Leonardo (2020). "The Solitude of the Orphan: Ǧābir b. Ḥayyān and the Shiite Heterodox Milieu of the Third/Ninth–Fourth/Tenth Centuries". Bulletin of the School of Oriental and African Studies. 83 (1): 51–73. doi:10.1017/S0041977X20000014. S2CID 214044897. (recent study of Jabirian Shi'ism, arguing that it was not of a form of Isma'ilism, but an independent sectarian current related to the late 9th-century Shi'ites known as ghulāt) Corbin, Henry (1950). "Le livre du Glorieux de Jâbir ibn Hayyân". Eranos-Jahrbuch. 18: 48–114. Corbin, Henry (1986). Alchimie comme art hiératique. Paris: L’Herne. ISBN 9782851971029. Coulon, Jean-Charles (2017). La Magie en terre d'Islam au Moyen Âge. Paris: CTHS. ISBN 9782735508525. Delva, Thijs (2017). "The Abbasid Activist Ḥayyān al-ʿAṭṭār as the Father of Jābir b. Ḥayyān: An Influential Hypothesis Revisited". Journal of Abbasid Studies. 4 (1): 35–61. doi:10.1163/22142371-12340030. (rejects Holmyard 1927's hypothesis that Jabir was the son of a proto-Shi'ite pharmacist called Ḥayyān al-ʿAṭṭār on the basis of newly available evidence; contains the most recent status quaestionis on Jabir's biography, listing a number of primary sources on this subject that were still unknown to Kraus 1942–1943) El-Eswed, Bassam I. (2006). "Spirits: The Reactive Substances in Jābir's Alchemy". Arabic Sciences and Philosophy. 16 (1): 71–90. doi:10.1017/S0957423906000270. S2CID 170880312. (the first study since the days of Berthelot, Stapleton, and Ruska to approach the Jabirian texts from a modern chemical point of view) Fück, Johann W. (1951). "The Arabic Literature on Alchemy According to An-Nadīm (A.D. 987)". Ambix. 4 (3–4): 81–144. doi:10.1179/amb.1951.4.3-4.81. Gannagé, Emma (1998). Le commentaire d'Alexandre d'Aphrodise In de generatione et corruptione perdu en grec, retrouvé en arabe dans Ǧābir ibn Ḥayyān, Kitāb al-Taṣrīf (Unpublished PhD diss.). Université Paris 1 Panthéon-Sorbonne. Holmyard, Eric J. (1923). "Jābir ibn Ḥayyān". Proceedings of the Royal Society of Medicine. 16: 46–57. doi:10.1177/003591572301601606. PMID 19983239. (pioneering paper first showing that a great deal of Jabir's non-religious alchemical treatises are still extant, that some of these treatises contain a sophisticated system of natural philosophy, and that Jabir knew the sulfur-mercury theory of metals) Holmyard, Eric J. (1927). "An Essay on Jābir ibn Ḥayyān". In Ruska, Julius (ed.). 
Studien zur Geschichte der Chemie: Festgabe Edmund O. v. Lippmann. Berlin: Springer. pp. 28–37. doi:10.1007/978-3-642-51355-8_5. ISBN 978-3-642-51236-0. (seminal paper first presenting the hypothesis that Jabir was the son of a proto-Shi'ite pharmacist called Ḥayyān al-ʿAṭṭār) Kraus, Paul (1930). "Dschābir ibn Ḥajjān und die Ismāʿīlijja". In Ruska, Julius (ed.). Dritter Jahresbericht des Forschungsinstituts für Geschichte der Naturwissenschaften. Mit einer Wissenschaftlichen Beilage: Der Zusammenbruch der Dschābir-Legende. Berlin: Springer. pp. 23–42. OCLC 913815541. (seminal paper arguing that the Jabirian writings should be dated to ca. 850–950; the first to point out the similarities between Jabirian Shi'ism and early Isma'ilism) Kraus, Paul (1931). "Studien zu Jābir ibn Hayyān" (PDF). Isis. 15 (1): 7–30. doi:10.1086/346536. JSTOR 224568. S2CID 143876602. (contains further arguments for the late dating of the Jabirian writings; analyses Jabir's accounts of his relations with the Barmakids, rejecting their historicity) Kraus, Paul (1942). "Les dignitaires de la hiérarchie religieuse selon Ǧābir ibn Ḥayyān". Bulletin de l'institut français d'archéologie orientale. 41: 83–97. doi:10.3406/bifao.1942.2022. (pioneering paper on Jabirian proto-Shi'ism) Kraus, Paul (1942–1943). Jâbir ibn Hayyân: Contribution à l'histoire des idées scientifiques dans l'Islam. I. Le corpus des écrits jâbiriens. II. Jâbir et la science grecque. Cairo: Institut Français d'Archéologie Orientale. ISBN 978-3-487-09115-0. OCLC 468740510. (vol. 1 contains a pioneering analysis of the sources for Jabir's biography, and a catalogue of all known Jabirian treatises and the larger collections they belong to; vol. 2 contains a seminal analysis of the Jabirian philosophical system and its relation to Greek philosophy; remains the standard reference work on Jabir even today) Laufer, Berthold (1919). Sino-Iranica: Chinese Contributions to the History of Civilization in Ancient Iran. Fieldiana, Anthropological series. Vol. 15. Chicago: Field Museum of Natural History. OCLC 1084859541. Lory, Pierre (1983). Jâbir ibn Hayyân: Dix traités d'alchimie. Les dix premiers Traités du Livre des Soixante-dix. Paris: Sindbad. ISBN 9782742710614. (elaborates Kraus's suggestion that the Jabirian writings may have developed from an earlier core, arguing that some of them, even though receiving their final redaction only in ca. 850–950, may date back to the late 8th century) Lory, Pierre (1989). Alchimie et mystique en terre d'Islam. Lagrasse: Verdier. ISBN 9782864320913. (focuses on Jabir's religious philosophy; contains an analysis of Jabirian Shi'ism, arguing that it is in some respects different from Isma'ilism and may have been relatively independent) Lory, Pierre (1994). "Mots d'alchimie, alchimie des mots". In Jacquart, D. (ed.). La formation du vocabulaire scientifique et intellectuel dans le monde arabe. Civicima. Vol. 7. Turnhout: Brepols. pp. 91–106. doi:10.1484/M.CIVI-EB.4.00077. ISBN 978-2-503-37007-1. Lory, Pierre (2000). "Eschatologie alchimique chez jâbir ibn Hayyân". Revue des mondes musulmans et de la Méditerranée. 91–94 (91–94): 73–92. doi:10.4000/remmm.249. Lory, Pierre (2016a). "Aspects de l'ésotérisme chiite dans le Corpus Ǧābirien: Les trois Livres de l'Elément de fondation". Al-Qantara. 37 (2): 279–298. doi:10.3989/alqantara.2016.009. Lory, Pierre (2016b). "Esotérisme shi'ite et alchimie. 
Quelques remarques sur la doctrine de l'initiation dans le Corpus Jābirien". In Amir-Moezzi, Mohammad Ali; De Cillis, Maria; De Smet, Daniel; Mir-Kasimov, Orkhan (eds.). L'Ésotérisme shi'ite, ses racines et ses prolongements – Shi'i Esotericism: Its Roots and Developments. Bibliothèque de l'Ecole des Hautes Etudes, Sciences Religieuses. Vol. 177. Turnhout: Brepols. pp. 411–422. doi:10.1484/M.BEHE-EB.4.01179. ISBN 978-2-503-56874-4. Marquet, Yves (1988). La philosophie des alchimistes et l'alchimie des philosophes — Jâbir ibn Hayyân et les « Frères de la Pureté ». Paris: Maisonneuve et Larose. ISBN 9782706809545. Moureau, Sébastien (2020). "Min al-kīmiyāʾ ad alchimiam. The Transmission of Alchemy from the Arab-Muslim World to the Latin West in the Middle Ages". Micrologus. 28: 87–141. hdl:2078.1/211340. (a survey of all Latin alchemical texts known to have been translated from the Arabic) Newman, William R. (1985). "New Light on the Identity of Geber". Sudhoffs Archiv. 69 (1): 76–90. JSTOR 20776956. PMID 2932819. Newman, William R. (1991). The Summa perfectionis of Pseudo-Geber: A Critical Edition, Translation and Study. Leiden: Brill. ISBN 978-90-04-09464-2. Newman, William R. (1996). "The Occult and the Manifest among the Alchemists". In Ragep, F. Jamil; Ragep, Sally P.; Livesey, Steven (eds.). Tradition, Transmission, Transformation: Proceedings of Two Conferences on Pre-Modern Science held at the University of Oklahoma. Leiden: Brill. pp. 173–198. ISBN 978-90-04-10119-7. Nomanul Haq, Syed (1994). Names, Natures and Things: The Alchemist Jābir ibn Ḥayyān and his Kitāb al-Aḥjār (Book of Stones). Dordrecht: Kluwer. ISBN 9789401118989. (signalled some new sources on Jabir's biography; followed Sezgin 1971 in arguing for an early date for the Jabirian writings) Norris, John (2006). "The Mineral Exhalation Theory of Metallogenesis in Pre-Modern Mineral Science". Ambix. 53 (1): 43–65. doi:10.1179/174582306X93183. S2CID 97109455. (important overview of the sulfur-mercury theory of metals from its conceptual origins in ancient Greek philosophy to the 18th century; discussion of the Arabic texts is brief and dependent on secondary sources) Ruska, Julius (1923a). "Sal ammoniacus, Nušādir und Salmiak". Sitzungsberichte der Heidelberger Akademie der Wissenschaften, Philosophisch-Historische Klasse. 14 (5). doi:10.11588/diglit.38046. Ruska, Julius (1923b). "Über das Schriftenverzeichnis des Ǧābir ibn Ḥajjān und die Unechtheit einiger ihm zugeschriebenen Abhandlungen". Archiv für Geschichte der Medizin. 15: 53–67. JSTOR 20773292. Ruska, Julius (1927). "Die siebzig Bücher des Ǵābir ibn Ḥajjān". In Ruska, Julius (ed.). Studien zur Geschichte der Chemie: Festgabe Edmund O. v. Lippmann. Berlin: Springer. pp. 38–47. doi:10.1007/978-3-642-51355-8_6. ISBN 978-3-642-51236-0. Ruska, Julius (1928). "Der Salmiak in der Geschichte der Alchemie". Zeitschrift für angewandte Chemie. 41 (50): 1321–1324. Bibcode:1928AngCh..41.1321R. doi:10.1002/ange.19280415006. Ruska, Julius; Garbers, Karl (1939). "Vorschriften zur Herstellung von scharfen Wässern bei Gabir und Razi". Der Islam. 25: 1–34. doi:10.1515/islm.1938.25.1.1. S2CID 161055255. (contains a comparison of Jabir's and Abū Bakr al-Rāzī's knowledge of chemical apparatus, processes and substances) Sarton, George (1927–1948). Introduction to the History of Science. Vol. I–III. Baltimore: Williams & Wilkins. OCLC 476555889. Sezgin, Fuat (1971). 
Geschichte des arabischen Schrifttums, Band IV: Alchimie, Chemie, Botanik, Agrikultur bis ca. 430 H. Leiden: Brill. pp. 132–269. ISBN 9789004020092. (contains a penetrating critique of Kraus’ thesis on the late dating of the Jabirian works) Stapleton, Henry E. (1905). "Sal Ammoniac: A Study in Primitive Chemistry". Memoirs of the Asiatic Society of Bengal. I (2): 25–40. Stapleton, Henry E.; Azo, R.F.; Hidayat Husain, M. (1927). "Chemistry in Iraq and Persia in the Tenth Century A.D." Memoirs of the Asiatic Society of Bengal. VIII (6): 317–418. OCLC 706947607. Starr, Peter (2009). "Towards a Context for Ibn Umayl, Known to Chaucer as the Alchemist Senior" (PDF). Journal of Arts and Sciences. 11: 61–77. Archived from the original (PDF) on 25 September 2020. Retrieved 28 November 2020. Ullmann, Manfred (1972). Die Natur- und Geheimwissenschaften im Islam. Leiden: Brill. ISBN 978-90-04-03423-5. Watanabe, Masayo (2023). Nature in the Books of Seven Metals – Ǧābirian Corpus in Dialogue with Ancient Greek Philosophy and Byzantine Alchemy (PhD thesis). University of Bologna. Weisser, Ursula (1980). Das "Buch über das Geheimnis der Schöpfung" von Pseudo-Apollonios von Tyana. Berlin: De Gruyter. doi:10.1515/9783110866933. ISBN 978-3-11-086693-3. === Primary sources === ==== Editions of Arabic Jabirian texts ==== Abū Rīda, Muḥammad A. (1984). "Thalāth rasāʾil falsafiyya li-Jābir b. Ḥayyān". Zeitschrift für Geschichte der Arabisch-Islamischen Wissenschaften. 1: 50–67. Abū Rīda, Muḥammad A. (1985). "Risālatān falsafiyyatān li-Jābir b. Ḥayyān". Zeitschrift für Geschichte der Arabisch-Islamischen Wissenschaften. 2: 75–84. Berthelot, Marcellin; Houdas, Octave V. (1893). La Chimie au Moyen Âge. Vol. III. Paris: Imprimerie nationale. al-Mazyadī, Aḥmad Farīd (2006). Rasāʾil Jābir ibn Ḥayyān. Beirut: Dār al-Kutub al-ʿIlmiyya. (pirated edition of Berthelot & Houdas 1893, Holmyard 1928 and Kraus 1935) Gannagé, Emma (1998). Le commentaire d'Alexandre d'Aphrodise In de generatione et corruptione perdu en grec, retrouvé en arabe dans Ǧābir ibn Ḥayyān, Kitāb al-Taṣrīf (Unpublished PhD diss.). Université Paris 1 Panthéon-Sorbonne. (edition of the Kitāb al-Taṣrīf) Holmyard, E. John (1928). The Arabic Works of Jâbir ibn Hayyân. Paris: Paul Geuthner. Kraus, Paul (1935). Essai sur l'histoire des idées scientifiques dans l'Islam / Mukhtār Rasāʾil Jābir b. Ḥayyān. Paris/Cairo: G.P. Maisonneuve/Maktabat al-Khānjī. Nomanul Haq, Syed (1994). Names, Natures and Things: The Alchemist Jābir ibn Ḥayyān and his Kitāb al-Aḥjār (Book of Stones). Dordrecht: Kluwer. ISBN 9789401118989. (contains a new edition of parts of the Kitāb al-Aḥjār with English translation) Lory, Pierre (1988). Tadbīr al-iksīr al-aʿẓam. Arbaʿ ʿashara risāla fī ṣanʿat al-kīmiyāʾ / L'élaboration de l'élixir suprême. Quatorze traités de Gâbir ibn Ḥayyân sur le grand oeuvre alchimique. Damascus: Institut français de Damas. Ruska, Julius; Garbers, Karl (1939). "Vorschriften zur Herstellung von scharfen Wässern bei Gabir und Razi". Der Islam. 25: 1–34. doi:10.1515/islm.1938.25.1.1. S2CID 161055255. Sezgin, Fuat (1986). The Book of Seventy. Frankfurt am Main: Institute for the History of Arabic-Islamic Science. (facsimile of the Kitāb al-Sabʿīn) Siggel, Alfred (1958). Das Buch der Gifte des Ǧābir ibn Ḥayyān. Wiesbaden: Steiner. (facsimile of the Kitāb al-Sumūm wa-dafʿ maḍārrihā) Zirnis, Peter (1979). The Kitāb Usṭuqus al-uss of Jābir ibn Ḥayyān (Unpublished PhD diss.). New York University. 
(contains an annotated copy of the Kitāb Usṭuqus al-uss with English translation) Watanabe, Masayo (2023). Nature in the Books of Seven Metals – Ǧābirian Corpus in Dialogue with Ancient Greek Philosophy and Byzantine Alchemy (PhD thesis). University of Bologna. (edition of excerpts from the first six Books on the Seven Metals (Kitāb al-Dhahab, Kr. no. 947; Kitāb al-Fiḍḍa, Kr. no. 948; Kitāb al-Nuḥās, Kr. no. 949; Kitāb al-Ḥadīd, Kr. no. 950; Kitāb al-Raṣāṣ al-qalaʿī, Kr. no. 951; Kitāb al-Usrub, Kr. no. 952), the full text of the Kitāb al-Khārṣīnī, Kr. no. 953, and an excerpt from the Kitāb al-Ṭabīʿa al-khāmisa, Kr. no. 396) ==== Modern translations of Arabic Jabirian texts ==== Berthelot, Marcellin; Houdas, Octave V. (1893). La Chimie au Moyen Âge. Vol. III. Paris: Imprimerie nationale. (French translations of the edited Arabic texts) Corbin, Henry (1950). "Le livre du Glorieux de Jâbir ibn Hayyân". Eranos-Jahrbuch. 18: 48–114. (French translation of the Kitāb al-Mājid) Gannagé, Emma (1998). Le commentaire d'Alexandre d'Aphrodise In de generatione et corruptione perdu en grec, retrouvé en arabe dans Ǧābir ibn Ḥayyān, Kitāb al-Taṣrīf (Unpublished PhD diss.). Université Paris 1 Panthéon-Sorbonne. (French translation of the Kitāb al-Taṣrīf) Lory, Pierre (1983). Jâbir ibn Hayyân: Dix traités d'alchimie. Les dix premiers Traités du Livre des Soixante-dix. Paris: Sindbad. ISBN 9782742710614. (French translations of the first ten books of the Kitāb al-Sabʿīn) Lory, Pierre (2000). "Eschatologie alchimique chez jâbir ibn Hayyân". Revue des mondes musulmans et de la Méditerranée. 91–94 (91–94): 73–92. doi:10.4000/remmm.249. (French translation of the Kitāb al-Bayān) Nomanul Haq, Syed (1994). Names, Natures and Things: The Alchemist Jābir ibn Ḥayyān and his Kitāb al-Aḥjār (Book of Stones). Dordrecht: Kluwer. ISBN 9789401118989. (contains a new edition of parts of the Kitāb al-Aḥjār with English translation) O’Connor, Kathleen M. (1994). The Alchemical Creation of Life (Takwīn) and Other Concepts of Genesis in Medieval Islam (PhD diss.). University of Pennsylvania. (contains translations of extensive passages from various Jabirian works, with discussion) Rex, Friedemann (1975). Zur Theorie der Naturprozesse in der früharabischen Wissenschaft. Wiesbaden: Steiner. (German translation of the Kitāb Ikhrāj mā fī al-quwwa ilā al-fiʿl) Ruska, Julius; Garbers, Karl (1939). "Vorschriften zur Herstellung von scharfen Wässern bei Gabir und Razi". Der Islam. 25: 1–34. doi:10.1515/islm.1938.25.1.1. S2CID 161055255. (German translations of edited Arabic fragments) Siggel, Alfred (1958). Das Buch der Gifte des Ǧābir ibn Ḥayyān. Wiesbaden: Steiner. (German translation of the facsimile of Kitāb al-Sumūm wa-dafʿ maḍārrihā) Zirnis, Peter (1979). The Kitāb Usṭuqus al-uss of Jābir ibn Ḥayyān (Unpublished PhD diss.). New York University. (contains an annotated copy of the Kitāb Usṭuqus al-uss with English translation) ==== Medieval translations of Arabic Jabirian texts (Latin) ==== Berthelot, Marcellin (1906). "Archéologie et Histoire des sciences". Mémoires de l'Académie des sciences de l'Institut de France. 49. (pp. 310–363 contain an edition of the Latin translation of Jabir's Seventy Books under the title Liber de Septuaginta) Colinet, Andrée (2000). "Le Travail des quatre éléments ou lorsqu'un alchimiste byzantin s'inspire de Jabir". In Draelants, Isabelle; Tihon, Anne; Van den Abeele, Baudouin (eds.). Occident et Proche-Orient: Contacts scientifiques au temps des Croisades. 
Actes du colloque de Louvain-la-Neuve, 24 et 25 mars 1997. Reminisciences. Vol. 5. Turnhout: Brepols. pp. 165–190. doi:10.1484/M.REM-EB.6.09070802050003050101010600. ISBN 978-2-503-51116-0. (pp. 179–187 contain an edition of the Latin translation of a separate treatise belonging to Jabir's Seventy Books, i.e., The Book of the Thirty Words, Kitāb al-Thalāthīn kalima, Kr. no. 125, translated as Liber XXX verborum) Darmstaedter, Ernst (1925). "Liber Misericordiae Geber: Eine lateinische Übersetzung des gröβeren Kitâb l-raḥma". Archiv für Geschichte der Medizin. 17 (4): 181–197. (edition of the Latin translation of Jabir's The Great Book of Mercy, Kitāb al-Raḥma al-kabīr, Kr. no. 5, under the title Liber Misericordiae) Newman, William R. (1994). "Arabo-Latin Forgeries: The Case of the Summa Perfectionis (with the text of Jābir ibn Ḥayyān's Liber Regni)". In Russell, G. A. (ed.). The 'Arabick' Interest of the Natural Philosophers in Seventeenth-Century England. Leiden: Brill. pp. 278–296. ISBN 978-90-04-09888-6. (pp. 288–291 contain a Latin translation of intermittent extracts of Jabir's Book of Kingship, Kitāb al-Mulk, Kr. no. 454, under the title Liber regni, with an English translation on pp. 291–293) Note that some other Latin works attributed to Jabir/Geber (Summa perfectionis, De inventione veritatis, De investigatione perfectionis, Liber fornacum, Testamentum Geberi, and Alchemia Geberi) are widely considered to be pseudepigraphs which, though largely drawing on Arabic sources, were originally written by Latin authors in the 13th–14th centuries (see pseudo-Geber); see Moureau 2020, p. 112; cf. Forster 2018.
Wikipedia/Sulfur-mercury_theory_of_metals
A drug is any chemical substance other than a nutrient or an essential dietary ingredient, which, when administered to a living organism, produces a biological effect. Consumption of drugs can be via inhalation, injection, smoking, ingestion, absorption via a patch on the skin, suppository, or dissolution under the tongue. In pharmacology, a drug is a chemical substance, typically of known structure, which, when administered to a living organism, produces a biological effect. A pharmaceutical drug, also called a medication or medicine, is a chemical substance used to treat, cure, prevent, or diagnose a disease or to promote well-being. Traditionally, drugs were obtained through extraction from medicinal plants, but more recently also by organic synthesis. Pharmaceutical drugs may be used for a limited duration, or on a regular basis for chronic disorders. == Classification == Pharmaceutical drugs are often classified into drug classes—groups of related drugs that have similar chemical structures, the same mechanism of action (binding to the same biological target), a related mode of action, and that are used to treat the same disease. The Anatomical Therapeutic Chemical Classification System (ATC), the most widely used drug classification system, assigns each drug a unique ATC code, an alphanumeric code that places it within specific drug classes in the ATC system (the structure of these codes is illustrated in the short sketch below). Another major classification system is the Biopharmaceutics Classification System. This classifies drugs according to their solubility and permeability or absorption properties. Psychoactive drugs are substances that affect the function of the central nervous system, altering perception, mood or consciousness. These drugs are divided into different groups, such as stimulants, depressants, antidepressants, anxiolytics, antipsychotics, and hallucinogens. Psychoactive drugs have proven useful worldwide in treating a wide range of medical conditions, including mental disorders. The most widely used drugs in the world include caffeine, nicotine and alcohol, which are also considered recreational drugs, since they are used for pleasure rather than medicinal purposes. All drugs have potential side effects. Abuse of several psychoactive drugs can cause addiction or physical dependence. Excessive use of stimulants can promote stimulant psychosis. Many recreational drugs are illicit; international treaties such as the Single Convention on Narcotic Drugs exist for the purpose of their prohibition. == Etymology == In English, the noun "drug" is thought to originate from Old French "drogue", possibly deriving from "droge (vate)" from Middle Dutch meaning "dry (barrels)", referring to medicinal plants preserved as dry matter in barrels. In the 1990s, however, the Spanish lexicographer Federico Corriente Córdoba documented the possible origin of the word in {ḥṭr}, an early romanized form of the Al-Andalus language from the northwestern part of the Iberian Peninsula. The term could approximately be transcribed as حطروكة or hatruka. The term "drug" has become a skunked term with a negative connotation, being used as a synonym for illegal substances like cocaine or heroin or for drugs used recreationally. In other contexts the terms "drug" and "medicine" are used interchangeably. == Efficacy == Drug action is highly specific, and a drug's effects may only be detected in certain individuals. For instance, the 10 highest-grossing drugs in the US may help only 4–25% of people. 
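As a concrete illustration of the ATC codes described in the Classification section above, the following minimal Python sketch (not part of the original article) splits a full seven-character ATC code into its five hierarchical levels. The example code A10BA02 (metformin) and the level names follow the standard ATC convention; the function name itself is a hypothetical choice made for this sketch.

def split_atc_code(code):
    # A full ATC code has seven characters encoding five hierarchical levels:
    #   level 1 (1 letter): anatomical main group
    #   level 2 (2 digits): therapeutic subgroup
    #   level 3 (1 letter): pharmacological subgroup
    #   level 4 (1 letter): chemical subgroup
    #   level 5 (2 digits): chemical substance
    if len(code) != 7:
        raise ValueError("expected a full seven-character ATC code")
    return {
        "anatomical main group": code[:1],
        "therapeutic subgroup": code[:3],
        "pharmacological subgroup": code[:4],
        "chemical subgroup": code[:5],
        "chemical substance": code[:7],
    }

# Example: A10BA02 (metformin).
# A -> alimentary tract and metabolism, A10 -> drugs used in diabetes,
# A10B -> blood glucose lowering drugs (excluding insulins),
# A10BA -> biguanides, A10BA02 -> metformin.
print(split_atc_code("A10BA02"))

Because each level is a prefix of the next, drugs in the same class share a code prefix, which is what makes the ATC system usable for the kind of class-based grouping described above.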
Often, the activity of a drug depends on the genotype of a patient. For example, Erbitux (cetuximab) increases the survival rate of colorectal cancer patients if they carry a particular mutation in the EGFR gene. Some drugs are specifically approved for certain genotypes. Vemurafenib is such a case which is used for melanoma patients who carry a mutation in the BRAF gene. The number of people who benefit from a drug determines if drug trials are worth carrying out, given that phase III trials may cost between $100 million and $700 million per drug. This is the motivation behind personalized medicine, that is, to develop drugs that are adapted to individual patients. == Medication == A medication or medicine is a drug taken to cure or ameliorate any symptoms of an illness or medical condition. The use may also be as preventive medicine that has future benefits but does not treat any existing or pre-existing diseases or symptoms. Dispensing of medication is often regulated by governments into three categories—over-the-counter medications, which are available in pharmacies and supermarkets without special restrictions; behind-the-counter medicines, which are dispensed by a pharmacist without needing a doctor's prescription, and prescription only medicines, which must be prescribed by a licensed medical professional, usually a physician. In the United Kingdom, behind-the-counter medicines are called pharmacy medicines which can only be sold in registered pharmacies, by or under the supervision of a pharmacist. These medications are designated by the letter P on the label. The range of medicines available without a prescription varies from country to country. Medications are typically produced by pharmaceutical companies and are often patented to give the developer exclusive rights to produce them. Those that are not patented (or with expired patents) are called generic drugs since they can be produced by other companies without restrictions or licenses from the patent holder. Pharmaceutical drugs are usually categorised into drug classes. A group of drugs will share a similar chemical structure, have the same mechanism of action or the same related mode of action, or target the same illness or related illnesses. The Anatomical Therapeutic Chemical Classification System (ATC), the most widely used drug classification system, assigns drugs a unique ATC code, which is an alphanumeric code that assigns it to specific drug classes within the ATC system. Another major classification system is the Biopharmaceutics Classification System. This groups drugs according to their solubility and permeability or absorption properties. == Spiritual and religious use == Some religions, particularly ethnic religions, are based completely on the use of certain drugs, known as entheogens, which are mostly hallucinogens—psychedelics, dissociatives, or deliriants. Some entheogens include kava which can act as a stimulant, a sedative, a euphoriant and an anesthetic. The roots of the kava plant are used to produce a drink consumed throughout the cultures of the Pacific Ocean. Some shamans from different cultures use entheogens, defined as "generating the divine within," to achieve religious ecstasy. Amazonian shamans use ayahuasca (yagé), a hallucinogenic brew, for this purpose. Mazatec shamans have a long and continuous tradition of religious use of Salvia divinorum, a psychoactive plant. Its use is to facilitate visionary states of consciousness during spiritual healing sessions. 
Silene undulata is regarded by the Xhosa people as a sacred plant and used as an entheogen. Its roots are traditionally used to induce vivid (and according to the Xhosa, prophetic) lucid dreams during the initiation process of shamans, classifying it as a naturally occurring oneirogen similar to the more well-known dream herb Calea ternifolia. Peyote, a small spineless cactus, has been a major source of psychedelic mescaline and has probably been used by Native Americans for at least five thousand years. Most mescaline is now obtained from a few species of columnar cacti, in particular San Pedro, rather than from the vulnerable peyote. The entheogenic use of cannabis has also been widely practised for centuries. Rastafari use marijuana (ganja) as a sacrament in their religious ceremonies. Psychedelic mushrooms (psilocybin mushrooms), commonly called magic mushrooms or shrooms, have also long been used as entheogens. == Smart drugs and designer drugs == Nootropics, also commonly referred to as "smart drugs", are drugs that are claimed to improve human cognitive abilities. Nootropics are used to improve memory, concentration, thought, mood, and learning. An increasingly used nootropic among students, also known as a study drug, is methylphenidate, commonly branded as Ritalin and used for the treatment of attention deficit hyperactivity disorder (ADHD) and narcolepsy. At high doses, methylphenidate can be highly addictive. Serious addiction can lead to psychosis, anxiety and heart problems, and the use of this drug is related to a rise in suicides and overdoses. Evidence for use outside of student settings is limited but suggests that it is commonplace. Intravenous use of methylphenidate can lead to emphysematous damage to the lungs, known as Ritalin lung. Other drugs, known as designer drugs, are also produced. An early example of what today would be labelled a 'designer drug' was LSD, which was synthesised from ergot. Other examples include analogs of performance-enhancing drugs such as designer steroids taken to improve physical capabilities; these are sometimes used (legally or not) for this purpose, often by professional athletes. Other designer drugs mimic the effects of psychoactive drugs. Since the late 1990s, many of these synthesised drugs have been identified. In Japan and the United Kingdom this has spurred the addition of many designer drugs into a newer class of controlled substances known as a temporary class drug. Synthetic cannabinoids have been produced for a longer period of time and are used in the designer drug synthetic cannabis. == Recreational drug use == Recreational drug use is the use of a drug (legal, controlled, or illegal) with the primary intention of altering the state of consciousness through alteration of the central nervous system in order to create positive emotions and feelings. The hallucinogen LSD is a psychoactive drug commonly used as a recreational drug. Ketamine is a drug used for anesthesia, and is also used as a recreational drug, both in powder and liquid form, for its hallucinogenic and dissociative effects. Some national laws prohibit the use of different recreational drugs; medicinal drugs that have the potential for recreational use are often heavily regulated. However, there are many recreational drugs that are legal in many jurisdictions and widely culturally accepted. Cannabis is the most commonly consumed controlled recreational drug in the world (as of 2012). 
Its use is illegal in many countries, but it is legally used in several countries, usually with the proviso that it may only be used for personal purposes. It can be used in the leaf form of marijuana (grass), or in the resin form of hashish. Marijuana is a milder form of cannabis than hashish. There may be an age restriction on the consumption and purchase of legal recreational drugs. Some recreational drugs that are legal and accepted in many places include alcohol, tobacco, betel nut, and caffeine products, and in some areas of the world the legal use of drugs such as khat is common. There are a number of legal intoxicants commonly called legal highs that are used recreationally. The most widely used of these is alcohol. == Administration of drugs == All drugs have a route of administration, and many can be administered by more than one. A bolus is the administration of a medication, drug or other compound that is given to raise its concentration in blood rapidly to an effective level, regardless of the route of administration. == Control of drugs == Numerous governmental offices in many countries deal with the control and supervision of drug manufacture and use, and the implementation of various drug laws. The Single Convention on Narcotic Drugs is an international treaty brought about in 1961 to prohibit the use of narcotics save for those used in medical research and treatment. In 1971, a second treaty, the Convention on Psychotropic Substances, had to be introduced to deal with newer recreational psychoactive and psychedelic drugs. The legal status of Salvia divinorum varies in many countries and even in states within the United States. Where it is legislated against, the degree of prohibition also varies. The Food and Drug Administration (FDA) in the United States is a federal agency responsible for protecting and promoting public health through the regulation and supervision of food safety, tobacco products, dietary supplements, prescription and over-the-counter medications, vaccines, biopharmaceuticals, blood transfusions, medical devices, electromagnetic radiation-emitting devices, cosmetics, animal foods and veterinary drugs. In India, the Narcotics Control Bureau (NCB), an Indian federal law enforcement and intelligence agency under the Ministry of Home Affairs, is tasked with combating drug trafficking and assisting international efforts against the use of illegal substances under the provisions of the Narcotic Drugs and Psychotropic Substances Act. == See also == === Lists of drugs === List of drugs List of pharmaceutical companies List of psychoactive plants List of Schedule I drugs (US) == References == == Further reading == Richard J. Miller (2014). Drugged: the science and culture behind psychotropic drugs. Oxford University Press. ISBN 978-0-19-995797-2. == External links == DrugBank, a database of 13,400 drugs and 5,100 protein drug targets "Drugs", BBC Radio 4 discussion with Richard Davenport-Hines, Sadie Plant and Mike Jay (In Our Time, May 23, 2002)
Wikipedia/Drugs
Islamic calligraphy is the artistic practice of penmanship and calligraphy, in the languages which use Arabic alphabet or the alphabets derived from it. It is a highly stylized and structured form of handwriting that follows artistic conventions and is often used for Islamic religious texts, architecture, and decoration. It includes Arabic, Persian, Ottoman, and Urdu calligraphy. It is known in Arabic as khatt Arabi (خط عربي), literally meaning 'line', 'design', or 'construction'. The development of Islamic calligraphy is strongly tied to the Qur'an, as chapters and verses from the Qur'an are a common and almost universal text upon which Islamic calligraphy is based. Although artistic depictions of people and animals are not explicitly forbidden in the Qur'an, Islamic traditions have often limited figural representation in Islamic religious texts in order to avoid idolatry. Some scholars argue that Kufic script was developed by the late 7th century in Kufa, Iraq, from which it takes its name. This early style later evolved into several forms, including floral, foliated, plaited or interlaced, bordered, and square Kufic. In the ancient world, though, artists sometimes circumvented aniconic prohibitions by creating intricate calligraphic compositions that formed shapes and figures using tiny script. Calligraphy was a valued art form, and was regarded as both an aesthetic and moral pursuit. An ancient Arabic proverb illustrates this point by emphatically stating that "Purity of writing is purity of the soul." Beyond religious contexts, Islamic calligraphy is widely used in secular art, architecture, and decoration. Its prominence in Islamic art is not solely due to religious constraints on figurative imagery, but rather reflects the central role of writing and the written word in Islamic culture. Islamic calligraphy evolved primarily from two major styles: Kufic and Naskh, with numerous regional and stylistic variations. In the modern era, Arabic and Persian calligraphy have influenced modern art, particularly in the post-colonial Middle East, and have also inspired the fusion style known as calligraffiti. == Instruments and media == The traditional instrument of the Islamic calligrapher is the qalam, a pen normally made of dried reed or bamboo. The ink is often in colour and chosen so that its intensity can vary greatly, creating dynamism and movement in the letter forms. Some styles are often written using a metallic-tip pen. Islamic calligraphy can be applied to a wide range of decorative mediums other than paper, such as tiles, vessels, carpets, and stone. Before the advent of paper, papyrus and parchment were used for writing. During the 9th century, an influx of paper from China revolutionized calligraphy. Libraries in the Muslim world regularly contained hundreds and even thousands of books.: 218  For centuries, the art of writing has fulfilled a central iconographic function in Islamic art. Although the academic tradition of Islamic calligraphy began in Baghdad, the centre of the Islamic empire during much of its early history, it eventually spread as far as India and Spain. Coins were another support for calligraphy. Beginning in 692, the Islamic caliphate reformed the coinage of the Near East by replacing Byzantine Christian imagery with Islamic phrases inscribed in Arabic. This was especially true for dinars, or gold coins of high value. Generally, the coins were inscribed with quotes from the Qur'an. 
By the tenth century, the Persians, who had converted to Islam, began weaving inscriptions onto elaborately patterned silks. So precious were textiles featuring Arabic text that Crusaders brought them to Europe as prized possessions. A notable example is the Suaire de Saint-Josse, used to wrap the bones of St. Josse in the Abbey of St. Josse-sur-Mer, near Caen in north-western France.: 223–225  As Islamic calligraphy is highly venerated, most works follow examples set by well-established calligraphers, with the exception of secular or contemporary works. In the Islamic tradition, calligraphers underwent extensive training in three stages, including the study of their teacher's models, in order to be granted certification. == Styles == === Kufic === The Kufic style emphasizes rigid and angular strokes; it developed alongside the Naskh script in the 7th century. Although some scholars dispute this, Kufic script was supposedly developed around the end of the 7th century in Kufa, Iraq, from which it takes its name. The style later developed into several varieties, including floral, foliated, plaited or interlaced, bordered, and square kufic. Due to its straight and orderly style of lettering, Kufic was frequently used in ornamental stone carving as well as on coins. It was the main script used to copy the Qur'an from the 8th to 10th century and went out of general use in the 12th century when the flowing naskh style became more practical. However, it continued to be used as a decorative element to contrast with superseding styles. There were no set rules for using the Kufic script; the only common feature is the angular, linear shapes of the characters. Due to the lack of standardization of early Kufic, the script differs widely between regions, ranging from very square and rigid forms to flowery and decorative ones. Common varieties include square Kufic, a technique known as banna'i. Contemporary calligraphy using this style is also popular in modern decorations. Decorative Kufic inscriptions were often imitated as pseudo-Kufic in medieval and Renaissance Europe. Pseudo-Kufic is especially common in Renaissance depictions of people from the Holy Land. The exact reason for the incorporation of pseudo-Kufic is unclear. It seems that Westerners mistakenly associated 13th–14th-century Middle Eastern scripts with systems of writing used during the time of Jesus, and thus found it natural to represent early Christians in association with them. === Naskh === The use of cursive scripts coexisted with Kufic, and historically cursive was commonly used for informal purposes. Naskh first appeared within the first century of the Islamic calendar. Naskh translates to "copying", as it became the standard for transcribing books and manuscripts. The script is the most ubiquitous among other styles, used in the Qur'an, official decrees, and private correspondence. It became the basis of modern Arabic print. Kufic is commonly believed to predate naskh, but historians have traced the two scripts as coexisting long before their codification by ibn Muqla, as the two served different purposes. Kufic was used primarily in decoration, while Naskh served for everyday scribal use. === Thuluth === Thuluth was developed during the 15th century and slowly refined by Ottoman calligraphers including Mustafa Râkim, Shaykh Hamdallah, and others, until it became what it is today. Letters in this script have long vertical lines with broad spacing. 
The name, meaning "one third", may be a reference to the x-height, which is one-third of the 'alif, or to the fact that the pen used to write the vowels and ornaments is one third the width of that used in writing the letters. === Reqāʿ === Reqāʿ is a handwriting style similar to thuluth. It first appeared in the 10th century. The shape is simple with short strokes and small flourishes. Yaqut al-Musta'simi was one of the calligraphers who employed this style. The Arab calligrapher Ibn al-Bawwab is believed to have created this script. === Muhaqqaq === Muhaqqaq is a majestic style used by accomplished calligraphers, and is a variation of thuluth. Along with thuluth, it was considered one of the most beautiful scripts, as well as one of the most difficult to execute. Muhaqqaq was commonly used during the Mamluk era, but its use became largely restricted to short phrases, such as the basmallah, from the 18th century onward. === Regional styles === With the spread of Islam, the Arabic script was established in a vast geographic area, with many regions developing their own unique style. From the 14th century onward, other cursive styles began to develop in Turkey, Persia, and China. Maghrebi scripts developed from Kufic letters in the Maghreb (North Africa) and al-Andalus (Iberia); they are traditionally written with a pointed tip (القلم المذبب), producing a line of even thickness. Within the Maghrebi family, there are different styles including the cursive mujawher and the ceremonial mabsut. Sudani scripts developed in Biled as-Sudan (the West African Sahel) and can be considered a subcategory of Maghrebi scripts. Diwani is a cursive style of Arabic calligraphy developed during the reign of the early Ottoman Turks in the 16th and early 17th centuries. It was invented by Housam Roumi, and reached its height of popularity under Süleyman I the Magnificent (1520–1566). Spaces between letters are often narrow, and lines ascend upwards from right to left. Larger variations called djali are filled with dense decorations of dots and diacritical marks in the space between, giving it a compact appearance. Diwani is difficult to read and write due to its heavy stylization, and became the ideal script for writing court documents as it ensured confidentiality and prevented forgery. Nasta'liq is a cursive style originally devised to write the Persian language for literary and non-Qur'anic works. Nasta'liq is thought to be a later development of the naskh and the earlier ta'liq script used in Iran. It quite rapidly gained popularity as a script in South Asia. The name ta'liq means "hanging", and refers to the slightly sloped quality of lines of text in this script. Letters have short vertical strokes with broad and sweeping horizontal strokes. The shapes are deep, hook-like, and have high contrast. A variant called Shikasteh was developed in the 17th century for more formal contexts. Sini is a style developed in China. The shape is greatly influenced by Chinese calligraphy, using a horsehair brush instead of the standard reed pen. A famous modern calligrapher in this tradition is Hajji Noor Deen Mi Guangjiang. === Modern === In the post-colonial era, artists working in North Africa and the Middle East transformed Arabic calligraphy into a modern art movement, known as the Hurufiyya movement. Artists working in this style use calligraphy as a graphic element within contemporary artwork. The term hurufiyya is derived from the Arabic term harf, meaning letter. 
Traditionally, the term was charged with Sufi intellectual and esoteric meaning. It is an explicit reference to a medieval system of teaching involving political theology and lettrism. In this theology, letters were seen as primordial signifiers and manipulators of the cosmos. Hurufiyya artists blended Western art concepts with an artistic identity and sensibility drawn from their own culture and heritage. These artists integrated Islamic visual traditions, especially calligraphy, and elements of modern art into syncretic contemporary compositions. Although hurufiyya artists struggled to find their own individual dialogue within the context of nationalism, they also worked towards an aesthetic that transcended national boundaries and represented a broader affiliation with an Islamic identity. The hurufiyya artistic style as a movement most likely began in North Africa c. 1955 with the work of Ibrahim el-Salahi. However, the use of calligraphy in modern artworks appears to have emerged independently in various Islamic states. Artists working in this style were often unaware of other hurufiyya artists' works, allowing for different manifestations of the style to emerge in different regions. In Sudan, for instance, artworks include both Islamic calligraphy and West African motifs. The hurufiyya art movement was not confined to painters and included artists working in a variety of media. One example is the Jordanian ceramicist Mahmoud Taha, who combined the traditional aesthetics of calligraphy with skilled craftsmanship. Although not affiliated with the hurufiyya movement, the contemporary artist Shirin Neshat integrates Arabic text into her black-and-white photography, creating contrast and duality. In Iraq, the movement was known as Al Bu'd al Wahad (or the One Dimension Group), and in Iran, it was known as the Saqqa-Khaneh movement. Western art has influenced Arabic calligraphy in other ways, with forms such as calligraffiti, which is the use of calligraphy in public art to convey politico-social messages or to ornament public buildings and spaces. Notable Islamic calligraffiti artists include: Yazan Halwani active in Lebanon, el Seed working in France and Tunisia, and Caiand A1one in Tehran. In 2017 the Sultanate of Oman unveiled the Mushaf Muscat, an interactive calligraphic Quran, following supervision and support from the Omani Ministry of Endowments and Religious Affairs, a voting member of the Unicode Consortium. == Gallery == === Kufic === === Naskh and Thuluth === === Regional varieties === === Modern examples === === Craft === == List of calligraphers == Some classical calligraphers: == See also == == References == == External links == Islamic Calligraphy Pictures Mushaf Muscat mastersofistanbul.com baradariarts.com Gallery with much calligraphy in Turkish mosque Anthology of Persian calligraphers from 10th to 20th centuries
Wikipedia/Islamic_calligraphy
The historiography of early Islam is the secular scholarly literature on the early history of Islam during the 7th century, from Muhammad's first purported revelations in 610 until the disintegration of the Rashidun Caliphate in 661, and arguably throughout the 8th century and the duration of the Umayyad Caliphate, terminating in the incipient Islamic Golden Age around the beginning of the 9th century. Muslims developed methodologies such as the "science of biography" and the "science of hadith" to evaluate the reliability of these narratives, while prominent figures like Ibn Khaldun introduced critical historiographical methods, emphasizing the importance of context and the systematic evaluation of historical data. == Primary sources == === 7th-century Islamic sources === Birmingham Quran manuscript. Between c. 568 and 645 CE Tübingen fragment. Radiocarbon dated between c. 649 and 675 CE (though written in the post-8th century Kufic script) Sanaa manuscript. Between c. 578 and 669 CE Qur'anic Mosaic on the Dome of the Rock. 692 CE The Book of Sulaym ibn Qays. The work is an early Shia hadith collection, attributed to Sulaym ibn Qays (death 694–714), and it is often recognised as the earliest such collection. There is a manuscript of the work dating to the 10th century. Some Shia scholars are dubious about the authenticity of some features of the book, and Western scholars are almost unanimously sceptical concerning the work, with most placing its initial composition in the eighth or ninth century. The work is generally considered pseudepigraphic by modern scholars. === 7th-century non-Islamic sources === There are numerous early references to Islam in non-Islamic sources. Many have been collected in historiographer Robert G. Hoyland's compilation Seeing Islam As Others Saw It. One of the first books to analyze these works was Hagarism authored by Michael Cook and Patricia Crone. Hagarism contends that looking at the early non-Islamic sources provides a much different picture of early Islamic history than the later Islamic sources do. The date of composition of some of the early non-Islamic sources is controversial. Hagarism has been widely dismissed by academics as being too conjectural in its hypothesis and biased in its sources. 634 Doctrina Iacobi 636 Fragment on the Arab Conquests 639 Sophronius, Patriarch of Jerusalem 640 Thomas the Presbyter 643 PERF 558 644 Coptic Apocalypse of Pseudo-Shenute 648 Life of Gabriel of Qartmin 650 Fredegar 655 Pope Martin I 659 Isho'yahb III of Adiabene 660 Sebeos, Bishop of the Bagratunis 660 Khuzistan Chronicle 662 Maximus the Confessor 665 Benjamin I 670 Arculf, a pilgrim 676 Synod of Giwargis I 680 George of Resh'aina 680 The Secrets of Rabbi Simon ben Yohai 680 Bundahishn 681 Trophies of Damascus 687 Athanasius of Balad, Patriarch of Antioch 687 John bar Penkaye 690 Syriac Apocalypse of Pseudo-Methodius 692 Syriac Apocalypse of Pseudo-Ephraem 694 John of Nikiu === Epigraphy === According to archaeologists Yehuda D. Nevo and Judith Koren, there are thousands of pagan and monotheist epigraphs or rock inscriptions throughout the Arabian peninsula and in the Syro-Jordanian desert immediately north, many of them dating from the 7th and 8th century. According to historian Leor Halevi, Muslim tombstones from 30-40 AH / 650-660 CE named Allah (Arabic for God) and referred to the names of the months of the Hijri calendar, but showed few other indications of Islamization. 
From 70-110 AH/690-730 CE, Muslim tombstones began to reveal deeper signs of Islamization, invoking Muhammad and quoted from the Quran. Some epigraphs found from the first century of Islam include: Analysis of a sandstone inscription found in 2008 determined that it read: "In the name of Allah/ I, Zuhayr, wrote (this) at the time 'Umar died/year four/And twenty." It is worthwhile pointing out that caliph Umar bin al-Khattāb died on the last night of the month of Dhūl-Hijjah of the year 23 AH, and was buried next day on the first day of Muharram of the new year 24 AH/644 CE. Thus the date mentioned in the inscription (above) conforms to the established and known date of the death of ʿUmar bin al-Khattāb. Jerusalem 32 - An Inscription unearthed at the south-west corner of the Ḥaram al-Sharīf in Jerusalem during excavations conducted by Professor Benjamin Mazar of the Hebrew University of Jerusalem in 1968 from 32 AH / 652 CE mentions, "In the name of Allah, the Beneficent, the Merciful...the protection of Allah and the guarantee of His Messenger... And witnessed it ʿAbd al-Raḥmān bin ʿAwf al-Zuhrī, and Abū ʿUbaydah bin al-Jarrāḥ and its writer - Muʿāwiya....the year thirty two" An Inscription, at Taymāʾ, Saudi Arabia, c. 36 AH / 656 CE reads, "I am Qays, the scribe of Abū Kutayr. Curse of Allah on [those] who murdered ʿUthmān ibn ʿAffān and [those who] have led to the killing without mercy." Greek Inscription In The Baths Of Hammat Gader, 42 AH / 662-63 CE mentions, "In the days of the servant of God Muʿāwiya (abdalla Maavia), the commander of the faithful (amēra almoumenēn) the hot baths of the people there were saved and rebuilt..." Tombstone of a woman named ʿAbāssa Bint Juraij, kept in Museum of Islamic Art Cairo, from 71 AH / 691 CE mentions,"In the name of God, the Merciful, the Compassionate. The greatest misfortune for the people of Islām (ahl al-Islām) is the death of Muḥammad the Prophet, Peace be upon him..." An Inscription at Ḥuma al-Numoor, near Ṭāʾif from 78 AH / 697-698 CE mentions, "This was written in the year the Masjid al-Ḥarām was built in the seventy eighth year." === Traditional Muslim historiography === === Religious sciences of biography, hadith, and Isnad === Muslims believe that the historical traditions first began their development in the early 7th century with the reconstruction of Muhammad's life following his death. Because narratives regarding Muhammad and his companions came from various sources and a great many contradicted each other, it was necessary to verify which sources were more reliable. In order to evaluate these sources, various methodologies were developed, such as the "science of biography", "science of hadith" and "Isnad" (chain of transmission). These methodologies were later applied to other historical figures in the Muslim world. Ilm ar-Rijal (Arabic) is the "science of biography" especially as practiced in Islam, where it was first applied to the sira, the life of the prophet of Islam, Muhammad, and then the lives of the four Rightly Guided Caliphs who expanded Islamic dominance rapidly. Since validating the sayings of Muhammad is a major study ("Isnad"), accurate biography has always been of great interest to Muslim biographers, who accordingly attempted to sort out facts from accusations, bias from evidence, etc. The earliest surviving Islamic biography is Ibn Ishaq's Sirat Rasul Allah, written in the 8th century, but known to us only from later quotes and recensions (9th–10th century). 
The "science of hadith" is the process that Muslim scholars use to evaluate hadith. The classification of Hadith into Sahih (sound), Hasan (good) and Da'if (weak) was firmly established by Ali ibn al-Madini (778 CE/161 AH – 849 CE/234 AH). Later, al-Madini's student Muhammad al-Bukhari (810–870) authored a collection that he believed contained only Sahih hadith, which is now known as the Sahih Bukhari. Al-Bukhari's historical methods of testing hadiths and isnads are seen as the beginning of the method of citation and a precursor to the scientific method. I. A. Ahmad writes: "The vagueness of ancient historians about their sources stands in stark contrast to the insistence that scholars such as Bukhari and Muslim manifested in knowing every member in a chain of transmission and examining their reliability. They published their findings, which were then subjected to additional scrutiny by future scholars for consistency with each other and the Qur'an." Other famous Muslim historians who studied the science of biography or science of hadith included Urwah ibn Zubayr (died 712), Wahb ibn Munabbih (died 728), Ibn Ishaq (died 761), al-Waqidi (745–822), Ibn Hisham (died 834), al-Maqrizi (1364–1442), and Ibn Hajar Asqalani (1372–1449), among others. (A simplified schematic of this chain-based grading idea is sketched below.) === Historiography, cultural history, and philosophy of history === The first detailed studies on the subject of historiography itself and the first critiques on historical methods appeared in the works of the Arab Muslim historian and historiographer Ibn Khaldun (1332–1406), who is regarded as the father of historiography, cultural history, and the philosophy of history, especially for his historiographical writings in the Muqaddimah (Latinized as Prolegomena) and Kitab al-Ibar (Book of Advice). His Muqaddimah also laid the groundwork for the observation of the role of state, communication, propaganda and systematic bias in history, and he discussed the rise and fall of civilizations. Franz Rosenthal wrote in the History of Muslim Historiography: "Muslim historiography has at all times been united by the closest ties with the general development of scholarship in Islam, and the position of historical knowledge in Muslim education has exercised a decisive influence upon the intellectual level of historical writing.... The Muslims achieved a definite advance beyond previous historical writing in the sociological understanding of history and the systematization of historiography." In the Muqaddimah, Ibn Khaldun warned of seven mistakes that he thought historians regularly committed. In this criticism, he approached the past as strange and in need of interpretation. The originality of Ibn Khaldun was to claim that the cultural difference of another age must govern the evaluation of relevant historical material, to distinguish the principles according to which it might be possible to attempt the evaluation, and lastly, to feel the need for experience, in addition to rational principles, in order to assess a culture of the past. Ibn Khaldun often criticized "idle superstition and uncritical acceptance of historical data." As a result, he introduced a scientific method to the study of history, which was considered something "new to his age", and he often referred to it as his "new science", now associated with historiography. His historical method also laid the groundwork for the observation of the role of state, communication, propaganda and systematic bias in history, and he is thus considered to be the "father of historiography" or the "father of the philosophy of history". 
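As a purely schematic illustration of the chain-of-transmission principle described above (that evaluating a report means examining every narrator in its isnad), the following minimal Python sketch grades a chain by its weakest link. The grade labels, the example data and the function name are invented for illustration and drastically simplify the criteria actually used by hadith scholars, which also weigh the continuity of the chain, corroboration by other chains, and the content of the report itself.

# A deliberately simplified, invented model: each narrator carries one of three
# grades, and the chain as a whole is rated no higher than its weakest member.
RANK = {"sahih": 2, "hasan": 1, "daif": 0}  # sound, good, weak

def grade_chain(narrator_grades):
    """Return the overall grade of a chain of narrators as its weakest link."""
    return min(narrator_grades, key=lambda grade: RANK[grade])

# Example: a single weakly rated transmitter lowers the grade of the whole chain.
print(grade_chain(["sahih", "sahih", "hasan"]))  # hasan
print(grade_chain(["sahih", "daif", "sahih"]))   # daif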
=== World history === Muhammad ibn Jarir al-Tabari (838–923) is known for writing a detailed and comprehensive chronicle of Mediterranean and Middle Eastern history in his History of the Prophets and Kings in 915. Abu al-Hasan 'Alī al-Mas'ūdī (896–956), known as the "Herodotus of the Arabs", was the first to combine history and scientific geography in a large-scale work, Muruj adh-dhahab wa ma'adin al-jawahir (The Meadows of Gold and Mines of Gems), a book on world history. Until the 10th century, history most often meant political and military history, but this was not so with Central Asian historian Biruni (973–1048). In his Kitab fi Tahqiq ma l'il-Hind (Researches on India), he did not record political and military history in any detail, but wrote more on India's cultural, scientific, social and religious history. Along with his Researches on India, Biruni discussed more on his idea of history in his chronological work The Chronology of the Ancient Nations. === Famous Muslim historians === Urwah ibn Zubayr (died 712) Hadith of Umar's speech of forbidding Mut'ah Ibn Shihab al-Zuhri (died 742) Hadith of Umar's speech of forbidding Mut'ah Hadith of prohibition of Mut'ah at Khaybar Ibn Ishaq (died 761) Sirah Rasul Allah Imam Malik (died 796) Al-Muwatta Al-Waqidi (745–822) Book of History and Campaigns Ali ibn al-Madini (777–850) The Book of Knowledge about the Companions Ibn Hisham (died 834) Sirah Rasul Allah Dhul-Nun al-Misri (died 859) Muhammad al-Bukhari (810–870) Sahih Bukhari Muslim b. al-Hajjaj (died 875) Sahih Muslim Ibn Majah (died 886) Sunan Ibn Majah Abu Da'ud (died 888) Sunan Abi Da'ud Al-Tirmidhi (died 892) Sunan al-Tirmidhi Abu al-Hasan 'Alī al-Mas'ūdī (896–956) Muruj adh-dhahab wa ma'adin al-jawahir (The Meadows of Gold and Mines of Gems) (947) Ibn Wahshiyya (c. 904) Nabataean Agriculture Kitab Shawq al-Mustaham Al-Nasa'i (died 915) Sunan al-Sughra Muhammad ibn Jarir al-Tabari (838–923) History of the Prophets and Kings Tafsir al-Tabari Al-Baladhuri (died 892) Kitab Futuh al-Buldan Genealogies of the Nobles Hakim al-Nishaburi (died 1014) Al-Mustadrak alaa al-Sahihain Abū Rayhān al-Bīrūnī (973–1048) Indica History of Mahmud of Ghazni and his father History of Khawarazm Abd al-Latif al-Baghdadi (13th century) Ibn Abi Zar (died 1310/1320) Rawd al-Qirtas Al-Dhahabi (1274–1348) Major History of Islam Talkhis al-Mustadrak Tadhkirat al-huffaz Al-Kamal fi ma`rifat al-rijal Ibn Kathir (1300-1373) Al-Bidāya wa-n-Nihāya Al-Sira Al-Nabawiyya Ibn Khaldun (1332–1406) Muqaddimah (1377) Kitab al-Ibar Ibn Hajar al-Asqalani (1372–1449) Fath al-Bari Tahdhib al-Tahdhib Finding the Truth in Judging the Companinons Bulugh al-Maram == Modern academic scholarship == The earliest academic scholarship on Islam in Western countries tended to involve Christian and Jewish translators and commentators. They translated the readily available Sunni texts from Arabic into European languages (including German, Italian, French, and English), then summarized and commented in a fashion that was often hostile to Islam. Notable Christian scholars included: William Muir (1819–1905) Reinhart Dozy (1820–1883) "Die Israeliten zu Mecca" (1864) David Samuel Margoliouth (1858–1940) William St. Clair Tisdall (1859–1928) Leone Caetani (1869–1935) Alphonse Mingana (1878–1937) All these scholars worked in the late 19th and early 20th centuries. Another pioneer of Islamic studies, Abraham Geiger (1810–1874), a prominent Jewish rabbi, approached Islam from that standpoint in his Was hat Mohammed aus dem Judenthume aufgenommen? 
(What did Muhammad borrow from Judaism?) (1833). Geiger's themes continued in Rabbi Abraham I. Katsh's "Judaism and the Koran" (1962). === Establishment of academic research === Other scholars, notably those in the German tradition, took a more neutral view. (The 19th-century scholar Julius Wellhausen (1844–1918) offers a prime example.) They also started, cautiously, to question the truth of the Arabic texts. They took a source-critical approach, trying to sort the Islamic texts into elements to be accepted as historically true, and elements to be discarded as polemic or as pious fiction. Such scholars included: Michael Jan de Goeje (1836–1909) Theodor Nöldeke (1836–1930) Ignaz Goldziher (1850–1921) Henri Lammens (1862–1937) Arthur Jeffery (1892–1959) H. A. R. Gibb (1895–1971) Joseph Schacht (1902–1969) Montgomery Watt (1909–2006) === The revisionist challenge === In the 1970s the Revisionist School of Islamic Studies, or what has been described as a "wave of sceptical scholars", challenged a great deal of the received wisdom in Islamic studies. They argued that the Islamic historical tradition had been greatly corrupted in transmission. They tried to correct or reconstruct the early history of Islam from other, presumably more reliable, sources—such as found coins, inscriptions, and non-Islamic sources of that era. They argue that contrary to Islamic historical tradition, "Islam was like other religions, the product of a religious evolution". The idea that there was an abrupt "discontinuity between the pre-Islamic and Islamic worlds" — i.e. between Persian and Byzantine civilization and Islamic religion, governance, culture — "strains the imagination". But if "we begin by assuming that there must have been some continuity, we need either go beyond the Islamic sources" which indicate abrupt change, or "reinterpret them". The oldest of this group was John Wansbrough (1928–2002). Wansbrough's works were widely noted, but not necessarily widely read, owing to (according to Fred Donner) his "awkward prose style, diffuse organization, and tendency to rely on suggestive implication rather than tight argument". Nonetheless, his scepticism influenced a number of younger scholars, including: Martin Hinds (1941–1988) Patricia Crone (1945–2015) Michael Cook (1940– ) In 1977 Crone and Cook published Hagarism: The Making of the Islamic World, which argued that the traditional early history of Islam is a myth, generated after the Arab conquests of Egypt, Syria, and Persia to give a solid ideological foundation to the new Arab regimes in those lands. Hagarism suggests that the Qur'an was composed later than the traditional narrative tells us, and that the Arab conquests may have been the cause, rather than the consequence, of Islam. The main evidence adduced for this thesis consisted of contemporary non-Muslim sources recording many early Islamic events. If such events could not be supported by outside evidence, then (according to Crone and Cook) they should be dismissed as myth. Crone defended the use of non-Muslim sources saying that "of course these sources are hostile [to the conquering Muslims] and from a classical Islamic view they have simply got everything wrong; but unless we are willing to entertain the notion of an all-pervading literary conspiracy between the non-Muslim peoples of the Middle East, the crucial point remains that they have got things wrong on very much the same points." 
Crone and Cook's more recent work has involved intense scrutiny of early Islamic sources, but not their total rejection. (See, for instance, Crone's 1987 publications, Roman, Provincial, and Islamic Law and Meccan Trade and the Rise of Islam, both of which assume the standard outline of early Islamic history while questioning certain aspects of it; also Cook's 2001 Commanding Right and Forbidding Wrong in Islamic Thought, which also cites early Islamic sources as authoritative.) Both Crone and Cook have later suggested that the central thesis of their book "Hagarism: The Making of the Islamic World" was mistaken because the evidence they had to support the thesis was not sufficient or internally consistent enough. Crone has suggested that the book was "a graduate essay" and "a hypothesis," not "a conclusive finding." In 1972 construction workers discovered a cache of ancient Qur'ans – commonly known as the Sana'a manuscripts – in a mosque in Sana'a, Yemen. The German scholar Gerd R. Puin has been investigating these Qur'an fragments for years. His research team made 35,000 microfilm photographs of the manuscripts, which he dated to the early part of the 8th century. Puin has not published the entirety of his work, but has noted unconventional verse orderings, minor textual variations, and rare styles of orthography. He has also suggested that some of the parchments were palimpsests which had been reused. Puin believed that this implied an evolving text as opposed to a fixed one. Karl-Heinz Ohlig has also researched the Christian/Jewish roots of the Qur'an and its related texts. He sees the name Muhammad itself ("the blessed", as in Benedictus qui venit) as part of that tradition. In their study of the traditional Islamic accounts of the early conquest of different cities—Damascus and Caesarea in Syria, Babylon/al-Fustat and Alexandria in Egypt, Tustar in Khuzistan and Cordoba in Spain—scholars Albrecht Noth and Lawrence Conrad find a suspicious pattern whereby the cities "are all described as having fallen into the hands of the Muslims in precisely the same fashion". There is a "traitor who, ... points out a weak spot in the city's fortification to the Muslim besiegers; a celebration in the city which diverts the attention of the besieged; then a few assault troops who scale the walls, ... a shout of Allahu akbar! ... from the assault troops as a sign that they have entered the town; the opening of one of the gates from inside, and the onslaught of the entire army." They conclude these accounts cannot be "the reporting of history" but are instead stereotyped tales with little historical value. Contemporary scholars have tended to use the histories rather than the hadith, and to analyze the histories in terms of the tribal and political affiliations of the narrators (if that can be established), thus making it easier to guess in which direction the material might have been slanted. Notable scholars include: Fred M. Donner Wilferd Madelung Gerald Hawting Jonathan Berkey Andrew Rippin An alternative post-revisionist approach has made use of hadith of uncertain authenticity to tell a history of early Islam after the death of Muhammad. Here the key has been to analyze hadith as collective memories that shaped the culture and society of urban Muslims in the late seventh and eighth centuries CE. Muhammad's Grave: Death Rites and the Making of Islamic Society by Leor Halevi is an example of this approach. 
== Scholars combining traditional and academic scholarship == A few scholars have attempted to bridge the divide between Islamic and Western-style secular scholarship. Joel Hayward Sherman Jackson Fazlur Rahman They have completed both Islamic and Western academic training. == See also == Succession to Muhammad Timeline of early Islamic history Timeline of 7th-century Muslim history Timeline of 8th-century Muslim history List of biographies of Muhammad Early Muslim conquests Classical Islam == References == == Bibliography == Yılmaz, Halil İbrahim; İzgi, Mahmut Cihat; Erbay, Enes Ensar; Şenel, Samet (2024). "Studying early Islam in the third millennium: a bibliometric analysis". Humanities and Social Sciences Communications. 11 (1): Article 1521. doi:10.1057/s41599-024-04058-2. Charles, Robert H. (2007) [1916]. The Chronicle of John, Bishop of Nikiu: Translated from Zotenberg's Ethiopic Text. Merchantville, NJ: Evolution Publishing. ISBN 9781889758879. Donner, Fred (1998). Narratives of Islamic Origins: The Beginnings of Islamic Historical Writing. Darwin Press. ISBN 978-0878501274. Hoyland, Robert (1997). Seeing Islam as Others Saw It: A Survey and Evaluation of Christian, Jewish and Zoroastrian Writings on Early Islam. Darwin Press. ISBN 978-0878501250. Madelung, Wilferd (1997). The Succession to Muhammad: A Study of the Early Caliphate. Cambridge University Press. ISBN 0-521-64696-0. Vansina, Jan (1985). Oral Tradition as History. University of Wisconsin Press. ISBN 978-0299102142. == External links == Muslim historiography an article by online Britannica
Wikipedia/Historiography_of_early_Islam
Muslim scholars have developed a spectrum of viewpoints on science within the context of Islam. Scientists of medieval Muslim civilization (e.g. Ibn al-Haytham) contributed to new discoveries in science. From the eighth to fifteenth century, Muslim mathematicians and astronomers furthered the development of mathematics. Concerns have been raised about the lack of scientific literacy in parts of the modern Muslim world. Islamic scientific achievements encompassed a wide range of subject areas, especially medicine, mathematics, astronomy and agriculture, as well as physics, economics, engineering and optics. Aside from these contributions, some Muslim writers have made claims that the Quran made prescient statements about scientific phenomena as regards the structure of the embryo, the Solar System, and the development of the universe. == Terminology == According to Toby Huff, there is no true word for science in Arabic as commonly defined in English and other languages. In Arabic, "science" can simply mean different forms of knowledge. This view has been criticized by other scholars. For example, according to Muzaffar Iqbal, Huff's framework of inquiry "is based on the synthetic model of Robert Merton who had made no use of any Islamic sources or concepts dealing with the theory of knowledge or social organization". Each branch of science has its own name, but all branches of science have a common prefix, ilm. For example, physics is more literally translated from Arabic as "the science of nature", علم الطبيعة ‘ilm aṭ-ṭabī‘a; arithmetic as the "science of accounts", علم الحساب ‘ilm al-hisab. The religious study of Islam (through Islamic sciences like Quranic exegesis, hadith studies, etc.) is called العلم الديني "science of religion" (al-ilm ad-dinniy), using the same word for science as "the science of nature". According to the Hans Wehr Dictionary of Arabic, while علم ‘ilm is defined as "knowledge, learning, lore," etc., the word for "science" is the plural form علوم ‘ulūm. (So, for example, كلية العلوم kullīyat al-‘ulūm, the Faculty of Science of the Egyptian University, is literally "the Faculty of Sciences ...") == History == === Classical science in the Muslim world === One of the earliest eras of extensive scientific activity in the Islamic world falls between the eighth and sixteenth centuries, a period known as the Islamic Golden Age. Its science is also known as "Arabic science" because the majority of its texts were translated from Greek into Arabic. The mass translation movement that occurred in the ninth century allowed for the integration of science into the Islamic world. The teachings of the Greeks were now translated and their scientific knowledge passed on to the Arab world. Despite these conditions, not all scientists during this period were Muslim or Arab, as there were a number of notable non-Arab scientists (most notably Persians), as well as some non-Muslim scientists, who contributed to scientific studies in the Muslim world. A number of modern scholars such as Fielding H. Garrison, Sultan Bashir Mahmood, and Hossein Nasr consider modern science and the scientific method to have been greatly inspired by Muslim scientists who introduced a modern empirical, experimental and quantitative approach to scientific inquiry. Certain advances made by medieval Muslim astronomers, geographers and mathematicians were motivated by problems presented in Islamic scripture, such as Al-Khwarizmi's (c. 
780–850) development of algebra in order to solve the Islamic inheritance laws, and developments in astronomy, geography, spherical geometry and spherical trigonometry in order to determine the direction of the Qibla, the times of Salah prayers, and the dates of the Islamic calendar (a modern restatement of the qibla calculation is sketched at the end of this subsection). These new studies of mathematics and science allowed the Islamic world to move ahead of the rest of the world: "With these inspirations at work, Muslim mathematicians and astronomers contributed significantly to the development of just about every domain of mathematics between the eighth and fifteenth centuries." The increased use of dissection in Islamic medicine during the 12th and 13th centuries was influenced by the writings of the Islamic theologian Al-Ghazali, who encouraged the study of anatomy and use of dissections as a method of gaining knowledge of God's creation. In al-Bukhari's and Muslim's collections of sahih hadith it is said: "There is no disease that God has created, except that He also has created its treatment." (Bukhari 7-71:582). This culminated in the work of Ibn al-Nafis (1213–1288), who discovered the pulmonary circulation in 1242 and used his discovery as evidence for the orthodox Islamic doctrine of bodily resurrection. Ibn al-Nafis also used Islamic scripture as justification for his rejection of wine as self-medication. Criticisms against alchemy and astrology were also motivated by religion, as orthodox Islamic theologians viewed the beliefs of alchemists and astrologers as being superstitious. Fakhr al-Din al-Razi (1149–1209), in dealing with his conception of physics and the physical world in his Matalib, discusses Islamic cosmology, criticizes the Aristotelian notion of the Earth's centrality within the universe, and "explores the notion of the existence of a multiverse in the context of his commentary," based on the Quranic verse, "All praise belongs to God, Lord of the Worlds." He raises the question of whether the term "worlds" in this verse refers to "multiple worlds within this single universe or cosmos, or to many other universes or a multiverse beyond this known universe." On the basis of this verse, he argues that God has created more than "a thousand thousand worlds (alfa alfi 'awalim) beyond this world such that each one of those worlds be bigger and more massive than this world as well as having the like of what this world has." Ali Kuşçu's (1403–1474) support for the Earth's rotation and his rejection of Aristotelian cosmology (which advocates a stationary Earth) was motivated by religious opposition to Aristotle by orthodox Islamic theologians, such as Al-Ghazali. According to many historians, science in the Muslim civilization flourished during the Middle Ages, but began declining at some time around the 14th to 16th centuries. At least some scholars blame this on the "rise of a clerical faction which froze this same science and withered its progress." Examples of conflicts with prevailing interpretations of Islam and science – or at least the fruits of science – thereafter include the demolition of Taqi al-Din's great Constantinople observatory in Galata, "comparable in its technical equipment and its specialist personnel with that of his celebrated contemporary, the Danish astronomer Tycho Brahe." But while Brahe's observatory "opened the way to a vast new development of astronomical science," Taqi al-Din's was demolished by a squad of Janissaries, "by order of the sultan, on the recommendation of the Chief Mufti," sometime after 1577 CE. 
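To make the qibla problem mentioned above concrete, the following minimal Python sketch computes the direction of prayer as the initial great-circle bearing from an observer's location toward the Kaaba, which is the standard spherical-trigonometry formulation of the problem. It is offered only as a modern illustration, not as a reproduction of any medieval astronomer's method; the Kaaba coordinates used are commonly cited approximate values, and the function name is an invented placeholder.

import math

def qibla_bearing(lat_deg, lon_deg):
    # Initial great-circle bearing (degrees clockwise from true north) from the
    # given location toward the Kaaba; the coordinates below are approximate
    # values assumed for illustration.
    kaaba_lat, kaaba_lon = math.radians(21.4225), math.radians(39.8262)
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    dlon = kaaba_lon - lon
    # Standard spherical-trigonometry formula for the initial bearing.
    x = math.sin(dlon)
    y = math.cos(lat) * math.tan(kaaba_lat) - math.sin(lat) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360

# Example: from Istanbul (about 41.01 N, 28.98 E) the result is roughly 151.6 degrees.
print(round(qibla_bearing(41.01, 28.98), 1))

Using atan2 keeps the bearing in the correct quadrant regardless of whether Mecca lies east or west of the observer.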
==== Science and religious practice ==== Scientific methods have been historically applied to find solutions to the technical exigencies of Islamic religious rituals, which is a characteristic of Islam that sets it apart from other religions. These ritual considerations include a lunar calendar, definition of prayer times based on the position of the sun, and a direction of prayer set at a specific location. Scientific methods have also been applied to Islamic laws governing the distribution of inheritances and to Islamic decorative arts. Some of these problems were tackled by both medieval scientists of the Islamic world and scholars of Islamic law. Though these two groups generally used different methods, there is little evidence of serious controversy between them on these subjects, with the exception of the criticism leveled by religious scholars at the methods of astronomy due to its association with astrology. === Modern science in the Muslim world === At the beginning of the nineteenth century, modern science arrived in the Muslim world, bringing with it "the transfer of various philosophical currents entangled with science" including schools of thought such as Positivism and Darwinism. This had a profound effect on the minds of Muslim scientists and intellectuals and also had a noticeable impact on some Islamic theological doctrines. While the majority of Muslim scientists tried to adapt their understanding of Islam to the findings of modern science, some rejected modern science as "corrupt foreign thought, considering it incompatible with Islamic teachings", others advocated for the wholesale replacement of religious worldviews with a scientific worldview, and some Muslim philosophers suggested separating the findings of modern science from its philosophical attachments. Among the majority of Muslim thinkers, a key justification for the use of modern science was the benefits that modern knowledge clearly brought to society. Others concluded that science could ultimately be reconciled with faith. A further apologetic trend saw the emergence of theories that scientific discoveries had been predicted in the Quran and Islamic tradition, thereby internalizing science within religion. According to a 2013 survey by the Pew Research Center asking Muslims in different Muslim-majority countries in the Middle East and North Africa whether there was a conflict between science and religion, few agreed in Morocco (18%), Egypt (16%), Iraq (15%), Jordan (15%) and the Palestinian territories (14%). More agreed in Albania (57%), Turkey (40%), Lebanon (53%) and Tunisia (42%). The poll also found variance in the degree to which Muslim populations in some countries are at odds with current scientific theories about biological evolution and the origin of man. In only four of the 22 countries surveyed did at least 50% of the Muslims surveyed reject evolution (Iraq 67%, Tajikistan 55%, Indonesia 55%, Afghanistan 62%). Countries with relatively low rates of disbelief in evolution (i.e. agreement with the statement "humans and other living things have always existed in present form") include Lebanon (21%), Albania (24%), and Kazakhstan (16%). As of 2018, three Muslim scientists have won a Nobel Prize for science (Abdus Salam from Pakistan in physics, Ahmed Zewail from Egypt and Aziz Sancar from Turkey in chemistry). 
According to Mustafa Akyol, the relative lack of Muslim Nobel laureates in sciences per capita can be attributed to more insular interpretations of the religion than in the golden age of Islamic discovery and development, when Islamic society and intellectuals were more open to foreign ideas. Ahmed Zewail, who won the 1999 Nobel Prize in Chemistry and is known as the father of femtochemistry, said that "There is nothing fundamental in Islam against science." However, the Indonesian Islamic scholar Harun Nasution said that the stagnation and decline of Islamic civilization in the fields of science and technology was caused by none other than the type of theology that was widely accepted in Islamic society. He blamed Ash'arite theology, which is widely accepted by Muslim society, as the cause of scientific stagnation in the Muslim world. According to him, Ash'arite teachings prioritize occasionalism and fatalism, which create a distance between science and Muslim society. On the contrary, he advocated the revival of Mu'tazila thought, known for its rationality, as a potential solution for scientific revival in Muslim society. ==== Conflict with religion ==== The conflicts between Islam and science can become quite complicated. It has been argued that "Muslims must be able to maintain the traditional Islamic intellectual space for the legitimate continuation of the Islamic view of the nature of reality to which Islamic ethics corresponds, without denying the legitimacy of modern science within their own confines". While the natural sciences have not been "fully institutionalized" in predominantly Islamic countries, engineering is considered an applied science that can function in conjunction with religion, and it is one of the most popular career choices of Middle Eastern students. Islamic academic Abu Ammaar Yasir Qadhi has noted that important technological innovations—once "considered to be bizarre, strange, haram (religiously forbidden), bidʻah (innovation), against the tradition" in the Muslim world—were later accepted as "standard". An issue for accepting scientific knowledge arises from its supposed origin: for Muslims, absolute truth comes from God, not from the flawed human pursuit of knowledge. Islamic values hold that "knowledge of reality [is] based not on reason alone, but also on revelation and inspiration". A passage in the Quran encourages congruency with the truth attained by modern science: "hence they should be both in agreement and concordant with the findings of modern science". This passage was cited more often during the period when "modern science" was producing many new discoveries. However, many scientific thinkers throughout the Islamic world still take this passage to heart when it comes to their work. There are also some who strongly believe that modern viewpoints, such as social Darwinism, challenged all medieval worldviews, including that of Islam. Some did not even want to be affiliated with modern science, and saw it as merely an outside view of Islam. Many followers tend to see problems regarding the integration of Islam with science, and there are many who still stand by the viewpoints of Ahmad ibn Hanbal, that the pursuit of science is still the pursuit of knowledge: One of the main reasons the Muslim world was held behind when Europe continued its ascent was that the printing press was banned. 
And there was a time when the Ottoman Sultan issued a decree that anybody caught with a printing press shall be executed for heresy, and anybody who owns a printed book shall basically be thrown into jail. And for 350 years when Europe is printing, when [René] Descartes is printing, when Galileo is printing, when [Isaac] Newton is printing, the only way you can get a copy of any book in the Arab world is to go and hand write it yourself. The reluctance of the Muslim world to embrace science is manifest in the disproportionately small amount of scientific output, as measured by citations of articles published in internationally circulating science journals, annual expenditures on research and development, and numbers of research scientists and engineers. Concerns have been raised that the contemporary Muslim world suffers from scientific illiteracy. Skepticism of science among some Muslims is reflected in issues such as the resistance in Muslim northern Nigeria to polio inoculation, which some believe is "an imaginary thing created in the West or it is a ploy to get us to submit to this evil agenda." In Pakistan, a small number of post-graduate physics students have been known to blame earthquakes on "sinfulness, moral laxity, deviation from the Islamic true path", while "only a couple of muffled voices supported the scientific view that earthquakes are a natural phenomenon unaffected by human activity." In the early twentieth century, Iranian Shia Ulama forbade the learning of foreign languages and the dissection of human bodies in the medical school in Iran. On the other hand, contrary to the current cliché concerning the opposition of the Imamate Shiite Ulama to modern astronomy in the nineteenth century, there is no evidence showing their literal or explicit objection to modern astronomy based on Islamic doctrines. They showed themselves to be advocates of modern astronomy with the publication of Hibat al-Dīn Shahristānī's al-Islām wa al-Hayʾa (Islam and Astronomy) in 1910. After that, Shia ulama were not only not against modern astronomy but also believed that the Quran and Islamic hadiths accept it. During the twentieth century, the Islamic world's introduction to modern science was facilitated by the expansion of educational systems. For example, in 1900 and 1925 respectively, Istanbul and Cairo opened universities. In these universities, new concerns emerged among the students. One major issue was naturalism and social Darwinism, which challenged some beliefs. On the other hand, there were efforts to harmonize science with Islam. An example is the nineteenth-century study of Kudsî of Baku, who made connections between his discoveries in astronomy and what he knew from the Quran. These included "the creation of the universe and the beginning of life; in the second part, with doomsday and the end of the world; and the third was the resurrection after death". ===== Late Ottoman Empire and Turkey ===== Ahmet Hamdi Akseki, supported by the official institute for religious affairs in Turkey (Diyanet), published various articles about the creation of humanity. He emphasizes that the purpose of the Quran is to offer parables and moral lessons, not to offer scientific data or accounts of history. To demonstrate the ambiguity of the Islamic tradition with regard to the Earth's age, he brings forth several narratives embedded in Islamic exegesis. First, he recounts several narratives about creatures preceding the creation of Adam. Such species include hinn, binn, timm, rimm. 
A second narrative adds the belief that, before God created Adam, thirty previous races were created, each separated by a gap of a thousand years. During that time, the earth had been empty, until a new creation began to be formed. Lastly, he offers a dialogue between the Andalusian scholar ibn Arabi and a strange man: During his visit to Mecca, he came across a person in strange clothes. When he asked the identity of the strange man, the man said: "I am from your ancient ancestors. I died forty thousand years ago!" Bewildered by this response, Ibn al-ʿArabī asked, "What are you talking about? Books narrate that Adam was created about six thousand years ago." The man replied "What Adam are you talking about? Beware of the fact that there were a hundred thousand Adams before Adam, your ancestor." The latter, according to Akseki, underlines that the idea of Young Earth creationism is a challenge belonging to the Judeo-Christian tradition. He admits that material supporting a young earth does exist among Muslim commentators, as in the case of ibn Arabi himself, but such material is used as supplementary matter borrowed from Jewish sources (Isra'iliyyat) and is not part of the Islamic canon. Süleyman Ateş, who was president of the Directorate of Religious Affairs in 1976–1978 and issued a tafsir (interpretation of the Quran), employed arguments similar to those of Akseki, while using references to Quranic verses to support them. Pointing to 32:7, which states "He began the creation of man from clay.", he points out that humanity was not, in contrast to the Biblical interpretation, created in an instant, but emerged as a process. To further support his argument as being in line with Islamic tradition, rather than a secular one, he looked to the Islamic heritage of earlier scholars evoking the idea of an evolutionary process, such as the 9th-century theologian Jahiz and the 18th-century Turkish scholar İbrahim Hakkı Erzurumi, both utilized as references for pre-Darwinian accounts of evolution. Hasan Karacadağ, in his movie Semum, features the trope of conflict between science and religion. When the victim of the movie (Canan) is possessed by a demon, her husband brings her to a psychiatrist (Oğuz) and later to an exorcist (Hoca). A discussion starts between them about whose practice is more beneficial in helping Canan. While the psychiatrist symbolizes an anti-theistic attitude, the Hoca represents a most faithful believer. The psychiatrist calls the Hoca a charlatan and dismisses his belief system entirely, while the Hoca affirms the validity of science but asserts that science is limited to the knowable world, and thus impotent in supernatural matters (i.e. the "unknown"). The Hoca, by his reconciling approach, is depicted as superior when the demonic cause of Canan's illness is shown. Yet the film makes clear that the psychiatrist does not fail because he is a scientist, but because of his anti-theism. Exercised properly, science and religion would go hand in hand. When the director was asked if he himself believes in the existence of demons, he said that in such a "chaotic space" it is unlikely that humans are alone. His popular cultural depiction of demons might be seen as a representation of what lies beyond the limits of science, Islam being a tool to guide people to the unknown and unexplainable. 
===== Islamist movements ===== Islamist author Muhammad Qutb (brother, and promoter, of Sayyid Qutb) in his influential book Islam, the misunderstood religion, states that "science is a powerful instrument" to increase human knowledge but has become a "corrupting influence on men's thoughts and feelings" for much of the world's population, steering them away from "the Right Path". As an example, he gives the scientific community's disapproval of claims of telepathy, even though, he claims, it is documented in hadith that Caliph Umar prevented commander Sariah from being ambushed by communicating with him telepathically. Muslim scientists and scholars have subsequently developed a spectrum of viewpoints on the place of scientific learning within the context of Islam. Until the 1960s, Saudi Sunni ulama opposed any attempts at modernisation, considering them innovations (bidah). They opposed the spread of electricity, radios, and TVs. As recently as 2015, Sheikh Bandar al-Khaibari rejected the fact that the Earth orbits the Sun, instead claiming that the Earth is "stationary and does not move". In Afghanistan, the Sunni Taliban have turned secular schools into Islamic madrasas, prioritizing religious studies over material science. == Science and the Quran == Many Muslims agree that doing science is an act of religious merit, even a collective duty of the Muslim community. According to M. Shamsher Ali, there are around 750 verses in the Quran dealing with natural phenomena. According to the Encyclopedia of the Quran, many verses of the Quran ask mankind to study nature, and this has been interpreted as an encouragement for scientific inquiry and the investigation of the truth. Examples include: "Travel throughout the earth and see how He brings life into being" (Q29:20), "Behold in the creation of the heavens and the earth, and the alternation of night and day, there are indeed signs for men of understanding ..." (Q3:190) Mohammad Hashim Kamali has stated that "scientific observation, experimental knowledge and rationality" are the primary tools with which humanity can achieve the goals laid out for it in the Quran. Ziauddin Sardar argues that Muslims developed the foundations of modern science by "highlighting the repeated calls of the Quran to observe and reflect upon natural phenomenon". The physicist Abdus Salam believed there is no contradiction between Islam and the discoveries that science allows humanity to make about nature and the universe, and that the Quran and the Islamic spirit of study and rational reflection were the source of extraordinary civilizational development. Salam highlights, in particular, the work of Ibn al-Haytham and Al-Biruni as the pioneers of empiricism who introduced the experimental approach, breaking away from Aristotle's influence and thus giving birth to modern science. Salam differentiated between metaphysics and physics, and advised against empirically probing certain matters on which "physics is silent and will remain so," such as the doctrine of "creation from nothing", which in Salam's view is outside the limits of science and thus "gives way" to religious considerations. Islam has its own world view system including beliefs about "ultimate reality, epistemology, ontology, ethics, purpose, etc." according to Mehdi Golshani. Toshihiko Izutsu writes that in Islam, nature is not seen as something separate but as an integral part of a holistic outlook on God, humanity, the world and the cosmos. 
These links imply a sacred aspect to Muslims' pursuit of scientific knowledge, as nature itself is viewed in the Quran as a compilation of signs pointing to the Divine. It was with this understanding that the pursuit of science, especially prior to the colonization of the Muslim world, was respected in Islamic civilizations. The astrophysicist Nidhal Guessoum argues that the Quran has developed "the concept of knowledge" that encourages scientific discovery. He writes: The Qur'an draws attention to the danger of conjecturing without evidence (And follow not that of which you have not the (certain) knowledge of... 17:36) and in several different verses asks Muslims to require proofs (Say: Bring your proof if you are truthful 2:111), both in matters of theological belief and in natural science. Guessoum cites Ghaleb Hasan, for whom the definition of "proof" according to the Quran is "clear and strong... convincing evidence or argument." Also, such a proof cannot rely on an argument from authority, citing verse 5:104. Lastly, both assertions and rejections require a proof, according to verse 4:174. Ismail al-Faruqi and Taha Jabir Alalwani are of the view that any reawakening of the Muslim civilization must start with the Quran; however, the biggest obstacle on this route is the "centuries old heritage of tafseer (exegesis) and other classical disciplines", which inhibits a "universal, epistemological and systematic conception" of the Quran's message. The philosopher Muhammad Iqbal considered the Quran's methodology and epistemology to be empirical and rational. Guessoum also suggests scientific knowledge may influence Quranic readings, stating that "for a long time Muslims believed, on the basis of their literal understanding of some Qur'anic verses, that the gender of an unborn baby is only known to God, and the place and time of death of each one of us is likewise al-Ghaib [unknown/unseen]. Such literal understandings, when confronted with modern scientific (medical) knowledge, led many Muslims to realize that first-degree readings of the Quran can lead to contradictions and predicaments." Islamists such as Sayyid Qutb argue that since "Islam appointed" Muslims "as representatives of God and made them responsible for learning all the sciences," science cannot but prosper in a society of true Islam. (However, since the governments of Muslim-majority countries have failed to follow sharia law in its completeness, true Islam has not prevailed, and this explains the failure of science and many other things in the Muslim world, according to Qutb.) Others claim traditional interpretations of Islam are not compatible with the development of science. Author Rodney Stark argues that Islam's lag behind the West in scientific advancement after (roughly) 1500 CE was due to opposition by traditional ulema to efforts to formulate systematic explanations of natural phenomena with "natural laws." He claims that they believed such laws were blasphemous because they limit "God's freedom to act" as He wishes, a principle enshrined in aya 14:4: "God sendeth whom He will astray, and guideth whom He will," which (they believed) applied to all of creation, not just humanity. Taner Edis wrote An Illusion of Harmony: Science and Religion in Islam. Edis worries that secularism in Turkey, one of the most westernized Muslim nations, is on its way out; he points out that the population of Turkey rejects evolution by a large majority. 
To Edis, many Muslims appreciate technology and respect the role that science plays in its creation. As a result, he says there is a great deal of Islamic pseudoscience attempting to reconcile this respect with other respected religious beliefs. Edis maintains that the motivation to read modern scientific truths into holy books is also stronger for Muslims than Christians. This is because, according to Edis, true criticism of the Quran is almost non-existent in the Muslim world. While Christianity is less prone to see its Holy Book as the direct word of God, fewer Muslims will compromise on this idea – causing them to believe that scientific truths simply must appear in the Quran. However, Edis argues that there are endless examples of scientific discoveries that could be read into the Bible or Quran if one wished to. Edis qualifies that Muslim thought certainly cannot be understood by looking at the Quran alone; cultural and political factors play large roles. === Miracle literature (Tafsir'ilmi) === Starting in the 1970s and 1980s, the idea of the presence of scientific evidence in the Quran became popularized as ijaz (miracle) literature. The genre of interpreting the Quran as revealing scientific truths before mankind's discovery of them is also known as Tafsir'ilmi. This approach gained much popularity through French author Maurice Bucaille, whose works have been distributed through Muslim bookstores and websites, and discussed on television programs by Islamic preachers. The movement contends that the Quran abounds with "scientific facts" that appeared centuries before their discovery by science and which "could not have been known" by people at the time. By asserting the presence of scientific truths stemming from the Quran, it also overlaps with Islamic creationism. This approach has been rejected by orthodox theologians, who argue that the purpose of the Quran is religious guidance, not the proposal of scientific theories. According to author Ziauddin Sardar, the ijaz movement has created a "global craze in Muslim societies", and has developed into an industry that is "widespread and well-funded". Individuals connected with the movement include Abdul Majeed al-Zindani, who established the Commission on Scientific Signs in the Quran and Sunnah; Zakir Naik, the Indian televangelist; and Adnan Oktar, the Turkish creationist. Enthusiasts of the movement argue that among the [scientific] miracles found in the Quran are "everything, from relativity, quantum mechanics, Big Bang theory, black holes and pulsars, genetics, embryology, modern geology, thermodynamics, even the laser and hydrogen fuel cells". Zafar Ishaq Ansari terms the modern trend of claiming the identification of "scientific truths" in the Quran the "scientific exegesis" of the holy book. An example is the verse: "So verily I swear by the stars that run and hide ..." (Q81:15–16), which proponents claim demonstrates the Quran's knowledge of the existence of black holes; or: "[I swear by] the Moon in her fullness that ye shall journey on from stage to stage" (Q84:18–19), which refers, according to proponents, to human flight into outer space. 
==== Embryology in the Quran ==== One claim that has received widespread attention and has even been the subject of a medical school textbook widely used in the Muslim world is that several Quranic verses foretell the study of embryology and "provide a detailed description of the significant events in human development from the stages of gametes and conception until the full term pregnancy and delivery or even post partum." In 1983, an authority on embryology, Keith L. Moore, had a special edition published of his widely used textbook on embryology (The Developing Human: Clinically Oriented Embryology), co-authored by a leader of the scientific miracles movement, Abdul Majeed al-Zindani. This edition, The Developing Human: Clinically Oriented Embryology with Islamic Additions, interspersed pages of "embryology-related Quranic verse and hadith" by al-Zindani into Moore's original work. At least one Muslim-born physician (Ali A. Rizvi) studying the textbook of Moore and al-Zindani found himself "confused" by "why Moore was so 'astonished by'" the Quranic references, which Rizvi found "vague", and insofar as they were specific, preceded by the observations of Aristotle and the Ayurveda, and/or easily explained by "common sense". Some of the main verses are: (Q39:6) God creates us "in the womb of your mothers, creation after creation, within three darknesses," or "three veils of darkness", the "three" allegedly referring to the abdominal wall, the wall of the uterus, and the chorioamniotic membrane; verse Q32:9, said to identify the order of organ development of the embryo—ears, then eyes, then heart; and verses referring to the "sperm drop" (an-nutfa) and to al-3alaqa (translated as "clinging clot" or "leech like structure") in (Q23:13-14), and to the "sperm-drop mixture" (an-nuṭfatin amshaajin) in (Q76:2). The miraculousness of these verses is said to come from the resemblance of the human embryo to a leech, and from the claim that "sperm-drop mixture" refers to a mixture of sperm and egg. (Q53:45-46) "And that He creates the two mates—the male and female—from a sperm-drop when it is emitted," allegedly refers to the fact that the sperm contributes an X or a Y chromosome, which determines the sex of the baby. However, the "three darknesses" or three walls (Q39:6) could easily have been observed by cutting open pregnant mammals, something done by human beings before the revelation of the Quran ("dissections of human cadavers by Greek scientists have been documented as early as the third century BCE"). Contrary to the claims made about Q32:9, ears do not develop before eyes, which do not develop before the heart. The heart begins development "at about 20 days, and the ears and eyes begin to develop simultaneously in the fourth week". However, the verse itself does not mention or claim the order in which the embryo forms in the womb: "Then He proportioned him and breathed into him from His [created] soul and made for you hearing and vision and hearts; little are you grateful." The embryo may resemble a leech (ala the "clinging clot" or "leech like structure" of al-3alaqa in Q23:13-14), but it resembles many things during the eight-week course of its development—none for very long. 
While it is generally agreed that the Quran mentions sperm (an-nutfa, in several verses), the "sperm-drop mixture" (an-nuṭfatin amshaajin in Q76:2) as a mixture of sperm and egg is more problematic, as nowhere does the Quran mention the egg cell or ovum—a rather glaring omission in any description of embryo development, as the ovum is the source of more than half the genetic material of the embryo. With mention of male sperm but not the female egg in the Quran, it seems likely that Q53:45-46—"And that He creates the two mates, the male and female, from a sperm-drop when it is emitted"—refers to the erroneous idea that all genetic material for offspring comes from the male and that the mother simply provides a womb for the developing baby (as opposed to the sperm contributing an X or a Y chromosome that determines the sex of the baby). This idea originated with the ancient Greeks and was popular before modern biology developed. In 2002, Moore declined to be interviewed by The Wall Street Journal on the subject of his work on Islam, stating that "it's been ten or eleven years since I was involved in the Qur'an." Some researchers have proposed an evolutionary reading of the verses related to the creation of man in the Qur'an and then considered these meanings as examples of scientific miracles. ==== Criticism ==== Critics argue that verses which proponents say explain modern scientific facts, about subjects such as biology, the origin and history of the Earth, and the evolution of human life, contain fallacies and are unscientific. As of 2008, both Muslims and non-Muslims have disputed whether there actually are "scientific miracles" in the Quran. Muslim critics of the movement include Indian Islamic theologian Maulana Ashraf Ali Thanwi, Muslim historian Syed Nomanul Haq, Muzaffar Iqbal, president of the Center for Islam and Science in Alberta, Canada, and Egyptian Muslim scholar Khaled Montaser. Pakistani theoretical physicist Pervez Hoodbhoy criticizes these claims and says there is no explanation of why many modern scientific discoveries, such as quantum mechanics and molecular genetics, were made elsewhere. Giving the example of the roundness of the earth and the invention of the television, a Christian site ("Evidence for God's Unchanging World") complains that the "scientific facts" are too vague to be miraculous. Critics argue that while it is generally agreed the Quran contains many verses proclaiming the wonders of nature, it requires "considerable mental gymnastics and distortions to find scientific facts or theories in these verses" (Ziauddin Sardar); that the Quran is the source of guidance in right faith (iman) and righteous action (alladhina amanu wa amilu l-salihat), but the idea that it contained "all knowledge, including scientific" knowledge has not been a mainstream view among Muslim scholarship (Zafar Ishaq Ansari); and that "Science is ever-changing ... the Copernican revolution overturning Ptolemaic models of the universe to Einstein's general relativity overshadowing Newtonian mechanisms", so while "Science is probabilistic in nature", the Quran deals in "absolute certainty" (Ali Talib). Nidhal Guessoum says that the central issue in the Islam-science discourse is the hierarchical positioning, or place, of the Quran in the scientific enterprise. Mustansir Mir argues for a proper approach to the Quran with regard to science that allows multiple and multi-level interpretations. 
He writes: From a linguistic standpoint, it is quite possible for a word, phrase or statement to have more than one layer of meaning, such that one layer would make sense to one audience in one age and another layer of meaning would, without negating the first, be meaningful to another audience in a subsequent age. == See also == == References == === Notes === === Citations === == Further reading == Huff, Toby. The Rise of Early Modern Science: Islam, China, and the West (Cambridge University Press, 1993). Nasr, Seyyed Hossein. "Islam, Muslims, and modern technology." Islam and Science 3.2 (2005): 109–126. online Stearns, Justin. "The Legal Status of Science in the Muslim World in the Early Modern Period: An Initial Consideration of Fatwās from Three Maghribī Sources." in The Islamic Scholarly Tradition (Brill, 2011) pp. 265–290. online == External links == Islam & Science Science and the Islamic world—The quest for rapprochement by Pervez Hoodbhoy. Islamic Science by Ziauddin Sardar (2002). Can Science Dispense With Religion? Archived 2016-05-29 at the Wayback Machine by Mehdi Golshani. Islam, science and Muslims by Seyyed Hossein Nasr. Center for Islam and Science Explore Islamic achievements and contributions to science Is There Such A Thing As Islamic Science? The Influence Of Islam On The World Of Science How Islam Won, and Lost, the Lead in Science Radicalism among Muslim professionals worries many
Wikipedia/Islamic_attitudes_towards_science
The conversion of non-Islamic places of worship into mosques occurred during the life of Muhammad and continued during subsequent Islamic conquests and invasions and under historical Muslim rule. Hindu temples, Jain temples, churches, synagogues, and Zoroastrian fire temples have been converted into mosques. Several such mosques in the areas of former Muslim rule have since been reconverted or have become museums, including the Parthenon in Greece and numerous mosques in Spain, such as the Mosque–Cathedral of Córdoba. The conversion of non-Islamic buildings into mosques influenced distinctive regional styles of Islamic architecture. == Qur'anic holy sites == === Jerusalem === Upon the capture of Jerusalem, it is commonly reported that Umar refused to pray in the Church of the Holy Sepulchre in spite of a treaty. The architecturally similar Dome of the Rock was built on the Temple Mount, the site of the holiest Jewish temple, which had been destroyed by the Romans in AD 70; with a consistent Jewish presence in Jerusalem, the site has always been a place of religious prayer for Jews. Umar initially built a small prayer house there, which laid the foundation for the later construction of the Al-Aqsa Mosque by the Umayyads. == Conversion of church buildings == === North America === ==== Bermuda (U.K.) ==== A lot was purchased by the Nation of Islam in Bermuda from the Roman Catholic Cathedral of St. Theresa in 1975, and the church on the property was converted into Muhammad's Mosque, named for the leader of the NOI. Following the dissolution of the NOI, it was reformed into a Sunni mosque. === Europe === ==== Albania ==== The Catholic church of Saint Nicholas (Shën Nikollë) was turned into a mosque. After being destroyed in the Communist anti-religious campaign of 1967, the site was turned into an open-air mausoleum. The church of St Stephen in Shkodër was converted into a mosque in 1479 after the city was conquered by the Ottomans. ==== Bosnia and Herzegovina ==== The Fethija Mosque (since 1592) of Bihać was a Catholic church devoted to Saint Anthony of Padua (1266). ==== Cyprus ==== Following the Ottoman conquest of Cyprus, a number of churches (especially the Catholic ones) were converted into mosques. A relatively significant surge in church-to-mosque conversion followed the 1974 Turkish invasion of Cyprus. Many of the Orthodox churches in Northern Cyprus have been converted, and many are still in the process of becoming mosques. ==== Greece ==== Numerous Orthodox churches were converted to mosques during the Ottoman period in Greece. After the Greek War of Independence, many of them were later reconverted into churches. Among them: the Church of the Acheiropoietos (Eski Mosque), the Church of Hosios David (Suluca or Murad Mosque), the Church of Prophet Elijah (Saraylı Mosque), the Church of Saint Catherine (Yakup Pasha Mosque), the Church of Saint Panteleimon (Ishakiye Mosque), the Church of Holy Apostles (Soğuksu Mosque), the Church of Hagios Demetrios (Kasımiye Mosque), the Church of Hagia Sophia (Ayasofya Mosque), the Church of Panagia Chalkeon (Kazancilar Mosque), the church of Taxiarches (İki Şerefiye Mosque), and the Rotonda of Galerius (Mosque of Suleyman Hortaji Effendi) in Thessaloniki; the Cathedral church of Veria (Hünkar Mosque) and the Church of Saint Paul in Veria (Medrese Mosque). The Church of Saint John in Ioannina was destroyed by the Ottomans, and the Aslan Pasha Mosque was built in its place. 
The Theotokos Kosmosoteira monastery in Feres was converted into a mosque in the mid-14th century; it was reconverted in 1940. The original Pantocrator (Kursum Mosque) church building in Patras. The gothic-style Panagia tou Kastrou (Enderun Mosque) and the Holy Trinity church in Knights Avenue (Khan Zade Mosque) in Rhodes, converted in 1522 and reconverted in 1947. The Brontochion Monastery, the Hagia Sophia (Ayasofya Mosque), and Panagia Hodegetria (Fethiye Mosque) churches in Laconia. The Hagia Sophia (Bey Mosque) in Drama, converted in 1430 and reconverted in 1922. Parthenon in Athens: Some time before the close of the fifteenth century, the Parthenon became a mosque. Before that, the Parthenon had been a Greek Orthodox church. Much of it was destroyed in a 1687 explosion, and a smaller mosque was erected within the ruins in 1715; this mosque was demolished in 1843. See Parthenon mosque. The Fethiye Mosque in Athens was built on top of a Byzantine basilica. It is currently an exhibition centre. The church of Saint Nicholas (Hünkar Mosque) was originally a Roman Catholic church before it was converted into a mosque in the mid-17th century. It was reconverted in 1918. ==== Hungary ==== Following the Ottoman conquest of the Kingdom of Hungary, a number of churches were converted into mosques. Those that survived the era of Ottoman rule were later reconverted into churches after the Great Turkish War. Church of Our Lady of Buda, converted into Eski Djami immediately after the capture of Buda in 1541, reconverted in 1686. Church of Mary Magdalene, Buda, converted into Fethiye Djami c. 1602, reconverted in 1686. The Franciscan Church of St John the Baptist in Buda, converted into Pasha Djami, destroyed in 1686. ==== Spain ==== A Catholic church dedicated to Saint Vincent of Lérins was built by the Visigoths in Córdoba; during the reign of Abd al-Rahman I, it was converted into a mosque. In the time of the Reconquista, Christian rule was reestablished and the building became a church once again, the Cathedral of Our Lady of the Assumption. ==== Crimean Peninsula ==== After the Ottomans conquered Mangup, the capital of the Principality of Theodoro, a prayer for the Sultan was recited in one of the churches, which was converted into a mosque, and according to Turkish authors "the house of the infidel became the house of Islam." === Middle East and North Africa === ==== Iraq ==== The Islamic State converted a number of churches into mosques after it occupied Mosul in 2014. The churches were restored to their original function after Mosul was liberated in 2017. Chaldean Church of St. Joseph in Mosul, Iraq ==== Israel and Palestinian territories ==== Cave of the Patriarchs Tombs of Nathan and Gad in Halhoul, transformed into the Mosque of Prophet Yunus. The Herodian shrine of the Cave of the Patriarchs in Hebron, the second most holy site in Judaism, was converted into a church during the Crusades before being turned into a mosque in 1266 and henceforth banned to Jews and Christians. Part of it was restored as a synagogue by Israel after 1967. Other sites in Hebron have undergone Islamification. The Tomb of Jesse and Ruth became the Church of the Forty Martyrs, which then became the Tomb of Isai and later Deir Al Arba'een. ==== Lebanon ==== Al-Omari Grand Mosque in Beirut, Lebanon; built as the Church of St. John the Baptist by the Knights Hospitaller; converted to a mosque in 1291. ==== Morocco ==== Grand Mosque of Tangier; built on a formerly Roman pagan, and then Roman Christian, site. 
==== Syria ==== The Umayyad Mosque in Damascus; built on the site of a Christian basilica dedicated to John the Baptist (Yahya), which was earlier a Roman pagan temple of Jupiter. Great Mosque of al-Nuri in Homs; initially a pagan temple for the sun god ("El-Gabal"), then converted into a church dedicated to Saint John the Baptist. Great Mosque of Hama; a temple to worship the Roman god Jupiter, which later became a church during the Byzantine era. Great Mosque of Aleppo; the agora of the Hellenistic period, which later became the garden for the Cathedral of Saint Helena. The mosque of Job in Al-Shaykh Saad, Syria, was previously a church of Job. === Turkey === ==== Istanbul ==== ===== Hagia Sophia ===== Following the Ottoman conquest of Constantinople, virtually all of the churches of Istanbul were converted into mosques except the Church of Saint Mary of the Mongols. Hagia Sophia (from the Greek: Ἁγία Σοφία, "Holy Wisdom"; Latin: Sancta Sophia or Sancta Sapientia; Turkish: Ayasofya) was the cathedral of Constantinople in the state church of the Roman Empire and the seat of the Eastern Orthodox Church's Patriarchate. After 1453 it became a mosque; it later served as a museum in Istanbul, Turkey, before being reconverted into a mosque in 2020. From the date of its dedication in 360 until 1453, it served as the Orthodox cathedral of the imperial capital, except between 1204 and 1261, when it became the Roman Catholic cathedral under the Latin Patriarch of Constantinople of the Western Crusader-established Latin Empire. In 1453, Constantinople was conquered by the Ottoman Turks under Sultan Mehmed II, who subsequently ordered the building converted into a mosque. The bells, altar, iconostasis, ambo and sacrificial vessels were removed and many of the mosaics were plastered over. Islamic features – such as the mihrab, minbar, and four minarets – were added while in the possession of the Ottomans. The building was a mosque from 29 May 1453 until 1931, when it was secularised. It was opened as a museum on 1 February 1935. On 10 July 2020, the decision of the Council of Ministers to transform it into a museum was canceled by the Council of State, and the Turkish President Erdoğan signed a decree annulling the Hagia Sophia's museum status, reverting it to a mosque. ===== Other churches ===== ==== Rest of Turkey ==== Elsewhere in Turkey numerous churches were converted into mosques, including: ===== Orthodox ===== Parkhali Monastery in Artvin Khakhuli Monastery in Erzurum ===== Armenian Apostolic ===== Hundreds of Armenian churches were converted into mosques in Turkey and Azerbaijan. Cathedral of Kars Cathedral of Ani Liberation Mosque, ex St Mary's Church Cathedral, Gaziantep == Conversion of Hindu and Jain temples == == Conversion of synagogues == === North Africa === ==== Algeria ==== Great Synagogue of Algiers, now Ben Farès Mosque Great Synagogue of Oran, now Abdellah Ben Salem Mosque === Europe === ==== France ==== Or Thora Synagogue of Marseille, built in the 1960s by Jews from Algeria, was turned into a mosque in 2016 after being bought by a conservative Muslim organization, the al-Badr organization. ==== The Netherlands ==== The Ashkenazi synagogue on Wagenstraat street of The Hague, built in 1844, became the Aqsa Mosque in 1981. The synagogue had been sold to the city by the Jewish community in 1976, on the grounds that it would not be converted into a church. In 1979, Turkish Muslim residents occupied the abandoned building and demanded it be turned into a mosque, citing alleged construction safety concerns with their usual mosque. 
The synagogue was conceded to the Muslim community three years later. == Influence on Islamic architecture == The conversion of non-Islamic religious buildings into mosques during the first centuries of Islam played a major role in the development of Islamic architectural styles. Distinct regional styles of mosque design, which have come to be known by such names as Arab, Persian, Andalusian, and others, commonly reflected the external and internal stylistic elements of churches and other temples characteristic of that region. == See also == == References == == External links == Quotations related to Conversion of non-Islamic places of worship into mosques at Wikiquote
Wikipedia/Conversion_of_non-Islamic_places_of_worship_into_mosques
Clinical trials are prospective biomedical or behavioral research studies on human participants designed to answer specific questions about biomedical or behavioral interventions, including new treatments (such as novel vaccines, drugs, dietary choices, dietary supplements, and medical devices) and known interventions that warrant further study and comparison. Clinical trials generate data on dosage, safety and efficacy. They are conducted only after they have received health authority/ethics committee approval in the country where approval of the therapy is sought. These authorities are responsible for vetting the risk/benefit ratio of the trial—their approval does not mean the therapy is 'safe' or effective, only that the trial may be conducted. Depending on product type and development stage, investigators initially enroll volunteers or patients into small pilot studies, and subsequently conduct progressively larger scale comparative studies. Clinical trials can vary in size and cost, and they can involve a single research center or multiple centers, in one country or in multiple countries. Clinical study design aims to ensure the scientific validity and reproducibility of the results. Costs for clinical trials can range into the billions of dollars per approved drug, and the complete trial process to approval may require 7–15 years. The sponsor may be a governmental organization or a pharmaceutical, biotechnology or medical-device company. Certain functions necessary to the trial, such as monitoring and lab work, may be managed by an outsourced partner, such as a contract research organization or a central laboratory. Only 10 percent of all drugs started in human clinical trials become approved drugs. == Overview == === Trials of drugs === Some clinical trials involve healthy subjects with no pre-existing medical conditions. Other clinical trials pertain to people with specific health conditions who are willing to try an experimental treatment. Pilot experiments are conducted to gain insights for design of the clinical trial to follow. There are two goals to testing medical treatments: to learn whether they work well enough, called "efficacy", or "effectiveness"; and to learn whether they are safe enough, called "safety". Neither is an absolute criterion; both safety and efficacy are evaluated relative to how the treatment is intended to be used, what other treatments are available, and the severity of the disease or condition. The benefits must outweigh the risks.: 8  For example, many drugs to treat cancer have severe side effects that would not be acceptable for an over-the-counter pain medication, yet the cancer drugs have been approved since they are used under a physician's care and are used for a life-threatening condition. In the US the elderly constitute 14% of the population, while they consume over one-third of drugs. People over 55 (or a similar cutoff age) are often excluded from trials because their greater health issues and drug use complicate data interpretation, and because they have different physiological capacity than younger people. Children and people with unrelated medical conditions are also frequently excluded. Pregnant women are often excluded due to potential risks to the fetus. The sponsor designs the trial in coordination with a panel of expert clinical investigators, including what alternative or existing treatments to compare to the new drug and what type(s) of patients might benefit. 
If the sponsor cannot obtain enough test subjects at one location, investigators at other locations are recruited to join the study. During the trial, investigators recruit subjects with the predetermined characteristics, administer the treatment(s) and collect data on the subjects' health for a defined time period. Data include measurements such as vital signs, concentration of the study drug in the blood or tissues, changes to symptoms, and whether improvement or worsening of the condition targeted by the study drug occurs. The researchers send the data to the trial sponsor, who then analyzes the pooled data using statistical tests. Examples of clinical trial goals include assessing the safety and relative effectiveness of a medication or device: On a specific kind of patient At varying dosages For a new indication Evaluation for improved efficacy in treating a condition as compared to the standard therapy for that condition Evaluation of the study drug or device relative to two or more already approved/common interventions for that condition While most clinical trials test one alternative to the novel intervention, some expand to three or four and may include a placebo. Except for small, single-location trials, the design and objectives are specified in a document called a clinical trial protocol. The protocol is the trial's "operating manual" and ensures all researchers perform the trial in the same way on similar subjects and that the data is comparable across all subjects. As a trial is designed to test hypotheses and rigorously monitor and assess outcomes, it can be seen as an application of the scientific method, specifically the experimental step. The most common clinical trials evaluate new pharmaceutical products, medical devices, biologics, diagnostic assays, psychological therapies, or other interventions. Clinical trials may be required before a national regulatory authority approves marketing of the innovation. === Trials of devices === Similarly to drugs, manufacturers of medical devices in the United States are required to conduct clinical trials for premarket approval. Device trials may compare a new device to an established therapy, or may compare similar devices to each other. An example of the former in the field of vascular surgery is the Open versus Endovascular Repair (OVER trial) for the treatment of abdominal aortic aneurysm, which compared the older open aortic repair technique to the newer endovascular aneurysm repair device. Examples of the latter are clinical trials on mechanical devices used in the management of adult female urinary incontinence. === Trials of procedures === Similarly to drugs, medical or surgical procedures may be subjected to clinical trials, such as comparing different surgical approaches in the treatment of fibroids for subfertility. However, when clinical trials are unethical or logistically impossible in the surgical setting, case-controlled studies may be used instead. === Patient and public involvement === Besides being participants in a clinical trial, members of the public can actively collaborate with researchers in designing and conducting clinical research. This is known as patient and public involvement (PPI). Public involvement involves a working partnership between patients, caregivers, people with lived experience, and researchers to shape and influence what is researched and how. PPI can improve the quality of research and make it more relevant and accessible. 
People with current or past experience of illness can provide a different perspective than professionals and complement their knowledge. Through their personal knowledge they can identify research topics that are relevant and important to those living with an illness or using a service. They can also help to make the research more grounded in the needs of the specific communities they are part of. Public contributors can also ensure that the research is presented in plain language that is clear to the wider society and the specific groups it is most relevant for. == History == === Development === Although early medical experimentation was performed often, the use of a control group to provide an accurate comparison for the demonstration of the intervention's efficacy was generally lacking. For instance, Lady Mary Wortley Montagu, who campaigned for the introduction of inoculation (then called variolation) to prevent smallpox, arranged for seven prisoners who had been sentenced to death to undergo variolation in exchange for their lives. Although they survived and did not contract smallpox, there was no control group to assess whether this result was due to the inoculation or some other factor. Similar experiments performed by Edward Jenner over his smallpox vaccine were equally conceptually flawed. The first proper clinical trial was conducted by the Scottish physician James Lind. The disease scurvy, now known to be caused by a vitamin C deficiency, would often have terrible effects on the welfare of the crew of long-distance ocean voyages. In 1740, the catastrophic result of Anson's circumnavigation attracted much attention in Europe; out of 1900 men, 1400 had died, most of them allegedly from having contracted scurvy. John Woodall, an English military surgeon of the British East India Company, had recommended the consumption of citrus fruit from the 17th century onward, but its use did not become widespread. Lind conducted the first systematic clinical trial in 1747. He included a dietary supplement of an acidic quality in the experiment after two months at sea, when the ship was already afflicted with scurvy. He divided twelve scorbutic sailors into six groups of two. They all received the same diet but, in addition, group one was given a quart of cider daily, group two twenty-five drops of elixir of vitriol (sulfuric acid), group three six spoonfuls of vinegar, group four half a pint of seawater, group five received two oranges and one lemon, and the last group a spicy paste plus a drink of barley water. The treatment of group five stopped after six days when they ran out of fruit, but by then one sailor was fit for duty while the other had almost recovered. Apart from that, only group one also showed some effect of its treatment. Each year, May 20 is celebrated as Clinical Trials Day in honor of Lind's research. After 1750 the discipline began to take its modern shape. The English doctor John Haygarth demonstrated the importance of a control group for the correct identification of the placebo effect in his celebrated study of the ineffective remedy called Perkins' tractors. Further work in that direction was carried out by the eminent physician Sir William Gull, 1st Baronet in the 1860s. Frederick Akbar Mahomed (d. 1884), who worked at Guy's Hospital in London, made substantial contributions to the process of clinical trials, where "he separated chronic nephritis with secondary hypertension from what we now term essential hypertension. 
He also founded the Collective Investigation Record for the British Medical Association; this organization collected data from physicians practicing outside the hospital setting and was the precursor of modern collaborative clinical trials." === Modern trials === The ideas of Sir Ronald A. Fisher still play a role in clinical trials. While working for the Rothamsted experimental station in the field of agriculture, Fisher developed his principles of experimental design in the 1920s as an accurate methodology for the proper design of experiments. His major ideas include the importance of randomization—the random assignment of individual elements (e.g. crops or patients) to different groups for the experiment; replication—to reduce uncertainty, measurements should be repeated and experiments replicated to identify sources of variation; blocking—arranging experimental units into groups of units that are similar to each other, thus reducing irrelevant sources of variation; and the use of factorial experiments—efficient at evaluating the effects and possible interactions of several independent factors. Of these, blocking and factorial design are seldom applied in clinical trials, because the experimental units are human subjects and there is typically only one independent intervention: the treatment. The British Medical Research Council officially recognized the importance of clinical trials from the 1930s. The council established the Therapeutic Trials Committee to advise and assist in the arrangement of properly controlled clinical trials on new products that seem likely on experimental grounds to have value in the treatment of disease. The first randomised curative trial was carried out at the MRC Tuberculosis Research Unit by Sir Geoffrey Marshall (1887–1982). The trial, carried out between 1946 and 1947, aimed to test the efficacy of the chemical streptomycin for curing pulmonary tuberculosis. The trial was both double-blind and placebo-controlled. The methodology of clinical trials was further developed by Sir Austin Bradford Hill, who had been involved in the streptomycin trials. From the 1920s, Hill applied statistics to medicine, attending the lectures of renowned mathematician Karl Pearson, among others. He became famous for a landmark study carried out in collaboration with Richard Doll on the correlation between smoking and lung cancer. They carried out a case-control study in 1950, which compared lung cancer patients with matched controls, and also began a sustained long-term prospective study into the broader issue of smoking and health, which involved studying the smoking habits and health of more than 30,000 doctors over a period of several years. His certificate for election to the Royal Society called him "... the leader in the development in medicine of the precise experimental methods now used nationally and internationally in the evaluation of new therapeutic and prophylactic agents." International Clinical Trials Day is celebrated on 20 May. The acronyms used in the titling of clinical trials are often contrived, and have been the subject of derision. == Types == Clinical trials are classified by the research objective created by the investigators. In an observational study, the investigators observe the subjects and measure their outcomes. The researchers do not actively manage the study. 
In an interventional study, the investigators give the research subjects an experimental drug, surgical procedure, use of a medical device, diagnostic or other intervention to compare the treated subjects with those receiving no treatment or the standard treatment. Then the researchers assess how the subjects' health changes. Trials are classified by their purpose. After approval for human research is granted to the trial sponsor, the U.S. Food and Drug Administration (FDA) organizes and monitors the results of trials according to type: Prevention trials look for ways to prevent disease in people who have never had the disease or to prevent a disease from returning. These approaches may include drugs, vitamins or other micronutrients, vaccines, or lifestyle changes. Screening trials test for ways to identify certain diseases or health conditions. Diagnostic trials are conducted to find better tests or procedures for diagnosing a particular disease or condition. Treatment trials test experimental drugs, new combinations of drugs, or new approaches to surgery or radiation therapy. Quality of life trials (supportive care trials) evaluate how to improve comfort and quality of care for people with a chronic illness. Genetic trials are conducted to assess the accuracy of predicting genetic disorders that make a person more or less likely to develop a disease. Epidemiological trials have the goal of identifying the general causes, patterns or control of diseases in large numbers of people. Compassionate use trials or expanded access trials provide partially tested, unapproved therapeutics to a small number of patients who have no other realistic options. Usually, this involves a disease for which no effective therapy has been approved, or a patient who has already failed all standard treatments and whose health is too compromised to qualify for participation in randomized clinical trials. Usually, case-by-case approval must be granted by both the FDA and the pharmaceutical company for such exceptions. Fixed trials consider existing data only during the trial's design, do not modify the trial after it begins, and do not assess the results until the study is completed. Adaptive clinical trials use existing data to design the trial, and then use interim results to modify the trial as it proceeds. Modifications include dosage, sample size, drug undergoing trial, patient selection criteria and "cocktail" mix. Adaptive trials often employ a Bayesian experimental design to assess the trial's progress; a minimal sketch of such an interim analysis is shown below. In some cases, trials have become an ongoing process that regularly adds and drops therapies and patient groups as more information is gained. The aim is to more quickly identify drugs that have a therapeutic effect and to zero in on patient populations for whom the drug is appropriate. Clinical trials are typically conducted in four phases, with each phase using different numbers of subjects and having a different purpose, such as focusing on identifying a specific effect. 
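As a rough illustration of the Bayesian interim analysis used in adaptive designs (referenced above), the following sketch compares two arms with a binary outcome under uniform Beta(1, 1) priors. The interim counts, the priors, and the stopping thresholds are hypothetical choices made only for this example, not part of any actual trial design.

```python
"""Illustrative sketch of a Bayesian interim analysis for an adaptive trial.

Assumes a binary outcome (response / no response) per subject and independent
Beta-Binomial models for the two arms; all numbers below are made up.
"""
import numpy as np

rng = np.random.default_rng(seed=0)

def prob_treatment_better(resp_t, n_t, resp_c, n_c, draws=100_000):
    """Posterior probability that the treatment response rate exceeds control."""
    post_t = rng.beta(1 + resp_t, 1 + n_t - resp_t, draws)   # Beta(1, 1) prior
    post_c = rng.beta(1 + resp_c, 1 + n_c - resp_c, draws)
    return float(np.mean(post_t > post_c))

# Hypothetical interim data: 18/40 responders on treatment, 11/40 on control.
p_better = prob_treatment_better(18, 40, 11, 40)

# Illustrative decision rule for the interim look.
if p_better > 0.95:
    decision = "stop early for efficacy"
elif p_better < 0.05:
    decision = "stop early for futility"
else:
    decision = "continue enrolment"
print(f"P(treatment better than control) = {p_better:.3f} -> {decision}")
```

In an actual adaptive trial, interim rules of this kind would be pre-specified in the protocol and their operating characteristics checked by simulation before the study begins.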
=== Phases === Clinical trials involving new drugs are commonly classified into five phases. Each phase of the drug approval process is treated as a separate clinical trial. The drug development process will normally proceed through phases I–IV over many years, frequently involving a decade or longer. If the drug successfully passes through phases I, II, and III, it will usually be approved by the national regulatory authority for use in the general population. Phase IV trials are performed after the newly approved drug, diagnostic or device is marketed, providing assessment of risks, benefits, or best uses. == Trial design == A fundamental distinction in evidence-based practice is between observational studies and randomized controlled trials. Types of observational studies in epidemiology, such as the cohort study and the case-control study, provide less compelling evidence than the randomized controlled trial. In observational studies, the investigators retrospectively assess associations between the treatments given to participants and their health status, with potential for considerable errors in design and interpretation. A randomized controlled trial can provide compelling evidence that the study treatment causes an effect on human health. Some Phase II and most Phase III drug trials are designed as randomized, double-blind, and placebo-controlled. Randomized: Each study subject is randomly assigned to receive either the study treatment or a placebo. Blind: The subjects involved in the study do not know which study treatment they receive. If the study is double-blind, the researchers also do not know which treatment a subject receives. The intent is to prevent researchers from treating the two groups differently. A form of double-blind study called a "double-dummy" design provides additional insurance against bias. In this kind of study, all patients are given both placebo and active doses in alternating periods. Placebo-controlled: The use of a placebo (fake treatment) allows the researchers to isolate the effect of the study treatment from the placebo effect. A sketch of how such a randomized, blinded allocation list can be generated is shown below. Clinical studies having small numbers of subjects may be "sponsored" by single researchers or a small group of researchers, and are designed to test simple questions or the feasibility of expanding the research into a more comprehensive randomized controlled trial. Clinical studies can be "sponsored" (financed and organized) by academic institutions, pharmaceutical companies, government entities and even private groups. Trials are conducted for new drugs, biotechnology, diagnostic assays or medical devices to determine their safety and efficacy prior to being submitted for regulatory review that would determine market approval. === Active control studies === In cases where giving a placebo to a person suffering from a disease may be unethical, "active comparator" (also known as "active control") trials may be conducted instead. In trials with an active control group, subjects are given either the experimental treatment or a previously approved treatment with known effectiveness. In other cases, sponsors may conduct an active comparator trial to establish an efficacy claim relative to the active comparator instead of the placebo in labeling. 
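As a rough sketch of the randomized, double-blind allocation described under Trial design above, the following example generates a permuted-block randomization list with coded arm labels. The block size, the seed, and the "kit" labels are arbitrary choices made for illustration; real allocation schedules are produced and held by an unblinded statistician or an interactive randomization system.

```python
"""Sketch of a blocked randomization list for a two-arm, double-blind trial.

Permuted blocks of four keep the arms balanced as subjects accrue; coded kit
labels stand in for 'active' vs 'placebo' so that site staff remain blinded.
"""
import random

def blocked_randomization(n_subjects, block_size=4, seed=42):
    assert block_size % 2 == 0, "block size must be even for 1:1 allocation"
    rng = random.Random(seed)
    allocations = []
    while len(allocations) < n_subjects:
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)                 # randomize order within each block
        allocations.extend(block)
    return allocations[:n_subjects]

# Only the unblinded statistician holds the key mapping codes to treatments.
treatment_key = {"A": "active drug", "B": "placebo"}
for subject_id, arm in enumerate(blocked_randomization(12), start=1):
    print(f"subject {subject_id:02d} -> kit code {arm}")
```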
=== Master protocol === A master protocol includes multiple substudies, which may have different objectives and involve coordinated efforts to evaluate one or more medical products in one or more diseases or conditions within the overall study structure. Trials that could use a master protocol include the umbrella trial (multiple medical products for a single disease), the platform trial (multiple products for a single disease entering and leaving the platform), and the basket trial (one medical product for multiple diseases or disease subtypes). Genetic testing enables researchers to group patients according to their genetic profile, deliver drugs based on that profile to that group and compare the results. Multiple companies can participate, each bringing a different drug. The first such approach targets squamous cell cancer, which includes varying genetic disruptions from patient to patient. Amgen, AstraZeneca and Pfizer are involved, the first time they have worked together in a late-stage trial. Patients whose genomic profiles do not match any of the trial drugs receive a drug designed to stimulate the immune system to attack cancer. === Clinical trial protocol === A clinical trial protocol is a document used to define and manage the trial. It is prepared by a panel of experts. All study investigators are expected to strictly observe the protocol. The protocol describes the scientific rationale, objective(s), design, methodology, statistical considerations and organization of the planned trial. Details of the trial are provided in documents referenced in the protocol, such as an investigator's brochure. The protocol contains a precise study plan to assure the safety and health of the trial subjects and to provide an exact template for trial conduct by investigators. This allows data to be combined across all investigators/sites. The protocol also informs the study administrators (often a contract research organization). The format and content of clinical trial protocols sponsored by pharmaceutical, biotechnology or medical device companies in the United States, European Union, or Japan have been standardized to follow Good Clinical Practice guidance issued by the International Conference on Harmonisation (ICH). Regulatory authorities in Canada, China, South Korea, and the UK also follow ICH guidelines. Journals such as Trials encourage investigators to publish their protocols. === Design features === ==== Informed consent ==== Clinical trials recruit study subjects to sign a document representing their "informed consent". The document includes details such as its purpose, duration, required procedures, risks, potential benefits, key contacts and institutional requirements. The participant then decides whether to sign the document. The document is not a contract, as the participant can withdraw at any time without penalty. Informed consent is a legal process in which a recruit is instructed about key facts before deciding whether to participate. Researchers explain the details of the study in terms the subject can understand. The information is presented in the subject's native language. Generally, children cannot autonomously provide informed consent, but depending on their age and other factors, may be required to provide informed assent. ==== Statistical power ==== In any clinical trial, the number of subjects, also called the sample size, has a large impact on the ability to reliably detect and measure the effects of the intervention. This ability is described as the trial's "power", which must be calculated before initiating a study to figure out if the study is worth its costs. In general, a larger sample size increases the statistical power, as well as the cost. The statistical power estimates the ability of a trial to detect a difference of a particular size (or larger) between the treatment and control groups. For example, a trial of a lipid-lowering drug versus placebo with 100 patients in each group might have a power of 0.90 to detect a difference between the placebo and treatment groups of 10 mg/dL or more, but only 0.70 to detect a difference of 6 mg/dL. 
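The dependence of power on sample size and effect size described above can be made concrete with a small calculation. The sketch below uses the normal approximation for a two-sided comparison of two means with equal group sizes; the assumed standard deviation of 22 mg/dL is purely illustrative (the 0.90 and 0.70 figures quoted above likewise depend on a variability that is not stated), so the printed values should not be read as reproducing them exactly.

```python
"""Sketch of an approximate power calculation for a parallel-group trial."""
from math import erf, sqrt

def normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_sample_power(delta, sd, n_per_group, z_alpha=1.96):
    """Approximate power of a two-sided z-test for a true mean difference delta."""
    ncp = delta / (sd * sqrt(2.0 / n_per_group))   # standardized detectable shift
    return normal_cdf(ncp - z_alpha) + normal_cdf(-ncp - z_alpha)

for delta in (10.0, 6.0):
    power = two_sample_power(delta, sd=22.0, n_per_group=100)
    print(f"true difference {delta} mg/dL, 100 per group: power ~ {power:.2f}")
```

Larger samples shrink the standard error (the sd * sqrt(2/n) term), which is why recruiting more subjects raises the power to detect the same difference.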
=== Placebo groups === Merely giving a treatment can have nonspecific effects. These are controlled for by the inclusion of patients who receive only a placebo. Subjects are assigned randomly without informing them which group they belong to. Many trials are double-blinded so that researchers do not know to which group a subject is assigned. Assigning a subject to a placebo group can pose an ethical problem if it violates his or her right to receive the best available treatment. The Declaration of Helsinki provides guidelines on this issue. === Duration === Clinical trials are only a small part of the research that goes into developing a new treatment. Potential drugs, for example, first have to be discovered, purified, characterized, and tested in labs (in cell and animal studies) before ever undergoing clinical trials. In all, about 1,000 potential drugs are tested before just one reaches the point of being tested in a clinical trial. For example, a new cancer drug has, on average, six years of research behind it before it even makes it to clinical trials. But the major holdup in making new cancer drugs available is the time it takes to complete clinical trials themselves. On average, about eight years pass from the time a cancer drug enters clinical trials until it receives approval from regulatory agencies for sale to the public. Drugs for other diseases have similar timelines. Some reasons a clinical trial might last several years: For chronic conditions such as cancer, it takes months, if not years, to see if a cancer treatment has an effect on a patient. For drugs that are not expected to have a strong effect (meaning a large number of patients must be recruited to observe 'any' effect), recruiting enough patients to test the drug's effectiveness (i.e., getting statistical power) can take several years. Only certain people who have the target disease condition are eligible to take part in each clinical trial. Researchers who treat these particular patients must participate in the trial. Then they must identify suitable patients and obtain consent from them or their families to take part in the trial. A clinical trial might also include an extended post-study follow-up period from months to years for people who have participated in the trial, a so-called "extension phase", which aims to identify the long-term impact of the treatment. The biggest barrier to completing studies is the shortage of people who take part. All drug and many device trials target a subset of the population, meaning not everyone can participate. Some drug trials require patients to have unusual combinations of disease characteristics. It is a challenge to find the appropriate patients and obtain their consent, especially when they may receive no direct benefit (because they are not paid, the study drug is not yet proven to work, or the patient may receive a placebo). In the case of cancer patients, fewer than 5% of adults with cancer will participate in drug trials. According to the Pharmaceutical Research and Manufacturers of America (PhRMA), about 400 cancer medicines were being tested in clinical trials in 2005. Not all of these will prove to be useful, but those that are may be delayed in getting approved because the number of participants is so low. For clinical trials involving potential seasonal influences (such as airborne allergies, seasonal affective disorder, influenza, and skin diseases), the study may be done during a limited part of the year (such as spring for pollen allergies), when the drug can be tested. 
Clinical trials that do not involve a new drug usually have a much shorter duration. (Exceptions are epidemiological studies, such as the Nurses' Health Study). == Administration == Clinical trials designed by a local investigator, and (in the US) federally funded clinical trials, are almost always administered by the researcher who designed the study and applied for the grant. Small-scale device studies may be administered by the sponsoring company. Clinical trials of new drugs are usually administered by a contract research organization (CRO) hired by the sponsoring company. The sponsor provides the drug and medical oversight. A CRO is contracted to perform all the administrative work on a clinical trial. For Phases II–IV the CRO recruits participating researchers, trains them, provides them with supplies, coordinates study administration and data collection, sets up meetings, monitors the sites for compliance with the clinical protocol, and ensures the sponsor receives data from every site. Specialist site management organizations can also be hired to coordinate with the CRO to ensure rapid IRB/IEC approval and faster site initiation and patient recruitment. Phase I clinical trials of new medicines are often conducted in a specialist clinical trial clinic, with dedicated pharmacologists, where the subjects can be observed by full-time staff. These clinics are often run by a CRO which specialises in these studies. At a participating site, one or more research assistants (often nurses) do most of the work in conducting the clinical trial. The research assistant's job can include some or all of the following: providing the local institutional review board (IRB) with the documentation necessary to obtain its permission to conduct the study, assisting with study start-up, identifying eligible patients, obtaining consent from them or their families, administering study treatment(s), collecting and statistically analyzing data, maintaining and updating data files during follow-up, and communicating with the IRB, as well as the sponsor and CRO. === Quality === In the context of a clinical trial, quality typically refers to the absence of errors which can impact decision making, both during the conduct of the trial and in use of the trial results. === Marketing === An Interactional Justice Model may be used to test the effects of willingness to talk with a doctor about clinical trial enrollment. Results suggested that potential clinical trial candidates were less likely to enroll in clinical trials if they were more willing to talk with their doctor. The reasoning behind this finding may be that such patients are happy with their current care. Another reason for the negative relationship between perceived fairness and clinical trial enrollment is the lack of independence from the care provider. Results also indicated a positive relationship between a lack of willingness to talk with the doctor and clinical trial enrollment. Lack of willingness to talk about clinical trials with current care providers may be due to patients' independence from the doctor. Patients who are less likely to talk about clinical trials with their providers are more willing to use other sources of information to gain better insight into alternative treatments. Efforts to increase clinical trial enrollment should therefore make use of websites and television advertising to inform the public about clinical trial enrollment. 
=== Information technology === The last decade has seen a proliferation of information technology use in the planning and conduct of clinical trials. Clinical trial management systems are often used by research sponsors or CROs to help plan and manage the operational aspects of a clinical trial, particularly with respect to investigational sites. Advanced analytics for identifying researchers and research sites with expertise in a given area utilize public and private information about ongoing research. Web-based electronic data capture (EDC) and clinical data management systems are used in a majority of clinical trials to collect case report data from sites, manage its quality and prepare it for analysis. Interactive voice response systems are used by sites to register the enrollment of patients using a phone and to allocate patients to a particular treatment arm (although phones are being increasingly replaced with web-based (IWRS) tools which are sometimes part of the EDC system). While patient-reported outcomes were often paper-based in the past, measurements are increasingly being collected using web portals or hand-held ePRO (or eDiary) devices, sometimes wireless. Statistical software is used to analyze the collected data and prepare them for regulatory submission. Access to many of these applications is increasingly aggregated in web-based clinical trial portals. In 2011, the FDA approved a Phase I trial that used telemonitoring, also known as remote patient monitoring, to collect biometric data in patients' homes and transmit it electronically to the trial database. This technology provides many more data points and is far more convenient for patients, because they have fewer visits to trial sites. As noted below, decentralized clinical trials are those that do not require patients' physical presence at a site, and instead rely largely on digital health data collection, digital informed consent processes, and so on. == Analysis == A clinical trial produces data that could reveal quantitative differences between two or more interventions; statistical analyses are used to determine whether such differences reflect a genuine treatment effect, arise by chance, or are equivalent to no treatment (placebo). Data from a clinical trial accumulate gradually over the trial duration, extending from months to years. Accordingly, results for participants recruited early in the study become available for analysis while subjects are still being assigned to treatment groups in the trial. Early analysis may allow the emerging evidence to assist decisions about whether to stop the study, or to reassign participants to the more successful segment of the trial. Investigators may also want to stop a trial when data analysis shows no treatment effect. 
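As a rough illustration of such interim decisions, the sketch below applies a Haybittle-Peto style monitoring rule: the trial is stopped early only if an interim p-value is extremely small, while the final analysis is performed at the conventional significance level. The z statistics, thresholds and number of looks are illustrative assumptions, not a reconstruction of any particular trial's monitoring plan.

```python
# Minimal sketch of interim monitoring with a Haybittle-Peto style rule:
# stop early only if an interim p-value is extremely small (< 0.001),
# otherwise continue and test at the conventional level at the final look.
# All numbers here are illustrative assumptions.
from scipy.stats import norm

def interim_decision(z_statistic, is_final_look,
                     interim_alpha=0.001, final_alpha=0.05):
    """Return a stop/continue decision for a two-sided test."""
    p_value = 2 * (1 - norm.cdf(abs(z_statistic)))
    if is_final_look:
        return "stop: significant" if p_value < final_alpha else "stop: no effect shown"
    return "stop early" if p_value < interim_alpha else "continue"

# Example: three interim looks and one final look (z statistics are made up).
looks = [(1.1, False), (2.0, False), (3.6, False), (2.3, True)]
for look, (z, final) in enumerate(looks, 1):
    decision = interim_decision(z, final)
    print(f"look {look}: {decision}")
    if decision.startswith("stop"):
        break
```

In this made-up example the third interim look produces a very small p-value, so the rule recommends stopping early; with less extreme interim results the trial would have continued to the final analysis.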
== Ethical aspects == Clinical trials are closely supervised by appropriate regulatory authorities. All studies involving a medical or therapeutic intervention on patients must be approved by a supervising ethics committee before permission is granted to run the trial. The local ethics committee has discretion on how it will supervise noninterventional studies (observational studies or those using already collected data). In the US, this body is called the Institutional Review Board (IRB); in the EU, they are called Ethics committees. Most IRBs are located at the local investigator's hospital or institution, but some sponsors allow the use of a central (independent/for profit) IRB for investigators who work at smaller institutions. To be ethical, researchers must obtain the full and informed consent of participating human subjects. (One of the IRB's main functions is to ensure potential patients are adequately informed about the clinical trial.) If the patient is unable to consent for him/herself, researchers can seek consent from the patient's legally authorized representative. In addition, the clinical trial participants must be made aware that they can withdraw from the clinical trial at any time without any adverse action taken against them. In California, the state has prioritized the individuals who can serve as the legally authorized representative. In some US locations, the local IRB must certify researchers and their staff before they can conduct clinical trials. They must understand the federal patient privacy (HIPAA) law and good clinical practice. The International Conference on Harmonisation Guidelines for Good Clinical Practice is a set of standards used internationally for the conduct of clinical trials. The guidelines aim to ensure the "rights, safety and well-being of trial subjects are protected". The notion of informed consent of participating human subjects exists in many countries but its precise definition may still vary. Informed consent is clearly a 'necessary' condition for ethical conduct but does not 'ensure' ethical conduct. In compassionate use trials the latter becomes a particularly difficult problem. The final objective is to serve the community of patients or future patients in a best-possible and most responsible way. See also Expanded access. However, it may be hard to turn this objective into a well-defined, quantified, objective function. In some cases, however, this can be done; for instance, for questions of when to stop sequential treatments (see the odds algorithm), quantified methods may play an important role. Additional ethical concerns are present when conducting clinical trials on children (pediatrics), and in emergency or epidemic situations. Ethically balancing the rights of multiple stakeholders may be difficult. For example, when drug trials fail, the sponsors may have a duty to tell current and potential investors immediately, which means both the research staff and the enrolled participants may first hear about the end of a trial through public business news. === Conflicts of interest and unfavorable studies === In response to specific cases in which unfavorable data from pharmaceutical company-sponsored research were not published, the Pharmaceutical Research and Manufacturers of America published new guidelines urging companies to report all findings and limit the financial involvement in drug companies by researchers. The US Congress enacted a law which requires Phase II and Phase III clinical trials to be registered by the sponsor on the clinicaltrials.gov website maintained by the National Institutes of Health. Drug researchers not directly employed by pharmaceutical companies often seek grants from manufacturers, and manufacturers often look to academic researchers to conduct studies within networks of universities and their hospitals, e.g., for translational cancer research. Similarly, competition for tenured academic positions, government grants and prestige creates conflicts of interest among academic scientists. According to one study, approximately 75% of articles retracted for misconduct-related reasons have no declared industry financial support. Seeding trials are particularly controversial. 
In the United States, all clinical trials submitted to the FDA as part of a drug approval process are independently assessed by clinical experts within the Food and Drug Administration, including inspections of primary data collection at selected clinical trial sites. In 2001, the editors of 12 major journals issued a joint editorial, published in each journal, on the control over clinical trials exerted by sponsors, particularly targeting the use of contracts which allow sponsors to review the studies prior to publication and withhold publication. They strengthened editorial restrictions to counter the effect. The editorial noted that contract research organizations had, by 2000, received 60% of the grants from pharmaceutical companies in the US. Researchers may be restricted from contributing to the trial design, accessing the raw data, and interpreting the results. Despite explicit recommendations by stakeholders of measures to improve the standards of industry-sponsored medical research, in 2013, Tohen warned of the persistence of a gap in the credibility of conclusions arising from industry-funded clinical trials, and called for ensuring strict adherence to ethical standards in industrial collaborations with academia, in order to avoid further erosion of the public's trust. Issues referred for attention in this respect include potential observation bias, duration of the observation time for maintenance studies, the selection of the patient populations, factors that affect placebo response, and funding sources. === During public health crises === Conducting clinical trials of vaccines during epidemics and pandemics is subject to ethical concerns. For diseases with high mortality rates like Ebola, assigning individuals to a placebo or control group can be viewed as a death sentence. In response to ethical concerns regarding clinical research during epidemics, the National Academy of Medicine authored a report identifying seven ethical and scientific considerations to guide the design and conduct of trials during such outbreaks. === Pregnant women and children === Pregnant women and children are typically excluded from clinical trials as vulnerable populations, though the data to support excluding them are not robust. By excluding them from clinical trials, information about the safety and effectiveness of therapies for these populations is often lacking. During the early history of the HIV/AIDS epidemic, a scientist noted that by excluding these groups from potentially life-saving treatment, they were being "protected to death". Projects such as Research Ethics for Vaccines, Epidemics, and New Technologies (PREVENT) have advocated for the ethical inclusion of pregnant women in vaccine trials. Inclusion of children in clinical trials has additional moral considerations, as children lack decision-making autonomy. Trials in the past were criticized for using hospitalized children or orphans; these ethical concerns effectively stopped future research. In efforts to maintain effective pediatric care, several European countries and the US have policies to entice or compel pharmaceutical companies to conduct pediatric trials. International guidance recommends ethical pediatric trials by limiting harm, considering varied risks, and taking into account the complexities of pediatric care. 
== Safety == Responsibility for the safety of the subjects in a clinical trial is shared between the sponsor, the local site investigators (if different from the sponsor), the various IRBs that supervise the study, and (in some cases, if the study involves a marketable drug or device), the regulatory agency for the country where the drug or device will be sold. A systematic concurrent safety review is frequently employed to assure research participant safety. The conduct and on-going review is designed to be proportional to the risk of the trial. Typically this role is filled by a Data and Safety Committee, an externally appointed Medical Safety Monitor, an Independent Safety Officer, or for small or low-risk studies the principal investigator. For safety reasons, many clinical trials of drugs are designed to exclude women of childbearing age, pregnant women, or women who become pregnant during the study. In some cases, the male partners of these women are also excluded or required to take birth control measures. === Sponsor === Throughout the clinical trial, the sponsor is responsible for accurately informing the local site investigators of the true historical safety record of the drug, device or other medical treatments to be tested, and of any potential interactions of the study treatment(s) with already approved treatments. This allows the local investigators to make an informed judgment on whether to participate in the study or not. The sponsor is also responsible for monitoring the results of the study as they come in from the various sites as the trial proceeds. In larger clinical trials, a sponsor will use the services of a data monitoring committee (DMC, known in the US as a data safety monitoring board). This independent group of clinicians and statisticians meets periodically to review the unblinded data the sponsor has received so far. The DMC has the power to recommend termination of the study based on their review, for example if the study treatment is causing more deaths than the standard treatment, or seems to be causing unexpected and study-related serious adverse events. The sponsor is responsible for collecting adverse event reports from all site investigators in the study, and for informing all the investigators of the sponsor's judgment as to whether these adverse events were related or not related to the study treatment. The sponsor and the local site investigators are jointly responsible for writing a site-specific informed consent that accurately informs the potential subjects of the true risks and potential benefits of participating in the study, while at the same time presenting the material as briefly as possible and in ordinary language. FDA regulations state that participating in clinical trials is voluntary, with the subject having the right not to participate or to end participation at any time. === Local site investigators === The ethical principle of primum non-nocere ("first, do no harm") guides the trial, and if an investigator believes the study treatment may be harming subjects in the study, the investigator can stop participating at any time. On the other hand, investigators often have a financial interest in recruiting subjects, and could act unethically to obtain and maintain their participation. The local investigators are responsible for conducting the study according to the study protocol, and supervising the study staff throughout the duration of the study. 
The local investigator or his/her study staff are also responsible for ensuring the potential subjects in the study understand the risks and potential benefits of participating in the study. In other words, they (or their legally authorized representatives) must give truly informed consent. Local investigators are responsible for reviewing all adverse event reports sent by the sponsor. These adverse event reports contain the opinions of both the investigator (at the site where the adverse event occurred) and the sponsor, regarding the relationship of the adverse event to the study treatments. Local investigators also are responsible for making an independent judgment of these reports, and promptly informing the local IRB of all serious and study treatment-related adverse events. When a local investigator is the sponsor, there may not be formal adverse event reports, but study staff at all locations are responsible for informing the coordinating investigator of anything unexpected. The local investigator is responsible for being truthful to the local IRB in all communications relating to the study. === Institutional review boards (IRBs) === Approval by an Institutional Review Board (IRB), or Independent Ethics Committee (IEC), is necessary before all but the most informal research can begin. In commercial clinical trials, the study protocol is not approved by an IRB before the sponsor recruits sites to conduct the trial. However, the study protocol and procedures have been tailored to fit generic IRB submission requirements. In this case, and where there is no independent sponsor, each local site investigator submits the study protocol, the consent(s), the data collection forms, and supporting documentation to the local IRB. Universities and most hospitals have in-house IRBs. Other researchers (such as in walk-in clinics) use independent IRBs. The IRB scrutinizes the study both for medical safety and for protection of the patients involved in the study, before it allows the researcher to begin the study. It may require changes in study procedures or in the explanations given to the patient. A required yearly "continuing review" report from the investigator updates the IRB on the progress of the study and any new safety information related to the study. === Regulatory agencies === In the US, the FDA can audit the files of local site investigators after they have finished participating in a study, to see if they were correctly following study procedures. This audit may be random, or for cause (because the investigator is suspected of fraudulent data). Avoiding an audit is an incentive for investigators to follow study procedures. A 'covered clinical study' refers to a trial submitted to the FDA as part of a marketing application (for example, as part of an NDA or 510(k)), about which the FDA may require disclosure of financial interest of the clinical investigator in the outcome of the study. For example, the applicant must disclose whether an investigator owns equity in the sponsor, or owns proprietary interest in the product under investigation. The FDA defines a covered study as "... any study of a drug, biological product or device in humans submitted in a marketing application or reclassification petition that the applicant or FDA relies on to establish that the product is effective (including studies that show equivalence to an effective product) or any study in which a single investigator makes a significant contribution to the demonstration of safety." 
Alternatively, many American pharmaceutical companies have moved some clinical trials overseas. Benefits of conducting trials abroad include lower costs (in some countries) and the ability to run larger trials in shorter timeframes, whereas a potential disadvantage exists in lower-quality trial management. Different countries have different regulatory requirements and enforcement abilities. An estimated 40% of all clinical trials now take place in Asia, Eastern Europe, and Central and South America. "There is no compulsory registration system for clinical trials in these countries and many do not follow European directives in their operations", says Jacob Sijtsma of the Netherlands-based WEMOS, an advocacy health organisation tracking clinical trials in developing countries. Beginning in the 1980s, harmonization of clinical trial protocols was shown to be feasible across countries of the European Union. At the same time, coordination between Europe, Japan and the United States led to a joint regulatory-industry initiative on international harmonisation, named in 1990 the International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH). Currently, most clinical trial programs follow ICH guidelines, aimed at "ensuring that good quality, safe and effective medicines are developed and registered in the most efficient and cost-effective manner. These activities are pursued in the interest of the consumer and public health, to prevent unnecessary duplication of clinical trials in humans and to minimize the use of animal testing without compromising the regulatory obligations of safety and effectiveness." === Aggregation of safety data during clinical development === Aggregating safety data across clinical trials during drug development is important because trials are generally designed to focus on determining how well the drug works. The safety data collected and aggregated across multiple trials as the drug is developed allows the sponsor, investigators and regulatory agencies to monitor the aggregate safety profile of experimental medicines as they are developed. The value of assessing aggregate safety data is twofold: a) decisions based on aggregate safety assessment can be made throughout the medicine's development, and b) it prepares the sponsor and regulators to assess the medicine's safety after the drug is approved. == Economics == Clinical trial costs vary depending on trial phase, type of trial, and disease studied. A study of clinical trials conducted in the United States from 2004 to 2012 found the average cost of Phase I trials to be between $1.4 million and $6.6 million, depending on the type of disease. Phase II trials ranged from $7 million to $20 million, and Phase III trials from $11 million to $53 million. === Sponsor === The cost of a study depends on many factors, especially the number of sites conducting the study, the number of patients involved, and whether the study treatment is already approved for medical use. 
The expenses incurred by a pharmaceutical company in administering a Phase III or IV clinical trial may include, among others: production of the drug(s) or device(s) being evaluated; staff salaries for the designers and administrators of the trial; payments to the contract research organization, the site management organization (if used) and any outside consultants; payments to local researchers and their staff for their time and effort in recruiting test subjects and collecting data for the sponsor; the cost of study materials and the charges incurred to ship them; communication with the local researchers, including on-site monitoring by the CRO before and (in some cases) multiple times during the study; one or more investigator training meetings; expenses incurred by the local researchers, such as pharmacy fees, IRB fees and postage; any payments to subjects enrolled in the trial; and the expense of treating a test subject who develops a medical condition caused by the study drug. These expenses are incurred over several years. In the US, sponsors may receive a 50 percent tax credit for clinical trials conducted on drugs being developed for the treatment of orphan diseases. National health agencies, such as the US National Institutes of Health, offer grants to investigators who design clinical trials that attempt to answer research questions of interest to the agency. In these cases, the investigator who writes the grant and administers the study acts as the sponsor, and coordinates data collection from any other sites. These other sites may or may not be paid for participating in the study, depending on the amount of the grant and the amount of effort expected from them. Using internet resources can, in some cases, reduce the economic burden. === Investigators === Investigators are often compensated for their work in clinical trials. These amounts can be small, just covering a partial salary for research assistants and the cost of any supplies (usually the case with national health agency studies), or be substantial and include "overhead" that allows the investigator to pay the research staff during times between clinical trials. === Subjects === Participants in Phase I drug trials do not gain any direct health benefit from taking part. They are generally paid a fee for their time, with payments regulated and not related to any risk involved. The motivations of healthy volunteers are not limited to financial reward and may include other motivations such as contributing to science. In later-phase trials, subjects may not be paid, to ensure that their motivation for participating is the potential for a health benefit or a contribution to medical knowledge. Small payments may be made for study-related expenses such as travel or as compensation for their time in providing follow-up information about their health after the trial treatment ends. == Participant recruitment and participation == Phase 0 and Phase I drug trials seek healthy volunteers. Most other clinical trials seek patients who have a specific disease or medical condition. The diversity observed in society should be reflected in clinical trials through the appropriate inclusion of ethnic minority populations. Patient recruitment or participant recruitment plays a significant role in the activities and responsibilities of sites conducting clinical trials. All volunteers being considered for a trial are required to undertake a medical screening. 
Requirements differ according to the trial needs, but typically volunteers would be screened in a medical laboratory for: measurement of the electrical activity of the heart (ECG); measurement of blood pressure, heart rate, and body temperature; blood sampling; urine sampling; weight and height measurement; drug abuse testing; and pregnancy testing. It has been observed that participants in clinical trials are disproportionately white. Often, minorities are not informed about clinical trials. One recent systematic review of the literature found that the race/ethnicity and sex of participants were often not well represented, and at times not even tracked, in a large number of clinical trials of hearing loss management in adults. This may reduce the validity of findings for non-white patients, since the larger population is not adequately represented. === Locating trials === Depending on the kind of participants required, sponsors of clinical trials, or contract research organizations working on their behalf, try to find sites with qualified personnel as well as access to patients who could participate in the trial. Working with those sites, they may use various recruitment strategies, including patient databases, newspaper and radio advertisements, flyers, posters in places the patients might go (such as doctor's offices), and personal recruitment of patients by investigators. Volunteers with specific conditions or diseases have additional online resources to help them locate clinical trials. For example, the Fox Trial Finder connects Parkinson's disease trials around the world to volunteers who have a specific set of criteria such as location, age, and symptoms. Other disease-specific services exist for volunteers to find trials related to their condition. Volunteers may search directly on ClinicalTrials.gov to locate trials using a registry run by the U.S. National Institutes of Health and National Library of Medicine. There also is software that allows clinicians to find trial options for an individual patient based on data such as genomic data. === Research === The risk information seeking and processing (RISP) model analyzes social implications that affect attitudes and decision making pertaining to clinical trials. People who hold a higher stake or interest in the treatment provided in a clinical trial showed a greater likelihood of seeking information about clinical trials. Cancer patients reported more optimistic attitudes towards clinical trials than the general population. Having a more optimistic outlook on clinical trials also leads to greater likelihood of enrolling. === Matching === Matching involves a systematic comparison of a patient's clinical and demographic information against the eligibility criteria of various trials. Methods include: Manual: Healthcare providers or clinical trial coordinators manually review patient records and available trial criteria to identify potential matches. This might also include manually searching in clinical trial databases. Electronic health records (EHR): Some systems integrate with EHRs to automatically flag patients that may be eligible for trials based on their medical data. These systems may leverage machine learning, artificial intelligence or precision medicine methods to more effectively match patients to trials. These methods are faced with the challenge of overcoming the limitations of EHR records such as omissions and logging errors. (A minimal sketch of such rule-based matching follows this list.) Direct-to-patient services: Resources are specialized to support patients in finding clinical trials through online platforms, hotlines, and personalized support. 
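As a rough illustration of the automated flagging described in the list above, the sketch below checks a simplified patient record against simplified trial eligibility criteria. The field names, criteria and sample data are hypothetical and are not drawn from any real EHR system, trial or registry; production systems operate on far richer and messier data.

```python
# Minimal sketch: flag whether a (simplified) patient record satisfies the
# eligibility criteria of each trial. Field names, criteria and the sample
# record are hypothetical; real systems work on far richer EHR data.

def is_eligible(patient, criteria):
    """Return True if the patient meets every criterion of one trial."""
    if not (criteria["min_age"] <= patient["age"] <= criteria["max_age"]):
        return False
    if criteria["condition"] not in patient["conditions"]:
        return False
    # Exclusion criteria: any match disqualifies the patient.
    if any(c in patient["conditions"] for c in criteria["exclusions"]):
        return False
    return True

trials = {
    "TRIAL-A": {"condition": "type 2 diabetes", "min_age": 18, "max_age": 75,
                "exclusions": ["chronic kidney disease"]},
    "TRIAL-B": {"condition": "hypertension", "min_age": 40, "max_age": 85,
                "exclusions": []},
}

patient = {"age": 62, "conditions": ["type 2 diabetes", "hypertension"]}

matches = [name for name, crit in trials.items() if is_eligible(patient, crit)]
print(matches)  # -> ['TRIAL-A', 'TRIAL-B']
```

The same pattern extends to many more criteria (lab values, prior treatments, comorbidities), which is where the EHR data-quality problems mentioned above become the limiting factor.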
== Decentralized trials == Although trials are commonly conducted at major medical centers, some participants are excluded due to the distance and expenses required for travel, leading to hardship, disadvantage, and inequity for participants, especially those in rural and underserved communities. Therefore, the concept of a "decentralized clinical trial", which minimizes or eliminates the need for patients to travel to sites, is now more widespread, a capability improved by telehealth and wearable technologies. == See also == Outcome measure Odds algorithm Preregistration (science) Marketing authorisation == References == == External links == The International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use, a guideline for regulation of clinical trials ClinicalTrials.gov, a worldwide database of registered clinical trials; US National Library of Medicine Cochrane Central Register of Controlled Trials (CENTRAL); a concentrated source for bibliographic reports of randomized controlled trials ClinicalTrials.eu, European Clinical Trials Information Network; Clinical Trials easily understood. The Hidden World of Clinical Trials: A Journey into Medical Innovation - A blog providing insights into medical innovation in clinical trials.
Wikipedia/Clinical_trials
Science in the medieval Islamic world was the science developed and practised during the Islamic Golden Age under the Abbasid Caliphate of Baghdad, the Umayyads of Córdoba, the Abbadids of Seville, the Samanids, the Ziyarids and the Buyids in Persia and beyond, spanning the period roughly between 786 and 1258. Islamic scientific achievements encompassed a wide range of subject areas, especially astronomy, mathematics, and medicine. Other subjects of scientific inquiry included alchemy and chemistry, botany and agronomy, geography and cartography, ophthalmology, pharmacology, physics, and zoology. Medieval Islamic science had practical purposes as well as the goal of understanding. For example, astronomy was useful for determining the Qibla, the direction in which to pray, botany had practical application in agriculture, as in the works of Ibn Bassal and Ibn al-'Awwam, and geography enabled Abu Zayd al-Balkhi to make accurate maps. Islamic mathematicians such as Al-Khwarizmi, Avicenna and Jamshīd al-Kāshī made advances in algebra, trigonometry, geometry and Arabic numerals. Islamic doctors described diseases like smallpox and measles, and challenged classical Greek medical theory. Al-Biruni, Avicenna and others described the preparation of hundreds of drugs made from medicinal plants and chemical compounds. Islamic physicists such as Ibn Al-Haytham, Al-Bīrūnī and others studied optics and mechanics as well as astronomy, and criticised Aristotle's view of motion. During the Middle Ages, Islamic science flourished across a wide area around the Mediterranean Sea and further afield, for several centuries, in a wide range of institutions. == Context and history == The Islamic era began in 622. Islamic armies eventually conquered Arabia, Egypt and Mesopotamia, and successfully displaced the Persian and Byzantine Empires from the region within a few decades. Within a century, Islam had reached the area of present-day Portugal in the west and Central Asia in the east. The Islamic Golden Age (roughly between 786 and 1258) spanned the period of the Abbasid Caliphate (750–1258), with stable political structures and flourishing trade. Major religious and cultural works of the Islamic empire were translated into Arabic and occasionally Persian. Islamic culture inherited Greek, Indic, Assyrian and Persian influences. A new common civilisation formed, based on Islam. An era of high culture and innovation ensued, with rapid growth in population and cities. The Arab Agricultural Revolution in the countryside brought more crops and improved agricultural technology, especially irrigation. This supported the larger population and enabled culture to flourish. From the 9th century onwards, scholars such as Al-Kindi translated Indian, Assyrian, Sasanian (Persian) and Greek knowledge, including the works of Aristotle, into Arabic. These translations supported advances by scientists across the Islamic world. Islamic science survived the initial Christian reconquest of Spain, including the fall of Seville in 1248, as work continued in the eastern centres (such as in Persia). After the completion of the Spanish reconquest in 1492, the Islamic world went into an economic and cultural decline. The Abbasid caliphate was followed by the Ottoman Empire (c. 1299–1922), centred in Turkey, and the Safavid Empire (1501–1736), centred in Persia, where work in the arts and sciences continued. 
== Fields of inquiry == Medieval Islamic scientific achievements encompassed a wide range of subject areas, especially mathematics, astronomy, and medicine. Other subjects of scientific inquiry included physics, alchemy and chemistry, ophthalmology, and geography and cartography. === Alchemy and chemistry === The early Islamic period saw the development of theoretical frameworks in alchemy and chemistry, laying the foundation for later advancements in both fields. The sulfur-mercury theory of metals, first found in Sirr al-khalīqa ("The Secret of Creation", c. 750–850, falsely attributed to Apollonius of Tyana), and in the writings attributed to Jabir ibn Hayyan (written c. 850–950), remained the basis of theories of metallic composition until the 18th century. The Emerald Tablet, a cryptic text that all later alchemists up to and including Isaac Newton saw as the foundation of their art, first occurs in the Sirr al-khalīqa and in one of the works attributed to Jabir. In practical chemistry, the works of Jabir, and those of the Persian alchemist and physician Abu Bakr al-Razi (c. 865–925), contain the earliest systematic classifications of chemical substances. Alchemists were also interested in artificially creating such substances. Jabir describes the synthesis of ammonium chloride (sal ammoniac) from organic substances, and Abu Bakr al-Razi experimented with the heating of ammonium chloride, vitriol, and other salts, which would eventually lead to the discovery of the mineral acids by 13th-century Latin alchemists such as pseudo-Geber. === Astronomy and cosmology === Astronomy became a major discipline within Islamic science. Astronomers devoted effort both towards understanding the nature of the cosmos and to practical purposes. One application involved determining the Qibla, the direction to face during prayer. Another was astrology, predicting events affecting human life and selecting suitable times for actions such as going to war or founding a city. Al-Battani (850–922) accurately determined the length of the solar year. He contributed to the Tables of Toledo, used by astronomers to predict the movements of the sun, moon and planets across the sky. Copernicus (1473–1543) later used some of Al-Battani's astronomic tables. Al-Zarqali (1028–1087) developed a more accurate astrolabe, used for centuries afterwards. He constructed a water clock in Toledo, discovered that the Sun's apogee moves slowly relative to the fixed stars, and obtained a good estimate of its motion for its rate of change. Nasir al-Din al-Tusi (1201–1274) wrote an important revision to Ptolemy's 2nd-century celestial model. When Tusi became Helagu's astrologer, he was given an observatory and gained access to Chinese techniques and observations. He developed trigonometry as a separate field, and compiled the most accurate astronomical tables available up to that time. === Botany and agronomy === The study of the natural world extended to a detailed examination of plants. The work done proved directly useful in the unprecedented growth of pharmacology across the Islamic world. Al-Dinawari (815–896) popularised botany in the Islamic world with his six-volume Kitab al-Nabat (Book of Plants). Only volumes 3 and 5 have survived, with part of volume 6 reconstructed from quoted passages. The surviving text describes 637 plants in alphabetical order from the letters sin to ya, so the whole book must have covered several thousand kinds of plants. 
Al-Dinawari described the phases of plant growth and the production of flowers and fruit. The thirteenth century encyclopedia compiled by Zakariya al-Qazwini (1203–1283) – ʿAjā'ib al-makhlūqāt (The Wonders of Creation) – contained, among many other topics, both realistic botany and fantastic accounts. For example, he described trees which grew birds on their twigs in place of leaves, but which could only be found in the far-distant British Isles. The use and cultivation of plants was documented in the 11th century by Muhammad bin Ibrāhīm Ibn Bassāl of Toledo in his book Dīwān al-filāha (The Court of Agriculture), and by Ibn al-'Awwam al-Ishbīlī (also called Abū l-Khayr al-Ishbīlī) of Seville in his 12th century book Kitāb al-Filāha (Treatise on Agriculture). Ibn Bassāl had travelled widely across the Islamic world, returning with a detailed knowledge of agronomy that fed into the Arab Agricultural Revolution. His practical and systematic book describes over 180 plants and how to propagate and care for them. It covered leaf- and root-vegetables, herbs, spices and trees. === Geography and cartography === The spread of Islam across Western Asia and North Africa encouraged an unprecedented growth in trade and travel by land and sea as far away as Southeast Asia, China, much of Africa, Scandinavia and even Iceland. Geographers worked to compile increasingly accurate maps of the known world, starting from many existing but fragmentary sources. Abu Zayd al-Balkhi (850–934), founder of the Balkhī school of cartography in Baghdad, wrote an atlas called Figures of the Regions (Suwar al-aqalim). Al-Biruni (973–1048) measured the radius of the earth using a new method. It involved observing the height of a mountain at Nandana (now in Pakistan). Al-Idrisi (1100–1166) drew a map of the world for Roger, the Norman King of Sicily (ruled 1105–1154). He also wrote the Tabula Rogeriana (Book of Roger), a geographic study of the peoples, climates, resources and industries of the whole of the world known at that time. The Ottoman admiral Piri Reis (c. 1470–1553) made a map of the New World and West Africa in 1513. He made use of maps from Greece, Portugal, Muslim sources, and perhaps one made by Christopher Columbus. He represented a part of a major tradition of Ottoman cartography. === Mathematics === Islamic mathematicians gathered, organised and clarified the mathematics they inherited from ancient Egypt, Greece, India, Mesopotamia and Persia, and went on to make innovations of their own. Islamic mathematics covered algebra, geometry and arithmetic. Algebra was mainly used for recreation: it had few practical applications at that time. Geometry was studied at different levels. Some texts contain practical geometrical rules for surveying and for measuring figures. Theoretical geometry was a necessary prerequisite for understanding astronomy and optics, and it required years of concentrated work. Early in the Abbasid caliphate (founded 750), soon after the foundation of Baghdad in 762, some mathematical knowledge was assimilated by al-Mansur's group of scientists from the pre-Islamic Persian tradition in astronomy. Astronomers from India were invited to the court of the caliph in the late eighth century; they explained the rudimentary trigonometrical techniques used in Indian astronomy. Ancient Greek works such as Ptolemy's Almagest and Euclid's Elements were translated into Arabic. 
By the second half of the ninth century, Islamic mathematicians were already making contributions to the most sophisticated parts of Greek geometry. Islamic mathematics reached its apogee in the Eastern part of the Islamic world between the tenth and twelfth centuries. Most medieval Islamic mathematicians wrote in Arabic, others in Persian. Al-Khwarizmi (8th–9th centuries) was instrumental in the adoption of the Hindu–Arabic numeral system and the development of algebra, introduced methods of simplifying equations, and used Euclidean geometry in his proofs. He was the first to treat algebra as an independent discipline in its own right, and presented the first systematic solution of linear and quadratic equations.: 14  Ibn Ishaq al-Kindi (801–873) worked on cryptography for the Abbasid Caliphate, and gave the first known recorded explanation of cryptanalysis and the first description of the method of frequency analysis. Avicenna (c. 980–1037) contributed to mathematical techniques such as casting out nines. Thābit ibn Qurra (835–901) calculated the solution to a chessboard problem involving an exponential series. Al-Farabi (c. 870–950) attempted to describe, geometrically, the repeating patterns popular in Islamic decorative motifs in his book Spiritual Crafts and Natural Secrets in the Details of Geometrical Figures. Omar Khayyam (1048–1131), known in the West as a poet, calculated the length of the year to within 5 decimal places, and found geometric solutions to all 13 forms of cubic equations, developing some quadratic equations still in use. Jamshīd al-Kāshī (c. 1380–1429) is credited with several theorems of trigonometry, including the law of cosines, also known as Al-Kashi's Theorem. He has been credited with the invention of decimal fractions, and with a method like Horner's to calculate roots. He calculated π correctly to 17 significant figures. Sometime around the seventh century, Islamic scholars adopted the Hindu–Arabic numeral system, describing their use in a standard type of text fī l-ḥisāb al hindī, (On the numbers of the Indians). A distinctive Western Arabic variant of the Eastern Arabic numerals began to emerge around the 10th century in the Maghreb and Al-Andalus (sometimes called ghubar numerals, though the term is not always accepted), which are the direct ancestor of the modern Arabic numerals used throughout the world. === Medicine === Islamic society paid careful attention to medicine, following a hadith enjoining the preservation of good health. Its physicians inherited knowledge and traditional medical beliefs from the civilisations of classical Greece, Rome, Syria, Persia and India. These included the writings of Hippocrates such as on the theory of the four humours, and the theories of Galen. al-Razi (c. 865–925) identified smallpox and measles, and recognized fever as a part of the body's defenses. He wrote a 23-volume compendium of Chinese, Indian, Persian, Syriac and Greek medicine. al-Razi questioned the classical Greek medical theory of how the four humours regulate life processes. He challenged Galen's work on several fronts, including the treatment of bloodletting, arguing that it was effective. al-Zahrawi (936–1013) was a surgeon whose most important surviving work is referred to as al-Tasrif (Medical Knowledge). It is a 30-volume set mainly discussing medical symptoms, treatments, and pharmacology. The last volume, on surgery, describes surgical instruments, supplies, and pioneering procedures. Avicenna (c. 
980–1037) wrote the major medical textbook, The Canon of Medicine. Ibn al-Nafis (1213–1288) wrote an influential book on medicine; it largely replaced Avicenna's Canon in the Islamic world. He wrote commentaries on Galen and on Avicenna's works. One of these commentaries, discovered in 1924, described the circulation of blood through the lungs. === Optics and ophthalmology === Optics developed rapidly in this period. By the ninth century, there were works on physiological, geometrical and physical optics. Topics covered included mirror reflection. Hunayn ibn Ishaq (809–873) wrote the book Ten Treatises on the Eye; this remained influential in the West until the 17th century. Abbas ibn Firnas (810–887) developed lenses for magnification and the improvement of vision. Ibn Sahl (c. 940–1000) discovered the law of refraction known as Snell's law. He used the law to produce the first Aspheric lenses that focused light without geometric aberrations. In the eleventh century Ibn al-Haytham (Alhazen, 965–1040) rejected the Greek ideas about vision, whether the Aristotelian tradition that held that the form of the perceived object entered the eye (but not its matter), or that of Euclid and Ptolemy which held that the eye emitted a ray. Al-Haytham proposed in his Book of Optics that vision occurs by way of light rays forming a cone with its vertex at the center of the eye. He suggested that light was reflected from different surfaces in different directions, thus causing objects to look different. He argued further that the mathematics of reflection and refraction needed to be consistent with the anatomy of the eye. He was also an early proponent of the scientific method, the concept that a hypothesis must be proved by experiments based on confirmable procedures or mathematical evidence, five centuries before Renaissance scientists. === Pharmacology === Advances in botany and chemistry in the Islamic world encouraged developments in pharmacology. Muhammad ibn Zakarīya Rāzi (Rhazes) (865–915) promoted the medical uses of chemical compounds. Abu al-Qasim al-Zahrawi (Abulcasis) (936–1013) pioneered the preparation of medicines by sublimation and distillation. His Liber servitoris provides instructions for preparing "simples" from which were compounded the complex drugs then used. Sabur Ibn Sahl (died 869) was the first physician to describe a large variety of drugs and remedies for ailments. Al-Muwaffaq, in the 10th century, wrote The foundations of the true properties of Remedies, describing chemicals such as arsenious oxide and silicic acid. He distinguished between sodium carbonate and potassium carbonate, and drew attention to the poisonous nature of copper compounds, especially copper vitriol, and also of lead compounds. Al-Biruni (973–1050) wrote the Kitab al-Saydalah (The Book of Drugs), describing in detail the properties of drugs, the role of pharmacy and the duties of the pharmacist. Ibn Sina (Avicenna) described 700 preparations, their properties, their mode of action and their indications. He devoted a whole volume to simples in The Canon of Medicine. Works by Masawaih al-Mardini (c. 925–1015) and by Ibn al-Wafid (1008–1074) were printed in Latin more than fifty times, appearing as De Medicinis universalibus et particularibus by Mesue the Younger (died 1015) and as the Medicamentis simplicibus by Abenguefit (c. 997 – 1074) respectively. Peter of Abano (1250–1316) translated and added a supplement to the work of al-Mardini under the title De Veneris. 
Ibn al-Baytar (1197–1248), in his Al-Jami fi al-Tibb, described a thousand simples and drugs based directly on Mediterranean plants collected along the entire coast between Syria and Spain, for the first time exceeding the coverage provided by Dioscorides in classical times. Islamic physicians such as Ibn Sina described clinical trials for determining the efficacy of medical drugs and substances. === Physics === The fields of physics studied in this period, apart from optics and astronomy which are described separately, are aspects of mechanics: statics, dynamics, kinematics and motion. In the sixth century John Philoponus (c. 490 – c. 570) rejected the Aristotelian view of motion. He argued instead that an object acquires an inclination to move when it has a motive power impressed on it. In the eleventh century Ibn Sina adopted roughly the same idea, namely that a moving object has force which is dissipated by external agents like air resistance. Ibn Sina distinguished between "force" and "inclination" (mayl); he claimed that an object gained mayl when the object is in opposition to its natural motion. He concluded that continuation of motion depends on the inclination that is transferred to the object, and that the object remains in motion until the mayl is spent. He also claimed that a projectile in a vacuum would not stop unless it is acted upon. That view accords with Newton's first law of motion, on inertia. As a non-Aristotelian suggestion, it was essentially abandoned until it was described as "impetus" by Jean Buridan (c. 1295–1363), who was likely influenced by Ibn Sina's Book of Healing. In the Shadows, Abū Rayḥān al-Bīrūnī (973–1048) describes non-uniform motion as the result of acceleration. Ibn-Sina's theory of mayl tried to relate the velocity and weight of a moving object, a precursor of the concept of momentum. Aristotle's theory of motion stated that a constant force produces a uniform motion; Abu'l-Barakāt al-Baghdādī (c. 1080 – 1164/5) disagreed, arguing that velocity and acceleration are two different things, and that force is proportional to acceleration, not to velocity. The Banu Musa brothers, Jafar-Muhammad, Ahmad and al-Hasan (c. early 9th century) invented automated devices described in their Book of Ingenious Devices. Advances on the subject were also made by al-Jazari and Ibn Ma'ruf. === Zoology === Many classical works, including those of Aristotle, were transmitted from Greek to Syriac, then to Arabic, then to Latin in the Middle Ages. Aristotle's zoology remained dominant in its field for two thousand years. The Kitāb al-Hayawān (كتاب الحيوان, English: Book of Animals) is a 9th-century Arabic translation of History of Animals: 1–10, On the Parts of Animals: 11–14, and Generation of Animals: 15–19. The book was mentioned by Al-Kindī (died 850), and commented on by Avicenna (Ibn Sīnā) in his The Book of Healing. Avempace (Ibn Bājja) and Averroes (Ibn Rushd) commented on and criticised On the Parts of Animals and Generation of Animals. == Significance == Muslim scientists helped in laying the foundations for an experimental science with their contributions to the scientific method and their empirical, experimental and quantitative approach to scientific inquiry. In a more general sense, the positive achievement of Islamic science was simply to flourish, for centuries, in a wide range of institutions from observatories to libraries, madrasas to hospitals and courts, both at the height of the Islamic golden age and for some centuries afterwards. 
It did not lead to a scientific revolution like that in Early modern Europe, but such external comparisons are probably to be rejected as imposing "chronologically and culturally alien standards" on a successful medieval culture. == See also == == References == == Notes == == Sources == Linton, Christopher M. (2004). From Eudoxus to Einstein—A History of Mathematical Astronomy. Cambridge University Press. ISBN 978-0-521-82750-8. Masood, Ehsan (2009). Science and Islam: A History. Icon Books. ISBN 978-1-785-78202-2. McClellan, James E. III; Dorn, Harold, eds. (2006). Science and Technology in World History (2 ed.). Johns Hopkins. ISBN 978-0-8018-8360-6. Morelon, Régis; Rashed, Roshdi (1996). Encyclopedia of the History of Arabic Science. Vol. 3. Routledge. ISBN 978-0-415-12410-2. Turner, Howard R. (1997). Science in Medieval Islam: An Illustrated Introduction. University of Texas Press. ISBN 978-0-292-78149-8. == Further reading == Al-Daffa, Ali Abdullah; Stroyls, J.J. (1984). Studies in the exact sciences in medieval Islam. Wiley. ISBN 978-0-471-90320-8. Hogendijk, Jan P.; Sabra, Abdelhamid I. (2003). The Enterprise of Science in Islam: New Perspectives. MIT Press. ISBN 978-0-262-19482-2. Hill, Donald Routledge (1993). Islamic Science And Engineering. Edinburgh University Press. ISBN 978-0-7486-0455-5. Huff, Toby (1993). The Rise of Early Modern Science: Islam, China, and the West. Cambridge University Press. Kennedy, Edward S. (1983). Studies in the Islamic Exact Sciences. Syracuse University Press. ISBN 978-0-8156-6067-5. Lindberg, D. C.; Shank, M. H., eds. (2013). The Cambridge History of Science. Volume 2: Medieval Science. Cambridge University Press. (chapters 1–5 cover science, mathematics and medicine in Islam) Morelon, Régis; Rashed, Roshdi (1996). Encyclopedia of the History of Arabic Science. Vol. 2–3. Routledge. ISBN 978-0-415-02063-3. Saliba, George (2007). Islamic Science and the Making of the European Renaissance. MIT Press. ISBN 978-0-262-19557-7. == External links == "How Greek Science Passed to the Arabs" by De Lacy O'Leary Saliba, George. "Whose Science is Arabic Science in Renaissance Europe?". Habibi, Golareh. is there such a thing as Islamic science? the influence of Islam on the world of science, Science Creative Quarterly.
Wikipedia/Islamic_science
A pseudepigraph (also anglicized as "pseudepigraphon") is a falsely attributed work, a text whose claimed author is not the true author, or a work whose real author attributed it to a figure of the past. The name of the author to whom the work is falsely attributed is often prefixed with the particle "pseudo-", such as for example "pseudo-Aristotle" or "pseudo-Dionysius": these terms refer to the anonymous authors of works falsely attributed to Aristotle and Dionysius the Areopagite, respectively. In biblical studies, the term pseudepigrapha can refer to an assorted collection of Jewish religious works thought to be written c. 300 BCE to 300 CE. They are distinguished by Protestants from the deuterocanonical books (Catholic and Orthodox) or Apocrypha (Protestant), the books that appear in extant copies of the Septuagint in the fourth century or later and the Vulgate, but not in the Hebrew Bible or in Protestant Bibles. The Catholic Church distinguishes only between the deuterocanonical and all other books; the latter are called biblical apocrypha, which in Catholic usage includes the pseudepigrapha. In addition, two books considered canonical in the Orthodox Tewahedo churches, the Book of Enoch and Book of Jubilees, are categorized as pseudepigrapha from the point of view of Chalcedonian Christianity. In addition to these sets of works generally agreed to be non-canonical, scholars also apply the term to canonical works that make a direct claim of authorship when that authorship is doubted. For example, the Book of Daniel is considered by some to have been written in the 2nd century BCE, 400 years after the prophet Daniel lived, and thus the work is pseudepigraphic. A New Testament example might be the book of 2 Peter, considered by some to be written approximately 80 years after Saint Peter's death. Early Christians, such as Origen, harbored doubts as to the authenticity of the book's authorship. The term has also been used by Quranist Muslims to describe hadiths: Quranists claim that most hadiths are fabrications created in the 8th and 9th centuries CE, and falsely attributed to the Islamic prophet Muhammad. == Etymology == The word pseudepigraph derives from the Greek ψευδής, pseudḗs, "false", and ἐπιγραφή, epigraphḗ, "name", "inscription" or "ascription"; taken together the elements mean "false superscription or title" (see the related term epigraphy). The plural of "pseudepigraph" (sometimes Latinized as "pseudepigraphon" or "pseudepigraphum") is "pseudepigrapha". == Naming == When a text is shown to have been falsely attributed to a particular author, and the true identity of the author is not known, the author can be referred to by a combination of pseudo- and the traditional author's name. For example, the Armenian History has been falsely attributed to the seventh-century Armenian historian Sebeos, and it is therefore called Pseudo-Sebeos. == Levels of authenticity == Scholars have identified seven levels of authenticity which they have organized in a hierarchy ranging from literal authorship, meaning written in the author's own hand, to outright forgery: Literal authorship. A church leader writes a letter in his own hand. Dictation. A church leader dictates a letter almost word for word to an amanuensis. Delegated authorship. A church leader describes the basic content of an intended letter to a disciple or to an amanuensis. Posthumous authorship. A church leader dies, and his disciples finish a letter that he had intended to write, sending it posthumously in his name. 
Apprentice authorship. A church leader dies, and disciples who had been authorized to speak for him while he was alive continue to do so by writing letters in his name years or decades after his death. Honorable pseudepigraphy. A church leader dies, and admirers seek to honor him by writing letters in his name as a tribute to his influence and in a sincere belief that they are responsible bearers of his tradition. Forgery. A church leader obtains sufficient prominence that, either before or after his death, people seek to exploit his legacy by forging letters in his name, presenting him as a supporter of their own ideas. == Classical and biblical studies == === Old Testament and intertestamental studies === In biblical studies, pseudepigrapha refers particularly to works which purport to be written by noted authorities in either the Old or New Testament, or by persons involved in Jewish or Christian religious study or history. These works can also be written about biblical matters, often in such a way that they appear to be as authoritative as works which have been included in the many versions of the Judeo-Christian scriptures. Eusebius indicates this usage dates back at least to Serapion of Antioch, whom Eusebius records as having said: "But those writings which are falsely inscribed with their name (ta pseudepigrapha), we as experienced persons reject...." Many such works were also referred to as Apocrypha, which originally connoted "private" or "non-public": those that were not endorsed for public reading in the liturgy. An example of a text that is both apocryphal and pseudepigraphical is the Odes of Solomon. It is considered pseudepigraphical because it was not actually written by Solomon but instead is a collection of early Christian (first to second century) hymns and poems, originally written not in Hebrew, and it is considered apocryphal because it was not accepted in either the Tanakh or the New Testament. There is a tendency not to use the word pseudepigrapha when describing works later than about 300 CE when referring to biblical matters.: 222–28  But the late-appearing Gospel of Barnabas, Apocalypse of Pseudo-Methodius, the Pseudo-Apuleius (author of a fifth-century herbal ascribed to Apuleius), and the author traditionally referred to as the "Pseudo-Dionysius the Areopagite", are classic examples of pseudepigraphy. In the fifth century the moralist Salvian published Contra avaritiam ("Against avarice") under the name of Timothy; the letter in which he explained to his former pupil, Bishop Salonius, his motives for so doing survives. The term pseudepigrapha is also commonly used to describe numerous works of Jewish religious literature written from about 300 BCE to 300 CE. Not all of these works are actually pseudepigraphical. It also refers to books of the New Testament canon whose authorship is misrepresented. Such works include the following: 3 Maccabees 4 Maccabees Assumption of Moses Ethiopic Book of Enoch (1 Enoch) Slavonic Second Book of Enoch Book of Jubilees 3 Baruch Letter of Aristeas Life of Adam and Eve Ascension of Isaiah Psalms of Solomon Sibylline Oracles 2 Baruch Testaments of the Twelve Patriarchs 4 Ezra Apocalypse of Abraham Exodus Various canonical works accepted as scripture have been reexamined by modern scholars from the 19th century onward and judged to be likely cases of pseudepigrapha. 
The Book of Daniel directly claims to be written by the prophet Daniel, yet there are strong reasons to believe it was not written until centuries after Daniel's death, such as references to the book only appearing from the 2nd century BCE onward. The book is an apocalypse wherein Daniel offers a series of predictions of the future, and is meant to reassure the Jews of the period that the tyrant Antiochus IV Epiphanes would soon be overthrown. By backdating the book to the 6th century BCE and providing a series of correct prophecies as to the history of the past 400 years, the authorship claim of Daniel would have strengthened a later author's predictions of the coming fall of the Seleucid Empire. === New Testament studies === Christian scholars traditionally maintain that nothing known to be pseudepigraphical was admitted to the New Testament canon. The Catholic Encyclopedia notes, The first four historical books of the New Testament are supplied with titles, which however ancient, do not go back to the respective authors of those sacred texts. The Canon of Muratori, Clement of Alexandria, and St. Irenaeus bear distinct witness to the existence of those headings in the latter part of the second century of our era. Indeed, the manner in which Clement (Strom. I, xxi), and St. Irenaeus (Adv. Haer. III, xi, 7) employ them implies that, at that early date, our present titles to the gospels had been in current use for some considerable time. Hence, it may be inferred that they were prefixed to the evangelical narratives as early as the first part of that same century. That however, they do not go back to the first century of the Christian era, or at least that they are not original, is a position generally held at the present day. It is felt that since they are similar for the four Gospels, although the same Gospels were composed at some interval from each other, those titles were not framed and consequently not prefixed to each individual narrative, before the collection of the four Gospels was actually made. Besides as well pointed out by Prof. Bacon, "the historical books of the New Testament differ from its apocalyptic and epistolary literature, as those of the Old Testament differ from its prophecy, in being invariably anonymous, and for the same reason. Prophecies, whether in the earlier or in the later sense, and letters, to have authority, must be referable to some individual; the greater his name, the better. But history was regarded as common possession. Its facts spoke for themselves. Only as the springs of common recollection began to dwindle, and marked differences to appear between the well-informed and accurate Gospels and the untrustworthy ... become worth while for the Christian teacher or apologist to specify whether the given representation of the current tradition was 'according to' this or that special compiler, and to state his qualifications". It thus appears that the present titles of the Gospels are not traceable to the Evangelists themselves. However, agnostic biblical scholar Bart D. Ehrman holds that only seven of Paul's epistles are convincingly genuine, and that all of the other 20 books in the New Testament appear to be written by unknown people who were not the well-known biblical figures to whom the early Christian leaders originally attributed authorship. The earliest and best manuscripts of Matthew, Mark, Luke, and John were all written anonymously. Furthermore, the books of Acts, Hebrews, 1 John, 2 John, and 3 John were also written anonymously. 
==== Pauline epistles ==== Thirteen New Testament letters are attributed to Paul and are still considered by Christians to carry Paul's authority. These letters are part of the Christian Bible and are foundational for the Christian Church. Therefore, letters which some claim to be pseudepigraphic are not considered any less valuable to Christians. Authorship of 6 out of the 13 canonical epistles of Paul has been questioned by both Christian and non-Christian biblical scholars. These are the Epistle to the Ephesians, Epistle to the Colossians, Second Epistle to the Thessalonians, First Epistle to Timothy, Second Epistle to Timothy, and Epistle to Titus. These six books are referred to by sceptical scholars such as Bart Ehrman as "deutero-Pauline letters", meaning they hold a "secondary" standing in the corpus of Paul's writings, on the grounds of proposed evidence that they could not have been written by Paul, despite internal attribution to Paul. Those known as the "Pastoral Epistles" (1 Timothy, 2 Timothy, and Titus) are all so similar that they are thought to have been written by the same author, whether by Paul himself or by someone writing in Paul's name. ==== Catholic epistles ==== Seven New Testament letters are attributed to several apostles, such as Saint Peter, John the Apostle, and Jesus's brothers James and Jude. Three of the seven letters are anonymous. These three have traditionally been attributed to John the Apostle, the son of Zebedee and one of the Twelve Apostles of Jesus. Consequently, these letters have been labelled the Johannine epistles, despite the fact that none of the epistles mentions any author. Most modern scholars believe the author is not John the Apostle, but there is no scholarly consensus for any particular historical figure (see Authorship of the Johannine works). Two of the letters claim to have been written or issued by Simon Peter, one of the Twelve Apostles of Jesus. Therefore, they have traditionally been called the Petrine epistles. However, most modern scholars agree the second epistle was probably not written by Peter, because it appears to have been written in the early 2nd century, long after Peter had died. Yet, opinions on the first epistle are more divided; many scholars do think this letter is authentic. In one epistle, the author only calls himself James (Ἰάκωβος Iákobos). It is not known which James this is supposed to be. There are several different traditional Christian interpretations of other New Testament texts which mention a James, brother of Jesus. However, most modern scholars tend to reject this identification, since the author himself does not indicate any familial relationship with Jesus. A similar problem presents itself with the Epistle of Jude (Ἰούδας Ioudas): the writer names himself a brother of James (ἀδελφὸς δὲ Ἰακώβου adelphos de Iakóbou), but it is not clear which James is meant. According to some Christian traditions, this is the same James as the author of the Epistle of James, who was allegedly a brother of Jesus; and so, this Jude should also be a brother of Jesus, despite the fact he does not indicate any such thing in his text. === Later pseudepigrapha === The Gospel of Peter and the attribution to Paul of the Epistle to the Laodiceans are both examples of pseudepigrapha that were excluded from the New Testament canon. They are often referred to as New Testament apocrypha. 
Further examples of New Testament pseudepigrapha include the Gospel of Barnabas and the Gospel of Judas, which begins by presenting itself as "the secret account of the revelation that Jesus spoke in conversation with Judas Iscariot". The Vision of Ezra is an ancient apocryphal text purportedly written by the biblical scribe Ezra. The earliest surviving manuscripts, composed in Latin, date to the 11th century CE, although textual peculiarities strongly suggest that the text was originally written in Greek. Like the Greek Apocalypse of Ezra, the work is clearly Christian, and features several apostles being seen in heaven. However, the text is significantly shorter than the Apocalypse. The Donation of Constantine is a forged Roman imperial decree by which the 4th-century emperor Constantine the Great supposedly transferred authority over Rome and the western part of the Roman Empire to the Pope. Composed probably in the 8th century, it was used, especially in the 13th century, in support of claims of political authority by the papacy. Lorenzo Valla, an Italian Catholic priest and Renaissance humanist, is credited with first exposing the forgery with solid philological arguments in 1439–1440, although the document's authenticity had been repeatedly contested since 1001. In Russian history, in 1561 Muscovites supposedly received a letter from the Patriarch of Constantinople which asserted the right of Ivan the Terrible to claim the title of Tsar. This, too, turned out to be false. While earlier Russian monarchs had on some occasions used the title "Tsar", Ivan the Terrible, previously known as "Grand Prince of all the Russias", was the first to be formally crowned as Tsar of All Rus (Russian: Царь Всея Руси). This was related to Russia's growing ambitions to become an Orthodox "Third Rome", after the Fall of Constantinople – for which the supposed approval by the Patriarch added weight. The Anaphorae of Mar Nestorius, employed in the Eastern Churches, is attributed to Nestorius, but its earliest manuscripts are in Syriac, which calls its Greek authorship into question. === The Zohar === The Zohar (Hebrew: זֹהַר, lit. Splendor or Radiance), a foundational work in the literature of Jewish mystical thought known as Kabbalah, first appeared in Spain in the 13th century, and was published by a Jewish writer named Moses de León. De León ascribed the work to Shimon bar Yochai ("Rashbi"), a rabbi of the 2nd century during the Roman persecution who, according to Jewish legend, hid in a cave for thirteen years studying the Torah and was inspired by the Prophet Elijah to write the Zohar. This accords with the traditional claim by adherents that Kabbalah is the concealed part of the Oral Torah. Modern academic analysis of the Zohar, such as that by the 20th-century religious historian Gershom Scholem, has theorized that de León was the actual author, as textual analysis points to a medieval Spanish Jewish writer rather than one living in Roman-ruled Palestine. === Ovid === Conrad Celtes, a noted German humanist scholar and poet of the German Renaissance, collected numerous Greek and Latin manuscripts in his function as librarian of the Imperial Library in Vienna. In a 1504 letter to the Venetian publisher Aldus Manutius, Celtes claimed to have discovered the missing books of Ovid's Fasti. However, it turned out that the purported Ovid verses had actually been composed by an 11th-century monk and were known to the Empire of Nicaea according to William of Rubruck. 
Even so, many contemporary scholars believed Celtes and continued to write about the existence of the missing books until well into the 17th century. == As a literary device == Pseudepigraphy has been employed as a metafictional technique. Authors who have made notable use of this device include James Hogg (The Private Memoirs and Confessions of a Justified Sinner), Thomas Carlyle (Sartor Resartus), Jorge Luis Borges ("An Examination of the Works of Herbert Quain"; "Pierre Menard, Author of the Quixote"), Vladimir Nabokov (Pale Fire), Stanislaw Lem (A Perfect Vacuum; Imaginary Magnitude) Roberto Bolaño (Nazi Literature in the Americas) and Stefan Heym (The Lenz Papers). Edgar Rice Burroughs also presented many of his works – including the most well-known, the Tarzan books – as pseudepigrapha, prefacing each book with a detailed introduction presenting the supposed actual author, with Burroughs himself pretending to be no more than the literary editor. J.R.R. Tolkien in The Lord of the Rings presents that story and The Hobbit as translated from the fictional Red Book of Westmarch written by characters within the novels. The twelve books of The Flashman Papers series by George MacDonald Fraser similarly pretend to be transcriptions of the papers left by an "illustrious Victorian soldier", each volume prefaced by a long semi-scholarly Explanatory Note stating that "additional packets of Flashman's papers have been found and are here presented to the public". A similar device was used by Ian Fleming in The Spy Who Loved Me and by various other writers of popular fiction. == See also == Channeling (New Age) Criticism of Mormon sacred texts False attribution Found manuscript Journal for the Study of the Pseudepigrapha List of Old Testament pseudepigrapha Literary forgery Modern pseudepigrapha Prophecy of the Popes == Citations == == Sources == Cueva, Edmund P.; Martínez, Javier, eds. (2016). Splendide Mendax: Rethinking Fakes and Forgeries in Classical, Late Antique, and Early Christian Literature. Groningen: Barkhuis. DiTommaso, Lorenzo (2001). A Bibliography of Pseudepigrapha Research 1850–1999. Sheffield: Sheffield Academic Press. Ehrman, Bart (2013). Forgery and Counterforgery: The Use of Literary Deceit in Early Christian Polemics. Oxford: Oxford University Press. Kiley, Mark (1986). Colossians as Pseudepigraphy. Bible Seminar. Vol. 4. Sheffield: JSOT Press. — Colossians as a non-deceptive school product Metzger, Bruce M. (1972). "Literary forgeries and canonical pseudepigrapha". Journal of Biblical Literature. 91 (1): 3–24. doi:10.2307/3262916. JSTOR 3262916. von Fritz, Kurt, ed. (1972). Pseudepigraphica 1. Geneva: Foundation Hardt. — Contributions on pseudopythagorica (the literature ascribed to Pythagoras), the Platonic Epistles, Jewish-Hellenistic literature, and the characteristics particular to religious forgeries == External links == Online Critical Pseudepigrapha Online texts of the Pseudepigrapha in their original or extant ancient languages Smith, Mahlon H. Pseudepigrapha entry in Into His Own: Perspective on the World of Jesus online historical source book, at VirtualReligion.net Journal for the Study of the Pseudepigrapha official website
Wikipedia/Pseudepigraphy
In fluid mechanics, the center of pressure is the point on a body where a single force acting at that point can represent the total effect of the pressure field acting on the body. The total force vector acting at the center of pressure is the surface integral of the pressure vector field across the surface of the body. The resultant force applied at the center of pressure produces the same net force and moment on the body as the original pressure field. Pressure fields occur in both static and dynamic fluid mechanics. Specification of the center of pressure, the reference point from which the center of pressure is referenced, and the associated force vector allows the moment generated about any point to be computed by a translation from the reference point to the desired new point. It is common for the center of pressure to be located on the body, but in fluid flows it is possible for the pressure field to exert a moment on the body of such magnitude that the center of pressure is located outside the body. == Hydrostatic example (dam) == Since the forces of water on a dam are hydrostatic forces, they vary linearly with depth. The total force on the dam is then the integral of the pressure multiplied by the width of the dam as a function of the depth. The center of pressure is located at the centroid of the triangular shaped pressure field, 2 3 {\displaystyle {\tfrac {2}{3}}} of the total depth down from the top of the water line. The hydrostatic force and tipping moment on the dam about some point can be computed from the total force and center of pressure location relative to the point of interest. == Historical usage for sailboats == Center of pressure is used in sailboat design to represent the position on a sail where the aerodynamic force is concentrated. The relationship of the aerodynamic center of pressure on the sails to the hydrodynamic center of pressure (referred to as the center of lateral resistance) on the hull determines the behavior of the boat in the wind. This behavior is known as the "helm" and is either a weather helm or lee helm. A slight amount of weather helm is thought by some sailors to be a desirable situation, both from the standpoint of the "feel" of the helm, and the tendency of the boat to head slightly to windward in stronger gusts, to some extent self-feathering the sails. Other sailors disagree and prefer a neutral helm. The fundamental cause of "helm", be it weather or lee, is the relationship of the center of pressure of the sail plan to the center of lateral resistance of the hull. If the center of pressure is astern of the center of lateral resistance, a weather helm results and the vessel tends to turn into the wind. If the situation is reversed, with the center of pressure forward of the center of lateral resistance of the hull, a "lee" helm will result, which is generally considered undesirable, if not dangerous. Too much of either helm is not good, since it forces the helmsman to hold the rudder deflected to counter it, thus inducing extra drag beyond what a vessel with neutral or minimal helm would experience. == Aircraft aerodynamics == A stable configuration is desirable not only in sailing, but in aircraft design as well. Aircraft design therefore borrowed the term center of pressure. Like a sail, a rigid non-symmetrical airfoil produces not only lift but also a moment. The center of pressure of an aircraft is the point where all of the aerodynamic pressure field may be represented by a single force vector with no moment. 
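As a minimal numerical sketch of the hydrostatic dam example above (the values and variable names below are illustrative assumptions, not data from any source), the resultant force and its center of pressure can be found by summing the linearly varying pressure over the wetted face:

# Minimal sketch (illustrative values only): resultant force and center of
# pressure for a hydrostatic pressure field on a vertical dam face.

rho, g = 1000.0, 9.81      # water density (kg/m^3), gravitational acceleration (m/s^2)
depth, width = 10.0, 1.0   # water depth (m), width of the dam strip considered (m)

n = 10000
dy = depth / n
force = 0.0
moment = 0.0
for i in range(n):
    y = (i + 0.5) * dy       # distance of this strip below the water line
    p = rho * g * y          # hydrostatic pressure at depth y
    dF = p * width * dy      # force on the strip
    force += dF
    moment += dF * y         # first moment of the force about the water line

y_cp = moment / force        # depth of the center of pressure

print(round(force), round(y_cp / depth, 3))
# ~490500 N and 0.667: the resultant is rho*g*depth^2*width/2 and acts
# two-thirds of the way down, matching the centroid of the triangular distribution.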
A similar idea is the aerodynamic center, which is the point on an airfoil where the pitching moment produced by the aerodynamic forces is constant with angle of attack. The aerodynamic center plays an important role in analysis of the longitudinal static stability of all flying machines. It is desirable that, when the pitch angle and angle of attack of an aircraft are disturbed (by, for example, wind shear or a vertical gust), the aircraft returns to its original trimmed pitch angle and angle of attack without a pilot or autopilot changing the control surface deflection. For an aircraft to return towards its trimmed attitude, without input from a pilot or autopilot, it must have positive longitudinal static stability. == Missile aerodynamics == Missiles typically do not have a preferred plane or direction of maneuver and thus have symmetric airfoils. Since the center of pressure for symmetric airfoils is relatively constant for small angle of attack, missile engineers typically speak of the complete center of pressure of the entire vehicle for stability and control analysis. In missile analysis, the center of pressure is typically defined as the center of the additional pressure field due to a change in the angle of attack away from the trim angle of attack. For unguided rockets the trim position is typically zero angle of attack and the center of pressure is defined to be the center of pressure of the resultant flow field on the entire vehicle resulting from a very small angle of attack (that is, the center of pressure is the limit as angle of attack goes to zero). For positive stability in missiles, the total vehicle center of pressure defined as given above must be further from the nose of the vehicle than the center of gravity. In missiles at lower angles of attack, the contributions to the center of pressure are dominated by the nose, wings, and fins. The normalized normal force coefficient derivative with respect to angle of attack of each component, multiplied by the location of that component's center of pressure, can be used to compute a centroid representing the total center of pressure. The center of pressure of the added flow field is behind the center of gravity and the additional force "points" in the direction of the added angle of attack; this produces a moment that pushes the vehicle back to the trim position. In guided missiles where the fins can be moved to trim the vehicle at different angles of attack, the center of pressure is the center of pressure of the flow field at that angle of attack for the undeflected fin position. This is the center of pressure of any small change in the angle of attack (as defined above). Once again for positive static stability, this definition of center of pressure requires that the center of pressure be further from the nose than the center of gravity. This ensures that any increased forces resulting from increased angle of attack result in an increased restoring moment to drive the missile back to the trimmed position. In missile analysis, positive static margin implies that the complete vehicle makes a restoring moment for any angle of attack from the trim position. == Movement of center of pressure for aerodynamic fields == The center of pressure on a symmetric airfoil typically lies close to 25% of the chord length behind the leading edge of the airfoil. (This is called the "quarter-chord point".) For a symmetric airfoil, as angle of attack and lift coefficient change, the center of pressure does not move. 
It remains around the quarter-chord point for angles of attack below the stalling angle of attack. The role of center of pressure in the control characterization of aircraft takes a different form than in missiles. On a cambered airfoil the center of pressure does not occupy a fixed location. For a conventionally cambered airfoil, the center of pressure lies a little behind the quarter-chord point at maximum lift coefficient (large angle of attack), but as lift coefficient reduces (angle of attack reduces) the center of pressure moves toward the rear. When the lift coefficient is zero an airfoil is generating no lift but a conventionally cambered airfoil generates a nose-down pitching moment, so the location of the center of pressure is an infinite distance behind the airfoil. For a reflex-cambered airfoil, the center of pressure lies a little ahead of the quarter-chord point at maximum lift coefficient (large angle of attack), but as lift coefficient reduces (angle of attack reduces) the center of pressure moves forward. When the lift coefficient is zero an airfoil is generating no lift but a reflex-cambered airfoil generates a nose-up pitching moment, so the location of the center of pressure is an infinite distance ahead of the airfoil. This direction of movement of the center of pressure on a reflex-cambered airfoil has a stabilising effect. The way the center of pressure moves as lift coefficient changes makes it difficult to use the center of pressure in the mathematical analysis of longitudinal static stability of an aircraft. For this reason, it is much simpler to use the aerodynamic center when carrying out a mathematical analysis. The aerodynamic center occupies a fixed location on an airfoil, typically close to the quarter-chord point. The aerodynamic center is the conceptual starting point for longitudinal stability. The horizontal stabilizer contributes extra stability and this allows the center of gravity to be a small distance aft of the aerodynamic center without the aircraft reaching neutral stability. The position of the center of gravity at which the aircraft has neutral stability is called the neutral point. == See also == == Notes == == References == Hurt, Hugh H. Jr. (January 1965). Aerodynamics for Naval Aviators. Washington, D.C.: Naval Air Systems Command, United States Navy. pp. 16–21. NAVWEPS 00-80T-80. Smith, Hubert (1992). The Illustrated Guide to Aerodynamics (2nd ed.). New York: TAB Books. pp. 24–27. ISBN 0-8306-3901-2. Anderson, John D. (1999), Aircraft Performance and Design, McGraw-Hill. ISBN 0-07-116010-8 Clancy, L.J. (1975), Aerodynamics, Pitman Publishing Limited, London. ISBN 0-273-01120-0
Wikipedia/Center_of_pressure_(fluid_mechanics)
The circle of forces, traction circle, friction circle, or friction ellipse is a useful way to think about the dynamic interaction between a vehicle's tire and the road surface. The diagram below shows the tire from above, so that the road surface lies in the xy-plane. The vehicle to which the tire is attached is moving in the positive y direction. In this example, the vehicle would be cornering to the right (i.e. the positive x direction points to the center of the corner). Note that the plane of rotation of the tire is at an angle to the actual direction that the tire is moving (the positive y direction). Put differently, rather than being allowed to simply "roll" in the direction that it is "pointing" (in this case, rightwards from the positive y direction), the tire instead must "slip" in a different direction from that which it is pointing in order to maintain its "forward" motion in the positive y direction. This difference between the direction the tire "points" (its plane of rotation) and the tire's actual direction of travel is the slip angle. A tire can generate horizontal force where it meets the road surface by the mechanism of slip. That force is represented in the diagram by the vector F. Note that in this example, F is perpendicular to the plane of the tire. That is because the tire is rolling freely, with no torque applied to it by the vehicle's brakes or drive train. However, that is not always the case. The magnitude of F is limited by the dashed circle, but it can be any combination of the components Fx and Fy that does not extend beyond the dashed circle. (For a real-world tire, the circle is likely to be closer to an ellipse, with the y axis slightly longer than the x axis.) In the example, the tire is generating a component of force in the x direction (Fx) which, when transferred to the vehicle's chassis via the suspension system in combination with similar forces from the other tires, will cause the vehicle to turn to the right. Note that there is also a small component of force in the negative y direction (Fy). This represents drag that will, if not countered by some other force, cause the vehicle to decelerate. Drag of this kind is an unavoidable consequence of the mechanism of slip, by which the tire generates lateral force. The diameter of the circle of forces, and therefore the maximum horizontal force that the tire can generate, depends upon many factors, including the design of the tire and its condition (age and temperature, for example), the qualities of the road surface, and the vertical load on the tire. == See also == Cornering force Racetrack (game) Skidpad Slip (vehicle dynamics) Vehicle dynamics == References ==
Wikipedia/Circle_of_forces
Automotive suspension design is an aspect of automotive engineering, concerned with designing the suspension for cars and trucks. Suspension design for other vehicles is similar, though the process may not be as well established. The process entails Selecting appropriate vehicle level targets Selecting a system architecture Choosing the location of the 'hard points', or theoretical centres of each ball joint or bushing Selecting the rates of the bushings Analysing the loads in the suspension Designing the spring rates Designing shock absorber characteristics Designing the structure of each component so that it is strong, stiff, light, and cheap Analysing the vehicle dynamics of the resulting design Since the 1990s the use of multibody simulation and finite element software has made this series of tasks more straightforward. == Vehicle level targets == A partial list would include: Maximum steady state lateral acceleration (in understeer mode) Roll stiffness (degrees per g of lateral acceleration) Ride frequencies Lateral load transfer percentage distribution front to rear Roll moment distribution front to rear Ride heights at various states of load Understeer gradient Turning circle Ackermann Jounce travel Rebound travel Once the overall vehicle targets have been identified they can be used to set targets for the two suspensions. For instance, the overall understeer target can be broken down into contributions from each end using a Bundorf analysis. == System architecture == Typically a vehicle designer is operating within a set of constraints. The suspension architecture selected for each end of the vehicle will have to obey those constraints. For both ends of the car this would include the type of spring, location of the spring, and location of the shock absorbers. For the front suspension the following need to be considered The type of suspension (MacPherson strut or double wishbone suspension) Type of steering actuator (rack and pinion or recirculating ball) Location of the steering actuator in front of, or behind, the wheel centre For the rear suspension there are many more possible suspension types, in practice. == Hardpoints == The hardpoints control the static settings and the kinematics of the suspension. The static settings are Toe Camber Caster Roll center height at design load Mechanical (or caster) trail Anti-dive and anti-squat Kingpin Inclination Scrub radius Spring and shock absorber motion ratios The kinematics describe how important characteristics change as the suspension moves, typically in roll or steer. They include Bump Steer Roll Steer Tractive Force Steer Brake Force Steer Camber gain in roll Caster gain in roll Roll centre height gain Ackermann change with steering angle Track gain in roll The analysis for these parameters can be done graphically, or by CAD, or by the use of kinematics software. == Compliance analysis == The compliance of the bushings, the body, and other parts modify the behaviour of the suspension. In general it is difficult to improve the kinematics of a suspension using the bushings, but one example where it does work is the toe control bush used in Twist-beam rear suspensions. More generally, modern cars suspensions include a Noise, vibration, and harshness (NVH) bush. This is designed as the main path for the vibrations and forces that cause road noise and impact noise, and is supposed to be tunable without affecting the kinematics too much. 
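As a rough sketch of how one vehicle-level target, the ride frequency, is connected to a coil spring rate through the motion ratio mentioned above, one could write the following; every number is an illustrative assumption rather than data for a particular vehicle, and the motion ratio is taken here as spring travel per unit wheel travel:

import math

# Sketch (illustrative values, not from the article): relating the
# ride-frequency target to a coil spring rate via the suspension motion ratio.

sprung_corner_mass = 350.0   # kg, sprung mass carried by one corner (assumed)
motion_ratio = 0.65          # spring travel per unit wheel travel (assumed convention)
spring_rate = 30000.0        # N/m at the spring (assumed)

# Wheel rate: the spring rate reflected to the contact patch through the motion ratio
wheel_rate = spring_rate * motion_ratio ** 2

# Undamped ride frequency in Hz for this corner
ride_frequency = math.sqrt(wheel_rate / sprung_corner_mass) / (2.0 * math.pi)

print(round(wheel_rate), round(ride_frequency, 2))   # ~12675 N/m, ~0.96 Hz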
In racing cars, bushings tend to be made of harder materials, such as brass or Delrin, for better handling. In passenger cars, bushings tend to be made of softer materials for added comfort. In general physical terms, the mass and mechanical hysteresis (damping effect) of solid parts should be accounted for in a dynamic analysis, as well as their elasticity. == Loads == Once the basic geometry is established the loads in each suspension part can be estimated. This can be as simple as deciding what a likely maximum load case is at the contact patch and then drawing a free body diagram of each part to work out the forces, or as complex as simulating the behaviour of the suspension over a rough road and calculating the loads caused. Often loads that have been measured on a similar suspension are used instead; this is the most reliable method. == Detailed design of arms == The loads and geometry are then used to design the arms and spindle. Inevitably some problems will be found in the course of this that force compromises to be made with the basic geometry of the suspension. == References == === Notes === === Sources === The Automotive Chassis: Engineering Principles – J. Reimpell, H. Stoll, J. W. Betzler. ISBN 978-0-7680-0657-5 Race Car Vehicle Dynamics – William F. Milliken and Douglas L. Milliken. Fundamentals of Vehicle Dynamics – Thomas Gillespie. Chassis Design: Principles and Analysis – William F. Milliken and Douglas L. Milliken. Simulation and direct equations: Abramov, S., Mannan, S., & Durieux, O. (2009) 'Semi-Active Suspension System Simulation Using SIMULINK'. International Journal of Engineering Systems Modelling and Simulation, 1(2/3), 101–114. http://collections.crest.ac.uk/232/1/fulltext.pdf
Wikipedia/Automotive_suspension_design
Flight dynamics is the science of air vehicle orientation and control in three dimensions. The three critical flight dynamics parameters are the angles of rotation in three dimensions about the vehicle's center of gravity (cg), known as pitch, roll and yaw. These are collectively known as aircraft attitude, often principally relative to the atmospheric frame in normal flight, but also relative to terrain during takeoff or landing, or when operating at low elevation. The concept of attitude is not specific to fixed-wing aircraft, but also extends to rotary aircraft such as helicopters, and dirigibles, where the flight dynamics involved in establishing and controlling attitude are entirely different. Control systems adjust the orientation of a vehicle about its cg. A control system includes control surfaces which, when deflected, generate a moment (or couple from ailerons) about the cg which rotates the aircraft in pitch, roll, and yaw. For example, a pitching moment comes from a force applied at a distance forward or aft of the cg, causing the aircraft to pitch up or down. A fixed-wing aircraft increases or decreases the lift generated by the wings when it pitches nose up or down by increasing or decreasing the angle of attack (AOA). The roll angle is also known as bank angle on a fixed-wing aircraft, which usually "banks" to change the horizontal direction of flight. An aircraft is streamlined from nose to tail to reduce drag making it advantageous to keep the sideslip angle near zero, though an aircraft may be deliberately "sideslipped" to increase drag and descent rate during landing, to keep aircraft heading same as runway heading during cross-wind landings and during flight with asymmetric power. == Background == Roll, pitch and yaw refer to rotations about the respective axes starting from a defined steady flight equilibrium state. The equilibrium roll angle is known as wings level or zero bank angle. The most common aeronautical convention defines roll as acting about the longitudinal axis, positive with the starboard (right) wing down. Yaw is about the vertical body axis, positive with the nose to starboard. Pitch is about an axis perpendicular to the longitudinal plane of symmetry, positive nose up. === Reference frames === Three right-handed, Cartesian coordinate systems see frequent use in flight dynamics. The first coordinate system has an origin fixed in the reference frame of the Earth: Earth frame Origin - arbitrary, fixed relative to the surface of the Earth xE axis - positive in the direction of north yE axis - positive in the direction of east zE axis - positive towards the center of the Earth In many flight dynamics applications, the Earth frame is assumed to be inertial with a flat xE,yE-plane, though the Earth frame can also be considered a spherical coordinate system with origin at the center of the Earth. The other two reference frames are body-fixed, with origins moving along with the aircraft, typically at the center of gravity. 
For an aircraft that is symmetric from right-to-left, the frames can be defined as: Body frame Origin - airplane center of gravity xb axis - positive out the nose of the aircraft in the plane of symmetry of the aircraft zb axis - perpendicular to the xb axis, in the plane of symmetry of the aircraft, positive below the aircraft yb axis - perpendicular to the xb,zb-plane, positive determined by the right-hand rule (generally, positive out the right wing) Wind frame Origin - airplane center of gravity xw axis - positive in the direction of the velocity vector of the aircraft relative to the air zw axis - perpendicular to the xw axis, in the plane of symmetry of the aircraft, positive below the aircraft yw axis - perpendicular to the xw,zw-plane, positive determined by the right hand rule (generally, positive to the right) Asymmetric aircraft have analogous body-fixed frames, but different conventions must be used to choose the precise directions of the x and z axes. The Earth frame is a convenient frame to express aircraft translational and rotational kinematics. The Earth frame is also useful in that, under certain assumptions, it can be approximated as inertial. Additionally, one force acting on the aircraft, weight, is fixed in the +zE direction. The body frame is often of interest because the origin and the axes remain fixed relative to the aircraft. This means that the relative orientation of the Earth and body frames describes the aircraft attitude. Also, the direction of the force of thrust is generally fixed in the body frame, though some aircraft can vary this direction, for example by thrust vectoring. The wind frame is a convenient frame to express the aerodynamic forces and moments acting on an aircraft. In particular, the net aerodynamic force can be divided into components along the wind frame axes, with the drag force in the −xw direction and the lift force in the −zw direction. In addition to defining the reference frames, the relative orientation of the reference frames can be determined. The relative orientation can be expressed in a variety of forms, including: Rotation matrices Direction cosines Euler angles Quaternions The various Euler angles relating the three reference frames are important to flight dynamics. Many Euler angle conventions exist, but all of the rotation sequences presented below use the z-y'-x" convention. This convention corresponds to a type of Tait-Bryan angles, which are commonly referred to as Euler angles. This convention is described in detail below for the roll, pitch, and yaw Euler angles that describe the body frame orientation relative to the Earth frame. The other sets of Euler angles are described below by analogy. === Transformations (Euler angles) === ==== From Earth frame to body frame ==== First, rotate the Earth frame axes xE and yE around the zE axis by the yaw angle ψ. This results in an intermediate reference frame with axes denoted x',y',z', where z'=zE. Second, rotate the x' and z' axes around the y' axis by the pitch angle θ. This results in another intermediate reference frame with axes denoted x",y",z", where y"=y'. Finally, rotate the y" and z" axes around the x" axis by the roll angle φ. The reference frame that results after the three rotations is the body frame. 
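The z-y'-x" sequence just described can be written out as a short sketch that composes the three elementary rotations into the matrix resolving Earth-frame components into body-frame components; the function name and the example values are illustrative, not taken from any source:

import numpy as np

# Sketch of the z-y'-x'' (yaw-pitch-roll) sequence described above: build the
# matrix that resolves a vector given in Earth-frame axes into body-frame axes.

def earth_to_body(psi, theta, phi):
    """Rotation matrix from Earth axes to body axes for yaw psi, pitch theta, roll phi (radians)."""
    c, s = np.cos, np.sin
    Rz = np.array([[ c(psi),  s(psi), 0],
                   [-s(psi),  c(psi), 0],
                   [      0,       0, 1]])      # yaw about zE
    Ry = np.array([[c(theta), 0, -s(theta)],
                   [       0, 1,        0],
                   [s(theta), 0,  c(theta)]])   # pitch about the intermediate y'
    Rx = np.array([[1,       0,      0],
                   [0,  c(phi), s(phi)],
                   [0, -s(phi), c(phi)]])       # roll about the intermediate x''
    return Rx @ Ry @ Rz                         # apply yaw, then pitch, then roll

# Example: resolve the weight direction (+zE) into body axes at 10 deg pitch, 20 deg roll
psi, theta, phi = 0.0, np.radians(10), np.radians(20)
print(earth_to_body(psi, theta, phi) @ np.array([0.0, 0.0, 1.0]))
# ~[-0.174, 0.337, 0.925]: nose-up pitch gives a negative xb component of weight.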
Based on the rotations and axes conventions above: Yaw angle ψ: angle between north and the projection of the aircraft longitudinal axis onto the horizontal plane; Pitch angle θ: angle between the aircraft longitudinal axis and horizontal; Roll angle φ: rotation around the aircraft longitudinal axis after rotating by yaw and pitch. ==== From Earth frame to wind frame ==== Heading angle σ: angle between north and the horizontal component of the velocity vector, which describes which direction the aircraft is moving relative to cardinal directions. Flight path angle γ: is the angle between horizontal and the velocity vector, which describes whether the aircraft is climbing or descending. Bank angle μ: represents a rotation of the lift force around the velocity vector, which may indicate whether the airplane is turning. When performing the rotations described above to obtain the body frame from the Earth frame, there is this analogy between angles: σ, ψ (heading vs yaw) γ, θ (Flight path vs pitch) μ, φ (Bank vs Roll) ==== From wind frame to body frame ==== sideslip angle β: angle between the velocity vector and the projection of the aircraft longitudinal axis onto the xw,yw-plane, which describes whether there is a lateral component to the aircraft velocity angle of attack α: angle between the xw,yw-plane and the aircraft longitudinal axis and, among other things, is an important variable in determining the magnitude of the force of lift When performing the rotations described earlier to obtain the body frame from the Earth frame, there is this analogy between angles: β, ψ (sideslip vs yaw) α, θ (attack vs pitch) (φ = 0) (nothing vs roll) === Analogies === Between the three reference frames there are hence these analogies: Yaw / Heading / Sideslip (Z axis, vertical) Pitch / Flight path / Attack angle (Y axis, wing) Roll / Bank / nothing (X axis, nose) == Design cases == In analyzing the stability of an aircraft, it is usual to consider perturbations about a nominal steady flight state. So the analysis would be applied, for example, assuming: Straight and level flight Turn at constant speed Approach and landing Takeoff The speed, height and trim angle of attack are different for each flight condition, in addition, the aircraft will be configured differently, e.g. at low speed flaps may be deployed and the undercarriage may be down. Except for asymmetric designs (or symmetric designs at significant sideslip), the longitudinal equations of motion (involving pitch and lift forces) may be treated independently of the lateral motion (involving roll and yaw). The following considers perturbations about a nominal straight and level flight path. To keep the analysis (relatively) simple, the control surfaces are assumed fixed throughout the motion, this is stick-fixed stability. Stick-free analysis requires the further complication of taking the motion of the control surfaces into account. Furthermore, the flight is assumed to take place in still air, and the aircraft is treated as a rigid body. == Forces of flight == Three forces act on an aircraft in flight: weight, thrust, and the aerodynamic force. 
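As a small numerical illustration of the wind-frame angles defined above (angle of attack, sideslip angle and flight-path angle), the following hedged sketch recovers them from assumed velocity components; all values and the helper name are illustrative:

import math

# Sketch (our own helper, using the angle definitions above): recover the
# angle of attack, sideslip angle and flight-path angle from velocity components.

def wind_angles(u, v, w, climb_rate):
    """u, v, w: body-axis velocity components (m/s); climb_rate: upward rate of climb (m/s)."""
    V = math.sqrt(u*u + v*v + w*w)
    alpha = math.atan2(w, u)            # angle of attack: xb versus the velocity in the symmetry plane
    beta = math.asin(v / V)             # sideslip angle: lateral component of the velocity
    gamma = math.asin(climb_rate / V)   # flight-path angle: climb relative to horizontal
    return alpha, beta, gamma

a, b, g = wind_angles(u=100.0, v=3.0, w=8.0, climb_rate=5.0)
print([round(math.degrees(x), 2) for x in (a, b, g)])   # ~[4.57, 1.71, 2.85] degrees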
=== Aerodynamic force === ==== Components of the aerodynamic force ==== The expression to calculate the aerodynamic force is: F A = ∫ Σ ( − Δ p n + f ) d σ {\displaystyle \mathbf {F} _{A}=\int _{\Sigma }(-\Delta p\mathbf {n} +\mathbf {f} )\,d\sigma } where: Δ p ≡ {\displaystyle \Delta p\equiv } Difference between static pressure and free current pressure n ≡ {\displaystyle \mathbf {n} \equiv } outer normal vector of the element of area f ≡ {\displaystyle \mathbf {f} \equiv } tangential stress vector practised by the air on the body Σ ≡ {\displaystyle \Sigma \equiv } adequate reference surface projected on wind axes we obtain: F A = − ( i w D + j w Q + k w L ) {\displaystyle \mathbf {F} _{A}=-(\mathbf {i} _{w}D+\mathbf {j} _{w}Q+\mathbf {k} _{w}L)} where: D ≡ {\displaystyle D\equiv } Drag Q ≡ {\displaystyle Q\equiv } Lateral force L ≡ {\displaystyle L\equiv } Lift ==== Aerodynamic coefficients ==== Dynamic pressure of the free current ≡ q = 1 2 ρ V 2 {\displaystyle \equiv q={\tfrac {1}{2}}\,\rho \,V^{2}} Proper reference surface (wing surface, in case of planes) ≡ S {\displaystyle \equiv S} Pressure coefficient ≡ C p = p − p ∞ q {\displaystyle \equiv C_{p}={\dfrac {p-p_{\infty }}{q}}} Friction coefficient ≡ C f = f q {\displaystyle \equiv C_{f}={\dfrac {f}{q}}} Drag coefficient ≡ C d = D q S = − 1 S ∫ Σ [ ( − C p ) n ⋅ i w + C f t ⋅ i w ] d σ {\displaystyle \equiv C_{d}={\dfrac {D}{qS}}=-{\dfrac {1}{S}}\int _{\Sigma }[(-C_{p})\mathbf {n} \cdot \mathbf {i_{w}} +C_{f}\mathbf {t} \cdot \mathbf {i_{w}} ]\,d\sigma } Lateral force coefficient ≡ C Q = Q q S = − 1 S ∫ Σ [ ( − C p ) n ⋅ j w + C f t ⋅ j w ] d σ {\displaystyle \equiv C_{Q}={\dfrac {Q}{qS}}=-{\dfrac {1}{S}}\int _{\Sigma }[(-C_{p})\mathbf {n} \cdot \mathbf {j_{w}} +C_{f}\mathbf {t} \cdot \mathbf {j_{w}} ]\,d\sigma } Lift coefficient ≡ C L = L q S = − 1 S ∫ Σ [ ( − C p ) n ⋅ k w + C f t ⋅ k w ] d σ {\displaystyle \equiv C_{L}={\dfrac {L}{qS}}=-{\dfrac {1}{S}}\int _{\Sigma }[(-C_{p})\mathbf {n} \cdot \mathbf {k_{w}} +C_{f}\mathbf {t} \cdot \mathbf {k_{w}} ]\,d\sigma } It is necessary to know Cp and Cf in every point on the considered surface. ==== Dimensionless parameters and aerodynamic regimes ==== In absence of thermal effects, there are three remarkable dimensionless numbers: Compressibility of the flow: Mach number ≡ M = V a {\displaystyle \equiv M={\dfrac {V}{a}}} Viscosity of the flow: Reynolds number ≡ R e = ρ V l μ {\displaystyle \equiv Re={\dfrac {\rho Vl}{\mu }}} Rarefaction of the flow: Knudsen number ≡ K n = λ l {\displaystyle \equiv Kn={\dfrac {\lambda }{l}}} where: a = k R θ ≡ {\displaystyle a={\sqrt {kR\theta }}\equiv } speed of sound k ≡ {\displaystyle k\equiv } specific heat ratio R ≡ {\displaystyle R\equiv } gas constant by mass unity θ ≡ {\displaystyle \theta \equiv } absolute temperature λ = μ ρ π 2 R θ = M R e k π 2 ≡ {\displaystyle \lambda ={\dfrac {\mu }{\rho }}{\sqrt {\dfrac {\pi }{2R\theta }}}={\dfrac {M}{Re}}{\sqrt {\dfrac {k\pi }{2}}}\equiv } mean free path According to λ there are three possible rarefaction grades and their corresponding motions are called: Continuum current (negligible rarefaction): M R e ≪ 1 {\displaystyle {\dfrac {M}{Re}}\ll 1} Transition current (moderate rarefaction): M R e ≈ 1 {\displaystyle {\dfrac {M}{Re}}\approx 1} Free molecular current (high rarefaction): M R e ≫ 1 {\displaystyle {\dfrac {M}{Re}}\gg 1} The motion of a body through a flow is considered, in flight dynamics, as continuum current. In the outer layer of the space that surrounds the body viscosity will be negligible. 
However viscosity effects will have to be considered when analysing the flow in the nearness of the boundary layer. Depending on the compressibility of the flow, different kinds of currents can be considered: Incompressible subsonic current: 0 < M < 0.3 {\displaystyle 0<M<0.3} Compressible subsonic current: 0.3 < M < 0.8 {\displaystyle 0.3<M<0.8} Transonic current: 0.8 < M < 1.2 {\displaystyle 0.8<M<1.2} Supersonic current: 1.2 < M < 5 {\displaystyle 1.2<M<5} Hypersonic current: 5 < M {\displaystyle 5<M} ==== Drag coefficient equation and aerodynamic efficiency ==== If the geometry of the body is fixed and in case of symmetric flight (β=0 and Q=0), pressure and friction coefficients are functions depending on: C p = C p ( α , M , R e , P ) {\displaystyle C_{p}=C_{p}(\alpha ,M,Re,P)} C f = C f ( α , M , R e , P ) {\displaystyle C_{f}=C_{f}(\alpha ,M,Re,P)} where: α ≡ {\displaystyle \alpha \equiv } angle of attack P ≡ {\displaystyle P\equiv } considered point of the surface Under these conditions, drag and lift coefficient are functions depending exclusively on the angle of attack of the body and Mach and Reynolds numbers. Aerodynamic efficiency, defined as the relation between lift and drag coefficients, will depend on those parameters as well. { C D = C D ( α , M , R e ) C L = C L ( α , M , R e ) E = E ( α , M , R e ) = C L C D {\displaystyle {\begin{cases}C_{D}=C_{D}(\alpha ,M,Re)\\C_{L}=C_{L}(\alpha ,M,Re)\\E=E(\alpha ,M,Re)={\dfrac {C_{L}}{C_{D}}}\\\end{cases}}} It is also possible to get the dependency of the drag coefficient respect to the lift coefficient. This relation is known as the drag coefficient equation: C D = C D ( C L , M , R e ) ≡ {\displaystyle C_{D}=C_{D}(C_{L},M,Re)\equiv } drag coefficient equation The aerodynamic efficiency has a maximum value, Emax, respect to CL where the tangent line from the coordinate origin touches the drag coefficient equation plot. The drag coefficient, CD, can be decomposed in two ways. First typical decomposition separates pressure and friction effects: C D = C D f + C D p { C D f = D q S = − 1 S ∫ Σ C f t ∙ i w d σ C D p = D q S = − 1 S ∫ Σ ( − C p ) n ∙ i w d σ {\displaystyle C_{D}=C_{Df}+C_{Dp}{\begin{cases}C_{Df}={\dfrac {D}{qS}}=-{\dfrac {1}{S}}\int _{\Sigma }C_{f}\mathbf {t} \bullet \mathbf {i_{w}} \,d\sigma \\C_{Dp}={\dfrac {D}{qS}}=-{\dfrac {1}{S}}\int _{\Sigma }(-C_{p})\mathbf {n} \bullet \mathbf {i_{w}} \,d\sigma \end{cases}}} There is a second typical decomposition taking into account the definition of the drag coefficient equation. This decomposition separates the effect of the lift coefficient in the equation, obtaining two terms CD0 and CDi. CD0 is known as the parasitic drag coefficient and it is the base drag coefficient at zero lift. CDi is known as the induced drag coefficient and it is produced by the body lift. 
C D = C D 0 + C D i { C D 0 = ( C D ) C L = 0 C D i {\displaystyle C_{D}=C_{D0}+C_{Di}{\begin{cases}C_{D0}=(C_{D})_{C_{L}=0}\\C_{Di}\end{cases}}} ==== Parabolic and generic drag coefficient ==== A good attempt for the induced drag coefficient is to assume a parabolic dependency of the lift C D i = k C L 2 ⇒ C D = C D 0 + k C L 2 {\displaystyle C_{Di}=kC_{L}^{2}\Rightarrow C_{D}=C_{D0}+kC_{L}^{2}} Aerodynamic efficiency is now calculated as: E = C L C D 0 + k C L 2 ⇒ { E m a x = 1 2 k C D 0 ( C L ) E m a x = C D 0 k ( C D i ) E m a x = C D 0 {\displaystyle E={\dfrac {C_{L}}{C_{D0}+kC_{L}^{2}}}\Rightarrow {\begin{cases}E_{max}={\dfrac {1}{2{\sqrt {kC_{D0}}}}}\\(C_{L})_{Emax}={\sqrt {\dfrac {C_{D0}}{k}}}\\(C_{Di})_{Emax}=C_{D0}\end{cases}}} If the configuration of the plane is symmetrical respect to the XY plane, minimum drag coefficient equals to the parasitic drag of the plane. C D m i n = ( C D ) C L = 0 = C D 0 {\displaystyle C_{Dmin}=(C_{D})_{CL=0}=C_{D0}} In case the configuration is asymmetrical respect to the XY plane, however, minimum drag differs from the parasitic drag. On these cases, a new approximate parabolic drag equation can be traced leaving the minimum drag value at zero lift value. C D m i n = C D M ≠ ( C D ) C L = 0 {\displaystyle C_{Dmin}=C_{DM}\neq (C_{D})_{CL=0}} C D = C D M + k ( C L − C L M ) 2 {\displaystyle C_{D}=C_{DM}+k(C_{L}-C_{LM})^{2}} ==== Variation of parameters with the Mach number ==== The Coefficient of pressure varies with Mach number by the relation given below: C p = C p 0 | 1 − M ∞ 2 | {\displaystyle C_{p}={\frac {C_{p0}}{\sqrt {|1-{M_{\infty }}^{2}|}}}} where Cp is the compressible pressure coefficient Cp0 is the incompressible pressure coefficient M∞ is the freestream Mach number. This relation is reasonably accurate for 0.3 < M < 0.7 and when M = 1 it becomes ∞ which is impossible physical situation and is called Prandtl–Glauert singularity. ==== Aerodynamic force in a specified atmosphere ==== see Aerodynamic force == Stability == Stability is the ability of the aircraft to counteract disturbances to its flight path. According to David P. Davies, there are six types of aircraft stability: speed stability, stick free static longitudinal stability, static lateral stability, directional stability, oscillatory stability, and spiral stability.: 164  === Speed stability === An aircraft in cruise flight is typically speed stable. If speed increases, drag increases, which will reduce the speed back to equilibrium for its configuration and thrust setting. If speed decreases, drag decreases, and the aircraft will accelerate back to its equilibrium speed where thrust equals drag. However, in slow flight, due to lift-induced drag, as speed decreases, drag increases (and vice versa). This is known as the "back of the drag curve". The aircraft will be speed unstable, because a decrease in speed will cause a further decrease in speed. === Static stability and control === ==== Longitudinal static stability ==== Longitudinal stability refers to the stability of an aircraft in pitch. For a stable aircraft, if the aircraft pitches up, the wings and tail create a pitch-down moment which tends to restore the aircraft to its original attitude. For an unstable aircraft, a disturbance in pitch will lead to an increasing pitching moment. Longitudinal static stability is the ability of an aircraft to recover from an initial disturbance. 
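Returning to longitudinal static stability, a minimal sketch of the usual static-margin check is given below; the numbers are illustrative assumptions and the notation is ours rather than the article's:

# Sketch (illustrative numbers): the static-margin check for longitudinal static
# stability. With the neutral point behind the centre of gravity, the
# pitching-moment slope with angle of attack is negative, i.e. stabilizing.

CL_alpha = 5.0          # lift-curve slope per radian (assumed)
x_cg = 0.25             # centre of gravity position, fraction of mean chord (assumed)
x_np = 0.35             # neutral point position, fraction of mean chord (assumed)

static_margin = x_np - x_cg            # positive for a stable aircraft
Cm_alpha = -CL_alpha * static_margin   # pitching-moment slope per radian

print(round(static_margin, 2), round(Cm_alpha, 2))
# 0.1 and -0.5: a nose-up disturbance produces a nose-down (restoring) moment.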
Longitudinal dynamic stability refers to the damping of these stabilizing moments, which prevents persistent or increasing oscillations in pitch. ==== Directional stability ==== Directional or weathercock stability is concerned with the static stability of the airplane about the z axis. Just as in the case of longitudinal stability it is desirable that the aircraft should tend to return to an equilibrium condition when subjected to some form of yawing disturbance. For this the slope of the yawing moment curve must be positive. An airplane possessing this mode of stability will always point towards the relative wind, hence the name weathercock stability. === Dynamic stability and control === ==== Longitudinal modes ==== It is common practice to derive a fourth order characteristic equation to describe the longitudinal motion, and then factorize it approximately into a high frequency mode and a low frequency mode. The approach adopted here is using qualitative knowledge of aircraft behavior to simplify the equations from the outset, reaching the result by a more accessible route. The two longitudinal motions (modes) are called the short period pitch oscillation (SPPO), and the phugoid. ===== Short-period pitch oscillation ===== A short input (in control systems terminology an impulse) in pitch (generally via the elevator in a standard configuration fixed-wing aircraft) will generally lead to overshoots about the trimmed condition. The transition is characterized by a damped simple harmonic motion about the new trim. There is very little change in the trajectory over the time it takes for the oscillation to damp out. Generally this oscillation is high frequency (hence short period) and is damped over a period of a few seconds. A real-world example would involve a pilot selecting a new climb attitude, for example 5° nose up from the original attitude. A short, sharp pull back on the control column may be used, and will generally lead to oscillations about the new trim condition. If the oscillations are poorly damped the aircraft will take a long period of time to settle at the new condition, potentially leading to Pilot-induced oscillation. If the short period mode is unstable it will generally be impossible for the pilot to safely control the aircraft for any period of time. This damped harmonic motion is called the short period pitch oscillation; it arises from the tendency of a stable aircraft to point in the general direction of flight. It is very similar in nature to the weathercock mode of missile or rocket configurations. The motion involves mainly the pitch attitude θ {\displaystyle \theta } (theta) and incidence α {\displaystyle \alpha } (alpha). The direction of the velocity vector, relative to inertial axes is θ − α {\displaystyle \theta -\alpha } . The velocity vector is: u f = U cos ⁡ ( θ − α ) {\displaystyle u_{f}=U\cos(\theta -\alpha )} w f = U sin ⁡ ( θ − α ) {\displaystyle w_{f}=U\sin(\theta -\alpha )} where u f {\displaystyle u_{f}} , w f {\displaystyle w_{f}} are the inertial axes components of velocity. 
According to Newton's second law, the accelerations are proportional to the forces, so the forces in inertial axes are: X f = m d u f d t = m d U d t cos ⁡ ( θ − α ) − m U d ( θ − α ) d t sin ⁡ ( θ − α ) {\displaystyle X_{f}=m{\frac {du_{f}}{dt}}=m{\frac {dU}{dt}}\cos(\theta -\alpha )-mU{\frac {d(\theta -\alpha )}{dt}}\sin(\theta -\alpha )} Z f = m d w f d t = m d U d t sin ⁡ ( θ − α ) + m U d ( θ − α ) d t cos ⁡ ( θ − α ) {\displaystyle Z_{f}=m{\frac {dw_{f}}{dt}}=m{\frac {dU}{dt}}\sin(\theta -\alpha )+mU{\frac {d(\theta -\alpha )}{dt}}\cos(\theta -\alpha )} where m is the mass. By the nature of the motion, the speed variation m d U d t {\displaystyle m{\frac {dU}{dt}}} is negligible over the period of the oscillation, so: X f = − m U d ( θ − α ) d t sin ⁡ ( θ − α ) {\displaystyle X_{f}=-mU{\frac {d(\theta -\alpha )}{dt}}\sin(\theta -\alpha )} Z f = m U d ( θ − α ) d t cos ⁡ ( θ − α ) {\displaystyle Z_{f}=mU{\frac {d(\theta -\alpha )}{dt}}\cos(\theta -\alpha )} But the forces are generated by the pressure distribution on the body, and are referred to the velocity vector. But the velocity (wind) axes set is not an inertial frame so we must resolve the fixed axes forces into wind axes. Also, we are only concerned with the force along the z-axis: Z = − Z f cos ⁡ ( θ − α ) + X f sin ⁡ ( θ − α ) {\displaystyle Z=-Z_{f}\cos(\theta -\alpha )+X_{f}\sin(\theta -\alpha )} Or: Z = − m U d ( θ − α ) d t {\displaystyle Z=-mU{\frac {d(\theta -\alpha )}{dt}}} In words, the wind axes force is equal to the centripetal acceleration. The moment equation is the time derivative of the angular momentum: M = B d 2 θ d t 2 {\displaystyle M=B{\frac {d^{2}\theta }{dt^{2}}}} where M is the pitching moment, and B is the moment of inertia about the pitch axis. Let: d θ d t = q {\displaystyle {\frac {d\theta }{dt}}=q} , the pitch rate. The equations of motion, with all forces and moments referred to wind axes are, therefore: d α d t = q + Z m U {\displaystyle {\frac {d\alpha }{dt}}=q+{\frac {Z}{mU}}} d q d t = M B {\displaystyle {\frac {dq}{dt}}={\frac {M}{B}}} We are only concerned with perturbations in forces and moments, due to perturbations in the states α {\displaystyle \alpha } and q, and their time derivatives. These are characterized by stability derivatives determined from the flight condition. The possible stability derivatives are: Z α {\displaystyle Z_{\alpha }} Lift due to incidence, this is negative because the z-axis is downwards whilst positive incidence causes an upwards force. Z q {\displaystyle Z_{q}} Lift due to pitch rate, arises from the increase in tail incidence, hence is also negative, but small compared with Z α {\displaystyle Z_{\alpha }} . M α {\displaystyle M_{\alpha }} Pitching moment due to incidence - the static stability term. Static stability requires this to be negative. M q {\displaystyle M_{q}} Pitching moment due to pitch rate - the pitch damping term, this is always negative. Since the tail is operating in the flowfield of the wing, changes in the wing incidence cause changes in the downwash, but there is a delay for the change in wing flowfield to affect the tail lift, this is represented as a moment proportional to the rate of change of incidence: M α ˙ {\displaystyle M_{\dot {\alpha }}} The delayed downwash effect gives the tail more lift and produces a nose down moment, so M α ˙ {\displaystyle M_{\dot {\alpha }}} is expected to be negative. 
The equations of motion, with small perturbation forces and moments become: d α d t = ( 1 + Z q m U ) q + Z α m U α {\displaystyle {\frac {d\alpha }{dt}}=\left(1+{\frac {Z_{q}}{mU}}\right)q+{\frac {Z_{\alpha }}{mU}}\alpha } d q d t = M q B q + M α B α + M α ˙ B α ˙ {\displaystyle {\frac {dq}{dt}}={\frac {M_{q}}{B}}q+{\frac {M_{\alpha }}{B}}\alpha +{\frac {M_{\dot {\alpha }}}{B}}{\dot {\alpha }}} These may be manipulated to yield a second-order linear differential equation in α {\displaystyle \alpha } : d 2 α d t 2 − ( Z α m U + M q B + ( 1 + Z q m U ) M α ˙ B ) d α d t + ( Z α m U M q B − M α B ( 1 + Z q m U ) ) α = 0 {\displaystyle {\frac {d^{2}\alpha }{dt^{2}}}-\left({\frac {Z_{\alpha }}{mU}}+{\frac {M_{q}}{B}}+(1+{\frac {Z_{q}}{mU}}){\frac {M_{\dot {\alpha }}}{B}}\right){\frac {d\alpha }{dt}}+\left({\frac {Z_{\alpha }}{mU}}{\frac {M_{q}}{B}}-{\frac {M_{\alpha }}{B}}(1+{\frac {Z_{q}}{mU}})\right)\alpha =0} This represents a damped simple harmonic motion. We should expect Z q m U {\displaystyle {\frac {Z_{q}}{mU}}} to be small compared with unity, so the coefficient of α {\displaystyle \alpha } (the 'stiffness' term) will be positive, provided M α < Z α m U M q {\displaystyle M_{\alpha }<{\frac {Z_{\alpha }}{mU}}M_{q}} . This expression is dominated by M α {\displaystyle M_{\alpha }} , which defines the longitudinal static stability of the aircraft; it must be negative for stability. The damping term is reduced by the downwash effect, and it is difficult to design an aircraft with both rapid natural response and heavy damping. Usually, the response is underdamped but stable. ===== Phugoid ===== If the stick is held fixed, the aircraft will not maintain straight and level flight (except in the unlikely case that it happens to be perfectly trimmed for level flight at its current altitude and thrust setting), but will start to dive, level out and climb again. It will repeat this cycle until the pilot intervenes. This long period oscillation in speed and height is called the phugoid mode. This is analyzed by assuming that the SPPO performs its proper function and maintains the angle of attack near its nominal value. The two states which are mainly affected are the flight path angle γ {\displaystyle \gamma } (gamma) and speed. The small perturbation equations of motion are: m U d γ d t = − Z {\displaystyle mU{\frac {d\gamma }{dt}}=-Z} which means the centripetal force is equal to the perturbation in lift force. For the speed, resolving along the trajectory: m d u d t = X − m g γ {\displaystyle m{\frac {du}{dt}}=X-mg\gamma } where g is the acceleration due to gravity at the Earth's surface. The acceleration along the trajectory is equal to the net x-wise force minus the component of weight. We should not expect significant aerodynamic derivatives to depend on the flight path angle, so only X u {\displaystyle X_{u}} and Z u {\displaystyle Z_{u}} need be considered. X u {\displaystyle X_{u}} is the drag increment with increased speed; it is negative. Likewise, Z u {\displaystyle Z_{u}} is the lift increment due to the speed increment; it is also negative because lift acts in the opposite sense to the z-axis. 
The equations of motion become: m U d γ d t = − Z u u {\displaystyle mU{\frac {d\gamma }{dt}}=-Z_{u}u} m d u d t = X u u − m g γ {\displaystyle m{\frac {du}{dt}}=X_{u}u-mg\gamma } These may be expressed as a second order equation in flight path angle or speed perturbation: d 2 u d t 2 − X u m d u d t − Z u g m U u = 0 {\displaystyle {\frac {d^{2}u}{dt^{2}}}-{\frac {X_{u}}{m}}{\frac {du}{dt}}-{\frac {Z_{u}g}{mU}}u=0} Now lift is very nearly equal to weight: Z = 1 2 ρ U 2 c L S w = W {\displaystyle Z={\frac {1}{2}}\rho U^{2}c_{L}S_{w}=W} where ρ {\displaystyle \rho } is the air density, S w {\displaystyle S_{w}} is the wing area, W the weight and c L {\displaystyle c_{L}} is the lift coefficient (assumed constant because the incidence is constant), we have, approximately: Z u = 2 W U = 2 m g U {\displaystyle Z_{u}={\frac {2W}{U}}={\frac {2mg}{U}}} The period of the phugoid, T, is obtained from the coefficient of u: 2 π T = 2 g 2 U 2 {\displaystyle {\frac {2\pi }{T}}={\sqrt {\frac {2g^{2}}{U^{2}}}}} Or: T = 2 π U 2 g {\displaystyle T={\frac {2\pi U}{{\sqrt {2}}g}}} Since the lift is very much greater than the drag, the phugoid is at best lightly damped. A propeller with fixed speed would help. Heavy damping of the pitch rotation or a large rotational inertia increase the coupling between short period and phugoid modes, so that these will modify the phugoid. ==== Lateral modes ==== With a symmetrical rocket or missile, the directional stability in yaw is the same as the pitch stability; it resembles the short period pitch oscillation, with yaw plane equivalents to the pitch plane stability derivatives. For this reason, pitch and yaw directional stability are collectively known as the "weathercock" stability of the missile. Aircraft lack the symmetry between pitch and yaw, so that directional stability in yaw is derived from a different set of stability derivatives. The yaw plane equivalent to the short period pitch oscillation, which describes yaw plane directional stability is called Dutch roll. Unlike pitch plane motions, the lateral modes involve both roll and yaw motion. ===== Dutch roll ===== It is customary to derive the equations of motion by formal manipulation in what, to the engineer, amounts to a piece of mathematical sleight of hand. The current approach follows the pitch plane analysis in formulating the equations in terms of concepts which are reasonably familiar. Applying an impulse via the rudder pedals should induce Dutch roll, which is the oscillation in roll and yaw, with the roll motion lagging yaw by a quarter cycle, so that the wing tips follow elliptical paths with respect to the aircraft. The yaw plane translational equation, as in the pitch plane, equates the centripetal acceleration to the side force. d β d t = Y m U − r {\displaystyle {\frac {d\beta }{dt}}={\frac {Y}{mU}}-r} where β {\displaystyle \beta } (beta) is the sideslip angle, Y the side force and r the yaw rate. The moment equations are a bit trickier. The trim condition is with the aircraft at an angle of attack with respect to the airflow. The body x-axis does not align with the velocity vector, which is the reference direction for wind axes. In other words, wind axes are not principal axes (the mass is not distributed symmetrically about the yaw and roll axes). Consider the motion of an element of mass in position -z, x in the direction of the y-axis, i.e. into the plane of the paper. 
If the roll rate is p, the velocity of the particle is: v = − p z + x r {\displaystyle v=-pz+xr} The force on this particle is made up of two terms: the first is proportional to the rate of change of v, and the second is due to the change in direction of this component of velocity as the body moves. The latter term gives rise to cross products of small quantities (pq, pr, qr), which are later discarded. In this analysis, they are discarded from the outset for the sake of clarity. In effect, we assume that the direction of the velocity of the particle due to the simultaneous roll and yaw rates does not change significantly throughout the motion. With this simplifying assumption, the acceleration of the particle becomes: d v d t = − d p d t z + d r d t x {\displaystyle {\frac {dv}{dt}}=-{\frac {dp}{dt}}z+{\frac {dr}{dt}}x} The yawing moment is given by: δ m x d v d t = − d p d t x z δ m + d r d t x 2 δ m {\displaystyle \delta mx{\frac {dv}{dt}}=-{\frac {dp}{dt}}xz\delta m+{\frac {dr}{dt}}x^{2}\delta m} There is an additional yawing moment due to the offset of the particle in the y direction: d r d t y 2 δ m {\displaystyle {\frac {dr}{dt}}y^{2}\delta m} The yawing moment is found by summing over all particles of the body: N = − d p d t ∫ x z d m + d r d t ∫ x 2 + y 2 d m = − E d p d t + C d r d t {\displaystyle N=-{\frac {dp}{dt}}\int xzdm+{\frac {dr}{dt}}\int x^{2}+y^{2}dm=-E{\frac {dp}{dt}}+C{\frac {dr}{dt}}} where N is the yawing moment, E is a product of inertia, and C is the moment of inertia about the yaw axis. A similar reasoning yields the roll equation: L = A d p d t − E d r d t {\displaystyle L=A{\frac {dp}{dt}}-E{\frac {dr}{dt}}} where L is the rolling moment and A the roll moment of inertia. ===== Lateral and longitudinal stability derivatives ===== The states are β {\displaystyle \beta } (sideslip), r (yaw rate) and p (roll rate), with moments N (yaw) and L (roll), and force Y (sideways). There are nine stability derivatives relevant to this motion; the following explains how they originate. However, a better intuitive understanding is gained by simply playing with a model airplane and considering how the forces on each component are affected by changes in sideslip and angular velocity: Y β {\displaystyle Y_{\beta }} Side force due to side slip (in the absence of yaw). Sideslip generates a sideforce from the fin and the fuselage. In addition, if the wing has dihedral, side slip at a positive roll angle increases incidence on the starboard wing and reduces it on the port side, resulting in a net force component directly opposite to the sideslip direction. Sweep back of the wings has the same effect on incidence, but since the wings are not inclined in the vertical plane, backsweep alone does not affect Y β {\displaystyle Y_{\beta }} . However, anhedral may be used with high backsweep angles in high performance aircraft to offset the wing incidence effects of sideslip. Oddly enough, this does not reverse the sign of the wing configuration's contribution to Y β {\displaystyle Y_{\beta }} (compared to the dihedral case). Y p {\displaystyle Y_{p}} Side force due to roll rate. Roll rate causes incidence at the fin, which generates a corresponding side force. Also, positive roll (starboard wing down) increases the lift on the starboard wing and reduces it on the port. If the wing has dihedral, this will result in a side force momentarily opposing the resultant sideslip tendency. 
Anhedral wing and/or stabilizer configurations can cause the sign of the side force to invert if the fin effect is swamped. Y r {\displaystyle Y_{r}} Side force due to yaw rate. Yawing generates side forces due to incidence at the rudder, fin and fuselage. N β {\displaystyle N_{\beta }} Yawing moment due to sideslip forces. Sideslip in the absence of rudder input causes incidence on the fuselage and empennage, thus creating a yawing moment counteracted only by the directional stiffness which would tend to point the aircraft's nose back into the wind in horizontal flight conditions. Under sideslip conditions at a given roll angle, N β {\displaystyle N_{\beta }} will tend to point the nose into the sideslip direction even without rudder input, causing a downward spiraling flight. N p {\displaystyle N_{p}} Yawing moment due to roll rate. Roll rate generates fin lift causing a yawing moment and also differentially alters the lift on the wings, thus affecting the induced drag contribution of each wing, causing a (small) yawing moment contribution. Positive roll generally causes positive N p {\displaystyle N_{p}} values unless the empennage is anhedral or the fin is below the roll axis. Lateral force components resulting from dihedral or anhedral wing lift differences have little effect on N p {\displaystyle N_{p}} because the wing axis is normally closely aligned with the center of gravity. N r {\displaystyle N_{r}} Yawing moment due to yaw rate. Yaw rate input at any roll angle generates rudder, fin and fuselage force vectors which dominate the resultant yawing moment. Yawing also increases the speed of the outboard wing whilst slowing down the inboard wing, with corresponding changes in drag causing a (small) opposing yaw moment. N r {\displaystyle N_{r}} opposes the inherent directional stiffness which tends to point the aircraft's nose back into the wind and always matches the sign of the yaw rate input. L β {\displaystyle L_{\beta }} Rolling moment due to sideslip. A positive sideslip angle generates empennage incidence which can cause positive or negative roll moment depending on its configuration. For any non-zero sideslip angle, dihedral wings cause a rolling moment which tends to return the aircraft to the horizontal, as do back-swept wings. With highly swept wings, the resultant rolling moment may be excessive for all stability requirements, and anhedral could be used to offset the effect of the wing-sweep-induced rolling moment. L r {\displaystyle L_{r}} Rolling moment due to yaw rate. Yaw increases the speed of the outboard wing whilst reducing the speed of the inboard one, causing a rolling moment to the inboard side. The contribution of the fin normally supports this inward rolling effect unless offset by an anhedral stabilizer above the roll axis (or dihedral below the roll axis). L p {\displaystyle L_{p}} Rolling moment due to roll rate. Roll creates counter rotational forces on both starboard and port wings whilst also generating such forces at the empennage. These opposing rolling moment effects have to be overcome by the aileron input in order to sustain the roll rate. If the roll is stopped at a non-zero roll angle, the L β {\displaystyle L_{\beta }} upward rolling moment induced by the ensuing sideslip should return the aircraft to the horizontal, unless exceeded in turn by the downward L r {\displaystyle L_{r}} rolling moment resulting from the sideslip-induced yaw rate. Lateral stability could be ensured or improved by minimizing the latter effect. 
===== Equations of motion ===== Since Dutch roll is a handling mode, analogous to the short period pitch oscillation, any effect it might have on the trajectory may be ignored. The body rate r is made up of the rate of change of sideslip angle and the rate of turn. Taking the latter as zero, assuming no effect on the trajectory, for the limited purpose of studying the Dutch roll: d β d t = − r {\displaystyle {\frac {d\beta }{dt}}=-r} The yaw and roll equations, with the stability derivatives become: C d r d t − E d p d t = N β β − N r d β d t + N p p {\displaystyle C{\frac {dr}{dt}}-E{\frac {dp}{dt}}=N_{\beta }\beta -N_{r}{\frac {d\beta }{dt}}+N_{p}p} (yaw) A d p d t − E d r d t = L β β − L r d β d t + L p p {\displaystyle A{\frac {dp}{dt}}-E{\frac {dr}{dt}}=L_{\beta }\beta -L_{r}{\frac {d\beta }{dt}}+L_{p}p} (roll) The inertial moment due to the roll acceleration is considered small compared with the aerodynamic terms, so the equations become: − C d 2 β d t 2 = N β β − N r d β d t + N p p {\displaystyle -C{\frac {d^{2}\beta }{dt^{2}}}=N_{\beta }\beta -N_{r}{\frac {d\beta }{dt}}+N_{p}p} E d 2 β d t 2 = L β β − L r d β d t + L p p {\displaystyle E{\frac {d^{2}\beta }{dt^{2}}}=L_{\beta }\beta -L_{r}{\frac {d\beta }{dt}}+L_{p}p} This becomes a second order equation governing either roll rate or sideslip: ( N p C E A − L p A ) d 2 β d t 2 + ( L p A N r C − N p C L r A ) d β d t − ( L p A N β C − L β A N p C ) β = 0 {\displaystyle \left({\frac {N_{p}}{C}}{\frac {E}{A}}-{\frac {L_{p}}{A}}\right){\frac {d^{2}\beta }{dt^{2}}}+\left({\frac {L_{p}}{A}}{\frac {N_{r}}{C}}-{\frac {N_{p}}{C}}{\frac {L_{r}}{A}}\right){\frac {d\beta }{dt}}-\left({\frac {L_{p}}{A}}{\frac {N_{\beta }}{C}}-{\frac {L_{\beta }}{A}}{\frac {N_{p}}{C}}\right)\beta =0} The equation for roll rate is identical. But the roll angle, ϕ {\displaystyle \phi } (phi) is given by: d ϕ d t = p {\displaystyle {\frac {d\phi }{dt}}=p} If p is a damped simple harmonic motion, so is ϕ {\displaystyle \phi } , but the roll must be in quadrature with the roll rate, and hence also with the sideslip. The motion consists of oscillations in roll and yaw, with the roll motion lagging 90 degrees behind the yaw. The wing tips trace out elliptical paths. Stability requires the "stiffness" and "damping" terms to be positive. These are: L p A N r C − N p C L r A N p C E A − L p A {\displaystyle {\frac {{\frac {L_{p}}{A}}{\frac {N_{r}}{C}}-{\frac {N_{p}}{C}}{\frac {L_{r}}{A}}}{{\frac {N_{p}}{C}}{\frac {E}{A}}-{\frac {L_{p}}{A}}}}} (damping) L β A N p C − L p A N β C N p C E A − L p A {\displaystyle {\frac {{\frac {L_{\beta }}{A}}{\frac {N_{p}}{C}}-{\frac {L_{p}}{A}}{\frac {N_{\beta }}{C}}}{{\frac {N_{p}}{C}}{\frac {E}{A}}-{\frac {L_{p}}{A}}}}} (stiffness) The denominator is dominated by L p {\displaystyle L_{p}} , the roll damping derivative, which is always negative, so the denominators of these two expressions will be positive. Considering the "stiffness" term: − L p N β {\displaystyle -L_{p}N_{\beta }} will be positive because L p {\displaystyle L_{p}} is always negative and N β {\displaystyle N_{\beta }} is positive by design. L β {\displaystyle L_{\beta }} is usually negative, whilst N p {\displaystyle N_{p}} is positive. Excessive dihedral can destabilize the Dutch roll, so configurations with highly swept wings require anhedral to offset the wing sweep contribution to L β {\displaystyle L_{\beta }} . The damping term is dominated by the product of the roll damping and the yaw damping derivatives, these are both negative, so their product is positive. 
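These sign requirements can be checked numerically. The sketch below (Python) uses purely illustrative, per-unit derivative and inertia values chosen only to respect the signs discussed in this section, not data for any particular aircraft; with L_p and N_r negative and N_β positive, both the damping and stiffness terms evaluate as positive.

```python
# Numerical check of the Dutch-roll "stiffness" and "damping" expressions above.
# All values are illustrative and per-unit; only their signs follow the text
# (L_p < 0 always, N_r < 0, N_beta > 0 by design, L_beta usually < 0, N_p usually > 0).

L_beta, L_r, L_p = -2.0, 2.0, -10.0   # rolling-moment derivatives (assumed)
N_beta, N_r, N_p = 4.0, -3.0, 1.0     # yawing-moment derivatives (assumed)
A, C, E = 1.0, 2.0, 0.1               # roll inertia, yaw inertia, product of inertia (assumed)

denominator = (N_p / C) * (E / A) - (L_p / A)             # dominated by -L_p/A > 0
damping   = ((L_p / A) * (N_r / C) - (N_p / C) * (L_r / A)) / denominator
stiffness = ((L_beta / A) * (N_p / C) - (L_p / A) * (N_beta / C)) / denominator

print(f"damping term:   {damping:+.2f}")     # positive -> the oscillation decays
print(f"stiffness term: {stiffness:+.2f}")   # positive -> restoring, oscillatory mode
```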
The Dutch roll should therefore be damped. The motion is accompanied by slight lateral motion of the center of gravity and a more "exact" analysis will introduce terms in Y β {\displaystyle Y_{\beta }} etc. In view of the accuracy with which stability derivatives can be calculated, this is an unnecessary pedantry, which serves to obscure the relationship between aircraft geometry and handling, which is the fundamental objective of this article. ===== Roll subsidence ===== Jerking the stick sideways and returning it to center causes a net change in roll orientation. The roll motion is characterized by an absence of natural stability, there are no stability derivatives which generate moments in response to the inertial roll angle. A roll disturbance induces a roll rate which is only canceled by pilot or autopilot intervention. This takes place with insignificant changes in sideslip or yaw rate, so the equation of motion reduces to: A d p d t = L p p . {\displaystyle A{\frac {dp}{dt}}=L_{p}p.} L p {\displaystyle L_{p}} is negative, so the roll rate will decay with time. The roll rate reduces to zero, but there is no direct control over the roll angle. ===== Spiral mode ===== Simply holding the stick still, when starting with the wings near level, an aircraft will usually have a tendency to gradually veer off to one side of the straight flightpath. This is the (slightly unstable) spiral mode. ====== Spiral mode trajectory ====== In studying the trajectory, it is the direction of the velocity vector, rather than that of the body, which is of interest. The direction of the velocity vector when projected on to the horizontal will be called the track, denoted μ {\displaystyle \mu } (mu). The body orientation is called the heading, denoted ψ {\displaystyle \psi } (psi). The force equation of motion includes a component of weight: d μ d t = Y m U + g U ϕ {\displaystyle {\frac {d\mu }{dt}}={\frac {Y}{mU}}+{\frac {g}{U}}\phi } where g is the gravitational acceleration, and U is the speed. Including the stability derivatives: d μ d t = Y β m U β + Y r m U r + Y p m U p + g U ϕ {\displaystyle {\frac {d\mu }{dt}}={\frac {Y_{\beta }}{mU}}\beta +{\frac {Y_{r}}{mU}}r+{\frac {Y_{p}}{mU}}p+{\frac {g}{U}}\phi } Roll rates and yaw rates are expected to be small, so the contributions of Y r {\displaystyle Y_{r}} and Y p {\displaystyle Y_{p}} will be ignored. The sideslip and roll rate vary gradually, so their time derivatives are ignored. 
The yaw and roll equations reduce to: N β β + N r d μ d t + N p p = 0 {\displaystyle N_{\beta }\beta +N_{r}{\frac {d\mu }{dt}}+N_{p}p=0} (yaw) L β β + L r d μ d t + L p p = 0 {\displaystyle L_{\beta }\beta +L_{r}{\frac {d\mu }{dt}}+L_{p}p=0} (roll) Solving for β {\displaystyle \beta } and p: β = ( L r N p − L p N r ) ( L p N β − N p L β ) d μ d t {\displaystyle \beta ={\frac {(L_{r}N_{p}-L_{p}N_{r})}{(L_{p}N_{\beta }-N_{p}L_{\beta })}}{\frac {d\mu }{dt}}} p = ( L β N r − L r N β ) ( L p N β − N p L β ) d μ d t {\displaystyle p={\frac {(L_{\beta }N_{r}-L_{r}N_{\beta })}{(L_{p}N_{\beta }-N_{p}L_{\beta })}}{\frac {d\mu }{dt}}} Substituting for sideslip and roll rate in the force equation results in a first order equation in roll angle: d ϕ d t = m g ( L β N r − N β L r ) m U ( L p N β − N p L β ) − Y β ( L r N p − L p N r ) ϕ {\displaystyle {\frac {d\phi }{dt}}=mg{\frac {(L_{\beta }N_{r}-N_{\beta }L_{r})}{mU(L_{p}N_{\beta }-N_{p}L_{\beta })-Y_{\beta }(L_{r}N_{p}-L_{p}N_{r})}}\phi } This is an exponential growth or decay, depending on whether the coefficient of ϕ {\displaystyle \phi } is positive or negative. The denominator is usually negative, which requires L β N r > N β L r {\displaystyle L_{\beta }N_{r}>N_{\beta }L_{r}} (both products are positive). This is in direct conflict with the Dutch roll stability requirement, and it is difficult to design an aircraft for which both the Dutch roll and spiral mode are inherently stable. Since the spiral mode has a long time constant, the pilot can intervene to effectively stabilize it, but an aircraft with an unstable Dutch roll would be difficult to fly. It is usual to design the aircraft with a stable Dutch roll mode, but slightly unstable spiral mode. == See also == == References == === Notes === === Bibliography === NK Sinha and N Ananthkrishnan (2013), Elementary Flight Dynamics with an Introduction to Bifurcation and Continuation Methods, CRC Press, Taylor & Francis. Babister, A. W. (1980). Aircraft dynamic stability and response (1st ed.). Oxford: Pergamon Press. ISBN 978-0080247687. == External links == MIXR - mixed reality simulation platform JSBsim, An open source, platform-independent, flight dynamics & control software library in C++
Wikipedia/Flight_dynamics_(aircraft)
In (automotive) vehicle dynamics, slip describes the relative motion between a tire and the road surface it is moving on. This slip can be generated either by the tire's angular velocity being greater or less than the free-rolling speed (referred to as slip ratio), or by the tire's front facing direction being at an angle to its direction of motion (referred to as slip angle). When both of these measurements do not equal zero, the tire enters a state called combined slip. == Longitudinal slip ratio == The longitudinal slip (commonly referred to as longitudinal slip ratio) is used to describe the rotational state of a tire at any given speed. The most common definition is given as the ratio between the tire's slip velocity and the forward facing component of the tire's linear velocity. Mathematically, this definition can be expressed as: σ = r e Ω − v x v x {\displaystyle \sigma ={\frac {r_{e}\Omega -v_{x}}{v_{x}}}} where Ω {\displaystyle \Omega } is the tire's angular velocity along the axle, r e {\displaystyle r_{e}} is the effective radius from the hub to the center of the contact patch, and v x {\displaystyle v_{x}} is the forward facing component of the tire's linear velocity. A slip ratio of zero indicates that the tire is free-rolling at its zero-slip velocity ( r e Ω = v x {\displaystyle r_{e}\Omega =v_{x}} ). A positive slip ratio indicates that the tire is rolling at an angular velocity greater than its ideal zero-slip velocity, which would ideally generate a force that speeds up the assembly. Accordingly, a negative slip ratio means that the tire is spinning at an angular velocity less than its ideal zero-slip velocity, which would ideally generate a force that slows down the assembly to resolve the slip ratio. == Lateral slip angle == The lateral slip of a tire is the angle between the direction it is moving and the direction it is pointing. This can occur, for instance, in cornering, and is enabled by deformation in the tire carcass and tread. Despite the name, no actual sliding is necessary for small slip angles. Sliding may occur, starting at the rear of the contact patch, as slip angle increases. The slip angle can be defined as: α = arctan ⁡ ( v y | v x | ) {\displaystyle \alpha =\arctan \left({\frac {v_{y}}{|v_{x}|}}\right)} == References == == See also == Contact patch Frictional contact mechanics Aristotle's wheel paradox Explanation with animation of the elastic slip website tec-science.com
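A short numerical sketch of the two definitions above (Python; the wheel radius, angular velocity, and velocity components are assumed example values, not taken from any reference):

```python
import math

def slip_ratio(omega, r_e, v_x):
    """Longitudinal slip ratio: (r_e * Omega - v_x) / v_x."""
    return (r_e * omega - v_x) / v_x

def slip_angle(v_y, v_x):
    """Lateral slip angle in radians: arctan(v_y / |v_x|)."""
    return math.atan(v_y / abs(v_x))

# Assumed example values: 0.3 m effective radius, wheel driven slightly faster than free rolling.
r_e = 0.3              # effective rolling radius [m]
v_x, v_y = 20.0, 1.0   # components of the wheel's linear velocity [m/s]
omega = 70.0           # wheel angular velocity [rad/s]; free rolling would be v_x / r_e, about 66.7

print(f"slip ratio: {slip_ratio(omega, r_e, v_x):+.3f}")            # +0.050, a driving (traction) state
print(f"slip angle: {math.degrees(slip_angle(v_y, v_x)):.2f} deg")  # about 2.86 deg
```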
Wikipedia/Slip_(vehicle_dynamics)
MTS Systems Corporation (MTS) is a supplier of test systems and industrial position sensors. The company provides test and measurement products to determine the performance and reliability of vehicles, aircraft, civil structures, biomedical materials and devices and raw materials. Examples of MTS products include: aerodynamics simulators, seismic simulators, load frames, hydraulic actuators and sensors. The company operates in two divisions: Test and Sensors. In December 2020, Amphenol Corporation announced it had reached an agreement to acquire MTS in an acquisition completed on April 7, 2021. In January 2021, ITW announced it had in turn reached an agreement to acquire the test and simulation business of MTS from Amphenol in the future. == Notable Projects == Custom built Flat Trac LTRe for the National Tire Research Center in Alton, VA, which displays "...some of the most advanced technology in tire testing and research." State-of-the-art 360-degree driving simulator. Commonly referred to as "The World's Most Advanced Driving Simulator" First of its kind tuned mass damper for Citigroup Center in New York City and John Hancock Tower in Boston. Designed the AWD Slot-Car ride system for Walt Disney Imagineering’s Enhanced motion vehicle ride system. Manufacturing and design for Universal Parks & Resorts, including the launch system for the Incredible Hulk Coaster; the ride systems for The Cat in the Hat and Men in Black: Alien Attack at Universal Orlando, Jaws at Universal Studios Japan; as well as the dinosaurs at Islands of Adventure's Jurassic Park River Adventure. == References == == External links == Media related to MTS Systems Corporation at Wikimedia Commons
Wikipedia/MTS_Systems_Corporation
Radial force variation or road force variation (RFV) is a property of a tire that affects steering, traction, braking and load support. High values of RFV for a given tire reflect a high level of manufacturing variations in the tire structure that will impart ride disturbances into the vehicle in the vertical direction. RFV is measured according to processes specified by the ASTM International in ASTM F1806 – Standard Practice for Tire Testing. == Explanation == RFV can best be explained by example. Assume a perfectly uniform tire mounted on a perfectly round wheel loaded with a constant force against a perfectly round test wheel. As the wheel turns, it turns the tire, and the tire carcass undergoes repeated deformation and recovery as it enters and exits the contact area. If we measure the radial force between the tire and the wheel we will see zero change as the tire turns. If we now test a typical production tire we will see the radial force vary as the tire turns. This variation will be induced by two primary mechanisms, variation in the thickness of the tire, and variation in the elastomeric properties of the tire. Consider a good tire with RFV of 6 pounds (27 N). This tire will induce a 6-pound force upward into the vehicle every rotation. The frequency of the force will increase in direct proportion to rotating speed. Tire makers test tires at the point of manufacture to verify that the RFV is within allowable quality limits. Tires that exceed these limits may be scrapped or sold to markets that do not require stringent quality. == Spring model == RFV is often explained by modelling the tire as being a ring composed of short compression springs. As the tire turns a spring element makes contact with the road and is compressed. As the spring rotates out of the contact area it recovers to its original length. In practice, these springs have slight differences in their lengths and spring constants. These variations result in RFV. Tires are complex composite structures made of many different components that are assembled on a drum and cured in a mold. As a result, there are many conditions that result in RFV. These include variations in: tread extrusion thickness and symmetry, tread splice, body ply splices, inner liner splice, bead symmetry, turn-up symmetry, building drum alignment, transfer ring alignment, curing press bead seating, shaping bladder alignment and control, mold runout, and mold alignment. All of these factors can lead to variations in the material distribution and thickness that are modelled as spring length. The various tire components also are made from different materials, each of which exhibit variation in their elastic properties. These variations are influenced by rubber viscoelastic properties, mixing dispersion and uniformity, and cure heat history, among other things. == Waveform analysis == RFV is a complex waveform. It is expressed using several standard methods, including peak-to-peak, first harmonic, second harmonic, and higher-order harmonics. In production RFV testing these are reported as both magnitudes and angles. == References ==
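As an illustration of the waveform analysis described above, the following sketch (Python with NumPy) builds a synthetic radial-force waveform with assumed first- and second-harmonic amplitudes and recovers them, roughly as a uniformity machine's harmonic report would; the numbers are invented for the example, not measured data.

```python
import numpy as np

# Synthetic radial-force waveform over one tire revolution (assumed values, in newtons):
# a mean load plus first- and second-harmonic variations and a little measurement noise.
n = 360                                     # one sample per degree of rotation
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
rng = np.random.default_rng(0)
force = (4000.0                             # mean radial load
         + 12.0 * np.cos(theta - 0.4)       # first harmonic (H1), 12 N amplitude
         + 5.0 * np.cos(2.0 * theta + 1.1)  # second harmonic (H2), 5 N amplitude
         + rng.normal(0.0, 0.5, n))         # noise

peak_to_peak = force.max() - force.min()

# One-sided spectrum: 2/n * |X_k| recovers the amplitude of harmonic k (for k >= 1).
spectrum = np.fft.rfft(force)
h1 = 2.0 / n * np.abs(spectrum[1])
h2 = 2.0 / n * np.abs(spectrum[2])

print(f"peak-to-peak RFV: {peak_to_peak:.1f} N")
print(f"first harmonic:   {h1:.1f} N")
print(f"second harmonic:  {h2:.1f} N")
```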
Wikipedia/Radial_Force_Variation
Modelica is an object-oriented, declarative, multi-domain modeling language for component-oriented modeling of complex systems, e.g., systems containing mechanical, electrical, electronic, hydraulic, thermal, control, electric power or process-oriented subcomponents. The free Modelica language is developed by the non-profit Modelica Association. The Modelica Association also develops the free Modelica Standard Library that contains about 1400 generic model components and 1200 functions in various domains, as of version 4.0.0. == Characteristics == While Modelica resembles object-oriented programming languages, such as C++ or Java, it differs in two important respects. First, Modelica is a modeling language rather than a conventional programming language. Modelica classes are not compiled in the usual sense, but they are translated into objects which are then exercised by a simulation engine. The simulation engine is not specified by the language, although certain required capabilities are outlined. Second, although classes may contain algorithmic components similar to statements or blocks in programming languages, their primary content is a set of equations. In contrast to a typical assignment statement, such as x := 2 + y; where the left-hand side of the statement is assigned a value calculated from the expression on the right-hand side, an equation may have expressions on both its right- and left-hand sides, for example, x + y = 3 * z; Equations do not describe assignment but equality. In Modelica terms, equations have no pre-defined causality. The simulation engine may (and usually must) manipulate the equations symbolically to determine their order of execution and which components in the equation are inputs and which are outputs. == History == The Modelica design effort was initiated in September 1996 by Hilding Elmqvist. The goal was to develop an object-oriented language for modeling of technical systems in order to reuse and exchange dynamic system models in a standardized format. Modelica 1.0 is based on the PhD thesis of Hilding Elmqvist and on the experience with the modeling languages Allan, Dymola, NMF ObjectMath, Omola, SIDOPS+, and Smile. Hilding Elmqvist is the key architect of Modelica, but many other people have contributed as well (see appendix E in the Modelica specification). In September 1997, version 1.0 of the Modelica specification was released which was the basis for a prototype implementation within the commercial Dymola software system. In year 2000, the non-profit Modelica Association was formed to manage the continually evolving Modelica language and the development of the free Modelica Standard Library. In the same year, the usage of Modelica in industrial applications started. This table presents the timeline of the Modelica specification history: == Implementations == Commercial front-ends for Modelica include AMESim from the French company Imagine SA (now part of Siemens Digital Industries Software), Dymola from the Swedish company Dynasim AB (now part of Dassault Systèmes), Wolfram SystemModeler (formerly MathModelica) from the Swedish company Wolfram MathCore AB (now part of Wolfram Research), SimulationX from the German company ESI ITI GmbH, MapleSim from the Canadian company Maplesoft, JModelica.org (open source, discontinued) and Modelon Impact, from the Swedish company Modelon AB, and CATIA Systems from Dassault Systèmes (CATIA is one of the major CAD systems). 
Openmodelica is an open-source Modelica-based modeling and simulation environment intended for industrial and academic usage. Its long-term development is supported by a non-profit organization – the Open Source Modelica Consortium (OSMC). The goal with the OpenModelica effort is to create a comprehensive Open Source Modelica modeling, compilation and simulation environment based on free software distributed in binary and source code form for research, teaching, and industrial usage. The free simulation environment Scicos uses a subset of Modelica for component modeling. Support for a larger part of the Modelica language is currently under development. Nevertheless, there is still some incompatibility and diverging interpretation between all the different tools concerning the Modelica language. == Examples == The following code fragment shows a very simple example of a first order system ( x ˙ = − c ⋅ x , x ( 0 ) = 10 {\displaystyle {\dot {x}}=-c\cdot x,x(0)=10} ): The following code fragment shows an example to calculate the second derivative of a trigonometric function, using OMShell, as a means to develop the program written below. Interesting things to note about this example are the 'parameter' qualifier, which indicates that a given variable is time-invariant and the 'der' operator, which represents (symbolically) the time derivative of a variable. Also worth noting are the documentation strings that can be associated with declarations and equations. The main application area of Modelica is the modeling of physical systems. The most basic structuring concepts are shown at hand of simple examples from the electrical domain: === Built-in and user derived types === Modelica has the four built-in types Real, Integer, Boolean, String. Typically, user-defined types are derived, to associate physical quantity, unit, nominal values, and other attributes: === Connectors describing physical interaction === The interaction of a component to other components is defined by physical ports, called connectors, e.g., an electrical pin is defined as When drawing connection lines between ports, the meaning is that corresponding connector variables without the "flow" prefix are identical (here: "v") and that corresponding connector variables with the "flow" prefix (here: "i") are defined by a zero-sum equation (the sum of all corresponding "flow" variables is zero). The motivation is to automatically fulfill the relevant balance equations at the infinitesimally small connection point. === Basic model components === A basic model component is defined by a model and contains equations that describe the relationship between the connector variables in a declarative form (i.e., without specifying the calculation order): The goal is that a connected set of model components leads to a set of differential, algebraic and discrete equations where the number of unknowns and the number of equations is identical. In Modelica, this is achieved by requiring so called balanced models. The full rules for defining balanced models are rather complex, and can be read from in section 4.7. However, for most cases, a simple rule can be issued, that counts variables and equations the same way as most simulation tools do: A model is balanced when the number of its equations equals the number of its variables. 
given that variables and equations must be counted according to the following rule: -> Number of model equations = number of equations defined in the model + number of flow variables in the outside connectors -> Number of model variables = number of variables defined in the model (including the variables in the physical connectors) Note that standard input connectors (such as RealInput or IntegerInput) do not contribute to the count of variables since no new variables are defined inside them. The reason for this rule can be understood by thinking of the capacitor defined above. Each of its pins contains a flow variable, i.e. a current. When the model is checked in isolation, its pins are connected to nothing. This corresponds to setting an equation pin.i = 0 for each pin. That's why we must add an equation for each flow variable. Obviously the example can be extended to other cases, in which other kinds of flow variables are involved (e.g. forces, torques, etc.). When our capacitor is connected to another (balanced) model through one of its pins, a connection equation will be generated that replaces the two i = 0 equations of the pins being connected. Since the connection equation corresponds to two scalar equations, the connection operation will leave the larger model (constituted by our Capacitor and the model it is connected to) balanced. The Capacitor model above is balanced, since number of equations = 3+2=5 (flow variables: pin_p.i, pin_n.i, u) number of variables = 5 (u, pin_p.u, pin_p.i, pin_n.u, pin_n.i) Verification of this model using OpenModelica gives, in fact: Class Capacitor has 5 equation(s) and 5 variable(s). 3 of these are trivial equation(s). Another example, containing both input connectors and physical connectors, is the following component from the Modelica Standard Library: The component SignalVoltage is balanced, since number of equations = 3+2=5 (flow variables: pin_p.i, pin_n.i, u) number of variables = 5 (i, pin_p.u, pin_p.i, pin_n.u, pin_n.i) Again, checking with OpenModelica gives: Class Modelica.Electrical.Analog.Sources.SignalVoltage has 5 equation(s) and 5 variable(s). 4 of these are trivial equation(s). === Hierarchical models === A hierarchical model is built up from basic models by instantiating them, providing suitable values for the model parameters, and connecting model connectors. A typical example is the following electrical circuit: Via the language element annotation(...), definitions can be added to a model that do not have an influence on a simulation. Annotations are used to define graphical layout, documentation and version information. A basic set of graphical annotations is standardized to ensure that the graphical appearance and layout of models in different Modelica tools is the same. == Applications == Modelica is designed to be domain neutral and, as a result, is used in a wide variety of applications, such as fluid systems (for example, steam power generation, hydraulics, etc.), automotive applications (especially powertrain) and mechanical systems (for example, multi-body systems, mechatronics, etc.). In the automotive sector, many of the major automotive OEMs are using Modelica. These include Ford, General Motors, Toyota, BMW, and Daimler. Modelica is also being increasingly used for the simulation of thermo-fluid and energy systems. The characteristics of Modelica (acausal, object-oriented, domain neutral) make it well suited to system-level simulation, a domain where Modelica is now well established. 
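As an aside, the acausal treatment of equations described in the Characteristics section, where an equation such as x + y = 3·z has no built-in inputs or outputs, can be illustrated outside of any Modelica tool with a small symbolic sketch in Python's SymPy: the same relation can be rearranged for whichever variable the surrounding system leaves unknown, which is roughly what a Modelica translator does when it sorts the global equation system.

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
eq = sp.Eq(x + y, 3 * z)   # an acausal relation: no predefined inputs or outputs

# The same equation can be solved for whichever variable the rest of the
# system leaves unknown -- the "causality" is assigned only at this point.
print(sp.solve(eq, x))     # [3*z - y]
print(sp.solve(eq, y))     # [3*z - x]
print(sp.solve(eq, z))     # [x/3 + y/3]
```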
== See also == AMESim AMPL APMonitor ASCEND Domain-Specific Modeling DSM Dymola EcosimPro: Continuous and Discrete Modelling and Simulation Software EMSO GAMS JModelica.org OpenModelica MapleSim MATLAB SimulationX Simulink Wolfram SystemModeler Scilab/Xcos Kepler (Ptolemy) == Notes == == External links == Modelica Language Specification Version 3.6 Modelica Association, the homepage of the non-profit Modelica Association (developing Modelica) Modelica by Example A free interactive HTML book for learning Modelica, by Michael Tiller Introduction to Physical Modeling with Modelica, book by Michael Tiller Fritzson, Peter (February 2004). Principles of Object-Oriented Modeling and Simulation with Modelica 2.1 (PDF). Wiley-IEEE Press. ISBN 978-0-471-47163-9. Modelica Overview
Wikipedia/Modelica
Cornering force or side force is the lateral (i.e., parallel to the wheel axis) force produced by a vehicle tire during cornering. Cornering force is generated by tire slip and is proportional to slip angle at low slip angles. The rate at which cornering force builds up is described by relaxation length. Slip angle describes the deformation of the tire contact patch, and this deflection of the contact patch deforms the tire in a fashion akin to a spring. As with deformation of a spring, deformation of the tire contact patch generates a reaction force in the tire; the cornering force. Integrating the force generated by every tread element along the contact patch length gives the total cornering force. Although the term "tread element" is used, the compliance in the tire that leads to this effect is actually a combination of sidewall deflection and deflection of the rubber within the contact patch. The exact ratio of sidewall compliance to tread compliance is a factor in tire construction and inflation pressure. Because the tire deformation tends to reach a maximum behind the center of the contact patch, by a distance known as pneumatic trail, it tends to generate a torque about a vertical axis known as self-aligning torque. Diagrams of this effect can appear misleading, because the reaction force seems to act in the wrong direction: it is simply a matter of convention to quote positive cornering force as acting in the opposite direction to positive tire slip so that calculations are simplified, since a vehicle cornering under the influence of a cornering force to the left will generate a tire slip to the right. The same principles can be applied to a tire being deformed longitudinally, or in a combination of both longitudinal and lateral directions. The behaviour of a tire under combined longitudinal and lateral deformation can be described by a traction circle. == See also == Camber thrust Lateral force variation Circle of forces Skidpad == References ==
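A minimal numerical sketch of the behaviour described above, the small-angle proportionality between cornering force and slip angle together with a first-order build-up over the relaxation length (Python; the cornering stiffness and relaxation length are assumed example values):

```python
import math

C_alpha = 80_000.0            # cornering stiffness [N/rad] (assumed example value)
sigma   = 0.6                 # relaxation length [m] (assumed example value)
alpha   = math.radians(2.0)   # a small, constant slip angle

F_steady = C_alpha * alpha    # linear (small-angle) cornering force, roughly 2.8 kN

# First-order build-up with rolled distance s:  dF/ds = (F_steady - F) / sigma
ds, F = 0.01, 0.0
for step in range(1, 301):            # roll 3 m forward in 1 cm increments
    F += ds * (F_steady - F) / sigma
    if step in (60, 120, 300):        # s = 0.6 m, 1.2 m, 3.0 m
        print(f"after {step * ds:.1f} m rolled: F = {F:.0f} N "
              f"({F / F_steady:.0%} of the steady-state value)")
```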
Wikipedia/Cornering_force
Bicycle and motorcycle dynamics is the science of the motion of bicycles and motorcycles and their components, due to the forces acting on them. Dynamics falls under a branch of physics known as classical mechanics. Bike motions of interest include balancing, steering, braking, accelerating, suspension activation, and vibration. The study of these motions began in the late 19th century and continues today. Bicycles and motorcycles are both single-track vehicles and so their motions have many fundamental attributes in common and are fundamentally different from and more difficult to study than other wheeled vehicles such as dicycles, tricycles, and quadracycles. As with unicycles, bikes lack lateral stability when stationary, and under most circumstances can only remain upright when moving forward. Experimentation and mathematical analysis have shown that a bike stays upright when it is steered to keep its center of mass over its wheels. This steering is usually supplied by a rider, or in certain circumstances, by the bike itself. Several factors, including geometry, mass distribution, and gyroscopic effect all contribute in varying degrees to this self-stability, but long-standing hypotheses and claims that any single effect, such as gyroscopic or trail (the distance between steering axis and ground contact of the front tire), is solely responsible for the stabilizing force have been discredited. While remaining upright may be the primary goal of beginning riders, a bike must lean in order to maintain balance in a turn: the higher the speed or smaller the turn radius, the more lean is required. This balances the roll torque about the wheel contact patches generated by centrifugal force due to the turn with that of the gravitational force. This lean is usually produced by a momentary steering in the opposite direction, called countersteering. Unlike other wheeled vehicles, the primary control input on bikes is steering torque, not position. Although longitudinally stable when stationary, bikes often have a high enough center of mass and a short enough wheelbase to lift a wheel off the ground under sufficient acceleration or deceleration. When braking, depending on the location of the combined center of mass of the bike and rider with respect to the point where the front wheel contacts the ground, and if the front brake is applied hard enough, bikes can either: skid the front wheel which may or not result in a crash; or flip the bike and rider over the front wheel. A similar situation is possible while accelerating, but with respect to the rear wheel. == History == The history of the study of bike dynamics is nearly as old as the bicycle itself. It includes contributions from famous scientists such as Rankine, Appell, and Whipple. In the early 19th century Karl von Drais, credited with inventing the two-wheeled vehicle variously called the laufmaschine, velocipede, draisine, and dandy horse, showed that a rider could balance his device by steering the front wheel. In 1869, Rankine published an article in The Engineer repeating von Drais' assertion that balance is maintained by steering in the direction of a lean. In 1897, the French Academy of Sciences made understanding bicycle dynamics the goal of its Prix Fourneyron competition. Thus, by the end of the 19th century, Carlo Bourlet, Emmanuel Carvallo, and Francis Whipple had shown with rigid-body dynamics that some safety bicycles could actually balance themselves if moving at the right speed. 
Bourlet won the Prix Fourneyron, and Whipple won the Cambridge University Smith Prize. It is not clear to whom should go the credit for tilting the steering axis from the vertical which helps make this possible. In 1970, David E. H. Jones published an article in Physics Today showing that gyroscopic effects are not necessary for a person to balance a bicycle. Since 1971, when he identified and named the wobble, weave and capsize modes, Robin Sharp has written regularly about the behavior of motorcycles and bicycles. While at Imperial College, London, he worked with David Limebeer and Simos Evangelou. In the early 1970s, Cornell Aeronautical Laboratory (CAL, later Calspan Corporation in Buffalo, NY USA) was sponsored by the Schwinn Bicycle Company and others to study and simulate bicycle and motorcycle dynamics. Portions of this work have now been released to the public and scans of over 30 detailed reports have been posted at this TU Delft Bicycle Dynamics site. Since the 1990s, Cossalter, et al., have been researching motorcycle dynamics at the University of Padova. Their research, both experimental and numerical, has covered weave, wobble, chatter, simulators, vehicle modelling, tire modelling, handling, and minimum lap time maneuvering. In 2007, Meijaard, et al., published the canonical linearized equations of motion, in the Proceedings of the Royal Society A, along with verification by two different methods. These equations assumed the tires to roll without slip, that is to say, to go where they point, and the rider to be rigidly attached to the rear frame of the bicycle. In 2011, Kooijman, et al., published an article in Science showing that neither gyroscopic effects nor so-called caster effects due to trail are necessary for a bike to balance itself. They designed a two-mass-skate bicycle that the equations of motion predict is self-stable even with negative trail, the front wheel contacts the ground in front of the steering axis, and with counter-rotating wheels to cancel any gyroscopic effects. Then they constructed a physical model to validate that prediction. This may require some of the details provided below about steering geometry or stability to be re-evaluated. Bicycle dynamics was named 26 of Discover's 100 top stories of 2011. In 2013, Eddy Merckx Cycles was awarded over €150,000 with Ghent University to examine bicycle stability. == Forces == If the bike and rider are considered to be a single system, the forces that act on that system and its components can be roughly divided into two groups: internal and external. The external forces are due to gravity, inertia, contact with the ground, and contact with the atmosphere. The internal forces are caused by the rider and by interaction between components. === External forces === As with all masses, gravity pulls the rider and all the bike components toward the earth. At each tire contact patch there are ground reaction forces with both horizontal and vertical components. The vertical components mostly counteract the force of gravity, but also vary with braking and accelerating. For details, see the section on longitudinal stability below. The horizontal components, due to friction between the wheels and the ground, including rolling resistance, are in response to propulsive forces, braking forces, and turning forces. Aerodynamic forces due to the atmosphere are mostly in the form of drag, but can also be from crosswinds. 
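The relative size of these resisting forces can be put in rough numbers with a short sketch (Python; all parameters are assumed, round-figure values for an upright rider, not measurements):

```python
# Rough comparison of the main forces resisting forward motion at a typical riding speed.
# All parameters are assumed, round-figure values for illustration only.
rho  = 1.2     # air density [kg/m^3]
CdA  = 0.4     # drag area of rider plus bike [m^2] (assumed)
C_rr = 0.005   # rolling-resistance coefficient (assumed)
mass = 85.0    # rider plus bike [kg]
g    = 9.81    # gravitational acceleration [m/s^2]
v    = 8.0     # riding speed [m/s], about 29 km/h

aero_drag      = 0.5 * rho * CdA * v**2   # about 15 N, grows with the square of speed
rolling_resist = C_rr * mass * g          # about 4 N, roughly independent of speed

print(f"aerodynamic drag:   {aero_drag:.1f} N")
print(f"rolling resistance: {rolling_resist:.1f} N")
```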
At normal bicycling speeds on level ground, aerodynamic drag is the largest force resisting forward motion.: 188  At faster speed, aerodynamic drag becomes overwhelmingly the largest force resisting forward motion. Turning forces are generated during maneuvers for balancing in addition to just changing direction of travel. These may be interpreted as centrifugal forces in the accelerating reference frame of the bike and rider; or simply as inertia in a stationary, inertial reference frame and not forces at all. Gyroscopic forces acting on rotating parts such as wheels, engine, transmission, etc., are also due to the inertia of those rotating parts. They are discussed further in the section on gyroscopic effects below. === Internal forces === Internal forces, those between components of the bike and rider system, are mostly caused by the rider or by friction. In addition to pedaling, the rider can apply torques between the steering mechanism (front fork, handlebars, front wheel, etc.) and rear frame, and between the rider and the rear frame. Friction exists between any parts that move against each other: in the drive train, between the steering mechanism and the rear frame, etc. In addition to brakes, which create friction between rotating wheels and non-rotating frame parts, many bikes have front and rear suspensions. Some motorcycles and bicycles have a steering damper to dissipate undesirable kinetic energy, and some bicycles have a spring connecting the front fork to the frame to provide a progressive torque that tends to steer the bicycle straight ahead. On bikes with rear suspensions, feedback between the drive train and the suspension is an issue designers attempt to handle with various linkage configurations and dampers. == Motions == Motions of a bike can be roughly grouped into those out of the central plane of symmetry: lateral; and those in the central plane of symmetry: longitudinal or vertical. Lateral motions include balancing, leaning, steering, and turning. Motions in the central plane of symmetry include rolling forward, of course, but also stoppies, wheelies, brake diving, and most suspension activation. Motions in these two groups are linearly decoupled, that is they do not interact with each other to the first order. An uncontrolled bike is laterally unstable when stationary and can be laterally self-stable when moving under the right conditions or when controlled by a rider. Conversely, a bike is longitudinally stable when stationary and can be longitudinally unstable when undergoing sufficient acceleration or deceleration. == Lateral dynamics == Of the two, lateral dynamics has proven to be the more complicated, requiring three-dimensional, multibody dynamic analysis with at least two generalized coordinates to analyze. At a minimum, two coupled, second-order differential equations are required to capture the principal motions. Exact solutions are not possible, and numerical methods must be used instead. Competing theories of how bikes balance can still be found in print and online. On the other hand, as shown in later sections, much longitudinal dynamic analysis can be accomplished simply with planar kinetics and just one coordinate. === Balance === When discussing bike balance, it is necessary to distinguish carefully between "stability", "self-stability", and "controllability". Recent research suggests that "rider-controlled stability of bicycles is indeed related to their self-stability". 
A bike remains upright when it is steered so that the ground reaction forces exactly balance all the other internal and external forces it experiences, such as gravitational if leaning, inertial or centrifugal if in a turn, gyroscopic if being steered, and aerodynamic if in a crosswind. Steering may be supplied by a rider or, under certain circumstances, by the bike itself. This self-stability is generated by a combination of several effects that depend on the geometry, mass distribution, and forward speed of the bike. Tires, suspension, steering damping, and frame flex can also influence it, especially in motorcycles. Even when staying relatively motionless, a rider can balance a bike by the same principle. While performing a track stand, the rider can keep the line between the two contact patches under the combined center of mass by steering the front wheel to one side or the other and then moving forward and backward slightly to move the front contact patch from side to side as necessary. Forward motion can be generated simply by pedaling. Backwards motion can be generated the same way on a fixed-gear bicycle. Otherwise, the rider can take advantage of an opportune slope of the pavement or lurch the upper body backwards while the brakes are momentarily engaged. If the steering of a bike is locked, it becomes virtually impossible to balance while riding. On the other hand, if the gyroscopic effect of rotating bike wheels is cancelled by adding counter-rotating wheels, it is still easy to balance while riding. One other way that a bike can be balanced, with or without locked steering, is by applying appropriate torques between the bike and rider similar to the way a gymnast can swing up from hanging straight down on uneven parallel bars, a person can start swinging on a swing from rest by pumping their legs, or a double inverted pendulum can be controlled with an actuator only at the elbow. ==== Forward speed ==== The rider applies torque to the handlebars in order to turn the front wheel and so to control lean and maintain balance. At high speeds, small steering angles quickly move the ground contact points laterally; at low speeds, larger steering angles are required to achieve the same results in the same amount of time. Because of this, it is usually easier to maintain balance at high speeds. As self-stability typically occurs at speeds above a certain threshold, going faster increases the chances that a bike is contributing to its own stability. ==== Center of mass ==== The farther forward (closer to front wheel) the center of mass of the combined bike and rider, the less the front wheel has to move laterally in order to maintain balance. Conversely, the farther back (closer to the rear wheel) the center of mass is located, the more front wheel lateral movement or bike forward motion is required to regain balance. This can be noticeable on long-wheelbase recumbents, choppers, and wheelie bikes. It can also be a challenge for touring bikes that carry a heavy load of gear over or even behind the rear wheel. Mass over the rear wheel can be more easily controlled if it is lower than mass over the front wheel. A bike is also an example of an inverted pendulum. Just as a broomstick is more easily balanced in the hand than a pencil, a tall bike (with a high center of mass) can be easier to balance when ridden than a low one because the tall bike's lean rate (rate at which its angle of lean increases as it begins to fall over) will be slower. 
However, a rider can have the opposite impression of a bike when it is stationary. A top-heavy bike can require more effort to keep upright, when stopped in traffic for example, than a bike which is just as tall but with a lower center of mass. This is an example of a vertical second-class lever. A small force at the end of the lever, the seat or handlebars at the top of the bike, more easily moves a large mass if the mass is closer to the fulcrum, where the tires touch the ground. This is why touring cyclists are advised to carry loads low on a bike, and panniers hang down on either side of front and rear racks. ==== Trail ==== A factor that influences how easy or difficult a bike will be to ride is trail, the distance by which the front wheel ground contact point trails behind the steering axis ground contact point. The steering axis is the axis about which the entire steering mechanism (fork, handlebars, front wheel, etc.) pivots. In traditional bike designs, with a steering axis tilted back from the vertical, positive trail tends to steer the front wheel into the direction of a lean, independent of forward speed. This can be simulated by pushing a stationary bike to one side. The front wheel will usually also steer to that side. In a lean, gravity provides this force. The dynamics of a moving bike are more complicated, however, and other factors can contribute to or detract from this effect. Trail is a function of head angle, fork offset or rake, and wheel size. Their relationship can be described by this formula: Trail = ( R w cos ⁡ ( A h ) − O f ) sin ⁡ ( A h ) {\displaystyle {\text{Trail}}={\frac {(R_{w}\cos(A_{h})-O_{f})}{\sin(A_{h})}}} where R w {\displaystyle R_{w}} is wheel radius, A h {\displaystyle A_{h}} is the head angle measured clock-wise from the horizontal and O f {\displaystyle O_{f}} is the fork offset or rake. Trail can be increased by increasing the wheel size, decreasing the head angle, or decreasing the fork rake. The more trail a traditional bike has, the more stable it feels, although too much trail can make a bike feel difficult to steer. Bikes with negative trail (where the contact patch is in front of where the steering axis intersects the ground), while still rideable, are reported to feel very unstable. Normally, road racing bicycles have more trail than touring bikes but less than mountain bikes. Mountain bikes are designed with less-vertical head angles than road bikes so as to have greater trail and hence improved stability for descents. Touring bikes are built with small trail to allow the rider to control a bike weighed down with baggage. As a consequence, an unloaded touring bike can feel unstable. In bicycles, fork rake, often a curve in the fork blades forward of the steering axis, is used to diminish trail. Bikes with negative trail exist, such as the Python Lowracer, and are rideable, and an experimental bike with negative trail has been shown to be self-stable. In motorcycles, rake refers to the head angle instead, and offset created by the triple tree is used to diminish trail. A small survey by Whitt and Wilson found: touring bicycles with head angles between 72° and 73° and trail between 43 mm and 60 mm racing bicycles with head angles between 73° and 74° and trail between 28 mm and 45 mm track bicycles with head angles of 75° and trail between 23.5 mm and 37 mm. However, these ranges are not hard and fast. 
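For readers who want to check figures like those in the survey above, here is a minimal Python sketch of the trail formula; the wheel radius, head angle, and fork offset are illustrative values rather than measurements of any particular bike.

```python
import math

def trail(wheel_radius_m, head_angle_deg, fork_offset_m):
    """Trail = (R_w * cos(A_h) - O_f) / sin(A_h), with the head angle A_h
    measured from the horizontal as in the formula above."""
    a_h = math.radians(head_angle_deg)
    return (wheel_radius_m * math.cos(a_h) - fork_offset_m) / math.sin(a_h)

# Illustrative values only: a ~0.34 m wheel radius (700c wheel with tire),
# a 73 degree head angle, and 45 mm of fork offset.
print(round(trail(0.34, 73.0, 0.045) * 1000, 1), "mm trail")  # roughly 57 mm
```

Plugging in a slacker (smaller) head angle or a smaller fork offset increases the result, in line with the qualitative statements above.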
For example, LeMond Racing Cycles offers both with forks that have 45 mm of offset or rake and the same size wheels: a 2006 Tete de Course, designed for road racing, with a head angle that varies from 71+1⁄4° to 74°, depending on frame size, and thus trail that varies from 51.5 mm to 69 mm. a 2007 Filmore, designed for the track, with a head angle that varies from 72+1⁄2° to 74°, depending on frame size, and thus trail that varies from 51.5 mm to 61 mm. The amount of trail a particular bike has may vary with time for several reasons. On bikes with front suspension, especially telescopic forks, compressing the front suspension, due to heavy braking for example, can steepen the steering axis angle and reduce trail. Trail also varies with lean angle, and steering angle, usually decreasing from a maximum when the bike is straight upright and steered straight ahead. Trail can decrease to zero with sufficiently large lean and steer angles, which can alter how stable a bike feels. Finally, even the profile of the front tire can influence how trail varies as the bike is leaned and steered. A measurement similar to trail, called either mechanical trail, normal trail, or true trail, is the perpendicular distance from the steering axis to the centroid of the front wheel contact patch. ==== Wheelbase ==== A factor that influences the directional stability of a bike is wheelbase, the horizontal distance between the ground contact points of the front and rear wheels. For a given displacement of the front wheel, due to some disturbance, the angle of the resulting path from the original is inversely proportional to wheelbase. Also, the radius of curvature for a given steer angle and lean angle is proportional to the wheelbase. Finally, the wheelbase increases when the bike is leaned and steered. In the extreme, when the lean angle is 90°, and the bike is steered in the direction of that lean, the wheelbase is increased by the radius of the front and rear wheels. ==== Steering mechanism mass distribution ==== Another factor that can also contribute to the self-stability of traditional bike designs is the distribution of mass in the steering mechanism, which includes the front wheel, the fork, and the handlebar. If the center of mass for the steering mechanism is in front of the steering axis, then the pull of gravity will also cause the front wheel to steer in the direction of a lean. This can be seen by leaning a stationary bike to one side. The front wheel will usually also steer to that side independent of any interaction with the ground. Additional parameters, such as the fore-to-aft position of the center of mass and the elevation of the center of mass also contribute to the dynamic behavior of a bike. ==== Gyroscopic effects ==== The role of the gyroscopic effect in most bike designs is to help steer the front wheel into the direction of a lean. This phenomenon is called precession, and the rate at which an object precesses is inversely proportional to its rate of spin. The slower a front wheel spins, the faster it will precess when the bike leans, and vice versa. The rear wheel is prevented from precessing by friction of the tires on the ground, and so continues to lean as though it were not spinning at all. Hence gyroscopic forces do not provide any resistance to tipping. At low forward speeds, the precession of the front wheel is too quick, contributing to an uncontrolled bike's tendency to oversteer, start to lean the other way and eventually oscillate and fall over. 
At high forward speeds, the precession is usually too slow, contributing to an uncontrolled bike's tendency to understeer and eventually fall over without ever having reached the upright position. This instability is very slow, on the order of seconds, and is easy for most riders to counteract. Thus a fast bike may feel stable even though it is actually not self-stable and would fall over if it were uncontrolled. Another contribution of gyroscopic effects is a roll moment generated by the front wheel during countersteering. For example, steering left causes a moment to the right. The moment is small compared to the moment generated by the out-tracking front wheel, but begins as soon as the rider applies torque to the handlebars and so can be helpful in motorcycle racing. For more detail, see the section countersteering, below, and the countersteering article. ==== Self-stability ==== Between the two unstable regimes mentioned in the previous section, and influenced by all the factors described above that contribute to balance (trail, mass distribution, gyroscopic effects, etc.), there may be a range of forward speeds for a given bike design at which these effects steer an uncontrolled bike upright. It has been proven that neither gyroscopic effects nor positive trail are sufficient by themselves or necessary for self-stability, although they certainly can enhance hands-free control. However, even without self-stability a bike may be ridden by steering it to keep it over its wheels. Note that the effects mentioned above that would combine to produce self-stability may be overwhelmed by additional factors such as headset friction and stiff control cables. This video shows a riderless bicycle exhibiting self-stability. ==== Longitudinal acceleration ==== Longitudinal acceleration has been shown to have a large and complex effect on lateral dynamics. In one study, positive acceleration eliminates self stability, and negative acceleration (deceleration) changes the speeds of self stability. === Turning === In order for a bike to turn, that is, change its direction of forward travel, the front wheel must aim approximately in the desired direction, as with any front-wheel steered vehicle. Friction between the wheels and the ground then generates the centripetal acceleration necessary to alter the course from straight ahead as a combination of cornering force and camber thrust. The radius of the turn of an upright (not leaning) bike can be roughly approximated, for small steering angles, by: r = w δ cos ⁡ ( ϕ ) {\displaystyle r={\frac {w}{\delta \cos \left(\phi \right)}}} where r {\displaystyle r\,\!} is the approximate radius, w {\displaystyle w\,\!} is the wheelbase, δ {\displaystyle \delta \,\!} is the steer angle, and ϕ {\displaystyle \phi \,\!} is the caster angle of the steering axis. ==== Leaning ==== However, unlike other wheeled vehicles, bikes must also lean during a turn to balance the relevant forces: gravitational, inertial, frictional, and ground support. The angle of lean, θ, can easily be calculated using the laws of circular motion: θ = arctan ⁡ ( v 2 g r ) {\displaystyle \theta =\arctan \left({\frac {v^{2}}{gr}}\right)} where v is the forward speed, r is the radius of the turn and g is the acceleration of gravity. This is in the idealized case. A slight increase in the lean angle may be required on motorcycles to compensate for the width of modern tires at the same forward speed and turn radius. 
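As a rough numerical illustration of the two idealized formulas above, the following Python sketch evaluates the approximate turn radius of an upright bike and the ideal lean angle for a given speed and radius; all inputs are example values, not data from any study cited here.

```python
import math

G = 9.81  # acceleration of gravity, m/s^2

def upright_turn_radius(wheelbase_m, steer_angle_deg, caster_angle_deg):
    """Approximate radius of an upright bike, r ~ w / (delta * cos(phi)),
    valid only for small steering angles as noted above."""
    delta = math.radians(steer_angle_deg)
    phi = math.radians(caster_angle_deg)
    return wheelbase_m / (delta * math.cos(phi))

def ideal_lean_angle_deg(speed_ms, radius_m):
    """Ideal lean angle theta = arctan(v^2 / (g r)) for a steady turn."""
    return math.degrees(math.atan(speed_ms ** 2 / (G * radius_m)))

# Illustrative numbers: 1.02 m wheelbase, 2 degrees of steer, 17 degree caster angle.
print(round(upright_turn_radius(1.02, 2.0, 17.0), 1), "m")
# 10 m/s in a 10 m radius turn; close to the 45.6 degree figure quoted below.
print(round(ideal_lean_angle_deg(10.0, 10.0), 1), "degrees")
```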
It can also be seen however that this simple 2-dimensional model, essentially an inverted pendulum on a turntable, predicts that the steady-state turn is unstable. If the bike is displaced slightly downwards from its equilibrium lean angle, the torque of gravity increases, that of centrifugal force decreases and the displacement gets amplified. A more-sophisticated model that allows a wheel to steer, adjust the path, and counter the torque of gravity, is necessary to capture the self-stability observed in real bikes. For example, a bike in a 10 m (33 ft) radius steady-state turn at 10 m/s (36 km/h, 22 mph) must be at an angle of 45.6°. A rider can lean with respect to the bike in order to keep either the torso or the bike more or less upright if desired. The angle that matters is the one between the horizontal plane and the plane defined by the tire contacts and the location of the center of mass of bike and rider. This lean of the bike decreases the actual radius of the turn proportionally to the cosine of the lean angle. The resulting radius can be roughly approximated (within 2% of exact value) by: r = w cos ⁡ ( θ ) δ cos ⁡ ( ϕ ) {\displaystyle r={\frac {w\cos \left(\theta \right)}{\delta \cos \left(\phi \right)}}} where r {\displaystyle r\,\!} is the approximate radius, w {\displaystyle w\,\!} is the wheelbase, θ {\displaystyle \theta \,\!} is the lean angle, δ {\displaystyle \delta \,\!} is the steering angle, and ϕ {\displaystyle \phi \,\!} is the caster angle of the steering axis. As a bike leans, the tires' contact patches move farther to the side causing wear. The portions at either edge of a motorcycle tire that remain unworn by leaning into turns is sometimes referred to as chicken strips. The finite width of the tires alters the actual lean angle of the rear frame from the ideal lean angle described above. The actual lean angle between the frame and the vertical must increase with tire width and decrease with center of mass height. Bikes with fat tires and low center of mass must lean more than bikes with skinnier tires or higher centers of mass to negotiate the same turn at the same speed. The increase in lean angle due to a tire thickness of 2t can be calculated as arcsin ⁡ ( t sin ⁡ ( ϕ ) h − t ) {\displaystyle \arcsin \left(t{\frac {\sin(\phi )}{h-t}}\right)} where φ is the ideal lean angle, and h is the height of the center of mass. For example, a motorcycle with a 12 inch wide rear tire will have t = 6 inches. If the combined bike and rider center of mass is at a height of 26 inches, then a 25° lean must be increased by 7.28°: a nearly 30% increase. If the tires are only 6 inches wide, then the lean angle increase is only 3.16°, just under half. The couple created by gravity and the ground reaction forces is necessary for a bicycle to turn at all. On a custom built bicycle with spring-loaded outriggers that exactly cancel this couple, so that the bicycle and rider may assume any lean angle when traveling in a straight line, riders find it impossible to make a turn. As soon as the wheels deviate from a straight path, the bicycle and rider begin to lean in the opposite direction, and the only way to right them is to steer back onto the straight path. ==== Countersteering ==== To initiate a turn and the necessary lean in the direction of that turn, a bike must momentarily steer in the opposite direction. This is often referred to as countersteering. 
With the front wheel now at a finite angle to the direction of motion, a lateral force is developed at the contact patch of the tire. This force creates a torque around the longitudinal (roll) axis of the bike, and this torque causes the bike to lean away from the initially steered direction and toward the direction of the desired turn. Where there is no external influence, such as an opportune side wind to create the force necessary to lean the bike, countersteering is necessary to initiate a rapid turn. While the initial steer torque and steer angle are both opposite the desired turn direction, this may not be the case to maintain a steady-state turn. The sustained steer angle is usually in the same direction as the turn, but may remain opposite to the direction of the turn, especially at high speeds. The sustained steer torque required to maintain that steer angle is usually opposite the turn direction. The actual magnitude and orientation of both the sustained steer angle and sustained steer torque of a particular bike in a particular turn depend on forward speed, bike geometry, tire properties, and combined bike and rider mass distribution. Once in a turn, the radius can only be changed with an appropriate change in lean angle, and this can be accomplished by additional countersteering out of the turn to increase lean and decrease radius, then into the turn to decrease lean and increase radius. To exit the turn, the bike must again countersteer, momentarily steering more into the turn in order to decrease the radius, thus increasing inertial forces, and thereby decreasing the angle of lean. ==== Steady-state turning ==== Once a turn is established, the torque that must be applied to the steering mechanism in order to maintain a constant radius at a constant forward speed depends on the forward speed and the geometry and mass distribution of the bike. At speeds below the capsize speed, described below in the section on Eigenvalues and also called the inversion speed, the self-stability of the bike will cause it to tend to steer into the turn, righting itself and exiting the turn, unless a torque is applied in the opposite direction of the turn. At speeds above the capsize speed, the capsize instability will cause it to tend to steer out of the turn, increasing the lean, unless a torque is applied in the direction of the turn. At the capsize speed no input steering torque is necessary to maintain the steady-state turn. ==== Steering angle ==== Several effects influence the steering angle, the angle at which the front assembly is rotated about the steering axis, necessary to maintain a steady-state turn. Some of these are unique to single-track vehicles, while others are also experienced by automobiles. Some of these may be mentioned elsewhere in this article, and they are repeated here, though not necessarily in order of importance, so that they may be found in one place. First, the actual kinematic steering angle, the angle projected onto the road plane to which the front assembly is rotated is a function of the steering angle and the steering axis angle: Δ = δ cos ⁡ ( ϕ ) {\displaystyle \Delta =\delta \cos \left(\phi \right)} where Δ {\displaystyle \Delta \,\!} is the kinematic steering angle, δ {\displaystyle \delta \,\!} is the steering angle, and ϕ {\displaystyle \phi \,\!} is the caster angle of the steering axis. Second, the lean of the bike decreases the actual radius of the turn proportionally to the cosine of the lean angle. 
The resulting radius can be roughly approximated (within 2% of exact value) by: r = w cos ⁡ ( θ ) δ cos ⁡ ( ϕ ) {\displaystyle r={\frac {w\cos \left(\theta \right)}{\delta \cos \left(\phi \right)}}} where r {\displaystyle r\,\!} is the approximate radius, w {\displaystyle w\,\!} is the wheelbase, θ {\displaystyle \theta \,\!} is the lean angle, δ {\displaystyle \delta \,\!} is the steering angle, and ϕ {\displaystyle \phi \,\!} is the caster angle of the steering axis. Third, because the front and rear tires can have different slip angles due to weight distribution, tire properties, etc., bikes can experience understeer or oversteer. When understeering, the steering angle must be greater, and when oversteering, the steering angle must be less than it would be if the slip angles were equal to maintain a given turn radius. Some authors even use the term counter-steering to refer to the need on some bikes under some conditions to steer in the opposite direction of the turn (negative steering angle) to maintain control in response to significant rear wheel slippage. Fourth, camber thrust contributes to the centripetal force necessary to cause the bike to deviate from a straight path, along with cornering force due to the slip angle, and can be the largest contributor. Camber thrust contributes to the ability of bikes to negotiate a turn with the same radius as automobiles but with a smaller steering angle. When a bike is steered and leaned in the same direction, the camber angle of the front tire is greater than that of the rear and so can generate more camber thrust, all else being equal. ==== No hands ==== While countersteering is usually initiated by applying torque directly to the handlebars, on lighter vehicles such as bicycles, it can be accomplished by shifting the rider's weight. If the rider leans to the right relative to the bike, the bike leans to the left to conserve angular momentum, and the combined center of mass remains nearly in the same vertical plane. This leftward lean of the bike, called counter lean by some authors, will cause it to steer to the left and initiate a right-hand turn as if the rider had countersteered to the left by applying a torque directly to the handlebars. This technique may be complicated by additional factors such as headset friction and stiff control cables. The combined center of mass does move slightly to the left when the rider leans to the right relative to the bike, and the bike leans to the left in response. The action, in space, would have the tires move right, but this is prevented by friction between the tires and the ground, and thus pushes the combined center of mass left. This is a small effect, however, as evidenced by the difficulty most people have in balancing a bike by this method alone. ==== Gyroscopic effects ==== As mentioned above in the section on balance, one effect of turning the front wheel is a roll moment caused by gyroscopic precession. The magnitude of this moment is proportional to the moment of inertia of the front wheel, its spin rate (forward motion), the rate that the rider turns the front wheel by applying a torque to the handlebars, and the cosine of the angle between the steering axis and the vertical. For a sample motorcycle moving at 22 m/s (50 mph) that has a front wheel with a moment of inertia of 0.6 kg·m2, turning the front wheel one degree in half a second generates a roll moment of 3.5 N·m. 
In comparison, the lateral force on the front tire as it tracks out from under the motorcycle reaches a maximum of 50 N. This, acting on the 0.6 m (2 ft) height of the center of mass, generates a roll moment of 30 N·m. While the moment from gyroscopic forces is only 12% of this, it can play a significant part because it begins to act as soon as the rider applies the torque, instead of building up more slowly as the wheel out-tracks. This can be especially helpful in motorcycle racing. ==== Two-wheel steering ==== Because of theoretical benefits, such as a tighter turning radius at low speed, attempts have been made to construct motorcycles with two-wheel steering. One working prototype by Ian Drysdale in Australia is reported to "work very well". Issues in the design include whether to provide active control of the rear wheel or let it swing freely. In the case of active control, the control algorithm needs to decide between steering with or in the opposite direction of the front wheel, when, and how much. One implementation of two-wheel steering, the Sideways bike, lets the rider control the steering of both wheels directly. Another, the Swing Bike, had the second steering axis in front of the seat so that it could also be controlled by the handlebars. Milton W. Raymond built a long low two-wheel steering bicycle, called "X-2", with various steering mechanisms to control the two wheels independently. Steering motions included "balance", in which both wheels move together to steer the tire contacts under the center of mass; and "true circle", in which the wheels steer equally in opposite directions and thus steering the bicycle without substantially changing the lateral position of the tire contacts relative to the center of mass. X-2 was also able to go "crabwise" with the wheels parallel but out of line with the frame, for instance with the front wheel near the roadway center line and rear wheel near the curb. "Balance" steering allowed easy balancing despite long wheelbase and low center of mass, but no self-balancing ("no hands") configuration was discovered. True circle, as expected, was essentially impossible to balance, as steering does not correct for misalignment of the tire patch and center of mass. Crabwise cycling at angles tested up to about 45° did not show a tendency to fall over, even under braking. X-2 is mentioned in passing in Whitt and Wilson's Bicycling Science 2nd edition. ==== Rear-wheel steering ==== Because of the theoretical benefits, especially a simplified front-wheel drive mechanism, attempts have been made to construct a rideable rear-wheel steering bike. The Bendix Company built a rear-wheel steering bicycle, and the U.S. Department of Transportation commissioned the construction of a rear-wheel steering motorcycle: both proved to be unrideable. Rainbow Trainers, Inc. in Alton, Illinois, offered US$5,000 to the first person "who can successfully ride the rear-steered bicycle, Rear Steered Bicycle I". One documented example of someone successfully riding a rear-wheel steering bicycle is that of L. H. Laiterman at Massachusetts Institute of Technology, on a specially designed recumbent bike. The difficulty is that turning left, accomplished by turning the rear wheel to the right, initially moves the center of mass to the right, and vice versa. This complicates the task of compensating for leans induced by the environment. 
Examination of the eigenvalues for bicycles with common geometries and mass distributions shows that when moving in reverse, so as to have rear-wheel steering, they are inherently unstable. This does not mean they are unrideable, but that the effort to control them is higher. Other, purpose-built designs have been published, however, that do not suffer this problem. ==== Center steering ==== Between the extremes of bicycles with classical front-wheel steering and those with strictly rear-wheel steering is a class of bikes with a pivot point somewhere between the two, referred to as center-steering, and similar to articulated steering. An early implementation of the concept was the Phantom bicycle in the early 1870s, promoted as a safer alternative to the penny-farthing. This design allows for simple front-wheel drive and current implementations appear to be quite stable, even rideable no-hands, as many photographs illustrate. These designs, such as the Python Lowracer, a recumbent, usually have very lax head angles (40° to 65°) and positive or even negative trail. The builder of a bike with negative trail states that steering the bike from straight ahead forces the seat (and thus the rider) to rise slightly and this offsets the destabilizing effect of the negative trail. ==== Reverse steering ==== Bicycles have been constructed, for investigation and demonstration purposes, with the steering reversed so that turning the handlebars to the left causes the front wheel to turn to the right, and vice versa. It is possible to ride such a bicycle, but riders experienced with normal bicycles find it very difficult to learn, if they can manage it at all. ==== Tiller effect ==== Tiller effect is the expression used to describe how handlebars that extend far behind the steering axis (head tube) act like a tiller on a boat, in that one moves the bars to the right in order to turn the front wheel to the left, and vice versa. This situation is commonly found on cruiser bicycles, some recumbents, and some motorcycles. It can be troublesome when it limits the ability to steer because of interference or the limits of arm reach. ==== Tires ==== Tires have a large influence over bike handling, especially on motorcycles, but also on bicycles. Tires influence bike dynamics in two distinct ways: finite crown radius and force generation. Increasing the crown radius of the front tire has been shown to decrease the size of the self-stable speed range or to eliminate self-stability altogether. Increasing the crown radius of the rear tire has the opposite effect, but to a lesser degree. Tires generate the lateral forces necessary for steering and balance through a combination of cornering force and camber thrust. Tire inflation pressures have also been found to be important variables in the behavior of a motorcycle at high speeds. Because the front and rear tires can have different slip angles due to weight distribution, tire properties, etc., bikes can experience understeer or oversteer. Of the two, understeer, in which the front wheel slides more than the rear wheel, is more dangerous since front wheel steering is critical for maintaining balance. Because real tires have a finite contact patch with the road surface that can generate a scrub torque, and when in a turn, can experience some side slipping as they roll, they can generate torques about an axis normal to the plane of the contact patch. One torque generated by a tire, called the self aligning torque, is caused by asymmetries in the side-slip along the length of the contact patch.
The resultant force of this side-slip occurs behind the geometric center of the contact patch, a distance described as the pneumatic trail, and so creates a torque on the tire. Since the direction of the side-slip is towards the outside of the turn, the force on the tire is towards the center of the turn. Therefore, this torque tends to turn the front wheel in the direction of the side-slip, away from the direction of the turn, and therefore tends to increase the radius of the turn. Another torque is produced by the finite width of the contact patch and the lean of the tire in a turn. The portion of the contact patch towards the outside of the turn is actually moving rearward, with respect to the wheel's hub, faster than the rest of the contact patch, because of its greater radius from the hub. By the same reasoning, the inner portion is moving rearward more slowly. So the outer and inner portions of the contact patch slip on the pavement in opposite directions, generating a torque that tends to turn the front wheel in the direction of the turn, and therefore tends to decrease the turn radius. The combination of these two opposite torques creates a resulting yaw torque on the front wheel, and its direction is a function of the side-slip angle of the tire, the angle between the actual path of the tire and the direction it is pointing, and the camber angle of the tire (the angle that the tire leans from the vertical). The result of this torque is often the suppression of the inversion speed predicted by rigid wheel models described above in the section on steady-state turning. ==== High side ==== A highsider is a type of bike motion which is caused by a rear wheel gaining traction when it is not facing in the direction of travel, usually after slipping sideways in a curve. This can occur under heavy braking, acceleration, a varying road surface, or suspension activation, especially due to interaction with the drive train. It can take the form of a single slip-then-flip or a series of violent oscillations. === Maneuverability and handling === Bike maneuverability and handling is difficult to quantify for several reasons. The geometry of a bike, especially the steering axis angle makes kinematic analysis complicated. Under many conditions, bikes are inherently unstable and must always be under rider control. Finally, the rider's skill has a large influence on the bike's performance in any maneuver. Bike designs tend to consist of a trade-off between maneuverability and stability. ==== Rider control inputs ==== The primary control input that the rider can make is to apply a torque directly to the steering mechanism via the handlebars. Because of the bike's own dynamics, due to steering geometry and gyroscopic effects, direct position control over steering angle has been found to be problematic. A secondary control input that the rider can make is to lean the upper torso relative to the bike. As mentioned above, the effectiveness of rider lean varies inversely with the mass of the bike. On heavy bikes, such as motorcycles, rider lean mostly alters the ground clearance requirements in a turn, improves the view of the road, and improves the bike system dynamics in a very low-frequency passive manner. In motorcycle racing, leaning the torso, moving the body, and projecting a knee to the inside of the turn relative to the bike can also cause an aerodynamic yawing moment that facilitates entering and rounding the turn. 
==== Differences from automobiles ==== The need to keep a bike upright to avoid injury to the rider and damage to the vehicle limits the type of maneuverability testing commonly performed. For example, while automobile enthusiast publications often perform and quote skidpad results, motorcycle publications do not. The need to "set up" for a turn, lean the bike to the appropriate angle, means that the rider must see further ahead than is necessary for a typical car at the same speed, and this need increases more than in proportion to the speed. ==== Rating schemes ==== Several schemes have been devised to rate the handling of bikes, particularly motorcycles. The roll index is the ratio between steering torque and roll or lean angle. The acceleration index is the ratio between steering torque and lateral or centripetal acceleration. The steering ratio is the ratio between the theoretical turning radius based on ideal tire behavior and the actual turning radius. Values less than one, where the front wheel side slip is greater than the rear wheel side slip, are described as under-steering; equal to one as neutral steering; and greater than one as over-steering. Values less than zero, in which the front wheel must be turned opposite the direction of the curve due to much greater rear wheel side slip than front wheel have been described as counter-steering. Riders tend to prefer neutral or slight over-steering. Car drivers tend to prefer under-steering. The Koch index is the ratio between peak steering torque and the product of peak lean rate and forward speed. Large, touring motorcycles tend to have a high Koch index, sport motorcycles tend to have a medium Koch index, and scooters tend to have a low Koch index. It is easier to maneuver light scooters than heavy motorcycles. === Lateral motion theory === Although its equations of motion can be linearized, a bike is a nonlinear system. The variable(s) to be solved for cannot be written as a linear sum of independent components, i.e. its behavior is not expressible as a sum of the behaviors of its descriptors. Generally, nonlinear systems are difficult to solve and are much less understandable than linear systems. In the idealized case, in which friction and any flexing is ignored, a bike is a conservative system. Damping, however, can still be demonstrated: under the right circumstances, side-to-side oscillations will decrease with time. Energy added with a sideways jolt to a bike running straight and upright (demonstrating self-stability) is converted into increased forward speed, not lost, as the oscillations die out. A bike is a nonholonomic system because its outcome is path-dependent. In order to know its exact configuration, especially location, it is necessary to know not only the configuration of its parts, but also their histories: how they have moved over time. This complicates mathematical analysis. Finally, in the language of control theory, a bike exhibits non-minimum phase behavior. It turns in the direction opposite of how it is initially steered, as described above in the section on countersteering ==== Degrees of freedom ==== The number of degrees of freedom of a bike depends on the particular model being used. 
The simplest model that captures the key dynamic features, called the "Whipple model" after Francis Whipple who first developed the equations for it, has four rigid bodies with knife edge wheels rolling without slip on a flat smooth surface, and has 7 degrees of freedom (configuration variables required to completely describe the location and orientation of all 4 bodies): x coordinate of rear wheel contact point y coordinate of rear wheel contact point orientation angle of rear frame (yaw) rotation angle of rear wheel rotation angle of front wheel lean angle of rear frame (roll) steering angle between rear frame and front end ==== Equations of motion ==== The equations of motion of an idealized bike, consisting of a rigid frame, a rigid fork, two knife-edged, rigid wheels, all connected with frictionless bearings and rolling without friction or slip on a smooth horizontal surface and operating at or near the upright and straight-ahead, unstable equilibrium can be represented by a single fourth-order linearized ordinary differential equation or two coupled second-order differential equations, the lean equation M θ θ θ r ¨ + K θ θ θ r + M θ ψ ψ ¨ + C θ ψ ψ ˙ + K θ ψ ψ = M θ {\displaystyle M_{\theta \theta }{\ddot {\theta _{r}}}+K_{\theta \theta }\theta _{r}+M_{\theta \psi }{\ddot {\psi }}+C_{\theta \psi }{\dot {\psi }}+K_{\theta \psi }\psi =M_{\theta }} and the steer equation M ψ ψ ψ ¨ + C ψ ψ ψ ˙ + K ψ ψ ψ + M ψ θ θ r ¨ + C ψ θ θ r ˙ + K ψ θ θ r = M ψ , {\displaystyle M_{\psi \psi }{\ddot {\psi }}+C_{\psi \psi }{\dot {\psi }}+K_{\psi \psi }\psi +M_{\psi \theta }{\ddot {\theta _{r}}}+C_{\psi \theta }{\dot {\theta _{r}}}+K_{\psi \theta }\theta _{r}=M_{\psi }{\mbox{,}}} where θ r {\displaystyle \theta _{r}} is the lean angle of the rear assembly, ψ {\displaystyle \psi } is the steer angle of the front assembly relative to the rear assembly and M θ {\displaystyle M_{\theta }} and M ψ {\displaystyle M_{\psi }} are the moments (torques) applied at the rear assembly and the steering axis, respectively. For the analysis of an uncontrolled bike, both are taken to be zero. These can be represented in matrix form as M q ¨ + C q ˙ + K q = f {\displaystyle M\mathbf {\ddot {q}} +C\mathbf {\dot {q}} +K\mathbf {q} =\mathbf {f} } where M {\displaystyle M} is the symmetrical mass matrix which contains terms that include only the mass and geometry of the bike, C {\displaystyle C} is the so-called damping matrix, even though an idealized bike has no dissipation, which contains terms that include the forward speed v {\displaystyle v} and is asymmetric, K {\displaystyle K} is the so-called stiffness matrix which contains terms that include the gravitational constant g {\displaystyle g} and v 2 {\displaystyle v^{2}} and is symmetric in g {\displaystyle g} and asymmetric in v 2 {\displaystyle v^{2}} , q {\displaystyle \mathbf {q} } is a vector of lean angle and steer angle, and f {\displaystyle \mathbf {f} } is a vector of external forces, the moments mentioned above. In this idealized and linearized model, there are many geometric parameters (wheelbase, head angle, mass of each body, wheel radius, etc.), but only four significant variables: lean angle, lean rate, steer angle, and steer rate. These equations have been verified by comparison with multiple numeric models derived completely independently. 
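The matrix form above can be turned directly into the eigenvalue analysis described in the Eigenvalues subsection below. The following Python sketch is a generic illustration, not the published benchmark: it assumes the speed dependence described in the text (damping terms proportional to forward speed, stiffness terms containing g and v²) and leaves the design-specific matrices to be supplied by the reader.

```python
import numpy as np

G = 9.81  # acceleration of gravity, m/s^2

def lean_steer_eigenvalues(M, C1, K0, K2, v):
    """Eigenvalues of the linearized lean/steer equations M q'' + C q' + K q = 0,
    written as a first-order system in (lean, steer, lean rate, steer rate).
    This assumes the speed dependence described in the text: a damping matrix
    proportional to forward speed, C = v * C1, and a stiffness matrix with a
    gravitational part and a v^2 part, K = G * K0 + v**2 * K2."""
    C = v * C1
    K = G * K0 + v ** 2 * K2
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-Minv @ K, -Minv @ C]])
    return np.linalg.eigvals(A)

def is_self_stable(M, C1, K0, K2, v):
    """A design is self-stable at speed v if every eigenvalue has a negative real part."""
    return bool(np.all(lean_steer_eigenvalues(M, C1, K0, K2, v).real < 0))

# The 2x2 matrices M, C1, K0 and K2 must be computed from the geometry and mass
# distribution of a specific bicycle (they are not supplied here).  Sweeping v,
# e.g. for v in np.arange(0.0, 10.0, 0.25): print(v, is_self_stable(M, C1, K0, K2, v)),
# brackets the weave and capsize speeds discussed under Eigenvalues below.
```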
The equations show that the bicycle is like an inverted pendulum with the lateral position of its support controlled by terms representing roll acceleration, roll velocity and roll displacement to steering torque feedback. The roll acceleration term is normally of the wrong sign for self-stabilization and can be expected to be important mainly in respect of wobble oscillations. The roll velocity feedback is of the correct sign, is gyroscopic in nature, being proportional to speed, and is dominated by the front wheel contribution. The roll displacement term is the most important one and is mainly controlled by trail, steering rake and the offset of the front frame mass center from the steering axis. All the terms involve complex combinations of bicycle design parameters and sometimes the speed. The limitations of the benchmark bicycle are considered and extensions to the treatments of tires, frames and riders, and their implications, are included. Optimal rider controls for stabilization and path-following control are also discussed. ==== Eigenvalues ==== It is possible to calculate eigenvalues, one for each of the four state variables (lean angle, lean rate, steer angle, and steer rate), from the linearized equations in order to analyze the normal modes and self-stability of a particular bike design. In the plot to the right, eigenvalues of one particular bicycle are calculated for forward speeds of 0–10 m/s (22 mph). When the real parts of all eigenvalues (shown in dark blue) are negative, the bike is self-stable. When the imaginary parts of any eigenvalues (shown in cyan) are non-zero, the bike exhibits oscillation. The eigenvalues are point symmetric about the origin and so any bike design with a self-stable region in forward speeds will not be self-stable going backwards at the same speed. There are three forward speeds that can be identified in the plot to the right at which the motion of the bike changes qualitatively: The forward speed at which oscillations begin, at about 1 m/s (2.2 mph) in this example, sometimes called the double root speed due to there being a repeated root to the characteristic polynomial (two of the four eigenvalues have exactly the same value). Below this speed, the bike simply falls over as an inverted pendulum does. The forward speed at which oscillations do not increase, where the weave mode eigenvalues switch from positive to negative in a Hopf bifurcation at about 5.3 m/s (12 mph) in this example, is called the weave speed. Below this speed, oscillations increase until the uncontrolled bike falls over. Above this speed, oscillations eventually die out. The forward speed at which non-oscillatory leaning increases, where the capsize mode eigenvalues switch from negative to positive in a pitchfork bifurcation at about 8 m/s (18 mph) in this example, is called the capsize speed. Above this speed, this non-oscillating lean eventually causes the uncontrolled bike to fall over. Between these last two speeds, if they both exist, is a range of forward speeds at which the particular bike design is self-stable. In the case of the bike whose eigenvalues are shown here, the self-stable range is 5.3–8.0 m/s (12–18 mph). The fourth eigenvalue, which is usually stable (very negative), represents the castoring behavior of the front wheel, as it tends to turn towards the direction in which the bike is traveling. Note that this idealized model does not exhibit the wobble or shimmy and rear wobble instabilities described above. 
They are seen in models that incorporate tire interaction with the ground or other degrees of freedom. Experimentation with real bikes has so far confirmed the weave mode predicted by the eigenvalues. It was found that tire slip and frame flex are not important for the lateral dynamics of the bicycle in the speed range up to 6 m/s. ==== Modes ==== Bikes, as complex mechanisms, have a variety of modes: fundamental ways that they can move. These modes can be stable or unstable, depending on the bike parameters and its forward speed. In this context, "stable" means that an uncontrolled bike will continue rolling forward without falling over as long as forward speed is maintained. Conversely, "unstable" means that an uncontrolled bike will eventually fall over, even if forward speed is maintained. The modes can be differentiated by the speed at which they switch stability and the relative phases of leaning and steering as the bike experiences that mode. Any bike motion consists of a combination of various amounts of the possible modes, and there are three main modes that a bike can experience: capsize, weave, and wobble. A lesser known mode is rear wobble, and it is usually stable. ===== Capsize ===== Capsize is falling over without oscillation. During capsize, an uncontrolled front wheel usually steers in the direction of lean, but never enough to stop the increasing lean, until a very high lean angle is reached, at which point the steering may turn in the opposite direction. A capsize can happen very slowly if the bike is moving forward rapidly. Because the capsize instability is so slow, on the order of seconds, it is easy for the rider to control, and is actually used by the rider to initiate the lean necessary for a turn. For most bikes, depending on geometry and mass distribution, capsize is stable at low speeds, and becomes less stable as speed increases until it is no longer stable. However, on many bikes, tire interaction with the pavement is sufficient to prevent capsize from becoming unstable at high speeds. ===== Weave ===== Weave is a slow (0–4 Hz) oscillation between leaning left and steering right, and vice versa. The entire bike is affected with significant changes in steering angle, lean angle (roll), and heading angle (yaw). The steering is 180° out of phase with the heading and 90° out of phase with the leaning. For most bikes, depending on geometry and mass distribution, weave is unstable at low speeds, and becomes less pronounced as speed increases until it is no longer unstable. While the amplitude may decrease, the frequency actually increases with speed. ===== Wobble or shimmy ===== Wobble, shimmy, tank-slapper, speed wobble, and death wobble are all words and phrases used to describe a rapid (4–10 Hz) oscillation of primarily just the front end (front wheel, fork, and handlebars). Also involved is the yawing of the rear frame, which may contribute to the wobble when too flexible. This instability occurs mostly at high speed and is similar to that experienced by shopping cart wheels, airplane landing gear, and automobile front wheels. While wobble or shimmy can be easily remedied by adjusting speed, position, or grip on the handlebar, it can be fatal if left uncontrolled. Wobble or shimmy begins when some otherwise minor irregularity, such as fork asymmetry, accelerates the wheel to one side. The restoring force is applied in phase with the progress of the irregularity, and the wheel turns to the other side where the process is repeated.
If there is insufficient damping in the steering, the oscillation will increase until system failure occurs. The oscillation frequency can be changed by changing the forward speed, making the bike stiffer or lighter, or increasing the stiffness of the steering, of which the rider is a main component. ===== Rear wobble ===== The term rear wobble is used to describe a mode of oscillation in which lean angle (roll) and heading angle (yaw) are almost in phase and both 180° out of phase with steer angle. The rate of this oscillation is moderate with a maximum of about 6.5 Hz. Rear wobble is heavily damped and falls off quickly as bike speed increases. ===== Design criteria ===== The effect that the design parameters of a bike have on these modes can be investigated by examining the eigenvalues of the linearized equations of motion. For more details on the equations of motion and eigenvalues, see the section on the equations of motion above. Some general conclusions that have been drawn are described here. The lateral and torsional stiffness of the rear frame and the wheel spindle affect wobble-mode damping substantially. Long wheelbase and trail and a flat steering-head angle have been found to increase weave-mode damping. Lateral distortion can be countered by locating the front fork torsional axis as low as possible. Cornering weave tendencies are amplified by degraded damping of the rear suspension. The cornering stiffness, camber stiffness, and relaxation length of the rear tire make the largest contribution to weave damping. The same parameters of the front tire have a lesser effect. Rear loading also amplifies cornering weave tendencies. Rear load assemblies with appropriate stiffness and damping, however, were successful in damping out weave and wobble oscillations. One study has shown theoretically that, while a bike is leaned in a turn, road undulations can excite the weave mode at high speed or the wobble mode at low speed if either of their frequencies matches the vehicle speed and other parameters. Excitation of the wobble mode can be mitigated by an effective steering damper and excitation of the weave mode is worse for light riders than for heavy riders. === Riding on treadmills and rollers === Riding on a treadmill is theoretically identical to riding on stationary pavement, and physical testing has confirmed this. Treadmills have been developed specifically for indoor bicycle training. Riding on rollers is still under investigation. === Other hypotheses === Although bicycles and motorcycles can appear to be simple mechanisms with only four major moving parts (frame, fork, and two wheels), these parts are arranged in a way that makes them complicated to analyze. While it is an observable fact that bikes can be ridden even when the gyroscopic effects of their wheels are canceled out, the hypothesis that the gyroscopic effects of the wheels are what keep a bike upright is common in print and online. Examples in print: "Angular momentum and motorcycle counter-steering: A discussion and demonstration", A. J. Cox, Am. J. Phys. 66, 1018–1021 (1998); "The motorcycle as a gyroscope", J. Higbie, Am. J. Phys. 42, 701–702; The Physics of Everyday Phenomena, W. T. Griffith, McGraw–Hill, New York, 1998, pp. 149–150; The Way Things Work, Macaulay, Houghton-Mifflin, New York, NY, 1989. == Longitudinal dynamics == Bikes may experience a variety of longitudinal forces and motions.
On most bikes, when the front wheel is turned to one side or the other, the entire rear frame pitches forward slightly, depending on the steering axis angle and the amount of trail. On bikes with suspensions, either front, rear, or both, trim is used to describe the geometric configuration of the bike, especially in response to forces of braking, accelerating, turning, drive train, and aerodynamic drag. The load borne by the two wheels varies not only with center of mass location, which in turn varies with the number of passengers, the amount of luggage, and the location of passengers and luggage, but also with acceleration and deceleration. This phenomenon is known as load transfer or weight transfer, depending on the author, and provides challenges and opportunities to both riders and designers. For example, motorcycle racers can use it to increase the friction available to the front tire when cornering, and attempts to reduce front suspension compression during heavy braking have spawned several motorcycle fork designs. The net aerodynamic drag forces may be considered to act at a single point, called the center of pressure. At high speeds, this will create a net moment about the rear driving wheel and result in a net transfer of load from the front wheel to the rear wheel. Also, depending on the shape of the bike and the shape of any fairing that might be installed, aerodynamic lift may be present that either increases or further reduces the load on the front wheel. === Stability === Though longitudinally stable when stationary, a bike may become longitudinally unstable under sufficient acceleration or deceleration, and Euler's second law can be used to analyze the ground reaction forces generated. For example, the normal (vertical) ground reaction forces at the wheels for a bike with a wheelbase L {\displaystyle L} and a center of mass at height h {\displaystyle h} and at a distance b {\displaystyle b} in front of the rear wheel hub, and for simplicity, with both wheels locked, can be expressed as: N r = m g ( L − b L − μ h L ) {\displaystyle N_{r}=mg\left({\frac {L-b}{L}}-\mu {\frac {h}{L}}\right)} for the rear wheel and N f = m g ( b L + μ h L ) {\displaystyle N_{f}=mg\left({\frac {b}{L}}+\mu {\frac {h}{L}}\right)} for the front wheel. The frictional (horizontal) forces are simply F r = μ N r {\displaystyle F_{r}=\mu N_{r}\,} for the rear wheel and F f = μ N f {\displaystyle F_{f}=\mu N_{f}\,} for the front wheel, where μ {\displaystyle \mu } is the coefficient of friction, m {\displaystyle m} is the total mass of the bike and rider, and g {\displaystyle g} is the acceleration of gravity. Therefore, if μ ≥ L − b h , {\displaystyle \mu \geq {\frac {L-b}{h}},} which occurs if the center of mass is anywhere above or in front of a line extending back from the front wheel contact patch and inclined at the angle θ = tan − 1 ⁡ ( 1 μ ) {\displaystyle \theta =\tan ^{-1}\left({\frac {1}{\mu }}\right)\,} above the horizontal, then the normal force of the rear wheel will be zero (at which point the equation no longer applies) and the bike will begin to flip or loop forward over the front wheel. On the other hand, if the center of mass height is behind or below the line, such as on most tandem bicycles or long-wheel-base recumbent bicycles, as well as cars, it is less likely that the front wheel can generate enough braking force to flip the bike.
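The locked-wheel expressions above are easy to evaluate numerically. The Python sketch below computes the two normal forces and checks the pitch-over condition; the mass, wheelbase, center-of-mass location, and friction coefficients are illustrative values only.

```python
G = 9.81  # acceleration of gravity, m/s^2

def wheel_loads(mass_kg, wheelbase_m, b_m, h_m, mu):
    """Normal forces at the rear and front wheels with both wheels locked,
    from N_r = m g ((L - b)/L - mu h/L) and N_f = m g (b/L + mu h/L) above;
    b is measured forward from the rear hub and h is the center-of-mass height."""
    L = wheelbase_m
    n_rear = mass_kg * G * ((L - b_m) / L - mu * h_m / L)
    n_front = mass_kg * G * (b_m / L + mu * h_m / L)
    return n_rear, n_front

def will_pitch_over(wheelbase_m, b_m, h_m, mu):
    """Pitch-over (looping) threshold mu >= (L - b)/h from the text above."""
    return mu >= (wheelbase_m - b_m) / h_m

# Illustrative upright bicycle plus rider: 85 kg total, 1.02 m wheelbase,
# center of mass 0.42 m ahead of the rear hub and 1.2 m above the ground,
# i.e. 0.6 m behind the front wheel contact patch.
print(wheel_loads(85.0, 1.02, 0.42, 1.2, 0.3))  # moderate braking: both loads positive
print(will_pitch_over(1.02, 0.42, 1.2, 0.8))    # True: mu = 0.8 exceeds (L - b)/h = 0.5
```

With these example numbers the threshold friction coefficient is 0.5, which is consistent with the roughly 0.5 g maximum deceleration quoted for a typical upright bicycle in the Front wheel braking section below.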
This means that such long or low bikes can decelerate up to nearly the limit of adhesion of the tires to the road, which could reach 0.8 g if the coefficient of friction is 0.8, which is 40% more than an upright bicycle under even the best conditions. Bicycling Science author David Gordon Wilson points out that this puts upright bicyclists at particular risk of causing a rear-end collision if they tailgate cars. Similarly, powerful motorcycles can generate enough torque at the rear wheel to lift the front wheel off the ground in a maneuver called a wheelie. A line similar to the one described above to analyze braking performance can be drawn from the rear wheel contact patch to predict if a wheelie is possible given the available friction, the center of mass location, and sufficient power. This can also happen on bicycles, although there is much less power available, if the center of mass is back or up far enough or the rider lurches back when applying power to the pedals. Of course, the angle of the terrain can influence all of the calculations above. All else remaining equal, the risk of pitching over the front end is reduced when riding up hill and increased when riding down hill. The possibility of performing a wheelie increases when riding up hill, and is a major factor in motorcycle hillclimbing competitions. === Braking according to ground conditions === When braking, the rider in motion is seeking to change the speed of the combined mass m of rider plus bike. This is a negative acceleration a in the line of travel. By F = ma, the deceleration a corresponds to an inertial forward force F on mass m. The braking a is from an initial speed u to a final speed v, over a length of time t. The equation u − v = at implies that the greater the deceleration, the shorter the time needed to change speed. The stopping distance s is also shortest when deceleration a is at the highest possible value compatible with road conditions: the equation s = ut − 1/2 at² makes s small when a is high and t is low. How much braking force to apply to each wheel depends both on ground conditions and on the balance of weight on the wheels at each instant in time. The total braking force cannot exceed the gravity force on the rider and bike times the coefficient of friction μ of the tire on the ground: mgμ ≥ Ff + Fr. A skid occurs if the ratio of either Ff over Nf or Fr over Nr is greater than μ, with a rear wheel skid having less of a negative impact on lateral stability. When braking, the inertial force ma in the line of travel, not being co-linear with the front wheel ground contact point f, tends to rotate m about f. This tendency to rotate, an overturning moment, is resisted by a moment from mg. Taking moments about the front wheel contact point at an instant in time: When there is no braking, mass m is typically above the bottom bracket, about 2/3 of the way back between the front and rear wheels, with Nr thus greater than Nf. In constant light braking, whether because an emergency stop is not required or because poor ground conditions prevent heavy braking, much weight still rests on the rear wheel, meaning that Nr is still large and Fr can contribute towards a. As braking a increases, Nr and Fr decrease because the moment mah increases with a. At maximum constant a, clockwise and anti-clockwise moments are equal, at which point Nr=0. Any greater Ff initiates a stoppie. Other factors: Downhill it is much easier to topple over the front wheel because the incline moves the line of mg closer to f.
To try to reduce this tendency the rider can stand back on the pedals to try to keep m as far back as possible. When braking is increasing the center of mass m may move forward relative to the front wheel, as the rider moves forward relative to the bike, and, if the bike has suspension on the front wheel, the front forks compress under load, changing the bike geometry. This all puts extra load on the front wheel. At the end of a brake maneuver, as the rider comes to a halt, the suspension decompresses and pushes the rider back. Values for μ vary greatly depending on a number of factors: The material that the ground or road surface is made of. Whether the ground is wet or dry. The temperature of the tyre and ground. The smoothness or roughness of the ground. The firmness or looseness of the ground. The speed of the vehicle, with friction reducing above 30 mph (50 km/h). Whether friction is rolling or sliding, with sliding friction at least 10% below peak rolling friction. === Braking === Most of the braking force of standard upright bikes comes from the front wheel. As the analysis above shows, if the brakes themselves are strong enough, the rear wheel is easy to skid, while the front wheel often can generate enough stopping force to flip the rider and bike over the front wheel. This is called a stoppie if the rear wheel is lifted but the bike does not flip, or an endo (abbreviated form of end-over-end) if the bike flips. On long or low bikes, however, such as cruiser motorcycles and recumbent bicycles, the front tire will skid instead, possibly causing a loss of balance. Assuming no loss of balance, it is possible to calculate optimum braking performance depending on the bike's geometry, the location of center of gravity of bike and rider, and the maximum coefficient of friction. In the case of a front suspension, especially telescoping fork tubes, the increase in downward force on the front wheel during braking may cause the suspension to compress and the front end to lower. This is known as brake diving. A riding technique that takes advantage of how braking increases the downward force on the front wheel is known as trail braking. ==== Front wheel braking ==== The limiting factors on the maximum deceleration in front wheel braking are: the maximum, limiting value of static friction between the tire and the ground, often between 0.5 and 0.8 for rubber on dry asphalt, the kinetic friction between the brake pads and the rim or disk, and pitching or looping (of bike and rider) over the front wheel. For an upright bicycle on dry asphalt with excellent brakes, pitching will probably be the limiting factor. The combined center of mass of a typical upright bicycle and rider will be about 60 cm (24 in) back from the front wheel contact patch and 120 cm (47 in) above, allowing a maximum deceleration of 0.5 g (5 m/s2 or 16 ft/s2). If the rider modulates the brakes properly, however, pitching can be avoided. If the rider moves his weight back and down, even larger decelerations are possible. ==== Rear-wheel braking ==== The rear brake of an upright bicycle can only produce about 0.25 g (≈2.5 m/s2) deceleration at best, because of the decrease in normal force at the rear wheel as described above. All such bikes with only rear braking are subject to this limitation: for example, bikes with only a coaster brake, and fixed-gear bikes with no other braking mechanism. There are, however, situations that may warrant rear wheel braking Slippery surfaces or bumpy surfaces. 
Under front wheel braking, the lower coefficient of friction may cause the front wheel to skid, which often results in a loss of balance. Front flat tire. Braking a wheel with a flat tire can cause the tire to come off the rim, which greatly reduces friction and, in the case of a front wheel, results in a loss of balance. To deliberately induce a rear wheel skid to induce oversteer and achieve a smaller turn radius on tight turns. Front brake failure. Recumbent bicycles. Long-wheelbase recumbents require a good rear brake as the CG is near the rear wheel. ==== Braking technique ==== Expert opinion varies from "use both levers equally at first" to "the fastest that you can stop any bike of normal wheelbase is to apply the front brake so hard that the rear wheel is just about to lift off the ground", depending on road conditions, rider skill level, and desired fraction of maximum possible deceleration. The SureStop System uses a sliding mechanism to enable the front brakes to be actuated by the friction applied to the back brake shoes by the rotation of the rear wheel. This is designed to match the braking friction to the road conditions so as to mitigate the risk of going over the handlebars. == Suspension == Bikes may have front suspension only, rear suspension only, full suspension, or no suspension. Suspension systems operate primarily in the central plane of symmetry, though some consideration is also given to lateral compliance. The goals of a bike suspension are to reduce vibration experienced by the rider, maintain wheel contact with the ground, reduce the loss of momentum when riding over an object, reduce impact forces caused by jumps or drops, and maintain vehicle trim. The primary suspension parameters are stiffness, damping, sprung and unsprung mass, and tire characteristics. == Vibration == The study of vibrations in bikes includes their causes, such as engine balance, wheel balance, ground surface, and aerodynamics; their transmission and absorption; and their effects on the bike, the rider, and safety. An important factor in any vibration analysis is a comparison of the natural frequencies of the system with the possible driving frequencies of the vibration sources. A close match means mechanical resonance that can result in large amplitudes. A challenge in vibration damping is to create compliance in certain directions (vertically) without sacrificing the frame rigidity needed for power transmission and handling (torsionally). Another issue with vibration for the bike is the possibility of failure due to material fatigue. Effects of vibration on riders include discomfort, loss of efficiency, Hand-Arm Vibration Syndrome (a secondary form of Raynaud's disease), and whole body vibration. Vibrating instruments may be inaccurate or difficult to read. === In bicycles === The primary cause of vibrations in a properly functioning bicycle is the surface over which it rolls. In addition to pneumatic tires and traditional bicycle suspensions, a variety of techniques have been developed to damp vibrations before they reach the rider. These include materials, such as carbon fiber, either in the whole frame or just key components such as the front fork, seatpost, or handlebars; tube shapes, such as curved seat stays; gel handlebar grips and saddles; and special inserts, such as Zertz by Specialized and Buzzkills by Bontrager. === In motorcycles === In addition to the road surface, vibrations in a motorcycle can be caused by the engine and wheels, if unbalanced. A toy numerical comparison of natural and driving frequencies is sketched below.
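The resonance comparison described in the vibration section above can be illustrated with a minimal sketch; the stiffness, sprung mass, speed and bump spacing are all assumed values chosen only for demonstration.

```python
import math

def natural_frequency_hz(stiffness, sprung_mass):
    """Undamped natural frequency f_n = sqrt(k/m) / (2*pi) of a simple
    spring-mass model of the suspension (k in N/m, m in kg)."""
    return math.sqrt(stiffness / sprung_mass) / (2 * math.pi)

def bump_frequency_hz(speed, bump_spacing):
    """Driving frequency produced by regularly spaced surface features
    (speed in m/s, spacing in m)."""
    return speed / bump_spacing

# Illustrative assumptions: 5000 N/m effective stiffness, 80 kg sprung mass,
# riding at 10 m/s over expansion joints spaced 8 m apart.
f_n = natural_frequency_hz(stiffness=5000.0, sprung_mass=80.0)
f_d = bump_frequency_hz(speed=10.0, bump_spacing=8.0)
resonance_risk = abs(f_d - f_n) / f_n < 0.2     # crude "close match" test
print(f"f_n = {f_n:.2f} Hz, f_drive = {f_d:.2f} Hz, resonance risk: {resonance_risk}")
```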
Manufacturers employ a variety of technologies to reduce or damp these vibrations, such as engine balance shafts, rubber engine mounts, and tire weights. The problems that vibration causes have also spawned an industry of after-market parts and systems designed to reduce it. Add-ons include handlebar weights, isolated foot pegs, and engine counterweights. At high speeds, motorcycles and their riders may also experience aerodynamic flutter or buffeting. This can be abated by changing the air flow over key parts, such as the windshield. == Experimentation == A variety of experiments verify or disprove various hypotheses about bike dynamics. David Jones built several bikes in a search for an unrideable configuration. Richard Klein built several bikes to confirm Jones' findings. Richard Klein also built a "Torque Wrench Bike" and a "Rocket Bike" to investigate steering torques and their effects. Keith Code built a motorcycle with fixed handlebars to investigate the effects of rider motion and position on steering. Schwab and Kooijman have performed measurements with an instrumented bike. Hubbard and Moore have performed measurements with an instrumented bike. == See also == == References == == Further reading == 'An Introduction to Bicycle Geometry and Handling', Karl Anderson 'What keeps the bicycle upright?' by Jobst Brandt 'Report on Stability of the Dahon Bicycle' Archived 2021-03-09 at the Wayback Machine by John Forester == External links == Videos: Video of riderless bicycle demonstrating self-stability Why bicycles do not fall: Arend Schwab at TEDx Delft 2012 Wobble movie (AVI) Weave movie (AVI) Wobble Crash (Flash) Video on Science Friday Research centers: Bicycle Dynamics at Delft University of Technology Bicycle Mechanics at Cornell University Bicycle Science at the University of Illinois Motorcycle Dynamics at the University of Padova Control and Power Research Group at Imperial College Bicycle dynamics, control and handling at UC Davis Bicycle and Motorcycle Engineering Research Laboratory at the University of Wisconsin-Milwaukee Conferences: Single Track Vehicle Dynamics at DSCC 2012: two sessions at the ASME Dynamic Systems and Control Conference in Fort Lauderdale, Florida, USA, October 17–19, 2012 Bicycle and Motorcycle Dynamics 2013 Archived 2021-01-20 at the Wayback Machine: Symposium on Dynamics and Control of Single Track Vehicles, Nihon University, Nov 11–13, 2013 Bicycle and Motorcycle Dynamics Conference: Summary page
Wikipedia/Bicycle_and_motorcycle_dynamics
Flight dynamics, in aviation and spaceflight, is the study of the performance, stability, and control of vehicles flying through the air or in outer space. It is concerned with how forces acting on the vehicle determine its velocity and attitude with respect to time. For a fixed-wing aircraft, its changing orientation with respect to the local air flow is represented by two critical angles, the angle of attack of the wing ("alpha") and the angle of attack of the vertical tail, known as the sideslip angle ("beta"). A sideslip angle will arise if an aircraft yaws about its centre of gravity and if the aircraft sideslips bodily, i.e. the centre of gravity moves sideways. These angles are important because they are the principal source of changes in the aerodynamic forces and moments applied to the aircraft. Spacecraft flight dynamics involve three main forces: propulsive (rocket engine), gravitational, and atmospheric resistance. Propulsive force and atmospheric resistance generally have significantly less influence on a spacecraft than gravitational forces. == Aircraft == Flight dynamics is the science of air-vehicle orientation and control in three dimensions. The critical flight dynamics parameters are the angles of rotation about the aircraft's three principal axes through its center of gravity, known as roll, pitch and yaw. Aircraft engineers develop control systems for a vehicle's orientation (attitude) about its center of gravity. The control systems include actuators, which exert forces in various directions, and generate rotational forces or moments about the center of gravity of the aircraft, and thus rotate the aircraft in pitch, roll, or yaw. For example, a pitching moment comes from a vertical force applied at a distance forward or aft of the center of gravity of the aircraft, causing the aircraft to pitch up or down. Roll, pitch and yaw refer, in this context, to rotations about the respective axes starting from a defined equilibrium state. The equilibrium roll angle is known as wings level or zero bank angle, equivalent to a level heeling angle on a ship. Yaw is known as "heading". A fixed-wing aircraft increases or decreases the lift generated by the wings when it pitches nose up or down by increasing or decreasing the angle of attack (AOA). The roll angle is also known as bank angle on a fixed-wing aircraft, which usually "banks" to change the horizontal direction of flight. An aircraft is streamlined from nose to tail to reduce drag, making it advantageous to keep the sideslip angle near zero, though aircraft are deliberately "side-slipped" when landing in a cross-wind, as explained in slip (aerodynamics). == Spacecraft and satellites == The forces acting on space vehicles are of three types: propulsive force (usually provided by the vehicle's engine thrust); gravitational force exerted by the Earth and other celestial bodies; and aerodynamic lift and drag (when flying in the atmosphere of the Earth or another body, such as Mars or Venus). The vehicle's attitude must be controlled during powered atmospheric flight because of its effect on the aerodynamic and propulsive forces. There are other reasons, unrelated to flight dynamics, for controlling the vehicle's attitude in non-powered flight (e.g., thermal control, solar power generation, communications, or astronomical observation).
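As an aside, the roll, pitch and yaw angles introduced in the aircraft section above combine into a single attitude description. The following is a minimal sketch using the common aerospace Z-Y-X (yaw-pitch-roll) rotation sequence; the numerical attitude chosen is purely illustrative.

```python
import math

def body_to_earth_dcm(roll, pitch, yaw):
    """Direction cosine matrix for the conventional aerospace Z-Y-X
    (yaw-pitch-roll) rotation sequence, angles in radians. It maps
    body-frame vectors into the Earth-fixed (north-east-down) frame."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cp * cy, sr * sp * cy - cr * sy, cr * sp * cy + sr * sy],
        [cp * sy, sr * sp * sy + cr * cy, cr * sp * sy - sr * cy],
        [-sp,     sr * cp,                cr * cp],
    ]

# Example attitude: wings level, 5 degrees nose-up, heading 30 degrees.
for row in body_to_earth_dcm(roll=0.0, pitch=math.radians(5), yaw=math.radians(30)):
    print(["{:+.3f}".format(v) for v in row])
```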
The flight dynamics of spacecraft differ from those of aircraft in that the aerodynamic forces are of very small, or vanishingly small effect for most of the vehicle's flight, and cannot be used for attitude control during that time. Also, most of a spacecraft's flight time is usually unpowered, leaving gravity as the dominant force. == See also == Aerodynamics – Branch of dynamics concerned with studying the motion of air Aircraft flight control system – How aircraft are controlled Fixed-wing aircraft – Heavier-than-air aircraft with fixed wings generating aerodynamic lift Flight control surfaces – Surface that allows a pilot to adjust and control an aircraft's flight attitude Flight dynamics (fixed-wing aircraft) – Science of air vehicle orientation and control in three dimensionsPages displaying short descriptions of redirect targets Moving frame – Generalization of an ordered basis of a vector space == References ==
Wikipedia/Flight_dynamics
Electronic stability control (ESC), also referred to as electronic stability program (ESP) or dynamic stability control (DSC), is a computerized technology that improves a vehicle's stability by detecting and reducing loss of traction (skidding). When ESC detects loss of steering control, it automatically applies the brakes to help steer the vehicle where the driver intends to go. Braking is automatically applied to wheels individually, such as the outer front wheel to counter oversteer, or the inner rear wheel to counter understeer. Some ESC systems also reduce engine power until control is regained. ESC does not improve a vehicle's cornering performance; instead, it helps reduce the chance of the driver losing control of the vehicle on a slippery road. According to the U.S. National Highway Traffic Safety Administration and the Insurance Institute for Highway Safety in 2004 and 2006, one-third of fatal accidents could be prevented by the use of this technology. In Europe the electronic stability program had saved an estimated 15,000 lives as of 2020. ESC became mandatory in new cars in Canada, the US, and the European Union in 2011, 2012, and 2014, respectively. Worldwide, 82 percent of all new passenger cars feature the anti-skid system. == History == In 1983, a four-wheel electronic "Anti-Skid Control" system was introduced on the Toyota Crown. In 1987, Mercedes-Benz, BMW and Toyota introduced their first traction control systems. Traction control works by applying individual wheel braking and throttle to maintain traction under acceleration, but unlike ESC, it is not designed to aid in steering. In 1990, Mitsubishi released the Diamante in Japan. Developed to help the driver maintain the intended line through a corner; an onboard computer monitored several vehicle operating parameters through various sensors. When too much throttle had been used when taking a curve, engine output and braking were automatically regulated to ensure the proper line through a curve and to provide the proper amount of traction under various road surface conditions. While conventional traction control systems at the time featured only a slip control function, Mitsubishi's TCL system had an active safety function, which improved course tracing performance by automatically adjusting the traction force (called "trace control"), thereby restraining the development of excessive lateral acceleration while turning. Although not a ‘proper’ modern stability control system, trace control monitors steering angle, throttle position and individual wheel speeds, although there is no yaw input. The TCL system's standard wheel slip control function enabled better traction on slippery surfaces or during cornering. In addition to the system's individual effect, it also worked together with the Diamante's electronically controlled suspension and four-wheel steering to improve total handling and performance. BMW, working with Bosch and Continental, developed a system to reduce engine torque to prevent loss of control and applied it to most of the BMW model line for 1992, excluding the E30 and E36. This system could be ordered with the winter package, which came with a limited-slip differential, heated seats, and heated mirrors. From 1987 to 1992, Mercedes-Benz and Bosch co-developed a system called Elektronisches Stabilitätsprogramm ("Electronic Stability Program", trademarked as ESP) to control lateral slippage. === Introduction, second generation === In 1995, three automobile manufacturers introduced ESC systems. 
Mercedes-Benz, supplied by Bosch, was the first to implement ESP with their Mercedes-Benz S 600 Coupé. Toyota's Vehicle Stability Control (VSC) system appeared on the Toyota Crown Majesta in 1995. General Motors worked with Delphi Automotive and introduced its version of ESC, called "StabiliTrak", in 1996 for the 1997 model year on select Cadillac models. StabiliTrak was made standard equipment on all GM SUVs and vans sold in the U.S. and Canada by 2007, except for certain commercial and fleet vehicles. While the StabiliTrak name is used on most General Motors vehicles for the U.S. market, "Electronic Stability Control" is used for GM's overseas brands, such as Opel, Holden and Saab, except in the cases of Saab's 9-7X and 9-4X (which also use the StabiliTrak name). The same year, Cadillac introduced an integrated vehicle handling and software control system called the Integrated Chassis Control System (ICCS), on the Cadillac Eldorado. It involves an omnibus computer integration of engine, traction control, Stabilitrak electronic stability control, steering, and adaptive continuously variable road sensing suspension (CVRSS), with the intent of improving responsiveness to driver input, performance, and overall safety, similar to Toyota/Lexus Vehicle Dynamics Integrated Management. In 1997, Audi introduced the first series production ESP for all-wheel drive vehicles (Audi A8 and Audi A6 with quattro (four-wheel drive system)). In 1998, Volvo Cars began to offer their version of ESC called Dynamic Stability and Traction Control (DSTC) on the new Volvo S80. Meanwhile, others investigated and developed their own systems. During a moose test, Swedish journalist Robert Collin of Teknikens Värld rolled a Mercedes A-Class (without ESC) at 78 km/h in October 1997. Because Mercedes Benz promoted a reputation for safety, they recalled and retrofitted 130,000 A-Class cars with firmer suspension and sportier tyres; all newly produced A- class featured ESC as standard along with the upgraded suspension and wheels. This produced a significant reduction in crashes, and the number of vehicles with ESC rose. The availability of ESC in small cars like the A-Class ignited a market trend; thus, ESC became available for all models (whether standard or as an option). Ford's version of ESC, called AdvanceTrac, was launched in the year 2000. Ford later added Roll Stability Control to AdvanceTrac which was first introduced in the Volvo XC90 in 2003. It has been implemented in many Ford vehicles since. Ford and Toyota announced that all their North American vehicles would be equipped with ESC standard by the end of 2009 (it was standard on Toyota SUVs as of 2004, and after the 2011 model year, all Lexus, Toyota, and Scion vehicles had ESC; the last one to get it was the 2011 model-year Scion tC). However, as of November 2010, Ford still sold models in North America without ESC. General Motors had made a similar announcement for the end of 2010. === Third generation and after === In 2003 in Sweden the purchase rate on new cars with ESC was 15%. The Swedish road safety administration issued a strong ESC recommendation and in September 2004, 16 months later, the purchase rate was 58%. A stronger ESC recommendation was then given and in December 2004, the purchase rate on new cars had reached 69% and by 2008 it had grown to 96%. ESC advocates around the world are promoting increased ESC use through legislation and public awareness campaigns and by 2012, most new vehicles should be equipped with ESC. 
=== Legislation === In 2009, the European Union decided to make ESC mandatory. Since November 1, 2011, EU type approval is only granted to models equipped with ESC. Since November 1, 2014, ESC has been required on all newly registered cars in the EU. The NHTSA required all new passenger vehicles sold in the US to be equipped with ESC as of the 2012 model year, and estimated it will prevent 5,300–9,600 annual fatalities. == Concept and operation == During normal driving, ESC continuously monitors steering and vehicle direction. It compares the driver's intended direction (determined by the measured steering wheel angle) to the vehicle's actual direction (determined through measured lateral acceleration, vehicle rotation, and individual road wheel speeds). === Normal operation === ESC intervenes only when it detects a probable loss of steering control, such as when the vehicle is not going where the driver is steering. This may happen, for example, when skidding during emergency evasive swerves, understeer or oversteer during poorly judged turns on slippery roads, or hydroplaning. During high-performance driving, ESC can intervene when unwanted, because steering input may not always be indicative of the intended direction of travel (such as during controlled drifting). ESC estimates the direction of the skid, and then applies the brakes to individual wheels asymmetrically in order to create torque about the vehicle's vertical axis, opposing the skid and bringing the vehicle back in line with the driver's commanded direction. Additionally, the system may reduce engine power or operate the transmission to slow the vehicle down. ESC can function on any surface, from dry pavement to frozen lakes. It reacts to and corrects skidding much faster and more effectively than the typical human driver, often before the driver is even aware of any imminent loss of control. This has led to some concern that ESC could allow drivers to become overconfident in their vehicle's handling and/or their own driving skills. For this reason, ESC systems typically alert the driver when they intervene, so that the driver is aware that the vehicle's handling limits have been reached. Most activate a dashboard indicator light and/or alert tone; some intentionally allow the vehicle's corrected course to deviate very slightly from the driver-commanded direction, even if it is possible to more precisely match it. All ESC manufacturers emphasize that the system is not a performance enhancement nor a replacement for safe driving practices, but rather a safety technology to assist the driver in recovering from dangerous situations. ESC does not increase traction, so it does not enable faster cornering (although it can facilitate better-controlled cornering). More generally, ESC works within the limits of the vehicle's handling and available traction between the tyres and road. A reckless maneuver can still exceed these limits, resulting in loss of control. For example, during hydroplaning, the wheels that ESC would use to correct a skid may lose contact with the road surface, reducing its effectiveness. Due to the fact that stability control can be incompatible with high-performance driving, many vehicles have an override control which allows the system to be partially or fully deactivated. In simple systems, a single button may disable all features, while more complicated setups may have a multi-position switch or may never be fully disengaged. 
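The comparison of intended and measured yaw rate described above can be illustrated with a much-simplified sketch. This is not any manufacturer's algorithm: the reference yaw rate comes from the standard linear single-track (bicycle) model, and the wheelbase, threshold and wheel-selection rule are assumptions made only for illustration.

```python
import math

def intended_yaw_rate(speed, steer_angle, wheelbase, understeer_grad=0.0):
    """Driver-intended yaw rate (rad/s) from the linear single-track model:
    psi_dot = v * delta / (L + K * v^2), where delta is the road-wheel
    steering angle (steering-wheel angle divided by the steering ratio)."""
    return speed * steer_angle / (wheelbase + understeer_grad * speed ** 2)

def esc_decision(speed, steer_angle, measured_yaw_rate,
                 wheelbase=2.7, threshold=0.05):
    """Crude stability check: if the measured yaw rate differs from the
    intended one by more than a threshold, brake a single wheel to create a
    correcting yaw moment, as described in the text (outer front wheel for
    oversteer, inner rear wheel for understeer)."""
    ref = intended_yaw_rate(speed, steer_angle, wheelbase)
    if abs(measured_yaw_rate - ref) < threshold:
        return "no intervention"
    if abs(measured_yaw_rate) > abs(ref):       # rotating faster than intended
        return "oversteer: brake the outer front wheel"
    return "understeer: brake the inner rear wheel"

# Example: 25 m/s, 3 degrees of road-wheel steering, car yawing too fast.
print(esc_decision(speed=25.0, steer_angle=math.radians(3.0),
                   measured_yaw_rate=0.75))
```

A production controller would of course blend the yaw-rate error with lateral acceleration and wheel-speed information and modulate brake pressure continuously, but the wheel-selection logic follows the description given above.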
=== Off-road use === ESC systems—due to their ability to enhance vehicle stability and braking—often work to improve traction in off-road situations, in addition to their on-road duties. The effectiveness of traction control systems can vary significantly, due to the significant number of external and internal factors involved at any given time, as well as the programming and testing performed by the manufacturer. At a rudimentary level, off-road traction varies from typical operational characteristics of on-road traction, depending on the terrain encountered. In an open differential setup, power transfer takes the path of least resistance. In slippery conditions, this means when one wheel loses traction, power will counter-productively be fed to that axle instead of the one with higher grip. ESCs focus on braking wheels that are spinning at a rate drastically different from the opposing axle. While on-road application often supplements rapidly intermittent wheel braking with a reduction of power in loss-of-traction situations, off-road use will typically require consistent (or even increased) power delivery to retain vehicle momentum while the vehicle's braking system applies intermittent braking force over a longer duration to the slipping wheel until excessive wheel-spin is no longer detected. In intermediate level ESC systems, ABS will be disabled, or the computer will actively lock the wheels when brakes are applied. In these systems, or in vehicles without ABS, the performance in emergency braking in slippery conditions is greatly improved as grip state can change extremely rapidly and unpredictably off-road when coupled with inertia. When the brakes are applied and wheels are locked, the tyres do not have to contend with the wheel rolling (providing no braking force) and braking repeatedly. Grip provided by the tyres is constant and as such can make full use of traction wherever it is available. This effect is enhanced where more aggressive tread patterns are present as the large tread lugs dig into the imperfections on the surface or below the substrate, as well as dragging dirt in front of the tyre to increase the rolling resistance even further. Many newer vehicles designed for off-road duties from the factory, are equipped with Hill Descent Control systems to minimise the risk of such runaway events occurring with novice drivers and provide a more consistent and safe descent than either no ABS, or on-road orientated ABS. These systems aim to keep a fixed speed (or user selected speed) while descending, applying strategic braking or acceleration at the correct moments to ensure wheels all rotate at the same rate while applying full locking braking when required. In some vehicles, ESC systems automatically detect whether to operate in off- or on-road mode, depending on the engagement of the 4WD system. Mitsubishi's unique Super-Select 4WD system (found in Pajero, Triton and Pajero Sport models), operates in on-road mode in 2WD as well as 4WD High-range with the centre differential unlocked. However, it automatically activates off-road traction control and disables ABS braking when shifted into 4WD High-range with centre differential locked, or 4WD Low-range with centre differential locked. Most modern vehicles with fully electronically controlled 4WD systems such as various Land Rovers and Range Rovers, also automatically switch to an off-road-orientated mode of stability and traction control once low range, or certain terrain modes are manually selected. 
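The axle-level behaviour described above, braking the wheel that spins up so an open differential can pass torque to the wheel with grip, can be sketched crudely as follows; the speed threshold and brake level are invented for illustration and do not reflect any particular system.

```python
def traction_brake_commands(left_speed, right_speed, diff_threshold=5.0,
                            brake_level=0.5):
    """Brake-based traction control for one axle with an open differential:
    if one wheel spins much faster than the other, apply brake force to the
    spinning wheel so that torque can reach the wheel with grip. Wheel
    speeds are in rad/s; the threshold and brake level are illustrative."""
    if abs(left_speed - right_speed) < diff_threshold:
        return {"left": 0.0, "right": 0.0}      # no excessive wheel spin
    commands = {"left": 0.0, "right": 0.0}
    spinning = "left" if left_speed > right_speed else "right"
    commands[spinning] = brake_level            # moderate brake pressure
    return commands

# One wheel on ice spinning up while the other barely turns:
print(traction_brake_commands(left_speed=40.0, right_speed=12.0))
# -> {'left': 0.5, 'right': 0.0}
```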
== Effectiveness == Numerous studies around the world have confirmed that ESC is highly effective in helping the driver maintain control of the car, thereby saving lives and reducing the probability of occurrence and severity of crashes. In the fall of 2004, the American National Highway and Traffic Safety Administration (NHTSA) confirmed international studies, releasing results of a field study of ESC effectiveness in the USA. The NHTSA concluded that ESC reduces crashes by 35%. Additionally, SUVs with stability control are involved in 67% fewer accidents than SUVs without the system. The United States Insurance Institute for Highway Safety (IIHS) issued its own study in June 2006 showing that up to 10,000 fatal US crashes could be avoided annually if all vehicles were equipped with ESC. The IIHS study concluded that ESC reduces the likelihood of all fatal crashes by 43%, fatal single-vehicle crashes by 56%, and fatal single-vehicle rollovers by 77–80%. ESC is described as the most important advance in auto safety by many experts, including Nicole Nason, administrator of the NHTSA, Jim Guest and David Champion of Consumers Union of the Fédération Internationale de l'Automobile (FIA), E-Safety Aware, Csaba Csere, former editor of Car and Driver, and Jim Gill, long time ESC proponent of Continental Automotive Systems. The European New Car Assessment Program (Euro NCAP) "strongly recommends" that people buy cars fitted with stability control. The IIHS requires that a vehicle must have ESC as an available option in order for it to qualify for their Top Safety Pick award for occupant protection and accident avoidance. == Components and design == ESC incorporates yaw rate control into the anti-lock braking system (ABS). Anti-lock brakes enable ESC to slow down individual wheels. Many ESC systems also incorporate a traction control system (TCS or ASR), which senses drive-wheel slip under acceleration and individually brakes the slipping wheel or wheels and/or reduces excess engine power until control is regained. However, ESC serves a different purpose from that of ABS or traction control. The ESC system uses several sensors to determine where the driver intends to travel. Other sensors indicate the actual state of the vehicle. The control algorithm compares driver input to vehicle response and decides, when necessary, to apply brakes and/or reduce throttle by the amounts calculated through the state space (set of equations used to model the dynamics of the vehicle). The ESC controller can also receive data from and issue commands to other controllers on the vehicle such as an all-wheel drive system or an active suspension system to improve vehicle stability and controllability. The sensors in an ESC system have to send data at all times in order to detect a loss of traction as soon as possible. They have to be resistant to possible forms of interference, such as precipitation or potholes. The most important sensors are as follows: A steering wheel angle sensor that determines where the driver wants to steer. This kind of sensor often uses AMR elements. A yaw rate sensor that measures the rotation rate of the car. The data from the yaw sensor is compared with the data from the steering wheel angle sensor to determine regulating action. A lateral acceleration sensor that measures the vehicle's lateral acceleration. This is often called an accelerometer. Wheel speed sensors that measure wheel speed. 
Other sensors can include: A longitudinal acceleration sensor that is similar to the lateral acceleration sensor in design, but provides additional information about road pitch, as well as being another sensor for vehicle acceleration and speed. A roll rate sensor that is similar to the yaw rate sensor in design, but improves the fidelity of the controller's vehicle model and provides more accurate data in combination with the other sensors. ESC uses a hydraulic modulator to assure that each wheel receives the correct brake force. A similar modulator is used in ABS. Whereas ABS reduces hydraulic pressure during braking, ESC may increase pressure in certain situations, and an active vacuum brake booster unit may be utilised in addition to the hydraulic pump to meet these demanding pressure gradients. At the centre of the ESC system is the electronic control unit (ECU), which contains various control techniques. Often, the same ECU is used for different systems at the same time (such as ABS, traction control, or climate control). The input signals are sent through an input circuit to the digital controller. The desired vehicle state is determined based upon the steering wheel angle, its gradient, and the wheel speed. Simultaneously, the yaw sensor measures the vehicle's actual yaw rate. The controller computes the needed brake or acceleration force for each wheel and directs the valves of the hydraulic modulator. The ECU is connected with other systems via a Controller Area Network interface in order to avoid conflicting with them. Many ESC systems have an override switch so the driver can disable ESC, which may be used on loose surfaces such as mud or sand, or if using a small spare tire, which could interfere with the sensors. Some systems also offer an additional mode with raised thresholds, so that a driver can utilize the limits of their vehicle's grip with less electronic intervention. However, the ESC reactivates when the ignition is restarted. Some ESC systems that lack an off switch, such as on many recent Toyota and Lexus vehicles, can be temporarily disabled through an undocumented series of brake pedal and handbrake operations. Furthermore, unplugging a wheel speed sensor is another method of disabling most ESC systems. The ESC implementation on newer Ford vehicles cannot be completely disabled, even through the use of the "off switch". The ESC will automatically reactivate at highway speeds, and below such speeds if it detects a skid with the brake pedal depressed. == Regulation == === Public awareness and law === While Sweden used public awareness campaigns to promote ESC use, others implemented or proposed legislation. The Canadian province of Quebec was the first jurisdiction to implement an ESC law, making it compulsory for carriers of dangerous goods (without data recorders) in 2005. The United States followed, with the National Highway Traffic Safety Administration implementing FMVSS 126, which requires ESC for all passenger vehicles under 10,000 pounds (4536 kg). The regulation phased in starting with 55% of 2009 models (effective 1 September 2008), 75% of 2010 models, 95% of 2011 models, and all 2012 and later models. The standard endorses the use of the Sine with Dwell test. In 2015 NHTSA finalized updated regulations requiring ESC for truck tractors and certain buses. Canada required all new passenger vehicles to have ESC from 1 September 2011. 
The Australian government announced on 23 June 2009 that ESC would be compulsory from 1 November 2011 for all new passenger vehicles sold in Australia, and for all new vehicles from November 2013, however the State Government of Victoria preceded this unilaterally on Jan 1 2011, much as they had done seatbelts 40 years before. The New Zealand government followed suit in February 2014 making it compulsory on all new vehicles from 1 July 2015 with a staggered roll-out to all used-import passenger vehicles by 1 January 2020. The European Parliament has also called for the accelerated introduction of ESC. The European Commission has confirmed a proposal for the mandatory introduction of ESC on all new cars and commercial vehicle models sold in the EU from 2012, with all new cars being equipped by 2014. Argentina requires all new normal cars to have ESC since 1 January 2022, for all new normal vehicles from January 2024. Chile requires all new cars to have ESC from August 2022. Brazil has required all new cars to have ESC from 1 January 2024. === International vehicle regulations === The United Nations Economic Commission for Europe has passed a Global Technical Regulation to harmonize ESC standards. Global Technical Regulation No. 8 ELECTRONIC STABILITY CONTROL SYSTEMS was sponsored by the United States of America, and is based on Federal Motor Vehicle Safety Standard FMVSS 126. In Unece countries, approval is based on UN Regulation 140: Electronic Stability Control (ESC) Systems. == Availability and cost == === Cost === ESC is built on top of an anti-lock brake system, and all ESC-equipped vehicles are fitted with traction control. ESC components include a yaw rate sensor, a lateral acceleration sensor, a steering wheel sensor, and an upgraded integrated control unit. In the US, federal regulations have required that ESC be installed as a standard feature on all passenger cars and light trucks as of the 2012 model year. According to NHTSA research, ABS in 2005 cost an estimated US$368; ESC cost a further US$111. The retail price of ESC varies; as a stand-alone option it retails for as little as US$250. ESC was once rarely offered as a sole option, and was generally not available for aftermarket installation. Instead, it was frequently bundled with other features or more expensive trims, so the cost of a package that included ESC was several thousand dollars. Nonetheless, ESC is considered highly cost-effective and may pay for itself in reduced insurance premiums. === Availability === Availability of ESC in passenger vehicles has varied between manufacturers and countries. In 2007, ESC was available in roughly 50% of new North American models compared to about 75% in Sweden. However, consumer awareness affects buying patterns, so that roughly 45% of vehicles sold in North America and the UK were purchased with ESC, contrasting with 78–96% in other European countries such as Germany, Denmark, and Sweden. While few vehicles had ESC prior to 2004, increased awareness has increased the number of vehicles with ESC on the used car market. ESC is available on cars, SUVs and pickup trucks from all major automakers. Luxury cars, sports cars, SUVs, and crossovers are usually equipped with ESC. Midsize cars have also been gradually catching on, though the 2008 model years of the Nissan Altima and Ford Fusion only offered ESC on their V6 engine-equipped cars; however, some midsize cars, such as the Honda Accord, had it as standard by then. 
While traction control is usually included with ESC, there were vehicles such as the 2008 Chevrolet Malibu LS, 2008 Mazda6, and 2007 Lincoln MKZ that had traction control but not ESC. ESC was rare among subcompact cars in 2008. The 2009 Toyota Corolla in the United States (but not Canada) had stability control as a $250 option on all trims below that of the XRS, which had it as standard. In Canada, for the 2010 Mazda3, ESC was an option on the midrange GS trim as part of its sunroof package, and was standard on the top-of-the-line GT version. The 2009 Ford Focus had ESC as an option for the S and SE models, and it was standard on the SEL and SES models. In the UK, even mass-market superminis such as the Ford Fiesta Mk.6 and VW Polo Mk.5 came with ESC as standard. Elaborate ESC and ESP systems (including Roll Stability Control) are available for many commercial vehicles, including transport trucks, trailers, and buses from manufacturers such as Daimler, Scania, and Prevost. In heavy trucks the ESC and ESP functions must be realized as part of the pneumatic brake system. Typical component and system suppliers include Bendix and WABCO. ESC is also available on some motor homes. The ChooseESC! campaign, run by the EU's eSafetyAware! project, provides a global perspective on ESC. One ChooseESC! publication shows the availability of ESC in EU member countries. In the US, the Insurance Institute for Highway Safety website shows availability of ESC in individual US models and the National Highway Traffic Safety Administration website lists US models with ESC. In Australia, the NRMA shows the availability of ESC in Australian models. == Future == Just as ESC is founded on the anti-lock braking system (ABS), ESC is the foundation for new advances such as Roll Stability Control or active rollover protection that works in the vertical plane much like ESC works in the horizontal plane. When RSC detects impending rollover (usually on transport trucks or SUVs), RSC applies brakes, reduces throttle, induces understeer, and/or slows down the vehicle. The computing power of ESC facilitates the networking of active and passive safety systems, addressing other causes of crashes. For example, sensors may detect when a vehicle is following too closely and slow down the vehicle, straighten up seat backs, and tighten seat belts, avoiding and/or preparing for a crash. Moreover, current research on electronic stability control focuses on the integration of information: i) from systems from multiple domains within the same vehicle, for example radars, cameras, lidars and navigation systems; and ii) from other vehicles, road users and infrastructure. Consistent with the trend towards model-based and predictive control, such ongoing progress is likely to bring a new generation of vehicle stability controllers in the next few years, capable of pre-emptive interventions, e.g., as a function of the expected path and road curvature ahead. == ESC products == === Product names === Electronic stability control (ESC) is the generic term recognised by the European Automobile Manufacturers Association (ACEA), the North American Society of Automotive Engineers (SAE), the Japan Automobile Manufacturers Association, and other worldwide authorities. However, vehicle manufacturers may use a variety of different trade names for ESC: === System manufacturers === ESC system manufacturers include: == References ==
== External links == Bosch ESC Information ChooseESC! a combined initiative from the European Commission, eSafetyAware, and Euro NCAP NHTSA on ESC including US Regulation and list of US vehicles with ESC Transport Canada on ESC Australia (Victoria) on ESC
Wikipedia/Electronic_stability_control
Hysteresis is the dependence of the state of a system on its history. For example, a magnet may have more than one possible magnetic moment in a given magnetic field, depending on how the field changed in the past. Plots of a single component of the moment often form a loop or hysteresis curve, where there are different values of one variable depending on the direction of change of another variable. This history dependence is the basis of memory in a hard disk drive and the remanence that retains a record of the Earth's magnetic field magnitude in the past. Hysteresis occurs in ferromagnetic and ferroelectric materials, as well as in the deformation of rubber bands and shape-memory alloys and many other natural phenomena. In natural systems, it is often associated with irreversible thermodynamic change such as phase transitions and with internal friction; and dissipation is a common side effect. Hysteresis can be found in physics, chemistry, engineering, biology, and economics. It is incorporated in many artificial systems: for example, in thermostats and Schmitt triggers, it prevents unwanted frequent switching. Hysteresis can be a dynamic lag between an input and an output that disappears if the input is varied more slowly; this is known as rate-dependent hysteresis. However, phenomena such as the magnetic hysteresis loops are mainly rate-independent, which makes a durable memory possible. Systems with hysteresis are nonlinear, and can be mathematically challenging to model. Some hysteretic models, such as the Preisach model (originally applied to ferromagnetism) and the Bouc–Wen model, attempt to capture general features of hysteresis; and there are also phenomenological models for particular phenomena such as the Jiles–Atherton model for ferromagnetism. It is difficult to define hysteresis precisely. Isaak D. Mayergoyz wrote "...the very meaning of hysteresis varies from one area to another, from paper to paper and from author to author. As a result, a stringent mathematical definition of hysteresis is needed in order to avoid confusion and ambiguity.". == Etymology and history == The term "hysteresis" is derived from ὑστέρησις, an Ancient Greek word meaning "deficiency" or "lagging behind". It was coined in 1881 by Sir James Alfred Ewing to describe the behaviour of magnetic materials. Some early work on describing hysteresis in mechanical systems was performed by James Clerk Maxwell. Subsequently, hysteretic models have received significant attention in the works of Ferenc Preisach (Preisach model of hysteresis), Louis Néel and Douglas Hugh Everett in connection with magnetism and absorption. A more formal mathematical theory of systems with hysteresis was developed in the 1970s by a group of Russian mathematicians led by Mark Krasnosel'skii. == Types == === Rate-dependent === One type of hysteresis is a lag between input and output. An example is a sinusoidal input X(t) that results in a sinusoidal output Y(t), but with a phase lag φ: X ( t ) = X 0 sin ⁡ ω t Y ( t ) = Y 0 sin ⁡ ( ω t − φ ) . 
{\displaystyle {\begin{aligned}X(t)&=X_{0}\sin \omega t\\Y(t)&=Y_{0}\sin \left(\omega t-\varphi \right).\end{aligned}}} Such behavior can occur in linear systems, and a more general form of response is Y ( t ) = χ i X ( t ) + ∫ 0 ∞ Φ d ( τ ) X ( t − τ ) d τ , {\displaystyle Y(t)=\chi _{\text{i}}X(t)+\int _{0}^{\infty }\Phi _{\text{d}}(\tau )X(t-\tau )\,\mathrm {d} \tau ,} where χ i {\displaystyle \chi _{\text{i}}} is the instantaneous response and Φ d ( τ ) {\displaystyle \Phi _{d}(\tau )} is the impulse response to an impulse that occurred τ {\displaystyle \tau } time units in the past. In the frequency domain, input and output are related by a complex generalized susceptibility that can be computed from Φ d {\displaystyle \Phi _{d}} ; it is mathematically equivalent to a transfer function in linear filter theory and analogue signal processing. This kind of hysteresis is often referred to as rate-dependent hysteresis. If the input is reduced to zero, the output continues to respond for a finite time. This constitutes a memory of the past, but a limited one because it disappears as the output decays to zero. The phase lag depends on the frequency of the input, and goes to zero as the frequency decreases. When rate-dependent hysteresis is due to dissipative effects like friction, it is associated with power loss. === Rate-independent === Systems with rate-independent hysteresis have a persistent memory of the past that remains after the transients have died out. The future development of such a system depends on the history of states visited, but does not fade as the events recede into the past. If an input variable X(t) cycles from X0 to X1 and back again, the output Y(t) may be Y0 initially but a different value Y2 upon return. The values of Y(t) depend on the path of values that X(t) passes through but not on the speed at which it traverses the path. Many authors restrict the term hysteresis to mean only rate-independent hysteresis. Hysteresis effects can be characterized using the Preisach model and the generalized Prandtl−Ishlinskii model. == In engineering == === Control systems === In control systems, hysteresis can be used to filter signals so that the output reacts less rapidly than it otherwise would by taking recent system history into account. For example, a thermostat controlling a heater may switch the heater on when the temperature drops below A, but not turn it off until the temperature rises above B. (For instance, if one wishes to maintain a temperature of 20 °C then one might set the thermostat to turn the heater on when the temperature drops to below 18 °C and off when the temperature exceeds 22 °C). Similarly, a pressure switch can be designed to exhibit hysteresis, with pressure set-points substituted for temperature thresholds. === Electronic circuits === Often, some amount of hysteresis is intentionally added to an electronic circuit to prevent unwanted rapid switching. This and similar techniques are used to compensate for contact bounce in switches, or noise in an electrical signal. A Schmitt trigger is a simple electronic circuit that exhibits this property. A latching relay uses a solenoid to actuate a ratcheting mechanism that keeps the relay closed even if power to the relay is terminated. Some positive feedback from the output to one input of a comparator can increase the natural hysteresis (a function of its gain) it exhibits. 
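The thermostat behaviour described above can be captured in a few lines. The sketch below uses the temperature thresholds from the example (on below 18 °C, off above 22 °C); everything else is an illustrative assumption, not a description of any particular device.

```python
class HysteresisSwitch:
    """Two-threshold (Schmitt-trigger-like) switch: turns on below `low`,
    off above `high`, and keeps its previous state in between, mirroring
    the thermostat example above (heater on below 18 °C, off above 22 °C)."""

    def __init__(self, low=18.0, high=22.0, on=False):
        self.low, self.high, self.on = low, high, on

    def update(self, temperature):
        if temperature < self.low:
            self.on = True            # switch the heater on
        elif temperature > self.high:
            self.on = False           # switch the heater off
        # between the thresholds the previous state is kept: the "memory"
        return self.on

heater = HysteresisSwitch()
readings = [21.0, 19.0, 17.5, 19.0, 21.0, 22.5, 21.0]
print([heater.update(t) for t in readings])
# -> [False, False, True, True, True, False, False]
```

The dead band between the two thresholds is exactly what prevents the unwanted frequent switching mentioned above, whether in a thermostat, a pressure switch, or a comparator with positive feedback.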
Hysteresis is essential to the workings of some memristors (circuit components which "remember" changes in the current passing through them by changing their resistance). Hysteresis can be used when connecting arrays of elements such as nanoelectronics, electrochrome cells and memory effect devices using passive matrix addressing. Shortcuts are made between adjacent components (see crosstalk) and the hysteresis helps to keep the components in a particular state while the other components change states. Thus, all rows can be addressed at the same time instead of individually. In the field of audio electronics, a noise gate often implements hysteresis intentionally to prevent the gate from "chattering" when signals close to its threshold are applied. === User interface design === A hysteresis is sometimes intentionally added to computer algorithms. The field of user interface design has borrowed the term hysteresis to refer to times when the state of the user interface intentionally lags behind the apparent user input. For example, a menu that was drawn in response to a mouse-over event may remain on-screen for a brief moment after the mouse has moved out of the trigger region and the menu region. This allows the user to move the mouse directly to an item on the menu, even if part of that direct mouse path is outside of both the trigger region and the menu region. For instance, right-clicking on the desktop in most Windows interfaces will create a menu that exhibits this behavior. === Aerodynamics === In aerodynamics, hysteresis can be observed when decreasing the angle of attack of a wing after stall, regarding the lift and drag coefficients. The angle of attack at which the flow on top of the wing reattaches is generally lower than the angle of attack at which the flow separates during the increase of the angle of attack. === Hydraulics === Hysteresis can be observed in the stage-flow relationship of a river during rapidly changing conditions such as passing of a flood wave. It is most pronounced in low gradient streams with steep leading edge hydrographs. === Backlash === Moving parts within machines, such as the components of a gear train, normally have a small gap between them, to allow movement and lubrication. As a consequence of this gap, any reversal in direction of a drive part will not be passed on immediately to the driven part. This unwanted delay is normally kept as small as practicable, and is usually called backlash. The amount of backlash will increase with time as the surfaces of moving parts wear. == In mechanics == === Elastic hysteresis === In the elastic hysteresis of rubber, the area in the centre of a hysteresis loop is the energy dissipated due to material internal friction. Elastic hysteresis was one of the first types of hysteresis to be examined. The effect can be demonstrated using a rubber band with weights attached to it. If the top of a rubber band is hung on a hook and small weights are attached to the bottom of the band one at a time, it will stretch and get longer. As more weights are loaded onto it, the band will continue to stretch because the force the weights are exerting on the band is increasing. When each weight is taken off, or unloaded, the band will contract as the force is reduced. As the weights are taken off, each weight that produced a specific length as it was loaded onto the band now contracts less, resulting in a slightly longer length as it is unloaded. This is because the band does not obey Hooke's law perfectly. 
The hysteresis loop of an idealized rubber band is shown in the figure. In terms of force, the rubber band was harder to stretch when it was being loaded than when it was being unloaded. In terms of time, when the band is unloaded, the effect (the length) lagged behind the cause (the force of the weights) because the length has not yet reached the value it had for the same weight during the loading part of the cycle. In terms of energy, more energy was required during the loading than the unloading, the excess energy being dissipated as thermal energy. Elastic hysteresis is more pronounced when the loading and unloading is done quickly than when it is done slowly. Some materials such as hard metals don't show elastic hysteresis under a moderate load, whereas other hard materials like granite and marble do. Materials such as rubber exhibit a high degree of elastic hysteresis. When the intrinsic hysteresis of rubber is being measured, the material can be considered to behave like a gas. When a rubber band is stretched, it heats up, and if it is suddenly released, it cools down perceptibly. These effects correspond to a large hysteresis from the thermal exchange with the environment and a smaller hysteresis due to internal friction within the rubber. This proper, intrinsic hysteresis can be measured only if the rubber band is thermally isolated. Small vehicle suspensions using rubber (or other elastomers) can achieve the dual function of springing and damping because rubber, unlike metal springs, has pronounced hysteresis and does not return all the absorbed compression energy on the rebound. Mountain bikes have made use of elastomer suspension, as did the original Mini car. The primary cause of rolling resistance when a body (such as a ball, tire, or wheel) rolls on a surface is hysteresis. This is attributed to the viscoelastic characteristics of the material of the rolling body. === Contact angle hysteresis === The contact angle formed between a liquid and solid phase will exhibit a range of contact angles that are possible. There are two common methods for measuring this range of contact angles. The first method is referred to as the tilting base method. Once a drop is dispensed on the surface with the surface level, the surface is then tilted from 0° to 90°. As the drop is tilted, the downhill side will be in a state of imminent wetting while the uphill side will be in a state of imminent dewetting. As the tilt increases the downhill contact angle will increase and represents the advancing contact angle while the uphill side will decrease; this is the receding contact angle. The values for these angles just prior to the drop releasing will typically represent the advancing and receding contact angles. The difference between these two angles is the contact angle hysteresis. The second method is often referred to as the add/remove volume method. When the maximum liquid volume is removed from the drop without the interfacial area decreasing the receding contact angle is thus measured. When volume is added to the maximum before the interfacial area increases, this is the advancing contact angle. As with the tilt method, the difference between the advancing and receding contact angles is the contact angle hysteresis. Most researchers prefer the tilt method; the add/remove method requires that a tip or needle stay embedded in the drop which can affect the accuracy of the values, especially the receding contact angle. 
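The statement above that the area inside the loop is the dissipated energy can be made concrete with a small sketch: integrate the loading and unloading force-extension curves and take the difference. The data points below are made up, but the loading curve deliberately lies above the unloading curve as described.

```python
def trapz(y, x):
    """Trapezoidal integral of y over x."""
    return sum((y[i] + y[i + 1]) * (x[i + 1] - x[i]) / 2
               for i in range(len(x) - 1))

# Made-up force-extension data (N versus m) for one cycle of a rubber band;
# the loading curve lies above the unloading curve, enclosing the loop.
extension       = [0.00, 0.02, 0.04, 0.06, 0.08, 0.10]
force_loading   = [0.0,  2.4,  4.6,  6.6,  8.4, 10.0]
force_unloading = [0.0,  1.6,  3.4,  5.2,  7.2, 10.0]

work_in  = trapz(force_loading, extension)     # work done while stretching
work_out = trapz(force_unloading, extension)   # work recovered on release
print(f"energy dissipated per cycle ≈ {work_in - work_out:.3f} J")
```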
=== Bubble shape hysteresis === The equilibrium shapes of bubbles expanding and contracting on capillaries (blunt needles) can exhibit hysteresis depending on the relative magnitude of the maximum capillary pressure to ambient pressure, and the relative magnitude of the bubble volume at the maximum capillary pressure to the dead volume in the system. The bubble shape hysteresis is a consequence of gas compressibility, which causes the bubbles to behave differently across expansion and contraction. During expansion, bubbles undergo large non equilibrium jumps in volume, while during contraction the bubbles are more stable and undergo a relatively smaller jump in volume resulting in an asymmetry across expansion and contraction. The bubble shape hysteresis is qualitatively similar to the adsorption hysteresis, and as in the contact angle hysteresis, the interfacial properties play an important role in bubble shape hysteresis. The existence of the bubble shape hysteresis has important consequences in interfacial rheology experiments involving bubbles. As a result of the hysteresis, not all sizes of the bubbles can be formed on a capillary. Further the gas compressibility causing the hysteresis leads to unintended complications in the phase relation between the applied changes in interfacial area to the expected interfacial stresses. These difficulties can be avoided by designing experimental systems to avoid the bubble shape hysteresis. === Adsorption hysteresis === Hysteresis can also occur during physical adsorption processes. In this type of hysteresis, the quantity adsorbed is different when gas is being added than it is when being removed. The specific causes of adsorption hysteresis are still an active area of research, but it is linked to differences in the nucleation and evaporation mechanisms inside mesopores. These mechanisms are further complicated by effects such as cavitation and pore blocking. In physical adsorption, hysteresis is evidence of mesoporosity-indeed, the definition of mesopores (2–50 nm) is associated with the appearance (50 nm) and disappearance (2 nm) of mesoporosity in nitrogen adsorption isotherms as a function of Kelvin radius. An adsorption isotherm showing hysteresis is said to be of Type IV (for a wetting adsorbate) or Type V (for a non-wetting adsorbate), and hysteresis loops themselves are classified according to how symmetric the loop is. Adsorption hysteresis loops also have the unusual property that it is possible to scan within a hysteresis loop by reversing the direction of adsorption while on a point on the loop. The resulting scans are called "crossing", "converging", or "returning", depending on the shape of the isotherm at this point. === Matric potential hysteresis === The relationship between matric water potential and water content is the basis of the water retention curve. Matric potential measurements (Ψm) are converted to volumetric water content (θ) measurements based on a site or soil specific calibration curve. Hysteresis is a source of water content measurement error. Matric potential hysteresis arises from differences in wetting behaviour causing dry medium to re-wet; that is, it depends on the saturation history of the porous medium. Hysteretic behaviour means that, for example, at a matric potential (Ψm) of 5 kPa, the volumetric water content (θ) of a fine sandy soil matrix could be anything between 8% and 25%. Tensiometers are directly influenced by this type of hysteresis. 
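A minimal sketch of the branch dependence described above: the same matric potential maps to different water contents depending on whether the soil is on its drying or wetting branch. The van Genuchten retention function is a standard parameterization, but every parameter value here is an illustrative assumption, chosen only so that 5 kPa spans roughly the 8%–25% range quoted above.

```python
def van_genuchten_theta(psi_kpa, theta_r, theta_s, alpha, n):
    """Volumetric water content from the van Genuchten retention function
    theta = theta_r + (theta_s - theta_r) / (1 + (alpha*|psi|)**n)**(1 - 1/n),
    with psi in kPa and alpha in 1/kPa. All parameters here are illustrative."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * abs(psi_kpa)) ** n) ** m

def theta_hysteretic(psi_kpa, branch):
    """Branch-dependent water content: the drying (desorption) branch holds
    more water at a given potential than the wetting branch."""
    if branch == "drying":
        return van_genuchten_theta(psi_kpa, 0.05, 0.40, alpha=0.29, n=2.0)
    return van_genuchten_theta(psi_kpa, 0.05, 0.40, alpha=2.33, n=2.0)  # wetting

for branch in ("drying", "wetting"):
    print(branch, f"theta(5 kPa) = {theta_hysteretic(5.0, branch):.2f}")
```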
Two other types of sensors used to measure soil water matric potential are also influenced by hysteresis effects within the sensor itself. Resistance blocks, both nylon and gypsum based, measure matric potential as a function of electrical resistance. The relation between the sensor's electrical resistance and sensor matric potential is hysteretic. Thermocouples measure matric potential as a function of heat dissipation. Hysteresis occurs because measured heat dissipation depends on sensor water content, and the sensor water content–matric potential relationship is hysteretic. As of 2002, only desorption curves are usually measured during calibration of soil moisture sensors. Despite the fact that it can be a source of significant error, the sensor specific effect of hysteresis is generally ignored. == In materials == === Magnetic hysteresis === When an external magnetic field is applied to a ferromagnetic material such as iron, the atomic domains align themselves with it. Even when the field is removed, part of the alignment will be retained: the material has become magnetized. Once magnetized, the magnet will stay magnetized indefinitely. To demagnetize it requires heat or a magnetic field in the opposite direction. This is the effect that provides the element of memory in a hard disk drive. The relationship between field strength H and magnetization M is not linear in such materials. If a magnet is demagnetized (H = M = 0) and the relationship between H and M is plotted for increasing levels of field strength, M follows the initial magnetization curve. This curve increases rapidly at first and then approaches an asymptote called magnetic saturation. If the magnetic field is now reduced monotonically, M follows a different curve. At zero field strength, the magnetization is offset from the origin by an amount called the remanence. If the H-M relationship is plotted for all strengths of applied magnetic field the result is a hysteresis loop called the main loop. The width of the middle section is twice the coercivity of the material. A closer look at a magnetization curve generally reveals a series of small, random jumps in magnetization called Barkhausen jumps. This effect is due to crystallographic defects such as dislocations. Magnetic hysteresis loops are not exclusive to materials with ferromagnetic ordering. Other magnetic orderings, such as spin glass ordering, also exhibit this phenomenon. ==== Physical origin ==== The phenomenon of hysteresis in ferromagnetic materials is the result of two effects: rotation of magnetization and changes in size or number of magnetic domains. In general, the magnetization varies (in direction but not magnitude) across a magnet, but in sufficiently small magnets, it does not. In these single-domain magnets, the magnetization responds to a magnetic field by rotating. Single-domain magnets are used wherever a strong, stable magnetization is needed (for example, magnetic recording). Larger magnets are divided into regions called domains. Across each domain, the magnetization does not vary; but between domains are relatively thin domain walls in which the direction of magnetization rotates from the direction of one domain to another. If the magnetic field changes, the walls move, changing the relative sizes of the domains. 
Because the domains are not magnetized in the same direction, the magnetic moment per unit volume is smaller than it would be in a single-domain magnet; but domain walls involve rotation of only a small part of the magnetization, so it is much easier to change the magnetic moment. The magnetization can also change by addition or subtraction of domains (called nucleation and denucleation). ==== Magnetic hysteresis models ==== The best-known empirical models of hysteresis are the Preisach and Jiles–Atherton models. These models allow accurate modeling of the hysteresis loop and are widely used in industry. However, these models lose the connection with thermodynamics, and energy consistency is not ensured. A more recent model, with a more consistent thermodynamical foundation, is the vectorial incremental nonconservative consistent hysteresis (VINCH) model of Lavet et al. (2011). ==== Applications ==== There is a great variety of applications of hysteresis in ferromagnets. Many of these make use of their ability to retain a memory, for example magnetic tape, hard disks, and credit cards. In these applications, magnetically hard (high coercivity) materials are desirable, so that as much energy as possible is absorbed during the write operation and the resulting magnetized information is not easily erased. On the other hand, magnetically soft (low coercivity) iron is used for the cores in electromagnets. The low coercivity minimizes the energy loss associated with hysteresis, as the magnetic field periodically reverses in the presence of an alternating current. This low energy loss per hysteresis loop is the reason why soft iron is used for transformer cores and electric motors. === Electrical hysteresis === Electrical hysteresis typically occurs in ferroelectric materials, where domains of polarization contribute to the total polarization. Polarization is the electric dipole moment per unit volume (with units of C·m−2); an individual dipole moment has units of C·m. The mechanism, an organization of the polarization into domains, is similar to that of magnetic hysteresis. === Liquid–solid-phase transitions === Hysteresis manifests itself in state transitions when the melting temperature and the freezing temperature do not agree. For example, agar melts at 85 °C (185 °F) and solidifies from 32 to 40 °C (90 to 104 °F). This is to say that once agar is melted at 85 °C, it retains a liquid state until cooled to 40 °C. Therefore, from the temperatures of 40 to 85 °C, agar can be either solid or liquid, depending on which state it was before. == In biology == === Cell biology and genetics === Hysteresis in cell biology often arises in bistable systems, where the same input state can lead to two different, stable outputs. Where bistability can lead to digital, switch-like outputs from the continuous inputs of chemical concentrations and activities, hysteresis makes these systems more resistant to noise. These systems are often characterized by higher values of the input required to switch into a particular state as compared to the input required to stay in the state, allowing for a transition that is not continuously reversible, and thus less susceptible to noise. ==== Irreversible hysteresis ==== In the case of mitosis, irreversibility is essential to maintain the overall integrity of the system, such that there are three designated checkpoints to account for this: G1/S, G2/M, and the spindle checkpoint.
Irreversible hysteresis in this context ensures that once a cell commits to a specific phase (e.g., entering mitosis or DNA replication), it does not revert to a previous phase, even if conditions or regulatory signals change. Based on the irreversible hysteresis curve, there does exist an input at which the cell jumps to the next stable state, but there is no input that allows the cell to revert to its previous stable state, even when the input is 0, demonstrating irreversibility. Positive feedback is critical for generating hysteresis in the cell cycle. For example, in the G2/M transition, active CDK1 promotes the activation of more CDK1 molecules by inhibiting Wee1 (an inhibitor) and activating Cdc25 (a phosphatase that activates CDK1). These loops lock the cell into its current state and amplify the activation of CDK1. Positive feedback also serves to create a bistable system where CDK1 is either fully inactivated or fully activated. Hysteresis prevents the cell from oscillating between these two states in response to small perturbations in the signal (input). ==== Reversible hysteresis ==== A biochemical system that is under the control of reversible hysteresis has both forward and reverse trajectories. The system generally requires a higher [input] to proceed forward into the next bistable state than to exit from that state. For example, cells undergoing cell division exhibit reversible hysteresis in that it takes a higher concentration of cyclins to switch them from G2 phase into mitosis than to stay in mitosis once begun. Additionally, because the [cyclin] required to reverse the cell back to the G2 phase is much lower than the [cyclin] required to enter mitosis, this improves the bistability of mitosis because it is more resistant to weak or transient signals. Small perturbations in the [input] will be unable to push the cell out of mitosis so easily. ==== History and memory ==== In systems with bistability, the same input level can correspond to two distinct stable states (e.g., "low output" and "high output"). The actual state of the system depends on its history: whether the input level was increasing (forward trajectory) or decreasing (backward trajectory). Thus, it is difficult to determine which state a cell is in if given only a bistability curve. The cell's ability to "remember" its prior state ensures stability and prevents it from switching states unnecessarily due to minor fluctuations in input. This memory is often maintained through molecular feedback loops, such as positive feedback in signaling pathways, or the persistence of regulatory molecules like proteins or phosphorylated components. For example, the refractory period in action potentials is primarily controlled by history. The absolute refractory period prevents a voltage-gated sodium channel from activating or refiring after it has just fired. Following the absolute refractory period, the neuron remains less excitable due to hyperpolarization caused by potassium efflux. This molecular inhibitory feedback creates a memory for the neuron or cell, so that the neuron does not fire too soon. As time passes, the neuron or cell will slowly lose the memory of having fired and will begin to fire again. Thus, memory is time-dependent, which is important in maintaining homeostasis and regulating many different biological processes.
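The role of positive feedback in creating two different switching thresholds can be sketched with a one-variable toy model: an output x activates its own production through a sigmoidal (Hill-type) term while being degraded linearly, and the input s is ramped slowly up and then back down. The equation and all parameter values below are generic illustrations chosen to be bistable; they are not a fitted model of CDK1 regulation.

```python
import numpy as np

# Toy bistable switch: dx/dt = s + beta * x^2 / (K^2 + x^2) - gamma * x
beta, K, gamma = 1.8, 1.0, 1.0            # illustrative parameters only

def steady_state(s, x0, dt=0.01, steps=10000):
    """Relax x towards its steady state for a fixed input s, starting from x0."""
    x = x0
    for _ in range(steps):
        x += dt * (s + beta * x**2 / (K**2 + x**2) - gamma * x)
    return x

s_values = np.linspace(0.0, 0.3, 61)
x, up_branch = 0.0, []
for s in s_values:                        # slowly increase the input ...
    x = steady_state(s, x)
    up_branch.append(x)
down_branch = []
for s in s_values[::-1]:                  # ... then slowly decrease it again
    x = steady_state(s, x)
    down_branch.append(x)

# The input needed to jump to the high state exceeds the input below which the
# system falls back to the low state; the gap between the two is the hysteresis.
s_on = s_values[np.argmax(np.diff(up_branch)) + 1]
s_off = s_values[::-1][np.argmin(np.diff(down_branch)) + 1]
print(f"switch-on threshold  ~ {s_on:.3f}")
print(f"switch-off threshold ~ {s_off:.3f}")
```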
==== Biochemical systems: regulating the cell cycle in Xenopus laevis egg extracts ==== Cells advancing through the cell cycle must make an irreversible commitment to mitosis, ensuring they do not revert to interphase before successfully segregating their chromosomes. A mathematical model of cell-cycle progression in cell-free egg extracts from frogs suggests that hysteresis in the molecular control system drives these irreversible transitions into and out of mitosis. Here, Cdc2 (Cyclin-dependent kinase 1 or CDK1) is responsible for mitotic entry and exit such that binding of cyclin B forms a complex called Maturation-Promoting Factor (MPF). The activation threshold for mitotic entry was found to be between 32 and 40 nM cyclin B in the frog extracts, while the inactivation threshold for exiting mitosis was lower, between 16 and 24 nM cyclin B. The higher threshold for mitotic entry compared to the lower threshold for mitotic exit indicates hysteresis, a hallmark of history-dependent behavior in the system. Concentrations between 24 and 32 nM cyclin B demonstrated bistability, where the system could exist in either interphase or mitosis, depending on its prior state (history). Though the cell cycle is not completely irreversible, the difference in thresholds is enough for growth and survival of the cells. Hysteretic thresholds in biological systems are not definite and can be recalibrated. For example, unreplicated DNA or chromosomes inhibit Cdc25 phosphatase and maintain Wee1 kinase activity. This prevents the activation of Cyclin B-Cdc2, effectively raising the threshold for mitotic entry. As a result, the cell delays the transition to mitosis until replication is complete, ensuring genomic integrity. Other instances may be DNA damage and unattached chromosomes during the spindle assembly checkpoint. ==== Biochemical systems: regulating the cell cycle in yeast ==== Biochemical systems can also show hysteresis-like output when slowly varying states that are not directly monitored are involved, as in the case of the cell cycle arrest in yeast exposed to mating pheromone. The proposed model is that α-factor, a yeast mating pheromone, binds to its receptor on another yeast cell, promoting transcription of Fus3 and promoting mating. Fus3 further promotes Far1, which inhibits Cln1/2, activators of the cell cycle. This is representative of a coherent feedforward loop that can be modeled as a hysteresis curve. Far1 transcription is the primary mechanism responsible for the hysteresis observed in cell-cycle reentry. The history of pheromone exposure influences the accumulation of Far1, which, in turn, determines the delay in cell-cycle reentry. Previous pulse experiments demonstrated that after exposure to high pheromone concentrations, cells enter a stabilized arrested state where reentry thresholds are elevated due to increased Far1-dependent inhibition of CDK activity. Even when pheromone levels drop to concentrations that would allow naive cells to reenter the cell cycle, pre-exposed cells take longer to resume proliferation. This delay reflects the history-dependent nature of hysteresis, where past exposure to high pheromone concentrations influences the current state. Hysteresis ensures that cells make robust and irreversible decisions about mating and proliferation in response to pheromone signals.
It allows cells to "remember" high pheromone exposure, and this helps yeast cells adapt and stabilize their responses to environmental conditions, avoiding premature reentry into the cell cycle the moment the pheromone signal dies down. Additionally, the duration of cell cycle arrest depends not only on the final level of input Fus3, but also on the previously achieved Fus3 levels. This effect is achieved due to the slower time scales involved in the transcription of intermediate Far1, such that the total Far1 activity reaches its equilibrium value slowly, and for transient changes in Fus3 concentration, the response of the system depends on the Far1 concentration achieved with the transient value. Experiments in this type of hysteresis benefit from the ability to change the concentration of the inputs with time. The mechanisms are often elucidated by allowing independent control of the concentration of the key intermediate, for instance, by using an inducible promoter. Darlington in his classic works on genetics discussed hysteresis of the chromosomes, by which he meant "failure of the external form of the chromosomes to respond immediately to the internal stresses due to changes in their molecular spiral", as they lie in a somewhat rigid medium in the limited space of the cell nucleus. In developmental biology, cell type diversity is regulated by long-range-acting signaling molecules called morphogens that pattern uniform pools of cells in a concentration- and time-dependent manner. The morphogen sonic hedgehog (Shh), for example, acts on limb bud and neural progenitors to induce expression of a set of homeodomain-containing transcription factors to subdivide these tissues into distinct domains. It has been shown that these tissues have a 'memory' of previous exposure to Shh. In neural tissue, this hysteresis is regulated by a homeodomain (HD) feedback circuit that amplifies Shh signaling. In this circuit, expression of Gli transcription factors, the executors of the Shh pathway, is suppressed. Glis are processed to repressor forms (GliR) in the absence of Shh, but in the presence of Shh, a proportion of Glis are maintained as full-length proteins allowed to translocate to the nucleus, where they act as activators (GliA) of transcription. By reducing Gli expression, then, the HD transcription factors reduce the total amount of Gli (GliT), so a higher proportion of GliT can be stabilized as GliA for the same concentration of Shh.
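Returning to the cyclin B thresholds reported above for frog egg extracts (mitotic entry at roughly 32–40 nM, mitotic exit at roughly 16–24 nM), the history dependence can be caricatured as a two-threshold switch. The rule below is a deliberate idealization; the particular threshold values, chosen from within the reported ranges, are otherwise arbitrary.

```python
def mitotic_state(cyclin_b_nM, in_mitosis, entry_nM=40.0, exit_nM=24.0):
    """History-dependent switch: a higher cyclin B level is needed to enter
    mitosis than to remain in it (thresholds picked from the ranges in the text)."""
    if not in_mitosis:
        return cyclin_b_nM >= entry_nM     # must clear the higher entry threshold
    return cyclin_b_nM > exit_nM           # stays in mitosis down to the lower one

# Ramp cyclin B up and back down: 30 nM maps to interphase on the way up
# but to mitosis on the way down, which is the signature of hysteresis.
state = False
for c in [10, 20, 30, 40, 45, 30, 25, 20, 10]:
    state = mitotic_state(c, state)
    print(f"cyclin B = {c:2d} nM -> {'mitosis' if state else 'interphase'}")
```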
=== Immunology === There is some evidence that T cells exhibit hysteresis in that it takes a lower signal threshold to activate T cells that have been previously activated. Ras GTPase activation is required for downstream effector functions of activated T cells. Triggering of the T cell receptor induces high levels of Ras activation, which results in higher levels of GTP-bound (active) Ras at the cell surface. Since higher levels of active Ras have accumulated at the cell surface in T cells that have been previously stimulated by strong engagement of the T cell receptor, weaker subsequent T cell receptor signals received shortly afterwards will deliver the same level of activation due to the presence of higher levels of already activated Ras as compared to a naïve cell. === Neuroscience === The property by which some neurons do not return to their basal conditions from a stimulated condition immediately after removal of the stimulus is an example of hysteresis. === Neuropsychology === Neuropsychology, in exploring the neural correlates of consciousness, interfaces with neuroscience, although the complexity of the central nervous system is a challenge to its study (that is, its operation resists easy reduction). Context-dependent memory and state-dependent memory show hysteretic aspects of neurocognition. === Respiratory physiology === Lung hysteresis is evident when observing the compliance of a lung on inspiration versus expiration. The difference in compliance (Δvolume/Δpressure) is due to the additional energy required to overcome surface tension forces during inspiration to recruit and inflate additional alveoli. The transpulmonary pressure–volume curve of inhalation is different from the pressure–volume curve of exhalation, the difference being described as hysteresis. Lung volume at any given pressure during inhalation is less than the lung volume at any given pressure during exhalation. === Voice and speech physiology === A hysteresis effect may be observed in voicing onset versus offset. The threshold value of the subglottal pressure required to start the vocal fold vibration is lower than the threshold value at which the vibration stops, when other parameters are kept constant. In utterances of vowel-voiceless consonant-vowel sequences during speech, the intraoral pressure is lower at the voice onset of the second vowel compared to the voice offset of the first vowel, the oral airflow is lower, the transglottal pressure is larger and the glottal width is smaller. === Ecology and epidemiology === Hysteresis is a commonly encountered phenomenon in ecology and epidemiology, where the observed equilibrium of a system cannot be predicted solely based on environmental variables, but also requires knowledge of the system's past history. Notable examples include the theory of spruce budworm outbreaks and behavioral effects on disease transmission. It is commonly examined in relation to critical transitions between ecosystem or community types in which dominant competitors or entire landscapes can change in a largely irreversible fashion. == In ocean and climate science == Complex ocean and climate models rely on this principle. == In economics == Economic systems can exhibit hysteresis. For example, export performance is subject to strong hysteresis effects: because of the fixed transportation costs it may take a big push to start a country's exports, but once the transition is made, not much may be required to keep them going.
When some negative shock reduces employment in a company or industry, fewer employed workers then remain. As the employed workers usually have the power to set wages, their reduced number incentivizes them to bargain for higher wages when the economy improves again, instead of letting the wage settle at the equilibrium level where the supply of and demand for workers would match. This causes hysteresis: unemployment becomes permanently higher after negative shocks. === Permanently higher unemployment === The idea of hysteresis is used extensively in the area of labor economics, specifically with reference to the unemployment rate. According to theories based on hysteresis, severe economic downturns (recession) and/or persistent stagnation (slow demand growth, usually after a recession) cause unemployed individuals to lose their job skills (commonly developed on the job) or to find that their skills have become obsolete, or to become demotivated, disillusioned or depressed, or to lose job-seeking skills. In addition, employers may use time spent in unemployment as a screening tool, i.e., to weed out less desired employees in hiring decisions. Then, in times of an economic upturn, recovery, or "boom", the affected workers will not share in the prosperity, remaining unemployed for long periods (e.g., over 52 weeks). This makes unemployment "structural", i.e., extremely difficult to reduce simply by increasing the aggregate demand for products and labor without causing increased inflation. That is, it is possible that a ratchet effect in unemployment rates exists, so a short-term rise in unemployment rates tends to persist. For example, traditional anti-inflationary policy (the use of recession to fight inflation) leads to a permanently higher "natural" rate of unemployment (more scientifically known as the NAIRU). This occurs first because inflationary expectations are "sticky" downward due to wage and price rigidities (and so adapt slowly over time rather than being approximately correct as in theories of rational expectations) and second because labor markets do not clear instantly in response to unemployment. The existence of hysteresis has been put forward as a possible explanation for the persistently high unemployment of many economies in the 1990s. Hysteresis has been invoked by Olivier Blanchard among others to explain the differences in long-run unemployment rates between Europe and the United States. Labor market reform (usually meaning institutional change promoting more flexible wages, firing, and hiring) or strong demand-side economic growth may not therefore reduce this pool of long-term unemployed. Thus, specific targeted training programs are presented as a possible policy solution. However, the hysteresis hypothesis suggests such training programs are aided by persistently high demand for products (perhaps with incomes policies to avoid increased inflation), which reduces the costs of the transition out of unemployment and into paid employment. == Models == Hysteretic models are mathematical models capable of simulating complex nonlinear behavior (hysteresis) characterizing mechanical systems and materials used in different fields of engineering, such as aerospace, civil, and mechanical engineering. Some examples of mechanical systems and materials having hysteretic behavior are: materials, such as steel, reinforced concrete, wood; structural elements, such as steel, reinforced concrete, or wood joints; devices, such as seismic isolators and dampers.
Each subject that involves hysteresis has models that are specific to the subject. In addition, there are hysteretic models that capture general features of many systems with hysteresis. An example is the Preisach model of hysteresis, which represents a hysteresis nonlinearity as a linear superposition of square loops called non-ideal relays. Many complex models of hysteresis arise from the simple parallel connection, or superposition, of elementary carriers of hysteresis termed hysterons. A simple and intuitive parametric description of various hysteresis loops may be found in the Lapshin model. Along with the smooth loops, substituting trapezoidal, triangular or rectangular pulses for the harmonic functions allows piecewise-linear hysteresis loops, frequently used in discrete automatics, to be built within the model. There are implementations of the hysteresis loop model in Mathcad and in the R programming language. The Bouc–Wen model of hysteresis is often used to describe non-linear hysteretic systems. It was introduced by Bouc and extended by Wen, who demonstrated its versatility by producing a variety of hysteretic patterns. This model is able to capture, in analytical form, a range of shapes of hysteretic cycles which match the behaviour of a wide class of hysteretic systems; therefore, given its versatility and mathematical tractability, the Bouc–Wen model has quickly gained popularity and has been extended and applied to a wide variety of engineering problems, including multi-degree-of-freedom (MDOF) systems, buildings, frames, bidirectional and torsional response of hysteretic systems, two- and three-dimensional continua, and soil liquefaction, among others. The Bouc–Wen model and its variants/extensions have been used in applications of structural control, in particular in the modeling of the behaviour of magnetorheological dampers, base isolation devices for buildings and other kinds of damping devices; it has also been used in the modelling and analysis of structures built of reinforced concrete, steel, masonry and timber. The most important extension of the Bouc–Wen model was carried out by Baber and Noori and later by Noori and co-workers. That extended model, named BWBN, can reproduce the complex shear-pinching or slip-lock phenomenon that the earlier model could not reproduce. The BWBN model has been used in a wide spectrum of applications, and implementations are available in software such as OpenSees. Hysteretic models may have a generalized displacement u {\displaystyle u} as input variable and a generalized force f {\displaystyle f} as output variable, or vice versa. In particular, in rate-independent hysteretic models, the output variable does not depend on the rate of variation of the input one. Rate-independent hysteretic models can be classified into four different categories depending on the type of equation that needs to be solved to compute the output variable: algebraic models, transcendental models, differential models, and integral models. === List of models === Some notable hysteretic models are listed below, along with their associated fields: Bean's critical state model (magnetism); Bouc–Wen model (structural engineering); Ising model (magnetism); Jiles–Atherton model (magnetism); Novak–Tyson model (cell-cycle control); Preisach model (magnetism); Stoner–Wohlfarth model (magnetism). == Energy == When hysteresis occurs with extensive and intensive variables, the work done on the system is the area under the hysteresis graph.
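As a concrete example of a differential hysteretic model, the following sketch integrates the classical Bouc–Wen evolution law for the hysteretic variable z under an imposed sinusoidal displacement and reports the loop area as the energy dissipated per cycle, echoing the statement above that this work corresponds to the area of the hysteresis graph. All parameter values are illustrative and are not calibrated to any particular material or device.

```python
import numpy as np

# Bouc-Wen law: dz/dt = A*dx/dt - beta*|dx/dt|*|z|^(n-1)*z - gam*(dx/dt)*|z|^n
A, beta, gam, n = 1.0, 0.5, 0.5, 1.0        # loop-shape parameters (illustrative)
k, alpha = 1.0, 0.3                          # stiffness and post-to-pre-yield ratio

dt = 1e-3
t = np.arange(0.0, 4 * np.pi, dt)
x = np.sin(t)                                # imposed displacement
xdot = np.cos(t)

z = np.zeros_like(t)                         # hysteretic internal variable
for i in range(len(t) - 1):
    zdot = (A * xdot[i]
            - beta * abs(xdot[i]) * abs(z[i]) ** (n - 1) * z[i]
            - gam * xdot[i] * abs(z[i]) ** n)
    z[i + 1] = z[i] + dt * zdot              # explicit Euler step

force = alpha * k * x + (1 - alpha) * k * z  # elastic part plus hysteretic part

# Plotting force against x traces a hysteresis loop; its enclosed area over one
# full cycle is the work done on the element, i.e. the energy dissipated.
last_cycle = t >= 2 * np.pi
dissipated = np.trapz(force[last_cycle], x[last_cycle])
print(f"energy dissipated per cycle ~ {dissipated:.4f} (arbitrary units)")
```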
== See also == == References == == Further reading == == External links == Overview of contact angle Hysteresis Preisach model of hysteresis – Matlab codes developed by Zs. Szabó Hysteresis What's hysteresis? Archived 2009-09-04 at the Wayback Machine Dynamical systems with hysteresis (interactive web page) Magnetization reversal app (coherent rotation) Elastic hysteresis and rubber bands
Wikipedia/Tipping_point_(physics)
Structural fracture mechanics is the field of structural engineering concerned with the study of load-carrying structures that include one or several failed or damaged components. It uses methods of analytical solid mechanics, structural engineering, safety engineering, probability theory, and catastrophe theory to calculate the load and stress in the structural components and analyze the safety of a damaged structure. There is a direct analogy between the fracture mechanics of solids and structural fracture mechanics. There are different causes of the first component failure: mechanical overload, material fatigue, an unpredicted scenario, or “human intervention” such as unprofessional behavior or a terrorist attack. There are two typical scenarios: either a localized failure does not cause immediate collapse of the entire structure, or the entire structure fails immediately after one of its components fails. If the structure does not collapse immediately, there is a limited period of time until the catastrophic failure of the entire structure. There is a critical number of structural elements that defines whether the system has reserve ability or not. Safety engineers use the failure of the first component as an indicator and try to intervene during the given period of time to avoid the catastrophe of the entire structure. For example, the “Leak-Before-Break” methodology means that a leak will be discovered prior to a catastrophic failure of the entire piping system occurring in service. It has been applied to pressure vessels, nuclear piping, gas and oil pipelines, etc. The methods of structural fracture mechanics are used as checking calculations to estimate the sensitivity of a structure to the failure of one of its components. The failure of a complex system with parallel redundancy can be estimated based on probabilistic properties of the system elements.
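As a minimal numerical sketch of that probabilistic estimate, the following assumes an idealized k-out-of-n redundancy: the structure survives as long as at least k of its n identical load paths survive, and each element fails independently with the same probability. Real structures redistribute load after a component failure, so this is only an illustration of the principle, not a design calculation.

```python
from math import comb

def system_failure_probability(n, k, p_elem):
    """Failure probability of an idealized parallel-redundant system that needs
    at least k of its n independent, identical elements to remain intact, when
    each element fails with probability p_elem."""
    # The system fails when the number of failed elements exceeds n - k.
    return sum(comb(n, m) * p_elem**m * (1 - p_elem)**(n - m)
               for m in range(n - k + 1, n + 1))

# Example: six parallel load paths, at least four must survive.
print(system_failure_probability(n=6, k=4, p_elem=0.05))   # ~ 2e-3
```

== See also == Catastrophic failure – Sudden and total failure from which recovery is impossible Catastrophe theory – Area of mathematics Fracture mechanics – Study of propagation of cracks in materials Nuclear safety and security – Regulations for uses of radioactive materials Progressive collapse – Building collapse type Safety engineering – Engineering discipline which assures that engineered systems provide acceptable levels of safety Structural integrity and failure – Ability of a structure to support a designed structural load without breaking == References ==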
Wikipedia/Structural_fracture_mechanics
In the physical sciences, the Airy function (or Airy function of the first kind) Ai(x) is a special function named after the British astronomer George Biddell Airy (1801–1892). The function Ai(x) and the related function Bi(x), are linearly independent solutions to the differential equation d 2 y d x 2 − x y = 0 , {\displaystyle {\frac {d^{2}y}{dx^{2}}}-xy=0,} known as the Airy equation or the Stokes equation. Because the solution of the linear differential equation d 2 y d x 2 − k y = 0 {\displaystyle {\frac {d^{2}y}{dx^{2}}}-ky=0} is oscillatory for k<0 and exponential for k>0, the Airy functions are oscillatory for x<0 and exponential for x>0. In fact, the Airy equation is the simplest second-order linear differential equation with a turning point (a point where the character of the solutions changes from oscillatory to exponential). == Definitions == For real values of x, the Airy function of the first kind can be defined by the improper Riemann integral: Ai ⁡ ( x ) = 1 π ∫ 0 ∞ cos ⁡ ( t 3 3 + x t ) d t ≡ 1 π lim b → ∞ ∫ 0 b cos ⁡ ( t 3 3 + x t ) d t , {\displaystyle \operatorname {Ai} (x)={\dfrac {1}{\pi }}\int _{0}^{\infty }\cos \left({\dfrac {t^{3}}{3}}+xt\right)\,dt\equiv {\dfrac {1}{\pi }}\lim _{b\to \infty }\int _{0}^{b}\cos \left({\dfrac {t^{3}}{3}}+xt\right)\,dt,} which converges by Dirichlet's test. For any real number x there is a positive real number M such that function t 3 3 + x t {\textstyle {\tfrac {t^{3}}{3}}+xt} is increasing, unbounded and convex with continuous and unbounded derivative on interval [ M , ∞ ) . {\displaystyle [M,\infty ).} The convergence of the integral on this interval can be proven by Dirichlet's test after substitution u = t 3 3 + x t . {\textstyle u={\tfrac {t^{3}}{3}}+xt.} y = Ai(x) satisfies the Airy equation y ″ − x y = 0. {\displaystyle y''-xy=0.} This equation has two linearly independent solutions. Up to scalar multiplication, Ai(x) is the solution subject to the condition y → 0 as x → ∞. The standard choice for the other solution is the Airy function of the second kind, denoted Bi(x). It is defined as the solution with the same amplitude of oscillation as Ai(x) as x → −∞ which differs in phase by π/2: Bi ⁡ ( x ) = 1 π ∫ 0 ∞ [ exp ⁡ ( − t 3 3 + x t ) + sin ⁡ ( t 3 3 + x t ) ] d t . {\displaystyle \operatorname {Bi} (x)={\frac {1}{\pi }}\int _{0}^{\infty }\left[\exp \left(-{\tfrac {t^{3}}{3}}+xt\right)+\sin \left({\tfrac {t^{3}}{3}}+xt\right)\,\right]dt.} == Properties == The values of Ai(x) and Bi(x) and their derivatives at x = 0 are given by Ai ⁡ ( 0 ) = 1 3 2 / 3 Γ ( 2 3 ) , Ai ′ ⁡ ( 0 ) = − 1 3 1 / 3 Γ ( 1 3 ) , Bi ⁡ ( 0 ) = 1 3 1 / 6 Γ ( 2 3 ) , Bi ′ ⁡ ( 0 ) = 3 1 / 6 Γ ( 1 3 ) . {\displaystyle {\begin{aligned}\operatorname {Ai} (0)&{}={\frac {1}{3^{2/3}\,\Gamma \!\left({\frac {2}{3}}\right)}},&\quad \operatorname {Ai} '(0)&{}=-{\frac {1}{3^{1/3}\,\Gamma \!\left({\frac {1}{3}}\right)}},\\\operatorname {Bi} (0)&{}={\frac {1}{3^{1/6}\,\Gamma \!\left({\frac {2}{3}}\right)}},&\quad \operatorname {Bi} '(0)&{}={\frac {3^{1/6}}{\Gamma \!\left({\frac {1}{3}}\right)}}.\end{aligned}}} Here, Γ denotes the Gamma function. It follows that the Wronskian of Ai(x) and Bi(x) is 1/π. When x is positive, Ai(x) is positive, convex, and decreasing exponentially to zero, while Bi(x) is positive, convex, and increasing exponentially. When x is negative, Ai(x) and Bi(x) oscillate around zero with ever-increasing frequency and ever-decreasing amplitude. This is supported by the asymptotic formulae below for the Airy functions. 
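The special values at x = 0 and the Wronskian quoted above are easy to verify numerically. The short check below assumes SciPy is available; scipy.special.airy returns Ai, Ai′, Bi and Bi′ at the given argument.

```python
import numpy as np
from scipy.special import airy, gamma

ai0, aip0, bi0, bip0 = airy(0.0)

# Special values at x = 0 in terms of the Gamma function.
print(ai0,  1 / (3**(2/3) * gamma(2/3)))    # Ai(0)
print(aip0, -1 / (3**(1/3) * gamma(1/3)))   # Ai'(0)
print(bi0,  1 / (3**(1/6) * gamma(2/3)))    # Bi(0)
print(bip0, 3**(1/6) / gamma(1/3))          # Bi'(0)

# The Wronskian Ai(x)*Bi'(x) - Ai'(x)*Bi(x) equals 1/pi for every x.
x = np.linspace(-5.0, 5.0, 11)
ai, aip, bi, bip = airy(x)
print(np.allclose(ai * bip - aip * bi, 1 / np.pi))
```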
The Airy functions are orthogonal in the sense that ∫ − ∞ ∞ Ai ⁡ ( t + x ) Ai ⁡ ( t + y ) d t = δ ( x − y ) {\displaystyle \int _{-\infty }^{\infty }\operatorname {Ai} (t+x)\operatorname {Ai} (t+y)dt=\delta (x-y)} again using an improper Riemann integral. Real zeros of Ai(x) and its derivative Ai'(x) Neither Ai(x) nor its derivative Ai'(x) have positive real zeros. The "first" real zeros (i.e. nearest to x=0) are: "first" zeros of Ai(x) are at x ≈ −2.33811, −4.08795, −5.52056, −6.78671, ... "first" zeros of its derivative Ai'(x) are at x ≈ −1.01879, −3.24820, −4.82010, −6.16331, ... == Asymptotic formulae == As explained below, the Airy functions can be extended to the complex plane, giving entire functions. The asymptotic behaviour of the Airy functions as |z| goes to infinity at a constant value of arg(z) depends on arg(z): this is called the Stokes phenomenon. For |arg(z)| < π we have the following asymptotic formula for Ai(z): Ai ⁡ ( z ) ∼ 1 2 π z 1 / 4 exp ⁡ ( − 2 3 z 3 / 2 ) [ ∑ n = 0 ∞ ( − 1 ) n Γ ( n + 5 6 ) Γ ( n + 1 6 ) ( 3 4 ) n 2 π n ! z 3 n / 2 ] . {\displaystyle \operatorname {Ai} (z)\sim {\dfrac {1}{2{\sqrt {\pi }}\,z^{1/4}}}\exp \left(-{\frac {2}{3}}z^{3/2}\right)\left[\sum _{n=0}^{\infty }{\dfrac {(-1)^{n}\,\Gamma \!\left(n+{\frac {5}{6}}\right)\,\Gamma \!\left(n+{\frac {1}{6}}\right)\left({\frac {3}{4}}\right)^{n}}{2\pi \,n!\,z^{3n/2}}}\right].} or Ai ⁡ ( z ) ∼ e − ζ 4 π 3 / 2 z 1 / 4 [ ∑ n = 0 ∞ Γ ( n + 5 6 ) Γ ( n + 1 6 ) n ! ( − 2 ζ ) n ] . {\displaystyle \operatorname {Ai} (z)\sim {\dfrac {e^{-\zeta }}{4\pi ^{3/2}\,z^{1/4}}}\left[\sum _{n=0}^{\infty }{\dfrac {\Gamma \!\left(n+{\frac {5}{6}}\right)\,\Gamma \!\left(n+{\frac {1}{6}}\right)}{n!(-2\zeta )^{n}}}\right].} where ζ = 2 3 z 3 / 2 . {\displaystyle \zeta ={\tfrac {2}{3}}z^{3/2}.} In particular, the first few terms are Ai ⁡ ( z ) = e − ζ 2 π 1 / 2 z 1 / 4 ( 1 − 5 72 ζ + 385 10368 ζ 2 + O ( ζ − 3 ) ) {\displaystyle \operatorname {Ai} (z)={\frac {e^{-\zeta }}{2\pi ^{1/2}z^{1/4}}}\left(1-{\frac {5}{72\zeta }}+{\frac {385}{10368\zeta ^{2}}}+O(\zeta ^{-3})\right)} There is a similar one for Bi(z), but only applicable when |arg(z)| < π/3: Bi ⁡ ( z ) ∼ 1 π z 1 / 4 exp ⁡ ( 2 3 z 3 / 2 ) [ ∑ n = 0 ∞ Γ ( n + 5 6 ) Γ ( n + 1 6 ) ( 3 4 ) n 2 π n ! z 3 n / 2 ] . {\displaystyle \operatorname {Bi} (z)\sim {\frac {1}{{\sqrt {\pi }}\,z^{1/4}}}\exp \left({\frac {2}{3}}z^{3/2}\right)\left[\sum _{n=0}^{\infty }{\dfrac {\Gamma \!\left(n+{\frac {5}{6}}\right)\,\Gamma \!\left(n+{\frac {1}{6}}\right)\left({\frac {3}{4}}\right)^{n}}{2\pi \,n!\,z^{3n/2}}}\right].} A more accurate formula for Ai(z) and a formula for Bi(z) when π/3 < |arg(z)| < π or, equivalently, for Ai(−z) and Bi(−z) when |arg(z)| < 2π/3 but not zero, are: Ai ⁡ ( − z ) ∼ 1 π z 1 / 4 sin ⁡ ( 2 3 z 3 / 2 + π 4 ) [ ∑ n = 0 ∞ ( − 1 ) n Γ ( 2 n + 5 6 ) Γ ( 2 n + 1 6 ) ( 3 4 ) 2 n 2 π ( 2 n ) ! z 3 n ] − 1 π z 1 / 4 cos ⁡ ( 2 3 z 3 / 2 + π 4 ) [ ∑ n = 0 ∞ ( − 1 ) n Γ ( 2 n + 11 6 ) Γ ( 2 n + 7 6 ) ( 3 4 ) 2 n + 1 2 π ( 2 n + 1 ) ! z 3 n + 3 / 2 ] Bi ⁡ ( − z ) ∼ 1 π z 1 / 4 cos ⁡ ( 2 3 z 3 / 2 + π 4 ) [ ∑ n = 0 ∞ ( − 1 ) n Γ ( 2 n + 5 6 ) Γ ( 2 n + 1 6 ) ( 3 4 ) 2 n 2 π ( 2 n ) ! z 3 n ] + 1 π z 1 4 sin ⁡ ( 2 3 z 3 / 2 + π 4 ) [ ∑ n = 0 ∞ ( − 1 ) n Γ ( 2 n + 11 6 ) Γ ( 2 n + 7 6 ) ( 3 4 ) 2 n + 1 2 π ( 2 n + 1 ) ! z 3 n + 3 / 2 ] . 
{\displaystyle {\begin{aligned}\operatorname {Ai} (-z)\sim &{}\ {\frac {1}{{\sqrt {\pi }}\,z^{1/4}}}\sin \left({\frac {2}{3}}z^{3/2}+{\frac {\pi }{4}}\right)\left[\sum _{n=0}^{\infty }{\dfrac {(-1)^{n}\,\Gamma \!\left(2n+{\frac {5}{6}}\right)\,\Gamma \!\left(2n+{\frac {1}{6}}\right)\left({\frac {3}{4}}\right)^{2n}}{2\pi \,(2n)!\,z^{3n}}}\right]\\[6pt]&{}-{\frac {1}{{\sqrt {\pi }}\,z^{1/4}}}\cos \left({\frac {2}{3}}z^{3/2}+{\frac {\pi }{4}}\right)\left[\sum _{n=0}^{\infty }{\dfrac {(-1)^{n}\,\Gamma \!\left(2n+{\frac {11}{6}}\right)\,\Gamma \!\left(2n+{\frac {7}{6}}\right)\left({\frac {3}{4}}\right)^{2n+1}}{2\pi \,(2n+1)!\,z^{3n\,+\,3/2}}}\right]\\[6pt]\operatorname {Bi} (-z)\sim &{}{\frac {1}{{\sqrt {\pi }}\,z^{1/4}}}\cos \left({\frac {2}{3}}z^{3/2}+{\frac {\pi }{4}}\right)\left[\sum _{n=0}^{\infty }{\dfrac {(-1)^{n}\,\Gamma \!\left(2n+{\frac {5}{6}}\right)\,\Gamma \!\left(2n+{\frac {1}{6}}\right)\left({\frac {3}{4}}\right)^{2n}}{2\pi \,(2n)!\,z^{3n}}}\right]\\[6pt]&{}+{\frac {1}{{\sqrt {\pi }}\,z^{\frac {1}{4}}}}\sin \left({\frac {2}{3}}z^{3/2}+{\frac {\pi }{4}}\right)\left[\sum _{n=0}^{\infty }{\dfrac {(-1)^{n}\,\Gamma \!\left(2n+{\frac {11}{6}}\right)\,\Gamma \!\left(2n+{\frac {7}{6}}\right)\left({\frac {3}{4}}\right)^{2n+1}}{2\pi \,(2n+1)!\,z^{3n\,+\,3/2}}}\right].\end{aligned}}} When |arg(z)| = 0 these are good approximations but are not asymptotic because the ratio between Ai(−z) or Bi(−z) and the above approximation goes to infinity whenever the sine or cosine goes to zero. Asymptotic expansions for these limits are also available. These are listed in (Abramowitz and Stegun, 1983) and (Olver, 1974). One is also able to obtain asymptotic expressions for the derivatives Ai'(z) and Bi'(z). Similarly to before, when |arg(z)| < π: Ai ′ ⁡ ( z ) ∼ − z 1 / 4 2 π exp ⁡ ( − 2 3 z 3 / 2 ) [ ∑ n = 0 ∞ 1 + 6 n 1 − 6 n ( − 1 ) n Γ ( n + 5 6 ) Γ ( n + 1 6 ) ( 3 4 ) n 2 π n ! z 3 n / 2 ] . {\displaystyle \operatorname {Ai} '(z)\sim -{\dfrac {z^{1/4}}{2{\sqrt {\pi }}\,}}\exp \left(-{\frac {2}{3}}z^{3/2}\right)\left[\sum _{n=0}^{\infty }{\frac {1+6n}{1-6n}}{\dfrac {(-1)^{n}\,\Gamma \!\left(n+{\frac {5}{6}}\right)\,\Gamma \!\left(n+{\frac {1}{6}}\right)\left({\frac {3}{4}}\right)^{n}}{2\pi \,n!\,z^{3n/2}}}\right].} When |arg(z)| < π/3 we have: Bi ′ ⁡ ( z ) ∼ z 1 / 4 π exp ⁡ ( 2 3 z 3 / 2 ) [ ∑ n = 0 ∞ 1 + 6 n 1 − 6 n Γ ( n + 5 6 ) Γ ( n + 1 6 ) ( 3 4 ) n 2 π n ! z 3 n / 2 ] . {\displaystyle \operatorname {Bi} '(z)\sim {\frac {z^{1/4}}{{\sqrt {\pi }}\,}}\exp \left({\frac {2}{3}}z^{3/2}\right)\left[\sum _{n=0}^{\infty }{\frac {1+6n}{1-6n}}{\dfrac {\Gamma \!\left(n+{\frac {5}{6}}\right)\,\Gamma \!\left(n+{\frac {1}{6}}\right)\left({\frac {3}{4}}\right)^{n}}{2\pi \,n!\,z^{3n/2}}}\right].} Similarly, an expression for Ai'(−z) and Bi'(−z) when |arg(z)| < 2π/3 but not zero, are Ai ′ ⁡ ( − z ) ∼ − z 1 / 4 π cos ⁡ ( 2 3 z 3 / 2 + π 4 ) [ ∑ n = 0 ∞ 1 + 12 n 1 − 12 n ( − 1 ) n Γ ( 2 n + 5 6 ) Γ ( 2 n + 1 6 ) ( 3 4 ) 2 n 2 π ( 2 n ) ! z 3 n ] − z 1 / 4 π sin ⁡ ( 2 3 z 3 / 2 + π 4 ) [ ∑ n = 0 ∞ 7 + 12 n − 5 − 12 n ( − 1 ) n Γ ( 2 n + 11 6 ) Γ ( 2 n + 7 6 ) ( 3 4 ) 2 n + 1 2 π ( 2 n + 1 ) ! z 3 n + 3 / 2 ] Bi ′ ⁡ ( − z ) ∼ z 1 / 4 π sin ⁡ ( 2 3 z 3 / 2 + π 4 ) [ ∑ n = 0 ∞ 1 + 12 n 1 − 12 n ( − 1 ) n Γ ( 2 n + 5 6 ) Γ ( 2 n + 1 6 ) ( 3 4 ) 2 n 2 π ( 2 n ) ! z 3 n ] − z 1 / 4 π cos ⁡ ( 2 3 z 3 / 2 + π 4 ) [ ∑ n = 0 ∞ 7 + 12 n − 5 − 12 n ( − 1 ) n Γ ( 2 n + 11 6 ) Γ ( 2 n + 7 6 ) ( 3 4 ) 2 n + 1 2 π ( 2 n + 1 ) ! 
z 3 n + 3 / 2 ] {\displaystyle {\begin{aligned}\operatorname {Ai} '(-z)\sim &{}-{\frac {z^{1/4}}{{\sqrt {\pi }}\,}}\cos \left({\frac {2}{3}}z^{3/2}+{\frac {\pi }{4}}\right)\left[\sum _{n=0}^{\infty }{\frac {1+12n}{1-12n}}{\dfrac {(-1)^{n}\,\Gamma \!\left(2n+{\frac {5}{6}}\right)\,\Gamma \!\left(2n+{\frac {1}{6}}\right)\left({\frac {3}{4}}\right)^{2n}}{2\pi \,(2n)!\,z^{3n}}}\right]\\[6pt]&{}-{\frac {z^{1/4}}{{\sqrt {\pi }}\,}}\sin \left({\frac {2}{3}}z^{3/2}+{\frac {\pi }{4}}\right)\left[\sum _{n=0}^{\infty }{\frac {7+12n}{-5-12n}}{\dfrac {(-1)^{n}\,\Gamma \!\left(2n+{\frac {11}{6}}\right)\,\Gamma \!\left(2n+{\frac {7}{6}}\right)\left({\frac {3}{4}}\right)^{2n+1}}{2\pi \,(2n+1)!\,z^{3n\,+\,3/2}}}\right]\\[6pt]\operatorname {Bi} '(-z)\sim &{}\ {\frac {z^{1/4}}{{\sqrt {\pi }}\,}}\sin \left({\frac {2}{3}}z^{3/2}+{\frac {\pi }{4}}\right)\left[\sum _{n=0}^{\infty }{\frac {1+12n}{1-12n}}{\dfrac {(-1)^{n}\,\Gamma \!\left(2n+{\frac {5}{6}}\right)\,\Gamma \!\left(2n+{\frac {1}{6}}\right)\left({\frac {3}{4}}\right)^{2n}}{2\pi \,(2n)!\,z^{3n}}}\right]\\[6pt]&{}-{\frac {z^{1/4}}{{\sqrt {\pi }}\,}}\cos \left({\frac {2}{3}}z^{3/2}+{\frac {\pi }{4}}\right)\left[\sum _{n=0}^{\infty }{\frac {7+12n}{-5-12n}}{\dfrac {(-1)^{n}\,\Gamma \!\left(2n+{\frac {11}{6}}\right)\,\Gamma \!\left(2n+{\frac {7}{6}}\right)\left({\frac {3}{4}}\right)^{2n+1}}{2\pi \,(2n+1)!\,z^{3n\,+\,3/2}}}\right]\\\end{aligned}}} == Complex arguments == We can extend the definition of the Airy function to the complex plane by Ai ⁡ ( z ) = 1 2 π i ∫ C exp ⁡ ( t 3 3 − z t ) d t , {\displaystyle \operatorname {Ai} (z)={\frac {1}{2\pi i}}\int _{C}\exp \left({\tfrac {t^{3}}{3}}-zt\right)\,dt,} where the integral is over a path C starting at the point at infinity with argument −π/3 and ending at the point at infinity with argument π/3. Alternatively, we can use the differential equation y′′ − xy = 0 to extend Ai(x) and Bi(x) to entire functions on the complex plane. The asymptotic formula for Ai(x) is still valid in the complex plane if the principal value of x2/3 is taken and x is bounded away from the negative real axis. The formula for Bi(x) is valid provided x is in the sector x ∈ C : | arg ⁡ ( x ) | < π 3 − δ {\displaystyle x\in \mathbb {C} :\left|\arg(x)\right|<{\tfrac {\pi }{3}}-\delta } for some positive δ. Finally, the formulae for Ai(−x) and Bi(−x) are valid if x is in the sector x ∈ C : | arg ⁡ ( x ) | < 2 π 3 − δ . {\displaystyle x\in \mathbb {C} :\left|\arg(x)\right|<{\tfrac {2\pi }{3}}-\delta .} It follows from the asymptotic behaviour of the Airy functions that both Ai(x) and Bi(x) have an infinity of zeros on the negative real axis. The function Ai(x) has no other zeros in the complex plane, while the function Bi(x) also has infinitely many zeros in the sector z ∈ C : π 3 < | arg ⁡ ( z ) | < π 2 . {\displaystyle z\in \mathbb {C} :{\tfrac {\pi }{3}}<\left|\arg(z)\right|<{\tfrac {\pi }{2}}.} === Plots === == Relation to other special functions == For positive arguments, the Airy functions are related to the modified Bessel functions: Ai ⁡ ( x ) = 1 π x 3 K 1 / 3 ( 2 3 x 3 / 2 ) , Bi ⁡ ( x ) = x 3 [ I 1 / 3 ( 2 3 x 3 / 2 ) + I − 1 / 3 ( 2 3 x 3 / 2 ) ] . {\displaystyle {\begin{aligned}\operatorname {Ai} (x)&{}={\frac {1}{\pi }}{\sqrt {\frac {x}{3}}}\,K_{1/3}\!\left({\frac {2}{3}}x^{3/2}\right),\\\operatorname {Bi} (x)&{}={\sqrt {\frac {x}{3}}}\left[I_{1/3}\!\left({\frac {2}{3}}x^{3/2}\right)+I_{-1/3}\!\left({\frac {2}{3}}x^{3/2}\right)\right].\end{aligned}}} Here, I±1/3 and K1/3 are solutions of x 2 y ″ + x y ′ − ( x 2 + 1 9 ) y = 0. 
{\displaystyle x^{2}y''+xy'-\left(x^{2}+{\tfrac {1}{9}}\right)y=0.} The first derivative of the Airy function is A i ′ ⁡ ( x ) = − x π 3 K 2 / 3 ( 2 3 x 3 / 2 ) . {\displaystyle \operatorname {Ai'} (x)=-{\frac {x}{\pi {\sqrt {3}}}}\,K_{2/3}\!\left({\frac {2}{3}}x^{3/2}\right).} Functions K1/3 and K2/3 can be represented in terms of rapidly convergent integrals (see also modified Bessel functions) For negative arguments, the Airy function are related to the Bessel functions: Ai ⁡ ( − x ) = x 9 [ J 1 / 3 ( 2 3 x 3 / 2 ) + J − 1 / 3 ( 2 3 x 3 / 2 ) ] , Bi ⁡ ( − x ) = x 3 [ J − 1 / 3 ( 2 3 x 3 / 2 ) − J 1 / 3 ( 2 3 x 3 / 2 ) ] . {\displaystyle {\begin{aligned}\operatorname {Ai} (-x)&{}={\sqrt {\frac {x}{9}}}\left[J_{1/3}\!\left({\frac {2}{3}}x^{3/2}\right)+J_{-1/3}\!\left({\frac {2}{3}}x^{3/2}\right)\right],\\\operatorname {Bi} (-x)&{}={\sqrt {\frac {x}{3}}}\left[J_{-1/3}\!\left({\frac {2}{3}}x^{3/2}\right)-J_{1/3}\!\left({\frac {2}{3}}x^{3/2}\right)\right].\end{aligned}}} Here, J±1/3 are solutions of x 2 y ″ + x y ′ + ( x 2 − 1 9 ) y = 0. {\displaystyle x^{2}y''+xy'+\left(x^{2}-{\frac {1}{9}}\right)y=0.} The Scorer's functions Hi(x) and -Gi(x) solve the equation y′′ − xy = 1/π. They can also be expressed in terms of the Airy functions: Gi ⁡ ( x ) = Bi ⁡ ( x ) ∫ x ∞ Ai ⁡ ( t ) d t + Ai ⁡ ( x ) ∫ 0 x Bi ⁡ ( t ) d t , Hi ⁡ ( x ) = Bi ⁡ ( x ) ∫ − ∞ x Ai ⁡ ( t ) d t − Ai ⁡ ( x ) ∫ − ∞ x Bi ⁡ ( t ) d t . {\displaystyle {\begin{aligned}\operatorname {Gi} (x)&{}=\operatorname {Bi} (x)\int _{x}^{\infty }\operatorname {Ai} (t)\,dt+\operatorname {Ai} (x)\int _{0}^{x}\operatorname {Bi} (t)\,dt,\\\operatorname {Hi} (x)&{}=\operatorname {Bi} (x)\int _{-\infty }^{x}\operatorname {Ai} (t)\,dt-\operatorname {Ai} (x)\int _{-\infty }^{x}\operatorname {Bi} (t)\,dt.\end{aligned}}} == Fourier transform == Using the definition of the Airy function Ai(x), it is straightforward to show that its Fourier transform is given by F ( Ai ) ( k ) := ∫ − ∞ ∞ Ai ⁡ ( x ) e − 2 π i k x d x = e i 3 ( 2 π k ) 3 . {\displaystyle {\mathcal {F}}(\operatorname {Ai} )(k):=\int _{-\infty }^{\infty }\operatorname {Ai} (x)\ e^{-2\pi ikx}\,dx=e^{{\frac {i}{3}}(2\pi k)^{3}}.} This can be obtained by taking the Fourier transform of the Airy equation. Let y ^ = 1 2 π i ∫ y e − i k x d x {\textstyle {\hat {y}}={\frac {1}{2\pi i}}\int ye^{-ikx}dx} . Then, i y ^ ′ + k 2 y ^ = 0 {\displaystyle i{\hat {y}}'+k^{2}{\hat {y}}=0} , which then has solutions y ^ = C e i k 3 / 3 . {\displaystyle {\hat {y}}=Ce^{ik^{3}/3}.} There is only one dimension of solutions because the Fourier transform requires y to decay to zero fast enough; Bi grows to infinity exponentially fast, so it cannot be obtained via a Fourier transform. == Applications == === Quantum mechanics === The Airy function is the solution to the time-independent Schrödinger equation for a particle confined within a triangular potential well and for a particle in a one-dimensional constant force field. For the same reason, it also serves to provide uniform semiclassical approximations near a turning point in the WKB approximation, when the potential may be locally approximated by a linear function of position. The triangular potential well solution is directly relevant for the understanding of electrons trapped in semiconductor heterojunctions. 
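The triangular-well application mentioned above can be made quantitative using the zeros of Ai. For the potential V(x) = Fx with a hard wall at x = 0, the standard result (assumed here rather than derived in the text) is that the bound-state energies are E_n = −a_n (ħ²F²/2m)^(1/3), where a_n are the successive negative zeros of Ai. The force value below is an arbitrary illustration, roughly corresponding to a few mV/nm acting on an electron, as in a semiconductor heterojunction.

```python
from scipy.constants import hbar, m_e, e
from scipy.special import ai_zeros

F = 1e-12                                   # confining force on an electron, in newtons
a_n, _, _, _ = ai_zeros(4)                  # first zeros of Ai: -2.33811, -4.08795, ...
E_n = -a_n * (hbar**2 * F**2 / (2.0 * m_e)) ** (1.0 / 3.0)

for n, E in enumerate(E_n, start=1):
    print(f"level {n}: {E / e * 1e3:6.1f} meV")
```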
=== Optics === A transversally asymmetric optical beam, where the electric field profile is given by the Airy function, has the interesting property that its maximum intensity accelerates towards one side instead of propagating in a straight line as is the case in symmetric beams. This is at the expense of the low-intensity tail being spread in the opposite direction, so the overall momentum of the beam is conserved. === Caustics === The Airy function underlies the form of the intensity near an optical directional caustic, such as that of the rainbow (called supernumerary rainbow). Historically, this was the mathematical problem that led Airy to develop this special function. In 1841, William Hallowes Miller experimentally measured the analog of the supernumerary rainbow by shining light through a thin cylinder of water, then observing through a telescope. He observed up to 30 bands. === Probability === In the mid-1980s, the Airy function was found to be intimately connected to Chernoff's distribution. The Airy function also appears in the definition of the Tracy–Widom distribution, which describes the law of the largest eigenvalues of random matrices. Due to the intimate connection of random matrix theory with the Kardar–Parisi–Zhang equation, there are central processes constructed in KPZ theory, such as the Airy process. == History == The Airy function is named after the British astronomer and physicist George Biddell Airy (1801–1892), who encountered it in his early study of optics in physics (Airy 1838). The notation Ai(x) was introduced by Harold Jeffreys. Airy had become the British Astronomer Royal in 1835, and he held that post until his retirement in 1881. == See also == Airy zeta function == Notes == == References == Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 10". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 448. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253. Airy (1838), "On the intensity of light in the neighbourhood of a caustic", Transactions of the Cambridge Philosophical Society, 6, University Press: 379–402, Bibcode:1838TCaPS...6..379A Frank William John Olver (1974). Asymptotics and Special Functions, Chapter 11. Academic Press, New York. Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 6.6.3. Airy Functions", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8, archived from the original on 2011-08-11, retrieved 2011-08-09 Vallée, Olivier; Soares, Manuel (2004), Airy functions and applications to physics, London: Imperial College Press, ISBN 978-1-86094-478-9, MR 2114198, archived from the original on 2010-01-13, retrieved 2010-05-14 == External links == "Airy functions", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Weisstein, Eric W. "Airy Functions". MathWorld. Wolfram function pages for Ai and Bi functions. Includes formulas, function evaluator, and plotting calculator. Olver, F. W. J. (2010), "Airy and related functions", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248.
Wikipedia/Airy_function
In molecular physics/nanotechnology, electrostatic deflection is the deformation of a beam-like structure or element bent by an electric field. It can be due to the interaction between electrostatic fields and a net charge, or to electric polarization effects. The beam-like structure or element is generally cantilevered (fixed at one of its ends). In nanomaterials, carbon nanotubes (CNTs) are typical examples exhibiting electrostatic deflection. The mechanism of electrostatic deflection due to electric polarization can be understood as follows: when a material is brought into an electric field (E), the field tends to shift the positive charge and the negative charge in opposite directions, and induced dipoles are thus created. For a beam-like element in such a field, the interaction between the molecular dipole moment and the electric field results in an induced torque (T). This torque tends to align the beam with the direction of the field. In the case of a cantilevered CNT, the tube is bent toward the field direction. Meanwhile, the electrically induced torque and the stiffness of the CNT compete against each other. This deformation has been observed in experiments. This property is an important characteristic of CNTs for promising nanoelectromechanical systems applications, as well as for their fabrication, separation and electromanipulation. Recently, several nanoelectromechanical systems based on cantilevered CNTs have been reported, such as nanorelays, nanoswitches, nanotweezers and feedback devices, which are designed for memory, sensing or actuation uses. Furthermore, theoretical studies have been carried out to try to gain a full understanding of the electrostatic deflection of carbon nanotubes.
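A rough feel for the torque-versus-stiffness competition can be obtained from a toy torque-balance model, sketched below. The model treats the tube as a rigid rod on an effective torsion spring, with an induced dipole along its axis; the polarizability, spring constant and field values are purely illustrative assumptions, not measured CNT parameters.

```python
import numpy as np
from scipy.optimize import brentq

# Toy model: a rigid rod on an effective torsion spring (stiffness k_theta) with
# an induced dipole along its axis, p = alpha_par * E * cos(psi), where psi is
# the angle between the rod axis and the field.  All numbers are hypothetical
# illustrations, not measured CNT parameters.
alpha_par = 5e-33            # axial polarizability, C*m^2/V   (assumed)
k_theta = 2e-18              # effective torsion-spring constant, N*m/rad (assumed)
psi0 = np.deg2rad(60.0)      # initial angle between the tube axis and the field

def residual(phi, E):
    """Induced electrostatic torque minus elastic restoring torque at bend angle phi."""
    psi = psi0 - phi                                   # current axis-field angle
    return alpha_par * E**2 * np.sin(psi) * np.cos(psi) - k_theta * phi

for E in (1e7, 3e7, 1e8):                              # field strength, V/m
    phi_eq = brentq(residual, 0.0, psi0, args=(E,))    # torque balance
    print(f"E = {E:.0e} V/m -> bend toward the field by {np.rad2deg(phi_eq):5.1f} deg")
```

== References ==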
Wikipedia/Electrostatic_deflection_(molecular_physics/nanotechnology)
A quantum mechanical system or particle that is bound—that is, confined spatially—can only take on certain discrete values of energy, called energy levels. This contrasts with classical particles, which can have any amount of energy. The term is commonly used for the energy levels of the electrons in atoms, ions, or molecules, which are bound by the electric field of the nucleus, but can also refer to energy levels of nuclei or vibrational or rotational energy levels in molecules. The energy spectrum of a system with such discrete energy levels is said to be quantized. In chemistry and atomic physics, an electron shell, or principal energy level, may be thought of as the orbit of one or more electrons around an atom's nucleus. The closest shell to the nucleus is called the "1 shell" (also called "K shell"), followed by the "2 shell" (or "L shell"), then the "3 shell" (or "M shell"), and so on further and further from the nucleus. The shells correspond with the principal quantum numbers (n = 1, 2, 3, 4, ...) or are labeled alphabetically with letters used in the X-ray notation (K, L, M, N, ...). Each shell can contain only a fixed number of electrons: The first shell can hold up to two electrons, the second shell can hold up to eight (2 + 6) electrons, the third shell can hold up to 18 (2 + 6 + 10) and so on. The general formula is that the nth shell can in principle hold up to 2n2 electrons. Since electrons are electrically attracted to the nucleus, an atom's electrons will generally occupy outer shells only if the more inner shells have already been completely filled by other electrons. However, this is not a strict requirement: atoms may have two or even three incomplete outer shells. (See Madelung rule for more details.) For an explanation of why electrons exist in these shells see electron configuration. If the potential energy is set to zero at infinite distance from the atomic nucleus or molecule, the usual convention, then bound electron states have negative potential energy. If an atom, ion, or molecule is at the lowest possible energy level, it and its electrons are said to be in the ground state. If it is at a higher energy level, it is said to be excited, or any electrons that have higher energy than the ground state are excited. An energy level is regarded as degenerate if there is more than one measurable quantum mechanical state associated with it. == Explanation == Quantized energy levels result from the wave behavior of particles, which gives a relationship between a particle's energy and its wavelength. For a confined particle such as an electron in an atom, the wave functions that have well defined energies have the form of a standing wave. States having well-defined energies are called stationary states because they are the states that do not change in time. Informally, these states correspond to a whole number of wavelengths of the wavefunction along a closed path (a path that ends where it started), such as a circular orbit around an atom, where the number of wavelengths gives the type of atomic orbital (0 for s-orbitals, 1 for p-orbitals and so on). Elementary examples that show mathematically how energy levels come about are the particle in a box and the quantum harmonic oscillator. Any superposition (linear combination) of energy states is also a quantum state, but such states change with time and do not have well-defined energies. 
A measurement of the energy results in the collapse of the wavefunction, which results in a new state that consists of just a single energy state. Measurement of the possible energy levels of an object is called spectroscopy. == History == The first evidence of quantization in atoms was the observation of spectral lines in light from the sun in the early 1800s by Joseph von Fraunhofer and William Hyde Wollaston. The notion of energy levels was proposed in 1913 by Danish physicist Niels Bohr in the Bohr theory of the atom. The modern quantum mechanical theory giving an explanation of these energy levels in terms of the Schrödinger equation was advanced by Erwin Schrödinger and Werner Heisenberg in 1926. == Atoms == === Intrinsic energy levels === In the formulas for energy of electrons at various levels given below in an atom, the zero point for energy is set when the electron in question has completely left the atom; i.e. when the electron's principal quantum number n = ∞. When the electron is bound to the atom in any closer value of n, the electron's energy is lower and is considered negative. ==== Orbital state energy level: atom/ion with nucleus + one electron ==== Assume there is one electron in a given atomic orbital in a hydrogen-like atom (ion). The energy of its state is mainly determined by the electrostatic interaction of the (negative) electron with the (positive) nucleus. The energy levels of an electron around a nucleus are given by: E n = − h c R ∞ Z 2 n 2 {\displaystyle E_{n}=-hcR_{\infty }{\frac {Z^{2}}{n^{2}}}} (typically between 1 eV and 103 eV), where R∞ is the Rydberg constant, Z is the atomic number, n is the principal quantum number, h is the Planck constant, and c is the speed of light. For hydrogen-like atoms (ions) only, the Rydberg levels depend only on the principal quantum number n. This equation is obtained from combining the Rydberg formula for any hydrogen-like element (shown below) with E = hν = hc / λ assuming that the principal quantum number n above = n1 in the Rydberg formula and n2 = ∞ (principal quantum number of the energy level the electron descends from, when emitting a photon). The Rydberg formula was derived from empirical spectroscopic emission data. 1 λ = R Z 2 ( 1 n 1 2 − 1 n 2 2 ) {\displaystyle {\frac {1}{\lambda }}=RZ^{2}\left({\frac {1}{n_{1}^{2}}}-{\frac {1}{n_{2}^{2}}}\right)} An equivalent formula can be derived quantum mechanically from the time-independent Schrödinger equation with a kinetic energy Hamiltonian operator using a wave function as an eigenfunction to obtain the energy levels as eigenvalues, but the Rydberg constant would be replaced by other fundamental physics constants. ==== Electron–electron interactions in atoms ==== If there is more than one electron around the atom, electron–electron interactions raise the energy level. These interactions are often neglected if the spatial overlap of the electron wavefunctions is low. For multi-electron atoms, interactions between electrons cause the preceding equation to be no longer accurate as stated simply with Z as the atomic number. A simple (though not complete) way to understand this is as a shielding effect, where the outer electrons see an effective nucleus of reduced charge, since the inner electrons are bound tightly to the nucleus and partially cancel its charge. This leads to an approximate correction where Z is substituted with an effective nuclear charge symbolized as Zeff that depends strongly on the principal quantum number. 
E n , ℓ = − h c R ∞ Z e f f 2 n 2 {\displaystyle E_{n,\ell }=-hcR_{\infty }{\frac {{Z_{\rm {eff}}}^{2}}{n^{2}}}} In such cases, the orbital types (determined by the azimuthal quantum number ℓ) as well as their levels within the atom affect Zeff and therefore also affect the various atomic electron energy levels. The Aufbau principle of filling an atom with electrons for an electron configuration takes these differing energy levels into account: when an atom is filled with electrons in the ground state, the lowest energy levels are filled first, consistent with the Pauli exclusion principle, the Aufbau principle, and Hund's rule. ==== Fine structure splitting ==== Fine structure arises from relativistic kinetic energy corrections, spin–orbit coupling (an electrodynamic interaction between the electron's spin and motion and the nucleus's electric field) and the Darwin term (contact term interaction of s shell electrons inside the nucleus). These affect the levels by a typical order of magnitude of 10⁻³ eV. ==== Hyperfine structure ==== This even finer structure is due to electron–nucleus spin–spin interaction, resulting in a typical change in the energy levels on the order of 10⁻⁴ eV. === Energy levels due to external fields === ==== Zeeman effect ==== There is an interaction energy associated with the magnetic dipole moment, μL, arising from the electronic orbital angular momentum, L, given by U = − μ L ⋅ B {\displaystyle U=-{\boldsymbol {\mu }}_{L}\cdot \mathbf {B} } with − μ L = e ℏ 2 m L = μ B L {\displaystyle -{\boldsymbol {\mu }}_{L}={\dfrac {e\hbar }{2m}}\mathbf {L} =\mu _{B}\mathbf {L} } . The magnetic moment arising from the electron spin must also be taken into account: due to relativistic effects (Dirac equation), there is a magnetic moment, μS, arising from the electron spin, − μ S = − μ B g S S {\displaystyle -{\boldsymbol {\mu }}_{S}=-\mu _{\text{B}}g_{S}\mathbf {S} } , with gS the electron-spin g-factor (about 2), resulting in a total magnetic moment, μ, μ = μ L + μ S {\displaystyle {\boldsymbol {\mu }}={\boldsymbol {\mu }}_{L}+{\boldsymbol {\mu }}_{S}} . The interaction energy therefore becomes U B = − μ ⋅ B = μ B B ( M L + g S M S ) {\displaystyle U_{B}=-{\boldsymbol {\mu }}\cdot \mathbf {B} =\mu _{\text{B}}B(M_{L}+g_{S}M_{S})} . ==== Stark effect ==== == Molecules == Chemical bonds between atoms in a molecule form because they make the situation more stable for the involved atoms, which generally means that the total energy of the involved atoms in the molecule is lower than if the atoms were not so bonded. As separate atoms approach each other to covalently bond, their orbitals affect each other's energy levels to form bonding and antibonding molecular orbitals. The energy level of the bonding orbitals is lower, and the energy level of the antibonding orbitals is higher. For the bond in the molecule to be stable, the covalent bonding electrons occupy the lower energy bonding orbital, which may be signified by such symbols as σ or π depending on the situation. Corresponding anti-bonding orbitals can be signified by adding an asterisk to get σ* or π* orbitals. A non-bonding orbital in a molecule is an orbital with electrons in outer shells which do not participate in bonding, and its energy level is the same as that of the constituent atom. Such orbitals can be designated as n orbitals. The electrons in an n orbital are typically lone pairs. In polyatomic molecules, different vibrational and rotational energy levels are also involved.
Roughly speaking, a molecular energy state (i.e., an eigenstate of the molecular Hamiltonian) is the sum of the electronic, vibrational, rotational, nuclear, and translational components, such that: E = E electronic + E vibrational + E rotational + E nuclear + E translational {\displaystyle E=E_{\text{electronic}}+E_{\text{vibrational}}+E_{\text{rotational}}+E_{\text{nuclear}}+E_{\text{translational}}} where Eelectronic is an eigenvalue of the electronic molecular Hamiltonian (the value of the potential energy surface) at the equilibrium geometry of the molecule. The molecular energy levels are labelled by the molecular term symbols. The specific energies of these components vary with the specific energy state and the substance. === Energy level diagrams === There are various types of energy level diagrams for bonds between atoms in a molecule. Examples include molecular orbital diagrams, Jablonski diagrams, and Franck–Condon diagrams. == Energy level transitions == Electrons in atoms and molecules can change (make transitions in) energy levels by emitting or absorbing a photon (of electromagnetic radiation), whose energy must be exactly equal to the energy difference between the two levels. Electrons can also be completely removed from a chemical species such as an atom, molecule, or ion. Complete removal of an electron from an atom can be a form of ionization, which is effectively moving the electron out to an orbital with an infinite principal quantum number, in effect so far away that it has practically no further effect on the remaining atom (ion). For various types of atoms, there are 1st, 2nd, 3rd, etc. ionization energies for removing the 1st, then the 2nd, then the 3rd, etc. of the highest energy electrons, respectively, from the atom originally in the ground state. Energy in corresponding opposite quantities can also be released, sometimes in the form of photon energy, when electrons are added to positively charged ions or sometimes atoms. Molecules can also undergo transitions in their vibrational or rotational energy levels. Energy level transitions can also be nonradiative, meaning emission or absorption of a photon is not involved. If an atom, ion, or molecule is at the lowest possible energy level, it and its electrons are said to be in the ground state. If it is at a higher energy level, it is said to be excited, or any electrons that have higher energy than the ground state are excited. Such a species can be excited to a higher energy level by absorbing a photon whose energy is equal to the energy difference between the levels. Conversely, an excited species can go to a lower energy level by spontaneously emitting a photon whose energy is equal to the energy difference. A photon's energy is equal to the Planck constant (h) times its frequency (f) and thus is proportional to its frequency, or inversely proportional to its wavelength (λ): ΔE = hf = hc / λ, since c, the speed of light, equals fλ. Correspondingly, many kinds of spectroscopy are based on detecting the frequency or wavelength of the emitted or absorbed photons to provide information on the material analyzed, including information on the energy levels and electronic structure of materials obtained by analyzing the spectrum. An asterisk is commonly used to designate an excited state.
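As a worked illustration of the hydrogen-like level formula and the relation ΔE = hf = hc/λ used above, the following Python sketch is an added example (the rounded constants are assumptions, not values quoted in this article): it computes the emission wavelengths of the hydrogen Balmer series.

```python
RYDBERG_EV = 13.605693   # hc * R_inf expressed in eV
HC_EV_NM = 1239.842      # h * c in eV*nm, convenient for energy-to-wavelength conversion

def level_energy_ev(n, z=1):
    """Energy of level n of a hydrogen-like atom or ion, E_n = -hc*R_inf*Z^2/n^2, in eV."""
    return -RYDBERG_EV * z**2 / n**2

def emission_wavelength_nm(n_upper, n_lower, z=1):
    """Wavelength of the photon emitted when the electron drops from n_upper to n_lower."""
    delta_e = level_energy_ev(n_upper, z) - level_energy_ev(n_lower, z)
    return HC_EV_NM / delta_e

# Balmer series of hydrogen: transitions ending on n = 2
for n_up in (3, 4, 5):
    print(f"n={n_up} -> 2: {emission_wavelength_nm(n_up, 2):.1f} nm")
# prints roughly 656, 486 and 434 nm, the familiar visible hydrogen lines
```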
An electron transition in a molecule's bond from a ground state to an excited state may have a designation such as σ → σ*, π → π*, or n → π* meaning excitation of an electron from a σ bonding to a σ antibonding orbital, from a π bonding to a π antibonding orbital, or from an n non-bonding to a π antibonding orbital. Reverse electron transitions for all these types of excited molecules are also possible to return to their ground states, which can be designated as σ* → σ, π* → π, or π* → n. A transition in an energy level of an electron in a molecule may be combined with a vibrational transition and called a vibronic transition. A vibrational and rotational transition may be combined by rovibrational coupling. In rovibronic coupling, electron transitions are simultaneously combined with both vibrational and rotational transitions. Photons involved in transitions may have energy of various ranges in the electromagnetic spectrum, such as X-ray, ultraviolet, visible light, infrared, or microwave radiation, depending on the type of transition. In a very general way, energy level differences between electronic states are larger, differences between vibrational levels are intermediate, and differences between rotational levels are smaller, although there can be overlap. Translational energy levels are practically continuous and can be calculated as kinetic energy using classical mechanics. Higher temperature causes fluid atoms and molecules to move faster increasing their translational energy, and thermally excites molecules to higher average amplitudes of vibrational and rotational modes (excites the molecules to higher internal energy levels). This means that as temperature rises, translational, vibrational, and rotational contributions to molecular heat capacity let molecules absorb heat and hold more internal energy. Conduction of heat typically occurs as molecules or atoms collide transferring the heat between each other. At even higher temperatures, electrons can be thermally excited to higher energy orbitals in atoms or molecules. A subsequent drop of an electron to a lower energy level can release a photon, causing a possibly coloured glow. An electron further from the nucleus has higher potential energy than an electron closer to the nucleus, thus it becomes less bound to the nucleus, since its potential energy is negative and inversely dependent on its distance from the nucleus. == Crystalline materials == Crystalline solids are found to have energy bands, instead of or in addition to energy levels. Electrons can take on any energy within an unfilled band. At first this appears to be an exception to the requirement for energy levels. However, as shown in band theory, energy bands are actually made up of many discrete energy levels which are too close together to resolve. Within a band the number of levels is of the order of the number of atoms in the crystal, so although electrons are actually restricted to these energies, they appear to be able to take on a continuum of values. The important energy levels in a crystal are the top of the valence band, the bottom of the conduction band, the Fermi level, the vacuum level, and the energy levels of any defect states in the crystal. == See also == Perturbation theory (quantum mechanics) Atomic clock Computational chemistry == References ==
Wikipedia/Molecular_energy_state
Chemical physics is a branch of physics that studies chemical processes from a physical point of view. It focuses on understanding the physical properties and behavior of chemical systems, using principles from both physics and chemistry. This field investigates physicochemical phenomena using techniques from atomic and molecular physics and condensed matter physics. The United States Department of Education defines chemical physics as "A program that focuses on the scientific study of structural phenomena combining the disciplines of physical chemistry and atomic/molecular physics. Includes instruction in heterogeneous structures, alignment and surface phenomena, quantum theory, mathematical physics, statistical and classical mechanics, chemical kinetics, and laser physics." == Distinction between chemical physics and physical chemistry == While both sit at the interface of physics and chemistry, chemical physics is distinct from physical chemistry in that it focuses more on using physical theories such as quantum mechanics, statistical mechanics, and molecular dynamics to understand and explain chemical phenomena at the microscopic level. Physical chemistry, by contrast, deals with the physical properties and behavior of matter in chemical reactions, drawing on a broader range of methods such as thermodynamics, kinetics, and spectroscopy, and often links macroscopic and microscopic chemical behavior. The distinction between the two fields is not sharp, as they share considerable common ground. Scientists often practice in both fields during their research, as there is significant overlap in the topics and techniques used. Journals like PCCP (Physical Chemistry Chemical Physics) cover research in both areas, highlighting their overlap. == History == The term "chemical physics" in its modern sense was first used by the German scientist A. Eucken, who published "A Course in Chemical Physics" in 1930. Prior to this, in 1927, the publication "Electronic Chemistry" by V. N. Kondrat'ev, N. N. Semenov, and Iu. B. Khariton hinted at the meaning of "chemical physics" through its title. The Institute of Chemical Physics of the Academy of Sciences of the USSR was established in 1931. In the United States, "The Journal of Chemical Physics" has been published since 1933. In 1964, the General Electric Foundation established the Irving Langmuir Award in Chemical Physics to honor outstanding achievements in the field of chemical physics. Named after the Nobel Laureate Irving Langmuir, the award recognizes significant contributions to understanding chemical phenomena through physics principles, impacting areas such as surface chemistry and quantum mechanics. == What chemical physicists do == Chemical physicists investigate the structure and dynamics of ions, free radicals, polymers, clusters, and molecules. Their research includes studying the quantum mechanical aspects of chemical reactions, solvation processes, and the energy flow within and between molecules, as well as nanomaterials such as quantum dots. Experiments in chemical physics typically involve using spectroscopic methods to understand hydrogen bonding, electron transfer, the formation and dissolution of chemical bonds, chemical reactions, and the formation of nanoparticles.
The research objectives in the theoretical aspect of chemical physics are to understand how chemical structures and reactions work at the quantum mechanical level. This field also aims to clarify how ions and radicals behave and react in the gas phase and to develop precise approximations that simplify the computation of the physics of chemical phenomena. Chemical physicists are looking for answers to such questions as: Can we experimentally test quantum mechanical predictions of the vibrations and rotations of simple molecules? Or even those of complex molecules (such as proteins)? Can we develop more accurate methods for calculating the electronic structure and properties of molecules? Can we understand chemical reactions from first principles? Why do quantum dots start blinking (in a pattern suggesting fractal kinetics) after absorbing photons? How do chemical reactions really take place? What is the step-by-step process that occurs when an isolated molecule becomes solvated? Or when a whole ensemble of molecules becomes solvated? Can we use the properties of negative ions to determine molecular structures, understand the dynamics of chemical reactions, or explain photodissociation? Why does a stream of soft x-rays knock enough electrons out of the atoms in a xenon cluster to cause the cluster to explode? == Journals == The Journal of Chemical Physics Journal of Physical Chemistry Letters Journal of Physical Chemistry A Journal of Physical Chemistry B Journal of Physical Chemistry C Physical Chemistry Chemical Physics Chemical Physics Letters Chemical Physics ChemPhysChem Molecular Physics (journal) == See also == Intermolecular force Molecular dynamics Quantum chemistry Solid-state physics or Condensed matter physics Surface science Van der Waals molecule == References ==
Wikipedia/Chemical_Physics
Diatomic molecules (from Greek di- 'two') are molecules composed of only two atoms, of the same or different chemical elements. If a diatomic molecule consists of two atoms of the same element, such as hydrogen (H2) or oxygen (O2), then it is said to be homonuclear. Otherwise, if a diatomic molecule consists of two different atoms, such as carbon monoxide (CO) or nitric oxide (NO), the molecule is said to be heteronuclear. The bond in a homonuclear diatomic molecule is non-polar. The only chemical elements that form stable homonuclear diatomic molecules at standard temperature and pressure (STP) (or at typical laboratory conditions of 1 bar and 25 °C) are the gases hydrogen (H2), nitrogen (N2), oxygen (O2), fluorine (F2), and chlorine (Cl2), and the liquid bromine (Br2). The noble gases (helium, neon, argon, krypton, xenon, and radon) are also gases at STP, but they are monatomic. The homonuclear diatomic gases and noble gases together are called "elemental gases" or "molecular gases", to distinguish them from other gases that are chemical compounds. At slightly elevated temperatures, the halogens bromine (Br2) and iodine (I2) also form diatomic gases. All halogens have been observed as diatomic molecules, except for astatine and tennessine, which are uncertain. Other elements form diatomic molecules when evaporated, but these diatomic species repolymerize when cooled. Heating ("cracking") elemental phosphorus gives diphosphorus (P2). Sulfur vapor is mostly disulfur (S2). Dilithium (Li2) and disodium (Na2) are known in the gas phase. Ditungsten (W2) and dimolybdenum (Mo2) form with sextuple bonds in the gas phase. Dirubidium (Rb2) is diatomic. == Heteronuclear molecules == All other diatomic molecules are chemical compounds of two different elements. Many elements can combine to form heteronuclear diatomic molecules, depending on temperature and pressure. Examples are gases carbon monoxide (CO), nitric oxide (NO), and hydrogen chloride (HCl). Many 1:1 binary compounds are not normally considered diatomic because they are polymeric at room temperature, but they form diatomic molecules when evaporated, for example gaseous MgO, SiO, and many others. == Occurrence == Hundreds of diatomic molecules have been identified in the environment of the Earth, in the laboratory, and in interstellar space. About 99% of the Earth's atmosphere is composed of two species of diatomic molecules: nitrogen (78%) and oxygen (21%). The natural abundance of hydrogen (H2) in the Earth's atmosphere is only of the order of parts per million, but H2 is the most abundant diatomic molecule in the universe. The interstellar medium is dominated by hydrogen atoms. == Molecular geometry == All diatomic molecules are linear and characterized by a single parameter which is the bond length or distance between the two atoms. Diatomic nitrogen has a triple bond, diatomic oxygen has a double bond, and diatomic hydrogen, fluorine, chlorine, iodine, and bromine all have single bonds. == Historical significance == Diatomic elements played an important role in the elucidation of the concepts of element, atom, and molecule in the 19th century, because some of the most common elements, such as hydrogen, oxygen, and nitrogen, occur as diatomic molecules. John Dalton's original atomic hypothesis assumed that all elements were monatomic and that the atoms in compounds would normally have the simplest atomic ratios with respect to one another. 
For example, Dalton assumed water's formula to be HO, giving the atomic weight of oxygen as eight times that of hydrogen, instead of the modern value of about 16. As a consequence, confusion existed regarding atomic weights and molecular formulas for about half a century. As early as 1805, Gay-Lussac and von Humboldt showed that water is formed of two volumes of hydrogen and one volume of oxygen, and by 1811 Amedeo Avogadro had arrived at the correct interpretation of water's composition, based on what is now called Avogadro's law and the assumption of diatomic elemental molecules. However, these results were mostly ignored until 1860, partly due to the belief that atoms of one element would have no chemical affinity toward atoms of the same element, and also partly due to apparent exceptions to Avogadro's law that were not explained until later in terms of dissociating molecules. At the 1860 Karlsruhe Congress on atomic weights, Cannizzaro resurrected Avogadro's ideas and used them to produce a consistent table of atomic weights, which mostly agree with modern values. These weights were an important prerequisite for the discovery of the periodic law by Dmitri Mendeleev and Lothar Meyer. == Excited electronic states == Diatomic molecules are normally in their lowest or ground state, which conventionally is also known as the X {\displaystyle X} state. When a gas of diatomic molecules is bombarded by energetic electrons, some of the molecules may be excited to higher electronic states, as occurs, for example, in the natural aurora; high-altitude nuclear explosions; and rocket-borne electron gun experiments. Such excitation can also occur when the gas absorbs light or other electromagnetic radiation. The excited states are unstable and naturally relax back to the ground state. Over various short time scales after the excitation (typically a fraction of a second, or sometimes longer than a second if the excited state is metastable), transitions occur from higher to lower electronic states and ultimately to the ground state, and in each transition results a photon is emitted. This emission is known as fluorescence. Successively higher electronic states are conventionally named A {\displaystyle A} , B {\displaystyle B} , C {\displaystyle C} , etc. (but this convention is not always followed, and sometimes lower case letters and alphabetically out-of-sequence letters are used, as in the example given below). The excitation energy must be greater than or equal to the energy of the electronic state in order for the excitation to occur. In quantum theory, an electronic state of a diatomic molecule is represented by the molecular term symbol 2 S + 1 Λ ( v ) ( g / u ) + / − {\displaystyle ^{2S+1}\Lambda (v)_{(g/u)}^{+/-}} where S {\displaystyle S} is the total electronic spin quantum number, Λ {\displaystyle \Lambda } is the total electronic angular momentum quantum number along the internuclear axis, and v {\displaystyle v} is the vibrational quantum number. Λ {\displaystyle \Lambda } takes on values 0, 1, 2, ..., which are represented by the electronic state symbols Σ {\displaystyle \Sigma } , Π {\displaystyle \Pi } , Δ {\displaystyle \Delta } , ... For example, the following table lists the common electronic states (without vibrational quantum numbers) along with the energy of the lowest vibrational level ( v = 0 {\displaystyle v=0} ) of diatomic nitrogen (N2), the most abundant gas in the Earth's atmosphere. 
The subscripts and superscripts after Λ {\displaystyle \Lambda } give additional quantum mechanical details about the electronic state. The superscript + {\displaystyle +} or − {\displaystyle -} determines whether reflection in a plane containing the internuclear axis introduces a sign change in the wavefunction. The subscript g {\displaystyle g} or u {\displaystyle u} applies to molecules of identical atoms; states that do not change sign under inversion through the centre of the molecule are labelled g {\displaystyle g} (gerade), and states that change sign are labelled u {\displaystyle u} (ungerade). The aforementioned fluorescence occurs in distinct regions of the electromagnetic spectrum, called "emission bands": each band corresponds to a particular transition from a higher electronic state and vibrational level to a lower electronic state and vibrational level (typically, many vibrational levels are involved in an excited gas of diatomic molecules). For example, N2 A {\displaystyle A} - X {\displaystyle X} emission bands (a.k.a. Vegard-Kaplan bands) are present in the spectral range from 0.14 to 1.45 μm (micrometres). A given band can be spread out over several nanometers in electromagnetic wavelength space, owing to the various transitions that occur in the molecule's rotational quantum number, J {\displaystyle J} . These are classified into distinct sub-band branches, depending on the change in J {\displaystyle J} . The R {\displaystyle R} branch corresponds to Δ J = + 1 {\displaystyle \Delta J=+1} , the P {\displaystyle P} branch to Δ J = − 1 {\displaystyle \Delta J=-1} , and the Q {\displaystyle Q} branch to Δ J = 0 {\displaystyle \Delta J=0} . Bands are spread out even further by the limited spectral resolution of the spectrometer that is used to measure the spectrum. The spectral resolution depends on the instrument's point spread function. == Energy levels == The molecular term symbol is a shorthand expression of the angular momenta that characterize the electronic quantum states of a diatomic molecule, which are also eigenstates of the electronic molecular Hamiltonian. It is also convenient, and common, to represent a diatomic molecule as two point masses connected by a massless spring. The energies involved in the various motions of the molecule can then be broken down into three categories: the translational, rotational, and vibrational energies. The rotational energy levels of a diatomic molecule can be described by the treatment given below, while its vibrational energy levels can be described using the harmonic oscillator approximation or using quantum vibrational interaction potentials. These potentials give more accurate energy levels because they take multiple vibrational effects into account. Concerning history, the first treatment of diatomic molecules with quantum mechanics was made by Lucy Mensing in 1926. === Translational energies === The translational energy of the molecule is given by the kinetic energy expression: E trans = 1 2 m v 2 {\displaystyle E_{\text{trans}}={\frac {1}{2}}mv^{2}} where m {\displaystyle m} is the mass of the molecule and v {\displaystyle v} is its velocity.
=== Rotational energies === Classically, the kinetic energy of rotation is E rot = L 2 2 I {\displaystyle E_{\text{rot}}={\frac {L^{2}}{2I}}} where L {\displaystyle L\,} is the angular momentum I {\displaystyle I\,} is the moment of inertia of the molecule For microscopic, atomic-level systems like a molecule, angular momentum can only have specific discrete values given by L 2 = ℓ ( ℓ + 1 ) ℏ 2 {\displaystyle L^{2}=\ell (\ell +1)\hbar ^{2}} where ℓ {\displaystyle \ell } is a non-negative integer and ℏ {\displaystyle \hbar } is the reduced Planck constant. Also, for a diatomic molecule the moment of inertia is I = μ r 0 2 {\displaystyle I=\mu r_{0}^{2}} where μ {\displaystyle \mu \,} is the reduced mass of the molecule and r 0 {\displaystyle r_{0}\,} is the average distance between the centers of the two atoms in the molecule. So, substituting the angular momentum and moment of inertia into Erot, the rotational energy levels of a diatomic molecule are given by: E rot = ℓ ( ℓ + 1 ) ℏ 2 2 μ r 0 2 , ℓ = 0 , 1 , 2 , … {\displaystyle E_{\text{rot}}={\frac {\ell (\ell +1)\hbar ^{2}}{2\mu r_{0}^{2}}},\quad \ell =0,1,2,\dots } === Vibrational energies === Another type of motion of a diatomic molecule is for each atom to oscillate—or vibrate—along the line connecting the two atoms. The vibrational energy is approximately that of a quantum harmonic oscillator: E vib = ( n + 1 2 ) ℏ ω , n = 0 , 1 , 2 , … , {\displaystyle E_{\text{vib}}=\left(n+{\tfrac {1}{2}}\right)\hbar \omega ,\quad n=0,1,2,\dots ,} where n {\displaystyle n} is an integer ℏ {\displaystyle \hbar } is the reduced Planck constant and ω {\displaystyle \omega } is the angular frequency of the vibration. === Comparison between rotational and vibrational energy spacings === The spacing, and the energy of a typical spectroscopic transition, between vibrational energy levels is about 100 times greater than that of a typical transition between rotational energy levels. == Hund's cases == The good quantum numbers for a diatomic molecule, as well as good approximations of rotational energy levels, can be obtained by modeling the molecule using Hund's cases. == Mnemonics == The mnemonics BrINClHOF, pronounced "Brinklehof", HONClBrIF, pronounced "Honkelbrif", “HOBrFINCl”, pronounced “Hoberfinkel”, and HOFBrINCl, pronounced "Hofbrinkle", have been coined to aid recall of the list of diatomic elements. Another method, for English-speakers, is the sentence: "Never Have Fear of Ice Cold Beer" as a representation of Nitrogen, Hydrogen, Fluorine, Oxygen, Iodine, Chlorine, Bromine. == See also == Symmetry of diatomic molecules AXE method Octatomic element Covalent bond Industrial gas == References == == Further reading == Huber, K. P.; Herzberg, G. (1979). Molecular Spectra and Molecular Structure IV. Constants of Diatomic Molecules. New York: Van Nostrand: Reinhold. Tipler, Paul (1998). Physics For Scientists and Engineers: Vol. 1 (4th ed.). W. H. Freeman. ISBN 1-57259-491-8. == External links == Hyperphysics – Rotational Spectra of Rigid Rotor Molecules Hyperphysics – Quantum Harmonic Oscillator 3D Chem – Chemistry, Structures, and 3D Molecules IUMSC – Indiana University Molecular Structure Center
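The rotational and vibrational formulas above can be made concrete with a short numerical sketch. The Python example below is an added illustration, not part of the original article; the carbon monoxide bond length and vibrational wavenumber it uses are approximate literature-style values assumed for demonstration. It evaluates E_rot = ℓ(ℓ+1)ħ²/(2μr₀²) and the harmonic vibrational spacing, showing that the vibrational quantum is far larger than the lowest rotational spacing, in line with the comparison made above.

```python
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
H = 6.62607015e-34       # Planck constant, J*s
C_CM = 2.99792458e10     # speed of light, cm/s
AMU = 1.66053906660e-27  # atomic mass unit, kg

def rotational_level(B_joule, l):
    """E_rot = l(l+1) * hbar^2 / (2 mu r0^2); here B_joule = hbar^2 / (2 mu r0^2)."""
    return l * (l + 1) * B_joule

def to_wavenumber(energy_joule):
    """Convert an energy in joules to the spectroscopists' unit cm^-1."""
    return energy_joule / (H * C_CM)

# Illustrative, approximate constants for 12C16O (assumed values)
m1, m2 = 12.000 * AMU, 15.995 * AMU
mu = m1 * m2 / (m1 + m2)            # reduced mass, kg
r0 = 1.128e-10                      # approximate equilibrium bond length, m
B = HBAR**2 / (2 * mu * r0**2)      # rotational constant, J

wavenumber_vib = 2143.0             # approximate CO vibrational wavenumber, cm^-1

print(f"B = {to_wavenumber(B):.2f} cm^-1")                              # about 1.9 cm^-1
print(f"l=0 -> l=1 spacing = {to_wavenumber(rotational_level(B, 1)):.2f} cm^-1")
print(f"vibrational spacing = {wavenumber_vib:.0f} cm^-1")              # hundreds of times larger
```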
Wikipedia/Diatomic_molecule
Molecular Physics is a peer-reviewed scientific journal covering research on the interface between chemistry and physics, in particular chemical physics and physical chemistry. It covers both theoretical and experimental molecular science, including electronic structure, molecular dynamics, spectroscopy, reaction kinetics, statistical mechanics, condensed matter and surface science. The journal was established in 1958 and is published by Taylor & Francis. According to the Journal Citation Reports, the journal has a 2021 impact factor of 1.937. The current editor-in-chief is Professor George Jackson (Imperial College London). A reprint of the first editorial and a full list of editors since its establishment can be found in the issue celebrating 50 years of the journal. == Notable current and former editors == Christopher Longuet-Higgins (Founding Editor) Joan van der Waals (Founding Editor) John Shipley Rowlinson A. David Buckingham Lawrence D. Barron Martin Quack Dominic Tildesley Henry F. Schaefer III Nicholas C. Handy Ruth Lynden-Bell Jean-Pierre Hansen Timothy Softley Martin Head-Gordon Trygve Helgaker == See also == List of scientific journals in physics List of scientific journals in chemistry == References == == External links == Official website
Wikipedia/Molecular_Physics_(journal)
The bond-dissociation energy (BDE, D0, or DH°) is one measure of the strength of a chemical bond A−B. It can be defined as the standard enthalpy change when A−B is cleaved by homolysis to give fragments A and B, which are usually radical species. The enthalpy change is temperature-dependent, and the bond-dissociation energy is often defined to be the enthalpy change of the homolysis at 0 K (absolute zero), although the enthalpy change at 298 K (standard conditions) is also a frequently encountered parameter. As a typical example, the bond-dissociation energy for one of the C−H bonds in ethane (C2H6) is defined as the standard enthalpy change of the process CH3CH2−H → CH3CH2• + H•, DH°298(CH3CH2−H) = ΔH° = 101.1(4) kcal/mol = 423.0 ± 1.7 kJ/mol = 4.40(2) eV (per bond). To convert a molar BDE to the energy needed to dissociate the bond per molecule, the conversion factor 23.060 kcal/mol (96.485 kJ/mol) for each eV can be used. A variety of experimental techniques, including spectrometric determination of energy levels, generation of radicals by pyrolysis or photolysis, measurements of chemical kinetics and equilibrium, and various calorimetric and electrochemical methods have been used to measure bond dissociation energy values. Nevertheless, bond dissociation energy measurements are challenging and are subject to considerable error. The majority of currently known values are accurate to within ±1 or 2 kcal/mol (4–10 kJ/mol). Moreover, values measured in the past, especially before the 1970s, can be especially unreliable and have been subject to revisions on the order of 10 kcal/mol (e.g., benzene C–H bonds, from 103 kcal/mol in 1965 to the modern accepted value of 112.9(5) kcal/mol). Even in modern times (between 1990 and 2004), the O−H bond of phenol has been reported to be anywhere from 85.8 to 91.0 kcal/mol. On the other hand, the bond dissociation energy of H2 at 298 K has been measured to high precision and accuracy: DH°298(H−H) = 104.1539(1) kcal/mol or 435.780 kJ/mol. == Definitions and related parameters == The term bond-dissociation energy is similar to the related notion of bond-dissociation enthalpy (or bond enthalpy), which is sometimes used interchangeably. However, some authors make the distinction that the bond-dissociation energy (D0) refers to the enthalpy change at 0 K, while the term bond-dissociation enthalpy is used for the enthalpy change at 298 K (unambiguously denoted DH°298). The former parameter tends to be favored in theoretical and computational work, while the latter is more convenient for thermochemical studies. For typical chemical systems, the numerical difference between the quantities is small, and the distinction can often be ignored. For a hydrocarbon RH, where R is significantly larger than H, for instance, the relationship D0(R−H) ≈ DH°298(R−H) − 1.5 kcal/mol is a good approximation. Some textbooks ignore the temperature dependence, while others have defined the bond-dissociation energy to be the reaction enthalpy of homolysis at 298 K. The bond dissociation energy is related to but slightly different from the depth of the associated potential energy well of the bond, De, known as the electronic energy. This is due to the existence of a zero-point energy ε0 for the vibrational ground state, which reduces the amount of energy needed to reach the dissociation limit. Thus, D0 is slightly less than De, and the relationship D0 = De − ε0 holds. 
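The unit conversions quoted above (1 eV per bond corresponding to 23.060 kcal/mol or 96.485 kJ/mol) are easy to mis-apply, so a minimal sketch may help. The Python snippet below simply re-expresses a molar BDE in the three common units, using the ethane C−H value from the preceding paragraph as a check; it is an illustrative addition, not part of the original article.

```python
KCAL_PER_EV = 23.060                     # kcal/mol per eV per bond (quoted above)
KJ_PER_EV = 96.485                       # kJ/mol per eV per bond (quoted above)
KJ_PER_KCAL = KJ_PER_EV / KCAL_PER_EV    # about 4.184 kJ per kcal

def bde_conversions(kcal_per_mol):
    """Convert a molar bond-dissociation energy to kJ/mol and to eV per bond."""
    return {
        "kcal/mol": kcal_per_mol,
        "kJ/mol": kcal_per_mol * KJ_PER_KCAL,
        "eV per bond": kcal_per_mol / KCAL_PER_EV,
    }

# The ethane C-H bond discussed above, DH°298 = 101.1 kcal/mol
print(bde_conversions(101.1))
# -> roughly 423 kJ/mol and about 4.4 eV per bond, matching the quoted values
```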
The bond dissociation energy is an enthalpy change of a particular chemical process, namely homolytic bond cleavage, and "bond strength" as measured by the BDE should not be regarded as an intrinsic property of a particular bond type but rather as an energy change that depends on the chemical context. For instance, Blanksby and Ellison cites the example of ketene (H2C=CO), which has a C=C bond dissociation energy of 79 kcal/mol, while ethylene (H2C=CH2) has a bond dissociation energy of 174 kcal/mol. This vast difference is accounted for by the thermodynamic stability of carbon monoxide (CO), formed upon the C=C bond cleavage of ketene. The difference in availability of spin states upon fragmentation further complicates the use of BDE as a measure of bond strength for head-to-head comparisons, and force constants have been suggested as an alternative. Historically, the vast majority of tabulated bond energy values are bond enthalpies. More recently, however, the free energy analogue of bond-dissociation enthalpy, known as the bond-dissociation free energy (BDFE), has become more prevalent in the chemical literature. The BDFE of a bond A–B can be defined in the same way as the BDE as the standard free energy change (ΔG°) accompanying homolytic dissociation of AB into A and B. However, it is often thought of and computed stepwise as the sum of the free-energy changes of heterolytic bond dissociation (A–B → A+ + :B−), followed by one-electron reduction of A (A+ + e− → A•) and one-electron oxidation of B (:B− → •B + e−). In contrast to the BDE, which is usually defined and measured in the gas phase, the BDFE is often determined in the solution phase with respect to a solvent like DMSO, since the free-energy changes for the aforementioned thermochemical steps can be determined from parameters like acid dissociation constants (pKa) and standard redox potentials (ε°) that are measured in solution. === Bond energy === Except for diatomic molecules, the bond-dissociation energy differs from the bond energy. While the bond-dissociation energy is the energy of a single chemical bond, the bond energy is the average of all the bond-dissociation energies of the bonds of the same type for a given molecule. For a homoleptic compound EXn, the E–X bond energy is (1/n) multiplied by the enthalpy change of the reaction EXn → E + nX. Average bond energies given in tables are the average values of the bond energies of a collection of species containing "typical" examples of the bond in question. For example, dissociation of HO−H bond of a water molecule (H2O) requires 118.8 kcal/mol (497.1 kJ/mol). The dissociation of the remaining hydroxyl radical requires 101.8 kcal/mol (425.9 kJ/mol). The bond energy of the covalent O−H bonds in water is said to be 110.3 kcal/mol (461.5 kJ/mol), the average of these values. In the same way, for removing successive hydrogen atoms from methane the bond-dissociation energies are 105 kcal/mol (439 kJ/mol) for D(CH3−H), 110 kcal/mol (460 kJ/mol) for D(CH2−H), 101 kcal/mol (423 kJ/mol) for D(CH−H) and finally 81 kcal/mol (339 kJ/mol) for D(C−H). The bond energy is, thus, 99 kcal/mol, or 414 kJ/mol (the average of the bond-dissociation energies). None of the individual bond-dissociation energies equals the bond energy of 99 kcal/mol. === Strongest bonds and weakest bonds === According to experimental BDE data, the strongest measured single bonds are Si−F bonds. The BDE for H3Si−F is 152 kcal/mol, almost 50% stronger than the H3C−F bond (110 kcal/mol). 
The BDE for F3Si−F is even larger, at 166 kcal/mol. One consequence of these data is that many reactions, such as glass etching, deprotection in organic synthesis, and volcanic emissions, generate silicon fluorides. The strength of the bond is attributed to the substantial electronegativity difference between silicon and fluorine, which leads to a substantial contribution from both ionic and covalent bonding to the overall strength of the bond. For the same reason, B–F bonds are also very strong, possibly stronger than Si−F, with the BDE for F2B−F computed to be 172 kcal/mol at the CCSD(T)/CBS level of theory. The C−C single bond of diacetylene (HC≡C−C≡CH) linking two sp-hybridized carbon atoms is also among the strongest, at 160 kcal/mol. The strongest bond for a neutral compound, including multiple bonds, is found in carbon monoxide at 257 kcal/mol. The protonated forms of CO, HCN and N2 are said to have even stronger bonds, although another study argues that the use of BDE as a measure of bond strength in these cases is misleading. On the other end of the scale, there is no clear boundary between a very weak covalent bond and an intermolecular interaction. Lewis acid–base complexes between transition metal fragments and noble gases are among the weakest of bonds with substantial covalent character, with (CO)5W:Ar having a W–Ar bond dissociation energy of less than 3.0 kcal/mol. Held together entirely by the van der Waals force, helium dimer, He2, has the lowest measured bond dissociation energy of only 0.022 kcal/mol. == Homolytic versus heterolytic dissociation == Bonds can be broken symmetrically or asymmetrically. The former is called homolysis and is the basis of the usual BDEs. Asymmetric scission of a bond is called heterolysis. For molecular hydrogen, the alternatives are homolysis, H2 → 2 H•, and heterolysis, H2 → H+ + H−. In the gas phase, the enthalpy of heterolysis is larger than that of homolysis, due to the need to separate unlike charges. However, this value is lowered substantially in the presence of a solvent. == Representative bond enthalpies == The data tabulated below show how bond strengths vary over the periodic table. There is great interest, especially in organic chemistry, concerning relative strengths of bonds within a given group of compounds, and representative bond dissociation energies for common organic compounds are shown below. == See also == Bond energy Electronegativity Ionization energy Electron affinity Lattice energy == References ==
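A small sketch of the averaging described in the "Bond energy" subsection above: the Python snippet below, an added illustration, reproduces the quoted water and methane numbers by taking the arithmetic mean of the successive bond-dissociation energies.

```python
def mean_bond_energy(step_bdes):
    """Bond energy = arithmetic mean of the successive bond-dissociation energies."""
    return sum(step_bdes) / len(step_bdes)

# Successive O-H dissociations in water (kcal/mol), values quoted in the section above
print(mean_bond_energy([118.8, 101.8]))        # ~110.3 kcal/mol
# Successive C-H dissociations in methane (kcal/mol)
print(mean_bond_energy([105, 110, 101, 81]))   # ~99 kcal/mol
```

As the section notes, neither average coincides with any single step, which is why bond energies and bond-dissociation energies must be kept distinct.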
Wikipedia/Dissociation_energy
Stellar molecules are molecules that exist or form in stars. Such formations can take place when the temperature is low enough for molecules to form – typically around 6,000 K (5,730 °C; 10,340 °F) or cooler. Otherwise the stellar matter is restricted to atoms and ions in the forms of gas or – at very high temperatures – plasma. == Background == Matter is made up by atoms (formed by protons and other subatomic particles). When the environment is right, atoms can join together and form molecules, which give rise to most materials studied in materials science. But certain environments, such as high temperatures, don't allow atoms to form molecules, as the environmental energy exceeds that of the dissociation energy of the bonds within the molecule. Stars have very high temperatures, primarily in their interior, and therefore there are few molecules formed in stars. By the mid-18th century, scientists surmised that the source of the Sun's light was incandescence, rather than combustion. == Evidence and research == Although the Sun is a star, its photosphere has a low enough temperature of 6,000 K (5,730 °C; 10,340 °F), and therefore molecules can form. Water has been found on the Sun, and there is evidence of H2 in white dwarf stellar atmospheres. Cooler stars include absorption band spectra that are characteristic of molecules. Similar absorption bands can be found through observation of solar sun spots, which are cool enough to allow persistence of stellar molecules. Molecules found in the Sun include MgH, CaH, FeH, CrH, NaH, OH, SiH, VO, and TiO. Others include CN, CH, MgF, NH, C2, SrF, ZrO, YO, ScO, and BH. Stars of most types can contain molecules, even the Ap category of A-type stars. Only the hottest O-, B-, and A-type stars have no detectable molecules. Carbon-rich white dwarfs, even though very hot, have spectral lines of C2 and CH. === Laboratory measurements === Measurements of simple molecules that may be found in stars are performed in laboratories to determine the wavelengths of the spectra lines. Also, it is important to measure the dissociation energy and oscillator strengths (how strongly the molecule interacts with electromagnetic radiation). These measurements are inserted into formula that can calculate the spectrum under different conditions of pressure and temperature. However, man-made conditions are often different from those in stars, because it is hard to achieve the temperatures, and also local thermal equilibrium, as found in stars, is unlikely. Accuracy of oscillator strengths and actual measurement of dissociation energy is usually only approximate. === Model atmosphere === A numerical model of a star's atmosphere will calculate pressures and temperatures at different depths, and can predict the spectrum for different elemental concentrations. == Application == The molecules in stars can be used to determine some characteristics of the star. The isotopic composition can be determined if the lines in the molecular spectrum are observed. The different masses of different isotopes cause vibration and rotation frequencies to significantly vary. Secondly the temperature can be determined, as the temperature will change the numbers of molecules in the different vibrational and rotational states. Some molecules are sensitive to the ratio of elements, and so indicate elemental composition of the star. Different molecules are characteristic of different kinds of stars, and are used to classify them. 
Because there can be numerous spectral lines of different strength, conditions at different depths in the star can be determined. These conditions include temperature and speed towards or away from the observer. The spectrum of molecules has advantages over atomic spectral lines, as atomic lines are often very strong, and therefore only come from high in the atmosphere. Also the profile of the atomic spectral line can be distorted due to isotopes or overlaying of other spectral lines. The molecular spectrum is much more sensitive to temperature than atomic lines. == Detection == The following molecules have been detected in the atmospheres of stars: == See also == Stellar chemistry == References ==
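One way to see why molecular lines act as a thermometer, as described above, is to look at how temperature redistributes molecules over rotational levels. The Python sketch below is an illustrative Boltzmann-population calculation added here for clarity; the rotational constant (roughly that of CO) and the two temperatures are assumptions chosen for demonstration, not values taken from this article.

```python
import math

K_B = 1.380649e-23       # Boltzmann constant, J/K
H = 6.62607015e-34       # Planck constant, J*s
C_CM = 2.99792458e10     # speed of light, cm/s

def rotational_population(j, B_cm, T):
    """Unnormalised Boltzmann population of rotational level J:
    degeneracy (2J+1) times exp(-E_J / kT), with E_J = B*J*(J+1) given in cm^-1."""
    e_j = H * C_CM * B_cm * j * (j + 1)
    return (2 * j + 1) * math.exp(-e_j / (K_B * T))

B_cm = 1.93   # illustrative rotational constant, cm^-1 (roughly that of CO)
for T in (3000.0, 6000.0):
    pops = [rotational_population(j, B_cm, T) for j in range(120)]
    j_peak = max(range(120), key=lambda j: pops[j])
    print(f"T = {T:.0f} K: most populated rotational level is J = {j_peak}")
# The peak shifts to higher J as the temperature rises, which is why the relative
# strengths of lines within a molecular band can be used to infer temperature.
```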
Wikipedia/Molecules_in_stars
Extraterrestrial liquid water is water in its liquid state that naturally occurs outside Earth. It is a subject of wide interest because it is recognized as one of the key prerequisites for life as we know it and is thus surmised to be essential for extraterrestrial life. Although many celestial bodies in the Solar System have a hydrosphere, Earth is the only celestial body known to have stable bodies of liquid water on its surface, with oceanic water covering 71% of its surface, which is essential to life on Earth. The presence of liquid water is maintained by Earth's atmospheric pressure and stable orbit in the Sun's circumstellar habitable zone, however, the origin of Earth's water remains uncertain. The main methods currently used for confirmation are absorption spectroscopy and geochemistry. These techniques have proven effective for atmospheric water vapor and ice. However, using current methods of astronomical spectroscopy it is substantially more difficult to detect liquid water on terrestrial planets, especially in the case of subsurface water. Due to this, astronomers, astrobiologists and planetary scientists use habitable zone, gravitational and tidal theory, models of planetary differentiation and radiometry to determine the potential for liquid water. Water observed in volcanic activity can provide more compelling indirect evidence, as can fluvial features and the presence of antifreeze agents, such as salts or ammonia. Using such methods, many scientists infer that liquid water once covered large areas of Mars and Venus. Water is thought to exist as liquid beneath the surface of some planetary bodies, similar to groundwater on Earth. Water vapour is sometimes considered conclusive evidence for the presence of liquid water, although atmospheric water vapour may be found to exist in many places where liquid water does not. Similar indirect evidence, however, supports the existence of liquids below the surface of several moons and dwarf planets elsewhere in the Solar System. Some are speculated to be large extraterrestrial "oceans". Liquid water is thought to be common in other planetary systems, despite the lack of conclusive evidence, and there is a growing list of extrasolar candidates for liquid water. In June 2020, NASA scientists reported that it is likely that exoplanets with oceans may be common in the Milky Way galaxy, based on mathematical modeling studies. == Significance == Water is a fundamental element for the biochemistry of all known living beings. With some areas on Earth such as deserts being dryer than others, their local lifeforms are adapted to make efficient use of the scarce available water. No known lifeform can live completely without water. It is also one of the simplest molecules, composed of one oxygen and two hydrogen atoms, and can be found in all celestial bodies of the solar system. It is only useful for life in a liquid state, and extraterrestrial water is commonly found as water vapor or ice. Although life eventually adapted to live on land, the first forms of life on Earth appeared in liquid water. As a result, the search for extraterrestrial liquid water is closely related with the search of extraterrestrial life. Liquid water also has several properties that are beneficial for lifeforms. For example, unlike most other liquids, it becomes less dense when it solidifies rather than denser. As a result, if a body of water gets cold enough, the ice floats and eventually creates an ice layer, trapping liquid water and its ecosystems below. 
Without this property, lakes and oceans would become ice in their full size, along with any creatures living in them. == Liquid water in the Solar System == As of December 2015, the confirmed liquid water in the Solar System outside Earth is 25–50 times the volume of Earth's water (1.3 billion km3), i.e. about 3.25–6.5 × 10¹⁰ km3 (32.5 to 65 billion km3) and 3.25–6.5 × 10¹⁹ tons (32.5 to 65 billion billion tons) of water. === Mars === The Mars ocean theory suggests that nearly a third of the surface of Mars was once covered by water, though the water on Mars is no longer oceanic. Water on Mars exists today almost exclusively as ice and underground, with a small amount present in the atmosphere as vapor. Some liquid water may occur transiently on the Martian surface today but only under certain conditions. No large standing bodies of liquid water exist because the atmospheric pressure at the surface averages just 600 pascals (0.087 psi), about 0.6% of Earth's mean sea level pressure, and because the global average temperature is far too low (210 K (−63 °C)), leading to either rapid evaporation or freezing. Features called recurring slope lineae are thought to be caused by flows of brine (hydrated salts). In July 2018, scientists from the Italian Space Agency reported the detection of a subglacial lake on Mars, 1.5 kilometres (0.93 mi) below the southern polar ice cap, and spanning 20 kilometres (12 mi) horizontally, the first evidence for a stable body of liquid water on the planet. Because the temperature at the base of the polar cap is estimated at 205 K (−68 °C; −91 °F), scientists assume that the water may remain liquid due to the antifreeze effect of magnesium and calcium perchlorates. The 1.5-kilometre (0.93 mi) ice layer covering the lake is composed of water ice with 10 to 20% admixed dust, and seasonally covered by a 1-metre (3 ft 3 in)-thick layer of CO2 ice. === Europa === Scientists' consensus is that a layer of liquid water exists beneath the surface of Europa, a moon of Jupiter, and that heat from tidal flexing allows the subsurface ocean to remain liquid. It is estimated that the outer crust of solid ice is approximately 10–30 km (6–19 mi) thick, including a ductile "warm ice" layer, which could mean that the liquid ocean underneath may be about 100 km (60 mi) deep. This leads to a volume of Europa's oceans of 3 × 10¹⁸ m3 (3 billion km3), slightly more than twice the volume of Earth's oceans. === Enceladus === Enceladus, a moon of Saturn, has shown geysers of water, confirmed by the Cassini spacecraft in 2005 and analyzed more deeply in 2008. Gravimetric data in 2010–2011 confirmed a subsurface ocean. While previously believed to be localized, most likely in a portion of the southern hemisphere, evidence revealed in 2015 now suggests the subsurface ocean is global in nature. In addition to water, these geysers from vents near the south pole contained small amounts of salt, nitrogen, carbon dioxide, and volatile hydrocarbons. The melting of the ocean water and the geysers appear to be driven by tidal flux from Saturn. === Mimas === Mimas, another moon of Saturn similar in size and orbit to Enceladus, was found by Cassini to have a "rocking" motion whose amplitude could only be explained by a large subsurface ocean. === Ganymede === A subsurface saline ocean is theorized to exist on Ganymede, a moon of Jupiter, following observation by the Hubble Space Telescope in 2015. Patterns in auroral belts and rocking of the magnetic field suggest the presence of an ocean.
It is estimated to be 100 km deep with the surface lying below a crust of 150 km of ice. As of 2015, the precise quantity of liquid water on Ganymede is highly uncertain (1–33 times as much as Earth). === Ceres === Ceres appears to be differentiated into a rocky core and icy mantle, and may have a remnant internal ocean of liquid water under the layer of ice. The surface is probably a mixture of water ice and various hydrated minerals such as carbonates and clay. In January 2014, emissions of water vapor were detected from several regions of Ceres. This was unexpected, because large bodies in the asteroid belt do not typically emit vapor, a hallmark of comets. Ceres also features a mountain called Ahuna Mons that is thought to be a cryovolcanic dome that facilitates the movement of high viscosity cryovolcanic magma consisting of water ice softened by its content of salts. === Ice giants === The ice giant planets Uranus and Neptune are thought to have a supercritical water ocean beneath their clouds, which accounts for about two-thirds of their total mass, most likely surrounding small rocky cores, although a 2006 study by Wiktorowicz and Ingersall ruled out the possibility of such a water "ocean" existing on Neptune. This kind of planet is thought to be common in extrasolar planetary systems. === Pluto === In June 2020, astronomers reported evidence that the dwarf planet Pluto may have had a subsurface ocean, and consequently may have been habitable, when it was first formed. == Indicators, methods of detection and confirmation == Most known extrasolar planetary systems appear to have very different compositions to the Solar System, though there is probably sample bias arising from the detection methods. === Spectroscopy === Liquid water has a distinct absorption spectroscopy signature compared to other states of water due to the state of its hydrogen bonds. Despite the confirmation of extraterrestrial water vapor and ice, however, the spectral signature of liquid water is yet to be confirmed outside of Earth. The signatures of surface water on terrestrial planets may be undetectable through thick atmospheres across the vast distances of space using current technology. Seasonal flows on warm Martian slopes, though strongly suggestive of briny liquid water, have yet to indicate this in spectroscopic analysis. Water vapor has been confirmed in numerous objects via spectroscopy, though it does not by itself confirm the presence of liquid water. However, when combined with other observations, the possibility might be inferred. For example, the density of GJ 1214 b would suggest that a large fraction of its mass is water and follow-up detection by the Hubble telescope of the presence of water vapor strongly suggests that exotic materials like 'hot ice' or 'superfluid water' may be present. === Magnetic fields === For the Jovian moons Ganymede and Europa, the existence of a sub-ice ocean is inferred from the measurements of the magnetic field of Jupiter. Since conductors moving through a magnetic field produce a counter-electromotive field, the presence of the water below the surface was deduced from the change in magnetic field as the moon passed from the northern to southern magnetic hemisphere of Jupiter. === Geological indicators === Thomas Gold has posited that many Solar System bodies could potentially hold groundwater below the surface. It is thought that liquid water may exist in the Martian subsurface. 
Research suggests that in the past there was liquid water flowing on the surface, creating large areas similar to Earth's oceans. However, the question remains as to where the water has gone. There are a number of direct and indirect lines of evidence for water's presence either on or under the surface, e.g. stream beds, polar caps, spectroscopic measurement, eroded craters or minerals directly connected to the existence of liquid water (such as goethite). In an article in the Journal of Geophysical Research, scientists studied Lake Vostok in Antarctica and discovered that it may have implications for liquid water still being on Mars. Through their research, scientists came to the conclusion that if Lake Vostok existed before the perennial glaciation began, it is likely that the lake did not freeze all the way to the bottom. Based on this hypothesis, scientists say that if water had existed before the polar ice caps on Mars, it is likely that there is still liquid water below the ice caps that may even contain evidence of life. "Chaos terrain", a common feature on Europa's surface, is interpreted by researchers studying images of Europa taken by NASA's Galileo spacecraft as regions where the subsurface ocean has melted through the icy crust. === Volcanic observation === Geysers have been found on Enceladus, a moon of Saturn, and Europa, a moon of Jupiter. These contain water vapour and could be indicators of liquid water deeper down. It could also be just ice. In June 2009, using data gathered by NASA's Cassini spacecraft, researchers noticed that Enceladus wobbled in a certain way as it orbited Saturn. That wobble indicated that the moon's icy crust did not extend all the way to its core; instead, it rested on a global ocean, the researchers concluded, and this was put forward as evidence for salty subterranean oceans on Enceladus. On 3 April 2014, NASA reported that evidence for a large underground ocean of liquid water on Enceladus had been found by the Cassini spacecraft. According to the scientists, evidence of an underground ocean suggests that Enceladus is one of the most likely places in the solar system to "host microbial life". "Material from Enceladus' south polar jets contains salty water and organic molecules, the basic chemical ingredients for life," said Linda Spilker, Cassini's project scientist at JPL. "Their discovery expanded our view of the 'habitable zone' within our solar system and in planetary systems of other stars." Emissions of water vapor have been detected from several regions of the dwarf planet Ceres, combined with evidence of ongoing cryovolcanic activity. === Gravitational evidence === Scientists' consensus is that a layer of liquid water exists beneath Europa's surface, and that heat energy from tidal flexing allows the subsurface ocean to remain liquid. The first hints of a subsurface ocean came from theoretical considerations of tidal heating (a consequence of Europa's slightly eccentric orbit and orbital resonance with the other Galilean moons). Scientists used gravitational measurements from the Cassini spacecraft to confirm a water ocean under the crust of Enceladus. Such tidal models have been used as theories for water layers in other Solar System moons. According to at least one gravitational study on Cassini data, Dione has an ocean 100 kilometers below the surface. Anomalies in the orbital libration of Saturn's moon Mimas combined with models of tidal mechanics led scientists in 2022 to propose that it harbours an internal ocean.
The finding surprised many, as the Solar System's smallest known round body had previously been believed to be frozen solid, and it has led to the classification of a new type of "stealth ocean world". === Ground-penetrating radar === Scientists have detected liquid water using radio signals. The radio detection and ranging (RADAR) instrument of the Cassini probe was used to detect the existence of a layer of liquid water and ammonia beneath the surface of Saturn's moon Titan that is consistent with calculations of the moon's density. Ground-penetrating radar and dielectric permittivity data from the MARSIS instrument on Mars Express indicate a 20-kilometer-wide stable body of briny liquid water in the Planum Australe region of the planet Mars. === Density calculation === Planetary scientists can use calculations of density to determine the composition of planets and their potential to possess liquid water, though the method is not highly accurate, as many combinations of compounds and states can produce similar densities. Models of the density of Saturn's moon Titan indicate the presence of a subsurface ocean layer. Similar density estimates are strong indicators of a subsurface ocean on Enceladus. Initial analysis of 55 Cancri e's low density indicated that it consisted of 30% supercritical fluid, which Diana Valencia of the Massachusetts Institute of Technology proposed could be in the form of salty supercritical water, though follow-up analysis of its transit failed to detect traces of either water or hydrogen. GJ 1214 b was the second exoplanet (after CoRoT-7b) to have an established mass and radius less than those of the giant Solar System planets. It is three times the size of Earth and about 6.5 times as massive. Its low density indicated that it is likely a mix of rock and water, and follow-up observations using the Hubble telescope now seem to confirm that a large fraction of its mass is water, so it is likely a large water world. The high temperatures and pressures would form exotic materials like 'hot ice' or 'superfluid water'. === Models of radioactive decay === Models of heat retention and heating via radioactive decay in smaller icy Solar System bodies suggest that Rhea, Titania, Oberon, Triton, Pluto, Eris, Sedna, and Orcus may have oceans underneath solid icy crusts approximately 100 km thick. Of particular interest in these cases is the fact that the models indicate that the liquid layers are in direct contact with the rocky core, which allows efficient mixing of minerals and salts into the water. This is in contrast with the oceans that may be inside larger icy satellites like Ganymede, Callisto, or Titan, where layers of high-pressure phases of ice are thought to underlie the liquid water layer. Models of radioactive decay suggest that MOA-2007-BLG-192Lb, a small planet orbiting a small star, could be as warm as the Earth and completely covered by a very deep ocean. === Internal differentiation models === Models of Solar System objects indicate the presence of liquid water in their internal differentiation. Some models of the dwarf planet Ceres, the largest object in the asteroid belt, indicate the possibility of a wet interior layer. Water vapor detected being emitted by the dwarf planet may be an indicator of this, through sublimation of surface ice. A global layer of liquid water thick enough to decouple the crust from the mantle is thought to be present on Titan, Europa and, with less certainty, Callisto, Ganymede and Triton.
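Referring back to the density method described earlier in this section, the sketch below shows the kind of rough calculation involved: a planet's mean density is estimated from its mass and radius in Earth units and compared with water and rock. It is only an illustrative sketch; the GJ 1214 b figures (about 6.5 Earth masses and roughly three Earth radii) are the approximate values quoted above, not precise measurements.

```python
# A rough sketch of the density method: scale Earth's mean density by the
# planet's mass and radius (in Earth units) and compare with water and rock.
EARTH_MEAN_DENSITY = 5.51  # g/cm^3

def mean_density(mass_earths: float, radius_earths: float) -> float:
    """Mean density in g/cm^3, scaled from Earth's mean density."""
    return EARTH_MEAN_DENSITY * mass_earths / radius_earths ** 3

rho = mean_density(mass_earths=6.5, radius_earths=3.0)
print(f"GJ 1214 b mean density ~ {rho:.2f} g/cm^3")  # ~1.33 g/cm^3
# Liquid water is ~1.0 g/cm^3 and silicate rock ~3-5 g/cm^3, so a value this
# low points to a large water (or volatile) fraction rather than pure rock,
# though, as noted above, different mixtures can produce similar densities.
```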
Other icy moons may also have internal oceans, or have once had internal oceans that have now frozen. === Habitable zone === A planet's orbit in the circumstellar habitable zone is a popular criterion used to predict its potential for surface water. Habitable zone theory has put forward several extrasolar candidates for liquid water, though they are highly speculative, as a planet's orbit around a star alone does not guarantee that it has liquid water. In addition to its orbit, a planetary-mass object must have the potential for sufficient atmospheric pressure to support liquid water and a sufficient supply of hydrogen and oxygen at or near its surface. The Gliese 581 planetary system contains multiple planets that may be candidates for surface water, including Gliese 581c, Gliese 581d (which might be warm enough for oceans if a greenhouse effect were operating), and Gliese 581e. Gliese 667 C has three planets in the habitable zone, including Gliese 667 Cc, which is estimated to have surface temperatures similar to Earth's and a strong chance of liquid water. Kepler-22b, one of the first 54 candidates found and reported by the Kepler telescope, is 2.4 times the size of the Earth, with an estimated temperature of 22 °C. It is described as having the potential for surface water, though its composition is currently unknown. Among the 1,235 possible extrasolar planet candidates detected by NASA's planet-hunting Kepler space telescope during its first four months of operation, 54 are orbiting in the parent star's habitable 'Goldilocks' zone where liquid water could exist. Five of these are near Earth-size. On 6 January 2015, NASA announced further observations conducted from May 2009 to April 2013 which included eight candidates between one and two times the size of Earth, orbiting in a habitable zone. Of these eight, six orbit stars that are similar to the Sun in size and temperature. Three of the newly confirmed exoplanets were found to orbit within habitable zones of stars similar to the Sun: two of the three, Kepler-438b and Kepler-442b, are near-Earth-size and likely rocky; the third, Kepler-440b, is a super-Earth. === Water rich circumstellar disks === Long before the discovery of water on asteroids, on comets, and on dwarf planets beyond Neptune, the Solar System's circumstellar disks beyond the snow line, including the asteroid belt and the Kuiper belt, were thought to contain large amounts of water, and these were believed to be the origin of water on Earth. Given that many types of stars are thought to blow volatiles from the system through the photoevaporation effect, water content in circumstellar disks and rocky material in other planetary systems is a very good indicator of a planetary system's potential for liquid water and a potential for organic chemistry, especially if detected within the planet-forming regions or the habitable zone. Techniques such as interferometry can be used for this. In 2007, such a disk was found in the habitable zone of MWC 480. In 2008, such a disk was found around the star AA Tauri. In 2009, a similar disk was discovered around the young star HD 142527. In 2013, a water-rich debris disk was found around GD 61, accompanied by a confirmed rocky object consisting of magnesium, silicon, iron, and oxygen. The same year, another water-rich disk was spotted around HD 100546, with ices close to the star. There is no guarantee that the other conditions that allow liquid water to be present on a planetary surface will be found.
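Returning briefly to the habitable-zone criterion that opens this subsection, the sketch below illustrates how the zone's distances scale with stellar luminosity: the flux a planet receives falls off as the inverse square of distance, so the zone moves outward as the square root of the star's luminosity. This is a minimal illustrative sketch; the inner and outer boundaries used for a Sun-like star (roughly 0.95 and 1.4 AU) are assumed round numbers, not figures from the text.

```python
import math

# Minimal sketch: habitable-zone boundaries scale as sqrt(L), since stellar
# flux falls off as 1/d^2.  The Sun-like boundaries below are assumptions.
SUN_INNER_AU, SUN_OUTER_AU = 0.95, 1.4

def habitable_zone(luminosity_solar: float) -> tuple[float, float]:
    """Approximate habitable-zone boundaries (AU) for a star whose
    luminosity is given in solar units."""
    scale = math.sqrt(luminosity_solar)
    return SUN_INNER_AU * scale, SUN_OUTER_AU * scale

for name, lum in [("Sun-like star", 1.0), ("red dwarf (1% of solar luminosity)", 0.01)]:
    inner, outer = habitable_zone(lum)
    print(f"{name}: {inner:.2f}-{outer:.2f} AU")
# A dim red dwarf's zone sits roughly ten times closer to its star, which is
# one reason the actual distances vary so much with stellar type.
```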
Even in a system with a water-rich disk, should planetary-mass objects be present, a single gas giant, with or without planetary-mass moons, orbiting close to the circumstellar habitable zone could prevent the necessary conditions for surface liquid water from occurring in the system. However, it would mean that planetary-mass objects, such as the icy bodies of the solar system, could have abundant quantities of liquid within them. == History == Lunar maria are vast basaltic plains on the Moon that were thought to be bodies of water by early astronomers, who referred to them as "seas". Galileo expressed some doubt about the lunar 'seas' in his Dialogue Concerning the Two Chief World Systems. Before space probes were landed, the idea of oceans on Venus was credible science, but the planet was discovered to be much too hot. Telescopic observations from the time of Galileo onward have shown that Mars has no features resembling watery oceans. Mars's dryness was long recognized, and lent credibility to the spurious Martian canals. === Ancient water on Venus === NASA's Goddard Institute for Space Studies and others have postulated that Venus may have had a shallow ocean in the past for up to 2 billion years, with as much water as Earth. Depending on the parameters used in their theoretical model, the last liquid water could have evaporated as recently as 715 million years ago. Currently, the only known water on Venus is in the form of a tiny amount of atmospheric vapor (20 ppm). Hydrogen, a component of water, is still being lost to space, as detected by ESA's Venus Express spacecraft. == Evidence of past surface water == Assuming that the giant-impact hypothesis is correct, there were never real seas or oceans on the Moon, only perhaps a little moisture (liquid or ice) in some places, when the Moon had a thin atmosphere created by degassing of volcanoes or impacts of icy bodies. The Dawn space probe found possible evidence of past water flow on the asteroid Vesta, leading to speculation of underground reservoirs of water-ice. Astronomers speculate that Venus had liquid water and perhaps oceans in its very early history. Given that Venus has been completely resurfaced by its own active geology, the idea of a primeval ocean is hard to test. Rock samples may one day give the answer. It was once thought that Mars might have dried up from a more Earth-like state. The initial discovery of a cratered surface made this seem unlikely, but further evidence has changed this view. Liquid water may have existed on the surface of Mars in the distant past, and several basins on Mars have been proposed as dry sea beds. The largest is Vastitas Borealis; others include Hellas Planitia and Argyre Planitia. There is currently much debate over whether Mars once had an ocean of water in its northern hemisphere, and over what happened to it if it did. Findings by the Mars Exploration Rover mission indicate it had some long-term standing water in at least one location, but its extent is not known. The Opportunity Mars rover photographed bright veins of a mineral, leading to conclusive confirmation of deposition by liquid water. On 9 December 2013, NASA reported that the planet Mars once had a large freshwater lake (which could have been a hospitable environment for microbial life), based on evidence from the Curiosity rover studying Aeolis Palus near Mount Sharp in Gale Crater. == Liquid water on comets and asteroids == Comets contain large proportions of water ice, but are generally thought to be completely frozen due to their small size and large distance from the Sun.
However, studies on dust collected from comet Wild 2 show evidence for liquid water inside the comet at some point in the past. It is as yet unclear what source of heat may have caused melting of some of the comet's water ice. Nevertheless, on 10 December 2014, scientists reported that the composition of water vapor from comet Churyumov–Gerasimenko, as determined by the Rosetta spacecraft, is substantially different from that found on Earth. That is, the ratio of deuterium to hydrogen in the water from the comet was determined to be three times that found for terrestrial water. This makes it very unlikely that water found on Earth came from comets such as comet Churyumov–Gerasimenko, according to the scientists. The asteroid 24 Themis was the first found to have water, including liquid pressurised by non-atmospheric means, dissolved into minerals through ionising radiation. Water has also been found to flow on the large asteroid 4 Vesta, heated through periodic impacts. == Extrasolar habitable zone candidates for water == Most known extrasolar planetary systems appear to have very different compositions compared to that of the Solar System, though there may be sample bias arising from the detection methods. The goal of current searches is to find Earth-sized planets in the habitable zone of their planetary systems (also sometimes called the "Goldilocks zone"). Planets with oceans could include Earth-sized moons of giant planets, though it remains speculative whether such 'moons' really exist. There is speculation that rocky planets hosting water may be commonplace throughout the Milky Way. In July 2022, water vapor was detected in the atmosphere of the exoplanet WASP-96b based on spectrum studies with the James Webb Space Telescope. In August 2022, the exoplanet TOI-1452 b was found to have a density consistent with a water-rich composition based on studies with data from the Transiting Exoplanet Survey Satellite (TESS).
Wikipedia/Extraterrestrial_liquid_water
Extraterrestrial life, or alien life (colloquially, aliens), is life that originates from another world rather than on Earth. No extraterrestrial life has yet been conclusively detected. Such life might range from simple forms such as prokaryotes to intelligent beings, possibly bringing forth civilizations that might be far more, or far less, advanced than humans. The Drake equation provides a framework for speculating about the existence of sapient life elsewhere in the universe. The science of extraterrestrial life is known as astrobiology. Speculation about the possibility of inhabited worlds beyond Earth dates back to antiquity. Early Christian writers discussed the idea of a "plurality of worlds" as proposed by earlier thinkers such as Democritus; Augustine references Epicurus's idea of innumerable worlds "throughout the boundless immensity of space" in The City of God. Pre-modern writers typically assumed extraterrestrial "worlds" were inhabited by living beings. William Vorilong, in the 15th century, acknowledged the possibility that Jesus could have visited extraterrestrial worlds to redeem their inhabitants.: 26  Nicholas of Cusa wrote in 1440 that Earth is "a brilliant star" like other celestial objects visible in space, and that it would appear similar to the Sun from an exterior perspective, due to a layer of "fiery brightness" in the outer layer of its atmosphere. He theorized that all extraterrestrial bodies, including the Sun, could be inhabited by men, plants, and animals. Descartes wrote that there were no means to prove the stars were not inhabited by "intelligent creatures", but that their existence was a matter of speculation.: 67  Considering the atmospheric composition and conditions of extraterrestrial bodies, extraterrestrial life can seem more speculation than reality, given how harsh and chemically different those environments are compared to the life-abundant Earth. However, there are many extreme and chemically harsh ecosystems on Earth that do support forms of life and are often hypothesized to be the origin of life on Earth. Hydrothermal vents, acidic hot springs, and volcanic lakes are examples of life forming under difficult circumstances; they provide parallels to the extreme environments on other planets and support the possibility of extraterrestrial life. Since the mid-20th century, active research has taken place to look for signs of extraterrestrial life, encompassing searches for current and historic extraterrestrial life, and a narrower search for extraterrestrial intelligent life. Depending on the category of search, methods range from analysis of telescope and specimen data to radios used to detect and transmit communications. The concept of extraterrestrial life, particularly extraterrestrial intelligence, has had a major cultural impact, especially extraterrestrials in fiction. Science fiction has communicated scientific ideas, imagined a range of possibilities, and influenced public interest in and perspectives on extraterrestrial life. One shared topic is the debate over the wisdom of attempting communication with extraterrestrial intelligence. Some encourage aggressive methods to try to contact intelligent extraterrestrial life. Others – citing the tendency of technologically advanced human societies to enslave or destroy less advanced societies – argue it may be dangerous to actively draw attention to Earth. == Context == Initially, after the Big Bang, the universe was too hot to allow life.
It is estimated that the temperature of the universe was around 10 billion K at the one second mark. Fifteen million years later, it cooled to temperate levels, but the elements that make up living things did not exist yet. The only freely available elements at that point were hydrogen and helium. Carbon and oxygen (and later, water) would not appear until 50 million years later, created through stellar fusion. At that point, the difficulty for life to appear was not the temperature, but the scarcity of free heavy elements. Planetary systems emerged, and the first organic compounds may have formed in the protoplanetary disk of dust grains that would eventually create rocky planets like Earth. Although Earth was in a molten state after its birth and may have burned any organics that fell onto it, it would have been more receptive once it cooled down. Once the right conditions on Earth were met, life started by a chemical process known as abiogenesis. Alternatively, life may have formed less frequently, then spread – by meteoroids, for example – between habitable planets in a process called panspermia. During most of its evolution, a star combines hydrogen nuclei to make helium nuclei by fusion, and the comparatively lower mass of the resulting helium allows the star to release the difference as energy. The process continues until the star uses all of its available fuel, with the speed of consumption being related to the size of the star. During its last stages, a star starts combining helium nuclei to form carbon nuclei. More massive stars can fuse carbon and oxygen into progressively heavier elements such as neon, silicon, and sulfur, and so on up to iron. In the end, the star blows much of its content back into the interstellar medium, where it would join clouds that would eventually become new generations of stars and planets. Many of those materials are the raw components of life on Earth. As this process takes place throughout the universe, these materials are ubiquitous in the cosmos and not unique to the Solar System. Earth is a planet in the Solar System, a planetary system formed by a star at the center, the Sun, and the objects that orbit it: other planets, moons, asteroids, and comets. The Sun is part of the Milky Way, a galaxy. The Milky Way is part of the Local Group, a galaxy group that is in turn part of the Laniakea Supercluster. The universe is composed of all similar structures in existence. The immense distances between celestial objects pose a difficulty for the study of extraterrestrial life. So far, humans have only set foot on the Moon and sent robotic probes to other planets and moons in the Solar System. Although probes can withstand conditions that may be lethal to humans, the distances cause time delays: the New Horizons probe took nine years after launch to reach Pluto. No probe has ever reached an extrasolar planetary system. Voyager 2 has left the Solar System at a speed of about 50,000 kilometers per hour; if it were headed towards the Alpha Centauri system, the closest one to Earth at 4.4 light-years, it would take roughly 100,000 years to reach it. Under current technology, such systems can only be studied by telescopes, which have limitations. It is estimated that dark matter accounts for more combined matter than stars and gas clouds, but as it plays no role in the evolution of stars and planets, it is usually not taken into account by astrobiology.
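A back-of-envelope check of the travel-time figure quoted above can be done in a few lines. This is only a sketch using the approximate speed and distance given in the text, not precise mission parameters.

```python
# Rough check: time for a probe at ~50,000 km/h to cover ~4.4 light-years.
LIGHT_YEAR_KM = 9.461e12            # kilometres in one light-year
distance_km = 4.4 * LIGHT_YEAR_KM   # Alpha Centauri is ~4.4 light-years away
speed_km_per_hour = 50_000          # the probe speed quoted in the text

hours = distance_km / speed_km_per_hour
years = hours / (24 * 365.25)
print(f"~{years:,.0f} years")       # roughly 95,000, i.e. on the order of 100,000 years
```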
There is an area around a star, the circumstellar habitable zone or "Goldilocks zone", where water may be at the right temperature to exist in liquid form at a planetary surface. This area is neither too close to the star, where water would become steam, nor too far away, where water would be frozen as ice. However, although useful as an approximation, planetary habitability is complex and defined by several factors. Being in the habitable zone is not enough for a planet to be habitable, or even to actually have liquid water. Venus is located in the habitable zone of the Solar System but does not have liquid water because of the conditions of its atmosphere. Jovian planets or gas giants are not considered habitable even if they orbit close enough to their stars as hot Jupiters, due to crushing atmospheric pressures. The actual distances for the habitable zones vary according to the type of star, and even the solar activity of each specific star influences the local habitability. The type of star also defines the time the habitable zone will exist, as its presence and limits will change along with the star's stellar evolution. The Big Bang took place 13.8 billion years ago, the Solar System was formed 4.6 billion years ago, and the first hominids appeared 6 million years ago. Life on other planets may have started, evolved, given birth to extraterrestrial intelligences, and perhaps even faced a planetary extinction event millions or even billions of years ago. The brief times of existence of Earth's species, when considered from a cosmic perspective, may suggest that extraterrestrial life may be equally fleeting under such a scale. During a period of about 7 million years, from about 10 to 17 million years after the Big Bang, the background temperature was between 373 and 273 K (100 and 0 °C; 212 and 32 °F), allowing the possibility of liquid water if any planets existed. Avi Loeb (2014) speculated that primitive life might in principle have appeared during this window, which he called "the Habitable Epoch of the Early Universe". Life on Earth is quite ubiquitous across the planet and has adapted over time to almost all the available environments in it; extremophiles and the deep biosphere thrive in even the most hostile ones. As a result, it is inferred that life on other celestial bodies may be equally adaptive. However, the origin of life is unrelated to its ease of adaptation and may have stricter requirements. A celestial body may not have any life on it, even if it were habitable. == Likelihood of existence == It is unclear if life, and more importantly, intelligent life in the cosmos is ubiquitous or rare. The hypothesis of ubiquitous extraterrestrial life relies on three main ideas. The first one is that the size of the universe allows for plenty of planets to have a similar habitability to Earth, and that the age of the universe gives enough time for a long process analogous to the history of Earth to happen there. The second is that the chemical elements that make up life, such as carbon and water, are ubiquitous in the universe. The third is that the physical laws are universal, which means that the forces that would facilitate or prevent the existence of life would be the same ones as on Earth. According to this argument, made by scientists such as Carl Sagan and Stephen Hawking, it would be improbable for life not to exist somewhere else other than Earth.
This argument is embodied in the Copernican principle, which states that Earth does not occupy a unique position in the Universe, and the mediocrity principle, which states that there is nothing special about life on Earth. Other authors consider instead that life in the cosmos, or at least multicellular life, may be actually rare. The Rare Earth hypothesis maintains that life on Earth is possible because of a series of factors that range from the location in the galaxy and the configuration of the Solar System to local characteristics of the planet, and that it is unlikely that all such requirements are simultaneously met by another planet. The proponents of this hypothesis consider that very little evidence suggests the existence of extraterrestrial life and that at this point it is just a desired result and not a reasonable scientific explanation for any gathered data. In 1961, astronomer and astrophysicist Frank Drake devised the Drake equation as a way to stimulate scientific dialogue at a meeting on the search for extraterrestrial intelligence (SETI). The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way galaxy. The Drake equation is: N = R ∗ ⋅ f p ⋅ n e ⋅ f ℓ ⋅ f i ⋅ f c ⋅ L {\displaystyle N=R_{\ast }\cdot f_{p}\cdot n_{e}\cdot f_{\ell }\cdot f_{i}\cdot f_{c}\cdot L} where: N = the number of Milky Way galaxy civilizations already capable of communicating across interstellar space; R* = the average rate of star formation in our galaxy; fp = the fraction of those stars that have planets; ne = the average number of planets that can potentially support life; fl = the fraction of planets that actually support life; fi = the fraction of planets with life that evolves to become intelligent life (civilisations); fc = the fraction of civilizations that develop a technology to broadcast detectable signs of their existence into space; and L = the length of time over which such civilizations broadcast detectable signals into space. Drake's proposed estimates are as follows, but the numbers on the right side of the equation are agreed to be speculative and open to substitution: 10,000 = 5 ⋅ 0.5 ⋅ 2 ⋅ 1 ⋅ 0.2 ⋅ 1 ⋅ 10,000 {\displaystyle 10{,}000=5\cdot 0.5\cdot 2\cdot 1\cdot 0.2\cdot 1\cdot 10{,}000} The Drake equation has proved controversial since, although it is written as a mathematical equation, none of its values were known at the time. Although some values may eventually be measured, others are based on social sciences and are not knowable by their very nature. This does not allow one to draw noteworthy conclusions from the equation. Based on observations from the Hubble Space Telescope, there are nearly 2 trillion galaxies in the observable universe. It is estimated that at least ten percent of all Sun-like stars have a system of planets, i.e. there are 6.25×10^18 stars with planets orbiting them in the observable universe. Even if it is assumed that only one out of a billion of these stars has planets supporting life, there would be some 6.25 billion life-supporting planetary systems in the observable universe. A 2013 study based on results from the Kepler spacecraft estimated that the Milky Way contains at least as many planets as it does stars, resulting in 100–400 billion exoplanets. The nebular hypothesis that explains the formation of the Solar System and other planetary systems would suggest that those can have several configurations, and not all of them may have rocky planets within the habitable zone.
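As a simple numerical illustration, the sketch below evaluates the Drake equation with the speculative values quoted above (they are illustrative estimates, not measurements) and repeats the back-of-envelope observable-universe count from the end of the paragraph.

```python
# Evaluate the Drake equation with the illustrative values quoted above.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N: number of communicating civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(R_star=5, f_p=0.5, n_e=2, f_l=1, f_i=0.2, f_c=1, L=10_000)
print(f"Illustrative estimate: N = {N:,.0f}")  # 10,000

# ~6.25e18 stars with planets in the observable universe; if only one in a
# billion hosted life-supporting planets...
stars_with_planets = 6.25e18
life_supporting = stars_with_planets / 1e9
print(f"~{life_supporting:.2e} life-supporting planetary systems")  # ~6.25e9
```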
The apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilisations and the lack of evidence for such civilisations is known as the Fermi paradox. Dennis W. Sciama claimed that life's existence in the universe depends on various fundamental constants. Zhi-Wei Wang and Samuel L. Braunstein suggest that a random universe capable of supporting life is likely to be just barely able to do so, offering a potential explanation for the Fermi paradox. == Biochemical basis == If extraterrestrial life exists, it could range from simple microorganisms and multicellular organisms similar to animals or plants, to complex alien intelligences akin to humans. When scientists talk about extraterrestrial life, they consider all those types. Although it is possible that extraterrestrial life may have other configurations, scientists use the hierarchy of lifeforms from Earth for simplicity, as it is the only one known to exist. The first basic requirement for life is an environment with non-equilibrium thermodynamics, which means that the thermodynamic equilibrium must be broken by a source of energy. The traditional sources of energy in the cosmos are the stars, such as for life on Earth, which depends on the energy of the sun. However, there are other alternative energy sources, such as volcanoes, plate tectonics, and hydrothermal vents. There are ecosystems on Earth in deep areas of the ocean that do not receive sunlight, and take energy from black smokers instead. Magnetic fields and radioactivity have also been proposed as sources of energy, although they would be less efficient ones. Life on Earth requires water in a liquid state as a solvent in which biochemical reactions take place. It is highly unlikely that an abiogenesis process can start within a gaseous or solid medium: the atoms move either too fast or too slow, making it difficult for specific ones to meet and start chemical reactions. A liquid medium also allows the transport of nutrients and substances required for metabolism. Sufficient quantities of carbon and other elements, along with water, might enable the formation of living organisms on terrestrial planets with a chemical make-up and temperature range similar to that of Earth. Life based on ammonia rather than water has been suggested as an alternative, though this solvent appears less suitable than water. It is also conceivable that there are forms of life whose solvent is a liquid hydrocarbon, such as methane, ethane or propane. Another unknown aspect of potential extraterrestrial life would be the chemical elements that would compose it. Life on Earth is largely composed of carbon, but there could be other hypothetical types of biochemistry. A replacement for carbon would need to be able to create complex molecules, store information required for evolution, and be freely available in the medium. To create DNA, RNA, or a close analog, such an element should be able to bind its atoms with many others, creating complex and stable molecules. It should be able to create at least three covalent bonds: two for making long strings and at least a third to add new links and allow for diverse information. Only nine elements meet this requirement: boron, nitrogen, phosphorus, arsenic, antimony (three bonds), carbon, silicon, germanium and tin (four bonds). As for abundance, carbon, nitrogen, and silicon are the most abundant of these in the universe, far more abundant than the others.
In Earth's crust the most abundant of those elements is silicon, in the hydrosphere it is carbon, and in the atmosphere it is carbon and nitrogen. Silicon, however, has disadvantages over carbon. The molecules formed with silicon atoms are less stable, and more vulnerable to acids, oxygen, and light. An ecosystem of silicon-based lifeforms would require very low temperatures, high atmospheric pressure, an atmosphere devoid of oxygen, and a solvent other than water. The low temperatures required would add an extra problem: the difficulty of kickstarting a process of abiogenesis to create life in the first place. Norman Horowitz, head of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976, considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. However, he also considered that the conditions found on Mars were incompatible with carbon-based life. Even if extraterrestrial life is based on carbon and uses water as a solvent, like Earth life, it may still have a radically different biochemistry. Life is generally considered to be a product of natural selection. It has been proposed that to undergo natural selection a living entity must have the capacity to replicate itself, the capacity to avoid damage/decay, and the capacity to acquire and process resources in support of the first two capacities. Life on Earth started with an RNA world and later evolved to its current form, where some of the RNA tasks were transferred to DNA and proteins. Extraterrestrial life may still be stuck using RNA, or may have evolved into other configurations. It is unclear if our biochemistry is the most efficient one that could be generated, or which elements would follow a similar pattern. However, it is likely that, even if cells had a different composition to those from Earth, they would still have a cell membrane. Life on Earth jumped from prokaryotes to eukaryotes and from unicellular organisms to multicellular organisms through evolution. So far no alternative process to achieve such a result has been conceived, even if hypothetical. Evolution requires life to be divided into individual organisms, and no alternative organisation has been satisfactorily proposed either. At the basic level, membranes define the limit of a cell, between it and its environment, while remaining partially open to exchange energy and resources with it. The evolution from simple cells to eukaryotes, and from them to multicellular lifeforms, is not guaranteed. The Cambrian explosion took place thousands of millions of years after the origin of life, and its causes are not fully known yet. On the other hand, the jump to multicellularity took place several times, which suggests that it could be a case of convergent evolution, and so likely to take place on other planets as well. Palaeontologist Simon Conway Morris considers that convergent evolution would lead to kingdoms similar to our plants and animals, and that many features are likely to develop in alien animals as well, such as bilateral symmetry, limbs, digestive systems and heads with sensory organs. Scientists from the University of Oxford analysed the question from the perspective of evolutionary theory and wrote in a study in the International Journal of Astrobiology that aliens may be similar to humans.
The planetary context would also have an influence: a planet with higher gravity would have smaller animals, and other types of stars can lead to non-green photosynthesizers. The amount of energy available would also affect biodiversity, as an ecosystem sustained by black smokers or hydrothermal vents would have less energy available than those sustained by a star's light and heat, and so its lifeforms would not grow beyond a certain complexity. There is also research into assessing the capacity of life for developing intelligence. It has been suggested that this capacity arises with the number of potential niches a planet contains, and that the complexity of life itself is reflected in the information density of planetary environments, which in turn can be computed from its niches. === Harsh environmental conditions on Earth harboring life === The conditions on other planets in the Solar System, and on worlds in the many galaxies beyond the Milky Way, are very harsh and seem too extreme to harbor any life. These planets can be subject to intense UV radiation paired with extreme temperatures, a lack of water, and much else that does not seem to favor the creation or maintenance of extraterrestrial life. However, there is considerable evidence that some of the earliest and most basic forms of life on Earth originated in extreme environments that, at first glance, seem unlikely to have harbored life. Fossil evidence, along with theories backed by years of research, has marked environments like hydrothermal vents and acidic hot springs as some of the first places where life could have originated on Earth. These environments can be considered extreme when compared to the typical ecosystems that the majority of life on Earth now inhabits, as hydrothermal vents are scorching hot due to magma escaping from the Earth's mantle and meeting the much colder ocean water. Even today, diverse populations of bacteria inhabit the areas surrounding these hydrothermal vents, suggesting that some form of life could be supported even in environments as harsh as those on other planets in the Solar System. The aspect of these harsh environments that makes them candidates for the origin of life on Earth, as well as for the possible creation of life on other planets, is the chemical reactions that form spontaneously there. For example, the hydrothermal vents found on the ocean floor are known to support many chemosynthetic processes which allow organisms to utilize energy through reduced chemical compounds that fix carbon. In turn, these reactions allow organisms to live in relatively low-oxygen environments while maintaining enough energy to support themselves. The early Earth environment was reducing, and therefore these carbon-fixing compounds were necessary for the survival and possible origin of life on Earth. From the limited information scientists have about the atmospheres of planets in the Milky Way and beyond, those atmospheres are most likely reducing or very low in oxygen, especially when compared with Earth's atmosphere.
If the necessary elements and ions were present on these planets, the same carbon-fixing, reduced chemical compounds occurring around hydrothermal vents could also occur on their surfaces and possibly result in the origin of extraterrestrial life. == Planetary habitability in the Solar System == The Solar System has a wide variety of planets, dwarf planets, and moons, and each one is studied for its potential to host life. Each one has its own specific conditions that may benefit or harm life. So far, the only lifeforms found are those from Earth. No intelligent species other than humans exists or has ever existed within the Solar System. Astrobiologist Mary Voytek points out that it would be unlikely to find large ecosystems, as they would have already been detected by now. The inner Solar System is likely devoid of life. However, Venus is still of interest to astrobiologists, as it is a terrestrial planet that was likely similar to Earth in its early stages and developed in a different way. It has a runaway greenhouse effect, the hottest surface in the Solar System, clouds of sulfuric acid, no remaining surface liquid water, and a thick carbon-dioxide atmosphere with enormous pressure. Comparing the two planets helps in understanding the precise differences that lead to beneficial or harmful conditions for life. Despite the conditions working against life on Venus, there are suspicions that microbial life-forms may survive in its high-altitude clouds. Mars is a cold and almost airless desert, inhospitable to life. However, recent studies revealed that water on Mars used to be quite abundant, forming rivers, lakes, and perhaps even oceans. Mars may have been habitable back then, and life on Mars may have been possible. But when the planetary core ceased to generate a magnetic field, the solar wind stripped away the atmosphere and the planet became vulnerable to solar radiation. Ancient life-forms may still have left fossilised remains, and microbes may still survive deep underground. As mentioned, the gas giants and ice giants are unlikely to contain life. The most distant solar system bodies, found in the Kuiper Belt and outwards, are locked in permanent deep-freeze, but cannot be ruled out completely. Although the giant planets themselves are highly unlikely to have life, there is much hope of finding it on the moons orbiting these planets. Europa, in the Jovian system, has a subsurface ocean below a thick layer of ice. Ganymede and Callisto also have subsurface oceans, but life is less likely in them because the water is sandwiched between layers of solid ice. Europa's ocean, in contrast, would be in contact with the rocky interior, which helps the chemical reactions. It may be difficult to dig so deep in order to study those oceans, though. Enceladus, a tiny moon of Saturn with another subsurface ocean, may not need to be dug into, as it releases water into space in eruption columns. The space probe Cassini flew inside one of these, but could not make a full study because NASA did not expect this phenomenon and did not equip the probe to study ocean water. Still, Cassini detected complex organic molecules, salts, evidence of hydrothermal activity, hydrogen, and methane. Titan is the only celestial body in the Solar System besides Earth that has liquid bodies on the surface. It has rivers, lakes, and rain of hydrocarbons such as methane and ethane, and even a cycle similar to Earth's water cycle.
This special context encourages speculation about lifeforms with a different biochemistry, but the cold temperatures would make such chemistry take place at a very slow pace. Water is rock-solid on the surface, but Titan does have a subsurface water ocean like several other moons. However, it lies at such a great depth that it would be very difficult to access for study. == Scientific search == The science that searches for and studies life in the universe, both on Earth and elsewhere, is called astrobiology. By studying Earth's life, the only known form of life, astrobiology seeks to understand how life starts and evolves and the requirements for its continued existence. This helps to determine what to look for when searching for life in other celestial bodies. This is a complex area of study, and uses the combined perspectives of several scientific disciplines, such as astronomy, biology, chemistry, geology, oceanography, and atmospheric sciences. The scientific search for extraterrestrial life is being carried out both directly and indirectly. As of September 2017, 3,667 exoplanets in 2,747 systems had been identified, and other planets and moons in the Solar System hold the potential for hosting primitive life such as microorganisms. As of 8 February 2021, an updated status of studies considering the possible detection of lifeforms on Venus (via phosphine) and Mars (via methane) was reported. === Search for basic life === Scientists search for biosignatures within the Solar System by studying planetary surfaces and examining meteorites. Some claim to have identified evidence that microbial life has existed on Mars. In 1996, a controversial report stated that structures resembling nanobacteria were discovered in a meteorite, ALH84001, formed of rock ejected from Mars. Although all the unusual properties of the meteorite were eventually explained as the result of inorganic processes, the controversy over its discovery laid the groundwork for the development of astrobiology. An experiment on the two Viking Mars landers reported gas emissions from heated Martian soil samples that some scientists argue are consistent with the presence of living microorganisms. Lack of corroborating evidence from other experiments on the same samples suggests that a non-biological reaction is a more likely hypothesis. In February 2005, NASA scientists reported they may have found some evidence of extraterrestrial life on Mars. The two scientists, Carol Stoker and Larry Lemke of NASA's Ames Research Center, based their claim on methane signatures found in Mars's atmosphere resembling the methane production of some forms of primitive life on Earth, as well as on their own study of primitive life near the Rio Tinto river in Spain. NASA officials soon distanced NASA from the scientists' claims, and Stoker herself backed off from her initial assertions. In November 2011, NASA launched the Mars Science Laboratory mission, which landed the Curiosity rover on Mars. It is designed to assess the past and present habitability of Mars using a variety of scientific instruments. The rover landed on Mars at Gale Crater in August 2012. A group of scientists at Cornell University started a catalog of microorganisms recording the way each one reacts to sunlight. The goal is to help with the search for similar organisms on exoplanets, as the starlight reflected by planets rich in such organisms would have a specific spectrum, unlike that of starlight reflected from lifeless planets.
If Earth were studied from afar with this system, it would reveal a shade of green, as a result of the abundance of photosynthetic plants. In August 2011, NASA studied meteorites found in Antarctica, finding adenine, guanine, hypoxanthine and xanthine. Adenine and guanine are components of DNA, and the others are used in other biological processes. The studies ruled out terrestrial contamination of the meteorites, as those components would not be freely available in the form in which they were found in the samples. This discovery suggests that several organic molecules that serve as building blocks of life may be generated within asteroids and comets. In October 2011, scientists reported that cosmic dust contains complex organic compounds ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. It is still unclear if those compounds played a role in the creation of life on Earth, but Sun Kwok, of the University of Hong Kong, thinks so. "If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life." In August 2012, and in a world first, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422, which is located 400 light years from Earth. Glycolaldehyde is needed to form ribonucleic acid, or RNA, which is similar in function to DNA. This finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation. In December 2023, astronomers reported the first discovery, in the plumes of Enceladus, a moon of Saturn, of hydrogen cyanide, a chemical possibly essential for life as we know it, as well as other organic molecules, some of which are yet to be better identified and understood. According to the researchers, "these [newly discovered] compounds could potentially support extant microbial communities or drive complex organic synthesis leading to the origin of life." === Search for extraterrestrial intelligences === Although most searches are focused on the biology of extraterrestrial life, an extraterrestrial intelligence capable enough to develop a civilization may be detectable by other means as well. Technology may generate technosignatures, effects on the native planet that are not produced by natural causes. There are three main types of technosignatures considered: interstellar communications, effects on the atmosphere, and planetary-sized structures such as Dyson spheres. Organizations such as the SETI Institute search the cosmos for potential forms of communication. They started with radio waves, and now search for laser pulses as well. The challenge for this search is that there are natural sources of such signals as well, such as gamma-ray bursts and supernovae, and the difference between a natural signal and an artificial one would be in its specific patterns. Astronomers intend to use artificial intelligence for this, as it can manage large amounts of data and is devoid of biases and preconceptions. Besides, even if there is an advanced extraterrestrial civilization, there is no guarantee that it is transmitting radio communications in the direction of Earth.
The length of time required for a signal to travel across space means that a potential answer may arrive decades or centuries after the initial message. The atmosphere of Earth is rich in nitrogen dioxide as a result of air pollution, which can be detectable. The natural abundance of carbon, which is also relatively reactive, makes it likely to be a basic component of the development of a potential extraterrestrial technological civilization, as it is on Earth. Fossil fuels may likely be generated and used on such worlds as well. The abundance of chlorofluorocarbons in the atmosphere can also be a clear technosignature, considering their role in ozone depletion. Light pollution may be another technosignature, as multiple lights on the night side of a rocky planet can be a sign of advanced technological development. However, modern telescopes are not strong enough to study exoplanets with the required level of detail to perceive it. The Kardashev scale proposes that a civilization may eventually start consuming energy directly from its local star. This would require giant structures built next to it, called Dyson spheres. Those speculative structures would cause an excess of infrared radiation that telescopes might notice. Excess infrared radiation is typical of young stars, which are surrounded by dusty protoplanetary disks that will eventually form planets. An older star such as the Sun would have no natural reason to show excess infrared radiation. The presence of heavy elements in a star's light spectrum is another potential technosignature; such elements would (in theory) be found if the star were being used as an incinerator/repository for nuclear waste products. === Extrasolar planets === Some astronomers search for extrasolar planets that may be conducive to life, narrowing the search to terrestrial planets within the habitable zones of their stars. Since 1992, thousands of exoplanets have been discovered (5,943 planets in 4,461 planetary systems, including 976 multiple planetary systems, as of 17 April 2025). The extrasolar planets so far discovered range in size from that of terrestrial planets similar to Earth's size to that of gas giants larger than Jupiter. The number of observed exoplanets is expected to increase greatly in the coming years. The Kepler space telescope has also detected a few thousand candidate planets, of which about 11% may be false positives. There is at least one planet on average per star. About 1 in 5 Sun-like stars have an "Earth-sized" planet in the habitable zone, with the nearest expected to be within 12 light-years distance from Earth. Assuming 200 billion stars in the Milky Way, that would be 11 billion potentially habitable Earth-sized planets in the Milky Way, rising to 40 billion if red dwarfs are included. The rogue planets in the Milky Way possibly number in the trillions. The nearest known exoplanet is Proxima Centauri b, located 4.2 light-years (1.3 pc) from Earth in the southern constellation of Centaurus. As of March 2014, the least massive exoplanet known is PSR B1257+12 A, which is about twice the mass of the Moon. The most massive planet listed on the NASA Exoplanet Archive is DENIS-P J082303.1−491201 b, about 29 times the mass of Jupiter, although according to most definitions of a planet, it is too massive to be a planet and may be a brown dwarf instead. Almost all of the planets detected so far are within the Milky Way, but there have also been a few possible detections of extragalactic planets.
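The Milky Way figures quoted above can be checked with simple arithmetic. The sketch below is only illustrative: the 1-in-5 rate and the star counts are the values given in the text, while the fraction of stars that are Sun-like is not given and is backed out here as an inference rather than a source figure.

```python
# Quick arithmetic check of the quoted Milky Way planet counts.
TOTAL_STARS = 200e9
HZ_EARTH_SIZED_RATE = 0.2          # "about 1 in 5"

# Applying the rate to every star (i.e. including red dwarfs):
all_stars_estimate = TOTAL_STARS * HZ_EARTH_SIZED_RATE
print(f"{all_stars_estimate:.0e} planets")  # 4e+10, the quoted 40 billion

# The quoted 11 billion for Sun-like stars alone implies a Sun-like fraction of:
sun_like_planets = 11e9
implied_sun_like_fraction = sun_like_planets / all_stars_estimate
print(f"implied Sun-like fraction ~ {implied_sun_like_fraction:.0%}")  # ~28%, an inference
```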
The study of planetary habitability also considers a wide range of other factors in determining the suitability of a planet for hosting life. One sign that a planet probably already contains life is the presence of an atmosphere with significant amounts of oxygen, since that gas is highly reactive and generally would not last long without constant replenishment. This replenishment occurs on Earth through photosynthetic organisms. One way to analyse the atmosphere of an exoplanet is through spectroscopy when it transits its star, though this might only be feasible with dim stars like white dwarfs. == History and cultural impact == === Cosmic pluralism === The modern concept of extraterrestrial life is based on assumptions that were not commonplace during the early days of astronomy. The first explanations for the celestial objects seen in the night sky were based on mythology. Scholars from Ancient Greece were the first to consider that the universe is inherently understandable and rejected explanations based on supernatural incomprehensible forces, such as the myth of the Sun being pulled across the sky in the chariot of Apollo. They had not developed the scientific method yet and based their ideas on pure thought and speculation, but they developed precursor ideas to it, such as that explanations had to be discarded if they contradicted observable facts. The discussions of those Greek scholars established many of the pillars that would eventually lead to the idea of extraterrestrial life, such as Earth being round and not flat. The cosmos was first structured in a geocentric model that considered that the Sun and all other celestial bodies revolve around Earth. However, these bodies were not considered to be worlds. In Greek understanding, the world was composed of both Earth and the celestial objects with noticeable movements. Anaximander thought that the cosmos was made from apeiron, a substance that created the world, and that the world would eventually return to the cosmos. Eventually two groups emerged: the atomists, who thought that matter on both Earth and in the cosmos was equally made of small atoms of the classical elements (earth, water, fire and air), and the Aristotelians, who thought that those elements were exclusive to Earth and that the cosmos was made of a fifth one, the aether. The atomist Epicurus thought that the processes that created the world, its animals and plants would have created other worlds elsewhere, along with their own animals and plants. Aristotle thought instead that all of the earth element naturally fell towards the center of the universe, which would make it impossible for other planets to exist elsewhere. Under that reasoning, Earth was not only at the center, it was also the only planet in the universe. Cosmic pluralism, the plurality of worlds, or simply pluralism, describes the philosophical belief in numerous "worlds" in addition to Earth, which might harbor extraterrestrial life. The earliest recorded assertion of extraterrestrial human life is found in ancient scriptures of Jainism. There are multiple "worlds" mentioned in Jain scriptures that support human life. These include, among others, Bharat Kshetra, Mahavideh Kshetra, Airavat Kshetra, and Hari kshetra. Medieval Muslim writers like Fakhr al-Din al-Razi and Muhammad al-Baqir supported cosmic pluralism on the basis of the Qur'an. Chaucer's poem The House of Fame engaged in medieval thought experiments that postulated the plurality of worlds.
However, those ideas about other worlds were different from the current knowledge about the structure of the universe, and did not postulate the existence of planetary systems other than the Solar System. When those authors talked about other worlds, they meant places located at the center of their own systems, with their own stellar vaults and cosmos surrounding them. The Greek ideas and the disputes between atomists and Aristotelians outlived the decline of ancient Greece. The Great Library of Alexandria compiled information about them, part of which was translated by Islamic scholars and thus survived the end of the Library. Baghdad combined the knowledge of the Greeks, the Indians, the Chinese and its own scholars, and the knowledge also spread through the Byzantine Empire. From there it eventually returned to Europe by the time of the Middle Ages. However, as the Greek atomist doctrine held that the world was created by random movements of atoms, with no need for a creator deity, it became associated with atheism, and the dispute intertwined with religious ones. Still, the Church did not react to those topics in a homogeneous way, and there were stricter and more permissive views within the church itself. The first known mention of the term 'panspermia' was in the writings of the 5th-century BC Greek philosopher Anaxagoras. He proposed the idea that life exists everywhere. === Early modern period === By the time of the late Middle Ages there were many known inaccuracies in the geocentric model, but it was kept in use because naked-eye observations provided limited data. Nicolaus Copernicus started the Copernican Revolution by proposing that the planets revolve around the sun rather than Earth. His proposal had little acceptance at first because, as he kept the assumption that orbits were perfect circles, his model led to as many inaccuracies as the geocentric one. Tycho Brahe improved the available data with naked-eye observatories, which worked with highly complex sextants and quadrants. Tycho could not make sense of his observations, but Johannes Kepler did: orbits were not perfect circles, but ellipses. This knowledge benefited the Copernican model, which now worked almost perfectly. The invention of the telescope a short time later, and its refinement by Galileo Galilei, resolved the final doubts, and the paradigm shift was completed. Under this new understanding, the notion of extraterrestrial life became feasible: if Earth is but a planet orbiting around a star, there may be planets similar to Earth elsewhere. The astronomical study of distant bodies also proved that physical laws are the same elsewhere in the universe as on Earth, with nothing making the planet truly special. The new ideas were met with resistance from the Catholic church. Galileo was tried for the heliocentric model, which was considered heretical, and forced to recant it. The best-known early-modern proponent of ideas of extraterrestrial life was the Italian philosopher Giordano Bruno, who argued in the 16th century for an infinite universe in which every star is surrounded by its own planetary system. Bruno wrote that other worlds "have no less virtue nor a nature different to that of our earth" and, like Earth, "contain animals and inhabitants". Bruno's belief in the plurality of worlds was one of the charges leveled against him by the Venetian Holy Inquisition, which tried and executed him. The heliocentric model was further strengthened by the postulation of the theory of gravity by Sir Isaac Newton.
This theory provided the mathematics that explains the motions of all things in the universe, including planetary orbits. By this point, the geocentric model was definitely discarded. By this time, the use of the scientific method had become a standard, and new discoveries were expected to provide evidence and rigorous mathematical explanations. Science also took a deeper interest in the mechanics of natural phenomena, trying to explain not just the way nature works but also the reasons for working that way. There was very little actual discussion about extraterrestrial life before this point, as the Aristotelian ideas remained influential while geocentrism was still accepted. When it was finally proved wrong, it not only meant that Earth was not the center of the universe, but also that the lights seen in the sky were not just lights, but physical objects. The notion that life may exist in them as well soon became an ongoing topic of discussion, although one with no practical ways to investigate. The possibility of extraterrestrials remained a widespread speculation as scientific discovery accelerated. William Herschel, the discoverer of Uranus, was one of many 18th–19th-century astronomers who believed that the Solar System is populated by alien life. Other scholars of the period who championed "cosmic pluralism" included Immanuel Kant and Benjamin Franklin. At the height of the Enlightenment, even the Sun and Moon were considered candidates for extraterrestrial inhabitants. === 19th century === Speculation about life on Mars increased in the late 19th century, following telescopic observation of apparent Martian canals – which soon, however, turned out to be optical illusions. Despite this, in 1895, American astronomer Percival Lowell published his book Mars, followed by Mars and its Canals in 1906, proposing that the canals were the work of a long-gone civilisation. Spectroscopic analysis of Mars's atmosphere began in earnest in 1894, when U.S. astronomer William Wallace Campbell showed that neither water nor oxygen was present in the Martian atmosphere. By 1909 better telescopes and the best perihelic opposition of Mars since 1877 conclusively put an end to the canal hypothesis. As a consequence of the belief in the spontaneous generation there was little thought about the conditions of each celestial body: it was simply assumed that life would thrive anywhere. This theory was disproved by Louis Pasteur in the 19th century. Popular belief in thriving alien civilisations elsewhere in the solar system still remained strong until Mariner 4 and Mariner 9 provided close images of Mars, which debunked forever the idea of the existence of Martians and decreased the previous expectations of finding alien life in general. The end of the spontaneous generation belief forced investigation into the origin of life. Although abiogenesis is the more accepted theory, a number of authors reclaimed the term "panspermia" and proposed that life was brought to Earth from elsewhere. Some of those authors are Jöns Jacob Berzelius (1834), Kelvin (1871), Hermann von Helmholtz (1879) and, somewhat later, by Svante Arrhenius (1903). The science fiction genre, although not so named during the time, developed during the late 19th century. The expansion of the genre of extraterrestrials in fiction influenced the popular perception over the real-life topic, making people eager to jump to conclusions about the discovery of aliens. 
Science marched at a slower pace: some discoveries fueled expectations while others dashed excessive hopes. For example, with the advent of telescopes, most structures seen on the Moon or Mars were immediately attributed to Selenites or Martians, and later observations with more powerful instruments revealed that all such discoveries were natural features. A famous case is the Cydonia region of Mars, first imaged by the Viking 1 orbiter. The low-resolution photos showed a rock formation that resembled a human face, but later spacecraft took photos in higher detail that showed that there was nothing special about the site. === Recent history === The search and study of extraterrestrial life became a science of its own, astrobiology. Also known as exobiology, this discipline is pursued by NASA, ESA, INAF, and other agencies. Astrobiology studies life from Earth as well, but with a cosmic perspective. For example, abiogenesis is of interest to astrobiology not for the origin of life on Earth as such, but for the chance that a similar process took place on other celestial bodies. Many aspects of life, from its definition to its chemistry, are analyzed as either likely to be similar in all forms of life across the cosmos or native only to Earth. Astrobiology, however, remains constrained by the current lack of extraterrestrial life-forms to study, as all life on Earth comes from the same ancestor, and it is hard to infer general characteristics from a group with a single example to analyse. The 20th century came with great technological advances, speculations about future hypothetical technologies, and an increased basic knowledge of science among the general population thanks to the popularization of science through the mass media. The public interest in extraterrestrial life and the lack of discoveries by mainstream science led to the emergence of pseudosciences that provided affirmative, if questionable, answers to the existence of aliens. Ufology claims that many unidentified flying objects (UFOs) are spaceships of alien species, and the ancient astronauts hypothesis claims that aliens visited Earth in antiquity and prehistoric times but that people of the era failed to understand what they were seeing. Most UFOs or UFO sightings can be readily explained as sightings of Earth-based aircraft (including top-secret aircraft), known astronomical objects or weather phenomena, or as hoaxes. Looking beyond the pseudosciences, Lewis White Beck strove to elevate the level of public discourse on the topic of extraterrestrial life by tracing the evolution of philosophical thought over the centuries from ancient times into the modern era. His review of the contributions made by Lucretius, Plutarch, Aristotle, Copernicus, Immanuel Kant, John Wilkins, Charles Darwin and Karl Marx demonstrated that even in modern times, humanity could be profoundly influenced in its search for extraterrestrial life by subtle and comforting archetypal ideas which are largely derived from firmly held religious, philosophical and existential belief systems. On a positive note, however, Beck further argued that even if the search for extraterrestrial life proves to be unsuccessful, the endeavor itself could have beneficial consequences by assisting humanity in its attempt to actualize superior ways of living here on Earth. By the 21st century, it was accepted that multicellular life in the Solar System exists only on Earth, but interest in extraterrestrial life increased regardless. 
This is a result of advances in several sciences. Knowledge of planetary habitability allows the likelihood of finding life at a given celestial body to be considered in scientific terms, as it is known which features are beneficial or harmful for life. Astronomy and telescopes have also improved to the point that exoplanets can be confirmed and even studied, increasing the number of places to search. Life may still exist elsewhere in the Solar System in unicellular form, and advances in spacecraft make it possible to send robots to study samples in situ, with tools of growing complexity and reliability. Although no extraterrestrial life has been found and life may yet prove to be unique to Earth, there are scientific reasons to suspect that it can exist elsewhere, and technological advances that may detect it if it does. Many scientists are optimistic about the chances of finding alien life. In the words of SETI's Frank Drake, "All we know for sure is that the sky is not littered with powerful microwave transmitters". Drake noted that it is entirely possible that advanced technology results in communication being carried out in some way other than conventional radio transmission. At the same time, the data returned by space probes, and giant strides in detection methods, have allowed science to begin delineating habitability criteria on other worlds, and to confirm that other planets, at least, are plentiful, though aliens remain a question mark. The Wow! signal, detected in 1977 by a SETI project, remains a subject of speculative debate. On the other hand, other scientists are pessimistic. Jacques Monod wrote that "Man knows at last that he is alone in the indifferent immensity of the universe, from which he has emerged by chance". In 2000, geologist and paleontologist Peter Ward and astrobiologist Donald Brownlee published a book entitled Rare Earth: Why Complex Life is Uncommon in the Universe. In it, they discussed the Rare Earth hypothesis, in which they claim that Earth-like life is rare in the universe, whereas microbial life is common. Ward and Brownlee are open to the idea of evolution on other planets that is not based on essential Earth-like characteristics such as DNA and carbon. As for the possible risks, theoretical physicist Stephen Hawking warned in 2010 that humans should not try to contact alien life forms. He warned that aliens might pillage Earth for resources. "If aliens visit us, the outcome would be much as when Columbus landed in America, which didn't turn out well for the Native Americans", he said. Jared Diamond had earlier expressed similar concerns. On 20 July 2015, Hawking and Russian billionaire Yuri Milner, along with the SETI Institute, announced a well-funded program, called the Breakthrough Initiatives, to expand the search for extraterrestrial life. The group contracted the services of the 100-meter Robert C. Byrd Green Bank Telescope in West Virginia in the United States and the 64-meter Parkes Telescope in New South Wales, Australia. On 13 February 2015, scientists (including Geoffrey Marcy, Seth Shostak, Frank Drake and David Brin) at a convention of the American Association for the Advancement of Science discussed Active SETI and whether transmitting a message to possible intelligent extraterrestrials in the Cosmos was a good idea; one result was a statement, signed by many, that a "worldwide scientific, political and humanitarian discussion must occur before any message is sent". 
== Government responses == The 1967 Outer Space Treaty and the 1979 Moon Agreement define rules of planetary protection against potentially hazardous extraterrestrial life. COSPAR also provides guidelines for planetary protection. A committee of the United Nations Office for Outer Space Affairs had in 1977 discussed for a year strategies for interacting with extraterrestrial life or intelligence. The discussion ended without any conclusions. As of 2010, the UN lacks response mechanisms for the case of an extraterrestrial contact. One of the NASA divisions is the Office of Safety and Mission Assurance (OSMA), also known as the Planetary Protection Office. A part of its mission is to "rigorously preclude backward contamination of Earth by extraterrestrial life." In 2016, the Chinese Government released a white paper detailing its space program. According to the document, one of the research objectives of the program is the search for extraterrestrial life. It is also one of the objectives of the Chinese Five-hundred-meter Aperture Spherical Telescope (FAST) program. In 2020, Dmitry Rogozin, the head of the Russian space agency, said the search for extraterrestrial life is one of the main goals of deep space research. He also acknowledged the possibility of existence of primitive life on other planets of the Solar System. The French space agency has an office for the study of "non-identified aero spatial phenomena". The agency is maintaining a publicly accessible database of such phenomena, with over 1600 detailed entries. According to the head of the office, the vast majority of entries have a mundane explanation; but for 25% of entries, their extraterrestrial origin can neither be confirmed nor denied. In 2020, chairman of the Israel Space Agency Isaac Ben-Israel stated that the probability of detecting life in outer space is "quite large". But he disagrees with his former colleague Haim Eshed who stated that there are contacts between an advanced alien civilisation and some of Earth's governments. == In fiction == Although the idea of extraterrestrial peoples became feasible once astronomy developed enough to understand the nature of planets, they were not thought of as being any different from humans. Having no scientific explanation for the origin of mankind and its relation to other species, there was no reason to expect them to be any other way. This was changed by the 1859 book On the Origin of Species by Charles Darwin, which proposed the theory of evolution. Now with the notion that evolution on other planets may take other directions, science fiction authors created bizarre aliens, clearly distinct from humans. A usual way to do that was to add body features from other animals, such as insects or octopuses. Costuming and special effects feasibility alongside budget considerations forced films and TV series to tone down the fantasy, but these limitations lessened since the 1990s with the advent of computer-generated imagery (CGI), and later on as CGI became more effective and less expensive. Real-life events sometimes captivate people's imagination and this influences the works of fiction. For example, during the Barney and Betty Hill incident, the first recorded claim of an alien abduction, the couple reported that they were abducted and experimented on by aliens with oversized heads, big eyes, pale grey skin, and small noses, a description that eventually became the grey alien archetype once used in works of fiction. 
== See also == == Notes == == References == == Further reading == == External links == Astrobiology at NASA European Astrobiology Institute
Wikipedia/Extraterrestrial_life
The Nexus for Exoplanet System Science (NExSS) initiative is a National Aeronautics and Space Administration (NASA) virtual institute designed to foster interdisciplinary collaboration in the search for life on exoplanets. Led by the Ames Research Center, the NASA Exoplanet Science Institute, and the Goddard Institute for Space Studies, NExSS will help organize the search for life on exoplanets from participating research teams and acquire new knowledge about exoplanets and extrasolar planetary systems. == History == In 1995, astronomers using ground-based observatories discovered 51 Pegasi b, the first exoplanet orbiting a Sun-like star. NASA launched the Kepler space telescope in 2009 to search for Earth-size exoplanets. By 2015, they had confirmed more than a thousand exoplanets, while several thousand additional candidates awaited confirmation. To help coordinate efforts to sift through and understand the data, NASA needed a way for researchers to collaborate across disciplines. The success of the Virtual Planetary Laboratory research network at the University of Washington led Mary A. Voytek, director of the NASA Astrobiology Program, to model its structure and create the Nexus for Exoplanet System Science (NExSS) initiative. Leaders from three NASA research centers will run the program: Natalie Batalha of NASA's Ames Research Center, Dawn Gelino of the NASA Exoplanet Science Institute, and Anthony Del Genio of NASA's Goddard Institute for Space Studies. == Research == Functioning as a virtual institute, NExSS is currently composed of sixteen interdisciplinary science teams from ten universities, three NASA centers and two research institutes, who will work together to search for habitable exoplanets that can support life. The US teams were initially selected from a total of about 200 proposals; however, the coalition is expected to expand nationally and internationally as the project gets underway. Teams will also work with amateur citizen scientists who will have the ability to access the public Kepler data and search for exoplanets. NExSS will draw from scientific expertise in each of the four divisions of the Science Mission Directorate: Earth science, planetary science, heliophysics and astrophysics. NExSS research will directly contribute to understanding and interpreting future exoplanet data from the upcoming launches of the Transiting Exoplanet Survey Satellite and James Webb Space Telescope, as well as the planned Nancy Grace Roman Space Telescope mission. Current NExSS research projects as of 2015: == See also == == Notes == == References ==
Wikipedia/Nexus_for_Exoplanet_System_Science
Triatomic molecules are molecules composed of three atoms, of either the same or different chemical elements. Examples include H2O, CO2 (pictured), HCN, O3 (ozone) and NO2. == Molecular vibrations == The vibrational modes of a triatomic molecule can be determined in specific cases. === Symmetric linear molecules === A symmetric linear molecule ABA can perform: Antisymmetric longitudinal vibrations with frequency ω a = k 1 M m A m B {\displaystyle \omega _{a}={\sqrt {\frac {k_{1}M}{m_{A}m_{B}}}}} Symmetric longitudinal vibrations with frequency ω s 1 = k 1 m A {\displaystyle \omega _{s1}={\sqrt {\frac {k_{1}}{m_{A}}}}} Symmetric transversal vibrations with frequency ω s 2 = 2 k 2 M m A m B {\displaystyle \omega _{s2}={\sqrt {\frac {2k_{2}M}{m_{A}m_{B}}}}} In the previous formulas, M is the total mass of the molecule, mA and mB are the masses of the elements A and B, k1 and k2 are the spring constants of the molecule along its axis and perpendicular to it. == Types == === Homonuclear === Homonuclear triatomic molecules contain three of the same kind of atom. That molecule will be an allotrope of that element. Ozone, O3 is an example of a triatomic molecule with all atoms the same. Triatomic hydrogen, H3, is unstable and breaks up spontaneously. H3+, the trihydrogen cation is stable by itself and is symmetric. 4He3, the helium trimer is only weakly bound by van der Waals force and is in an Efimov state. Trisulfur (S3) is analogous to ozone. == Geometry == All triatomic molecules may be classified as possessing either a linear, bent, or cyclic geometry. === Linear === Linear triatomic molecules owe their geometry to their sp or sp3d hybridised central atoms. Well-known linear triatomic molecules include carbon dioxide (CO2) and hydrogen cyanide (HCN). Xenon difluoride (XeF2) is one of the rare examples of a linear triatomic molecule possessing non-bonded pairs of electrons on the central atom. === Bent === === Cyclic === == References == == External links == Media related to Triatomic molecules at Wikimedia Commons
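As a numerical illustration of the symmetric linear ABA formulas in the Molecular vibrations section above, the following Python sketch evaluates the three vibrational frequencies for a CO2-like molecule (A = O outer atoms, B = C central atom). The force constants k1 and k2 used here are assumed placeholder values chosen only to give frequencies of a plausible order of magnitude, not tabulated spectroscopic data.

import math

u = 1.66053906660e-27          # atomic mass unit, kg
m_A = 15.999 * u               # outer atom mass (O)
m_B = 12.011 * u               # central atom mass (C)
M = 2 * m_A + m_B              # total molecular mass

k1 = 1.6e3                     # assumed longitudinal spring constant, N/m
k2 = 6.0e1                     # assumed transverse (bending) spring constant, N/m

omega_a  = math.sqrt(k1 * M / (m_A * m_B))      # antisymmetric longitudinal mode
omega_s1 = math.sqrt(k1 / m_A)                  # symmetric longitudinal mode
omega_s2 = math.sqrt(2 * k2 * M / (m_A * m_B))  # symmetric transversal (bending) mode

for name, w in [("antisymmetric", omega_a), ("symmetric", omega_s1), ("bending", omega_s2)]:
    wavenumber = w / (2 * math.pi * 2.99792458e10)   # angular frequency -> cm^-1
    print(f"{name:>13}: omega = {w:.3e} rad/s  (~{wavenumber:.0f} cm^-1)")

With these assumed constants the three modes come out near 2500, 1300 and 680 cm^-1, the same order as the measured CO2 bands, illustrating how the formulas relate atomic masses and bond stiffness to vibrational frequencies.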
Wikipedia/Triatomic_molecule
The iron–sulfur world hypothesis is a set of proposals for the origin of life and the early evolution of life advanced in a series of articles between 1988 and 1992 by Günter Wächtershäuser, a Munich patent lawyer with a degree in chemistry, who had been encouraged and supported by philosopher Karl R. Popper to publish his ideas. The hypothesis proposes that early life may have formed on the surface of iron sulfide minerals, hence the name. It was developed by retrodiction (making a "prediction" about the past) from extant biochemistry (non-extinct, surviving biochemistry) in conjunction with chemical experiments. == Origin of life == === Pioneer organism === Wächtershäuser proposes that the earliest form of life, termed the "pioneer organism", originated in a volcanic hydrothermal flow at high pressure and high (100 °C) temperature. It had a composite structure of a mineral base with catalytic transition metal centers (predominantly iron and nickel, but also perhaps cobalt, manganese, tungsten and zinc). The catalytic centers catalyzed autotrophic carbon fixation pathways generating small molecule (non-polymer) organic compounds from inorganic gases (e.g. carbon monoxide, carbon dioxide, hydrogen cyanide and hydrogen sulfide). These organic compounds were retained on or in the mineral base as organic ligands of the transition metal centers with a flow retention time in correspondence with their mineral bonding strength thereby defining an autocatalytic "surface metabolism". The catalytic transition metal centers became autocatalytic by being accelerated by their organic products turned ligands. The carbon fixation metabolism became autocatalytic by forming a metabolic cycle in the form of a primitive sulfur-dependent version of the reductive citric acid cycle. Accelerated catalysts expanded the metabolism and new metabolic products further accelerated the catalysts. The idea is that once such a primitive autocatalytic metabolism was established, its intrinsically synthetic chemistry began to produce ever more complex organic compounds, ever more complex pathways and ever more complex catalytic centers. === Nutrient conversions === The water–gas shift reaction (CO + H2O → CO2 + H2) occurs in volcanic fluids with diverse catalysts or without catalysts. The combination of ferrous sulfide (FeS, troilite) and hydrogen sulfide (H2S) as reducing agents (both reagents are simultaneously oxidized in the reaction here under creating the disulfide bond, S–S) in conjunction with pyrite (FeS2) formation: FeS + H2S → FeS2 + 2 H+ + 2 e− or with H2 directly produced instead of 2 H+ + 2 e− FeS + H2S → FeS2 + H2 has been demonstrated under mild volcanic conditions. This key result has been disputed. Nitrogen fixation has been demonstrated for the isotope 15N2 in conjunction with pyrite formation. Ammonia forms from nitrate with FeS/H2S as reductant. Methylmercaptan [CH3-SH] and carbon oxysulfide [COS] form from CO2 and FeS/H2S, or from CO and H2 in the presence of NiS. === Synthetic reactions === Reaction of carbon monoxide (CO), hydrogen sulfide (H2S) and methanethiol CH3SH in the presence of nickel sulfide and iron sulfide generates the methyl thioester of acetic acid [CH3-CO-SCH3] and presumably thioacetic acid (CH3-CO-SH) as the simplest activated acetic acid analogues of acetyl-CoA. These activated acetic acid derivatives serve as starting materials for subsequent exergonic synthetic steps. 
They also serve for energy coupling with endergonic reactions, notably the formation of (phospho)anhydride compounds. However, Huber and Wächtershäuser reported low 0.5% acetate yields based on the input of CH3SH (methanethiol) (8 mM) in the presence of 350 mM CO. This is about 500 times and 3700 times the highest CH3SH and CO concentrations respectively measured to date in a natural hydrothermal vent fluid. Reaction of nickel hydroxide with hydrogen cyanide (HCN) (in the presence or absence of ferrous hydroxide, hydrogen sulfide or methyl mercaptan) generates nickel cyanide, which reacts with carbon monoxide (CO) to generate pairs of α-hydroxy and α-amino acids: e.g. glycolate/glycine, lactate/alanine, glycerate/serine; as well as pyruvic acid in significant quantities. Pyruvic acid is also formed at high pressure and high temperature from CO, H2O, FeS in the presence of nonyl mercaptan. Reaction of pyruvic acid or other α-keto acids with ammonia in the presence of ferrous hydroxide or in the presence of ferrous sulfide and hydrogen sulfide generates alanine or other α-amino acids. Reaction of α-amino acids in aqueous solution with COS or with CO and H2S generates a peptide cycle wherein dipeptides, tripeptides etc. are formed and subsequently degraded via N-terminal hydantoin moieties and N-terminal urea moieties and subsequent cleavage of the N-terminal amino acid unit. Proposed reaction mechanism for reduction of CO2 on FeS: Ying et al. (2007) have shown that direct transformation of mackinawite (FeS) to pyrite (FeS2) on reaction with H2S till 300 °C is not possible without the presence of critical amount of oxidant. In the absence of any oxidant, FeS reacts with H2S up to 300 °C to give pyrrhotite. Farid et al. have experimentally shown that mackinawite (FeS) has ability to reduce CO2 to CO at temperature higher than 300 °C. They reported that the surface of FeS is oxidized, which on reaction with H2S gives pyrite (FeS2). It is expected that CO reacts with H2O in the Drobner experiment to give H2. == Early evolution == Early evolution is defined as beginning with the origin of life and ending with the last universal common ancestor (LUCA). According to the iron–sulfur world theory it covers a coevolution of cellular organization (cellularization), the genetic machinery and enzymatization of the metabolism. === Cellularization === Cellularization occurs in several stages. It may have begun with the formation of primitive lipids (e.g. fatty acids or isoprenoids) in the surface metabolism. These lipids accumulate on or in the mineral base. This lipophilizes the outer or inner surfaces of the mineral base, which promotes condensation reactions over hydrolytic reactions by lowering the activity of water and protons. In the next stage lipid membranes are formed. While still anchored to the mineral base they form a semi-cell bounded partly by the mineral base and partly by the membrane. Further lipid evolution leads to self-supporting lipid membranes and closed cells. The earliest closed cells are pre-cells (sensu Kandler) because they allow frequent exchange of genetic material (e.g. by fusions). According to Woese, this frequent exchange of genetic material is the cause for the existence of the common stem in the tree of life and for a very rapid early evolution. Nick Lane and coauthors state that "Non-enzymatic equivalents of glycolysis, the pentose phosphate pathway and gluconeogenesis have been identified as well. 
Multiple syntheses of amino acids from α-keto acids by direct reductive amination and by transamination reactions can also take place. Long-chain fatty acids can be formed by hydrothermal Fischer-Tropsch-type synthesis, which chemically resembles the process of fatty acid elongation. Recent work suggests that nucleobases might also be formed following the universally conserved biosynthetic pathways, using metal ions as catalysts". Metabolic intermediates in glycolysis and the pentose phosphate pathway such as glucose, pyruvate, ribose 5-phosphate, and erythrose-4-phosphate are spontaneously generated in the presence of Fe(II). Fructose 1,6-biphosphate, a metabolic intermediate in gluconeogenesis, was shown to accumulate continuously, but only in frozen solution. The formation of fructose 1,6-biphosphate was accelerated by lysine and glycine, which implies that the earliest anabolic catalysts may have been amino acids. It has been reported that 4Fe-4S, 2Fe-2S, and mononuclear iron clusters form spontaneously at low concentrations of cysteine and alkaline pH. Methyl thioacetate, a precursor to acetyl-CoA, can be synthesized in conditions relevant to hydrothermal vents. Phosphorylation of methyl thioacetate leads to the synthesis of thioacetate, a simpler precursor to acetyl-CoA. In cooler and more neutral conditions, thioacetate promotes the synthesis of acetyl phosphate, which is a precursor to adenosine triphosphate and is capable of phosphorylating ribose and nucleosides. This suggests that acetyl phosphate was likely synthesized during thermophoresis and the mixing of acidic seawater with alkaline hydrothermal fluid in interconnected micropores. It is possible that it could promote nucleotide polymerization at mineral surfaces or at low water activity. Thermophoresis at hydrothermal vent pores can concentrate polyribonucleotides, but it remains unknown how it could promote coding and metabolic reactions. In mathematical simulations, autocatalytic nucleotide synthesis is proposed to promote protocell growth because nucleotides also catalyze CO2 fixation. Strong nucleotide catalysis of fatty acids and amino acids slows down protocell growth, and if competition between catalytic functions were to occur, this would disrupt the protocell. Weak or moderate nucleotide catalysis of amino acids via CO2 fixation would favor protocell division and growth. In 2017, a computational simulation of a protocell in an alkaline hydrothermal vent environment showed that "Some hydrophobic amino acids chelate FeS nanocrystals, producing three positive feedbacks: (i) an increase in catalytic surface area; (ii) partitioning of FeS nanocrystals to the membrane; and (iii) a proton-motive active site for carbon fixing that mimics the enzyme Ech". Maximal ATP synthesis would have occurred at high water activity in freshwater, while high concentrations of Mg2+ and Ca2+ prevented synthesis of ATP; however, the concentrations of divalent cations in Hadean oceans were much lower than in modern oceans, and alkaline hydrothermal vent concentrations of Mg2+ and Ca2+ are typically lower than in the ocean. Such environments would have generated Fe3+, which would have promoted ADP phosphorylation. The mixture of seawater and alkaline hydrothermal vent fluid can promote cycling between Fe3+ and Fe2+. Experimental research on biomimetic prebiotic reactions, such as the reduction of NAD+ and phosphoryl transfer, also supports an origin of life at an alkaline hydrothermal vent. 
=== Proto-ecological systems === William Martin and Michael Russell suggest that the first cellular life forms may have evolved inside alkaline hydrothermal vents at seafloor spreading zones in the deep sea. These structures consist of microscale caverns that are coated by thin membraneous metal sulfide walls. Therefore, these structures would resolve several critical points germane to Wächtershäuser's suggestions at once: the micro-caverns provide a means of concentrating newly synthesised molecules, thereby increasing the chance of forming oligomers; the steep temperature gradients inside the hydrothermal vent allow for establishing "optimum zones" of partial reactions in different regions of the vent (e.g. monomer synthesis in the hotter, oligomerisation in the cooler parts); the flow of hydrothermal water through the structure provides a constant source of building blocks and energy (chemical disequilibrium between hydrothermal hydrogen and marine carbon dioxide); the model allows for a succession of different steps of cellular evolution (prebiotic chemistry, monomer and oligomer synthesis, peptide and protein synthesis, RNA world, ribonucleoprotein assembly and DNA world) in a single structure, facilitating exchange between all developmental stages; synthesis of lipids as a means of "closing" the cells against the environment is not necessary, until basically all cellular functions are developed. This model locates the "last universal common ancestor" (LUCA) within the inorganically formed physical confines of an alkaline hydrothermal vent, rather than assuming the existence of a free-living form of LUCA. The last evolutionary step en route to bona fide free-living cells would be the synthesis of a lipid membrane that finally allows the organisms to leave the microcavern system of the vent. This postulated late acquisition of the biosynthesis of lipids as directed by genetically encoded peptides is consistent with the presence of completely different types of membrane lipids in archaea and bacteria (plus eukaryotes). The kind of vent at the foreground of their suggestion is chemically more similar to the warm (ca. 100 °C) off ridge vents such as Lost City than to the more familiar black smoker type vents (ca. 350 °C). In an abiotic world, a thermocline of temperatures and a chemocline in concentration is associated with the pre-biotic synthesis of organic molecules, hotter in proximity to the chemically rich vent, cooler but also less chemically rich at greater distances. The migration of synthesized compounds from areas of high concentration to areas of low concentration gives a directionality that provides both source and sink in a self-organizing fashion, enabling a proto-metabolic process by which acetic acid production and its eventual oxidization can be spatially organized. In this way many of the individual reactions that are today found in central metabolism could initially have occurred independent of any developing cell membrane. Each vent microcompartment is functionally equivalent to a single cell. Chemical communities having greater structural integrity and resilience to wildly fluctuating conditions are then selected for; their success would lead to local zones of depletion for important precursor chemicals. Progressive incorporation of these precursor components within a cell membrane would gradually increase metabolic complexity within the cell membrane, whilst leading to greater environmental simplicity in the external environment. 
In principle, this could lead to the development of complex catalytic sets capable of self-maintenance. Russell adds a significant factor to these ideas by pointing out that semi-permeable mackinawite (an iron sulfide mineral) and silicate membranes could naturally develop under these conditions and electrochemically link reactions separated in space, if not in time. == Alternative environment == The promotion of 6 of the 11 metabolic intermediates of the reverse Krebs cycle by Fe, Zn2+, and Cr3+ in acidic conditions implies that protocells may have emerged in locally metal-rich and acidic terrestrial hydrothermal fields. The acidic conditions are seemingly consistent with the stabilization of RNA. These hydrothermal fields would have exhibited cycles of freezing and thawing and a variety of temperature gradients that would promote nonenzymatic reactions of gluconeogenesis, nucleobase synthesis, nonenzymatic polymerization, and RNA replication. ATP synthesis and oxidation of ferrous iron via photochemical reactions, or via oxidants such as nitric oxide derived from lightning strikes, meteorite impacts, or volcanic emissions, could also have occurred at hydrothermal fields. Wet-dry cycling of hydrothermal fields would polymerize RNA and peptides, and protocell aggregation in a moist gel phase during wet-dry cycling would allow diffusion of metabolic products across neighboring protocells. Protocell aggregation could be described as a primitive version of horizontal gene transfer. Fatty acid vesicles would be stabilized by polymers in the presence of the Mg2+ required for ribozyme activity. These prebiotic processes might have occurred in shaded areas that would protect emerging cellular life from ultraviolet irradiation. Long-chain alcohols and monocarboxylic acids would also have been synthesized via Fischer–Tropsch synthesis. Hydrothermal fields would also have had precipitates of transition metals and concentrated many elements, including CHNOPS. Geothermal convection could also be a source of energy for the emergence of the proton motive force, phosphoryl group transfer, coupling between oxidation and reduction, and active transport. David Deamer and Bruce Damer note that these environments seemingly resemble Charles Darwin's idea of a "warm little pond". One problem with the hypothesis of abiogenesis in a subaerial hydrothermal field is that the proposed chemistry does not resemble known biochemical reactions. Subaerial hydrothermal fields would also have been rare and would have offered no protection from either meteorites or ultraviolet irradiation. Clay minerals at subaerial hydrothermal fields would absorb organic reactants. Pyrophosphate has low solubility in water and cannot be phosphorylated without a phosphorylating agent. The hypothesis also does not explain the origin of chemiosmosis or the differences between Archaea and Bacteria. == See also == Abiogenesis Iron–sulfur protein RNA world RNP world Miller–Urey experiment == References ==
Wikipedia/Iron–sulfur_world_theory
Optical coherence tomography (OCT) is a high-resolution imaging technique with most of its applications in medicine and biology. OCT uses coherent near-infrared light to obtain micrometer-level depth resolved images of biological tissue or other scattering media. It uses interferometry techniques to detect the amplitude and time-of-flight of reflected light. OCT uses transverse sample scanning of the light beam to obtain two- and three-dimensional images. Short-coherence-length light can be obtained using a superluminescent diode (SLD) with a broad spectral bandwidth or a broadly tunable laser with narrow linewidth. The first demonstration of OCT imaging (in vitro) was published by a team from MIT and Harvard Medical School in a 1991 article in the journal Science. The article introduced the term "OCT" to credit its derivation from optical coherence-domain reflectometry, in which the axial resolution is based on temporal coherence. The first demonstrations of in vivo OCT imaging quickly followed. The first US patents on OCT by the MIT/Harvard group described a time-domain OCT (TD-OCT) system. These patents were licensed by Zeiss and formed the basis of the first generations of OCT products until 2006. In the decade preceding the invention of OCT, interferometry with short-coherence-length light had been investigated for a variety of applications. The potential to use interferometry for imaging was proposed, and measurement of retinal elevation profile and thickness had been demonstrated. The initial commercial clinical OCT systems were based on point-scanning TD-OCT technology, which primarily produced cross-sectional images due to the speed limitation (tens to thousands of axial scans per second). Fourier-domain OCT became available clinically 2006, enabling much greater image acquisition rate (tens of thousands to hundreds of thousands axial scans per second) without sacrificing signal strength. The higher speed allowed for three-dimensional imaging, which can be visualized in both en face and cross-sectional views. Novel contrasts such as angiography, elastography, and optoretinography also became possible by detecting signal change over time. Over the past three decades, the speed of commercial clinical OCT systems has increased more than 1000-fold, doubling every three years and rivaling Moore's law of computer chip performance. Development of parallel image acquisition approaches such as line-field and full-field technology may allow the performance improvement trend to continue. OCT is most widely used in ophthalmology, in which it has transformed the diagnosis and monitoring of retinal diseases, optic nerve diseases, and corneal diseases. It has greatly improved the management of the top three causes of blindness – macular degeneration, diabetic retinopathy, and glaucoma – thereby preventing vision loss in many patients. By 2016 OCT was estimated to be used in more than 30 million imaging procedures per year worldwide. Intravascular OCT imaging is used in the intravascular evaluation of coronary artery plaques and to guide stent placement. Beyond ophthalmology and cardiology, applications are also developing in other medical specialties such as dermatology, gastroenterology, neurology and neurovascular imaging, oncology, and dentistry. 
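As a quick back-of-the-envelope check of the quoted trend (a sketch of the arithmetic only, not additional data): doubling roughly every three years over three decades corresponds to about ten doublings, i.e. a factor of 2^10 = 1024, consistent with the "more than 1000-fold" figure cited above.

# Sanity check of the quoted speed trend: doubling every ~3 years over ~30 years
years, doubling_time = 30, 3
speedup = 2 ** (years / doubling_time)
print(f"{speedup:.0f}x over {years} years")   # -> 1024x over 30 years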
== Introduction == Interferometric reflectometry of biological tissue, especially of the human eye using short-coherence-length light (also referred to as partially-coherent, low-coherence, or broadband, broad-spectrum, or white light) was investigated in parallel by multiple groups worldwide since 1980s. Lending ideas from ultrasound imaging and merging the time-of-flight detection with optical interferometry to detect optical delays in the pico- and femtosecond range as known from the autocorrelator in the 1960's, the technique's development was and is tightly associated with the availability of novel electronic, mechanical and photonic abilities. Stemming from single lateral point low-coherence interferometry the addition of a wide range of technologies enabled key milestones in this computational imaging technique. High-speed axial and lateral scanners, ultra-broad spectrum or ultra-fast spectrally tunable lasers or other high brightness radiation sources, increasingly sensitive detectors, like high resolution and high speed cameras or fast A/D-converters that picked up from and drove ideas in the rapidly developing photonics field, together with the increasing availability of computing power were essential for its birth and success. In 1991, David Huang, then a student in James Fujimoto laboratory at Massachusetts Institute of Technology, working with Eric Swanson at the MIT Lincoln Laboratory and colleagues at the Harvard Medical School, successfully demonstrated imaging and called the new imaging modality "optical coherence tomography". Since then, OCT with micrometer axial resolution and below and cross-sectional imaging capabilities has become a prominent biomedical imaging technique that has continually improved in technical performance and range of applications. The improvement in image acquisition rate is particularly spectacular, starting with the original 0.8 Hz axial scan repetition rate to the current commercial clinical OCT systems operating at several hundred kHz and laboratory prototypes at multiple MHz. The range of applications has expanded from ophthalmology to cardiology and other medical specialties. For their roles in the invention of OCT, Fujimoto, Huang, and Swanson received the 2023 Lasker-DeBakey Clinical Medical Research Award and the National Medal of Technology and Innovation. These developments have been reviewed in articles written for the general scientific and medical readership. It is particularly suited to ophthalmic applications and other tissue imaging requiring micrometer resolution and millimeter penetration depth. OCT has also been used for various art conservation projects, where it is used to analyze different layers in a painting. OCT has interesting advantages over other medical imaging systems. Medical ultrasonography, magnetic resonance imaging (MRI), confocal microscopy, and OCT are differently suited to morphological tissue imaging: while the first two have whole body but low resolution imaging capability (typically a fraction of a millimeter), the third one can provide images with resolutions well below 1 micrometer (i.e. sub-cellular), between 0 and 100 micrometers in depth, and the fourth can probe as deep as 500 micrometers, but with a lower (i.e. architectural) resolution (around 10 micrometers in lateral and a few micrometers in depth in ophthalmology, for instance, and 20 micrometers in lateral in endoscopy). OCT is based on low-coherence interferometry. 
In conventional interferometry with long coherence length (i.e., laser interferometry), interference of light occurs over a distance of meters. In OCT, this interference is shortened to a distance of micrometers, owing to the use of broad-bandwidth light sources (i.e., sources that emit light over a broad range of frequencies). Light with broad bandwidths can be generated by using superluminescent diodes or lasers with extremely short pulses (femtosecond lasers). White light is an example of a broadband source with lower power. Light in an OCT system is broken into two arms – a sample arm (containing the item of interest) and a reference arm (usually a mirror). The combination of reflected light from the sample arm and reference light from the reference arm gives rise to an interference pattern, but only if light from both arms have traveled the "same" optical distance ("same" meaning a difference of less than a coherence length). By scanning the mirror in the reference arm, a reflectivity profile of the sample can be obtained (this is time domain OCT). Areas of the sample that reflect back a lot of light will create greater interference than areas that don't. Any light that is outside the short coherence length will not interfere. This reflectivity profile, called an A-scan, contains information about the spatial dimensions and location of structures within the item of interest. A cross-sectional tomogram (B-scan) may be achieved by laterally combining a series of these axial depth scans (A-scan). En face imaging at an acquired depth is possible depending on the imaging engine used. == Layperson's explanation == Optical coherence tomography (OCT) is a technique for obtaining sub-surface images of translucent or opaque materials at a resolution equivalent to a low-power microscope. It is effectively "optical ultrasound", imaging reflections from within tissue to provide cross-sectional images. OCT has attracted interest among the medical community because it provides tissue morphology imagery at much higher resolution (less than 10 μm axially and less than 20 μm laterally ) than other imaging modalities such as MRI or ultrasound. The key benefits of OCT are: Live sub-surface images at near-microscopic resolution Instant, direct imaging of tissue morphology No preparation of the sample or subject, no contact No ionizing radiation OCT delivers high resolution because it is based on light, rather than sound or radio frequency. An optical beam is directed at the tissue, and the small portion of this light that reflects directly back from sub-surface features is collected. Note that most light scatters off at large angles. In conventional imaging, this diffusely scattered light contributes background that obscures an image. However, in OCT, a technique called interferometry is used to record the optical path length of received photons, allowing rejection of most photons that scatter multiple times before detection. Thus OCT can build up clear 3D images of thick samples by rejecting background signal while collecting light directly reflected from surfaces of interest. Within the range of noninvasive three-dimensional imaging techniques that have been introduced to the medical research community, OCT as an echo technique is similar to ultrasound imaging. Other medical imaging techniques such as computerized axial tomography, magnetic resonance imaging, or positron emission tomography do not use the echo-location principle. 
The technique is limited to imaging 1 to 2 mm below the surface in biological tissue, because at greater depths the proportion of light that escapes without scattering is too small to be detected. No special preparation of a biological specimen is required, and images can be obtained "non-contact" or through a transparent window or membrane. The laser output from the instruments used is low – eye-safe near-infrared or visible-light – and no damage to the sample is therefore likely. == Theory == The principle of OCT is white light, or low coherence, interferometry. The optical setup typically consists of an interferometer (Fig. 1, typically Michelson type) with a low coherence, broad bandwidth light source. Light is split into and recombined from reference and sample arms, respectively. === Time domain === In time domain OCT the path length of the reference arm is varied in time (the reference mirror is translated longitudinally). A property of low coherence interferometry is that interference, i.e. the series of dark and bright fringes, is only achieved when the path difference lies within the coherence length of the light source. This interference is called autocorrelation in a symmetric interferometer (both arms have the same reflectivity), or cross-correlation in the common case. The envelope of this modulation changes as path length difference is varied, where the peak of the envelope corresponds to path length matching. The interference of two partially coherent light beams can be expressed in terms of the source intensity, I S {\displaystyle I_{S}} , as I = k 1 I S + k 2 I S + 2 ( k 1 I S ) ⋅ ( k 2 I S ) ⋅ R e [ γ ( τ ) ] ( 1 ) {\displaystyle I=k_{1}I_{S}+k_{2}I_{S}+2{\sqrt {\left(k_{1}I_{S}\right)\cdot \left(k_{2}I_{S}\right)}}\cdot Re\left[\gamma \left(\tau \right)\right]\qquad (1)} where k 1 + k 2 < 1 {\displaystyle k_{1}+k_{2}<1} represents the interferometer beam splitting ratio, and γ ( τ ) {\displaystyle \gamma (\tau )} is called the complex degree of coherence, i.e. the interference envelope and carrier dependent on reference arm scan or time delay τ {\displaystyle \tau } , and whose recovery is of interest in OCT. Due to the coherence gating effect of OCT the complex degree of coherence is represented as a Gaussian function expressed as γ ( τ ) = exp ⁡ [ − ( π Δ ν τ 2 ln ⁡ 2 ) 2 ] ⋅ exp ⁡ ( − j 2 π ν 0 τ ) ( 2 ) {\displaystyle \gamma \left(\tau \right)=\exp \left[-\left({\frac {\pi \Delta \nu \tau }{2{\sqrt {\ln 2}}}}\right)^{2}\right]\cdot \exp \left(-j2\pi \nu _{0}\tau \right)\qquad \quad (2)} where Δ ν {\displaystyle \Delta \nu } represents the spectral width of the source in the optical frequency domain, and ν 0 {\displaystyle \nu _{0}} is the centre optical frequency of the source. In equation (2), the Gaussian envelope is amplitude modulated by an optical carrier. The peak of this envelope represents the location of the microstructure of the sample under test, with an amplitude dependent on the reflectivity of the surface. The optical carrier is due to the Doppler effect resulting from scanning one arm of the interferometer, and the frequency of this modulation is controlled by the speed of scanning. Therefore, translating one arm of the interferometer has two functions: depth scanning and generation of a Doppler-shifted optical carrier, both accomplished by path-length variation. 
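As a concrete numerical illustration of equations (1) and (2), the following Python sketch computes the detector intensity for a single sample reflector as the reference mirror is scanned through the path-matched position, showing the Gaussian coherence envelope modulated by the fringe carrier described above. The source parameters (840 nm centre wavelength, 50 nm bandwidth) and the 50/50 split are assumed purely for illustration.

import numpy as np

c = 2.99792458e8            # speed of light, m/s
lam0 = 840e-9               # assumed centre wavelength, m
dlam = 50e-9                # assumed source bandwidth (FWHM), m
nu0 = c / lam0              # centre optical frequency, Hz
dnu = c * dlam / lam0**2    # spectral width in the optical frequency domain, Hz

k1 = k2 = 0.25              # 50/50 splitter with a weak reflector (k1 + k2 < 1)
I_s = 1.0                   # source intensity, arbitrary units

dz = np.linspace(-20e-6, 20e-6, 4001)   # mirror displacement from path match, m
tau = 2 * dz / c                        # round-trip delay mismatch, s

envelope = np.exp(-((np.pi * dnu * tau) / (2 * np.sqrt(np.log(2)))) ** 2)
carrier = np.cos(2 * np.pi * nu0 * tau)
intensity = k1 * I_s + k2 * I_s + 2 * np.sqrt(k1 * k2) * I_s * envelope * carrier

# the fringe packet is centred on the path-matched position; its width
# (roughly the source coherence length) sets the axial resolution
packet_width = 2 * np.log(2) / np.pi * lam0**2 / dlam
print(f"fringe packet (coherence) width ~ {packet_width * 1e6:.1f} um")
print(f"peak detector intensity        ~ {intensity.max():.2f} (a.u.)")

For these assumed parameters the fringe packet is only a few micrometres wide, which is why the depth of a reflector can be localized with micrometre precision as the reference mirror is scanned.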
In OCT, the Doppler-shifted optical carrier has a frequency expressed as f D o p p = 2 ⋅ ν 0 ⋅ v s c ( 3 ) {\displaystyle f_{Dopp}={\frac {2\cdot \nu _{0}\cdot v_{s}}{c}}\qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad (3)} where ν 0 {\displaystyle \nu _{0}} is the central optical frequency of the source, v s {\displaystyle v_{s}} is the scanning velocity of the pathlength variation, and c {\displaystyle c} is the speed of light. The axial and lateral resolutions of OCT are decoupled from one another; the former being equivalent to the coherence length of the light source and the latter being a function of the optics. The axial resolution of OCT is defined as Δ z = 2 ln ⁡ 2 π ⋅ λ 0 2 Δ λ {\displaystyle \Delta z={\frac {2\ln 2}{\pi }}\cdot {\frac {\lambda _{0}^{2}}{\Delta \lambda }}} where λ 0 {\displaystyle \lambda _{0}} and Δ λ {\displaystyle \Delta \lambda } are respectively the central wavelength and the spectral width of the light source. === Fourier domain === Fourier-domain (or Frequency-domain) OCT (FD-OCT) has speed and signal-to-noise ratio (SNR) advantages over time-domain OCT (TD-OCT) and has become the standard in the industry since 2006. The idea of using frequency modulation and coherent detection to obtain ranging information was already demonstrated in optical frequency domain reflectometry and laser radar in the 1980s, though the distance resolution was much coarser and the range much longer than in OCT. There are two types of FD-OCT – swept-source OCT (SS-OCT) and spectral-domain OCT (SD-OCT) – both of which acquire spectral interferograms which are then Fourier transformed to obtain an axial scan of reflectance amplitude versus depth. In SS-OCT, the spectral interferogram is acquired sequentially by tuning the wavelength of a laser light source. SD-OCT acquires the spectral interferogram simultaneously with a spectrometer. An implementation of SS-OCT was described by the MIT group as early as 1994. A group based at the University of Vienna described measurement of intraocular distance using both tunable laser and spectrometer-based interferometry as early as 1995. SD-OCT imaging was first demonstrated both in vitro and in vivo by a collaboration between the Vienna group and a group based at Nicolaus Copernicus University in a series of articles between 2000 and 2002. The SNR advantage of FD-OCT over TD-OCT was first demonstrated in eye imaging and further analyzed by multiple groups of researchers in 2003. ==== Spectral-domain OCT ==== Spectral-domain OCT (spatially encoded frequency domain OCT) extracts spectral information by distributing different optical frequencies onto a detector stripe (line-array CCD or CMOS) via a dispersive element (see Fig. 4). Thereby the information of the full depth scan can be acquired within a single exposure. However, the large signal-to-noise advantage of FD-OCT is reduced due to the lower dynamic range of stripe detectors with respect to single photosensitive diodes, resulting in an SNR advantage of ~10 dB at much higher speeds. This is less of an issue when working at 1300 nm, however, since dynamic range is not a serious limitation at this wavelength range. The drawbacks of this technology are a strong fall-off of the SNR with distance from the zero-delay position, and a sinc-type reduction of the depth-dependent sensitivity because of the limited detection linewidth. (One pixel detects a quasi-rectangular portion of the optical frequency range rather than a single frequency; its Fourier transform leads to the sinc(z) behavior.) 
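The Fourier-transform step at the heart of FD-OCT can be illustrated with a minimal Python sketch. Two reflectors at assumed depths produce a spectral interferogram, and an FFT recovers their positions; the spectrum is sampled uniformly in wavenumber here for simplicity, whereas data from a real spectrometer is sampled roughly uniformly in wavelength and must first be resampled to linear wavenumber, as discussed in the following paragraph. All parameters are illustrative assumptions, not values from any particular instrument.

import numpy as np

n_pix = 2048
lam0, dlam = 840e-9, 50e-9                      # assumed source parameters
k0 = 2 * np.pi / lam0                           # centre wavenumber
dk = 2 * np.pi * dlam / lam0**2                 # wavenumber span of the source
k = np.linspace(k0 - dk / 2, k0 + dk / 2, n_pix)

source = np.exp(-((k - k0) / (dk / 4)) ** 2)    # Gaussian source spectrum
depths = np.array([150e-6, 400e-6])             # assumed reflector depths, m
refl = np.array([1.0, 0.4])                     # assumed reflectivities

# interference of the reference field with each reflector (DC terms omitted);
# the factor 2 in the cosine accounts for the round trip to depth z
interferogram = sum(r * source * np.cos(2 * k * z) for r, z in zip(refl, depths))

a_scan = np.abs(np.fft.rfft(interferogram * np.hanning(n_pix)))
z_axis = np.pi * np.arange(a_scan.size) / (k[-1] - k[0])   # depth per FFT bin

threshold = 0.2 * a_scan.max()
for m in range(1, a_scan.size - 1):
    if a_scan[m] > threshold and a_scan[m] >= a_scan[m - 1] and a_scan[m] >= a_scan[m + 1]:
        print(f"reflector recovered at z ~ {z_axis[m] * 1e6:.0f} um")

A single exposure (or a single wavelength sweep in SS-OCT) therefore yields the entire depth profile at once, which is the origin of the speed and SNR advantages described above.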
Additionally, the dispersive elements in the spectroscopic detector usually do not distribute the light equally spaced in frequency on the detector, but mostly have an inverse dependence. Therefore, the signal has to be resampled before processing, which cannot take care of the difference in local (pixelwise) bandwidth, which results in further reduction of the signal quality. However, the fall-off is not a serious problem with the development of new generation CCD or photodiode array with a larger number of pixels. Synthetic array heterodyne detection offers another approach to this problem without the need for high dispersion. ==== Swept-source OCT ==== Swept-source OCT (Time-encoded frequency domain OCT) tries to combine some of the advantages of standard TD and spectral domain OCT. Here the spectral components are not encoded by spatial separation, but they are encoded in time. The spectrum is either filtered or generated in single successive frequency steps and reconstructed before Fourier transformation. By accommodation of a frequency scanning light source (i.e. frequency scanning laser) the optical setup (see Fig. 3) becomes simpler than spectral domain OCT, but the problem of scanning is essentially translated from the TD-OCT reference arm into the swept source OCT light source. Here the advantage lies in the proven high SNR detection technology, while swept laser sources achieve very small instantaneous bandwidths (linewidths) at very high frequencies (20–200 kHz). Drawbacks are the nonlinearities in the wavelength (especially at high scanning frequencies), the broadening of the linewidth at high frequencies and a high sensitivity to movements of the scanning geometry or the sample (below the range of nanometers within successive frequency steps). == Scanning schemes == Focusing the light beam to a point on the surface of the sample under test, and recombining the reflected light with the reference will yield an interferogram with sample information corresponding to a single A-scan (Z axis only). Scanning of the sample can be accomplished by either scanning the light on the sample, or by moving the sample under test. A linear scan will yield a two-dimensional data set corresponding to a cross-sectional image (X-Z axes scan), whereas an area scan achieves a three-dimensional data set corresponding to a volumetric image (X-Y-Z axes scan). === Single point === Systems based on single point, confocal, or flying-spot time domain OCT, must scan the sample in two lateral dimensions and reconstruct a three-dimensional image using depth information obtained by coherence-gating through an axially scanning reference arm (Fig. 2). Two-dimensional lateral scanning has been electromechanically implemented by moving the sample using a translation stage, and using a novel micro-electro-mechanical system scanner. === Line-field OCT === Line-field confocal optical coherence tomography (LC-OCT) is an imaging technique based on the principle of time-domain OCT with line illumination using a broadband laser and line detection using a line-scan camera. LC-OCT produces B-scans in real-time from multiple A-scans acquired in parallel. En face as well as three-dimensional images can also be obtained by scanning the illumination line laterally. The focus is continuously adjusted during the scan of the sample depth, using a high numerical aperture (NA) microscope objective to image with high lateral resolution. 
By using a supercontinuum laser as a light source, a quasi-isotropic spatial resolution of ~ 1 μm is achieved at a central wavelength of ~ 800 nm. On the other hand, line illumination and detection, combined with the use of a high NA microscope objective, produce a confocal gate that prevents most scattered light that does not contribute to the signal from being detected by the camera. This confocal gate, which is absent in the full-field OCT technique, gives LC-OCT an advantage in terms of detection sensitivity and penetration in highly scattering media such as skin tissues. So far this technique has been used mainly for skin imaging in the fields of dermatology and cosmetology. === Full-field OCT === An imaging approach to temporal OCT was developed by Claude Boccara's team in 1998, with an acquisition of the images without beam scanning. In this technique called full-field OCT (FF-OCT), unlike other OCT techniques that acquire cross-sections of the sample, the images are here "en-face" i.e. like images of classical microscopy: orthogonal to the light beam of illumination. More precisely, interferometric images are created by a Michelson interferometer where the path length difference is varied by a fast electric component (usually a piezo mirror in the reference arm). These images acquired by a CCD camera are combined in post-treatment (or online) by the phase shift interferometry method, where usually 2 or 4 images per modulation period are acquired, depending on the algorithm used. More recently, approaches that allow rapid single-shot imaging were developed to simultaneously capture multiple phase-shifted images required for reconstruction, using single camera. Single-shot time-domain OCM is limited only by the camera frame rate and available illumination. The "en-face" tomographic images are thus produced by a wide-field illumination, ensured by the Linnik configuration of the Michelson interferometer where a microscope objective is used in both arms. Furthermore, while the temporal coherence of the source must remain low as in classical OCT (i.e. a broad spectrum), the spatial coherence must also be low to avoid parasitical interferences (i.e. a source with a large size). == Selected applications == Optical coherence tomography is an established medical imaging technique and is used across several medical specialties including ophthalmology and cardiology and is widely used in basic science research applications. === Ophthalmology === Ocular (or ophthalmic) OCT is used heavily by ophthalmologists and optometrists to obtain high-resolution images of the retina and anterior segment. Owing to OCT's capability to show cross-sections of tissue layers with micrometer resolution, OCT provides a straightforward method of assessing cellular organization, photoreceptor integrity, and axonal thickness in glaucoma, macular degeneration, diabetic macular edema, multiple sclerosis, optic neuritis, and other eye diseases or systemic pathologies which have ocular signs. Additionally, ophthalmologists leverage OCT to assess the vascular health of the retina via a technique called OCT angiography (OCTA). In ophthalmological surgery, especially retinal surgery, an OCT can be mounted on the microscope. Such a system is called an intraoperative OCT (iOCT) and provides support during the surgery with clinical benefits. Polarization-sensitive OCT was recently applied in the human retina to determine optical polarization properties of vessel walls near the optic nerve. 
Retinal imaging with PS-OCT demonstrated how the thickness and birefringence of blood vessel wall tissue of healthy subjects could be quantified, in vivo. PS-OCT was subsequently applied to patients with diabetes and age-matched healthy subjects, and showed an almost 100% increase in vessel wall birefringence due to diabetes, without a significant change in vessel wall thickness. In patients with hypertension, however, the retinal vessel wall thickness increased by 60% while the vessel wall birefringence dropped by 20%, on average. The large differences measured in healthy subjects and patients suggest that retinal measurements with PS-OCT could be used as a screening tool for hypertension and diabetes. OCT can be used to measure the thickness of the retinal nerve fiber layer (RNFL). === Cardiology === In the setting of cardiology, OCT is used to image coronary arteries to visualize vessel wall lumen morphology and microstructure at a resolution ~10 times higher than other existing modalities such as intravascular ultrasound and x-ray angiography (intracoronary optical coherence tomography). For this type of application, fiber-optic catheters 1 mm in diameter or smaller are used to access the artery lumen through semi-invasive interventions such as percutaneous coronary interventions. The first demonstration of endoscopic OCT was reported in 1997, by researchers in Fujimoto's laboratory at Massachusetts Institute of Technology. The first TD-OCT imaging catheter and system was commercialized by LightLab Imaging, Inc., a company based in Massachusetts, in 2006. The first FD-OCT imaging study was reported by Massachusetts General Hospital in 2008. Intracoronary FD-OCT was first introduced in the market in 2009 by LightLab Imaging, Inc., followed by Terumo Corporation in 2012 and by Gentuity LLC in 2020. The higher acquisition speed of FD-OCT enabled the widespread adoption of this imaging technology for coronary artery imaging. It is estimated that over 100,000 FD-OCT coronary imaging cases are performed yearly, and that the market is increasing by approximately 20% every year. Other developments of intracoronary OCT included the combination with other optical imaging modalities for multi-modality imaging. Intravascular OCT has been combined with near-infrared fluorescence molecular imaging (NIRF) to enhance its capability to detect molecular/functional and tissue morphological information simultaneously. In a similar way, combination with near-infrared spectroscopy (NIRS) has been implemented. === Neurovascular === Endoscopic/intravascular OCT has been further developed for use in neurovascular applications, including imaging for guiding endovascular treatment of ischemic stroke and brain aneurysms. Initial clinical investigations with existing coronary OCT catheters have been limited to the proximal intracranial anatomy of patients with limited tortuosity, as coronary OCT technology was not designed for the tortuous cerebrovasculature encountered in the brain. However, despite these limitations, these investigations showed the potential of OCT for the imaging of neurovascular disease. An intravascular OCT imaging catheter design tailored for use in tortuous neurovascular anatomy was proposed in 2020. A first-in-human study using endovascular neuro OCT (nOCT) was reported in 2024. === Oncology === Endoscopic OCT has been applied to the detection and diagnosis of cancer and precancerous lesions, such as Barrett's esophagus and esophageal dysplasia.
=== Dermatology === The first use of OCT in dermatology dates back to 1997. Since then, OCT has been applied to the diagnosis of various skin lesions including carcinomas. However, the diagnosis of melanoma using conventional OCT is difficult, especially due to insufficient imaging resolution. Emerging high-resolution OCT techniques such as LC-OCT have the potential to improve the clinical diagnostic process, allowing for the early detection of malignant skin tumors – including melanoma – and a reduction in the number of surgical excisions of benign lesions. Other promising areas of application include the imaging of lesions where excisions are hazardous or impossible and the guidance of surgical interventions through identification of tumor margins. === Dentistry === Researchers at Tokyo Medical and Dental University were able to detect enamel white spot lesions around and beneath orthodontic brackets using swept-source OCT. === Research applications === Researchers have used OCT to produce detailed images of mouse brains, through a "window" made of zirconia that has been modified to be transparent and implanted in the skull. Optical coherence tomography is also applicable and increasingly used in industrial applications, such as nondestructive testing (NDT), material thickness measurements, in particular thickness measurements of thin silicon wafers and compound semiconductor wafers, surface roughness characterization, surface and cross-section imaging, and volume loss measurements. OCT systems with feedback can be used to control manufacturing processes. With high-speed data acquisition and sub-micron resolution, OCT is adaptable to both inline and off-line operation. Due to the high volume of pills produced, an interesting field of application is the pharmaceutical industry, where OCT is used to control the coating of tablets. Fiber-based OCT systems are particularly adaptable to industrial environments. These can access and scan interiors of hard-to-reach spaces, and are able to operate in hostile environments—whether radioactive, cryogenic, or very hot. Novel optical biomedical diagnostic and imaging technologies are currently being developed to solve problems in biology and medicine. As of 2014, attempts have been made to use optical coherence tomography to identify root canals in teeth, specifically canals in the maxillary molar; however, it has shown no advantage over the current method of using a dental operating microscope. Research conducted in 2015 was successful in utilizing a smartphone as an OCT platform, although much work remains to be done before such a platform would be commercially viable. Photonic integrated circuits may be a promising option for miniaturizing OCT. As with electronic integrated circuits, silicon-based fabrication techniques can be used to produce miniaturized photonic systems. First in vivo human retinal imaging with such systems has been reported recently. In 3D microfabrication, OCT enables non-destructive testing and real-time inspection during additive manufacturing. Its high-resolution imaging detects defects, characterizes material properties, and ensures the integrity of internal geometries without damaging the part. == See also == == References ==
Wikipedia/Optical_coherence_tomography
In quantum physics, a virtual state is a very short-lived, unobservable quantum state. In many quantum processes a virtual state is an intermediate state, sometimes described as "imaginary", in a multi-step process that mediates otherwise forbidden transitions. Since virtual states are not eigenfunctions of any operator, normal parameters such as occupation, energy and lifetime need to be qualified. No measurement of a system will show one to be occupied, but they still have lifetimes derived from uncertainty relations. While each virtual state has an associated energy, no direct measurement of its energy is possible. Various approaches have nevertheless been used to make some measurements (see, for example, work on virtual state spectroscopy) or to extract other parameters using measurement techniques that depend upon the virtual state's lifetime. The concept is quite general and can be used to predict and describe experimental results in many areas including Raman spectroscopy, non-linear optics generally, various types of photochemistry, and nuclear processes. == See also == Two-photon absorption Virtual particle Feshbach resonance Shape resonance == References ==
Wikipedia/Virtual_state_(physics)
In physics, semiclassical refers to a theory in which one part of a system is described quantum mechanically, whereas the other is treated classically. For example, external fields may be treated as constant, or, when they change, described classically. In general, it incorporates an expansion in powers of the Planck constant, with classical physics appearing at order 0 and the first nontrivial approximation at order (−1). In this case, there is a clear link between the quantum-mechanical system and the associated semi-classical and classical approximations, as it is similar in appearance to the transition from physical optics to geometric optics. == History == Max Planck was the first to introduce the idea of quanta of energy in 1900 while studying black-body radiation. In 1906, he was also the first to write that quantum theory should replicate classical mechanics in some limit, particularly if the Planck constant h were infinitesimal. With this idea he showed that Planck's law for thermal radiation leads to the Rayleigh–Jeans law, the classical prediction (valid for large wavelengths). == Instances == Some examples of a semiclassical approximation include: WKB approximation: electrons in classical external electromagnetic fields. semiclassical gravity: quantum field theory within a classical curved gravitational background (see general relativity). quantum chaos: quantization of classical chaotic systems. magnetic properties of materials and astrophysical bodies under the effect of large magnetic fields (see for example De Haas–Van Alphen effect) quantum field theory: only Feynman diagrams with at most a single closed loop (see for example one-loop Feynman diagram) are considered, which corresponds to keeping terms up to first order in the Planck constant. == See also == Bohr model Correspondence principle Classical limit Eikonal approximation Einstein–Brillouin–Keller method Old quantum theory == References == R. Resnick; R. Eisberg (1985). Quantum Physics of Atoms, Molecules, Solids, Nuclei and Particles (2nd ed.). John Wiley & Sons. ISBN 978-0-471-87373-0. P.A.M. Dirac (1981). Principles of Quantum Mechanics (4th ed.). Clarendon Press. ISBN 978-0-19-852011-5. W. Pauli (1980). General Principles of Quantum Mechanics. Springer. ISBN 3-540-09842-9. R.P. Feynman; R.B. Leighton; M. Sands (1965). Feynman Lectures on Physics. Vol. 3. Addison-Wesley. ISBN 0-201-02118-8. C.B. Parker (1994). McGraw-Hill Encyclopaedia of Physics (2nd ed.). McGraw-Hill. ISBN 0-07-051400-3.
Wikipedia/Semiclassical_physics
Quantum mechanics is the study of matter and its interactions with energy on the scale of atomic and subatomic particles. By contrast, classical physics explains matter and energy only on a scale familiar to human experience, including the behavior of astronomical bodies such as the Moon. Classical physics is still used in much of modern science and technology. However, towards the end of the 19th century, scientists discovered phenomena in both the large (macro) and the small (micro) worlds that classical physics could not explain. The desire to resolve inconsistencies between observed phenomena and classical theory led to a revolution in physics, a shift in the original scientific paradigm: the development of quantum mechanics. Many aspects of quantum mechanics are counterintuitive and can seem paradoxical because they describe behavior quite different from that seen at larger scales. In the words of quantum physicist Richard Feynman, quantum mechanics deals with "nature as She is—absurd". Features of quantum mechanics often defy simple explanations in everyday language. One example of this is the uncertainty principle: precise measurements of position cannot be combined with precise measurements of velocity. Another example is entanglement: a measurement made on one particle (such as an electron that is measured to have spin 'up') will correlate with a measurement on a second particle (an electron will be found to have spin 'down') if the two particles have a shared history. This will apply even if it is impossible for the result of the first measurement to have been transmitted to the second particle before the second measurement takes place. Quantum mechanics helps people understand chemistry, because it explains how atoms interact with each other and form molecules. Many remarkable phenomena can be explained using quantum mechanics, like superfluidity. For example, if liquid helium cooled to a temperature near absolute zero is placed in a container, it spontaneously flows up and over the rim of its container; this is an effect which cannot be explained by classical physics. == History == James C. Maxwell's unification of the equations governing electricity, magnetism, and light in the late 19th century led to experiments on the interaction of light and matter. Some of these experiments had aspects which could not be explained until quantum mechanics emerged in the early part of the 20th century. === Evidence of quanta from the photoelectric effect === The seeds of the quantum revolution appear in the discovery by J.J. Thomson in 1897 that cathode rays were not continuous but "corpuscles" (electrons). Electrons had been named just six years earlier as part of the emerging theory of atoms. In 1900, Max Planck, unconvinced by the atomic theory, discovered that he needed discrete entities like atoms or electrons to explain black-body radiation. Very hot – red hot or white hot – objects look similar when heated to the same temperature. This look results from a common curve of light intensity at different frequencies (colors), which is called black-body radiation. White hot objects have intensity across many colors in the visible range. The frequencies just below the visible colors are infrared light, which also gives off heat. Continuous wave theories of light and matter cannot explain the black-body radiation curve. Planck spread the heat energy among individual "oscillators" of an undefined character but with discrete energy capacity; this model explained black-body radiation.
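In modern notation (added here for reference rather than as part of Planck's original presentation), the key assumption is that an oscillator of frequency f can hold energy only in integer multiples of a quantum hf, {\displaystyle E_{n}=nhf,\quad n=0,1,2,\ldots } which leads to the Planck law for the spectral radiance of a black body, {\displaystyle B_{f}(T)={\frac {2hf^{3}}{c^{2}}}\,{\frac {1}{e^{hf/k_{\text{B}}T}-1}},} where c is the speed of light, kB the Boltzmann constant and T the temperature. A theory with continuous oscillator energies instead predicts ever-growing intensity at high frequencies, in conflict with the measured curve.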
At the time, electrons, atoms, and discrete oscillators were all exotic ideas to explain exotic phenomena. But in 1905 Albert Einstein proposed that light was also corpuscular, consisting of "energy quanta", in contradiction to the established science of light as a continuous wave, stretching back a hundred years to Thomas Young's work on diffraction. Einstein's revolutionary proposal started by reanalyzing Planck's black-body theory, arriving at the same conclusions by using the new "energy quanta". Einstein then showed how energy quanta connected to Thomson's electron. In 1902, Philipp Lenard directed light from an arc lamp onto freshly cleaned metal plates housed in an evacuated glass tube. He measured the electric current coming off the metal plate, at higher and lower intensities of light and for different metals. Lenard showed that the amount of current – the number of electrons – depended on the intensity of the light, but that the velocity of these electrons did not depend on intensity. This is the photoelectric effect. The continuous wave theories of the time predicted that more light intensity would accelerate the same amount of current to higher velocity, contrary to this experiment. Einstein's energy quanta explained the increase in current: one electron is ejected for each quantum; more quanta mean more electrons.: 23  Einstein then predicted that the energy of the ejected electrons would increase in direct proportion to the light frequency above a fixed value that depended upon the metal. Here the idea is that energy in energy-quanta depends upon the light frequency; the energy transferred to the electron comes in proportion to the light frequency. The type of metal gives a barrier, the fixed value, that the electrons must climb over to exit their atoms, to be emitted from the metal surface and be measured. Ten years elapsed before Millikan's definitive experiment verified Einstein's prediction. During that time many scientists rejected the revolutionary idea of quanta. But Planck's and Einstein's concept was in the air and soon began to affect other physics and quantum theories. === Quantization of bound electrons in atoms === Experiments with light and matter in the late 1800s uncovered a reproducible but puzzling regularity. When light was shone through purified gases, certain frequencies (colors) did not pass. These dark absorption 'lines' followed a distinctive pattern: the gaps between the lines decreased steadily. By 1889, the Rydberg formula predicted the lines for hydrogen gas using only a constant number and the integers to index the lines.: v1:376  The origin of this regularity was unknown. Solving this mystery would eventually become the first major step toward quantum mechanics. Throughout the 19th century evidence grew for the atomic nature of matter. With Thomson's discovery of the electron in 1897, scientists began the search for a model of the interior of the atom. Thomson proposed negative electrons swimming in a pool of positive charge. Between 1908 and 1911, Rutherford showed that the positive part was only 1/3000th of the diameter of the atom.: 26  Models of "planetary" electrons orbiting a nuclear "Sun" were proposed, but could not explain why the electron does not simply fall into the positive charge. In 1913 Niels Bohr and Ernest Rutherford connected the new atom models to the mystery of the Rydberg formula: the orbital radii of the electrons were constrained and the resulting energy differences matched the energy differences in the absorption lines.
This meant that absorption and emission of light from atoms was energy quantized: only specific energies that matched the difference in orbital energy would be emitted or absorbed.: 31  Trading one mystery – the regular pattern of the Rydberg formula – for another mystery – constraints on electron orbits – might not seem like a big advance, but the new atom model summarized many other experimental findings. The quantization of the photoelectric effect and now the quantization of the electron orbits set the stage for the final revolution. Throughout both the early and the modern era of quantum mechanics, the concept that classical mechanics must be valid macroscopically constrained possible quantum models. This concept was formalized by Bohr in 1923 as the correspondence principle. It requires quantum theory to converge to classical limits.: 29  A related concept is Ehrenfest's theorem, which shows that the average values obtained from quantum mechanics (e.g. position and momentum) obey classical laws. === Quantization of spin === In 1922 Otto Stern and Walther Gerlach demonstrated that the magnetic properties of silver atoms defy classical explanation, the work contributing to Stern’s 1943 Nobel Prize in Physics. They fired a beam of silver atoms through a magnetic field. According to classical physics, the atoms should have emerged in a spray, with a continuous range of directions. Instead, the beam separated into two, and only two, diverging streams of atoms. Unlike the other quantum effects known at the time, this striking result involves the state of a single atom.: v2:130  In 1927, T.E. Phipps and J.B. Taylor obtained a similar, but less pronounced effect using hydrogen atoms in their ground state, thereby eliminating any doubts that may have been caused by the use of silver atoms. In 1924, Wolfgang Pauli called it "two-valuedness not describable classically" and associated it with electrons in the outermost shell. The experiments led to the formulation, in 1925, by Samuel Goudsmit and George Uhlenbeck, under the advice of Paul Ehrenfest, of a theory in which the effect arises from the spin of the electron. === Quantization of matter === In 1924 Louis de Broglie proposed that electrons in an atom are constrained not in "orbits" but as standing waves. In detail his solution did not work, but his hypothesis – that the electron "corpuscle" moves in the atom as a wave – spurred Erwin Schrödinger to develop a wave equation for electrons; when applied to hydrogen the Rydberg formula was accurately reproduced.: 65  Max Born's 1924 paper "Zur Quantenmechanik" was the first use of the words "quantum mechanics" in print. His later work included developing quantum collision models; in a footnote to a 1926 paper he proposed the Born rule connecting theoretical models to experiment. In 1927 at Bell Labs, Clinton Davisson and Lester Germer fired slow-moving electrons at a crystalline nickel target, which produced a diffraction pattern indicating the wave nature of the electron; the theory of this diffraction was later fully explained by Hans Bethe. A similar experiment by George Paget Thomson and Alexander Reid, who fired electrons at thin celluloid foils and later metal films and observed rings, independently demonstrated the matter-wave nature of electrons.
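For reference, de Broglie's hypothesis assigns to a particle of momentum p the wavelength {\displaystyle \lambda ={\frac {h}{p}}.} As a rough check (the numbers are added here, not quoted from the text above), an electron accelerated through about 54 V, as in the Davisson–Germer experiment, has {\displaystyle \lambda =h/{\sqrt {2m_{\text{e}}E}}\approx 0.17{\text{ nm}},} comparable to the atomic spacings in a nickel crystal, which is why a diffraction pattern can appear at all.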
=== Further developments === In 1928 Paul Dirac published his relativistic wave equation simultaneously incorporating relativity, predicting anti-matter, and providing a complete theory for the Stern–Gerlach result.: 131  These successes launched a new fundamental understanding of our world at small scale: quantum mechanics. Planck and Einstein started the revolution with quanta that broke down the continuous models of matter and light. Twenty years later "corpuscles" like electrons came to be modeled as continuous waves. This result came to be called wave-particle duality, one iconic idea along with the uncertainty principle that sets quantum mechanics apart from older models of physics. === Quantum radiation, quantum fields === In 1923 Compton demonstrated that the Planck–Einstein energy quanta from light also had momentum; three years later the "energy quanta" got a new name, the "photon". Despite their role in almost all stages of the quantum revolution, no explicit model for light quanta existed until 1927, when Paul Dirac began work on a quantum theory of radiation that became quantum electrodynamics. Over the following decades this work evolved into quantum field theory, the basis for modern quantum optics and particle physics. == Wave–particle duality == The concept of wave–particle duality says that neither the classical concept of "particle" nor of "wave" can fully describe the behavior of quantum-scale objects, either photons or matter. Wave–particle duality is an example of the principle of complementarity in quantum physics. An elegant example of wave-particle duality is the double-slit experiment. In the double-slit experiment, as originally performed by Thomas Young in 1803, and then Augustin Fresnel a decade later, a beam of light is directed through two narrow, closely spaced slits, producing an interference pattern of light and dark bands on a screen. The same behavior can be demonstrated in water waves: the double-slit experiment was seen as a demonstration of the wave nature of light. Variations of the double-slit experiment have been performed using electrons, atoms, and even large molecules, and the same type of interference pattern is seen. Thus it has been demonstrated that all matter possesses wave characteristics. If the source intensity is turned down, the same interference pattern will slowly build up, one "count" or particle (e.g. photon or electron) at a time. The quantum system acts as a wave when passing through the double slits, but as a particle when it is detected. This is a typical feature of quantum complementarity: a quantum system acts as a wave in an experiment to measure its wave-like properties, and like a particle in an experiment to measure its particle-like properties. The point on the detector screen where any individual particle shows up is the result of a random process. However, the distribution pattern of many individual particles mimics the diffraction pattern produced by waves. == Uncertainty principle == Suppose it is desired to measure the position and speed of an object—for example, a car going through a radar speed trap. It can be assumed that the car has a definite position and speed at a particular moment in time. How accurately these values can be measured depends on the quality of the measuring equipment. If the precision of the measuring equipment is improved, it provides a result closer to the true value.
It might be assumed that the speed of the car and its position could be operationally defined and measured simultaneously, as precisely as might be desired. In 1927, Heisenberg proved that this last assumption is not correct. Quantum mechanics shows that certain pairs of physical properties, for example, position and speed, cannot be simultaneously measured, nor defined in operational terms, to arbitrary precision: the more precisely one property is measured, or defined in operational terms, the less precisely can the other be thus treated. This statement is known as the uncertainty principle. The uncertainty principle is not only a statement about the accuracy of our measuring equipment but, more deeply, is about the conceptual nature of the measured quantities—the assumption that the car had simultaneously defined position and speed does not work in quantum mechanics. On a scale of cars and people, these uncertainties are negligible, but when dealing with atoms and electrons they become critical. Heisenberg gave, as an illustration, the measurement of the position and momentum of an electron using a photon of light. In measuring the electron's position, the higher the frequency of the photon, the more accurate is the measurement of the position of the impact of the photon with the electron, but the greater is the disturbance of the electron. This is because from the impact with the photon, the electron absorbs a random amount of energy, rendering the measurement obtained of its momentum increasingly uncertain, for one is necessarily measuring its post-impact disturbed momentum from the collision products and not its original momentum (momentum which should be simultaneously measured with position). With a photon of lower frequency, the disturbance (and hence uncertainty) in the momentum is less, but so is the accuracy of the measurement of the position of the impact. At the heart of the uncertainty principle is a fact that for any mathematical analysis in the position and velocity domains, achieving a sharper (more precise) curve in the position domain can only be done at the expense of a more gradual (less precise) curve in the speed domain, and vice versa. More sharpness in the position domain requires contributions from more frequencies in the speed domain to create the narrower curve, and vice versa. It is a fundamental tradeoff inherent in any such related or complementary measurements, but is only really noticeable at the smallest (Planck) scale, near the size of elementary particles. The uncertainty principle shows mathematically that the product of the uncertainty in the position and momentum of a particle (momentum is velocity multiplied by mass) could never be less than a certain value, and that this value is related to the Planck constant. == Wave function collapse == Wave function collapse means that a measurement has forced or converted a quantum (probabilistic or potential) state into a definite measured value. This phenomenon is only seen in quantum mechanics rather than classical mechanics. For example, before a photon actually "shows up" on a detection screen it can be described only with a set of probabilities for where it might show up. When it does appear, for instance in the CCD of an electronic camera, the time and space where it interacted with the device are known within very tight limits. However, the photon has disappeared in the process of being captured (measured), and its quantum wave function has disappeared with it. 
In its place, some macroscopic physical change in the detection screen has appeared, e.g., an exposed spot in a sheet of photographic film, or a change in electric potential in some cell of a CCD. == Eigenstates and eigenvalues == Because of the uncertainty principle, statements about both the position and momentum of particles can assign only a probability that the position or momentum has some numerical value. Therefore, it is necessary to formulate clearly the difference between the state of something indeterminate, such as an electron in a probability cloud, and the state of something having a definite value. When an object can definitely be "pinned-down" in some respect, it is said to possess an eigenstate. In the Stern–Gerlach experiment discussed above, the quantum model predicts two possible values of spin for the atom compared to the magnetic axis. These two eigenstates are named arbitrarily 'up' and 'down'. The quantum model predicts these states will be measured with equal probability, but no intermediate values will be seen. This is what the Stern–Gerlach experiment shows. The eigenstates of spin about the vertical axis are not simultaneously eigenstates of spin about the horizontal axis, so this atom has an equal probability of being found to have either value of spin about the horizontal axis. As described in the section above, measuring the spin about the horizontal axis can allow an atom that was spun up to spin down: measuring its spin about the horizontal axis collapses its wave function into one of the eigenstates of this measurement, which means it is no longer in an eigenstate of spin about the vertical axis, so can take either value. == The Pauli exclusion principle == In 1924, Wolfgang Pauli proposed a new quantum degree of freedom (or quantum number), with two possible values, to resolve inconsistencies between observed molecular spectra and the predictions of quantum mechanics. In particular, the spectrum of atomic hydrogen had a doublet, or pair of lines differing by a small amount, where only one line was expected. Pauli formulated his exclusion principle, stating, "There cannot exist an atom in such a quantum state that two electrons within [it] have the same set of quantum numbers." A year later, Uhlenbeck and Goudsmit identified Pauli's new degree of freedom with the property called spin whose effects were observed in the Stern–Gerlach experiment. == Dirac wave equation == In 1928, Paul Dirac extended the Pauli equation, which described spinning electrons, to account for special relativity. The result was a theory that dealt properly with events, such as the speed at which an electron orbits the nucleus, occurring at a substantial fraction of the speed of light. By using the simplest electromagnetic interaction, Dirac was able to predict the value of the magnetic moment associated with the electron's spin and found the experimentally observed value, which was too large to be that of a spinning charged sphere governed by classical physics. He was able to solve for the spectral lines of the hydrogen atom and to reproduce from physical first principles Sommerfeld's successful formula for the fine structure of the hydrogen spectrum. Dirac's equations sometimes yielded a negative value for energy, for which he proposed a novel solution: he posited the existence of an antielectron and a dynamical vacuum. This led to the many-particle quantum field theory. 
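For reference, Dirac's equation for a free electron can be written in modern covariant notation as {\displaystyle i\hbar \gamma ^{\mu }\partial _{\mu }\psi -mc\,\psi =0,} where the γμ are four 4×4 matrices and ψ is a four-component wave function (a spinor); the four components naturally accommodate the two spin states of the electron together with those of its antiparticle, the positron.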
== Quantum entanglement == In quantum physics, a group of particles can interact or be created together in such a way that the quantum state of each particle of the group cannot be described independently of the state of the others, including when the particles are separated by a large distance. This is known as quantum entanglement. An early landmark in the study of entanglement was the Einstein–Podolsky–Rosen (EPR) paradox, a thought experiment proposed by Albert Einstein, Boris Podolsky and Nathan Rosen which argues that the description of physical reality provided by quantum mechanics is incomplete. In a 1935 paper titled "Can Quantum-Mechanical Description of Physical Reality be Considered Complete?", they argued for the existence of "elements of reality" that were not part of quantum theory, and speculated that it should be possible to construct a theory containing these hidden variables. The thought experiment involves a pair of particles prepared in what would later become known as an entangled state. Einstein, Podolsky, and Rosen pointed out that, in this state, if the position of the first particle were measured, the result of measuring the position of the second particle could be predicted. If instead the momentum of the first particle were measured, then the result of measuring the momentum of the second particle could be predicted. They argued that no action taken on the first particle could instantaneously affect the other, since this would involve information being transmitted faster than light, which is forbidden by the theory of relativity. They invoked a principle, later known as the "EPR criterion of reality", positing that: "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity." From this, they inferred that the second particle must have a definite value of both position and of momentum prior to either quantity being measured. But quantum mechanics considers these two observables incompatible and thus does not associate simultaneous values for both to any system. Einstein, Podolsky, and Rosen therefore concluded that quantum theory does not provide a complete description of reality. In the same year, Erwin Schrödinger used the word "entanglement" and declared: "I would not call that one but rather the characteristic trait of quantum mechanics." The Irish physicist John Stewart Bell carried the analysis of quantum entanglement much further. He deduced that if measurements are performed independently on the two separated particles of an entangled pair, then the assumption that the outcomes depend upon hidden variables within each half implies a mathematical constraint on how the outcomes on the two measurements are correlated. This constraint would later be named the Bell inequality. Bell then showed that quantum physics predicts correlations that violate this inequality. Consequently, the only way that hidden variables could explain the predictions of quantum physics is if they are "nonlocal", which is to say that somehow the two particles are able to interact instantaneously no matter how widely they ever become separated. Performing experiments like those that Bell suggested, physicists have found that nature obeys quantum mechanics and violates Bell inequalities. In other words, the results of these experiments are incompatible with any local hidden variable theory. 
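A widely used form of Bell's constraint is the CHSH inequality, stated here for reference. For measurement settings a and a' on one particle and b and b' on the other, any local hidden-variable theory requires the measured correlations E to satisfy {\displaystyle |E(a,b)-E(a,b')+E(a',b)+E(a',b')|\leq 2,} whereas quantum mechanics predicts values up to {\displaystyle 2{\sqrt {2}}\approx 2.8} for suitably chosen settings on an entangled pair; it is this kind of violation that the experiments observe.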
== Quantum field theory == The idea of quantum field theory began in the late 1920s with British physicist Paul Dirac, when he attempted to quantize the energy of the electromagnetic field; just as in quantum mechanics the energy of an electron in the hydrogen atom was quantized. Quantization is a procedure for constructing a quantum theory starting from a classical theory. Merriam-Webster defines a field in physics as "a region or space in which a given effect (such as magnetism) exists". Other effects that manifest themselves as fields are gravitation and static electricity. In 2008, physicist Richard Hammond wrote: Sometimes we distinguish between quantum mechanics (QM) and quantum field theory (QFT). QM refers to a system in which the number of particles is fixed, and the fields (such as the electromechanical field) are continuous classical entities. QFT ... goes a step further and allows for the creation and annihilation of particles ... He added, however, that quantum mechanics is often used to refer to "the entire notion of quantum view".: 108  In 1931, Dirac proposed the existence of particles that later became known as antimatter. Dirac shared the Nobel Prize in Physics for 1933 with Schrödinger "for the discovery of new productive forms of atomic theory". === Quantum electrodynamics === Quantum electrodynamics (QED) is the name of the quantum theory of the electromagnetic force. Understanding QED begins with understanding electromagnetism. Electromagnetism can be called "electrodynamics" because it is a dynamic interaction between electrical and magnetic forces. Electromagnetism begins with the electric charge. Electric charges are the sources of and create, electric fields. An electric field is a field that exerts a force on any particles that carry electric charges, at any point in space. This includes the electron, proton, and even quarks, among others. As a force is exerted, electric charges move, a current flows, and a magnetic field is produced. The changing magnetic field, in turn, causes electric current (often moving electrons). The physical description of interacting charged particles, electrical currents, electrical fields, and magnetic fields is called electromagnetism. In 1928 Paul Dirac produced a relativistic quantum theory of electromagnetism. This was the progenitor to modern quantum electrodynamics, in that it had essential ingredients of the modern theory. However, the problem of unsolvable infinities developed in this relativistic quantum theory. Years later, renormalization largely solved this problem. Initially viewed as a provisional, suspect procedure by some of its originators, renormalization eventually was embraced as an important and self-consistent tool in QED and other fields of physics. Also, in the late 1940s Feynman diagrams provided a way to make predictions with QED by finding a probability amplitude for each possible way that an interaction could occur. The diagrams showed in particular that the electromagnetic force is the exchange of photons between interacting particles. The Lamb shift is an example of a quantum electrodynamics prediction that has been experimentally verified. It is an effect whereby the quantum nature of the electromagnetic field makes the energy levels in an atom or ion deviate slightly from what they would otherwise be. As a result, spectral lines may shift or split. 
Similarly, within a freely propagating electromagnetic wave, the current can also be just an abstract displacement current, instead of involving charge carriers. In QED, its full description makes essential use of short-lived virtual particles. There, QED again validates an earlier, rather mysterious concept. === Standard Model === The Standard Model of particle physics is the quantum field theory that describes three of the four known fundamental forces (electromagnetic, weak and strong interactions – excluding gravity) in the universe and classifies all known elementary particles. It was developed in stages throughout the latter half of the 20th century, through the work of many scientists worldwide, with the current formulation being finalized in the mid-1970s upon experimental confirmation of the existence of quarks. Since then, proof of the top quark (1995), the tau neutrino (2000), and the Higgs boson (2012) have added further credence to the Standard Model. In addition, the Standard Model has predicted various properties of weak neutral currents and the W and Z bosons with great accuracy. Although the Standard Model is believed to be theoretically self-consistent and has demonstrated success in providing experimental predictions, it leaves some physical phenomena unexplained and so falls short of being a complete theory of fundamental interactions. For example, it does not fully explain baryon asymmetry, incorporate the full theory of gravitation as described by general relativity, or account for the universe's accelerating expansion as possibly described by dark energy. The model does not contain any viable dark matter particle that possesses all of the required properties deduced from observational cosmology. It also does not incorporate neutrino oscillations and their non-zero masses. Accordingly, it is used as a basis for building more exotic models that incorporate hypothetical particles, extra dimensions, and elaborate symmetries (such as supersymmetry) to explain experimental results at variance with the Standard Model, such as the existence of dark matter and neutrino oscillations. == Interpretations == The physical measurements, equations, and predictions pertinent to quantum mechanics are all consistent and hold a very high level of confirmation. However, the question of what these abstract models say about the underlying nature of the real world has received competing answers. These interpretations are widely varying and sometimes somewhat abstract. For instance, the Copenhagen interpretation states that before a measurement, statements about a particle's properties are completely meaningless, while the many-worlds interpretation describes the existence of a multiverse made up of every possible universe. Light behaves in some aspects like particles and in other aspects like waves. Matter—the "stuff" of the universe consisting of particles such as electrons and atoms—exhibits wavelike behavior too. Some light sources, such as neon lights, give off only certain specific frequencies of light, a small set of distinct pure colors determined by neon's atomic structure. Quantum mechanics shows that light, along with all other forms of electromagnetic radiation, comes in discrete units, called photons, and predicts its spectral energies (corresponding to pure colors), and the intensities of its light beams. A single photon is a quantum, or smallest observable particle, of the electromagnetic field. A partial photon is never experimentally observed. 
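For reference, the energy of a single photon is set by its frequency f, or equivalently its wavelength λ: {\displaystyle E=hf={\frac {hc}{\lambda }}.} As a rough illustration (the numbers are added here, not quoted from the text), an ultraviolet photon at 300 nm carries about 4 eV while an infrared photon at 800 nm carries about 1.5 eV, which is why the former can disrupt chemical bonds while the latter mostly just warms whatever absorbs it.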
More broadly, quantum mechanics shows that many properties of objects, such as position, speed, and angular momentum, that appeared continuous in the zoomed-out view of classical mechanics, turn out to be (in the very tiny, zoomed-in scale of quantum mechanics) quantized. Such properties of elementary particles are required to take on one of a set of small, discrete allowable values, and since the gap between these values is also small, the discontinuities are only apparent at very tiny (atomic) scales. == Applications == === Everyday applications === The relationship between the frequency of electromagnetic radiation and the energy of each photon is why ultraviolet light can cause sunburn, but visible or infrared light cannot. A photon of ultraviolet light delivers a high amount of energy—enough to contribute to cellular damage such as occurs in a sunburn. A photon of infrared light delivers less energy—only enough to warm one's skin. So, an infrared lamp can warm a large surface, perhaps large enough to keep people comfortable in a cold room, but it cannot give anyone a sunburn. === Technological applications === Applications of quantum mechanics include the laser, the transistor, the electron microscope, and magnetic resonance imaging. A special class of quantum mechanical applications is related to macroscopic quantum phenomena such as superfluid helium and superconductors. The study of semiconductors led to the invention of the diode and the transistor, which are indispensable for modern electronics. In even a simple light switch, quantum tunneling is absolutely vital, as otherwise the electrons in the electric current could not penetrate the potential barrier made up of a layer of oxide. Flash memory chips found in USB drives also use quantum tunneling, to erase their memory cells. == See also == Einstein's thought experiments Macroscopic quantum phenomena Philosophy of physics Quantum computing Virtual particle Teaching quantum mechanics List of textbooks on classical and quantum mechanics == References == == Bibliography == Bernstein, Jeremy (2005). "Max Born and the quantum theory". American Journal of Physics. 73 (11): 999–1008. Bibcode:2005AmJPh..73..999B. doi:10.1119/1.2060717. Beller, Mara (2001). Quantum Dialogue: The Making of a Revolution. University of Chicago Press. Bohr, Niels (1958). Atomic Physics and Human Knowledge. John Wiley & Sons]. ISBN 0486479285. OCLC 530611. {{cite book}}: ISBN / Date incompatibility (help) de Broglie, Louis (1953). The Revolution in Physics. Noonday Press. LCCN 53010401. Bronner, Patrick; Strunz, Andreas; Silberhorn, Christine; Meyn, Jan-Peter (2009). "Demonstrating quantum random with single photons". European Journal of Physics. 30 (5): 1189–1200. Bibcode:2009EJPh...30.1189B. doi:10.1088/0143-0807/30/5/026. S2CID 7903179. Einstein, Albert (1934). Essays in Science. Philosophical Library. ISBN 0486470113. LCCN 55003947. {{cite book}}: ISBN / Date incompatibility (help) Feigl, Herbert; Brodbeck, May (1953). Readings in the Philosophy of Science. Appleton-Century-Crofts. ISBN 0390304883. LCCN 53006438. {{cite book}}: ISBN / Date incompatibility (help) Feynman, Richard P. (1949). "Space-Time Approach to Quantum Electrodynamics". Physical Review. 76 (6): 769–89. Bibcode:1949PhRv...76..769F. doi:10.1103/PhysRev.76.769. Feynman, Richard P. (1990). QED, The Strange Theory of Light and Matter. Penguin Books. ISBN 978-0140125054. Fowler, Michael (1999). The Bohr Atom. University of Virginia. Heisenberg, Werner (1958). Physics and Philosophy. 
Harper and Brothers. ISBN 0061305499. LCCN 99010404. {{cite book}}: ISBN / Date incompatibility (help) Lakshmibala, S. (2004). "Heisenberg, Matrix Mechanics and the Uncertainty Principle". Resonance: Journal of Science Education. 9 (8): 46–56. doi:10.1007/bf02837577. S2CID 29893512. Liboff, Richard L. (1992). Introductory Quantum Mechanics (2nd ed.). Addison-Wesley Pub. Co. ISBN 9780201547153. Lindsay, Robert Bruce; Margenau, Henry (1957). Foundations of Physics. Dover. ISBN 0918024188. LCCN 57014416. {{cite book}}: ISBN / Date incompatibility (help) McEvoy, J. P.; Zarate, Oscar (2004). Introducing Quantum Theory. Icon Books. ISBN 1874166374. Nave, Carl Rod (2005). "Quantum Physics". HyperPhysics. Georgia State University. Peat, F. David (2002). From Certainty to Uncertainty: The Story of Science and Ideas in the Twenty-First Century. Joseph Henry Press. Reichenbach, Hans (1944). Philosophic Foundations of Quantum Mechanics. University of California Press. ISBN 0486404595. LCCN a44004471. {{cite book}}: ISBN / Date incompatibility (help) Schilpp, Paul Arthur (1949). Albert Einstein: Philosopher-Scientist. Tudor Publishing Company. LCCN 50005340. Scientific American Reader, 1953. Sears, Francis Weston (1949). Optics (3rd ed.). Addison-Wesley. ISBN 0195046013. LCCN 51001018. {{cite book}}: ISBN / Date incompatibility (help) Shimony, A. (1983). "(title not given in citation)". Foundations of Quantum Mechanics in the Light of New Technology (S. Kamefuchi et al., eds.). Tokyo: Japan Physical Society. p. 225.; cited in: Popescu, Sandu; Daniel Rohrlich (1996). "Action and Passion at a Distance: An Essay in Honor of Professor Abner Shimony". arXiv:quant-ph/9605004. Tavel, Morton; Tavel, Judith (illustrations) (2002). Contemporary physics and the limits of knowledge. Rutgers University Press. ISBN 978-0813530772. Van Vleck, J. H.,1928, "The Correspondence Principle in the Statistical Interpretation of Quantum Mechanics", Proc. Natl. Acad. Sci. 14: 179. Westmoreland; Benjamin Schumacher (1998). "Quantum Entanglement and the Nonexistence of Superluminal Signals". arXiv:quant-ph/9801014. Wheeler, John Archibald; Feynman, Richard P. (1949). "Classical Electrodynamics in Terms of Direct Interparticle Action" (PDF). Reviews of Modern Physics. 21 (3): 425–33. Bibcode:1949RvMP...21..425W. doi:10.1103/RevModPhys.21.425. Wieman, Carl; Perkins, Katherine (2005). "Transforming Physics Education". Physics Today. 58 (11): 36. Bibcode:2005PhT....58k..36W. doi:10.1063/1.2155756. == Further reading == The following titles, all by working physicists, attempt to communicate quantum theory to laypeople, using a minimum of technical apparatus. Jim Al-Khalili (2003). Quantum: A Guide for the Perplexed. Weidenfeld & Nicolson. ISBN 978-1780225340. Chester, Marvin (1987). Primer of Quantum Mechanics. John Wiley. ISBN 0486428788. Brian Cox and Jeff Forshaw (2011) The Quantum Universe. Allen Lane. ISBN 978-1846144325. Richard Feynman (1985). QED: The Strange Theory of Light and Matter. Princeton University Press. ISBN 0691083886. Ford, Kenneth (2005). The Quantum World. Harvard Univ. Press. Includes elementary particle physics. Ghirardi, GianCarlo (2004). Sneaking a Look at God's Cards, Gerald Malsbary, trans. Princeton Univ. Press. The most technical of the works cited here. Passages using algebra, trigonometry, and bra–ket notation can be passed over on a first reading. Tony Hey and Walters, Patrick (2003). The New Quantum Universe. Cambridge Univ. Press. 
Includes much about the technologies quantum theory has made possible. ISBN 978-0521564571. Vladimir G. Ivancevic, Tijana T. Ivancevic (2008). Quantum leap: from Dirac and Feynman, Across the universe, to human body and mind. World Scientific Publishing Company. Provides an intuitive introduction in non-mathematical terms and an introduction in comparatively basic mathematical terms. ISBN 978-9812819277. J. P. McEvoy and Oscar Zarate (2004). Introducing Quantum Theory. Totem Books. ISBN 1840465778' N. David Mermin (1990). "Spooky actions at a distance: mysteries of the QT" in his Boojums all the way through. Cambridge Univ. Press: 110–76. The author is a rare physicist who tries to communicate to philosophers and humanists. ISBN 978-0521388801. Roland Omnès (1999). Understanding Quantum Mechanics. Princeton Univ. Press. ISBN 978-0691004358. Victor Stenger (2000). Timeless Reality: Symmetry, Simplicity, and Multiple Universes. Buffalo NY: Prometheus Books. Chpts. 5–8. ISBN 978-1573928595. Martinus Veltman (2003). Facts and Mysteries in Elementary Particle Physics. World Scientific Publishing Company. ISBN 978-9812381491. == External links == "Microscopic World – Introduction to Quantum Mechanics". by Takada, Kenjiro, emeritus professor at Kyushu University The Quantum Exchange (tutorials and open-source learning software). Atoms and the Periodic Table Single and double slit interference Time-Evolution of a Wavepacket in a Square Well An animated demonstration of a wave packet dispersion over time. Carroll, Sean M. "Quantum Mechanics (an embarrassment)". Sixty Symbols. Brady Haran for the University of Nottingham.
Wikipedia/Basics_of_quantum_mechanics
In atomic physics, the Bohr model or Rutherford–Bohr model was a model of the atom that incorporated some early quantum concepts. Developed from 1911 to 1918 by Niels Bohr and building on Ernest Rutherford's nuclear model, it supplanted the plum pudding model of J. J. Thomson only to be replaced by the quantum atomic model in the 1920s. It consists of a small, dense nucleus surrounded by orbiting electrons. It is analogous to the structure of the Solar System, but with attraction provided by electrostatic force rather than gravity, and with the electron energies quantized (assuming only discrete values). In the history of atomic physics, it followed, and ultimately replaced, several earlier models, including Joseph Larmor's Solar System model (1897), Jean Perrin's model (1901), the cubical model (1902), Hantaro Nagaoka's Saturnian model (1904), the plum pudding model (1904), Arthur Haas's quantum model (1910), the Rutherford model (1911), and John William Nicholson's nuclear quantum model (1912). The improvement over the 1911 Rutherford model mainly concerned the new quantum mechanical interpretation introduced by Haas and Nicholson, while forsaking any attempt to explain radiation according to classical physics. The model's key success lies in explaining the Rydberg formula for hydrogen's spectral emission lines. While the Rydberg formula had been known experimentally, it did not gain a theoretical basis until the Bohr model was introduced. Not only did the Bohr model explain the reasons for the structure of the Rydberg formula, it also provided a justification for the fundamental physical constants that make up the formula's empirical results. The Bohr model is a relatively primitive model of the hydrogen atom, compared to the valence shell model. As a theory, it can be derived as a first-order approximation of the hydrogen atom using the broader and much more accurate quantum mechanics and thus may be considered to be an obsolete scientific theory. However, because of its simplicity, and its correct results for selected systems (see below for application), the Bohr model is still commonly taught to introduce students to quantum mechanics or energy level diagrams before moving on to the more accurate, but more complex, valence shell atom. A related quantum model was proposed by Arthur Erich Haas in 1910 but was rejected until the 1911 Solvay Congress where it was thoroughly discussed. The quantum theory of the period between Planck's discovery of the quantum (1900) and the advent of a mature quantum mechanics (1925) is often referred to as the old quantum theory. == Background == Until the second decade of the 20th century, atomic models were generally speculative. Even the concept of atoms, let alone atoms with internal structure, faced opposition from some scientists.: 2  === Planetary models === In the late 1800s speculations on the possible structure of the atom included planetary models with orbiting charged electrons.: 35  These models faced a significant constraint. In 1897, Joseph Larmor showed that an accelerating charge would radiate power according to classical electrodynamics, a result known as the Larmor formula. Since electrons forced to remain in orbit are continuously accelerating, they would continuously radiate energy, making such an atom unstable. Larmor noted that the electromagnetic effects of multiple electrons, suitably arranged, would cancel each other.
Thus subsequent atomic models based on classical electrodynamics needed to adopt such special multi-electron arrangements.: 113  === Thomson's atom model === When Bohr began his work on a new atomic theory in the summer of 1912,: 237  the atomic model proposed by J. J. Thomson, now known as the plum pudding model, was the best available.: 37  Thomson proposed a model with electrons rotating in coplanar rings within an atomic-sized, positively-charged, spherical volume. Thomson showed by lengthy calculations that this model was mechanically stable and was electrodynamically stable under his original assumption of thousands of electrons per atom. Moreover, he suggested that the particularly stable configurations of electrons in rings were connected to chemical properties of the atoms. He developed a formula for the scattering of beta particles that seemed to match experimental results.: 38  However, Thomson himself later showed that the atom had a factor of a thousand fewer electrons, challenging the stability argument and forcing the poorly understood positive sphere to have most of the atom's mass. Thomson was also unable to explain the many lines in atomic spectra.: 18  === Rutherford nuclear model === In 1908, Hans Geiger and Ernest Marsden demonstrated that alpha particles occasionally scatter at large angles, a result inconsistent with Thomson's model. In 1911 Ernest Rutherford developed a new scattering model, showing that the observed large angle scattering could be explained by a compact, highly charged mass at the center of the atom. Rutherford scattering did not involve the electrons and thus his model of the atom was incomplete. Bohr begins his first paper on his atomic model by describing Rutherford's atom as consisting of a small, dense, positively charged nucleus attracting negatively charged electrons. === Atomic spectra === By the early twentieth century, it was expected that the atom would account for the many atomic spectral lines. These lines were summarized in empirical formulas by Johann Balmer and Johannes Rydberg. In 1897, Lord Rayleigh showed that vibrations of electrical systems predicted spectral lines that depend on the square of the vibrational frequency, contradicting the empirical formulas, which depended directly on the frequency.: 18  In 1907 Arthur W. Conway showed that, rather than the entire atom vibrating, vibrations of only one of the electrons in the system described by Thomson might be sufficient to account for spectral series.: II:106  Although Bohr's model would also rely on just the electron to explain the spectrum, he did not assume an electrodynamical model for the atom. The other important advance in the understanding of atomic spectra was the Rydberg–Ritz combination principle which related atomic spectral line frequencies to differences between 'terms', special frequencies characteristic of each element.: 173  Bohr would recognize the terms as energy levels of the atom divided by the Planck constant, leading to the modern view that the spectral lines result from energy differences.: 847  === Haas atomic model === In 1910, Arthur Erich Haas proposed a model of the hydrogen atom with an electron circulating on the surface of a sphere of positive charge.
The model resembled Thomson's plum pudding model, but Haas added a radical new twist: he constrained the magnitude of the electron's potential energy, E pot {\displaystyle E_{\text{pot}}} , on a sphere of radius a to equal the frequency, f, of the electron's orbit on the sphere times the Planck constant:: 197  | E pot | = e 2 a = h f {\displaystyle |E_{\text{pot}}|={\frac {e^{2}}{a}}=hf} where e represents the charge on the electron and the sphere. Haas combined this constraint with the balance-of-forces equation. The attractive force between the electron and the sphere balances the centrifugal force: e 2 a 2 = m a ( 2 π f ) 2 {\displaystyle {\frac {e^{2}}{a^{2}}}=ma(2\pi f)^{2}} where m is the mass of the electron. This combination relates the radius of the sphere to the Planck constant: a = h 2 4 π 2 e 2 m {\displaystyle a={\frac {h^{2}}{4\pi ^{2}e^{2}m}}} Haas solved for the Planck constant using the then-current value for the radius of the hydrogen atom. Three years later, Bohr would use similar equations with a different interpretation. Bohr took the Planck constant as a given value and used the equations to predict a, the radius of the electron orbit in the ground state of the hydrogen atom. This value is now called the Bohr radius.: 197  === Influence of the Solvay Conference === The first Solvay Conference, in 1911, was one of the first international physics conferences. Nine Nobel or future Nobel laureates attended, including Ernest Rutherford, Bohr's mentor.: 271  Bohr did not attend, but he read the Solvay reports and discussed them with Rutherford.: 233  The subject of the conference was the theory of radiation and the energy quanta of Max Planck's oscillators. Planck's lecture at the conference ended with comments about atoms, and the discussion that followed it concerned atomic models. Hendrik Lorentz raised the question of the composition of the atom based on Haas's model, a form of Thomson's plum pudding model with a quantum modification. Lorentz explained that the size of atoms could be taken to determine the Planck constant, as Haas had done, or the Planck constant could be taken as determining the size of atoms.: 273  Bohr would adopt the second path. The discussions outlined the need for the quantum theory to be included in the atom. Planck explicitly mentions the failings of classical mechanics.: 273  While Bohr had already expressed a similar opinion in his PhD thesis, at Solvay the leading scientists of the day discussed a break with classical theories.: 244  Bohr's first paper on his atomic model cites the Solvay proceedings saying: "Whatever the alteration in the laws of motion of the electrons may be, it seems necessary to introduce in the laws in question a quantity foreign to the classical electrodynamics, i.e. Planck's constant, or as it often is called the elementary quantum of action." Encouraged by the Solvay discussions, Bohr would assume the atom was stable and abandon the efforts to stabilize classical models of the atom.: 199  === Nicholson atom theory === In 1911 John William Nicholson published a model of the atom which would influence Bohr's model. Nicholson developed his model based on the analysis of astrophysical spectroscopy. He connected the observed spectral line frequencies with the orbits of electrons in his atoms. The connection he adopted associated the atomic electron orbital angular momentum with the Planck constant. Whereas Planck focused on a quantum of energy, Nicholson's angular momentum quantum relates to orbital frequency. 
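As a numerical aside, Haas's relation and the Nicholson–Bohr angular-momentum quantum can be checked with a short calculation. The Python sketch below uses rounded SI constant values that are assumptions of this illustration (not taken from the sources above); it writes Haas's Gaussian-unit result a = h²/(4π²e²m) in SI form and confirms that the same radius carries one quantum h/2π of orbital angular momentum.

import math

# Rounded SI constants (assumed values for illustration)
h   = 6.626e-34      # Planck constant, J s
m_e = 9.109e-31      # electron mass, kg
e   = 1.602e-19      # elementary charge, C
k_e = 8.988e9        # Coulomb constant, N m^2 / C^2

# Haas's relation, a = h^2 / (4 pi^2 k_e e^2 m_e): the SI form of the
# Gaussian expression a = h^2 / (4 pi^2 e^2 m) quoted above
a_haas = h**2 / (4 * math.pi**2 * k_e * e**2 * m_e)
print(f"Haas radius a = {a_haas:.3e} m")          # ~5.3e-11 m, the Bohr radius

# Nicholson/Bohr angular-momentum quantum for the n = 1 circular orbit:
# L = m_e * v * r with v = sqrt(k_e e^2 / (m_e r)) should equal h / (2 pi)
v = math.sqrt(k_e * e**2 / (m_e * a_haas))
L = m_e * v * a_haas
print(f"L = {L:.3e} J s, h/(2 pi) = {h/(2*math.pi):.3e} J s")

Both printed values reproduce the familiar hydrogen scale: a radius of about 5.3 × 10−11 m and an angular momentum equal to the reduced Planck constant.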
Nicholson's angular-momentum quantum gave the Planck constant an atomic meaning for the first time.: 169  In his 1913 paper Bohr cites Nicholson as finding quantized angular momentum important for the atom. The other critical influence of Nicholson's work was his detailed analysis of spectra. Before Nicholson's work, Bohr thought the spectral data was not useful for understanding atoms. In comparing his work to Nicholson's, Bohr came to understand the spectral data and their value. When he then learned from a friend about Balmer's compact formula for the spectral line data, Bohr quickly realized his model would match it in detail.: 178  Nicholson's model was based on classical electrodynamics along the lines of J. J. Thomson's plum pudding model, but with his negative electrons orbiting a positive nucleus rather than circulating in a sphere. To avoid immediate collapse of this system, he required that electrons come in pairs so that the rotational acceleration of each electron was matched across the orbit.: 163  By 1913 Bohr had already shown, from the analysis of alpha particle energy loss, that hydrogen had only a single electron, not a matched pair.: 195  Bohr's atomic model would abandon classical electrodynamics. Nicholson's model of radiation was quantum but was attached to the orbits of the electrons. Bohr's quantization would associate it with differences in energy levels of his model of hydrogen rather than with the orbital frequency. === Bohr's previous work === Bohr completed his PhD in 1911 with a thesis 'Studies on the Electron Theory of Metals', an application of the classical electron theory of Hendrik Lorentz. Bohr noted two deficiencies of the classical model. The first concerned the specific heat of metals, which James Clerk Maxwell had noted in 1875: every additional degree of freedom in a theory of metals, such as subatomic electrons, causes more disagreement with experiment. The second was that the classical theory could not explain magnetism.: 194  After his PhD, Bohr worked briefly in the lab of J. J. Thomson before moving to Rutherford's lab in Manchester to study radioactivity. He arrived just after Rutherford completed his proposal of a compact nuclear core for atoms. Charles Galton Darwin, also at Manchester, had just completed an analysis of alpha particle energy loss in metals, concluding that electron collisions were the dominant cause of the loss. Bohr showed in a subsequent paper that Darwin's results would improve by accounting for electron binding energy. Importantly, this allowed Bohr to conclude that hydrogen atoms have a single electron.: 195  == Development == Next, Bohr was told by his friend, Hans Hansen, that the Balmer series is calculated using the Balmer formula, an empirical equation discovered by Johann Balmer in 1885 that described the wavelengths of some spectral lines of hydrogen. This was further generalized by Johannes Rydberg in 1888, resulting in what is now known as the Rydberg formula. After this, Bohr declared, "everything became clear". In 1913 Niels Bohr put forth three postulates to provide an electron model consistent with Rutherford's nuclear model: The electron is able to revolve in certain stable orbits around the nucleus without radiating any energy, contrary to what classical electromagnetism suggests. These stable orbits are called stationary orbits and are attained at certain discrete distances from the nucleus. The electron cannot have any other orbit in between the discrete ones. 
The stationary orbits are attained at distances for which the angular momentum of the revolving electron is an integer multiple of the reduced Planck constant: m e v r = n ℏ {\displaystyle m_{\mathrm {e} }vr=n\hbar } , where n = 1 , 2 , 3 , . . . {\displaystyle n=1,2,3,...} is called the principal quantum number, and ℏ = h / 2 π {\displaystyle \hbar =h/2\pi } . The lowest value of n {\displaystyle n} is 1; this gives the smallest possible orbital radius, known as the Bohr radius, of 0.0529 nm for hydrogen. Once an electron is in this lowest orbit, it can get no closer to the nucleus. Starting from the angular momentum quantum rule as Bohr admits is previously given by Nicholson in his 1912 paper, Bohr was able to calculate the energies of the allowed orbits of the hydrogen atom and other hydrogen-like atoms and ions. These orbits are associated with definite energies and are also called energy shells or energy levels. In these orbits, the electron's acceleration does not result in radiation and energy loss. The Bohr model of an atom was based upon Planck's quantum theory of radiation. Electrons can only gain and lose energy by jumping from one allowed orbit to another, absorbing or emitting electromagnetic radiation with a frequency ν {\displaystyle \nu } determined by the energy difference of the levels according to the Planck relation: Δ E = E 2 − E 1 = h ν {\displaystyle \Delta E=E_{2}-E_{1}=h\nu } , where h {\displaystyle h} is the Planck constant. Other points are: Like Einstein's theory of the photoelectric effect, Bohr's formula assumes that during a quantum jump a discrete amount of energy is radiated. However, unlike Einstein, Bohr stuck to the classical Maxwell theory of the electromagnetic field. Quantization of the electromagnetic field was explained by the discreteness of the atomic energy levels; Bohr did not believe in the existence of photons. According to the Maxwell theory the frequency ν {\displaystyle \nu } of classical radiation is equal to the rotation frequency ν {\displaystyle \nu } rot of the electron in its orbit, with harmonics at integer multiples of this frequency. This result is obtained from the Bohr model for jumps between energy levels E n {\displaystyle E_{n}} and E n − k {\displaystyle E_{n-k}} when k {\displaystyle k} is much smaller than n {\displaystyle n} . These jumps reproduce the frequency of the k {\displaystyle k} -th harmonic of orbit n {\displaystyle n} . For sufficiently large values of n {\displaystyle n} (so-called Rydberg states), the two orbits involved in the emission process have nearly the same rotation frequency, so that the classical orbital frequency is not ambiguous. But for small n {\displaystyle n} (or large k {\displaystyle k} ), the radiation frequency has no unambiguous classical interpretation. This marks the birth of the correspondence principle, requiring quantum theory to agree with the classical theory only in the limit of large quantum numbers. The Bohr–Kramers–Slater theory (BKS theory) is a failed attempt to extend the Bohr model, which violates the conservation of energy and momentum in quantum jumps, with the conservation laws only holding on average. Bohr's condition, that the angular momentum be an integer multiple of ℏ {\displaystyle \hbar } , was later reinterpreted in 1924 by de Broglie as a standing wave condition: the electron is described by a wave and a whole number of wavelengths must fit along the circumference of the electron's orbit: n λ = 2 π r . 
{\displaystyle n\lambda =2\pi r.} According to de Broglie's hypothesis, matter particles such as the electron behave as waves. The de Broglie wavelength of an electron is λ = h m v , {\displaystyle \lambda ={\frac {h}{mv}},} which implies that n h m v = 2 π r , {\displaystyle {\frac {nh}{mv}}=2\pi r,} or n h 2 π = m v r , {\displaystyle {\frac {nh}{2\pi }}=mvr,} where m v r {\displaystyle mvr} is the angular momentum of the orbiting electron. Writing ℓ {\displaystyle \ell } for this angular momentum, the previous equation becomes ℓ = n h 2 π , {\displaystyle \ell ={\frac {nh}{2\pi }},} which is Bohr's second postulate. Bohr had described the angular momentum of the electron orbit as a multiple of h / 2 π {\displaystyle h/2\pi } , while de Broglie's wavelength λ = h / p {\displaystyle \lambda =h/p} is h {\displaystyle h} divided by the electron momentum. In 1913, however, Bohr justified his rule by appealing to the correspondence principle, without providing any sort of wave interpretation. In 1913, the wave behavior of matter particles such as the electron was not suspected. In 1925, a new kind of mechanics was proposed, quantum mechanics, in which Bohr's model of electrons traveling in quantized orbits was extended into a more accurate model of electron motion. The new theory was proposed by Werner Heisenberg. Another form of the same theory, wave mechanics, was discovered by the Austrian physicist Erwin Schrödinger independently, and by different reasoning. Schrödinger employed de Broglie's matter waves, but sought wave solutions of a three-dimensional wave equation describing electrons that were constrained to move about the nucleus of a hydrogen-like atom, by being trapped by the potential of the positive nuclear charge. == Electron energy levels == The Bohr model gives almost exact results only for a system where two charged points orbit each other at speeds much less than that of light. This not only involves one-electron systems such as the hydrogen atom, singly ionized helium, and doubly ionized lithium, but it includes positronium and Rydberg states of any atom where one electron is far away from everything else. It can be used for K-line X-ray transition calculations if other assumptions are added (see Moseley's law below). In high energy physics, it can be used to calculate the masses of heavy quark mesons. Calculation of the orbits requires two assumptions. Classical mechanics The electron is held in a circular orbit by electrostatic attraction. The centripetal force is equal to the Coulomb force. m e v 2 r = Z k e e 2 r 2 , {\displaystyle {\frac {m_{\mathrm {e} }v^{2}}{r}}={\frac {Zk_{\mathrm {e} }e^{2}}{r^{2}}},} where me is the electron's mass, e is the elementary charge, ke is the Coulomb constant and Z is the atom's atomic number. It is assumed here that the mass of the nucleus is much larger than the electron mass (which is a good assumption). This equation determines the electron's speed at any radius: v = Z k e e 2 m e r . {\displaystyle v={\sqrt {\frac {Zk_{\mathrm {e} }e^{2}}{m_{\mathrm {e} }r}}}.} It also determines the electron's total energy at any radius: E = − 1 2 m e v 2 . {\displaystyle E=-{\frac {1}{2}}m_{\mathrm {e} }v^{2}.} The total energy is negative and inversely proportional to r. This means that it takes energy to pull the orbiting electron away from the proton. For infinite values of r, the energy is zero, corresponding to a motionless electron infinitely far from the proton. 
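A brief numerical sketch of these classical relations, evaluated at a sample radius equal to the Bohr radius (the rounded constant values and the choice of radius are assumptions of the illustration, not part of the derivation):

import math

m_e = 9.109e-31      # electron mass, kg
e   = 1.602e-19      # elementary charge, C
k_e = 8.988e9        # Coulomb constant, N m^2 / C^2
r   = 5.29e-11       # sample radius (the hydrogen Bohr radius), m
Z   = 1              # hydrogen

v     = math.sqrt(Z * k_e * e**2 / (m_e * r))   # speed from the force balance
E_kin = 0.5 * m_e * v**2
E_pot = -Z * k_e * e**2 / r
E_tot = E_kin + E_pot                            # equals -E_kin, i.e. -(1/2) m v^2

eV = 1.602e-19
print(f"v = {v:.3e} m/s")                        # ~2.2e6 m/s, well below c
print(f"E_tot = {E_tot/eV:.2f} eV")              # ~ -13.6 eV

The printed total energy, about −13.6 eV, anticipates the level formula derived below.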
The total energy is half the potential energy, the difference being the kinetic energy of the electron. This is also true for noncircular orbits by the virial theorem. A quantum rule The angular momentum L = mevr is an integer multiple of ħ: m e v r = n ℏ . {\displaystyle m_{\mathrm {e} }vr=n\hbar .} === Derivation === In classical mechanics, if an electron is orbiting around an atom with period T, and if its coupling to the electromagnetic field is weak, so that the orbit doesn't decay very much in one cycle, it will emit electromagnetic radiation in a pattern repeating at every period, so that the Fourier transform of the pattern will only have frequencies which are multiples of 1/T. However, in quantum mechanics, the quantization of angular momentum leads to discrete energy levels of the orbits, and the emitted frequencies are quantized according to the energy differences between these levels. This discrete nature of energy levels introduces a fundamental departure from the classical radiation law, giving rise to distinct spectral lines in the emitted radiation. Bohr assumes that the electron is circling the nucleus in an elliptical orbit obeying the rules of classical mechanics, but with no loss of radiation due to the Larmor formula. Denoting the total energy as E, the electron charge as −e, the nucleus charge as K = Ze, the electron mass as me, half the major axis of the ellipse as a, he starts with these equations:: 3  E is assumed to be negative, because a positive energy is required to unbind the electron from the nucleus and put it at rest at an infinite distance. Eq. (1a) is obtained from equating the centripetal force to the Coulombian force acting between the nucleus and the electron, considering that E = T + U {\displaystyle E=T+U} (where T is the average kinetic energy and U the average electrostatic potential), and that for Kepler's second law, the average separation between the electron and the nucleus is a. Eq. (1b) is obtained from the same premises of eq. (1a) plus the virial theorem, stating that, for an elliptical orbit, Then Bohr assumes that | E | {\displaystyle \vert E\vert } is an integer multiple of the energy of a quantum of light with half the frequency of the electron's revolution frequency,: 4  i.e.: From eq. (1a, 1b, 2), it descends: He further assumes that the orbit is circular, i.e. a = r {\displaystyle a=r} , and, denoting the angular momentum of the electron as L, introduces the equation: Eq. (4) stems from the virial theorem, and from the classical mechanics relationships between the angular momentum, the kinetic energy and the frequency of revolution. From eq. (1c, 2, 4), it stems: where: that is: This results states that the angular momentum of the electron is an integer multiple of the reduced Planck constant.: 15  Substituting the expression for the velocity gives an equation for r in terms of n: m e k e Z e 2 m e r r = n ℏ , {\displaystyle m_{\text{e}}{\sqrt {\dfrac {k_{\text{e}}Ze^{2}}{m_{\text{e}}r}}}r=n\hbar ,} so that the allowed orbit radius at any n is r n = n 2 ℏ 2 Z k e e 2 m e . {\displaystyle r_{n}={\frac {n^{2}\hbar ^{2}}{Zk_{\mathrm {e} }e^{2}m_{\mathrm {e} }}}.} The smallest possible value of r in the hydrogen atom (Z = 1) is called the Bohr radius and is equal to: r 1 = ℏ 2 k e e 2 m e ≈ 5.29 × 10 − 11 m = 52.9 p m . 
{\displaystyle r_{1}={\frac {\hbar ^{2}}{k_{\mathrm {e} }e^{2}m_{\mathrm {e} }}}\approx 5.29\times 10^{-11}~\mathrm {m} =52.9~\mathrm {pm} .} The energy of the n-th level for any atom is determined by the radius and quantum number: E = − Z k e e 2 2 r n = − Z 2 ( k e e 2 ) 2 m e 2 ℏ 2 n 2 ≈ − 13.6 Z 2 n 2 e V . {\displaystyle E=-{\frac {Zk_{\mathrm {e} }e^{2}}{2r_{n}}}=-{\frac {Z^{2}(k_{\mathrm {e} }e^{2})^{2}m_{\mathrm {e} }}{2\hbar ^{2}n^{2}}}\approx {\frac {-13.6\ Z^{2}}{n^{2}}}~\mathrm {eV} .} An electron in the lowest energy level of hydrogen (n = 1) therefore has about 13.6 eV less energy than a motionless electron infinitely far from the nucleus. The next energy level (n = 2) is −3.4 eV. The third (n = 3) is −1.51 eV, and so on. For larger values of n, these are also the binding energies of a highly excited atom with one electron in a large circular orbit around the rest of the atom. The hydrogen formula also coincides with the Wallis product. The combination of natural constants in the energy formula is called the Rydberg energy (RE): R E = ( k e e 2 ) 2 m e 2 ℏ 2 . {\displaystyle R_{\mathrm {E} }={\frac {(k_{\mathrm {e} }e^{2})^{2}m_{\mathrm {e} }}{2\hbar ^{2}}}.} This expression is clarified by interpreting it in combinations that form more natural units: m e c 2 {\displaystyle m_{\mathrm {e} }c^{2}} is the rest mass energy of the electron (511 keV), k e e 2 ℏ c = α ≈ 1 137 {\displaystyle {\frac {k_{\mathrm {e} }e^{2}}{\hbar c}}=\alpha \approx {\frac {1}{137}}} is the fine-structure constant, R E = 1 2 ( m e c 2 ) α 2 {\displaystyle R_{\mathrm {E} }={\frac {1}{2}}(m_{\mathrm {e} }c^{2})\alpha ^{2}} . Since this derivation is with the assumption that the nucleus is orbited by one electron, we can generalize this result by letting the nucleus have a charge q = Ze, where Z is the atomic number. This will now give us energy levels for hydrogenic (hydrogen-like) atoms, which can serve as a rough order-of-magnitude approximation of the actual energy levels. So for nuclei with Z protons, the energy levels are (to a rough approximation): E n = − Z 2 R E n 2 . {\displaystyle E_{n}=-{\frac {Z^{2}R_{\mathrm {E} }}{n^{2}}}.} The actual energy levels cannot be solved analytically for more than one electron (see n-body problem) because the electrons are not only affected by the nucleus but also interact with each other via the Coulomb force. When Z = 1/α (Z ≈ 137), the motion becomes highly relativistic, and Z2 cancels the α2 in R; the orbit energy begins to be comparable to rest energy. Sufficiently large nuclei, if they were stable, would reduce their charge by creating a bound electron from the vacuum, ejecting the positron to infinity. This is the theoretical phenomenon of electromagnetic charge screening which predicts a maximum nuclear charge. Emission of such positrons has been observed in the collisions of heavy ions to create temporary super-heavy nuclei. The Bohr formula properly uses the reduced mass of electron and proton in all situations, instead of the mass of the electron, m red = m e m p m e + m p = m e 1 1 + m e / m p . {\displaystyle m_{\text{red}}={\frac {m_{\mathrm {e} }m_{\mathrm {p} }}{m_{\mathrm {e} }+m_{\mathrm {p} }}}=m_{\mathrm {e} }{\frac {1}{1+m_{\mathrm {e} }/m_{\mathrm {p} }}}.} However, these numbers are very nearly the same, due to the much larger mass of the proton, about 1836.1 times the mass of the electron, so that the reduced mass in the system is the mass of the electron multiplied by the constant 1836.1/(1 + 1836.1) = 0.99946. 
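A short sketch of the level formula with the reduced-mass factor applied (Python; the rounded constants and the treatment of the helium nucleus as roughly four proton masses are assumptions of the illustration):

# Bohr energy levels E_n = -Z^2 * R_E / n^2 with a reduced-mass correction.
R_E = 13.606             # Rydberg energy in eV (infinitely heavy nucleus), assumed value
m_ratio_p = 1836.15      # proton-to-electron mass ratio, assumed value

def bohr_level(n, Z=1, nucleus_to_electron_mass=m_ratio_p):
    """Energy of level n in eV, scaled by the reduced mass of the system."""
    reduced = nucleus_to_electron_mass / (1.0 + nucleus_to_electron_mass)
    return -reduced * R_E * Z**2 / n**2

print(bohr_level(1))     # ~ -13.60 eV for hydrogen
print(bohr_level(2))     # ~ -3.40 eV

# Ratio of the He+ (Z = 2, nucleus taken as ~4 proton masses) to the hydrogen scale:
ratio = bohr_level(1, Z=2, nucleus_to_electron_mass=4 * m_ratio_p) / bohr_level(1)
print(ratio)             # ~ 4.0016, not exactly 4

The last printed number, about 4.0016 rather than exactly 4, is the ratio discussed next.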
This reduced-mass correction was historically important in convincing Rutherford of the importance of Bohr's model, for it explained the fact that the frequencies of lines in the spectra for singly ionized helium do not differ from those of hydrogen by a factor of exactly 4, but rather by 4 times the ratio of the reduced mass for the hydrogen vs. the helium systems, which was much closer to the experimental ratio than exactly 4. For positronium, the formula uses the reduced mass also, but in this case, it is exactly the electron mass divided by 2. For any value of the radius, the electron and the positron are each moving at half the speed around their common center of mass, and each has only one fourth the kinetic energy. The total kinetic energy is half what it would be for a single electron moving around a heavy nucleus. E n = − R E 2 n 2 {\displaystyle E_{n}=-{\frac {R_{\mathrm {E} }}{2n^{2}}}} (positronium). == Rydberg formula == Beginning in the late 1860s, Johann Balmer and later Johannes Rydberg and Walther Ritz developed increasingly accurate empirical formulas matching measured atomic spectral lines. Critically for Bohr's later work, Rydberg expressed his formula in terms of wave-number, equivalent to frequency. These formulas contained a constant, R {\displaystyle R} , now known as the Rydberg constant, and a pair of integers indexing the lines:: 247  ν = R ( 1 m 2 − 1 n 2 ) . {\displaystyle \nu =R\left({\frac {1}{m^{2}}}-{\frac {1}{n^{2}}}\right).} Despite many attempts, no theory of the atom could reproduce these relatively simple formulas.: 169  Bohr's theory, describing the energies of transitions or quantum jumps between orbital energy levels, is able to explain these formulas. For the hydrogen atom, Bohr starts with his derived formula for the energy released as a free electron moves into a stable circular orbit indexed by τ {\displaystyle \tau } : W τ = 2 π 2 m e 4 h 2 τ 2 {\displaystyle W_{\tau }={\frac {2\pi ^{2}me^{4}}{h^{2}\tau ^{2}}}} The energy difference between two such levels is then: h ν = W τ 2 − W τ 1 = 2 π 2 m e 4 h 2 ( 1 τ 2 2 − 1 τ 1 2 ) {\displaystyle h\nu =W_{\tau _{2}}-W_{\tau _{1}}={\frac {2\pi ^{2}me^{4}}{h^{2}}}\left({\frac {1}{\tau _{2}^{2}}}-{\frac {1}{\tau _{1}^{2}}}\right)} Therefore, Bohr's theory gives the Rydberg formula and, moreover, the numerical value of the Rydberg constant for hydrogen in terms of more fundamental constants of nature, including the electron's charge, the electron's mass, and the Planck constant:: 31  c R H = 2 π 2 m e 4 h 3 . {\displaystyle cR_{\text{H}}={\frac {2\pi ^{2}me^{4}}{h^{3}}}.} Since the energy of a photon is E = h c λ , {\displaystyle E={\frac {hc}{\lambda }},} these results can be expressed in terms of the wavelength of the photon given off: 1 λ = R ( 1 n f 2 − 1 n i 2 ) . {\displaystyle {\frac {1}{\lambda }}=R\left({\frac {1}{n_{\text{f}}^{2}}}-{\frac {1}{n_{\text{i}}^{2}}}\right).} Bohr's derivation of the Rydberg constant, as well as the concomitant agreement of Bohr's formula with experimentally observed spectral lines of the Lyman (nf = 1), Balmer (nf = 2), and Paschen (nf = 3) series, and successful theoretical prediction of other lines not yet observed, was one reason that his model was immediately accepted.: 34  To apply to atoms with more than one electron, the Rydberg formula can be modified by replacing Z with Z − b or n with n − b, where b is a constant representing a screening effect due to the inner-shell and other electrons (see Electron shell and the later discussion of the "Shell Model of the Atom" below). 
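For hydrogen itself, the wavelength formula is easy to evaluate numerically. In the sketch below the value of the Rydberg constant is an assumed rounded input rather than being computed from Bohr's expression above:

# Wavelengths from the Rydberg formula 1/lambda = R (1/nf^2 - 1/ni^2).
R_H = 1.0968e7           # Rydberg constant for hydrogen, 1/m (assumed rounded value)

def wavelength_nm(n_f, n_i):
    inv = R_H * (1.0 / n_f**2 - 1.0 / n_i**2)   # wave number in 1/m
    return 1e9 / inv                             # wavelength in nm

# First Balmer lines (n_f = 2):
for n_i in (3, 4, 5):
    print(n_i, "->", round(wavelength_nm(2, n_i), 1), "nm")
# prints roughly 656.5, 486.3 and 434.2 nm (the H-alpha, H-beta, H-gamma lines)

# First Lyman line (n_f = 1): about 121.6 nm, in the ultraviolet
print(round(wavelength_nm(1, 2), 1), "nm")

The Balmer wavelengths printed here fall in the visible range, which is why that series was measured first.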
The screening correction was established empirically before Bohr presented his model. == Shell model (heavier atoms) == Bohr's original three papers in 1913 described mainly the electron configuration in lighter elements. Bohr called his electron shells "rings" in 1913. Atomic orbitals within shells did not exist at the time of his planetary model. Bohr explains in Part 3 of his famous 1913 paper that the maximum number of electrons in a shell is eight, writing: "We see, further, that a ring of n electrons cannot rotate in a single ring round a nucleus of charge ne unless n < 8." For smaller atoms, the electron shells would be filled as follows: "rings of electrons will only join together if they contain equal numbers of electrons; and that accordingly the numbers of electrons on inner rings will only be 2, 4, 8". However, in larger atoms the innermost shell would contain eight electrons: "on the other hand, the periodic system of the elements strongly suggests that already in neon N = 10 an inner ring of eight electrons will occur". Bohr wrote "From the above we are led to the following possible scheme for the arrangement of the electrons in light atoms:" In his third 1913 paper, Part III, called "Systems Containing Several Nuclei", he says that two atoms form molecules on a symmetrical plane, and he reverts to describing hydrogen. The 1913 Bohr model did not discuss higher elements in detail, and John William Nicholson was one of the first to prove, in 1914, that it could not work for lithium, though it was an attractive theory for hydrogen and ionized helium. In 1921, following the work of chemists and others involved in work on the periodic table, Bohr extended the model of hydrogen to give an approximate model for heavier atoms. This gave a physical picture that reproduced many known atomic properties for the first time, although these properties were proposed contemporaneously in the identical work of chemist Charles Rugeley Bury. Bohr's partner in research during 1914 to 1916 was Walther Kossel, who corrected Bohr's work to show that electrons interacted through the outer rings, and who called the rings "shells". Irving Langmuir is credited with the first viable arrangement of electrons in shells, with only two in the first shell and going up to eight in the next according to the octet rule of 1904, although Kossel had already predicted a maximum of eight per shell in 1916. Heavier atoms have more protons in the nucleus, and more electrons to cancel the charge. Bohr took from these chemists the idea that each discrete orbit could only hold a certain number of electrons. Per Kossel, after the orbit is full, the next level would have to be used. This gives the atom a shell structure designed by Kossel, Langmuir, and Bury, in which each shell corresponds to a Bohr orbit. This model is even more approximate than the model of hydrogen, because it treats the electrons in each shell as non-interacting. But the repulsions of electrons are taken into account somewhat by the phenomenon of screening. The electrons in outer orbits do not only orbit the nucleus; they also move around the inner electrons, so the effective charge Z that they feel is reduced by the number of electrons in the inner orbit. For example, the lithium atom has two electrons in the lowest 1s orbit, and these orbit at Z = 2. Each one sees the nuclear charge of Z = 3 minus the screening effect of the other, which crudely reduces the nuclear charge by 1 unit. This means that the innermost electrons orbit at approximately 1/2 the Bohr radius. 
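A minimal sketch of this screening estimate, using the Bohr-model radius r = n² a₀ / Z_eff (the rounded Bohr radius and the screening of exactly one unit are the crude assumptions described above):

# Crude screening estimate for the two 1s electrons in lithium (Z = 3).
a_0 = 5.29e-11           # Bohr radius in m, assumed rounded value
Z, screening, n = 3, 1, 1
Z_eff = Z - screening    # each 1s electron sees roughly Z_eff = 2
r_inner = n**2 * a_0 / Z_eff
print(f"Z_eff = {Z_eff}, r_inner = {r_inner:.2e} m")   # about half the Bohr radius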
The outermost electron in lithium orbits at roughly the Bohr radius, since the two inner electrons reduce the nuclear charge by 2. This outer electron should be at nearly one Bohr radius from the nucleus. Because the electrons strongly repel each other, the effective charge description is very approximate; the effective charge Z doesn't usually come out to be an integer. The shell model was able to qualitatively explain many of the mysterious properties of atoms which became codified in the late 19th century in the periodic table of the elements. One property was the size of atoms, which could be determined approximately by measuring the viscosity of gases and density of pure crystalline solids. Atoms tend to get smaller toward the right in the periodic table, and become much larger at the next line of the table. Atoms to the right of the table tend to gain electrons, while atoms to the left tend to lose them. Every element on the last column of the table is chemically inert (noble gas). In the shell model, this phenomenon is explained by shell-filling. Successive atoms become smaller because they are filling orbits of the same size, until the orbit is full, at which point the next atom in the table has a loosely bound outer electron, causing it to expand. The first Bohr orbit is filled when it has two electrons, which explains why helium is inert. The second orbit allows eight electrons, and when it is full the atom is neon, again inert. The third orbital contains eight again, except that in the more correct Sommerfeld treatment (reproduced in modern quantum mechanics) there are extra "d" electrons. The third orbit may hold an extra 10 d electrons, but these positions are not filled until a few more orbitals from the next level are filled (filling the n = 3 d-orbitals produces the 10 transition elements). The irregular filling pattern is an effect of interactions between electrons, which are not taken into account in either the Bohr or Sommerfeld models and which are difficult to calculate even in the modern treatment. == Moseley's law and calculation (K-alpha X-ray emission lines) == Niels Bohr said in 1962: "You see actually the Rutherford work was not taken seriously. We cannot understand today, but it was not taken seriously at all. There was no mention of it any place. The great change came from Moseley." In 1913, Henry Moseley found an empirical relationship between the strongest X-ray line emitted by atoms under electron bombardment (then known as the K-alpha line), and their atomic number Z. Moseley's empiric formula was found to be derivable from Rydberg's formula and later Bohr's formula (Moseley actually mentions only Ernest Rutherford and Antonius Van den Broek in terms of models as these had been published before Moseley's work and Moseley's 1913 paper was published the same month as the first Bohr model paper). The two additional assumptions that [1] this X-ray line came from a transition between energy levels with quantum numbers 1 and 2, and [2], that the atomic number Z when used in the formula for atoms heavier than hydrogen, should be diminished by 1, to (Z − 1)2. Moseley wrote to Bohr, puzzled about his results, but Bohr was not able to help. At that time, he thought that the postulated innermost "K" shell of electrons should have at least four electrons, not the two which would have neatly explained the result. So Moseley published his results without a theoretical explanation. 
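As a numerical aside, the empirical relation Moseley found can be checked against a familiar X-ray line using the Bohr-model form quoted in the next paragraphs; the choice of copper and the screening value of 1 are assumptions of this illustration:

# Moseley's K-alpha relation f = (3/4) * R_v * (Z - 1)^2, in the form given below.
R_v  = 3.28e15           # Rydberg constant expressed as a frequency, Hz
h_eV = 4.136e-15         # Planck constant, eV s (assumed rounded value)

def k_alpha_energy_eV(Z, screening=1):
    f = 0.75 * R_v * (Z - screening)**2
    return h_eV * f

print(round(k_alpha_energy_eV(29) / 1e3, 2), "keV")
# copper (Z = 29): ~7.98 keV, close to the measured Cu K-alpha energy of ~8.05 keV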
It was Walther Kossel in 1914 and in 1916 who explained that in the periodic table new elements would be created as electrons were added to the outer shell. In Kossel's paper, he writes: "This leads to the conclusion that the electrons, which are added further, should be put into concentric rings or shells, on each of which ... only a certain number of electrons—namely, eight in our case—should be arranged. As soon as one ring or shell is completed, a new one has to be started for the next element; the number of electrons, which are most easily accessible, and lie at the outermost periphery, increases again from element to element and, therefore, in the formation of each new shell the chemical periodicity is repeated." Later, chemist Langmuir realized that the effect was caused by charge screening, with an inner shell containing only 2 electrons. In his 1919 paper, Irving Langmuir postulated the existence of "cells" which could each only contain two electrons each, and these were arranged in "equidistant layers". In the Moseley experiment, one of the innermost electrons in the atom is knocked out, leaving a vacancy in the lowest Bohr orbit, which contains a single remaining electron. This vacancy is then filled by an electron from the next orbit, which has n=2. But the n=2 electrons see an effective charge of Z − 1, which is the value appropriate for the charge of the nucleus, when a single electron remains in the lowest Bohr orbit to screen the nuclear charge +Z, and lower it by −1 (due to the electron's negative charge screening the nuclear positive charge). The energy gained by an electron dropping from the second shell to the first gives Moseley's law for K-alpha lines, E = h ν = E i − E f = R E ( Z − 1 ) 2 ( 1 1 2 − 1 2 2 ) , {\displaystyle E=h\nu =E_{i}-E_{f}=R_{\mathrm {E} }(Z-1)^{2}\left({\frac {1}{1^{2}}}-{\frac {1}{2^{2}}}\right),} or f = ν = R v ( 3 4 ) ( Z − 1 ) 2 = ( 2.46 × 10 15 Hz ) ( Z − 1 ) 2 . {\displaystyle f=\nu =R_{\mathrm {v} }\left({\frac {3}{4}}\right)(Z-1)^{2}=(2.46\times 10^{15}~{\text{Hz}})(Z-1)^{2}.} Here, Rv = RE/h is the Rydberg constant, in terms of frequency equal to 3.28×1015 Hz. For values of Z between 11 and 31 this latter relationship had been empirically derived by Moseley, in a simple (linear) plot of the square root of X-ray frequency against atomic number (however, for silver, Z = 47, the experimentally obtained screening term should be replaced by 0.4). Notwithstanding its restricted validity, Moseley's law not only established the objective meaning of atomic number, but as Bohr noted, it also did more than the Rydberg derivation to establish the validity of the Rutherford/Van den Broek/Bohr nuclear model of the atom, with atomic number (place on the periodic table) standing for whole units of nuclear charge. Van den Broek had published his model in January 1913 showing the periodic table was arranged according to charge while Bohr's atomic model was not published until July 1913. The K-alpha line of Moseley's time is now known to be a pair of close lines, written as (Kα1 and Kα2) in Siegbahn notation. == Shortcomings == The Bohr model gives an incorrect value L=ħ for the ground state orbital angular momentum: The angular momentum in the true ground state is known to be zero from experiment. 
Although mental pictures fail somewhat at these levels of scale, an electron in the lowest modern "orbital" with no orbital momentum, may be thought of as not to revolve "around" the nucleus at all, but merely to go tightly around it in an ellipse with zero area (this may be pictured as "back and forth", without striking or interacting with the nucleus). This is only reproduced in a more sophisticated semiclassical treatment like Sommerfeld's. Still, even the most sophisticated semiclassical model fails to explain the fact that the lowest energy state is spherically symmetric – it doesn't point in any particular direction. In modern quantum mechanics, the electron in hydrogen is a spherical cloud of probability that grows denser near the nucleus. The rate-constant of probability-decay in hydrogen is equal to the inverse of the Bohr radius, but since Bohr worked with circular orbits, not zero area ellipses, the fact that these two numbers exactly agree is considered a "coincidence". (However, many such coincidental agreements are found between the semiclassical vs. full quantum mechanical treatment of the atom; these include identical energy levels in the hydrogen atom and the derivation of a fine-structure constant, which arises from the relativistic Bohr–Sommerfeld model (see below) and which happens to be equal to an entirely different concept, in full modern quantum mechanics). The Bohr model also failed to explain: Much of the spectra of larger atoms. At best, it can make predictions about the K-alpha and some L-alpha X-ray emission spectra for larger atoms, if two additional ad hoc assumptions are made. Emission spectra for atoms with a single outer-shell electron (atoms in the lithium group) can also be approximately predicted. Also, if the empiric electron–nuclear screening factors for many atoms are known, many other spectral lines can be deduced from the information, in similar atoms of differing elements, via the Ritz–Rydberg combination principles (see Rydberg formula). All these techniques essentially make use of Bohr's Newtonian energy-potential picture of the atom. The relative intensities of spectral lines; although in some simple cases, Bohr's formula or modifications of it, was able to provide reasonable estimates (for example, calculations by Kramers for the Stark effect). The existence of fine structure and hyperfine structure in spectral lines, which are known to be due to a variety of relativistic and subtle effects, as well as complications from electron spin. The Zeeman effect – changes in spectral lines due to external magnetic fields; these are also due to more complicated quantum principles interacting with electron spin and orbital magnetic fields. Doublets and triplets appear in the spectra of some atoms as very close pairs of lines. Bohr's model cannot say why some energy levels should be very close together. Multi-electron atoms do not have energy levels predicted by the model. It does not work for (neutral) helium. == Refinements == Several enhancements to the Bohr model were proposed, most notably the Sommerfeld or Bohr–Sommerfeld models, which suggested that electrons travel in elliptical orbits around a nucleus instead of the Bohr model's circular orbits. 
This model supplemented the quantized angular momentum condition of the Bohr model with an additional radial quantization condition, the Wilson–Sommerfeld quantization condition ∫ 0 T p r d q r = n h , {\displaystyle \int _{0}^{T}p_{\text{r}}\,dq_{\text{r}}=nh,} where pr is the radial momentum canonically conjugate to the coordinate qr, which is the radial position, and T is one full orbital period. The integral is the action of action-angle coordinates. This condition, suggested by the correspondence principle, is the only one possible, since the quantum numbers are adiabatic invariants. The Bohr–Sommerfeld model was fundamentally inconsistent and led to many paradoxes. The magnetic quantum number measured the tilt of the orbital plane relative to the xy plane, and it could only take a few discrete values. This contradicted the obvious fact that an atom could have any orientation relative to the coordinates, without restriction. The Sommerfeld quantization can be performed in different canonical coordinates and sometimes gives different answers. The incorporation of radiation corrections was difficult, because it required finding action-angle coordinates for a combined radiation/atom system, which is difficult when the radiation is allowed to escape. The whole theory did not extend to non-integrable motions, which meant that many systems could not be treated even in principle. In the end, the model was replaced by the modern quantum-mechanical treatment of the hydrogen atom, which was first given by Wolfgang Pauli in 1925, using Heisenberg's matrix mechanics. The current picture of the hydrogen atom is based on the atomic orbitals of wave mechanics, which Erwin Schrödinger developed in 1926. However, this is not to say that the Bohr–Sommerfeld model was without its successes. Calculations based on the Bohr–Sommerfeld model were able to accurately explain a number of more complex atomic spectral effects. For example, up to first-order perturbations, the Bohr model and quantum mechanics make the same predictions for the spectral line splitting in the Stark effect. At higher-order perturbations, however, the Bohr model and quantum mechanics differ, and measurements of the Stark effect under high field strengths helped confirm the correctness of quantum mechanics over the Bohr model. The prevailing theory behind this difference lies in the shapes of the orbitals of the electrons, which vary according to the energy state of the electron. The Bohr–Sommerfeld quantization conditions lead to questions in modern mathematics. Consistent semiclassical quantization condition requires a certain type of structure on the phase space, which places topological limitations on the types of symplectic manifolds which can be quantized. In particular, the symplectic form should be the curvature form of a connection of a Hermitian line bundle, which is called a prequantization. Bohr also updated his model in 1922, assuming that certain numbers of electrons (for example, 2, 8, and 18) correspond to stable "closed shells". == Model of the chemical bond == Niels Bohr proposed a model of the atom and a model of the chemical bond. According to his model for a diatomic molecule, the electrons of the atoms of the molecule form a rotating ring whose plane is perpendicular to the axis of the molecule and equidistant from the atomic nuclei. 
The dynamic equilibrium of the molecular system is achieved through the balance of forces between the forces of attraction of nuclei to the plane of the ring of electrons and the forces of mutual repulsion of the nuclei. The Bohr model of the chemical bond took into account the Coulomb repulsion – the electrons in the ring are at the maximum distance from each other. == Symbolism of planetary atomic models == Although Bohr's atomic model was superseded by quantum models in the 1920s, the visual image of electrons orbiting a nucleus has remained the popular concept of atoms. The concept of an atom as a tiny planetary system has been widely used as a symbol for atoms and even for "atomic" energy (even though this is more properly considered nuclear energy).: 58  Examples of its use over the past century include but are not limited to: The logo of the United States Atomic Energy Commission, which was in part responsible for its later usage in relation to nuclear fission technology in particular. The flag of the International Atomic Energy Agency is a "crest-and-spinning-atom emblem", enclosed in olive branches. The US minor league baseball Albuquerque Isotopes' logo shows baseballs as electrons orbiting a large letter "A". A similar symbol, the atomic whirl, was chosen as the symbol for the American Atheists, and has come to be used as a symbol of atheism in general. The Unicode Miscellaneous Symbols code point U+269B (⚛) for an atom looks like a planetary atom model. The television show The Big Bang Theory uses a planetary-like image in its print logo. The JavaScript library React uses planetary-like image as its logo. On maps, it is generally used to indicate a nuclear power installation. == See also == == References == === Footnotes === === Primary sources === Bohr, N. (July 1913). "I. On the constitution of atoms and molecules". The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 26 (151): 1–25. Bibcode:1913PMag...26....1B. doi:10.1080/14786441308634955. Bohr, N. (September 1913). "XXXVII. On the constitution of atoms and molecules". The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 26 (153): 476–502. Bibcode:1913PMag...26..476B. doi:10.1080/14786441308634993. Bohr, N. (1 November 1913). "LXXIII. On the constitution of atoms and molecules". The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 26 (155): 857–875. Bibcode:1913PMag...26..857B. doi:10.1080/14786441308635031. Bohr, N. (October 1913). "The Spectra of Helium and Hydrogen". Nature. 92 (2295): 231–232. Bibcode:1913Natur..92..231B. doi:10.1038/092231d0. S2CID 11988018. Bohr, N. (March 1921). "Atomic Structure". Nature. 107 (2682): 104–107. Bibcode:1921Natur.107..104B. doi:10.1038/107104a0. S2CID 4035652. A. Einstein (1917). "Zum Quantensatz von Sommerfeld und Epstein". Verhandlungen der Deutschen Physikalischen Gesellschaft. 19: 82–92. Reprinted in The Collected Papers of Albert Einstein, A. Engel translator, (1997) Princeton University Press, Princeton. 6 p. 434. (provides an elegant reformulation of the Bohr–Sommerfeld quantization conditions, as well as an important insight into the quantization of non-integrable (chaotic) dynamical systems.) de Broglie, Maurice; Langevin, Paul; Solvay, Ernest; Einstein, Albert (1912). La théorie du rayonnement et les quanta : rapports et discussions de la réunion tenue à Bruxelles, du 30 octobre au 3 novembre 1911, sous les auspices de M.E. Solvay (in French). Gauthier-Villars. OCLC 1048217622. 
== Further reading == Linus Carl Pauling (1970). "Chapter 5-1". General Chemistry (3rd ed.). San Francisco: W.H. Freeman & Co. Reprint: Linus Pauling (1988). General Chemistry. New York: Dover Publications. ISBN 0-486-65622-5. George Gamow (1985). "Chapter 2". Thirty Years That Shook Physics. Dover Publications. Walter J. Lehmann (1972). "Chapter 18". Atomic and Molecular Structure: the development of our concepts. John Wiley and Sons. ISBN 0-471-52440-9. Paul Tipler and Ralph Llewellyn (2002). Modern Physics (4th ed.). W. H. Freeman. ISBN 0-7167-4345-0. Klaus Hentschel: Elektronenbahnen, Quantensprünge und Spektren, in: Charlotte Bigg & Jochen Hennig (eds.) Atombilder. Ikonografien des Atoms in Wissenschaft und Öffentlichkeit des 20. Jahrhunderts, Göttingen: Wallstein-Verlag 2009, pp. 51–61 Steven and Susan Zumdahl (2010). "Chapter 7.4". Chemistry (8th ed.). Brooks/Cole. ISBN 978-0-495-82992-8. Kragh, Helge (November 2011). "Conceptual objections to the Bohr atomic theory — do electrons have a 'free will'?". The European Physical Journal H. 36 (3): 327–352. Bibcode:2011EPJH...36..327K. doi:10.1140/epjh/e2011-20031-x. S2CID 120859582. == External links == Standing waves in Bohr's atomic model—An interactive simulation to intuitively explain the quantization condition of standing waves in Bohr's atomic mode
Wikipedia/Bohr_atom_model
The Rutherford model is a name for the first model of an atom with a compact nucleus. The concept arose from Ernest Rutherford's discovery of the nucleus. Rutherford directed the Geiger–Marsden experiment in 1909, which showed much more alpha particle recoil than J. J. Thomson's plum pudding model of the atom could explain. Thomson's model had the positive charge spread throughout the atom. Rutherford's analysis proposed a high central charge concentrated into a very small volume in comparison to the rest of the atom, with this central volume containing most of the atom's mass. The central region would later be known as the atomic nucleus. Rutherford did not discuss the organization of electrons in the atom and did not himself propose a model for the atom. Niels Bohr joined Rutherford's lab and developed a theory for the electron motion which became known as the Bohr model. == Background == Throughout the 1800s, speculative ideas about atoms were discussed and published. J. J. Thomson's model was the first of these models to be based on experimentally detected subatomic particles. In the same paper in which Thomson announced his results on the "corpuscle" nature of cathode rays, an event considered the discovery of the electron, he began speculating on atomic models composed of electrons. He developed his model, now called the plum pudding model, primarily in 1904-06. He produced an elaborate mechanical model of the electrons moving in concentric rings, but the positive charge needed to balance the negative electrons was a simple sphere of uniform charge and unknown composition.: 13  Between 1904 and 1910 Thomson developed formulae for the deflection of fast beta particles from his atomic model for comparison to experiment. Similar work by Rutherford using alpha particles would eventually show Thomson's model could not be correct.: 269  Also among the early models were "planetary" or Solar System-like models.: 35  In a 1901 paper, Jean Baptiste Perrin used Thomson's discovery to propose a Solar System-like model for atoms, with very strongly charged "positive suns" surrounded by "corpuscles, a kind of small negative planets", where the word "corpuscles" refers to what we now call electrons. Perrin discussed how this hypothesis might relate to important, then-unexplained phenomena like the photoelectric effect, emission spectra, and radioactivity.: 145  Perrin later credited Rutherford with the discovery of the nuclear model. A somewhat similar model proposed by Hantaro Nagaoka in 1904 used Saturn's rings as an analog.: 37  The rings consisted of a large number of particles that repelled each other but were attracted to a large central charge. This charge was calculated to be 10,000 times the charge of the ring particles for stability. George A. Schott showed in 1904 that Nagaoka's model could not be consistent with the results of atomic spectroscopy, and the model fell out of favor.: 37  == Experimental basis for the model == Rutherford's nuclear model of the atom grew out of a series of experiments with alpha particles, a form of radiation Rutherford discovered in 1899. These experiments demonstrated that alpha particles "scattered" or bounced off atoms in ways unlike those Thomson's model predicted. In 1908 and 1910, Hans Geiger and Ernest Marsden in Rutherford's lab showed that alpha particles could occasionally be reflected from gold foils. If Thomson was correct, the beam would go through the gold foil with very small deflections. 
In the experiment most of the beam passed through the foil, but a few were deflected. In a May 1911 paper, Rutherford presented his own physical model for subatomic structure, as an interpretation for the unexpected experimental results. In it, the atom is made up of a central charge (this is the modern atomic nucleus, though Rutherford did not use the term "nucleus" in his paper). Rutherford only committed himself to a small central region of very high positive or negative charge in the atom. For concreteness, consider the passage of a high speed α particle through an atom having a positive central charge N e, and surrounded by a compensating charge of N electrons. Using only energetic considerations of how far particles of known speed would be able to penetrate toward a central charge of 100 e, Rutherford was able to calculate that the radius of his gold central charge would need to be less (how much less could not be told) than 3.4 × 10−14 meters. This was in a gold atom known to be 10−10 metres or so in radius—a very surprising finding, as it implied a strong central charge less than 1/3000th of the diameter of the atom. The Rutherford model served to concentrate a great deal of the atom's charge and mass to a very small core, but did not attribute any structure to the remaining electrons and remaining atomic mass. It did mention the atomic model of Hantaro Nagaoka, in which the electrons are arranged in one or more rings, with the specific metaphorical structure of the stable rings of Saturn. The plum pudding model of J. J. Thomson also had rings of orbiting electrons. The Rutherford paper suggested that the central charge of an atom might be "proportional" to its atomic mass in hydrogen mass units u (roughly 1/2 of it, in Rutherford's model). For gold, this mass number is 197 (not then known to great accuracy) and was therefore modelled by Rutherford to be possibly 196 u. However, Rutherford did not attempt to make the direct connection of central charge to atomic number, since gold's "atomic number" (at that time merely its place number in the periodic table) was 79, and Rutherford had modelled the charge to be about +100 units (he had actually suggested 98 units of positive charge, to make half of 196). Thus, Rutherford did not formally suggest the two numbers (periodic table place, 79, and nuclear charge, 98 or 100) might be exactly the same. In 1913 Antonius van den Broek suggested that the nuclear charge and atomic weight were not connected, clearing the way for the idea that atomic number and nuclear charge were the same. This idea was quickly taken up by Rutherford's team and was confirmed experimentally within two years by Henry Moseley.: 52  These are the key indicators: The atom's electron cloud does not (substantially) influence alpha particle scattering. Much of an atom's positive charge is concentrated in a relatively tiny volume at the center of the atom, known today as the nucleus. The magnitude of this charge is proportional to (up to a charge number that can be approximately half of) the atom's atomic mass—the remaining mass is now known to be mostly attributed to neutrons. This concentrated central mass and charge is responsible for deflecting both alpha and beta particles. The mass of heavy atoms such as gold is mostly concentrated in the central charge region, since calculations show it is not deflected or moved by the high speed alpha particles, which have very high momentum in comparison to electrons, but not with regard to a heavy atom as a whole. 
The atom itself is about 100,000 (105) times the diameter of the nucleus. This is comparable to placing a grain of sand in the middle of a football field. == Contribution to modern science == Rutherford's new atom model caused no reaction at first.: 28  Rutherford explicitly ignored the electrons, only mentioning Hantaro Nagaoka's Saturnian model. By ignoring the electrons, Rutherford also ignored any potential implications for atomic spectroscopy or for chemistry.: 302  Rutherford himself did not press the case for his atomic model in the following years: his own 1913 book on "Radioactive substances and their radiations" only mentions the atom twice; other books by other authors around this time focus on Thomson's model.: 446  The impact of Rutherford's nuclear model came after Niels Bohr arrived as a post-doctoral student in Manchester at Rutherford's invitation. Bohr dropped his work on the Thomson model in favor of Rutherford's nuclear model, developing the Rutherford–Bohr model over the next several years. Eventually Bohr incorporated early ideas of quantum mechanics into the model of the atom, allowing prediction of electronic spectra and of concepts of chemistry.: 304  After Rutherford's discovery, subsequent research gradually determined the details of the atomic structure that the gold foil experiments had revealed. Scientists eventually discovered that atoms have a positively charged nucleus (with an atomic number of charges) in the center, with a radius of about 1.2 × 10−15 meters × [atomic mass number]1⁄3. Electrons were found to be even smaller. == References == == External links == Rutherford's Model by Raymond College Rutherford's Model by Kyushu University
Wikipedia/Rutherford_model
To determine the vibrational spectroscopy of linear molecules, the rotation and vibration of linear molecules are taken into account to predict which vibrational (normal) modes are active in the infrared spectrum and the Raman spectrum. == Degrees of freedom == The location of a molecule in 3-dimensional space can be described by the total number of coordinates of its atoms. Each atom is assigned a set of x, y, and z coordinates and can move in all three directions. The number of degrees of freedom is the total number of variables used to define the motion of a molecule completely. For N atoms in a molecule moving in 3-D space, there are 3N total motions because each atom has 3 degrees of freedom. == Vibrational modes == N atoms in a molecule have 3N degrees of freedom, which constitute translations, rotations, and vibrations. For non-linear molecules, there are 3 degrees of freedom for translational motion (motion along the x, y, and z directions) and 3 degrees of freedom for rotational motion (rotations Rx, Ry, and Rz about the three axes). Linear molecules are defined as possessing bond angles of 180°, so there are 3 degrees of freedom for translational motion but only 2 degrees of freedom for rotational motion, because rotation about the molecular axis leaves the molecule unchanged. Subtracting the translational and rotational degrees of freedom gives the number of vibrational modes. Number of degrees of vibrational freedom for nonlinear molecules: 3N-6 Number of degrees of vibrational freedom for linear molecules: 3N-5 (a short numerical sketch of this counting is given below) === Symmetry of vibrational modes === All 3N degrees of freedom have symmetry relationships consistent with the irreducible representations of the molecule's point group. A linear molecule is characterized as possessing a bond angle of 180° with either a C∞v or D∞h symmetry point group. Each point group has a character table that represents all of the possible symmetries of that molecule. Specifically for linear molecules, the relevant character tables are those of C∞v and D∞h. However, these two character tables have an infinite number of irreducible representations, so it is necessary to lower the symmetry to a subgroup that has related representations whose characters are the same for the shared operations in the two groups. A property that transforms as one representation in a group will transform as its correlated representation in a subgroup. Therefore, C∞v will be correlated to C2v and D∞h to D2h, as given by the corresponding correlation tables. Once the point group of the linear molecule is determined and the correlated symmetry is identified, all symmetry operations associated with that correlated point group are performed for each atom to deduce the reducible representation of the 3N Cartesian displacement vectors. From the right side of the character table, the non-vibrational degrees of freedom, rotational (Rx and Ry) and translational (x, y, and z), are subtracted: Γvib = Γ3N - Γrot - Γtrans. This yields Γvib, which is used to find the correct normal modes from the original symmetry, which is either C∞v or D∞h, using the correlation tables. Then, each vibrational mode can be identified as either IR or Raman active. === Vibrational spectroscopy === A vibration will be active in the IR if there is a change in the dipole moment of the molecule and if it has the same symmetry as one of the x, y, z coordinates. To determine which modes are IR active, the irreducible representations corresponding to x, y, and z are checked against the reducible representation Γvib. 
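Before the activity rules are applied, the mode counting promised above can be illustrated with a short sketch; the example molecules and their linearity flags are assumptions chosen for illustration:

# Count vibrational modes: 3N - 5 for linear molecules, 3N - 6 otherwise.
def vibrational_modes(n_atoms, linear):
    return 3 * n_atoms - (5 if linear else 6)

examples = {            # (number of atoms, is linear) -- illustrative entries
    "CO2":  (3, True),
    "HCN":  (3, True),
    "H2O":  (3, False),
    "C2H2": (4, True),
}
for name, (n, lin) in examples.items():
    print(name, vibrational_modes(n, lin))
# CO2 4, HCN 4, H2O 3, C2H2 7 -- the CO2 count of 4 matches the worked example below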
An IR mode is active if the same irreducible representation is present in both. Furthermore, a vibration will be Raman active if there is a change in the polarizability of the molecule and if it has the same symmetry as one of the direct products of the x, y, z coordinates. To determine which modes are Raman active, the irreducible representation corresponding to xy, xz, yz, x2, y2, and z2 are checked with the reducible representation of Γvib. A Raman mode is active if the same irreducible representation is present in both. == Example == Carbon Dioxide, CO2 1. Assign point group: D∞h 2. Determine group-subgroup point group: D2h 3. Find the number of normal (vibrational) modes or degrees of freedom using the equation: 3n - 5 = 3(3) - 5 = 4 4. Derive reducible representation Γ3N: 5. Decompose the reducible representation into irreducible components: Γ3N = Ag + B2g + B3g + 2B1u + 2B2u + 2B3u 6. Solve for the irreducible representation corresponding to the normal modes with the subgroup character table: Γ3N = Ag + B2g + B3g + 2B1u + 2B2u + 2B3u Γrot = B2g + B3g Γtrans = B1u + B2u + B3u Γvib = Γ3N - Γrot - Γtrans Γvib = Ag + B1u + B2u + B3u 7. Use the correlation table to find the normal modes for the original point group: v1 = Ag = Σ+g v2 = B1u = Σ+u v3 = B2u = Πu v4 = B3u = Πu 8. Label whether the modes are either IR active or Raman active: v1 = Raman active v2 = IR active v3 = IR active v4 = IR active == References ==
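The counting in steps 3–8 of the CO2 example above can be checked with a short script. The following is a minimal sketch, not a general group-theory engine: the D2h labels, their assignment to x, y, z and to the quadratic functions, and the decomposition of Γ3N are copied from the worked example and the D2h character table, and the code only performs the bookkeeping.

```python
# Minimal sketch of the mode-counting bookkeeping in the CO2 example above.
# The D2h labels and their x/y/z and quadratic-function assignments are taken
# from the worked example and the D2h character table; this is not a general
# group-theory engine, it only makes the arithmetic explicit.
from collections import Counter

def vibrational_mode_count(n_atoms: int, linear: bool) -> int:
    """3N - 5 vibrational modes for linear molecules, 3N - 6 otherwise."""
    return 3 * n_atoms - (5 if linear else 6)

# Step 3: CO2 is linear with N = 3 atoms, so 4 normal modes are expected.
assert vibrational_mode_count(3, linear=True) == 4

# Steps 5-6: subtract rotations and translations from Gamma_3N (decomposed
# in the D2h subgroup) to obtain Gamma_vib.
gamma_3N    = Counter({"Ag": 1, "B2g": 1, "B3g": 1, "B1u": 2, "B2u": 2, "B3u": 2})
gamma_rot   = Counter({"B2g": 1, "B3g": 1})            # Rx and Ry only (linear molecule)
gamma_trans = Counter({"B1u": 1, "B2u": 1, "B3u": 1})  # z, y and x

gamma_vib = gamma_3N.copy()
gamma_vib.subtract(gamma_rot)
gamma_vib.subtract(gamma_trans)
gamma_vib = +gamma_vib                 # drop the entries that went to zero
print(dict(gamma_vib))                 # {'Ag': 1, 'B1u': 1, 'B2u': 1, 'B3u': 1}

# Step 8: a mode is IR active if it spans x, y or z, and Raman active if it
# spans a quadratic function (x2, y2, z2, xy, xz, yz) in the character table.
ir_labels    = {"B1u", "B2u", "B3u"}        # the z, y, x rows of D2h
raman_labels = {"Ag", "B1g", "B2g", "B3g"}  # the quadratic-function rows of D2h

for mode in gamma_vib:
    if mode in ir_labels:
        print(mode, "-> IR active")
    elif mode in raman_labels:
        print(mode, "-> Raman active")
# Prints Ag -> Raman active and B1u, B2u, B3u -> IR active, matching step 8.
```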
Wikipedia/Vibrational_spectroscopy_of_linear_molecules
Negative-index metamaterial or negative-index material (NIM) is a metamaterial whose refractive index for an electromagnetic wave has a negative value over some frequency range. NIMs are constructed of periodic basic parts called unit cells, which are usually significantly smaller than the wavelength of the externally applied electromagnetic radiation. The unit cells of the first experimentally investigated NIMs were constructed from circuit board material, or in other words, wires and dielectrics. In general, these artificially constructed cells are stacked or planar and configured in a particular repeated pattern to compose the individual NIM. For instance, the unit cells of the first NIMs were stacked horizontally and vertically, resulting in a pattern that was repeated and intended (see below images). Specifications for the response of each unit cell are predetermined prior to construction and are based on the intended response of the entire, newly constructed, material. In other words, each cell is individually tuned to respond in a certain way, based on the desired output of the NIM. The aggregate response is mainly determined by each unit cell's geometry and substantially differs from the response of its constituent materials. In other words, the way the NIM responds is that of a new material, unlike the wires or metals and dielectrics it is made from. Hence, the NIM has become an effective medium. Also, in effect, this metamaterial has become an “ordered macroscopic material, synthesized from the bottom up”, and has emergent properties beyond its components. Metamaterials that exhibit a negative value for the refractive index are often referred to by any of several terminologies: left-handed media or left-handed material (LHM), backward-wave media (BW media), media with negative refractive index, double negative (DNG) metamaterials, and other similar names. == Properties and characteristics == Electrodynamics of media with negative indices of refraction were first studied by Russian theoretical physicist Victor Veselago from Moscow Institute of Physics and Technology in 1967. The proposed left-handed or negative-index materials were theorized to exhibit optical properties opposite to those of glass, air, and other transparent media. Such materials were predicted to exhibit counterintuitive properties like bending or refracting light in unusual and unexpected ways. However, the first practical metamaterial was not constructed until 33 years later and it does support Veselago's concepts. Currently, negative-index metamaterials are being developed to manipulate electromagnetic radiation in new ways. For example, optical and electromagnetic properties of natural materials are often altered through chemistry. With metamaterials, optical and electromagnetic properties can be engineered by changing the geometry of its unit cells. The unit cells are materials that are ordered in geometric arrangements with dimensions that are fractions of the wavelength of the radiated electromagnetic wave. Each artificial unit responds to the radiation from the source. The collective result is the material's response to the electromagnetic wave that is broader than normal. Subsequently, transmission is altered by adjusting the shape, size, and configurations of the unit cells. This results in control over material parameters known as permittivity and magnetic permeability. These two parameters (or quantities) determine the propagation of electromagnetic waves in matter. 
Therefore, controlling the values of permittivity and permeability means that the refractive index can be negative or zero as well as conventionally positive. It all depends on the intended application or desired result. So, optical properties can be expanded beyond the capabilities of lenses, mirrors, and other conventional materials. Additionally, one of the effects most studied is the negative index of refraction. === Reverse propagation === When a negative index of refraction occurs, propagation of the electromagnetic wave is reversed. Resolution below the diffraction limit becomes possible. This is known as subwavelength imaging. Transmitting a beam of light via an electromagnetically flat surface is another capability. In contrast, conventional materials are usually curved, and cannot achieve resolution below the diffraction limit. Also, reversing the electromagnetic waves in a material, in conjunction with other ordinary materials (including air) could result in minimizing losses that would normally occur. The reverse of the electromagnetic wave, characterized by an antiparallel phase velocity is also an indicator of negative index of refraction. Furthermore, negative-index materials are customized composites. In other words, materials are combined with a desired result in mind. Combinations of materials can be designed to achieve optical properties not seen in nature. The properties of the composite material stem from its lattice structure constructed from components smaller than the impinging electromagnetic wavelength separated by distances that are also smaller than the impinging electromagnetic wavelength. Likewise, by fabricating such metamaterials researchers are trying to overcome fundamental limits tied to the wavelength of light. The unusual and counterintuitive properties currently have practical and commercial use manipulating electromagnetic microwaves in wireless and communication systems. Lastly, research continues in the other domains of the electromagnetic spectrum, including visible light. == Materials == The first actual metamaterials worked in the microwave regime, or centimeter wavelengths, of the electromagnetic spectrum (about 4.3 GHz). It was constructed of split-ring resonators and conducting straight wires (as unit cells). The unit cells were sized from 7 to 10 millimeters. The unit cells were arranged in a two-dimensional (periodic) repeating pattern which produces a crystal-like geometry. Both the unit cells and the lattice spacing were smaller than the radiated electromagnetic wave. This produced the first left-handed material when both the permittivity and permeability of the material were negative. This system relies on the resonant behavior of the unit cells. Below a group of researchers develop an idea for a left-handed metamaterial that does not rely on such resonant behavior. Research in the microwave range continues with split-ring resonators and conducting wires. Research also continues in the shorter wavelengths with this configuration of materials and the unit cell sizes are scaled down. However, at around 200 terahertz issues arise which make using the split ring resonator problematic. "Alternative materials become more suitable for the terahertz and optical regimes." At these wavelengths selection of materials and size limitations become important. For example, in 2007 a 100 nanometer mesh wire design made of silver and woven in a repeating pattern transmitted beams at the 780 nanometer wavelength, the far end of the visible spectrum. 
The researchers believe this produced a negative refraction of 0.6. Nevertheless, this operates at only a single wavelength like its predecessor metamaterials in the microwave regime. Hence, the challenges are to fabricate metamaterials so that they "refract light at ever-smaller wavelengths" and to develop broad band capabilities. === Artificial transmission-line-media === In the metamaterial literature, medium or media refers to transmission medium or optical medium. In 2002, a group of researchers came up with the idea that in contrast to materials that depended on resonant behavior, non-resonant phenomena could surpass narrow bandwidth constraints of the wire/split-ring resonator configuration. This idea translated into a type of medium with broader bandwidth abilities, negative refraction, backward waves, and focusing beyond the diffraction limit. They dispensed with split-ring-resonators and instead used a network of L–C loaded transmission lines. In metamaterial literature this became known as artificial transmission-line media. At that time it had the added advantage of being more compact than a unit made of wires and split ring resonators. The network was both scalable (from the megahertz to the tens of gigahertz range) and tunable. It also includes a method for focusing the wavelengths of interest. By 2007 the negative refractive index transmission line was employed as a subwavelength focusing free-space flat lens. That this is a free-space lens is a significant advance. Part of prior research efforts targeted creating a lens that did not need to be embedded in a transmission line. === The optical domain === Metamaterial components shrink as research explores shorter wavelengths (higher frequencies) of the electromagnetic spectrum in the infrared and visible spectrums. For example, theory and experiment have investigated smaller horseshoe shaped split ring resonators designed with lithographic techniques, as well as paired metal nanorods or nanostrips, and nanoparticles as circuits designed with lumped element models == Applications == The science of negative-index materials is being matched with conventional devices that broadcast, transmit, shape, or receive electromagnetic signals that travel over cables, wires, or air. The materials, devices and systems that are involved with this work could have their properties altered or heightened. Hence, this is already happening with metamaterial antennas and related devices which are commercially available. Moreover, in the wireless domain these metamaterial apparatuses continue to be researched. Other applications are also being researched. These are electromagnetic absorbers such as radar-microwave absorbers, electrically small resonators, waveguides that can go beyond the diffraction limit, phase compensators, advancements in focusing devices (e.g. microwave lens), and improved electrically small antennas. In the optical frequency regime developing the superlens may allow for imaging below the diffraction limit. Other potential applications for negative-index metamaterials are optical nanolithography, nanotechnology circuitry, as well as a near field superlens (Pendry, 2000) that could be useful for biomedical imaging and subwavelength photolithography. == Manipulating permittivity and permeability == To describe any electromagnetic properties of a given achiral material such as an optical lens, there are two significant parameters. 
These are permittivity, ϵ r {\displaystyle \epsilon _{r}} , and permeability, μ r {\displaystyle \mu _{r}} , which allow accurate prediction of light waves traveling within materials, and electromagnetic phenomena that occur at the interface between two materials. For example, refraction is an electromagnetic phenomenon which occurs at the interface between two materials. Snell's law states that the relationship between the angle of incidence of a beam of electromagnetic radiation (light) and the resulting angle of refraction rests on the refractive indices, n {\displaystyle n} , of the two media (materials). The refractive index of an achiral medium is given by n = ± ϵ r μ r {\displaystyle \scriptstyle n=\pm {\sqrt {\epsilon _{r}\mu _{r}}}} . Hence, it can be seen that the refractive index is dependent on these two parameters. Therefore, if designed or arbitrarily modified values can be inputs for ϵ r {\displaystyle \epsilon _{r}} and μ r {\displaystyle \mu _{r}} , then the behavior of propagating electromagnetic waves inside the material can be manipulated at will. This ability then allows for intentional determination of the refractive index. For example, in 1967, Victor Veselago analytically determined that light will refract in the reverse direction (negatively) at the interface between a material with negative refractive index and a material exhibiting conventional positive refractive index. This extraordinary material was realized on paper with simultaneous negative values for ϵ r {\displaystyle \epsilon _{r}} and μ r {\displaystyle \mu _{r}} , and could therefore be termed a double negative material. However, in Veselago's day a material which exhibits double negative parameters simultaneously seemed impossible because no natural materials exist which can produce this effect. Therefore, his work was ignored for three decades. It was nominated for the Nobel Prize later. In general the physical properties of natural materials cause limitations. Most dielectrics only have positive permittivities, ϵ r {\displaystyle \epsilon _{r}} > 0. Metals will exhibit negative permittivity, ϵ r {\displaystyle \epsilon _{r}} < 0 at optical frequencies, and plasmas exhibit negative permittivity values in certain frequency bands. Pendry et al. demonstrated that the plasma frequency can be made to occur in the lower microwave frequencies for metals with a material made of metal rods that replaces the bulk metal. However, in each of these cases permeability remains always positive. At microwave frequencies it is possible for negative μ to occur in some ferromagnetic materials. But the inherent drawback is they are difficult to find above terahertz frequencies. In any case, a natural material that can achieve negative values for permittivity and permeability simultaneously has not been found or discovered. Hence, all of this has led to constructing artificial composite materials known as metamaterials in order to achieve the desired results. 
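The sign logic described above can be made concrete with a small numerical sketch. The permittivity and permeability values below are arbitrary illustrative numbers, not data from any experiment discussed in this article; the sketch simply shows how taking the negative root of n = ±√(εrμr) for a double-negative medium flips the sign of the refraction angle in Snell's law.

```python
# Illustrative sketch (values are placeholders, not from the article): how the
# sign chosen for n = +/- sqrt(eps_r * mu_r) propagates into Snell's law.
import numpy as np

def refractive_index(eps_r: float, mu_r: float) -> float:
    """Return n for an achiral medium; the negative root is taken when both
    eps_r and mu_r are negative (the double-negative / left-handed case)."""
    n_mag = np.sqrt(abs(eps_r) * abs(mu_r))
    return -n_mag if (eps_r < 0 and mu_r < 0) else n_mag

def refraction_angle(theta_i_deg: float, n1: float, n2: float) -> float:
    """Snell's law n1 sin(theta_i) = n2 sin(theta_t); a negative n2 gives a
    negative angle, i.e. the beam bends to the same side as the incident beam."""
    theta_i = np.radians(theta_i_deg)
    return np.degrees(np.arcsin(n1 * np.sin(theta_i) / n2))

n_air = refractive_index(1.0, 1.0)     # ordinary medium,  n = +1
n_nim = refractive_index(-2.0, -1.5)   # double negative,  n = -sqrt(3) ~ -1.73

print(refraction_angle(30.0, n_air, 1.5))    # ~ +19.5 deg, conventional refraction
print(refraction_angle(30.0, n_air, n_nim))  # ~ -16.8 deg, negative refraction
```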
== Negative index of refraction due to chirality == In case of chiral materials, the refractive index n {\displaystyle n} depends not only on permittivity ϵ r {\displaystyle \epsilon _{r}} and permeability μ r {\displaystyle \mu _{r}} , but also on the chirality parameter κ {\displaystyle \kappa } , resulting in distinct values for left and right circularly polarized waves, given by n = ± ϵ r μ r ± κ {\displaystyle n=\pm {\sqrt {\epsilon _{r}\mu _{r}}}\pm \kappa } A negative index will occur for waves of one circular polarization if κ {\displaystyle \kappa } > ϵ r μ r {\displaystyle {\sqrt {\epsilon _{r}\mu _{r}}}} . In this case, it is not necessary that either or both ϵ r {\displaystyle \epsilon _{r}} and μ r {\displaystyle \mu _{r}} be negative to achieve a negative index of refraction. A negative refractive index due to chirality was predicted by Pendry and Tretyakov et al., and first observed simultaneously and independently by Plum et al. and Zhang et al. in 2009. == Physical properties never before produced in nature == Theoretical articles were published in 1996 and 1999 which showed that synthetic materials could be constructed to purposely exhibit a negative permittivity and permeability. These papers, along with Veselago's 1967 theoretical analysis of the properties of negative-index materials, provided the background to fabricate a metamaterial with negative effective permittivity and permeability. See below. A metamaterial developed to exhibit negative-index behavior is typically formed from individual components. Each component responds differently and independently to a radiated electromagnetic wave as it travels through the material. Since these components are smaller than the radiated wavelength it is understood that a macroscopic view includes an effective value for both permittivity and permeability. === Composite material === In the year 2000, David R. Smith's team of UCSD researchers produced a new class of composite materials by depositing a structure onto a circuit-board substrate consisting of a series of thin copper split-rings and ordinary wire segments strung parallel to the rings. This material exhibited unusual physical properties that had never been observed in nature. These materials obey the laws of physics, but behave differently from normal materials. In essence these negative-index metamaterials were noted for having the ability to reverse many of the physical properties that govern the behavior of ordinary optical materials. One of those unusual properties is the ability to reverse, for the first time, Snell's law of refraction. Until the demonstration of negative refractive index for microwaves by the UCSD team, the material had been unavailable. Advances during the 1990s in fabrication and computation abilities allowed these first metamaterials to be constructed. Thus, the "new" metamaterial was tested for the effects described by Victor Veselago 30 years earlier. Studies of this experiment, which followed shortly thereafter, announced that other effects had occurred. With antiferromagnets and certain types of insulating ferromagnets, effective negative magnetic permeability is achievable when polariton resonance exists. To achieve a negative index of refraction, however, permittivity with negative values must occur within the same frequency range. The artificially fabricated split-ring resonator is a design that accomplishes this, along with the promise of dampening high losses. 
With this first introduction of the metamaterial, it appears that the losses incurred were smaller than antiferromagnetic, or ferromagnetic materials. When first demonstrated in 2000, the composite material (NIM) was limited to transmitting microwave radiation at frequencies of 4 to 7 gigahertz (4.28–7.49 cm wavelengths). This range is between the frequency of household microwave ovens (~2.45 GHz, 12.23 cm) and military radars (~10 GHz, 3 cm). At demonstrated frequencies, pulses of electromagnetic radiation moving through the material in one direction are composed of constituent waves moving in the opposite direction. The metamaterial was constructed as a periodic array of copper split ring and wire conducting elements deposited onto a circuit-board substrate. The design was such that the cells, and the lattice spacing between the cells, were much smaller than the radiated electromagnetic wavelength. Hence, it behaves as an effective medium. The material has become notable because its range of (effective) permittivity εeff and permeability μeff values have exceeded those found in any ordinary material. Furthermore, the characteristic of negative (effective) permeability evinced by this medium is particularly notable, because it has not been found in ordinary materials. In addition, the negative values for the magnetic component is directly related to its left-handed nomenclature, and properties (discussed in a section below). The split-ring resonator (SRR), based on the prior 1999 theoretical article, is the tool employed to achieve negative permeability. This first composite metamaterial is then composed of split-ring resonators and electrical conducting posts. Initially, these materials were only demonstrated at wavelengths longer than those in the visible spectrum. In addition, early NIMs were fabricated from opaque materials and usually made of non-magnetic constituents. As an illustration, however, if these materials are constructed at visible frequencies, and a flashlight is shone onto the resulting NIM slab, the material should focus the light at a point on the other side. This is not possible with a sheet of ordinary opaque material. In 2007, the NIST in collaboration with the Atwater Lab at Caltech created the first NIM active at optical frequencies. More recently (as of 2008), layered "fishnet" NIM materials made of silicon and silver wires have been integrated into optical fibers to create active optical elements. === Simultaneous negative permittivity and permeability === Negative permittivity εeff < 0 had already been discovered and realized in metals for frequencies all the way up to the plasma frequency, before the first metamaterial. There are two requirements to achieve a negative value for refraction. First, is to fabricate a material which can produce negative permeability μeff < 0. Second, negative values for both permittivity and permeability must occur simultaneously over a common range of frequencies. Therefore, for the first metamaterial, the nuts and bolts are one split-ring resonator electromagnetically combined with one (electric) conducting post. These are designed to resonate at designated frequencies to achieve the desired values. Looking at the make-up of the split ring, the associated magnetic field pattern from the SRR is dipolar. This dipolar behavior is notable because this means it mimics nature's atom, but on a much larger scale, such as in this case at 2.5 millimeters. Atoms exist on the scale of picometers. 
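As a quick, purely illustrative check of the scale separation invoked above, the free-space wavelength across the quoted 4–7 GHz band can be compared with the millimetre-scale resonators:

```python
# Quick arithmetic (illustrative only): free-space wavelength across the
# 4-7 GHz band quoted above, compared with the millimetre-scale unit cells,
# to make the "much smaller than the radiated wavelength" condition concrete.
c = 299_792_458.0  # speed of light, m/s

for f_ghz in (4.0, 7.0):
    wavelength_mm = c / (f_ghz * 1e9) * 1e3
    print(f"{f_ghz} GHz -> {wavelength_mm:.1f} mm")   # ~74.9 mm and ~42.8 mm

# A ~2.5 mm resonator therefore sits at roughly lambda/17 to lambda/30,
# comfortably inside the effective-medium (sub-wavelength) regime.
```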
The splits in the rings create a dynamic where the SRR unit cell can be made resonant at radiated wavelengths much larger than the diameter of the rings. If the rings were closed, a half wavelength boundary would be electromagnetically imposed as a requirement for resonance. The split in the second ring is oriented opposite to the split in the first ring. It is there to generate a large capacitance, which occurs in the small gap. This capacitance substantially decreases the resonant frequency while concentrating the electric field. The individual SRR depicted on the right had a resonant frequency of 4.845 GHz, and the resonance curve, inset in the graph, is also shown. The radiative losses from absorption and reflection are noted to be small, because the unit dimensions are much smaller than the free space, radiated wavelength. When these units or cells are combined into a periodic arrangement, the magnetic coupling between the resonators is strengthened, and a strong magnetic coupling occurs. Properties unique in comparison to ordinary or conventional materials begin to emerge. For one thing, this periodic strong coupling creates a material, which now has an effective magnetic permeability μeff in response to the radiated-incident magnetic field. === Composite material passband === Graphing the general dispersion curve, a region of propagation occurs from zero up to a lower band edge, followed by a gap, and then an upper passband. The presence of a 400 MHz gap between 4.2 GHz and 4.6 GHz implies a band of frequencies where μeff < 0 occurs. (Please see the image in the previous section) Furthermore, when wires are added symmetrically between the split rings, a passband occurs within the previously forbidden band of the split ring dispersion curves. That this passband occurs within a previously forbidden region indicates that the negative εeff for this region has combined with the negative μeff to allow propagation, which fits with theoretical predictions. Mathematically, the dispersion relation leads to a band with negative group velocity everywhere, and a bandwidth that is independent of the plasma frequency, within the stated conditions. Mathematical modeling and experiment have both shown that periodically arrayed conducting elements (non-magnetic by nature) respond predominantly to the magnetic component of incident electromagnetic fields. The result is an effective medium and negative μeff over a band of frequencies. The permeability was verified to be the region of the forbidden band, where the gap in propagation occurred – from a finite section of material. This was combined with a negative permittivity material, εeff < 0, to form a “left-handed” medium, which formed a propagation band with negative group velocity where previously there was only attenuation. This validated predictions. In addition, a later work determined that this first metamaterial had a range of frequencies over which the refractive index was predicted to be negative for one direction of propagation (see ref #). Other predicted electrodynamic effects were to be investigated in other research. === Describing a left-handed material === From the conclusions in the above section a left-handed material (LHM) can be defined. It is a material which exhibits simultaneous negative values for permittivity, ε, and permeability, μ, in an overlapping frequency region. 
Since the values are derived from the effects of the composite medium system as a whole, these are defined as effective permittivity, εeff, and effective permeability, μeff. Real values are then derived to denote the value of negative index of refraction, and wave vectors. This means that in practice losses will occur for a given medium used to transmit electromagnetic radiation such as microwave, or infrared frequencies, or visible light – for example. In this instance, real values describe either the amplitude or the intensity of a transmitted wave relative to an incident wave, while ignoring the negligible loss values. == Isotropic negative index in two dimensions == In the above sections first fabricated metamaterial was constructed with resonating elements, which exhibited one direction of incidence and polarization. In other words, this structure exhibited left-handed propagation in one dimension. This was discussed in relation to Veselago's seminal work 33 years earlier (1967). He predicted that intrinsic to a material, which manifests negative values of effective permittivity and permeability, are several types of reversed physics phenomena. Hence, there was then a critical need for a higher-dimensional LHMs to confirm Veselago's theory, as expected. The confirmation would include reversal of Snell's law (index of refraction), along with other reversed phenomena. In the beginning of 2001 the existence of a higher-dimensional structure was reported. It was two-dimensional and demonstrated by both experiment and numerical confirmation. It was an LHM, a composite constructed of wire strips mounted behind the split-ring resonators (SRRs) in a periodic configuration. It was created for the express purpose of being suitable for further experiments to produce the effects predicted by Veselago. == Experimental verification of a negative index of refraction == A theoretical work published in 1967 by Soviet physicist Victor Veselago showed that a refractive index with negative values is possible and that this does not violate the laws of physics. As discussed previously (above), the first metamaterial had a range of frequencies over which the refractive index was predicted to be negative for one direction of propagation. It was reported in May 2000. In 2001, a team of researchers constructed a prism composed of metamaterials (negative-index metamaterials) to experimentally test for negative refractive index. The experiment used a waveguide to help transmit the proper frequency and isolate the material. This test achieved its goal because it successfully verified a negative index of refraction. The experimental demonstration of negative refractive index was followed by another demonstration, in 2003, of a reversal of Snell's law, or reversed refraction. However, in this experiment negative index of refraction material is in free space from 12.6 to 13.2 GHz. Although the radiated frequency range is about the same, a notable distinction is this experiment is conducted in free space rather than employing waveguides. Furthering the authenticity of negative refraction, the power flow of a wave transmitted through a dispersive left-handed material was calculated and compared to a dispersive right-handed material. The transmission of an incident field, composed of many frequencies, from an isotropic nondispersive material into an isotropic dispersive media is employed. The direction of power flow for both nondispersive and dispersive media is determined by the time-averaged Poynting vector. 
Negative refraction was shown to be possible for multiple frequency signals by explicit calculation of the Poynting vector in the LHM. === Fundamental electromagnetic properties of the NIM === In a slab of conventional material with an ordinary refractive index – a right-handed material (RHM) – the wave front is transmitted away from the source. In a NIM the wavefront travels toward the source. However, the magnitude and direction of the flow of energy essentially remains the same in both the ordinary material and the NIM. Since the flow of energy remains the same in both materials (media), the impedance of the NIM matches the RHM. Hence, the sign of the intrinsic impedance is still positive in a NIM. Light incident on a left-handed material, or NIM, will bend to the same side as the incident beam, and for Snell's law to hold, the refraction angle should be negative. In a passive metamaterial medium this determines a negative real and imaginary part of the refractive index. === Negative refractive index in left-handed materials === In 1968 Victor Veselago's paper showed that the opposite directions of EM plane waves and the flow of energy was derived from the individual Maxwell curl equations. In ordinary optical materials, the curl equation for the electric field show a "right hand rule" for the directions of the electric field E, the magnetic induction B, and wave propagation, which goes in the direction of wave vector k. However, the direction of energy flow formed by E × H is right-handed only when permeability is greater than zero. This means that when permeability is less than zero, e.g. negative, wave propagation is reversed (determined by k), and contrary to the direction of energy flow. Furthermore, the relations of vectors E, H, and k form a "left-handed" system – and it was Veselago who coined the term "left-handed" (LH) material, which is in wide use today (2011). He contended that an LH material has a negative refractive index and relied on the steady-state solutions of Maxwell's equations as a center for his argument. After a 30-year void, when LH materials were finally demonstrated, it could be said that the designation of negative refractive index is unique to LH systems; even when compared to photonic crystals. Photonic crystals, like many other known systems, can exhibit unusual propagation behavior such as reversal of phase and group velocities. But, negative refraction does not occur in these systems, and not yet realistically in photonic crystals. === Negative refraction at optical frequencies === The negative refractive index in the optical range was first demonstrated in 2005 by Shalaev et al. (at the telecom wavelength λ = 1.5 μm) and by Brueck et al. (at λ = 2 μm) at nearly the same time. In 2006, a Caltech team led by Lezec, Dionne, and Atwater achieved negative refraction in the visible spectral regime. == Reversed Cherenkov radiation == Besides reversed values for the index of refraction, Veselago predicted the occurrence of reversed Cherenkov radiation in a left-handed medium. Whereas ordinary Cherenkov radiation is emitted in a cone around the direction in which a charged particle is travelling through the medium, reversed Cherenkov radiation is emitted in a cone around the opposite direction. Reversed Cherenkov radiation was first experimentally demonstrated indirectly in 2009, using a phased electromagnetic dipole array to model a moving charged particle. Reversed Cherenkov radiation emitted by actual charged particles was first observed in 2017. 
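The reversal of the emission cone can be illustrated with the textbook Cherenkov relation cos θc = 1/(nβ), which is not quoted in this article but follows from standard electrodynamics; the index and particle speed below are illustrative values only.

```python
# Sketch using the standard Cherenkov relation cos(theta_c) = 1 / (n * beta)
# (not quoted in the article) to show why a negative index points the
# emission cone backwards. Numerical values are illustrative.
import math

def cherenkov_angle_deg(n: float, beta: float) -> float:
    """Angle between the particle velocity and the emitted radiation."""
    cos_theta = 1.0 / (n * beta)
    if abs(cos_theta) > 1.0:
        raise ValueError("below the Cherenkov threshold")
    return math.degrees(math.acos(cos_theta))

beta = 0.9                                   # particle speed as a fraction of c
print(cherenkov_angle_deg(+1.5, beta))       # ~42 deg: forward cone (ordinary medium)
print(cherenkov_angle_deg(-1.5, beta))       # ~138 deg: cone points backwards (NIM)
```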
== Other optics with NIMs == Theoretical work, along with numerical simulations, began in the early 2000s on the abilities of DNG slabs for subwavelength focusing. The research began with Pendry's proposed "Perfect lens." Several research investigations that followed Pendry's concluded that the "Perfect lens" was possible in theory but impractical. One direction in subwavelength focusing proceeded with the use of negative-index metamaterials, but based on the enhancements for imaging with surface plasmons. In another direction researchers explored paraxial approximations of NIM slabs. == Implications of negative refractive materials == The existence of negative refractive materials can result in a change in electrodynamic calculations for the case of permeability μ = 1 . A change from a conventional refractive index to a negative value gives incorrect results for conventional calculations, because some properties and effects have been altered. When permeability μ has values other than 1 this affects Snell's law, the Doppler effect, the Cherenkov radiation, Fresnel's equations, and Fermat's principle. The refractive index is basic to the science of optics. Shifting the refractive index to a negative value may be a cause to revisit or reconsider the interpretation of some norms, or basic laws. == US patent on left-handed composite media == The first US patent for a fabricated metamaterial, titled "Left handed composite media" by David R. Smith, Sheldon Schultz, Norman Kroll and Richard A. Shelby, was issued in 2004. The invention achieves simultaneous negative permittivity and permeability over a common band of frequencies. The material can integrate media which is already composite or continuous, but which will produce negative permittivity and permeability within the same spectrum of frequencies. Different types of continuous or composite may be deemed appropriate when combined for the desired effect. However, the inclusion of a periodic array of conducting elements is preferred. The array scatters electromagnetic radiation at wavelengths longer than the size of the element and lattice spacing. The array is then viewed as an effective medium. == See also == Academic journals Metamaterials Metamaterials books Metamaterials Handbook Metamaterials: Physics and Engineering Explorations == Notes == This article incorporates public domain material from websites or documents of the United States government. -NIST == References == == Further reading == S. Anantha Ramakrishna; Tomasz M. Grzegorczyk (2008). Physics and Applications of Negative Refractive Index Materials (PDF). CRC Press. doi:10.1201/9781420068764.ch1 (inactive 2024-11-12). ISBN 978-1-4200-6875-7. Archived from the original (PDF) on 2016-03-03.{{cite book}}: CS1 maint: DOI inactive as of November 2024 (link) Ramakrishna, S Anantha (2005). "Physics of negative refractive index materials". Reports on Progress in Physics. 68 (2): 449. Bibcode:2005RPPh...68..449R. doi:10.1088/0034-4885/68/2/R06. S2CID 250829241. Pendry, J.; Holden, A.; Stewart, W.; Youngs, I. (1996). "Extremely Low Frequency Plasmons in Metallic Mesostructures" (PDF). Physical Review Letters. 76 (25): 4773–4776. Bibcode:1996PhRvL..76.4773P. doi:10.1103/PhysRevLett.76.4773. PMID 10061377. Archived from the original (PDF) on 2011-07-17. Retrieved 2011-08-18. Pendry, J B; Holden, A J; Robbins, D J; Stewart, W J (1998). "Low frequency plasmons in thin-wire structures" (PDF). Journal of Physics: Condensed Matter. 10 (22): 4785–4809. Bibcode:1998JPCM...10.4785P. 
doi:10.1088/0953-8984/10/22/007. S2CID 250891354. Also see the Preprint-author's copy. Padilla, Willie J.; Basov, Dimitri N.; Smith, David R. (2006). "Negative refractive index metamaterials". Materials Today. 9 (7–8): 28. doi:10.1016/S1369-7021(06)71573-5. Bayindir, Mehmet; Aydin, K.; Ozbay, E.; Markoš, P.; Soukoulis, C. M. (2002-07-01). "Transmission properties of composite metamaterials in free space" (PDF). Applied Physics Letters. 81 (1): 120. Bibcode:2002ApPhL..81..120B. doi:10.1063/1.1492009. hdl:11693/24684. == External links == Manipulating the Near Field with Metamaterials Slide show, with audio available, by Dr. John Pendry, Imperial College, London Laszlo Solymar; Ekaterina Shamonina (2009-03-15). Waves in Metamaterials. Oxford University Press, USA. March 2009. ISBN 978-0-19-921533-1. "Illustrating the Law of Refraction". Young, Andrew T. (1999–2009). "An Introduction to Mirages". SDSU San Diego, CA. Retrieved 2009-08-12. Garrett, C.; et al. (1969-09-25). "Propagation of a Gaussian Light Pulse through an Anomalous Dispersion Medium". Phys. Rev. A. 1 (2): 305–313. Bibcode:1970PhRvA...1..305G. doi:10.1103/PhysRevA.1.305. List of science website news stories on Left Handed Materials Caloz, Christophe (March 2009). "Perspectives on EM metamaterials". Materials Today. 12 (3): 12–20. doi:10.1016/S1369-7021(09)70071-9.
Wikipedia/Negative_index_metamaterials
Biomaterials are materials that are used in contact with biological systems. The biocompatibility and applicability of surface modification to current metallic, polymeric and ceramic biomaterials allow their properties to be altered to enhance performance in a biological environment while retaining the bulk properties of the desired device. Surface modification involves the fundamentals of physicochemical interactions between the biomaterial and the physiological environment at the molecular, cellular and tissue levels (for example, reducing bacterial adhesion and promoting cell adhesion). Various methods exist for the characterization and surface modification of biomaterials, and the underlying concepts find useful application in several biomedical solutions. == Function == The function of surface modification is to change the physical and chemical properties of surfaces to improve the functionality of the original material. Protein surface modification of various types of biomaterials (ceramics, polymers, metals, composites) is performed to ultimately increase the biocompatibility of the material and allow it to interact as a bioactive material for specific applications. In developing implantable medical devices (such as pacemakers and stents), the surface properties of a material and its interactions with proteins must be evaluated with regard to biocompatibility, as they play a major role in determining the biological response. For instance, the surface hydrophobicity or hydrophilicity of a material can be altered. Engineering biocompatibility between the physiological environment and the surface material allows new medical products, materials and surgical procedures with additional biofunctionality. Surface modification can be done through various methods, which can be classified into three main groups: physical (physical adsorption, Langmuir–Blodgett film), chemical (oxidation by strong acids, ozone treatment, chemisorption, and flame treatment) and radiation (glow discharge, corona discharge, photo activation (UV), laser, ion beam, plasma immersion ion implantation, electron beam lithography, and γ-irradiation). === Biocompatibility === From a biomedical perspective, biocompatibility is the ability of a material to perform with an appropriate host response in a specific application. A biocompatible material is non-toxic, induces no adverse reactions such as a chronic inflammatory response with unusual tissue formation, and is designed to function properly for a reasonable lifetime. It is a requirement of biomaterials that the surface-modified material will cause no harm to the host, and that the material itself will not be harmed by the host. Although most synthetic biomaterials have physical properties that meet or even exceed those of natural tissue, they often result in an unfavorable physiological reaction such as thrombus formation, inflammation and infection. Biointegration is the ultimate goal in, for example, orthopedic implants, in which bone establishes a mechanically solid interface with complete fusion between the artificial implanted material and bone tissue under conditions of good biocompatibility. Modifying the surface of a material can improve its biocompatibility, and can be done without changing its bulk properties. The properties of the uppermost molecular layers are critical in biomaterials, since these surface layers are in physicochemical contact with the biological environment. 
Furthermore, although some biomaterials have good biocompatibility, they may possess poor mechanical or physical properties such as wear resistance, corrosion resistance, wettability or lubricity. In these cases, surface modification is used to deposit a coating layer, or to mix a coating with the substrate to form a composite layer. === Cell adhesion === Because proteins are made up of different sequences of amino acids, they can take on various functions, as their structural shape, driven by a number of molecular bonds, can change. Amino acids exhibit different characteristics, such as being polar, non-polar, or positively or negatively charged, determined by their different side chains. Thus, attachment of protein molecules, for example those containing Arginine-Glycine-Aspartate (RGD) sequences, is expected to modify the surface of tissue scaffolds and improve cell adhesion when the scaffold is placed into its physiological environment. Additional modification of the surface can be achieved by attaching functional groups in 2D or 3D patterns on the surface, so that cell alignment is guided and new tissue formation is improved. === Biomedical materials === Some of the surface modification techniques listed above are used particularly for certain functions or kinds of materials. One of the advantages of plasma immersion ion implantation is its ability to treat most materials. Ion implantation is an effective surface treatment technique that can be used to enhance the surface properties of biomaterials. The unique advantage of plasma modification is that the surface properties and biocompatibility can be enhanced selectively while the favorable bulk attributes of the material, such as strength, remain unchanged. Overall, it is an effective method for modifying medical implants with complex shapes. By altering the surface functionalities using plasma modification, optimal surface chemical and physical properties can be obtained. Plasma immersion ion implantation is a technique suitable for low-melting-point materials such as polymers, and is widely accepted to improve adhesion between pinhole-free layers and substrates. The ultimate goal is to enhance the properties of biomaterials, such as biocompatibility, corrosion resistance and functionality, by fabricating different types of biomedical thin films with various biologically important elements, such as nitrogen, calcium, and sodium, implanted into them. Different thin films such as titanium oxide, titanium nitride, and diamond-like carbon have been fabricated previously, and results show that the processed materials exhibit better biocompatibility than some of those currently used in biomedical implants. In order to evaluate the biocompatibility of the fabricated thin films, various in vitro tests in a biological environment need to be conducted. == Biological response == The immune system will react differently if an implant is coated in extra-cellular matrix proteins. The proteins surrounding the implant serve to "hide" the implant from the innate immune system. However, if the implant is coated in allergenic proteins, the patient's adaptive immune response may be initiated. To prevent such a negative immune reaction, immunosuppressive drugs may be prescribed, or autologous tissue may be used to produce the protein coating. === Acute response === Immediately following insertion, an implant (and the tissue damage from surgery) will result in acute inflammation. 
The classic signs of acute inflammation are redness, swelling, heat, pain, and loss of function. Hemorrhaging from tissue damage results in clotting which stimulates latent mast cells. The mast cells release chemokines which activate blood vessel endothelium. The blood vessels dilate and become leaky, producing the redness and swelling associated with acute inflammation. The activated endothelium allows extravasation of blood plasma and white blood cells including macrophages which transmigrate to the implant and recognize it as non-biologic. Macrophages release oxidants to combat the foreign body. If antioxidants fail to destroy the foreign body, chronic inflammation begins. === Chronic response === Implantation of non-degradable materials will eventually result in chronic inflammation and fibrous capsule formation. Macrophages that fail to destroy pathogens will merge to form a foreign-body giant cell which quarantines the implant. High levels of oxidants cause fibroblasts to secrete collagen, forming a layer of fibrous tissue around the implant. By coating an implant with extracellular matrix proteins, macrophages will be unable to recognize the implant as non-biologic. The implant is then capable of continued interaction with the host, influencing the surrounding tissue toward various outcomes. For instance, the implant may improve healing by secreting angiogenic drugs. == Fabrication techniques == === Physical modification === Physical immobilization is simply coating a material with a biomimetic material without changing the structure of either. Various biomimetic materials with cell adhesive proteins (such as collagen or laminin) have been used in vitro to direct new tissue formation and cell growth. Cell adhesion and proliferation occurs much better on protein-coated surfaces. However, since the proteins are generally isolated, it is more likely to elicit an immune response. Generally, chemistry qualities should be taken into consideration. === Chemical modification === Alkali hydrolysis, covalent immobilization, and the wet chemical method are only three of the many ways to chemically modify a surface. The surface is prepped with surface activation, where several functionalities are placed on the polymer to react better with the proteins. In alkali hydrolysis, small protons diffuse between polymer chains and cause surface hydrolysis which cleaves ester bonds. This results in the formation of carboxyl and hydroxyl functionalities which can attach to proteins. In covalent immobilization, small fragments of proteins or short peptides are bonded to the surface. The peptides are highly stable and studies have shown that this method improves biocompatibility. The wet chemical method is one of the preferred methods of protein immobilization. Chemical species are dissolved in an organic solution where reactions take place to reduce the hydrophobic nature of the polymer. Surface stability is higher in chemical modification than in physical adsorption. It also offers higher biocompatibility towards cell growth and bodily fluid flow. === Photochemical modification === Successful attempts at grafting biomolecules onto polymers have been made using photochemical modification of biomaterials. These techniques employ high energy photons (typically UV) to break chemical bonds and release free radicals. Protein adhesion can be encouraged by favorably altering the surface charge of a biomaterial. Improved protein adhesion leads to better integration between the host and the implant. Ma et al. 
compared cell adhesion for various surface groups and found that OH and CONH2 improved PLLA wettability more than COOH. Applying a mask to the surface of the biomaterial allows selective surface modification. Areas that UV light penetrate will be modified such that cells will adhere to the region more favorably. The minimum feature size attainable is given by: C D = k 1 ⋅ λ N A {\displaystyle CD=k_{1}\cdot {\frac {\lambda }{NA}}} where C D {\displaystyle \,CD} is the minimum feature size k 1 {\displaystyle \,k_{1}} (commonly called k1 factor) is a coefficient that encapsulates process-related factors, and typically equals 0.4 for production. λ {\displaystyle \,\lambda } is the wavelength of light used N A {\displaystyle \,NA} is the numerical aperture of the lens as seen from the wafer According to this equation, greater resolution can be obtained by decreasing the wavelength, and increasing the numerical aperture. === Composites and graft formation === Graft formation improves the overall hydrophilicity of the material through a ratio of how much glycolic acid and lactic acid is added. Block polymer, or PLGA, decreases hydrophobicity of the surface by controlling the amount of glycolic acid. However, this doesn't increase the hydrophilic tendency of the material. In brush grafting, hydrophilic polymers containing alcohol or hydroxyl groups are placed onto surfaces through photopolymerization. === Plasma treatment === Plasma techniques are especially useful because they can deposit ultra thin (a few nm), adherent, conformal coatings. Glow discharge plasma is created by filling a vacuum with a low-pressure gas (ex. argon, ammonia, or oxygen). The gas is then excited using microwaves or current which ionizes it. The ionized gas is then thrown onto a surface at a high velocity where the energy produced physically and chemically changes the surface. After the changes occur, the ionized plasma gas is able to react with the surface to make it ready for protein adhesion. However, the surfaces may lose mechanical strength or other inherent properties because of the high amounts of energy. Several plasma-based technologies have been developed to contently immobilize proteins depending on the final application of the resulting biomaterial. This technique is a relatively fast approach to produce smart bioactive surfaces. == Applications == === Bone tissue === Extra-cellular matrix (ECM) proteins greatly dictate the process of bone formation—the attachment and proliferation of osteogenitor cells, differentiation to osteoblasts, matrix formation, and mineralization. It is beneficial to design biomaterials for bone-contacting devices with bone matrix proteins to promote bone growth. It is also possible to covalently and directionally immobilize osteoinductive peptides in the surface of the ceramic materials such as hydroxyapatite/β-tricalcium phosphate to stimulate osteoblast differentiation and better bone regeneration RGD peptides have been shown to increase the attachment and migration of osteoblasts on titanium implants, polymeric materials, and glass. Other adhesive peptides that can be recognized by molecules in the cell membrane can also affect binding of bone-derived cells. Particularly, the heparin binding domain in fibronectin is actively involved in specific interaction with osteogenic cells. Modification with heparin binding domains have the potential to enhance the binding of osteoblasts without affecting the attachment of endothelial cells and fibroblasts. 
Additionally, growth factors such as those in the bone morphogenic protein family are important polypeptides to induce bone formation. These growth factors can be covalently bound to materials to enhance the osteointegration of implants. === Neural tissue === Peripheral nervous system damage is typically treated by an autograft of nerve tissue to bridge a severed gap. This treatment requires successful regeneration of neural tissue; axons must grow from the proximal stump without interference in order to make a connection with the distal stump. Neural guidance channels (NGC), have been designed as a conduit for growth of new axons and the differentiation and morphogenesis of these tissues is affected by interaction between neural cells and the surrounding ECM. Studies of laminin have shown the protein to be an important ECM protein in the attachment of neural cells. The penta-peptide YIGSR and IKVAV, which are important sequences in laminin, have been shown to increase attachment of neural cells with the ability to control the spatial organization of the cells. === Cardiovascular tissue === It is important that cardiovascular devices such as stents or artificial vascular grafts be designed to mimic properties of the specific tissue region the device is serving to replace. In order to reduce thrombogenicity, surfaces can be coated with fibronectin and RGD containing peptides, which encourages attachment of endothelial cells. The peptides YIGSR and REDV have also been shown to enhance attachment and spreading of endothelial cells and ultimately reduce the thrombogenicity of the implant. == See also == Bovine Submaxillary Mucin Coatings == References ==
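As a side note on the photochemical modification section above, the quoted resolution formula CD = k1·λ/NA is easy to evaluate numerically. The wavelength and numerical aperture below are assumed, illustrative values, not figures taken from this article.

```python
# Small numerical sketch of the resolution formula CD = k1 * lambda / NA quoted
# in the photochemical-modification section above. The wavelength and numerical
# aperture are assumed illustrative values.

def min_feature_size_nm(wavelength_nm: float, na: float, k1: float = 0.4) -> float:
    """Minimum printable feature size for a projection exposure system."""
    return k1 * wavelength_nm / na

print(min_feature_size_nm(365.0, 0.5))   # ~292 nm with near-UV (i-line) exposure
print(min_feature_size_nm(365.0, 0.9))   # ~162 nm: a larger NA improves resolution
```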
Wikipedia/Surface_modification_of_biomaterials_with_proteins
The Langmuir adsorption model explains adsorption by assuming an adsorbate behaves as an ideal gas at isothermal conditions. According to the model, adsorption and desorption are reversible processes. This model even explains the effect of pressure; i.e., at these conditions the adsorbate's partial pressure p A {\displaystyle p_{A}} is related to its volume V adsorbed onto a solid adsorbent. The adsorbent, as indicated in the figure, is assumed to be an ideal solid surface composed of a series of distinct sites capable of binding the adsorbate. The adsorbate binding is treated as a chemical reaction between the adsorbate gaseous molecule A g {\displaystyle A_{\text{g}}} and an empty sorption site S. This reaction yields an adsorbed species A ad {\displaystyle A_{\text{ad}}} with an associated equilibrium constant K eq {\displaystyle K_{\text{eq}}} : A g + S ↽ − − ⇀ A ad {\displaystyle {\ce {A_{g}{}+ S <=> A_{ad}}}} . From these basic hypotheses the mathematical formulation of the Langmuir adsorption isotherm can be derived in various independent and complementary ways: by the kinetics, the thermodynamics, and the statistical mechanics approaches respectively (see below for the different demonstrations). The Langmuir adsorption equation is θ A = V V m = K eq A p A 1 + K eq A p A , {\displaystyle \theta _{A}={\frac {V}{V_{\text{m}}}}={\frac {K_{\text{eq}}^{A}\,p_{A}}{1+K_{\text{eq}}^{A}\,p_{A}}},} where θ A {\displaystyle \theta _{A}} is the fractional occupancy of the adsorption sites, i.e., the ratio of the volume V of gas adsorbed onto the solid to the volume V m {\displaystyle V_{\text{m}}} of a gas molecules monolayer covering the whole surface of the solid and completely occupied by the adsorbate. A continuous monolayer of adsorbate molecules covering a homogeneous flat solid surface is the conceptual basis for this adsorption model. == Background and experiments == In 1916, Irving Langmuir presented his model for the adsorption of species onto simple surfaces. Langmuir was awarded the Nobel Prize in 1932 for his work concerning surface chemistry. He hypothesized that a given surface has a certain number of equivalent sites to which a species can "stick", either by physisorption or chemisorption. His theory began when he postulated that gaseous molecules do not rebound elastically from a surface, but are held by it in a similar way to groups of molecules in solid bodies. Langmuir published two papers that confirmed the assumption that adsorbed films do not exceed one molecule in thickness. The first experiment involved observing electron emission from heated filaments in gases. The second, a more direct evidence, examined and measured the films of liquid onto an adsorbent surface layer. He also noted that generally the attractive strength between the surface and the first layer of adsorbed substance is much greater than the strength between the first and second layer. However, there are instances where the subsequent layers may condense given the right combination of temperature and pressure. == Basic assumptions of the model == Inherent within this model, the following assumptions are valid specifically for the simplest case: the adsorption of a single adsorbate onto a series of equivalent sites onto the surface of the solid. The surface containing the adsorbing sites is a perfectly flat plane with no corrugations (assume the surface is homogeneous). 
However, chemically heterogeneous surfaces can be considered to be homogeneous if the adsorbate is bound to only one type of functional groups on the surface. The adsorbing gas adsorbs into an immobile state. All sites are energetically equivalent, and the energy of adsorption is equal for all sites. Each site can hold at most one molecule (mono-layer coverage only). No (or ideal) interactions between adsorbate molecules on adjacent sites. When the interactions are ideal, the energy of side-to-side interactions is equal for all sites regardless of the surface occupancy. == Derivations of the Langmuir adsorption isotherm == The mathematical expression of the Langmuir adsorption isotherm involving only one sorbing species can be demonstrated in different ways: the kinetics approach, the thermodynamics approach, and the statistical mechanics approach respectively. In case of two competing adsorbed species, the competitive adsorption model is required, while when a sorbed species dissociates into two distinct entities, the dissociative adsorption model need to be used. === Kinetic derivation === This section provides a kinetic derivation for a single-adsorbate case. The kinetic derivation applies to gas-phase adsorption. The multiple-adsorbate case is covered in the competitive adsorption sub-section. The model assumes adsorption and desorption as being elementary processes, where the rate of adsorption rad and the rate of desorption rd are given by r ad = k ad p A [ S ] , {\displaystyle r_{\text{ad}}=k_{\text{ad}}p_{A}[S],} r d = k d [ A ad ] , {\displaystyle r_{\text{d}}=k_{d}[A_{\text{ad}}],} where pA is the partial pressure of A over the surface, [S] is the concentration of free sites in number/m2, [Aad] is the surface concentration of A in molecules/m2 (concentration of occupied sites), and kad and kd are constants of forward adsorption reaction and backward desorption reaction in the above reactions. At equilibrium, the rate of adsorption equals the rate of desorption. Setting rad = rd and rearranging, we obtain [ A ad ] p A [ S ] = k ad k d = K eq A . {\displaystyle {\frac {[A_{\text{ad}}]}{p_{A}[S]}}={\frac {k_{\text{ad}}}{k_{\text{d}}}}=K_{\text{eq}}^{A}.} The concentration of sites is given by dividing the total number of sites (S0) covering the whole surface by the area of the adsorbent (a): [ S 0 ] = S 0 / a . {\displaystyle [S_{0}]=S_{0}/a.} We can then calculate the concentration of all sites by summing the concentration of free sites [S] and occupied sites: [ S 0 ] = [ S ] + [ A ad ] . {\displaystyle [S_{0}]=[S]+[A_{\text{ad}}].} Combining this with the equilibrium equation, we get [ S 0 ] = [ A ad ] K eq A p A + [ A ad ] = 1 + K eq A p A K eq A p A [ A ad ] . {\displaystyle [S_{0}]={\frac {[A_{\text{ad}}]}{K_{\text{eq}}^{A}p_{A}}}+[A_{\text{ad}}]={\frac {1+K_{\text{eq}}^{A}p_{A}}{K_{\text{eq}}^{A}p_{A}}}[A_{\text{ad}}].} We define now the fraction of the surface sites covered with A as θ A = [ A ad ] [ S 0 ] . {\displaystyle \theta _{A}={\frac {[A_{\text{ad}}]}{[S_{0}]}}.} This, applied to the previous equation that combined site balance and equilibrium, yields the Langmuir adsorption isotherm: θ A = K eq A p A 1 + K eq A p A . {\displaystyle \theta _{A}={\frac {K_{\text{eq}}^{A}p_{A}}{1+K_{\text{eq}}^{A}p_{A}}}.} === Thermodynamic derivation === In condensed phases (solutions), adsorption to a solid surface is a competitive process between the solvent (A) and the solute (B) to occupy the binding site. 
=== Thermodynamic derivation === In condensed phases (solutions), adsorption to a solid surface is a competitive process between the solvent (A) and the solute (B) to occupy the binding site. The thermodynamic equilibrium is described as Solvent (bound) + Solute (free) ↔ Solvent (free) + Solute (bound). If we designate the solvent by the subscript "1" and the solute by "2", and the bound state by the superscript "s" (surface/bound) and the free state by the superscript "b" (bulk solution / free), then the equilibrium constant can be written as a ratio between the activities of products over reactants: K = a 1 b × a 2 s a 2 b × a 1 s . {\displaystyle K={\frac {a_{1}^{\text{b}}\times a_{2}^{\text{s}}}{a_{2}^{\text{b}}\times a_{1}^{\text{s}}}}.} For dilute solutions the activity of the solvent in bulk solution a 1 b ≃ 1 , {\displaystyle a_{1}^{\text{b}}\simeq 1,} and the activity coefficients ( γ {\displaystyle \gamma } ) are also assumed to be ideal on the surface. Thus, a 2 s = X 2 s = θ , {\displaystyle a_{2}^{\text{s}}=X_{2}^{\text{s}}=\theta ,} a 1 s = X 1 s , {\displaystyle a_{1}^{\text{s}}=X_{1}^{\text{s}},} and X 1 s + X 2 s = 1 , {\displaystyle X_{1}^{\text{s}}+X_{2}^{\text{s}}=1,} where X i {\displaystyle X_{i}} are mole fractions. Re-writing the equilibrium constant and solving for θ {\displaystyle \theta } yields θ = K a 2 b 1 + K a 2 b . {\displaystyle \theta ={\frac {Ka_{2}^{\text{b}}}{1+Ka_{2}^{\text{b}}}}.} Note that the concentration of the solute adsorbate can be used instead of its activity. However, the equilibrium constant will then no longer be dimensionless and will have units of reciprocal concentration instead. The difference between the kinetic and thermodynamic derivations of the Langmuir model is that the thermodynamic derivation uses activities as its starting point while the kinetic derivation uses rates of reaction. The thermodynamic derivation allows for the activity coefficients of adsorbates in their bound and free states to be included. The thermodynamic derivation is usually referred to as the "Langmuir-like equation". === Statistical mechanical derivation === This derivation based on statistical mechanics was originally provided by Volmer and Mahnert in 1925. The partition function of a finite number of adsorbate molecules adsorbed on a surface, in a canonical ensemble, is given by Z ( N A ) = [ ζ L N A N S ! ( N S − N A ) ! ] 1 N A ! , {\displaystyle Z(N_{A})=\left[\zeta _{L}^{N_{A}}{\frac {N_{S}!}{(N_{S}-N_{A})!}}\right]{\frac {1}{N_{A}!}},} where ζ L {\displaystyle \zeta _{L}} is the partition function of a single adsorbed molecule, N S {\displaystyle N_{S}} is the number of adsorption sites (both occupied and unoccupied), and N A {\displaystyle N_{A}} is the number of adsorbed molecules, which should be less than or equal to N S {\displaystyle N_{S}} . The terms in the bracket give the total partition function of the N A {\displaystyle N_{A}} adsorbed molecules by taking a product of the individual partition functions (refer to Partition function of subsystems). The 1 / N A ! {\displaystyle 1/N_{A}!} factor accounts for the overcounting arising due to the indistinguishable nature of the adsorbates. The grand canonical partition function is given by Z ( μ A ) = ∑ N A = 0 N S exp ( N A μ A k B T ) ζ L N A N A ! N S ! ( N S − N A ) ! . {\displaystyle {\mathcal {Z}}(\mu _{A})=\sum _{N_{A}=0}^{N_{S}}\exp \left({\frac {N_{A}\mu _{A}}{k_{\text{B}}T}}\right){\frac {\zeta _{L}^{N_{A}}}{N_{A}!}}\,{\frac {N_{S}!}{(N_{S}-N_{A})!}}.} μ A {\displaystyle \mu _{A}} is the chemical potential of an adsorbed molecule. 
As it has the form of a binomial series, the summation is reduced to Z ( μ A ) = ( 1 + x ) N S , {\displaystyle {\mathcal {Z}}(\mu _{A})=(1+x)^{N_{S}},} where x = ζ L exp ( μ A k B T ) . {\displaystyle x=\zeta _{L}\exp \left({\frac {\mu _{A}}{k_{\rm {B}}T}}\right).} The grand canonical potential is Ω = − k B T ln ( Z ) = − k B T N S ln ( 1 + x ) , {\displaystyle \Omega =-k_{\rm {B}}T\ln({\mathcal {Z}})=-k_{\rm {B}}TN_{S}\ln(1+x),} based on which the average number of occupied sites is calculated ⟨ N A ⟩ = − ( ∂ Ω ∂ μ A ) T , area , {\displaystyle \langle N_{A}\rangle =-\left({\frac {\partial \Omega }{\partial \mu _{A}}}\right)_{T,{\text{area}}},} which gives the coverage θ A = ⟨ N A ⟩ N S = x 1 + x . {\displaystyle \theta _{A}={\frac {\langle N_{A}\rangle }{N_{S}}}={\frac {x}{1+x}}.} Now, invoking the condition that the system is in equilibrium, that is, the chemical potential of the adsorbed molecules is equal to that of the molecules in the gas phase, we have μ A = μ g , {\displaystyle \mu _{A}=\mu _{\text{g}},} The chemical potential of an ideal gas is μ g = ( ∂ A g ∂ N ) T , V {\displaystyle \mu _{\text{g}}=\left({\frac {\partial A_{\text{g}}}{\partial N}}\right)_{T,V}} where A g = − k B T ln Z g {\displaystyle A_{g}=-k_{\rm {B}}T\ln Z_{g}} is the Helmholtz free energy of an ideal gas with its partition function Z g = q N N ! . {\displaystyle Z_{g}={\frac {q^{N}}{N!}}.} q {\displaystyle q} is the partition function of a single particle in the volume V {\displaystyle V} (considering only the translational degrees of freedom here). q = V ( 2 π m k B T h 2 ) 3 / 2 . {\displaystyle q=V\left({\frac {2\pi mk_{\rm {B}}T}{h^{2}}}\right)^{3/2}.} We thus have μ g = − k B T ln ( q / N ) {\displaystyle \mu _{g}=-k_{\rm {B}}T\ln(q/N)} , where we use Stirling's approximation. Plugging μ g {\displaystyle \mu _{g}} into the expression for x {\displaystyle x} , we have θ A 1 − θ A = x = ζ L N q {\displaystyle {\frac {\theta _{A}}{1-\theta _{A}}}=x=\zeta _{L}{\frac {N}{q}}} which gives the coverage θ A = ζ L / ( q / N ) 1 + ζ L / ( q / N ) {\displaystyle \theta _{A}={\frac {\zeta _{L}/(q/N)}{1+\zeta _{L}/(q/N)}}} By defining P 0 = k B T ζ L ( 2 π m k B T h 2 ) 3 / 2 {\displaystyle P_{0}={\frac {k_{\text{B}}T}{\zeta _{L}}}\left({\frac {2\pi mk_{\text{B}}T}{h^{2}}}\right)^{3/2}} and using the identity P V = N k B T {\displaystyle PV=Nk_{\rm {B}}T} , finally, we have θ A = P P + P 0 . {\displaystyle \theta _{A}={\frac {P}{P+P_{0}}}.} It is plotted in the accompanying figure, demonstrating that the surface coverage increases quite rapidly with the partial pressure of the adsorbates but levels off after P reaches P0. === Competitive adsorption === The previous derivations assumed that there is only one species, A, adsorbing onto the surface. This section considers the case when there are two distinct adsorbates present in the system. Consider two species A and B that compete for the same adsorption sites. The following hypotheses are made here: All the sites are equivalent. Each site can hold at most one molecule of A, or one molecule of B, but not both simultaneously. There are no interactions between adsorbate molecules on adjacent sites. As derived using kinetic considerations, the equilibrium constants for both A and B are given by [ A ad ] p A [ S ] = K eq A {\displaystyle {\frac {[A_{\text{ad}}]}{p_{A}\,[S]}}=K_{\text{eq}}^{A}} and [ B ad ] p B [ S ] = K eq B . 
{\displaystyle {\frac {[B_{\text{ad}}]}{p_{B}\,[S]}}=K_{\text{eq}}^{B}.} The site balance states that the concentration of total sites [S0] is equal to the sum of free sites, sites occupied by A and sites occupied by B: [ S 0 ] = [ S ] + [ A ad ] + [ B ad ] . {\displaystyle [S_{0}]=[S]+[A_{\text{ad}}]+[B_{\text{ad}}].} Inserting the equilibrium equations and rearranging in the same way we did for the single-species adsorption, we get similar expressions for both θA and θB: θ A = K eq A p A 1 + K eq A p A + K eq B p B , {\displaystyle \theta _{A}={\frac {K_{\text{eq}}^{A}\,p_{A}}{1+K_{\text{eq}}^{A}\,p_{A}+K_{\text{eq}}^{B}\,p_{B}}},} θ B = K eq B p B 1 + K eq A p A + K eq B p B . {\displaystyle \theta _{B}={\frac {K_{\text{eq}}^{B}\,p_{B}}{1+K_{\text{eq}}^{A}\,p_{A}+K_{\text{eq}}^{B}\,p_{B}}}.} === Dissociative adsorption === The other case of special importance is when a molecule D2 dissociates into two atoms upon adsorption. Here, the following assumptions are made: D2 completely dissociates into two atoms of D upon adsorption. The D atoms adsorb onto distinct sites on the surface of the solid and then move around and equilibrate. All sites are equivalent. Each site can hold at most one atom of D. There are no interactions between adsorbate molecules on adjacent sites. Using similar kinetic considerations, we get [ D ad ] p D 2 1 / 2 [ S ] = K eq D . {\displaystyle {\frac {[D_{\text{ad}}]}{p_{D_{2}}^{1/2}[S]}}=K_{\text{eq}}^{D}.} The 1/2 exponent on pD2 arises because one gas phase molecule produces two adsorbed species. Applying the site balance as done above, θ D = ( K eq D p D 2 ) 1 / 2 1 + ( K eq D p D 2 ) 1 / 2 . {\displaystyle \theta _{D}={\frac {(K_{\text{eq}}^{D}\,p_{D_{2}})^{1/2}}{1+(K_{\text{eq}}^{D}\,p_{D_{2}})^{1/2}}}.} == Entropic considerations == The formation of Langmuir monolayers by adsorption onto a surface dramatically reduces the entropy of the molecular system. To quantify the entropy decrease, we evaluate the entropy of the molecules in the adsorbed state. S = S configurational + S vibrational , {\displaystyle S=S_{\text{configurational}}+S_{\text{vibrational}},} S conf = k B ln Ω conf , {\displaystyle S_{\text{conf}}=k_{\rm {B}}\ln \Omega _{\text{conf}},} Ω conf = N S ! N ! ( N S − N ) ! . {\displaystyle \Omega _{\text{conf}}={\frac {N_{S}!}{N!(N_{S}-N)!}}.} Using Stirling's approximation, we have ln N ! ≈ N ln N − N , {\displaystyle \ln N!\approx N\ln N-N,} so that the configurational entropy per site is S conf / ( N S k B ) ≈ − θ A ln ( θ A ) − ( 1 − θ A ) ln ( 1 − θ A ) . {\displaystyle S_{\text{conf}}/(N_{S}k_{\rm {B}})\approx -\theta _{A}\ln(\theta _{A})-(1-\theta _{A})\ln(1-\theta _{A}).} On the other hand, the entropy of a molecule of an ideal gas is S gas N k B = ln ( k B T P λ 3 ) + 5 / 2 , {\displaystyle {\frac {S_{\text{gas}}}{Nk_{\text{B}}}}=\ln \left({\frac {k_{\text{B}}T}{P\lambda ^{3}}}\right)+5/2,} where λ {\displaystyle \lambda } is the thermal de Broglie wavelength of the gas molecule. == Limitations of the model == The Langmuir adsorption model deviates significantly in many cases, primarily because it fails to account for the surface roughness of the adsorbent. Rough inhomogeneous surfaces have multiple site types available for adsorption, with some parameters varying from site to site, such as the heat of adsorption. Moreover, specific surface area is a scale-dependent quantity, and no single true value exists for this parameter. 
Thus, the use of alternative probe molecules can often result in different numerical values for the surface area, rendering comparison problematic. The model also ignores adsorbate–adsorbate interactions. Experimentally, there is clear evidence for adsorbate–adsorbate interactions in heat of adsorption data. There are two kinds of adsorbate–adsorbate interactions: direct interaction and indirect interaction. Direct interactions are between adjacent adsorbed molecules, which can make adsorption near another adsorbate molecule more or less favorable and greatly affect high-coverage behavior. In indirect interactions, the adsorbate changes the surface around the adsorbed site, which in turn affects the adsorption of other adsorbate molecules nearby. == Modifications == These modifications attempt to account for the issues mentioned in the section above, such as surface roughness, inhomogeneity, and adsorbate–adsorbate interactions. === Two-mechanism Langmuir-like equation (TMLLE) === Also known as the two-site Langmuir equation, this equation describes the adsorption of one adsorbate to two or more distinct types of adsorption sites. Each binding site can be described with its own Langmuir expression, as long as the adsorption at each binding site type is independent from the rest. q total = q 1 max K 1 a 2 b 1 + K 1 a 2 b + q 2 max K 2 a 2 b 1 + K 2 a 2 b + … , {\displaystyle q_{\text{total}}={\frac {q_{1}^{\text{max}}K_{1}a_{2}^{\text{b}}}{1+K_{1}a_{2}^{\text{b}}}}+{\frac {q_{2}^{\text{max}}K_{2}a_{2}^{\text{b}}}{1+K_{2}a_{2}^{\text{b}}}}+\dots ,} where q total {\displaystyle q_{\text{total}}} – total amount adsorbed at a given adsorbate concentration, q 1 max {\displaystyle q_{1}^{\text{max}}} – maximum capacity of site type 1, q 2 max {\displaystyle q_{2}^{\text{max}}} – maximum capacity of site type 2, K 1 {\displaystyle K_{1}} – equilibrium (affinity) constant of site type 1, K 2 {\displaystyle K_{2}} – equilibrium (affinity) constant of site type 2, a 2 b {\displaystyle a_{2}^{\text{b}}} – adsorbate activity in solution at equilibrium. This equation works well for adsorption of some drug molecules to activated carbon, in which some adsorbate molecules interact with the surface by hydrogen bonding while others interact with a different part of the surface by hydrophobic interactions (hydrophobic effect). The equation was modified to account for the hydrophobic effect (also known as entropy-driven adsorption): q total = q 1 max K 1 a 2 b 1 + K 1 a 2 b + q HB . {\displaystyle q_{\text{total}}={\frac {q_{1}^{\text{max}}K_{1}a_{2}^{\text{b}}}{1+K_{1}a_{2}^{\text{b}}}}+q_{\text{HB}}.} The hydrophobic effect is independent of concentration, since K 2 a 2 b ≫ 1. {\displaystyle K_{2}a_{2}^{\text{b}}\gg 1.} Therefore, the capacity of the adsorbent for hydrophobic interactions q HB {\displaystyle q_{\text{HB}}} can be obtained by fitting to experimental data. The entropy-driven adsorption originates from the restriction of translational motion of bulk water molecules by the adsorbate, which is alleviated upon adsorption. === Freundlich adsorption isotherm === The Freundlich isotherm is the most important multi-site adsorption isotherm for rough surfaces. θ A = α F p C F , {\displaystyle \theta _{A}=\alpha _{F}p^{C_{\text{F}}},} where αF and CF are fitting parameters. This equation implies that if one makes a log–log plot of adsorption data, the data will fit a straight line. 
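Since the isotherm is a power law, αF and CF can be estimated by a straight-line fit to log-transformed data. A minimal Python sketch of such a fit follows; the data array is invented purely for illustration and is not taken from any measurement.

# Fit the Freundlich isotherm theta = alpha_F * p**C_F by linearising:
# log(theta) = log(alpha_F) + C_F * log(p).  Data below are illustrative only.
import numpy as np

p = np.array([0.5, 1.0, 2.0, 4.0, 8.0])           # pressures (arbitrary units)
theta = np.array([0.21, 0.30, 0.41, 0.58, 0.83])  # invented adsorption data

C_F, log_alpha = np.polyfit(np.log(p), np.log(theta), 1)  # slope, intercept
alpha_F = np.exp(log_alpha)
print(f"alpha_F = {alpha_F:.3f}, C_F = {C_F:.3f}")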
The Freundlich isotherm has two parameters, while the Langmuir equation has only one: as a result, it often fits the data on rough surfaces better than the Langmuir isotherm. However, the Freundlich equation is not unique; consequently, a good fit of the data points does not offer sufficient proof that the surface is heterogeneous. The heterogeneity of the surface can be confirmed with calorimetry. Homogeneous surfaces (or heterogeneous surfaces that exhibit homogeneous adsorption (single-site)) have a constant Δ H {\displaystyle \Delta H} of adsorption as a function of the occupied-sites fraction. On the other hand, heterogeneous adsorbents (multi-site) have a variable Δ H {\displaystyle \Delta H} of adsorption depending on the fraction of occupied sites. When the adsorbate pressure (or concentration) is low, the fractional occupation is small and as a result, only low-energy sites are occupied, since these are the most stable. As the pressure increases, the higher-energy sites become occupied, resulting in a smaller Δ H {\displaystyle \Delta H} of adsorption, given that adsorption is an exothermic process. A related equation is the Toth equation. Rearranging the Langmuir equation, one can obtain θ A = p A 1 K eq A + p A . {\displaystyle \theta _{A}={\frac {p_{A}}{{\frac {1}{K_{\text{eq}}^{A}}}+p_{A}}}.} J. Toth modified this equation by adding two parameters αT0 and CT0 to formulate the Toth equation: θ C T 0 = α T 0 p A C T 0 1 K eq A + p A C T 0 . {\displaystyle \theta ^{C_{T_{0}}}={\frac {\alpha _{T_{0}}p_{A}^{C_{T_{0}}}}{{\frac {1}{K_{\text{eq}}^{A}}}+p_{A}^{C_{T_{0}}}}}.} === Temkin adsorption isotherm === This isotherm takes into account the effect of indirect adsorbate–adsorbate interactions on the adsorption isotherm. Temkin noted experimentally that heats of adsorption would more often decrease than increase with increasing coverage. The heat of adsorption ΔHad is defined as [ A ad ] p A [ S ] = K eq A ∝ e − Δ G ad / R T = e Δ S ad / R e − Δ H ad / R T . {\displaystyle {\frac {[A_{\text{ad}}]}{p_{A}[S]}}=K_{\text{eq}}^{A}\propto \mathrm {e} ^{-\Delta G_{\text{ad}}/RT}=\mathrm {e} ^{\Delta S_{\text{ad}}/R}\,\mathrm {e} ^{-\Delta H_{\text{ad}}/RT}.} He derived a model assuming that as the surface is loaded up with adsorbate, the heat of adsorption of all the molecules in the layer would decrease linearly with coverage due to adsorbate–adsorbate interactions: Δ H ad = Δ H ad 0 ( 1 − α T θ ) , {\displaystyle \Delta H_{\text{ad}}=\Delta H_{\text{ad}}^{0}(1-\alpha _{T}\theta ),} where αT is a fitting parameter. Assuming the Langmuir adsorption isotherm still applies to the adsorbed layer, K eq A {\displaystyle K_{\text{eq}}^{A}} is expected to vary with coverage as follows: K eq A = K eq A , 0 e Δ H ad 0 ( 1 − α T θ ) / k T . {\displaystyle K_{\text{eq}}^{A}=K_{\text{eq}}^{A,0}\mathrm {e} ^{\Delta H_{\text{ad}}^{0}(1-\alpha _{T}\theta )/kT}.} Langmuir's isotherm can be rearranged to K eq A p A = θ 1 − θ . {\displaystyle K_{\text{eq}}^{A}p_{A}={\frac {\theta }{1-\theta }}.} Substituting the expression for the equilibrium constant and taking the natural logarithm: ln ( K eq A , 0 p A ) = − Δ H ad 0 α T θ k T + ln θ 1 − θ . {\displaystyle \ln {\big (}K_{\text{eq}}^{A,0}p_{A}{\big )}={\frac {-\Delta H_{\text{ad}}^{0}\alpha _{T}\theta }{kT}}+\ln {\frac {\theta }{1-\theta }}.} 
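Because θ appears on both sides, this form of the Temkin isotherm is implicit in the coverage and is conveniently solved numerically before moving on to the multilayer (BET) case below. A minimal Python sketch follows; every numerical value (heat of adsorption, αT, equilibrium constant, temperature) is an assumed, illustrative guess rather than a value from the text.

# Solve ln(K0*p) = -(dH0 * alpha_T * theta)/(kB*T) + ln(theta/(1-theta)) for theta.
# All parameter values are illustrative assumptions.
import numpy as np
from scipy.optimize import brentq

kB = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                  # temperature, K
dH0 = -40e3 / 6.022e23     # assumed zero-coverage heat of adsorption per molecule, J
alpha_T = 0.3              # assumed Temkin parameter
K0 = 5.0                   # assumed zero-coverage equilibrium constant, 1/bar

def residual(theta, p):
    return (-dH0 * alpha_T * theta / (kB * T)
            + np.log(theta / (1.0 - theta)) - np.log(K0 * p))

for p in (0.01, 0.1, 1.0, 10.0):                     # pressures in bar
    theta = brentq(residual, 1e-9, 1.0 - 1e-9, args=(p,))
    print(f"p = {p:6.2f} bar   theta = {theta:.3f}")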
=== BET equation === Brunauer, Emmett and Teller (BET) derived the first isotherm for multilayer adsorption. It assumes a random distribution of sites that are empty or that are covered by one monolayer, two layers and so on, as illustrated alongside. The main equation of this model is [ A ] S 0 = c B x B ( 1 − x B ) [ 1 + ( c B − 1 ) x B ] , {\displaystyle {\frac {[A]}{S_{0}}}={\frac {c_{B}x_{B}}{(1-x_{B})[1+(c_{B}-1)x_{B}]}},} where x B = p A K m , c B = K 1 K m , {\displaystyle x_{B}=p_{A}K_{m},\quad c_{B}={\frac {K_{1}}{K_{m}}},} and [A] is the total concentration of molecules on the surface, given by [ A ] = ∑ i = 1 ∞ i [ A ] i = ∑ i = 1 ∞ i K 1 K m i − 1 p A i [ A ] 0 , {\displaystyle [A]=\sum _{i=1}^{\infty }i[A]_{i}=\sum _{i=1}^{\infty }iK_{1}K_{m}^{i-1}p_{A}^{i}[A]_{0},} where K i = [ A ] i p A [ A ] i − 1 , {\displaystyle K_{i}={\frac {[A]_{i}}{p_{A}[A]_{i-1}}},} in which [A]0 is the number of bare sites, and [A]i is the number of surface sites covered by i molecules. == Adsorption of a binary liquid on a solid == This section describes the surface coverage when the adsorbate is in the liquid phase and is a binary mixture. Assuming both phases are ideal – no lateral interactions, homogeneous surface – the composition of a surface phase for a binary liquid system in contact with a solid surface is given by the classic Everett isotherm equation (a simple analogue of the Langmuir equation), where the components are interchangeable (i.e., "1" may be exchanged for "2") without change of equation form: x 1 s = K x 1 l 1 + ( K − 1 ) x 1 l , {\displaystyle x_{1}^{s}={\frac {Kx_{1}^{l}}{1+(K-1)x_{1}^{l}}},} where the usual normalization for a multi-component system holds: ∑ i = 1 k x i s = 1 , ∑ i = 1 k x i l = 1. {\displaystyle \sum _{i=1}^{k}x_{i}^{s}=1,\quad \sum _{i=1}^{k}x_{i}^{l}=1.} By simple rearrangement, we get x 1 s = K [ x 1 l / ( 1 − x 1 l ) ] 1 + K [ x 1 l / ( 1 − x 1 l ) ] . {\displaystyle x_{1}^{s}={\frac {K[x_{1}^{l}/(1-x_{1}^{l})]}{1+K[x_{1}^{l}/(1-x_{1}^{l})]}}.} This equation describes the competition of components "1" and "2". == See also == Hill equation (biochemistry) Michaelis–Menten kinetics (equation with the same mathematical form) Monod equation (equation with the same mathematical form) Reactions on surfaces == References == Langmuir, Irving (1916). "The Constitution and Fundamental Properties of Solids and Liquids. Part I. Solids". J. Am. Chem. Soc. 38: 2221–2295. == External links == Langmuir isotherm from Queen Mary, University of London. LMMpro, Langmuir equation-fitting software.
Wikipedia/Langmuir_equation
Surface science is the study of physical and chemical phenomena that occur at the interface of two phases, including solid–liquid interfaces, solid–gas interfaces, solid–vacuum interfaces, and liquid–gas interfaces. It includes the fields of surface chemistry and surface physics. Some related practical applications are classed as surface engineering. The science encompasses concepts such as heterogeneous catalysis, semiconductor device fabrication, fuel cells, self-assembled monolayers, and adhesives. Surface science is closely related to interface and colloid science. Interfacial chemistry and physics are common to both fields, though their methods differ. In addition, interface and colloid science studies macroscopic phenomena that occur in heterogeneous systems due to peculiarities of interfaces. == History == The field of surface chemistry started with heterogeneous catalysis pioneered by Paul Sabatier on hydrogenation and Fritz Haber on the Haber process. Irving Langmuir was also one of the founders of this field, and the scientific journal on surface science, Langmuir, bears his name. The Langmuir adsorption equation is used to model monolayer adsorption where all surface adsorption sites have the same affinity for the adsorbing species and do not interact with each other. In 1974 Gerhard Ertl described for the first time the adsorption of hydrogen on a palladium surface, using a then-novel technique, low-energy electron diffraction (LEED). Similar studies with platinum, nickel, and iron followed. More recent developments in surface science include the advances of Gerhard Ertl, winner of the 2007 Nobel Prize in Chemistry, specifically his investigation of the interaction between carbon monoxide molecules and platinum surfaces. == Chemistry == Surface chemistry can be roughly defined as the study of chemical reactions at interfaces. It is closely related to surface engineering, which aims at modifying the chemical composition of a surface by incorporation of selected elements or functional groups that produce various desired effects or improvements in the properties of the surface or interface. Surface science is of particular importance to the fields of heterogeneous catalysis, electrochemistry, and geochemistry. === Catalysis === The adhesion of gas or liquid molecules to the surface is known as adsorption. This can be due to either chemisorption or physisorption, and the strength of molecular adsorption to a catalyst surface is critically important to the catalyst's performance (see Sabatier principle). However, it is difficult to study these phenomena in real catalyst particles, which have complex structures. Instead, well-defined single crystal surfaces of catalytically active materials such as platinum are often used as model catalysts. Multi-component materials systems are used to study interactions between catalytically active metal particles and supporting oxides; these are produced by growing ultra-thin films or particles on a single crystal surface. Relationships between the composition, structure, and chemical behavior of these surfaces are studied using ultra-high vacuum techniques, including adsorption and temperature-programmed desorption of molecules, scanning tunneling microscopy, low energy electron diffraction, and Auger electron spectroscopy. Results can be fed into chemical models or used toward the rational design of new catalysts. Reaction mechanisms can also be clarified due to the atomic-scale precision of surface science measurements. 
=== Electrochemistry === Electrochemistry is the study of processes driven by an applied potential at a solid–liquid or liquid–liquid interface. The behavior of an electrode–electrolyte interface is affected by the distribution of ions in the liquid phase next to the interface, forming the electrical double layer. Adsorption and desorption events can be studied at atomically flat single-crystal surfaces as a function of applied potential, time and solution conditions using spectroscopy, scanning probe microscopy and surface X-ray scattering. These studies link traditional electrochemical techniques such as cyclic voltammetry to direct observations of interfacial processes. === Geochemistry === Geological phenomena such as iron cycling and soil contamination are controlled by the interfaces between minerals and their environment. The atomic-scale structure and chemical properties of mineral–solution interfaces are studied using in situ synchrotron X-ray techniques such as X-ray reflectivity, X-ray standing waves, and X-ray absorption spectroscopy as well as scanning probe microscopy. For example, studies of heavy metal or actinide adsorption onto mineral surfaces reveal molecular-scale details of adsorption, enabling more accurate predictions of how these contaminants travel through soils or disrupt natural dissolution–precipitation cycles. == Physics == Surface physics can be roughly defined as the study of physical interactions that occur at interfaces. It overlaps with surface chemistry. Some of the topics investigated in surface physics include friction, surface states, surface diffusion, surface reconstruction, surface phonons and plasmons, epitaxy, the emission and tunneling of electrons, spintronics, and the self-assembly of nanostructures on surfaces. Techniques to investigate processes at surfaces include surface X-ray scattering, scanning probe microscopy, surface-enhanced Raman spectroscopy and X-ray photoelectron spectroscopy. == Analysis techniques == The study and analysis of surfaces involves both physical and chemical analysis techniques. Several modern methods probe the topmost 1–10 nm of surfaces exposed to vacuum. These include angle-resolved photoemission spectroscopy (ARPES), X-ray photoelectron spectroscopy (XPS), Auger electron spectroscopy (AES), low-energy electron diffraction (LEED), electron energy loss spectroscopy (EELS), thermal desorption spectroscopy (also known as temperature-programmed desorption, TPD), ion scattering spectroscopy (ISS), secondary ion mass spectrometry, dual-polarization interferometry, and other surface analysis methods included in the list of materials analysis methods. Many of these techniques require vacuum as they rely on the detection of electrons or ions emitted from the surface under study. Moreover, ultra-high vacuum, in the range of 10−7 pascal or better, is generally necessary to reduce surface contamination by residual gas by limiting the number of molecules reaching the sample over a given time period. At 0.1 mPa (10−6 torr) partial pressure of a contaminant and standard temperature, it only takes on the order of 1 second to cover a surface with a one-to-one monolayer of contaminant to surface atoms, so much lower pressures are needed for measurements. This estimate follows from an order-of-magnitude value for the number density of surface atoms and the impingement rate formula from the kinetic theory of gases. 
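A back-of-the-envelope version of this estimate, sketched below in Python, uses the kinetic-theory impingement rate Φ = P/√(2πmkBT); the contaminant mass (N2-like), a typical surface site density of about 10^19 sites/m2, and a sticking probability of 1 are assumed purely for illustration.

# Order-of-magnitude time to deposit one monolayer of contaminant,
# assuming every impinging molecule sticks (sticking coefficient = 1).
import math

kB = 1.380649e-23      # Boltzmann constant, J/K
T = 293.0              # temperature, K
P = 1.0e-4             # contaminant partial pressure, Pa (0.1 mPa ~ 1e-6 torr)
m = 28 * 1.66054e-27   # molecular mass, kg (N2-like contaminant, assumed)
sites = 1.0e19         # assumed surface site density, sites per m^2

flux = P / math.sqrt(2.0 * math.pi * m * kB * T)   # molecules per m^2 per s
t_monolayer = sites / flux
print(f"impingement rate ~ {flux:.2e} m^-2 s^-1")
print(f"monolayer time   ~ {t_monolayer:.1f} s")   # on the order of 1 second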
Purely optical techniques can be used to study interfaces under a wide variety of conditions. Reflection-absorption infrared, dual polarisation interferometry, surface-enhanced Raman spectroscopy and sum frequency generation spectroscopy can be used to probe solid–vacuum as well as solid–gas, solid–liquid, and liquid–gas surfaces. Multi-parametric surface plasmon resonance works at solid–gas, solid–liquid, and liquid–gas interfaces and can detect even sub-nanometer layers. It probes the interaction kinetics as well as dynamic structural changes such as liposome collapse or swelling of layers at different pH. Dual-polarization interferometry is used to quantify the order and disruption in birefringent thin films. This has been used, for example, to study the formation of lipid bilayers and their interaction with membrane proteins. Acoustic techniques, such as the quartz crystal microbalance with dissipation monitoring, are used for time-resolved measurements of solid–vacuum, solid–gas and solid–liquid interfaces. The method allows for analysis of molecule–surface interactions as well as structural changes and viscoelastic properties of the adlayer. X-ray scattering and spectroscopy techniques are also used to characterize surfaces and interfaces. While some of these measurements can be performed using laboratory X-ray sources, many require the high intensity and energy tunability of synchrotron radiation. X-ray crystal truncation rod (CTR) and X-ray standing wave (XSW) measurements probe changes in surface and adsorbate structures with sub-Ångström resolution. Surface-extended X-ray absorption fine structure (SEXAFS) measurements reveal the coordination structure and chemical state of adsorbates. Grazing-incidence small angle X-ray scattering (GISAXS) yields the size, shape, and orientation of nanoparticles on surfaces. The crystal structure and texture of thin films can be investigated using grazing-incidence X-ray diffraction (GIXD, GIXRD). X-ray photoelectron spectroscopy (XPS) is a standard tool for measuring the chemical states of surface species and for detecting the presence of surface contamination. Surface sensitivity is achieved by detecting photoelectrons with kinetic energies of about 10–1000 eV, which have corresponding inelastic mean free paths of only a few nanometers. This technique has been extended to operate at near-ambient pressures (ambient pressure XPS, AP-XPS) to probe more realistic gas–solid and liquid–solid interfaces. Performing XPS with hard X-rays at synchrotron light sources yields photoelectrons with kinetic energies of several keV (hard X-ray photoelectron spectroscopy, HAXPES), enabling access to chemical information from buried interfaces. Modern physical analysis methods include scanning tunneling microscopy (STM) and a family of methods descended from it, including atomic force microscopy (AFM). These microscopies have considerably increased the ability of surface scientists to measure the physical structure of many surfaces. For example, they make it possible to follow reactions at the solid–gas interface in real space, if those proceed on a time scale accessible by the instrument. == Further reading == Kolasinski, Kurt W. (2012). Surface Science: Foundations of Catalysis and Nanoscience (3rd ed.). Wiley. ISBN 978-1119990352. Attard, Gary; Barnes, Colin (1998). Surfaces. Oxford Chemistry Primers. ISBN 978-0198556862. == External links == "Ram Rao Materials and Surface Science", a video from the Vega Science Trust. Surface Chemistry Discoveries. Surface Metrology Guide.
Wikipedia/Surface_physics