In thermodynamics, a critical point (or critical state) is the end point of a phase equilibrium curve. One example is the liquid–vapor critical point, the end point of the pressure–temperature curve that designates conditions under which a liquid and its vapor can coexist. At higher temperatures, the gas phase becomes supercritical and cannot be liquefied by pressure alone. At the critical point, defined by a critical temperature Tc and a critical pressure pc, phase boundaries vanish. Other examples include the liquid–liquid critical points in mixtures, and the ferromagnet–paramagnet transition (Curie temperature) in the absence of an external magnetic field.

== Liquid–vapor critical point ==

=== Overview ===

For simplicity and clarity, the generic notion of critical point is best introduced by discussing a specific example, the vapor–liquid critical point. This was the first critical point to be discovered, and it is still the best known and most studied one. The figure shows the schematic P–T diagram of a pure substance (as opposed to mixtures, which have additional state variables and richer phase diagrams, discussed below). The commonly known phases solid, liquid and vapor are separated by phase boundaries, i.e. pressure–temperature combinations where two phases can coexist. At the triple point, all three phases can coexist. However, the liquid–vapor boundary terminates in an endpoint at some critical temperature Tc and critical pressure pc. This is the critical point. The critical point of water occurs at 647.096 K (373.946 °C; 705.103 °F) and 22.064 megapascals (3,200.1 psi; 217.75 atm; 220.64 bar).

In the vicinity of the critical point, the physical properties of the liquid and the vapor change dramatically, with the two phases becoming ever more similar. For instance, liquid water under normal conditions is nearly incompressible, has a low thermal expansion coefficient, has a high dielectric constant, and is an excellent solvent for electrolytes. Near the critical point, all these properties change into their exact opposites: water becomes compressible, expandable, a poor dielectric, a bad solvent for electrolytes, and mixes more readily with nonpolar gases and organic molecules.

At the critical point, only one phase exists, and the heat of vaporization is zero. There is a stationary inflection point in the constant-temperature line (critical isotherm) on a PV diagram. This means that at the critical point:

{\displaystyle \left({\frac {\partial p}{\partial V}}\right)_{T}=0,\qquad \left({\frac {\partial ^{2}p}{\partial V^{2}}}\right)_{T}=0.}

Above the critical point there exists a state of matter that is continuously connected with (can be transformed without phase transition into) both the liquid and the gaseous state; it is called a supercritical fluid. The common textbook knowledge that all distinction between liquid and vapor disappears beyond the critical point has been challenged by Fisher and Widom, who identified a p–T line that separates states with different asymptotic statistical properties (the Fisher–Widom line). Sometimes the critical point does not manifest in most thermodynamic or mechanical properties, but is "hidden" and reveals itself in the onset of inhomogeneities in elastic moduli, marked changes in the appearance and local properties of non-affine droplets, and a sudden enhancement in defect pair concentration.
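The two derivative conditions above determine the critical point of any analytic equation of state. As a minimal sketch (anticipating the van der Waals analysis in the Theory section below), the following Python/SymPy snippet imposes both conditions on the molar van der Waals equation of state and recovers the critical constants quoted there:

```python
import sympy as sp

# Molar van der Waals equation of state: p = RT/(V - b) - a/V^2
V, T, a, b, R = sp.symbols('V T a b R', positive=True)
p = R*T/(V - b) - a/V**2

# Critical point: (dp/dV)_T = 0 and (d^2 p/dV^2)_T = 0 simultaneously
sol = sp.solve([sp.diff(p, V), sp.diff(p, V, 2)], [V, T], dict=True)[0]
Vc, Tc = sol[V], sol[T]
pc = sp.simplify(p.subs({V: Vc, T: Tc}))

print(Vc, Tc, pc)  # 3*b, 8*a/(27*R*b), a/(27*b**2)
```

For n moles the critical volume becomes Vc = 3nb, matching the expressions given in the Theory section below.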
=== History ===

The existence of a critical point was first discovered by Charles Cagniard de la Tour in 1822 and named by Dmitri Mendeleev in 1860 and Thomas Andrews in 1869. Cagniard showed that CO2 could be liquefied at 31 °C at a pressure of 73 atm, but not at a slightly higher temperature, even under pressures as high as 3000 atm.

=== Theory ===

Solving the above conditions {\displaystyle (\partial p/\partial V)_{T}=0} and {\displaystyle (\partial ^{2}p/\partial V^{2})_{T}=0} for the van der Waals equation, one can compute the critical point as

{\displaystyle T_{\text{c}}={\frac {8a}{27Rb}},\quad V_{\text{c}}=3nb,\quad p_{\text{c}}={\frac {a}{27b^{2}}}.}

However, the van der Waals equation, based on a mean-field theory, does not hold near the critical point. In particular, it predicts incorrect scaling laws.

To analyse properties of fluids near the critical point, reduced state variables are sometimes defined relative to the critical properties:

{\displaystyle T_{\text{r}}={\frac {T}{T_{\text{c}}}},\quad p_{\text{r}}={\frac {p}{p_{\text{c}}}},\quad V_{\text{r}}={\frac {V}{RT_{\text{c}}/p_{\text{c}}}}.}

The principle of corresponding states indicates that substances at equal reduced pressures and temperatures have equal reduced volumes. This relationship is approximately true for many substances, but becomes increasingly inaccurate for large values of pr. For some gases, there is an additional correction factor, called Newton's correction, added to the critical temperature and critical pressure calculated in this manner. These are empirically derived values and vary with the pressure range of interest.

=== Table of liquid–vapor critical temperature and pressure for selected substances ===

== Mixtures: liquid–liquid critical point ==

The liquid–liquid critical point of a solution, which occurs at the critical solution temperature, lies at the limit of the two-phase region of the phase diagram. In other words, it is the point at which an infinitesimal change in some thermodynamic variable (such as temperature or pressure) leads to separation of the mixture into two distinct liquid phases, as shown in the polymer–solvent phase diagram to the right. Two types of liquid–liquid critical points are the upper critical solution temperature (UCST), the hottest point at which cooling induces phase separation, and the lower critical solution temperature (LCST), the coldest point at which heating induces phase separation.

=== Mathematical definition ===

From a theoretical standpoint, the liquid–liquid critical point represents the temperature–concentration extremum of the spinodal curve (as can be seen in the figure to the right). Thus, the liquid–liquid critical point in a two-component system must satisfy two conditions: the condition of the spinodal curve (the second derivative of the free energy with respect to concentration must equal zero), and the extremum condition (the third derivative of the free energy with respect to concentration must also equal zero, or equivalently the derivative of the spinodal temperature with respect to concentration must equal zero).

== See also ==

== References ==

== Further reading ==

"Revised Release on the IAPWS Industrial Formulation 1997 for the Thermodynamic Properties of Water and Steam" (PDF). International Association for the Properties of Water and Steam. August 2007. Retrieved 2009-06-09.
"Critical points for some common solvents". ProSciTech. Archived from the original on 2008-01-31.
"Critical Temperature and Pressure". Department of Chemistry. Purdue University. Retrieved 2006-12-03.
Wikipedia/Critical_point_(thermodynamics)
In signal processing and electronics, the frequency response of a system is the quantitative measure of the magnitude and phase of the output as a function of input frequency. The frequency response is widely used in the design and analysis of systems, such as audio and control systems, where it simplifies mathematical analysis by converting governing differential equations into algebraic equations. In an audio system, it may be used to minimize audible distortion by designing components (such as microphones, amplifiers and loudspeakers) so that the overall response is as flat (uniform) as possible across the system's bandwidth. In control systems, such as a vehicle's cruise control, it may be used to assess system stability, often through the use of Bode plots. Systems with a specific frequency response can be designed using analog and digital filters.

The frequency response characterizes systems in the frequency domain, just as the impulse response characterizes systems in the time domain. In linear systems (or as an approximation to a real system neglecting second-order non-linear properties), either response completely describes the system, and thus there is a one-to-one correspondence: the frequency response is the Fourier transform of the impulse response. The frequency response allows simpler analysis of cascaded systems such as multistage amplifiers, as the response of the overall system can be found through multiplication of the individual stages' frequency responses (as opposed to convolution of the impulse responses in the time domain). The frequency response is closely related to the transfer function in linear systems, which is the Laplace transform of the impulse response. They are equivalent when the real part {\displaystyle \sigma } of the transfer function's complex variable {\displaystyle s=\sigma +j\omega } is zero.

== Measurement and plotting ==

Measuring the frequency response typically involves exciting the system with an input signal and measuring the resulting output signal, calculating the frequency spectra of the two signals (for example, using the fast Fourier transform for discrete signals), and comparing the spectra to isolate the effect of the system. In linear systems, the frequency range of the input signal should cover the frequency range of interest. Several methods using different input signals may be used to measure the frequency response of a system, including:

Applying constant-amplitude sinusoids stepped through a range of frequencies and comparing the amplitude and phase shift of the output relative to the input. The frequency sweep must be slow enough for the system to reach its steady state at each point of interest.
Applying an impulse signal and taking the Fourier transform of the system's response.
Applying a wide-sense stationary white noise signal over a long period of time and taking the Fourier transform of the system's response. With this method, the cross-spectral density (rather than the power spectral density) should be used if phase information is required.

The frequency response is characterized by the magnitude, typically in decibels (dB) or as a generic amplitude of the dependent variable, and the phase, in radians or degrees, measured against frequency, in radians per second, hertz (Hz) or as a fraction of the sampling frequency.
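To make this characterization concrete, here is a short Python sketch (a hypothetical first-order IIR low-pass filter stands in for the system under test) that evaluates a discrete-time frequency response and extracts the magnitude in dB and the phase in degrees:

```python
import numpy as np
from scipy import signal

# Hypothetical system: first-order IIR low-pass, y[n] = 0.1*x[n] + 0.9*y[n-1]
b, a = [0.1], [1.0, -0.9]          # transfer-function coefficients

# Evaluate H(e^{jw}) at 512 frequencies on [0, pi) rad/sample
w, h = signal.freqz(b, a, worN=512)

magnitude_db = 20 * np.log10(np.abs(h))   # magnitude response in decibels
phase_deg = np.degrees(np.angle(h))       # phase response in degrees

print(f"DC gain: {magnitude_db[0]:.2f} dB")  # ~0 dB: unity gain at DC
```

Plotting magnitude_db and phase_deg against w (or against w/π as a fraction of the Nyquist frequency) gives the two rectangular Bode plots described next.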
There are three common ways of plotting response measurements:

Bode plots graph magnitude and phase against frequency on two rectangular plots.
Nyquist plots graph magnitude and phase parametrically against frequency in polar form.
Nichols plots graph magnitude and phase parametrically against frequency in rectangular form.

For the design of control systems, any of the three types of plots may be used to infer closed-loop stability and stability margins from the open-loop frequency response. In many frequency domain applications, the phase response is relatively unimportant, and the magnitude response of the Bode plot may be all that is required. In digital systems (such as digital filters), the frequency response often contains a main lobe with multiple periodic sidelobes, due to spectral leakage caused by digital processes such as sampling and windowing.

=== Nonlinear frequency response ===

If the system under investigation is nonlinear, linear frequency domain analysis will not reveal all of its nonlinear characteristics. To overcome these limitations, generalized frequency response functions and nonlinear output frequency response functions have been defined to analyze nonlinear dynamic effects. Nonlinear frequency response methods may reveal effects such as resonance, intermodulation, and energy transfer.

== Applications ==

In the audible range, frequency response is usually discussed in connection with electronic amplifiers, microphones and loudspeakers. In the radio spectrum, frequency response can refer to measurements of coaxial cable, twisted-pair cable, video switching equipment, wireless communications devices, and antenna systems. Infrasonic frequency response measurements include earthquakes and electroencephalography (brain waves).

Frequency response curves are often used to indicate the accuracy of electronic components or systems. When a system or component reproduces all desired input signals with no emphasis or attenuation of a particular frequency band, the system or component is said to be "flat", or to have a flat frequency response curve. In other cases, three-dimensional frequency response graphs are sometimes used.

Frequency response requirements differ depending on the application. In high-fidelity audio, an amplifier requires a flat frequency response of at least 20–20,000 Hz, with a tolerance as tight as ±0.1 dB in the mid-range frequencies around 1000 Hz; however, in telephony, a frequency response of 400–4,000 Hz with a tolerance of ±1 dB is sufficient for intelligibility of speech.

Once a frequency response has been measured (e.g., as an impulse response), provided the system is linear and time-invariant, its characteristic can be approximated with arbitrary accuracy by a digital filter. Similarly, if a system is demonstrated to have a poor frequency response, a digital or analog filter can be applied to the signals prior to their reproduction to compensate for these deficiencies. The form of a frequency response curve is very important for anti-jamming protection of radars, communications and other systems. Frequency response analysis can also be applied to biological domains, such as the detection of hormesis in repeated behaviors with opponent process dynamics, or in the optimization of drug treatment regimens.

== See also ==

== References ==
== External links ==

University of Michigan: Frequency Response Analysis and Design Tutorial. Archived 2012-10-17 at the Wayback Machine.
Smith, Julius O. III: Introduction to Digital Filters with Audio Applications has a chapter on frequency response.
Wikipedia/Response_function
The Hubbard model is an approximate model used to describe the transition between conducting and insulating systems. It is particularly useful in solid-state physics. The model is named for John Hubbard.

The Hubbard model states that each electron experiences competing forces: one pushes it to tunnel to neighboring atoms, while the other pushes it away from its neighbors. Its Hamiltonian thus has two terms: a kinetic term allowing for tunneling ("hopping") of particles between lattice sites, and a potential term reflecting the on-site interaction. The particles can either be fermions, as in Hubbard's original work, or bosons, in which case the model is referred to as the "Bose–Hubbard model".

The Hubbard model is a useful approximation for particles in a periodic potential at sufficiently low temperatures, where all the particles may be assumed to be in the lowest Bloch band, and long-range interactions between the particles can be ignored. If interactions between particles at different sites of the lattice are included, the model is often referred to as the "extended Hubbard model". In particular, the Hubbard term, most commonly denoted by U, is applied in first-principles simulations based on density functional theory (DFT). The inclusion of the Hubbard term in DFT simulations is important because it improves the prediction of electron localisation and thus prevents the incorrect prediction of metallic conduction in insulating systems.

The Hubbard model introduces short-range interactions between electrons to the tight-binding model, which only includes kinetic energy (a "hopping" term) and interactions with the atoms of the lattice (an "atomic" potential). When the interaction between electrons is strong, the behavior of the Hubbard model can be qualitatively different from that of a tight-binding model. For example, the Hubbard model correctly predicts the existence of Mott insulators: materials that are insulating due to the strong repulsion between electrons, even though they satisfy the usual criteria for conductors, such as having an odd number of electrons per unit cell.

== History ==

The model was originally proposed in 1963 to describe electrons in solids. John Hubbard, Martin Gutzwiller and Junjiro Kanamori each independently proposed it. Since then, it has been applied to the study of high-temperature superconductivity, quantum magnetism, and charge density waves.

== Narrow energy band theory ==

The Hubbard model is based on the tight-binding approximation from solid-state physics, which describes particles moving in a periodic potential, typically referred to as a lattice. For real materials, each lattice site might correspond to an ionic core, and the particles would be the valence electrons of these ions. In the tight-binding approximation, the Hamiltonian is written in terms of Wannier states, which are localized states centered on each lattice site. Wannier states on neighboring lattice sites are coupled, allowing particles on one site to "hop" to another. Mathematically, the strength of this coupling is given by a "hopping integral", or "transfer integral", between nearby sites. The system is said to be in the tight-binding limit when the strength of the hopping integrals falls off rapidly with distance. This coupling allows states associated with each lattice site to hybridize, and the eigenstates of such a crystalline system are Bloch functions, with the energy levels divided into separated energy bands. The width of the bands depends upon the value of the hopping integral.
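As a small illustration of this band picture, the sketch below evaluates the standard dispersion of a one-dimensional nearest-neighbour tight-binding band, E(k) = -2t cos(ka); the parameter values are arbitrary:

```python
import numpy as np

# 1D nearest-neighbour tight-binding band (no interactions yet):
# with hopping integral t and lattice spacing a, E(k) = -2*t*cos(k*a).
t, a = 1.0, 1.0
k = np.linspace(-np.pi / a, np.pi / a, 201)  # first Brillouin zone
E = -2.0 * t * np.cos(k * a)

print(E.min(), E.max())  # band edges at -2t and +2t: the bandwidth 4t grows with t
```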
The Hubbard model introduces a contact interaction between particles of opposite spin on each site of the lattice. When the Hubbard model is used to describe electron systems, these interactions are expected to be repulsive, stemming from the screened Coulomb interaction. However, attractive interactions have also been frequently considered. The physics of the Hubbard model is determined by competition between the strength of the hopping integral, which characterizes the system's kinetic energy, and the strength of the interaction term. The Hubbard model can therefore explain the transition from metal to insulator in certain interacting systems. For example, it has been used to describe metal oxides as they are heated, where the corresponding increase in nearest-neighbor spacing reduces the hopping integral to the point where the on-site potential is dominant. Similarly, the Hubbard model can explain the transition from conductor to insulator in systems such as rare-earth pyrochlores as the atomic number of the rare-earth metal increases, because the lattice parameter increases (or the angle between atoms can also change) as the rare-earth element atomic number increases, thus changing the relative importance of the hopping integral compared to the on-site repulsion.

== Example: one-dimensional hydrogen atom chain ==

The hydrogen atom has one electron, in the so-called 1s orbital, which can be either spin up ({\displaystyle \uparrow }) or spin down ({\displaystyle \downarrow }). This orbital can be occupied by at most two electrons, one with spin up and one down (see Pauli exclusion principle). Under band theory, for a 1D chain of hydrogen atoms, the 1s orbital forms a continuous band, which would be exactly half-full. The 1D chain of hydrogen atoms is thus predicted to be a conductor under conventional band theory; it is also the only configuration simple enough to be solved directly. But in the case where the spacing between the hydrogen atoms is gradually increased, at some point the chain must become an insulator.

Expressed using the Hubbard model, the Hamiltonian is made up of two terms. The first term describes the kinetic energy of the system, parameterized by the hopping integral t. The second term is the on-site interaction of strength U that represents the electron repulsion. Written out in second quantization notation, the Hubbard Hamiltonian then takes the form

{\displaystyle {\hat {H}}=-t\sum _{i,\sigma }\left({\hat {c}}_{i,\sigma }^{\dagger }{\hat {c}}_{i+1,\sigma }+{\hat {c}}_{i+1,\sigma }^{\dagger }{\hat {c}}_{i,\sigma }\right)+U\sum _{i}{\hat {n}}_{i\uparrow }{\hat {n}}_{i\downarrow },}

where {\displaystyle {\hat {n}}_{i\sigma }={\hat {c}}_{i\sigma }^{\dagger }{\hat {c}}_{i\sigma }} is the spin-density operator for spin {\displaystyle \sigma } on the i-th site. The density operator is {\displaystyle {\hat {n}}_{i}={\hat {n}}_{i\uparrow }+{\hat {n}}_{i\downarrow }}, and the occupation of the i-th site for the wavefunction {\displaystyle \Phi } is {\displaystyle n_{i}=\langle \Phi \vert {\hat {n}}_{i}\vert \Phi \rangle }. Typically t is taken to be positive, and U may be either positive or negative, but is assumed to be positive when considering electronic systems.
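For intuition about the competition between t and U, the smallest nontrivial case, a two-site Hubbard model with two electrons, can be diagonalized by hand or numerically. The sketch below builds the Hamiltonian matrix in the four-state Sz = 0 sector; the signs of the hopping matrix elements assume a fixed fermion ordering (the spectrum does not depend on that choice):

```python
import numpy as np

def two_site_hubbard_spectrum(t=1.0, U=4.0):
    """Exact diagonalization of the two-site Hubbard model at half filling,
    in the Sz = 0 sector with basis {|ud,0>, |0,ud>, |u,d>, |d,u>}."""
    H = np.array([
        [U,   0.0, -t,  -t ],
        [0.0, U,   -t,  -t ],
        [-t,  -t,  0.0, 0.0],
        [-t,  -t,  0.0, 0.0],
    ])
    return np.linalg.eigvalsh(H)

# Ground-state energy is (U - sqrt(U**2 + 16*t**2))/2: about -0.8284 for t=1, U=4.
# As U/t grows it approaches 0 and double occupancy is suppressed (the Mott limit).
print(two_site_hubbard_spectrum(t=1.0, U=4.0))
```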
Without the contribution of the second term, the Hamiltonian reduces to the tight-binding formula of conventional band theory. Including the second term yields a more realistic model that also predicts a transition from conductor to insulator as the ratio of interaction to hopping, {\displaystyle U/t}, is varied. This ratio can be modified by, for example, increasing the inter-atomic spacing, which decreases the magnitude of t without affecting U. In the limit where {\displaystyle U/t\gg 1}, the chain simply resolves into a set of isolated magnetic moments. If {\displaystyle U/t} is not too large, the overlap integral provides for superexchange interactions between neighboring magnetic moments, which may lead to a variety of interesting magnetic correlations, such as ferromagnetic or antiferromagnetic, depending on the model parameters. The one-dimensional Hubbard model was solved by Lieb and Wu using the Bethe ansatz. Essential progress was achieved in the 1990s: a hidden symmetry was discovered, and the scattering matrix, correlation functions, thermodynamics and quantum entanglement were evaluated.

== More complex systems ==

Although the Hubbard model is useful in describing systems such as a 1D chain of hydrogen atoms, more complex systems may experience effects that the Hubbard model does not consider. In general, insulators can be divided into Mott–Hubbard insulators and charge-transfer insulators. A Mott–Hubbard insulator can be described as

{\displaystyle (\mathrm {Ni} ^{2+}\mathrm {O} ^{2-})_{2}\longrightarrow \mathrm {Ni} ^{3+}\mathrm {O} ^{2-}+\mathrm {Ni} ^{1+}\mathrm {O} ^{2-}.}

This can be seen as analogous to the Hubbard model for hydrogen chains, where conduction between unit cells can be described by a transfer integral. However, it is possible for the electrons to exhibit another kind of behavior:

{\displaystyle \mathrm {Ni} ^{2+}\mathrm {O} ^{2-}\longrightarrow \mathrm {Ni} ^{1+}\mathrm {O} ^{1-}.}

This is known as charge transfer and results in charge-transfer insulators. Unlike in Mott–Hubbard insulators, the electron transfer here happens within a single unit cell. Both of these effects may be present and compete in complex ionic systems.

== Numerical treatment ==

The fact that the Hubbard model has not been solved analytically in arbitrary dimensions has led to intense research into numerical methods for these strongly correlated electron systems. One major goal of this research is to determine the low-temperature phase diagram of the model, particularly in two dimensions. Approximate numerical treatment of the Hubbard model on finite systems is possible via various methods. One such method, the Lanczos algorithm, can produce static and dynamic properties of the system. Ground state calculations using this method require the storage of three vectors of the size of the number of states. The number of states scales exponentially with the size of the system, which limits the number of sites in the lattice to about 20 on 21st-century hardware. Projector and finite-temperature auxiliary-field Monte Carlo are two statistical methods that can obtain certain properties of the system. For low temperatures, convergence problems appear that lead to exponential computational effort with decreasing temperature, due to the so-called fermion sign problem.
The Hubbard model can be studied within dynamical mean-field theory (DMFT). This scheme maps the Hubbard Hamiltonian onto a single-site impurity model, a mapping that is formally exact only in infinite dimensions and in finite dimensions corresponds to the exact treatment of all purely local correlations only. DMFT allows one to compute the local Green's function of the Hubbard model for a given {\displaystyle U} and a given temperature. Within DMFT, the evolution of the spectral function can be computed, and the appearance of the upper and lower Hubbard bands can be observed as correlations increase.

== Simulator ==

Stacks of heterogeneous two-dimensional transition metal dichalcogenides (TMDs) have been used to simulate geometries in more than one dimension. Tungsten diselenide and tungsten disulfide were stacked. This created a moiré superlattice consisting of hexagonal supercells (repetition units defined by the relationship of the two materials). Each supercell then behaves as though it were a single atom. The distance between supercells is roughly 100 times that between the atoms within them. This larger distance drastically reduces electron tunneling across supercells. The supercells can be used to form Wigner crystals. Electrodes can be attached to regulate an electric field, which controls how many electrons fill each supercell. The number of electrons per supercell effectively determines which "atom" the lattice simulates: one electron per cell behaves like hydrogen, two per cell like helium, and so on. As of 2022, supercells with up to eight electrons (oxygen) could be simulated. One result of the simulation showed that the difference between metal and insulator is a continuous function of the electric field strength. A "backwards" stacking regime allows the creation of a Chern insulator via the quantum anomalous Hall effect (with the edges of the device acting as a conductor while the interior acts as an insulator). The device functioned at a temperature of 5 kelvin, far above the temperature at which the effect had first been observed.

== See also ==

Anderson impurity model
Bloch's theorem
Electronic band structure
Solid-state physics
Bose–Hubbard model
t-J model
Heisenberg model (quantum)
Dynamical mean-field theory
Stoner criterion

== References ==

== Further reading ==

Hubbard, J. (1963). "Electron Correlations in Narrow Energy Bands". Proceedings of the Royal Society of London. 276 (1365): 238–257. Bibcode:1963RSPSA.276..238H. doi:10.1098/rspa.1963.0204. JSTOR 2414761. S2CID 35439962.
Bach, V.; Lieb, E. H.; Solovej, J. P. (1994). "Generalized Hartree–Fock Theory and the Hubbard Model". Journal of Statistical Physics. 76 (1–2): 3. arXiv:cond-mat/9312044. Bibcode:1994JSP....76....3B. doi:10.1007/BF02188656. S2CID 207143.
Lieb, E. H. (1995). "The Hubbard Model: Some Rigorous Results and Open Problems". Proceedings of the XIth International Congress of Mathematical Physics. International Press. arXiv:cond-mat/9311033. Bibcode:1993cond.mat.11033L.
Gebhard, F. (1997). "Metal–Insulator Transition". The Mott Metal–Insulator Transition: Models and Methods. Springer Tracts in Modern Physics. Vol. 137. Springer. pp. 1–48. ISBN 9783540614814.
Lieb, E. H.; Wu, F. Y. (2003). "The one-dimensional Hubbard model: A reminiscence". Physica A. 321 (1): 1–27. arXiv:cond-mat/0207529. Bibcode:2003PhyA..321....1L. doi:10.1016/S0378-4371(02)01785-5. S2CID 44758937.
Arovas, Daniel P.; Berg, Erez; Kivelson, Steven; Raghu, Srinivas (2022). "The Hubbard Model". Annual Review of Condensed Matter Physics. 13: 239–274. arXiv:2103.12097. Bibcode:2022ARCMP..13..239A. doi:10.1146/annurev-conmatphys-031620-102024.
Qin, Mingpu; Schäfer, Thomas; Andergassen, Sabine; Corboz, Philippe; Gull, Emanuel (2022). "The Hubbard Model: A Computational Perspective". Annual Review of Condensed Matter Physics. 13: 275–302. arXiv:2104.00064. Bibcode:2022ARCMP..13..275Q. doi:10.1146/annurev-conmatphys-090921-033948. S2CID 260725458.
Wikipedia/Hubbard_model
In physics and probability theory, mean-field theory (MFT), or self-consistent field theory, studies the behavior of high-dimensional random (stochastic) models by studying a simpler model that approximates the original by averaging over degrees of freedom (the number of values in the final calculation of a statistic that are free to vary). Such models consider many individual components that interact with each other.

The main idea of MFT is to replace all interactions to any one body with an average or effective interaction, sometimes called a molecular field. This reduces any many-body problem into an effective one-body problem. The ease of solving MFT problems means that some insight into the behavior of the system can be obtained at a lower computational cost. MFT has since been applied to a wide range of fields outside of physics, including statistical inference, graphical models, neuroscience, artificial intelligence, epidemic models, queueing theory, computer-network performance and game theory, as in the quantal response equilibrium.

== Origins ==

The idea first appeared in physics (statistical mechanics) in the work of Pierre Curie and Pierre Weiss to describe phase transitions. MFT has been used in the Bragg–Williams approximation, models on the Bethe lattice, Landau theory, the Curie–Weiss law for magnetic susceptibility, Flory–Huggins solution theory, and Scheutjens–Fleer theory.

Systems with many (sometimes infinite) degrees of freedom are generally hard to solve exactly or compute in closed, analytic form, except for some simple cases (e.g. certain Gaussian random-field theories, the 1D Ising model). Often combinatorial problems arise that make things like computing the partition function of a system difficult. MFT is an approximation method that often makes the original problem solvable and open to calculation, and in some cases MFT may give very accurate approximations.

In field theory, the Hamiltonian may be expanded in terms of the magnitude of fluctuations around the mean of the field. In this context, MFT can be viewed as the "zeroth-order" expansion of the Hamiltonian in fluctuations. Physically, this means that an MFT system has no fluctuations, which coincides with the idea that one is replacing all interactions with a "mean field". Quite often, MFT provides a convenient launch point for studying higher-order fluctuations. For example, when computing the partition function, studying the combinatorics of the interaction terms in the Hamiltonian can sometimes at best produce perturbative results or Feynman diagrams that correct the mean-field approximation.

== Validity ==

In general, dimensionality plays an active role in determining whether a mean-field approach will work for any particular problem. There is sometimes a critical dimension above which MFT is valid and below which it is not. Heuristically, many interactions are replaced in MFT by one effective interaction. So if the field or particle exhibits many random interactions in the original system, they tend to cancel each other out, so the mean effective interaction and MFT will be more accurate. This is true in cases of high dimensionality, when the Hamiltonian includes long-range forces, or when the particles are extended (e.g. polymers). The Ginzburg criterion is the formal expression of how fluctuations render MFT a poor approximation, often depending upon the number of spatial dimensions in the system of interest.
== Formal approach (Hamiltonian) ==

The formal basis for mean-field theory is the Bogoliubov inequality. This inequality states that the free energy of a system with Hamiltonian

{\displaystyle {\mathcal {H}}={\mathcal {H}}_{0}+\Delta {\mathcal {H}}}

has the following upper bound:

{\displaystyle F\leq F_{0}\ {\stackrel {\mathrm {def} }{=}}\ \langle {\mathcal {H}}\rangle _{0}-TS_{0},}

where {\displaystyle S_{0}} is the entropy, and {\displaystyle F} and {\displaystyle F_{0}} are Helmholtz free energies. The average is taken over the equilibrium ensemble of the reference system with Hamiltonian {\displaystyle {\mathcal {H}}_{0}}. In the special case that the reference Hamiltonian is that of a non-interacting system and can thus be written as

{\displaystyle {\mathcal {H}}_{0}=\sum _{i=1}^{N}h_{i}(\xi _{i}),}

where {\displaystyle \xi _{i}} are the degrees of freedom of the individual components of our statistical system (atoms, spins and so forth), one can consider sharpening the upper bound by minimising the right side of the inequality. The minimising reference system is then the "best" approximation to the true system using non-correlated degrees of freedom and is known as the mean field approximation.

For the most common case that the target Hamiltonian contains only pairwise interactions, i.e.,

{\displaystyle {\mathcal {H}}=\sum _{(i,j)\in {\mathcal {P}}}V_{i,j}(\xi _{i},\xi _{j}),}

where {\displaystyle {\mathcal {P}}} is the set of pairs that interact, the minimising procedure can be carried out formally. Define {\displaystyle \operatorname {Tr} _{i}f(\xi _{i})} as the generalized sum of the observable {\displaystyle f} over the degrees of freedom of the single component (a sum for discrete variables, an integral for continuous ones). The approximating free energy is given by

{\displaystyle {\begin{aligned}F_{0}&=\operatorname {Tr} _{1,2,\ldots ,N}{\mathcal {H}}(\xi _{1},\xi _{2},\ldots ,\xi _{N})P_{0}^{(N)}(\xi _{1},\xi _{2},\ldots ,\xi _{N})\\&+kT\,\operatorname {Tr} _{1,2,\ldots ,N}P_{0}^{(N)}(\xi _{1},\xi _{2},\ldots ,\xi _{N})\log P_{0}^{(N)}(\xi _{1},\xi _{2},\ldots ,\xi _{N}),\end{aligned}}}

where {\displaystyle P_{0}^{(N)}(\xi _{1},\xi _{2},\dots ,\xi _{N})} is the probability to find the reference system in the state specified by the variables {\displaystyle (\xi _{1},\xi _{2},\dots ,\xi _{N})}. This probability is given by the normalized Boltzmann factor

{\displaystyle {\begin{aligned}P_{0}^{(N)}(\xi _{1},\xi _{2},\ldots ,\xi _{N})&={\frac {1}{Z_{0}^{(N)}}}e^{-\beta {\mathcal {H}}_{0}(\xi _{1},\xi _{2},\ldots ,\xi _{N})}\\&=\prod _{i=1}^{N}{\frac {1}{Z_{0}}}e^{-\beta h_{i}(\xi _{i})}\ {\stackrel {\mathrm {def} }{=}}\ \prod _{i=1}^{N}P_{0}^{(i)}(\xi _{i}),\end{aligned}}}

where {\displaystyle Z_{0}} is the partition function. Thus

{\displaystyle {\begin{aligned}F_{0}&=\sum _{(i,j)\in {\mathcal {P}}}\operatorname {Tr} _{i,j}V_{i,j}(\xi _{i},\xi _{j})P_{0}^{(i)}(\xi _{i})P_{0}^{(j)}(\xi _{j})\\&+kT\sum _{i=1}^{N}\operatorname {Tr} _{i}P_{0}^{(i)}(\xi _{i})\log P_{0}^{(i)}(\xi _{i}).\end{aligned}}}
In order to minimise, we take the derivative with respect to the single-degree-of-freedom probabilities {\displaystyle P_{0}^{(i)}} using a Lagrange multiplier to ensure proper normalization. The end result is the set of self-consistency equations

{\displaystyle P_{0}^{(i)}(\xi _{i})={\frac {1}{Z_{0}}}e^{-\beta h_{i}^{MF}(\xi _{i})},\quad i=1,2,\ldots ,N,}

where the mean field is given by

{\displaystyle h_{i}^{\text{MF}}(\xi _{i})=\sum _{\{j\mid (i,j)\in {\mathcal {P}}\}}\operatorname {Tr} _{j}V_{i,j}(\xi _{i},\xi _{j})P_{0}^{(j)}(\xi _{j}).}

== Applications ==

Mean field theory can be applied to a number of physical systems so as to study phenomena such as phase transitions.

=== Ising model ===

==== Formal derivation ====

The Bogoliubov inequality, shown above, can be used to find the dynamics of a mean field model of the two-dimensional Ising lattice. A magnetisation function can be calculated from the resultant approximate free energy. The first step is choosing a more tractable approximation of the true Hamiltonian. Using a non-interacting or effective-field Hamiltonian

{\displaystyle -m\sum _{i}s_{i},}

the variational free energy is

{\displaystyle F_{V}=F_{0}+\left\langle \left(-J\sum s_{i}s_{j}-h\sum s_{i}\right)-\left(-m\sum s_{i}\right)\right\rangle _{0}.}

By the Bogoliubov inequality, simplifying this quantity and minimising it with respect to the effective field m yields the best approximation to the actual magnetisation. The minimiser is

{\displaystyle m=J\sum _{j}\langle s_{j}\rangle _{0}+h,}

i.e. the effective field equals the external field plus the mean interaction with the neighbouring spins. Since the ensemble average of a spin in the effective-field system is {\displaystyle \langle s_{i}\rangle _{0}=\tanh(\beta m)}, this gives the self-consistency condition {\displaystyle m=zJ\tanh(\beta m)+h} for a lattice with z nearest neighbours; equivalently, the magnetisation {\displaystyle M=\langle s_{i}\rangle _{0}} satisfies {\displaystyle M=\tanh {\big (}\beta (zJM+h){\big )}}. Equating the effective field felt by all spins to a mean spin value relates the variational approach to the suppression of fluctuations. The physical interpretation of the magnetisation function is then a field of mean values for individual spins.

==== Non-interacting spins approximation ====

Consider the Ising model on a {\displaystyle d}-dimensional lattice. The Hamiltonian is given by

{\displaystyle H=-J\sum _{\langle i,j\rangle }s_{i}s_{j}-h\sum _{i}s_{i},}

where {\displaystyle \sum _{\langle i,j\rangle }} indicates summation over the pairs of nearest neighbors {\displaystyle \langle i,j\rangle }, and {\displaystyle s_{i},s_{j}=\pm 1} are neighboring Ising spins.

Let us transform our spin variable by introducing the fluctuation from its mean value {\displaystyle m_{i}\equiv \langle s_{i}\rangle }. We may rewrite the Hamiltonian as

{\displaystyle H=-J\sum _{\langle i,j\rangle }(m_{i}+\delta s_{i})(m_{j}+\delta s_{j})-h\sum _{i}s_{i},}

where we define {\displaystyle \delta s_{i}\equiv s_{i}-m_{i}}; this is the fluctuation of the spin.
If we expand the right side, we obtain one term that is entirely dependent on the mean values of the spins and independent of the spin configurations. This is the trivial term, which does not affect the statistical properties of the system. The next term is the one involving the product of the mean value of the spin and the fluctuation value. Finally, the last term involves a product of two fluctuation values. The mean field approximation consists of neglecting this second-order fluctuation term:

{\displaystyle H\approx H^{\text{MF}}\equiv -J\sum _{\langle i,j\rangle }(m_{i}m_{j}+m_{i}\delta s_{j}+m_{j}\delta s_{i})-h\sum _{i}s_{i}.}

These fluctuations are enhanced at low dimensions, making MFT a better approximation for high dimensions.

Again, the summand can be re-expanded. In addition, we expect that the mean value of each spin is site-independent, since the lattice is translationally invariant. This yields

{\displaystyle H^{\text{MF}}=-J\sum _{\langle i,j\rangle }{\big (}m^{2}+2m(s_{i}-m){\big )}-h\sum _{i}s_{i}.}

The summation over neighboring spins can be rewritten as {\displaystyle \sum _{\langle i,j\rangle }={\frac {1}{2}}\sum _{i}\sum _{j\in nn(i)}}, where {\displaystyle nn(i)} means "nearest neighbors of {\displaystyle i}", and the prefactor 1/2 avoids double counting, since each bond appears twice in the double sum. Simplifying leads to the final expression

{\displaystyle H^{\text{MF}}={\frac {Jm^{2}Nz}{2}}-\underbrace {(h+mJz)} _{h^{\text{eff.}}}\sum _{i}s_{i},}

where {\displaystyle z} is the coordination number. At this point, the Ising Hamiltonian has been decoupled into a sum of one-body Hamiltonians with an effective mean field {\displaystyle h^{\text{eff.}}=h+Jzm}, which is the sum of the external field {\displaystyle h} and of the mean field induced by the neighboring spins. It is worth noting that this mean field directly depends on the number of nearest neighbors and thus on the dimension of the system (for instance, for a hypercubic lattice of dimension {\displaystyle d}, {\displaystyle z=2d}).

Substituting this Hamiltonian into the partition function and solving the effective one-body problem, we obtain

{\displaystyle Z=e^{-{\frac {\beta Jm^{2}Nz}{2}}}\left[2\cosh \left({\frac {h+mJz}{k_{\text{B}}T}}\right)\right]^{N},}

where {\displaystyle N} is the number of lattice sites. This is a closed and exact expression for the partition function of the mean-field system. We may obtain the free energy of the system and calculate critical exponents. In particular, we can obtain the magnetization {\displaystyle m} as a function of {\displaystyle h^{\text{eff.}}}. We thus have two equations between {\displaystyle m} and {\displaystyle h^{\text{eff.}}}, allowing us to determine {\displaystyle m} as a function of temperature. This leads to the following observation:

For temperatures greater than a certain value {\displaystyle T_{\text{c}}}, the only solution is {\displaystyle m=0}; the system is paramagnetic.
For {\displaystyle T<T_{\text{c}}}, there are two non-zero solutions {\displaystyle m=\pm m_{0}}; the system is ferromagnetic.

The critical temperature is given by {\displaystyle T_{\text{c}}={\frac {Jz}{k_{B}}}}, which shows that MFT can account for the ferromagnetic phase transition.
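A short numerical sketch of this self-consistency condition, using illustrative parameters and simple fixed-point iteration (one of several ways to solve it): at zero external field, m = tanh(βzJm) has only the m = 0 solution above Tc = zJ/kB and a nonzero solution below it.

```python
import numpy as np

def mft_magnetization(T, z=4, J=1.0, kB=1.0, h=0.0, iters=2000):
    """Solve the mean-field self-consistency m = tanh(beta*(z*J*m + h))
    by fixed-point iteration from a small positive seed."""
    beta = 1.0 / (kB * T)
    m = 0.1
    for _ in range(iters):
        m = np.tanh(beta * (z * J * m + h))
    return m

# With z = 4 and J = 1, Tc = z*J/kB = 4: m is nonzero below Tc and ~0 above it.
for T in (2.0, 3.5, 4.5, 6.0):
    print(f"T = {T}: m = {mft_magnetization(T):.4f}")
```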
=== Application to other systems ===

Similarly, MFT can be applied to other types of Hamiltonian, as in the following cases:

To study the metal–superconductor transition. In this case, the analog of the magnetization is the superconducting gap {\displaystyle \Delta }.
The molecular field of a liquid crystal that emerges when the Laplacian of the director field is non-zero.
To determine the optimal amino acid side-chain packing given a fixed protein backbone in protein structure prediction (see Self-consistent mean field (biology)).
To determine the elastic properties of a composite material.

Variational minimisation like mean field theory can also be used in statistical inference.

== Extension to time-dependent mean fields ==

In mean field theory, the mean field appearing in the single-site problem is a time-independent scalar or vector quantity. However, this is not always the case: in a variant of mean field theory called dynamical mean field theory (DMFT), the mean field becomes a time-dependent quantity. For instance, DMFT can be applied to the Hubbard model to study the metal–Mott-insulator transition.

== See also ==

Dynamical mean field theory
Mean field game theory

== References ==
Wikipedia/Mean-field_theory
In engineering, physics, and chemistry, the study of transport phenomena concerns the exchange of mass, energy, charge, momentum and angular momentum between observed and studied systems. While it draws from fields as diverse as continuum mechanics and thermodynamics, it places a heavy emphasis on the commonalities between the topics covered. Mass, momentum, and heat transport all share a very similar mathematical framework, and the parallels between them are exploited in the study of transport phenomena to draw deep mathematical connections that often provide very useful tools in the analysis of one field that are directly derived from the others.

The fundamental analysis in all three subfields of mass, heat, and momentum transfer is often grounded in the simple principle that the total sum of the quantities being studied must be conserved by the system and its environment. Thus, the different phenomena that lead to transport are each considered individually with the knowledge that the sum of their contributions must equal zero. This principle is useful for calculating many relevant quantities. For example, in fluid mechanics, a common use of transport analysis is to determine the velocity profile of a fluid flowing through a rigid volume.

Transport phenomena are ubiquitous throughout the engineering disciplines. Some of the most common examples of transport analysis in engineering are seen in the fields of process, chemical, biological, and mechanical engineering, but the subject is a fundamental component of the curriculum in all disciplines involved in any way with fluid mechanics, heat transfer, and mass transfer. It is now considered to be a part of the engineering discipline as much as thermodynamics, mechanics, and electromagnetism.

Transport phenomena encompass all agents of physical change in the universe. Moreover, they are considered to be fundamental building blocks in the development of the universe, and responsible for the success of all life on Earth. However, the scope here is limited to the relationship of transport phenomena to artificial engineered systems.

== Overview ==

In physics, transport phenomena are all irreversible processes of statistical nature stemming from the random continuous motion of molecules, mostly observed in fluids. Every aspect of transport phenomena is grounded in two primary concepts: the conservation laws and the constitutive equations. The conservation laws, which in the context of transport phenomena are formulated as continuity equations, describe how the quantity being studied must be conserved. The constitutive equations describe how the quantity in question responds to various stimuli via transport. Prominent examples include Fourier's law of heat conduction and the Navier–Stokes equations, which describe, respectively, the response of heat flux to temperature gradients and the relationship between fluid flux and the forces applied to the fluid. These equations also demonstrate the deep connection between transport phenomena and thermodynamics, a connection that explains why transport phenomena are irreversible. Almost all of these physical phenomena ultimately involve systems seeking their lowest energy state in keeping with the principle of minimum energy. As they approach this state, they tend to achieve true thermodynamic equilibrium, at which point there are no longer any driving forces in the system and transport ceases.
The various aspects of such equilibrium are directly connected to a specific transport: heat transfer is the system's attempt to achieve thermal equilibrium with its environment, just as mass and momentum transport move the system towards chemical and mechanical equilibrium. Examples of transport processes include heat conduction (energy transfer), fluid flow (momentum transfer), molecular diffusion (mass transfer), radiation, and electric charge transfer in semiconductors.

Transport phenomena have wide application. For example, in solid state physics, the motion and interaction of electrons, holes and phonons are studied under "transport phenomena". Another example is in biomedical engineering, where some transport phenomena of interest are thermoregulation, perfusion, and microfluidics. In chemical engineering, transport phenomena are studied in reactor design, analysis of molecular or diffusive transport mechanisms, and metallurgy.

The transport of mass, energy, and momentum can be affected by the presence of external sources:

An odor dissipates more slowly (and may intensify) when the source of the odor remains present.
The rate of cooling of a solid that is conducting heat depends on whether a heat source is applied.
The gravitational force acting on a rain drop counteracts the resistance or drag imparted by the surrounding air.

== Commonalities among phenomena ==

An important principle in the study of transport phenomena is analogy between phenomena.

=== Diffusion ===

There are some notable similarities in the equations for momentum, energy, and mass transfer, which can all be transported by diffusion, as illustrated by the following examples:

Mass: the spreading and dissipation of odors in air is an example of mass diffusion.
Energy: the conduction of heat in a solid material is an example of heat diffusion.
Momentum: the drag experienced by a rain drop as it falls in the atmosphere is an example of momentum diffusion (the rain drop loses momentum to the surrounding air through viscous stresses and decelerates).

The molecular transfer equations of Newton's law for fluid momentum, Fourier's law for heat, and Fick's law for mass are very similar, and one can convert from one transport coefficient to another in order to compare all three different transport phenomena (a numerical sketch of this shared framework follows after this section). A great deal of effort has been devoted in the literature to developing analogies among these three transport processes for turbulent transfer so as to allow prediction of one from any of the others. The Reynolds analogy assumes that the turbulent diffusivities are all equal and that the molecular diffusivities of momentum (μ/ρ) and mass (DAB) are negligible compared to the turbulent diffusivities. When liquids are present and/or drag is present, the analogy is not valid. Other analogies, such as von Kármán's and Prandtl's, usually result in poor relations. The most successful and most widely used analogy is the Chilton and Colburn J-factor analogy. This analogy is based on experimental data for gases and liquids in both the laminar and turbulent regimes. Although it is based on experimental data, it can be shown to satisfy the exact solution derived from laminar flow over a flat plate. All of this information is used to predict transfer of mass.
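The shared framework can be made concrete with a single numerical kernel: the explicit finite-difference sketch below marches the one-dimensional diffusion equation forward in time, and the same update describes mass diffusion, heat conduction, or momentum diffusion once the appropriate diffusivity (DAB, α = k/ρcp, or ν = μ/ρ) is supplied. The scheme and parameter values here are illustrative:

```python
import numpy as np

def diffuse_1d(profile, diffusivity, dx, dt, steps):
    """March d(phi)/dt = diffusivity * d2(phi)/dx2 with an explicit (FTCS)
    scheme; stable when diffusivity*dt/dx**2 <= 0.5. End values held fixed."""
    phi = profile.copy()
    r = diffusivity * dt / dx**2
    for _ in range(steps):
        phi[1:-1] += r * (phi[2:] - 2.0 * phi[1:-1] + phi[:-2])
    return phi

# An initial concentration (or temperature, or velocity) spike spreading out.
x = np.linspace(0.0, 1.0, 101)
phi0 = np.where(np.abs(x - 0.5) < 0.05, 1.0, 0.0)
phi = diffuse_1d(phi0, diffusivity=1e-4, dx=0.01, dt=0.25, steps=400)
print(phi.max())  # the peak decays as the profile spreads
```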
=== Onsager reciprocal relations ===

In fluid systems described in terms of temperature, matter density, and pressure, it is known that temperature differences lead to heat flows from the warmer to the colder parts of the system; similarly, pressure differences will lead to matter flow from high-pressure to low-pressure regions. What is remarkable is the observation that, when both pressure and temperature vary, temperature differences at constant pressure can cause matter flow (as in convection) and pressure differences at constant temperature can cause heat flow (a "reciprocal relation"). The heat flow per unit of pressure difference and the density (matter) flow per unit of temperature difference are equal. This equality was shown to be necessary by Lars Onsager using statistical mechanics, as a consequence of the time reversibility of microscopic dynamics. The theory developed by Onsager is much more general than this example and capable of treating more than two thermodynamic forces at once.

== Momentum transfer ==

In momentum transfer, the fluid is treated as a continuous distribution of matter. The study of momentum transfer, or fluid mechanics, can be divided into two branches: fluid statics (fluids at rest) and fluid dynamics (fluids in motion). When a fluid is flowing in the x-direction parallel to a solid surface, the fluid has x-directed momentum, and its concentration is υxρ. By random diffusion of molecules, there is an exchange of molecules in the z-direction. Hence the x-directed momentum is transferred in the z-direction from the faster- to the slower-moving layer. The equation for momentum transfer is Newton's law of viscosity, written as follows:

{\displaystyle \tau _{zx}=-\rho \nu {\frac {\partial \upsilon _{x}}{\partial z}},}

where τzx is the flux of x-directed momentum in the z-direction, ν = μ/ρ is the momentum diffusivity (kinematic viscosity), z is the distance of transport or diffusion, ρ is the density, and μ is the dynamic viscosity. Newton's law of viscosity is the simplest relationship between the flux of momentum and the velocity gradient. It may be useful to note that this is an unconventional use of the symbol τzx; the indices are reversed as compared with standard usage in solid mechanics, and the sign is reversed.

== Mass transfer ==

When a system contains two or more components whose concentrations vary from point to point, there is a natural tendency for mass to be transferred, minimizing any concentration difference within the system. Mass transfer in a system is governed by Fick's first law: diffusion flux from higher concentration to lower concentration is proportional to the gradient of the concentration of the substance and the diffusivity of the substance in the medium. Mass transfer can take place due to different driving forces, some of which are:

Mass can be transferred by the action of a pressure gradient (pressure diffusion).
Forced diffusion occurs because of the action of some external force.
Diffusion can be caused by temperature gradients (thermal diffusion).
Diffusion can be caused by differences in chemical potential.

This can be compared to Fick's law of diffusion for a species A in a binary mixture consisting of A and B:

{\displaystyle J_{Ay}=-D_{AB}{\frac {\partial C_{A}}{\partial y}},}

where DAB is the diffusivity of A in B.

== Heat transfer ==

Many important engineered systems involve heat transfer. Some examples are the heating and cooling of process streams, phase changes, distillation, etc.
The basic principle is Fourier's law, which is expressed as follows for a static system:

{\displaystyle q''=-k{\frac {dT}{dx}}.}

The net flux of heat through a system equals the conductivity times the rate of change of temperature with respect to position. For convective transport involving turbulent flow, complex geometries, or difficult boundary conditions, the heat transfer may be represented by a heat transfer coefficient:

{\displaystyle Q=h\cdot A\cdot \Delta T,}

where A is the surface area, {\displaystyle \Delta T} is the temperature driving force, Q is the heat flow per unit time, and h is the heat transfer coefficient. Within heat transfer, two principal types of convection can occur:

Forced convection can occur in both laminar and turbulent flow. In the situation of laminar flow in circular tubes, several dimensionless numbers are used, such as the Nusselt number, Reynolds number, and Prandtl number. The commonly used equation is {\displaystyle Nu_{a}={\frac {h_{a}D}{k}}}.
Natural or free convection is a function of the Grashof and Prandtl numbers. The complexities of free convection heat transfer make it necessary to mainly use empirical relations from experimental data.

Heat transfer is analyzed in packed beds, nuclear reactors and heat exchangers.

== Heat and mass transfer analogy ==

The heat and mass analogy allows solutions for mass transfer problems to be obtained from known solutions to heat transfer problems. It arises from the similarity of the non-dimensional governing equations for heat and mass transfer.

=== Derivation ===

The non-dimensional energy equation for fluid flow in a boundary layer can simplify to the following, when heating from viscous dissipation and heat generation can be neglected:

{\displaystyle u^{*}{\frac {\partial T^{*}}{\partial x^{*}}}+v^{*}{\frac {\partial T^{*}}{\partial y^{*}}}={\frac {1}{Re_{L}Pr}}{\frac {\partial ^{2}T^{*}}{\partial y^{*2}}},}

where {\displaystyle u^{*}} and {\displaystyle v^{*}} are the velocities in the x and y directions respectively, normalized by the free stream velocity, {\displaystyle x^{*}} and {\displaystyle y^{*}} are the x and y coordinates non-dimensionalized by a relevant length scale, {\displaystyle Re_{L}} is the Reynolds number, {\displaystyle Pr} is the Prandtl number, and {\displaystyle T^{*}} is the non-dimensional temperature, defined by the local, minimum, and maximum temperatures:

{\displaystyle T^{*}={\frac {T-T_{min}}{T_{max}-T_{min}}}.}

The non-dimensional species transport equation for fluid flow in a boundary layer can be given as the following, assuming no bulk species generation:

{\displaystyle u^{*}{\frac {\partial C_{A}^{*}}{\partial x^{*}}}+v^{*}{\frac {\partial C_{A}^{*}}{\partial y^{*}}}={\frac {1}{Re_{L}Sc}}{\frac {\partial ^{2}C_{A}^{*}}{\partial y^{*2}}},}

where {\displaystyle C_{A}^{*}} is the non-dimensional concentration, and {\displaystyle Sc} is the Schmidt number.

Transport of heat is driven by temperature differences, while transport of species is due to concentration differences. They differ in the relative diffusion of their transport compared to the diffusion of momentum.
For heat, the comparison is between viscous diffusivity ( ν {\displaystyle {\nu }} ) and thermal diffusivity ( α {\displaystyle {\alpha }} ), given by the Prandtl number. Meanwhile, for mass transfer, the comparison is between viscous diffusivity ( ν {\displaystyle {\nu }} ) and mass diffusivity ( D {\displaystyle {D}} ), given by the Schmidt number. In some cases direct analytic solutions can be found from these equations for the Nusselt and Sherwood numbers. In cases where experimental results are used, one can assume these equations underlie the observed transport. At an interface, the boundary conditions for both equations are also similar. For heat transfer at an interface, the no-slip condition allows us to equate conduction with convection, thus equating Fourier's law and Newton's law of cooling: q ″ = k d T d y = h ( T s − T b ) {\displaystyle q''=k{\frac {dT}{dy}}=h(T_{s}-T_{b})} Where q″ is the heat flux, k {\displaystyle {k}} is the thermal conductivity, h {\displaystyle {h}} is the heat transfer coefficient, and the subscripts s {\displaystyle {s}} and b {\displaystyle {b}} compare the surface and bulk values respectively. For mass transfer at an interface, we can equate Fick's law with Newton's law for convection, yielding: J = D d C d y = h m ( C s − C b ) {\displaystyle J=D{\frac {dC}{dy}}=h_{m}(C_{s}-C_{b})} Where J {\displaystyle {J}} is the mass flux [kg/(s⋅ m 2 {\displaystyle {m^{2}}} )], D {\displaystyle {D}} is the diffusivity of species A in fluid B, and h m {\displaystyle {h_{m}}} is the mass transfer coefficient. As we can see, q ″ {\displaystyle {q''}} and J {\displaystyle {J}} are analogous, k {\displaystyle {k}} and D {\displaystyle {D}} are analogous, while T {\displaystyle {T}} and C {\displaystyle {C}} are analogous. === Implementing the Analogy === Because the Nu and Sh equations are derived from these analogous governing equations, one can directly swap the Nu and Sh and the Pr and Sc numbers to convert these equations between mass and heat. In many situations, such as flow over a flat plate, the Nu and Sh numbers are functions of the Pr and Sc numbers to some coefficient n {\displaystyle n} . Therefore, one can directly calculate these numbers from one another using: N u S h = P r n S c n {\displaystyle {\frac {Nu}{Sh}}={\frac {Pr^{n}}{Sc^{n}}}} A value of n = 1/3 can be used in most cases, which comes from the analytical solution for the Nusselt number for laminar flow over a flat plate. For best accuracy, n should be adjusted where correlations have a different exponent. We can take this further by substituting into this equation the definitions of the heat transfer coefficient, mass transfer coefficient, and Lewis number, yielding: h h m = k D L e n = ρ C p L e 1 − n {\displaystyle {\frac {h}{h_{m}}}={\frac {k}{DLe^{n}}}=\rho C_{p}Le^{1-n}} For fully developed turbulent flow, with n=1/3, this becomes the Chilton–Colburn J-factor analogy. Said analogy also relates viscous forces and heat transfer, like the Reynolds analogy.
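A minimal numerical sketch of this conversion (Python; the flat-plate correlation Nu = 0.664 Re^(1/2) Pr^(1/3) and the property values for air and dilute water vapor are illustrative assumptions, not reference data):

```python
import math

# Convert a heat transfer correlation to mass transfer via the analogy:
# Nu/Sh = Pr^n / Sc^n with n = 1/3 (laminar flow over a flat plate).
# Property values are illustrative figures for air / dilute water vapor.

Re = 5.0e4            # Reynolds number (laminar flat-plate regime)
Pr = 0.71             # Prandtl number of air
Sc = 0.60             # Schmidt number of water vapor in air
n = 1.0 / 3.0

Nu = 0.664 * math.sqrt(Re) * Pr**n   # average Nusselt number correlation
Sh = Nu * (Sc / Pr)**n               # analogy: swap Pr for Sc

# Ratio of transfer coefficients via the Lewis number Le = Sc/Pr:
rho, cp = 1.2, 1007.0                # air density [kg/m^3] and c_p [J/(kg K)]
Le = Sc / Pr
h_over_hm = rho * cp * Le**(1 - n)   # h / h_m  [J/(m^3 K)]

print(f"Nu = {Nu:.1f}, Sh = {Sh:.1f}")
print(f"h/h_m = {h_over_hm:.0f} J/(m^3 K)")
```

Because Sc and Pr are of comparable size for this pair, the Sherwood number comes out close to the Nusselt number, which is why the analogy is so convenient for air-water vapor systems.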
=== Limitations === The analogy between heat transfer and mass transfer is strictly limited to binary diffusion in dilute (ideal) solutions for which the mass transfer rates are low enough that mass transfer has no effect on the velocity field. The concentration of the diffusing species must be low enough that the chemical potential gradient is accurately represented by the concentration gradient (thus, the analogy has limited application to concentrated liquid solutions). When the rate of mass transfer is high or the concentration of the diffusing species is not low, corrections to the coefficients predicted by the low-rate analogy can sometimes help. Further, in multicomponent mixtures, the transport of one species is affected by the chemical potential gradients of other species. The heat and mass analogy may also break down in cases where the governing equations differ substantially. For instance, situations with substantial contributions from generation terms in the flow, such as bulk heat generation or bulk chemical reactions, may cause solutions to diverge. === Applications of the Heat-Mass Analogy === The analogy is useful both for using heat and mass transport to predict one another and for understanding systems which experience simultaneous heat and mass transfer. For example, predicting heat transfer coefficients around turbine blades is challenging and is often done by measuring the evaporation of a volatile compound and using the analogy. Many systems also experience simultaneous mass and heat transfer, and particularly common examples occur in processes with phase change, as the enthalpy of phase change often substantially influences heat transfer. Such examples include: evaporation at a water surface, transport of vapor in the air gap above a membrane distillation desalination membrane, and HVAC dehumidification equipment that combines heat transfer and selective membranes. == Applications == === Pollution === The study of transport processes is relevant for understanding the release and distribution of pollutants into the environment. In particular, accurate modeling can inform mitigation strategies. Examples include the control of surface water pollution from urban runoff, and policies intended to reduce the copper content of vehicle brake pads in the U.S. == See also == Constitutive equation Continuity equation Wave propagation Pulse Action potential Bioheat transfer == References == == External links == Transport Phenomena Archive Archived 2017-10-08 at the Wayback Machine in the Teaching Archives of the Materials Digital Library Pathway "Some Classical Transport Phenomena Problems with Solutions – Fluid Mechanics". "Some Classical Transport Phenomena Problems with Solutions – Heat Transfer". "Some Classical Transport Phenomena Problems with Solutions – Mass Transfer".
Wikipedia/Transport_theory_(statistical_physics)
The Annual Review of Materials Research is a peer-reviewed journal that publishes review articles about materials science. It has been published by the nonprofit Annual Reviews since 1971, when it was first released under the title the Annual Review of Materials Science. Four people have served as editors, with the current editor Ram Seshadri stepping into the position in 2024. It has an impact factor of 10.6 as of 2024. As of 2023, it is being published as open access, under the Subscribe to Open model. == History == The Annual Review of Materials Science was first published in 1971 by the nonprofit publisher Annual Reviews, making it their sixteenth journal. Its first editor was Robert Huggins. In 2001, its name was changed to the current form, the Annual Review of Materials Research. The name change was intended "to better reflect the broad appeal that materials research has for so many diverse groups of scientists and not simply those who identify themselves with the academic discipline of materials science." As of 2020, it was published both in print and electronically. It defines its scope as covering significant developments in the field of materials science, including methodologies for studying materials and materials phenomena. As of 2024, Journal Citation Reports gives the journal a 2023 impact factor of 10.6, ranking it forty-ninth of 438 titles in the category "Materials Science, Multidisciplinary". It is abstracted and indexed in Scopus, Science Citation Index Expanded, Civil Engineering Abstracts, INSPEC, and Academic Search, among others. == Editorial processes == The Annual Review of Materials Research is helmed by the editor or the co-editors. The editor is assisted by the editorial committee, which includes associate editors, regular members, and occasionally guest editors. Guest members participate at the invitation of the editor, and serve terms of one year. All other members of the editorial committee are appointed by the Annual Reviews board of directors and serve five-year terms. The editorial committee determines which topics should be included in each volume and solicits reviews from qualified authors. Unsolicited manuscripts are not accepted. Peer review of accepted manuscripts is undertaken by the editorial committee. === Editors of volumes === Dates indicate publication years in which someone was credited as a lead editor or co-editor of a journal volume. The planning process for a volume begins well before the volume appears, so appointment to the position of lead editor generally occurred prior to the first year shown here. An editor who has retired or died may be credited as a lead editor of a volume that they helped to plan, even if it is published after their retirement or death. Robert Huggins (1971–1993) Elton N. Kaufmann (1994–2000) David R. Clarke (2001–2024) Ram Seshadri (2025–) === Current editorial committee === As of 2024, the editorial committee consists of the editor and additional appointed members. == See also == List of materials science journals == References ==
Wikipedia/Annual_Review_of_Materials_Science
In solid-state physics, the nearly free electron model (or NFE model and quasi-free electron model) is a quantum mechanical model of physical properties of electrons that can move almost freely through the crystal lattice of a solid. The model is closely related to the more conceptual empty lattice approximation. The model enables understanding and calculation of the electronic band structures, especially of metals. This model is an immediate improvement of the free electron model, in which the metal was considered as a non-interacting electron gas and the ions were neglected completely. == Mathematical formulation == The nearly free electron model is a modification of the free-electron gas model which includes a weak periodic perturbation meant to model the interaction between the conduction electrons and the ions in a crystalline solid. This model, like the free-electron model, does not take into account electron–electron interactions; that is, the independent electron approximation is still in effect. As shown by Bloch's theorem, introducing a periodic potential into the Schrödinger equation results in a wave function of the form ψ k ( r ) = u k ( r ) e i k ⋅ r {\displaystyle \psi _{\mathbf {k} }(\mathbf {r} )=u_{\mathbf {k} }(\mathbf {r} )e^{i\mathbf {k} \cdot \mathbf {r} }} where the function u k {\displaystyle u_{\mathbf {k} }} has the same periodicity as the lattice: u k ( r ) = u k ( r + T ) {\displaystyle u_{\mathbf {k} }(\mathbf {r} )=u_{\mathbf {k} }(\mathbf {r} +\mathbf {T} )} (where T {\displaystyle T} is a lattice translation vector). Because it is a nearly free electron approximation we can assume that u k ( r ) ≈ 1 Ω r {\displaystyle u_{\mathbf {k} }(\mathbf {r} )\approx {\frac {1}{\sqrt {\Omega _{r}}}}} where Ω r {\displaystyle \Omega _{r}} denotes the normalization volume of the system, so that the plane-wave states are normalized. A solution of this form can be plugged into the Schrödinger equation, resulting in the central equation: ( λ k − ε ) C k + ∑ G U G C k − G = 0 {\displaystyle (\lambda _{\mathbf {k} }-\varepsilon )C_{\mathbf {k} }+\sum _{\mathbf {G} }U_{\mathbf {G} }C_{\mathbf {k} -\mathbf {G} }=0} where ε {\displaystyle \varepsilon } is the total energy, and the kinetic energy λ k {\displaystyle \lambda _{\mathbf {k} }} is characterized by λ k ψ k ( r ) = − ℏ 2 2 m ∇ 2 ψ k ( r ) = − ℏ 2 2 m ∇ 2 ( u k ( r ) e i k ⋅ r ) {\displaystyle \lambda _{\mathbf {k} }\psi _{\mathbf {k} }(\mathbf {r} )=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}\psi _{\mathbf {k} }(\mathbf {r} )=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}(u_{\mathbf {k} }(\mathbf {r} )e^{i\mathbf {k} \cdot \mathbf {r} })} which, after dividing by ψ k ( r ) {\displaystyle \psi _{\mathbf {k} }(\mathbf {r} )} , reduces to λ k = ℏ 2 k 2 2 m {\displaystyle \lambda _{\mathbf {k} }={\frac {\hbar ^{2}k^{2}}{2m}}} if we assume that u k ( r ) {\displaystyle u_{\mathbf {k} }(\mathbf {r} )} is almost constant and ∇ 2 u k ( r ) ≪ k 2 {\displaystyle \nabla ^{2}u_{\mathbf {k} }(\mathbf {r} )\ll k^{2}} .
The reciprocal parameters C k {\displaystyle C_{\mathbf {k} }} and U G {\displaystyle U_{\mathbf {G} }} are the Fourier coefficients of the wave function ψ ( r ) {\displaystyle \psi (\mathbf {r} )} and the screened potential energy U ( r ) {\displaystyle U(\mathbf {r} )} , respectively: U ( r ) = ∑ G U G e i G ⋅ r {\displaystyle U(\mathbf {r} )=\sum _{\mathbf {G} }U_{\mathbf {G} }e^{i\mathbf {G} \cdot \mathbf {r} }} ψ ( r ) = ∑ k C k e i k ⋅ r {\displaystyle \psi (\mathbf {r} )=\sum _{\mathbf {k} }C_{\mathbf {k} }e^{i\mathbf {k} \cdot \mathbf {r} }} The vectors G {\displaystyle \mathbf {G} } are the reciprocal lattice vectors, and the discrete values of k {\displaystyle \mathbf {k} } are determined by the boundary conditions of the lattice under consideration. Before doing the perturbation analysis, let us first consider the base case to which the perturbation is applied. Here, the base case is U ( x ) = 0 {\displaystyle U(x)=0} , and therefore all the Fourier coefficients of the potential are also zero. In this case the central equation reduces to the form ( λ k − ε ) C k = 0 {\displaystyle (\lambda _{\mathbf {k} }-\varepsilon )C_{\mathbf {k} }=0} This identity means that for each k {\displaystyle \mathbf {k} } , one of the two following cases must hold: C k = 0 {\displaystyle C_{\mathbf {k} }=0} , λ k = ε {\displaystyle \lambda _{\mathbf {k} }=\varepsilon } If ε {\displaystyle \varepsilon } is a non-degenerate energy level, then the second case occurs for only one value of k {\displaystyle \mathbf {k} } , while for the remaining k {\displaystyle \mathbf {k} } , the Fourier expansion coefficient C k {\displaystyle C_{\mathbf {k} }} is zero. In this case, the standard free electron gas result is retrieved: ψ k ∝ e i k ⋅ r {\displaystyle \psi _{\mathbf {k} }\propto e^{i\mathbf {k} \cdot \mathbf {r} }} If ε {\displaystyle \varepsilon } is a degenerate energy level, there will be a set of wavevectors k 1 , … , k m {\displaystyle \mathbf {k} _{1},\dots ,\mathbf {k} _{m}} with λ 1 = ⋯ = λ m = ε {\displaystyle \lambda _{1}=\dots =\lambda _{m}=\varepsilon } . Then there will be m {\displaystyle m} independent plane wave solutions of which any linear combination is also a solution: ψ ∝ ∑ j = 1 m A j e i k j ⋅ r {\displaystyle \psi \propto \sum _{j=1}^{m}A_{j}e^{i\mathbf {k} _{j}\cdot \mathbf {r} }} Now let U {\displaystyle U} be nonzero and small. Non-degenerate and degenerate perturbation theory, respectively, can be applied in these two cases to solve for the Fourier coefficients C k {\displaystyle C_{\mathbf {k} }} of the wavefunction (correct to first order in U {\displaystyle U} ) and the energy eigenvalue ε {\displaystyle \varepsilon } (correct to second order in U {\displaystyle U} ). An important result of this derivation is that there is no first-order shift in the energy ε {\displaystyle \varepsilon } in the case of no degeneracy, while there is in the case of degeneracy (and near-degeneracy), implying that the latter case is more important in this analysis. In particular, at the Brillouin zone boundary (or, equivalently, at any point on a Bragg plane), one finds a twofold energy degeneracy that results in a shift in energy given by: ε = λ k ± | U G | {\displaystyle \varepsilon =\lambda _{\mathbf {k} }\pm |U_{\mathbf {G} }|} . This energy gap between Brillouin zones is known as the band gap, with a magnitude of 2 | U G | {\displaystyle 2|U_{\mathbf {G} }|} .
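The zone-boundary gap can be checked numerically by truncating the central equation to a finite plane-wave basis and diagonalizing. A minimal sketch (Python with NumPy; the units, lattice constant, potential strength, and basis size are arbitrary assumptions chosen for illustration):

```python
import numpy as np

# 1D nearly free electron model: truncate the central equation to a finite
# plane-wave basis {k - G} and diagonalize. Units: hbar^2/(2m) = 1, lattice
# constant a = pi, so reciprocal lattice vectors are G = 2n and the zone
# boundary sits at k = pi/a = 1. A single Fourier component U_G = U is an
# assumed toy potential.

U = 0.2                       # strength of the periodic potential (assumed)
G = 2.0 * np.arange(-5, 6)    # reciprocal lattice vectors (11 plane waves)

def bands(k, U=U, G=G):
    """Sorted eigenvalues of the truncated central equation at wavevector k."""
    H = np.diag((k - G) ** 2)                       # kinetic terms lambda_{k-G}
    # couple plane waves differing by exactly one reciprocal lattice vector
    off = np.abs(G[:, None] - G[None, :]) == 2.0
    H[off] = U
    return np.sort(np.linalg.eigvalsh(H))

e_boundary = bands(1.0)        # at the Brillouin zone boundary
gap = e_boundary[1] - e_boundary[0]
print(f"lowest two levels at zone boundary: {e_boundary[:2]}")
print(f"gap = {gap:.4f}  (degenerate perturbation theory: 2|U_G| = {2 * U})")
```

The numerically computed splitting between the two lowest levels at k = pi/a agrees with the perturbative result 2|U_G| up to small second-order corrections from the higher plane waves in the basis.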
== Results == Introducing this weak perturbation has significant effects on the solution to the Schrödinger equation, most significantly resulting in a band gap between wave vectors in different Brillouin zones. == Justifications == In this model, the assumption is made that the interaction between the conduction electrons and the ion cores can be modeled through the use of a "weak" perturbing potential. This may seem like a severe approximation, for the Coulomb attraction between these two particles of opposite charge can be quite significant at short distances. It can be partially justified, however, by noting two important properties of the quantum mechanical system: The force between the ions and the electrons is greatest at very small distances. However, the conduction electrons are not "allowed" to get this close to the ion cores due to the Pauli exclusion principle: the orbitals closest to the ion core are already occupied by the core electrons. Therefore, the conduction electrons never get close enough to the ion cores to feel their full force. Furthermore, the core electrons shield the ion charge magnitude "seen" by the conduction electrons. The result is an effective nuclear charge experienced by the conduction electrons which is significantly reduced from the actual nuclear charge. == See also == Empty lattice approximation Electronic band structure Tight binding model Bloch's theorem Kronig–Penney model == References == Ashcroft, Neil W.; Mermin, N. David (1976). Solid State Physics. Orlando: Harcourt. ISBN 0-03-083993-9. Kittel, Charles (1996). Introduction to Solid State Physics (7th ed.). New York: Wiley. ISBN 0-471-11181-3. Elliott, Stephen (1998). The Physics and Chemistry of Solids. New York: Wiley. ISBN 0-471-98194-X.
Wikipedia/Nearly_free_electron_model
In the field of optics, transparency (also called pellucidity or diaphaneity) is the physical property of allowing light to pass through the material without appreciable scattering of light. On a macroscopic scale (one in which the dimensions are much larger than the wavelengths of the photons in question), the photons can be said to follow Snell's law. Translucency (also called translucence or translucidity) is the physical property of allowing light to pass through the material (with or without scattering of light). It allows light to pass through but the light does not necessarily follow Snell's law on the macroscopic scale; the photons may be scattered at either of the two interfaces, or internally, where there is a change in the index of refraction. In other words, a translucent material is made up of components with different indices of refraction. A transparent material is made up of components with a uniform index of refraction. Transparent materials appear clear, with the overall appearance of one color, or any combination leading up to a brilliant spectrum of every color. The opposite property of translucency is opacity. Other categories of visual appearance, related to the perception of regular or diffuse reflection and transmission of light, have been organized under the concept of cesia in an order system with three variables, including transparency, translucency and opacity among the involved aspects. When light encounters a material, it can interact with it in several different ways. These interactions depend on the wavelength of the light and the nature of the material. Photons interact with an object by some combination of reflection, absorption and transmission. Some materials, such as plate glass and clean water, transmit much of the light that falls on them and reflect little of it; such materials are called optically transparent. Many liquids and aqueous solutions are highly transparent. The absence of structural defects (voids, cracks, etc.) and the molecular structure of most liquids are chiefly responsible for their excellent optical transmission. Materials that do not transmit light are called opaque. Many such substances have a chemical composition which includes what are referred to as absorption centers. Many substances are selective in their absorption of white light frequencies. They absorb certain portions of the visible spectrum while reflecting others. The frequencies of the spectrum which are not absorbed are either reflected or transmitted for our physical observation. This is what gives rise to color. The attenuation of light of all frequencies and wavelengths is due to the combined mechanisms of absorption and scattering. Transparency can provide almost perfect camouflage for animals able to achieve it. This is easier in dimly-lit or turbid seawater than in good illumination. Many marine animals such as jellyfish are highly transparent. == Etymology == Transparent: late Middle English: from Old French, from medieval Latin transparent- 'visible through', from Latin transparere, from trans- 'through' + parere 'be visible'. Translucent: late 16th century (in the Latin sense): from Latin translucent- 'shining through', from the verb translucere, from trans- 'through' + lucere 'to shine'. Opaque: late Middle English opake, from Latin opacus 'darkened'. The current spelling (rare before the 19th century) has been influenced by the French form.
== Introduction == With regard to the absorption of light, primary material considerations include: At the electronic level, absorption in the ultraviolet and visible (UV-Vis) portions of the spectrum depends on whether the electron orbitals are spaced (or "quantized") such that electrons can absorb a quantum of light (or photon) of a specific frequency. For example, in most glasses, electrons have no available energy levels above them in the energy range associated with visible light, or if they do, the transition to them would violate selection rules, meaning there is no appreciable absorption in pure (undoped) glasses, making them ideal transparent materials for windows in buildings. At the atomic or molecular level, physical absorption in the infrared portion of the spectrum depends on the frequencies of atomic or molecular vibrations or chemical bonds, and on selection rules. Nitrogen and oxygen are not greenhouse gases, because as homonuclear diatomic molecules their vibrations produce no change in dipole moment and hence no infrared absorption. With regard to the scattering of light, the most critical factor is the length scale of any or all of these structural features relative to the wavelength of the light being scattered. Primary material considerations include: Crystalline structure: whether the atoms or molecules exhibit the 'long-range order' evidenced in crystalline solids. Glassy structure: Scattering centers include fluctuations in density or composition. Microstructure: Scattering centers include internal surfaces such as grain boundaries, crystallographic defects, and microscopic pores. Organic materials: Scattering centers include fiber and cell structures and boundaries. Diffuse reflection - Generally, when light strikes the surface of a (non-metallic and non-glassy) solid material, it bounces off in all directions due to multiple reflections by the microscopic irregularities inside the material (e.g., the grain boundaries of a polycrystalline material or the cell or fiber boundaries of an organic material), and by its surface, if it is rough. Diffuse reflection is typically characterized by omni-directional reflection angles. Most of the objects visible to the naked eye are identified via diffuse reflection. Another term commonly used for this type of reflection is "light scattering". Light scattering from the surfaces of objects is our primary mechanism of physical observation. Light scattering in liquids and solids depends on the wavelength of the light being scattered. Limits to spatial scales of visibility (using white light) therefore arise, depending on the frequency of the light wave and the physical dimension (or spatial scale) of the scattering center. Visible light has a wavelength scale on the order of 0.5 μm. Scattering centers (or particles) as small as 1 μm have been observed directly in the light microscope (e.g., Brownian motion). === Transparent ceramics === Optical transparency in polycrystalline materials is limited by the amount of light scattered by their microstructural features. Light scattering depends on the wavelength of the light. Limits to spatial scales of visibility (using white light) therefore arise, depending on the frequency of the light wave and the physical dimension of the scattering center. For example, since visible light has a wavelength scale on the order of a micrometre, scattering centers will have dimensions on a similar spatial scale. Primary scattering centers in polycrystalline materials include microstructural defects such as pores and grain boundaries.
In addition to pores, most of the interfaces in a typical metal or ceramic object are in the form of grain boundaries, which separate tiny regions of crystalline order. When the size of the scattering center (or grain boundary) is reduced below the size of the wavelength of the light being scattered, the scattering no longer occurs to any significant extent. In the formation of polycrystalline materials (metals and ceramics) the size of the crystalline grains is determined largely by the size of the crystalline particles present in the raw material during formation (or pressing) of the object. Moreover, the size of the grain boundaries scales directly with particle size. Thus, a reduction of the original particle size well below the wavelength of visible light (about 1/15 of the light wavelength, or roughly 600 nm / 15 = 40 nm) eliminates much of the light scattering, resulting in a translucent or even transparent material. Computer modeling of light transmission through translucent ceramic alumina has shown that microscopic pores trapped near grain boundaries act as primary scattering centers. The volume fraction of porosity had to be reduced below 1 percent, with the best optical transmission achieved at densities approaching 99.99 percent of theoretical. This goal has been readily accomplished and amply demonstrated in laboratories and research facilities worldwide using the emerging chemical processing methods encompassed by sol-gel chemistry and nanotechnology. Transparent ceramics have created interest in their applications for high energy lasers, transparent armor windows, nose cones for heat-seeking missiles, radiation detectors for non-destructive testing, high energy physics, space exploration, security and medical imaging applications. Large laser elements made from transparent ceramics can be produced at a relatively low cost. These components are free of internal stress or intrinsic birefringence, and allow relatively large doping levels or optimized custom-designed doping profiles. This makes ceramic laser elements particularly important for high-energy lasers. The development of transparent panel products will have other potential advanced applications including high strength, impact-resistant materials that can be used for domestic windows and skylights. Perhaps more important is that walls and other applications will have improved overall strength, especially for high-shear conditions found in high seismic and wind exposures. If the expected improvements in mechanical properties bear out, the traditional limits seen on glazing areas in today's building codes could quickly become outdated if the window area actually contributes to the shear resistance of the wall. Currently available infrared transparent materials typically exhibit a trade-off between optical performance, mechanical strength and price. For example, sapphire (crystalline alumina) is very strong, but it is expensive and lacks full transparency throughout the 3–5 μm mid-infrared range. Yttria is fully transparent from 3–5 μm, but lacks sufficient strength, hardness, and thermal shock resistance for high-performance aerospace applications. A combination of these two materials in the form of yttrium aluminium garnet (YAG) is one of the top performers in the field. == Absorption of light in solids == When light strikes an object, it usually has not just a single frequency (or wavelength) but many. Objects have a tendency to selectively absorb, reflect, or transmit light of certain frequencies.
That is, one object might reflect green light while absorbing all other frequencies of visible light. Another object might selectively transmit blue light while absorbing all other frequencies of visible light. The manner in which visible light interacts with an object is dependent upon the frequency of the light, the nature of the atoms in the object, and often, the nature of the electrons in the atoms of the object. Some materials allow much of the light that falls on them to be transmitted through the material without being reflected. Materials that allow the transmission of light waves through them are called optically transparent. Chemically pure (undoped) window glass and clean river or spring water are prime examples of this. Materials that do not allow the transmission of any light wave frequencies are called opaque. Such substances may have a chemical composition which includes what are referred to as absorption centers. Most materials are selective in their absorption of light frequencies; thus they absorb only certain portions of the visible spectrum. The frequencies of the spectrum which are not absorbed are either reflected back or transmitted for our physical observation. In the visible portion of the spectrum, this is what gives rise to color. Absorption centers are largely responsible for the appearance of specific wavelengths of visible light all around us. Moving from longer (0.7 μm) to shorter (0.4 μm) wavelengths: Red, orange, yellow, green, and blue (ROYGB) can all be identified by our senses in the appearance of color by the selective absorption of specific light wave frequencies (or wavelengths). Mechanisms of selective light wave absorption include: Electronic: Transitions in electron energy levels within the atom (e.g., pigments). These transitions are typically in the ultraviolet (UV) and/or visible portions of the spectrum. Vibrational: Resonance in atomic/molecular vibrational modes. These transitions are typically in the infrared portion of the spectrum. === UV-Vis: electronic transitions === In electronic absorption, the frequency of the incoming light wave is at or near the energy levels of the electrons within the atoms that compose the substance. In this case, the electrons will absorb the energy of the light wave and increase their energy state, often moving outward from the nucleus of the atom into an outer shell or orbital. The atoms that bind together to make the molecules of any particular substance contain a number of electrons (given by the atomic number Z in the periodic table). Recall that all light waves are electromagnetic in origin. Thus they are affected strongly when coming into contact with negatively charged electrons in matter. When photons (individual packets of light energy) come in contact with the valence electrons of an atom, one of several things can and will occur: A molecule absorbs the photon and some of the energy may be lost via luminescence, fluorescence or phosphorescence. A molecule absorbs the photon, which results in reflection or scattering. A molecule cannot absorb the energy of the photon and the photon continues on its path. This results in transmission (provided no other absorption mechanisms are active). Most of the time, it is a combination of the above that happens to the light that hits an object. The states in different materials vary in the range of energy that they can absorb. Most glasses, for example, block ultraviolet (UV) light.
What happens is the electrons in the glass absorb the energy of the photons in the UV range while ignoring the weaker energy of photons in the visible light spectrum. There are also special glass types, such as certain borosilicate glasses or quartz, that are UV-permeable and thus allow a high transmission of ultraviolet light. Thus, when a material is illuminated, individual photons of light can make the valence electrons of an atom transition to a higher electronic energy level. The photon is destroyed in the process and the absorbed radiant energy is transformed to electric potential energy. Several things can happen, then, to the absorbed energy: It may be re-emitted by the electron as radiant energy (in this case, the overall effect is in fact a scattering of light), dissipated to the rest of the material (i.e., transformed into heat), or the electron can be freed from the atom (as in the photoelectric and Compton effects). === Infrared: bond stretching === The primary physical mechanism for storing mechanical energy of motion in condensed matter is through heat, or thermal energy. Thermal energy manifests itself as energy of motion. Thus, heat is motion at the atomic and molecular levels. The primary mode of motion in crystalline substances is vibration. Any given atom will vibrate around some mean or average position within a crystalline structure, surrounded by its nearest neighbors. This vibration in two dimensions is equivalent to the oscillation of a clock's pendulum. It swings back and forth symmetrically about some mean or average (vertical) position. Atomic and molecular vibrational frequencies may average on the order of 10^12 cycles per second (terahertz radiation). When a light wave of a given frequency strikes a material with particles having the same (resonant) vibrational frequencies, those particles will absorb the energy of the light wave and transform it into thermal energy of vibrational motion. Since different atoms and molecules have different natural frequencies of vibration, they will selectively absorb different frequencies (or portions of the spectrum) of infrared light. Reflection and transmission of light waves occur because the frequencies of the light waves do not match the natural resonant frequencies of vibration of the objects. When infrared light of these frequencies strikes an object, the energy is reflected or transmitted. If the object is transparent, then the light waves are passed on to neighboring atoms through the bulk of the material and re-emitted on the opposite side of the object. Such frequencies of light waves are said to be transmitted. === Transparency in insulators === An object may not be transparent either because it reflects the incoming light or because it absorbs the incoming light. Almost all solids reflect a part and absorb a part of the incoming light. When light falls onto a block of metal, it encounters atoms that are tightly packed in a regular lattice and a "sea of electrons" moving randomly between the atoms. In metals, most of these are non-bonding electrons (or free electrons) as opposed to the bonding electrons typically found in covalently bonded or ionically bonded non-metallic (insulating) solids. In a metallic bond, any potential bonding electrons can easily be lost by the atoms in a crystalline structure. The effect of this delocalization is simply to exaggerate the effect of the "sea of electrons".
As a result of these electrons, most of the incoming light in metals is reflected back, which is why we see a shiny metal surface. Most insulators (or dielectric materials) are held together by ionic bonds. Thus, these materials do not have free conduction electrons, and the bonding electrons reflect only a small fraction of the incident wave. The remaining frequencies (or wavelengths) are free to propagate (or be transmitted). This class of materials includes all ceramics and glasses. If a dielectric material does not include light-absorbent additive molecules (pigments, dyes, colorants), it is usually transparent to the spectrum of visible light. Color centers (or dye molecules, or "dopants") in a dielectric absorb a portion of the incoming light. The remaining frequencies (or wavelengths) are free to be reflected or transmitted. This is how colored glass is produced. Most liquids and aqueous solutions are highly transparent. For example, water, cooking oil, rubbing alcohol, air, and natural gas are all clear. The absence of structural defects (voids, cracks, etc.) and the molecular structure of most liquids are chiefly responsible for their excellent optical transmission. The ability of liquids to "heal" internal defects via viscous flow is one of the reasons why some fibrous materials (e.g., paper or fabric) increase their apparent transparency when wetted. The liquid fills up numerous voids making the material more structurally homogeneous. Light scattering in an ideal defect-free crystalline (non-metallic) solid that provides no scattering centers for incoming light will be due primarily to any effects of anharmonicity within the ordered lattice. Light transmission will be highly directional due to the typical anisotropy of crystalline substances, which includes their symmetry group and Bravais lattice. For example, the seven crystalline forms of silica (silicon dioxide, SiO2) are all clear, transparent materials. == Optical waveguides == The study of optically transparent materials focuses on the response of a material to incoming light waves of a range of wavelengths. Guided light wave transmission via frequency selective waveguides involves the emerging field of fiber optics and the ability of certain glassy compositions to act as a transmission medium for a range of frequencies simultaneously (multi-mode optical fiber) with little or no interference between competing wavelengths or frequencies. This resonant mode of energy and data transmission via electromagnetic (light) wave propagation is relatively lossless. An optical fiber is a cylindrical dielectric waveguide that transmits light along its axis by the process of total internal reflection. The fiber consists of a core surrounded by a cladding layer. To confine the optical signal in the core, the refractive index of the core must be greater than that of the cladding. The refractive index is the parameter reflecting the speed of light in a material. (Refractive index is the ratio of the speed of light in vacuum to the speed of light in a given medium. The refractive index of vacuum is therefore 1.) The larger the refractive index, the more slowly light travels in that medium. Typical values for core and cladding of an optical fiber are 1.48 and 1.46, respectively. When light traveling in a dense medium hits a boundary at an angle of incidence greater than the critical angle, the light will be completely reflected. This effect, called total internal reflection, is used in optical fibers to confine light in the core.
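A short numerical sketch of this confinement (Python; it uses the core and cladding indices 1.48 and 1.46 quoted above, while the numerical aperture relation NA = sqrt(n1^2 - n2^2), the acceptance-angle formula, and the attenuation figures are standard fiber-optics relations and assumed values not given in this article). It anticipates the acceptance cone and the dB/km attenuation discussed next:

```python
import math

# Total internal reflection in a step-index fiber, using the core and
# cladding indices quoted in the text (n1 = 1.48, n2 = 1.46).

n1, n2 = 1.48, 1.46

# Critical angle at the core-cladding boundary (measured from the normal):
theta_c = math.degrees(math.asin(n2 / n1))

# Numerical aperture and maximum acceptance half-angle in air (n0 = 1);
# NA = sqrt(n1^2 - n2^2) is a standard relation assumed here.
NA = math.sqrt(n1**2 - n2**2)
theta_accept = math.degrees(math.asin(NA))

print(f"critical angle        = {theta_c:.1f} deg")
print(f"numerical aperture    = {NA:.3f}")
print(f"acceptance half-angle = {theta_accept:.1f} deg")

# Attenuation quoted in dB/km: fraction of power remaining after L km.
alpha, L = 0.2, 50.0   # assumed: 0.2 dB/km silica fiber, 50 km span
frac = 10 ** (-alpha * L / 10)
print(f"power remaining after {L:.0f} km: {frac:.1%}")
```

With this small index contrast, only rays within roughly 14 degrees of the fiber axis are accepted, which is why the acceptance cone described below is narrow.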
Light travels along the fiber bouncing back and forth off the boundary. Because the light must strike the boundary with an angle greater than the critical angle, only light that enters the fiber within a certain range of angles will be propagated. This range of angles is called the acceptance cone of the fiber. The size of this acceptance cone is a function of the refractive index difference between the fiber's core and cladding. Optical waveguides are used as components in integrated optical circuits (e.g., combined with lasers or light-emitting diodes, LEDs) or as the transmission medium in local and long-haul optical communication systems. === Mechanisms of attenuation === Attenuation in fiber optics, also known as transmission loss, is the reduction in intensity of the light beam (or signal) with respect to distance traveled through a transmission medium. It is an important factor limiting the transmission of a signal across large distances. Attenuation coefficients in fiber optics are usually expressed in units of dB/km, a reflection of the very high transparency of modern optical transmission media. The medium is usually a fiber of silica glass that confines the incident light beam to the inside. In optical fibers, the main source of attenuation is scattering from molecular level irregularities, called Rayleigh scattering, due to structural disorder and compositional fluctuations of the glass structure. This same phenomenon is seen as one of the limiting factors in the transparency of infrared missile domes. Further attenuation is caused by light absorbed by residual materials, such as metals or water ions, within the fiber core and inner cladding. Light leakage due to bending, splices, connectors, or other outside forces is another factor resulting in attenuation. At high optical powers, scattering can also be caused by nonlinear optical processes in the fiber. == As camouflage == Many marine animals that float near the surface are highly transparent, giving them almost perfect camouflage. However, transparency is difficult for bodies made of materials that have different refractive indices from seawater. Some marine animals such as jellyfish have gelatinous bodies, composed mainly of water; their thick mesogloea is acellular and highly transparent. This conveniently makes them buoyant, but it also makes them large for their muscle mass, so they cannot swim fast, making this form of camouflage a costly trade-off with mobility. Gelatinous planktonic animals are between 50 and 90 percent transparent. A transparency of 50 percent is enough to make an animal invisible to a predator such as cod at a depth of 650 metres (2,130 ft); better transparency is required for invisibility in shallower water, where the light is brighter and predators can see better. For example, a cod can see prey that are 98 percent transparent in optimal lighting in shallow water. Therefore, sufficient transparency for camouflage is more easily achieved in deeper waters. For the same reason, transparency in air is even harder to achieve, but a partial example is found in the glass frogs of the South American rain forest, which have translucent skin and pale greenish limbs. Several Central American species of clearwing (ithomiine) butterflies and many dragonflies and allied insects also have wings which are mostly transparent, a form of crypsis that provides some protection from predators. == See also == == References == == Further reading == Electrodynamics of continuous media, Landau, L.D., Lifshitz,
E.M. and Pitaevskii, L.P. (Pergamon Press, Oxford, 1984) Laser Light Scattering: Basic Principles and Practice, Chu, B., 2nd Edn. (Academic Press, New York, 1992) Solid State Laser Engineering, W. Koechner (Springer-Verlag, New York, 1999) Introduction to Chemical Physics, J.C. Slater (McGraw-Hill, New York, 1939) Modern Theory of Solids, F. Seitz (McGraw-Hill, New York, 1940) Modern Aspects of the Vitreous State, J.D. MacKenzie, Ed. (Butterworths, London, 1960) == External links == UV stability Properties of Light UV-Vis Absorption Infrared Spectroscopy Brillouin Scattering Transparent Ceramics Bulletproof Glass Transparent ALON Armor Properties of Optical Materials What makes glass transparent? Brillouin scattering in optical fiber Thermal IR Radiation and Missile Guidance
Wikipedia/Transparent_materials
In solid-state physics, the electronic band structure (or simply band structure) of a solid describes the range of energy levels that electrons may have within it, as well as the ranges of energy that they may not have (called band gaps or forbidden bands). Band theory derives these bands and band gaps by examining the allowed quantum mechanical wave functions for an electron in a large, periodic lattice of atoms or molecules. Band theory has been successfully used to explain many physical properties of solids, such as electrical resistivity and optical absorption, and forms the foundation of the understanding of all solid-state devices (transistors, solar cells, etc.). == Why bands and band gaps occur == The formation of electronic bands and band gaps can be illustrated with two complementary models for electrons in solids.: 161  The first one is the nearly free electron model, in which the electrons are assumed to move almost freely within the material. In this model, the electronic states resemble free electron plane waves, and are only slightly perturbed by the crystal lattice. This model explains the origin of the electronic dispersion relation, but the explanation for band gaps is subtle in this model.: 121  The second model starts from the opposite limit, in which the electrons are tightly bound to individual atoms. The electrons of a single, isolated atom occupy atomic orbitals with discrete energy levels. If two atoms come close enough so that their atomic orbitals overlap, the electrons can tunnel between the atoms. This tunneling splits (hybridizes) the atomic orbitals into molecular orbitals with different energies.: 117–122  Similarly, if a large number N of identical atoms come together to form a solid, such as a crystal lattice, the atoms' atomic orbitals overlap with the nearby orbitals. Each discrete energy level splits into N levels, each with a different energy. Since the number of atoms in a macroscopic piece of solid is a very large number (N ≈ 10^22), the number of orbitals that hybridize with each other is very large. For this reason, the adjacent levels are very closely spaced in energy (of the order of 10^−22 eV), and can be considered to form a continuum, an energy band. This formation of bands is mostly a feature of the outermost electrons (valence electrons) in the atom, which are the ones involved in chemical bonding and electrical conductivity. The inner electron orbitals do not overlap to a significant degree, so their bands are very narrow. Band gaps are essentially leftover ranges of energy not covered by any band, a result of the finite widths of the energy bands. The bands have different widths, with the widths depending upon the degree of overlap in the atomic orbitals from which they arise. Two adjacent bands may simply not be wide enough to fully cover the range of energy. For example, the bands associated with core orbitals (such as 1s electrons) are extremely narrow due to the small overlap between adjacent atoms. As a result, there tend to be large band gaps between the core bands. Higher bands involve comparatively larger orbitals with more overlap, becoming progressively wider at higher energies so that there are no band gaps at higher energies. == Basic concepts == === Assumptions and limits of band structure theory === Band theory is only an approximation to the quantum state of a solid, which applies to solids consisting of many identical atoms or molecules bonded together.
These are the assumptions necessary for band theory to be valid: Infinite-size system: For the bands to be continuous, the piece of material must consist of a large number of atoms. Since a macroscopic piece of material contains on the order of 1022 atoms, this is not a serious restriction; band theory even applies to microscopic-sized transistors in integrated circuits. With modifications, the concept of band structure can also be extended to systems which are only "large" along some dimensions, such as two-dimensional electron systems. Homogeneous system: Band structure is an intrinsic property of a material, which assumes that the material is homogeneous. Practically, this means that the chemical makeup of the material must be uniform throughout the piece. Non-interactivity: The band structure describes "single electron states". The existence of these states assumes that the electrons travel in a static potential without dynamically interacting with lattice vibrations, other electrons, photons, etc. The above assumptions are broken in a number of important practical situations, and the use of band structure requires one to keep a close check on the limitations of band theory: Inhomogeneities and interfaces: Near surfaces, junctions, and other inhomogeneities, the bulk band structure is disrupted. Not only are there local small-scale disruptions (e.g., surface states or dopant states inside the band gap), but also local charge imbalances. These charge imbalances have electrostatic effects that extend deeply into semiconductors, insulators, and the vacuum (see doping, band bending). Along the same lines, most electronic effects (capacitance, electrical conductance, electric-field screening) involve the physics of electrons passing through surfaces and/or near interfaces. The full description of these effects, in a band structure picture, requires at least a rudimentary model of electron-electron interactions (see space charge, band bending). Small systems: For systems which are small along every dimension (e.g., a small molecule or a quantum dot), there is no continuous band structure. The crossover between small and large dimensions is the realm of mesoscopic physics. Strongly correlated materials (for example, Mott insulators) simply cannot be understood in terms of single-electron states. The electronic band structures of these materials are poorly defined (or at least, not uniquely defined) and may not provide useful information about their physical state. === Crystalline symmetry and wavevectors === Band structure calculations take advantage of the periodic nature of a crystal lattice, exploiting its symmetry. The single-electron Schrödinger equation is solved for an electron in a lattice-periodic potential, giving Bloch electrons as solutions ψ n k ( r ) = e i k ⋅ r u n k ( r ) , {\displaystyle \psi _{n\mathbf {k} }(\mathbf {r} )=e^{i\mathbf {k} \cdot \mathbf {r} }u_{n\mathbf {k} }(\mathbf {r} ),} where k is called the wavevector. For each value of k, there are multiple solutions to the Schrödinger equation labelled by n, the band index, which simply numbers the energy bands. Each of these energy levels evolves smoothly with changes in k, forming a smooth band of states. For each band we can define a function En(k), which is the dispersion relation for electrons in that band. The wavevector takes on any value inside the Brillouin zone, which is a polyhedron in wavevector (reciprocal lattice) space that is related to the crystal's lattice. 
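Since the Brillouin zone is constructed from the reciprocal lattice, it can help to see that construction explicitly. A minimal sketch (Python with NumPy; the face-centered cubic primitive vectors with unit lattice constant are an assumed example) using the standard relations b1 = 2π (a2 × a3) / (a1 · (a2 × a3)) and cyclic permutations, which guarantee ai · bj = 2π δij:

```python
import numpy as np

# Reciprocal lattice vectors from primitive lattice vectors, via the
# standard relations b_i = 2*pi * (a_j x a_k) / (a_i . (a_j x a_k)).
# Example: FCC primitive vectors with an assumed lattice constant a = 1
# (the reciprocal of an FCC lattice is a BCC lattice).

a = 1.0
a1 = 0.5 * a * np.array([0.0, 1.0, 1.0])
a2 = 0.5 * a * np.array([1.0, 0.0, 1.0])
a3 = 0.5 * a * np.array([1.0, 1.0, 0.0])

vol = np.dot(a1, np.cross(a2, a3))        # primitive cell volume
b1 = 2 * np.pi * np.cross(a2, a3) / vol
b2 = 2 * np.pi * np.cross(a3, a1) / vol
b3 = 2 * np.pi * np.cross(a1, a2) / vol

# Verify the defining property a_i . b_j = 2*pi*delta_ij:
A = np.array([a1, a2, a3])
B = np.array([b1, b2, b3])
print(np.round(A @ B.T / (2 * np.pi), 10))   # should print the identity matrix
```

The first Brillouin zone is then the Wigner-Seitz cell of the lattice generated by b1, b2, b3, which is the polyhedron referred to above.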
Wavevectors outside the Brillouin zone simply correspond to states that are physically identical to those states within the Brillouin zone. Special high symmetry points/lines in the Brillouin zone are assigned labels like Γ, Δ, Λ, Σ. It is difficult to visualize the shape of a band as a function of wavevector, as it would require a plot in four-dimensional space, E vs. kx, ky, kz. In scientific literature it is common to see band structure plots which show the values of En(k) for values of k along straight lines connecting symmetry points, often labelled Δ, Λ, Σ, or [100], [111], and [110], respectively. Another method for visualizing band structure is to plot a constant-energy isosurface in wavevector space, showing all of the states with energy equal to a particular value. The isosurface of states with energy equal to the Fermi level is known as the Fermi surface. Energy band gaps can be classified using the wavevectors of the states surrounding the band gap: Direct band gap: the lowest-energy state above the band gap has the same k as the highest-energy state beneath the band gap. Indirect band gap: the closest states above and beneath the band gap do not have the same k value. ==== Asymmetry: Band structures in non-crystalline solids ==== Although electronic band structures are usually associated with crystalline materials, quasi-crystalline and amorphous solids may also exhibit band gaps. These are somewhat more difficult to study theoretically since they lack the simple symmetry of a crystal, and it is not usually possible to determine a precise dispersion relation. As a result, virtually all of the existing theoretical work on the electronic band structure of solids has focused on crystalline materials. === Density of states === The density of states function g(E) is defined as the number of electronic states per unit volume, per unit energy, for electron energies near E. The density of states function is important for calculations of effects based on band theory. In Fermi's Golden Rule, a calculation for the rate of optical absorption, it provides both the number of excitable electrons and the number of final states for an electron. It appears in calculations of electrical conductivity where it provides the number of mobile states, and in computing electron scattering rates where it provides the number of final states after scattering. For energies inside a band gap, g(E) = 0. === Filling of bands === At thermodynamic equilibrium, the likelihood of a state of energy E being filled with an electron is given by the Fermi–Dirac distribution, a thermodynamic distribution that takes into account the Pauli exclusion principle: f ( E ) = 1 1 + e ( E − μ ) / k B T {\displaystyle f(E)={\frac {1}{1+e^{{(E-\mu )}/{k_{\text{B}}T}}}}} where: kBT is the product of the Boltzmann constant and temperature, and µ is the total chemical potential of electrons, or Fermi level (in semiconductor physics, this quantity is more often denoted EF). The Fermi level of a solid is directly related to the voltage on that solid, as measured with a voltmeter. Conventionally, in band structure plots the Fermi level is taken to be the zero of energy (an arbitrary choice).
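A quick numerical look at the Fermi–Dirac factor defined above (Python; room temperature and the sample energies are arbitrary assumptions), with energies measured from the Fermi level as is conventional in band structure plots:

```python
import math

# Fermi-Dirac occupation f(E) around the Fermi level at room temperature.
# Energies are measured from the Fermi level (mu = 0), as in band plots.

kB_T = 0.02585   # k_B * T at 300 K, in eV

def fermi_dirac(E_eV, mu=0.0, kT=kB_T):
    return 1.0 / (1.0 + math.exp((E_eV - mu) / kT))

for E in (-0.2, -0.05, 0.0, 0.05, 0.2):   # eV relative to the Fermi level
    print(f"E - mu = {E:+.2f} eV -> f = {fermi_dirac(E):.4f}")
# At E = mu the occupation is exactly 1/2; a few kT away it saturates to 1 or 0.
```

The narrow window of a few kT around the Fermi level over which f(E) drops from 1 to 0 is what makes the bands near the Fermi level, discussed below, the physically important ones.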
The density of electrons in the material is simply the integral of the Fermi–Dirac distribution times the density of states: N / V = ∫ − ∞ ∞ g ( E ) f ( E ) d E {\displaystyle N/V=\int _{-\infty }^{\infty }g(E)f(E)\,dE} Although there are an infinite number of bands and thus an infinite number of states, there are only a finite number of electrons to place in these bands. The preferred value for the number of electrons is a consequence of electrostatics: even though the surface of a material can be charged, the internal bulk of a material prefers to be charge neutral. The condition of charge neutrality means that N/V must match the density of protons in the material. For this to occur, the material electrostatically adjusts itself, shifting its band structure up or down in energy (thereby shifting g(E)), until it is at the correct equilibrium with respect to the Fermi level. ==== Names of bands near the Fermi level (conduction band, valence band) ==== A solid has an infinite number of allowed bands, just as an atom has infinitely many energy levels. However, most of the bands simply have too high energy, and are usually disregarded under ordinary circumstances. Conversely, there are very low energy bands associated with the core orbitals (such as 1s electrons). These low-energy core bands are also usually disregarded since they remain filled with electrons at all times, and are therefore inert. Likewise, materials have several band gaps throughout their band structure. The most important bands and band gaps—those relevant for electronics and optoelectronics—are those with energies near the Fermi level. The bands and band gaps near the Fermi level are given special names, depending on the material: In a semiconductor or band insulator, the Fermi level is surrounded by a band gap, referred to as the band gap (to distinguish it from the other band gaps in the band structure). The closest band above the band gap is called the conduction band, and the closest band beneath the band gap is called the valence band. The name "valence band" was coined by analogy to chemistry, since in semiconductors (and insulators) the valence band is built out of the valence orbitals. In a metal or semimetal, the Fermi level is inside of one or more allowed bands. In semimetals the bands are usually referred to as "conduction band" or "valence band" depending on whether the charge transport is more electron-like or hole-like, by analogy to semiconductors. In many metals, however, the bands are neither electron-like nor hole-like, and often just called "valence band" as they are made of valence orbitals. The band gaps in a metal's band structure are not important for low energy physics, since they are too far from the Fermi level. == Theory in crystals == The ansatz is the special case of electron waves in a periodic crystal lattice using Bloch's theorem as treated generally in the dynamical theory of diffraction. Every crystal is a periodic structure which can be characterized by a Bravais lattice, and for each Bravais lattice we can determine the reciprocal lattice, which encapsulates the periodicity in a set of three reciprocal lattice vectors (b1, b2, b3). Now, any periodic potential V(r) which shares the same periodicity as the direct lattice can be expanded out as a Fourier series whose only non-vanishing components are those associated with the reciprocal lattice vectors. 
So the expansion can be written as: V ( r ) = ∑ K V K e i K ⋅ r {\displaystyle V(\mathbf {r} )=\sum _{\mathbf {K} }{V_{\mathbf {K} }e^{i\mathbf {K} \cdot \mathbf {r} }}} where K = m1b1 + m2b2 + m3b3 for any set of integers (m1, m2, m3). From this theory, an attempt can be made to predict the band structure of a particular material; however, most ab initio methods for electronic structure calculations fail to predict the observed band gap. === Nearly free electron approximation === In the nearly free electron approximation, interactions between electrons are completely ignored. This approximation allows use of Bloch's theorem, which states that electrons in a periodic potential have wavefunctions and energies which are periodic in wavevector up to a constant phase shift between neighboring reciprocal lattice vectors. The consequences of periodicity are described mathematically by Bloch's theorem, which states that the eigenstate wavefunctions have the form Ψ n , k ( r ) = e i k ⋅ r u n ( r ) {\displaystyle \Psi _{n,\mathbf {k} }(\mathbf {r} )=e^{i\mathbf {k} \cdot \mathbf {r} }u_{n}(\mathbf {r} )} where the Bloch function u n ( r ) {\displaystyle u_{n}(\mathbf {r} )} is periodic over the crystal lattice, that is, u n ( r ) = u n ( r − R ) . {\displaystyle u_{n}(\mathbf {r} )=u_{n}(\mathbf {r} -\mathbf {R} ).} Here index n refers to the nth energy band, wavevector k is related to the direction of motion of the electron, r is the position in the crystal, and R is the location of an atomic site.: 179  The NFE model works particularly well in materials like metals where distances between neighbouring atoms are small. In such materials the overlap of atomic orbitals and potentials on neighbouring atoms is relatively large. In that case the wave function of the electron can be approximated by a (modified) plane wave. The band structure of a metal like aluminium even gets close to the empty lattice approximation. === Tight binding model === The opposite extreme to the nearly free electron approximation assumes the electrons in the crystal behave much like an assembly of constituent atoms. This tight binding model assumes the solution to the time-independent single electron Schrödinger equation Ψ {\displaystyle \Psi } is well approximated by a linear combination of atomic orbitals ψ n ( r ) {\displaystyle \psi _{n}(\mathbf {r} )} .: 245–248  Ψ ( r ) = ∑ n , R b n , R ψ n ( r − R ) , {\displaystyle \Psi (\mathbf {r} )=\sum _{n,\mathbf {R} }b_{n,\mathbf {R} }\psi _{n}(\mathbf {r} -\mathbf {R} ),} where the coefficients b n , R {\displaystyle b_{n,\mathbf {R} }} are selected to give the best approximate solution of this form. Index n refers to an atomic energy level and R refers to an atomic site. A more accurate approach using this idea employs Wannier functions, defined by:: Eq. 42 p. 267  a n ( r − R ) = V C ( 2 π ) 3 ∫ BZ d k e − i k ⋅ ( R − r ) u n k ; {\displaystyle a_{n}(\mathbf {r} -\mathbf {R} )={\frac {V_{C}}{(2\pi )^{3}}}\int _{\text{BZ}}d\mathbf {k} e^{-i\mathbf {k} \cdot (\mathbf {R} -\mathbf {r} )}u_{n\mathbf {k} };} in which u n k {\displaystyle u_{n\mathbf {k} }} is the periodic part of the Bloch wave function and the integral is over the Brillouin zone. Here index n refers to the n-th energy band in the crystal. The Wannier functions are localized near atomic sites, like atomic orbitals, but being defined in terms of Bloch functions they are accurately related to solutions based upon the crystal potential. Wannier functions on different atomic sites R are orthogonal.
The Wannier functions can be used to form the Schrödinger solution for the n-th energy band as: Ψ n , k ( r ) = ∑ R e − i k ⋅ ( R − r ) a n ( r − R ) . {\displaystyle \Psi _{n,\mathbf {k} }(\mathbf {r} )=\sum _{\mathbf {R} }e^{-i\mathbf {k} \cdot (\mathbf {R} -\mathbf {r} )}a_{n}(\mathbf {r} -\mathbf {R} ).} The TB model works well in materials with limited overlap between atomic orbitals and potentials on neighbouring atoms. Band structures of materials like Si, GaAs, SiO2 and diamond, for instance, are well described by TB-Hamiltonians on the basis of atomic sp3 orbitals. In transition metals a mixed TB-NFE model is used to describe the broad NFE conduction band and the narrow embedded TB d-bands. The radial functions of the atomic orbital part of the Wannier functions are most easily calculated by the use of pseudopotential methods. NFE, TB or combined NFE-TB band structure calculations, sometimes extended with wave function approximations based on pseudopotential methods, are often used as an economical starting point for further calculations. === KKR model === The KKR method, also called "multiple scattering theory" or Green's function method, finds the stationary values of the inverse transition matrix T rather than the Hamiltonian. A variational implementation was suggested by Korringa, Kohn and Rostoker, and is often referred to as the Korringa–Kohn–Rostoker method. The most important features of the KKR or Green's function formulation are (1) it separates the two aspects of the problem: structure (positions of the atoms) from the scattering (chemical identity of the atoms); and (2) Green's functions provide a natural approach to a localized description of electronic properties that can be adapted to alloys and other disordered systems. The simplest form of this approximation centers non-overlapping spheres (referred to as muffin tins) on the atomic positions. Within these regions, the potential experienced by an electron is approximated to be spherically symmetric about the given nucleus. In the remaining interstitial region, the screened potential is approximated as a constant. Continuity of the potential between the atom-centered spheres and interstitial region is enforced. === Density-functional theory === In recent physics literature, a large majority of the electronic structures and band plots are calculated using density-functional theory (DFT), which is not a model but rather a theory, i.e., a microscopic first-principles theory of condensed matter physics that tries to cope with the electron-electron many-body problem via the introduction of an exchange-correlation term in the functional of the electronic density. DFT-calculated bands are in many cases found to be in agreement with experimentally measured bands, for example by angle-resolved photoemission spectroscopy (ARPES). In particular, the band shape is typically well reproduced by DFT. But there are also systematic errors in DFT bands when compared to experimental results. In particular, DFT seems to systematically underestimate the band gap in insulators and semiconductors by about 30–40%. It is commonly believed that DFT is a theory to predict ground state properties of a system only (e.g. the total energy, the atomic structure, etc.), and that excited state properties cannot be determined by DFT. This is a misconception. In principle, DFT can determine any property (ground state or excited state) of a system given a functional that maps the ground state density to that property.
This is the essence of the Hohenberg–Kohn theorem. In practice, however, no known functional exists that maps the ground state density to excitation energies of electrons within a material. Thus, what in the literature is quoted as a DFT band plot is a representation of the DFT Kohn–Sham energies, i.e., the energies of a fictive non-interacting system, the Kohn–Sham system, which has no physical interpretation at all. The Kohn–Sham electronic structure must not be confused with the real, quasiparticle electronic structure of a system, and there is no Koopmans' theorem holding for Kohn–Sham energies, as there is for Hartree–Fock energies, which can be truly considered as an approximation for quasiparticle energies. Hence, in principle, Kohn–Sham based DFT is not a band theory, i.e., not a theory suitable for calculating bands and band-plots. In principle, time-dependent DFT can be used to calculate the true band structure, although in practice this is often difficult. A popular approach is the use of hybrid functionals, which incorporate a portion of Hartree–Fock exact exchange; this produces a substantial improvement in predicted bandgaps of semiconductors, but is less reliable for metals and wide-bandgap materials. === Green's function methods and the ab initio GW approximation === To calculate the bands including electron–electron many-body effects, one can resort to so-called Green's function methods. Indeed, knowledge of the Green's function of a system provides both ground-state (the total energy) and excited-state observables of the system. The poles of the Green's function are the quasiparticle energies, the bands of a solid. The Green's function can be calculated by solving the Dyson equation once the self-energy of the system is known. For real systems like solids, the self-energy is a very complex quantity and usually approximations are needed to solve the problem. One such approximation is the GW approximation, so called from the mathematical form the self-energy takes as the product Σ = GW of the Green's function G and the dynamically screened interaction W. This approach is more pertinent when addressing the calculation of band plots (and also quantities beyond, such as the spectral function) and can also be formulated in a completely ab initio way. The GW approximation seems to provide band gaps of insulators and semiconductors in agreement with experiment, and hence to correct the systematic DFT underestimation. === Dynamical mean-field theory === Although the nearly free electron approximation is able to describe many properties of electron band structures, one consequence of this theory is that it predicts the same number of electrons in each unit cell. If the number of electrons is odd, we would then expect that there is an unpaired electron in each unit cell, and thus that the valence band is not fully occupied, making the material a conductor. However, materials such as CoO that have an odd number of electrons per unit cell are insulators, in direct conflict with this result. This kind of material is known as a Mott insulator, and requires inclusion of detailed electron–electron interactions (treated only as an averaged effect on the crystal potential in band theory) to explain the discrepancy. The Hubbard model is an approximate theory that can include these interactions. It can be treated non-perturbatively within the so-called dynamical mean-field theory, which attempts to bridge the gap between the nearly free electron approximation and the atomic limit.
Formally, however, the states are not non-interacting in this case, and the concept of a band structure is then no longer adequate. === Others === Calculating band structures is an important topic in theoretical solid state physics. In addition to the models mentioned above, other models include the following: Empty lattice approximation: the "band structure" of a region of free space that has been divided into a lattice. k·p perturbation theory is a technique that allows a band structure to be approximately described in terms of just a few parameters. The technique is commonly used for semiconductors, and the parameters in the model are often determined by experiment. The Kronig–Penney model, a one-dimensional rectangular well model useful for illustration of band formation. While simple, it predicts many important phenomena, but is not quantitative. Hubbard model The band structure has been generalised to wavevectors that are complex numbers, resulting in what is called a complex band structure, which is of interest at surfaces and interfaces. Each model describes some types of solids very well, and others poorly. The nearly free electron model works well for metals, but poorly for non-metals. The tight binding model is extremely accurate for ionic insulators, such as metal halide salts (e.g. NaCl). == Band diagrams == To understand how band structure changes relative to the Fermi level in real space, a band structure plot is often first simplified in the form of a band diagram. In a band diagram the vertical axis is energy while the horizontal axis represents real space. Horizontal lines represent energy levels, while blocks represent energy bands. When the horizontal lines in these diagrams are slanted, the energy of the level or band changes with distance. Diagrammatically, this depicts the presence of an electric field within the crystal system. Band diagrams are useful in relating the general band structure properties of different materials to one another when placed in contact with each other. == See also == Band-gap engineering – the process of altering a material's band structure Felix Bloch – pioneer in the theory of band structure Alan Herries Wilson – pioneer in the theory of band structure == References == == Further reading == Ashcroft, Neil and N. David Mermin, Solid State Physics, ISBN 0-03-083993-9 Harrison, Walter A., Elementary Electronic Structure, ISBN 981-238-708-0 Harrison, Walter A., Pseudopotentials in the Theory of Metals, W. A. Benjamin (New York), 1966 Marder, Michael P., Condensed Matter Physics, ISBN 0-471-17779-2 Martin, Richard, Electronic Structure: Basic Theory and Practical Methods, ISBN 0-521-78285-6 Millman, Jacob; Arvin Gabriel, Microelectronics, ISBN 0-07-463736-3, Tata McGraw-Hill Edition. Nemoshkalenko, V. V., and N. V. Antonov, Computational Methods in Solid State Physics, ISBN 90-5699-094-2 Omar, M. Ali, Elementary Solid State Physics: Principles and Applications, ISBN 0-201-60733-6 Singh, Jasprit, Electronic and Optoelectronic Properties of Semiconductor Structures, Chapters 2 and 3, ISBN 0-521-82379-X Vasileska, Dragica, Tutorial on Bandstructure Methods (2008)
Wikipedia/Band_theory
The Thomas–Fermi (TF) model, named after Llewellyn Thomas and Enrico Fermi, is a quantum mechanical theory for the electronic structure of many-body systems, developed semiclassically shortly after the introduction of the Schrödinger equation. It stands separate from wave function theory as being formulated in terms of the electronic density alone and as such is viewed as a precursor to modern density functional theory. The Thomas–Fermi model is correct only in the limit of an infinite nuclear charge. Using the approximation for realistic systems yields poor quantitative predictions, even failing to reproduce some general features of the density such as shell structure in atoms and Friedel oscillations in solids. It has, however, found modern applications in many fields through its ability to extract qualitative trends analytically and the ease with which the model can be solved. The kinetic energy expression of Thomas–Fermi theory is also used as a component in more sophisticated density approximations to the kinetic energy within modern orbital-free density functional theory. Working independently, Thomas and Fermi used this statistical model in 1927 to approximate the distribution of electrons in an atom. Although electrons are distributed nonuniformly in an atom, an approximation was made that the electrons are distributed uniformly in each small volume element ΔV (i.e. locally) but the electron density n ( r ) {\displaystyle n(\mathbf {r} )} can still vary from one small volume element to the next. == Kinetic energy == For a small volume element ΔV, and for the atom in its ground state, we can fill out a spherical momentum-space volume VF up to the Fermi momentum pF, and thus V F = 4 3 π p F 3 ( r ) , {\displaystyle V_{\text{F}}={\frac {4}{3}}\pi p_{\text{F}}^{3}(\mathbf {r} ),} where r {\displaystyle \mathbf {r} } is the position vector of a point in ΔV. The corresponding phase-space volume is Δ V ph = V F Δ V = 4 3 π p F 3 ( r ) Δ V . {\displaystyle \Delta V_{\text{ph}}=V_{\text{F}}\,\Delta V={\frac {4}{3}}\pi p_{\text{F}}^{3}(\mathbf {r} )\,\Delta V.} The electrons in ΔVph are distributed uniformly, with two electrons per h3 of this phase-space volume, where h is the Planck constant. Then the number of electrons in ΔVph is Δ N ph = 2 h 3 Δ V ph = 8 π 3 h 3 p F 3 ( r ) Δ V . {\displaystyle \Delta N_{\text{ph}}={\frac {2}{h^{3}}}\,\Delta V_{\text{ph}}={\frac {8\pi }{3h^{3}}}p_{\text{F}}^{3}(\mathbf {r} )\,\Delta V.} The number of electrons in ΔV is Δ N = n ( r ) Δ V , {\displaystyle \Delta N=n(\mathbf {r} )\,\Delta V,} where n ( r ) {\displaystyle n(\mathbf {r} )} is the electron number density. Equating the number of electrons in ΔV to that in ΔVph gives n ( r ) = 8 π 3 h 3 p F 3 ( r ) . {\displaystyle n(\mathbf {r} )={\frac {8\pi }{3h^{3}}}p_{\text{F}}^{3}(\mathbf {r} ).} The fraction of electrons at r {\displaystyle \mathbf {r} } that have momentum between p and p + dp is F r ( p ) d p = { 4 π p 2 d p 4 3 π p F 3 ( r ) if p ≤ p F ( r ) , 0 otherwise . 
{\displaystyle F_{\mathbf {r} }(p)\,dp={\begin{cases}{\dfrac {4\pi p^{2}\,dp}{{\frac {4}{3}}\pi p_{\text{F}}^{3}(\mathbf {r} )}}&{\text{if}}\ p\leq p_{\text{F}}(\mathbf {r} ),\\[1ex]0&{\text{otherwise}}.\end{cases}}} Using the classical expression for the kinetic energy of an electron with mass me, the kinetic energy per unit volume at r {\displaystyle \mathbf {r} } for the electrons of the atom is t ( r ) = ∫ p 2 2 m e n ( r ) F r ( p ) d p = n ( r ) ∫ 0 p F ( r ) p 2 2 m e 4 π p 2 4 3 π p F 3 ( r ) d p = C kin [ n ( r ) ] 5 / 3 , {\displaystyle {\begin{aligned}t(\mathbf {r} )&=\int {\frac {p^{2}}{2m_{\text{e}}}}n(\mathbf {r} )F_{\mathbf {r} }(p)\,dp\\&=n(\mathbf {r} )\int _{0}^{p_{\text{F}}(\mathbf {r} )}{\frac {p^{2}}{2m_{\text{e}}}}{\frac {4\pi p^{2}}{{\frac {4}{3}}\pi p_{\text{F}}^{3}(\mathbf {r} )}}\,dp\\&=C_{\text{kin}}[n(\mathbf {r} )]^{5/3},\end{aligned}}} where a previous expression relating n ( r ) {\displaystyle n(\mathbf {r} )} to p F ( r ) {\displaystyle p_{\text{F}}(\mathbf {r} )} has been used, and C kin = 3 h 2 40 m e ( 3 π ) 2 3 . {\displaystyle C_{\text{kin}}={\frac {3h^{2}}{40m_{\text{e}}}}\left({\frac {3}{\pi }}\right)^{\frac {2}{3}}.} Integrating the kinetic energy per unit volume t ( r ) {\displaystyle t(\mathbf {r} )} over all space results in the total kinetic energy of the electrons: T = C kin ∫ [ n ( r ) ] 5 / 3 d 3 r . {\displaystyle T=C_{\text{kin}}\int [n(\mathbf {r} )]^{5/3}\,d^{3}r.} This result shows that, according to the Thomas–Fermi model, the total kinetic energy of the electrons can be expressed in terms of only the spatially varying electron density n ( r ) . {\displaystyle n(\mathbf {r} ).} As such, Thomas and Fermi were able to calculate the energy of an atom using this expression for the kinetic energy combined with the classical expressions for the nuclear–electron and electron–electron Coulomb interactions (which can both also be represented in terms of the electron density). == Potential energies == The potential energy of an atom's electrons, due to the electric attraction of the positively charged nucleus, is U eN = ∫ n ( r ) V N ( r ) d 3 r , {\displaystyle U_{\text{eN}}=\int n(\mathbf {r} )V_{\text{N}}(\mathbf {r} )\,d^{3}r,} where V N ( r ) {\displaystyle V_{\text{N}}(\mathbf {r} )} is the potential energy of an electron at r {\displaystyle \mathbf {r} } that is due to the electric field of the nucleus. For the case of a nucleus centered at r = 0 {\displaystyle \mathbf {r} =0} with charge Ze, where Z is a positive integer and e is the elementary charge, V N ( r ) = − Z e 2 r . {\displaystyle V_{\text{N}}(\mathbf {r} )={\frac {-Ze^{2}}{r}}.} The potential energy of the electrons due to their mutual electric repulsion is U ee = 1 2 e 2 ∫ n ( r ) n ( r ′ ) | r − r ′ | d 3 r d 3 r ′ . {\displaystyle U_{\text{ee}}={\frac {1}{2}}e^{2}\int {\frac {n(\mathbf {r} )n(\mathbf {r} ')}{|\mathbf {r} -\mathbf {r} '|}}\,d^{3}r\,d^{3}r'.} == Total energy == The total energy of the electrons is the sum of their kinetic and potential energies: E = T + U eN + U ee = C kin ∫ [ n ( r ) ] 5 / 3 d 3 r + ∫ n ( r ) V N ( r ) d 3 r + 1 2 e 2 ∫ n ( r ) n ( r ′ ) | r − r ′ | d 3 r d 3 r ′ . 
{\displaystyle {\begin{aligned}E&=T+U_{\text{eN}}+U_{\text{ee}}\\&=C_{\text{kin}}\int [n(\mathbf {r} )]^{5/3}\,d^{3}r+\int n(\mathbf {r} )V_{\text{N}}(\mathbf {r} )\,d^{3}r+{\frac {1}{2}}e^{2}\int {\frac {n(\mathbf {r} )n(\mathbf {r} ')}{|\mathbf {r} -\mathbf {r} '|}}\,d^{3}r\,d^{3}r'.\end{aligned}}} == Thomas–Fermi equation == In order to minimize the energy E while keeping the number of electrons constant, we add a Lagrange multiplier term of the form − μ ( − N + ∫ n ( r ) d 3 r ) {\displaystyle -\mu \left(-N+\int n(\mathbf {r} )\,d^{3}r\right)} to E. Letting the variation with respect to n vanish then gives the equation μ = 5 3 C kin n ( r ) 2 / 3 + V N ( r ) + e 2 ∫ n ( r ′ ) | r − r ′ | d 3 r ′ , {\displaystyle \mu ={\frac {5}{3}}C_{\text{kin}}n(\mathbf {r} )^{2/3}+V_{\text{N}}(\mathbf {r} )+e^{2}\int {\frac {n(\mathbf {r} ')}{|\mathbf {r} -\mathbf {r} '|}}\,d^{3}r',} which must hold wherever n ( r ) {\displaystyle n(\mathbf {r} )} is nonzero. If we define the total potential V ( r ) {\displaystyle V(\mathbf {r} )} by V ( r ) = V N ( r ) + e 2 ∫ n ( r ′ ) | r − r ′ | d 3 r ′ , {\displaystyle V(\mathbf {r} )=V_{\text{N}}(\mathbf {r} )+e^{2}\int {\frac {n(\mathbf {r} ')}{|\mathbf {r} -\mathbf {r} '|}}\,d^{3}r',} then n ( r ) = { ( 5 3 C kin ) − 3 / 2 ( μ − V ( r ) ) 3 / 2 if μ ≥ V ( r ) , 0 otherwise. {\displaystyle n(\mathbf {r} )={\begin{cases}\left({\frac {5}{3}}C_{\text{kin}}\right)^{-3/2}(\mu -V(\mathbf {r} ))^{3/2}&{\text{if}}\ \mu \geq V(\mathbf {r} ),\\[1ex]0&{\text{otherwise.}}\end{cases}}} If the nucleus is assumed to be a point with charge Ze at the origin, then n ( r ) {\displaystyle n(\mathbf {r} )} and V ( r ) {\displaystyle V(\mathbf {r} )} will both be functions only of the radius r = | r | , {\displaystyle r=|\mathbf {r} |,} and we can define φ by μ − V ( r ) = Z e 2 r ϕ ( r b ) , b = 1 4 ( 9 π 2 2 Z ) 1 / 3 a 0 , {\displaystyle \mu -V(r)={\frac {Ze^{2}}{r}}\phi \left({\frac {r}{b}}\right),\qquad b={\frac {1}{4}}\left({\frac {9\pi ^{2}}{2Z}}\right)^{1/3}a_{0},} where a0 is the Bohr radius. Writing x = r/b for the scaled radius and using the above equations together with Gauss's law, φ(x) can be seen to satisfy the Thomas–Fermi equation d 2 ϕ d x 2 = ϕ 3 / 2 x , ϕ ( 0 ) = 1. {\displaystyle {\frac {d^{2}\phi }{dx^{2}}}={\frac {\phi ^{3/2}}{\sqrt {x}}},\qquad \phi (0)=1.} For chemical potential μ = 0, this is a model of a neutral atom, with an infinite charge cloud where n ( r ) {\displaystyle n(\mathbf {r} )} is everywhere nonzero and the overall charge is zero, while for μ < 0, it is a model of a positive ion, with a finite charge cloud and positive overall charge. The edge of the cloud is where φ(x) = 0. For μ > 0, it can be interpreted as a model of a compressed atom, so that negative charge is squeezed into a smaller space. In this case the atom ends at the radius where dφ/dx = φ/x. == Inaccuracies and improvements == Although this was an important first step, the Thomas–Fermi equation's accuracy is limited because the resulting expression for the kinetic energy is only approximate, and because the method does not attempt to represent the exchange energy of an atom as a consequence of the Pauli exclusion principle. A term for the exchange energy was added by Dirac in 1930, which significantly improved its accuracy. However, the Thomas–Fermi–Dirac theory remained rather inaccurate for most applications. The largest source of error was in the representation of the kinetic energy, followed by the errors in the exchange energy and those due to the complete neglect of electron correlation. 
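Whatever its accuracy as a model of real atoms, the universal Thomas–Fermi equation above is straightforward to integrate numerically. The following is a minimal sketch, assuming NumPy and SciPy are available; the initial slope used is the standard tabulated value that enforces the neutral-atom boundary condition φ(x) → 0 as x → ∞:

```python
import numpy as np
from scipy.integrate import solve_ivp

def tf_rhs(x, y):
    # Thomas–Fermi equation as a first-order system: y = [phi, phi'].
    phi, dphi = y
    # Guard against tiny negative overshoot before taking the 3/2 power.
    return [dphi, max(phi, 0.0) ** 1.5 / np.sqrt(x)]

slope = -1.5880710226   # standard phi'(0) for the neutral atom (mu = 0)
x0 = 1e-8               # start just off the integrable singularity at x = 0

sol = solve_ivp(tf_rhs, (x0, 30.0), [1.0 + slope * x0, slope],
                rtol=1e-10, atol=1e-12, dense_output=True)

for x in (1.0, 5.0, 10.0):
    print(f"phi({x:4.1f}) = {sol.sol(x)[0]:.5f}")
```

The computed screening function reproduces the classic tabulated values (e.g. φ(1) ≈ 0.42) and decays only algebraically, reflecting the infinite charge cloud of the neutral Thomas–Fermi atom noted above.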
In 1962, Edward Teller showed that Thomas–Fermi theory cannot describe molecular bonding – the energy of any molecule calculated with TF theory is higher than the sum of the energies of the constituent atoms. More generally, the total energy of a molecule decreases when the bond lengths are uniformly increased. This can be overcome by improving the expression for the kinetic energy. One notable historical improvement to the Thomas–Fermi kinetic energy is the Weizsäcker (1935) correction, T W = 1 8 ℏ 2 m ∫ | ∇ n ( r ) | 2 n ( r ) d 3 r , {\displaystyle T_{\text{W}}={\frac {1}{8}}{\frac {\hbar ^{2}}{m}}\int {\frac {|\nabla n(\mathbf {r} )|^{2}}{n(\mathbf {r} )}}\,d^{3}r,} which is the other notable building block of orbital-free density functional theory. The problem with the inaccurate modelling of the kinetic energy in the Thomas–Fermi model, as well as other orbital-free density functionals, is circumvented in Kohn–Sham density functional theory with a fictitious system of non-interacting electrons whose kinetic energy expression is known. == See also == Thomas–Fermi screening Gas in a box § Thomas–Fermi approximation for the degeneracy of states == Further reading == R. G. Parr and W. Yang (1989). Density-Functional Theory of Atoms and Molecules. New York: Oxford University Press. ISBN 978-0-19-509276-9. N. H. March (1992). Electron Density Theory of Atoms and Molecules. Academic Press. ISBN 978-0-12-470525-8. N. H. March (1983). "1. Origins – The Thomas–Fermi Theory". In S. Lundqvist; N. H. March (eds.). Theory of The Inhomogeneous Electron Gas. Plenum Press. ISBN 978-0-306-41207-3. Feynman, R. P.; Metropolis, N.; Teller, E. (1949-05-15). "Equations of State of Elements Based on the Generalized Fermi-Thomas Theory". Physical Review. 75 (10): 1561–1573. doi:10.1103/PhysRev.75.1561. ISSN 0031-899X. == References ==
Wikipedia/Thomas–Fermi_model
Introduction to Solid State Physics, known colloquially as Kittel, is a classic condensed matter physics textbook written by American physicist Charles Kittel in 1953. The book has been highly influential and has seen widespread adoption; Marvin L. Cohen remarked in 2019 that Kittel's content choices in the original edition played a large role in defining the field of solid-state physics. It was also the first proper textbook covering this new field of physics. The book is published by John Wiley and Sons and, as of 2018, it is in its ninth edition and has been reprinted many times as well as translated into over a dozen languages, including Chinese, French, German, Hungarian, Indonesian, Italian, Japanese, Korean, Malay, Romanian, Russian, Spanish, and Turkish. In some later editions, the eighteenth chapter, titled Nanostructures, was written by Paul McEuen. Along with its competitor Ashcroft and Mermin, the book is considered a standard textbook in condensed matter physics. == Background == Kittel received his PhD from the University of Wisconsin–Madison in 1941 under his advisor Gregory Breit. Before being promoted to professor of physics at UC Berkeley in 1951, Kittel held several other positions. He worked for the Naval Ordnance Laboratory from 1940 to 1942, was a research physicist in the US Navy until 1945, worked at the Research Laboratory of Electronics at MIT from 1945 to 1947 and at Bell Labs from 1947 to 1951, and was a visiting associate professor at UC Berkeley from 1950 until his promotion. Henry Ehrenreich has noted that before the first edition of Introduction to Solid State Physics came out in 1953, there were no other textbooks on the subject; rather, the young field's study material was spread across several prominent articles and treatises that, in Ehrenreich's view, expounded rather than explained the topics and were not suitable as textbooks. == Content == The book covers a wide range of topics in solid state physics, including Bloch's theorem, crystals, magnetism, phonons, Fermi gases, magnetic resonance, and surface physics. The chapters are broken into sections that highlight the topics. == Reception == Marvin L. Cohen and Morrel H. Cohen, in an obituary for Kittel in 2019, remarked that the original book "was not only the dominant text for teaching in the field, it was on the bookshelf of researchers in academia and industry throughout the world", though they did not provide any time frame on when it may have been surpassed as the dominant text. They also noted that Kittel's content choices played a large role in defining the field of solid-state physics. The book is a classic textbook in the subject and has seen use as a comparative benchmark in reviews of other books in condensed matter physics. In a 1969 review of another book, Robert G. Chambers observed that there were not many textbooks covering these topics, as "since 1953, Kittel's classic Introduction to Solid State Physics has dominated the field so effectively that few competitors have appeared", adding that the third edition continued that legacy. The reviewer also remarked that the book was too long for some uses and that less thorough works would be welcome. Several notable reviews of the first edition were published in 1954, including reviews by Arthur James Cochran Wilson, Leslie Fleetwood Bates, and Kenneth Standley, among others. Gwyn Owain Jones reviewed the book in 1955. 
The second edition of the book was reviewed by Robert W. Hellwarth in 1957, and by Leslie Fleetwood Bates, among others. The third edition of the book also received reviews, including one by Donald F. Holcomb. A German translation of the book has also received several reviews. == Publication history == === Original editions === Kittel, Charles (1953). Introduction to solid state physics (1st ed.). New York: Wiley. p. 396. OCLC 859669173. Kittel, Charles (1956). Introduction to solid state physics (2nd ed.). New York: Wiley. p. 617. OCLC 746139663. Kittel, Charles (1966). Introduction to solid state physics (3rd ed.). New York; London: Wiley. p. 613. OCLC 1159631475. Kittel, Charles (1971). Introduction to solid state physics (4th ed.). New York: John Wiley & Sons. p. 766. ISBN 978-0-471-49021-0. OCLC 802643946. Kittel, Charles (1976). Introduction to solid state physics (5th ed.). New York: Wiley. p. 673. ISBN 978-0-471-49024-1. OCLC 908887143. Kittel, Charles (1986). Introduction to solid state physics (6th ed.). New York: Wiley. p. 646. ISBN 978-0-471-87474-4. OCLC 797201261. Kittel, Charles (1996). Introduction to solid state physics (7th ed.). New York: Wiley. p. 673. ISBN 978-0-471-11181-8. OCLC 263625446. Kittel, Charles (2005). Introduction to solid state physics (8th ed.). New York: Wiley. p. 680. ISBN 978-0-471-68057-4. OCLC 787838554. Kittel, Charles; McEuen, Paul (2018). Introduction to solid state physics (Global ed., 9th ed.). New Jersey: Wiley. p. 692. ISBN 978-1-119-45416-8. OCLC 1097548279. === Reprints === Kittel, Charles (1954) [1953]. Introduction to solid state physics (1st ed.). New York: John Wiley and Sons. p. 396. OCLC 1123251585. Kittel, Charles (1967). Introduction to solid state physics (3rd ed.). New York: Wiley. p. 648. OCLC 230149869. Kittel, Charles (1987). Introduction to solid state physics (6th ed.). New Delhi: Wiley Eastern Ltd. p. 599. OCLC 772488914. Kittel, Charles (2011). Introduction to solid state physics (8th ed.). Hoboken, NJ: Wiley. p. 680. ISBN 978-0-471-41526-8. OCLC 730010889. Kittel, Charles (2013). Introduction to solid state physics (8th ed.). New Jersey: Wiley. p. 680. ISBN 978-0-471-41526-8. OCLC 820453856. Kittel, Charles; McEuen, Paul (2015). Introduction to solid state physics (8th ed.). New Delhi: Wiley. p. 680. ISBN 978-81-265-3518-7. OCLC 987438137. === Foreign translations === == See also == List of textbooks on classical mechanics and quantum mechanics List of textbooks in electromagnetism == References == == External links == "Introduction to Solid State Physics, 8th Edition | Wiley". Wiley.com. Retrieved 2 November 2020. "Remembering Charles Kittel | UC Berkeley Physics". physics.berkeley.edu. Archived from the original on 17 May 2019. Retrieved 2 November 2020.
Wikipedia/Introduction_to_Solid_State_Physics
The Drude model of electrical conduction was proposed in 1900 by Paul Drude to explain the transport properties of electrons in materials (especially metals). Basically, Ohm's law was well established and stated that the current J and voltage V driving the current are related to the resistance R of the material. The inverse of the resistance is known as the conductance. When we consider a metal of unit length and unit cross-sectional area, the conductance is known as the conductivity, which is the inverse of resistivity. The Drude model attempts to explain the resistivity of a conductor in terms of the scattering of electrons (the carriers of electricity) by the relatively immobile ions in the metal that act like obstructions to the flow of electrons. The model, which is an application of kinetic theory, assumes that the microscopic behaviour of electrons in a solid may be treated classically and that the electron gas behaves much like a pinball machine, with a sea of constantly jittering electrons bouncing and re-bouncing off heavier, relatively immobile positive ions. In modern terms this is reflected in the valence electron model, where the sea of electrons is composed of the valence electrons only, and not the full set of electrons available in the solid, and the scattering centers are the inner shells of electrons tightly bound to the nucleus. The scattering centers had a positive charge equivalent to the valence number of the atoms. This similarity, added to some computation errors in the Drude paper, ended up providing a reasonable qualitative theory of solids capable of making good predictions in certain cases and giving completely wrong results in others. Whenever people tried to give more substance and detail to the nature of the scattering centers, the mechanics of scattering, and the meaning of the scattering length, all these attempts ended in failure. The scattering lengths computed in the Drude model are of the order of 10 to 100 interatomic distances, and these could not be given proper microscopic explanations. Drude scattering is not electron–electron scattering, which is only a secondary phenomenon in the modern theory, nor nuclear scattering, since electrons can at most be absorbed by nuclei. The model remains somewhat silent on the microscopic mechanism; in modern terms this is what is now called the "primary scattering mechanism", where the underlying phenomenon can differ case by case. The model gives better predictions for metals, especially in regards to conductivity, and is sometimes called the Drude theory of metals. This is because metals are better approximated by the free electron model; i.e., metals do not have complex band structures, electrons behave essentially as free particles, and, in the case of metals, the effective number of de-localized electrons is essentially the same as the valence number. The two most significant results of the Drude model are an electronic equation of motion, d d t ⟨ p ( t ) ⟩ = q ( E + ⟨ p ( t ) ⟩ m × B ) − ⟨ p ( t ) ⟩ τ , {\displaystyle {\frac {d}{dt}}\langle \mathbf {p} (t)\rangle =q\left(\mathbf {E} +{\frac {\langle \mathbf {p} (t)\rangle }{m}}\times \mathbf {B} \right)-{\frac {\langle \mathbf {p} (t)\rangle }{\tau }},} and a linear relationship between current density J and electric field E, J = n q 2 τ m E . 
{\displaystyle \mathbf {J} ={\frac {nq^{2}\tau }{m}}\,\mathbf {E} .} Here t is the time, ⟨p⟩ is the average momentum per electron and q, n, m, and τ are respectively the electron charge, number density, mass, and mean free time between ionic collisions. The latter expression is particularly important because it explains in semi-quantitative terms why Ohm's law, one of the most ubiquitous relationships in all of electromagnetism, should hold. Steps towards a more modern theory of solids were given by the following: The Einstein solid model and the Debye model, suggesting that the quantum behaviour of exchanging energy in integral units or quanta was an essential component in the full theory, especially with regard to specific heats, where the Drude theory failed. In some cases, namely in the Hall effect, the theory made correct predictions if, instead of a negative charge for the electrons, a positive one was used. This is now interpreted as holes (i.e. quasi-particles that behave as positive charge carriers), but at the time of Drude it was rather obscure why this was the case. Drude used Maxwell–Boltzmann statistics for the gas of electrons in deriving the model, these being the only statistics available at that time. By replacing the statistics with the correct Fermi–Dirac statistics, Sommerfeld significantly improved the predictions of the model, although still having a semi-classical theory that could not predict all results of the modern quantum theory of solids. == History == German physicist Paul Drude proposed his model in 1900, when it was not clear whether atoms existed, and it was not clear what atoms were on a microscopic scale. In his original paper, Drude made an error, estimating the Lorenz number of the Wiedemann–Franz law to be twice what it classically should have been, thus making it seem in agreement with the experimental value of the specific heat. The electronic specific heat is about 100 times smaller than the classical prediction, but this factor cancels out with the mean electronic speed, which is about 100 times bigger than Drude's calculation. The first direct proof of atoms through the computation of the Avogadro number from a microscopic model is due to Albert Einstein; the first modern model of atom structure dates to 1904, and the Rutherford model to 1909. Drude starts from the discovery of electrons in 1897 by J.J. Thomson and assumes as a simplistic model of solids that the bulk of the solid is composed of positively charged scattering centers, and a sea of electrons submerges those scattering centers to make the total solid neutral from a charge perspective. The model was extended in 1905 by Hendrik Antoon Lorentz (and hence is also known as the Drude–Lorentz model) to give the relation between the thermal conductivity and the electric conductivity of metals (see Lorenz number), and is a classical model. Later it was supplemented with the results of quantum theory in 1933 by Arnold Sommerfeld and Hans Bethe, leading to the Drude–Sommerfeld model. Nowadays the Drude and Sommerfeld models are still significant for understanding the qualitative behaviour of solids and for getting a first qualitative understanding of a specific experimental setup. This is a generic method in solid state physics, where it is typical to incrementally increase the complexity of the models to give more and more accurate predictions. 
It is less common to use a full-blown quantum field theory from first principles, given the complexities due to the huge numbers of particles and interactions and the little added value of the extra mathematics involved (considering the incremental gain in numerical precision of the predictions). == Assumptions == Drude used the kinetic theory of gases applied to the gas of electrons moving on a fixed background of "ions"; this is in contrast with the usual way of applying the theory of gases as a neutral diluted gas with no background. The number density of the electron gas was assumed to be n = N A Z ρ m A , {\displaystyle n={\frac {N_{\text{A}}Z\rho _{\text{m}}}{A}},} where Z is the effective number of de-localized electrons per ion, for which Drude used the valence number, A is the atomic mass per mole, ρ m {\displaystyle \rho _{\text{m}}} is the mass density (mass per unit volume) of the "ions", and NA is the Avogadro constant. Considering the average volume available per electron as a sphere: V N = 1 n = 4 3 π r s 3 . {\displaystyle {\frac {V}{N}}={\frac {1}{n}}={\frac {4}{3}}\pi r_{\rm {s}}^{3}.} The quantity r s {\displaystyle r_{\text{s}}} is a parameter that describes the electron density and is often of the order of 2 or 3 times the Bohr radius; for alkali metals it ranges from 3 to 6, and for some metal compounds it can go up to 10. The densities are of the order of 1000 times that of a typical classical gas. The core assumptions made in the Drude model are the following: Drude applied the kinetic theory of a dilute gas, despite the high densities, therefore ignoring electron–electron and electron–ion interactions aside from collisions. The Drude model considers the metal to be formed of a collection of positively charged ions from which a number of "free electrons" were detached. These may be thought to be the valence electrons of the atoms that have become delocalized due to the electric field of the other atoms. The Drude model neglects long-range interaction between the electron and the ions or between the electrons; this is called the independent electron approximation. The electrons move in straight lines between one collision and another; this is called the free electron approximation. The only interaction of a free electron with its environment was treated as being collisions with the impenetrable ion cores. The average time between subsequent collisions of such an electron is τ, with a memoryless Poisson distribution. The nature of the collision partner of the electron does not matter for the calculations and conclusions of the Drude model. After a collision event, the distribution of the velocity and direction of an electron is determined by only the local temperature and is independent of the velocity of the electron before the collision event. The electron is considered to be immediately at equilibrium with the local temperature after a collision. Removing or improving upon each of these assumptions gives more refined models that can more accurately describe different solids: Improving the hypothesis of the Maxwell–Boltzmann statistics with the Fermi–Dirac statistics leads to the Drude–Sommerfeld model. Improving the hypothesis of the Maxwell–Boltzmann statistics with the Bose–Einstein statistics leads to considerations about the specific heat of integer-spin atoms and to the Bose–Einstein condensate. A valence band electron in a semiconductor is still essentially a free electron in a delimited energy range (i.e. 
only a "rare" high energy collision that implies a change of band would behave differently); the independent electron approximation is essentially still valid (i.e. no electron–electron scattering), where instead the hypothesis about the localization of the scattering events is dropped (in layman terms the electron is and scatters all over the place). == Mathematical treatment == === DC field === The simplest analysis of the Drude model assumes that electric field E is both uniform and constant, and that the thermal velocity of electrons is sufficiently high such that they accumulate only an infinitesimal amount of momentum dp between collisions, which occur on average every τ seconds. Then an electron isolated at time t will on average have been travelling for time τ since its last collision, and consequently will have accumulated momentum Δ ⟨ p ⟩ = q E τ . {\displaystyle \Delta \langle \mathbf {p} \rangle =q\mathbf {E} \tau .} During its last collision, this electron will have been just as likely to have bounced forward as backward, so all prior contributions to the electron's momentum may be ignored, resulting in the expression ⟨ p ⟩ = q E τ . {\displaystyle \langle \mathbf {p} \rangle =q\mathbf {E} \tau .} Substituting the relations ⟨ p ⟩ = m ⟨ v ⟩ , J = n q ⟨ v ⟩ , {\displaystyle {\begin{aligned}\langle \mathbf {p} \rangle &=m\langle \mathbf {v} \rangle ,\\\mathbf {J} &=nq\langle \mathbf {v} \rangle ,\end{aligned}}} results in the formulation of Ohm's law mentioned above: J = ( n q 2 τ m ) E . {\displaystyle \mathbf {J} =\left({\frac {nq^{2}\tau }{m}}\right)\mathbf {E} .} === Time-varying analysis === The dynamics may also be described by introducing an effective drag force. At time t = t0 + dt the electron's momentum will be: p ( t 0 + d t ) = ( 1 − d t τ ) [ p ( t 0 ) + f ( t ) d t + O ( d t 2 ) ] + d t τ ( g ( t 0 ) + f ( t ) d t + O ( d t 2 ) ) {\displaystyle \mathbf {p} (t_{0}+dt)=\left(1-{\frac {dt}{\tau }}\right)\left[\mathbf {p} (t_{0})+\mathbf {f} (t)dt+O(dt^{2})\right]+{\frac {dt}{\tau }}\left(\mathbf {g} (t_{0})+\mathbf {f} (t)dt+O(dt^{2})\right)} where f ( t ) {\displaystyle \mathbf {f} (t)} can be interpreted as generic force (e.g. Lorentz force) on the carrier or more specifically on the electron. g ( t 0 ) {\displaystyle \mathbf {g} (t_{0})} is the momentum of the carrier with random direction after the collision (i.e. with a momentum ⟨ g ( t 0 ) ⟩ = 0 {\displaystyle \langle \mathbf {g} (t_{0})\rangle =0} ) and with absolute kinetic energy ⟨ | g ( t 0 ) | ⟩ 2 2 m = 3 2 K T . {\displaystyle {\frac {\langle |\mathbf {g} (t_{0})|\rangle ^{2}}{2m}}={\frac {3}{2}}KT.} On average, a fraction of 1 − d t τ {\displaystyle \textstyle 1-{\frac {dt}{\tau }}} of the electrons will not have experienced another collision, the other fraction that had the collision on average will come out in a random direction and will contribute to the total momentum to only a factor d t τ f ( t ) d t {\displaystyle \textstyle {\frac {dt}{\tau }}\mathbf {f} (t)dt} which is of second order. With a bit of algebra and dropping terms of order d t 2 {\displaystyle dt^{2}} , this results in the generic differential equation d d t p ( t ) = f ( t ) − p ( t ) τ {\displaystyle {\frac {d}{dt}}\mathbf {p} (t)=\mathbf {f} (t)-{\frac {\mathbf {p} (t)}{\tau }}} The second term is actually an extra drag force or damping term due to the Drude effects. 
=== Constant electric field === At time t = t0 + dt the average electron's momentum will be ⟨ p ( t 0 + d t ) ⟩ = ( 1 − d t τ ) ( ⟨ p ( t 0 ) ⟩ + q E d t ) , {\displaystyle \langle \mathbf {p} (t_{0}+dt)\rangle =\left(1-{\frac {dt}{\tau }}\right)\left(\langle \mathbf {p} (t_{0})\rangle +q\mathbf {E} \,dt\right),} and then d d t ⟨ p ( t ) ⟩ = q E − ⟨ p ( t ) ⟩ τ , {\displaystyle {\frac {d}{dt}}\langle \mathbf {p} (t)\rangle =q\mathbf {E} -{\frac {\langle \mathbf {p} (t)\rangle }{\tau }},} where ⟨p⟩ denotes average momentum and q the charge of the electrons. This inhomogeneous differential equation may be solved to obtain the general solution ⟨ p ( t ) ⟩ = q τ E ( 1 − e − t / τ ) + ⟨ p ( 0 ) ⟩ e − t / τ {\displaystyle \langle \mathbf {p} (t)\rangle =q\tau \mathbf {E} (1-e^{-t/\tau })+\langle \mathbf {p} (0)\rangle e^{-t/\tau }} for p(t). The steady-state solution, d⟨p⟩/dt = 0, is then ⟨ p ⟩ = q τ E . {\displaystyle \langle \mathbf {p} \rangle =q\tau \mathbf {E} .} As above, average momentum may be related to average velocity, and this in turn may be related to current density, ⟨ p ⟩ = m ⟨ v ⟩ , J = n q ⟨ v ⟩ , {\displaystyle {\begin{aligned}\langle \mathbf {p} \rangle &=m\langle \mathbf {v} \rangle ,\\\mathbf {J} &=nq\langle \mathbf {v} \rangle ,\end{aligned}}} and the material can be shown to satisfy Ohm's law J = σ 0 E {\displaystyle \mathbf {J} =\sigma _{0}\mathbf {E} } with a DC-conductivity σ0: σ 0 = n q 2 τ m {\displaystyle \sigma _{0}={\frac {nq^{2}\tau }{m}}} === AC field === The Drude model can also predict the current as a response to a time-dependent electric field with an angular frequency ω. The complex conductivity is σ ( ω ) = σ 0 1 − i ω τ = σ 0 1 + ω 2 τ 2 + i ω τ σ 0 1 + ω 2 τ 2 . {\displaystyle \sigma (\omega )={\frac {\sigma _{0}}{1-i\omega \tau }}={\frac {\sigma _{0}}{1+\omega ^{2}\tau ^{2}}}+i\omega \tau {\frac {\sigma _{0}}{1+\omega ^{2}\tau ^{2}}}.} Here it is assumed that: E ( t ) = ℜ ( E 0 e − i ω t ) ; J ( t ) = ℜ ( σ ( ω ) E 0 e − i ω t ) . {\displaystyle {\begin{aligned}E(t)&=\Re {\left(E_{0}e^{-i\omega t}\right)};\\J(t)&=\Re \left(\sigma (\omega )E_{0}e^{-i\omega t}\right).\end{aligned}}} In engineering, i is generally replaced by −i (or −j) in all equations, which reflects the phase difference with respect to the origin rather than the delay at the observation point traveling in time. The imaginary part indicates that the current lags behind the electric field. This happens because the electrons need roughly a time τ to accelerate in response to a change in the electric field. Here the Drude model is applied to electrons; it can be applied both to electrons and to holes, i.e., positive charge carriers in semiconductors. If a sinusoidally varying electric field with frequency ω {\displaystyle \omega } is applied to the solid, the negatively charged electrons behave as a plasma that tends to move a distance x apart from the positively charged background. As a result, the sample is polarized and there will be an excess charge at the opposite surfaces of the sample. The dielectric constant of the sample is expressed as ε r = D ε 0 E = 1 + P ε 0 E {\displaystyle \varepsilon _{r}={\frac {D}{\varepsilon _{0}E}}=1+{\frac {P}{\varepsilon _{0}E}}} where D {\displaystyle D} is the electric displacement and P {\displaystyle P} is the polarization density. 
The polarization density is written as P ( t ) = ℜ ( P 0 e − i ω t ) {\displaystyle P(t)=\Re {\left(P_{0}e^{-i\omega t}\right)}} and the polarization density with electron density n is P = − n e x {\displaystyle P=-nex} After a little algebra the relation between polarization density and electric field can be expressed as P = − n e 2 m ω 2 E {\displaystyle P=-{\frac {ne^{2}}{m\omega ^{2}}}E} The frequency-dependent dielectric function of the solid is ε r ( ω ) = 1 − n e 2 ε 0 m ω 2 {\displaystyle \varepsilon _{r}(\omega )=1-{\frac {ne^{2}}{\varepsilon _{0}m\omega ^{2}}}} At a resonance frequency ω p {\displaystyle \omega _{\rm {p}}} , called the plasma frequency, the dielectric function changes sign from negative to positive and the real part of the dielectric function drops to zero. ω p = n e 2 ε 0 m {\displaystyle \omega _{\rm {p}}={\sqrt {\frac {ne^{2}}{\varepsilon _{0}m}}}} The plasma frequency represents a plasma oscillation resonance or plasmon. The plasma frequency can be employed as a direct measure of the square root of the density of valence electrons in a solid. Observed values are in reasonable agreement with this theoretical prediction for a large number of materials. Below the plasma frequency, the dielectric function is negative and the field cannot penetrate the sample. Light with angular frequency below the plasma frequency will be totally reflected. Above the plasma frequency the light waves can penetrate the sample; a typical example is the alkali metals, which become transparent in the range of ultraviolet radiation. === Thermal conductivity of metals === One great success of the Drude model is the explanation of the Wiedemann–Franz law. This was due to a fortuitous cancellation of errors in Drude's original calculation. Drude predicted the value of the Lorenz number: κ σ T = 3 2 ( k B e ) 2 = 1.11 × 10 − 8 W ⋅ Ω / K 2 {\displaystyle {\frac {\kappa }{\sigma T}}={\frac {3}{2}}\left({\frac {k_{\rm {B}}}{e}}\right)^{2}=1.11\times 10^{-8}\,\mathrm {W{\cdot }\Omega /K^{2}} } Experimental values are typically in the range of 2 − 3 × 10 − 8 W ⋅ Ω / K 2 {\displaystyle 2-3\times 10^{-8}~\mathrm {W{\cdot }\Omega /K^{2}} } for metals at temperatures between 0 and 100 degrees Celsius. === Thermopower === A generic temperature gradient, when switched on in a thin bar, will trigger a current of electrons towards the lower-temperature side. Since the experiments are done in an open-circuit manner, this current accumulates charge on that side, generating an electric field countering further flow. This field is called the thermoelectric field: E = Q ∇ T {\displaystyle \mathbf {E} =Q\nabla T} and Q is called the thermopower. Drude's estimate, given its direct dependence on the (classical) specific heat, is Q = − c v 3 n e = − k B 2 e = − 0.43 × 10 − 4 V / K {\displaystyle Q=-{\frac {c_{v}}{3ne}}=-{\frac {k_{\rm {B}}}{2e}}=-0.43\times 10^{-4}\mathrm {~V/K} } whereas typical thermopowers at room temperature are 100 times smaller, of the order of microvolts per kelvin. == Accuracy of the model == The Drude model provides a very good explanation of DC and AC conductivity in metals, the Hall effect, and the magnetoresistance in metals near room temperature. The model also partly explains the Wiedemann–Franz law of 1853. The Drude formula is derived in a limited way, namely by assuming that the charge carriers form a classical ideal gas. When quantum theory is considered, the Drude model can be extended to the free electron model, where the carriers follow the Fermi–Dirac distribution. 
The conductivity predicted is the same as in the Drude model, because it does not depend on the form of the electronic speed distribution. However, Drude's model greatly overestimates the electronic heat capacity of metals. In reality, metals and insulators have roughly the same heat capacity at room temperature. Also, the Drude model does not explain the scattered trend of electrical conductivity versus frequency above roughly 2 THz. The model can also be applied to positive (hole) charge carriers. === Drude response in real materials === The characteristic behavior of a Drude metal in the time or frequency domain, i.e. exponential relaxation with time constant τ or the frequency dependence for σ(ω) stated above, is called Drude response. In a conventional, simple, real metal (e.g. sodium, silver, or gold at room temperature) such behavior is not found experimentally, because the characteristic frequency τ−1 is in the infrared frequency range, where other features that are not considered in the Drude model (such as band structure) play an important role. But for certain other materials with metallic properties, frequency-dependent conductivity was found that closely follows the simple Drude prediction for σ(ω). These are materials where the relaxation rate τ−1 lies at much lower frequencies. This is the case for certain doped semiconductor single crystals, high-mobility two-dimensional electron gases, and heavy-fermion metals. == See also == Free electron model Arnold Sommerfeld Electrical conductivity == References == === Citations === === References === === General === Ashcroft, Neil; Mermin, N. David (1976). Solid State Physics. New York: Holt, Rinehart and Winston. ISBN 978-0-03-083993-1. Kittel, Charles (2005). Introduction to Solid State Physics (8th ed.). John Wiley & Sons Inc. ISBN 0-471-41526-X. Ziman, J.M. (1972). Principles of the Theory of Solids (2nd ed.). Cambridge University Press. ISBN 0-521-29733-8. == External links == Heaney, Michael B (2003). "Electrical Conductivity and Resistivity". In Webster, John G. (ed.). Electrical Measurement, Signal Processing, and Displays. CRC Press. ISBN 9780203009406.
Wikipedia/Drude_model
Corneal topography, also known as photokeratoscopy or videokeratography, is a non-invasive medical imaging technique for mapping the anterior curvature of the cornea, the outer structure of the eye. Since the cornea is normally responsible for some 70% of the eye's refractive power, its topography is of critical importance in determining the quality of vision and corneal health. The three-dimensional map is therefore a valuable aid to the examining ophthalmologist or optometrist and can assist in the diagnosis and treatment of a number of conditions; in planning cataract surgery and intraocular lens implantation; in planning refractive surgery such as LASIK, and evaluating its results; or in assessing the fit of contact lenses. A development of keratoscopy, corneal topography extends the measurement range from the four points a few millimeters apart that are offered by keratometry to a grid of thousands of points covering the entire cornea. The procedure is carried out in seconds and is painless. == Operation == The patient is seated facing the device, which is raised to eye level. One design consists of a bowl containing an illuminated pattern, such as a series of concentric rings. Another type uses a mechanically rotated arm bearing a light source. In either type, light is focused on the anterior surface of the patient's cornea and reflected back to a digital camera on the device. The topography of the cornea is revealed by the shape taken by the reflected pattern. A computer provides the necessary analysis, typically determining the position and height of several thousand points across the cornea. The topographical map can be represented in a number of graphical formats, such as a sagittal map, which color-codes the steepness of curvature according to its dioptric value. == Development == The corneal topograph owes its heritage to the Portuguese ophthalmologist Antonio Placido, who, in 1880, viewed a painted disk (Placido's disk) of alternating black and white rings reflected in the cornea. The rings showed as contour lines projected on the corneal tear film. The French ophthalmologist Louis Émile Javal incorporated the rings in his ophthalmometer and mounted an eyepiece which magnified the image of the eye. He proposed that the image should be photographed or diagrammatically represented to allow analysis of the image. In 1896, Allvar Gullstrand incorporated the disk in his ophthalmoscope, examined photographs of the cornea via a microscope, and was able to manually calculate the curvature by means of a numerical algorithm. Gullstrand recognized the potential of the technique and commented that despite its laboriousness it could "give a resultant accuracy that previously could not be obtained in any other way". The flat field of Placido's disk reduced the accuracy close to the corneal periphery, and in the 1950s the Wesley-Jessen company made use of a curved bowl to reduce the field defects. The curvature of the cornea could be determined from comparison of photographs of the rings against standardized images. In the 1980s, photographs of the projected images were hand-digitized and then analysed by computer. Automation of the process soon followed, with the image captured by a digital camera and passed directly to a computer. In the 1990s, systems became commercially available from a number of suppliers. The first completely automatic system was the Corneal Modeling System (CMS-1) developed by Computed Anatomy, Inc. 
in New York City, under the direction of Martin Gersten and a group of surgeons at the New York Eye and Ear Infirmary. The price of the early instruments was initially very high ($75,000), largely confining their use to research establishments. However, prices have fallen substantially over time, bringing corneal topographs into the budget of smaller clinics and increasing the number of patients that can be examined. == Use == Computerized corneal topography can be employed for diagnostics. It is, in fact, one of the exams patients have to undergo prior to cross-linking and the Mini Asymmetric Radial Keratotomy (M.A.R.K.). For example, the KISA% index (keratometry, I-S, skew percentage, astigmatism) is used to arrive at a diagnosis of keratoconus, to screen suspect keratoconic patients, and to analyse the degree of corneal steepness changes in healthy relatives. Nevertheless, topography in itself is a measurement of the first reflective surface of the eye (the tear film) and gives no additional information besides the shape of this layer expressed as curvature. Keratoconus in itself is a pattern of the entire cornea; therefore, any measurement focusing on just one layer might not be enough for a state-of-the-art diagnosis. Early cases of keratoconus, especially, might be missed by a plain topographic measurement, which is critical if refractive surgery is being considered. The measurement is also sensitive to unstable tear films. Also, the alignment of the measurement can be difficult, especially with eyes that have keratoconus, a significant astigmatism, or sometimes after refractive surgery. Corneal topography instruments generate a measurement called simulated keratometry (SimK), which approximates the classic measurement of the widely used keratometer. Another novel use of corneal topographic data is called CorT, which has been shown to quantify refractive astigmatism more accurately than SimK and other approaches. CorT utilizes data from all Placido rings across the cornea, compared with SimK, which is based on only one ring. While corneal topography relies on reflected light from the front (anterior) of the cornea, a technique called corneal tomography also provides a measure of the back (posterior) shape of the cornea. A measure called CorT total includes this posterior corneal data and more accurately reflects refraction compared with regular CorT, SimK, and other techniques. == References == == Further reading == Corbett M, O'Brart D, Rosen E, Stevenson R (1999). Corneal Topography. London: BMJ Books. p. 230. ISBN 0-7279-1226-7. Gormley D., Gersten M., Koplin R.S., Lubkin V. (1988). "Corneal Modeling". Cornea. 7 (1): 30–35. doi:10.1097/00003226-198801000-00004. PMID 3349789. Yanoff M, Duker J (2004). Ophthalmology (2nd ed.). Mosby. ISBN 0-323-01634-0.
Wikipedia/Corneal_topography
Areography, also known as the geography of Mars, is a subfield of planetary science that entails the delineation and characterization of regions on Mars. Areography is mainly focused on what is called physical geography on Earth; that is, the distribution of physical features across Mars and their cartographic representations. In April 2023, The New York Times reported an updated global map of Mars based on images from the Hope spacecraft. A related, but much more detailed, global Mars map was released by NASA on 16 April 2023. == History == The first detailed observations of Mars were from ground-based telescopes. The history of these observations is marked by the oppositions of Mars, when the planet is closest to Earth and hence is most easily visible, which occur every couple of years. Even more notable are the perihelic oppositions of Mars, which occur approximately every 16 years and are distinguished because Mars is then close to perihelion, bringing it even closer to Earth. In September 1877 (a perihelic opposition of Mars occurred on September 5), the Italian astronomer Giovanni Schiaparelli published the first detailed map of Mars. These maps notably contained features he called canali ("channels"), which were later shown to be an optical illusion. These canali were supposedly long straight lines on the surface of Mars, to which he gave the names of famous rivers on Earth. His term was popularly mistranslated as canals, and so started the Martian canal controversy. Following these observations, it was a long-held belief that Mars contained vast seas and vegetation. It was not until spacecraft visited the planet during NASA's Mariner missions in the 1960s that these myths were dispelled. Some maps of Mars were made using the data from these missions, but it was not until the Mars Global Surveyor mission, launched in 1996 and ending in late 2006, that complete, extremely detailed maps were obtained. == Cartography and geodesy == Cartography is the art, science, and technology of making maps. Geodesy is the science of measuring the shape, orientation, and gravity of Earth and, by extension, other planetary bodies. There are many established techniques specific to Earth that allow its curved surface to be converted into 2D planes to facilitate mapping. To facilitate this on Mars, projections, coordinate systems, and datums needed to be established. Today, the United States Geological Survey defines thirty cartographic quadrangles for the surface of Mars. === Zero elevation === On Earth, the zero elevation datum is based on sea level (the geoid). Since Mars has no oceans and hence no 'sea level', it is convenient to define an arbitrary zero-elevation level or "vertical datum" for mapping the surface, called the areoid. The datum for Mars was defined initially in terms of a constant atmospheric pressure. From the Mariner 9 mission up until 2001, this was chosen as 610.5 Pa (6.105 mbar), on the basis that below this pressure liquid water can never be stable (i.e., the triple point of water is at this pressure). This value is only 0.6% of the pressure at sea level on Earth. Note that the choice of this value does not mean that liquid water exists below this elevation, just that it could were the temperature to exceed 273.16 K (0.01 °C; 32.018 °F). 
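To give a feel for how a pressure-based datum works, here is a minimal sketch, not the official reduction procedure: it assumes an isothermal atmosphere with a single mean scale height (taken here as roughly 11 km, an assumed round figure), under which pressure decays exponentially with height and the elevation of a site relative to the 6.105 mbar surface can be estimated from a local pressure measurement.

```python
import math

P_DATUM = 610.5  # Pa; the pre-2001 zero-elevation convention for Mars
H = 11.1e3       # m; assumed mean scale height of the Martian atmosphere

def elevation_above_datum(pressure_pa):
    """Estimate elevation (m) relative to the 610.5 Pa datum, assuming
    p = P_DATUM * exp(-z / H), i.e. an isothermal atmosphere."""
    return H * math.log(P_DATUM / pressure_pa)

# A site measuring ~780 Pa (7.8 mbar) would sit roughly 2.7 km below the datum
print(round(elevation_above_datum(780.0)))  # -> about -2700
```

In practice the real atmosphere is far from isothermal and its pressure varies strongly with season, which is one reason the convention was replaced in 2001 by the gravimetric definition described next.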
In 2001, Mars Orbiter Laser Altimeter data led to a new convention of zero elevation defined as the equipotential surface (gravitational plus rotational) whose average value at the equator is equal to the mean radius of the planet. === Zero latitude === The origin of latitude is Mars's mean equator, defined perpendicularly to its mean axis of rotation, removing periodic wobbles. === Zero longitude === Mars's equator is defined by its rotation, but the location of its prime meridian was specified, as is Earth's, by the choice of an arbitrary point which later observers accepted. The German astronomers Wilhelm Beer and Johann Heinrich Mädler selected a small circular feature in the Sinus Meridiani ('Middle Bay' or 'Meridian Bay') as a reference point when they produced the first systematic chart of Mars features in 1830–1832. In 1877, their choice was adopted as the prime meridian by the Italian astronomer Giovanni Schiaparelli when he began work on his notable maps of Mars. In 1909, ephemeris-makers decided that it was more important to maintain continuity of the ephemerides as a guide to observations, and this definition was "virtually abandoned". After the Mariner spacecraft provided extensive imagery of Mars, in 1972 the Mariner 9 Geodesy/Cartography Group proposed that the prime meridian pass through the center of a small 500 m diameter crater, named Airy-0, located in Sinus Meridiani along the meridian line of Beer and Mädler, thus defining 0.0° longitude with a precision of 0.001°. This model used the planetographic control point network developed by Merton Davies of the RAND Corporation. As radiometric techniques increased the precision with which objects could be located on the surface of Mars, the center of a 500 m circular crater came to be considered insufficiently precise for exact measurements. The IAU Working Group on Cartographic Coordinates and Rotational Elements therefore recommended setting the longitude of the Viking 1 lander – for which there was extensive radiometric tracking data – as marking the standard longitude of 47.95137° west. This definition maintains the position of the center of Airy-0 at 0° longitude, within the tolerance of current cartographic uncertainties. == Topography == No single generalisation holds across a whole planet, and the geography of Mars varies considerably. The dichotomy of Martian topography is striking: northern plains flattened by lava flows contrast with the southern highlands, pitted and cratered by ancient impacts. The surface of Mars as seen from Earth is consequently divided into two kinds of areas, with differing albedo. The paler plains covered with dust and sand rich in reddish iron oxides were once thought of as Martian 'continents' and given names like Arabia Terra (land of Arabia) or Amazonis Planitia (Amazonian plain). The dark features were thought to be seas, hence their names Mare Erythraeum, Mare Sirenum and Aurorae Sinus. The largest dark feature seen from Earth is Syrtis Major Planum. The shield volcano Olympus Mons (Mount Olympus) rises 22 km above the surrounding volcanic plains and is the highest known mountain on any planet in the Solar System. It is in a vast upland region called Tharsis, which contains several large volcanoes. See list of mountains on Mars. The Tharsis region of Mars also has the Solar System's largest canyon system, Valles Marineris or the Mariner Valley, which is 4,000 km long and 7 km deep. Mars is also scarred by countless impact craters. The largest of these is the Hellas impact basin. 
See list of craters on Mars. Mars has two permanent polar ice caps, the northern one located at Planum Boreum and the southern one at Planum Australe. The difference between Mars's highest and lowest points is nearly 30 km (from the top of Olympus Mons at an altitude of 21.2 km to Badwater Crater at the bottom of the Hellas impact basin at an altitude of 8.2 km below the datum). In comparison, the difference between Earth's highest and lowest points (Mount Everest and the Mariana Trench) is only 19.7 km. Combined with the planets' different radii, this means Mars is nearly three times "rougher" than Earth: relative to each planet's radius, the range is roughly 30 km / 3,390 km ≈ 0.9% for Mars, versus 19.7 km / 6,371 km ≈ 0.3% for Earth. The International Astronomical Union's Working Group for Planetary System Nomenclature is responsible for naming Martian surface features. === Martian dichotomy === Observers of Martian topography will notice a dichotomy between the northern and southern hemispheres. Most of the northern hemisphere is flat, with few impact craters, and lies below the conventional 'zero elevation' level. In contrast, the southern hemisphere consists of mountains and highlands, mostly well above zero elevation. The two hemispheres differ in elevation by 1 to 3 km. The border separating the two areas is of great interest to geologists. One distinctive feature is the fretted terrain. It contains mesas, knobs, and flat-floored valleys having walls about a mile high. Around many of the mesas and knobs are lobate debris aprons that have been shown to be rock-covered glaciers. Other interesting features are the large river valleys and outflow channels that cut through the dichotomy. The northern lowlands comprise about one-third of the surface of Mars and are relatively flat, with occasional impact craters. The other two-thirds of the Martian surface are the southern highlands. The difference in elevation between the hemispheres is dramatic. Because of the density of impact craters, scientists believe the southern hemisphere to be far older than the northern plains. Much of the heavily cratered southern highlands dates back to the period of heavy bombardment, the Noachian. Multiple hypotheses have been proposed to explain the differences. The three most commonly accepted are a single mega-impact, multiple impacts, and endogenic processes such as mantle convection. Both impact-related hypotheses involve processes that could have occurred before the end of the primordial bombardment, implying that the crustal dichotomy has its origins early in the history of Mars. The giant impact hypothesis, originally proposed in the early 1980s, was met with skepticism due to the impact area's non-radial (elliptical) shape, since a circular pattern would have provided stronger support for an impact by one or more large objects. But a 2008 study provided additional research that supports a single giant impact. Using geologic data, researchers found support for the single impact of a large object hitting Mars at approximately a 45-degree angle. Additional evidence from Martian rock chemistry, analyzed for signs of post-impact upwelling of mantle material, would further support the giant impact theory. == Nomenclature == === Early nomenclature === Although better remembered for mapping the Moon starting in 1830, Johann Heinrich Mädler and Wilhelm Beer were the first "areographers". They started off by establishing once and for all that most of the surface features were permanent, and pinned down Mars's rotation period. In 1840, Mädler combined ten years of observations and drew the first map of Mars ever made. 
Rather than giving names to the various markings they mapped, Beer and Mädler simply designated them with letters; Meridian Bay (Sinus Meridiani) was thus feature "a". Over the next twenty years or so, as instruments improved and the number of observers increased, various Martian features acquired a hodge-podge of names. To give a couple of examples, Solis Lacus was known as the "Oculus" (the Eye), and Syrtis Major was usually known as the "Hourglass Sea" or the "Scorpion". In 1858, it was also dubbed the "Atlantic Canale" by the Jesuit astronomer Angelo Secchi. Secchi commented that it "seems to play the role of the Atlantic which, on Earth, separates the Old Continent from the New"; this was the first time the fateful canale, which in Italian can mean either "channel" or "canal", had been applied to Mars. In 1867, Richard Anthony Proctor drew up a map of Mars. It was based, somewhat crudely, on the Rev. William Rutter Dawes's earlier drawings of 1865, then the best ones available. Proctor explained his system of nomenclature by saying, "I have applied to the different features the names of those observers who have studied the physical peculiarities presented by Mars." Proctor's names can be paired with those later used by Schiaparelli in his Martian map created between 1877 and 1886; Schiaparelli's names were generally adopted and are the names actually used today. Proctor's nomenclature has often been criticized, mainly because so many of his names honored English astronomers, but also because he used many names more than once. In particular, Dawes appeared no fewer than six times (Dawes Ocean, Dawes Continent, Dawes Sea, Dawes Strait, Dawes Isle, and Dawes Forked Bay). Even so, Proctor's names are not without charm, and for all their shortcomings they were a foundation on which later astronomers would improve. === Modern nomenclature === Today, names of Martian features derive from a number of sources, but the names of the large features are derived primarily from the maps of Mars made in 1886 by the Italian astronomer Giovanni Schiaparelli. Schiaparelli named the larger features of Mars primarily using names from Greek mythology and, to a lesser extent, the Bible. Mars's large albedo features retain many of the older names, but are often updated to reflect new knowledge of the nature of the features. For example, 'Nix Olympica' (the snows of Olympus) has become Olympus Mons (Mount Olympus). Large Martian craters are named after important scientists and science fiction writers; smaller ones are named after towns and villages on Earth. Various landforms studied by the Mars Exploration Rovers are given temporary names or nicknames to identify them during exploration and investigation. However, it is hoped that the International Astronomical Union will make permanent the names of certain major features, such as the Columbia Hills, which were named after the seven astronauts who died in the Space Shuttle Columbia disaster. == See also == == References == == Further reading == Lane, K. Maria D., Geographies of Mars: Seeing and Knowing the Red Planet. The University of Chicago Press, Chicago. 2010. Sheehan, William, The Planet Mars: A History of Observation and Discovery Archived 2017-09-11 at the Wayback Machine (full text online). The University of Arizona Press, Tucson. 1996. 
== External links == Google Mars – Google Maps for Mars, with various surface features and interesting places pointed out Mars/THEMIS Maps – Arizona State University Mars Maps – Maps of Mars MEC-1 Prototype Historical Globes of the Red Planet 3D Map of Mars – presents distances and altitudes of features (NASA) The Origin of Mars Crater Names Interactive 3D map of Mars created by CTX
Wikipedia/Topography_of_Mars
In computer graphics, a triangulated irregular network (TIN) is a representation of a continuous surface consisting entirely of triangular facets (a triangle mesh), used mainly as a discrete global grid in primary elevation modeling. The vertices of these triangles are created from field-recorded spot elevations obtained through a variety of means, including conventional surveying techniques, Global Positioning System Real-Time Kinematic (GPS RTK), photogrammetry, or other methods. Associated with three-dimensional (x, y, z) data and topography, TINs are useful for the description and analysis of general horizontal (x, y) distributions and relationships. Digital TIN data structures are used in a variety of applications, including geographic information systems (GIS) and computer-aided design (CAD), for the visual representation of a topographical surface. A TIN is a vector-based representation of the physical land surface or sea bottom, made up of irregularly distributed nodes and lines with three-dimensional coordinates (x, y, z) that are arranged in a network of non-overlapping triangles. A TIN comprises a triangular network of vertices, known as mass points, with associated coordinates in three dimensions, connected by edges to form a triangular tessellation. Three-dimensional visualizations are readily created by rendering the triangular facets. In regions where there is little variation in surface height, the points may be widely spaced, whereas in areas of more intense variation in height the point density is increased. A TIN used to represent terrain is often called a digital elevation model (DEM), which can be further used to produce digital surface models (DSM) or digital terrain models (DTM). An advantage of using a TIN over a rasterized digital elevation model (DEM) in mapping and analysis is that the points of a TIN are distributed variably, based on an algorithm that determines which points are most necessary to create an accurate representation of the terrain. Data input is therefore flexible, and fewer points need to be stored than in a raster DEM with its regularly distributed points. While a TIN may be considered less suited than a raster DEM for certain kinds of GIS applications, such as analysis of a surface's slope and aspect, it is often used in CAD to create contour lines. A DTM and DSM can be formed from a DEM. A DEM can be interpolated from a TIN. TINs are based on Delaunay triangulation or constrained Delaunay triangulation. Delaunay conforming triangulations are recommended over constrained triangulations, because the resulting TINs are likely to contain fewer long, skinny triangles, which are undesirable for surface analysis. Additionally, natural neighbor interpolation and Thiessen (Voronoi) polygon generation can only be performed on Delaunay conforming triangulations. A constrained Delaunay triangulation can be considered when certain edges need to be defined explicitly and guaranteed not to be modified (that is, split into multiple edges) by the triangulator. Constrained Delaunay triangulations are also useful for minimizing the size of a TIN, since they have fewer nodes and triangles where breaklines are not densified. The TIN model was developed in the early 1970s as a simple way to build a surface from a set of irregularly spaced points, and the first triangulated irregular network program for GIS was written by W. Randolph Franklin, under the direction of David Douglas and Thomas Peucker (Poiker), at Canada's Simon Fraser University in 1973. 
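As a minimal illustration of how a TIN works in practice, the sketch below builds a Delaunay triangulation over scattered spot elevations and linearly interpolates the surface inside the enclosing facet using barycentric coordinates. It assumes SciPy is available; the point set and elevation function are invented for the example.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 100.0, size=(50, 2))                  # surveyed (x, y) mass points
z = 20.0 + 0.1 * xy[:, 0] + 5.0 * np.sin(xy[:, 1] / 10.0)   # synthetic spot elevations

tin = Delaunay(xy)  # the network of non-overlapping triangles

def tin_elevation(p, tin, z):
    """Elevation at p, linearly interpolated over the triangle containing p."""
    s = tin.find_simplex(p)
    if s < 0:
        return None                        # p falls outside the TIN's convex hull
    b = tin.transform[s, :2] @ (np.asarray(p) - tin.transform[s, 2])
    bary = np.append(b, 1.0 - b.sum())     # barycentric weights of the three vertices
    return float(bary @ z[tin.simplices[s]])

print(tin_elevation((50.0, 50.0), tin, z))
```

scipy.interpolate.LinearNDInterpolator wraps the same triangulate-then-interpolate idea in a single call; the manual version above just makes the facet lookup and the per-vertex weighting explicit.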
== File formats == A variety of file formats exist for saving TIN information, including Esri TIN, along with others such as AquaVeo and ICEM CFD. == References == == External links == UBC Geography PSU Education ArcGIS
Wikipedia/Triangulated_irregular_network
Environmental science is an interdisciplinary academic field that integrates physics, biology, meteorology, mathematics and geography (including ecology, chemistry, plant science, zoology, mineralogy, oceanography, limnology, soil science, geology and physical geography, and atmospheric science) in the study of the environment and the solution of environmental problems. Environmental science emerged from the fields of natural history and medicine during the Enlightenment. Today it provides an integrated, quantitative, and interdisciplinary approach to the study of environmental systems. Environmental scientists seek to understand the Earth's physical, chemical, biological, and geological processes, and to use that knowledge to understand how issues such as alternative energy systems, pollution control and mitigation, natural resource management, and the effects of global warming and climate change influence and affect the natural systems and processes of Earth. Environmental issues almost always include an interaction of physical, chemical, and biological processes. Environmental scientists bring a systems approach to the analysis of environmental problems. Key skills of an effective environmental scientist include the ability to analyze spatial and temporal relationships, as well as quantitative analysis. Environmental science came alive as a substantive, active field of scientific investigation in the 1960s and 1970s, driven by (a) the need for a multi-disciplinary approach to analyze complex environmental problems, (b) the arrival of substantive environmental laws requiring specific environmental protocols of investigation and (c) the growing public awareness of a need for action in addressing environmental problems. Events that spurred this development included the publication of Rachel Carson's landmark environmental book Silent Spring, along with major environmental issues becoming very public, such as the 1969 Santa Barbara oil spill and the Cuyahoga River of Cleveland, Ohio, "catching fire" (also in 1969); these helped increase the visibility of environmental issues and create this new field of study. == Terminology == In common usage, "environmental science" and "ecology" are often used interchangeably, but technically, ecology refers only to the study of organisms and their interactions with each other as well as with their environment. Ecology could be considered a subset of environmental science, which can also involve issues that ecologists would be unlikely to study, such as purely chemical or public health matters. In practice, there are considerable similarities between the work of ecologists and other environmental scientists. There is substantial overlap between ecology and environmental science with the disciplines of fisheries, forestry, and wildlife. Environmental studies incorporates more of the social sciences for understanding human relationships, perceptions and policies towards the environment. Environmental engineering focuses on design and technology for improving environmental quality in every aspect. == History == === Ancient civilizations === Historical concern for environmental issues is well documented in archives around the world. Ancient civilizations were mainly concerned with what is now known as environmental science insofar as it related to agriculture and natural resources. Scholars believe that early interest in the environment began around 6000 BCE, when ancient civilizations in Israel and Jordan collapsed due to deforestation. 
As a result, in 2700 BCE the first legislation limiting deforestation was established in Mesopotamia. Two hundred years later, in 2500 BCE, a community residing in the Indus River Valley observed the nearby river system in order to improve sanitation. This involved manipulating the flow of water to account for public health. In the Western Hemisphere, numerous ancient Central American city-states collapsed around 1500 BCE due to soil erosion from intensive agriculture. Those remaining from these civilizations paid greater attention to the impact of farming practices on the sustainability of the land and its stable food production. Furthermore, in 1450 BCE the Minoan civilization on the Greek island of Crete declined due to deforestation and the resulting environmental degradation of natural resources. Pliny the Elder somewhat addressed the environmental concerns of ancient civilizations in the text Naturalis Historia, written between 77 and 79 CE, which provided an overview of many related subsets of the discipline. Although warfare and disease were of primary concern in ancient society, environmental issues played a crucial role in the survival and power of different civilizations. As more communities recognized the importance of the natural world to their long-term success, an interest in studying the environment came into existence. === Beginnings of environmental science === ==== 18th century ==== In 1735, the concept of binomial nomenclature was introduced by Carolus Linnaeus as a way to classify all living organisms, influenced by the earlier works of Aristotle. His text, Systema Naturae, represents one of the earliest culminations of knowledge on the subject, providing a means to identify different species based partially on how they interact with their environment. ==== 19th century ==== In the 1820s, scientists were studying the properties of gases, particularly those in the Earth's atmosphere and their interactions with heat from the Sun. Later that century, studies suggested that the Earth had experienced an Ice Age and that warming of the Earth was partially due to what are now known as greenhouse gases (GHG). The concept of the greenhouse effect was introduced, although climate science was not yet recognized as an important topic in environmental science due to minimal industrialization and lower rates of greenhouse gas emissions at the time. ==== 20th century ==== In the 1900s, the discipline of environmental science as it is known today began to take shape. The century is marked by significant research, literature, and international cooperation in the field. In the early 20th century, criticism from dissenters downplayed the effects of global warming. At this time, few researchers were studying the dangers of fossil fuels. After a temperature anomaly of 1.3 degrees Celsius was found in the Atlantic Ocean in the 1940s, however, scientists renewed their studies of gaseous heat trapping from the greenhouse effect (although only carbon dioxide and water vapor were known to be greenhouse gases then). Nuclear development following the Second World War allowed environmental scientists to intensively study the effects of carbon and make advancements in the field. Further evidence, particularly from ice core sampling, brought to light the changes in climate over time. Environmental science was brought to the forefront of society in 1962, when Rachel Carson published an influential piece of environmental literature, Silent Spring. 
Carson's writing led the American public to pursue environmental safeguards, such as bans on harmful chemicals like the insecticide DDT. Another important work, The Tragedy of the Commons, was published by Garrett Hardin in 1968 in response to accelerating environmental degradation. In 1969, environmental science once again became a household term after two striking disasters: Ohio's Cuyahoga River caught fire due to the amount of pollution in its waters, and a Santa Barbara oil spill endangered thousands of marine animals, both receiving extensive media coverage. Consequently, the United States passed an abundance of legislation, including the Clean Water Act and the Great Lakes Water Quality Agreement. The following year, in 1970, the first ever Earth Day was celebrated worldwide and the United States Environmental Protection Agency (EPA) was formed, legitimizing the study of environmental science in government policy. In the next two years, the United Nations created the United Nations Environment Programme (UNEP) in Stockholm, Sweden to address global environmental degradation. Much of the interest in environmental science throughout the 1970s and the 1980s was characterized by major disasters and social movements. In 1978, hundreds of people were relocated from Love Canal, New York, after carcinogenic pollutants were found to be buried underground near residential areas. The next year, in 1979, the nuclear power plant on Three Mile Island in Pennsylvania suffered a meltdown and raised concerns about the dangers of radioactive waste and the safety of nuclear energy. In response to landfills and toxic waste often disposed of near their homes, the official Environmental Justice Movement was started by a Black community in North Carolina in 1982. Two years later, toxic methyl isocyanate gas was released in a pesticide plant disaster in Bhopal, India, harming hundreds of thousands of people living near the disaster site, the effects of which are still felt today. In a groundbreaking discovery in 1985, a British team of researchers studying Antarctica found evidence of a hole in the ozone layer, inspiring global agreements banning the use of chlorofluorocarbons (CFCs), which were previously used in nearly all aerosols and refrigerants. Notably, in 1986, the meltdown at the Chernobyl nuclear power plant in Ukraine released radioactive material into the environment, leading to international studies on the ramifications of environmental disasters. Over the next couple of years, the Brundtland Commission (previously known as the World Commission on Environment and Development) published a report titled Our Common Future, the Montreal Protocol was adopted, and the Intergovernmental Panel on Climate Change (IPCC) was formed, as international communication focused on finding solutions for climate change and degradation. In the late 1980s, Exxon was fined after its tanker Exxon Valdez spilled large quantities of crude oil off the coast of Alaska, and the resulting cleanup involved the work of environmental scientists. After hundreds of oil wells were set alight in combat in 1991, the warfare between Iraq and Kuwait polluted the surrounding atmosphere to just below the air quality threshold environmental scientists believed to be life-threatening. ==== 21st century ==== Many niche disciplines of environmental science have emerged over the years, although climatology is one of the best-known topics. 
Since the 2000s, environmental scientists have focused on modeling the effects of climate change and encouraging global cooperation to minimize potential damages. In 2002, the Society for the Environment as well as the Institute of Air Quality Management were founded to share knowledge and develop solutions around the world. Later, in 2008, the United Kingdom became the first country to pass legislation (the Climate Change Act) that aims to reduce carbon dioxide output to a specified threshold. In 2016, the Kyoto Protocol was succeeded by the Paris Agreement, which sets concrete goals to reduce greenhouse gas emissions and aims to restrict Earth's rise in temperature to a maximum of 2 degrees Celsius. The agreement is one of the most expansive international efforts to limit the effects of global warming to date. Most environmental disasters in this time period involve crude oil pollution or the effects of rising temperatures. In 2010, BP was responsible for the largest oil spill in American waters, in the Gulf of Mexico, known as the Deepwater Horizon spill, which killed a number of the company's workers and released large amounts of crude oil into the water. Furthermore, throughout this century, much of the world has been ravaged by widespread wildfires and water scarcity, prompting regulations on the sustainable use of natural resources as determined by environmental scientists. The 21st century is marked by significant technological advancements. New technology in environmental science has transformed how researchers gather information about various topics in the field. Research in engines, fuel efficiency, and decreasing emissions from vehicles since the times of the Industrial Revolution has reduced the amount of carbon and other pollutants released into the atmosphere. Furthermore, investment in researching and developing clean energy (i.e. wind, solar, hydroelectric, and geothermal power) has significantly increased in recent years, indicating the beginnings of the divestment from fossil fuel use. Geographic information systems (GIS) are used to observe sources of air or water pollution through satellites and digital imagery analysis. This technology allows for advanced farming techniques like precision agriculture, as well as monitoring water usage in order to set market prices. In the field of water quality, developed strains of natural and manmade bacteria contribute to bioremediation, the treatment of wastewaters for future use. This method is more eco-friendly and cheaper than manual cleanup or treatment of wastewaters. Most notably, the expansion of computer technology has allowed for large data collection, advanced analysis, historical archives, public awareness of environmental issues, and international scientific communication. The ability to crowdsource on the Internet, for example, represents the process of collectivizing knowledge from researchers around the world to create increased opportunity for scientific progress. With crowdsourcing, data is released to the public for personal analyses, which can later be shared as new information is found. Another technological development, blockchain, is used to monitor and regulate global fisheries. By tracking the path of fish through global markets, environmental scientists can observe whether certain species are being overharvested to the point of extinction. Additionally, remote sensing allows for the detection of features of the environment without physical intervention. The resulting digital imagery is used to create increasingly accurate models of environmental processes, climate change, and much more. Advancements in remote sensing technology are particularly useful in locating nonpoint sources of pollution and in analyzing ecosystem health through image analysis across the electromagnetic spectrum. 
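One small, concrete example of such image analysis is the normalized difference vegetation index (NDVI), which contrasts near-infrared and red reflectance to gauge vegetation health. The sketch below assumes the two bands are already available as NumPy arrays; the reflectance values are invented for illustration.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - Red) / (NIR + Red). Healthy vegetation reflects strongly
    in near-infrared and absorbs red, so values approach +1 over dense plants."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)  # eps avoids division by zero

# Hypothetical 2x2 scene: top row vegetated, bottom row bare soil and water
nir = np.array([[0.50, 0.55], [0.20, 0.10]])
red = np.array([[0.08, 0.06], [0.15, 0.12]])
print(ndvi(nir, red))  # high values (~0.7-0.8) flag the vegetated pixels
```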
Lastly, thermal imaging technology is used in wildlife management to catch and discourage poachers and other illegal wildlife traffickers from killing endangered animals, proving useful for conservation efforts. Artificial intelligence has also been used to predict the movement of animal populations and protect the habitats of wildlife. == Components == === Atmospheric sciences === Atmospheric sciences focus on the Earth's atmosphere, with an emphasis upon its interrelation to other systems. Atmospheric sciences can include studies of meteorology, greenhouse gas phenomena, atmospheric dispersion modeling of airborne contaminants, sound propagation phenomena related to noise pollution, and even light pollution. Taking the example of the global warming phenomenon, physicists create computer models of atmospheric circulation and infrared radiation transmission, chemists examine the inventory of atmospheric chemicals and their reactions, biologists analyze the plant and animal contributions to carbon dioxide fluxes, and specialists such as meteorologists and oceanographers add additional breadth in understanding the atmospheric dynamics. === Ecology === As defined by the Ecological Society of America, "Ecology is the study of the relationships between living organisms, including humans, and their physical environment; it seeks to understand the vital connections between plants and animals and the world around them." Ecologists might investigate the relationship between a population of organisms and some physical characteristic of their environment, such as concentration of a chemical; or they might investigate the interaction between two populations of different organisms through some symbiotic or competitive relationship. For example, an interdisciplinary analysis of an ecological system which is being impacted by one or more stressors might include several related environmental science fields. In an estuarine setting where a proposed industrial development could impact certain species by water and air pollution, biologists would describe the flora and fauna, chemists would analyze the transport of water pollutants to the marsh, physicists would calculate air pollution emissions and geologists would assist in understanding the marsh soils and bay muds. === Environmental chemistry === Environmental chemistry is the study of chemical alterations in the environment. Principal areas of study include soil contamination and water pollution. The topics of analysis include chemical degradation in the environment, multi-phase transport of chemicals (for example, evaporation from a solvent-containing lake to yield solvent as an air pollutant), and chemical effects upon biota. As an example study, consider the case of solvent leaking from a tank into the habitat soil of an endangered species of amphibian. As a method to resolve or understand the extent of soil contamination and subsurface transport of solvent, a computer model would be implemented. 
Chemists would then characterize the molecular bonding of the solvent to the specific soil type, and biologists would study the impacts upon soil arthropods, plants, and ultimately pond-dwelling organisms that are the food of the endangered amphibian. === Geosciences === Geosciences include environmental geology, environmental soil science, volcanic phenomena and evolution of the Earth's crust. In some classification systems this can also include hydrology, including oceanography. As an example study of soil erosion, soil scientists would calculate surface runoff. Fluvial geomorphologists would assist in examining sediment transport in overland flow. Physicists would contribute by assessing the changes in light transmission in the receiving waters. Biologists would analyze subsequent impacts to aquatic flora and fauna from increases in water turbidity. == Regulations driving the studies == In the United States, the National Environmental Policy Act (NEPA) of 1969 set forth requirements for analysis of federal government actions (such as highway construction projects and land management decisions) in terms of specific environmental criteria. Numerous state laws have echoed these mandates, applying the principles to local-scale actions. The upshot has been an explosion of documentation and study of environmental consequences before the fact of development actions. One can examine the specifics of environmental science by reading examples of Environmental Impact Statements prepared under NEPA, such as: Wastewater treatment expansion options discharging into the San Diego/Tijuana Estuary, Expansion of the San Francisco International Airport, Development of the Houston Metro transportation system, Expansion of the metropolitan Boston MBTA transit system, and Construction of Interstate 66 through Arlington, Virginia. In England and Wales, the Environment Agency (EA), formed in 1996, is a public body for protecting and improving the environment, and enforces the regulations listed on the communities and local government site (formerly the Office of the Deputy Prime Minister). The agency was set up under the Environment Act 1995 as an independent body and works closely with the UK Government to enforce the regulations. == See also == Environmental engineering science Environmental informatics Environmental monitoring Environmental planning Environmental statistics Glossary of environmental science List of environmental studies topics == References == == External links == Glossary of environmental terms – Global Development Research Center
Wikipedia/Environmental_science
Terrain cartography or relief mapping is the depiction of the shape of the surface of the Earth on a map, using one or more of several techniques that have been developed. Terrain or relief is an essential aspect of physical geography, and as such its portrayal presents a central problem in cartographic design, and more recently geographic information systems and geovisualization. == Hill profiles == The most ancient form of relief depiction in cartography, hill profiles are simply illustrations of mountains and hills in profile, placed as appropriate on generally small-scale (broad area of coverage) maps. They are seldom used today except as part of an "antique" styling. === Physiographic illustration === In 1921, A.K. Lobeck published A Physiographic Diagram of the United States, using an advanced version of the hill profile technique to illustrate the distribution of landforms on a small-scale map. Erwin Raisz further developed, standardized, and taught this technique, which uses generalized texture to imitate landform shapes over a large area. A combination of hill profile and shaded relief, this style of terrain representation is simultaneously idiosyncratic to its creator (it was often hand-painted) and insightful in illustrating geomorphological patterns. === Plan oblique relief === More recently, Tom Patterson developed a computer-generated technique for mapping terrain inspired by Raisz's work, called plan oblique relief. This tool starts with a shaded relief image, then shifts pixels northward proportional to their elevation. The effect is to make mountains "stand up" and "lay over" features to the north, in the same fashion as hill profiles. Some viewers are able to see the effect more easily than others. == Hachures == Hachures, first standardized by the Saxon topographer Johann Georg Lehmann in 1799, are a form of shading using lines. They show the orientation of slope, and by their thickness and overall density they provide a general sense of steepness. Being non-numeric, they are less useful to a scientific survey than contours, but can successfully communicate quite specific shapes of terrain. They are especially effective at showing relatively low relief, such as rolling hills. Hachuring was a standard on topographic maps of Germany well into the 20th century. There have been multiple attempts to recreate this technique using digital GIS data, with mixed results. == Contour lines == First developed in France in the 18th century, contour lines (or isohypses) are isolines of equal elevation. This is the most common way of visualizing elevation quantitatively, and is familiar from topographic maps. Most 18th- and early 19th-century national surveys did not record relief across the entire area of coverage, calculating only spot elevations at survey points. The United States Geological Survey (USGS) topographical survey maps included contour representation of relief, and so maps that show relief, especially with exact representation of elevation, came to be called topographic maps (or "topo" maps) in the United States, and the usage has spread internationally. On maps produced by Swisstopo, the color of the contour lines is used to indicate the type of ground: black for bare rock and scree, blue for ice and underwater contours, and brown for earth-covered ground. === Tanaka (relief) contours === The Tanaka (relief) contours technique is a method used to illuminate contour lines in order to help visualize terrain. 
Lines are highlighted or shaded depending on their relationship to a light source in the northwest. If the object being illustrated would shadow a section of contour line, that contour is represented with a black band. Otherwise, slopes facing the light source are represented by white bands. This method was developed by Professor Tanaka Kitiro in 1950, but had been experimented with as early as 1870, with little success due to technological limitations in printing. The resulting terrain representation at that point was a grayscale image. Cartographer Berthold Horn later created software to digitally produce Tanaka contours, and Patrick Kennelly, another cartographer, later found a way to add color to these maps, making them more realistic. There are a number of issues with this method. Historically, printing technology did not reproduce Tanaka contours well, especially the white lines on a gray background. This method is also very time-consuming. In addition, the terraced appearance does not look appealing or accurate in some kinds of terrain. == Hypsometric tints == Hypsometric tints (also called layer tinting, elevation tinting, elevation coloring, or hypsometric coloring) are colors placed between contour lines to indicate elevation. These tints are shown as bands of color in a graduated scheme or as a color scheme applied to contour lines themselves; either method is considered a type of isarithmic map. Hypsometric tinting of maps and globes is often accompanied by a similar method of bathymetric tinting to convey differences in water depth. == Shaded relief == Shaded relief, or hill-shading, shows the shape of the terrain in a realistic fashion by showing how the three-dimensional surface would be illuminated from a point light source. The shadows normally follow the convention of top-left lighting, in which the light source is placed near the upper-left corner of the map. If the map is oriented with north at the top, the result is that the light appears to come from the north-west. Although this is unrealistic lighting in the northern hemisphere, using a southern light source can cause multistable perception illusions, in which the topography appears inverted. Shaded relief was traditionally drawn with charcoal, airbrush and other artist's media. The Swiss cartographer Eduard Imhof is widely regarded as a master of manual hill-shading technique and theory. Shaded relief is today almost exclusively computer-generated from digital elevation models (DEM). The mathematical basis of analytical hillshading is to calculate the surface normal at each location, then calculate the angle between that vector and the vector pointing to the illumination using the dot product; the smaller that angle, the more illumination that location receives. However, most software implementations use algorithms that shorten those calculations. This tool is available in a variety of GIS and graphics software, including Photoshop, QGIS, GRASS GIS, and ArcMap's Spatial Analyst extension. 
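A minimal sketch of that calculation follows, using the common slope/aspect formulation (equivalent to the dot product of the surface normal with the light vector); the azimuth and altitude defaults reproduce the conventional upper-left light source, and the toy DEM is invented for the example.

```python
import numpy as np

def hillshade(dem, cellsize=30.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Analytical hillshade: illumination is the cosine of the angle between
    the surface normal and the light vector, computed here via slope and aspect."""
    az = np.radians(360.0 - azimuth_deg + 90.0)  # compass azimuth -> math angle
    alt = np.radians(altitude_deg)
    dz_dy, dz_dx = np.gradient(dem, cellsize)    # finite-difference surface gradient
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(dz_dy, -dz_dx)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)             # 0 = fully shaded, 1 = facing the light

# Toy DEM: a single Gaussian hill, lit from the northwest by default
y, x = np.mgrid[0:100, 0:100]
dem = 50.0 * np.exp(-((x - 50.0)**2 + (y - 50.0)**2) / 400.0)
img = hillshade(dem, cellsize=1.0)
```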
While these relatively simple tools have made shaded relief almost ubiquitous in maps, many cartographers have been unhappy with the product, and have developed techniques to improve its appearance, including the following: === Illuminated shading === Imhof's contributions included a multi-color approach to shading, with purples in valleys and yellows on peaks, which is known as "illuminated shading." Illuminating the sides of the terrain facing the light source with yellow colors provides greater realism (since direct sunlight is more yellow, and ambient light is more blue), enhances the sense of the three-dimensional nature of the terrain, and makes the map more aesthetically pleasing and artistic-looking. Much work has been done in digitally recreating the work of Eduard Imhof, which has been fairly successful in some cases. === Multi-directional shading === A common criticism of computer-generated analytical hillshading is its stark, artificial look, in which slopes facing the light are solid white, and slopes facing away are solid black. Raisz called it "plastic shading," and others have said it looks like a moonscape. One solution is to incorporate multiple lighting directions to imitate the effect of ambient lighting, creating a much more realistic looking product. Multiple techniques have been proposed for doing this, including using geographic information systems software to generate multiple shaded relief images and average them together, using 3-D modeling software to render terrain, and using custom software tools to imitate natural lighting with up to hundreds of individual sources. This technique has been found to be most effective for very rugged terrain at medium scales of 1:30,000 to 1:1,000,000. === Texture/bump mapping === It is possible to make the terrain look more realistic by imitating the three-dimensional look of not only the bare land surface, but also the features covering that land surface, such as buildings and plants. Texture mapping or bump mapping is a technique adapted from computer graphics that adds a layer of shaded texture to the shaded surface relief that imitates the look of the local land cover. This texture can be generated in several ways: Texture substitution: copying, abstracting, and merging remote sensing imagery of land cover. Texture generation: creating a simulated land cover elevation layer in GIS, such as a random scattering of "trees," then generating a shaded relief of this. Elevation measurement: using fine-resolution remote sensing techniques, especially lidar and drones, to directly or indirectly (through photogrammetry) measure the height and/or shape of land cover features, and shade that elevation surface. This technique is most useful at producing realistic maps at relatively large scales, 1:5,000 to 1:50,000. === Resolution mixing or bumping === One challenge with shaded relief, especially at small scales (1:500,000 or less), is that the technique is very good at visualizing local (high-frequency) relief, but may not effectively show larger features. For example, a rugged area of hills and valleys will show as much or more variation than a large, smooth mountain. Resolution bumping is a hybrid technique developed by NPS cartographer Tom Patterson to mitigate this problem. A fine-resolution DEM is averaged with a heavily smoothed version (i.e., significantly coarser resolution). When the hillshading algorithm is applied to this, it has the effect of blending the fine details of the original terrain model with the broader features brought out by the smoothed model. This technique works best at small scales and in regions that are consistently rugged. 
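A minimal sketch of that blend follows, reusing the hillshade function from the earlier example; the smoothing radius and blend weight are assumed values for illustration, not Patterson's published settings.

```python
from scipy.ndimage import gaussian_filter

def resolution_bump(dem, sigma=20.0, weight=0.5):
    """Average a fine-resolution DEM with a heavily smoothed copy of itself,
    so hillshading keeps local detail while still showing broad landforms."""
    smoothed = gaussian_filter(dem, sigma=sigma)  # stand-in for a coarser-resolution DEM
    return weight * dem + (1.0 - weight) * smoothed

# img = hillshade(resolution_bump(dem), cellsize=1.0)
```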
== Oblique view == A three-dimensional view (projected onto a two-dimensional medium) of the surface of the Earth, along with the geographic features resting on it. Imagined aerial views of cities were first produced during the late Middle Ages, but these "bird's eye views" became very popular in the United States during the 1800s. The advent of GIS (especially recent advances in 3-D and global visualization) and 3-D graphics modeling software has made the production of realistic aerial views relatively easy, although the execution of quality cartographic design on these models remains a challenge. == Raised-relief map == This is a map in which relief is shown as a three-dimensional object. The most intuitive way to depict relief is to imitate it at scale. Hand-crafted dioramas may date back to 200 BCE in China, but mass production did not become available until World War II, with the invention of vacuum-formed plastic maps, and computerized machining to create molds efficiently. Machining is also used to create large custom models from substrates such as high-density foam, and can even color them based on aerial photography by placing an inkjet printhead on the machining device. The advent of 3D printing has introduced a much more economical means to produce raised-relief maps, although most 3D printers are too small to efficiently produce large dioramas. == Rendering == Terrain rendering covers a variety of methods of depicting real-world or imaginary world surfaces. The most common terrain rendering is the depiction of Earth's surface. It is used in various applications to give an observer a frame of reference. It is also often used in combination with rendering of non-terrain objects, such as trees, buildings, rivers, etc. There are two major modes of terrain rendering: top-down and perspective rendering. Top-down terrain rendering has been known for centuries in the form of cartographic maps. Perspective terrain rendering has also been known for quite some time; however, it became mainstream only with the advent of computers and computer graphics. === Structure === A typical terrain rendering application consists of a terrain database, a central processing unit (CPU), a dedicated graphics processing unit (GPU), and a display. A software application is configured to start at an initial location in world space. The output of the application is a screen-space representation of the real world on a display. The software application uses the CPU to identify and load terrain data corresponding to the initial location from the terrain database, then applies the required transformations to build a mesh of points that can be rendered by the GPU, which completes the geometrical transformations, creating screen-space objects (such as polygons) that form a picture closely resembling the location in the real world. === Texture === There are a number of ways to texture the terrain surface. Some applications benefit from using artificial textures, such as elevation coloring, checkerboard, or other generic textures. Some applications attempt to recreate the real-world surface to the best possible representation using aerial photography and satellite imagery. In video games, texture splatting is used to texture the terrain surface. 
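A minimal sketch of texture splatting follows: several tiling textures (grass, rock, and snow, say) are blended per pixel according to weight maps. The array shapes and the normalization of weights to sum to one reflect the standard formulation; everything else here is invented for illustration.

```python
import numpy as np

def splat(textures, weights, eps=1e-9):
    """Texture splatting: per-pixel weighted blend of several tiling textures.
    textures: list of (H, W, 3) RGB arrays; weights: (H, W, N) coverage maps,
    one channel per texture (e.g. grass, rock, snow)."""
    w = weights / (weights.sum(axis=-1, keepdims=True) + eps)  # normalize per pixel
    return sum(w[..., i:i + 1] * t for i, t in enumerate(textures))
```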
=== Generation === There are a great variety of methods for generating terrain surfaces. The main problem solved by all these methods is managing the number of processed and rendered polygons. It is possible to create a very detailed picture of the world using billions of data points; however, such applications are limited to static pictures. Most uses of terrain rendering are moving images, which require the software application to make decisions on how to simplify (by discarding or approximating) source terrain data. Virtually all terrain rendering applications use level of detail to manage the number of data points processed by the CPU and GPU. There are several modern algorithms for generating terrain surfaces. === Applications === Terrain rendering is widely used in computer games to represent both Earth's surface and imaginary worlds. Some games also have terrain deformation (or deformable terrain). One important application of terrain rendering is in synthetic vision systems: pilots benefit greatly from the ability to see the terrain surface at all times, regardless of conditions outside the aircraft. == Skeletal, structural, or break lines == These emphasize hydrological drainage divides and watershed streams. == Forums and associations == Portrayal of relief is especially important in mountainous regions. The Commission on Mountain Cartography of the International Cartographic Association is the best-known forum for discussion of theory and techniques for mapping these regions. == See also == Cartographic labeling Pictorial map Geomipmapping Geometry Clipmaps ROAM (Real-time optimally adapting mesh) == References == == External links == Shaded Relief, a website by Tom Patterson Relief Shading, a website of the Institute of Cartography at ETH Zurich Wikipedia Graphic Lab, a tutorial on creating shaded relief maps using free and open source software Rendering a map using relief shading technique in Photoshop Virtual Terrain Project Underwater Relief Shading
Wikipedia/Cartographic_relief_depiction
In modern mapping, a topographic map or topographic sheet is a type of map characterized by large-scale detail and quantitative representation of relief features, usually using contour lines (connecting points of equal elevation), but historically using a variety of methods. Traditional definitions require a topographic map to show both natural and artificial features. A topographic survey is typically based upon systematic observation and published as a map series, made up of two or more map sheets that combine to form the whole map. A topographic map series uses a common specification that includes the range of cartographic symbols employed, as well as a standard geodetic framework that defines the map projection, coordinate system, ellipsoid and geodetic datum. Official topographic maps also adopt a national grid referencing system. Natural Resources Canada provides this description of topographic maps: "These maps depict in detail ground relief (landforms and terrain), drainage (lakes and rivers), forest cover, administrative areas, populated areas, transportation routes and facilities (including roads and railways), and other man-made features." Other authors define topographic maps by contrasting them with another type of map; they are distinguished from smaller-scale "chorographic maps" that cover large regions, "planimetric maps" that do not show elevations, and "thematic maps" that focus on specific topics. However, in the vernacular, day-to-day world, the representation of relief (contours) is popularly held to define the genre, such that even small-scale maps showing relief are commonly (and erroneously, in the technical sense) called "topographic". The study or discipline of topography is a much broader field, which takes into account all natural and human-made features of terrain. Maps were among the first artifacts to record observations about topography. == History == Topographic maps are based on topographical surveys. Performed at large scales, these surveys are called topographical in the old sense of topography, showing a variety of elevations and landforms. This is in contrast to older cadastral surveys, which primarily show property and governmental boundaries. The first multi-sheet topographic map series of an entire country, the Carte géométrique de la France, was completed in 1789. The Great Trigonometric Survey of India, started by the East India Company in 1802 and then taken over by the British Raj after 1857, was notable as a successful effort on a larger scale and for accurately determining heights of Himalayan peaks from viewpoints over one hundred miles distant. Topographic surveys were prepared by the military to assist in planning for battle and for defensive emplacements (thus the name and history of the United Kingdom's Ordnance Survey). As they evolved, topographic map series became a national resource in modern nations in planning infrastructure and resource exploitation. In the United States, the national map-making function, which had been shared by both the Army Corps of Engineers and the Department of the Interior, migrated to the newly created United States Geological Survey in 1879, where it has remained since. 1913 saw the beginning of the International Map of the World initiative, which set out to map all of Earth's significant land areas at a scale of 1:1 million, on about one thousand sheets, each covering four degrees latitude by six or more degrees longitude. 
Excluding borders, each sheet was 44 cm high and (depending on latitude) up to 66 cm wide. Although the project eventually foundered, it left an indexing system that remains in use. By the 1980s, centralized printing of standardized topographic maps began to be superseded by databases of coordinates that could be used on computers by moderately skilled end users to view or print maps with arbitrary contents, coverage and scale. For example, the federal government of the United States' TIGER initiative compiled interlinked databases of federal, state and local political borders and census enumeration areas, and of roadways, railroads, and water features with support for locating street addresses within street segments. TIGER was developed in the 1980s and used in the 1990 and subsequent decennial censuses. Digital elevation models (DEM) were also compiled, initially from topographic maps and stereographic interpretation of aerial photographs and then from satellite photography and radar data. Since all these were government projects funded with taxes and not classified for national security reasons, the datasets were in the public domain and freely usable without fees or licensing. TIGER and DEM datasets greatly facilitated geographic information systems and made the Global Positioning System much more useful by providing context around locations given by the technology as coordinates. Initial applications were mostly professionalized forms such as innovative surveying instruments and agency-level GIS systems tended by experts. By the mid-1990s, increasingly user-friendly resources such as online mapping in two and three dimensions, integration of GPS with mobile phones and automotive navigation systems appeared. As of 2011, the future of standardized, centrally printed topographical maps remains somewhat in doubt. == Uses == Topographic maps have many uses in the present day: any type of geographic planning or large-scale architecture; Earth sciences and many other geographic disciplines; mining and other Earth-based endeavours; civil engineering and recreational uses such as hiking and orienteering. It takes practice and skill to read and interpret a topographic map. This includes not only how to identify map features, but also how to interpret contour lines to infer landforms like cliffs, ridges, draws, etc. Training in map reading is often given in orienteering, scouting, and the military. == Conventions == The various features shown on the map are represented by conventional signs or symbols. For example, colors can be used to indicate a classification of roads. These signs are usually explained in the margin of the map, or on a separately published characteristic sheet. Topographic maps are also commonly called contour maps or topo maps. In the United States, where the primary national series is organized by a strict 7.5-minute grid, they are often called quads or quadrangles. Topographic maps conventionally show topography, or land contours, by means of contour lines. Contour lines are curves that connect contiguous points of the same altitude (isohypse). In other words, every point on the marked line of 100 m elevation is 100 m above mean sea level. These maps usually show not only the contours, but also any significant streams or other bodies of water, forest cover, built-up areas or individual buildings (depending on scale), and other features and points of interest, such as the direction in which streams flow.
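Because adjacent contours differ by a fixed vertical interval, elevations between them can be estimated by simple interpolation, and the contour spacing implies an average slope. The following is a minimal sketch of that arithmetic in Python; the function names and the example values are illustrative, not drawn from any mapping standard.

```python
def interpolate_elevation(lower_contour_m, contour_interval_m,
                          dist_to_lower_m, dist_between_contours_m):
    """Estimate the elevation of a point lying between two adjacent
    contour lines by linear interpolation along the ground slope.

    lower_contour_m: elevation of the downhill contour (e.g. 100 m)
    contour_interval_m: vertical spacing of the contours (e.g. 20 m)
    dist_to_lower_m: horizontal distance from the point to the downhill contour
    dist_between_contours_m: horizontal distance between the two contours,
        measured through the point
    """
    fraction = dist_to_lower_m / dist_between_contours_m
    return lower_contour_m + fraction * contour_interval_m


def slope_percent(contour_interval_m, dist_between_contours_m):
    """Average ground slope implied by two adjacent contours."""
    return 100.0 * contour_interval_m / dist_between_contours_m


# A point 30 m from the 100 m contour, with the 120 m contour 80 m away:
print(interpolate_elevation(100, 20, 30, 80))   # ~107.5 m
print(slope_percent(20, 80))                    # 25% average slope
```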
Most topographic maps were prepared using photogrammetric interpretation of aerial photography using a stereoplotter. Modern mapping also employs lidar and other remote sensing techniques. Older topographic maps were prepared using traditional surveying instruments. The cartographic style (content and appearance) of topographic maps is highly variable between national mapping organizations. Aesthetic traditions and conventions persist in topographic map symbology, particularly amongst European countries at medium map scales. == Publishers of national topographic map series == Although virtually the entire terrestrial surface of Earth has been mapped at a scale of 1:1,000,000, medium- and large-scale mapping has been accomplished intensively in some countries and much less so in others. Several commercial vendors supply international topographic map series. According to European directive 2007/2/EC, national mapping agencies of European Union countries must have publicly available services for searching, viewing and downloading their official map series. Topographic maps produced by some of them are available under a free license that allows re-use, such as a Creative Commons license. == See also == == References == == External links == USGS Topographic maps are downloadable as PDF files from a searchable map or by a search if the map name is known. How a Topographic Map is Manufactured, History, and Other Information The International Cartographic Association (ICA) Commission on Topographic Mapping
Wikipedia/Topographic_map
The U.S. Army Corps of Topographical Engineers was a branch of the United States Army authorized on 4 July 1838. It consisted only of officers, who were handpicked from West Point, and was used for mapping and the design and construction of federal civil works such as lighthouses, coastal fortifications, and navigational routes. Members included such officers as George Meade, John C. Frémont, Thomas J. Cram, Stephen Long, and Washington Hood. It was merged with the United States Army Corps of Engineers on 31 March 1863, at which point the Corps of Engineers also assumed the Lakes Survey for the Great Lakes. In the mid-19th century, Corps of Engineers' officers ran Lighthouse Districts in tandem with U.S. Naval officers. In 1841, Congress created the Lake Survey. The Survey, based in Detroit, Michigan, was charged with conducting a hydrographical survey of the Northern and Northwestern Lakes and preparing and publishing nautical charts and other navigation aids. The Lake Survey published its first charts in 1852. == Significance == William Goetzmann has written: From the year 1838 down to the Civil War, there existed a small but highly significant branch of the Army called the Corps of Topographical Engineers. ... The Engineers were concerned with recording all of the western phenomena as accurately as possible, whether main-traveled roads or uncharted wilderness. As Army officers they represented the direct concern of the national government in the settling of the West. The Corps of Topographical Engineers was a central institution of Manifest Destiny, and in the years before the Civil War its officers made explorations which resulted in the first scientific mapping of the West. They laid out national boundaries and directly promoted the advance of settlement by locating and constructing wagon roads, improving rivers and harbors, even performing experiments for the location of subsurface water in the arid regions. In short, they functioned as a department of public works for the West - and indeed for the whole nation, since the operations of the Corps extended to every state and territory of the United States. The work of the Corps in the West had still broader significance. Since a major part of its work was to assemble scientific information in the form of maps, pictures, statistics, and narrative reports about the West, it contributed importantly to the compilation of scientific knowledge about the interior of the North American continent. The Topographical Engineers were sophisticated men of their time who worked closely with the foremost scholars in American and European centers of learning. Scientists and artists of all nationalities accompanied their expeditions as partners and co-workers. The Army Topographer considered himself by schooling and profession as one of a company of savants. By virtue of his West Point training and status he was an engineer, something above the ordinary field officer, whose duties were confined usually to strictly military tasks. As a Topographical Engineer he on occasion might address the American Association for the Advancement of Science. He probably subscribed to Silliman's American Journal of Science and he was a pillar of the Smithsonian Institution. == Major expeditions prior to the Corps' creation == In all, there were six major expeditions into the Louisiana Purchase, the first being the best-known Corps of Discovery led by Lewis and Clark in 1804–1806.
A second expedition in 1804 included astronomer and naturalist William Dunbar and prominent Philadelphia chemist George Hunter. This expedition attempted to follow the Red River to its source in Texas, then controlled by Spain, but turned back after three months. In April 1806 a second Red River Expedition was led by Captain Richard Sparks and included astronomer and surveyor Thomas Freeman and Peter Custis, a University of Pennsylvania medical student who served as the expedition's botanist. The group of 24 traveled 615 miles up the Red River before being turned back by Spanish authorities. President Thomas Jefferson hoped that this expedition would be nearly as important as the one led by Lewis and Clark, but the interruption by Spanish authorities prevented this hope from being realized. In 1805–1806, Lieutenant Zebulon Pike was ordered by General James Wilkinson, Governor of the Upper Louisiana Territory, to find the source of the Mississippi River. In 1806–1807, President Jefferson ordered Lieutenant Pike, on another expedition, to find the headwaters of the Arkansas River and Red River. This is better known as the Pike Expedition. Spanish forces arrested Pike and confiscated his papers, but assigned a translator and cartographer to translate Pike's documents. In 1817 Major Stephen H. Long explored the upper Mississippi River, selecting sites for Fort Smith on the Arkansas River and Fort St. Anthony at the confluence of the Minnesota and Mississippi rivers. In 1819, President James Monroe and Secretary of War John C. Calhoun ordered General Henry Atkinson to lead what became known as the Yellowstone Expedition. One objective was to eliminate British influence among the Native American tribes in the region. Nearly 1,000 soldiers were transported by five steamboats up the Missouri River to the Mandan villages at the mouth of the Yellowstone, where they built a fort. This was the first known use of steam propulsion in the West. == Major expeditions by the Corps == Joseph Nicolas Nicollet, assisted by John Charles Fremont, a second lieutenant in the Corps of Topographical Engineers, conducted a reconnaissance of the region of the upper Mississippi and Missouri rivers in 1838 and 1839. Boundary survey of the border between Wisconsin Territory and Michigan (1840–1841) conducted by Captain Thomas J. Cram. Boundary survey of the border with the Republic of Texas (1841–42) by Maj. James Duncan Graham, who later discovered lunar tides in the Great Lakes in 1858–59. Fremont conducted expeditions over the Oregon Trail to the Columbia River and to California during 1842–1846. During his third expedition, Fremont detached Lieutenants James W. Abert and William G. Peck in August 1845, at Bent's Fort on the Arkansas River, to survey Purgatory Creek and the Canadian and False Washita Rivers. Boundary survey of the Canada–US border (1844–46) led by William H. Emory. Military Reconnaissance from Fort Leavenworth in Missouri to San Diego in California, Including Part Of The Arkansas, Del Norte, and Gila Rivers. By Lieut. Col. W. H. Emory, made in 1846-7, With the Advanced Guard of the "Army of the West" led by Brigadier General Stephen W. Kearny. Boundary survey of the borders with Mexico; United States and Mexican Boundary Survey (1848–1855) led by William H. Emory. Lorenzo Sitgreaves led the first topographical mission across Arizona in 1851. Lt. Col. J. D. Graham was on the first resurvey of the Mason–Dixon line, from 1849 to 1850.
At some point, Graham was to replace the Delaware-Maryland-Pennsylvania Tri-State Marker, but misplaced it. Pacific Railroad Surveys, which consisted of five surveys to find potential transcontinental railroad routes. These survey reports were compiled into twelve volumes, Reports of Explorations and Surveys, to ascertain the most practicable and economical route for a railroad from the Mississippi River to the Pacific Ocean, made under the direction of the Secretary of War, in 1853-4. Volumes I-XII, Washington, Government Printing Office, 1855–61. The Northern Pacific survey followed between the 47th parallel north and 49th parallel north from St. Paul, Minnesota to Puget Sound and was led by the newly appointed governor of the Washington Territory, Isaac Stevens. Accompanying Stevens were Captain George B. McClellan with Lt. Sylvester Mowry out of the Columbia Barracks from the west and Lt. Rufus Saxton with Lt. Richard Arnold out of St. Marysville from the east. There were two Central Pacific surveys. One followed between the 37th parallel north and 39th parallel north from St. Louis, Missouri to San Francisco, California. This survey was led by Lt. John Williams Gunnison until his death at the hands of a band of Pahvants in Utah; Lt. Edward Griffin Beckwith then took command. Also participating in this survey were Frederick W. von Egloffstein, George Stoneman and Lt. Gouverneur K. Warren. Beckwith subsequently explored the second Central Pacific route near the 41st parallel. This was the route later closely followed by the Central Pacific and Union Pacific to complete the first transcontinental railroad. There were two Southern Pacific surveys. One, led by Lt. Amiel Weeks Whipple, followed the 35th parallel north from Oklahoma to Los Angeles, California, a route similar to the western part of the later Santa Fe Railroad and to Interstate 40. Joseph Christmas Ives, Whipple's assistant, led the expedition that explored and mapped the Colorado River to the mouth of Las Vegas Wash. The southernmost Southern Pacific survey went across Texas to San Diego, California, a route later followed by the Southern Pacific Railroad, which completed the second transcontinental railway in 1881. This survey was led by Lt. John Parke and John Pope. The fifth survey was along the Pacific coast from San Diego to Seattle, Washington, conducted by Lt. Robert S. Williamson and John G. Parke. Exploration of the Colorado River of the West by Lt. Joseph Christmas Ives, 1858–59. Boundary survey of the border with Canada, the Northwest Boundary Survey (1857–61). == See also == Army Geospatial Center == Further reading == Schubert, Frank N. (2004). The Nation Builders: A Sesquicentennial History of the Corps of Topographical Engineers 1838–1863 (PDF). University Press of the Pacific. ISBN 978-1410218728. Archived from the original (PDF) on July 15, 2014. "A History of the U.S. Topographical Engineers, (1818–1863) part 1". U.S. Corps of Topographical Engineers website, quoting from Beers, Henry P. "A History of the U.S. Topographical Engineers, 1813–1863." 2 pts. The Military Engineer 34 (Jun 1942): pp. 287–91 & (Jul 1942): pp. 348–52. Retrieved 2011-08-06. == References == Attribution This article incorporates public domain material from the United States Army Corps of Engineers This article incorporates public domain material from Miscellaneous USACE History Publications. United States Army. == External links == U.S.
Corps of Topographical Engineers - a living history group website on this topic
Wikipedia/Corps_of_Topographical_Engineers
Surveying or land surveying is the technique, profession, art, and science of determining the terrestrial two-dimensional or three-dimensional positions of points and the distances and angles between them. These points are usually on the surface of the Earth, and they are often used to establish maps and boundaries for ownership; locations, such as the designated positions of structural components for construction or the surface location of subsurface features; or other purposes required by government or civil law, such as property sales. A professional in land surveying is called a land surveyor. Surveyors work with elements of geodesy, geometry, trigonometry, regression analysis, physics, engineering, metrology, programming languages, and the law. They use equipment such as total stations, robotic total stations, theodolites, GNSS receivers, retroreflectors, 3D scanners, lidar sensors, radios, inclinometers, handheld tablets, optical and digital levels, subsurface locators, drones, GIS, and surveying software. Surveying has been an element in the development of the human environment since the beginning of recorded history. It is used in the planning and execution of most forms of construction. It is also used in transportation, communications, mapping, and the definition of legal boundaries for land ownership. It is an important tool for research in many other scientific disciplines. == Definition == The International Federation of Surveyors defines the function of surveying as follows: A surveyor is a professional person with the academic qualifications and technical expertise to conduct one, or more, of the following activities: to determine, measure and represent land, three-dimensional objects, point-fields and trajectories; to assemble and interpret land and geographically related information; to use that information for the planning and efficient administration of the land, the sea and any structures thereon; and to conduct research into the above practices and to develop them. == History == === Ancient history === Surveying has occurred since humans built the first large structures. In ancient Egypt, a rope stretcher would use simple geometry to re-establish boundaries after the annual floods of the Nile River. The almost perfect squareness and north–south orientation of the Great Pyramid of Giza, built c. 2700 BC, affirm the Egyptians' command of surveying. The groma instrument may have originated in Mesopotamia (early 1st millennium BC). The prehistoric monument at Stonehenge (c. 2500 BC) was set out by prehistoric surveyors using peg and rope geometry. The mathematician Liu Hui described ways of measuring distant objects in his work Haidao Suanjing or The Sea Island Mathematical Manual, published in 263 AD. The Romans recognized land surveying as a profession. They established the basic measurements under which the Roman Empire was divided, such as a tax register of conquered lands (300 AD). Roman surveyors were known as Gromatici. In medieval Europe, beating the bounds maintained the boundaries of a village or parish. This was the practice of gathering a group of residents and walking around the parish or village to establish a communal memory of the boundaries. Young boys were included to ensure the memory lasted as long as possible. In England, William the Conqueror commissioned the Domesday Book in 1086. It recorded the names of all the landowners, the area of land they owned, the quality of the land, and specific information of the area's content and inhabitants.
It did not include maps showing exact locations. === Modern era === Abel Foullon described a plane table in 1551, but it is thought that the instrument was in use earlier, as his description is of a developed instrument. Gunter's chain was introduced in 1620 by English mathematician Edmund Gunter. It enabled plots of land to be accurately surveyed and plotted for legal and commercial purposes. Leonard Digges described a theodolite that measured horizontal angles in his book A geometric practice named Pantometria (1571). Joshua Habermel (Erasmus Habermehl) created a theodolite with a compass and tripod in 1576. Jonathan Sisson was the first to incorporate a telescope on a theodolite, in 1725. In the 18th century, modern techniques and instruments for surveying began to be used. Jesse Ramsden introduced the first precision theodolite in 1787. It was an instrument for measuring angles in the horizontal and vertical planes. He created his great theodolite using an accurate dividing engine of his own design. Ramsden's theodolite represented a great step forward in the instrument's accuracy. William Gascoigne invented an instrument that used a telescope with an installed crosshair as a target device, in 1640. James Watt developed an optical meter for the measuring of distance in 1771; it measured the parallactic angle from which the distance to a point could be deduced. Dutch mathematician Willebrord Snellius (a.k.a. Snel van Royen) introduced the modern systematic use of triangulation. In 1615 he surveyed the distance from Alkmaar to Breda, approximately 72 miles (116 km). He underestimated this distance by 3.5%. The survey was a chain of quadrangles containing 33 triangles in all. Snell showed how planar formulae could be corrected to allow for the curvature of the Earth. He also showed how to resect, or calculate, the position of a point inside a triangle using the angles subtended at the unknown point by lines to the triangle's vertices. These angles could be measured more accurately than bearings of the vertices, which depended on a compass. His work established the idea of surveying a primary network of control points, and locating subsidiary points inside the primary network later. Between 1733 and 1740, Jacques Cassini and his son César undertook the first triangulation of France. They included a re-surveying of the meridian arc, leading to the publication in 1745 of the first map of France constructed on rigorous principles. By this time triangulation methods were well established for local map-making. It was only towards the end of the 18th century that detailed triangulation network surveys mapped whole countries. In 1784, a team from General William Roy's Ordnance Survey of Great Britain began the Principal Triangulation of Britain. The first Ramsden theodolite was built for this survey. The survey was finally completed in 1853. The Great Trigonometric Survey of India began in 1801. The Indian survey had an enormous scientific impact. It was responsible for one of the first accurate measurements of a section of an arc of longitude, and for measurements of the geodesic anomaly. It named and mapped Mount Everest and the other Himalayan peaks. Surveying became a professional occupation in high demand at the turn of the 19th century with the onset of the Industrial Revolution. The profession developed more accurate instruments to aid its work. Industrial infrastructure projects used surveyors to lay out canals, roads and rail. In the US, the Land Ordinance of 1785 created the Public Land Survey System.
It formed the basis for dividing the western territories into sections to allow the sale of land. The PLSS divided states into township grids, which were further divided into sections and fractions of sections. Napoleon Bonaparte founded continental Europe's first cadastre in 1808. This gathered data on the number of parcels of land, their value, land usage, and names. This system soon spread around Europe. Robert Torrens introduced the Torrens system in South Australia in 1858. Torrens intended to simplify land transactions and provide reliable titles via a centralized register of land. The Torrens system was adopted in several other nations of the English-speaking world. Surveying became increasingly important with the arrival of railroads in the 1800s. Surveying was necessary so that railroads could plan technologically and financially viable routes. === 20th century === At the beginning of the century, surveyors had improved the older chains and ropes, but they still faced the problem of accurate measurement of long distances. Trevor Lloyd Wadley developed the Tellurometer during the 1950s. It measures long distances using two microwave transmitter/receivers. During the late 1950s Geodimeter introduced electronic distance measurement (EDM) equipment. EDM units use a multi-frequency phase shift of light waves to find a distance. These instruments eliminated the need for days or weeks of chain measurement by measuring between points kilometers apart in one go. Advances in electronics allowed miniaturization of EDM. In the 1970s the first instruments combining angle and distance measurement appeared, becoming known as total stations. Manufacturers added more equipment by degrees, bringing improvements in accuracy and speed of measurement. Major advances include tilt compensators, data recorders and on-board calculation programs. The first satellite positioning system was the US Navy TRANSIT system. The first successful launch took place in 1960. The system's main purpose was to provide position information to Polaris missile submarines. Surveyors found they could use field receivers to determine the location of a point. Sparse satellite cover and large equipment made observations laborious and inaccurate. The main use was establishing benchmarks in remote locations. The US Air Force launched the first prototype satellites of the Global Positioning System (GPS) in 1978. GPS used a larger constellation of satellites and improved signal transmission, thus improving accuracy. Early GPS observations required several hours of observation by a static receiver to reach survey accuracy requirements. Later improvements to both satellites and receivers allowed for Real Time Kinematic (RTK) surveying. RTK surveys provide high-accuracy measurements by using a fixed base station and a second roving antenna. The position of the roving antenna can be tracked. === 21st century === The theodolite, total station and RTK GPS survey remain the primary methods in use. Remote sensing and satellite imagery continue to improve and become cheaper, allowing more commonplace use. Prominent new technologies include three-dimensional (3D) scanning and lidar-based topographical surveys. UAV technology along with photogrammetric image processing is also appearing. == Equipment == === Hardware === The main surveying instruments in use around the world are the theodolite, measuring tape, total station, 3D scanners, GPS/GNSS, level and rod. Most instruments screw onto a tripod when in use.
Tape measures are often used for measurement of smaller distances. 3D scanners and various forms of aerial imagery are also used. The theodolite is an instrument for the measurement of angles. It uses two separate circles, protractors or alidades to measure angles in the horizontal and the vertical plane. A telescope mounted on trunnions is aligned vertically with the target object. The whole upper section rotates for horizontal alignment. The vertical circle measures the angle that the telescope makes against the vertical, known as the zenith angle. The horizontal circle uses an upper and lower plate. When beginning the survey, the surveyor points the instrument in a known direction (bearing), and clamps the lower plate in place. The instrument can then rotate to measure the bearing to other objects. If no bearing is known or direct angle measurement is wanted, the instrument can be set to zero during the initial sight. It will then read the angle between the initial object, the theodolite itself, and the item that the telescope aligns with. The gyrotheodolite is a form of theodolite that uses a gyroscope to orient itself in the absence of reference marks. It is used in underground applications. The total station is a development of the theodolite with an electronic distance measurement device (EDM). A total station can be used for leveling when set to the horizontal plane. Since their introduction, total stations have shifted from optical-mechanical to fully electronic devices. Modern top-of-the-line total stations no longer need a reflector or prism to return the light pulses used for distance measurements. They are fully robotic, and can even e-mail point data to a remote computer and connect to satellite positioning systems, such as the Global Positioning System. Real Time Kinematic GPS systems have significantly increased the speed of surveying; they are now horizontally accurate to within about 1 cm ± 1 ppm in real time, while vertical accuracy is roughly half as good, about 2 cm ± 2 ppm (a nominal error budget illustrated in the sketch below). GPS surveying differs from other GPS uses in the equipment and methods used. Static GPS uses two receivers placed in position for a considerable length of time. The long span of time lets the receiver compare measurements as the satellites orbit. The changes as the satellites orbit also provide the measurement network with well conditioned geometry. This produces an accurate baseline that can be over 20 km long. RTK surveying uses one static antenna and one roving antenna. The static antenna tracks changes in the satellite positions and atmospheric conditions. The surveyor uses the roving antenna to measure the points needed for the survey. The two antennas use a radio link that allows the static antenna to send corrections to the roving antenna. The roving antenna then applies those corrections to the GPS signals it is receiving to calculate its own position. RTK surveying covers smaller distances than static methods. This is because divergent conditions further away from the base reduce accuracy. Surveying instruments have characteristics that make them suitable for certain uses. Theodolites and levels are often used by constructors rather than surveyors in first world countries. The constructor can perform simple survey tasks using a relatively cheap instrument. Total stations are workhorses for many professional surveyors because they are versatile and reliable in all conditions.
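The quoted "1 cm ± 1 ppm" specification combines a fixed error with a part that grows with baseline length (1 ppm is 1 mm per kilometre). A minimal sketch of that error budget in Python, using the nominal figures above; real-world accuracy also depends on satellite geometry, atmosphere and multipath:

```python
def rtk_uncertainty_mm(baseline_km, fixed_mm, ppm):
    """Nominal RTK uncertainty: a fixed part plus a distance-dependent
    part of `ppm` millimetres per kilometre of baseline."""
    return fixed_mm + ppm * baseline_km

# Horizontal: 1 cm + 1 ppm; vertical: 2 cm + 2 ppm (figures from the text).
for baseline_km in (1, 5, 10, 20):
    horizontal = rtk_uncertainty_mm(baseline_km, fixed_mm=10, ppm=1)
    vertical = rtk_uncertainty_mm(baseline_km, fixed_mm=20, ppm=2)
    print(f"{baseline_km:>2} km baseline: ~{horizontal:.0f} mm horizontal, "
          f"~{vertical:.0f} mm vertical")
```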
The productivity improvements from a GPS on large scale surveys make them popular for major infrastructure or data gathering projects. One-person robotic-guided total stations allow surveyors to measure without extra workers to aim the telescope or record data. A fast but expensive way to measure large areas is with a helicopter, using a GPS to record the location of the helicopter and a laser scanner to measure the ground. To increase precision, surveyors place beacons on the ground (about 20 km (12 mi) apart). This method reaches precisions between 5 and 40 cm (depending on flight height). Surveyors use ancillary equipment such as tripods and instrument stands; staves and beacons used for sighting purposes; PPE; vegetation clearing equipment; digging implements for finding survey markers buried over time; hammers for placement of markers in various surfaces and structures; and portable radios for communication over long lines of sight. === Software === Land surveyors, construction professionals, geomatics engineers and civil engineers using total station, GPS, 3D scanners, and other collector data use land surveying software to increase efficiency, accuracy, and productivity. Land surveying software is a staple of contemporary land surveying. Typically, much if not all of the drafting and some of the designing for plans and plats of the surveyed property is done by the surveyor, and nearly everyone working in the area of drafting today (2021) utilizes CAD software and hardware, both on PC and, increasingly, in newer generation data collectors in the field. Other computer platforms and tools commonly used today by surveyors are offered online by the U.S. Federal Government and other governments' survey agencies, such as the National Geodetic Survey and the CORS network, to get automated corrections and conversions for collected GPS data, and the data coordinate systems themselves. == Techniques == Surveyors determine the position of objects by measuring angles and distances. The factors that can affect the accuracy of their observations are also measured. They then use this data to create vectors, bearings, coordinates, elevations, areas, volumes, plans and maps. Measurements are often split into horizontal and vertical components to simplify calculation. GPS and astronomic measurements also need measurement of a time component. === Distance measurement === Before EDM (Electronic Distance Measurement) laser devices, distances were measured using a variety of means. In pre-colonial America, Natives would use the "bow shot" as a distance reference ("as far as an arrow can be slung out of a bow", or "flights of a Cherokee long bow"). Europeans used chains with links of a known length, such as a Gunter's chain, or measuring tapes made of steel or invar. To measure horizontal distances, these chains or tapes were pulled taut to reduce sagging and slack. The distance had to be adjusted for thermal expansion. Attempts to hold the measuring instrument level would also be made. When measuring up a slope, the surveyor might have to "break" (break chain) the measurement, using an increment less than the total length of the chain. Perambulators, or measuring wheels, were used to measure longer distances but not to a high level of accuracy. Tacheometry is the science of measuring distances by measuring the angle between two ends of an object of known size. It was sometimes used before the invention of EDM where rough ground made chain measurement impractical.
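The tacheometric relationship is simple trigonometry: an object of known size s subtending an angle θ lies at distance D = s / (2·tan(θ/2)). A minimal sketch in Python; the 2 m staff and half-degree angle are illustrative values, not figures from any standard:

```python
import math

def tacheometric_distance(target_size_m, subtended_angle_deg):
    """Distance to an object of known size from the angle it subtends:
    D = s / (2 * tan(theta / 2))."""
    half_angle_rad = math.radians(subtended_angle_deg) / 2
    return target_size_m / (2 * math.tan(half_angle_rad))

# A 2 m staff subtending half a degree is roughly 229 m away:
print(f"{tacheometric_distance(2.0, 0.5):.1f} m")
```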
=== Angle measurement === Historically, horizontal angles were measured by using a compass to provide a magnetic bearing or azimuth. Later, more precise scribed discs improved angular resolution. Mounting telescopes with reticles atop the disc allowed more precise sighting (see theodolite). Levels and calibrated circles allowed the measurement of vertical angles. Verniers allowed measurement to a fraction of a degree, such as with a turn-of-the-century transit. The plane table provided a graphical method of recording and measuring angles, which reduced the amount of mathematics required. In 1829 Francis Ronalds invented a reflecting instrument for recording angles graphically by modifying the octant. By observing the bearing from every vertex in a figure, a surveyor can measure around the figure. The final observation will be between the two points first observed, except with a 180° difference. This is called a close. If the first and last bearings are different, this shows the error in the survey, called the angular misclose. The surveyor can use this information to prove that the work meets the expected standards. === Leveling === The simplest method for measuring height is with an altimeter using air pressure to find the height. When more precise measurements are needed, techniques like precise leveling (also known as differential leveling) are used. In precise leveling, a series of measurements between two points is taken using an instrument and a measuring rod. Differences in height between the measurements are added and subtracted in a series to get the net difference in elevation between the two endpoints, as the sketch below illustrates. With the Global Positioning System (GPS), elevation can be measured with satellite receivers. Usually, GPS is somewhat less accurate than traditional precise leveling, but may be similar over long distances. When using an optical level, the endpoint may be out of the effective range of the instrument. There may be obstructions or large changes of elevation between the endpoints. In these situations, extra setups are needed. Turning is a term used when referring to moving the level to take an elevation shot from a different location. To "turn" the level, one must first take a reading and record the elevation of the point the rod is located on. While the rod is kept in exactly the same location, the level is moved to a new location where the rod is still visible. A reading is taken from the new location of the level and the height difference is used to find the new elevation of the level, which is why this method is referred to as differential leveling. This is repeated until the series of measurements is completed. The level must be horizontal to get a valid measurement. Because of this, if the horizontal crosshair of the instrument is lower than the base of the rod, the surveyor will not be able to sight the rod and get a reading. The rod can usually be raised up to 25 feet (7.6 m) high, allowing the level to be set much higher than the base of the rod. === Determining position === The primary way of determining one's position on the Earth's surface when no known positions are nearby is by astronomic observations. Observations to the Sun, Moon and stars could all be made using navigational techniques. Once the instrument's position and bearing to a star are determined, the bearing can be transferred to a reference point on Earth. The point can then be used as a base for further observations.
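The differential-leveling bookkeeping described above reduces to carrying a height of instrument through successive backsight and foresight readings. A minimal sketch, with made-up rod readings for illustration:

```python
def run_levels(start_elevation_m, shots):
    """Carry an elevation through a series of level setups.

    `shots` holds one (backsight_m, foresight_m) pair of rod readings per
    instrument setup: the backsight gives the height of instrument above
    the known point, the foresight drops down to the next point.
    """
    elevation = start_elevation_m
    for backsight, foresight in shots:
        height_of_instrument = elevation + backsight
        elevation = height_of_instrument - foresight
    return elevation

# Two setups between a 100.000 m benchmark and the endpoint:
shots = [(1.500, 0.850), (1.220, 1.940)]
print(f"{run_levels(100.000, shots):.3f} m")  # 100 + (1.50-0.85) + (1.22-1.94) = 99.930
```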
Survey-accurate astronomic positions were difficult to observe and calculate, and so tended to be a base from which many other measurements were made. Since the advent of the GPS system, astronomic observations are rare, as GPS allows adequate positions to be determined over most of the surface of the Earth. === Reference networks === Few survey positions are derived from first principles. Instead, most survey points are measured relative to previously measured points. This forms a reference or control network where each point can be used by a surveyor to determine their own position when beginning a new survey. Survey points are usually marked on the earth's surface by objects ranging from small nails driven into the ground to large beacons that can be seen from long distances. The surveyors can set up their instruments in this position and measure to nearby objects. Sometimes a tall, distinctive feature such as a steeple or radio aerial has its position calculated as a reference point that angles can be measured against. Triangulation is a method of horizontal location favoured in the days before EDM and GPS measurement. It can determine distances, elevations and directions between distant objects. Since the early days of surveying, this was the primary method of determining accurate positions of objects for topographic maps of large areas. A surveyor first needs to know the horizontal distance between two of the objects, known as the baseline. Then the heights, distances and angular position of other objects can be derived, as long as they are visible from one of the original objects. High-accuracy transits or theodolites were used, and angle measurements were repeated for increased accuracy. See also Triangulation in three dimensions. Offsetting is an alternate method of determining the position of objects, and was often used to measure imprecise features such as riverbanks. The surveyor would mark and measure two known positions on the ground roughly parallel to the feature, and mark out a baseline between them. At regular intervals, a distance was measured at right angles from the first line to the feature. The measurements could then be plotted on a plan or map, and the points at the ends of the offset lines could be joined to show the feature. Traversing is a common method of surveying smaller areas. The surveyors start from an old reference mark or known position and place a network of reference marks covering the survey area. They then measure bearings and distances between the reference marks, and to the target features. Most traverses form a loop pattern or link between two prior reference marks so the surveyor can check their measurements; a worked sketch of this check appears below. ==== Datum and coordinate systems ==== Many surveys do not calculate positions on the surface of the Earth, but instead measure the relative positions of objects. However, often the surveyed items need to be compared to outside data, such as boundary lines or previous survey's objects. The oldest way of describing a position is via latitude and longitude, and often a height above sea level. As the surveying profession grew it created Cartesian coordinate systems to simplify the mathematics for surveys over small parts of the Earth. The simplest coordinate systems assume that the Earth is flat and measure from an arbitrary point, known as a 'datum' (singular form of data). The coordinate system allows easy calculation of the distances and direction between objects over small areas. Large areas distort due to the Earth's curvature.
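A loop traverse should return to its starting coordinates; whatever remains is the misclose that adjustment methods such as the compass rule (mentioned in the next passage) distribute back over the legs. A minimal sketch in plane coordinates, with invented leg data:

```python
import math

def traverse_offsets(legs):
    """Accumulate plane coordinate offsets from (bearing_deg, distance_m)
    legs.  Bearings are measured clockwise from north; x is east, y is north."""
    x = y = 0.0
    for bearing_deg, distance_m in legs:
        bearing_rad = math.radians(bearing_deg)
        x += distance_m * math.sin(bearing_rad)   # departure (east)
        y += distance_m * math.cos(bearing_rad)   # latitude (north)
    return x, y

# Four legs that nearly close back on the starting mark:
legs = [(10.0, 100.00), (100.0, 80.00), (190.0, 100.06), (280.0, 80.04)]
east_error, north_error = traverse_offsets(legs)
misclose_m = math.hypot(east_error, north_error)
perimeter_m = sum(d for _, d in legs)
print(f"misclose {misclose_m:.3f} m, precision about 1:{perimeter_m / misclose_m:.0f}")
```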
North is often defined as true north at the datum. For larger regions, it is necessary to model the shape of the Earth using an ellipsoid or a geoid. Many countries have created coordinate grids customized to lessen error in their area of the Earth. === Errors and accuracy === A basic tenet of surveying is that no measurement is perfect, and that there will always be a small amount of error. There are three classes of survey errors: Gross errors or blunders: errors made by the surveyor during the survey. Upsetting the instrument, misaiming a target, or writing down a wrong measurement are all gross errors. A large gross error may reduce the accuracy to an unacceptable level. Therefore, surveyors use redundant measurements and independent checks to detect these errors early in the survey. Systematic: errors that follow a consistent pattern. Examples include effects of temperature on a chain or EDM measurement, or a poorly adjusted spirit level causing a tilted instrument or target pole. Systematic errors that have known effects can be compensated or corrected. Random: random errors are small unavoidable fluctuations. They are caused by imperfections in measuring equipment, eyesight, and conditions. They can be minimized by redundancy of measurement and avoiding unstable conditions. Random errors tend to cancel each other out, but checks must be made to ensure they are not propagating from one measurement to the next. Surveyors avoid these errors by calibrating their equipment, using consistent methods, and by good design of their reference network. Repeated measurements can be averaged and any outlier measurements discarded. Independent checks, like measuring a point from two or more locations or using two different methods, are used, and errors can be detected by comparing the results of two or more measurements, thus utilizing redundancy. Once the surveyor has calculated the level of the errors in the work, the survey is adjusted. This is the process of distributing the error between all measurements. Each observation is weighted according to how much of the total error it is likely to have caused, and part of that error is allocated to it in a proportional way. The most common methods of adjustment are the Bowditch method, also known as the compass rule, and the method of least squares. The surveyor must be able to distinguish between accuracy and precision. In the United States, surveyors and civil engineers use units of feet, with the survey foot divided into tenths and hundredths. Distances in deed descriptions are often expressed in these units (e.g., 125.25 ft). On the subject of accuracy, surveyors are often held to a standard of one one-hundredth of a foot, about 1/8 inch. Calculation and mapping tolerances are much smaller, with near-perfect closures desired. Though tolerances vary from project to project, precision beyond a hundredth of a foot is often impractical in day-to-day field work. == Types == Local organisations or regulatory bodies class specializations of surveying in different ways. Broad groups are: As-built survey: a survey that documents the location of recently constructed elements of a construction project. As-built surveys are done for record, completion evaluation and payment purposes. An as-built survey is also known as a 'works as executed survey'. As-built surveys are often presented in red or redline and laid over existing plans for comparison with design information.
Cadastral or boundary surveying: a survey that establishes or re-establishes boundaries of a parcel using a legal description. It involves the setting or restoration of monuments or markers at the corners or along the boundaries of a parcel. These take the form of iron rods, pipes, or concrete monuments in the ground, or nails set in concrete or asphalt. The ALTA/ACSM Land Title Survey is a standard proposed by the American Land Title Association and the American Congress on Surveying and Mapping. It incorporates elements of the boundary survey, mortgage survey, and topographic survey. Control surveying: control surveys establish reference points to use as starting positions for future surveys. Most other forms of surveying will contain elements of control surveying. Construction surveying and engineering surveying: topographic, layout, and as-built surveys associated with engineering design. They often need geodetic computations beyond normal civil engineering practice. Deformation survey: a survey to determine if a structure or object is changing shape or moving. First the positions of points on an object are found. A period of time is allowed to pass, and the positions are then re-measured and calculated. Then a comparison between the two sets of positions is made. Dimensional control survey: a type of survey conducted in or on a non-level surface. Common in the oil and gas industry to replace old or damaged pipes on a like-for-like basis, the advantage of a dimensional control survey is that the instrument used to conduct the survey does not need to be level. This is useful in the off-shore industry, as not all platforms are fixed and are thus subject to movement. Foundation survey: a survey done to collect the positional data on a foundation that has been poured and is cured. This is done to ensure that the foundation was constructed in the location, and at the elevation, authorized in the plot plan, site plan, or subdivision plan. Hydrographic survey: a survey conducted with the purpose of mapping the shoreline and bed of a body of water, used for navigation, engineering, or resource management purposes. Leveling: either finds the elevation of a given point or establishes a point at a given elevation. LOMA survey: a survey to change the base flood line, removing property from a Special Flood Hazard Area. Measured survey: a building survey to produce plans of the building. Such a survey may be conducted before renovation works, for commercial purposes, or at the end of the construction process. Mining surveying: includes directing the digging of mine shafts and galleries and the calculation of volume of rock. It uses specialised techniques due to the constraints on survey geometry, such as vertical shafts and narrow passages. Mortgage survey: a mortgage survey or physical survey is a simple survey that delineates land boundaries and building locations. It checks for encroachments and building setback restrictions and shows nearby flood zones. In many places a mortgage survey is a precondition for a mortgage loan. Photographic control survey: a survey that creates reference marks visible from the air to allow aerial photographs to be rectified. Stakeout, layout or setout: an element of many other surveys where the calculated or proposed position of an object is marked on the ground. This can be temporary or permanent. This is an important component of engineering and cadastral surveying.
Structural survey: a detailed inspection to report upon the physical condition and structural stability of a building or structure. It highlights any work needed to maintain it in good repair. Subdivision: a boundary survey that splits a property into two or more smaller properties. Topographic survey: a survey of the positions and elevations of points and objects on a land surface, presented as a topographic map with contour lines. Existing conditions survey: similar to a topographic survey, but focused on the specific locations of key features and structures as they exist at the time within the surveyed area, rather than primarily on elevation; often used alongside architectural drawings and blueprints to locate or place building structures. Underwater survey: a survey of an underwater site, object, or area. === Plane vs. geodetic surveying === Based on whether the true shape of the Earth is taken into account, surveying is broadly classified into two types. Plane surveying assumes the Earth is flat. The curvature and spheroidal shape of the Earth are neglected. In this type of surveying all triangles formed by joining survey lines are considered as plane triangles. It is employed for small survey works where errors due to the Earth's shape are too small to matter. In geodetic surveying the curvature of the Earth is taken into account while calculating reduced levels, angles, bearings and distances. This type of surveying is usually employed for large survey works. Survey works up to 100 square miles (260 square kilometres) are treated as plane; beyond that they are treated as geodetic. In geodetic surveying necessary corrections are applied to reduced levels, bearings and other observations. === On the basis of the instrument used === Chain surveying, compass surveying, plane table surveying, levelling, theodolite surveying, traverse surveying, tacheometric surveying, and aerial surveying. == Profession == The basic principles of surveying have changed little over the ages, but the tools used by surveyors have evolved. Engineering, especially civil engineering, often needs surveyors. Surveyors help determine the placement of roads, railways, reservoirs, dams, pipelines, retaining walls, bridges, and buildings. They establish the boundaries of legal descriptions and political divisions. They also provide advice and data for geographical information systems (GIS) that record land features and boundaries. Surveyors must have a thorough knowledge of algebra, basic calculus, geometry, and trigonometry. They must also know the laws that deal with surveys, real property, and contracts. Most jurisdictions recognize three different levels of qualification: Survey assistants or chainmen are usually unskilled workers who help the surveyor. They place target reflectors, find old reference marks, and mark points on the ground. The term 'chainman' derives from past use of measuring chains. An assistant would move the far end of the chain under the surveyor's direction. Survey technicians often operate survey instruments, run surveys in the field, do survey calculations, or draft plans. A technician usually has no legal authority and cannot certify their work. Not all technicians are qualified, but qualifications at the certificate or diploma level are available. Licensed, registered, or chartered surveyors usually hold a degree or higher qualification. They are often required to pass further exams to join a professional association or to gain certifying status.
Surveyors are responsible for the planning and management of surveys. They have to ensure that their surveys, or surveys performed under their supervision, meet the legal standards. Many principals of surveying firms hold this status. Related professions include cartographers, hydrographers, geodesists, photogrammetrists, and topographers, as well as civil engineers and geomatics engineers. === Licensing === Licensing requirements vary with jurisdiction, and are commonly consistent within national borders. Prospective surveyors usually have to receive a degree in surveying, followed by a detailed examination of their knowledge of surveying law and principles specific to the region they wish to practice in, and undergo a period of on-the-job training or portfolio building before they are awarded a license to practise. Licensed surveyors usually receive a post-nominal, which varies depending on where they qualified. The system has replaced older apprenticeship systems. A licensed land surveyor is generally required to sign and seal all plans. The state dictates the format, showing their name and registration number. In many jurisdictions, surveyors must mark their registration number on survey monuments when setting boundary corners. Monuments take the form of capped iron rods, concrete monuments, or nails with washers. === Surveying institutions === Most countries' governments regulate at least some forms of surveying. Their survey agencies establish regulations and standards. Standards control accuracy, surveying credentials, monumentation of boundaries and maintenance of geodetic networks. Many nations devolve this authority to regional entities or states/provinces. Cadastral surveys tend to be the most regulated because of the permanence of the work. Lot boundaries established by cadastral surveys may stand for hundreds of years without modification. Most jurisdictions also have a form of professional institution representing local surveyors. These institutes often endorse or license potential surveyors, as well as set and enforce ethical standards. The largest institution is the International Federation of Surveyors (abbreviated FIG, for French: Fédération Internationale des Géomètres). It represents the survey industry worldwide. === Building surveying === Most English-speaking countries consider building surveying a distinct profession. Building surveyors have their own professional associations and licensing requirements. A building surveyor can provide technical building advice on existing buildings, new buildings, design, and compliance with regulations such as planning and building control. A building surveyor normally acts on behalf of his or her client, ensuring that the client's vested interests remain protected. The Royal Institution of Chartered Surveyors (RICS) is a world-recognised governing body for those working within the built environment. === Cadastral surveying === One of the primary roles of the land surveyor is to determine the boundary of real property on the ground. The surveyor must determine where the adjoining landowners wish to put the boundary. The boundary is established in legal documents and plans prepared by attorneys, engineers, and land surveyors. The surveyor then puts monuments on the corners of the new boundary. They might also find or resurvey the corners of the property monumented by prior surveys. Cadastral land surveyors are licensed by governments. The cadastral survey branch of the Bureau of Land Management (BLM) conducts most cadastral surveys in the United States.
They consult with the Forest Service, National Park Service, Army Corps of Engineers, Bureau of Indian Affairs, Fish and Wildlife Service, Bureau of Reclamation, and others. The BLM used to be known as the United States General Land Office (GLO). In states organized per the Public Land Survey System (PLSS), surveyors must carry out BLM cadastral surveys under that system. Cadastral surveyors often have to work around changes to the earth that obliterate or damage boundary monuments. When this happens, they must consider evidence that is not recorded on the title deed. This is known as extrinsic evidence. === Quantity surveying === Quantity surveying is a profession that deals with the costs and contracts of construction projects. A quantity surveyor is an expert in estimating the costs of materials, labor, and time needed for a project, as well as managing the financial and legal aspects of the project. A quantity surveyor can work for either the client or the contractor, and can be involved in different stages of the project, from planning to completion. Quantity surveyors are also known as chartered surveyors in the UK. == Notable surveyors == Some U.S. Presidents were land surveyors. George Washington and Abraham Lincoln surveyed colonial or frontier territories early in their careers, prior to serving in office. Ferdinand Rudolph Hassler is considered the "father" of geodetic surveying in the U.S. David T. Abercrombie practiced land surveying before starting an outfitter store of excursion goods. The business would later become the Abercrombie & Fitch lifestyle clothing store. Percy Harrison Fawcett was a British surveyor who explored the jungles of South America attempting to find the Lost City of Z. His biography and expeditions were recounted in the book The Lost City of Z and later adapted for film. Inō Tadataka produced the first map of Japan using modern surveying techniques starting in 1800, at the age of 55. == See also == === Types and methods === Arc measurement, historical method for determining Earth's local radius Photogrammetry, a method of collecting information using aerial photography and satellite images Cadastral surveyor, used to document land ownership by the production of documents, diagrams, plats, and maps Dominion Land Survey, the method used to divide most of Western Canada into one-square-mile sections for agricultural and other purposes Public Land Survey System, a method used in the United States to survey and identify land parcels Survey township, a square unit of land, six miles (~9.7 km) on a side, done by the U.S. Public Land Survey System Construction surveying, the locating of structures relative to a reference line, used in the construction of buildings, roads, mines, and tunnels Deviation survey, used in the oil industry to measure a borehole's departure from the vertical === Geospatial survey organizations === Survey of India, India's central agency in charge of mapping and surveying Ordnance Survey, a national mapping agency for Great Britain U.S. National Geodetic Survey, performing geographic surveys as part of the U.S. Department of Commerce United States Coast and Geodetic Survey, a former surveying agency of the United States Government === Other === == References == == Further reading == == External links == Géomètres sans Frontières: Association de géomètres pour aide au développement.
NGO Surveyors without borders (in French) The National Museum of Surveying The home of the National Museum of Surveying in Springfield, Illinois Land Surveyors United Support Network Global social support network featuring surveyor forums, instructional videos, industry news and support groups based on geolocation. Natural Resources Canada – Surveying Archived 29 October 2010 at the Wayback Machine Good overview of surveying with references to construction surveys, cadastral surveys, photogrammetry surveys, mining surveys, hydrographic surveys, route surveys, control surveys and topographic surveys Table of Surveying, 1728 Cyclopaedia Surveying & Triangulation The History of Surveying and Survey Equipment NCEES National Council of Examiners for Engineering and Surveying (NCEES) NSPS National Society of Professional Surveyors (NSPS) Ground Penetrating Radar FAQ Archived 22 December 2019 at the Wayback Machine Using Ground Penetrating Radar for Land Surveying Survey Earth A global event for professional land surveyors and students to remeasure the planet in a single day during the summer solstice as a community of land surveyors. Surveyors – Occupational Employment Statistics Public Land Survey System Foundation (2009), Manual of Surveying Instructions For the Survey of the Public Lands of the United States
Wikipedia/Topographical_surveys
A satellite navigation or satnav system is a system that uses satellites to provide autonomous geopositioning. A satellite navigation system with global coverage is termed a global navigation satellite system (GNSS). As of 2024, four global systems are operational: the United States' Global Positioning System (GPS), Russia's Global Navigation Satellite System (GLONASS), China's BeiDou Navigation Satellite System (BDS), and the European Union's Galileo. Two regional systems are operational: India's NavIC and Japan's QZSS. Satellite-based augmentation systems (SBAS), designed to enhance the accuracy of GNSS, include Japan's Quasi-Zenith Satellite System (QZSS), India's GAGAN and the European EGNOS, all of them based on GPS. Previous iterations of the BeiDou navigation system and the present Indian Regional Navigation Satellite System (IRNSS), operationally known as NavIC, are examples of stand-alone operating regional navigation satellite systems (RNSS). Satellite navigation devices determine their location (longitude, latitude, and altitude/elevation) to high precision (within a few centimeters to meters) using time signals transmitted along a line of sight by radio from satellites. The system can be used for providing position, navigation or for tracking the position of something fitted with a receiver (satellite tracking). The signals also allow the electronic receiver to calculate the current local time to a high precision, which allows time synchronisation. These uses are collectively known as Positioning, Navigation and Timing (PNT). Satnav systems operate independently of any telephonic or internet reception, though these technologies can enhance the usefulness of the positioning information generated. Global coverage for each system is generally achieved by a satellite constellation of 18–30 medium Earth orbit (MEO) satellites spread between several orbital planes. The actual systems vary, but all use orbital inclinations of >50° and orbital periods of roughly twelve hours, at an altitude of about 20,000 kilometres or 12,000 miles (see the sketch below). == Classification == GNSS systems that provide enhanced accuracy and integrity monitoring usable for civil navigation are classified as follows: GNSS-1 is the first generation system and is the combination of existing satellite navigation systems (GPS and GLONASS), with Satellite Based Augmentation Systems (SBAS) or Ground Based Augmentation Systems (GBAS). In the United States, the satellite-based component is the Wide Area Augmentation System (WAAS); in Europe, it is the European Geostationary Navigation Overlay Service (EGNOS); in Japan, it is the Multi-Functional Satellite Augmentation System (MSAS); and in India, it is the GPS-aided GEO augmented navigation (GAGAN). Ground-based augmentation is provided by systems like the Local Area Augmentation System (LAAS). GNSS-2 is the second generation of systems that independently provide a full civilian satellite navigation system, exemplified by the European Galileo positioning system. These systems will provide the accuracy and integrity monitoring necessary for civil navigation, including aircraft. Initially, this system consisted of only Upper L Band frequency sets (L1 for GPS, E1 for Galileo, and G1 for GLONASS). In recent years, GNSS systems have begun activating Lower L Band frequency sets (L2 and L5 for GPS, E5a and E5b for Galileo, and G3 for GLONASS) for civilian use; they feature higher aggregate accuracy and fewer problems with signal reflection.
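The "roughly twelve hours at about 20,000 km" relation quoted above follows directly from Kepler's third law. A minimal Python sketch (illustrative only; the 20,200 km value is the approximate GPS altitude, and a spherical Earth of mean radius 6,371 km is assumed):

```python
import math

MU_EARTH = 3.986004418e14  # Earth's standard gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0      # mean Earth radius, m (assumed spherical)

def orbital_period_hours(altitude_m: float) -> float:
    """Period of a circular orbit at a given altitude, via Kepler's third law."""
    a = R_EARTH + altitude_m  # semi-major axis of the circular orbit
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 3600.0

# A GPS-like altitude of about 20,200 km gives close to a 12-hour period.
print(orbital_period_hours(20_200_000))  # ~11.97 hours
```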
As of late 2018, a few consumer-grade GNSS devices are being sold that use both frequency bands. They are typically called "Dual band GNSS" or "Dual band GPS" devices. By their roles in the navigation system, systems can be classified as: Global satellite navigation systems, of which there are currently four: GPS (United States), GLONASS (Russian Federation), BeiDou (China) and Galileo (European Union). Global Satellite-Based Augmentation Systems (SBAS) such as OmniSTAR and StarFire. Regional SBAS including WAAS (US), EGNOS (EU), MSAS (Japan), GAGAN (India) and SDCM (Russia). Regional satellite navigation systems such as India's NavIC and Japan's QZSS. Continental-scale Ground Based Augmentation Systems (GBAS), for example the Australian GRAS and the joint US Coast Guard, Canadian Coast Guard, US Army Corps of Engineers and US Department of Transportation National Differential GPS (DGPS) service. Regional-scale GBAS such as CORS networks. Local GBAS typified by a single GPS reference station operating Real Time Kinematic (RTK) corrections. As many of the global GNSS systems (and augmentation systems) use similar frequencies and signals around L1, many "Multi-GNSS" receivers capable of using multiple systems have been produced. While some systems strive to interoperate with GPS as closely as possible, for example by aligning their time reference with GPS time, others do not. == History == Ground-based radio navigation is decades old. The DECCA, LORAN, GEE and Omega systems used terrestrial longwave radio transmitters which broadcast a radio pulse from a known "master" location, followed by a pulse repeated from a number of "slave" stations. The delay between the reception of the master signal and the slave signals allowed the receiver to deduce the distance to each of the slaves, providing a fix. The first satellite navigation system was Transit, a system deployed by the US military in the 1960s. Transit's operation was based on the Doppler effect: the satellites travelled on well-known paths and broadcast their signals on a well-known radio frequency. The received frequency differs slightly from the broadcast frequency because of the movement of the satellite with respect to the receiver. By monitoring this frequency shift over a short time interval, the receiver can determine its location to one side or the other of the satellite, and several such measurements combined with a precise knowledge of the satellite's orbit can fix a particular position (a short numerical sketch of this Doppler principle is given below). Satellite orbital position errors are caused by radio-wave refraction, gravity field changes (as the Earth's gravitational field is not uniform), and other phenomena. A team led by Harold L. Jury of Pan Am Aerospace Division in Florida from 1970 to 1973 found solutions or corrections for many error sources. Using real-time data and recursive estimation, the systematic and residual errors were narrowed down to an accuracy sufficient for navigation. == Principles == Part of an orbiting satellite's broadcast includes its precise orbital data. Originally, the US Naval Observatory (USNO) continuously observed the precise orbits of these satellites. As a satellite's orbit deviated, the USNO sent the updated information to the satellite. Subsequent broadcasts from an updated satellite would contain its most recent ephemeris. Modern systems are more direct. The satellite broadcasts a signal that contains orbital data (from which the position of the satellite can be calculated) and the precise time the signal was transmitted.
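Returning to the Transit principle described above: the frequency shift it exploits is, to first order, proportional to the satellite's range rate. A minimal sketch (the 400 MHz beacon frequency and the 5 km/s range rate are illustrative values, and relativistic terms are ignored):

```python
C = 299_792_458.0  # speed of light, m/s

def received_frequency(f_transmit_hz: float, range_rate_m_s: float) -> float:
    """First-order Doppler model: a positive range rate (satellite receding)
    lowers the received frequency; a negative one raises it."""
    return f_transmit_hz * (1.0 - range_rate_m_s / C)

# A beacon near 400 MHz receding at 5 km/s is received about 6.7 kHz low;
# tracking how this shift evolves over a pass constrains the receiver's position.
print(received_frequency(400e6, 5_000.0) - 400e6)  # ~-6672 Hz
```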
Orbital data include a rough almanac for all satellites, to aid in finding them, and a precise ephemeris for the transmitting satellite. The orbital ephemeris is transmitted in a data message that is superimposed on a code that serves as a timing reference. Each satellite carries atomic clocks, which are kept synchronized across the constellation. The receiver compares the time of broadcast encoded in the transmission of three (at sea level) or four (which also allows an altitude calculation) different satellites with the time of reception, measuring the time-of-flight to each satellite. Several such measurements can be made at the same time to different satellites, allowing a continual fix to be generated in real time using an adapted version of trilateration: see GNSS positioning calculation for details. Each distance measurement, regardless of the system being used, places the receiver on a spherical shell centred on the broadcaster, at the measured distance from the broadcaster. By taking several such measurements and then looking for a point where the shells meet, a fix is generated. However, in the case of fast-moving receivers, the position of the receiver moves as signals are received from several satellites. In addition, the radio signals slow slightly as they pass through the ionosphere, and this slowing varies with the receiver's angle to the satellite, because that angle corresponds to the distance which the signal travels through the ionosphere. The basic computation thus attempts to find the shortest directed line tangent to four oblate spherical shells centred on four satellites. Satellite navigation receivers reduce errors by using combinations of signals from multiple satellites and multiple correlators, and then using techniques such as Kalman filtering to combine the noisy, partial, and constantly changing data into a single estimate for position, time, and velocity (a numerical sketch of the underlying position computation is given below). Einstein's theories of special and general relativity are applied to GPS time correction; the net result is that time on a GPS satellite clock advances faster than a clock on the ground by about 38 microseconds per day. == Applications == The original motivation for satellite navigation was for military applications. Satellite navigation allows precision in the delivery of weapons to targets, greatly increasing their lethality whilst reducing inadvertent casualties from misdirected weapons. (See Guided bomb.) Satellite navigation also allows forces to be directed and to locate themselves more easily, reducing the fog of war. Now a global navigation satellite system, such as Galileo, is used to determine a user's location and the location of other people or objects at any given moment. The range of application of satellite navigation in the future is enormous, including both the public and private sectors across numerous market segments such as science, transport, agriculture, etc. The ability to supply satellite navigation signals is also the ability to deny their availability. The operator of a satellite navigation system potentially has the ability to degrade or eliminate satellite navigation services over any territory it desires. == Global navigation satellite systems == In order of first launch year: === GPS === First launch year: 1978 The United States' Global Positioning System (GPS) consists of up to 32 medium Earth orbit satellites in six different orbital planes. The exact number of satellites varies as older satellites are retired and replaced.
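Returning to the positioning principles above: in its textbook form, the "adapted version of trilateration" is a nonlinear least-squares problem in the receiver's three coordinates and its clock bias. The sketch below is illustrative only, not any receiver's actual firmware: satellite coordinates and measurements are synthetic, and error sources such as ionospheric delay are omitted. It recovers position and clock bias from four pseudoranges by Gauss-Newton iteration:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sat_positions, pseudoranges, iterations=10):
    """Gauss-Newton solution of receiver position and clock bias from
    pseudoranges rho_i = ||s_i - x|| + C*b (needs four or more satellites)."""
    x = np.zeros(3)  # start at Earth's centre; adequate for this toy problem
    b = 0.0
    for _ in range(iterations):
        diffs = x - sat_positions                # (n, 3)
        ranges = np.linalg.norm(diffs, axis=1)   # geometric ranges ||s_i - x||
        residuals = pseudoranges - (ranges + C * b)
        # Jacobian: unit line-of-sight vectors, plus the clock-bias column.
        J = np.hstack([diffs / ranges[:, None], np.full((len(ranges), 1), C)])
        delta, *_ = np.linalg.lstsq(J, residuals, rcond=None)
        x += delta[:3]
        b += delta[3]
    return x, b

# Synthetic test: four satellites, a known receiver, and a 1 ms clock bias.
sats = np.array([[15e6, 10e6, 21e6], [-15e6, 12e6, 20e6],
                 [5e6, -20e6, 18e6], [-8e6, -9e6, 23e6]])
truth = np.array([1.2e6, -4.5e6, 4.1e6])
rho = np.linalg.norm(sats - truth, axis=1) + C * 1e-3
pos, bias = solve_position(sats, rho)
print(np.allclose(pos, truth), round(bias * 1e3, 6))  # True 1.0 (ms)
```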
Operational since 1978 and globally available since 1994, GPS is the world's most utilized satellite navigation system. === GLONASS === First launch year: 1982 The formerly Soviet, and now Russian, Global'naya Navigatsionnaya Sputnikovaya Sistema (GLObal NAvigation Satellite System, or GLONASS) is a space-based satellite navigation system that provides a civilian radionavigation-satellite service and is also used by the Russian Aerospace Defence Forces. GLONASS has had full global coverage since 1995, with 24 active satellites. === BeiDou === First launch year: 2000 BeiDou started as the now-decommissioned BeiDou-1, an Asia-Pacific local network in geostationary orbit. The second generation of the system, BeiDou-2, became operational in China in December 2011. The BeiDou-3 system was proposed to consist of 30 MEO satellites and five geostationary satellites. A 16-satellite regional version (covering the Asia-Pacific area) was completed by December 2012. Global service was completed by December 2018. On 23 June 2020, the BDS-3 constellation deployment was fully completed when the last satellite was successfully launched from the Xichang Satellite Launch Center. === Galileo === First launch year: 2011 The European Union and European Space Agency agreed in March 2002 to introduce their own alternative to GPS, called the Galileo positioning system. Galileo became operational on 15 December 2016 (global Early Operational Capability, EOC). At an estimated cost of €10 billion, the system of 30 MEO satellites was originally scheduled to be operational in 2010; the date was later revised to 2014. The first experimental satellite was launched on 28 December 2005. Galileo is expected to be compatible with the modernized GPS system. The receivers will be able to combine the signals from both Galileo and GPS satellites to greatly increase the accuracy. The full Galileo constellation consists of 24 active satellites, the last of which was launched in December 2021. The main modulation used in the Galileo Open Service signal is the Composite Binary Offset Carrier (CBOC) modulation.
There are plans to expand the NavIC system by increasing the constellation size from 7 to 11 satellites. India also plans to make NavIC global by adding 24 more MEO satellites; the global NavIC service will be free for public use. === Early BeiDou === The first two generations of China's BeiDou navigation system were designed to provide regional coverage. === Korea === The Korean Positioning System (KPS) is currently in development and expected to be operational by 2035. == Augmentation == GNSS augmentation is a method of improving a navigation system's attributes, such as accuracy, reliability, and availability, through the integration of external information into the calculation process, for example, the Wide Area Augmentation System, the European Geostationary Navigation Overlay Service, the Multi-functional Satellite Augmentation System, Differential GPS, GPS-aided GEO augmented navigation (GAGAN) and inertial navigation systems. === QZSS === The Quasi-Zenith Satellite System (QZSS) is a four-satellite regional time transfer system and enhancement for GPS covering Japan and the Asia-Oceania regions. QZSS services were available on a trial basis as of January 12, 2018, and officially started in November 2018. The first satellite was launched in September 2010. A satellite navigation system independent of GPS, with 7 satellites, is planned for 2023. === EGNOS === == Comparison of systems == Using multiple GNSS systems for user positioning increases the number of visible satellites, improves precise point positioning (PPP) and shortens the average convergence time. The signal-in-space ranging errors (SISRE) in November 2019 were 1.6 cm for Galileo, 2.3 cm for GPS, 5.2 cm for GLONASS and 5.5 cm for BeiDou when using real-time corrections for satellite orbits and clocks. The average SISREs of the BDS-3 MEO, IGSO, and GEO satellites were 0.52 m, 0.90 m and 1.15 m, respectively. Compared to the four major global satellite navigation systems consisting of MEO satellites, the SISRE of the BDS-3 MEO satellites was slightly inferior to the 0.4 m of Galileo, slightly superior to the 0.59 m of GPS, and markedly superior to the 2.33 m of GLONASS. The SISRE of BDS-3 IGSO was 0.90 m, which was on par with the 0.92 m of the QZSS IGSO satellites. However, as the BDS-3 GEO satellites were newly launched and not completely functioning in orbit, their average SISRE was marginally worse than the 0.91 m of the QZSS GEO satellites.
This can also be used by the gateway to enforce restrictions on geographically bound calling plans. == International regulation == The International Telecommunication Union (ITU) defines a radionavigation-satellite service (RNSS) as "a radiodetermination-satellite service used for the purpose of radionavigation. This service may also include feeder links necessary for its operation". RNSS is regarded as a safety-of-life service and an essential part of navigation which must be protected from interference. The aeronautical radionavigation-satellite service (ARNSS) is – according to Article 1.47 of the International Telecommunication Union's (ITU) Radio Regulations (RR) – defined as «A radionavigation-satellite service in which earth stations are located on board aircraft.» The maritime radionavigation-satellite service (MRNSS) is – according to Article 1.45 of the International Telecommunication Union's (ITU) Radio Regulations (RR) – defined as «A radionavigation-satellite service in which earth stations are located on board ships.» === Classification === The ITU Radio Regulations (Article 1) classify radiocommunication services as: radiodetermination service (article 1.40), radiodetermination-satellite service (article 1.41), radionavigation service (article 1.42), radionavigation-satellite service (article 1.43), maritime radionavigation service (article 1.44), maritime radionavigation-satellite service (article 1.45), aeronautical radionavigation service (article 1.46), and aeronautical radionavigation-satellite service (article 1.47). Examples of RNSS use include: augmentation systems (GNSS augmentation), Automatic Dependent Surveillance–Broadcast, the BeiDou Navigation Satellite System (BDS), GALILEO (the European GNSS), the Global Positioning System (GPS) with Differential GPS (DGPS), GLONASS, NAVIC, and the Quasi-Zenith Satellite System (QZSS). === Frequency allocation === The allocation of radio frequencies is provided according to Article 5 of the ITU Radio Regulations (edition 2012). To improve harmonisation in spectrum utilisation, most service allocations are incorporated in national Tables of Frequency Allocations and Utilisations within the responsibility of the appropriate national administration. Allocations are: primary (indicated by writing in capital letters), secondary (indicated by small letters), and exclusive or shared utilization (within the responsibility of administrations). == Alternatives == Alternative Positioning, Navigation and Timing (AltPNT) refers to systems that provide positioning, navigation and timing as an alternative to GNSS. Such alternatives include: inertial navigation systems (INS), eLORAN, terrain-based navigation (TBN), visual positioning systems (VPS), and LiDAR. == See also == == Notes == == References == == Further reading == Office for Outer Space Affairs of the United Nations (2010), Report on Current and Planned Global and Regional Navigation Satellite Systems and Satellite-based Augmentation Systems.
== External links == === Information on specific GNSS systems === ESA information on EGNOS Information on the Beidou system Global Navigation Satellite System Fundamentals === Organizations related to GNSS === United Nations International Committee on Global Navigation Satellite Systems (ICG) Institute of Navigation (ION) GNSS Meetings The International GNSS Service (IGS) International Global Navigation Satellite Systems Society Inc (IGNSS) International Earth Rotation and Reference Systems Service (IERS) International GNSS Service (IGS) US National Executive Committee for Space-Based Positioning, Navigation, and Timing US National Geodetic Survey Orbits for the Global Positioning System satellites in the Global Navigation Satellite System UNAVCO GNSS Modernization Asia-Pacific Economic Cooperation (APEC) GNSS Implementation Team === Supportive or illustrative sites === GPS and GLONASS Simulation (Java applet) Simulation and graphical depiction of the motion of space vehicles, including DOP computation. GPS, GNSS, Geodesy and Navigation Concepts in depth === Alternatives to GNSS === USSF Alternative Positioning, Navigation, & Timing Challenge Definition Workshop Startups map out strategies to augment or backup GPS Competing with Uncle Sam’s free space offerings
Wikipedia/Global_navigation_satellite_systems
A global relief model, sometimes also denoted as a global topography model or composite model, combines digital elevation model (DEM) data over land with digital bathymetry model (DBM) data over water-covered areas (oceans, lakes) to describe Earth's relief. A relief model thus shows how Earth's surface would look in the absence of water or ice masses. The relief is represented by a set of heights (elevations or depths) that refer to some height reference surface, often the mean sea level or the geoid. Global relief models are used for a variety of applications including geovisualization, geologic, geomorphologic and geophysical analyses, gravity field modelling, and geo-statistics. == Measurement == Global relief models are always based on combinations of data sets from different remote sensing techniques. This is because there is no single remote sensing technique that would allow measurement of the relief over both dry and water-covered areas. Elevation data over land is often obtained from LiDAR or InSAR measurements, while bathymetry is acquired based on SONAR and altimetry. Global relief models may also contain elevations of the bedrock (sub-ice topography) below the ice sheets of Antarctica and Greenland. Ice sheet thickness, mostly measured through ice-penetrating radar, is subtracted from the ice surface heights to reveal the bedrock. == Spatial resolution == While digital elevation models often describe Earth's land topography with 1 to 3 arc-second resolution (e.g., from the SRTM or ASTER missions), the global bathymetry (e.g., SRTM30_PLUS) is known to a much lower spatial resolution, in the kilometre range. The same holds true for models of the bedrock of Antarctica and Greenland. Therefore, global relief models are often constructed at 1 arc-minute resolution, corresponding to about 1.8 km postings (this conversion is sketched below). Some products such as the 30 and 15 arc-second resolution SRTM30_PLUS/SRTM15_PLUS grids offer higher resolution to adequately represent SONAR depth measurements where available. Although grid cells are spaced at 15 or 30 arc-seconds, when SONAR measurements are unavailable the effective resolution is much worse (~12–20 km), depending on factors such as water depth. == Public data sets == Data sets produced and released to the public include Earth2014, SRTM30_PLUS and ETOPO1. === ETOPO 2022 === The 2022 ETOPO version is the most recent global relief model, with grids at 1 arc-min, 30 arc-sec, and 15 arc-sec resolutions. The ETOPO Global Relief Model combines topographic, bathymetric, and shoreline data from regional and global sources to provide high-resolution representations of Earth's surface. It supports applications such as tsunami forecasting, ocean circulation modeling, and Earth visualization. The latest version, ETOPO 2022, is available in two formats: Ice Surface, depicting the top of ice sheets in Greenland and Antarctica, and Bedrock, showing the underlying terrain. === Earth2014 (2015) === The Earth2014 global relief model was developed at Curtin University (Western Australia) and TU Munich (Germany). Earth2014 provides sets of 1 arc-min resolution global grids (about 1.8 km postings) of Earth's relief in different representations, based on the 2013 releases of bedrock and ice-sheet data over Antarctica (Bedmap2) and Greenland (Greenland Bedrock Topography), the 2013 SRTM30_PLUS bathymetry and the 2008 SRTM V4.1 land topography.
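The grid postings quoted in this article follow from the arc length subtended on a sphere. A minimal sketch, assuming a mean Earth radius of 6,371 km (east-west spacing additionally shrinks with the cosine of latitude):

```python
import math

R_EARTH_KM = 6371.0  # assumed mean Earth radius

def posting_km(resolution_arcsec: float, latitude_deg: float = 0.0) -> float:
    """Approximate ground spacing of a grid cell of the given angular size."""
    arc_rad = math.radians(resolution_arcsec / 3600.0)
    return R_EARTH_KM * arc_rad * math.cos(math.radians(latitude_deg))

print(posting_km(60))  # 1 arc-min  -> ~1.85 km (typical global relief models)
print(posting_km(30))  # 30 arc-sec -> ~0.93 km (SRTM30_PLUS)
print(posting_km(15))  # 15 arc-sec -> ~0.46 km (SRTM15_PLUS)
```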
Earth2014 provides five different layers of height data: Earth's surface (the lower interface of the atmosphere); topography and bathymetry of the oceans and major lakes; topography, bathymetry and bedrock; ice-sheet thickness; and rock-equivalent topography. The Earth2014 global grids are provided as heights relative to the EGM96 mean sea level for the conventional relief model, and as planetary radii relative to the centre of Earth to show the shape of the Earth. === SRTM30_PLUS (2014) === SRTM30_PLUS is a combined bathymetry and topography model produced by the Scripps Institution of Oceanography (California). The 15_PLUS version comes at 0.25 arc-min resolution (about 450 m postings), while the 30_PLUS version offers 0.5 arc-min (900 m) resolution. The bathymetric data in SRTM30_PLUS stem from depth soundings (SONAR) and from satellite altimetry. The bathymetric component of SRTM30_PLUS is regularly updated with new or improved data sets in order to continuously improve and refine the description of the sea floor geometry. Over land areas, SRTM30 data from the USGS are included. SRTM30_PLUS provides background information for Google Earth and Google Maps. === ETOPO1 (2009) === The ETOPO1 1-arcmin global relief model, produced by the National Geophysical Data Center (Colorado), provides two layers of relief information. One layer represents the global relief including bedrock over Antarctica and Greenland, and another layer the global relief including ice surface heights. Both layers include bathymetry over the oceans and some of Earth's major lakes. ETOPO1 land topography and ocean bathymetry rely on SRTM30 topography and a multitude of bathymetric surveys that have been merged. Predecessors of ETOPO1 are the ETOPO2 and ETOPO5 relief models (2 and 5 arc-min resolution). The ETOPO1 global relief model is based on the 2001 Bedmap1 model of bedrock over Antarctica, which is now superseded by the significantly improved Bedmap2 bedrock data. The ocean depth information contained in ETOPO1 has been superseded by several updates of the SRTM30_PLUS bathymetry. == References == == External links == Earth2014 homepage SRTM30 Plus homepage ETOPO1 homepage
Wikipedia/Global_Relief_Model
Photography is the art, application, and practice of creating images by recording light, either electronically by means of an image sensor, or chemically by means of a light-sensitive material such as photographic film. It is employed in many fields of science, manufacturing (e.g., photolithography), and business, as well as its more direct uses for art, film and video production, recreational purposes, hobbies, and mass communication. A person who operates a camera to capture or take photographs is called a photographer, while the captured image, also known as a photograph, is the result produced by the camera. Typically, a lens is used to focus the light reflected or emitted from objects into a real image on the light-sensitive surface inside a camera during a timed exposure. With an electronic image sensor, this produces an electrical charge at each pixel, which is electronically processed and stored in a digital image file for subsequent display or processing. The result with photographic emulsion is an invisible latent image, which is later chemically "developed" into a visible image, either negative or positive, depending on the purpose of the photographic material and the method of processing. A negative image on film is traditionally used to photographically create a positive image on a paper base, known as a print, either by using an enlarger or by contact printing. Before the emergence of digital photography, photographs that utilized film had to be developed to produce negatives or projectable slides, and negatives had to be printed as positive images, usually in enlarged form. This was typically done by photographic laboratories, but many amateur photographers, students, and photographic artists did their own processing. == Etymology == The word "photography" was created from the Greek roots φωτός (phōtós), genitive of φῶς (phōs), "light" and γραφή (graphé) "representation by means of lines" or "drawing", together meaning "drawing with light". Several people may have coined the same new term from these roots independently. Hércules Florence, a French painter and inventor living in Campinas, Brazil, used the French form of the word, photographie, in private notes which a Brazilian historian believes were written in 1834. This claim is widely reported but is not yet widely recognized internationally. The first use of the word by Florence became widely known after the research of Boris Kossoy in 1980. On 25 February 1839, the German newspaper Vossische Zeitung published an article titled Photographie, discussing several priority claims, especially that of Henry Fox Talbot, in relation to Daguerre's claim of invention. The article is the earliest known occurrence of the word in public print. It was signed "J.M.", believed to have been Berlin astronomer Johann von Maedler. The astronomer John Herschel is also credited with coining the word, independent of Talbot, in 1839. The inventors Nicéphore Niépce, Talbot, and Louis Daguerre seem not to have known or used the word "photography", but referred to their processes as "Heliography" (Niépce), "Photogenic Drawing"/"Talbotype"/"Calotype" (Talbot), and "Daguerreotype" (Daguerre). == History == === Precursor technologies === Photography is the result of combining several technical discoveries relating to seeing an image and capturing the image. The discovery of the camera obscura ("dark chamber" in Latin) that provides an image of a scene dates back to ancient China.
Greek thinkers Aristotle and Euclid independently described a camera obscura in the 5th and 4th centuries BCE. In the 6th century CE, Byzantine mathematician Anthemius of Tralles used a type of camera obscura in his experiments. The Arab physicist Ibn al-Haytham (Alhazen) (965–1040) also invented a camera obscura as well as the first true pinhole camera. The invention of the camera has been traced back to the work of Ibn al-Haytham. While the effects of a single ray of light passing through a pinhole had been described earlier, Ibn al-Haytham gave the first correct analysis of the camera obscura, including the first geometrical and quantitative descriptions of the phenomenon, and was the first to use a screen in a dark room so that an image from one side of a hole in the surface could be projected onto a screen on the other side. He also first understood the relationship between the focal point and the pinhole, and performed early experiments with afterimages, laying the foundations for the invention of photography in the 19th century. Leonardo da Vinci mentions natural camerae obscurae that are formed by dark caves on the edge of a sunlit valley. A hole in the cave wall will act as a pinhole camera and project a laterally reversed, upside-down image on a piece of paper. Renaissance painters used the camera obscura which, in fact, gives the optical rendering in color that dominates Western Art. It is a box with a small hole in one side, which allows specific light rays to enter, projecting an inverted image onto a viewing screen or paper. The birth of photography was then concerned with inventing means to capture and keep the image produced by the camera obscura. Albertus Magnus (1193–1280) discovered silver nitrate and Georg Fabricius (1516–1571) discovered silver chloride; the techniques described in Ibn al-Haytham's Book of Optics are capable of producing primitive photographs using medieval materials. Daniele Barbaro described a diaphragm in 1566. Wilhelm Homberg described how light darkened some chemicals (the photochemical effect) in 1694. Around 1717, Johann Heinrich Schulze used a light-sensitive slurry to capture images of cut-out letters on a bottle; on that basis, many German sources and some international ones credit Schulze as the inventor of photography. The fiction book Giphantie, published in 1760 by French author Tiphaigne de la Roche, described what can be interpreted as photography. In June 1802, British inventor Thomas Wedgwood made the first known attempt to capture the image in a camera obscura by means of a light-sensitive substance. He used paper or white leather treated with silver nitrate. Although he succeeded in capturing the shadows of objects placed on the surface in direct sunlight, and even made shadow copies of paintings on glass, it was reported in 1802 that "the images formed by means of a camera obscura have been found too faint to produce, in any moderate time, an effect upon the nitrate of silver." The shadow images eventually darkened all over.
Because Niépce's camera photographs required an extremely long exposure (at least eight hours and probably several days), he sought to greatly improve his bitumen process or replace it with one that was more practical. In partnership with Louis Daguerre, he worked out post-exposure processing methods that produced visually superior results and replaced the bitumen with a more light-sensitive resin, but hours of exposure in the camera were still required. With an eye to eventual commercial exploitation, the partners opted for total secrecy. Niépce died in 1833 and Daguerre then redirected the experiments toward the light-sensitive silver halides, which Niépce had abandoned many years earlier because of his inability to make the images he captured with them light-fast and permanent. Daguerre's efforts culminated in what would later be named the daguerreotype process. The essential elements—a silver-plated surface sensitized by iodine vapor, developed by mercury vapor, and "fixed" with hot saturated salt water—were in place in 1837. The required exposure time was measured in minutes instead of hours. Daguerre took the earliest confirmed photograph of a person in 1838 while capturing a view of a Paris street: unlike the other pedestrian and horse-drawn traffic on the busy boulevard, which appears deserted, one man having his boots polished stood sufficiently still throughout the several-minutes-long exposure to be visible. The existence of Daguerre's process was publicly announced, without details, on 7 January 1839. The news created an international sensation. France soon agreed to pay Daguerre a pension in exchange for the right to present his invention to the world as the gift of France, which occurred when complete working instructions were unveiled on 19 August 1839. In that same year, American photographer Robert Cornelius is credited with taking the earliest surviving photographic self-portrait. In Brazil, Hercules Florence had started working out a silver-salt-based paper process in 1832, later naming it photographia, at least four years before John Herschel coined the English word photography. In 1834, Florence settled on silver nitrate on paper, a combination which had been the subject of experiments by Thomas Wedgwood around the year 1800; his notebooks indicate that he eventually succeeded in creating light-fast, durable images. Partly because he never published his invention adequately, and partly because he was an obscure inventor living in a remote and undeveloped province, Hércules Florence died in Brazil without having been recognized internationally as one of the inventors of photography. Meanwhile, a British inventor, William Fox Talbot, had succeeded in making crude but reasonably light-fast silver images on paper as early as 1834 but had kept his work secret. After reading about Daguerre's invention in January 1839, Talbot published his hitherto secret method in a paper to the Royal Society and set about improving on it. At first, like other pre-daguerreotype processes, Talbot's paper-based photography typically required hours-long exposures in the camera, but in 1840 he created the calotype process, which used the chemical development of a latent image to greatly reduce the exposure needed and compete with the daguerreotype.
In both its original and calotype forms, Talbot's process, unlike Daguerre's, created a translucent negative which could be used to print multiple positive copies; this is the basis of most modern chemical photography up to the present day, as daguerreotypes could only be replicated by rephotographing them with a camera. Talbot's famous tiny paper negative of the Oriel window in Lacock Abbey, one of a number of camera photographs he made in the summer of 1835, may be the oldest camera negative in existence. In March 1837, Carl August von Steinheil, along with Franz von Kobell, used silver chloride and a cardboard camera to make pictures in negative of the Frauenkirche and other buildings in Munich, then took another picture of the negative to get a positive, an actual black-and-white reproduction of the view of the subject. The pictures produced were round, with a diameter of 4 cm; the method was later named the "Steinheil method". In France, Hippolyte Bayard invented his own process for producing direct positive paper prints and claimed to have invented photography earlier than Daguerre or Talbot. British chemist John Herschel made many contributions to the new field. He invented the cyanotype process, later familiar as the "blueprint". He was the first to use the terms "photography", "negative" and "positive". He had discovered in 1819 that sodium thiosulphate was a solvent of silver halides, and in 1839 he informed Talbot (and, indirectly, Daguerre) that it could be used to "fix" silver-halide-based photographs and make them completely light-fast. He made the first glass negative in late 1839. In the March 1851 issue of The Chemist, Frederick Scott Archer published his wet plate collodion process. It became the most widely used photographic medium until the gelatin dry plate, introduced in the 1870s, eventually replaced it. There are three subsets to the collodion process: the Ambrotype (a positive image on glass), the Ferrotype or Tintype (a positive image on metal) and the glass negative, which was used to make positive prints on albumen or salted paper. Many advances in photographic glass plates and printing were made during the rest of the 19th century. In 1891, Gabriel Lippmann introduced a process for making natural-color photographs based on the optical phenomenon of the interference of light waves. His scientifically elegant and important but ultimately impractical invention earned him the Nobel Prize in Physics in 1908. Glass plates were the medium for most original camera photography from the late 1850s until the general introduction of flexible plastic films during the 1890s. Although the convenience of the film greatly popularized amateur photography, early films were somewhat more expensive and of markedly lower optical quality than their glass plate equivalents, and until the late 1910s they were not available in the large formats preferred by most professional photographers, so the new medium did not immediately or completely replace the old. Because of the superior dimensional stability of glass, the use of plates for some scientific applications, such as astrophotography, continued into the 1990s, and in the niche field of laser holography, it has persisted into the 21st century. === Film === Hurter and Driffield began pioneering work on the light sensitivity of photographic emulsions in 1876. Their work enabled the first quantitative measure of film speed to be devised.
The first flexible photographic roll film was marketed by George Eastman, founder of Kodak, in 1885, but this original "film" was actually a coating on a paper base. As part of the processing, the image-bearing layer was stripped from the paper and transferred to a hardened gelatin support. The first transparent plastic roll film followed in 1889. It was made from highly flammable nitrocellulose known as nitrate film. Although cellulose acetate or "safety film" had been introduced by Kodak in 1908, at first it found only a few special applications as an alternative to the hazardous nitrate film, which had the advantages of being considerably tougher, slightly more transparent, and cheaper. The changeover was not completed for X-ray films until 1933, and although safety film was always used for 16 mm and 8 mm home movies, nitrate film remained standard for theatrical 35 mm motion pictures until it was finally discontinued in 1951. Film remained the dominant form of photography until the early 21st century, when advances in digital photography drew consumers to digital formats. Although modern photography is dominated by digital users, film continues to be used by enthusiasts and professional photographers. The distinctive "look" of film-based photographs compared to digital images is likely due to a combination of factors, including (1) differences in spectral and tonal sensitivity (an S-shaped density-to-exposure curve, the H&D curve, for film vs. a linear response curve for digital CCD sensors), (2) resolution, and (3) continuity of tone. === Black-and-white === Originally, all photography was monochrome, or black-and-white. Even after color film was readily available, black-and-white photography continued to dominate for decades, due to its lower cost, chemical stability, and its "classic" photographic look. The tones and contrast between light and dark areas define black-and-white photography. Monochromatic pictures are not necessarily composed of pure blacks, whites, and intermediate shades of gray but can involve shades of one particular hue depending on the process. The cyanotype process, for example, produces an image composed of blue tones. The albumen print process, publicly revealed in 1847, produces brownish tones. Many photographers continue to produce some monochrome images, sometimes because of the established archival permanence of well-processed silver-halide-based materials. Some full-color digital images are processed using a variety of techniques to create black-and-white results, and some manufacturers produce digital cameras that exclusively shoot monochrome. Monochrome printing or electronic display can be used to salvage certain photographs taken in color which are unsatisfactory in their original form; sometimes when presented as black-and-white or single-color-toned images they are found to be more effective. Although color photography has long predominated, monochrome images are still produced, mostly for artistic reasons. Almost all digital cameras have an option to shoot in monochrome, and almost all image editing software can combine or selectively discard RGB color channels to produce a monochrome image from one shot in color (a minimal sketch of this channel mixing is given below). === Color === Color photography was explored beginning in the 1840s. Early experiments in color required extremely long exposures (hours or days for camera images) and could not "fix" the photograph to prevent the color from quickly fading when exposed to white light.
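A minimal sketch of the channel mixing mentioned under black-and-white above, using NumPy; the Rec. 601 luma weights used here are one common convention among several, not a requirement:

```python
import numpy as np

LUMA_601 = np.array([0.299, 0.587, 0.114])  # Rec. 601 weights (one convention)

def to_monochrome(rgb: np.ndarray, weights: np.ndarray = LUMA_601) -> np.ndarray:
    """Weighted mix of the R, G and B channels of an (H, W, 3) image."""
    return rgb @ weights

rgb = np.random.rand(4, 4, 3)     # stand-in for a colour photograph
print(to_monochrome(rgb).shape)   # (4, 4): a single monochrome channel
red_filter_look = rgb[..., 0]     # discarding channels is the other option
```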
The first permanent color photograph was taken in 1861 using the three-color-separation principle first published by Scottish physicist James Clerk Maxwell in 1855. The foundation of virtually all practical color processes, Maxwell's idea was to take three separate black-and-white photographs through red, green and blue filters. This provides the photographer with the three basic channels required to recreate a color image. Transparent prints of the images could be projected through similar color filters and superimposed on the projection screen, an additive method of color reproduction (a digital analogue is sketched below). A color print on paper could be produced by superimposing carbon prints of the three images made in their complementary colors, a subtractive method of color reproduction pioneered by Louis Ducos du Hauron in the late 1860s. Russian photographer Sergei Mikhailovich Prokudin-Gorskii made extensive use of this color separation technique, employing a special camera which successively exposed the three color-filtered images on different parts of an oblong plate. Because his exposures were not simultaneous, unsteady subjects exhibited color "fringes" or, if rapidly moving through the scene, appeared as brightly colored ghosts in the resulting projected or printed images. Implementation of color photography was hindered by the limited sensitivity of early photographic materials, which were mostly sensitive to blue, only slightly sensitive to green, and virtually insensitive to red. The discovery of dye sensitization by photochemist Hermann Vogel in 1873 suddenly made it possible to add sensitivity to green, yellow and even red. Improved color sensitizers and ongoing improvements in the overall sensitivity of emulsions steadily reduced the once-prohibitive long exposure times required for color, bringing it ever closer to commercial viability. Autochrome, the first commercially successful color process, was introduced by the Lumière brothers in 1907. Autochrome plates incorporated a mosaic color filter layer made of dyed grains of potato starch, which allowed the three color components to be recorded as adjacent microscopic image fragments. After an Autochrome plate was reversal processed to produce a positive transparency, the starch grains served to illuminate each fragment with the correct color and the tiny colored points blended together in the eye, synthesizing the color of the subject by the additive method. Autochrome plates were one of several varieties of additive color screen plates and films marketed between the 1890s and the 1950s. Kodachrome, the first modern "integral tripack" (or "monopack") color film, was introduced by Kodak in 1935. It captured the three color components in a multi-layer emulsion. One layer was sensitized to record the red-dominated part of the spectrum, another layer recorded only the green part and a third recorded only the blue. Without special film processing, the result would simply be three superimposed black-and-white images, but complementary cyan, magenta, and yellow dye images were created in those layers by adding color couplers during a complex processing procedure. Agfa's similarly structured Agfacolor Neu was introduced in 1936. Unlike Kodachrome, the color couplers in Agfacolor Neu were incorporated into the emulsion layers during manufacture, which greatly simplified the processing. Currently available color films still employ a multi-layer emulsion and the same principles, most closely resembling Agfa's product.
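Maxwell's additive principle maps directly onto the way digital images are stored today. The sketch below (NumPy arrays standing in for the three filtered black-and-white separations) stacks three separations into one color image:

```python
import numpy as np

def combine_additive(red, green, blue):
    """Stack three black-and-white separations (each exposed through the
    corresponding filter, values scaled to 0..1) into an RGB image."""
    return np.stack([red, green, blue], axis=-1)

# Stand-ins for three separate exposures of the same scene:
r, g, b = (np.random.rand(4, 4) for _ in range(3))
print(combine_additive(r, g, b).shape)  # (4, 4, 3): one additive colour image
```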
Instant color film, used in a special camera which yielded a unique finished color print only a minute or two after the exposure, was introduced by Polaroid in 1963. Color photography may form images as positive transparencies, which can be used in a slide projector, or as color negatives intended for use in creating positive color enlargements on specially coated paper. The latter is now the most common form of film (non-digital) color photography owing to the introduction of automated photo printing equipment. After a transition period centered around 1995–2005, color film was relegated to a niche market by inexpensive multi-megapixel digital cameras. Film continues to be the preference of some photographers because of its distinctive "look". === Digital === In 1981, Sony unveiled the first consumer camera to use a charge-coupled device for imaging, eliminating the need for film: the Sony Mavica. While the Mavica saved images to disk, the images were displayed on television, and the camera was not fully digital. The first digital camera to both record and save images in a digital format was the Fujix DS-1P created by Fujifilm in 1988. In 1991, Kodak unveiled the DCS 100, the first commercially available digital single-lens reflex camera. Although its high cost precluded uses other than photojournalism and professional photography, commercial digital photography was born. Digital imaging uses an electronic image sensor to record the image as a set of electronic data rather than as chemical changes on film. An important difference between digital and chemical photography is that chemical photography resists photo manipulation because it involves film and photographic paper, while digital imaging is a highly manipulable medium. This difference allows for a degree of image post-processing that is comparatively difficult in film-based photography and permits different communicative potentials and applications. Digital photography dominates the 21st century. More than 99% of photographs taken around the world are taken with digital cameras, increasingly with smartphones. == Techniques == A large variety of photographic techniques and media are used in the process of capturing images for photography. These include the camera; dual photography; full-spectrum, ultraviolet and infrared media; light field photography; and other imaging techniques. === Cameras === The camera is the image-forming device, and a photographic plate, photographic film or a silicon electronic image sensor is the capture medium. The respective recording medium can be the plate or film itself, or a digital magnetic or electronic memory. Photographers control the camera and lens to "expose" the light recording material to the required amount of light to form a "latent image" (on plate or film) or RAW file (in digital cameras) which, after appropriate processing, is converted to a usable image; the amount of light admitted is conventionally quantified as an exposure value (see the sketch below). Digital cameras use an electronic image sensor based on light-sensitive electronics such as charge-coupled device (CCD) or complementary metal–oxide–semiconductor (CMOS) technology. The resulting digital image is stored electronically, but can be reproduced on paper. The camera (or 'camera obscura') is a dark room or chamber from which, as far as possible, all light is excluded except the light that forms the image. It was discovered and used in the 16th century by painters. The subject being photographed, however, must be illuminated.
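The exposure value mentioned above is defined as EV = log2(N²/t), where N is the f-number and t the shutter time in seconds. A small illustrative sketch, not tied to any particular camera (nominal f-numbers are rounded, so equivalent settings agree only approximately):

```python
import math

def exposure_value(f_number: float, shutter_seconds: float) -> float:
    """EV = log2(N^2 / t): equal EVs admit (nominally) equal amounts of light."""
    return math.log2(f_number**2 / shutter_seconds)

# Opening the aperture one stop while halving the exposure time keeps EV fixed.
print(exposure_value(2.8, 1 / 60))  # ~8.88
print(exposure_value(4.0, 1 / 30))  # ~8.91 (nominal rounding explains the gap)
```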
Cameras can range from small devices to a whole room that is kept dark while the object to be photographed is in another room where it is properly illuminated. This was common for reproduction photography of flat copy when large film negatives were used (see Process camera). As soon as photographic materials became "fast" (sensitive) enough for taking candid or surreptitious pictures, small "detective" cameras were made, some actually disguised as a book or handbag or pocket watch (the Ticka camera) or even worn hidden behind an Ascot necktie with a tie pin that was really the lens. The movie camera is a type of photographic camera that takes a rapid sequence of photographs on a recording medium. In contrast to a still camera, which captures a single snapshot at a time, the movie camera takes a series of images, each called a "frame". This is accomplished through an intermittent mechanism. The frames are later played back in a movie projector at a specific speed, called the "frame rate" (the number of frames per second). While viewing, a person's eyes and brain merge the separate pictures to create the illusion of motion. === Stereoscopic === Photographs, both monochrome and color, can be captured and displayed through two side-by-side images that emulate human stereoscopic vision. Stereoscopic photography was the first to capture figures in motion. While known colloquially as "3-D" photography, the more accurate term is stereoscopy. Such cameras have long been realized by using film and more recently in digital electronic methods (including cell phone cameras). === Dualphotography === Dualphotography consists of photographing a scene from both sides of a photographic device at once (e.g. a camera for back-to-back dualphotography, or two networked cameras for portal-plane dualphotography). The dualphoto apparatus can be used to simultaneously capture both the subject and the photographer, or both sides of a geographical place at once, thus adding a supplementary narrative layer to that of a single image. === Full-spectrum, ultraviolet and infrared === Ultraviolet and infrared films have been available for many decades and employed in a variety of photographic avenues since the 1960s. New technological trends in digital photography have opened a new direction in full spectrum photography, where careful filtering choices across the ultraviolet, visible and infrared lead to new artistic visions. Modified digital cameras can detect some ultraviolet, all of the visible and much of the near infrared spectrum, as most digital imaging sensors are sensitive from about 350 nm to 1000 nm. An off-the-shelf digital camera contains an infrared hot mirror filter that blocks most of the infrared and a bit of the ultraviolet that would otherwise be detected by the sensor, narrowing the accepted range to about 400–700 nm. Replacing a hot mirror or infrared blocking filter with an infrared pass or a wide spectrally transmitting filter allows the camera to detect the wider spectrum light at greater sensitivity. Without the hot mirror, the red, green and blue (or cyan, yellow and magenta) colored micro-filters placed over the sensor elements pass varying amounts of ultraviolet (a blue window) and infrared (primarily through the red and, to a lesser extent, the green and blue micro-filters). Full-spectrum photography is used for fine art photography, geology, forensics and law enforcement.
=== Layering === Layering is a photographic composition technique that manipulates the foreground, subject or middle-ground, and background layers in such a way that they all work together to tell a story through the image. Layers may be incorporated by altering the focal length or by distorting the perspective through the positioning of the camera. People, movement, light and a variety of objects can be used in layering. === Light field === Digital methods of image capture and display processing have enabled the new technology of "light field photography" (also known as synthetic aperture photography). This process allows focusing at various depths of field to be selected after the photograph has been captured. Building on a concept suggested by Michael Faraday in 1846, the "light field" is understood as 5-dimensional, with each point in 3-D space having attributes of two more angles that define the direction of each ray passing through that point. These additional vector attributes can be captured optically through the use of microlenses at each pixel point within the 2-dimensional image sensor. Every pixel of the final image is actually a selection from each sub-array located under each microlens, as identified by a post-image capture focus algorithm (a minimal refocusing sketch is given at the end of this section). === Other === Besides the camera, other methods of forming images with light are available. For instance, a photocopy or xerography machine forms permanent images but uses the transfer of static electrical charges rather than a photographic medium, hence the term electrophotography. Photograms are images produced by the shadows of objects cast on the photographic paper, without the use of a camera. Objects can also be placed directly on the glass of an image scanner to produce digital pictures.
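The post-capture refocusing described under light field above is commonly implemented as "shift-and-add": each sub-aperture view is shifted in proportion to its angular offset and the shifted views are averaged. A minimal sketch, assuming the sub-aperture views have already been extracted from under the microlens array, and using whole-pixel shifts for simplicity (real pipelines interpolate sub-pixel shifts):

```python
import numpy as np

def refocus(subviews, offsets, alpha):
    """Shift-and-add refocusing: shift each sub-aperture view in proportion
    to its angular offset, then average. alpha selects the focal depth."""
    acc = np.zeros_like(subviews[0], dtype=float)
    for img, (du, dv) in zip(subviews, offsets):
        # Whole-pixel shifts for simplicity; real pipelines interpolate.
        acc += np.roll(img, (int(round(alpha * dv)), int(round(alpha * du))),
                       axis=(0, 1))
    return acc / len(subviews)

# Toy light field: a 3x3 grid of sub-aperture views of the same scene.
offsets = [(du, dv) for du in (-1, 0, 1) for dv in (-1, 0, 1)]
views = [np.random.rand(16, 16) for _ in offsets]
for alpha in (0.0, 1.0, 2.0):  # each alpha corresponds to a different depth
    print(alpha, refocus(views, offsets, alpha).shape)
```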
Fashion photography, like the work featured in Harper's Bazaar, emphasizes clothes and other products; glamour photography emphasizes the model and body form, and is popular in advertising and men's magazines. Models in glamour photography sometimes work nude. 360 product photography displays a series of photos to give the impression of a rotating object. This technique is commonly used by ecommerce websites to help shoppers visualise products. Concert photography focuses on capturing candid images of both the artist or band as well as the atmosphere (including the crowd). Many of these photographers work freelance and are contracted through an artist or their management to cover a specific show. Concert photographs are often used to promote the artist or band in addition to the venue. Crime scene photography consists of photographing scenes of crime such as robberies and murders. A black and white camera or an infrared camera may be used to capture specific details. Still life photography usually depicts inanimate subject matter, typically commonplace objects which may be either natural or man-made. Still life is a broader category for food and some natural photography and can be used for advertising purposes. Real estate photography focuses on the production of photographs showcasing a property that is for sale; such photographs require the use of wide-angle lenses and extensive knowledge of high-dynamic-range imaging. Food photography can be used for editorial, packaging or advertising use. Food photography is similar to still life photography but requires some special skills. Photojournalism can be considered a subset of editorial photography. Photographs made in this context are accepted as a documentation of a news story. Paparazzi is a form of photojournalism in which the photographer captures candid images of athletes, celebrities, politicians, and other prominent people. Portrait and wedding photography: photographs made and sold directly to the end user of the images. Landscape photography typically captures the presence of nature but can also focus on human-made features or disturbances of landscapes. Wildlife photography demonstrates the life of wild animals. === Art === During the 20th century, both fine art photography and documentary photography became accepted by the English-speaking art world and the gallery system. In the United States, a handful of photographers, including Alfred Stieglitz, Edward Steichen, John Szarkowski, F. Holland Day, and Edward Weston, spent their lives advocating for photography as a fine art. At first, fine art photographers tried to imitate painting styles. This movement is called Pictorialism, often using soft focus for a dreamy, 'romantic' look. In reaction to that, Weston, Ansel Adams, and others formed the Group f/64 to advocate 'straight photography', the photograph as a (sharply focused) thing in itself and not an imitation of something else. The aesthetics of photography is a matter that continues to be discussed regularly, especially in artistic circles. Many artists argued that photography was merely the mechanical reproduction of an image. If photography is authentically art, then photography in the context of art would need redefinition, such as determining what component of a photograph makes it beautiful to the viewer.
The controversy began with the earliest images "written with light"; Nicéphore Niépce, Louis Daguerre, and others among the very earliest photographers were met with acclaim, but some questioned if their work met the definitions and purposes of art. Clive Bell in his classic essay Art states that only "significant form" can distinguish art from what is not art. There must be some one quality without which a work of art cannot exist; possessing which, in the least degree, no work is altogether worthless. What is this quality? What quality is shared by all objects that provoke our aesthetic emotions? What quality is common to Sta. Sophia and the windows at Chartres, Mexican sculpture, a Persian bowl, Chinese carpets, Giotto's frescoes at Padua, and the masterpieces of Poussin, Piero della Francesca, and Cezanne? Only one answer seems possible – significant form. In each, lines and colors combined in a particular way, certain forms and relations of forms, stir our aesthetic emotions. On 7 February 2007, Sotheby's London sold the 2001 photograph 99 Cent II Diptychon for an unprecedented $3,346,456 to an anonymous bidder, making it the most expensive photograph sold at auction at the time. Conceptual photography turns a concept or idea into a photograph. Even though real objects are depicted in the photographs, the subject is strictly abstract. In parallel to this development, the then largely separate interface between painting and photography was closed in the second half of the 20th century with the chemigram of Pierre Cordier and the chemogram of Josef H. Neumann. In 1974 the chemograms of Josef H. Neumann ended the separation of the painterly background and the photographic layer by showing the picture elements in a symbiosis that had never existed before: each an unmistakable unique specimen, combining a painterly and at the same time genuinely photographic perspective, made with lenses, within a photographic layer united in colors and shapes. These 1970s Neumann chemograms thus differ from the cameraless chemigrams of Pierre Cordier and from the photograms of a Man Ray or László Moholy-Nagy created in the preceding decades. Photograms of that kind had been made almost from the invention of photography onward by various important artists: by Hippolyte Bayard, Thomas Wedgwood and William Henry Fox Talbot in photography's early stages, later by Man Ray and László Moholy-Nagy in the twenties, and by the painters Edmund Kesting and Christian Schad in the thirties, who draped objects directly onto appropriately sensitized photo paper and exposed them to a light source without a camera. === Photojournalism === Photojournalism is a particular form of photography (the collecting, editing, and presenting of news material for publication or broadcast) that employs images in order to tell a news story. It is now usually understood to refer only to still images, but in some cases the term also refers to video used in broadcast journalism. Photojournalism is distinguished from other closely related branches of photography (e.g., documentary photography, social documentary photography, street photography or celebrity photography) by complying with a rigid ethical framework which demands that the work be both honest and impartial whilst telling the story in strictly journalistic terms. Photojournalists create pictures that contribute to the news media, and help communities connect with one another. Photojournalists must be well informed and knowledgeable about events happening right outside their door.
They deliver news in a creative format that is not only informative, but also entertaining, including sports photography. === Science and forensics === The camera has a long and distinguished history as a means of recording scientific phenomena from its first use by Daguerre and Fox Talbot, such as astronomical events (eclipses for example), small creatures and plants when the camera was attached to the eyepiece of microscopes (in photomicroscopy) and for macro photography of larger specimens. The camera also proved useful in recording crime scenes and the scenes of accidents, such as the Wootton bridge collapse in 1861. The methods used in analysing photographs for use in legal cases are collectively known as forensic photography. Crime scene photos are usually taken from three vantage points: overview, mid-range, and close-up. In 1845 Francis Ronalds, the Honorary Director of the Kew Observatory, invented the first successful camera to make continuous recordings of meteorological and geomagnetic parameters. Different machines produced 12- or 24-hour photographic traces of the minute-by-minute variations of atmospheric pressure, temperature, humidity, atmospheric electricity, and the three components of geomagnetic forces. The cameras were supplied to numerous observatories around the world and some remained in use until well into the 20th century. Charles Brooke developed similar instruments for the Greenwich Observatory a little later. Science regularly uses image technology derived from the design of the pinhole camera to avoid distortions that can be caused by lenses. X-ray machines are similar in design to pinhole cameras, with high-grade filters and laser radiation. Photography has become universal in recording events and data in science and engineering, and at crime scenes or accident scenes. The method has been much extended by using other wavelengths, such as infrared photography and ultraviolet photography, as well as spectroscopy. Those methods were first used in the Victorian era and have been improved much further since that time. The first photograph of a single atom was captured in 2012 by physicists at Griffith University, Australia. They used an electric field to trap an ion of the element ytterbium. The image was recorded on a CCD, an electronic equivalent of photographic film. === Wildlife photography === Wildlife photography involves capturing images of various forms of wildlife. Unlike other forms of photography such as product or food photography, successful wildlife photography requires a photographer to choose the right place and right time when specific wildlife are present and active. It often requires great patience and considerable skill and command of the right photographic equipment. == Social and cultural implications == There are many ongoing questions about different aspects of photography. In her On Photography (1977), Susan Sontag dismisses the objectivity of photography. This is a highly debated subject within the photographic community. Sontag argues, "To photograph is to appropriate the thing photographed. It means putting one's self into a certain relation to the world that feels like knowledge, and therefore like power." Photographers decide what to take a photo of, what elements to exclude and what angle to frame the photo, and these factors may reflect a particular socio-historical context. Along these lines, it can be argued that photography is a subjective form of representation. Modern photography has raised a number of concerns on its effect on society.
In Alfred Hitchcock's Rear Window (1954), the camera is presented as promoting voyeurism. 'Although the camera is an observation station, the act of photographing is more than passive observing'. The camera doesn't rape or even possess, though it may presume, intrude, trespass, distort, exploit, and, at the farthest reach of metaphor, assassinate – all activities that, unlike the sexual push and shove, can be conducted from a distance, and with some detachment. Digital imaging has raised ethical concerns because of the ease of manipulating digital photographs in post-processing. Many photojournalists have declared they will not crop their pictures or are forbidden from combining elements of multiple photos to make "photomontages", passing them off as "real" photographs. Today's technology has made image editing relatively simple for even the novice photographer. However, recent changes to in-camera processing allow digital fingerprinting of photos to detect tampering for the purposes of forensic photography. Photography is one of the new media forms that changes perception and changes the structure of society. Further unease has been caused around cameras with regard to desensitization. Fears that disturbing or explicit images are widely accessible to children and society at large have been raised. In particular, photos of war and pornography have caused a stir. Sontag is concerned that "to photograph is to turn people into objects that can be symbolically possessed". Discussion of desensitization goes hand in hand with debates about censored images. Sontag writes of her concern that the ability to censor pictures means the photographer has the ability to construct reality. One of the practices through which photography constitutes society is tourism. Tourism and photography combine to create a "tourist gaze" in which local inhabitants are positioned and defined by the camera lens. However, it has also been argued that there exists a "reverse gaze" through which indigenous photographees can position the tourist photographer as a shallow consumer of images. == Law == Photography is both restricted and protected by the law in many jurisdictions. Protection of photographs is typically achieved through the granting of copyright or moral rights to the photographer. In the United States, photography is protected as a First Amendment right and anyone is free to photograph anything seen in public spaces as long as it is in plain view. In the UK, the Counter-Terrorism Act 2008 increased the power of the police to prevent people, even press photographers, from taking pictures in public places. In South Africa, any person may photograph any other person, without their permission, in public spaces; the only specific restriction placed by government on what may not be photographed relates to anything classed as national security. Each country has different laws. == See also == Outline of photography Science of photography List of photographers List of photography awards List of most expensive photographs List of photographs considered the most important Astrophotography Image editing Imaging Photolab and minilab Visual arts Large format Medium format Microform == References == == Further reading == === Introduction === Barrett, T. (2012), Criticizing Photographs: An Introduction to Understanding Images, 5th edn, McGraw-Hill, New York. Bate, D. (2009), Photography: The Key Concepts, Bloomsbury, New York. Berger, J. (Dyer, G. ed.) (2013), Understanding a Photograph, Penguin Classics, London.
Bright, S. (2011), Art Photography Now, Thames & Hudson, London. Cotton, C. (2015), The Photograph as Contemporary Art, 3rd edn, Thames & Hudson, New York. Heiferman, M. (2013), Photography Changes Everything, Aperture Foundation, US. Shore, S. (2015), The Nature of Photographs, 2nd edn, Phaidon, New York. Wells, L. (2004), Photography: A Critical Introduction, 3rd edn, Routledge, London. ISBN 0-415-30704-X. === History === A New History of Photography, ed. by Michel Frizot, Köln: Könemann, 1998. Franz-Xaver Schlegel, Das Leben der toten Dinge – Studien zur modernen Sachfotografie in den USA 1914–1935, 2 volumes, Stuttgart/Germany: Art in Life, 1999, ISBN 3-00-004407-8. === Reference works === Tom Ang (2002). Dictionary of Photography and Digital Imaging: The Essential Reference for the Modern Photographer. Watson-Guptill. ISBN 978-0-8174-3789-3. Hans-Michael Koetzle: Das Lexikon der Fotografen: 1900 bis heute, Munich: Knaur, 2002, 512 p., ISBN 3-426-66479-8. John Hannavy (ed.): Encyclopedia of Nineteenth-Century Photography, 1736 p., New York: Routledge, 2005, ISBN 978-0-415-97235-2. Lynne Warren (ed.): Encyclopedia of Twentieth-Century Photography, 1719 p., New York: Routledge, 2006. The Oxford Companion to the Photograph, ed. by Robin Lenman, Oxford University Press, 2005. The Focal Encyclopedia of Photography, Richard Zakia, Leslie Stroebel, Focal Press, 1993, ISBN 0-240-51417-3. Stroebel, Leslie; et al. (2000). Basic Photographic Materials and Processes. Boston: Focal Press. ISBN 978-0-240-80405-7. === Other books === Photography and The Art of Seeing by Freeman Patterson, Key Porter Books, 1989, ISBN 1-55013-099-4. The Art of Photography: An Approach to Personal Expression by Bruce Barnbaum, Rocky Nook, 2010, ISBN 1-933952-68-7. Image Clarity: High Resolution Photography by John B. Williams, Focal Press, 1990, ISBN 0-240-80033-8. == External links == World History of Photography Archived 31 October 2010 at the Wayback Machine From The History of Art. Daguerreotype to Digital: A Brief History of the Photographic Process – State Library & Archives of Florida
Wikipedia/Photographic
Orography is the study of the topographic relief of mountains, and can more broadly include hills and any part of a region's elevated terrain. Orography (also known as oreography, orology, or oreology) falls within the broader discipline of geomorphology. The term orography comes from the Greek όρος ("hill") and γράφω ("to write"). == Uses == Mountain ranges and elevated land masses have a major impact on global climate. For instance, the elevated areas of East Africa substantially determine the strength of the Indian monsoon. In scientific models, such as general circulation models, orography defines the lower boundary of the model over land. When a river's tributaries or settlements by the river are listed in 'orographic sequence', they are in order from the highest (nearest the source of the river) to the lowest or mainstem (nearest the mouth). This method of listing tributaries is similar to the Strahler Stream Order, where the headwater tributaries are listed as category 1. == Orographic precipitation == Orographic precipitation, also known as relief precipitation, is precipitation generated by a forced upward movement of air upon encountering a physiographic upland (see anabatic wind). This lifting can be caused by: upward deflection of large-scale horizontal flow by the orography, or anabatic (upward vertical) propagation of moist air up an orographic slope, caused by daytime heating of the mountain barrier surface. Upon ascent, the air that is being lifted expands and cools adiabatically. This adiabatic cooling of a rising moist air parcel may lower its temperature to its dew point, thus allowing for condensation of the water vapor contained within it, and hence the formation of a cloud. If enough water vapor condenses into cloud droplets, these droplets may become large enough to fall to the ground as precipitation. Terrain-induced precipitation is a major factor for meteorologists to consider when they forecast the local weather. Orography can play a major role in determining the type, amount, intensity, and duration of precipitation events. Researchers have found that barrier width, slope steepness, and updraft speed are major contributors to the amount and intensity of orographic precipitation. Computer models simulating these factors have shown that narrow barriers and steeper slopes produce stronger updraft speeds, which in turn increase orographic precipitation. Orographic precipitation is known to occur on oceanic islands, such as the Hawaiian Islands and New Zealand; much of the rainfall received on such islands is on the windward side, and the leeward side tends to be quite dry, almost desert-like. This phenomenon results in substantial local gradients in average rainfall, with coastal areas receiving on the order of 20 to 30 inches (510 to 760 mm) per year, and interior uplands receiving over 100 inches (2,500 mm) per year. Leeward coastal areas are especially dry—less than 20 in (510 mm) per year at Waikiki—and the tops of moderately high uplands are especially wet—about 475 in (12,100 mm) per year at Wai'ale'ale on Kaua'i. Another area in which orographic precipitation is known to occur is the Pennines in the north of England: the west side of the Pennines receives more rain than the east because the clouds are forced up and over the hills, causing the rain to fall mostly on the western slopes.
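In both cases the underlying mechanism is the adiabatic cooling described above: the altitude at which a lifted parcel begins to condense can be estimated from the surface temperature and dew point. The following sketch uses Espy's widely quoted rule of thumb of roughly 125 m of ascent per degree Celsius of temperature and dew-point spread; the constant is an approximation, not an exact physical value.

    def lifting_condensation_level_m(temp_c, dew_point_c):
        # Espy's approximation: the cloud base rises about 125 m for every
        # 1 degree C by which surface temperature exceeds the dew point.
        return 125.0 * (temp_c - dew_point_c)

    # A parcel at 18 C with a 14 C dew point would be expected to start
    # condensing roughly 500 m up the slope.
    print(lifting_condensation_level_m(18.0, 14.0))  # 500.0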
The contrast is particularly noticeable between Manchester (to the west) and Leeds (to the east); Leeds receives less rain because it lies in a rain shadow extending about 12 miles (19 km) east of the Pennines. == See also == Coverage (telecommunication) Orographic lift Rain shadow == Citations == == General and cited references == Stull, Roland (2017). Practical Meteorology: An Algebra-based Survey of Atmospheric Science. University of British Columbia. ISBN 978-0-88865-283-6. Whiteman, C. David (2000). Mountain Meteorology: Fundamentals and Applications. Oxford University Press. ISBN 0-19-513271-8. == External links == Map of the Orography of Europe from Euratlas.com
Wikipedia/Orography
A digital elevation model (DEM) or digital surface model (DSM) is a 3D computer graphics representation of elevation data to represent terrain or overlying objects, commonly of a planet, moon, or asteroid. A "global DEM" refers to a discrete global grid. DEMs are often used in geographic information systems (GIS), and are the most common basis for digitally produced relief maps. A digital terrain model (DTM) represents specifically the ground surface, while DEM and DSM may represent tree-top canopy or building roofs. While a DSM may be useful for landscape modeling, city modeling and visualization applications, a DTM is often required for flood or drainage modeling, land-use studies, geological applications, and in planetary science. == Terminology == There is no universal usage of the terms digital elevation model (DEM), digital terrain model (DTM) and digital surface model (DSM) in scientific literature. In most cases the term digital surface model represents the earth's surface and includes all objects on it. In contrast to a DSM, the digital terrain model (DTM) represents the bare ground surface without any objects like plants and buildings (see the figure on the right). DEM is often used as a generic term for DSMs and DTMs, only representing height information without any further definition about the surface. Other definitions equate the terms DEM and DTM, or equate the terms DEM and DSM, or define the DEM as a subset of the DTM, which also represents other morphological elements, or define a DEM as a rectangular grid and a DTM as a three-dimensional model (TIN). Most of the data providers (USGS, ERSDAC, CGIAR, Spot Image) use the term DEM as a generic term for DSMs and DTMs. Some datasets such as SRTM or the ASTER GDEM are originally DSMs (although in forested areas, SRTM reaches into the tree canopy, giving readings somewhere between a DSM and a DTM). DTMs are created from high-resolution DSM datasets using complex algorithms to filter out buildings and other objects, a process known as "bare-earth extraction". In the following, the term DEM is used as a generic term for DSMs and DTMs. == Types == A DEM can be represented as a raster (a grid of squares, also known as a heightmap when representing elevation) or as a vector-based triangular irregular network (TIN). The TIN DEM dataset is also referred to as a primary (measured) DEM, whereas the raster DEM is referred to as a secondary (computed) DEM. The DEM could be acquired through techniques such as photogrammetry, lidar, IfSAR or InSAR, land surveying, etc. (Li et al. 2005). DEMs are commonly built using data collected using remote sensing techniques, but they may also be built from land surveying. === Rendering === The digital elevation model itself consists of a matrix of numbers, but the data from a DEM is often rendered in visual form to make it understandable to humans. This visualization may be in the form of a contoured topographic map, or could use shading and false color assignment (or "pseudo-color") to render elevations as colors (for example, using green for the lowest elevations, shading to red, with white for the highest elevations). Visualizations are sometimes also done as oblique views, reconstructing a synthetic visual image of the terrain as it would appear looking down at an angle. In these oblique visualizations, elevations are sometimes scaled using "vertical exaggeration" in order to make subtle elevation differences more noticeable.
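As a concrete illustration of such rendering, here is a minimal shaded-relief sketch for a raster DEM, using the widely documented Horn-style hillshade formulation. The function name, parameter choices and NumPy dependency are implementation choices rather than part of any DEM standard, and the z_factor parameter applies the vertical exaggeration just described.

    import numpy as np

    def hillshade(dem, cellsize_m, azimuth_deg=315.0, altitude_deg=45.0, z_factor=1.0):
        # Shaded relief for a 2-D elevation grid. z_factor > 1 applies
        # vertical exaggeration before shading. Assumes the row index
        # increases northward; flip the array first if it does not.
        az = np.radians(azimuth_deg)
        alt = np.radians(altitude_deg)
        dz_dn, dz_de = np.gradient(dem * z_factor, cellsize_m)  # northward, eastward slopes
        slope = np.arctan(np.hypot(dz_de, dz_dn))
        aspect = np.arctan2(-dz_de, -dz_dn)  # downslope bearing, clockwise from north
        lit = np.sin(alt) * np.cos(slope) + np.cos(alt) * np.sin(slope) * np.cos(az - aspect)
        return np.clip(lit, 0.0, 1.0)  # 0 = fully shadowed, 1 = fully lit

A pseudo-color ramp applied to the raw elevations can then be modulated by this shading to produce the familiar colored relief map.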
Some scientists, however, object to vertical exaggeration as misleading the viewer about the true landscape. == Production == Mappers may prepare digital elevation models in a number of ways, but they frequently use remote sensing rather than direct survey data. Older methods of generating DEMs often involve interpolating digital contour maps that may have been produced by direct survey of the land surface. This method is still used in mountain areas, where interferometry is not always satisfactory. Note that contour line data or any other sampled elevation datasets (by GPS or ground survey) are not DEMs, but may be considered digital terrain models. A DEM implies that elevation is available continuously at each location in the study area. === Satellite mapping === One powerful technique for generating digital elevation models is interferometric synthetic aperture radar, where two passes of a radar satellite (such as RADARSAT-1, TerraSAR-X or COSMO-SkyMed), or a single pass if the satellite is equipped with two antennas (like the SRTM instrumentation), collect sufficient data to generate a digital elevation map tens of kilometers on a side with a resolution of around ten meters. Other kinds of stereoscopic pairs can be employed using the digital image correlation method, where two optical images are acquired with different angles taken from the same pass of an airplane or an Earth observation satellite (such as the HRS instrument of SPOT5 or the VNIR band of ASTER). The SPOT 1 satellite (1986) provided the first usable elevation data for a sizeable portion of the planet's landmass, using two-pass stereoscopic correlation. Later, further data were provided by the European Remote-Sensing Satellite (ERS, 1991) using the same method, the Shuttle Radar Topography Mission (SRTM, 2000) using single-pass SAR and the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER, 2000) instrumentation on the Terra satellite using double-pass stereo pairs. The HRS instrument on SPOT 5 has acquired over 100 million square kilometers of stereo pairs. === Planetary mapping === A tool of increasing value in planetary science has been the use of orbital altimetry to make digital elevation maps of planets. A primary tool for this is laser altimetry, but radar altimetry is also used. Planetary digital elevation maps made using laser altimetry include the Mars Orbiter Laser Altimeter (MOLA) mapping of Mars, the Lunar Orbiter Laser Altimeter (LOLA) and Lunar Altimeter (LALT) mapping of the Moon, and the Mercury Laser Altimeter (MLA) mapping of Mercury. In planetary mapping, each planetary body has a unique reference surface. New Horizons' Long Range Reconnaissance Imager used stereo photogrammetry to produce partial surface elevation maps of Pluto and 486958 Arrokoth. === Methods for obtaining elevation data used to create DEMs === Lidar Radar Stereo photogrammetry from aerial surveys Structure from motion / Multi-view stereo applied to aerial photography Block adjustment from optical satellite imagery Interferometry from radar data Real Time Kinematic GPS Topographic maps Theodolite or total station Doppler radar Focus variation Inertial surveys Surveying and mapping drones Range imaging === Accuracy === The quality of a DEM is a measure of how accurate the elevation is at each pixel (absolute accuracy) and how accurately the morphology is presented (relative accuracy). Quality assessment of a DEM can be performed by comparison of DEMs from different sources.
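One simple form of such a comparison is to compute the mean bias and the vertical root-mean-square error between two models. The sketch below assumes the two DEMs have already been resampled onto a common grid; the function name is illustrative.

    import numpy as np

    def vertical_error_stats(dem_test, dem_reference):
        # Elementwise elevation differences; NaNs mark no-data cells.
        diff = (dem_test - dem_reference).ravel()
        diff = diff[np.isfinite(diff)]
        bias = float(diff.mean())                  # systematic offset between models
        rmse = float(np.sqrt(np.mean(diff ** 2)))  # overall vertical accuracy
        return bias, rmse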
Several factors play an important role in the quality of DEM-derived products: terrain roughness; sampling density (elevation data collection method); grid resolution or pixel size; interpolation algorithm; vertical resolution; terrain analysis algorithm. Reference 3D products include quality masks that give information on the coastline, lakes, snow, clouds, correlation, etc. == Uses == Common uses of DEMs include: Extracting terrain parameters for geomorphology Modeling water flow for hydrology or mass movement (for example avalanches and landslides) Modeling soil wetness with Cartographic Depth to Water Indexes (DTW-index) Creation of relief maps Rendering of 3D visualizations 3D flight planning and TERCOM Creation of physical models (including raised relief maps and 3D printed terrain models) Rectification of aerial photography or satellite imagery Reduction (terrain correction) of gravity measurements (gravimetry, physical geodesy) Terrain analysis in geomorphology and physical geography Geographic information systems (GIS) Engineering and infrastructure design Satellite navigation (for example GPS and GLONASS) Line-of-sight analysis Base mapping Flight simulation Train simulation Precision farming and forestry Surface analysis Intelligent transportation systems (ITS) Auto safety / advanced driver-assistance systems (ADAS) Archaeology == Sources == === Global === Released at the beginning of 2022, FABDEM offers a bare-earth simulation of the Earth's surface at 1 arc-second (roughly 30-meter) resolution. Adapted from GLO-30, the data removes all forests and buildings. The data is free to download for non-commercial use, and available commercially at a cost through the developer's website. An alternative free global DEM, GTOPO30 (30 arc-second resolution, c. 1 km along the equator), is available, but its quality is variable and in some areas it is very poor. A much higher quality DEM from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument of the Terra satellite is also freely available for 99% of the globe, and represents elevation at 30-meter resolution. A similarly high resolution was previously only available for United States territory under the Shuttle Radar Topography Mission (SRTM) data, while most of the rest of the planet was only covered in a 3 arc-second resolution (around 90 meters along the equator). SRTM does not cover the polar regions and has no-data (void) areas in some mountain and desert regions. SRTM data, being derived from radar, represents the elevation of the first-reflected surface—quite often tree tops. So, the data are not necessarily representative of the ground surface, but the top of whatever is first encountered by the radar. Submarine elevation (known as bathymetry) data is generated using ship-mounted depth soundings. When land topography and bathymetry are combined, a truly global relief model is obtained. The SRTM30Plus dataset (used in NASA World Wind) attempts to combine GTOPO30, SRTM and bathymetric data to produce a truly global elevation model. The Earth2014 global topography and relief model provides layered topography grids at 1 arc-minute resolution. Unlike SRTM30plus, Earth2014 provides information on ice-sheet heights and bedrock (that is, topography below the ice) over Antarctica and Greenland. Another global model is Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010) with 7.5 arc-second resolution. It is based on SRTM data and combines other data outside SRTM coverage.
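The angular resolutions quoted above translate into ground distances that shrink east to west with latitude. A small sketch of the conversion under a spherical-Earth approximation (the radius constant is a commonly used mean value, not a geodetic datum):

    import math

    EARTH_RADIUS_M = 6371000.0  # mean Earth radius; spherical approximation

    def arcsec_to_meters(arcsec, latitude_deg=0.0):
        # Ground distance spanned by an angular grid step; the east-west
        # spacing shrinks by cos(latitude) away from the equator.
        step_rad = math.radians(arcsec / 3600.0)
        return EARTH_RADIUS_M * step_rad * math.cos(math.radians(latitude_deg))

    print(arcsec_to_meters(30))  # ~927 m: GTOPO30's "c. 1 km" at the equator
    print(arcsec_to_meters(3))   # ~93 m: SRTM's "around 90 m" posting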
A novel global DEM with postings below 12 m and a height accuracy of less than 2 m is expected from the TanDEM-X satellite mission, which started in July 2010. The most common grid (raster) spacing is between 50 and 500 meters. In gravimetry, for example, the primary grid may be 50 m, but is switched to 100 or 500 meters at distances of about 5 or 10 kilometers. Since 2002, the HRS instrument on SPOT 5 has acquired over 100 million square kilometers of stereo pairs, used to produce DEMs in DTED2 format (with a 30-meter posting) over 50 million km². The radar satellite RADARSAT-2 has been used by MacDonald, Dettwiler and Associates Ltd. to provide DEMs for commercial and military customers. In 2014, acquisitions from the radar satellites TerraSAR-X and TanDEM-X were to become available in the form of a uniform global coverage with a resolution of 12 meters. Since 2016, ALOS has provided a global 1-arc-second DSM free of charge, and a commercial 5-meter DSM/DTM. === Local === Many national mapping agencies produce their own DEMs, often of a higher resolution and quality, but frequently these have to be purchased, and the cost is usually prohibitive to all except public authorities and large corporations. DEMs are often a product of national lidar dataset programs. Free DEMs are also available for Mars: the MEGDR, or Mission Experiment Gridded Data Record, from the Mars Global Surveyor's Mars Orbiter Laser Altimeter (MOLA) instrument; and NASA's Mars Digital Terrain Model (DTM). === Websites === OpenTopography is a web-based community resource for access to high-resolution, Earth science-oriented topography data (lidar and DEM data) and processing tools running on commodity and high-performance computing systems, along with educational resources. OpenTopography is based at the San Diego Supercomputer Center at the University of California San Diego and is operated in collaboration with colleagues in the School of Earth and Space Exploration at Arizona State University and UNAVCO. Core operational support for OpenTopography comes from the National Science Foundation, Division of Earth Sciences. OpenDemSearcher is a map client visualizing regions where free middle- and high-resolution DEMs are available. == See also == Ground slope and aspect (ground spatial gradient) Digital outcrop model Global Relief Model Physical terrain model Terrain cartography Terrain rendering === DEM file formats === Bathymetric Attributed Grid (BAG) DTED DIMAP Sentinel 1 ESA data base SDTS DEM USGS DEM == References == == Further reading == Wilson, J.P.; Gallant, J.C. (2000). "Chapter 1" (PDF). In Wilson, J.P.; Gallant, J.C. (eds.). Terrain Analysis: Principles and Applications. New York: Wiley. pp. 1–27. ISBN 978-0-471-32188-0. Retrieved 2007-02-16. Hirt, C.; Filmer, M.S.; Featherstone, W.E. (2010). "Comparison and validation of recent freely-available ASTER-GDEM ver1, SRTM ver4.1 and GEODATA DEM-9S ver3 digital elevation models over Australia". Australian Journal of Earth Sciences. 57 (3): 337–347. Bibcode:2010AuJES..57..337H. doi:10.1080/08120091003677553. hdl:20.500.11937/43846. S2CID 140651372. Retrieved May 5, 2012. Rexer, M.; Hirt, C. (2014). "Comparison of free high-resolution digital elevation data sets (ASTER GDEM2, SRTM v2.1/v4.1) and validation against accurate heights from the Australian National Gravity Database" (PDF). Australian Journal of Earth Sciences. 61 (2): 213–226. Bibcode:2014AuJES..61..213R. doi:10.1080/08120099.2014.884983. hdl:20.500.11937/38264. S2CID 3783826.
Archived from the original (PDF) on June 7, 2016. Retrieved April 24, 2014. == External links == DEM Quality Comparison Terrainmap.com Maps-for-free.com Geo-Spatial Data Acquisition Archived 2013-08-22 at the Wayback Machine Elevation Mapper, Create geo-referenced elevation maps Data products Satellite Geodesy by Scripps Institution of Oceanography Shuttle Radar Topography Mission by NASA/JPL Global 30 Arc-Second Elevation (GTOPO30) by the U.S. Geological Survey Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010) by the U.S. Geological Survey Earth2014 by Technische Universität München Sonny's LiDAR Digital Terrain Models of Europe
Wikipedia/Digital_Elevation_Model
The Ordnance Survey (OS) is the national mapping agency for Great Britain. The agency's name indicates its original military purpose (see ordnance and surveying), which was to map Scotland in the wake of the Jacobite rising of 1745. There was also a more general and nationwide need in light of the potential threat of invasion during the Napoleonic Wars. Since 1 April 2015, the Ordnance Survey has operated as Ordnance Survey Ltd, a government-owned company, 100% in public ownership. The Ordnance Survey Board remains accountable to the Secretary of State for Science, Innovation and Technology. It was also a member of the Public Data Group. Paper maps represent only 5% of the company's annual revenue. It produces digital map data, online route planning and sharing services and mobile apps, plus many other location-based products for business, government and consumers. Ordnance Survey mapping is usually classified as either "large-scale" (in other words, more detailed) or "small-scale". The Survey's large-scale mapping comprises 1:2,500 maps for urban areas and 1:10,000 more generally. (The latter superseded the 1:10,560 "six inches to the mile" scale in the 1950s.) These large-scale maps are typically used in professional land-use contexts and were available as sheets until the 1980s, when they were digitised. Small-scale mapping for leisure use includes the 1:25,000 "Explorer" series, the 1:50,000 "Landranger" series and the 1:250,000 road maps. These are still available in traditional sheet form. Ordnance Survey maps remain in copyright for 50 years after their publication. Some of the copyright libraries hold complete or near-complete collections of pre-digital OS mapping. == History == === Origins === The origins of the Ordnance Survey lie in the aftermath of the Jacobite rising of 1745. Prince William, Duke of Cumberland realised that the British Army did not have a good map of the Scottish Highlands to locate Jacobite dissenters such as Simon Fraser, 11th Lord Lovat, so that they could be put on trial. In 1747, Lieutenant-Colonel David Watson proposed the compilation of a map of the Highlands to help in pacifying the region. In response, King George II charged Watson with making a military survey of the Highlands under the command of the Duke of Cumberland. Among Watson's assistants were William Roy, Paul Sandby and John Manson. The survey was produced at a scale of 1 inch to 1,000 yards (1:36,000) and included "the Duke of Cumberland's Map" (primarily by Watson and Roy), now held in the British Library. Roy later had an illustrious career in the Royal Engineers (RE), rising to the rank of General, and he was largely responsible for the British share of the work in determining the relative positions of the French and British royal observatories. This work was the starting point of the Principal Triangulation of Great Britain (1783–1853), and led to the creation of the Ordnance Survey itself. Roy's technical skills and leadership set the high standard for which the Ordnance Survey became known. Work was begun in earnest in 1790 under Roy's supervision, when the Board of Ordnance (a predecessor of part of the modern Ministry of Defence) began a national military survey starting with the south coast of England. Roy's birthplace near Carluke in South Lanarkshire is today marked by a memorial in the form of a large OS trig point.
By 1791, the Board received the newer Ramsden theodolite (an improved successor to the one that Roy had used in 1784), and work began on mapping southern Great Britain using a 5 mi (8 km) baseline on Hounslow Heath that Roy himself had previously measured; it crosses the present Heathrow Airport. In 1991, Royal Mail marked the bicentenary by issuing a set of postage stamps featuring maps of the Kentish village of Hamstreet. In 1801, the first one-inch-to-the-mile (1:63,360 scale) map was published, detailing the county of Kent, with Essex following shortly afterwards. The Kent map was published privately and stopped at the county border, while the Essex maps were published by the Ordnance Survey and ignored the county border, setting the trend for future Ordnance Survey maps. During the next 20 years, about a third of England and Wales was mapped at the same scale (see Principal Triangulation of Great Britain) under the direction of William Mudge, as other military matters took precedence. It took until 1823 to re-establish the relationship with the French survey made by Roy in 1787. By 1810, one-inch-to-the-mile maps of most of the south of England were completed, but they were withdrawn from sale between 1811 and 1816 because of security fears. By 1840, the one-inch survey had covered all of Wales and all but the six northernmost counties of England. Surveying was hard work. For instance, Major Thomas Colby, the longest-serving Director General of the Ordnance Survey, walked 586 mi (943 km) in 22 days on a reconnaissance in 1819. In 1824, Colby and most of his staff moved to Ireland to work on a six-inches-to-the-mile (1:10,560) valuation survey. The survey of Ireland, county by county, was completed in 1846. The suspicions and tensions it caused in rural Ireland are the subject of Brian Friel's play Translations. Colby was not only involved in the design of specialist measuring equipment. He also established a systematic collection of place names, and reorganised the map-making process to produce clear, accurate plans. Place names were recorded in "Name Books", a system first used in Ireland. The instructions for their use were: The persons employed on the survey are to endeavour to obtain the correct orthography of the names of places by diligently consulting the best authorities within their reach. The name of each place is to be inserted as it is commonly spelt, in the first column of the name book and the various modes of spelling it used in books, writings &c. are to be inserted in the second column, with the authority placed in the third column opposite to each. Whilst these procedures generally produced excellent results, mistakes were made: for instance, the name Pilgrims' Way was applied to the wrong route across the North Downs, but the name stuck. Similarly, the spelling of Scafell and Scafell Pike copied an error on an earlier map, and was retained as this was the name of a corner of one of the Principal Triangles, despite "Scawfell" being the almost universal form at the time. Colby believed in leading from the front, travelling with his men, helping to build camps and, as each survey session drew to a close, arranging mountain-top parties with enormous plum puddings. The British Geological Survey was founded in 1835 as the Ordnance Geological Survey under Henry De la Beche, and remained a branch of the Ordnance Survey until 1965. At the same time, the uneven quality of the English and Scottish maps was being improved by engravers under Benjamin Baker.
By the time Colby retired in 1846, the production of six-inch maps of Ireland was complete. This had led to a demand for similar treatment in England, and work was proceeding on extending the six-inch map to northern England, but at only a three-inch scale for most of Scotland. When Colby retired, he recommended William Yolland as his successor, but he was considered too young and the less experienced Lewis Alexander Hall was appointed. After a fire in the Tower of London, the headquarters of the survey was moved in 1841 to Southampton, taking over buildings previously occupied by a military orphanage (the Royal Military Asylum), and Yolland was put in charge, but Hall sent him off to Ireland so that when Hall left in 1854 Yolland was again passed over in favour of Major Henry James. Hall was enthusiastic about extending the survey of the north of England to a scale of 1:2,500. In 1855, the Board of Ordnance was abolished and the Ordnance Survey was placed under the War Office together with the Topographical Survey and the Depot of Military Knowledge. Eventually, in 1870, it was transferred to the Office of Works. The primary triangulation of the United Kingdom by Roy, Mudge and Yolland was completed by 1841, but was greatly improved by Alexander Ross Clarke, whose new survey based on Airy's spheroid was finished in 1858, concluding the Principal Triangulation. The following year, he completed an initial levelling of the country. === Great Britain "County Series" === After the Ordnance Survey published its first large-scale maps of Ireland in the mid-1830s, the Tithe Act 1836 led to calls for a similar six-inch to the mile survey in England and Wales. Official procrastination followed, but the development of the railways added to pressure that resulted in the Ordnance Survey Act 1841 (4 & 5 Vict. c. 30). This granted a right to enter property for the purpose of the survey. Following the fire at its headquarters at the Tower of London in 1841, the Ordnance Survey relocated to a site in Southampton and was in disarray for several years, with arguments about which scales to use. Major-General Sir Henry James was by then Director General, and he saw how photography could be used to make maps of various scales cheaply and easily. He developed and exploited photozincography, not only to reduce the costs of map production but also to publish facsimiles of nationally important manuscripts. Between 1861 and 1864, a facsimile of the Domesday Book was issued, county by county; and a facsimile of the Gough Map was issued in 1870. From the 1840s, the Ordnance Survey concentrated on the Great Britain "County Series", modelled on the earlier Ireland survey. A start was made on mapping the whole country, county by county, at six inches to the mile (1:10,560). In 1854, "twenty-five inch" maps were introduced at a scale of 1:2500 (25.344 inches to the mile), and the six-inch maps were then based on these twenty-five inch maps. The first edition of the two scales was completed by the 1890s, with a second edition completed in the 1890s and 1900s. From 1907 till the early 1940s, a third edition (or "second revision") was begun but never completed: only areas with significant changes on the ground were revised, many two or three times. Meanwhile, publication of the one-inch to the mile series for Great Britain was completed in 1891.
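The "inches to the mile" descriptions used throughout this period convert directly into the representative fractions quoted above, since one mile is 63,360 inches. A minimal sketch of the arithmetic:

    MILE_IN_INCHES = 63360  # 1 mile = 1,760 yards = 63,360 inches

    def inches_per_mile_to_ratio(inches):
        # Map-inches per ground-mile expressed as a representative fraction.
        return MILE_IN_INCHES / inches

    assert inches_per_mile_to_ratio(1) == 63360             # one-inch series
    assert inches_per_mile_to_ratio(6) == 10560             # six-inch County Series
    assert round(inches_per_mile_to_ratio(25.344)) == 2500  # "twenty-five inch" maps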
From the late 19th century to the early 1940s, the OS produced many "restricted" versions of the County Series maps and other War Department sheets for War Office purposes, in a variety of large scales that included details of military significance such as dockyards, naval installations, fortifications and military camps. Apart from a brief period during the disarmament talks of the 1930s, these areas were left blank or incomplete on standard maps. The War Department 1:2500s, unlike the standard issue, were contoured. The de-classified sheets have now been deposited in some of the Copyright Libraries, helping to complete the map-picture of pre-Second World War Britain. === City and town mapping, 19th and early 20th century === From 1824, the OS began a 6-inch (1:10,560) survey of Ireland for taxation purposes but found this to be inadequate for urban areas and adopted the five-foot scale (1:1056) for Irish cities and towns. From 1840, the six-inch standard was adopted in Great Britain for the un-surveyed northern counties and the 1:1056 scale also began to be adopted for urban surveys. Between 1842 and 1895, some 400 towns were mapped at 1:500 (126 inches), 1:528 (120 inches, "10 foot scale") or 1:1056 (60 inches), with the remaining towns mapped at 1:2500 (~25 inches). In 1855, the Treasury authorised funding for 1:2500 for rural areas and 1:500 for urban areas. The 1:500 scale was considered more 'rational' than 1:528 and became known as the "sanitary scale" since its primary purpose was to support establishment of mains sewerage and water supply. However, a review of the Ordnance Survey in 1892 found that sales of the 1:500 series maps were very poor and the Treasury declined to fund their continuing maintenance, declaring that any revision or new mapping at this scale must be self-financing. Very few towns and cities saw a second edition of the town plans: by 1909 only fourteen places had paid for updates. The review determined that revision of 1:2500 mapping should proceed apace. The most detailed mapping of London was the OS's 1:1056 survey between 1862 and 1872, which took 326 sheets to cover the capital; a second edition (which needed 759 sheets because of urban expansion) was completed and brought out between 1891 and 1895. London was unusual in that land registration on transfer of title was made compulsory there in 1900. The 1:1056 sheets were partially revised to provide a basis for HM Land Registry index maps and the OS mapped the whole London County Council area (at 1:1056) at national expense. Placenames from the second edition were used in 2016 by the GB1900 project to crowd-source an open-licensed gazetteer of Great Britain. From 1911 onwards – and mainly between 1911 and 1913 – the Ordnance Survey photo-enlarged many 1:2500 sheets covering built-up areas to 1:1250 (50.688 inches to the mile) for Land Valuation and Inland Revenue purposes: the increased scale was to provide space for annotations. About a quarter of these 1:1250s were marked "Partially revised 1912/13". In areas where there were no further 1:2500s, these partially revised "fifty inch" sheets represent the last large-scale revision (larger than six-inch) of the County Series. The County Series mapping was superseded by the Ordnance Survey National Grid 1:1250s, 1:2500s and 1:10,560s after the Second World War. === 20th century === During World War I, the Ordnance Survey was involved in preparing maps of France and Belgium. 
During World War II, many more maps were created, including a 1:40,000 map of Antwerp, Belgium; a 1:100,000 map of Brussels, Belgium; a 1:5,000,000 map of South Africa; a 1:250,000 map of Italy; a 1:50,000 map of north-east France; and a 1:30,000 map of the Netherlands with a manuscript outline of districts occupied by the German Army. After the war, Colonel Charles Close, then Director General, developed a strategy using covers designed by Ellis Martin to increase sales in the leisure market. In 1920 O. G. S. Crawford was appointed Archaeology Officer and played a prominent role in developing the use of aerial photography to deepen understanding of archaeology. In 1922, devolution in Northern Ireland led to the creation of the Ordnance Survey of Northern Ireland (OSNI) and the independence of the Irish Free State led to the creation of the Ordnance Survey of Ireland, so the original Ordnance Survey pulled its coverage back to Great Britain. In 1935, the Davidson Committee was established to review the Ordnance Survey's future. The new Director General, Major-General Malcolm MacLeod, started the retriangulation of Great Britain, an immense task involving the erection of concrete triangulation pillars ("trig points") on prominent hilltops as infallible positions for theodolites. Each measurement made by theodolite during the retriangulation was repeated no fewer than 32 times. The Davidson Committee's final report set the Ordnance Survey on course for the 20th century. The metric national grid reference system was launched and a 1:25000-scale series of maps was introduced. The one-inch maps continued to be produced until the 1970s, when they were superseded by the 1:50000-scale series – as proposed by William Roy more than two centuries earlier. The Ordnance Survey had outgrown its site in the centre of Southampton, a problem made worse by the bomb damage of the Second World War. The bombing during the Blitz devastated Southampton in November 1940 and destroyed most of the Ordnance Survey's city centre offices. Staff were dispersed to other buildings and to temporary accommodation at Chessington and Esher, Surrey, where they produced 1:25000 scale maps of France, Italy, Germany and most of the rest of Europe in preparation for its invasion. The Ordnance Survey largely remained at its Southampton city centre HQ and at temporary buildings in the nearby suburb of Maybush until 1969, when a new purpose-built headquarters was opened in Maybush adjacent to the wartime temporary buildings. Some of the remaining buildings of the original Southampton city-centre site are now used as part of the city's court complex. The new head office building was designed by the Ministry of Public Building and Works for 4000 staff, including many new recruits who were taken on in the late 1960s and early 1970s as draughtsmen and surveyors. The buildings originally contained factory-floor space for photographic processes such as heliozincography and map printing, as well as large buildings for storing flat maps. Above the industrial areas were extensive office areas. The complex was notable for its concrete mural, Celestial, by sculptor Keith McCarter, and for the concrete elliptical paraboloid shell roof over the staff restaurant building. In 1995, the Ordnance Survey digitised the last of about 230,000 maps, making the United Kingdom the first country in the world to complete a programme of large-scale electronic mapping.
By the late 1990s technological developments had eliminated the need for vast areas for storing maps and for making printing plates by hand. Although there was a small computer section at the Ordnance Survey in the 1960s, the digitising programme had replaced the need for printing large-scale maps, while computer-to-plate technology (in the form of a single machine) had also rendered the photographic platemaking areas obsolete. Part of the latter was converted into a new conference centre in 2000, which was used for internal events and also made available for external organisations to hire. The Ordnance Survey became an Executive Agency in 1990, making the organisation independent of ministerial control. In 1999 the agency was designated a trading fund, required to cover its costs by charging for its products and to remit a proportion of its profits to the Treasury. === 21st century === In 2010, OS announced that printing and warehouse operations were to be outsourced, ending over 200 years of in-house printing. The Frome-based firm Butler, Tanner and Dennis (BT&D) secured its printing contract. As already stated, large-scale maps had not been printed at the Ordnance Survey since the common availability of geographical information systems (GISs), but, until late 2010, the OS Explorer and OS Landranger series were printed in Maybush. In April 2009 building of a new head office began in Adanac Park on the outskirts of Southampton. By 10 February 2011 virtually all staff had relocated to the new "Explorer House" building and the old site had been sold off and redeveloped. Prince Philip officially opened the new headquarters building on 4 October 2011. On 22 January 2015 plans were announced for the organisation to move from a trading fund model to a government-owned limited company, with the move completed in April 2015. The organisation remains fully owned by the UK government and retains many of the features of a public organisation. In September 2015 the history of the Ordnance Survey was the subject of a BBC Four TV documentary entitled A Very British Map: The Ordnance Survey Story. On 10 June 2019 the Department for Business, Energy and Industrial Strategy (BEIS) appointed Steve Blair as the Chief Executive of the Ordnance Survey. The Ordnance Survey supported the launch of the Slow Ways initiative, which encourages users to walk on lesser-used paths between UK towns. On 7 February 2023, ownership of Ordnance Survey Ltd passed to the newly formed Department for Science, Innovation and Technology. == Map range == The Ordnance Survey produces a large range of paper maps and digital mapping products. === OS MasterMap === The Ordnance Survey's flagship digital product, launched in November 2001, is OS MasterMap, a database that records, in one continuous digital map, every fixed feature of Great Britain larger than a few metres. Every feature is given a unique TOID (TOpographical IDentifier), a simple identifier that includes no semantic information. Typically, each TOID is associated with a polygon that represents the area on the ground that the feature covers, in National Grid coordinates. OS MasterMap is offered in themed layers, each linked to a number of TOIDs. In September 2010, the layers were: Topography The primary layer of OS MasterMap, consisting of vector data comprising large-scale representations of features in the real world, such as buildings and areas of vegetation.
The features captured and the way they are depicted are listed in a specification available on the Ordnance Survey website. Integrated transport network A link-and-node network of transport features such as roads and railways. This data is at the heart of many satnav systems. In an attempt to reduce the number of HGVs using unsuitable roads, a data-capture programme of "Road Routing Information" was undertaken by 2015, aiming to add information such as height restrictions and one-way streets. Imagery Orthorectified aerial photography in raster format. Address An overlay adding every address in the UK to other layers. Address 2 Adds further information to the Address layer, such as addresses with multiple occupants (blocks of flats, student houses, etc.) and objects with no postal addresses, such as fields and electricity substations. ITN was withdrawn in April 2019 and replaced by OS MasterMap Highways Network. The Address layers were withdrawn in about 2016 with the information now being available in the AddressBase products – so as of 2020, MasterMap consists of Topography and Imagery. Pricing of licenses to OS MasterMap data depends on the total area requested, the layers licensed, the number of TOIDs in the layers, and the period in years of the data usage. OS MasterMap can be used to generate maps for a vast array of purposes, and maps can be printed from OS MasterMap data with detail equivalent to a traditional 1:1250 scale paper map. The Ordnance Survey states that, thanks to continuous review, OS MasterMap data is never more than six months out of date. The scale and detail of this mapping project are unique. By 2009, around 440 million TOIDs had been assigned, and the database stood at 600 gigabytes in size. As of March 2011, OS claims 450 million TOIDs. As of 2005, OS MasterMap was at version 6; 2010's version 8 includes provision for Urban Paths (an extension of the "integrated transport network" layer) and a pre-build address layer. All these versions have a similar GML schema. === Business mapping === The Ordnance Survey produces a wide variety of different products aimed at business users, such as utility companies and local authorities. The data is supplied by the Ordnance Survey on optical media or, increasingly, via the Internet. Products can be downloaded via FTP or accessed 'on demand' via a web browser. Organisations using Ordnance Survey data have to purchase a licence to do so. Some of the main products are: OS MasterMap The Ordnance Survey's most detailed mapping, showing individual buildings and other features in a vector format. Every real-world object is assigned a unique reference number (TOID) that allows customers to add this reference to their own databases. OS MasterMap consists of several so-called "layers" such as the aerial imagery, transport and postcode layers. The principal layer is the topographic layer. OS VectorMap Local A customisable vector product at 1:10,000 scale. Meridian 2, Strategi Mid-scale mapping in vector format. Boundary-Line Mapping showing administrative boundaries such as counties, parishes and electoral wards. Raster versions of leisure maps 1:10,000, 1:25,000, 1:50,000 and 1:250,000 scale raster mapping === Leisure maps === OS's range of leisure maps are published in a variety of scales: Tour (c. 1:100,000, except Scotland) One-sheet maps covering a generally county-sized area, showing major and most minor roads and containing tourist information and selected footpaths. Tour maps are generally produced from enlargements of 1:250,000 mapping.
Several larger-scale town maps are provided on each sheet for major settlement centres. The maps have sky-blue covers, there are eight sheets in the series, and scales vary. OS Landranger (1:50,000) The "general purpose" map. They have pink covers; 204 sheets cover the whole of Great Britain and the Isle of Man. The map shows all footpaths and the format is similar to the Explorer maps, but with less detail. OS Landranger Active (1:50,000) Select OS Landranger maps available in a plastic-laminated waterproof version, similar to the OS Explorer Active range. As of October 2009, 25 of the 204 Landranger maps were available as OS Landranger Active maps. OS Explorer (1:25,000) Specifically designed for walkers and cyclists. They have orange covers; 403 sheets cover the whole of Great Britain (the Isle of Man is excluded from this series). These are the most detailed leisure maps that the Ordnance Survey publishes, covering all types of footpaths and most details of the countryside for easy navigation. The OL-branded sheets within the Explorer series show areas of greater interest (including the Lake District, the Black Mountains, etc.) with an enlarged area coverage. They appear identical to the ordinary Explorer maps, except for the numbering and a little yellow mark on the corner (a relic of the old Outdoor Leisure series). The OS Explorer maps, together with the former Outdoor Leisure series, superseded the numerous green-covered Pathfinder maps. In May 2015 the Ordnance Survey announced that the new release of OL series maps would come with a mobile download version, available through a dedicated app on Android and iOS devices. It is expected that this will be rolled out to all the Explorer and Landranger series over time. OS Explorer Active (1:25,000) OS Explorer and Outdoor Leisure maps in a plastic-laminated waterproof version. Activity Maps An experimental range of maps designed to support specific activities. The four map packs currently published are Off-Road Cycling Hampshire North, South, East and West. Each map pack contains 12 cycle routes printed on individual map sheets on waterproof paper. While they are based on the 1:25,000 scale maps, the scales have been adjusted so each route fits on a single A4 sheet. Route (1:625,000; discontinued 2010) A double-sided map designed for long-distance road users, covering the whole of Great Britain. Road (1:250,000; discontinued 2010) A series of eight sheets covering Great Britain, designed for road users. The last two, along with fifteen Tour maps, were discontinued during January 2010 as part of a drive for cost efficiency following the Great Recession. The Road series was reintroduced in September 2016. === App development === In 2013, the Ordnance Survey released its first official app, OS MapFinder (still available, but no longer maintained), and has since added three more apps. In 2021, OS Maps added coverage in Australia. OS Maps Available on iOS and Android, the free-to-download app allows users to access maps direct to their devices, plan and record routes and share routes with others. Users can subscribe and download OS Landranger and OS Explorer high-resolution maps in 660dpi quality and use them without incurring roaming charges, as maps are stored on the device and can be used offline – without Wi-Fi or mobile signal.
OS Maps Web Available as a web page, it allows users to access maps from the web using modern web browsers; planning of custom routes and printing of maps are possible, similar to what the mobile applications can do. OS Locate Launched in February 2014 and available on iOS and Android, the free app is a fast and highly accurate means of pinpointing a user's exact location; it displays grid reference, latitude, longitude and altitude. OS Locate does not need a mobile signal to function, relying instead on the device's inbuilt GPS. === Custom products === The Ordnance Survey also offers OS Custom Made, a print-on-demand service based on digital raster data that allows a customer to specify the area of the map or maps desired. Two scales are offered – 1:50,000 (equivalent to 40 km by 40 km) or 1:25,000 (20 km by 20 km) – and the maps may be produced either folded or flat for framing or wall mounting. Customers may provide their own titles and cover images for folded maps. The Ordnance Survey also produces more detailed custom mapping to order, at 1:1,250 or 1:500 (Siteplan), from its large-scale digital data. Custom scales may also be produced from the enlargement or reduction of the existing scales. === Educational mapping === The Ordnance Survey supplies reproductions of its maps from the early 1970s to the 1990s for educational use. These are widely seen in schools both in Britain and in former British colonies, either as stand-alone geographic aids or as part of geography textbooks or workbooks. During the 2000s, in an attempt to increase schoolchildren's awareness of maps, the Ordnance Survey offered a free OS Explorer Map to every 11-year-old in UK primary education. By the end of 2010, when the scheme closed, over 6 million maps had been given away. The scheme was replaced by free access to the Digimap for Schools service provided by EDINA for eligible schools. With the trend away from paper products towards geographical information systems (GISs), the Ordnance Survey has been looking into ways of ensuring schoolchildren are made aware of the benefits of GISs and has launched "MapZone", an interactive child-orientated website featuring learning resources and map-related games. The Ordnance Survey publishes a quarterly journal, principally for geography teachers, called Mapping News. === Derivative and licensed products === Bing Maps offers OS data as a layer for the whole of the UK. Philip's publishes OS data in its road and street atlases in book format. One series of historic maps, published by Cassini, is a reprint of the Ordnance Survey first series from the mid-19th century, but using the OS Landranger projection at 1:50,000 and given 1 km gridlines. This means that features from over 150 years ago fit almost exactly over their modern equivalents, and modern grid references can be given to old features. The digitisation of the data has allowed the Ordnance Survey to sell maps electronically. Several companies are now licensed to produce the popular scales (1:50,000 and 1:25,000) and their own derived datasets of the map on CD/DVD, or to make them available online for download. The buyer typically has the right to view the maps on a PC, a laptop, and a pocket PC/smartphone, and to print off any number of copies. The accompanying software is GPS-aware, and the maps are ready-calibrated. Thus, the user can quickly transfer the desired area from their PC to their laptop or smartphone, and go for a drive or walk with their position continually pinpointed on the screen.
The individual map is more expensive than the equivalent paper version, but the price per square km falls rapidly with the size of coverage bought. === Free access to historic mapping === The National Library of Scotland provides free access to OS mapping from 1840 to 1970, in a variety of scales from 1:1056 "five foot" maps of London to 1:625,000 "ten mile" national planning maps. In addition, SABRE Maps provides free access to OS mapping from the end of World War I to the 1970s at small and intermediate scales, including 1:25,000, One Inch, Half Inch, Quarter Inch and Ten Mile, usually with a wider coverage of individual revisions than the NLS. === History of 1:63360 and 1:50000 map publications === == Cartography and geodesy == The Ordnance Survey's original maps were made by triangulation. For the second survey, in 1934, this process was used again and resulted in the building of many triangulation pillars (trig points): short (c. 4 feet/1.2 m high), usually square, concrete or stone pillars at prominent locations such as hill tops. Their precise locations were determined by triangulation, and the details in between were then filled in with less precise methods. Modern Ordnance Survey maps are largely based on orthorectified aerial photographs, but large numbers of the triangulation pillars remain, many of them adopted by private landowners. The Ordnance Survey still has a team of surveyors across Great Britain who visit in person and survey areas that cannot be surveyed using photogrammetric methods (such as land obscured by vegetation), and there is an aim of ensuring that any major feature (such as a new motorway or large housing development) is surveyed within six months of being built. While original survey methods were largely manual, the current surveying task is simplified by the use of Global Navigation Satellite System technology, allowing the most precise surveying standards yet. The Ordnance Survey is responsible for a UK-wide network of continually operating GNSS stations known as "OS Net". These are used for surveying, and other organisations can purchase the right to use the network for their own purposes. The Ordnance Survey still maintains a set of master geodetic reference points to tie Ordnance Survey geographic datum points to modern measurement systems such as GPS. Ordnance Survey maps of Great Britain use the Ordnance Survey National Grid rather than latitude and longitude to indicate position; eastings and northings in metres are abbreviated to a pair of grid letters and a shortened numeric reference (a sketch of the lettering scheme appears at the end of this section). The Grid is known technically as OSGB36 (Ordnance Survey Great Britain 1936) and was introduced after the 1936–1953 retriangulation. For recording heights on the British mainland, the Ordnance Survey maintains an orthometric system referenced to Ordnance Datum Newlyn, which is a height datum defined by mean sea level as measured in Newlyn, Cornwall, between 1915 and 1921. In 2016 the Ordnance Survey redefined Ordnance Datum Newlyn, causing a general upward shift of c. 25 mm; one effect was that the hill Calf Top became a mountain. The Ordnance Survey's CartoDesign team performs a key role in the organisation, as the authority for cartographic design and development, and engages with internal and external audiences to promote and communicate the value of cartography. The team works on a broad range of projects and is responsible for styling all new products and services.
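As a rough illustration of that lettering scheme, the sketch below converts an all-numeric easting and northing (in metres) into the familiar letter-pair grid reference. This is a minimal sketch in Python, not Ordnance Survey code; the function name and the sample coordinates are illustrative, and it assumes the standard 500 km / 100 km grid-letter layout, whose alphabet omits "I".

def osgb_grid_ref(easting: float, northing: float, digits: int = 6) -> str:
    """Convert an OSGB36 easting/northing in metres to a letter-pair
    National Grid reference, e.g. (651409, 313177) -> 'TG 514 131'."""
    e100k, n100k = int(easting // 100_000), int(northing // 100_000)
    if not (0 <= e100k <= 6 and 0 <= n100k <= 12):
        raise ValueError("coordinates lie outside the National Grid")
    # Indices of the 500 km and 100 km grid letters.
    l1 = (19 - n100k) - (19 - n100k) % 5 + (e100k + 10) // 5
    l2 = (19 - n100k) * 5 % 25 + e100k % 5
    l1 += l1 > 7  # the grid alphabet skips 'I'
    l2 += l2 > 7
    letters = chr(ord("A") + l1) + chr(ord("A") + l2)
    n = digits // 2
    e_part = int(easting % 100_000) // 10 ** (5 - n)
    n_part = int(northing % 100_000) // 10 ** (5 - n)
    return f"{letters} {e_part:0{n}d} {n_part:0{n}d}"

# An arbitrary point in the Southampton area:
# osgb_grid_ref(438700, 114800) -> 'SU 387 148'

Shortening the numeric part trades precision for brevity: a six-figure reference such as "SU 387 148" locates a point only to within 100 m.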
== Research == For several decades the Ordnance Survey has had a research department that is active in several areas of geographical information science, including: Spatial cognition Map generalisation Spatial data modelling Remote sensing and analysis of remotely sensed data Semantics and ontologies The Ordnance Survey actively supports the academic research community through its external research and university liaison team. The research department actively supports MSc and PhD students as well as engaging in collaborative research. Most Ordnance Survey products are available to UK universities that have signed up to the Digimap agreement, and data is also made available for research purposes that advance the Ordnance Survey's own research agenda. == Data access and criticisms == The Ordnance Survey has been subject to criticism. Most centres on the point that Ordnance Survey possesses a virtual government monopoly on geographic data in the UK but, although a government agency, was required to act as a trading fund (i.e. a commercial entity) from 1999 to 2015. This meant that it was expected to be entirely self-funded from the commercial sale of its data and derived products while at the same time serving as the public supplier of geographical information. In 1985, the Committee of Enquiry into the Handling of Geographic Information was set up to "advise the Secretary of State for the Environment within two years on the future handling of geographic information in the UK, taking account of modern developments in information technology and market needs". The committee's final report, published in 1987 under the name of its chairman Roger Chorley, stressed the importance of accessible geographic information to the UK and recommended a loosening of policies on distribution and cost recovery. In 2007 the Ordnance Survey was criticised for contracting the public relations company Mandate Communications to understand the dynamics of the free data movement and discover which politicians and advisers continued to support its current policies. === OS OpenData === In response to feedback from a consultation, Policy options for geographic information from Ordnance Survey, the government announced that a package of Ordnance Survey data sets would be released for free use and re-use. On 1 April 2010 the Ordnance Survey released the brand OS OpenData under an attribution-only licence compatible with CC-BY. Various groups and individuals had campaigned for this release of data, but some were disappointed when certain profitable datasets, including the leisure 1:50,000 scale and 1:25,000 scale mapping, as well as the large-scale MasterMap, were not included. These were withheld, with the counter-argument that if licensees did not pay for OS data collection, the government would have to be willing to foot a bill of some £30 million per annum to secure the future economic benefit of sharing the mapping. In mid-2013 the Ordnance Survey described an "enhanced" linked-data service with a SPARQL 1.1-compliant endpoint and bulk-download options. In June 2018, following the recommendations of the Geospatial Commission, part of the Cabinet Office, it was announced that parts of OS MasterMap would be released under the Open Government Licence.
These would include: property extents created from the OS MasterMap Topography Layer; and TOIDs from the OS MasterMap Topography Layer, by integration into OpenMap Local. Other data would be made available free to small businesses (under a transaction threshold): OS MasterMap Topography Layer, including building heights and functional sites; OS MasterMap Greenspace Layer; OS MasterMap Highways Network; OS MasterMap Water Network Layer; and OS Detailed Path Network. These are available through APIs on the OS Data Hub. === Historical material === Ordnance Survey historical works are generally available, as the agency is covered by Crown Copyright: works more than fifty years old, including historic surveys of Britain and Ireland and much of the New Popular Edition, are in the public domain. However, finding suitable originals remains an issue, as the Ordnance Survey does not provide historical mapping on "free" terms, instead marketing commercially "enhanced" reproductions in partnership with companies including GroundSure and Landmark. The National Library of Scotland has been developing its archive to make Ordnance Survey maps for all of Great Britain more easily available through its website, whilst the Society for All British and Irish Road Enthusiasts (SABRE) also has a large, easily accessible archive of Ordnance Survey maps across all of Great Britain, often with almost complete sets of all relevant map revisions. Wikimedia Commons has complete sets of scans of the Old/First series one-inch maps of England and Wales; of the Old/First series one-inch maps of Scotland; of the Seventh Series One-inch maps of Great Britain (1952–1967); of the Third Edition quarter-inch maps of England and Wales; and of the Fifth Series quarter-inch maps of Great Britain. These sets are complete in the sense of including at least one copy of each of the sheets in the series, not in the sense of including all revision levels. == See also == == References == === Notes === === Citations === === Sources === == External links == Official website Ordnance Survey research guide – The National Archives
Wikipedia/Ordnance_Survey
Ocean surface topography or sea surface topography, also called ocean dynamic topography, consists of highs and lows on the ocean surface, similar to the hills and valleys of Earth's land surface depicted on a topographic map. These variations are expressed in terms of average sea surface height (SSH) relative to Earth's geoid. The main purpose of measuring ocean surface topography is to understand the large-scale ocean circulation. == Time variations == Unaveraged or instantaneous sea surface height (SSH) is most obviously affected by the tidal forces of the Moon and by the seasonal cycle of the Sun acting on Earth. Over timescales longer than a year, the patterns in SSH can be influenced by ocean circulation. Typically, SSH anomalies resulting from these forces differ from the mean by less than ±1 m (3 ft) at the global scale. Other influences include changing interannual patterns of temperature, salinity, waves, tides and winds. Ocean surface topography can be measured with high accuracy and precision at regional to global scale by satellite altimetry (e.g. TOPEX/Poseidon). Slower and larger variations are due to changes in Earth's gravitational field (geoid) caused by melting ice, rearrangement of continents, formation of sea mounts and other redistributions of rock. The combination of satellite gravimetry (e.g. GRACE and GRACE-FO) with altimetry can be used to determine sea level rise and properties such as ocean heat content. == Applications == Ocean surface topography is used to map ocean currents, which move around the ocean's "hills" and "valleys" in predictable ways. A clockwise sense of rotation is found around "hills" in the northern hemisphere and "valleys" in the southern hemisphere. This is because of the Coriolis effect. Conversely, a counterclockwise sense of rotation is found around "valleys" in the northern hemisphere and "hills" in the southern hemisphere. Ocean surface topography is also used to understand how the ocean moves heat around the globe, a critical component of Earth's climate, and for monitoring changes in global sea level. The collected data provide long-term information about the ocean and its currents. According to NASA, these data can also improve understanding of weather and climate and support navigation, fisheries management, and offshore operations. The observations are used to study ocean tides, circulation, and the amount of heat the ocean contains, and they can help predict short- and long-term changes in weather and Earth's climate. == Measurement == The sea surface height (SSH) is calculated by altimetry satellites relative to a reference ellipsoid: the satellite determines the distance to the sea surface by measuring the round-trip time of a radar pulse. Because the sea surface is far smoother than land topography, this range can be measured very precisely. The satellite's altitude with respect to the reference ellipsoid is calculated from its orbital parameters and various positioning instruments. However, the ellipsoid is not an equipotential surface of the Earth's gravity field, so the measurements must be referenced to a surface that represents the water flow, in this case the geoid.
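In equation form, the instantaneous SSH is the satellite's altitude above the reference ellipsoid minus the altimeter range, and the dynamic topography is the SSH minus the local geoid height. The following is a minimal sketch of this bookkeeping in Python, assuming the instrumental and atmospheric corrections have already been applied; all names and sample numbers are illustrative, not taken from any mission's data format.

C = 299_792_458.0  # speed of light, m/s

def sea_surface_height(sat_altitude_m: float, round_trip_s: float) -> float:
    """SSH above the reference ellipsoid: satellite altitude minus the
    one-way range inferred from the radar pulse's round-trip time."""
    altimeter_range_m = C * round_trip_s / 2.0
    return sat_altitude_m - altimeter_range_m

def dynamic_topography(ssh_m: float, geoid_height_m: float) -> float:
    """Ocean dynamic topography: SSH referenced to the geoid rather
    than to the ellipsoid."""
    return ssh_m - geoid_height_m

# Illustrative numbers: a satellite 1,336,000 m above the ellipsoid, an
# 8.9127e-3 s round trip, and a local geoid height of +18.3 m:
ssh = sea_surface_height(1_336_000.0, 8.9127e-3)
print(round(ssh, 1), round(dynamic_topography(ssh, 18.3), 1))  # ~19.9 and ~1.6

Note that the metre-scale dynamic signal is the small difference of two quantities each more than a thousand kilometres in magnitude, which is why orbit determination and range corrections must be so precise.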
The transformations between geometric heights (ellipsoid) and orthometric heights (geoid) are performed using a geoid model. The sea surface height is then the difference between the satellite's altitude relative to the reference ellipsoid and the altimeter range: the satellite sends microwave pulses to the ocean surface, and the travel time of each pulse down to the surface and back yields the range. The Jason-1 satellite, for example, used this measurement system. == Satellite missions == Currently nine satellites measure ocean surface topography: CryoSat-2, SARAL, Jason-3, Sentinel-3A and Sentinel-3B, CFOSat, HY-2B and HY-2C, and Sentinel-6 Michael Freilich (also called Jason-CS A). Jason-3 and Sentinel-6 Michael Freilich are both currently orbiting Earth in a tandem formation, approximately 330 kilometers apart. Ocean surface topography can be derived from ship-going measurements of temperature and salinity at depth. However, since 1992, a series of satellite altimetry missions, beginning with TOPEX/Poseidon and continuing with Jason-1, the Ocean Surface Topography Mission on the Jason-2 satellite, Jason-3 and now Sentinel-6 Michael Freilich, have measured sea surface height directly. By combining these measurements with gravity measurements from NASA's GRACE and ESA's GOCE missions, scientists can determine sea surface topography to within a few centimeters. Jason-1 was launched by a Boeing Delta II rocket from California in 2001 and continued the measurements initially collected by the TOPEX/Poseidon satellite, which orbited from 1992 until 2006. NASA and CNES, the French space agency, are joint partners in this mission. The main objectives of the Jason satellites are: to collect data on the average ocean circulation around the globe, in order to better understand its interaction with time-varying components and the mechanisms involved in initializing ocean models; to monitor low-frequency ocean variability, observing seasonal cycles and interannual variations such as El Niño and La Niña, the North Atlantic Oscillation, the Pacific Decadal Oscillation, and planetary waves crossing the oceans over periods of months, so that these can be modelled over long periods from the precise altimetric observations; to contribute to observations of mesoscale ocean variability, which affects all the oceans and is especially intense near western boundary currents; to monitor mean sea level, a key indicator of global warming; to improve tide modelling by observing more long-period components such as coastal interactions, internal waves, and tidal energy dissipation; and to supply knowledge supporting marine meteorology, the scientific study of the atmosphere over the sea. Jason-2 was launched on June 20, 2008, by a Delta II rocket from Vandenberg, California, and ended its mission on October 10, 2019. Jason-3 was launched on January 16, 2016, by a SpaceX Falcon 9 rocket from Vandenberg, as was Sentinel-6 Michael Freilich, launched on November 21, 2020. The long-term objectives of the Jason satellite series are to provide global descriptions of the seasonal and yearly changes of the circulation and heat storage in the ocean. This includes the study of short-term climatic changes such as El Niño and La Niña. The satellites measure global mean sea level and record its fluctuations.
They also track the slow change of upper-ocean circulation on decadal time scales; study the transport of heat and carbon in the ocean; examine the main components that fuel deep-water tides; help improve current and long-term measurements of wind speed and wave height; and, lastly, improve our knowledge of the marine geoid. For the first seven months that Jason-2 was in use, it was flown in extremely close proximity to Jason-1. Only one minute apart from each other, the satellites observed the same area of the ocean. The reason for the close proximity was cross-calibration: to detect any bias between the two altimeters. This multi-month observation showed that there was no bias in the data and that both collections of data were consistent. A new satellite mission, the Surface Water and Ocean Topography (SWOT) mission, has been proposed to make the first global survey of the topography of all of Earth's surface water: the ocean, lakes and rivers. This study is intended to provide a comprehensive view of Earth's freshwater bodies from space and much more detailed measurements of the ocean surface than ever before. == See also == Dynamic topography Eddy (fluid dynamics) SARAL Sea surface microlayer == References == == External links == Ocean Surface Topography from Space OSTM Instrument Description
Wikipedia/Sea-surface_topography
Nanotopography refers to specific surface features which form or are generated at the nanoscopic scale. While the term can describe a broad range of applications, ranging from integrated circuits to microfluidics, in practice it is typically applied to sub-micron textured surfaces as used in biomaterials research. == In nature == Several functional nanotopographies have been identified in nature. Certain surfaces, like that of the lotus leaf, have been understood to employ nanoscale textures for abiotic processes such as self-cleaning. Bio-mimetic applications of this discovery have since arrived in consumer products. In 2012, it was recognized that nanotopographies in nature are also used for antibiotic purposes. The wing of the cicada, the surface of which is covered in nanoscale pillars, induces lysis of bacteria. While the nano-pillars were not observed to prevent cell adhesion, they acted mechanistically to stretch microbial membranes to breakage. In vitro testing of the cicada wing demonstrated its efficacy against a variety of bacterial strains. == Manufacturing == Numerous technologies are available for the production of nanotopography. High-throughput techniques include plasma functionalization, abrasive blasting, and etching. Though low-cost, these processes are limited in the control and replicability of feature size and geometry. Techniques enabling greater feature precision exist, among them electron beam lithography and particle deposition, but they are slower and more resource-intensive by comparison. Alternatively, processes such as molecular self-assembly can be utilized, which provide an enhanced level of production speed and feature control. == Applications to medicine == Though the effects of nanotopography on cell behavior have been recognized only since 1964, some of the first practical applications of the technology are being realized in the field of medicine. Among the few clinical applications is the functionalization of titanium implant surfaces with nanotopography, generated with submersion etching and sand blasting. This technology has been the focal point of a diverse body of research aimed at improving post-operative integration of certain implant components. The determinant of integration varies, but as most titanium implants are orthopedics-oriented, osseointegration is the dominant aim of the field. == Applications to cell engineering == Nanotopography is readily applied to cell culture and has been shown to have a significant impact on cell behavior across different lineages. Substrate features in the nanoscale regime, down to the order of 9 nm, are able to retain some effect. Subjected solely to topographical cues, a wide variety of cells demonstrate responses including changes in cell growth and gene expression. Certain patterns are able to induce stem cells to differentiate down specific pathways. Notable results include osteogenic induction in the absence of media components, as well as near-total cell alignment as seen in smooth muscle. The potential of topographical cues to fulfill roles otherwise requiring xeno-based media components offers high translatability to clinical applications, as regulation and cost related to animal-derived products constitute a major roadblock in a number of cell-related technologies. == References ==
Wikipedia/Nanotopography
Earth science or geoscience includes all fields of natural science related to the planet Earth. It is a branch of science dealing with the physical, chemical, and biological constitution of, and the synergistic linkages among, Earth's four spheres: the biosphere, hydrosphere/cryosphere, atmosphere, and geosphere (or lithosphere). Earth science can be considered to be a branch of planetary science, but with a much older history. == Geology == Geology is broadly the study of Earth's structure, substance, and processes. Geology is largely the study of the lithosphere, or Earth's surface, including the crust and rocks. It includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. It incorporates aspects of chemistry, physics, and biology as elements of geology interact. Historical geology is the application of geology to interpret Earth history and how it has changed over time. Geochemistry studies the chemical components and processes of the Earth. Geophysics studies the physical properties of the Earth. Paleontology studies fossilized biological material in the lithosphere. Planetary geology studies geoscience as it pertains to extraterrestrial bodies. Geomorphology studies the origin of landscapes. Structural geology studies the deformation of rocks to produce mountains and lowlands. Resource geology studies how energy resources can be obtained from minerals. Environmental geology studies how pollution and contaminants affect soil and rock. Mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. Petrology is the study of rocks, including the formation and composition of rocks. Petrography is a branch of petrology that studies the typology and classification of rocks. == Earth's interior == Plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the Earth's crust. Beneath the Earth's crust lies the mantle, which is heated by the radioactive decay of heavy elements. The mantle, though almost entirely solid rock, deforms slowly over geological time and is in a state of semi-perpetual convection. This convection process causes the lithospheric plates to move, albeit slowly. The resulting process is known as plate tectonics. Areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the Earth are convergent boundaries, and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform (or conservative) boundaries. Earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. Plate tectonics might be thought of as the process by which the Earth is resurfaced. As the result of seafloor spreading, new crust and lithosphere are created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. Through subduction, oceanic crust and lithosphere return to the convecting mantle. Volcanoes result primarily from the melting of subducted crust material. Crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface, giving birth to volcanoes.
== Atmospheric science == Atmospheric science initially developed in the late 19th century as a means to forecast the weather through meteorology, the study of weather. Atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. Climatology studies the climate and climate change. The troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up Earth's atmosphere. 75% of the mass in the atmosphere is located within the troposphere, the lowest layer. In all, the atmosphere is made up of about 78.0% nitrogen, 20.9% oxygen, and 0.92% argon, plus small amounts of other gases including CO2 and water vapor. Water vapor and CO2 cause the Earth's atmosphere to catch and hold the Sun's energy through the greenhouse effect. This makes Earth's surface warm enough for liquid water and life. In addition to trapping heat, the atmosphere also protects living organisms by shielding the Earth's surface from cosmic rays. The magnetic field, created by the internal motions of the core, produces the magnetosphere, which protects Earth's atmosphere from the solar wind. As the Earth is 4.5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere. == Earth's magnetic field == == Hydrology == Hydrology is the study of the hydrosphere and the movement of water on Earth. It emphasizes the study of how humans use and interact with freshwater supplies. The study of water's movement is closely related to geomorphology and other branches of Earth science. Applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. Subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. Oceanography is the study of oceans. Hydrogeology is the study of groundwater. It includes the mapping of groundwater supplies and the analysis of groundwater contaminants. Applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make it available as drinking water. The earliest exploitation of groundwater resources dates back to 3000 BC, and hydrogeology as a science was developed by hydrologists beginning in the 17th century. Ecohydrology is the study of ecological systems in the hydrosphere. It can be divided into the physical study of aquatic ecosystems and the biological study of aquatic organisms. Ecohydrology includes the effects that organisms and aquatic ecosystems have on one another, as well as how these ecosystems are affected by humans. Glaciology is the study of the cryosphere, including glaciers and the coverage of the Earth by ice and snow. Concerns of glaciology include access to glacial freshwater, mitigation of glacial hazards, obtaining resources that exist beneath frozen land, and addressing the effects of climate change on the cryosphere. == Ecology == Ecology is the study of the biosphere. This includes the study of nature and of how living things interact with the Earth and one another, and the consequences of that. It considers how living things use resources such as oxygen, water, and nutrients from the Earth to sustain themselves. It also considers how humans and other living creatures cause changes to nature. == Physical geography == Physical geography is the study of Earth's systems and how they interact with one another as part of a single self-contained system.
It incorporates astronomy, mathematical geography, meteorology, climatology, geology, geomorphology, biology, biogeography, pedology, and soils geography. Physical geography is distinct from human geography, which studies the human populations on Earth, though it does include human effects on the environment. == Methodology == Methodologies vary depending on the nature of the subjects being studied. Studies typically fall into one of three categories: observational, experimental, or theoretical. Earth scientists often conduct sophisticated computer analysis or visit an interesting location to study earth phenomena (e.g. Antarctica or hot spot island chains). A foundational idea in Earth science is the notion of uniformitarianism, which states that "ancient geologic features are interpreted by understanding active processes that are readily observed." In other words, any geologic processes at work in the present have operated in the same ways throughout geologic time. This enables those who study Earth history to apply knowledge of how the Earth's processes operate in the present to gain insight into how the planet has evolved and changed throughout long history. == Earth's spheres == In Earth science, it is common to conceptualize the Earth's surface as consisting of several distinct layers, often referred to as spheres: the lithosphere, the hydrosphere, the atmosphere, and the biosphere, corresponding to rocks, water, air and life. This concept of spheres is a useful tool for understanding the Earth's surface and its various processes. Also included by some are the cryosphere (corresponding to ice) as a distinct portion of the hydrosphere and the pedosphere (corresponding to soil) as an active and intermixed sphere. The following fields of science are generally categorized within the Earth sciences: Geology describes the rocky parts of the Earth's crust (or lithosphere) and its historic development. Major subdisciplines are mineralogy and petrology, geomorphology, paleontology, stratigraphy, structural geology, engineering geology, and sedimentology. Physical geography focuses on geography as an Earth science. Physical geography is the study of Earth's seasons, climate, atmosphere, soil, streams, landforms, and oceans. Physical geography can be divided into several branches or related fields, as follows: geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology. Geophysics and geodesy investigate the shape of the Earth, its reaction to forces and its magnetic and gravity fields. Geophysicists explore the Earth's core and mantle as well as the tectonic and seismic activity of the lithosphere. Geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. Seismologists use geophysics to understand plate tectonic movement, as well as to predict seismic activity. Geochemistry studies the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. Geochemists use the tools and principles of chemistry to study the Earth's composition, structure, processes, and other physical aspects. Major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry. Soil science covers the outermost layer of the Earth's crust that is subject to soil formation processes (or pedosphere).
Major subdivisions in this field of study include edaphology and pedology. Ecology covers the interactions between organisms and their environment. This field of study differentiates the study of Earth from other planets in the Solar System, Earth being the only planet teeming with life. Hydrology, oceanography and limnology are studies which focus on the movement, distribution, and quality of the water and involve all the components of the hydrologic cycle on the Earth and its atmosphere (or hydrosphere). "Sub-disciplines of hydrology include hydrometeorology, surface water hydrology, hydrogeology, watershed science, forest hydrology, and water chemistry." Glaciology covers the icy parts of the Earth (or cryosphere). Atmospheric sciences cover the gaseous parts of the Earth (or atmosphere) between the surface and the exosphere (about 1000 km). Major subdisciplines include meteorology, climatology, atmospheric chemistry, and atmospheric physics. === Earth science breakup === == See also == == References == === Sources === == Further reading == == External links == Earth Science Picture of the Day, a service of Universities Space Research Association, sponsored by NASA Goddard Space Flight Center. Geoethics in Planetary and Space Exploration. Geology Buzz: Earth Science Archived 2021-11-04 at the Wayback Machine
Wikipedia/Geoscience
A digital elevation model (DEM) or digital surface model (DSM) is a 3D computer graphics representation of elevation data to represent terrain or overlying objects, commonly of a planet, moon, or asteroid. A "global DEM" refers to a discrete global grid. DEMs are often used in geographic information systems (GIS), and are the most common basis for digitally produced relief maps. A digital terrain model (DTM) represents specifically the ground surface, while DEM and DSM may represent the tree-top canopy or building roofs. While a DSM may be useful for landscape modeling, city modeling and visualization applications, a DTM is often required for flood or drainage modeling, land-use studies, geological applications, and other uses, including in planetary science. == Terminology == There is no universal usage of the terms digital elevation model (DEM), digital terrain model (DTM) and digital surface model (DSM) in scientific literature. In most cases the term digital surface model represents the Earth's surface and includes all objects on it. In contrast to a DSM, the digital terrain model (DTM) represents the bare ground surface without any objects like plants and buildings (see the figure on the right). DEM is often used as a generic term for DSMs and DTMs, only representing height information without any further definition about the surface. Other definitions equalise the terms DEM and DTM, equalise the terms DEM and DSM, define the DEM as a subset of the DTM, which also represents other morphological elements, or define a DEM as a rectangular grid and a DTM as a three-dimensional model (TIN). Most of the data providers (USGS, ERSDAC, CGIAR, Spot Image) use the term DEM as a generic term for DSMs and DTMs. Some datasets such as SRTM or the ASTER GDEM are originally DSMs (although in forested areas, SRTM reaches into the tree canopy, giving readings somewhere between a DSM and a DTM). DTMs are created from high-resolution DSM datasets using complex algorithms to filter out buildings and other objects, a process known as "bare-earth extraction". In the following, the term DEM is used as a generic term for DSMs and DTMs. == Types == A DEM can be represented as a raster (a grid of squares, also known as a heightmap when representing elevation) or as a vector-based triangular irregular network (TIN). The TIN DEM dataset is also referred to as a primary (measured) DEM, whereas the raster DEM is referred to as a secondary (computed) DEM. The DEM could be acquired through techniques such as photogrammetry, lidar, IfSAR or InSAR, land surveying, etc. (Li et al. 2005). DEMs are commonly built using data collected using remote sensing techniques, but they may also be built from land surveying. === Rendering === The digital elevation model itself consists of a matrix of numbers, but the data from a DEM is often rendered in visual form to make it understandable to humans. This visualization may be in the form of a contoured topographic map, or could use shading and false color assignment (or "pseudo-color") to render elevations as colors (for example, using green for the lowest elevations, shading to red, with white for the highest elevation). Visualizations are sometimes also done as oblique views, reconstructing a synthetic visual image of the terrain as it would appear looking down at an angle. In these oblique visualizations, elevations are sometimes scaled using "vertical exaggeration" in order to make subtle elevation differences more noticeable.
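A common way to produce such shaded renderings is hillshading: deriving slope and aspect from the elevation grid and lighting each cell from a chosen sun position. The sketch below is a minimal illustration in Python with NumPy, not taken from any particular GIS package; the toy array, cell size and sign conventions are assumptions, and aspect/azimuth conventions vary between packages.

import numpy as np

def hillshade(dem, cell_size, azimuth_deg=315.0, altitude_deg=45.0):
    """Shade a raster DEM (2-D array of elevations in metres) as if lit
    by a sun at the given azimuth and altitude angles."""
    az = np.radians(360.0 - azimuth_deg + 90.0)  # compass -> math angle
    alt = np.radians(altitude_deg)
    dz_dy, dz_dx = np.gradient(dem, cell_size)   # per-axis slopes
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(-dz_dx, dz_dy)
    shade = (np.sin(alt) * np.cos(slope)
             + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shade, 0.0, 1.0)              # 0 = shadow, 1 = fully lit

# A toy ridge on a 10 m grid, lit from the northwest:
toy = np.array([[0., 5., 5., 0.],
                [0., 10., 10., 0.],
                [0., 5., 5., 0.]])
print(hillshade(toy, cell_size=10.0).round(2))

Pseudo-color rendering is the complementary step: mapping each elevation value through a color ramp, often multiplied by the hillshade so that both height and relief remain visible.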
Some scientists object to vertical exaggeration as misleading the viewer about the true landscape. == Production == Mappers may prepare digital elevation models in a number of ways, but they frequently use remote sensing rather than direct survey data. Older methods of generating DEMs often involve interpolating digital contour maps that may have been produced by direct survey of the land surface. This method is still used in mountain areas, where interferometry is not always satisfactory. Note that contour line data, or any other sampled elevation datasets (by GPS or ground survey), are not DEMs, but may be considered digital terrain models. A DEM implies that elevation is available continuously at each location in the study area. === Satellite mapping === One powerful technique for generating digital elevation models is interferometric synthetic aperture radar, where two passes of a radar satellite (such as RADARSAT-1, TerraSAR-X or Cosmo SkyMed), or a single pass if the satellite is equipped with two antennas (like the SRTM instrumentation), collect sufficient data to generate a digital elevation map tens of kilometers on a side with a resolution of around ten meters. Other kinds of stereoscopic pairs can be employed using the digital image correlation method, where two optical images are acquired at different angles from the same pass of an airplane or an Earth observation satellite (such as the HRS instrument of SPOT5 or the VNIR band of ASTER). The SPOT 1 satellite (1986) provided the first usable elevation data for a sizeable portion of the planet's landmass, using two-pass stereoscopic correlation. Later, further data were provided by the European Remote-Sensing Satellite (ERS, 1991) using the same method, the Shuttle Radar Topography Mission (SRTM, 2000) using single-pass SAR, and the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER, 2000) instrumentation on the Terra satellite using double-pass stereo pairs. The HRS instrument on SPOT 5 has acquired over 100 million square kilometers of stereo pairs. === Planetary mapping === A tool of increasing value in planetary science is orbital altimetry, used to make digital elevation maps of planets. A primary tool for this is laser altimetry, but radar altimetry is also used. Planetary digital elevation maps made using laser altimetry include the Mars Orbiter Laser Altimeter (MOLA) mapping of Mars, the Lunar Orbiter Laser Altimeter (LOLA) and Lunar Altimeter (LALT) mapping of the Moon, and the Mercury Laser Altimeter (MLA) mapping of Mercury. In planetary mapping, each planetary body has a unique reference surface. New Horizons' Long Range Reconnaissance Imager used stereo photogrammetry to produce partial surface elevation maps of Pluto and 486958 Arrokoth. === Methods for obtaining elevation data used to create DEMs === Lidar; radar; stereo photogrammetry from aerial surveys; structure from motion / multi-view stereo applied to aerial photography; block adjustment from optical satellite imagery; interferometry from radar data; Real Time Kinematic GPS; topographic maps; theodolite or total station; Doppler radar; focus variation; inertial surveys; surveying and mapping drones; range imaging. === Accuracy === The quality of a DEM is a measure of how accurate the elevation is at each pixel (absolute accuracy) and of how accurately the morphology is presented (relative accuracy). Quality assessment of a DEM can be performed by comparison of DEMs from different sources.
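One simple form of such an assessment is to difference two co-registered grids of the same area and summarise the residuals. The following is a minimal sketch, assuming both DEMs are NumPy arrays on an identical grid and share a no-data flag; the function name, the flag value and the sample arrays are illustrative.

import numpy as np

def dem_error_stats(dem_a, dem_b, nodata=-9999.0):
    """Bias, RMSE and worst absolute difference between two
    co-registered DEMs, ignoring no-data cells in either grid."""
    valid = (dem_a != nodata) & (dem_b != nodata)
    diff = dem_a[valid] - dem_b[valid]
    return {
        "mean_error_m": float(diff.mean()),            # systematic bias
        "rmse_m": float(np.sqrt((diff ** 2).mean())),  # absolute accuracy
        "max_abs_m": float(np.abs(diff).max()),
    }

# Comparing a small patch of two elevation grids:
a = np.array([[100.0, 101.0, -9999.0],
              [102.0, 103.0, 104.0]])
b = a + np.array([[0.5, -0.5, 0.0],
                  [1.0, 0.0, -1.0]])
print(dem_error_stats(a, b))

In practice the residuals are usually also examined against the factors listed next, since DEM error is rarely uniform across a scene.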
Several factors play an important role in the quality of DEM-derived products: terrain roughness; sampling density (elevation data collection method); grid resolution or pixel size; interpolation algorithm; vertical resolution; and terrain analysis algorithm. Reference 3D products include quality masks that give information on the coastline, lakes, snow, clouds, correlation, etc. == Uses == Common uses of DEMs include: extracting terrain parameters for geomorphology; modeling water flow for hydrology or mass movement (for example avalanches and landslides); modeling soil wetness with Cartographic Depth to Water Indexes (DTW-index); creation of relief maps; rendering of 3D visualizations; 3D flight planning and TERCOM; creation of physical models (including raised-relief maps and 3D-printed terrain models); rectification of aerial photography or satellite imagery; reduction (terrain correction) of gravity measurements (gravimetry, physical geodesy); terrain analysis in geomorphology and physical geography; geographic information systems (GIS); engineering and infrastructure design; satellite navigation (for example GPS and GLONASS); line-of-sight analysis; base mapping; flight simulation; train simulation; precision farming and forestry; surface analysis; intelligent transportation systems (ITS); auto safety / advanced driver-assistance systems (ADAS); and archaeology. == Sources == === Global === Released at the beginning of 2022, FABDEM offers a bare-earth simulation of the Earth's surface at 30-metre (1 arc-second) resolution. Adapted from GLO-30, the data removes all forests and buildings. The data is free to download for non-commercial use, and available commercially at a cost through the developer's website. An alternative free global DEM, GTOPO30 (30 arc-second resolution, c. 1 km along the equator), is available, but its quality is variable and in some areas it is very poor. A much higher quality DEM from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument of the Terra satellite is also freely available for 99% of the globe, and represents elevation at 30-meter resolution. A similarly high resolution was previously only available for United States territory under the Shuttle Radar Topography Mission (SRTM) data, while most of the rest of the planet was only covered at a 3-arc-second resolution (around 90 meters along the equator). SRTM does not cover the polar regions and has no-data (void) areas in mountains and deserts. SRTM data, being derived from radar, represent the elevation of the first-reflected surface, quite often tree tops. So, the data are not necessarily representative of the ground surface, but of the top of whatever is first encountered by the radar. Submarine elevation (known as bathymetry) data is generated using ship-mounted depth soundings. When land topography and bathymetry are combined, a truly global relief model is obtained. The SRTM30Plus dataset (used in NASA World Wind) attempts to combine GTOPO30, SRTM and bathymetric data to produce a truly global elevation model. The Earth2014 global topography and relief model provides layered topography grids at 1 arc-minute resolution. Beyond SRTM30plus, Earth2014 provides information on ice-sheet heights and bedrock (that is, topography below the ice) over Antarctica and Greenland. Another global model is Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010) with 7.5 arc-second resolution. It is based on SRTM data and combines other data outside SRTM coverage.
A novel global DEM with postings below 12 m and a height accuracy of less than 2 m is expected from the TanDEM-X satellite mission, which started in July 2010. The most common grid (raster) spacing is between 50 and 500 meters. In gravimetry, for example, the primary grid may be 50 m, but is switched to 100 or 500 meters at distances of about 5 or 10 kilometers. Since 2002, the HRS instrument on SPOT 5 has acquired over 100 million square kilometers of stereo pairs, used to produce DEMs in DTED2 format (with a 30-meter posting) over 50 million km2. The radar satellite RADARSAT-2 has been used by MacDonald, Dettwiler and Associates Ltd. to provide DEMs for commercial and military customers. In 2014, acquisitions from the radar satellites TerraSAR-X and TanDEM-X became available in the form of a uniform global coverage with a resolution of 12 meters. Since 2016, ALOS has provided a global 1-arc-second DSM free of charge, and a commercial 5-meter DSM/DTM. === Local === Many national mapping agencies produce their own DEMs, often of a higher resolution and quality, but frequently these have to be purchased, and the cost is usually prohibitive to all except public authorities and large corporations. DEMs are often a product of national lidar dataset programs. Free DEMs are also available for Mars: the MEGDR, or Mission Experiment Gridded Data Record, from the Mars Global Surveyor's Mars Orbiter Laser Altimeter (MOLA) instrument; and NASA's Mars Digital Terrain Model (DTM). === Websites === OpenTopography is a web-based community resource for access to high-resolution, Earth science-oriented topography data (lidar and DEM data) and processing tools running on commodity and high-performance computing systems, along with educational resources. OpenTopography is based at the San Diego Supercomputer Center at the University of California San Diego and is operated in collaboration with colleagues in the School of Earth and Space Exploration at Arizona State University and UNAVCO. Core operational support for OpenTopography comes from the National Science Foundation, Division of Earth Sciences. The OpenDemSearcher is a map client with a visualization of regions with freely available medium- and high-resolution DEMs. == See also == Ground slope and aspect (ground spatial gradient) Digital outcrop model Global Relief Model Physical terrain model Terrain cartography Terrain rendering === DEM file formats === Bathymetric Attributed Grid (BAG) DTED DIMAP Sentinel 1 ESA data base SDTS DEM USGS DEM == References == == Further reading == Wilson, J.P.; Gallant, J.C. (2000). "Chapter 1" (PDF). In Wilson, J.P.; Gallant, J.C. (eds.). Terrain Analysis: Principles and Applications. New York: Wiley. pp. 1–27. ISBN 978-0-471-32188-0. Retrieved 2007-02-16. Hirt, C.; Filmer, M.S.; Featherstone, W.E. (2010). "Comparison and validation of recent freely-available ASTER-GDEM ver1, SRTM ver4.1 and GEODATA DEM-9S ver3 digital elevation models over Australia". Australian Journal of Earth Sciences. 57 (3): 337–347. Bibcode:2010AuJES..57..337H. doi:10.1080/08120091003677553. hdl:20.500.11937/43846. S2CID 140651372. Retrieved May 5, 2012. Rexer, M.; Hirt, C. (2014). "Comparison of free high-resolution digital elevation data sets (ASTER GDEM2, SRTM v2.1/v4.1) and validation against accurate heights from the Australian National Gravity Database" (PDF). Australian Journal of Earth Sciences. 61 (2): 213–226. Bibcode:2014AuJES..61..213R. doi:10.1080/08120099.2014.884983. hdl:20.500.11937/38264. S2CID 3783826.
Archived from the original (PDF) on June 7, 2016. Retrieved April 24, 2014. == External links == DEM Quality Comparison Terrainmap.com Maps-for-free.com Geo-Spatial Data Acquisition Archived 2013-08-22 at the Wayback Machine Elevation Mapper, Create geo-referenced elevation maps Data products Satellite Geodesy by Scripps Institution of Oceanography Shuttle Radar Topography Mission by NASA/JPL Global 30 Arc-Second Elevation (GTOPO30) by the U.S. Geological Survey Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010) by the U.S. Geological Survey Earth2014 by Technische Universität München Sonny's LiDAR Digital Terrain Models of Europe
Wikipedia/Digital_elevation_model
The seabed (also known as the seafloor, sea floor, ocean floor, and ocean bottom) is the bottom of the ocean. All floors of the ocean are known as seabeds. The structure of the seabed of the global ocean is governed by plate tectonics. Most of the ocean is very deep, where the seabed is known as the abyssal plain. Seafloor spreading creates mid-ocean ridges along the center line of major ocean basins, where the seabed is slightly shallower than the surrounding abyssal plain. From the abyssal plain, the seabed slopes upward toward the continents and becomes, in order from deep to shallow, the continental rise, slope, and shelf. The depth within the seabed itself, such as the depth down through a sediment core, is known as the "depth below seafloor". The ecological environment of the seabed and the deepest waters is collectively known, as a habitat for creatures, as the "benthos". Most of the seabed throughout the world's oceans is covered in layers of marine sediments. Categorized by where the materials come from or by composition, these sediments are classified as: from land (terrigenous), from biological organisms (biogenous), from chemical reactions (hydrogenous), or from space (cosmogenous). Categorized by size, these sediments range from very small particles called clays and silts, known as mud, to larger particles, from sand to boulders. Features of the seabed are governed by the physics of sediment transport and by the biology of the creatures living in the seabed and in the ocean waters above. Physically, seabed sediments often come from the erosion of material on land and from other rarer sources, such as volcanic ash. Sea currents transport sediments, especially in shallow waters where tidal energy and wave energy cause resuspension of seabed sediments. Biologically, microorganisms living within the seabed sediments change seabed chemistry. Marine organisms create sediments, both within the seabed and in the water above. For example, phytoplankton with silicate or calcium carbonate shells grow in abundance in the upper ocean, and when they die, their shells sink to the seafloor to become seabed sediments. Human impacts on the seabed are diverse. Examples of human effects on the seabed include exploration, plastic pollution, and exploitation by mining and dredging operations. To map the seabed, ships use acoustic technology to map water depths throughout the world. Submersible vehicles help researchers study unique seabed ecosystems such as hydrothermal vents. Plastic pollution is a global phenomenon, and because the ocean is the ultimate destination for global waterways, much of the world's plastic ends up in the ocean and some sinks to the seabed. Exploitation of the seabed involves extracting valuable minerals from sulfide deposits via deep sea mining, as well as dredging sand from shallow environments for construction and beach nourishment. == Structure == Most of the oceans have a common structure, created by common physical phenomena, mainly tectonic movement and sediment from various sources. The structure of the oceans, starting from the continents, usually begins with a continental shelf, continues to the continental slope (a steep descent into the ocean) and reaches the abyssal plain, a topographic plain that forms the beginning of the seabed and its main area.
The border between the continental slope and the abyssal plain usually has a more gradual descent, and is called the continental rise, which is caused by sediment cascading down the continental slope. The mid-ocean ridge, as its name implies, is a mountainous rise through the middle of all the oceans, between the continents. Typically a rift runs along the edge of this ridge. Along tectonic plate edges there are typically oceanic trenches – deep valleys created by mantle circulation from the mid-ocean ridge to the oceanic trench. Hotspot volcanic island ridges are created by volcanic activity, erupting periodically, as the tectonic plates pass over a hotspot. In areas with volcanic activity and in the oceanic trenches there are hydrothermal vents – releasing high-pressure, extremely hot water and chemicals into the typically near-freezing water around them. Deep ocean water is divided into layers or zones, each with typical features of salinity, pressure, temperature and marine life, according to their depth. Lying along the top of the abyssal plain is the abyssal zone, whose lower boundary lies at about 6,000 m (20,000 ft). The hadal zone – which includes the oceanic trenches – lies between 6,000 and 11,000 metres (20,000–36,000 ft) and is the deepest oceanic zone. === Depth below seafloor === Depth below seafloor is a vertical coordinate used in geology, paleontology, oceanography, and petrology (see ocean drilling). The acronym "mbsf" (meaning "meters below the seafloor") is a common convention used for depths below the seafloor. == Sediments == Sediments in the seabed vary in origin: eroded land materials carried into the ocean by rivers or wind, the waste and decomposed remains of sea creatures, chemicals precipitated from the seawater itself, and some material from outer space. There are four basic types of sediment of the sea floor: Terrigenous (also lithogenous) describes the sediment from continents eroded by rain, rivers, and glaciers, as well as sediment blown into the ocean by the wind, such as dust and volcanic ash. Biogenous material is the sediment made up of the hard parts of sea creatures, mainly phytoplankton, that accumulate on the bottom of the ocean. Hydrogenous sediment is material that precipitates in the ocean when oceanic conditions change, or material created in hydrothermal vent systems. Cosmogenous sediment comes from extraterrestrial sources. === Terrigenous and biogenous === Terrigenous sediment is the most abundant sediment found on the seafloor. Terrigenous sediments come from the continents. These materials are eroded from continents and transported by wind and water to the ocean. Fluvial sediments, such as clay, silt, mud, and glacial flour, are transported from land by rivers and glaciers. Aeolian sediments, such as dust and volcanic ash, are transported by wind. Biogenous sediment is the next most abundant material on the seafloor. Biogenous sediments are biologically produced by living creatures. Sediments made up of at least 30% biogenous material are called "oozes." There are two types of oozes: calcareous oozes and siliceous oozes. Plankton grow in ocean waters and create the materials that become oozes on the seabed. Calcareous oozes are predominantly composed of the calcium carbonate shells found in phytoplankton such as coccolithophores and zooplankton such as the foraminiferans.
These calcareous oozes are never found deeper than about 4,000 to 5,000 meters because at greater depths the calcium carbonate dissolves. Similarly, siliceous oozes are dominated by the siliceous shells of phytoplankton like diatoms and zooplankton such as radiolarians. Depending on the productivity of these planktonic organisms, the shell material that collects when these organisms die may build up at a rate anywhere from 1 mm to 1 cm every 1000 years. === Hydrogenous and cosmogenous === Hydrogenous sediments are uncommon. They only occur with changes in oceanic conditions such as temperature and pressure. Rarer still are cosmogenous sediments. Hydrogenous sediments form from dissolved chemicals that precipitate from the ocean water; along the mid-ocean ridges, they can also form when metallic elements bind onto rocks around which water hotter than 300 °C circulates. When these elements mix with the cold sea water, they precipitate from the cooling water. One such deposit, manganese nodules, is composed of layers of different metals like manganese, iron, nickel, cobalt, and copper, and is always found on the surface of the ocean floor. Cosmogenous sediments are the remains of space debris such as comets and asteroids, made up of silicates and various metals that have impacted the Earth. === Size classification === Another way that sediments are described is through their descriptive classification. These sediments vary in size, anywhere from 1/4096 of a mm to greater than 256 mm. The different types are: boulder, cobble, pebble, granule, sand, silt, and clay, each type finer in grain than the last. The grain size indicates the type of sediment and the environment in which it was created. Larger grains sink faster and can only be moved by rapidly flowing water (a high-energy environment), whereas small grains sink very slowly and can be suspended by slight water movement, accumulating where the water is not moving so quickly. Larger grains of sediment therefore come together in higher-energy conditions and smaller grains in lower-energy conditions. == Benthos == == Topography == Seabed topography (ocean topography or marine topography) refers to the shape of the land (topography) when it interfaces with the ocean. These shapes are obvious along coastlines, but they occur also in significant ways underwater. The effectiveness of marine habitats is partially defined by these shapes, including the way they interact with and shape ocean currents, and the way sunlight diminishes when these landforms occupy increasing depths. Tidal networks depend on the balance between sedimentary processes and hydrodynamics; however, anthropogenic influences can impact the natural system more than any physical driver. Marine topographies include coastal and oceanic landforms ranging from coastal estuaries and shorelines to continental shelves and coral reefs. Further out in the open ocean, they include underwater and deep sea features such as ocean rises and seamounts. The submerged surface has mountainous features, including a globe-spanning mid-ocean ridge system, as well as undersea volcanoes, oceanic trenches, submarine canyons, oceanic plateaus and abyssal plains. The mass of the oceans is approximately 1.35×10¹⁸ metric tons, or about 1/4400 of the total mass of the Earth. The oceans cover an area of 3.618×10⁸ km² with a mean depth of 3,682 m, resulting in an estimated volume of 1.332×10⁹ km³.
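These quoted figures are mutually consistent, as a quick arithmetic check shows. The sketch below is illustrative only; the seawater density it assumes is a nominal value not stated above.

# Quick consistency check of the quoted ocean statistics.
area_km2 = 3.618e8        # ocean surface area, km^2 (quoted above)
mean_depth_m = 3682       # mean ocean depth, m (quoted above)

# Volume = area x mean depth (depth converted to km).
volume_km3 = area_km2 * (mean_depth_m / 1000.0)
print(f"volume: {volume_km3:.4g} km^3")   # ~1.332e9 km^3, as quoted

# Mass check, assuming a nominal seawater density of ~1025 kg/m^3.
# 1 km^3 = 1e9 m^3 and 1 metric ton = 1000 kg.
density_kg_m3 = 1025
mass_tons = volume_km3 * 1e9 * density_kg_m3 / 1000.0
print(f"mass: {mass_tons:.3g} t")         # ~1.37e18 t, close to the quoted 1.35e18 t; the gap reflects the assumed density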
== Features == Each region of the seabed has typical features such as common sediment composition, typical topography, salinity of water layers above it, marine life, magnetic direction of rocks, and sedimentation. Some features of the seabed include flat abyssal plains, mid-ocean ridges, deep trenches, and hydrothermal vents. Seabed topography is flat where layers of sediments cover the tectonic features. For example, the abyssal plain regions of the ocean are relatively flat and covered in many layers of sediments. Sediments in these flat areas come from various sources, including but not limited to land-erosion sediments from rivers, chemically precipitated sediments from hydrothermal vents, microorganism activity, sea currents eroding the seabed and transporting sediments to the deeper ocean, and phytoplankton shell materials. Where the seafloor is actively spreading and sedimentation is relatively light, such as in the northern and eastern Atlantic Ocean, the original tectonic activity can be clearly seen as straight-line "cracks" or "vents" thousands of kilometers long. These underwater mountain ranges are known as mid-ocean ridges. Other seabed environments include hydrothermal vents, cold seeps, and shallow areas. Marine life is abundant in the deep sea around hydrothermal vents. Large deep sea communities of marine life have been discovered around black and white smokers – vents emitting chemicals toxic to humans and most vertebrates. This marine life receives its energy both from the extreme temperature difference (typically a drop of 150 degrees) and from chemosynthesis by bacteria. Brine pools are another seabed feature, usually connected to cold seeps. In shallow areas, the seabed can host sediments created by marine life such as corals, fish, algae, crabs, marine plants and other organisms. == Human impact == === Exploration === The seabed has been explored by submersibles such as Alvin and, to some extent, scuba divers with special equipment. Hydrothermal vents were discovered in 1977 by researchers using an underwater camera platform. In recent years, satellite measurements of ocean surface topography have yielded very clear maps of the seabed, and these satellite-derived maps are used extensively in the study and exploration of the ocean floor. === Plastic pollution === In 2020 scientists created what may be the first scientific estimate of how much microplastic currently resides in Earth's seafloor, after investigating six areas of ~3 km depth ~300 km off the Australian coast. They found the highly variable microplastic counts to be proportionate to plastic on the surface and the angle of the seafloor slope. By averaging the microplastic mass per cm³, they estimated that Earth's seafloor contains ~14 million tons of microplastic – about double the amount they estimated based on data from earlier studies – despite calling both estimates "conservative", as coastal areas are known to contain much more microplastic pollution. These estimates are about one to two times the amount of plastic thought – per Jambeck et al., 2015 – to currently enter the oceans annually. === Exploitation === === In art and culture === Some children's play songs include elements such as "There's a hole at the bottom of the sea", or "A sailor went to sea... but all that he could see was the bottom of the deep blue sea". On and under the seabed are archaeological sites of historic interest, such as shipwrecks and sunken towns.
This underwater cultural heritage is protected by the UNESCO Convention on the Protection of the Underwater Cultural Heritage. The convention aims at preventing looting and the destruction or loss of historic and cultural information by providing an international legal framework. == See also == == References == == Further reading == Roger Hekinian: Sea Floor Exploration: Scientific Adventures Diving into the Abyss. Springer, 2014. ISBN 978-3-319-03202-3 (print); ISBN 978-3-319-03203-0 (eBook) Stéphane Sainson: Electromagnetic Seabed Logging. A new tool for geoscientists. Springer, 2016. ISBN 978-3-319-45353-8 (print); ISBN 978-3-319-45355-2 (eBook) == External links == Understanding the Seafloor presentation from Cosee – the Center for Ocean Sciences Educational Excellence. Ocean Explorer (www.oceanexplorer.noaa.gov) – Public outreach site for explorations sponsored by the Office of Ocean Exploration. NOAA, Ocean Explorer Gallery, Submarine Ring of Fire 2006 Gallery, Submarine Ring of Fire 2004 Gallery – A rich collection of images, video, audio and podcast. NOAA, Ocean Explorer YouTube Channel Submarine Ring of Fire, Mariana Arc – Explore the volcanoes of the Mariana Arc, Submarine Ring of Fire. Age of the Ocean Floor National Geophysical Data Center Astonishing deep sea life on TED (conference)
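The acoustic depth mapping mentioned in this article rests on a simple relation: depth is half the two-way travel time of a sound pulse multiplied by the speed of sound in seawater. A minimal sketch, assuming a nominal sound speed (real surveys use measured sound-velocity profiles, since sound speed varies with temperature and salinity):

# Echo sounding: depth from the two-way travel time of an acoustic pulse.
SOUND_SPEED_SEAWATER = 1500.0  # m/s, nominal assumed value

def depth_from_echo(two_way_time_s: float) -> float:
    """Water depth in meters for a given two-way travel time in seconds."""
    return SOUND_SPEED_SEAWATER * two_way_time_s / 2.0

# A pulse returning after 8 s implies ~6,000 m of water, roughly the
# boundary between the abyssal and hadal zones described above.
print(depth_from_echo(8.0))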
Wikipedia/Marine_topography
Hypsometry (from Ancient Greek ὕψος (húpsos) 'height' and μέτρον (métron) 'measure') is the measurement of the elevation and depth of features of Earth's surface relative to mean sea level. On Earth, the elevations can take on either positive or negative (below sea level) values. The distribution is theorised to be bimodal due to the difference in density between the lighter continental crust and denser oceanic crust. On other planets within this solar system, elevations are typically unimodal, owing to the lack of plate tectonics on those bodies. == Hypsometric curve == A hypsometric curve is a histogram or cumulative distribution function of elevations in a geographical area. Differences in hypsometric curves between landscapes arise because the geomorphic processes that shape the landscape may be different. When drawn as a two-dimensional histogram, a hypsometric curve displays elevation on the vertical (y) axis and the area above the corresponding elevation on the horizontal (x) axis. The curve can also be shown in non-dimensional or standardized form by scaling elevation and area by the maximum values. The non-dimensional hypsometric curve provides a hydrologist or a geomorphologist with a way to assess the similarity of watersheds, and is one of several characteristics used for doing so. The hypsometric integral is a summary measure of the shape of the hypsometric curve. In the original paper on this topic, Arthur Strahler proposed a curve containing three parameters to fit different hypsometric relations: y = [ d − x x ⋅ a d − a ] z {\displaystyle y=\left[{\frac {d-x}{x}}\cdot {\frac {a}{d-a}}\right]^{z}} , where a, d and z are fitting parameters. Subsequent research using two-dimensional landscape evolution models has called the general applicability of this fit into question, as well as the capability of the hypsometric curve to deal with scale-dependent effects. A modified curve with one additional parameter has been proposed to improve the fit. Hypsometric curves are commonly used in limnology to represent the relationship between lake surface area and depth and to calculate total lake volume. These graphs can be used to predict various characteristics of lakes such as productivity, dilution of incoming chemicals, and potential for water mixing. == See also == Bathymetry Hypsometric equation Hypsometer, an instrument used in hypsometry, which estimates elevation by boiling water – water boils at different temperatures depending on the air pressure, and thus altitude. Levelling Topography Orography == References == == Further reading == Hypsometric Curve
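As an illustration of the non-dimensional hypsometric curve and hypsometric integral discussed above, a minimal sketch in Python (NumPy is assumed; the conical test surface is synthetic, and the integral is computed via the standard elevation-relief ratio):

import numpy as np

def hypsometric_curve(elevations, n_points=101):
    """Non-dimensional hypsometric curve: fraction of total area lying
    at or above each relative elevation, both scaled to [0, 1]."""
    z = np.ravel(elevations)
    zmin, zmax = z.min(), z.max()
    rel_height = np.linspace(0.0, 1.0, n_points)
    thresholds = zmin + rel_height * (zmax - zmin)
    rel_area = np.array([(z >= t).mean() for t in thresholds])
    return rel_area, rel_height

def hypsometric_integral(elevations):
    """Area under the non-dimensional curve; equivalently the
    elevation-relief ratio (mean - min) / (max - min)."""
    z = np.ravel(elevations)
    return (z.mean() - z.min()) / (z.max() - z.min())

# Synthetic example: an ideal cone sampled over a circular basin has a
# hypsometric integral of ~1/3.
y, x = np.mgrid[-50:51, -50:51]
r = np.hypot(x, y)
cone_vals = (100.0 - r)[r <= 50]
print(hypsometric_integral(cone_vals))  # ~0.33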
Wikipedia/Hypsography
Tomography is imaging by sections or sectioning that uses any kind of penetrating wave. The method is used in radiology, archaeology, biology, atmospheric science, geophysics, oceanography, plasma physics, materials science, cosmochemistry, astrophysics, quantum information, and other areas of science. The word tomography is derived from Ancient Greek τόμος tomos, "slice, section" and γράφω graphō, "to write" or, in this context as well, "to describe." A device used in tomography is called a tomograph, while the image produced is a tomogram. In many cases, the production of these images is based on the mathematical procedure of tomographic reconstruction; X-ray computed tomography, for example, is technically produced from multiple projectional radiographs. Many different reconstruction algorithms exist. Most algorithms fall into one of two categories: filtered back projection (FBP) and iterative reconstruction (IR). These procedures give inexact results: they represent a compromise between accuracy and computation time required. FBP demands fewer computational resources, while IR generally produces fewer artifacts (errors in the reconstruction) at a higher computing cost. Although MRI (magnetic resonance imaging), optical coherence tomography and ultrasound are transmission methods, they typically do not require movement of the transmitter to acquire data from different directions. In MRI, both projections and higher spatial harmonics are sampled by applying spatially varying magnetic fields; no moving parts are necessary to generate an image. On the other hand, since ultrasound and optical coherence tomography use time-of-flight to spatially encode the received signal, they are not strictly tomographic methods and do not require multiple image acquisitions. == Types of tomography == Some recent advances rely on using simultaneously integrated physical phenomena, e.g. X-rays for both CT and angiography, combined CT/MRI and combined CT/PET. Discrete tomography and geometric tomography, by contrast, are research areas that deal with the reconstruction of objects that are discrete (such as crystals) or homogeneous. They are concerned with reconstruction methods, and as such they are not restricted to any of the particular (experimental) tomography methods listed above. === Synchrotron X-ray tomographic microscopy === A new technique called synchrotron X-ray tomographic microscopy (SRXTM) allows for detailed three-dimensional scanning of fossils. The construction of third-generation synchrotron sources, combined with the tremendous improvement of detector technology, data storage and processing capabilities since the 1990s, has led to a boost of high-end synchrotron tomography in materials research, with a wide range of different applications, e.g. the visualization and quantitative analysis of differently absorbing phases, microporosities, cracks, precipitates or grains in a specimen. Synchrotron radiation is created by accelerating free particles in high vacuum. By the laws of electrodynamics, this acceleration leads to the emission of electromagnetic radiation (Jackson, 1975). Linear particle acceleration is one possibility, but apart from the very high electric fields it would require, it is more practical to hold the charged particles on a closed trajectory in order to obtain a source of continuous radiation. Magnetic fields are used to force the particles onto the desired orbit and prevent them from flying in a straight line.
The radial acceleration associated with the change of direction then generates radiation. == Volume rendering == Volume rendering is a set of techniques used to display a 2D projection of a 3D discretely sampled data set, typically a 3D scalar field. A typical 3D data set is a group of 2D slice images acquired, for example, by a CT, MRI, or MicroCT scanner. These are usually acquired in a regular pattern (e.g., one slice every millimeter) and usually have a regular number of image pixels in a regular pattern. This is an example of a regular volumetric grid, with each volume element, or voxel, represented by a single value that is obtained by sampling the immediate area surrounding the voxel. To render a 2D projection of the 3D data set, one first needs to define a camera in space relative to the volume. Also, one needs to define the opacity and color of every voxel. This is usually defined using an RGBA (red, green, blue, alpha) transfer function that defines the RGBA value for every possible voxel value. For example, a volume may be viewed by extracting isosurfaces (surfaces of equal values) from the volume and rendering them as polygonal meshes, or by rendering the volume directly as a block of data. The marching cubes algorithm is a common technique for extracting an isosurface from volume data. Direct volume rendering is a computationally intensive task that may be performed in several ways. == History == Focal plane tomography was developed in the 1930s by the radiologist Alessandro Vallebona, and proved useful in reducing the problem of superimposition of structures in projectional radiography. In a 1953 article in the medical journal Chest, B. Pollak of the Fort William Sanatorium described the use of planography, another term for tomography. Focal plane tomography remained the conventional form of tomography until it was largely replaced by computed tomography in the late 1970s. Focal plane tomography uses the fact that the focal plane appears sharper, while structures in other planes appear blurred. By moving an X-ray source and the film in opposite directions during the exposure, and modifying the direction and extent of the movement, operators can select different focal planes which contain the structures of interest. == See also == Chemical imaging 3D reconstruction Discrete tomography Geometric tomography Geophysical imaging Industrial computed tomography Johann Radon Medical imaging Network tomography Nonogram – a type of puzzle based on a discrete model of tomography Radon transform Tomographic reconstruction Multiscale tomography Voxel == References == == External links == Media related to Tomography at Wikimedia Commons Image reconstruction algorithms for microtomography
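As a concrete illustration of the filtered back projection approach described earlier in this article, here is a minimal sketch using scikit-image's Radon transform utilities (scikit-image is assumed to be available and parameter names follow its recent releases; the phantom and settings are illustrative, not a clinical pipeline):

import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# Simulate the acquisition: project a known test image over many angles.
image = rescale(shepp_logan_phantom(), scale=0.5)  # 200x200 test image
theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
sinogram = radon(image, theta=theta)  # stack of 1-D projections

# Filtered back projection: ramp-filter each projection, then smear
# ("back-project") it across the image plane along its acquisition angle.
fbp = iradon(sinogram, theta=theta, filter_name="ramp")

error = np.sqrt(np.mean((fbp - image) ** 2))
print(f"FBP RMS reconstruction error: {error:.4f}")

An iterative reconstruction would instead start from an initial guess and repeatedly correct it against the measured sinogram, trading the extra computation for fewer artifacts, as noted above.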
Wikipedia/Tomography
Atomic force microscopy (AFM) or scanning force microscopy (SFM) is a very-high-resolution type of scanning probe microscopy (SPM), with demonstrated resolution on the order of fractions of a nanometer, more than 1000 times better than the optical diffraction limit. == Overview == Atomic force microscopy (AFM) gathers information by "feeling" or "touching" the surface with a mechanical probe. Piezoelectric elements that execute tiny but accurate movements on (electronic) command enable precise scanning. Despite the name, the atomic force microscope does not use the nuclear force. === Abilities and spatial resolution === The AFM has three major abilities: force measurement, topographic imaging, and manipulation. In force measurement, AFMs can be used to measure the forces between the probe and the sample as a function of their mutual separation. This can be applied to perform force spectroscopy, to measure the mechanical properties of the sample, such as the sample's Young's modulus, a measure of stiffness. For imaging, the reaction of the probe to the forces that the sample imposes on it can be used to form an image of the three-dimensional shape (topography) of a sample surface at a high resolution. This is achieved by raster scanning the position of the sample with respect to the tip and recording the height of the probe that corresponds to a constant probe-sample interaction (see § Topographic image for more). The surface topography is commonly displayed as a pseudocolor plot. Although the initial publication about atomic force microscopy by Binnig, Quate and Gerber in 1986 speculated about the possibility of achieving atomic resolution, profound experimental challenges needed to be overcome before atomic resolution of defects and step edges in ambient (liquid) conditions was demonstrated in 1993 by Ohnesorge and Binnig. True atomic resolution of the silicon 7×7 surface had to wait a little longer before it was shown by Giessibl. Subatomic resolution (i.e. the ability to resolve structural details within the electron density of a single atom) has also been achieved by AFM. In manipulation, the forces between tip and sample can also be used to change the properties of the sample in a controlled way. Examples of this include atomic manipulation, scanning probe lithography and local stimulation of cells. Simultaneous with the acquisition of topographical images, other properties of the sample can be measured locally and displayed as an image, often with similarly high resolution. Examples of such properties are mechanical properties like stiffness or adhesion strength and electrical properties such as conductivity or surface potential. In fact, the majority of SPM techniques are extensions of AFM that use this modality. === Other microscopy technologies === The major difference between atomic force microscopy and competing technologies such as optical microscopy and electron microscopy is that AFM does not use lenses or beam irradiation. Therefore, it does not suffer from a limitation in spatial resolution due to diffraction and aberration, and preparing a space for guiding the beam (by creating a vacuum) and staining the sample are not necessary. There are several types of scanning microscopy, including SPM (which includes AFM, scanning tunneling microscopy (STM) and near-field scanning optical microscopy (SNOM/NSOM)), STED microscopy, scanning electron microscopy, and electrochemical AFM (EC-AFM).
Although SNOM and STED use visible, infrared or even terahertz light to illuminate the sample, their resolution is not constrained by the diffraction limit. === Configuration === Fig. 3 shows an AFM, which typically consists of the following features. Numbers in parentheses correspond to numbered features in Fig. 3. Coordinate directions are defined by the coordinate system (0). The small spring-like cantilever (1) is carried by the support (2). Optionally, a piezoelectric element (typically made of a ceramic material) (3) oscillates the cantilever (1). The sharp tip (4) is fixed to the free end of the cantilever (1). The detector (5) records the deflection and motion of the cantilever (1). The sample (6) is mounted on the sample stage (8). An xyz drive (7) displaces the sample (6) and the sample stage (8) in the x, y, and z directions with respect to the tip apex (4). Although Fig. 3 shows the drive attached to the sample, the drive can also be attached to the tip, or independent drives can be attached to both, since it is the relative displacement of the sample and tip that needs to be controlled. Controllers and plotter are not shown in Fig. 3. According to the configuration described above, the interaction between tip and sample, which can be an atomic-scale phenomenon, is transduced into changes in the motion of the cantilever, which is a macro-scale phenomenon. Several different aspects of the cantilever motion can be used to quantify the interaction between the tip and sample, most commonly the value of the deflection, the amplitude of an imposed oscillation of the cantilever, or the shift in resonance frequency of the cantilever (see section Imaging Modes). ==== Detector ==== The detector (5) of the AFM measures the deflection (displacement with respect to the equilibrium position) of the cantilever and converts it into an electrical signal. The intensity of this signal will be proportional to the displacement of the cantilever. Various methods of detection can be used, e.g. interferometry, optical levers, the piezoelectric method, and STM-based detectors (see section "AFM cantilever deflection measurement"). ==== Image formation ==== This section applies specifically to imaging in § Contact mode. For other imaging modes, the process is similar, except that "deflection" should be replaced by the appropriate feedback variable. When using the AFM to image a sample, the tip is brought into contact with the sample, and the sample is raster scanned along an x–y grid. Most commonly, an electronic feedback loop is employed to keep the probe-sample force constant during scanning. This feedback loop has the cantilever deflection as input, and its output controls the distance along the z axis between the probe support (2 in Fig. 3) and the sample support (8 in Fig. 3). As long as the tip remains in contact with the sample, and the sample is scanned in the x–y plane, height variations in the sample will change the deflection of the cantilever. The feedback then adjusts the height of the probe support so that the deflection is restored to a user-defined value (the setpoint). A properly adjusted feedback loop adjusts the support-sample separation continuously during the scanning motion, such that the deflection remains approximately constant. In this situation, the feedback output equals the sample surface topography to within a small error.
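A minimal sketch of such a constant-deflection feedback loop, written here as a simple integral controller; read_deflection, set_z, and move_to are hypothetical placeholders for an instrument interface, not a real API, and the sign convention depends on the instrument:

def scan_line(x_positions, setpoint, gain, read_deflection, set_z, move_to):
    """Scan one line in constant-force mode; the controller output is
    the topography estimate at each x position."""
    z = 0.0
    topography = []
    for x in x_positions:
        move_to(x)                            # step to the next x position
        error = read_deflection() - setpoint  # deviation from the setpoint
        z -= gain * error                     # integral action: retract when
        set_z(z)                              # pressing too hard, approach
        topography.append(z)                  # when pressing too lightly
    return topography

Real controllers add proportional and anti-windup terms and run at hardware rates; the point of the sketch is only that the recorded z trajectory itself doubles as the topography signal.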
Historically, a different operation method has been used, in which the sample-probe support distance is kept constant and not controlled by a feedback (servo mechanism). In this mode, usually referred to as "constant-height mode", the deflection of the cantilever is recorded as a function of the sample x–y position. As long as the tip is in contact with the sample, the deflection then corresponds to surface topography. This method is now less commonly used because the forces between tip and sample are not controlled, which can lead to forces high enough to damage the tip or the sample. It is, however, common practice to record the deflection even when scanning in constant force mode, with feedback. This reveals the small tracking error of the feedback, and can sometimes reveal features that the feedback was not able to adjust for. The AFM signals, such as sample height or cantilever deflection, are recorded on a computer during the x–y scan. They are plotted in a pseudocolor image, in which each pixel represents an x–y position on the sample, and the color represents the recorded signal. === History === The AFM was invented by IBM scientists in 1985. The precursor to the AFM, the scanning tunneling microscope (STM), was developed by Gerd Binnig and Heinrich Rohrer in the early 1980s at IBM Research – Zurich, a development that earned them the 1986 Nobel Prize in Physics. Binnig invented the atomic force microscope and the first experimental implementation was made by Binnig, Quate and Gerber in 1986. The first commercially available atomic force microscope was introduced in 1989. The AFM is one of the foremost tools for imaging, measuring, and manipulating matter at the nanoscale. === Applications === The AFM has been applied to problems in a wide range of disciplines of the natural sciences, including solid-state physics, semiconductor science and technology, molecular engineering, polymer chemistry and physics, surface chemistry, molecular biology, cell biology, and medicine. Applications in the field of solid state physics include (a) the identification of atoms at a surface, (b) the evaluation of interactions between a specific atom and its neighboring atoms, and (c) the study of changes in physical properties arising from changes in an atomic arrangement through atomic manipulation. In molecular biology, AFM can be used to study the structure and mechanical properties of protein complexes and assemblies. For example, AFM has been used to image microtubules and measure their stiffness. In cellular biology, AFM can be used to attempt to distinguish cancer cells from normal cells based on the hardness of the cells, and to evaluate interactions between a specific cell and its neighboring cells in a competitive culture system. AFM can also be used to indent cells, to study how they regulate the stiffness or shape of the cell membrane or wall. In some variations, electric potentials can also be scanned using conducting cantilevers. In more advanced versions, currents can be passed through the tip to probe the electrical conductivity or transport of the underlying surface, but this is a challenging task with few research groups reporting consistent data (as of 2004). AFM techniques such as conductive atomic force microscopy (C-AFM) and Kelvin probe force microscopy (KPFM) are increasingly used in solid-state battery research to analyze local conductivity variations, interfacial potential changes, and degradation mechanisms at the nanoscale.
== Principles == The AFM consists of a cantilever with a sharp tip (probe) at its end that is used to scan the specimen surface. The cantilever is typically silicon or silicon nitride with a tip radius of curvature on the order of nanometers. When the tip is brought into proximity of a sample surface, forces between the tip and the sample lead to a deflection of the cantilever according to Hooke's law. Depending on the situation, forces that are measured in AFM include mechanical contact force, van der Waals forces, capillary forces, chemical bonding, electrostatic forces, magnetic forces (see magnetic force microscope, MFM), Casimir forces, solvation forces, etc. Along with force, additional quantities may simultaneously be measured through the use of specialized types of probes (see scanning thermal microscopy, scanning joule expansion microscopy, photothermal microspectroscopy, etc.). The AFM can be operated in a number of modes, depending on the application. In general, possible imaging modes are divided into static (also called contact) modes and a variety of dynamic (non-contact or "tapping") modes where the cantilever is vibrated or oscillated at a given frequency. === Imaging modes === AFM operation is usually described as one of three modes, according to the nature of the tip motion: contact mode, also called static mode (as opposed to the other two modes, which are called dynamic modes); tapping mode, also called intermittent contact, AC mode, or vibrating mode, or, after the detection mechanism, amplitude modulation AFM; and non-contact mode, or, again after the detection mechanism, frequency modulation AFM. Despite the nomenclature, repulsive contact can occur or be avoided both in amplitude modulation AFM and frequency modulation AFM, depending on the settings. ==== Contact mode ==== In contact mode, the tip is "dragged" across the surface of the sample and the contours of the surface are measured either using the deflection of the cantilever directly or, more commonly, using the feedback signal required to keep the cantilever at a constant position. Because the measurement of a static signal is prone to noise and drift, low stiffness cantilevers (i.e. cantilevers with a low spring constant, k) are used to achieve a large enough deflection signal while keeping the interaction force low. Close to the surface of the sample, attractive forces can be quite strong, causing the tip to "snap-in" to the surface. Thus, contact mode AFM is almost always done at a depth where the overall force is repulsive, that is, in firm "contact" with the solid surface. ==== Tapping mode ==== In ambient conditions, most samples develop a liquid meniscus layer. Because of this, keeping the probe tip close enough to the sample for short-range forces to become detectable while preventing the tip from sticking to the surface presents a major problem for contact mode in ambient conditions. Dynamic contact mode (also called intermittent contact, AC mode or tapping mode) was developed to bypass this problem. Nowadays, tapping mode is the most frequently used AFM mode when operating in ambient conditions or in liquids. In tapping mode, the cantilever is driven to oscillate up and down at or near its resonance frequency. This oscillation is commonly achieved with a small piezo element in the cantilever holder, but other possibilities include an AC magnetic field (with magnetic cantilevers), piezoelectric cantilevers, or periodic heating with a modulated laser beam. 
The amplitude of this oscillation usually varies from several nm to 200 nm. In tapping mode, the frequency and amplitude of the driving signal are kept constant, leading to a constant amplitude of the cantilever oscillation as long as there is no drift or interaction with the surface. The forces acting on the cantilever when the tip comes close to the surface (van der Waals forces, dipole–dipole interactions, electrostatic forces, etc.) cause the amplitude of the cantilever's oscillation to change (usually decrease) as the tip gets closer to the sample. This amplitude is used as the parameter that goes into the electronic servo that controls the height of the cantilever above the sample. The servo adjusts the height to maintain a set cantilever oscillation amplitude as the cantilever is scanned over the sample. A tapping AFM image is therefore produced by imaging the force of the intermittent contacts of the tip with the sample surface. Although the peak forces applied during the contacting part of the oscillation can be much higher than those typically used in contact mode, tapping mode generally lessens the damage done to the surface and the tip compared to contact mode. This can be explained by the short duration of the applied force, and because the lateral forces between tip and sample are significantly lower in tapping mode than in contact mode. Tapping mode imaging is gentle enough even for the visualization of supported lipid bilayers or adsorbed single polymer molecules (for instance, 0.4 nm thick chains of synthetic polyelectrolytes) under liquid medium. With proper scanning parameters, the conformation of single molecules can remain unchanged for hours, and even single molecular motors can be imaged while moving. When operating in tapping mode, the phase of the cantilever's oscillation with respect to the driving signal can be recorded as well. This signal channel contains information about the energy dissipated by the cantilever in each oscillation cycle. Samples that contain regions of varying stiffness or with different adhesion properties can give a contrast in this channel that is not visible in the topographic image. Extracting the sample's material properties in a quantitative manner from phase images, however, is often not feasible. ==== Non-contact mode ==== In non-contact atomic force microscopy mode, the tip of the cantilever does not contact the sample surface. The cantilever is instead oscillated at either its resonant frequency (frequency modulation) or just above it (amplitude modulation), where the amplitude of oscillation is typically a few nanometers (<10 nm) down to a few picometers. The van der Waals forces, which are strongest from 1 nm to 10 nm above the surface, or any other long-range forces that extend above the surface, act to decrease the resonance frequency of the cantilever. This decrease in resonant frequency, combined with the feedback loop system, maintains a constant oscillation amplitude or frequency by adjusting the average tip-to-sample distance. Measuring the tip-to-sample distance at each (x,y) data point allows the scanning software to construct a topographic image of the sample surface. Non-contact mode AFM does not suffer from the tip or sample degradation effects that are sometimes observed after taking numerous scans with contact AFM. This makes non-contact AFM preferable to contact AFM for measuring soft samples, e.g. biological samples and organic thin films.
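For small force gradients, the frequency shift described above is commonly approximated by a standard textbook relation, stated here for orientation, where k is the cantilever spring constant, f0 its free resonance frequency, and F the tip-sample force: Δ f ≈ − f 0 2 k ∂ F ∂ z {\displaystyle \Delta f\approx -{\frac {f_{0}}{2k}}\,{\frac {\partial F}{\partial z}}} For an attractive van der Waals interaction the force gradient ∂F/∂z is positive, so Δf is negative, consistent with the decrease in resonance frequency just described.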
In the case of rigid samples, contact and non-contact images may look the same. However, if a few monolayers of adsorbed fluid are lying on the surface of a rigid sample, the images may look quite different. An AFM operating in contact mode will penetrate the liquid layer to image the underlying surface, whereas in non-contact mode an AFM will oscillate above the adsorbed fluid layer to image both the liquid and the surface. Schemes for dynamic mode operation include frequency modulation, where a phase-locked loop is used to track the cantilever's resonance frequency, and the more common amplitude modulation, where a servo loop keeps the cantilever excitation at a defined amplitude. In frequency modulation, changes in the oscillation frequency provide information about tip-sample interactions. Frequency can be measured with very high sensitivity, and thus the frequency modulation mode allows for the use of very stiff cantilevers. Stiff cantilevers provide stability very close to the surface and, as a result, this technique was the first AFM technique to provide true atomic resolution in ultra-high vacuum conditions. In amplitude modulation, changes in the oscillation amplitude or phase provide the feedback signal for imaging. Changes in the phase of oscillation can also be used to discriminate between different types of materials on the surface. Amplitude modulation can be operated either in the non-contact or in the intermittent contact regime. In dynamic contact mode, the cantilever is oscillated such that the separation distance between the cantilever tip and the sample surface is modulated. Amplitude modulation has also been used in the non-contact regime to image with atomic resolution by using very stiff cantilevers and small amplitudes in an ultra-high vacuum environment. == Topographic image == Image formation is a plotting method that produces a color map by changing the x–y position of the tip while scanning and recording the measured variable, i.e. the intensity of the control signal, at each x–y coordinate. The color map shows the measured value corresponding to each coordinate. The image expresses the intensity of a value as a hue. Usually, the correspondence between the intensity of a value and a hue is shown as a color scale in the explanatory notes accompanying the image. Imaging modes of the AFM are generally classified into two groups according to whether a z-feedback loop (not shown) is used to maintain the tip-sample distance and thereby keep the signal intensity reported by the detector constant. The first group (using the z-feedback loop) is referred to as "constant XX mode", where XX is the quantity held constant by the z-feedback loop. Topographic image formation is based on this "constant XX mode": the z-feedback loop controls the relative distance between the probe and the sample by outputting control signals that keep constant one of the frequency, vibration amplitude, or phase of the cantilever motion (for instance, a voltage is applied to the z-piezoelectric element, which moves the sample up and down in the z direction). === Topographic image of FM-AFM === When the distance between the probe and the sample is brought to the range where atomic force may be detected, while the cantilever is excited at its natural eigenfrequency (f0), the resonance frequency f of the cantilever may shift from its original resonance frequency.
In other words, in the range where atomic force may be detected, a frequency shift (df = f − f0) will also be observed. When the distance between the probe and the sample is in the non-contact region, the frequency shift becomes more negative as the distance between the probe and the sample gets smaller. When the sample has peaks and valleys, the distance between the tip apex and the sample varies accordingly as the sample is scanned along the x–y direction (without height regulation in the z direction). As a result, a frequency shift arises. The image in which the frequency values obtained by a raster scan along the x–y direction of the sample surface are plotted against the x–y coordinates of each measurement point is called a constant-height image. On the other hand, df may be kept constant by moving the probe upward and downward (see (3) of Fig. 5) in the z direction using negative feedback (the z-feedback loop) during the raster scan of the sample surface along the x–y direction. The image in which the amounts of this negative feedback (the distance the probe moves upward and downward in the z direction) are plotted against the x–y coordinates of each measurement point is a topographic image. In other words, the topographic image is a trace of the tip of the probe regulated so that df is constant, and it may also be considered a plot of a surface of constant df. Therefore, the topographic image of the AFM is not the exact surface morphology itself, but an image influenced by the bond order between the probe and the sample; however, the topographic image of the AFM is considered to reflect the geographical shape of the surface more faithfully than the topographic image of a scanning tunneling microscope. == Force spectroscopy == Besides imaging, AFM can be used for force spectroscopy, the direct measurement of tip-sample interaction forces as a function of the gap between the tip and sample. The result of this measurement is called a force-distance curve. For this method, the AFM tip is extended towards and retracted from the surface as the deflection of the cantilever is monitored as a function of piezoelectric displacement. These measurements have been used to measure nanoscale contacts, atomic bonding, van der Waals forces, Casimir forces, dissolution forces in liquids, and single-molecule stretching and rupture forces. AFM has also been used to measure, in an aqueous environment, the dispersion force due to polymer adsorbed on the substrate. Forces of the order of a few piconewtons can now be routinely measured with a vertical distance resolution of better than 0.1 nanometers. Force spectroscopy can be performed with either static or dynamic modes. In dynamic modes, information about the cantilever vibration is monitored in addition to the static deflection. Problems with the technique include no direct measurement of the tip-sample separation and the common need for low-stiffness cantilevers, which tend to "snap" to the surface. These problems are not insurmountable. An AFM that directly measures the tip-sample separation has been developed. The snap-in can be reduced by measuring in liquids or by using stiffer cantilevers, but in the latter case a more sensitive deflection sensor is needed. By applying a small dither to the tip, the stiffness (force gradient) of the bond can be measured as well.
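A minimal sketch of the basic force-curve conversion implied above, assuming the deflection signal has already been calibrated to nanometers; the spring constant, the test data, and the separation sign convention are illustrative, as conventions vary between instruments:

import numpy as np

def force_distance_curve(z_piezo_nm, deflection_nm, k_N_per_m=0.1):
    """Convert piezo displacement and cantilever deflection into a
    force vs. tip-sample separation curve via Hooke's law, F = k*d.
    k in N/m times d in nm conveniently yields force in nN."""
    z = np.asarray(z_piezo_nm, dtype=float)
    d = np.asarray(deflection_nm, dtype=float)
    force_nN = k_N_per_m * d
    separation_nm = z - d  # cantilever bending changes the true gap
    return separation_nm, force_nN

# Illustrative data: deflection grows as the piezo pushes the tip in.
z = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
d = np.array([0.0, 0.1, 0.5, 2.0, 6.0])
print(force_distance_curve(z, d))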
=== Biological applications and other === Force spectroscopy is used in biophysics to measure the mechanical properties of living material (such as tissue or cells), or to detect structures of different stiffness buried in the bulk of the sample using stiffness tomography. Another application is to measure the interaction forces between, on the one hand, a material stuck on the tip of the cantilever and, on the other hand, the surface of particles that are either bare or coated with the same material. From the adhesion force distribution curve, a mean value of the forces can be derived, making it possible to map which areas of the particle surface are covered by the material. AFM has also been used for mechanically unfolding proteins. In such experiments, analysis of the mean unfolding forces with an appropriate model yields information about the unfolding rate and the free-energy profile parameters of the protein. == Identification of individual surface atoms == The AFM can be used to image atoms and structures on a variety of surfaces. The atom at the apex of the tip "senses" individual atoms on the underlying surface as it begins to form chemical bonds with each of them. Because these chemical interactions subtly alter the tip's vibration frequency, they can be detected and mapped. This principle was used to distinguish between atoms of silicon, tin and lead on an alloy surface, by comparing these atomic fingerprints with values obtained from density functional theory (DFT) simulations. Interaction forces must be measured precisely for each type of atom expected in the sample, and then compared with the forces given by DFT simulations. It was found that the tip interacted most strongly with silicon atoms, and interacted 24% and 41% less strongly with tin and lead atoms, respectively. Using this information, each different type of atom could be identified in the matrix. == Probe == An AFM probe has a sharp tip on the free-swinging end of a cantilever that protrudes from a holder. The dimensions of the cantilever are on the scale of micrometers. The radius of the tip is usually on the scale of a few nanometers to a few tens of nanometers. (Specialized probes exist with much larger end radii, for example probes for indentation of soft materials.) The cantilever holder, also called the holder chip – often 1.6 mm by 3.4 mm in size – allows the operator to hold the AFM cantilever/probe assembly with tweezers and fit it into the corresponding holder clips on the scanning head of the atomic force microscope. This device is most commonly called an "AFM probe", but other names include "AFM tip" and "cantilever" (employing the name of a single part as the name of the whole device). An AFM probe is a particular type of SPM probe. AFM probes are manufactured with MEMS technology. Most AFM probes used are made from silicon (Si), but borosilicate glass and silicon nitride are also in use. AFM probes are considered consumables, as they are often replaced when the tip apex becomes dull or contaminated, or when the cantilever is broken. They can cost from a couple of tens of dollars up to hundreds of dollars per cantilever for the most specialized cantilever/probe combinations. To use the device, the tip is brought very close to the surface of the object under investigation, and the cantilever is deflected by the interaction between the tip and the surface, which is what the AFM is designed to measure.
A spatial map of the interaction can be made by measuring the deflection at many points on a 2D surface. Several types of interaction can be detected. Depending on the interaction under investigation, the surface of the tip of the AFM probe needs to be modified with a coating. Among the coatings used are gold, for covalent bonding of biological molecules and the detection of their interaction with a surface; diamond, for increased wear resistance; and magnetic coatings, for detecting the magnetic properties of the investigated surface. Another solution exists to achieve high-resolution magnetic imaging: equipping the probe with a microSQUID. The AFM tips are fabricated using silicon micromachining, and the precise positioning of the microSQUID loop is achieved using electron beam lithography. The additional attachment of a quantum dot to the tip apex of a conductive probe enables surface potential imaging with high lateral resolution, scanning quantum dot microscopy. The surface of the cantilevers can also be modified. These coatings are mostly applied in order to increase the reflectance of the cantilever and to improve the deflection signal. == Forces as a function of tip geometry == The forces between the tip and the sample strongly depend on the geometry of the tip. Various studies in recent years have derived expressions for these forces as a function of the tip parameters. Among the different forces between the tip and the sample, the water meniscus forces are highly interesting, both in air and in liquid environments. Other forces must be considered as well, such as the Coulomb force, van der Waals forces, double layer interactions, solvation forces, and hydration and hydrophobic forces. === Water meniscus === Water meniscus forces are particularly relevant for AFM measurements in air. Due to the ambient humidity, a thin layer of water forms between the tip and the sample during air measurements. The resulting capillary force gives rise to a strong attractive force that pulls the tip onto the surface. In fact, the adhesion force measured between tip and sample in ambient air of finite humidity is usually dominated by capillary forces. As a consequence, it is difficult to pull the tip away from the surface. For soft samples, including many polymers and in particular biological materials, the strong adhesive capillary force gives rise to sample degradation and destruction upon imaging in contact mode. Historically, these problems were an important motivation for the development of dynamic imaging in air (e.g. "tapping mode"). During tapping mode imaging in air, capillary bridges still form. Yet, for suitable imaging conditions, the capillary bridges are formed and broken in every oscillation cycle of the cantilever normal to the surface, as can be inferred from an analysis of cantilever amplitude and phase vs. distance curves. As a consequence, destructive shear forces are largely reduced and soft samples can be investigated. In order to quantify the equilibrium capillary force, it is necessary to start from the Laplace equation for pressure: P = γ L ( 1 r 1 + 1 r 0 ) ≃ γ L r e f f {\displaystyle P=\gamma _{L}\left({\frac {1}{r_{1}}}+{\frac {1}{r_{0}}}\right)\simeq {\frac {\gamma _{L}}{r_{eff}}}} where γL is the surface energy and r0 and r1 are defined in the figure.
The pressure is applied on an area of A ≃ 2 π R [ r e f f ( 1 + cos θ ) + h ] {\displaystyle A\simeq 2\pi R[r_{eff}(1+\cos \theta )+h]} where θ is the angle between the tip's surface and the liquid's surface, while h is the height difference between the surrounding liquid and the top of the meniscus. The force that pulls together the two surfaces is F = 2 π R γ L ( 1 + cos θ + h r e f f ) {\displaystyle F=2\pi R\gamma _{L}\left(1+\cos \theta +{\frac {h}{r_{eff}}}\right)} The same formula can also be calculated as a function of relative humidity. Gao calculated formulas for different tip geometries. As an example, the force decreases by 20% for a conical tip with respect to a spherical tip. When these forces are calculated, a distinction must be made between the dry-on-wet situation and the wet-on-wet situation. For a spherical tip, the force is: f m = − 2 π R γ L ( cos θ + cos ϕ ) ( 1 − d h d D ) {\displaystyle f_{m}=-2\pi R\gamma _{L}(\cos \theta +\cos \phi )\left(1-{\frac {dh}{dD}}\right)} for dry on wet, f m = − 2 π R γ L d r 0 d D {\displaystyle f_{m}=-2\pi R\gamma _{L}{\frac {dr_{0}}{dD}}} for wet on wet, where θ is the contact angle of the dry sphere and φ is the immersed angle, as shown in the figure. For a conical tip, the formula becomes: f m = − 2 π R γ L tan δ cos δ ( cos θ + sin δ ) ( h D ) ( 1 − d h d D ) {\displaystyle f_{m}=-2\pi R\gamma _{L}{\frac {\tan \delta }{\cos \delta }}(\cos \theta +\sin \delta )(hD)\left(1-{\frac {dh}{dD}}\right)} for dry on wet, and f m = − 2 π R γ L ( 1 cos δ + sin δ ) ( r 0 ) ( d r 0 d D ) {\displaystyle f_{m}=-2\pi R\gamma _{L}\left({\frac {1}{\cos \delta }}+\sin \delta \right)(r_{0})\left({\frac {dr_{0}}{dD}}\right)} for wet on wet, where δ is the half cone angle and r0 and h are parameters of the meniscus profile. == AFM cantilever-deflection measurement == === Beam-deflection measurement === The most common method for cantilever-deflection measurements is the beam-deflection method. In this method, laser light from a solid-state diode is reflected off the back of the cantilever and collected by a position-sensitive detector (PSD) consisting of two closely spaced photodiodes, whose output signal is collected by a differential amplifier. Angular displacement of the cantilever results in one photodiode collecting more light than the other photodiode, producing an output signal (the difference between the photodiode signals normalized by their sum) which is proportional to the deflection of the cantilever. The sensitivity of the beam-deflection method is very high, and a noise floor on the order of 10 fm/√Hz can be obtained routinely in a well-designed system. Although this method is sometimes called the "optical lever" method, the signal is not amplified if the beam path is made longer. A longer beam path increases the motion of the reflected spot on the photodiodes, but also widens the spot by the same amount due to diffraction, so that the same amount of optical power is moved from one photodiode to the other. The "optical leverage" (output signal of the detector divided by deflection of the cantilever) is inversely proportional to the numerical aperture of the beam focusing optics, as long as the focused laser spot is small enough to fall completely on the cantilever. It is also inversely proportional to the length of the cantilever.
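A minimal sketch of the normalized difference signal just described; the photodiode readings and the calibration factor are illustrative values, not from any particular instrument:

def psd_deflection(photodiode_a: float, photodiode_b: float,
                   sensitivity_nm: float = 1.0) -> float:
    """Beam-deflection readout: normalized difference of the two
    photodiode signals, scaled by a calibration factor in nm.
    Normalizing by the sum makes the result insensitive to slow
    fluctuations in total laser power."""
    total = photodiode_a + photodiode_b
    if total == 0:
        raise ValueError("no light on the detector")
    return sensitivity_nm * (photodiode_a - photodiode_b) / total

print(psd_deflection(1.02, 0.98))  # 0.02 nm for this slight imbalance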
The relative popularity of the beam-deflection method can be explained by its high sensitivity and simple operation, and by the fact that cantilevers do not require electrical contacts or other special treatments, and can therefore be fabricated relatively cheaply with sharp integrated tips. === Other deflection-measurement methods === Many other methods for cantilever-deflection measurement exist. Piezoelectric detection – Cantilevers made from quartz (such as the qPlus configuration) or other piezoelectric materials can directly detect deflection as an electrical signal. Cantilever oscillations down to 10 pm have been detected with this method. Laser Doppler vibrometry – A laser Doppler vibrometer can be used to produce very accurate deflection measurements for an oscillating cantilever (thus it is only used in non-contact mode). This method is expensive and is only used by relatively few groups. Scanning tunneling microscope (STM) – The first atomic force microscope used an STM, complete with its own feedback mechanism, to measure deflection. This method is very difficult to implement and is slow to react to deflection changes compared to modern methods. Optical interferometry – Optical interferometry can be used to measure cantilever deflection. Due to the nanometre-scale deflections measured in AFM, the interferometer runs in the sub-fringe regime; thus, any drift in laser power or wavelength has strong effects on the measurement. For these reasons, optical interferometer measurements must be done with great care (for example, using index-matching fluids between optical fibre junctions) and with very stable lasers, and as a result optical interferometry is rarely used. Capacitive detection – Metal-coated cantilevers can form a capacitor with another contact located behind the cantilever. Deflection changes the distance between the contacts and can be measured as a change in capacitance. Piezoresistive detection – Cantilevers can be fabricated with piezoresistive elements that act as a strain gauge. Using a Wheatstone bridge, strain in the AFM cantilever due to deflection can be measured. This is not commonly used in vacuum applications, as the piezoresistive detection dissipates energy from the system, affecting the Q of the resonance. == Piezoelectric scanners == AFM scanners are made from piezoelectric material, which expands and contracts proportionally to an applied voltage. Whether they elongate or contract depends upon the polarity of the voltage applied. Traditionally, the tip or sample is mounted on a "tripod" of three piezo crystals, each responsible for scanning in the x, y or z direction. In 1986, the same year the AFM was invented, a new piezoelectric scanner, the tube scanner, was developed for use in STM. Later, tube scanners were incorporated into AFMs. The tube scanner can move the sample in the x, y, and z directions using a single tube piezo with a single interior contact and four external contacts. An advantage of the tube scanner compared to the original tripod design is better vibrational isolation, resulting from the higher resonant frequency of the single-element construction, in combination with a low-resonant-frequency isolation stage. A disadvantage is that the x–y motion can cause unwanted z motion, resulting in distortion. Another popular design for AFM scanners is the flexure stage, which uses separate piezos for each axis and couples them through a flexure mechanism.
Scanners are characterized by their sensitivity, which is the ratio of piezo movement to piezo voltage, i.e., by how much the piezo material extends or contracts per applied volt. Due to differences in material or size, the sensitivity varies from scanner to scanner. Sensitivity varies non-linearly with respect to scan size. Piezo scanners exhibit more sensitivity at the end than at the beginning of a scan. This causes the forward and reverse scans to behave differently and display hysteresis between the two scan directions. This can be corrected by applying a non-linear voltage to the piezo electrodes to cause linear scanner movement, and calibrating the scanner accordingly. One disadvantage of this approach is that it requires re-calibration, because the precise non-linear voltage needed to correct non-linear movement will change as the piezo ages (see below). This problem can be circumvented by adding a linear sensor to the sample stage or piezo stage to detect the true movement of the piezo. Deviations from ideal movement can be detected by the sensor and corrections applied to the piezo drive signal to compensate. This design is known as a "closed loop" AFM. AFMs without such position sensors are referred to as "open loop" AFMs. The sensitivity of piezoelectric materials decreases exponentially with time, so most of the change in sensitivity occurs in the initial stages of the scanner's life. Piezoelectric scanners are run for approximately 48 hours before they are shipped from the factory, so that they are past the point where they may have large changes in sensitivity. As the scanner ages, the sensitivity changes less with time and the scanner seldom requires recalibration, though various manufacturer manuals recommend monthly to semi-monthly calibration of open-loop AFMs. == Advantages and disadvantages == === Advantages === AFM has several advantages over the scanning electron microscope (SEM). Unlike the electron microscope, which provides a two-dimensional projection or a two-dimensional image of a sample, the AFM provides a three-dimensional surface profile. In addition, samples viewed by AFM do not require any special treatments (such as metal/carbon coatings) that would irreversibly change or damage the sample, and AFM does not typically suffer from charging artifacts in the final image. While an electron microscope needs an expensive vacuum environment for proper operation, most AFM modes can work perfectly well in ambient air or even a liquid environment. This makes it possible to study biological macromolecules and even living organisms. In principle, AFM can provide higher resolution than SEM. It has been shown to give true atomic resolution in ultra-high vacuum (UHV) and, more recently, in liquid environments. High-resolution AFM is comparable in resolution to scanning tunneling microscopy and transmission electron microscopy. AFM can also be combined with a variety of optical microscopy and spectroscopy techniques such as fluorescence microscopy or infrared spectroscopy, giving rise to scanning near-field optical microscopy and nano-FTIR, further expanding its applicability. Combined AFM-optical instruments have been applied primarily in the biological sciences but have recently attracted strong interest in photovoltaics and energy-storage research, polymer sciences, nanotechnology and even medical research. === Disadvantages === A disadvantage of AFM compared with the scanning electron microscope (SEM) is the single-scan image size.
In one pass, the SEM can image an area on the order of square millimeters with a depth of field on the order of millimeters, whereas the AFM can only image a maximum scanning area of about 150×150 micrometers and a maximum height on the order of 10–20 micrometers. One method of improving the scanned area size for AFM is by using parallel probes in a fashion similar to that of millipede data storage. The scanning speed of an AFM is also a limitation. Traditionally, an AFM cannot scan images as fast as an SEM, requiring several minutes for a typical scan, while an SEM is capable of scanning at near real time, although at relatively low quality. The relatively slow rate of scanning during AFM imaging often leads to thermal drift in the image, making the AFM less suited for measuring accurate distances between topographical features on the image. However, several fast-acting designs have been suggested to increase microscope scanning productivity, including what is termed videoAFM (videoAFM obtains reasonable-quality images at video rate, faster than the average SEM). To eliminate image distortions induced by thermal drift, several methods have been introduced. AFM images can also be affected by nonlinearity, hysteresis, and creep of the piezoelectric material, and by cross-talk between the x, y and z axes, which may require software enhancement and filtering. Such filtering could "flatten" out real topographical features. However, newer AFMs utilize real-time correction software (for example, feature-oriented scanning) or closed-loop scanners, which practically eliminate these problems. Some AFMs also use separated orthogonal scanners (as opposed to a single tube), which also serve to eliminate part of the cross-talk problems. As with any other imaging technique, there is the possibility of image artifacts, which could be induced by an unsuitable tip, a poor operating environment, or even by the sample itself, as depicted on the right. These image artifacts are unavoidable; however, their occurrence and effect on results can be reduced through various methods. A too-coarse tip can result, for example, from inappropriate handling, or from collisions with the sample caused by scanning too fast or by an unreasonably rough surface, which wears the tip. Due to the nature of AFM probes, they cannot normally measure steep walls or overhangs. Specially made cantilevers and AFMs can be used to modulate the probe sideways as well as up and down (as with dynamic contact and non-contact modes) to measure sidewalls, at the cost of more expensive cantilevers, lower lateral resolution and additional artifacts. == Other applications in various fields of study == The latest efforts in integrating nanotechnology and biological research have been successful and show much promise for the future, including in fields such as nanobiomechanics. Since nanoparticles are a potential vehicle for drug delivery, the biological responses of cells to these nanoparticles are continuously being explored in order to optimize their efficacy and improve their design. Pyrgiotakis et al. were able to study the interaction between CeO2 and Fe2O3 engineered nanoparticles and cells by attaching the engineered nanoparticles to the AFM tip. Studies have taken advantage of AFM to obtain further information on the behavior of live cells in biological media.
Real-time atomic force spectroscopy (or nanoscopy) and dynamic atomic force spectroscopy have been used to study live cells and membrane proteins and their dynamic behavior at high resolution, on the nanoscale. Imaging and obtaining information on the topography and the properties of the cells have also given insight into chemical processes and mechanisms that occur through cell-cell interaction and interactions with other signaling molecules (e.g. ligands). Evans and Calderwood used single-cell force microscopy to study cell adhesion forces, bond kinetics/dynamic bond strength and their role in chemical processes such as cell signaling. Scheuring, Lévy, and Rigaud reviewed studies in which AFM was used to explore the crystal structure of membrane proteins of photosynthetic bacteria. Alsteen et al. have used AFM-based nanoscopy to perform a real-time analysis of the interaction between live mycobacteria and antimycobacterial drugs (specifically isoniazid, ethionamide, ethambutol, and streptomycin), which serves as an example of the more in-depth analysis of pathogen-drug interactions that can be done through AFM. == See also == Science portal == References == == Further reading == Voigtländer, Bert (2019). Atomic Force Microscopy. NanoScience and Technology. Springer. Bibcode:2019afm..book.....V. doi:10.1007/978-3-030-13654-3. ISBN 978-3-030-13653-6. S2CID 199490753. Carpick, Robert W.; Salmeron, Miquel (1997). "Scratching the Surface: Fundamental Investigations of Tribology with Atomic Force Microscopy". Chemical Reviews. 97 (4): 1163–1194. doi:10.1021/cr960068q. ISSN 0009-2665. PMID 11851446. Giessibl, Franz J. (2003). "Advances in atomic force microscopy". Reviews of Modern Physics. 75 (3): 949–983. arXiv:cond-mat/0305119. Bibcode:2003RvMP...75..949G. doi:10.1103/RevModPhys.75.949. ISSN 0034-6861. S2CID 18924292. Garcia, Ricardo; Knoll, Armin; Riedo, Elisa (2014). "Advanced Scanning Probe Lithography". Nature Nanotechnology. 9 (8): 577–87. arXiv:1505.01260. Bibcode:2014NatNa...9..577G. doi:10.1038/NNANO.2014.157. PMID 25091447. S2CID 205450948. García, Ricardo; Pérez, Rubén (2002). "Dynamic atomic force microscopy methods". Surface Science Reports. 47 (6–8): 197–301. Bibcode:2002SurSR..47..197G. doi:10.1016/S0167-5729(02)00077-8. == External links == The Inner Workings of an AFM – An Animated Explanation, WeCanFigureThisOut.org
Wikipedia/Atomic_force_microscopy
The term conceptual model refers to any model that is formed after a conceptualization or generalization process. Conceptual models are often abstractions of things in the real world, whether physical or social. Semantic studies are relevant to various stages of concept formation. Semantics is fundamentally a study of concepts, the meaning that thinking beings give to various elements of their experience. == Overview == === Concept models and conceptual models === The value of a conceptual model is usually directly proportional to how well it corresponds to a past, present, future, actual or potential state of affairs. A concept model (a model of a concept) is quite different, because in order to be a good model it need not have this real-world correspondence. In artificial intelligence, conceptual models and conceptual graphs are used for building expert systems and knowledge-based systems; here the analysts are concerned with representing expert opinion on what is true, not their own ideas on what is true. === Type and scope of conceptual models === Conceptual models range in type from the more concrete, such as the mental image of a familiar physical object, to the formal generality and abstractness of mathematical models which do not appear to the mind as an image. Conceptual models also range in terms of the scope of the subject matter that they are taken to represent. A model may, for instance, represent a single thing (e.g. the Statue of Liberty), whole classes of things (e.g. the electron), or even very vast domains of subject matter such as the physical universe. The variety and scope of conceptual models is due to the variety of purposes for using them. Conceptual modeling is the activity of formally describing some aspects of the physical and social world around us for the purposes of understanding and communication. === Fundamental objectives === A conceptual model's primary objective is to convey the fundamental principles and basic functionality of the system which it represents. Also, a conceptual model must be developed in such a way as to provide an easily understood system interpretation for the model's users. A conceptual model, when implemented properly, should satisfy four fundamental objectives. Enhance an individual's understanding of the representative system Facilitate efficient conveyance of system details between stakeholders Provide a point of reference for system designers to extract system specifications Document the system for future reference and provide a means for collaboration The conceptual model plays an important role in the overall system development life cycle. Figure 1 below depicts the role of the conceptual model in a typical system development scheme. If the conceptual model is not fully developed, fundamental system properties may not be implemented properly, giving way to future problems or system shortfalls. Such failures do occur in the industry and have been linked to lack of user input, incomplete or unclear requirements, and changing requirements. Those weak links in the system design and development process can be traced to improper execution of the fundamental objectives of conceptual modeling. The importance of conceptual modeling is evident when such systemic failures are mitigated by thorough system development and adherence to proven development objectives/techniques.
== Modelling techniques == Numerous techniques can be applied across multiple disciplines to increase the user's understanding of the system to be modeled. A few techniques are briefly described in the following text; however, many more exist or are being developed. Some commonly used conceptual modeling techniques and methods include: workflow modeling, workforce modeling, rapid application development, object-role modeling, and the Unified Modeling Language (UML). === Data flow modeling === Data flow modeling (DFM) is a basic conceptual modeling technique that graphically represents elements of a system. DFM is a fairly simple technique; however, like many conceptual modeling techniques, it is possible to construct higher- and lower-level representative diagrams. The data flow diagram usually does not convey complex system details, such as parallel development considerations or timing information, but rather works to bring the major system functions into context. Data flow modeling is a central technique used in systems development that utilizes the structured systems analysis and design method (SSADM). === Entity relationship modeling === Entity–relationship modeling (ERM) is a conceptual modeling technique used primarily for software system representation. Entity–relationship diagrams, which are a product of executing the ERM technique, are normally used to represent database models and information systems. The main components of the diagram are the entities and relationships. The entities can represent independent functions, objects, or events. The relationships are responsible for relating the entities to one another. To form a system process, the relationships are combined with the entities and any attributes needed to further describe the process (a small code sketch follows at the end of this section). Multiple diagramming conventions exist for this technique: IDEF1X, Bachman, and EXPRESS, to name a few. These conventions are just different ways of viewing and organizing the data to represent different system aspects. === Event-driven process chain === The event-driven process chain (EPC) is a conceptual modeling technique which is mainly used to systematically improve business process flows. Like most conceptual modeling techniques, the event-driven process chain consists of entities/elements and functions that allow relationships to be developed and processed. More specifically, the EPC is made up of events which define what state a process is in or the rules by which it operates. In order to progress through events, a function (active event) must be executed. Depending on the process flow, the function has the ability to transform event states or link to other event-driven process chains. Other elements exist within an EPC, all of which work together to define how and by what rules the system operates. The EPC technique can be applied to business practices such as resource planning, process improvement, and logistics. === Joint application development === The dynamic systems development method uses a specific process called JEFFF to conceptually model a system's life cycle. JEFFF is intended to focus more on the higher-level development planning that precedes a project's initialization. The JAD process calls for a series of workshops in which the participants work to identify, define, and generally map a successful project from conception to completion. This method has been found not to work well for large-scale applications; however, smaller applications usually report some net gain in efficiency.
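Referring back to the entity–relationship subsection above, the sketch below shows one minimal way such entities, attributes and relationships might be rendered in code; the Customer and Order entities and the "places" relationship are invented for illustration only.

from dataclasses import dataclass, field

@dataclass
class Customer:            # entity
    customer_id: int       # attribute (key)
    name: str              # attribute

@dataclass
class Order:               # entity
    order_id: int
    total: float

@dataclass
class Places:              # relationship: one Customer places many Orders
    customer: Customer
    orders: list = field(default_factory=list)

alice = Customer(1, "Alice")
rel = Places(alice, [Order(100, 9.99), Order(101, 24.50)])
print(len(rel.orders))     # 2: one entity related to many others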
=== Place/transition net === Also known as Petri nets, this conceptual modeling technique allows a system to be constructed with elements that can be described by direct mathematical means. The Petri net, because of its nondeterministic execution properties and well-defined mathematical theory, is a useful technique for modeling concurrent system behavior, i.e. simultaneous process executions. === State transition modeling === State transition modeling makes use of state transition diagrams to describe system behavior. These state transition diagrams use distinct states to define system behavior and changes. Most current modeling tools contain some kind of ability to represent state transition modeling. The use of state transition models can be most easily recognized as logic state diagrams and directed graphs for finite-state machines. === Technique evaluation and selection === Because the conceptual modeling method can sometimes be purposefully vague to account for a broad area of use, the actual application of concept modeling can become difficult. To alleviate this issue, and to shed some light on what to consider when selecting an appropriate conceptual modeling technique, the framework proposed by Gemino and Wand will be discussed in the following text. However, before evaluating the effectiveness of a conceptual modeling technique for a particular application, an important concept must be understood: comparing conceptual models by focusing specifically on their graphical or top-level representations is shortsighted. Gemino and Wand make a good point when arguing that the emphasis should be placed on a conceptual modeling language when choosing an appropriate technique. In general, a conceptual model is developed using some form of conceptual modeling technique. That technique will utilize a conceptual modeling language that determines the rules for how the model is arrived at. Understanding the capabilities of the specific language used is inherent to properly evaluating a conceptual modeling technique, as the language reflects the technique's descriptive ability. Also, the conceptual modeling language will directly influence the depth at which the system is capable of being represented, whether it be complex or simple. ==== Considering affecting factors ==== Building on some of their earlier work, Gemino and Wand acknowledge some main points to consider when studying the affecting factors: the content that the conceptual model must represent, the method in which the model will be presented, the characteristics of the model's users, and the conceptual modeling language's specific task. The conceptual model's content should be considered in order to select a technique that would allow relevant information to be presented. The presentation method for selection purposes would focus on the technique's ability to represent the model at the intended level of depth and detail. The characteristics of the model's users or participants are an important aspect to consider. A participant's background and experience should coincide with the conceptual model's complexity; otherwise, misrepresentation of the system or misunderstanding of key system concepts could lead to problems in that system's realization. The conceptual modeling language task will further allow an appropriate technique to be chosen.
The difference between creating a system conceptual model to convey system functionality and creating a system conceptual model to interpret that functionality could involve two completely different types of conceptual modeling languages. ==== Considering affected variables ==== Gemino and Wand go on to expand the affected-variable content of their proposed framework by considering the focus of observation and the criterion for comparison. The focus of observation considers whether the conceptual modeling technique will create a "new product", or whether the technique will only bring about a more intimate understanding of the system being modeled. The criterion for comparison would weigh the ability of the conceptual modeling technique to be efficient or effective. A conceptual modeling technique that allows for development of a system model which takes all system variables into account at a high level may make the process of understanding the system functionality more efficient, but the technique lacks the necessary information to explain the internal processes, rendering the model less effective. When deciding which conceptual technique to use, the recommendations of Gemino and Wand can be applied in order to properly evaluate the scope of the conceptual model in question. Understanding the conceptual model's scope will lead to a more informed selection of a technique that properly addresses that particular model. In summary, when deciding between modeling techniques, answering the following questions would allow one to address some important conceptual modeling considerations. What content will the conceptual model represent? How will the conceptual model be presented? Who will be using or participating in the conceptual model? How will the conceptual model describe the system? What is the conceptual model's focus of observation? Will the conceptual model be efficient or effective in describing the system? Another function of the simulation conceptual model is to provide a rational and factual basis for assessment of simulation application appropriateness. == Models in philosophy and science == === Mental model === In cognitive psychology and philosophy of mind, a mental model is a representation of something in the mind, but a mental model may also refer to a nonphysical external model of the mind itself. === Metaphysical models === A metaphysical model is a type of conceptual model which is distinguished from other conceptual models by its proposed scope; a metaphysical model intends to represent reality in the broadest possible way. This is to say that it explains the answers to fundamental questions, such as whether matter and mind are one or two substances, or whether or not humans have free will. === Epistemological models === An epistemological model is a type of conceptual model whose proposed scope is the known and the knowable, and the believed and the believable. === Logical models === In logic, a model is a type of interpretation under which a particular statement is true. Logical models can be broadly divided into ones which only attempt to represent concepts, such as mathematical models, and ones which attempt to represent physical objects and factual relationships, among which are scientific models. Model theory is the study of (classes of) mathematical structures such as groups, fields, graphs, or even universes of set theory, using tools from mathematical logic. A system that gives meaning to the sentences of a formal language is called a model for the language.
If a model for a language moreover satisfies a particular sentence or theory (set of sentences), it is called a model of the sentence or theory. Model theory has close ties to algebra and universal algebra. === Mathematical models === Mathematical models can take many forms, including but not limited to dynamical systems, statistical models, differential equations, or game-theoretic models. These and other types of models can overlap, with a given model involving a variety of abstract structures. === Scientific models === A scientific model is a simplified abstract view of a complex reality. A scientific model represents empirical objects, phenomena, and physical processes in a logical way. Attempts to formalize the principles of the empirical sciences use an interpretation to model reality, in the same way logicians axiomatize the principles of logic. The aim of these attempts is to construct a formal system that will not produce theoretical consequences that are contrary to what is found in reality. Predictions or other statements drawn from such a formal system mirror or map the real world only insofar as these scientific models are true. == Statistical models == A statistical model is a probability distribution function proposed as generating data. In a parametric model, the probability distribution function has variable parameters, such as the mean and variance in a normal distribution, or the coefficients for the various exponents of the independent variable in linear regression. A nonparametric model has a distribution function without parameters, such as in bootstrapping, and is only loosely confined by assumptions. Model selection is a statistical method for selecting a distribution function within a class of them; e.g., in linear regression where the dependent variable is a polynomial of the independent variable with parametric coefficients, model selection is selecting the highest exponent, and may be done with nonparametric means, such as with cross-validation (see the sketch below). In statistics there can be models of mental events as well as models of physical events. For example, a statistical model of customer behavior is a model that is conceptual (because behavior is physical), but a statistical model of customer satisfaction is a model of a concept (because satisfaction is a mental, not a physical, event). == Social and political models == === Economic models === In economics, a model is a theoretical construct that represents economic processes by a set of variables and a set of logical and/or quantitative relationships between them. The economic model is a simplified framework designed to illustrate complex processes, often but not always using mathematical techniques. Frequently, economic models use structural parameters. Structural parameters are underlying parameters in a model or class of models. A model may have various parameters and those parameters may change to create various properties. == Models in systems architecture == A system model is the conceptual model that describes and represents the structure, behavior, and other views of a system. A system model can represent multiple views of a system by using two different approaches. The first is the non-architectural approach and the second is the architectural approach. The non-architectural approach picks a separate model for each view. The architectural approach, also known as system architecture, instead of picking many heterogeneous and unrelated models, uses only one integrated architectural model.
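Returning to the model-selection passage above, the following sketch chooses the polynomial degree (the "highest exponent") by simple cross-validation; the synthetic data, the noise level and the candidate degrees are all invented assumptions.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 60)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0.0, 0.1, x.size)  # true degree 2

idx = rng.permutation(x.size)            # random train/validation split
train, test = idx[:40], idx[40:]
errors = {}
for degree in range(1, 6):
    coeffs = np.polyfit(x[train], y[train], degree)  # fit on training data
    pred = np.polyval(coeffs, x[test])               # predict held-out data
    errors[degree] = np.mean((pred - y[test]) ** 2)  # validation error

print(min(errors, key=errors.get))       # typically selects a degree near 2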
=== Business process modelling === In business process modelling, the enterprise process model is often referred to as the business process model. Process models are core concepts in the discipline of process engineering. Process models are: Processes of the same nature that are classified together into a model. A description of a process at the type level. Since the process model is at the type level, a process is an instantiation of it. The same process model is used repeatedly for the development of many applications and thus has many instantiations. One possible use of a process model is to prescribe how things must/should/could be done, in contrast to the process itself, which is what really happens. A process model is roughly an anticipation of what the process will look like. What the process shall be will be determined during actual system development. == Models in information system design == === Conceptual models of human activity systems === Conceptual models of human activity systems are used in soft systems methodology (SSM), which is a method of systems analysis concerned with the structuring of problems in management. These models are models of concepts; the authors specifically state that they are not intended to represent a state of affairs in the physical world. They are also used in information requirements analysis (IRA), which is a variant of SSM developed for information system design and software engineering. === Logico-linguistic models === Logico-linguistic modeling is another variant of SSM that uses conceptual models. However, this method combines models of concepts with models of putative real-world objects and events. It is a graphical representation of modal logic in which modal operators are used to distinguish statements about concepts from statements about real-world objects and events. === Data models === ==== Entity–relationship model ==== In software engineering, an entity–relationship model (ERM) is an abstract and conceptual representation of data. Entity–relationship modeling is a database modeling method used to produce a type of conceptual schema or semantic data model of a system, often a relational database, and its requirements in a top-down fashion. Diagrams created by this process are called entity–relationship diagrams, ER diagrams, or ERDs. Entity–relationship models have had wide application in the building of information systems intended to support activities involving objects and events in the real world. In these cases they are models that are conceptual. However, this modeling method can also be used to build computer games or a family tree of the Greek gods; in these cases it would be used to model concepts. ==== Domain model ==== A domain model is a type of conceptual model used to depict the structural elements and their conceptual constraints within a domain of interest (sometimes called the problem domain). A domain model includes the various entities, their attributes and relationships, plus the constraints governing the conceptual integrity of the structural model elements comprising that problem domain. A domain model may also include a number of conceptual views, where each view is pertinent to a particular subject area of the domain or to a particular subset of the domain model which is of interest to a stakeholder of the domain model. Like entity–relationship models, domain models can be used to model concepts or to model real-world objects and events. == See also == == References == == Further reading == J. Parsons, L.
Cole (2005), "What do the pictures mean? Guidelines for experimental evaluation of representation fidelity in diagrammatical conceptual modeling techniques", Data & Knowledge Engineering 55: 327–342; doi:10.1016/j.datak.2004.12.008 A. Gemino, Y. Wand (2005), "Complexity and clarity in conceptual modeling: Comparison of mandatory and optional properties", Data & Knowledge Engineering 55: 301–326; doi:10.1016/j.datak.2004.12.009 D. Batra (2005), "Conceptual Data Modeling Patterns", Journal of Database Management 16: 84–106 F. Papadimitriou (2010), "Conceptual Modelling of Landscape Complexity", Landscape Research 35(5): 563–570; doi:10.1080/01426397.2010.504913 == External links == Models article in the Internet Encyclopedia of Philosophy
Wikipedia/Model_(abstract)
Topography may refer to: == Cartography, geology and oceanography == Topography, the study of the current terrain features of a region and the graphic representation of the landform on a map Inverted topography, landscape features that have reversed their elevation relative to other features Karst topography, a landscape on soluble rock, characterized by underground drainage systems with sinkholes and caves Ocean surface topography, the difference between the surface of the ocean and the geoid Shuttle Radar Topography Mission, a research effort that obtained digital elevation models on a near-global scale from 56 °S to 60 °N, to generate the most complete high-resolution digital topographic database of Earth to date Topographic maps Topographic prominence, a concept used in the categorization of hills and mountains, also known as peaks Local history, formerly commonly called "topography" == Culture and media == === Art === Topographical views Topography of Terror, an outdoor museum in Berlin New Topographics, a photography exhibit in Rochester, NY, 1975-1976 === Literature === Christian Topography, a 6th-century book written by Cosmas Indicopleustes which advances the idea that the world is flat == Science == === Medicine === The location of features in the body, see human brain and topographical codes Corneal topography, a non-invasive medical imaging technique for mapping the surface curvature of the cornea, the outer structure of the eye === Ornithology === The study of feather tracts, see the list of terms used in bird topography === Physics === Diffraction topography, an X-ray imaging technique based on Bragg diffraction, in which diffraction from a crystal is recorded on film or by detector, resulting in topographic images (topographs) == Technology == The detailed design of a semiconductor integrated circuit, see integrated circuit layout design protection and the Integrated Circuit Topography Act
Wikipedia/Topography_(disambiguation)
A fall line refers to the line down a mountain or hill that is most directly downhill; that is, the direction in which a ball or other body would accelerate if it were free to move on the slope under gravity. Mathematically, the fall line, the line of greatest slope, points along the negative of the gradient (which points uphill) and is perpendicular to the contour lines. In mountain biking, a trail follows the "fall line" if it generally descends in the most downward direction, rather than traversing in a sideways direction. A skier is said to be "skiing the fall line" if they are moving generally down, making turns on either side of the fall line, rather than moving across the slope. == See also == Glossary of cycling Ridge line Topography Topographic profile Gradient descent == References ==
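As a numerical illustration of the gradient relationship stated above, the sketch below computes the fall-line direction at a point on a model hillside; the height function and the sample point are arbitrary choices.

import numpy as np

def height(x, y):
    return 100.0 - 0.5 * x**2 - 2.0 * y**2   # a model hilltop at the origin

def fall_line_direction(x, y, eps=1e-5):
    # Estimate the gradient by central finite differences; it points uphill.
    dhdx = (height(x + eps, y) - height(x - eps, y)) / (2.0 * eps)
    dhdy = (height(x, y + eps) - height(x, y - eps)) / (2.0 * eps)
    g = np.array([dhdx, dhdy])
    return -g / np.linalg.norm(g)             # unit vector pointing downhill

print(fall_line_direction(3.0, 1.0))          # about [0.6, 0.8] at this point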
Wikipedia/Fall_line_(topography)
In mathematics, an open set is a generalization of an open interval in the real line. In a metric space (a set with a distance defined between every two points), an open set is a set that, with every point P in it, contains all points of the metric space that are sufficiently near to P (that is, all points whose distance to P is less than some value depending on P). More generally, an open set is a member of a given collection of subsets of a given set, a collection that has the property of containing every union of its members, every finite intersection of its members, the empty set, and the whole set itself. A set in which such a collection is given is called a topological space, and the collection is called a topology. These conditions are very loose, and allow enormous flexibility in the choice of open sets. For example, every subset can be open (the discrete topology), or no subset can be open except the space itself and the empty set (the indiscrete topology). In practice, however, open sets are usually chosen to provide a notion of nearness that is similar to that of metric spaces, without having a notion of distance defined. In particular, a topology allows defining properties such as continuity, connectedness, and compactness, which were originally defined by means of a distance. The most common case of a topology without any distance is given by manifolds, which are topological spaces that, near each point, resemble an open set of a Euclidean space, but on which no distance is defined in general. Less intuitive topologies are used in other branches of mathematics; for example, the Zariski topology, which is fundamental in algebraic geometry and scheme theory. == Motivation == Intuitively, an open set provides a method to distinguish two points. For example, if there exists an open set containing one of two points in a topological space but not the other (distinct) point, the two points are referred to as topologically distinguishable. In this manner, one may speak of whether two points, or more generally two subsets, of a topological space are "near" without concretely defining a distance. Therefore, topological spaces may be seen as a generalization of spaces equipped with a notion of distance, which are called metric spaces. In the set of all real numbers, one has the natural Euclidean metric; that is, a function which measures the distance between two real numbers: d(x, y) = |x − y|. Therefore, given a real number x, one can speak of the set of all points close to that real number; that is, within ε of x. In essence, points within ε of x approximate x to an accuracy of degree ε. Note that ε > 0 always holds, but as ε becomes smaller and smaller, one obtains points that approximate x to a higher and higher degree of accuracy. For example, if x = 0 and ε = 1, the points within ε of x are precisely the points of the interval (−1, 1); that is, the set of all real numbers between −1 and 1. However, with ε = 0.5, the points within ε of x are precisely the points of (−0.5, 0.5). Clearly, these points approximate x to a greater degree of accuracy than when ε = 1. The previous discussion shows, for the case x = 0, that one may approximate x to higher and higher degrees of accuracy by defining ε to be smaller and smaller. In particular, sets of the form (−ε, ε) give us a lot of information about points close to x = 0. Thus, rather than speaking of a concrete Euclidean metric, one may use sets to describe points close to x.
This innovative idea has far-reaching consequences; in particular, by defining different collections of sets containing 0 (distinct from the sets (−ε, ε)), one may find different results regarding the distance between 0 and other real numbers. For example, if we were to define R as the only such set for "measuring distance", all points are close to 0, since there is only one possible degree of accuracy one may achieve in approximating 0: being a member of R. Thus, we find that in some sense, every real number is distance 0 away from 0. It may help in this case to think of the measure as being a binary condition: all things in R are equally close to 0, while any item that is not in R is not close to 0. In general, one refers to the family of sets containing 0, used to approximate 0, as a neighborhood basis; a member of this neighborhood basis is referred to as an open set. In fact, one may generalize these notions to an arbitrary set X, rather than just the real numbers. In this case, given a point x of that set, one may define a collection of sets "around" (that is, containing) x, used to approximate x. Of course, this collection would have to satisfy certain properties (known as axioms), for otherwise we may not have a well-defined method to measure distance. For example, every point in X should approximate x to some degree of accuracy. Thus X should be in this family. Once we begin to define "smaller" sets containing x, we tend to approximate x to a greater degree of accuracy. Bearing this in mind, one may define the remaining axioms that the family of sets about x is required to satisfy. == Definitions == Several definitions are given here, in an increasing order of technicality. Each one is a special case of the next one. === Euclidean space === A subset U {\displaystyle U} of the Euclidean n-space Rn is open if, for every point x in U {\displaystyle U} , there exists a positive real number ε (depending on x) such that any point in Rn whose Euclidean distance from x is smaller than ε belongs to U {\displaystyle U} . Equivalently, a subset U {\displaystyle U} of Rn is open if every point in U {\displaystyle U} is the center of an open ball contained in U . {\displaystyle U.} An example of a subset of R that is not open is the closed interval [0,1], since neither 0 − ε nor 1 + ε belongs to [0,1] for any ε > 0, no matter how small. === Metric space === A subset U of a metric space (M, d) is called open if, for any point x in U, there exists a real number ε > 0 such that any point y ∈ M {\displaystyle y\in M} satisfying d(x, y) < ε belongs to U. Equivalently, U is open if every point in U has a neighborhood contained in U. This generalizes the Euclidean space example, since Euclidean space with the Euclidean distance is a metric space. === Topological space === A topology τ {\displaystyle \tau } on a set X is a set of subsets of X with the properties below. Each member of τ {\displaystyle \tau } is called an open set.
X ∈ τ {\displaystyle X\in \tau } and ∅ ∈ τ {\displaystyle \varnothing \in \tau } Any union of sets in τ {\displaystyle \tau } belongs to τ {\displaystyle \tau } : if { U i : i ∈ I } ⊆ τ {\displaystyle \left\{U_{i}:i\in I\right\}\subseteq \tau } then ⋃ i ∈ I U i ∈ τ {\displaystyle \bigcup _{i\in I}U_{i}\in \tau } Any finite intersection of sets in τ {\displaystyle \tau } belongs to τ {\displaystyle \tau } : if U 1 , … , U n ∈ τ {\displaystyle U_{1},\ldots ,U_{n}\in \tau } then U 1 ∩ ⋯ ∩ U n ∈ τ {\displaystyle U_{1}\cap \cdots \cap U_{n}\in \tau } X together with τ {\displaystyle \tau } is called a topological space. Infinite intersections of open sets need not be open. For example, the intersection of all intervals of the form ( − 1 / n , 1 / n ) , {\displaystyle \left(-1/n,1/n\right),} where n {\displaystyle n} is a positive integer, is the set { 0 } {\displaystyle \{0\}} which is not open in the real line. A metric space is a topological space whose topology consists of the collection of all subsets that are unions of open balls. There are, however, topological spaces that are not metric spaces. == Properties == The union of any number of open sets, even infinitely many, is open. The intersection of a finite number of open sets is open. A complement of an open set (relative to the space that the topology is defined on) is called a closed set. A set may be both open and closed (a clopen set). The empty set and the full space are examples of sets that are both open and closed. A set can never be considered open by itself. This notion is relative to a containing set and a specific topology on it. Whether a set is open depends on the topology under consideration. Having opted for greater brevity over greater clarity, we refer to a set X endowed with a topology τ {\displaystyle \tau } as "the topological space X" rather than "the topological space ( X , τ ) {\displaystyle (X,\tau )} ", despite the fact that all the topological data is contained in τ . {\displaystyle \tau .} If there are two topologies on the same set, a set U that is open in the first topology might fail to be open in the second topology. For example, if X is any topological space and Y is any subset of X, the set Y can be given its own topology (called the 'subspace topology') defined by "a set U is open in the subspace topology on Y if and only if U is the intersection of Y with an open set from the original topology on X." This potentially introduces new open sets: if V is open in the original topology on X, but V ∩ Y {\displaystyle V\cap Y} isn't open in the original topology on X, then V ∩ Y {\displaystyle V\cap Y} is open in the subspace topology on Y. As a concrete example of this, if U is defined as the set of rational numbers in the interval ( 0 , 1 ) , {\displaystyle (0,1),} then U is an open subset of the rational numbers, but not of the real numbers. This is because when the surrounding space is the rational numbers, for every point x in U, there exists a positive number a such that all rational points within distance a of x are also in U. On the other hand, when the surrounding space is the reals, then for every point x in U there is no positive a such that all real points within distance a of x are in U (because U contains no non-rational numbers). == Uses == Open sets have a fundamental importance in topology.
The concept is required to define and make sense of topological space and other topological structures that deal with the notions of closeness and convergence for spaces such as metric spaces and uniform spaces. Every subset A of a topological space X contains a (possibly empty) open set; the largest such open set (ordered under inclusion) is called the interior of A. It can be constructed by taking the union of all the open sets contained in A. A function f : X → Y {\displaystyle f:X\to Y} between two topological spaces X {\displaystyle X} and Y {\displaystyle Y} is continuous if the preimage of every open set in Y {\displaystyle Y} is open in X . {\displaystyle X.} The function f : X → Y {\displaystyle f:X\to Y} is called open if the image of every open set in X {\displaystyle X} is open in Y . {\displaystyle Y.} An open set on the real line has the characteristic property that it is a countable union of disjoint open intervals. == Special types of open sets == === Clopen sets and non-open and/or non-closed sets === A set might be open, closed, both, or neither. In particular, open and closed sets are not mutually exclusive, meaning that it is in general possible for a subset of a topological space to simultaneously be both an open subset and a closed subset. Such subsets are known as clopen sets. Explicitly, a subset S {\displaystyle S} of a topological space ( X , τ ) {\displaystyle (X,\tau )} is called clopen if both S {\displaystyle S} and its complement X ∖ S {\displaystyle X\setminus S} are open subsets of ( X , τ ) {\displaystyle (X,\tau )} ; or equivalently, if S ∈ τ {\displaystyle S\in \tau } and X ∖ S ∈ τ . {\displaystyle X\setminus S\in \tau .} In any topological space ( X , τ ) , {\displaystyle (X,\tau ),} the empty set ∅ {\displaystyle \varnothing } and the set X {\displaystyle X} itself are always clopen. These two sets are the most well-known examples of clopen subsets and they show that clopen subsets exist in every topological space. To see this, it suffices to remark that, by definition of a topology, X {\displaystyle X} and ∅ {\displaystyle \varnothing } are both open, and that they are also closed, since each is the complement of the other. The open sets of the usual Euclidean topology of the real line R {\displaystyle \mathbb {R} } are the empty set, the open intervals and every union of open intervals. The interval I = ( 0 , 1 ) {\displaystyle I=(0,1)} is open in R {\displaystyle \mathbb {R} } by definition of the Euclidean topology. It is not closed since its complement in R {\displaystyle \mathbb {R} } is I ∁ = ( − ∞ , 0 ] ∪ [ 1 , ∞ ) , {\displaystyle I^{\complement }=(-\infty ,0]\cup [1,\infty ),} which is not open; indeed, an open interval contained in I ∁ {\displaystyle I^{\complement }} cannot contain 1, and it follows that I ∁ {\displaystyle I^{\complement }} cannot be a union of open intervals. Hence, I {\displaystyle I} is an example of a set that is open but not closed. By a similar argument, the interval J = [ 0 , 1 ] {\displaystyle J=[0,1]} is a closed subset but not an open subset. Finally, neither K = [ 0 , 1 ) {\displaystyle K=[0,1)} nor its complement R ∖ K = ( − ∞ , 0 ) ∪ [ 1 , ∞ ) {\displaystyle \mathbb {R} \setminus K=(-\infty ,0)\cup [1,\infty )} is open (because they cannot be written as a union of open intervals); this means that K {\displaystyle K} is neither open nor closed.
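The axioms and the open/closed/clopen distinctions above can be checked mechanically for small finite examples. The following sketch does so for a three-point space; the particular space and topology are arbitrary choices made for illustration.

from itertools import chain, combinations

X = frozenset({1, 2, 3})
tau = {frozenset(), frozenset({1}), frozenset({1, 2}), X}   # a topology on X

def is_topology(space, opens):
    """Check the topology axioms; for a finite collection, closure under
    pairwise unions and intersections implies closure under arbitrary ones."""
    if frozenset() not in opens or space not in opens:
        return False
    return all(a | b in opens and a & b in opens for a in opens for b in opens)

def classify(s):
    is_open, is_closed = s in tau, (X - s) in tau
    return {(True, True): "clopen", (True, False): "open, not closed",
            (False, True): "closed, not open",
            (False, False): "neither"}[(is_open, is_closed)]

print(is_topology(X, tau))   # True
subsets = chain.from_iterable(combinations(sorted(X), r) for r in range(4))
for s in map(frozenset, subsets):
    print(sorted(s), classify(s))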
If a topological space X {\displaystyle X} is endowed with the discrete topology (so that by definition, every subset of X {\displaystyle X} is open) then every subset of X {\displaystyle X} is a clopen subset. For a more advanced example reminiscent of the discrete topology, suppose that U {\displaystyle {\mathcal {U}}} is an ultrafilter on a non-empty set X . {\displaystyle X.} Then the union τ := U ∪ { ∅ } {\displaystyle \tau :={\mathcal {U}}\cup \{\varnothing \}} is a topology on X {\displaystyle X} with the property that every non-empty proper subset S {\displaystyle S} of X {\displaystyle X} is either an open subset or else a closed subset, but never both; that is, if ∅ ≠ S ⊊ X {\displaystyle \varnothing \neq S\subsetneq X} (where S ≠ X {\displaystyle S\neq X} ) then exactly one of the following two statements is true: either (1) S ∈ τ {\displaystyle S\in \tau } or else (2) X ∖ S ∈ τ . {\displaystyle X\setminus S\in \tau .} Said differently, every subset is open or closed but the only subsets that are both (i.e. that are clopen) are ∅ {\displaystyle \varnothing } and X . {\displaystyle X.} === Regular open sets === A subset S {\displaystyle S} of a topological space X {\displaystyle X} is called a regular open set if Int ⁡ ( S ¯ ) = S {\displaystyle \operatorname {Int} \left({\overline {S}}\right)=S} or equivalently, if Bd ⁡ ( S ¯ ) = Bd ⁡ S {\displaystyle \operatorname {Bd} \left({\overline {S}}\right)=\operatorname {Bd} S} , where Bd ⁡ S {\displaystyle \operatorname {Bd} S} , Int ⁡ S {\displaystyle \operatorname {Int} S} , and S ¯ {\displaystyle {\overline {S}}} denote, respectively, the topological boundary, interior, and closure of S {\displaystyle S} in X {\displaystyle X} . A topological space for which there exists a base consisting of regular open sets is called a semiregular space. A subset of X {\displaystyle X} is a regular open set if and only if its complement in X {\displaystyle X} is a regular closed set, where by definition a subset S {\displaystyle S} of X {\displaystyle X} is called a regular closed set if Int ⁡ S ¯ = S {\displaystyle {\overline {\operatorname {Int} S}}=S} or equivalently, if Bd ⁡ ( Int ⁡ S ) = Bd ⁡ S . {\displaystyle \operatorname {Bd} \left(\operatorname {Int} S\right)=\operatorname {Bd} S.} Every regular open set (resp. regular closed set) is an open subset (resp. is a closed subset), although in general the converses are not true. == Generalizations of open sets == Throughout, ( X , τ ) {\displaystyle (X,\tau )} will be a topological space. A subset A ⊆ X {\displaystyle A\subseteq X} of a topological space X {\displaystyle X} is called: α-open if A ⊆ int X ⁡ ( cl X ⁡ ( int X ⁡ A ) ) {\displaystyle A~\subseteq ~\operatorname {int} _{X}\left(\operatorname {cl} _{X}\left(\operatorname {int} _{X}A\right)\right)} , and the complement of such a set is called α-closed. preopen, nearly open, or locally dense if it satisfies any of the following equivalent conditions: A ⊆ int X ⁡ ( cl X ⁡ A ) . {\displaystyle A~\subseteq ~\operatorname {int} _{X}\left(\operatorname {cl} _{X}A\right).} There exist subsets D , U ⊆ X {\displaystyle D,U\subseteq X} such that U {\displaystyle U} is open in X , {\displaystyle X,} D {\displaystyle D} is a dense subset of X , {\displaystyle X,} and A = U ∩ D . {\displaystyle A=U\cap D.} There exists an open (in X {\displaystyle X} ) subset U ⊆ X {\displaystyle U\subseteq X} such that A {\displaystyle A} is a dense subset of U . {\displaystyle U.} The complement of a preopen set is called preclosed.
b-open if A ⊆ int X ⁡ ( cl X ⁡ A ) ∪ cl X ⁡ ( int X ⁡ A ) {\displaystyle A~\subseteq ~\operatorname {int} _{X}\left(\operatorname {cl} _{X}A\right)~\cup ~\operatorname {cl} _{X}\left(\operatorname {int} _{X}A\right)} . The complement of a b-open set is called b-closed. β-open or semi-preopen if it satisfies any of the following equivalent conditions: A ⊆ cl X ⁡ ( int X ⁡ ( cl X ⁡ A ) ) {\displaystyle A~\subseteq ~\operatorname {cl} _{X}\left(\operatorname {int} _{X}\left(\operatorname {cl} _{X}A\right)\right)} cl X ⁡ A {\displaystyle \operatorname {cl} _{X}A} is a regular closed subset of X . {\displaystyle X.} There exists a preopen subset U {\displaystyle U} of X {\displaystyle X} such that U ⊆ A ⊆ cl X ⁡ U . {\displaystyle U\subseteq A\subseteq \operatorname {cl} _{X}U.} The complement of a β-open set is called β-closed. sequentially open if it satisfies any of the following equivalent conditions: Whenever a sequence in X {\displaystyle X} converges to some point of A , {\displaystyle A,} then that sequence is eventually in A . {\displaystyle A.} Explicitly, this means that if x ∙ = ( x i ) i = 1 ∞ {\displaystyle x_{\bullet }=\left(x_{i}\right)_{i=1}^{\infty }} is a sequence in X {\displaystyle X} and if there exists some a ∈ A {\displaystyle a\in A} such that x ∙ → a {\displaystyle x_{\bullet }\to a} in ( X , τ ) , {\displaystyle (X,\tau ),} then x ∙ {\displaystyle x_{\bullet }} is eventually in A {\displaystyle A} (that is, there exists some integer i {\displaystyle i} such that if j ≥ i , {\displaystyle j\geq i,} then x j ∈ A {\displaystyle x_{j}\in A} ). A {\displaystyle A} is equal to its sequential interior in X , {\displaystyle X,} which by definition is the set SeqInt X ⁡ A : = { a ∈ A : whenever a sequence in X converges to a in ( X , τ ) , then that sequence is eventually in A } = { a ∈ A : there does NOT exist a sequence in X ∖ A that converges in ( X , τ ) to a point in A } {\displaystyle {\begin{alignedat}{4}\operatorname {SeqInt} _{X}A:&=\{a\in A~:~{\text{ whenever a sequence in }}X{\text{ converges to }}a{\text{ in }}(X,\tau ),{\text{ then that sequence is eventually in }}A\}\\&=\{a\in A~:~{\text{ there does NOT exist a sequence in }}X\setminus A{\text{ that converges in }}(X,\tau ){\text{ to a point in }}A\}\\\end{alignedat}}} The complement of a sequentially open set is called sequentially closed. A subset S ⊆ X {\displaystyle S\subseteq X} is sequentially closed in X {\displaystyle X} if and only if S {\displaystyle S} is equal to its sequential closure, which by definition is the set SeqCl X ⁡ S {\displaystyle \operatorname {SeqCl} _{X}S} consisting of all x ∈ X {\displaystyle x\in X} for which there exists a sequence in S {\displaystyle S} that converges to x {\displaystyle x} (in X {\displaystyle X} ). almost open and is said to have the Baire property if there exists an open subset U ⊆ X {\displaystyle U\subseteq X} such that A △ U {\displaystyle A\bigtriangleup U} is a meager subset, where △ {\displaystyle \bigtriangleup } denotes the symmetric difference. The subset A ⊆ X {\displaystyle A\subseteq X} is said to have the Baire property in the restricted sense if for every subset E {\displaystyle E} of X {\displaystyle X} the intersection A ∩ E {\displaystyle A\cap E} has the Baire property relative to E {\displaystyle E} .
semi-open if A ⊆ cl X ⁡ ( int X ⁡ A ) {\displaystyle A~\subseteq ~\operatorname {cl} _{X}\left(\operatorname {int} _{X}A\right)} or, equivalently, cl X ⁡ A = cl X ⁡ ( int X ⁡ A ) {\displaystyle \operatorname {cl} _{X}A=\operatorname {cl} _{X}\left(\operatorname {int} _{X}A\right)} . The complement in X {\displaystyle X} of a semi-open set is called a semi-closed set. The semi-closure (in X {\displaystyle X} ) of a subset A ⊆ X , {\displaystyle A\subseteq X,} denoted by sCl X ⁡ A , {\displaystyle \operatorname {sCl} _{X}A,} is the intersection of all semi-closed subsets of X {\displaystyle X} that contain A {\displaystyle A} as a subset. semi-θ-open if for each x ∈ A {\displaystyle x\in A} there exists some semiopen subset U {\displaystyle U} of X {\displaystyle X} such that x ∈ U ⊆ sCl X ⁡ U ⊆ A . {\displaystyle x\in U\subseteq \operatorname {sCl} _{X}U\subseteq A.} θ-open (resp. δ-open) if its complement in X {\displaystyle X} is a θ-closed (resp. δ-closed) set, where by definition, a subset of X {\displaystyle X} is called θ-closed (resp. δ-closed) if it is equal to the set of all of its θ-cluster points (resp. δ-cluster points). A point x ∈ X {\displaystyle x\in X} is called a θ-cluster point (resp. a δ-cluster point) of a subset B ⊆ X {\displaystyle B\subseteq X} if for every open neighborhood U {\displaystyle U} of x {\displaystyle x} in X , {\displaystyle X,} the intersection B ∩ cl X ⁡ U {\displaystyle B\cap \operatorname {cl} _{X}U} is not empty (resp. B ∩ int X ⁡ ( cl X ⁡ U ) {\displaystyle B\cap \operatorname {int} _{X}\left(\operatorname {cl} _{X}U\right)} is not empty). Using the fact that A ⊆ cl X ⁡ A ⊆ cl X ⁡ B {\displaystyle A~\subseteq ~\operatorname {cl} _{X}A~\subseteq ~\operatorname {cl} _{X}B} and int X ⁡ A ⊆ int X ⁡ B ⊆ B {\displaystyle \operatorname {int} _{X}A~\subseteq ~\operatorname {int} _{X}B~\subseteq ~B} whenever two subsets A , B ⊆ X {\displaystyle A,B\subseteq X} satisfy A ⊆ B , {\displaystyle A\subseteq B,} the following may be deduced: Every α-open subset is semi-open, semi-preopen, preopen, and b-open. Every b-open set is semi-preopen (i.e. β-open). Every preopen set is b-open and semi-preopen. Every semi-open set is b-open and semi-preopen. Moreover, a subset is a regular open set if and only if it is preopen and semi-closed. The intersection of an α-open set and a semi-preopen (resp. semi-open, preopen, b-open) set is a semi-preopen (resp. semi-open, preopen, b-open) set. Preopen sets need not be semi-open and semi-open sets need not be preopen. Arbitrary unions of preopen (resp. α-open, b-open, semi-preopen) sets are once again preopen (resp. α-open, b-open, semi-preopen). However, finite intersections of preopen sets need not be preopen. The set of all α-open subsets of a space ( X , τ ) {\displaystyle (X,\tau )} forms a topology on X {\displaystyle X} that is finer than τ . {\displaystyle \tau .} A topological space X {\displaystyle X} is Hausdorff if and only if every compact subspace of X {\displaystyle X} is θ-closed. A space X {\displaystyle X} is totally disconnected if and only if every regular closed subset is preopen or equivalently, if every semi-open subset is preopen. Moreover, the space is totally disconnected if and only if the closure of every preopen subset is open. == See also == Almost open map – Map that satisfies a condition similar to that of being an open map. 
Base (topology) – Collection of open sets used to define a topology Clopen set – Subset which is both open and closed Closed set – Complement of an open subset Domain (mathematical analysis) – Connected open subset of a topological space Local homeomorphism – Mathematical function revertible near each point Open map – A function that sends open (resp. closed) subsets to open (resp. closed) subsets Subbase – Collection of subsets that generate a topology == Notes == == References == == Bibliography == Hart, Klaas Pieter; Nagata, Jun-iti; Vaughan, Jerry E. (2004). Encyclopedia of General Topology. Amsterdam: Elsevier/North-Holland. ISBN 978-0-444-50355-8. OCLC 162131277. Munkres, James R. (2000). Topology (2nd ed.). Upper Saddle River, NJ: Prentice Hall, Inc. ISBN 978-0-13-181629-9. OCLC 42683260. == External links == "Open set", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia/Open_(topology)
In theoretical physics, geometrodynamics is an attempt to describe spacetime and associated phenomena completely in terms of geometry. Technically, its goal is to unify the fundamental forces and reformulate general relativity as a configuration space of three-metrics, modulo three-dimensional diffeomorphisms. The origin of this idea can be found in the works of the English mathematician William Kingdon Clifford. This theory was enthusiastically promoted by John Wheeler in the 1960s, and work on it continues in the 21st century. == Einstein's geometrodynamics == The term geometrodynamics is sometimes used as a synonym for general relativity. More properly, some authors use the phrase Einstein's geometrodynamics to denote the initial value formulation of general relativity, introduced by Arnowitt, Deser, and Misner (ADM formalism) around 1960. In this reformulation, spacetimes are sliced up into spatial hyperslices in a rather arbitrary fashion, and the vacuum Einstein field equation is reformulated as an evolution equation describing how, given the geometry of an initial hyperslice (the "initial value"), the geometry evolves over "time". This requires giving constraint equations which must be satisfied by the original hyperslice. It also involves some "choice of gauge"; specifically, choices about how the coordinate system used to describe the hyperslice geometry evolves. == Wheeler's geometrodynamics == Wheeler wanted to reduce physics to geometry in an even more fundamental way than the ADM reformulation of general relativity with a dynamic geometry whose curvature changes with time. It attempts to realize three concepts: mass without mass, charge without charge, and field without field. He wanted to lay the foundation for quantum gravity and unify gravitation with electromagnetism (the strong and weak interactions were not yet sufficiently well understood in 1960 to be included). Wheeler introduced the notion of geons, gravitational wave packets confined to a compact region of spacetime and held together by the gravitational attraction of the (gravitational) field energy of the wave itself. Wheeler was intrigued by the possibility that geons could affect test particles much like a massive object, hence mass without mass. Wheeler was also much intrigued by the fact that the (nonspinning) point-mass solution of general relativity, the Schwarzschild vacuum, has the nature of a wormhole. Similarly, in the case of a charged particle, the geometry of the Reissner–Nordström electrovacuum solution suggests that the symmetry between electric field lines (which "end" in charges) and magnetic field lines (which never end) could be restored if the electric field lines do not actually end but only go through a wormhole to some distant location or even another branch of the universe. George Rainich had shown decades earlier that one can obtain the electromagnetic field tensor from the electromagnetic contribution to the stress–energy tensor, which in general relativity is directly coupled to spacetime curvature; Wheeler and Misner developed this into the so-called already-unified field theory which partially unifies gravitation and electromagnetism, yielding charge without charge. In the ADM reformulation of general relativity, Wheeler argued that the full Einstein field equation can be recovered once the momentum constraint can be derived, and suggested that this might follow from geometrical considerations alone, making general relativity something like a logical necessity.
Specifically, curvature (the gravitational field) might arise as a kind of "averaging" over very complicated topological phenomena at very small scales, the so-called spacetime foam. This would realize geometrical intuition suggested by quantum gravity, or field without field. These ideas captured the imagination of many physicists, even though Wheeler himself quickly dashed some of the early hopes for his program. In particular, spin 1/2 fermions proved difficult to handle. For this, one has to go to the Einsteinian Unified Field Theory of the Einstein–Maxwell–Dirac system, or, more generally, the Einstein–Yang–Mills–Dirac–Higgs system. Geometrodynamics also attracted attention from philosophers intrigued by the possibility of realizing some of Descartes' and Spinoza's ideas about the nature of space. == Modern notions of geometrodynamics == More recently, Christopher Isham, Jeremy Butterfield, and their students have continued to develop quantum geometrodynamics to take account of recent work toward a quantum theory of gravity and further developments in the very extensive mathematical theory of initial value formulations of general relativity. Some of Wheeler's original goals remain important for this work, particularly the hope of laying a solid foundation for quantum gravity. The philosophical program also continues to motivate several prominent contributors. Topological ideas in the realm of gravity date back to Riemann, Clifford, and Weyl and found a more concrete realization in the wormholes of Wheeler characterized by the Euler–Poincaré invariant. They result from attaching handles to black holes. Observationally, Albert Einstein's general relativity (GR) is rather well established for the Solar System and double pulsars. However, in GR the metric plays a double role: measuring distances in spacetime and serving as a gravitational potential for the Christoffel connection. This dichotomy seems to be one of the main obstacles for quantizing gravity. As early as 1924, Arthur Stanley Eddington suggested in his book The Mathematical Theory of Relativity (2nd edition) that the connection be regarded as the basic field and the metric merely as a derived concept. Consequently, the primordial action in four dimensions should be constructed from a metric-free topological action such as the Pontryagin invariant of the corresponding gauge connection. As in Yang–Mills theory, a quantization can be achieved by amending the definition of curvature and the Bianchi identities via topological ghosts. In such a graded Cartan formalism, the nilpotency of the ghost operators is on par with the Poincaré lemma for the exterior derivative. Using a BRST antifield formalism with a duality gauge fixing, a consistent quantization in spaces of double dual curvature is obtained. The constraint imposes instanton-type solutions on the curvature-squared 'Yang–Mielke theory' of gravity, proposed in its affine form by Weyl in 1919 and by Yang in 1974. However, these exact solutions exhibit a 'vacuum degeneracy'. One needs to modify the double duality of the curvature via scale-breaking terms in order to retain Einstein's equations with an induced cosmological constant of partially topological origin as the unique macroscopic 'background'. Such scale-breaking terms arise more naturally in a constraint formalism, the so-called BF scheme, in which the gauge curvature is denoted by F.
In the case of gravity, it starts from the special linear group SL(5, R) in four dimensions, thus generalizing (Anti-)de Sitter gauge theories of gravity. After applying spontaneous symmetry breaking to the corresponding topological BF theory, again Einstein spaces emerge with a tiny cosmological constant related to the scale of symmetry breaking. Here the 'background' metric is induced via a Higgs-like mechanism. The finiteness of such a deformed topological scheme may translate into asymptotic safety after quantization of the spontaneously broken model. Richard J. Petti believes that cosmological models with torsion but no rotating particles based on Einstein–Cartan theory illustrate a situation of "a (nonpropagating) field without a field". == See also == Mathematics of general relativity Hamilton–Jacobi–Einstein equation (HJEE) Numerical relativity Black hole electron Teleparallelism == References == === Works cited === Wheeler, J (1962). Nagel, Ernest; Suppes, Patrick; Tarski, Alfred (eds.). "Curved empty space as the building material of the physical world: an assessment". Logic, Methodology and Philosophy of Science: Proceedings of the International Congress for Logic, Methodology and Philosophy of Science. Stanford, California: Stanford University Press. === General references === Anderson, E. (2004). "Geometrodynamics: Spacetime or Space?". arXiv:gr-qc/0409123. This Ph.D. thesis offers a readable account of the long development of the notion of "geometrodynamics". Butterfield, Jeremy (1999). The Arguments of Time. Oxford: Oxford University Press. ISBN 978-0-19-726207-8. This book focuses on the philosophical motivations and implications of the modern geometrodynamics program. Prastaro, Agostino (1985). Geometrodynamics: Proceedings, 1985. Philadelphia: World Scientific. ISBN 978-9971-978-63-1. Misner, Charles W; Thorne, Kip S.; Wheeler, John Archibald (1973). Gravitation. San Francisco: W. H. Freeman. ISBN 978-0-7167-0344-0. See chapter 43 for superspace and chapter 44 for spacetime foam. Wheeler, John Archibald (1963). Geometrodynamics. New York: Academic Press. LCCN 62013645. Misner, Charles W; Wheeler, John A (1957). "Classical physics as geometry". Annals of Physics. 2 (6): 525–603. Bibcode:1957AnPhy...2..525M. doi:10.1016/0003-4916(57)90049-0. J. Wheeler (1961). "Geometrodynamics and the Problem of Motion". Rev. Mod. Phys. 33 (1): 63–78. Bibcode:1961RvMP...33...63W. doi:10.1103/RevModPhys.33.63. online version (subscription required) J. Wheeler (1957). "On the nature of quantum geometrodynamics". Ann. Phys. 2 (6): 604–614. Bibcode:1957AnPhy...2..604W. doi:10.1016/0003-4916(57)90050-7. online version (subscription required) Mielke, Eckehard W. (2008). "Einsteinian gravity from a topological action". General Relativity and Gravitation. 40 (6): 1311–1325. arXiv:0707.3466. Bibcode:2008GReGr..40.1311M. doi:10.1007/s10714-007-0603-3. ISSN 0001-7701. Wang, Charles H.-T. (2005-06-15). "Conformal geometrodynamics: True degrees of freedom in a truly canonical structure". Physical Review D. 71 (12). American Physical Society (APS): 124026. arXiv:gr-qc/0501024. Bibcode:2005PhRvD..71l4026W. doi:10.1103/physrevd.71.124026. ISSN 1550-7998. S2CID 118968025. == Further reading == Grünbaum, Adolf (1973): Geometrodynamics and Ontology, The Journal of Philosophy, vol. 70, no. 21, December 6, 1973, pp. 775–800, online version (subscription required) Mielke, Eckehard W.
(1987): Geometrodynamics of Gauge Fields – On the Geometry of Yang–Mills and Gravitational Gauge Theories (Akademie-Verlag, Berlin), 242 pages. (2nd edition, Springer International Publishing Switzerland, Mathematical Physics Studies, 2017), 373 pages.
Wikipedia/Geometrodynamics
In mathematics, a topological space is called separable if it contains a countable, dense subset; that is, there exists a sequence ( x n ) n = 1 ∞ {\displaystyle (x_{n})_{n=1}^{\infty }} of elements of the space such that every nonempty open subset of the space contains at least one element of the sequence. Like the other axioms of countability, separability is a "limitation on size", not necessarily in terms of cardinality (though, in the presence of the Hausdorff axiom, this does turn out to be the case; see below) but in a more subtle topological sense. In particular, every continuous function on a separable space whose image is a subset of a Hausdorff space is determined by its values on the countable dense subset. Contrast separability with the related notion of second countability, which is in general stronger but equivalent on the class of metrizable spaces. == First examples == Any topological space that is itself finite or countably infinite is separable, for the whole space is a countable dense subset of itself. An important example of an uncountable separable space is the real line, in which the rational numbers form a countable dense subset. Similarly the set of all length- n {\displaystyle n} vectors of rational numbers, r = ( r 1 , … , r n ) ∈ Q n {\displaystyle {\boldsymbol {r}}=(r_{1},\ldots ,r_{n})\in \mathbb {Q} ^{n}} , is a countable dense subset of the set of all length- n {\displaystyle n} vectors of real numbers, R n {\displaystyle \mathbb {R} ^{n}} ; so for every n {\displaystyle n} , n {\displaystyle n} -dimensional Euclidean space is separable. A simple example of a space that is not separable is a discrete space of uncountable cardinality. Further examples are given below. == Separability versus second countability == Any second-countable space is separable: if { U n } {\displaystyle \{U_{n}\}} is a countable base, choosing any x n ∈ U n {\displaystyle x_{n}\in U_{n}} from the non-empty U n {\displaystyle U_{n}} gives a countable dense subset. Conversely, a metrizable space is separable if and only if it is second countable, which is the case if and only if it is Lindelöf. To further compare these two properties: An arbitrary subspace of a second-countable space is second countable; subspaces of separable spaces need not be separable (see below). Any continuous image of a separable space is separable (Willard 1970, Th. 16.4a); even a quotient of a second-countable space need not be second countable. A product of at most continuum many separable spaces is separable (Willard 1970, p. 109, Th 16.4c). A countable product of second-countable spaces is second countable, but an uncountable product of second-countable spaces need not even be first countable. We can construct an example of a separable topological space that is not second countable. Consider any uncountable set X {\displaystyle X} , pick some x 0 ∈ X {\displaystyle x_{0}\in X} , and define the topology to be the collection of all sets that contain x 0 {\displaystyle x_{0}} (or are empty). Then the closure of { x 0 } {\displaystyle \{x_{0}\}} is the whole space ( X {\displaystyle X} is the smallest closed set containing x 0 {\displaystyle x_{0}} ), but every set of the form { x 0 , x } {\displaystyle \{x_{0},x\}} is open. Therefore, the space is separable, but it cannot have a countable base.
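The construction in the second-countability argument above (pick one point x_n from each non-empty basic open set U_n) can be illustrated on a small finite example. The following Python sketch is purely illustrative; the space and the base are made up, and finiteness only serves to make the density check exhaustive.

```python
from itertools import combinations

# Toy base for a topology on X = {0, 1, 2, 3}: every open set is a union
# of these basic sets (example data, not from the article).
X = {0, 1, 2, 3}
base = [{0}, {1, 2}, {2, 3}]

# Choose one point from each non-empty basic open set ("x_n in U_n").
dense = {min(U) for U in base if U}

def unions_of(sets):
    # Generate all open sets as unions of basic sets.
    for r in range(len(sets) + 1):
        for combo in combinations(sets, r):
            yield set().union(*combo)

# Density: every non-empty open set must meet the chosen set.
for O in unions_of(base):
    assert not O or O & dense, f"{O} misses the dense set"
print("dense set:", dense)
```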
== Cardinality == The property of separability does not in and of itself give any limitations on the cardinality of a topological space: any set endowed with the trivial topology is separable, as well as second countable, quasi-compact, and connected. The "trouble" with the trivial topology is its poor separation properties: its Kolmogorov quotient is the one-point space. A first-countable, separable Hausdorff space (in particular, a separable metric space) has at most the continuum cardinality c {\displaystyle {\mathfrak {c}}} . In such a space, closure is determined by limits of sequences and any convergent sequence has at most one limit, so there is a surjective map from the set of convergent sequences with values in the countable dense subset to the points of X {\displaystyle X} . Since a countable set admits at most ℵ 0 ℵ 0 = 2 ℵ 0 = c {\displaystyle \aleph _{0}^{\aleph _{0}}=2^{\aleph _{0}}={\mathfrak {c}}} sequences, the bound follows. A separable Hausdorff space has cardinality at most 2 c {\displaystyle 2^{\mathfrak {c}}} , where c {\displaystyle {\mathfrak {c}}} is the cardinality of the continuum. To see this, note that closure is characterized in terms of limits of filter bases: if Y ⊆ X {\displaystyle Y\subseteq X} and z ∈ X {\displaystyle z\in X} , then z ∈ Y ¯ {\displaystyle z\in {\overline {Y}}} if and only if there exists a filter base B {\displaystyle {\mathcal {B}}} consisting of subsets of Y {\displaystyle Y} that converges to z {\displaystyle z} . The cardinality of the set S ( Y ) {\displaystyle S(Y)} of such filter bases is at most 2 2 | Y | {\displaystyle 2^{2^{|Y|}}} . Moreover, in a Hausdorff space, there is at most one limit to every filter base. Therefore, there is a surjection S ( Y ) → X {\displaystyle S(Y)\rightarrow X} when Y ¯ = X . {\displaystyle {\overline {Y}}=X.} The same arguments establish a more general result: suppose that a Hausdorff topological space X {\displaystyle X} contains a dense subset of cardinality κ {\displaystyle \kappa } . Then X {\displaystyle X} has cardinality at most 2 2 κ {\displaystyle 2^{2^{\kappa }}} and cardinality at most 2 κ {\displaystyle 2^{\kappa }} if it is first countable. The product of at most continuum many separable spaces is a separable space (Willard 1970, p. 109, Th 16.4c). In particular the space R R {\displaystyle \mathbb {R} ^{\mathbb {R} }} of all functions from the real line to itself, endowed with the product topology, is a separable Hausdorff space of cardinality 2 c {\displaystyle 2^{\mathfrak {c}}} . More generally, if κ {\displaystyle \kappa } is any infinite cardinal, then a product of at most 2 κ {\displaystyle 2^{\kappa }} spaces with dense subsets of size at most κ {\displaystyle \kappa } has itself a dense subset of size at most κ {\displaystyle \kappa } (Hewitt–Marczewski–Pondiczery theorem). == Constructive mathematics == Separability is especially important in numerical analysis and constructive mathematics, since many theorems that can be proved for nonseparable spaces have constructive proofs only for separable spaces. Such constructive proofs can be turned into algorithms for use in numerical analysis, and they are the only sorts of proofs acceptable in constructive analysis. A famous example of a theorem of this sort is the Hahn–Banach theorem. == Further examples == === Separable spaces === Every compact metric space (or metrizable space) is separable. Any topological space that is the union of a countable number of separable subspaces is separable. Together, these first two examples give a different proof that n {\displaystyle n} -dimensional Euclidean space is separable.
The space C ( K ) {\displaystyle C(K)} of all continuous functions from a compact subset K ⊆ R {\displaystyle K\subseteq \mathbb {R} } to the real line R {\displaystyle \mathbb {R} } is separable. The Lebesgue spaces L p ( X , μ ) {\displaystyle L^{p}\left(X,\mu \right)} , over a measure space ⟨ X , M , μ ⟩ {\displaystyle \left\langle X,{\mathcal {M}},\mu \right\rangle } whose σ-algebra is countably generated and whose measure is σ-finite, are separable for any 1 ≤ p < ∞ {\displaystyle 1\leq p<\infty } . The space C ( [ 0 , 1 ] ) {\displaystyle C([0,1])} of continuous real-valued functions on the unit interval [ 0 , 1 ] {\displaystyle [0,1]} with the metric of uniform convergence is a separable space, since it follows from the Weierstrass approximation theorem that the set Q [ x ] {\displaystyle \mathbb {Q} [x]} of polynomials in one variable with rational coefficients is a countable dense subset of C ( [ 0 , 1 ] ) {\displaystyle C([0,1])} . The Banach–Mazur theorem asserts that any separable Banach space is isometrically isomorphic to a closed linear subspace of C ( [ 0 , 1 ] ) {\displaystyle C([0,1])} . A Hilbert space is separable if and only if it has a countable orthonormal basis. It follows that any separable, infinite-dimensional Hilbert space is isometric to the space ℓ 2 {\displaystyle \ell ^{2}} of square-summable sequences. An example of a separable space that is not second-countable is the Sorgenfrey line S {\displaystyle \mathbb {S} } , the set of real numbers equipped with the lower limit topology. A separable σ-algebra is a σ-algebra F {\displaystyle {\mathcal {F}}} that is a separable space when considered as a metric space with metric ρ ( A , B ) = μ ( A △ B ) {\displaystyle \rho (A,B)=\mu (A\triangle B)} for A , B ∈ F {\displaystyle A,B\in {\mathcal {F}}} and a given finite measure μ {\displaystyle \mu } (and with △ {\displaystyle \triangle } being the symmetric difference operator). === Non-separable spaces === The first uncountable ordinal ω 1 {\displaystyle \omega _{1}} , equipped with its natural order topology, is not separable. The Banach space ℓ ∞ {\displaystyle \ell ^{\infty }} of all bounded real sequences, with the supremum norm, is not separable. The same holds for L ∞ {\displaystyle L^{\infty }} . The Banach space of functions of bounded variation is not separable; note however that this space has very important applications in mathematics, physics and engineering. == Properties == A subspace of a separable space need not be separable (see the Sorgenfrey plane and the Moore plane), but every open subspace of a separable space is separable (Willard 1970, Th 16.4b). Also every subspace of a separable metric space is separable. In fact, every topological space is a subspace of a separable space of the same cardinality. A construction adding at most countably many points is given in (Sierpiński 1952, p. 49); if the space was a Hausdorff space then the space constructed that it embeds into is also a Hausdorff space. The set of all real-valued continuous functions on a separable space has a cardinality equal to c {\displaystyle {\mathfrak {c}}} , the cardinality of the continuum. This follows since such functions are determined by their values on dense subsets. From the above property, one can deduce the following: If X is a separable space having an uncountable closed discrete subspace, then X cannot be normal. This shows that the Sorgenfrey plane is not normal. 
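To make the C([0,1]) example above concrete, the sketch below (the target function and the degrees are arbitrary choices for the illustration) evaluates a Bernstein polynomial whose coefficients are rounded to rational numbers and estimates the uniform distance to f. Raising the degree drives the sup-norm error down, as the Weierstrass approximation theorem predicts.

```python
from fractions import Fraction
from math import comb, exp

f = lambda x: exp(-x * x)  # any continuous function on [0, 1]

def bernstein_rational(f, n, denom=10**6):
    """Bernstein polynomial of degree n with coefficients rounded to rationals."""
    coeffs = [Fraction(round(f(k / n) * denom), denom) for k in range(n + 1)]
    def p(x):
        return float(sum(c * comb(n, k) * x**k * (1 - x)**(n - k)
                         for k, c in enumerate(coeffs)))
    return p

for n in (5, 20, 80):
    p = bernstein_rational(f, n)
    # Estimate the sup-norm distance on a fine grid.
    err = max(abs(f(i / 1000) - p(i / 1000)) for i in range(1001))
    print(n, err)  # the error shrinks as n grows
```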
For a compact Hausdorff space X, the following are equivalent: X is second countable; X is metrizable; the space C(X) of continuous real-valued functions on X with the supremum norm is separable. === Embedding separable metric spaces === Every separable metric space is homeomorphic to a subset of the Hilbert cube. This is established in the proof of the Urysohn metrization theorem. Every separable metric space is isometric to a subset of the (non-separable) Banach space l∞ of all bounded real sequences with the supremum norm; this is known as the Fréchet embedding. (Heinonen 2003) Every separable metric space is isometric to a subset of C([0,1]), the separable Banach space of continuous functions [0,1] → R, with the supremum norm. This is due to Stefan Banach. (Heinonen 2003) Every separable metric space is isometric to a subset of the Urysohn universal space. For nonseparable spaces: A metric space of density equal to an infinite cardinal α is isometric to a subspace of C([0,1]α, R), the space of real continuous functions on the product of α copies of the unit interval. (Kleiber & Pervin 1969) == References == Heinonen, Juha (January 2003), Geometric embeddings of metric spaces (PDF), retrieved 6 February 2009 Kelley, John L. (1975), General Topology, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90125-1, MR 0370454 Kleiber, Martin; Pervin, William J. (1969), "A generalized Banach-Mazur theorem", Bull. Austral. Math. Soc., 1 (2): 169–173, doi:10.1017/S0004972700041411 Sierpiński, Wacław (1952), General topology, Mathematical Expositions, No. 7, Toronto, Ont.: University of Toronto Press, MR 0050870 Steen, Lynn Arthur; Seebach, J. Arthur Jr. (1995) [1978], Counterexamples in Topology (Dover reprint of 1978 ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-486-68735-3, MR 0507446 Willard, Stephen (1970), General Topology, Addison-Wesley, ISBN 978-0-201-08707-9, MR 0264581
Wikipedia/Separable_(topology)
The topological entanglement entropy or topological entropy, usually denoted by γ {\displaystyle \gamma } , is a number characterizing many-body states that possess topological order. A non-zero topological entanglement entropy reflects the presence of long-range quantum entanglement in a many-body quantum state. The topological entanglement entropy thus links topological order with the pattern of long-range quantum entanglement. Given a topologically ordered state, the topological entropy can be extracted from the asymptotic behavior of the von Neumann entropy measuring the quantum entanglement between a spatial block and the rest of the system. The entanglement entropy of a simply connected region of boundary length L, within an infinite two-dimensional topologically ordered state, has the following form for large L: S L ⟶ α L − γ + O ( L − ν ) , ν > 0 {\displaystyle S_{L}\;\longrightarrow \;\alpha L-\gamma +{\mathcal {O}}(L^{-\nu })\;,\qquad \nu >0\,\!} where − γ {\displaystyle -\gamma } is the topological entanglement entropy. The topological entanglement entropy is equal to the logarithm of the total quantum dimension of the quasiparticle excitations of the state. For example, the simplest fractional quantum Hall states, the Laughlin states at filling fraction 1/m, have γ = ½log(m). The Z2 fractionalized states, such as topologically ordered states of the Z2 spin liquid, quantum dimer models on non-bipartite lattices, and Kitaev's toric code state, are characterized by γ = log(2). == See also == Quantum topology Topological defect Topological order Topological quantum field theory Topological quantum number Topological string theory == References == === Calculations for specific topologically ordered states === Haque, Masudul; Zozulya, Oleksandr; Schoutens, Kareljan (6 February 2007). "Entanglement Entropy in Fermionic Laughlin States". Physical Review Letters. 98 (6): 060401. arXiv:cond-mat/0609263. Bibcode:2007PhRvL..98f0401H. doi:10.1103/physrevlett.98.060401. ISSN 0031-9007. PMID 17358917. S2CID 5731929. Furukawa, Shunsuke; Misguich, Grégoire (5 June 2007). "Topological entanglement entropy in the quantum dimer model on the triangular lattice". Physical Review B. 75 (21): 214407. arXiv:cond-mat/0612227. Bibcode:2007PhRvB..75u4407F. doi:10.1103/physrevb.75.214407. ISSN 1098-0121. S2CID 118950876.
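For states whose quasiparticle quantum dimensions d_i are known, the statement above that γ equals the logarithm of the total quantum dimension can be evaluated directly. The sketch below uses the standard relation D = √(Σ_i d_i²) for the total quantum dimension, which is imported here from the anyon-model literature rather than from this article, and reproduces the two values quoted above.

```python
from math import log, sqrt

def gamma(quantum_dims):
    """Topological entanglement entropy: gamma = log D, D = sqrt(sum d_i^2)."""
    D = sqrt(sum(d * d for d in quantum_dims))
    return log(D)

# Toric code / Z2 spin liquid: four abelian anyons {1, e, m, em}, all d_i = 1.
print(gamma([1, 1, 1, 1]), "== log 2 =", log(2))

# Laughlin state at filling 1/m: m abelian anyon types, all d_i = 1,
# giving gamma = (1/2) log m.
m = 3
print(gamma([1] * m), "== 0.5 log m =", 0.5 * log(m))
```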
Wikipedia/Topological_entropy_in_physics
In mathematics, especially in the fields of representation theory and module theory, a Frobenius algebra is a finite-dimensional unital associative algebra with a special kind of bilinear form which gives the algebras particularly nice duality theories. Frobenius algebras began to be studied in the 1930s by Richard Brauer and Cecil Nesbitt and were named after Georg Frobenius. Tadashi Nakayama discovered the beginnings of a rich duality theory (Nakayama 1939), (Nakayama 1941). Jean Dieudonné used this to characterize Frobenius algebras (Dieudonné 1958). Frobenius algebras were generalized to quasi-Frobenius rings, those Noetherian rings whose right regular representation is injective. In recent times, interest has been renewed in Frobenius algebras due to connections to topological quantum field theory. == Definition == A finite-dimensional, unital, associative algebra A defined over a field k is said to be a Frobenius algebra if A is equipped with a nondegenerate bilinear form σ : A × A → k that satisfies the following equation: σ(a·b, c) = σ(a, b·c). This bilinear form is called the Frobenius form of the algebra. Equivalently, one may equip A with a linear functional λ : A → k such that the kernel of λ contains no nonzero left ideal of A. A Frobenius algebra is called symmetric if σ is symmetric, or equivalently λ satisfies λ(a·b) = λ(b·a). There is also a different, mostly unrelated notion of the symmetric algebra of a vector space. == Nakayama automorphism == For a Frobenius algebra A with σ as above, the automorphism ν of A such that σ(a, b) = σ(ν(b), a) is the Nakayama automorphism associated to A and σ. == Examples == Any matrix algebra defined over a field k is a Frobenius algebra with Frobenius form σ(a,b)=tr(a·b) where tr denotes the trace. Any finite-dimensional unital associative algebra A has a natural homomorphism to its own endomorphism ring End(A). A bilinear form can be defined on A in the sense of the previous example. If this bilinear form is nondegenerate, then it equips A with the structure of a Frobenius algebra. Every group ring k[G] of a finite group G over a field k is a symmetric Frobenius algebra, with Frobenius form σ(a,b) given by the coefficient of the identity element in a·b. For a field k, the four-dimensional k-algebra k[x,y]/ (x2, y2) is a Frobenius algebra. This follows from the characterization of commutative local Frobenius rings below, since this ring is a local ring with its maximal ideal generated by x and y, and unique minimal ideal generated by xy. For a field k, the three-dimensional k-algebra A=k[x,y]/ (x, y)2 is not a Frobenius algebra. The A-module homomorphism from xA into A induced by x ↦ y cannot be extended to an A-module homomorphism from A into A, showing that the ring is not self-injective, thus not Frobenius. Any finite-dimensional Hopf algebra is a Frobenius algebra, by a 1969 theorem of Larson–Sweedler on Hopf modules and integrals. == Properties == The direct product and tensor product of Frobenius algebras are Frobenius algebras. A finite-dimensional commutative local algebra over a field is Frobenius if and only if the right regular module is injective, if and only if the algebra has a unique minimal ideal. Commutative, local Frobenius algebras are precisely the zero-dimensional local Gorenstein rings containing their residue field and finite-dimensional over it. Frobenius algebras are quasi-Frobenius rings, and in particular, they are left and right Artinian and left and right self-injective.
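The matrix-algebra example above is easy to verify numerically. The sketch below (a toy 2×2 case; numpy is assumed available and the helper names are ad hoc) builds the Gram matrix of σ(a, b) = tr(a·b) on the basis of matrix units and checks nondegeneracy together with the associativity identity σ(a·b, c) = σ(a, b·c).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigma(a, b):
    # Frobenius form on the matrix algebra: sigma(a, b) = tr(a.b)
    return np.trace(a @ b)

# Basis of M_2(R): the four matrix units E_ij.
basis = [np.eye(1, 4, k).reshape(2, 2) for k in range(4)]

# Nondegeneracy: the Gram matrix [sigma(e_i, e_j)] is invertible.
gram = np.array([[sigma(e, f) for f in basis] for e in basis])
assert abs(np.linalg.det(gram)) > 1e-12

# Associativity of the form: sigma(a.b, c) == sigma(a, b.c).
for _ in range(5):
    a, b, c = (rng.standard_normal((2, 2)) for _ in range(3))
    assert np.isclose(sigma(a @ b, c), sigma(a, b @ c))
print("sigma is a nondegenerate associative bilinear form on M_2")
```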
For a field k, a finite-dimensional, unital, associative algebra is Frobenius if and only if the injective right A-module Homk(A,k) is isomorphic to the right regular representation of A. For an infinite field k, a finite-dimensional, unital, associative k-algebra is a Frobenius algebra if it has only finitely many minimal right ideals. If F is a finite-dimensional extension field of k, then a finite-dimensional F-algebra is naturally a finite-dimensional k-algebra via restriction of scalars, and is a Frobenius F-algebra if and only if it is a Frobenius k-algebra. In other words, the Frobenius property does not depend on the field, as long as the algebra remains a finite-dimensional algebra. Similarly, if F is a finite-dimensional extension field of k, then every k-algebra A gives rise naturally to an F algebra, F ⊗k A, and A is a Frobenius k-algebra if and only if F ⊗k A is a Frobenius F-algebra. Amongst those finite-dimensional, unital, associative algebras whose right regular representation is injective, the Frobenius algebras A are precisely those whose simple modules M have the same dimension as their A-duals, HomA(M,A). Amongst these algebras, the A-duals of simple modules are always simple. A finite-dimensional bi-Frobenius algebra or strict double Frobenius algebra is a k-vector-space A with two multiplication structures as unital Frobenius algebras (A, • , 1) and (A, ⋆ {\displaystyle \star } , ι {\displaystyle \iota } ): there must be multiplicative homomorphisms ϕ {\displaystyle \phi } and ε {\displaystyle \varepsilon } of A into k with ϕ ( a ⋅ b ) {\displaystyle \phi (a\cdot b)} and ε ( a ⋆ b ) {\displaystyle \varepsilon (a\star b)} non-degenerate, and a k-isomorphism S of A onto itself which is an anti-automorphism for both structures, such that ϕ ( a ⋅ b ) = ε ( S ( a ) ⋆ b ) . {\displaystyle \phi (a\cdot b)=\varepsilon (S(a)\star b).} This is the case precisely when A is a finite-dimensional Hopf algebra over k and S is its antipode. The group algebra of a finite group gives an example. == Category-theoretical definition == In category theory, the notion of Frobenius object is an abstract definition of a Frobenius algebra in a category. A Frobenius object ( A , μ , η , δ , ε ) {\displaystyle (A,\mu ,\eta ,\delta ,\varepsilon )} in a monoidal category ( C , ⊗ , I ) {\displaystyle (C,\otimes ,I)} consists of an object A of C together with four morphisms μ : A ⊗ A → A , η : I → A , δ : A → A ⊗ A a n d ε : A → I {\displaystyle \mu :A\otimes A\to A,\qquad \eta :I\to A,\qquad \delta :A\to A\otimes A\qquad \mathrm {and} \qquad \varepsilon :A\to I} such that ( A , μ , η ) {\displaystyle (A,\mu ,\eta )\,} is a monoid object in C, ( A , δ , ε ) {\displaystyle (A,\delta ,\varepsilon )} is a comonoid object in C, and the Frobenius conditions hold: ( μ ⊗ id A ) ∘ ( id A ⊗ δ ) = δ ∘ μ = ( id A ⊗ μ ) ∘ ( δ ⊗ id A ) {\displaystyle (\mu \otimes \operatorname {id} _{A})\circ (\operatorname {id} _{A}\otimes \delta )=\delta \circ \mu =(\operatorname {id} _{A}\otimes \mu )\circ (\delta \otimes \operatorname {id} _{A})} (written here as equations, valid when the monoidal category C is strict; in the general case these conditions are expressed as two commuting diagrams). More compactly, a Frobenius algebra in C is a so-called Frobenius monoidal functor A:1 → C, where 1 is the category consisting of one object and one arrow. A Frobenius algebra is called isometric or special if μ ∘ δ = I d A {\displaystyle \mu \circ \delta =\mathrm {Id} _{A}} . == Applications == Frobenius algebras originally were studied as part of an investigation into the representation theory of finite groups, and have contributed to the study of number theory, algebraic geometry, and combinatorics. They have been used to study Hopf algebras, coding theory, and cohomology rings of compact oriented manifolds.
=== Topological quantum field theories === Recently, it has been seen that they play an important role in the algebraic treatment and axiomatic foundation of topological quantum field theory. A commutative Frobenius algebra determines uniquely (up to isomorphism) a (1+1)-dimensional TQFT. More precisely, the category of commutative Frobenius K {\displaystyle K} -algebras is equivalent to the category of symmetric strong monoidal functors from 2 {\displaystyle 2} - Cob {\displaystyle {\textbf {Cob}}} (the category of 2-dimensional cobordisms between 1-dimensional manifolds) to Vect K {\displaystyle {\textbf {Vect}}_{K}} (the category of vector spaces over K {\displaystyle K} ). The correspondence between TQFTs and Frobenius algebras is given as follows: 1-dimensional manifolds are disjoint unions of circles: a TQFT associates a vector space with a circle, and the tensor product of vector spaces with a disjoint union of circles, a TQFT associates (functorially) to each cobordism between manifolds a map between vector spaces, the map associated with a pair of pants (a cobordism between 1 circle and 2 circles) gives a product map V ⊗ V → V {\displaystyle V\otimes V\to V} or a coproduct map V → V ⊗ V {\displaystyle V\to V\otimes V} , depending on how the boundary components are grouped – which is commutative or cocommutative, and the map associated with a disk gives a counit (trace) or unit (scalars), depending on grouping of boundary. This relation between Frobenius algebras and (1+1)-dimensional TQFTs can be used to explain Khovanov's categorification of the Jones polynomial. == Generalizations == === Frobenius extensions === Let B be a subring sharing the identity element of a unital associative ring A. This is also known as ring extension A | B. Such a ring extension is called Frobenius if There is a linear mapping E: A → B satisfying the bimodule condition E(bac) = bE(a)c for all b,c ∈ B and a ∈ A. There are elements in A denoted { x i } i = 1 n {\displaystyle \{x_{i}\}_{i=1}^{n}} and { y i } i = 1 n {\displaystyle \{y_{i}\}_{i=1}^{n}} such that for all a ∈ A we have: ∑ i = 1 n E ( a x i ) y i = a = ∑ i = 1 n x i E ( y i a ) {\displaystyle \sum _{i=1}^{n}E(ax_{i})y_{i}=a=\sum _{i=1}^{n}x_{i}E(y_{i}a)} The map E is sometimes referred to as a Frobenius homomorphism and the elements x i , y i {\displaystyle x_{i},y_{i}} as dual bases. (As an exercise it is possible to give an equivalent definition of Frobenius extension as a Frobenius algebra-coalgebra object in the category of B-B-bimodules, where the equations just given become the counit equations for the counit E.) For example, a Frobenius algebra A over a commutative ring K, with associative nondegenerate bilinear form (-,-) and projective K-bases x i , y i {\displaystyle x_{i},y_{i}} is a Frobenius extension A | K with E(a) = (a,1). Other examples of Frobenius extensions are pairs of group algebras associated to a subgroup of finite index, Hopf subalgebras of a semisimple Hopf algebra, Galois extensions and certain von Neumann algebra subfactors of finite index. Another source of examples of Frobenius extensions (and twisted versions) are certain subalgebra pairs of Frobenius algebras, where the subalgebra is stabilized by the symmetrizing automorphism of the overalgebra. The details of the group ring example are the following application of elementary notions in group theory. Let G be a group and H a subgroup of finite index n in G; let g1, ..., gn
be left coset representatives, so that G is a disjoint union of the cosets g1H, ..., gnH. Over any commutative base ring k define the group algebras A = k[G] and B = k[H], so B is a subalgebra of A. Define a Frobenius homomorphism E: A → B by letting E(h) = h for all h in H, and E(g) = 0 for g not in H : extend this linearly from the basis group elements to all of A, so one obtains the B-B-bimodule projection E ( ∑ g ∈ G n g g ) = ∑ h ∈ H n h h for n g ∈ k {\displaystyle E\left(\sum _{g\in G}n_{g}g\right)=\sum _{h\in H}n_{h}h\ \ \ {\text{ for }}n_{g}\in k} (The orthonormality condition E ( g i − 1 g j ) = δ i j 1 {\displaystyle E(g_{i}^{-1}g_{j})=\delta _{ij}1} follows.) The dual base is given by x i = g i , y i = g i − 1 {\displaystyle x_{i}=g_{i},y_{i}=g_{i}^{-1}} , since ∑ i = 1 n g i E ( g i − 1 ∑ g ∈ G n g g ) = ∑ i ∑ h ∈ H n g i h g i h = ∑ g ∈ G n g g {\displaystyle \sum _{i=1}^{n}g_{i}E\left(g_{i}^{-1}\sum _{g\in G}n_{g}g\right)=\sum _{i}\sum _{h\in H}n_{g_{i}h}g_{i}h=\sum _{g\in G}n_{g}g} The other dual base equation may be derived from the observation that G is also a disjoint union of the right cosets H g 1 − 1 , … , H g n − 1 {\displaystyle Hg_{1}^{-1},\ldots ,Hg_{n}^{-1}} . Also Hopf-Galois extensions are Frobenius extensions by a theorem of Kreimer and Takeuchi from 1989. A simple example of this is a finite group G acting by automorphisms on an algebra A with subalgebra of invariants: B = { x ∈ A ∣ ∀ g ∈ G , g ( x ) = x } . {\displaystyle B=\{x\in A\mid \forall g\in G,g(x)=x\}.} By DeMeyer's criterion A is G-Galois over B if there are elements { a i } i = 1 n , { b i } i = 1 n {\displaystyle \{a_{i}\}_{i=1}^{n},\{b_{i}\}_{i=1}^{n}} in A satisfying: ∀ g ∈ G : ∑ i = 1 n a i g ( b i ) = δ g , 1 G 1 A {\displaystyle \forall g\in G:\ \ \sum _{i=1}^{n}a_{i}g(b_{i})=\delta _{g,1_{G}}1_{A}} whence also ∀ g ∈ G : ∑ i = 1 n g ( a i ) b i = δ g , 1 G 1 A . {\displaystyle \forall g\in G:\ \ \sum _{i=1}^{n}g(a_{i})b_{i}=\delta _{g,1_{G}}1_{A}.} Then A is a Frobenius extension of B with E: A → B defined by E ( a ) = ∑ g ∈ G g ( a ) {\displaystyle E(a)=\sum _{g\in G}g(a)} which satisfies ∀ x ∈ A : ∑ i = 1 n E ( x a i ) b i = x = ∑ i = 1 n a i E ( b i x ) . {\displaystyle \forall x\in A:\ \ \sum _{i=1}^{n}E(xa_{i})b_{i}=x=\sum _{i=1}^{n}a_{i}E(b_{i}x).} (Furthermore, an example of a separable algebra extension since e = ∑ i = 1 n a i ⊗ B b i {\textstyle e=\sum _{i=1}^{n}a_{i}\otimes _{B}b_{i}} is a separability element satisfying ea = ae for all a in A as well as ∑ i = 1 n a i b i = 1 {\textstyle \sum _{i=1}^{n}a_{i}b_{i}=1} . Also an example of a depth two subring (B in A) since a ⊗ B 1 = ∑ g ∈ G t g g ( a ) {\displaystyle a\otimes _{B}1=\sum _{g\in G}t_{g}g(a)} where t g = ∑ i = 1 n a i ⊗ B g ( b i ) {\displaystyle t_{g}=\sum _{i=1}^{n}a_{i}\otimes _{B}g(b_{i})} for each g in G and a in A.) Frobenius extensions have a well-developed theory of induced representations investigated in papers by Kasch and Pareigis, Nakayama and Tzuzuku in the 1950s and 1960s. For example, for each B-module M, the induced module A ⊗B M (if M is a left module) and co-induced module HomB(A, M) are naturally isomorphic as A-modules (as an exercise one defines the isomorphism given E and dual bases). The endomorphism ring theorem of Kasch from 1960 states that if A | B is a Frobenius extension, then so is A → End(AB) where the mapping is given by a ↦ λa(x) and λa(x) = ax for each a,x ∈ A. Endomorphism ring theorems and converses were investigated later by Mueller, Morita, Onodera and others. 
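The dual-basis computation in the group-ring example above is easy to test by machine. The following Python sketch is illustrative only (the group Z4 written additively, the subgroup H = {0, 2}, and all helper names are choices made for this example): group algebra elements are represented as dictionaries of rational coefficients, E projects onto H, and the two dual-basis equations are checked for a random element.

```python
from fractions import Fraction
import random

n, H = 4, {0, 2}          # G = Z_4 (written additively), H = {0, 2}
reps = [0, 1]             # left coset representatives g_1, g_2

def mul(a, b):
    # Product in the group algebra k[Z_4]: convolution of coefficients.
    out = {g: Fraction(0) for g in range(n)}
    for g, cg in a.items():
        for h, ch in b.items():
            out[(g + h) % n] += cg * ch
    return out

def E(a):
    # Frobenius homomorphism k[G] -> k[H]: keep only coefficients on H.
    return {g: c for g, c in a.items() if g in H}

def delta(g):
    # Basis group element viewed as a group algebra element.
    return {g: Fraction(1)}

random.seed(1)
a = {g: Fraction(random.randint(-5, 5)) for g in range(n)}

# Dual-basis equations: sum_i E(a x_i) y_i = a = sum_i x_i E(y_i a),
# with x_i = g_i and y_i = g_i^{-1}.
lhs = {g: Fraction(0) for g in range(n)}
rhs = {g: Fraction(0) for g in range(n)}
for g in reps:
    x, y = delta(g), delta((-g) % n)
    for k, c in mul(E(mul(a, x)), y).items(): lhs[k] += c
    for k, c in mul(x, E(mul(y, a))).items(): rhs[k] += c

assert lhs == a == rhs
print("dual-basis identity holds:", lhs)
```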
=== Frobenius adjunctions === As already hinted at in the previous paragraph, Frobenius extensions have an equivalent categorical formulation. Namely, given a ring extension S ⊂ R {\displaystyle S\subset R} , the induction functor R ⊗ S − : Mod ( S ) → Mod ( R ) {\displaystyle R\otimes _{S}-\colon {\text{Mod}}(S)\to {\text{Mod}}(R)} from the category of, say, left S-modules to the category of left R-modules has both a left and a right adjoint, called co-restriction and restriction, respectively. The ring extension is then called Frobenius if and only if the left and the right adjoint are naturally isomorphic. This leads to the obvious abstraction to ordinary category theory: An adjunction F ⊣ G {\displaystyle F\dashv G} is called a Frobenius adjunction iff also G ⊣ F {\displaystyle G\dashv F} . A functor F is a Frobenius functor if it is part of a Frobenius adjunction, i.e. if it has isomorphic left and right adjoints. == See also == == References == == External links == Street, Ross (2004). "Frobenius algebras and monoidal categories" (PDF). Annual Meeting Aust. Math. Soc. CiteSeerX 10.1.1.180.7082.
Wikipedia/Frobenius_algebra
In gauge theory, topological Yang–Mills theory, also known as the theta term or θ {\displaystyle \theta } -term, is a gauge-invariant term which can be added to the action for four-dimensional field theories, first introduced by Edward Witten. It does not change the classical equations of motion, and its effects are only seen at the quantum level, having important consequences for CPT symmetry. == Action == === Spacetime and field content === The most common setting is on four-dimensional, flat spacetime (Minkowski space). As a gauge theory, the theory has a gauge symmetry under the action of a gauge group, a Lie group G {\displaystyle G} , with associated Lie algebra g {\displaystyle {\mathfrak {g}}} through the usual correspondence. The field content is the gauge field A μ {\displaystyle A_{\mu }} , also known in geometry as the connection. It is a 1 {\displaystyle 1} -form valued in the Lie algebra g {\displaystyle {\mathfrak {g}}} . === Action === In this setting the theta term action is S θ = θ 16 π 2 ∫ d 4 x tr ( F μ ν ∗ F μ ν ) = θ 16 π 2 ∫ ⟨ F ∧ F ⟩ {\displaystyle S_{\theta }={\frac {\theta }{16\pi ^{2}}}\int d^{4}x\,{\text{tr}}(F_{\mu \nu }*F^{\mu \nu })={\frac {\theta }{16\pi ^{2}}}\int \langle F\wedge F\rangle } where F μ ν {\displaystyle F_{\mu \nu }} is the field strength tensor, also known in geometry as the curvature tensor. It is defined as F μ ν = ∂ μ A ν − ∂ ν A μ + [ A μ , A ν ] {\displaystyle F_{\mu \nu }=\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }+[A_{\mu },A_{\nu }]} , up to some choice of convention: the commutator sometimes appears with a scalar prefactor of ± i {\displaystyle \pm i} or g {\displaystyle g} , a coupling constant. ∗ F μ ν {\displaystyle *F^{\mu \nu }} is the dual field strength, defined as ∗ F μ ν = 1 2 ϵ μ ν ρ σ F ρ σ {\displaystyle *F^{\mu \nu }={\frac {1}{2}}\epsilon ^{\mu \nu \rho \sigma }F_{\rho \sigma }} . ϵ μ ν ρ σ {\displaystyle \epsilon ^{\mu \nu \rho \sigma }} is the totally antisymmetric symbol, or alternating tensor. In a more general geometric setting it is the volume form, and the dual field strength ∗ F {\displaystyle *F} is the Hodge dual of the field strength F {\displaystyle F} . θ {\displaystyle \theta } is the theta-angle, a real parameter. tr {\displaystyle {\text{tr}}} is an invariant, symmetric bilinear form on g {\displaystyle {\mathfrak {g}}} . It is denoted tr {\displaystyle {\text{tr}}} as it is often the trace when g {\displaystyle {\mathfrak {g}}} is under some representation. Concretely, this is often the adjoint representation and in this setting tr {\displaystyle {\text{tr}}} is the Killing form. === As a total derivative === The action can be written as S θ = θ 8 π 2 ∫ d 4 x ∂ μ ϵ μ ν ρ σ tr ( A ν ∂ ρ A σ + 2 3 A ν A ρ A σ ) = θ 8 π 2 ∫ d 4 x ∂ μ ϵ μ ν ρ σ CS ( A ) ν ρ σ , {\displaystyle S_{\theta }={\frac {\theta }{8\pi ^{2}}}\int d^{4}x\,\partial _{\mu }\epsilon ^{\mu \nu \rho \sigma }{\text{tr}}\left(A_{\nu }\partial _{\rho }A_{\sigma }+{\frac {2}{3}}A_{\nu }A_{\rho }A_{\sigma }\right)={\frac {\theta }{8\pi ^{2}}}\int d^{4}x\,\partial _{\mu }\epsilon ^{\mu \nu \rho \sigma }{\text{CS}}(A)_{\nu \rho \sigma },} where CS ( A ) {\displaystyle {\text{CS}}(A)} is the Chern–Simons 3-form. This means that, classically, the theta term does not contribute to the equations of motion. == Properties of the quantum theory == === CP violation === === Chiral anomaly === == See also == Yang–Mills theory == References == == External links == nLab
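In the abelian case, where the trace is trivial and the cubic term in the Chern–Simons form drops out, the total-derivative statement above reduces to the identity ε^{μνρσ}F_{μν}F_{ρσ} = 4 ∂_μ(ε^{μνρσ}A_ν∂_ρA_σ), which can be checked symbolically. A sympy sketch under that stated abelian assumption (variable names are ad hoc; this is a minimal check, not a general proof for non-abelian G):

```python
import sympy as sp
from sympy import LeviCivita

x = sp.symbols('x0:4')                             # spacetime coordinates
A = [sp.Function(f'A{m}')(*x) for m in range(4)]   # arbitrary abelian gauge field

def F(m, n):
    # Abelian field strength: F_{mn} = d_m A_n - d_n A_m (no commutator term).
    return sp.diff(A[n], x[m]) - sp.diff(A[m], x[n])

idx = range(4)
lhs = sum(LeviCivita(m, n, r, s) * F(m, n) * F(r, s)
          for m in idx for n in idx for r in idx for s in idx)

# Abelian Chern-Simons current: CS^m = eps^{m n r s} A_n d_r A_s.
cs = [sum(LeviCivita(m, n, r, s) * A[n] * sp.diff(A[s], x[r])
          for n in idx for r in idx for s in idx) for m in idx]
rhs = 4 * sum(sp.diff(cs[m], x[m]) for m in idx)

print(sp.expand(lhs - rhs))  # 0: the theta density is a total derivative
```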
Wikipedia/Topological_Yang–Mills_theory
A conformal field theory (CFT) is a quantum field theory that is invariant under conformal transformations. In two dimensions, there is an infinite-dimensional algebra of local conformal transformations, and conformal field theories can sometimes be exactly solved or classified. Conformal field theory has important applications to condensed matter physics, statistical mechanics, quantum statistical mechanics, and string theory. Statistical and condensed matter systems are indeed often conformally invariant at their thermodynamic or quantum critical points. == Scale invariance vs conformal invariance == In quantum field theory, scale invariance is a common and natural symmetry, because any fixed point of the renormalization group is by definition scale invariant. Conformal symmetry is stronger than scale invariance, and one needs additional assumptions to argue that it should appear in nature. The basic idea behind its plausibility is that local scale invariant theories have their currents given by T μ ν ξ ν {\displaystyle T_{\mu \nu }\xi ^{\nu }} where ξ ν {\displaystyle \xi ^{\nu }} is a Killing vector and T μ ν {\displaystyle T_{\mu \nu }} is a conserved operator (the stress-tensor) of dimension exactly ⁠ d {\displaystyle d} ⁠. For the associated symmetries to include scale but not conformal transformations, the trace T μ μ {\displaystyle T_{\mu }{}^{\mu }} has to be a non-zero total derivative implying that there is a non-conserved operator of dimension exactly ⁠ d − 1 {\displaystyle d-1} ⁠. Under some assumptions it is possible to completely rule out this type of non-renormalization and hence prove that scale invariance implies conformal invariance in a quantum field theory, for example in unitary compact conformal field theories in two dimensions. While it is possible for a quantum field theory to be scale invariant but not conformally invariant, examples are rare. For this reason, the terms are often used interchangeably in the context of quantum field theory. == Two dimensions vs higher dimensions == The number of independent conformal transformations is infinite in two dimensions, and finite in higher dimensions. This makes conformal symmetry much more constraining in two dimensions. All conformal field theories share the ideas and techniques of the conformal bootstrap. But the resulting equations are more powerful in two dimensions, where they are sometimes exactly solvable (for example in the case of minimal models), in contrast to higher dimensions, where numerical approaches dominate. The development of conformal field theory has been earlier and deeper in the two-dimensional case, in particular after the 1983 article by Belavin, Polyakov and Zamolodchikov. The term conformal field theory has sometimes been used with the meaning of two-dimensional conformal field theory, as in the title of a 1997 textbook. Higher-dimensional conformal field theories have become more popular with the AdS/CFT correspondence in the late 1990s, and the development of numerical conformal bootstrap techniques in the 2000s. === Global vs local conformal symmetry in two dimensions === The global conformal group of the Riemann sphere is the group of Möbius transformations ⁠ P S L 2 ( C ) {\displaystyle \mathrm {PSL} _{2}(\mathbb {C} )} ⁠, which is finite-dimensional. 
On the other hand, infinitesimal conformal transformations form the infinite-dimensional Witt algebra: the conformal Killing equations in two dimensions, ∂ μ ξ ν + ∂ ν ξ μ = ∂ ⋅ ξ η μ ν , {\displaystyle \partial _{\mu }\xi _{\nu }+\partial _{\nu }\xi _{\mu }=\partial \cdot \xi \eta _{\mu \nu },~} reduce to just the Cauchy–Riemann equations, ⁠ ∂ z ¯ ξ ( z ) = 0 = ∂ z ξ ( z ¯ ) {\displaystyle \partial _{\bar {z}}\xi (z)=0=\partial _{z}\xi ({\bar {z}})} ⁠; the infinity of modes of arbitrary analytic coordinate transformations ξ ( z ) {\displaystyle \xi (z)} then yields the infinity of Killing vector fields ⁠ z n ∂ z {\displaystyle z^{n}\partial _{z}} ⁠. Strictly speaking, it is possible for a two-dimensional conformal field theory to be local (in the sense of possessing a stress-tensor) while still only exhibiting invariance under the global ⁠ P S L 2 ( C ) {\displaystyle \mathrm {PSL} _{2}(\mathbb {C} )} ⁠. This turns out to be unique to non-unitary theories; an example is the biharmonic scalar. This property should be viewed as even more special than scale without conformal invariance as it requires T μ μ {\displaystyle T_{\mu }{}^{\mu }} to be a total second derivative. Global conformal symmetry in two dimensions is a special case of conformal symmetry in higher dimensions, and is studied with the same techniques. This is done not only in theories that have global but not local conformal symmetry, but also in theories that do have local conformal symmetry, for the purpose of testing techniques or ideas from higher-dimensional CFT. In particular, numerical bootstrap techniques can be tested by applying them to minimal models, and comparing the results with the known analytic results that follow from local conformal symmetry. === Conformal field theories with a Virasoro symmetry algebra === In a conformally invariant two-dimensional quantum theory, the Witt algebra of infinitesimal conformal transformations has to be centrally extended. The quantum symmetry algebra is therefore the Virasoro algebra, which depends on a number called the central charge. This central extension can also be understood in terms of a conformal anomaly. It was shown by Alexander Zamolodchikov that there exists a function which decreases monotonically under the renormalization group flow of a two-dimensional quantum field theory, and is equal to the central charge for a two-dimensional conformal field theory. This is known as the Zamolodchikov C-theorem, and tells us that renormalization group flow in two dimensions is irreversible. In addition to being centrally extended, the symmetry algebra of a conformally invariant quantum theory has to be complexified, resulting in two copies of the Virasoro algebra. In Euclidean CFT, these copies are called holomorphic and antiholomorphic. In Lorentzian CFT, they are called left-moving and right-moving. Both copies have the same central charge. The space of states of a theory is a representation of the product of the two Virasoro algebras. This space is a Hilbert space if the theory is unitary. This space may contain a vacuum state, or in statistical mechanics, a thermal state. Unless the central charge vanishes, there cannot exist a state that leaves the entire infinite dimensional conformal symmetry unbroken. The best we can have is a state that is invariant under the generators L n ≥ − 1 {\displaystyle L_{n\geq -1}} of the Virasoro algebra, whose basis is ⁠ ( L n ) n ∈ Z {\displaystyle (L_{n})_{n\in \mathbb {Z} }} ⁠.
This contains the generators L − 1 , L 0 , L 1 {\displaystyle L_{-1},L_{0},L_{1}} of the global conformal transformations. The rest of the conformal group is spontaneously broken. == Conformal symmetry == === Definition and Jacobian === For a given spacetime and metric, a conformal transformation is a transformation that preserves angles. We will focus on conformal transformations of the flat d {\displaystyle d} -dimensional Euclidean space R d {\displaystyle \mathbb {R} ^{d}} or of the Minkowski space ⁠ R 1 , d − 1 {\displaystyle \mathbb {R} ^{1,d-1}} ⁠. If x → f ( x ) {\displaystyle x\to f(x)} is a conformal transformation, the Jacobian J ν μ ( x ) = ∂ f μ ( x ) ∂ x ν {\displaystyle J_{\nu }^{\mu }(x)={\frac {\partial f^{\mu }(x)}{\partial x^{\nu }}}} is of the form J ν μ ( x ) = Ω ( x ) R ν μ ( x ) , {\displaystyle J_{\nu }^{\mu }(x)=\Omega (x)R_{\nu }^{\mu }(x),} where Ω ( x ) {\displaystyle \Omega (x)} is the scale factor, and R ν μ ( x ) {\displaystyle R_{\nu }^{\mu }(x)} is a rotation (i.e. an orthogonal matrix) or Lorentz transformation. === Conformal group === The conformal group of Euclidean space is locally isomorphic to ⁠ S O ( 1 , d + 1 ) {\displaystyle \mathrm {SO} (1,d+1)} ⁠, and of Minkowski space is ⁠ S O ( 2 , d ) {\displaystyle \mathrm {SO} (2,d)} ⁠. This includes translations, rotations (Euclidean) or Lorentz transformations (Minkowski), and dilations i.e. scale transformations x μ → λ x μ . {\displaystyle x^{\mu }\to \lambda x^{\mu }.} This also includes special conformal transformations. For any translation ⁠ T a ( x ) = x + a {\displaystyle T_{a}(x)=x+a} ⁠, there is a special conformal transformation S a = I ∘ T a ∘ I , {\displaystyle S_{a}=I\circ T_{a}\circ I,} where I {\displaystyle I} is the inversion such that I ( x μ ) = x μ x 2 . {\displaystyle I\left(x^{\mu }\right)={\frac {x^{\mu }}{x^{2}}}.} In the sphere ⁠ S d = R d ∪ { ∞ } {\displaystyle S^{d}=\mathbb {R} ^{d}\cup \{\infty \}} ⁠, the inversion exchanges 0 {\displaystyle 0} with ⁠ ∞ {\displaystyle \infty } ⁠. Translations leave ∞ {\displaystyle \infty } fixed, while special conformal transformations leave 0 {\displaystyle 0} fixed. === Conformal algebra === The commutation relations of the corresponding Lie algebra are [ P μ , P ν ] = 0 , [ D , K μ ] = − K μ , [ D , P μ ] = P μ , [ K μ , K ν ] = 0 , [ K μ , P ν ] = η μ ν D − i M μ ν , {\displaystyle {\begin{aligned}[][P_{\mu },P_{\nu }]&=0,\\[][D,K_{\mu }]&=-K_{\mu },\\[][D,P_{\mu }]&=P_{\mu },\\[][K_{\mu },K_{\nu }]&=0,\\[][K_{\mu },P_{\nu }]&=\eta _{\mu \nu }D-iM_{\mu \nu },\end{aligned}}} where P {\displaystyle P} generate translations, D {\displaystyle D} generates dilations, K μ {\displaystyle K_{\mu }} generate special conformal transformations, and M μ ν {\displaystyle M_{\mu \nu }} generate rotations or Lorentz transformations. The tensor η μ ν {\displaystyle \eta _{\mu \nu }} is the flat metric. === Global issues in Minkowski space === In Minkowski space, the conformal group does not preserve causality. Observables such as correlation functions are invariant under the conformal algebra, but not under the conformal group. As shown by Lüscher and Mack, it is possible to restore the invariance under the conformal group by extending the flat Minkowski space into a Lorentzian cylinder. The original Minkowski space is conformally equivalent to a region of the cylinder called a Poincaré patch. In the cylinder, global conformal transformations do not violate causality: instead, they can move points outside the Poincaré patch. 
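The definition S_a = I ∘ T_a ∘ I above can be verified symbolically: with these conventions one finds S_a(x)^μ = (x^μ + a^μ x²)/(1 + 2a·x + a²x²), where the signs in front of a differ between references depending on whether T_a or T_{−a} is used. A minimal sympy sketch in three Euclidean dimensions (the dimension is chosen only to keep the computation fast):

```python
import sympy as sp

x = sp.Matrix(sp.symbols('x0:3', real=True))   # a point in R^3
a = sp.Matrix(sp.symbols('a0:3', real=True))   # translation parameter

def inversion(v):
    # I(x) = x / x^2
    return v / v.dot(v)

# Special conformal transformation: S_a = I o T_a o I.
S = inversion(inversion(x) + a)

# Known closed form: (x + a x^2) / (1 + 2 a.x + a^2 x^2).
x2 = x.dot(x)
closed = (x + a * x2) / (1 + 2 * a.dot(x) + a.dot(a) * x2)

print(sp.simplify(S - closed))   # zero vector: the two expressions agree
```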
== Correlation functions and conformal bootstrap == In the conformal bootstrap approach, a conformal field theory is a set of correlation functions that obey a number of axioms. The n {\displaystyle n} -point correlation function ⟨ O 1 ( x 1 ) ⋯ O n ( x n ) ⟩ {\displaystyle \left\langle O_{1}(x_{1})\cdots O_{n}(x_{n})\right\rangle } is a function of the positions x i {\displaystyle x_{i}} and other parameters of the fields ⁠ O 1 , … , O n {\displaystyle O_{1},\dots ,O_{n}} ⁠. In the bootstrap approach, the fields themselves make sense only in the context of correlation functions, and may be viewed as efficient notations for writing axioms for correlation functions. Correlation functions depend linearly on fields, in particular ⁠ ∂ x 1 ⟨ O 1 ( x 1 ) ⋯ ⟩ = ⟨ ∂ x 1 O 1 ( x 1 ) ⋯ ⟩ {\displaystyle \partial _{x_{1}}\left\langle O_{1}(x_{1})\cdots \right\rangle =\left\langle \partial _{x_{1}}O_{1}(x_{1})\cdots \right\rangle } ⁠. We focus on CFT on the Euclidean space ⁠ R d {\displaystyle \mathbb {R} ^{d}} ⁠. In this case, correlation functions are Schwinger functions. They are defined for ⁠ x i ≠ x j {\displaystyle x_{i}\neq x_{j}} ⁠, and do not depend on the order of the fields. In Minkowski space, correlation functions are Wightman functions. They can depend on the order of the fields, as fields commute only if they are spacelike separated. A Euclidean CFT can be related to a Minkowskian CFT by Wick rotation, for example thanks to the Osterwalder-Schrader theorem. In such cases, Minkowskian correlation functions are obtained from Euclidean correlation functions by an analytic continuation that depends on the order of the fields. === Behaviour under conformal transformations === Any conformal transformation x → f ( x ) {\displaystyle x\to f(x)} acts linearly on fields ⁠ O ( x ) → π f ( O ) ( x ) {\displaystyle O(x)\to \pi _{f}(O)(x)} ⁠, such that f → π f {\displaystyle f\to \pi _{f}} is a representation of the conformal group, and correlation functions are invariant: ⟨ π f ( O 1 ) ( x 1 ) ⋯ π f ( O n ) ( x n ) ⟩ = ⟨ O 1 ( x 1 ) ⋯ O n ( x n ) ⟩ . {\displaystyle \left\langle \pi _{f}(O_{1})(x_{1})\cdots \pi _{f}(O_{n})(x_{n})\right\rangle =\left\langle O_{1}(x_{1})\cdots O_{n}(x_{n})\right\rangle .} Primary fields are fields that transform into themselves via ⁠ π f {\displaystyle \pi _{f}} ⁠. The behaviour of a primary field is characterized by a number Δ {\displaystyle \Delta } called its conformal dimension, and a representation ρ {\displaystyle \rho } of the rotation or Lorentz group. For a primary field, we then have π f ( O ) ( x ) = Ω ( x ′ ) − Δ ρ ( R ( x ′ ) ) O ( x ′ ) , where x ′ = f − 1 ( x ) . {\displaystyle \pi _{f}(O)(x)=\Omega (x')^{-\Delta }\rho (R(x'))O(x'),\quad {\text{where}}\ x'=f^{-1}(x).} Here Ω ( x ) {\displaystyle \Omega (x)} and R ( x ) {\displaystyle R(x)} are the scale factor and rotation that are associated to the conformal transformation ⁠ f {\displaystyle f} ⁠. The representation ρ {\displaystyle \rho } is trivial in the case of scalar fields, which transform as ⁠ π f ( O ) ( x ) = Ω ( x ′ ) − Δ O ( x ′ ) {\displaystyle \pi _{f}(O)(x)=\Omega (x')^{-\Delta }O(x')} ⁠. For vector fields, the representation ρ {\displaystyle \rho } is the fundamental representation, and we would have ⁠ π f ( O μ ) ( x ) = Ω ( x ′ ) − Δ R μ ν ( x ′ ) O ν ( x ′ ) {\displaystyle \pi _{f}(O_{\mu })(x)=\Omega (x')^{-\Delta }R_{\mu }^{\nu }(x')O_{\nu }(x')} ⁠. 
A primary field that is characterized by the conformal dimension Δ {\displaystyle \Delta } and representation ρ {\displaystyle \rho } behaves as a highest-weight vector in an induced representation of the conformal group from the subgroup generated by dilations and rotations. In particular, the conformal dimension Δ {\displaystyle \Delta } characterizes a representation of the subgroup of dilations. In two dimensions, the fact that this induced representation is a Verma module appears throughout the literature. For higher-dimensional CFTs (in which the maximally compact subalgebra is larger than the Cartan subalgebra), it has recently been appreciated that this representation is a parabolic or generalized Verma module. Derivatives (of any order) of primary fields are called descendant fields. Their behaviour under conformal transformations is more complicated. For example, if O {\displaystyle O} is a primary field, then π f ( ∂ μ O ) ( x ) = ∂ μ ( π f ( O ) ( x ) ) {\displaystyle \pi _{f}(\partial _{\mu }O)(x)=\partial _{\mu }\left(\pi _{f}(O)(x)\right)} is a linear combination of ∂ μ O {\displaystyle \partial _{\mu }O} and ⁠ O {\displaystyle O} ⁠. Correlation functions of descendant fields can be deduced from correlation functions of primary fields. However, even in the common case where all fields are either primaries or descendants thereof, descendant fields play an important role, because conformal blocks and operator product expansions involve sums over all descendant fields. The collection of all primary fields ⁠ O p {\displaystyle O_{p}} ⁠, characterized by their scaling dimensions Δ p {\displaystyle \Delta _{p}} and the representations ⁠ ρ p {\displaystyle \rho _{p}} ⁠, is called the spectrum of the theory. === Dependence on field positions === The invariance of correlation functions under conformal transformations severely constrains their dependence on field positions. In the case of two- and three-point functions, that dependence is determined up to finitely many constant coefficients. Higher-point functions have more freedom, and are only determined up to functions of conformally invariant combinations of the positions. The two-point function of two primary fields vanishes if their conformal dimensions differ. Δ 1 ≠ Δ 2 ⟹ ⟨ O 1 ( x 1 ) O 2 ( x 2 ) ⟩ = 0. {\displaystyle \Delta _{1}\neq \Delta _{2}\implies \left\langle O_{1}(x_{1})O_{2}(x_{2})\right\rangle =0.} If the dilation operator is diagonalizable (i.e. if the theory is not logarithmic), there exists a basis of primary fields such that two-point functions are diagonal, i.e. ⁠ i ≠ j ⟹ ⟨ O i O j ⟩ = 0 {\displaystyle i\neq j\implies \left\langle O_{i}O_{j}\right\rangle =0} ⁠. In this case, the two-point function of a scalar primary field is ⟨ O ( x 1 ) O ( x 2 ) ⟩ = 1 | x 1 − x 2 | 2 Δ , {\displaystyle \left\langle O(x_{1})O(x_{2})\right\rangle ={\frac {1}{|x_{1}-x_{2}|^{2\Delta }}},} where we choose the normalization of the field such that the constant coefficient, which is not determined by conformal symmetry, is one. Similarly, two-point functions of non-scalar primary fields are determined up to a coefficient, which can be set to one.
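Complementing the dilation check above, the nontrivial transformation to test is the inversion, whose scale factor is Ω(x) = 1/x². A sketch (Python with NumPy, illustrative) verifying that the scalar two-point function is invariant when both fields are transformed as primaries:

```python
import numpy as np

def G2(x1, x2, delta):
    # Two-point function of a scalar primary: <O(x1) O(x2)> = |x12|^(-2*delta)
    return np.dot(x1 - x2, x1 - x2) ** (-delta)

def inversion(x):
    return x / np.dot(x, x)

rng = np.random.default_rng(1)
x1, x2, delta = rng.normal(size=3), rng.normal(size=3), 0.7

# The inversion has scale factor Omega(x) = 1/x^2, so with xi' = I(xi),
# invariance requires:
# Omega(x1')^(-delta) * Omega(x2')^(-delta) * G2(x1', x2') == G2(x1, x2)
x1p, x2p = inversion(x1), inversion(x2)
omega = lambda x: 1.0 / np.dot(x, x)
lhs = omega(x1p) ** (-delta) * omega(x2p) ** (-delta) * G2(x1p, x2p, delta)
assert np.isclose(lhs, G2(x1, x2, delta))
```

Since translations, rotations, dilations and the inversion together generate the whole conformal group, these two checks cover the general covariance property.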
In the case of a symmetric traceless tensor of rank ⁠ ℓ {\displaystyle \ell } ⁠, the two-point function is ⟨ O μ 1 , … , μ ℓ ( x 1 ) O ν 1 , … , ν ℓ ( x 2 ) ⟩ = ∏ i = 1 ℓ I μ i , ν i ( x 1 − x 2 ) − traces | x 1 − x 2 | 2 Δ , {\displaystyle \left\langle O_{\mu _{1},\dots ,\mu _{\ell }}(x_{1})O_{\nu _{1},\dots ,\nu _{\ell }}(x_{2})\right\rangle ={\frac {\prod _{i=1}^{\ell }I_{\mu _{i},\nu _{i}}(x_{1}-x_{2})-{\text{traces}}}{|x_{1}-x_{2}|^{2\Delta }}},} where the tensor I μ , ν ( x ) {\displaystyle I_{\mu ,\nu }(x)} is defined as I μ , ν ( x ) = η μ ν − 2 x μ x ν x 2 . {\displaystyle I_{\mu ,\nu }(x)=\eta _{\mu \nu }-{\frac {2x_{\mu }x_{\nu }}{x^{2}}}.} The three-point function of three scalar primary fields is ⟨ O 1 ( x 1 ) O 2 ( x 2 ) O 3 ( x 3 ) ⟩ = C 123 | x 12 | Δ 1 + Δ 2 − Δ 3 | x 13 | Δ 1 + Δ 3 − Δ 2 | x 23 | Δ 2 + Δ 3 − Δ 1 , {\displaystyle \left\langle O_{1}(x_{1})O_{2}(x_{2})O_{3}(x_{3})\right\rangle ={\frac {C_{123}}{|x_{12}|^{\Delta _{1}+\Delta _{2}-\Delta _{3}}|x_{13}|^{\Delta _{1}+\Delta _{3}-\Delta _{2}}|x_{23}|^{\Delta _{2}+\Delta _{3}-\Delta _{1}}}},} where ⁠ x i j = x i − x j {\displaystyle x_{ij}=x_{i}-x_{j}} ⁠, and C 123 {\displaystyle C_{123}} is a three-point structure constant. With primary fields that are not necessarily scalars, conformal symmetry allows a finite number of tensor structures, and there is a structure constant for each tensor structure. In the case of two scalar fields and a symmetric traceless tensor of rank ⁠ ℓ {\displaystyle \ell } ⁠, there is only one tensor structure, and the three-point function is ⟨ O 1 ( x 1 ) O 2 ( x 2 ) O μ 1 , … , μ ℓ ( x 3 ) ⟩ = C 123 ( ∏ i = 1 ℓ V μ i − traces ) | x 12 | Δ 1 + Δ 2 − Δ 3 | x 13 | Δ 1 + Δ 3 − Δ 2 | x 23 | Δ 2 + Δ 3 − Δ 1 , {\displaystyle \left\langle O_{1}(x_{1})O_{2}(x_{2})O_{\mu _{1},\dots ,\mu _{\ell }}(x_{3})\right\rangle ={\frac {C_{123}\left(\prod _{i=1}^{\ell }V_{\mu _{i}}-{\text{traces}}\right)}{|x_{12}|^{\Delta _{1}+\Delta _{2}-\Delta _{3}}|x_{13}|^{\Delta _{1}+\Delta _{3}-\Delta _{2}}|x_{23}|^{\Delta _{2}+\Delta _{3}-\Delta _{1}}}},} where we introduce the vector V μ = x 13 μ x 23 2 − x 23 μ x 13 2 | x 12 | | x 13 | | x 23 | . {\displaystyle V_{\mu }={\frac {x_{13}^{\mu }x_{23}^{2}-x_{23}^{\mu }x_{13}^{2}}{|x_{12}||x_{13}||x_{23}|}}.} Four-point functions of scalar primary fields are determined up to arbitrary functions g ( u , v ) {\displaystyle g(u,v)} of the two cross-ratios u = x 12 2 x 34 2 x 13 2 x 24 2 , v = x 14 2 x 23 2 x 13 2 x 24 2 . {\displaystyle u={\frac {x_{12}^{2}x_{34}^{2}}{x_{13}^{2}x_{24}^{2}}}\ ,\ v={\frac {x_{14}^{2}x_{23}^{2}}{x_{13}^{2}x_{24}^{2}}}.} The four-point function is then ⟨ ∏ i = 1 4 O i ( x i ) ⟩ = ( | x 24 | | x 14 | ) Δ 1 − Δ 2 ( | x 14 | | x 13 | ) Δ 3 − Δ 4 | x 12 | Δ 1 + Δ 2 | x 34 | Δ 3 + Δ 4 g ( u , v ) . {\displaystyle \left\langle \prod _{i=1}^{4}O_{i}(x_{i})\right\rangle ={\frac {\left({\frac {|x_{24}|}{|x_{14}|}}\right)^{\Delta _{1}-\Delta _{2}}\left({\frac {|x_{14}|}{|x_{13}|}}\right)^{\Delta _{3}-\Delta _{4}}}{|x_{12}|^{\Delta _{1}+\Delta _{2}}|x_{34}|^{\Delta _{3}+\Delta _{4}}}}g(u,v).} === Operator product expansion === The operator product expansion (OPE) is more powerful in conformal field theory than in more general quantum field theories. This is because in conformal field theory, the operator product expansion's radius of convergence is finite (i.e. it is not zero). 
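Before developing the OPE further, here is a quick numerical check (Python with NumPy, illustrative) that the cross-ratios u and v defined above are unchanged under a composition of a translation, a dilation and an inversion; this invariance is what allows four-point functions to be parametrized by a single function g(u, v):

```python
import numpy as np

def inversion(x):
    return x / np.dot(x, x)

def cross_ratios(xs):
    # u = x12^2 x34^2 / (x13^2 x24^2), v = x14^2 x23^2 / (x13^2 x24^2)
    s = lambda i, j: np.dot(xs[i] - xs[j], xs[i] - xs[j])
    return (s(0, 1) * s(2, 3) / (s(0, 2) * s(1, 3)),
            s(0, 3) * s(1, 2) / (s(0, 2) * s(1, 3)))

rng = np.random.default_rng(2)
xs = [rng.normal(size=3) for _ in range(4)]
a, lam = rng.normal(size=3), 1.7

# Apply the same conformal map (translation, then dilation, then inversion)
# to all four points; u and v must not change.
ys = [inversion(lam * (x + a)) for x in xs]
assert np.allclose(cross_ratios(xs), cross_ratios(ys))
```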
Provided the positions x 1 , x 2 {\displaystyle x_{1},x_{2}} of two fields are close enough, the operator product expansion rewrites the product of these two fields as a linear combination of fields at a given point, which can be chosen as x 2 {\displaystyle x_{2}} for technical convenience. The operator product expansion of two fields takes the form O 1 ( x 1 ) O 2 ( x 2 ) = ∑ k c 12 k ( x 1 − x 2 ) O k ( x 2 ) , {\displaystyle O_{1}(x_{1})O_{2}(x_{2})=\sum _{k}c_{12k}(x_{1}-x_{2})O_{k}(x_{2}),} where c 12 k ( x ) {\displaystyle c_{12k}(x)} is some coefficient function, and the sum in principle runs over all fields in the theory. (Equivalently, by the state-field correspondence, the sum runs over all states in the space of states.) Some fields may actually be absent, in particular due to constraints from symmetry: conformal symmetry, or extra symmetries. If all fields are primary or descendant, the sum over fields can be reduced to a sum over primaries, by rewriting the contributions of any descendant in terms of the contribution of the corresponding primary: O 1 ( x 1 ) O 2 ( x 2 ) = ∑ p C 12 p P p ( x 1 − x 2 , ∂ x 2 ) O p ( x 2 ) , {\displaystyle O_{1}(x_{1})O_{2}(x_{2})=\sum _{p}C_{12p}P_{p}(x_{1}-x_{2},\partial _{x_{2}})O_{p}(x_{2}),} where the fields O p {\displaystyle O_{p}} are all primary, and C 12 p {\displaystyle C_{12p}} is the three-point structure constant (which for this reason is also called OPE coefficient). The differential operator P p ( x 1 − x 2 , ∂ x 2 ) {\displaystyle P_{p}(x_{1}-x_{2},\partial _{x_{2}})} is an infinite series in derivatives, which is determined by conformal symmetry and therefore in principle known. Viewing the OPE as a relation between correlation functions shows that the OPE must be associative. Furthermore, if the space is Euclidean, the OPE must be commutative, because correlation functions do not depend on the order of the fields, i.e. ⁠ O 1 ( x 1 ) O 2 ( x 2 ) = O 2 ( x 2 ) O 1 ( x 1 ) {\displaystyle O_{1}(x_{1})O_{2}(x_{2})=O_{2}(x_{2})O_{1}(x_{1})} ⁠. The existence of the operator product expansion is a fundamental axiom of the conformal bootstrap. However, it is generally not necessary to compute operator product expansions and in particular the differential operators ⁠ P p ( x 1 − x 2 , ∂ x 2 ) {\displaystyle P_{p}(x_{1}-x_{2},\partial _{x_{2}})} ⁠. Rather, it is the decomposition of correlation functions into structure constants and conformal blocks that is needed. The OPE can in principle be used for computing conformal blocks, but in practice there are more efficient methods. === Conformal blocks and crossing symmetry === Using the OPE ⁠ O 1 ( x 1 ) O 2 ( x 2 ) {\displaystyle O_{1}(x_{1})O_{2}(x_{2})} ⁠, a four-point function can be written as a combination of three-point structure constants and s-channel conformal blocks, ⟨ ∏ i = 1 4 O i ( x i ) ⟩ = ∑ p C 12 p C p 34 G p ( s ) ( x i ) . {\displaystyle \left\langle \prod _{i=1}^{4}O_{i}(x_{i})\right\rangle =\sum _{p}C_{12p}C_{p34}G_{p}^{(s)}(x_{i}).} The conformal block G p ( s ) ( x i ) {\displaystyle G_{p}^{(s)}(x_{i})} is the sum of the contributions of the primary field O p {\displaystyle O_{p}} and its descendants. It depends on the fields O i {\displaystyle O_{i}} and their positions. 
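Explicit expressions for conformal blocks are not given here, but closed forms are known in even dimensions. As an illustration, the following sketch (Python with mpmath) evaluates the Dolan–Osborn closed form for the s-channel block of a primary of dimension Δ and spin ℓ in d = 4, valid for pairwise-equal external scalar dimensions; note that overall normalization conventions vary between references:

```python
import mpmath as mp

def k(beta, z):
    # SL(2) building block: k_beta(z) = z^(beta/2) 2F1(beta/2, beta/2; beta; z)
    return z ** (beta / 2) * mp.hyp2f1(beta / 2, beta / 2, beta, z)

def block_4d(delta, ell, z, zbar):
    # Dolan-Osborn closed form for the 4d s-channel conformal block
    # (pairwise-equal external dimensions; normalization convention-dependent).
    pref = z * zbar / (z - zbar)
    return pref * (k(delta + ell, z) * k(delta - ell - 2, zbar)
                   - k(delta + ell, zbar) * k(delta - ell - 2, z))

# The cross-ratios (u, v) are traded for (z, zbar) via
# u = z * zbar and v = (1 - z) * (1 - zbar).
print(block_4d(delta=5.0, ell=2, z=mp.mpf("0.3"), zbar=mp.mpf("0.2")))
```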
If the three-point functions ⟨ O 1 O 2 O p ⟩ {\displaystyle \left\langle O_{1}O_{2}O_{p}\right\rangle } or ⟨ O 3 O 4 O p ⟩ {\displaystyle \left\langle O_{3}O_{4}O_{p}\right\rangle } involve several independent tensor structures, the structure constants and conformal blocks depend on these tensor structures, and the primary field O p {\displaystyle O_{p}} contributes several independent blocks. Conformal blocks are determined by conformal symmetry, and known in principle. To compute them, there are recursion relations and integrable techniques. Using the OPE O 1 ( x 1 ) O 4 ( x 4 ) {\displaystyle O_{1}(x_{1})O_{4}(x_{4})} or ⁠ O 1 ( x 1 ) O 3 ( x 3 ) {\displaystyle O_{1}(x_{1})O_{3}(x_{3})} ⁠, the same four-point function is written in terms of t-channel conformal blocks or u-channel conformal blocks, ⟨ ∏ i = 1 4 O i ( x i ) ⟩ = ∑ p C 14 p C p 23 G p ( t ) ( x i ) = ∑ p C 13 p C p 24 G p ( u ) ( x i ) . {\displaystyle \left\langle \prod _{i=1}^{4}O_{i}(x_{i})\right\rangle =\sum _{p}C_{14p}C_{p23}G_{p}^{(t)}(x_{i})=\sum _{p}C_{13p}C_{p24}G_{p}^{(u)}(x_{i}).} The equality of the s-, t- and u-channel decompositions is called crossing symmetry: a constraint on the spectrum of primary fields, and on the three-point structure constants. Conformal blocks obey the same conformal symmetry constraints as four-point functions. In particular, s-channel conformal blocks can be written in terms of functions g p ( s ) ( u , v ) {\displaystyle g_{p}^{(s)}(u,v)} of the cross-ratios. While the OPE O 1 ( x 1 ) O 2 ( x 2 ) {\displaystyle O_{1}(x_{1})O_{2}(x_{2})} only converges if ⁠ | x 12 | < min ( | x 23 | , | x 24 | ) {\displaystyle \vert x_{12}\vert <\min(\vert x_{23}\vert ,\vert x_{24}\vert )} ⁠, conformal blocks can be analytically continued to all (non pairwise coinciding) values of the positions. In Euclidean space, conformal blocks are single-valued real-analytic functions of the positions except when the four points x i {\displaystyle x_{i}} lie on a circle but in a singly-transposed cyclic order [1324], and only in these exceptional cases does the decomposition into conformal blocks not converge. A conformal field theory in flat Euclidean space R d {\displaystyle \mathbb {R} ^{d}} is thus defined by its spectrum { ( Δ p , ρ p ) } {\displaystyle \{(\Delta _{p},\rho _{p})\}} and OPE coefficients (or three-point structure constants) ⁠ { C p p ′ p ″ } {\displaystyle \{C_{pp'p''}\}} ⁠, satisfying the constraint that all four-point functions are crossing-symmetric. From the spectrum and OPE coefficients (collectively referred to as the CFT data), correlation functions of arbitrary order can be computed. == Features == === Unitarity === A conformal field theory is unitary if its space of states has a positive definite scalar product such that the dilation operator is self-adjoint. Then the scalar product endows the space of states with the structure of a Hilbert space. In Euclidean conformal field theories, unitarity is equivalent to reflection positivity of correlation functions: one of the Osterwalder-Schrader axioms. Unitarity implies that the conformal dimensions of primary fields are real and bounded from below. The lower bound depends on the spacetime dimension ⁠ d {\displaystyle d} ⁠, and on the representation of the rotation or Lorentz group in which the primary field transforms. For scalar fields, the unitarity bound is Δ ≥ 1 2 ( d − 2 ) . 
{\displaystyle \Delta \geq {\frac {1}{2}}(d-2).} In a unitary theory, three-point structure constants must be real, which in turn implies that four-point functions obey certain inequalities. Powerful numerical bootstrap methods are based on exploiting these inequalities. === Compactness === A conformal field theory is compact if it obeys three conditions: All conformal dimensions are real. For any Δ ∈ R {\displaystyle \Delta \in \mathbb {R} } there are finitely many states whose dimensions are less than ⁠ Δ {\displaystyle \Delta } ⁠. There is a unique state with the dimension ⁠ Δ = 0 {\displaystyle \Delta =0} ⁠, and it is the vacuum state, i.e. the corresponding field is the identity field. (The identity field is the field whose insertion into correlation functions does not modify them, i.e. ⁠ ⟨ I ( x ) ⋯ ⟩ = ⟨ ⋯ ⟩ {\displaystyle \left\langle I(x)\cdots \right\rangle =\left\langle \cdots \right\rangle } ⁠.) The name comes from the fact that if a 2D conformal field theory is also a sigma model, it will satisfy these conditions if and only if its target space is compact. It is believed that all unitary conformal field theories are compact in dimension ⁠ d > 2 {\displaystyle d>2} ⁠. Without unitarity, on the other hand, it is possible to find CFTs in dimension four and in dimension 4 − ϵ {\displaystyle 4-\epsilon } that have a continuous spectrum. And in dimension two, Liouville theory is unitary but not compact. === Extra symmetries === A conformal field theory may have extra symmetries in addition to conformal symmetry. For example, the Ising model has a Z 2 {\displaystyle \mathbb {Z} _{2}} symmetry, and superconformal field theories have supersymmetry. == Examples == === Mean field theory === A generalized free field is a field whose correlation functions are deduced from its two-point function by Wick's theorem. For instance, if ϕ {\displaystyle \phi } is a scalar primary field of dimension ⁠ Δ {\displaystyle \Delta } ⁠, its four-point function reads ⟨ ∏ i = 1 4 ϕ ( x i ) ⟩ = 1 | x 12 | 2 Δ | x 34 | 2 Δ + 1 | x 13 | 2 Δ | x 24 | 2 Δ + 1 | x 14 | 2 Δ | x 23 | 2 Δ . {\displaystyle \left\langle \prod _{i=1}^{4}\phi (x_{i})\right\rangle ={\frac {1}{|x_{12}|^{2\Delta }|x_{34}|^{2\Delta }}}+{\frac {1}{|x_{13}|^{2\Delta }|x_{24}|^{2\Delta }}}+{\frac {1}{|x_{14}|^{2\Delta }|x_{23}|^{2\Delta }}}.} Similarly, if ϕ 1 , ϕ 2 {\displaystyle \phi _{1},\phi _{2}} are two scalar primary fields such that ⟨ ϕ 1 ϕ 2 ⟩ = 0 {\displaystyle \langle \phi _{1}\phi _{2}\rangle =0} (which is the case in particular if Δ 1 ≠ Δ 2 {\displaystyle \Delta _{1}\neq \Delta _{2}} ), we have the four-point function ⟨ ϕ 1 ( x 1 ) ϕ 1 ( x 2 ) ϕ 2 ( x 3 ) ϕ 2 ( x 4 ) ⟩ = 1 | x 12 | 2 Δ 1 | x 34 | 2 Δ 2 . {\displaystyle {\Big \langle }\phi _{1}(x_{1})\phi _{1}(x_{2})\phi _{2}(x_{3})\phi _{2}(x_{4}){\Big \rangle }={\frac {1}{|x_{12}|^{2\Delta _{1}}|x_{34}|^{2\Delta _{2}}}}.} Mean field theory is a generic name for conformal field theories that are built from generalized free fields. For example, a mean field theory can be built from one scalar primary field ⁠ ϕ {\displaystyle \phi } ⁠. Then this theory contains ⁠ ϕ {\displaystyle \phi } ⁠, its descendant fields, and the fields that appear in the OPE ϕ ϕ {\displaystyle \phi \phi } .
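The four-point functions above are just Wick's theorem in action: each is a sum of products of two-point functions over the pairings of the insertion points. A minimal sketch (Python with NumPy, purely illustrative) that builds the four-point function this way and verifies its crossing symmetry, i.e. invariance under relabelling the four points:

```python
import itertools
import numpy as np

def G2(x, y, delta):
    return np.dot(x - y, x - y) ** (-delta)

def mft_4pt(xs, delta):
    # Wick's theorem for a generalized free field: sum over the three
    # pairings (12)(34), (13)(24), (14)(23).
    pairings = [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2))]
    return sum(G2(xs[i], xs[j], delta) * G2(xs[k], xs[l], delta)
               for (i, j), (k, l) in pairings)

rng = np.random.default_rng(3)
xs = [rng.normal(size=3) for _ in range(4)]
val = mft_4pt(xs, delta=0.6)

# Crossing symmetry: the correlator is unchanged under any relabelling.
for perm in itertools.permutations(range(4)):
    assert np.isclose(mft_4pt([xs[p] for p in perm], delta=0.6), val)
```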
The primary fields that appear in ϕ ϕ {\displaystyle \phi \phi } can be determined by decomposing the four-point function ⟨ ϕ ϕ ϕ ϕ ⟩ {\displaystyle \langle \phi \phi \phi \phi \rangle } in conformal blocks: their conformal dimensions belong to 2 Δ + 2 N {\displaystyle 2\Delta +2\mathbb {N} } ; in mean field theory, the conformal dimension is conserved modulo integers. Structure constants can be computed exactly in terms of the Gamma function. Similarly, it is possible to construct mean field theories starting from a field with non-trivial Lorentz spin. For example, the 4d Maxwell theory (in the absence of charged matter fields) is a mean field theory built out of an antisymmetric tensor field F μ ν {\displaystyle F_{\mu \nu }} with scaling dimension ⁠ Δ = 2 {\displaystyle \Delta =2} ⁠. Mean field theories have a Lagrangian description in terms of a quadratic action involving the Laplacian raised to an arbitrary real power (which determines the scaling dimension of the field). For a generic scaling dimension, the power of the Laplacian is non-integer. The corresponding mean field theory is then non-local (e.g. it does not have a conserved stress tensor operator). === Critical Ising model === The critical Ising model is the critical point of the Ising model on a hypercubic lattice in two or three dimensions. It has a Z 2 {\displaystyle \mathbb {Z} _{2}} global symmetry, corresponding to flipping all spins. The two-dimensional critical Ising model includes the M ( 4 , 3 ) {\displaystyle {\mathcal {M}}(4,3)} Virasoro minimal model, which can be solved exactly. There is no Ising CFT in d ≥ 4 {\displaystyle d\geq 4} dimensions. === Critical Potts model === The critical Potts model with q = 2 , 3 , 4 , ⋯ {\displaystyle q=2,3,4,\cdots } colors is a unitary CFT that is invariant under the permutation group ⁠ S q {\displaystyle S_{q}} ⁠. It is a generalization of the critical Ising model, which corresponds to ⁠ q = 2 {\displaystyle q=2} ⁠. The critical Potts model exists in a range of dimensions depending on ⁠ q {\displaystyle q} ⁠. The critical Potts model may be constructed as the continuum limit of the Potts model on a d-dimensional hypercubic lattice. In the Fortuin-Kasteleyn reformulation in terms of clusters, the Potts model can be defined for ⁠ q ∈ C {\displaystyle q\in \mathbb {C} } ⁠, but it is not unitary if q {\displaystyle q} is not an integer. === Critical O(N) model === The critical O(N) model is a CFT invariant under the orthogonal group. For any integer ⁠ N {\displaystyle N} ⁠, it exists as an interacting, unitary and compact CFT in d = 3 {\displaystyle d=3} dimensions (and for N = 1 {\displaystyle N=1} also in two dimensions). It is a generalization of the critical Ising model, which corresponds to the O(N) CFT at ⁠ N = 1 {\displaystyle N=1} ⁠. The O(N) CFT can be constructed as the continuum limit of a lattice model with spins that are N-vectors, called the n-vector model. Alternatively, the critical O ( N ) {\displaystyle O(N)} model can be constructed as the ε → 1 {\displaystyle \varepsilon \to 1} limit of the Wilson–Fisher fixed point in d = 4 − ε {\displaystyle d=4-\varepsilon } dimensions. At ⁠ ε = 0 {\displaystyle \varepsilon =0} ⁠, the Wilson–Fisher fixed point becomes the tensor product of N {\displaystyle N} free scalars with dimension ⁠ Δ = 1 {\displaystyle \Delta =1} ⁠. For 0 < ε < 1 {\displaystyle 0<\varepsilon <1} the model in question is non-unitary.
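As a numerical illustration of this ε-expansion (the leading-order anomalous dimension used below is the standard Wilson–Fisher two-loop result, quoted here as an external input rather than derived in this article):

```python
def delta_phi_wilson_fisher(N, eps):
    # Leading-order epsilon-expansion estimate for the dimension of phi
    # at the Wilson-Fisher fixed point in d = 4 - eps dimensions:
    # Delta_phi = (d - 2)/2 + eta/2, eta = (N + 2) eps^2 / (2 (N + 8)^2).
    # (Standard leading result; higher orders are known but omitted here.)
    eta = (N + 2) * eps**2 / (2 * (N + 8) ** 2)
    return (2 - eps) / 2 + eta / 2

# Crude estimate for the 3d Ising model (N = 1, eps = 1): ~0.509,
# to be compared with the precise bootstrap value ~0.518.
print(delta_phi_wilson_fisher(N=1, eps=1))
```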
When N is large, the O(N) model can be solved perturbatively in a 1/N expansion by means of the Hubbard–Stratonovich transformation. In particular, the N → ∞ {\displaystyle N\to \infty } limit of the critical O(N) model is well-understood. The conformal data of the critical O(N) model are functions of N and of the dimension, on which many results are known. === Conformal gauge theories === Some conformal field theories in three and four dimensions admit a Lagrangian description in the form of a gauge theory, either abelian or non-abelian. Examples of such CFTs are conformal QED with sufficiently many charged fields in d = 3 {\displaystyle d=3} or the Banks-Zaks fixed point in ⁠ d = 4 {\displaystyle d=4} ⁠. == Applications == === Continuous phase transitions === Continuous phase transitions (critical points) of classical statistical physics systems with D spatial dimensions are often described by Euclidean conformal field theories. A necessary condition for this to happen is that the critical point should be invariant under spatial rotations and translations. However, this condition is not sufficient: some exceptional critical points are described by scale invariant but not conformally invariant theories. If the classical statistical physics system is reflection positive, the corresponding Euclidean CFT describing its critical point will be unitary. Continuous quantum phase transitions in condensed matter systems with D spatial dimensions may be described by Lorentzian (D+1)-dimensional conformal field theories (related by Wick rotation to Euclidean CFTs in D + 1 dimensions). Apart from translation and rotation invariance, an additional necessary condition for this to happen is that the dynamical critical exponent z should be equal to 1. CFTs describing such quantum phase transitions (in the absence of quenched disorder) are always unitary. === String theory === The world-sheet description of string theory involves a two-dimensional CFT coupled to dynamical two-dimensional quantum gravity (or supergravity, in the case of superstring theory). Consistency of string theory models imposes constraints on the central charge of this CFT, which should be c = 26 in bosonic string theory and c = 15 in superstring theory. Coordinates of the spacetime in which string theory lives correspond to bosonic fields of this CFT. === AdS/CFT correspondence === Conformal field theories play a prominent role in the AdS/CFT correspondence, in which a gravitational theory in anti-de Sitter space (AdS) is equivalent to a conformal field theory on the AdS boundary. Notable examples are d = 4, N = 4 supersymmetric Yang–Mills theory, which is dual to Type IIB string theory on AdS5 × S5, and d = 3, N = 6 super-Chern–Simons theory, which is dual to M-theory on AdS4 × S7. (The prefix "super" denotes supersymmetry, N denotes the degree of extended supersymmetry possessed by the theory, and d the number of space-time dimensions on the boundary.) === Conformal perturbation theory === By perturbing a conformal field theory, it is possible to construct other field theories, conformal or not. Their correlation functions can be computed perturbatively from the correlation functions of the original CFT, by a technique called conformal perturbation theory. For example, a type of perturbation consists of discretizing a conformal field theory by studying it on a discrete spacetime. The resulting finite-size effects can be computed using conformal perturbation theory. == See also == == References == == Further reading == Rychkov, Slava (2016).
"EPFL Lectures on Conformal Field Theory in D ≥ 3 Dimensions". SpringerBriefs in Physics. arXiv:1601.05000. doi:10.1007/978-3-319-43626-5. ISBN 978-3-319-43625-8. S2CID 119192484. Martin Schottenloher, A Mathematical Introduction to Conformal Field Theory, Springer-Verlag, Berlin, Heidelberg, 1997. ISBN 3-540-61753-1, 2nd edition 2008, ISBN 978-3-540-68625-5. == External links == Media related to Conformal field theory at Wikimedia Commons
Wikipedia/Conformal_Field_Theory
In topology and related branches of mathematics, separated sets are pairs of subsets of a given topological space that are related to each other in a certain way: roughly speaking, neither overlapping nor touching. The notion of when two sets are separated or not is important both to the notion of connected spaces (and their connected components) as well as to the separation axioms for topological spaces. Separated sets should not be confused with separated spaces (defined below), which are somewhat related but different. Separable spaces are again a completely different topological concept. == Definitions == There are various ways in which two subsets A {\displaystyle A} and B {\displaystyle B} of a topological space X {\displaystyle X} can be considered to be separated. A most basic way in which two sets can be separated is if they are disjoint, that is, if their intersection is the empty set. This property has nothing to do with topology as such, but only set theory. Each of the following properties is stricter than disjointness, incorporating some topological information. The properties below are presented in increasing order of specificity, each being a stronger notion than the preceding one. The sets A {\displaystyle A} and B {\displaystyle B} are separated in X {\displaystyle X} if each is disjoint from the other's closure: A ∩ B ¯ = ∅ = A ¯ ∩ B . {\displaystyle A\cap {\bar {B}}=\varnothing ={\bar {A}}\cap B.} This property is known as the Hausdorff−Lennes Separation Condition. Since every set is contained in its closure, two separated sets automatically must be disjoint. The closures themselves do not have to be disjoint from each other; for example, the intervals [ 0 , 1 ) {\displaystyle [0,1)} and ( 1 , 2 ] {\displaystyle (1,2]} are separated in the real line R , {\displaystyle \mathbb {R} ,} even though the point 1 belongs to both of their closures. A more general example is that in any metric space, two open balls B r ( p ) = { x ∈ X : d ( p , x ) < r } {\displaystyle B_{r}(p)=\{x\in X:d(p,x)<r\}} and B s ( q ) = { x ∈ X : d ( q , x ) < s } {\displaystyle B_{s}(q)=\{x\in X:d(q,x)<s\}} are separated whenever d ( p , q ) ≥ r + s . {\displaystyle d(p,q)\geq r+s.} The property of being separated can also be expressed in terms of derived set (indicated by the prime symbol): A {\displaystyle A} and B {\displaystyle B} are separated when they are disjoint and each is disjoint from the other's derived set, that is, A ′ ∩ B = ∅ = B ′ ∩ A . {\textstyle A'\cap B=\varnothing =B'\cap A.} (As in the case of the first version of the definition, the derived sets A ′ {\displaystyle A'} and B ′ {\displaystyle B'} are not required to be disjoint from each other.) The sets A {\displaystyle A} and B {\displaystyle B} are separated by neighbourhoods if there are neighbourhoods U {\displaystyle U} of A {\displaystyle A} and V {\displaystyle V} of B {\displaystyle B} such that U {\displaystyle U} and V {\displaystyle V} are disjoint. (Sometimes you will see the requirement that U {\displaystyle U} and V {\displaystyle V} be open neighbourhoods, but this makes no difference in the end.) For the example of A = [ 0 , 1 ) {\displaystyle A=[0,1)} and B = ( 1 , 2 ] , {\displaystyle B=(1,2],} you could take U = ( − 1 , 1 ) {\displaystyle U=(-1,1)} and V = ( 1 , 3 ) . {\displaystyle V=(1,3).} Note that if any two sets are separated by neighbourhoods, then certainly they are separated. 
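For sets built from intervals of the real line, these first conditions can be tested mechanically. The following sketch (Python; an illustration in which an interval is encoded as a pair of endpoints with closedness flags) checks the Hausdorff−Lennes condition for the running example A = [0, 1) and B = (1, 2]:

```python
def closure(interval):
    # Closure of a nonempty real interval: include both endpoints.
    a, b, _, _ = interval
    return (a, b, True, True)

def intersects(i1, i2):
    # Do two intervals (a, b, a_closed, b_closed) overlap?
    (a1, b1, ac1, bc1), (a2, b2, ac2, bc2) = i1, i2
    if b1 < a2 or b2 < a1:
        return False
    if b1 == a2:
        return bc1 and ac2
    if b2 == a1:
        return bc2 and ac1
    return True

def separated(i1, i2):
    # Hausdorff-Lennes: A disjoint from cl(B), and B disjoint from cl(A).
    return not intersects(i1, closure(i2)) and not intersects(i2, closure(i1))

A = (0, 1, True, False)   # [0, 1)
B = (1, 2, False, True)   # (1, 2]
print(separated(A, B))                     # True: A and B are separated...
print(intersects(closure(A), closure(B)))  # ...though the closures meet at 1
```

The output confirms that A and B are separated even though both closures contain the point 1.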
If A {\displaystyle A} and B {\displaystyle B} are open and disjoint, then they must be separated by neighbourhoods; just take U = A {\displaystyle U=A} and V = B . {\displaystyle V=B.} For this reason, separatedness is often used with closed sets (as in the normal separation axiom). The sets A {\displaystyle A} and B {\displaystyle B} are separated by closed neighbourhoods if there is a closed neighbourhood U {\displaystyle U} of A {\displaystyle A} and a closed neighbourhood V {\displaystyle V} of B {\displaystyle B} such that U {\displaystyle U} and V {\displaystyle V} are disjoint. Our examples, [ 0 , 1 ) {\displaystyle [0,1)} and ( 1 , 2 ] , {\displaystyle (1,2],} are not separated by closed neighbourhoods. You could make either U {\displaystyle U} or V {\displaystyle V} closed by including the point 1 in it, but you cannot make them both closed while keeping them disjoint. Note that if any two sets are separated by closed neighbourhoods, then certainly they are separated by neighbourhoods. The sets A {\displaystyle A} and B {\displaystyle B} are separated by a continuous function if there exists a continuous function f : X → R {\displaystyle f:X\to \mathbb {R} } from the space X {\displaystyle X} to the real line R {\displaystyle \mathbb {R} } such that A ⊆ f − 1 ( 0 ) {\displaystyle A\subseteq f^{-1}(0)} and B ⊆ f − 1 ( 1 ) {\displaystyle B\subseteq f^{-1}(1)} , that is, members of A {\displaystyle A} map to 0 and members of B {\displaystyle B} map to 1. (Sometimes the unit interval [ 0 , 1 ] {\displaystyle [0,1]} is used in place of R {\displaystyle \mathbb {R} } in this definition, but this makes no difference.) In our example, [ 0 , 1 ) {\displaystyle [0,1)} and ( 1 , 2 ] {\displaystyle (1,2]} are not separated by a function, because there is no way to continuously define f {\displaystyle f} at the point 1. If two sets are separated by a continuous function, then they are also separated by closed neighbourhoods; the neighbourhoods can be given in terms of the preimage of f {\displaystyle f} as U = f − 1 [ − c , c ] {\displaystyle U=f^{-1}[-c,c]} and V = f − 1 [ 1 − c , 1 + c ] , {\displaystyle V=f^{-1}[1-c,1+c],} where c {\displaystyle c} is any positive real number less than 1 / 2. {\displaystyle 1/2.} The sets A {\displaystyle A} and B {\displaystyle B} are precisely separated by a continuous function if there exists a continuous function f : X → R {\displaystyle f:X\to \mathbb {R} } such that A = f − 1 ( 0 ) {\displaystyle A=f^{-1}(0)} and B = f − 1 ( 1 ) . {\displaystyle B=f^{-1}(1).} (Again, you may also see the unit interval in place of R , {\displaystyle \mathbb {R} ,} and again it makes no difference.) Note that if any two sets are precisely separated by a function, then they are separated by a function. Since { 0 } {\displaystyle \{0\}} and { 1 } {\displaystyle \{1\}} are closed in R , {\displaystyle \mathbb {R} ,} only closed sets are capable of being precisely separated by a function, but just because two sets are closed and separated by a function does not mean that they are automatically precisely separated by a function (even a different function). == Relation to separation axioms and separated spaces == The separation axioms are various conditions that are sometimes imposed upon topological spaces, many of which can be described in terms of the various types of separated sets. As an example we will define the T2 axiom, which is the condition imposed on separated spaces. 
Specifically, a topological space is separated if, given any two distinct points x and y, the singleton sets {x} and {y} are separated by neighbourhoods. Separated spaces are usually called Hausdorff spaces or T2 spaces. == Relation to connected spaces == Given a topological space X, it is sometimes useful to consider whether it is possible for a subset A to be separated from its complement. This is certainly true if A is either the empty set or the entire space X, but there may be other possibilities. A topological space X is connected if these are the only two possibilities. Conversely, if a nonempty subset A is separated from its own complement, and if the only subset of A to share this property is the empty set, then A is an open-connected component of X. (In the degenerate case where X is itself the empty set ∅ {\displaystyle \emptyset } , authorities differ on whether ∅ {\displaystyle \emptyset } is connected and whether ∅ {\displaystyle \emptyset } is an open-connected component of itself.) == Relation to topologically distinguishable points == Given a topological space X, two points x and y are topologically distinguishable if there exists an open set that one point belongs to but the other point does not. If x and y are topologically distinguishable, then the singleton sets {x} and {y} must be disjoint. On the other hand, if the singletons {x} and {y} are separated, then the points x and y must be topologically distinguishable. Thus for singletons, topological distinguishability is a condition in between disjointness and separatedness. == See also == Hausdorff space – Type of topological space Locally Hausdorff space – Space such that every point has a Hausdorff neighborhood Separation axiom – Axioms in topology defining notions of "separation" == Citations == == Sources ==
Wikipedia/Separated_by_neighbourhoods
In topology and related areas of mathematics, a neighbourhood (or neighborhood) is one of the basic concepts in a topological space. It is closely related to the concepts of open set and interior. Intuitively speaking, a neighbourhood of a point is a set of points containing that point where one can move some amount in any direction away from that point without leaving the set. == Definitions == === Neighbourhood of a point === If X {\displaystyle X} is a topological space and p {\displaystyle p} is a point in X , {\displaystyle X,} then a neighbourhood of p {\displaystyle p} is a subset V {\displaystyle V} of X {\displaystyle X} that includes an open set U {\displaystyle U} containing p {\displaystyle p} , p ∈ U ⊆ V ⊆ X . {\displaystyle p\in U\subseteq V\subseteq X.} This is equivalent to the point p ∈ X {\displaystyle p\in X} belonging to the topological interior of V {\displaystyle V} in X . {\displaystyle X.} The neighbourhood V {\displaystyle V} need not be an open subset of X . {\displaystyle X.} When V {\displaystyle V} is open (resp. closed, compact, etc.) in X , {\displaystyle X,} it is called an open neighbourhood (resp. closed neighbourhood, compact neighbourhood, etc.). Some authors require neighbourhoods to be open, so it is important to note their conventions. A set that is a neighbourhood of each of its points is open since it can be expressed as the union of open sets containing each of its points. A closed rectangle, as illustrated in the figure, is not a neighbourhood of all its points; points on the edges or corners of the rectangle are not contained in any open set that is contained within the rectangle. The collection of all neighbourhoods of a point is called the neighbourhood system at the point. === Neighbourhood of a set === If S {\displaystyle S} is a subset of a topological space X {\displaystyle X} , then a neighbourhood of S {\displaystyle S} is a set V {\displaystyle V} that includes an open set U {\displaystyle U} containing S {\displaystyle S} , S ⊆ U ⊆ V ⊆ X . {\displaystyle S\subseteq U\subseteq V\subseteq X.} It follows that a set V {\displaystyle V} is a neighbourhood of S {\displaystyle S} if and only if it is a neighbourhood of all the points in S . {\displaystyle S.} Furthermore, V {\displaystyle V} is a neighbourhood of S {\displaystyle S} if and only if S {\displaystyle S} is a subset of the interior of V . {\displaystyle V.} A neighbourhood of S {\displaystyle S} that is also an open subset of X {\displaystyle X} is called an open neighbourhood of S . {\displaystyle S.} The neighbourhood of a point is just a special case of this definition. == In a metric space == In a metric space M = ( X , d ) , {\displaystyle M=(X,d),} a set V {\displaystyle V} is a neighbourhood of a point p {\displaystyle p} if there exists an open ball with center p {\displaystyle p} and radius r > 0 , {\displaystyle r>0,} such that B r ( p ) = B ( p ; r ) = { x ∈ X : d ( x , p ) < r } {\displaystyle B_{r}(p)=B(p;r)=\{x\in X:d(x,p)<r\}} is contained in V . {\displaystyle V.} V {\displaystyle V} is called a uniform neighbourhood of a set S {\displaystyle S} if there exists a positive number r {\displaystyle r} such that for all elements p {\displaystyle p} of S , {\displaystyle S,} B r ( p ) = { x ∈ X : d ( x , p ) < r } {\displaystyle B_{r}(p)=\{x\in X:d(x,p)<r\}} is contained in V . 
{\displaystyle V.} Under the same condition, for r > 0 , {\displaystyle r>0,} the r {\displaystyle r} -neighbourhood S r {\displaystyle S_{r}} of a set S {\displaystyle S} is the set of all points in X {\displaystyle X} that are at distance less than r {\displaystyle r} from S {\displaystyle S} (or equivalently, S r {\displaystyle S_{r}} is the union of all the open balls of radius r {\displaystyle r} that are centered at a point in S {\displaystyle S} ): S r = ⋃ p ∈ S B r ( p ) . {\displaystyle S_{r}=\bigcup \limits _{p\in {}S}B_{r}(p).} It directly follows that an r {\displaystyle r} -neighbourhood is a uniform neighbourhood, and that a set is a uniform neighbourhood if and only if it contains an r {\displaystyle r} -neighbourhood for some value of r . {\displaystyle r.} == Examples == Given the set of real numbers R {\displaystyle \mathbb {R} } with the usual Euclidean metric and a subset V {\displaystyle V} defined as V := ⋃ n ∈ N B ( n ; 1 / n ) , {\displaystyle V:=\bigcup _{n\in \mathbb {N} }B\left(n\,;\,1/n\right),} then V {\displaystyle V} is a neighbourhood for the set N {\displaystyle \mathbb {N} } of natural numbers, but is not a uniform neighbourhood of this set. == Topology from neighbourhoods == The above definition is useful if the notion of open set is already defined. There is an alternative way to define a topology, by first defining the neighbourhood system, and then open sets as those sets containing a neighbourhood of each of their points. A neighbourhood system on X {\displaystyle X} is the assignment of a filter N ( x ) {\displaystyle N(x)} of subsets of X {\displaystyle X} to each x {\displaystyle x} in X , {\displaystyle X,} such that: the point x {\displaystyle x} is an element of each U {\displaystyle U} in N ( x ) {\displaystyle N(x)} ; and each U {\displaystyle U} in N ( x ) {\displaystyle N(x)} contains some V {\displaystyle V} in N ( x ) {\displaystyle N(x)} such that for each y {\displaystyle y} in V , {\displaystyle V,} U {\displaystyle U} is in N ( y ) . {\displaystyle N(y).} One can show that both definitions are compatible, that is, the topology obtained from the neighbourhood system defined using open sets is the original one, and vice versa when starting out from a neighbourhood system. == Uniform neighbourhoods == In a uniform space S = ( X , Φ ) , {\displaystyle S=(X,\Phi ),} V {\displaystyle V} is called a uniform neighbourhood of P {\displaystyle P} if there exists an entourage U ∈ Φ {\displaystyle U\in \Phi } such that V {\displaystyle V} contains all points of X {\displaystyle X} that are U {\displaystyle U} -close to some point of P ; {\displaystyle P;} that is, U [ x ] ⊆ V {\displaystyle U[x]\subseteq V} for all x ∈ P . {\displaystyle x\in P.} == Deleted neighbourhood == A deleted neighbourhood of a point p {\displaystyle p} (sometimes called a punctured neighbourhood) is a neighbourhood of p , {\displaystyle p,} without { p } . {\displaystyle \{p\}.} For instance, the interval ( − 1 , 1 ) = { y : − 1 < y < 1 } {\displaystyle (-1,1)=\{y:-1<y<1\}} is a neighbourhood of p = 0 {\displaystyle p=0} in the real line, so the set ( − 1 , 0 ) ∪ ( 0 , 1 ) = ( − 1 , 1 ) ∖ { 0 } {\displaystyle (-1,0)\cup (0,1)=(-1,1)\setminus \{0\}} is a deleted neighbourhood of 0. {\displaystyle 0.} A deleted neighbourhood of a given point is not in fact a neighbourhood of the point. The concept of deleted neighbourhood occurs in the definition of the limit of a function and in the definition of limit points (among other things).
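The example V = ⋃ₙ B(n; 1/n) above can also be probed numerically. In the sketch below (Python, illustrative), the largest radius r with B(n; r) ⊆ V is exactly 1/n (for each n, the neighbouring balls are too far away and too small to help near n), so every point of N has a ball inside V, but the admissible radii shrink to 0 and no single r > 0 works uniformly:

```python
def radius_inside_V(n):
    # Largest radius r such that B(n, r) is contained in
    # V = union over m of B(m, 1/m): here simply r = 1/n, since the ball
    # contributed around n is B(n, 1/n) and, for radii this small, the
    # neighbouring balls (radius < 1/2, centered one unit away) do not help.
    return 1.0 / n

# V is a neighbourhood of N: each point has a positive admissible radius...
assert all(radius_inside_V(n) > 0 for n in range(1, 10**6, 12345))
# ...but not a uniform neighbourhood: the radii get arbitrarily small.
print(min(radius_inside_V(n) for n in range(1, 10**6, 12345)))  # ~1e-06
```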
== See also == Isolated point – Point of a subset S around which there are no other points of S Neighbourhood system – Concept in mathematics Region (mathematics) – Connected open subset of a topological space Tubular neighbourhood – neighborhood of a submanifold homeomorphic to that submanifold’s normal bundle == Notes == == References == Bredon, Glen E. (1993). Topology and geometry. New York: Springer-Verlag. ISBN 0-387-97926-3. Engelking, Ryszard (1989). General Topology. Heldermann Verlag, Berlin. ISBN 3-88538-006-4. Kaplansky, Irving (2001). Set Theory and Metric Spaces. American Mathematical Society. ISBN 0-8218-2694-8. Kelley, John L. (1975). General topology. New York: Springer-Verlag. ISBN 0-387-90125-6. Willard, Stephen (2004) [1970]. General Topology. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-43479-7. OCLC 115240.
Wikipedia/Neighbourhood_(topology)
In non-technical terms, M-theory presents an idea about the basic substance of the universe. Although a complete mathematical formulation of M-theory is not known, the general approach is the leading contender for a universal "Theory of Everything" that unifies gravity with other forces such as electromagnetism. M-theory aims to unify quantum mechanics with general relativity's gravitational force in a mathematically consistent way. In comparison, other theories such as loop quantum gravity are considered by physicists and researchers to be less elegant, because they posit gravity to be completely different from forces such as the electromagnetic force. == Background == In the early years of the 20th century, the atom – long believed to be the smallest building-block of matter – was proven to consist of even smaller components called protons, neutrons and electrons, which are known as subatomic particles. Beginning in the 1930s, many other subatomic particles were discovered. In the 1970s, it was discovered that protons and neutrons (and other hadrons) are themselves made up of smaller particles called quarks. The Standard Model is the set of rules that describes the interactions of these particles. In the 1980s, a new mathematical model of theoretical physics, called string theory, emerged. It showed how all the different subatomic particles known to science could be constructed from hypothetical one-dimensional "strings", infinitesimal building-blocks that have only the dimension of length, but not height or width. These strings vibrate in multiple dimensions and, depending on how they vibrate, they might be seen in three-dimensional space as matter, light or gravity. In string theory, every form of matter is said to be the result of the vibration of strings. However, for string theory to be mathematically consistent, the strings must live in a universe with ten dimensions. String theory explains why we perceive the universe as having four dimensions (three space dimensions and one time dimension) by positing that the extra six dimensions are "curled up" so small that they cannot be observed in everyday life. The technical term for this is compactification. These dimensions are usually made to take the shape of mathematical objects called Calabi–Yau manifolds. Five major string theories were developed and found to be mathematically consistent with the principle of all matter being made of strings. Having five different versions of string theory was seen as a puzzle. Speaking at the Strings '95 conference at the University of Southern California, Edward Witten of the Institute for Advanced Study suggested that the five different versions of string theory might be describing the same thing seen from different perspectives. He proposed a unifying theory called "M-theory", which brought all of the string theories together. It did this by asserting that strings are an approximation of curled-up two-dimensional membranes vibrating in an 11-dimensional spacetime. According to Witten, the M could stand for "magic", "mystery", or "membrane" according to taste, and the true meaning of the title should be decided when a better understanding of the theory is discovered. == Status == M-theory is not complete, and the mathematics of the approach are not yet well understood. Like all other proposed theories of quantum gravity, M-theory has not yet gained experimental evidence that would confirm its validity.
It also does not single out our observable universe as being special, and so does not aim to predict from first principles everything we can measure about it. Nevertheless, some physicists are drawn to M-theory because of its degree of uniqueness and rich set of mathematical properties, triggering the hope that it may describe our world within a single framework. One feature of M-theory that has drawn great interest is that it naturally predicts the existence of the graviton, a spin-2 particle hypothesized to mediate the gravitational force. Furthermore, M-theory naturally predicts a phenomenon that resembles black hole evaporation. Competing unification theories such as asymptotically safe gravity, E8 theory, noncommutative geometry, and causal fermion systems have not yet demonstrated a comparable level of mathematical development and consistency. == See also == History of string theory == References == == Further reading == Greene, B. (1999). The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory. W.W. Norton. ISBN 978-0-375-70811-4. Greene, B. (2004). The Fabric of the Cosmos: Space, Time, and the Texture of Reality. Alfred A. Knopf. Bibcode:2004fcst.book.....G. ISBN 978-0-375-41288-2. Miemiec, A.; Schnakenburg, I. (2006). "Basics of M-theory". Fortschritte der Physik. 54 (1): 5–72. arXiv:hep-th/0509137. Bibcode:2006ForPh..54....5M. doi:10.1002/prop.200510256. S2CID 98007313. Musser, G. (2008). The Complete Idiot's Guide to String Theory. Alpha Books. ISBN 978-1-59257-702-6. Smolin, L. (2006). The Trouble with Physics. Houghton Mifflin. ISBN 978-0-618-55105-7. Woit, P. (2006). Not Even Wrong: The Failure of String Theory and the Continuing Challenge to Unify the Laws of Physics. Basic Books. ISBN 978-0-465-09275-8. == External links == The Elegant Universe – A three-hour miniseries with Brian Greene by NOVA (original PBS Broadcast Dates: October 28 and November 4, 2003). Various images, texts, videos and animations explaining string theory and M-theory.
Wikipedia/Introduction_to_M-theory
In physics, the fundamental interactions or fundamental forces are interactions in nature that appear not to be reducible to more basic interactions. There are four fundamental interactions known to exist: gravity, electromagnetism, the weak interaction, and the strong interaction. The gravitational and electromagnetic interactions produce long-range forces whose effects can be seen directly in everyday life. The strong and weak interactions produce forces at subatomic scales and govern nuclear interactions inside atoms. Some scientists hypothesize that a fifth force might exist, but these hypotheses remain speculative. Each of the known fundamental interactions can be described mathematically as a field. The gravitational interaction is attributed to the curvature of spacetime, described by Einstein's general theory of relativity. The other three are discrete quantum fields, and their interactions are mediated by elementary particles described by the Standard Model of particle physics. Within the Standard Model, the strong interaction is carried by a particle called the gluon and is responsible for quarks binding together to form hadrons, such as protons and neutrons. As a residual effect, it creates the nuclear force that binds the latter particles to form atomic nuclei. The weak interaction is carried by particles called W and Z bosons, and also acts on the nucleus of atoms, mediating radioactive decay. The electromagnetic force, carried by the photon, creates electric and magnetic fields, which are responsible for the attraction between orbital electrons and atomic nuclei which holds atoms together, as well as chemical bonding and electromagnetic waves, including visible light, and forms the basis for electrical technology. Although the electromagnetic force is far stronger than gravity, it tends to cancel itself out within large objects, so over large (astronomical) distances gravity tends to be the dominant force, and is responsible for holding together the large-scale structures in the universe, such as planets, stars, and galaxies. The historical success of models that show relationships between fundamental interactions has led to efforts to go beyond the Standard Model and combine all four forces into a theory of everything. == History == === Classical theory === In his 1687 theory, Isaac Newton postulated space as an infinite and unalterable physical structure existing before, within, and around all objects, while their states and relations unfold at a constant pace everywhere; thus, absolute space and time. Reasoning that massive objects approach one another at the same rate regardless of their masses, yet collide with an impact proportional to those masses, Newton inferred that matter exhibits an attractive force. His law of universal gravitation implied there to be instant interaction among all objects. As conventionally interpreted, Newton's theory of motion modelled a central force without a communicating medium. Thus Newton's theory violated the tradition, going back to Descartes, that there should be no action at a distance. Conversely, during the 1820s, when explaining magnetism, Michael Faraday inferred a field filling space and transmitting that force. Faraday conjectured that ultimately, all forces unified into one. In 1873, James Clerk Maxwell unified electricity and magnetism as effects of an electromagnetic field whose third consequence was light, travelling at constant speed in vacuum.
If his electromagnetic field theory held true in all inertial frames of reference, this would contradict Newton's theory of motion, which relied on Galilean relativity. If, instead, his field theory only applied to reference frames at rest relative to a mechanical luminiferous aether—presumed to fill all space whether within matter or in vacuum and to manifest the electromagnetic field—then it could be reconciled with Galilean relativity and Newton's laws. (However, such a "Maxwell aether" was later disproven; Newton's laws did, in fact, have to be replaced.) === Standard Model === The Standard Model of particle physics was developed throughout the latter half of the 20th century. In the Standard Model, the electromagnetic, strong, and weak interactions associate with elementary particles, whose behaviours are modelled in quantum mechanics (QM). For predictive success with QM's probabilistic outcomes, particle physics conventionally models QM events across a field set to special relativity, altogether relativistic quantum field theory (QFT). Force particles, called gauge bosons—force carriers or messenger particles of underlying fields—interact with matter particles, called fermions. Everyday matter is made of atoms, which are composed of three fermion types: up quarks and down quarks, which constitute the nucleus, and electrons, which orbit it. Atoms interact, form molecules, and manifest further properties through electromagnetic interactions among their electrons absorbing and emitting photons, the electromagnetic field's force carrier, which if unimpeded traverse potentially infinite distance. Electromagnetism's QFT is quantum electrodynamics (QED). The force carriers of the weak interaction are the massive W and Z bosons. Electroweak theory (EWT) covers both electromagnetism and the weak interaction. At the high temperatures shortly after the Big Bang, the weak interaction, the electromagnetic interaction, and the Higgs boson were originally mixed components of a different set of ancient pre-symmetry-breaking fields. As the early universe cooled, these fields split into the long-range electromagnetic interaction, the short-range weak interaction, and the Higgs boson. In the Higgs mechanism, the Higgs field manifests Higgs bosons that interact with some quantum particles in a way that endows those particles with mass. The strong interaction, whose force carrier is the gluon, traversing minuscule distance among quarks, is modeled in quantum chromodynamics (QCD). EWT, QCD, and the Higgs mechanism comprise particle physics' Standard Model (SM). Predictions are usually made using calculational approximation methods, although such perturbation theory is inadequate to model some experimental observations (for instance bound states and solitons). Still, physicists widely accept the Standard Model as science's most experimentally confirmed theory. Beyond the Standard Model, some theorists work to unite the electroweak and strong interactions within a Grand Unified Theory (GUT). Some attempts at GUTs hypothesize "shadow" particles, such that every known matter particle associates with an undiscovered force particle, and vice versa, altogether supersymmetry (SUSY). Other theorists seek to quantize the gravitational field by modelling the behaviour of its hypothetical force carrier, the graviton, and so achieve quantum gravity (QG). One approach to QG is loop quantum gravity (LQG).
Still other theorists seek both QG and GUT within one framework, reducing all four fundamental interactions to a Theory of Everything (ToE). The most prevalent aim at a ToE is string theory, although to model matter particles, it added SUSY to force particles—and so, strictly speaking, became superstring theory. Multiple, seemingly disparate superstring theories were unified on a backbone, M-theory. Theories beyond the Standard Model remain highly speculative, lacking great experimental support. == Overview of the fundamental interactions == In the conceptual model of fundamental interactions, matter consists of fermions, which carry properties called charges and spin ±1⁄2 (intrinsic angular momentum ±ħ⁄2, where ħ is the reduced Planck constant). They attract or repel each other by exchanging bosons. The interaction of any pair of fermions in perturbation theory can then be modelled thus: Two fermions go in → interaction by boson exchange → two changed fermions go out. The exchange of bosons always carries energy and momentum between the fermions, thereby changing their speed and direction. The exchange may also transport a charge between the fermions, changing the charges of the fermions in the process (e.g., turn them from one type of fermion to another). Since bosons carry one unit of angular momentum, the fermion's spin direction will flip from +1⁄2 to −1⁄2 (or vice versa) during such an exchange (in units of the reduced Planck constant). Since such interactions result in a change in momentum, they can give rise to classical Newtonian forces. In quantum mechanics, physicists often use the terms "force" and "interaction" interchangeably; for example, the weak interaction is sometimes referred to as the "weak force". According to the present understanding, there are four fundamental interactions or forces: gravitation, electromagnetism, the weak interaction, and the strong interaction. Their magnitude and behaviour vary greatly, as described in the table below. Modern physics attempts to explain every observed physical phenomenon by these fundamental interactions. Moreover, reducing the number of different interaction types is seen as desirable. Two cases in point are the unification of: Electric and magnetic force into electromagnetism; The electromagnetic interaction and the weak interaction into the electroweak interaction; see below. Both magnitude ("relative strength") and "range" of the associated potential, as given in the table, are meaningful only within a rather complex theoretical framework. The table below lists properties of a conceptual scheme that remains the subject of ongoing research. The modern (perturbative) quantum mechanical view of the fundamental forces other than gravity is that particles of matter (fermions) do not directly interact with each other, but rather carry a charge, and exchange virtual particles (gauge bosons), which are the interaction carriers or force mediators. For example, photons mediate the interaction of electric charges, and gluons mediate the interaction of color charges. The full theory includes perturbations beyond simply fermions exchanging bosons; these additional perturbations can involve bosons that exchange fermions, as well as the creation or destruction of particles: see Feynman diagrams for examples. == Interactions == === Gravity === Gravitation is the weakest of the four interactions at the atomic scale, where electromagnetic interactions dominate. 
Gravitation is the most important of the four fundamental forces for astronomical objects over astronomical distances for two reasons. First, gravitation has an infinite effective range, like electromagnetism but unlike the strong and weak interactions. Second, gravity always attracts and never repels; in contrast, astronomical bodies tend toward a near-neutral net electric charge, such that the attraction to one type of charge and the repulsion from the opposite charge mostly cancel each other out. Even though electromagnetism is far stronger than gravitation, electrostatic attraction is not relevant for large celestial bodies, such as planets, stars, and galaxies, simply because such bodies contain equal numbers of protons and electrons and so have a net electric charge of zero. Nothing "cancels" gravity, by contrast: it is only attractive, unlike electric forces, which can be attractive or repulsive, and every object having mass is subject to it. Therefore, only gravitation matters for the large-scale structure of the universe. The long range of gravitation makes it responsible for such large-scale phenomena as the structure of galaxies and black holes and, being only attractive, it slows down the expansion of the universe. Gravitation also explains astronomical phenomena on more modest scales, such as planetary orbits, as well as everyday experience: objects fall; heavy objects act as if they were glued to the ground, and animals can only jump so high. Gravitation was the first interaction to be described mathematically. In ancient times, Aristotle hypothesized that objects of different masses fall at different rates. During the Scientific Revolution, Galileo Galilei experimentally determined that this hypothesis was wrong under certain circumstances—neglecting the friction due to air resistance and buoyancy forces if an atmosphere is present (e.g. the case of a dropped air-filled balloon vs a water-filled balloon), all objects accelerate toward the Earth at the same rate. Isaac Newton's law of universal gravitation (1687) was a good approximation of the behaviour of gravitation. Present-day understanding of gravitation stems from Einstein's general theory of relativity of 1915, a more accurate (especially for cosmological masses and distances) description of gravitation in terms of the geometry of spacetime. Merging general relativity and quantum mechanics (or quantum field theory) into a more general theory of quantum gravity is an area of active research. It is hypothesized that gravitation is mediated by a massless spin-2 particle called the graviton. Although general relativity has been experimentally confirmed (at least for weak fields, i.e. not black holes) on all but the smallest scales, there are alternatives to general relativity. These theories must reduce to general relativity in some limit, and the focus of observational work is to establish limits on what deviations from general relativity are possible. Proposed extra dimensions could explain why gravity is so weak. === Electroweak interaction === Electromagnetism and the weak interaction appear to be very different at everyday low energies. They can be modeled using two different theories. However, above the unification energy, on the order of 100 GeV, they would merge into a single electroweak force. The electroweak theory is very important for modern cosmology, particularly on how the universe evolved.
This is because shortly after the Big Bang, when the temperature was still above approximately 10^15 K, the electromagnetic force and the weak force were still merged as a combined electroweak force. For contributions to the unification of the weak and electromagnetic interaction between elementary particles, Abdus Salam, Sheldon Glashow and Steven Weinberg were awarded the Nobel Prize in Physics in 1979. ==== Electromagnetism ==== Electromagnetism is the force that acts between electrically charged particles. This phenomenon includes the electrostatic force acting between charged particles at rest, and the combined effect of electric and magnetic forces acting between charged particles moving relative to each other. Electromagnetism has an infinite range, as gravity does, but is vastly stronger. It is the force that binds electrons to atoms, and it holds molecules together. It is responsible for everyday phenomena like light, magnets, electricity, and friction. Electromagnetism fundamentally determines all macroscopic, and many atomic-level, properties of the chemical elements. In a four kilogram (~1 gallon) jug of water, there is roughly 2×10^8 coulombs of total electron charge. Thus, if we place two such jugs a meter apart, the electrons in one of the jugs repel those in the other jug with a force of roughly 4×10^26 newtons. This force is many times larger than the weight of the planet Earth. The atomic nuclei in one jug also repel those in the other with the same force. However, these repulsive forces are canceled by the attraction of the electrons in jug A with the nuclei in jug B and the attraction of the nuclei in jug A with the electrons in jug B, resulting in no net force. Electromagnetic forces are tremendously stronger than gravity, but tend to cancel out so that for astronomical-scale bodies, gravity dominates. Electrical and magnetic phenomena have been observed since ancient times, but it was only in the 19th century that James Clerk Maxwell discovered that electricity and magnetism are two aspects of the same fundamental interaction. By 1864, Maxwell's equations had rigorously quantified this unified interaction. Maxwell's theory, restated using vector calculus, is the classical theory of electromagnetism, suitable for most technological purposes. The constant speed of light in vacuum (customarily denoted with a lowercase letter c) can be derived from Maxwell's equations, which are consistent with the theory of special relativity. Albert Einstein's 1905 theory of special relativity, however, which follows from the observation that the speed of light is constant no matter how fast the observer is moving, showed that the theoretical result implied by Maxwell's equations has profound implications far beyond electromagnetism on the very nature of time and space. In another work that departed from classical electromagnetism, Einstein also explained the photoelectric effect by utilizing Max Planck's discovery that light is transmitted in "quanta" with a specific energy content determined by the frequency; these quanta are now called photons. Starting around 1927, Paul Dirac combined quantum mechanics with the relativistic theory of electromagnetism. Further work in the 1940s, by Richard Feynman, Freeman Dyson, Julian Schwinger, and Sin-Itiro Tomonaga, completed this theory, which is now called quantum electrodynamics, the revised theory of electromagnetism.
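A quick numerical check of the two-jug estimate above, as a sketch (standard physical constants; the 4 kg jugs and 1 m separation are the illustrative inputs from the text):

    N_A = 6.022e23      # Avogadro's number, molecules per mole
    e   = 1.602e-19     # elementary charge, coulombs
    k_e = 8.988e9       # Coulomb constant, N m^2 / C^2

    molecules = 4000.0 / 18.0 * N_A       # 4 kg of water at 18 g/mol
    q = molecules * 10 * e                # 10 electrons per H2O molecule
    F = k_e * q**2 / 1.0**2               # Coulomb repulsion at 1 m separation

    print(f"electron charge per jug: {q:.1e} C")                # ~2.1e8 C
    print(f"jug-jug repulsion:       {F:.1e} N")                # ~4.1e26 N
    print(f"weight of the Earth:     {5.97e24 * 9.81:.1e} N")   # ~5.9e25 N

The repulsion indeed comes out several times the weight of the Earth, which is why the near-perfect cancellation by the nuclei matters so much.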
Quantum electrodynamics and quantum mechanics provide a theoretical basis for electromagnetic behavior such as quantum tunneling, in which a certain percentage of electrically charged particles move in ways that would be impossible under classical electromagnetic theory; this behavior is necessary for everyday electronic devices such as transistors to function. ==== Weak interaction ==== The weak interaction or weak nuclear force is responsible for some nuclear phenomena such as beta decay. Electromagnetism and the weak force are now understood to be two aspects of a unified electroweak interaction; this discovery was the first step toward the unified theory known as the Standard Model. In the theory of the electroweak interaction, the carriers of the weak force are the massive gauge bosons called the W and Z bosons. The weak interaction is the only known interaction that does not conserve parity; it is left–right asymmetric. The weak interaction even violates CP symmetry but does conserve CPT. === Strong interaction === The strong interaction, or strong nuclear force, is the most complicated interaction, mainly because of the way it varies with distance. The nuclear force is powerfully attractive between nucleons at distances of about 1 femtometre (fm, or 10^−15 metres), but it rapidly decreases to insignificance at distances beyond about 2.5 fm. At distances less than 0.7 fm, the nuclear force becomes repulsive. This repulsive component is responsible for the physical size of nuclei, since the nucleons can come no closer than the force allows. After the nucleus was discovered in 1911, it was clear that a new force, today known as the nuclear force, was needed to overcome the electrostatic repulsion, a manifestation of electromagnetism, of the positively charged protons. Otherwise, the nucleus could not exist. Moreover, the force had to be strong enough to squeeze the protons into a volume whose diameter is about 10^−15 m, much smaller than that of the entire atom. From the short range of this force, Hideki Yukawa predicted that it was associated with a massive force particle, whose mass is approximately 100 MeV. The 1947 discovery of the pion ushered in the modern era of particle physics. Hundreds of hadrons were discovered from the 1940s to 1960s, and an extremely complicated theory of hadrons as strongly interacting particles was developed. Most notably: The pions were understood to be oscillations of vacuum condensates; Jun John Sakurai proposed the rho and omega vector bosons to be force carrying particles for approximate symmetries of isospin and hypercharge; Geoffrey Chew, Edward K. Burdett and Steven Frautschi grouped the heavier hadrons into families that could be understood as vibrational and rotational excitations of strings. While each of these approaches offered insights, no approach led directly to a fundamental theory. Murray Gell-Mann and George Zweig independently proposed fractionally charged quarks in 1964. Throughout the 1960s, different authors considered theories similar to the modern fundamental theory of quantum chromodynamics (QCD) as simple models for the interactions of quarks. The first to hypothesize the gluons of QCD were Moo-Young Han and Yoichiro Nambu, who introduced the quark color charge. Han and Nambu hypothesized that it might be associated with a force-carrying field. At that time, however, it was difficult to see how such a model could permanently confine quarks.
Han and Nambu also assigned each quark color an integer electrical charge, so that the quarks were fractionally charged only on average, and they did not expect the quarks in their model to be permanently confined. In 1971, Murray Gell-Mann and Harald Fritzsch proposed that the Han/Nambu color gauge field was the correct theory of the short-distance interactions of fractionally charged quarks. A little later, David Gross, Frank Wilczek, and David Politzer discovered that this theory had the property of asymptotic freedom, allowing them to make contact with experimental evidence. They concluded that QCD was the complete theory of the strong interactions, correct at all distance scales. The discovery of asymptotic freedom led most physicists to accept QCD since it became clear that even the long-distance properties of the strong interactions could be consistent with experiment if the quarks are permanently confined: the strong force increases indefinitely with distance, trapping quarks inside the hadrons. Assuming that quarks are confined, Mikhail Shifman, Arkady Vainshtein and Valentine Zakharov were able to compute the properties of many low-lying hadrons directly from QCD, with only a few extra parameters to describe the vacuum. In 1980, Kenneth G. Wilson published computer calculations based on the first principles of QCD, establishing, to a level of confidence tantamount to certainty, that QCD will confine quarks. Since then, QCD has been the established theory of strong interactions. QCD is a theory of fractionally charged quarks interacting by means of 8 bosonic particles called gluons. The gluons also interact with each other, not just with the quarks, and at long distances the lines of force collimate into strings, loosely modeled by a linear potential, a constant attractive force. In this way, the mathematical theory of QCD not only explains how quarks interact over short distances but also the string-like behavior, discovered by Chew and Frautschi, which they manifest over longer distances. === Higgs interaction === Conventionally, the Higgs interaction is not counted among the four fundamental forces. Nonetheless, although not a gauge interaction nor generated by any diffeomorphism symmetry, the Higgs field's cubic Yukawa coupling produces a weakly attractive fifth interaction. After spontaneous symmetry breaking via the Higgs mechanism, Yukawa terms remain of the form λ i 2 ψ ¯ ϕ ′ ψ = m i ν ψ ¯ ϕ ′ ψ {\displaystyle {\frac {\lambda _{i}}{\sqrt {2}}}{\bar {\psi }}\phi '\psi ={\frac {m_{i}}{\nu }}{\bar {\psi }}\phi '\psi } , with Yukawa coupling λ i {\displaystyle \lambda _{i}} , particle mass m i {\displaystyle m_{i}} (in eV), and Higgs vacuum expectation value 246.22 GeV. Hence coupled particles can exchange a virtual Higgs boson, yielding classical potentials of the form V ( r ) = − m i m j m H 2 1 4 π r e − m H c r / ℏ {\displaystyle V(r)=-{\frac {m_{i}m_{j}}{m_{\rm {H}}^{2}}}{\frac {1}{4\pi r}}e^{-m_{\rm {H}}\,c\,r/\hbar }} , with Higgs mass 125.18 GeV. Because the reduced Compton wavelength of the Higgs boson is so small (1.576×10^−18 m, comparable to those of the W and Z bosons), this potential has an effective range of a few attometers. Between two electrons, it begins roughly 10^11 times weaker than the weak interaction, and grows exponentially weaker at non-zero distances.
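To make the range estimate above concrete, a one-line check of the Higgs boson's reduced Compton wavelength, which sets the screening length in the exponential factor of V(r) (a sketch using the standard value ħc ≈ 197.327 MeV·fm):

    hbar_c = 197.327     # MeV * fm
    m_H    = 125.18e3    # Higgs mass in MeV (125.18 GeV)

    lambda_bar = hbar_c / m_H     # reduced Compton wavelength in fm
    print(lambda_bar, "fm")       # ~1.576e-3 fm, i.e. 1.576e-18 m

Beyond a few multiples of this length the factor e^(−m_H c r/ħ) suppresses the potential, hence the quoted range of a few attometers.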
=== Beyond the Standard Model === The fundamental forces may become unified into a single force at very high energies and on a minuscule scale, the Planck scale. Particle accelerators cannot produce the enormous energies required to experimentally probe this regime. The weak and electromagnetic forces have already been unified with the electroweak theory of Sheldon Glashow, Abdus Salam, and Steven Weinberg, for which they received the 1979 Nobel Prize in physics. Numerous theoretical efforts have been made to systematize the existing four fundamental interactions on the model of electroweak unification. Grand Unified Theories (GUTs) are proposals to show that each of the three fundamental interactions described by the Standard Model is a different manifestation of a single interaction with symmetries that break down and create separate interactions below some extremely high level of energy. GUTs are also expected to predict some of the relationships between constants of nature that the Standard Model treats as unrelated, as well as gauge coupling unification for the relative strengths of the electromagnetic, weak, and strong forces. A so-called theory of everything, which would integrate GUTs with a quantum gravity theory, faces a greater barrier, because no quantum gravity theory (e.g., string theory, loop quantum gravity, and twistor theory) has secured wide acceptance. Some theories look for a graviton to complete the Standard Model list of force-carrying particles, while others, like loop quantum gravity, emphasize the possibility that spacetime itself may have a quantum aspect to it. Some theories beyond the Standard Model include a hypothetical fifth force, and the search for such a force is an ongoing line of experimental physics research. In supersymmetric theories, some particles, known as moduli, acquire their masses only through supersymmetry breaking effects and can mediate new forces. Another reason to look for new forces is the discovery that the expansion of the universe is accelerating (also known as dark energy), creating a need to explain a nonzero cosmological constant and possibly other modifications of general relativity. Fifth forces have also been suggested to explain phenomena such as CP violation, dark matter, and dark flow. == See also == Quintessence, a hypothesized fifth force Gerardus 't Hooft Edward Witten Howard Georgi == References == == Bibliography == Davies, Paul (1986), The Forces of Nature, Cambridge Univ. Press, 2nd ed. Feynman, Richard (1967), The Character of Physical Law, MIT Press, ISBN 978-0-262-56003-0. Schumm, Bruce A. (2004), Deep Down Things, Johns Hopkins University Press. While all interactions are discussed, discussion is especially thorough on the weak. Weinberg, Steven (1993), The First Three Minutes: A Modern View of the Origin of the Universe, Basic Books, ISBN 978-0-465-02437-7. Weinberg, Steven (1994), Dreams of a Final Theory, Basic Books, ISBN 978-0-679-74408-5. Padmanabhan, T. (1998), After The First Three Minutes: The Story of Our Universe, Cambridge Univ. Press, ISBN 978-0-521-62972-0. Perkins, Donald H. (2000), Introduction to High Energy Physics (4th ed.), Cambridge Univ. Press, ISBN 978-0-521-62196-0. Riazuddin (December 29, 2009). "Non-standard interactions" (PDF). NCP 5th Particle Physics Sypnoisis. 1 (1): 1–25. Archived from the original (PDF) on March 3, 2016. Retrieved March 19, 2011.
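As a rough numerical illustration of the gauge coupling unification mentioned in the "Beyond the Standard Model" section above, a one-loop sketch (textbook Standard Model beta coefficients and approximate couplings at the Z mass; this is an illustration of the idea, not a precision calculation):

    import math

    # one-loop running: alpha_i^-1(mu) = alpha_i^-1(M_Z) - b_i/(2*pi) * ln(mu/M_Z)
    M_Z   = 91.19                              # GeV
    inv_a = [59.0, 29.6, 8.5]                  # approx. alpha^-1 for U(1)_Y (GUT norm), SU(2), SU(3)
    b     = [41.0 / 10.0, -19.0 / 6.0, -7.0]   # Standard Model one-loop beta coefficients

    for mu in (1e3, 1e10, 2e16):               # energy scale in GeV
        t = math.log(mu / M_Z)
        print(mu, [round(a - bi / (2 * math.pi) * t, 1) for a, bi in zip(inv_a, b)])
    # near 2e16 GeV the three inverse couplings come close (~37.5, ~46.2, ~45.3)
    # but do not exactly meet: in the minimal Standard Model, unification is
    # only approximate, one of the motivations for GUT and SUSY extensions.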
Wikipedia/Fundamental_physics
In physics, the term swampland refers to effective low-energy physical theories which are not compatible with quantum gravity. This is in contrast with the so-called "string theory landscape" of theories that are known to be compatible with string theory, which is hypothesized to be a consistent quantum theory of gravity. In other words, the Swampland is the set of consistent-looking theories that admit no consistent ultraviolet completion once gravity is added. Developments in string theory also suggest that the string theory landscape of false vacua is vast, so it is natural to ask if the landscape is as vast as allowed by anomaly-free effective field theories. The Swampland program aims to delineate the theories of quantum gravity by identifying the universal principles shared among all theories compatible with gravitational UV completion. The program was initiated by Cumrun Vafa, who argued that string theory suggests that the Swampland is in fact much larger than the string theory landscape. Quantum gravity differs from quantum field theory in several key ways, including locality and UV/IR decoupling. In quantum gravity, a local structure of observables is emergent rather than fundamental. A concrete example of the emergence of locality is AdS/CFT, where the local quantum field theory description in the bulk is only an approximation that emerges within certain limits of the theory. Moreover, in quantum gravity, it is believed that different spacetime topologies can contribute to the gravitational path integral, which suggests that spacetime emerges due to one saddle being more dominant. Furthermore, in quantum gravity, UV and IR are closely related. This connection is manifested in black hole thermodynamics, where a semiclassical IR theory calculates the black hole entropy, which captures the density of gravitational UV states known as black holes. In addition to general arguments based on black hole physics, developments in string theory also suggest that there are universal principles shared among all the theories in the string landscape. The swampland conjectures are a set of conjectured criteria for theories in the quantum gravity landscape. The criteria are often motivated by black hole physics, universal patterns in string theory, and non-trivial self-consistencies among each other. == No global symmetry conjecture == The no global symmetry conjecture states that any symmetry in quantum gravity is either broken or gauged. In other words, there are no accidental symmetries in quantum gravity. The original motivation for the conjecture goes back to black holes. Hawking radiation of a generic black hole is only sensitive to charges that can be measured outside of the black hole, which are charges under gauge symmetries. Therefore, it is believed that the process of black hole formation and evaporation violates any conservation law that is not protected by a gauge symmetry. The no global symmetry conjecture can also be derived from the AdS/CFT correspondence in AdS. === Generalization to higher-form symmetries === The modern understanding of global and gauge symmetries allows for a natural generalization of the no-global symmetry conjecture to higher-form symmetries. A conventional symmetry (0-form symmetry) is a map that acts on point-like operators.
For example, a free complex scalar field ϕ ( x ) {\displaystyle \phi (x)} has a U ( 1 ) {\displaystyle U(1)} symmetry which acts on the operator ϕ ^ ( x ) {\displaystyle {\hat {\phi }}(x)} as ϕ ^ ( x ) → e i α ϕ ^ ( x ) {\displaystyle {\hat {\phi }}(x)\rightarrow e^{i\alpha }{\hat {\phi }}(x)} , where α {\displaystyle \alpha } is a constant. One can use the symmetry to associate an operator O g ( Σ ) {\displaystyle {\mathcal {O}}_{g}(\Sigma )} to any symmetry element g {\displaystyle g} and codimension-1 hypersurface Σ {\displaystyle \Sigma } such that O g ( Σ ) {\displaystyle {\mathcal {O}}_{g}(\Sigma )} maps any charged local operator such as ϕ ^ ( x ) {\displaystyle {\hat {\phi }}(x)} to g ( ϕ ^ ( x ) ) {\displaystyle g({\hat {\phi }}(x))} if the point x {\displaystyle x} is enclosed (or linked) by Σ {\displaystyle \Sigma } . By definition, the action of the operator O g ( Σ ) {\displaystyle {\mathcal {O}}_{g}(\Sigma )} does not change under a continuous deformation of Σ {\displaystyle \Sigma } as long as Σ {\displaystyle \Sigma } does not hit a charged operator. Due to this feature, the operator O {\displaystyle {\mathcal {O}}} is called a topological operator. If the algebra governing the fusion of the symmetry operators has an element without an inverse, the corresponding symmetry is called a non-invertible symmetry. The above definitions can be generalized to higher dimensional charged operators. A collection of codimension- ( p + 1 ) {\displaystyle (p+1)} topological operators which act non-trivially on dimension- p {\displaystyle p} operators and are closed under fusion is called a p {\displaystyle p} -form symmetry. Compactification of a higher dimensional theory with a p {\displaystyle p} -form symmetry on a p {\displaystyle p} -dimensional torus can map the higher-form symmetry to a 0 {\displaystyle 0} -form symmetry in the lower dimensional theory. Therefore, it is believed that higher-form global symmetries are also excluded from quantum gravity. Note that gauge symmetry does not satisfy this definition since, in the process of gauging, any local charged operator is excluded from the physical spectrum. === Cobordism conjecture === Global symmetries are closely connected to conservation laws. The no-global symmetry conjecture essentially states that any conservation law that is not protected by a gauge symmetry can be violated via a dynamical process. This intuition leads to the cobordism conjecture. Consider a gravitational theory that can be put on two backgrounds with d {\displaystyle d} non-compact dimensions and internal geometries M {\displaystyle M} and N {\displaystyle N} . The cobordism conjecture states that there must be a dynamical process which connects the two backgrounds to each other. In other words, there must exist a domain wall in the lower-dimensional theory which separates the two backgrounds. This resembles the idea of cobordism in mathematics, which interpolates between two manifolds by connecting them using a higher dimensional manifold. == Completeness of spectrum hypothesis == The completeness of spectrum hypothesis conjectures that in quantum gravity, the spectrum of charges under any gauge symmetry is completely realized. This conjecture is universally satisfied in string theory, but is also motivated by black hole physics. The entropy of charged black holes is non-zero.
Since the exponential of entropy counts the number of states, the non-zero entropy of black holes suggests that for sufficiently high charges, any charge is realized by at least one black hole state. === Relation to no-global symmetry conjecture === The completeness of spectrum hypothesis is closely related to the no global symmetry conjecture. Example: Consider a U ( 1 ) {\displaystyle U(1)} gauge symmetry. In the absence of charged particles, the theory has a 1-form global symmetry ( R , + ) {\displaystyle (\mathbb {R} ,+)} . For any number c ∈ R {\displaystyle c\in \mathbb {R} } and any codimension 2 surface Σ {\displaystyle \Sigma } , the symmetry operator O c ( Σ ) {\displaystyle {\mathcal {O}}_{c}(\Sigma )} multiplies a Wilson line that links with Σ {\displaystyle \Sigma } by e i c n {\displaystyle e^{icn}} , where the charge associated with the Wilson line is n {\displaystyle n} units of the fundamental charge. In the presence of charged particles, Wilson lines can break up. If there is a charged particle with charge k {\displaystyle k} , Wilson lines can change their charges by multiples of k {\displaystyle k} . Therefore, some of the symmetry operators O c ( Σ ) {\displaystyle {\mathcal {O}}_{c}(\Sigma )} are no longer well-defined. However, if we take k {\displaystyle k} to be the smallest charge, the values c ∈ { 2 π / k , 4 π / k , . . . , 2 π } {\displaystyle c\in \{2\pi /k,4\pi /k,...,2\pi \}} give rise to well-defined symmetry operators. Therefore, a Z k {\displaystyle \mathbb {Z} _{k}} part of the global symmetry survives. To avoid any global symmetry, k {\displaystyle k} must be 1, which means all charges appear in the spectrum. The above argument can be generalized to discrete and higher-dimensional symmetries. The completeness of spectrum follows from the absence of generalized global symmetry, which also includes non-invertible symmetries. == Weak gravity conjecture == The weak gravity conjecture (WGC) is a conjecture regarding the strength gravity can have in a theory of quantum gravity relative to the gauge forces in that theory. It roughly states that gravity should be the weakest force in any consistent theory of quantum gravity. === Original conjecture === The weak gravity conjecture postulates that every black hole must decay unless it is protected by supersymmetry. Given a U ( 1 ) {\displaystyle U(1)} gauge symmetry, there is an upper bound on the charge of the black holes with a given mass. The black holes that saturate that bound are extremal black holes. The extremal black holes have zero Hawking temperature. However, whether or not a black hole with a charge and a mass that exactly satisfies the extremality condition exists depends on the quantum theory. But given the high entropy of the large extremal black holes, there must exist many states with charges and masses that are arbitrarily close to the extremality condition. Suppose the black hole emits a particle with charge q {\displaystyle q} and mass m {\displaystyle m} . For the remaining black hole to remain subextremal, we must have | q | ≥ m {\displaystyle |q|\geq m} in Planck units where the extremality condition takes the form | Q | = M {\displaystyle |Q|=M} ; that is, the emitted particle must itself be at least marginally superextremal, so the conjecture requires such a particle to exist (see the numerical sketch at the end of this article). === Mild version === Given that black holes are the natural extension of particles beyond a certain mass, it is natural to assume that there must also be black holes with a charge-to-mass ratio that is greater than that of very large black holes.
In other words, the correction to the extremality condition | Q | = M + δ M {\displaystyle |Q|=M+\delta M} must be such that δ M > 0 {\displaystyle \delta M>0} . === Higher dimensional generalization === The weak gravity conjecture can be generalized to higher-form gauge symmetries. The generalization postulates that for any higher-form gauge symmetry, there exists a brane which has a charge-to-mass ratio that exceeds the charge-to-mass ratio of the extremal branes. == Distance conjecture == String dualities have played a crucial role in developing the modern understanding of string theory by providing a non-perturbative window into UV physics. In string theory, when one takes the vacuum expectation values of the scalar fields of a theory to a certain limit, a dual description always emerges. An example of this is T-duality, where there are two dual descriptions to understand a string theory with an internal geometry of a circle. However, each perturbative description becomes valid in a different regime of the parameter space. The circle's radius manifests itself as a scalar field in the lower dimensional theory. If one takes the value of this scalar field to infinity, the resulting theory can be described by the original higher dimensional theory. The new description includes a tower of light states corresponding to the Kaluza–Klein (KK) particles. On the other hand, if we take the size of the circle to zero, the strings that wind around the circle will become light. T-duality is the statement that there exists an alternative description which captures these light winding states as KK particles. Note that in the absence of a string, there is no reason to believe any states should become light in the limit where the size of the circle goes to zero. The distance conjecture quantifies the above observation and states that it must happen at any infinite distance limit of the parameter space. === Original conjecture === If one takes the vacuum expectation value of the scalar fields to infinity, there exists a tower of light and weakly coupled states whose mass in Planck units goes to zero. Moreover, the mass of the particles depends on the canonical distance travelled in the moduli space Δ ϕ {\displaystyle \Delta \phi } as m ∼ M 0 exp ⁡ ( − λ Δ ϕ ) {\displaystyle m\sim M_{0}\exp(-\lambda \Delta \phi )} , where M 0 {\displaystyle M_{0}} and λ {\displaystyle \lambda } are constants. Furthermore, there is a universal dimension-dependent lower bound on λ {\displaystyle \lambda } . The canonical distance between two points in the target space for scalar expectation values (moduli space) is measured using the canonical metric G {\displaystyle G} , which is defined by the kinetic term in the action: S = ∫ d d x g 1 2 G i j ∂ μ ϕ i ∂ μ ϕ j + . . . {\displaystyle S=\int d^{d}x{\sqrt {g}}{\frac {1}{2}}G_{ij}\partial _{\mu }\phi ^{i}\partial ^{\mu }\phi ^{j}+...} === Emergent string conjecture === A stronger version of the original distance conjecture additionally postulates that the lightest tower of states at any infinite distance limit is either a KK tower or a string tower. In other words, the leading tower of states can either be understood via dimensional reduction of a higher dimensional theory (just like the example provided above) or as excitations of a weakly coupled string. This conjecture is often further strengthened by requiring the string to be a fundamental string.
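A toy numerical sketch of the circle example above (string units with alpha' = 1; purely illustrative): Kaluza–Klein momentum modes have mass n/R, winding modes have mass wR, and T-duality exchanges the two towers under R ↔ 1/R.

    def tower_masses(R, levels=3):
        kk      = [n / R for n in range(1, levels + 1)]  # light as R -> infinity
        winding = [w * R for w in range(1, levels + 1)]  # light as R -> 0
        return kk, winding

    for R in (0.1, 1.0, 10.0):
        kk, w = tower_masses(R)
        print(f"R = {R:5.1f}   KK tower {kk}   winding tower {w}")

In terms of the canonically normalized modulus phi ~ log R, both R → 0 and R → ∞ lie at infinite distance, and the mass of the corresponding tower falls off as e^(−λ|phi|), which is exactly the behaviour the distance conjecture postulates.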
=== The sharpened distance conjecture === The sharpened distance conjecture states that in d {\displaystyle d} spacetime dimensions, λ ≥ 1 / d − 2 {\displaystyle \lambda \geq 1/{\sqrt {d-2}}} . == References == == External links == Lecture by Cumrun Vafa, String Landscape and the Swampland, March 2018
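As a closing illustration, a minimal numerical check of the weak gravity bookkeeping discussed above (hypothetical numbers, in Planck units where extremality reads |Q| = M): emitting a superextremal particle (|q| ≥ m) keeps the remnant black hole subextremal, while a subextremal particle would not.

    def subextremal(Q, M):
        return abs(Q) <= M

    Q, M = 100.0, 100.0                       # extremal parent black hole
    for q, m in [(2.0, 1.0), (1.0, 2.0)]:     # (charge, mass) of the emitted particle
        print(f"emit q={q}, m={m}: remnant subextremal? {subextremal(Q - q, M - m)}")
    # q=2, m=1 -> True  (particle charge >= mass, so the decay is allowed)
    # q=1, m=2 -> False (remnant would violate |Q| <= M)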
Wikipedia/Swampland_(physics)
The Trouble with Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next is a 2006 book by the theoretical physicist Lee Smolin about the problems with string theory. The book strongly criticizes string theory and its prominence in contemporary theoretical physics, on the grounds that string theory has yet to come up with a single prediction that can be verified using any technology that is likely to be feasible within our lifetimes. Smolin also focuses on the difficulties faced by research in quantum gravity, and by current efforts to come up with a theory explaining all four fundamental interactions. The book is broadly concerned with the role of controversy and diversity of approaches in scientific processes and ethics. Smolin suggests both that there appear to be serious deficiencies in string theory and that string theory has an unhealthy near-monopoly on fundamental physics in the United States, and that a diversity of approaches is needed. He argues that more attention should instead be paid to background-independent theories of quantum gravity. In the book, Smolin claims that string theory makes no new testable predictions; that it has no coherent mathematical formulation; and that it has not been mathematically proved finite. Some experts in the theoretical physics community disagree with these statements. Smolin states that to propose a string theory landscape having up to 10^500 string vacuum solutions is tantamount to abandoning accepted science: The scenario of many unobserved universes plays the same logical role as the scenario of an intelligent designer. Each provides an untestable hypothesis that, if true, makes something improbable seem quite probable. == Reviews == The book generated much controversy and debate about the merits of string theory, and was criticised by some prominent physicists including Sean Carroll and string theorists Joseph Polchinski and Luboš Motl. Polchinski's review states, "In the end, these [Smolin and others'] books fail to capture much of the spirit and logic of string theory." Motl's review goes on to say "the concentration of irrational statements and anti-scientific sentiments has exceeded my expectations," and, In the context of string theory, he literally floods the pages of his book with undefendable speculations about some basic results of string theory. Because these statements are of mathematical nature, we are sure that Lee is wrong even in the absence of any experiments. Sean Carroll's review expressed frustration because in his opinion, "The Trouble with Physics is really two books, with intertwined but ultimately independent arguments." He suggested that the arguments in the book appear divided: "[one argument is] big and abstract and likely to be ignored by most of the book's audience; the other is narrow and specific and part of a wide-ranging and heated discussion carried out between scientists, in the popular press, and on the internet." Furthermore, The abstract argument — about academic culture and the need to nurture speculative ideas — is, in my opinion, important and largely correct, while the specific one — about the best way to set about quantizing gravity — is overstated and undersupported. Carroll fears that excessive attention paid to the specific dispute is likely to disadvantage the more general abstract argument. Sabine Hossenfelder, in a review written a year later and titled "The Trouble With Physics: Aftermath", alludes to the book's polarising effect on the scientific community.
She explores the author's views as a contrast in generations, while supporting his right to them. Hossenfelder believes that Smolin's book attempts to restore the relation physics once had with philosophy, quoting him as follows: Philosophy used to be part of the natural sciences – for a long time. For long centuries during which our understanding of the world we live in has progressed tremendously. There is no doubt that times change, but not all changes are a priori good if left without further consideration. Here, change has resulted in a gap between the natural sciences where questioning the basis of our theories, and an embedding into the historical and sociological context used to be. Even though many new specifically designed interdisciplinary fields have been established, investigating the foundations of our current theories has basically been erased out of curricula and textbooks. == The String Wars == A discussion in 2006 took place between UCSB physicists at KITP and science journalist George Johnson regarding the controversy caused by the books of Smolin (The Trouble with Physics) and Peter Woit (Not Even Wrong). The meeting was titled "The String Wars" to reflect the impression the media has given people regarding the controversy in string theory caused by Smolin's and Woit's books. A video of the proceedings is available at UCSB's website. == See also == Loop quantum gravity Peter Woit == References == Notes Further reading Greene, Brian, 1999. The Elegant Universe. Vintage Paperbacks. A nontechnical introduction to string theory. Greene, Brian, 2004. The Fabric of the Cosmos. Penguin Books. Space, time, cosmology, and more string theory. Nontechnical. Penrose, Roger, 2004. The Road to Reality. Alfred A. Knopf. Technical. Randall, Lisa, 2005. Warped Passages. Smolin, Lee, 2001. Three Roads to Quantum Gravity. Woit, Peter, 2006. Not Even Wrong: The Failure of String Theory & the Continuing Challenge to Unify the Laws of Physics. Jonathan Cape (UK) and Basic Books (USA). The "other book" criticizing string theory and the stagnation of theoretical particle physics. == External links == The Trouble with Physics, webpage maintained by the publisher, Houghton Mifflin. Joseph Polchinski (2007) "All Strung Out?", a review of The Trouble with Physics and Not Even Wrong, American Scientist 95(1):1. Smolin's comment and Polchinski's reply. Mindmap of the fundamental concepts described in the book. Lee Smolin, Brian Greene (August 18, 2006). Physicists Debate the Merits of String Theory (Talk-Show Debate). National Public Radio, "Talk of the Nation". Retrieved April 19, 2018.
Wikipedia/The_Trouble_With_Physics
In string theory, the string theory landscape (or landscape of vacua) is the collection of possible false vacua, together comprising a collective "landscape" of choices of parameters governing compactifications. The term "landscape" comes from the notion of a fitness landscape in evolutionary biology. It was first applied to cosmology by Lee Smolin in his book The Life of the Cosmos (1997), and was first used in the context of string theory by Leonard Susskind. == Compactified Calabi–Yau manifolds == In string theory the number of flux vacua is commonly thought to be roughly 10 500 {\displaystyle 10^{500}} , but could be 10 272 , 000 {\displaystyle 10^{272,000}} or higher. The large number of possibilities arises from choices of Calabi–Yau manifolds and choices of generalized magnetic fluxes over various homology cycles, found in F-theory. If there is no structure in the space of vacua, the problem of finding one with a sufficiently small cosmological constant is NP-complete. This is a version of the subset sum problem (see the toy example near the end of this article). A possible mechanism of string theory vacuum stabilization, now known as the KKLT mechanism, was proposed in 2003 by Shamit Kachru, Renata Kallosh, Andrei Linde, and Sandip Trivedi. == Fine-tuning by the anthropic principle == Fine-tuning of constants like the cosmological constant or the Higgs boson mass is usually assumed to occur for precise physical reasons, as opposed to taking their particular values at random. That is, these values should be uniquely consistent with underlying physical laws. The number of theoretically allowed configurations has prompted suggestions that this is not the case, and that many different vacua are physically realized. The anthropic principle proposes that fundamental constants may have the values they have because such values are necessary for life (and therefore intelligent observers to measure the constants). The anthropic landscape thus refers to the collection of those portions of the landscape that are suitable for supporting intelligent life. === Weinberg model === In 1987, Steven Weinberg proposed that the observed value of the cosmological constant was so small because it is impossible for life to occur in a universe with a much larger cosmological constant. Weinberg attempted to predict the magnitude of the cosmological constant based on probabilistic arguments. Other attempts have been made to apply similar reasoning to models of particle physics. Such attempts are based in the general ideas of Bayesian probability; interpreting probability in a context where it is only possible to draw one sample from a distribution is problematic in frequentist probability but not in Bayesian probability, which is not defined in terms of the frequency of repeated events. In such a framework, the probability P ( x ) {\displaystyle P(x)} of observing some fundamental parameters x {\displaystyle x} is given by, P ( x ) = P p r i o r ( x ) × P s e l e c t i o n ( x ) , {\displaystyle P(x)=P_{\mathrm {prior} }(x)\times P_{\mathrm {selection} }(x),} where P p r i o r {\displaystyle P_{\mathrm {prior} }} is the prior probability, from fundamental theory, of the parameters x {\displaystyle x} and P s e l e c t i o n {\displaystyle P_{\mathrm {selection} }} is the "anthropic selection function", determined by the number of "observers" that would occur in the universe with parameters x {\displaystyle x} . These probabilistic arguments are the most controversial aspect of the landscape.
Technical criticisms of these proposals have pointed out that: The function P p r i o r {\displaystyle P_{\mathrm {prior} }} is completely unknown in string theory and may be impossible to define or interpret in any sensible probabilistic way. The function P s e l e c t i o n {\displaystyle P_{\mathrm {selection} }} is completely unknown, since so little is known about the origin of life. Simplified criteria (such as the number of galaxies) must be used as a proxy for the number of observers. Moreover, it may never be possible to compute it for parameters radically different from those of the observable universe. === Simplified approaches === Tegmark et al. have recently considered these objections and proposed a simplified anthropic scenario for axion dark matter in which they argue that the first two of these problems do not apply. Vilenkin and collaborators have proposed a consistent way to define the probabilities for a given vacuum. A problem with many of the simplified approaches people have tried is that they "predict" a cosmological constant that is too large by 10–1000 orders of magnitude (depending on one's assumptions) and hence suggest that the cosmic acceleration should be much more rapid than is observed. === Interpretation === Few dispute the large number of metastable vacua. The existence, meaning, and scientific relevance of the anthropic landscape, however, remain controversial. ==== Cosmological constant problem ==== Andrei Linde, Sir Martin Rees and Leonard Susskind advocate it as a solution to the cosmological constant problem. === Weak scale supersymmetry from the landscape === The string landscape ideas can be applied to the notion of weak scale supersymmetry and the Little Hierarchy problem. For string vacua which include the MSSM (Minimal Supersymmetric Standard Model) as the low energy effective field theory, all values of SUSY breaking fields are expected to be equally likely on the landscape. This led Douglas and others to propose that the SUSY breaking scale is distributed as a power law in the landscape P p r i o r ∼ m s o f t 2 n F + n D − 1 {\displaystyle P_{prior}\sim m_{soft}^{2n_{F}+n_{D}-1}} where n F {\displaystyle n_{F}} is the number of F-breaking fields (distributed as complex numbers) and n D {\displaystyle n_{D}} is the number of D-breaking fields (distributed as real numbers). Next, one may impose the Agrawal, Barr, Donoghue, Seckel (ABDS) anthropic requirement that the derived weak scale lie within a factor of a few of our measured value (lest nuclei as needed for life as we know it become unstable (the atomic principle)). Combining these effects with a mild power-law draw to large soft SUSY breaking terms, one may calculate the Higgs boson and superparticle masses expected from the landscape. The Higgs mass probability distribution peaks around 125 GeV while sparticles (with the exception of light higgsinos) tend to lie well beyond current LHC search limits. This approach is an example of the application of stringy naturalness. ==== Scientific relevance ==== David Gross suggests that the idea is inherently unscientific, unfalsifiable or premature. A famous debate on the anthropic landscape of string theory is the Smolin–Susskind debate on the merits of the landscape. ==== Popular reception ==== There are several popular books about the anthropic principle in cosmology. The authors of two physics blogs, Luboš Motl and Peter Woit, are opposed to this use of the anthropic principle.
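A toy illustration of the subset-sum structure mentioned earlier in this article (a Bousso–Polchinski-style sketch with made-up flux contributions, not a real compactification): the cosmological constant is a fixed negative offset plus a sum of flux terms, and a brute-force scan over flux choices is exponential in the number of fluxes.

    import itertools, random

    random.seed(0)
    q2 = [random.uniform(0.05, 0.15) for _ in range(16)]  # hypothetical flux quanta squared
    Lambda0 = -1.0                                        # bare (negative) contribution

    def Lambda(n):
        return Lambda0 + sum(ni * qi for ni, qi in zip(n, q2))

    best = min(itertools.product((0, 1), repeat=len(q2)), key=lambda n: abs(Lambda(n)))
    print(abs(Lambda(best)))  # small compared with any single flux term,
                              # but found only after scanning 2^16 flux vectors

Finding a vacuum with |Lambda| below a prescribed tolerance in such a model is a subset-sum instance, which is why the problem is NP-complete in the absence of extra structure.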
== See also == Swampland Extra dimensions == References == == External links == String landscape; moduli stabilization; flux vacua; flux compactification on arxiv.org. Cvetič, Mirjam; García-Etxebarria, Iñaki; Halverson, James (March 2011). "On the computation of non-perturbative effective potentials in the string theory landscape". Fortschritte der Physik. 59 (3–4): 243–283. arXiv:1009.5386. Bibcode:2011ForPh..59..243C. doi:10.1002/prop.201000093. S2CID 46634583.
Wikipedia/String_theory_landscape
In theoretical physics, compactification means changing a theory with respect to one of its space-time dimensions. Instead of having a theory with this dimension being infinite, one changes the theory so that this dimension has a finite length, and may also be periodic. Compactification plays an important part in thermal field theory where one compactifies time, in string theory where one compactifies the extra dimensions of the theory, and in two- or one-dimensional solid state physics, where one considers a system which is limited in one of the three usual spatial dimensions. At the limit where the size of the compact dimension goes to zero, no fields depend on this extra dimension, and the theory is dimensionally reduced. == In string theory == In string theory, compactification is a generalization of Kaluza–Klein theory. It attempts to bridge the gap between the conception of our universe based on its four observable dimensions and the ten, eleven, or twenty-six dimensions that theoretical equations suggest the universe should have. For this purpose it is assumed the extra dimensions are "wrapped" up on themselves, or "curled" up on Calabi–Yau spaces, or on orbifolds. Models in which the compact directions support fluxes are known as flux compactifications. The coupling constant of string theory, which determines the probability of strings splitting and reconnecting, can be described by a field called a dilaton. This in turn can be described as the size of an extra (eleventh) dimension which is compact. In this way, the ten-dimensional type IIA string theory can be described as the compactification of M-theory in eleven dimensions. Furthermore, different versions of string theory are related by different compactifications in a procedure known as T-duality. The formulation of more precise versions of the meaning of compactification in this context has been promoted by discoveries such as the mysterious duality. == Flux compactification == A flux compactification is a particular way to deal with additional dimensions required by string theory. It assumes that the shape of the internal manifold is a Calabi–Yau manifold or generalized Calabi–Yau manifold which is equipped with non-zero values of fluxes, i.e. differential forms, that generalize the concept of an electromagnetic field (see p-form electrodynamics). The hypothetical concept of the anthropic landscape in string theory follows from a large number of possibilities in which the integers that characterize the fluxes can be chosen without violating rules of string theory. The flux compactifications can be described as F-theory vacua or type IIB string theory vacua with or without D-branes. == See also == Dimensional reduction == References == == Further reading == Chapter 16 of Michael Green, John H. Schwarz and Edward Witten (1987). Superstring theory. Cambridge University Press. Vol. 2: Loop amplitudes, anomalies and phenomenology. ISBN 0-521-35753-5. Brian R. Greene, "String Theory on Calabi–Yau Manifolds". arXiv:hep-th/9702155 . Mariana Graña, "Flux compactifications in string theory: A comprehensive review", Physics Reports 423, 91–158 (2006). arXiv:hep-th/0509003 . Michael R. Douglas and Shamit Kachru "Flux compactification", Rev. Mod. Phys. 79, 733 (2007). arXiv:hep-th/0610102 . Ralph Blumenhagen, Boris Körs, Dieter Lüst, Stephan Stieberger, "Four-dimensional string compactifications with D-branes, orientifolds and fluxes", Physics Reports 445, 1–193 (2007). arXiv:hep-th/0610327 .
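As a brief worked illustration of the circle compactification described in the article above, consider a free massless scalar field in five dimensions, with the fifth dimension a circle of radius R (a standard Kaluza–Klein mode expansion, stated here under that simplifying assumption). Periodicity in the compact coordinate y forces a discrete Fourier decomposition:

```latex
\phi(x^{\mu}, y) = \sum_{n \in \mathbb{Z}} \phi_{n}(x^{\mu})\, e^{i n y / R},
\qquad
\Box_{5}\,\phi = 0 \;\Longrightarrow\; \left(\Box_{4} - \frac{n^{2}}{R^{2}}\right)\phi_{n} = 0 .
```

Each mode φ_n behaves as a four-dimensional field of mass |n|/R; as R → 0 every n ≠ 0 mode becomes infinitely heavy and decouples, leaving only the y-independent mode. This is the dimensional reduction described at the start of the article.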
Wikipedia/Compactification_(physics)
In algebraic geometry, a complex algebraic variety is an algebraic variety (in the scheme sense or otherwise) over the field of complex numbers. == Chow's theorem == Chow's theorem states that a projective complex analytic variety, i.e., a closed analytic subvariety of the complex projective space C P n {\displaystyle \mathbb {C} \mathbf {P} ^{n}} , is an algebraic variety. These are usually simply referred to as projective varieties. == Hironaka's theorem == Let X be a complex algebraic variety. Then there is a projective resolution of singularities X ′ → X {\displaystyle X'\to X} . == Relation with similar concepts == Despite Chow's theorem, not every complex analytic variety is a complex algebraic variety; for example, a generic complex torus of dimension at least 2 carries too few meromorphic functions to be algebraic. == See also == Complete variety Complex analytic variety == References == == Bibliography == Abramovich, Dan (2017). "Resolution of singularities of complex algebraic varieties and their families". Proceedings of the International Congress of Mathematicians (ICM 2018). pp. 523–546. arXiv:1711.09976. doi:10.1142/9789813272880_0066. ISBN 978-981-327-287-3. S2CID 119708681. Hironaka, Heisuke (1964). "Resolution of Singularities of an Algebraic Variety over a Field of Characteristic Zero: I". Annals of Mathematics. 79 (1): 109–203. doi:10.2307/1970486. JSTOR 1970486.
Wikipedia/Complex_algebraic_variety
In mathematics, a vertex operator algebra (VOA) is an algebraic structure that plays an important role in two-dimensional conformal field theory and string theory. In addition to physical applications, vertex operator algebras have proven useful in purely mathematical contexts such as monstrous moonshine and the geometric Langlands correspondence. The related notion of vertex algebra was introduced by Richard Borcherds in 1986, motivated by a construction of an infinite-dimensional Lie algebra due to Igor Frenkel. In the course of this construction, one employs a Fock space that admits an action of vertex operators attached to elements of a lattice. Borcherds formulated the notion of vertex algebra by axiomatizing the relations between the lattice vertex operators, producing an algebraic structure that allows one to construct new Lie algebras by following Frenkel's method. The notion of vertex operator algebra was introduced as a modification of the notion of vertex algebra, by Frenkel, James Lepowsky, and Arne Meurman in 1988, as part of their project to construct the moonshine module. They observed that many vertex algebras that appear 'in nature' carry an action of the Virasoro algebra, and satisfy a bounded-below property with respect to an energy operator. Motivated by this observation, they added the Virasoro action and bounded-below property as axioms. We now have post-hoc motivation for these notions from physics, together with several interpretations of the axioms that were not initially known. Physically, the vertex operators arising from holomorphic field insertions at points in two-dimensional conformal field theory admit operator product expansions when insertions collide, and these satisfy precisely the relations specified in the definition of vertex operator algebra. Indeed, the axioms of a vertex operator algebra are a formal algebraic interpretation of what physicists call chiral algebras (not to be confused with the more precise notion with the same name in mathematics) or "algebras of chiral symmetries", where these symmetries describe the Ward identities satisfied by a given conformal field theory, including conformal invariance. Other formulations of the vertex algebra axioms include Borcherds's later work on singular commutative rings, algebras over certain operads on curves introduced by Huang, Kriz, and others, D-module-theoretic objects called chiral algebras introduced by Alexander Beilinson and Vladimir Drinfeld and factorization algebras, also introduced by Beilinson and Drinfeld. Important basic examples of vertex operator algebras include the lattice VOAs (modeling lattice conformal field theories), VOAs given by representations of affine Kac–Moody algebras (from the WZW model), the Virasoro VOAs, which are VOAs corresponding to representations of the Virasoro algebra, and the moonshine module V♮, which is distinguished by its monster symmetry. More sophisticated examples such as affine W-algebras and the chiral de Rham complex on a complex manifold arise in geometric representation theory and mathematical physics. == Formal definition == === Vertex algebra === A vertex algebra is a collection of data that satisfy certain axioms. ==== Data ==== a vector space V {\displaystyle V} , called the space of states. The underlying field is typically taken to be the complex numbers, although Borcherds's original formulation allowed for an arbitrary commutative ring. 
an identity element 1 ∈ V {\displaystyle 1\in V} , sometimes written | 0 ⟩ {\displaystyle |0\rangle } or Ω {\displaystyle \Omega } to indicate a vacuum state. an endomorphism T : V → V {\displaystyle T:V\rightarrow V} , called "translation". (Borcherds's original formulation included a system of divided powers of T {\displaystyle T} , because he did not assume the ground ring was divisible.) a linear multiplication map Y : V ⊗ V → V ( ( z ) ) {\displaystyle Y:V\otimes V\rightarrow V((z))} , where V ( ( z ) ) {\displaystyle V((z))} is the space of all formal Laurent series with coefficients in V {\displaystyle V} . This structure has some alternative presentations: as an infinite collection of bilinear products ⋅ n : u ⊗ v ↦ u n v {\displaystyle \cdot _{n}:u\otimes v\mapsto u_{n}v} where n ∈ Z {\displaystyle n\in \mathbb {Z} } and u n ∈ E n d ( V ) {\displaystyle u_{n}\in \mathrm {End} (V)} , so that for each v {\displaystyle v} , there is an N {\displaystyle N} such that u n v = 0 {\displaystyle u_{n}v=0} for n > N {\displaystyle n>N} . as a left-multiplication map V → E n d ( V ) [ [ z ± 1 ] ] {\displaystyle V\rightarrow \mathrm {End} (V)[[z^{\pm 1}]]} . This is the 'state-to-field' map of the so-called state-field correspondence. For each u ∈ V {\displaystyle u\in V} , the endomorphism-valued formal distribution Y ( u , z ) {\displaystyle Y(u,z)} is called a vertex operator or a field, and the coefficient of z − n − 1 {\displaystyle z^{-n-1}} is the operator u n {\displaystyle u_{n}} . In the context of vertex algebras, a field is more precisely an element of E n d ( V ) [ [ z ± 1 ] ] {\displaystyle \mathrm {End} (V)[[z^{\pm 1}]]} , which can be written A ( z ) = ∑ n ∈ Z A n z n , A n ∈ E n d ( V ) {\displaystyle A(z)=\sum _{n\in \mathbb {Z} }A_{n}z^{n},A_{n}\in \mathrm {End} (V)} such that for any v ∈ V , A n v = 0 {\displaystyle v\in V,A_{n}v=0} for sufficiently small n {\displaystyle n} (which may depend on v {\displaystyle v} ). The standard notation for the multiplication is u ⊗ v ↦ Y ( u , z ) v = ∑ n ∈ Z u n v z − n − 1 . {\displaystyle u\otimes v\mapsto Y(u,z)v=\sum _{n\in \mathbf {Z} }u_{n}vz^{-n-1}.} ==== Axioms ==== These data are required to satisfy the following axioms: Identity. For any u ∈ V , Y ( 1 , z ) u = u {\displaystyle u\in V\,,\,Y(1,z)u=u} and Y ( u , z ) 1 ∈ u + z V [ [ z ] ] {\displaystyle \,Y(u,z)1\in u+zV[[z]]} . Translation. T ( 1 ) = 0 {\displaystyle T(1)=0} , and for any u , v ∈ V {\displaystyle u,v\in V} , [ T , Y ( u , z ) ] v = T Y ( u , z ) v − Y ( u , z ) T v = d d z Y ( u , z ) v {\displaystyle [T,Y(u,z)]v=TY(u,z)v-Y(u,z)Tv={\frac {d}{dz}}Y(u,z)v} Locality (Jacobi identity, or Borcherds identity). For any u , v ∈ V {\displaystyle u,v\in V} , there exists a positive integer N such that: ( z − x ) N Y ( u , z ) Y ( v , x ) = ( z − x ) N Y ( v , x ) Y ( u , z ) .
{\displaystyle (z-x)^{N}Y(u,z)Y(v,x)=(z-x)^{N}Y(v,x)Y(u,z).} ===== Equivalent formulations of locality axiom ===== The locality axiom has several equivalent formulations in the literature, e.g., Frenkel–Lepowsky–Meurman introduced the Jacobi identity: ∀ u , v , w ∈ V {\displaystyle \forall u,v,w\in V} , z − 1 δ ( x − y z ) Y ( u , x ) Y ( v , y ) w − z − 1 δ ( − y + x z ) Y ( v , y ) Y ( u , x ) w = y − 1 δ ( x − z y ) Y ( Y ( u , z ) v , y ) w , {\displaystyle {\begin{aligned}&z^{-1}\delta \left({\frac {x-y}{z}}\right)Y(u,x)Y(v,y)w-z^{-1}\delta \left({\frac {-y+x}{z}}\right)Y(v,y)Y(u,x)w\\&=y^{-1}\delta \left({\frac {x-z}{y}}\right)Y(Y(u,z)v,y)w\end{aligned}},} where we define the formal delta series by: δ ( x − y z ) := ∑ s ≥ 0 , r ∈ Z ( r s ) ( − 1 ) s y r − s x s z − r . {\displaystyle \delta \left({\frac {x-y}{z}}\right):=\sum _{s\geq 0,r\in \mathbf {Z} }{\binom {r}{s}}(-1)^{s}y^{r-s}x^{s}z^{-r}.} Borcherds initially used the following two identities: for any u , v , w ∈ V {\displaystyle u,v,w\in V} and integers m , n {\displaystyle m,n} we have ( u m ( v ) ) n ( w ) = ∑ i ≥ 0 ( − 1 ) i ( m i ) ( u m − i ( v n + i ( w ) ) − ( − 1 ) m v m + n − i ( u i ( w ) ) ) {\displaystyle (u_{m}(v))_{n}(w)=\sum _{i\geq 0}(-1)^{i}{\binom {m}{i}}\left(u_{m-i}(v_{n+i}(w))-(-1)^{m}v_{m+n-i}(u_{i}(w))\right)} and u m v = ∑ i ≥ 0 ( − 1 ) m + i + 1 T i i ! v m + i u {\displaystyle u_{m}v=\sum _{i\geq 0}(-1)^{m+i+1}{\frac {T^{i}}{i!}}v_{m+i}u} . He later gave a more expansive version that is equivalent but easier to use: for any u , v , w ∈ V {\displaystyle u,v,w\in V} and integers m , n , q {\displaystyle m,n,q} we have ∑ i ∈ Z ( m i ) ( u q + i ( v ) ) m + n − i ( w ) = ∑ i ∈ Z ( − 1 ) i ( q i ) ( u m + q − i ( v n + i ( w ) ) − ( − 1 ) q v n + q − i ( u m + i ( w ) ) ) {\displaystyle \sum _{i\in \mathbf {Z} }{\binom {m}{i}}\left(u_{q+i}(v)\right)_{m+n-i}(w)=\sum _{i\in \mathbf {Z} }(-1)^{i}{\binom {q}{i}}\left(u_{m+q-i}\left(v_{n+i}(w)\right)-(-1)^{q}v_{n+q-i}\left(u_{m+i}(w)\right)\right)} This identity is the same as the Jacobi identity by expanding both sides in all formal variables. Finally, there is a formal function version of locality: For any u , v , w ∈ V {\displaystyle u,v,w\in V} , there is an element X ( u , v , w ; z , x ) ∈ V [ [ z , x ] ] [ z − 1 , x − 1 , ( z − x ) − 1 ] {\displaystyle X(u,v,w;z,x)\in V[[z,x]]\left[z^{-1},x^{-1},(z-x)^{-1}\right]} such that Y ( u , z ) Y ( v , x ) w {\displaystyle Y(u,z)Y(v,x)w} and Y ( v , x ) Y ( u , z ) w {\displaystyle Y(v,x)Y(u,z)w} are the corresponding expansions of X ( u , v , w ; z , x ) {\displaystyle X(u,v,w;z,x)} in V ( ( z ) ) ( ( x ) ) {\displaystyle V((z))((x))} and V ( ( x ) ) ( ( z ) ) {\displaystyle V((x))((z))} . === Vertex operator algebra === A vertex operator algebra is a vertex algebra equipped with a conformal element ω ∈ V {\displaystyle \omega \in V} , such that the vertex operator Y ( ω , z ) {\displaystyle Y(\omega ,z)} is the weight two Virasoro field L ( z ) {\displaystyle L(z)} : Y ( ω , z ) = ∑ n ∈ Z ω n z − n − 1 = L ( z ) = ∑ n ∈ Z L n z − n − 2 {\displaystyle Y(\omega ,z)=\sum _{n\in \mathbf {Z} }\omega _{n}{z^{-n-1}}=L(z)=\sum _{n\in \mathbf {Z} }L_{n}z^{-n-2}} and satisfies the following properties: [ L m , L n ] = ( m − n ) L m + n + 1 12 δ m + n , 0 ( m 3 − m ) c I d V {\displaystyle [L_{m},L_{n}]=(m-n)L_{m+n}+{\frac {1}{12}}\delta _{m+n,0}(m^{3}-m)c\,\mathrm {Id} _{V}} , where c {\displaystyle c} is a constant called the central charge, or rank of V {\displaystyle V} . 
In particular, the coefficients of this vertex operator endow V {\displaystyle V} with an action of the Virasoro algebra with central charge c {\displaystyle c} . L 0 {\displaystyle L_{0}} acts semisimply on V {\displaystyle V} with integer eigenvalues that are bounded below. Under the grading provided by the eigenvalues of L 0 {\displaystyle L_{0}} , the multiplication on V {\displaystyle V} is homogeneous in the sense that if u {\displaystyle u} and v {\displaystyle v} are homogeneous, then u n v {\displaystyle u_{n}v} is homogeneous of degree d e g ( u ) + d e g ( v ) − n − 1 {\displaystyle \mathrm {deg} (u)+\mathrm {deg} (v)-n-1} . The identity 1 {\displaystyle 1} has degree 0, and the conformal element ω {\displaystyle \omega } has degree 2. L − 1 = T {\displaystyle L_{-1}=T} . A homomorphism of vertex algebras is a map of the underlying vector spaces that respects the additional identity, translation, and multiplication structure. Homomorphisms of vertex operator algebras have "weak" and "strong" forms, depending on whether they respect conformal vectors. == Commutative vertex algebras == A vertex algebra V {\displaystyle V} is commutative if all vertex operators Y ( u , z ) {\displaystyle Y(u,z)} commute with each other. This is equivalent to the property that all products Y ( u , z ) v {\displaystyle Y(u,z)v} lie in V [ [ z ] ] {\displaystyle V[[z]]} , or that Y ( u , z ) ∈ End ⁡ ( V ) [ [ z ] ] {\displaystyle Y(u,z)\in \operatorname {End} (V)[[z]]} . Thus, an alternative definition for a commutative vertex algebra is one in which all vertex operators Y ( u , z ) {\displaystyle Y(u,z)} are regular at z = 0 {\displaystyle z=0} . Given a commutative vertex algebra, the constant terms of multiplication endow the vector space with a commutative and associative ring structure; the vacuum vector 1 {\displaystyle 1} is a unit, and T {\displaystyle T} is a derivation. Hence the commutative vertex algebra equips V {\displaystyle V} with the structure of a commutative unital algebra with derivation. Conversely, any commutative ring V {\displaystyle V} with derivation T {\displaystyle T} has a canonical vertex algebra structure, where we set Y ( u , z ) v = ( e z T u ) v {\displaystyle Y(u,z)v=(e^{zT}u)v} ; the constant term in z {\displaystyle z} is u − 1 v = u v {\displaystyle u_{-1}v=uv} , so that at z = 0 {\displaystyle z=0} the map Y {\displaystyle Y} recovers the multiplication map u ↦ u ⋅ {\displaystyle u\mapsto u\cdot } with ⋅ {\displaystyle \cdot } the algebra product. If the derivation T {\displaystyle T} vanishes, we may set ω = 0 {\displaystyle \omega =0} to obtain a vertex operator algebra concentrated in degree zero. Any finite-dimensional vertex algebra is commutative. Thus even the smallest examples of noncommutative vertex algebras require significant introduction. == Basic properties == The translation operator T {\displaystyle T} in a vertex algebra induces infinitesimal symmetries on the product structure, and satisfies the following properties: Y ( u , z ) 1 = e z T u {\displaystyle \,Y(u,z)1=e^{zT}u} T u = u − 2 1 {\displaystyle \,Tu=u_{-2}1} , so T {\displaystyle T} is determined by Y {\displaystyle Y} .
Y ( T u , z ) = d Y ( u , z ) d z {\displaystyle \,Y(Tu,z)={\frac {\mathrm {d} Y(u,z)}{\mathrm {d} z}}} e x T Y ( u , z ) e − x T = Y ( e x T u , z ) = Y ( u , z + x ) {\displaystyle \,e^{xT}Y(u,z)e^{-xT}=Y(e^{xT}u,z)=Y(u,z+x)} (skew-symmetry) Y ( u , z ) v = e z T Y ( v , − z ) u {\displaystyle Y(u,z)v=e^{zT}Y(v,-z)u} For a vertex operator algebra, the other Virasoro operators satisfy similar properties: x L 0 Y ( u , z ) x − L 0 = Y ( x L 0 u , x z ) {\displaystyle \,x^{L_{0}}Y(u,z)x^{-L_{0}}=Y(x^{L_{0}}u,xz)} e x L 1 Y ( u , z ) e − x L 1 = Y ( e x ( 1 − x z ) L 1 ( 1 − x z ) − 2 L 0 u , z ( 1 − x z ) − 1 ) {\displaystyle \,e^{xL_{1}}Y(u,z)e^{-xL_{1}}=Y(e^{x(1-xz)L_{1}}(1-xz)^{-2L_{0}}u,z(1-xz)^{-1})} (quasi-conformality) [ L m , Y ( u , z ) ] = ∑ k = 0 m + 1 ( m + 1 k ) z k Y ( L m − k u , z ) {\displaystyle [L_{m},Y(u,z)]=\sum _{k=0}^{m+1}{\binom {m+1}{k}}z^{k}Y(L_{m-k}u,z)} for all m ≥ − 1 {\displaystyle m\geq -1} . (Associativity, or Cousin property): For any u , v , w ∈ V {\displaystyle u,v,w\in V} , the element X ( u , v , w ; z , x ) ∈ V [ [ z , x ] ] [ z − 1 , x − 1 , ( z − x ) − 1 ] {\displaystyle X(u,v,w;z,x)\in V[[z,x]][z^{-1},x^{-1},(z-x)^{-1}]} given in the definition also expands to Y ( Y ( u , z − x ) v , x ) w {\displaystyle Y(Y(u,z-x)v,x)w} in V ( ( x ) ) ( ( z − x ) ) {\displaystyle V((x))((z-x))} . The associativity property of a vertex algebra follows from the fact that the commutator of Y ( u , z ) {\displaystyle Y(u,z)} and Y ( v , x ) {\displaystyle Y(v,x)} is annihilated by a finite power of z − x {\displaystyle z-x} , i.e., one can expand it as a finite linear combination of derivatives of the formal delta function in ( z − x ) {\displaystyle (z-x)} , with coefficients in E n d ( V ) {\displaystyle \mathrm {End} (V)} . Reconstruction: Let V {\displaystyle V} be a vertex algebra, and let J a {\displaystyle J^{a}} be a set of vectors, with corresponding fields J a ( z ) ∈ E n d ( V ) [ [ z ± 1 ] ] {\displaystyle J^{a}(z)\in \mathrm {End} (V)[[z^{\pm 1}]]} . If V {\displaystyle V} is spanned by monomials in the positive weight coefficients of the fields (i.e., finite products of operators J n a {\displaystyle J_{n}^{a}} applied to 1 {\displaystyle 1} , where n {\displaystyle n} is negative), then we may write the operator product of such a monomial as a normally ordered product of divided power derivatives of fields (here, normal ordering means polar terms on the left are moved to the right). Specifically, Y ( J n 1 + 1 a 1 J n 2 + 1 a 2 . . . J n k + 1 a k 1 , z ) =: ∂ n 1 ∂ z n 1 J a 1 ( z ) n 1 ! ∂ n 2 ∂ z n 2 J a 2 ( z ) n 2 ! ⋯ ∂ n k ∂ z n k J a k ( z ) n k ! : {\displaystyle Y(J_{n_{1}+1}^{a_{1}}J_{n_{2}+1}^{a_{2}}...J_{n_{k}+1}^{a_{k}}1,z)=:{\frac {\partial ^{n_{1}}}{\partial z^{n_{1}}}}{\frac {J^{a_{1}}(z)}{n_{1}!}}{\frac {\partial ^{n_{2}}}{\partial z^{n_{2}}}}{\frac {J^{a_{2}}(z)}{n_{2}!}}\cdots {\frac {\partial ^{n_{k}}}{\partial z^{n_{k}}}}{\frac {J^{a_{k}}(z)}{n_{k}!}}:} More generally, if one is given a vector space V {\displaystyle V} with an endomorphism T {\displaystyle T} and vector 1 {\displaystyle 1} , and one assigns to a set of vectors J a {\displaystyle J^{a}} a set of fields J a ( z ) ∈ E n d ( V ) [ [ z ± 1 ] ] {\displaystyle J^{a}(z)\in \mathrm {End} (V)[[z^{\pm 1}]]} that are mutually local, whose positive weight coefficients generate V {\displaystyle V} , and that satisfy the identity and translation conditions, then the previous formula describes a vertex algebra structure.
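A worked special case may help fix these formulas. For the commutative vertex algebra attached to the polynomial ring C[t] with derivation T = d/dt (the commutative case described above), the property Y(u,z)1 = e^{zT}u is just Taylor's formula, and the whole multiplication reduces to products of shifted polynomials:

```latex
Y(u,z)v = \left(e^{zT}u\right)v = u(t+z)\,v(t),
\qquad
Y(u,z)1 = e^{z\,d/dt}\,u(t) = u(t+z) = \sum_{n \ge 0} \frac{z^{n}}{n!}\,u^{(n)}(t).
```

Skew-symmetry Y(u,z)v = e^{zT}Y(v,−z)u then reads u(t+z)v(t) = e^{zT}(v(t−z)u(t)) = v(t)u(t+z), which holds simply because the ring is commutative.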
=== Operator product expansion === In vertex algebra theory, due to associativity, we can abuse notation to write, for A , B , C ∈ V , {\displaystyle A,B,C\in V,} Y ( A , z ) Y ( B , w ) C = ∑ n ∈ Z Y ( A ( n ) ⋅ B , w ) ( z − w ) n + 1 C . {\displaystyle Y(A,z)Y(B,w)C=\sum _{n\in \mathbb {Z} }{\frac {Y(A_{(n)}\cdot B,w)}{(z-w)^{n+1}}}C.} This is the operator product expansion. Equivalently, Y ( A , z ) Y ( B , w ) = ∑ n ≥ 0 Y ( A ( n ) ⋅ B , w ) ( z − w ) n + 1 + : Y ( A , z ) Y ( B , w ) : . {\displaystyle Y(A,z)Y(B,w)=\sum _{n\geq 0}{\frac {Y(A_{(n)}\cdot B,w)}{(z-w)^{n+1}}}+:Y(A,z)Y(B,w):.} Since the normal ordered part is regular in z {\displaystyle z} and w {\displaystyle w} , this can be written more in line with physics conventions as Y ( A , z ) Y ( B , w ) ∼ ∑ n ≥ 0 Y ( A ( n ) ⋅ B , w ) ( z − w ) n + 1 , {\displaystyle Y(A,z)Y(B,w)\sim \sum _{n\geq 0}{\frac {Y(A_{(n)}\cdot B,w)}{(z-w)^{n+1}}},} where the equivalence relation ∼ {\displaystyle \sim } denotes equivalence up to regular terms. ==== Commonly used OPEs ==== Here some OPEs frequently found in conformal field theory are recorded. == Examples from Lie algebras == The basic examples come from infinite-dimensional Lie algebras. === Heisenberg vertex operator algebra === A basic example of a noncommutative vertex algebra is the rank 1 free boson, also called the Heisenberg vertex operator algebra. It is "generated" by a single vector b, in the sense that by applying the coefficients of the field b(z) := Y(b,z) to the vector 1, we obtain a spanning set. The underlying vector space is the infinite-variable polynomial ring C [ b − 1 , b − 2 , ⋯ ] {\displaystyle \mathbb {C} [b_{-1},b_{-2},\cdots ]} , where for positive n {\displaystyle n} , b − n {\displaystyle b_{-n}} acts obviously by multiplication, and b n {\displaystyle b_{n}} acts as n ∂ b − n {\displaystyle n\partial _{b_{-n}}} . The action of b0 is multiplication by zero, producing the "momentum zero" Fock representation V0 of the Heisenberg Lie algebra (generated by bn for integers n, with commutation relations [bn,bm]=n δn,–m), induced by the trivial representation of the subalgebra spanned by bn, n ≥ 0. The Fock space V0 can be made into a vertex algebra by the following definition of the state-operator map on a basis b j 1 b j 2 . . . b j k {\displaystyle b_{j_{1}}b_{j_{2}}...b_{j_{k}}} with each j i < 0 {\displaystyle j_{i}<0} , Y ( b j 1 b j 2 . . . b j k , z ) := 1 ( − j 1 − 1 ) ! ( − j 2 − 1 ) ! ⋯ ( − j k − 1 ) ! : ∂ − j 1 − 1 b ( z ) ∂ − j 2 − 1 b ( z ) . . . ∂ − j k − 1 b ( z ) : {\displaystyle Y(b_{j_{1}}b_{j_{2}}...b_{j_{k}},z):={\frac {1}{(-j_{1}-1)!(-j_{2}-1)!\cdots (-j_{k}-1)!}}:\partial ^{-j_{1}-1}b(z)\partial ^{-j_{2}-1}b(z)...\partial ^{-j_{k}-1}b(z):} where : O : {\displaystyle :{\mathcal {O}}:} denotes normal ordering of an operator O {\displaystyle {\mathcal {O}}} . The vertex operators may also be written as a functional of a multivariable function f as: Y [ f , z ] ≡: f ( b ( z ) 0 ! , b ′ ( z ) 1 ! , b ″ ( z ) 2 ! , . . . ) : {\displaystyle Y[f,z]\equiv :f\left({\frac {b(z)}{0!}},{\frac {b'(z)}{1!}},{\frac {b''(z)}{2!}},...\right):} if we understand that each term in the expansion of f is normal ordered. The rank n free boson is given by taking an n-fold tensor product of the rank 1 free boson. For any vector b in n-dimensional space, one has a field b(z) whose coefficients are elements of the rank n Heisenberg algebra, whose commutation relations have an extra inner product term: [bn,cm]=n (b,c) δn,–m. 
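The Fock-space action just described is concrete enough to compute with. The following sketch is our own minimal implementation (the helper names apply_mode and commutator are ours, not from any library): it represents states of the rank 1 module as dictionaries keyed by monomials in the b_{−n}, and checks the Heisenberg commutation relation [b_m, b_n] = m δ_{m,−n} on a sample vector.

```python
# Minimal sketch of the rank 1 Heisenberg Fock space C[b_{-1}, b_{-2}, ...].
# A state is a dict mapping a monomial (a sorted tuple of negative mode
# indices) to its coefficient; the vacuum |0> is the empty monomial.
from collections import defaultdict

def apply_mode(n, state):
    """Apply b_n: multiplication for n < 0, n * d/d(b_{-n}) for n > 0, 0 for n = 0."""
    out = defaultdict(complex)
    for mono, coeff in state.items():
        if n < 0:
            out[tuple(sorted(mono + (n,)))] += coeff
        elif n > 0:
            for i, m in enumerate(mono):     # differentiate each occurrence of b_{-n}
                if m == -n:
                    out[mono[:i] + mono[i + 1:]] += n * coeff
    return {k: v for k, v in out.items() if v}

def commutator(m, n, state):
    """Return [b_m, b_n] applied to state."""
    out = defaultdict(complex, apply_mode(m, apply_mode(n, state)))
    for k, v in apply_mode(n, apply_mode(m, state)).items():
        out[k] -= v
    return {k: v for k, v in out.items() if v}

vac = {(): 1.0}                                  # the vacuum vector 1
v = apply_mode(-3, apply_mode(-1, vac))          # the state b_{-3} b_{-1} 1
assert commutator(2, -2, v) == {m: 2 * c for m, c in v.items()}   # [b_2, b_{-2}] = 2
assert commutator(1, 2, v) == {}                                  # [b_1, b_2] = 0
print("Heisenberg relations verified on b_{-3} b_{-1} |0>")
```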
The Heisenberg vertex operator algebra has a one-parameter family of conformal vectors ω λ {\displaystyle \omega _{\lambda }} , with parameter λ ∈ C {\displaystyle \lambda \in \mathbb {C} } , given by ω λ = 1 2 b − 1 2 + λ b − 2 , {\displaystyle \omega _{\lambda }={\frac {1}{2}}b_{-1}^{2}+\lambda b_{-2},} with central charge c λ = 1 − 12 λ 2 {\displaystyle c_{\lambda }=1-12\lambda ^{2}} . When λ = 0 {\displaystyle \lambda =0} , there is the following formula for the Virasoro character: T r V q L 0 := ∑ n ∈ Z dim ⁡ V n q n = ∏ n ≥ 1 ( 1 − q n ) − 1 {\displaystyle Tr_{V}q^{L_{0}}:=\sum _{n\in \mathbf {Z} }\dim V_{n}q^{n}=\prod _{n\geq 1}(1-q^{n})^{-1}} This is the generating function for partitions, and is also written as q^{1/24} times the weight −1/2 modular form 1/η (the reciprocal of the Dedekind eta function). The rank n free boson then has an n-parameter family of Virasoro vectors, and when those parameters are zero, the character is q^{n/24} times the weight −n/2 modular form η^{−n}. === Virasoro vertex operator algebra === Virasoro vertex operator algebras are important for two reasons: First, the conformal element in a vertex operator algebra canonically induces a homomorphism from a Virasoro vertex operator algebra, so they play a universal role in the theory. Second, they are intimately connected to the theory of unitary representations of the Virasoro algebra, and these play a major role in conformal field theory. In particular, the unitary Virasoro minimal models are simple quotients of these vertex algebras, and their tensor products provide a way to combinatorially construct more complicated vertex operator algebras. The Virasoro vertex operator algebra is defined as an induced representation of the Virasoro algebra: If we choose a central charge c, there is a unique one-dimensional module for the subalgebra C[z]∂z + K for which K acts by cId, and C[z]∂z acts trivially, and the corresponding induced module is spanned by polynomials in L−n = −z−n+1∂z as n ranges over integers greater than 1. The module then has partition function T r V q L 0 = ∑ n ∈ Z dim ⁡ V n q n = ∏ n ≥ 2 ( 1 − q n ) − 1 {\displaystyle Tr_{V}q^{L_{0}}=\sum _{n\in \mathbf {Z} }\dim V_{n}q^{n}=\prod _{n\geq 2}(1-q^{n})^{-1}} . This space has a vertex operator algebra structure, where the vertex operators are defined by: Y ( L − n 1 − 2 L − n 2 − 2 . . . L − n k − 2 | 0 ⟩ , z ) ≡ 1 n 1 ! n 2 ! . . n k ! : ∂ n 1 L ( z ) ∂ n 2 L ( z ) . . . ∂ n k L ( z ) : {\displaystyle Y(L_{-n_{1}-2}L_{-n_{2}-2}...L_{-n_{k}-2}|0\rangle ,z)\equiv {\frac {1}{n_{1}!n_{2}!..n_{k}!}}:\partial ^{n_{1}}L(z)\partial ^{n_{2}}L(z)...\partial ^{n_{k}}L(z):} and ω = L − 2 | 0 ⟩ {\displaystyle \omega =L_{-2}|0\rangle } . The fact that the Virasoro field L(z) is local with respect to itself can be deduced from the formula for its self-commutator: [ L ( z ) , L ( x ) ] = ( ∂ ∂ x L ( x ) ) x − 1 δ ( z x ) − 2 L ( x ) x − 1 ∂ ∂ z δ ( z x ) − 1 12 c x − 1 ( ∂ ∂ z ) 3 δ ( z x ) {\displaystyle [L(z),L(x)]=\left({\frac {\partial }{\partial x}}L(x)\right)x^{-1}\delta \left({\frac {z}{x}}\right)-2L(x)x^{-1}{\frac {\partial }{\partial z}}\delta \left({\frac {z}{x}}\right)-{\frac {1}{12}}cx^{-1}\left({\frac {\partial }{\partial z}}\right)^{3}\delta \left({\frac {z}{x}}\right)} where c is the central charge.
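The two product formulas above have graded dimensions that are easy to check by machine: the coefficient of q^n counts partitions of n, into parts ≥ 1 for the free boson and parts ≥ 2 for the Virasoro vacuum module. A small sketch, with a helper name of our own:

```python
# Expand prod_{n >= n_min} (1 - q^n)^(-1) up to q^N by repeatedly multiplying
# by the geometric series 1 + q^n + q^{2n} + ...; the coefficient of q^k counts
# partitions of k into parts >= n_min, i.e. the graded dimensions quoted above.
def product_character(n_min, N):
    coeffs = [1] + [0] * N
    for n in range(n_min, N + 1):
        for k in range(n, N + 1):
            coeffs[k] += coeffs[k - n]      # multiply by 1/(1 - q^n)
    return coeffs

print(product_character(1, 10))   # free boson:      [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
print(product_character(2, 10))   # Virasoro vacuum: [1, 0, 1, 1, 2, 2, 4, 4, 7, 8, 12]
```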
Given a vertex algebra homomorphism from a Virasoro vertex algebra of central charge c to any other vertex algebra, the vertex operator attached to the image of ω automatically satisfies the Virasoro relations, i.e., the image of ω is a conformal vector. Conversely, any conformal vector in a vertex algebra induces a distinguished vertex algebra homomorphism from some Virasoro vertex operator algebra. The Virasoro vertex operator algebras are simple, except when c has the form 1 − 6(p−q)²/pq for coprime integers p, q strictly greater than 1; this follows from Kac's determinant formula. In these exceptional cases, one has a unique maximal ideal, and the corresponding quotient is called a minimal model. When p = q+1, the vertex algebras are unitary representations of Virasoro, and their modules are known as discrete series representations. They play an important role in conformal field theory in part because they are unusually tractable, and for small p, they correspond to well-known statistical mechanics systems at criticality, e.g., the Ising model, the tri-critical Ising model, the three-state Potts model, etc. By work of Weiqiang Wang concerning fusion rules, we have a full description of the tensor categories of unitary minimal models. For example, when c=1/2 (Ising), there are three irreducible modules with lowest L0-weight 0, 1/2, and 1/16, and its fusion ring is Z[x,y]/(x²−1, y²−x−1, xy−y). === Affine vertex algebra === By replacing the Heisenberg Lie algebra with an untwisted affine Kac–Moody Lie algebra (i.e., the universal central extension of the loop algebra of a finite-dimensional simple Lie algebra), one may construct the vacuum representation in much the same way as the free boson vertex algebra is constructed. This algebra arises as the current algebra of the Wess–Zumino–Witten model, which produces the anomaly that is interpreted as the central extension. Concretely, pulling back the central extension 0 → C → g ^ → g [ t , t − 1 ] → 0 {\displaystyle 0\to \mathbb {C} \to {\hat {\mathfrak {g}}}\to {\mathfrak {g}}[t,t^{-1}]\to 0} along the inclusion g [ t ] → g [ t , t − 1 ] {\displaystyle {\mathfrak {g}}[t]\to {\mathfrak {g}}[t,t^{-1}]} yields a split extension, and the vacuum module is induced from the one-dimensional representation of the latter on which a central basis element acts by some chosen constant called the "level". Since central elements can be identified with invariant inner products on the finite type Lie algebra g {\displaystyle {\mathfrak {g}}} , one typically normalizes the level so that the Killing form has level twice the dual Coxeter number. Equivalently, level one gives the inner product for which the longest root has norm 2. This matches the loop algebra convention, where levels are discretized by the third cohomology of simply connected compact Lie groups. By choosing a basis J^a of the finite type Lie algebra, one may form a basis of the affine Lie algebra using J^a_n = J^a t^n together with a central element K. By reconstruction, we can describe the vertex operators by normal ordered products of derivatives of the fields J a ( z ) = ∑ n = − ∞ ∞ J n a z − n − 1 = ∑ n = − ∞ ∞ ( J a t n ) z − n − 1 . {\displaystyle J^{a}(z)=\sum _{n=-\infty }^{\infty }J_{n}^{a}z^{-n-1}=\sum _{n=-\infty }^{\infty }(J^{a}t^{n})z^{-n-1}.} When the level is non-critical, i.e., the inner product is not minus one half of the Killing form, the vacuum representation has a conformal element, given by the Sugawara construction.
For any choice of dual bases J_a, J^a with respect to the level 1 inner product, the conformal element is ω = 1 2 ( k + h ∨ ) ∑ a J a , − 1 J − 1 a 1 {\displaystyle \omega ={\frac {1}{2(k+h^{\vee })}}\sum _{a}J_{a,-1}J_{-1}^{a}1} and yields a vertex operator algebra whose central charge is k ⋅ dim ⁡ g / ( k + h ∨ ) {\displaystyle k\cdot \dim {\mathfrak {g}}/(k+h^{\vee })} . At critical level, the conformal structure is destroyed, since the denominator is zero, but one may produce operators Ln for n ≥ –1 by taking a limit as k approaches criticality. == Modules == Much like ordinary rings, vertex algebras admit a notion of module, or representation. Modules play an important role in conformal field theory, where they are often called sectors. A standard assumption in the physics literature is that the full Hilbert space of a conformal field theory decomposes into a sum of tensor products of left-moving and right-moving sectors: H ≅ ⨁ i ∈ I M i ⊗ M i ¯ {\displaystyle {\mathcal {H}}\cong \bigoplus _{i\in I}M_{i}\otimes {\overline {M_{i}}}} That is, a conformal field theory has a vertex operator algebra of left-moving chiral symmetries, a vertex operator algebra of right-moving chiral symmetries, and the sectors moving in a given direction are modules for the corresponding vertex operator algebra. === Definition === Given a vertex algebra V with multiplication Y, a V-module is a vector space M equipped with an action YM: V ⊗ M → M((z)), satisfying the following conditions: (Identity) YM(1,z) = IdM (Associativity, or Jacobi identity) For any u, v ∈ V, w ∈ M, there is an element X ( u , v , w ; z , x ) ∈ M [ [ z , x ] ] [ z − 1 , x − 1 , ( z − x ) − 1 ] {\displaystyle X(u,v,w;z,x)\in M[[z,x]][z^{-1},x^{-1},(z-x)^{-1}]} such that YM(u,z)YM(v,x)w and YM(Y(u,z–x)v,x)w are the corresponding expansions of X ( u , v , w ; z , x ) {\displaystyle X(u,v,w;z,x)} in M((z))((x)) and M((x))((z–x)). Equivalently, the following "Jacobi identity" holds: z − 1 δ ( x − y z ) Y M ( u , x ) Y M ( v , y ) w − z − 1 δ ( − y + x z ) Y M ( v , y ) Y M ( u , x ) w = y − 1 δ ( x − z y ) Y M ( Y ( u , z ) v , y ) w . {\displaystyle z^{-1}\delta \left({\frac {x-y}{z}}\right)Y^{M}(u,x)Y^{M}(v,y)w-z^{-1}\delta \left({\frac {-y+x}{z}}\right)Y^{M}(v,y)Y^{M}(u,x)w=y^{-1}\delta \left({\frac {x-z}{y}}\right)Y^{M}(Y(u,z)v,y)w.} The modules of a vertex algebra form an abelian category. When working with vertex operator algebras, the previous definition is sometimes given the name weak V {\displaystyle V} -module, and genuine V-modules must respect the conformal structure given by the conformal vector ω {\displaystyle \omega } . More precisely, they are required to satisfy the additional condition that L0 acts semisimply with finite-dimensional eigenspaces and eigenvalues bounded below in each coset of Z. Work of Huang, Lepowsky, Miyamoto, and Zhang has shown at various levels of generality that modules of a vertex operator algebra admit a fusion tensor product operation, and form a braided tensor category. When the category of V-modules is semisimple with finitely many irreducible objects, the vertex operator algebra V is called rational. Rational vertex operator algebras satisfying an additional finiteness hypothesis (known as Zhu's C2-cofiniteness condition) are known to be particularly well-behaved, and are called regular. For example, Zhu's 1996 modular invariance theorem asserts that the characters of modules of a regular VOA form a vector-valued representation of S L ( 2 , Z ) {\displaystyle \mathrm {SL} (2,\mathbb {Z} )} .
In particular, if a VOA is holomorphic, that is, its representation category is equivalent to that of vector spaces, then its partition function is S L ( 2 , Z ) {\displaystyle \mathrm {SL} (2,\mathbb {Z} )} -invariant up to a constant. Huang showed that the category of modules of a regular VOA is a modular tensor category, and its fusion rules satisfy the Verlinde formula. === Heisenberg algebra modules === Modules of the Heisenberg algebra can be constructed as Fock spaces π λ {\displaystyle \pi _{\lambda }} for λ ∈ C {\displaystyle \lambda \in \mathbb {C} } which are induced representations of the Heisenberg Lie algebra, given by a vacuum vector v λ {\displaystyle v_{\lambda }} satisfying b n v λ = 0 {\displaystyle b_{n}v_{\lambda }=0} for n > 0 {\displaystyle n>0} , b 0 v λ = λ v λ {\displaystyle b_{0}v_{\lambda }=\lambda v_{\lambda }} , and being acted on freely by the negative modes b − n {\displaystyle b_{-n}} for n > 0 {\displaystyle n>0} . The space can be written as C [ b − 1 , b − 2 , ⋯ ] v λ {\displaystyle \mathbb {C} [b_{-1},b_{-2},\cdots ]v_{\lambda }} . Every irreducible, Z {\displaystyle \mathbb {Z} } -graded Heisenberg algebra module with gradation bounded below is of this form. These are used to construct lattice vertex algebras, which as vector spaces are direct sums of Heisenberg modules, when the image of Y {\displaystyle Y} is extended appropriately to module elements. The module category is not semisimple, since one may induce a representation of the abelian Lie algebra where b0 acts by a nontrivial Jordan block. For the rank n free boson, one has an irreducible module Vλ for each vector λ in complex n-dimensional space. Each vector b ∈ Cn yields the operator b0, and the Fock space Vλ is distinguished by the property that each such b0 acts as scalar multiplication by the inner product (b, λ). === Twisted modules === Unlike ordinary rings, vertex algebras admit a notion of twisted module attached to an automorphism. For an automorphism σ of order N, the action has the form V ⊗ M → M((z^{1/N})), with the following monodromy condition: if u ∈ V satisfies σ u = exp(2πik/N)u, then un = 0 unless n satisfies n+k/N ∈ Z (there is some disagreement about signs among specialists). Geometrically, twisted modules can be attached to branch points on an algebraic curve with a ramified Galois cover. In the conformal field theory literature, twisted modules are called twisted sectors, and are intimately connected with string theory on orbifolds. == Additional examples == === Vertex operator algebra defined by an even lattice === The lattice vertex algebra construction was the original motivation for defining vertex algebras. It is constructed by taking a sum of irreducible modules for the Heisenberg algebra corresponding to lattice vectors, and defining a multiplication operation by specifying intertwining operators between them. That is, if Λ is an even lattice (if the lattice is not even, the structure obtained is instead a vertex superalgebra), the lattice vertex algebra VΛ decomposes into free bosonic modules as: V Λ ≅ ⨁ λ ∈ Λ V λ {\displaystyle V_{\Lambda }\cong \bigoplus _{\lambda \in \Lambda }V_{\lambda }} Lattice vertex algebras are canonically attached to double covers of even integral lattices, rather than the lattices themselves. While each such lattice has a unique lattice vertex algebra up to isomorphism, the vertex algebra construction is not functorial, because lattice automorphisms have an ambiguity in lifting.
The double covers in question are uniquely determined up to isomorphism by the following rule: elements have the form ±eα for lattice vectors α ∈ Λ (i.e., there is a map to Λ sending eα to α that forgets signs), and multiplication satisfies the relations eαeβ = (−1)^{(α,β)}eβeα. Another way to describe this is that given an even lattice Λ, there is a unique (up to coboundary) normalised cocycle ε(α, β) with values ±1 such that (−1)^{(α,β)} = ε(α, β) ε(β, α), where the normalization condition is that ε(α, 0) = ε(0, α) = 1 for all α ∈ Λ. This cocycle induces a central extension of Λ by a group of order 2, and we obtain a twisted group ring Cε[Λ] with basis eα (α ∈ Λ), and multiplication rule eαeβ = ε(α, β)eα+β; the cocycle condition on ε ensures associativity of the ring. The vertex operator attached to lowest weight vector vλ in the Fock space Vλ is Y ( v λ , z ) = e λ : exp ⁡ ∫ λ ( z ) := e λ z λ exp ⁡ ( ∑ n < 0 λ n z − n n ) exp ⁡ ( ∑ n > 0 λ n z − n n ) , {\displaystyle Y(v_{\lambda },z)=e_{\lambda }:\exp \int \lambda (z):=e_{\lambda }z^{\lambda }\exp \left(\sum _{n<0}\lambda _{n}{\frac {z^{-n}}{n}}\right)\exp \left(\sum _{n>0}\lambda _{n}{\frac {z^{-n}}{n}}\right),} where z^λ is a shorthand for the linear map that takes any element of the α-Fock space Vα to the monomial z^{(λ,α)}. The vertex operators for other elements of the Fock space are then determined by reconstruction. As in the case of the free boson, one has a choice of conformal vector, given by an element s of the vector space Λ ⊗ C, but the condition that the extra Fock spaces have integer L0 eigenvalues constrains the choice of s: for an orthonormal basis xi, the conformal vector ωs = (1/2) Σi (xi)−1² + s−2 has integer L0 eigenvalues on each Fock space precisely when (s, λ) ∈ Z for all λ ∈ Λ, i.e., s lies in the dual lattice. If the even lattice Λ is generated by its "root vectors" (those satisfying (α, α)=2), and any two root vectors are joined by a chain of root vectors with consecutive inner products non-zero, then the vertex operator algebra is the unique simple quotient of the vacuum module of the affine Kac–Moody algebra of the corresponding simply laced simple Lie algebra at level one. This is known as the Frenkel–Kac (or Frenkel–Kac–Segal) construction, and is based on the earlier construction by Sergio Fubini and Gabriele Veneziano of the tachyonic vertex operator in the dual resonance model. Among other features, the zero modes of the vertex operators corresponding to root vectors give a construction of the underlying simple Lie algebra, related to a presentation originally due to Jacques Tits. In particular, one obtains a construction of all ADE type Lie groups directly from their root lattices; this is commonly considered the simplest way to construct the 248-dimensional group E8. === Monster vertex algebra === The monster vertex algebra V ♮ {\displaystyle V^{\natural }} (also called the "moonshine module") is the key to Borcherds's proof of the Monstrous moonshine conjectures. It was constructed by Frenkel, Lepowsky, and Meurman in 1988. It is notable because its character is the j-invariant with no constant term, j ( τ ) − 744 {\displaystyle j(\tau )-744} , and its automorphism group is the monster group. It is constructed by orbifolding the lattice vertex algebra constructed from the Leech lattice by the order 2 automorphism induced by reflecting the Leech lattice in the origin. That is, one forms the direct sum of the Leech lattice VOA with the twisted module, and takes the fixed points under an induced involution.
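The sign rule (−1)^{(α,β)} = ε(α,β)ε(β,α) from the lattice construction above can be realized explicitly by a standard trick: pick an ordered basis, set ε(α_i, α_j) = (−1)^{(α_i,α_j)} for i > j and +1 for i ≤ j, and extend bimultiplicatively. The sketch below checks this on the A2 root lattice; the Gram matrix and helper names are our own choices for illustration.

```python
# Sketch: a bimultiplicative 2-cocycle epsilon on an even lattice, built from
# an ordered basis, satisfying eps(a,b)*eps(b,a) == (-1)^{(a,b)}. Diagonal
# Gram entries never contribute to the sign because the lattice is even.
import itertools

G = [[2, -1], [-1, 2]]                    # Gram matrix of the A2 root lattice

def inner(a, b):
    return sum(a[i] * G[i][j] * b[j] for i in range(2) for j in range(2))

def eps(a, b):
    """Bimultiplicative extension of the basis cocycle to vectors a, b in Z^2."""
    s = sum(a[i] * b[j] * G[i][j]
            for i in range(2) for j in range(2) if i > j)   # sign only for i > j
    return -1 if s % 2 else 1

for a in itertools.product(range(-2, 3), repeat=2):
    for b in itertools.product(range(-2, 3), repeat=2):
        assert eps(a, b) * eps(b, a) == (-1) ** (inner(a, b) % 2)
print("cocycle sign rule verified on a patch of the A2 lattice")
```

Bimultiplicativity also makes ε automatically a 2-cocycle, which is what guarantees associativity of the twisted group ring described above.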
Frenkel, Lepowsky, and Meurman conjectured in 1988 that V ♮ {\displaystyle V^{\natural }} is the unique holomorphic vertex operator algebra with central charge 24, and partition function j ( τ ) − 744 {\displaystyle j(\tau )-744} . This conjecture is still open. === Chiral de Rham complex === Malikov, Schechtman, and Vaintrob showed that by a method of localization, one may canonically attach a bcβγ (boson–fermion superfield) system to a smooth complex manifold. This complex of sheaves has a distinguished differential, and the global cohomology is a vertex superalgebra. Ben-Zvi, Heluani, and Szczesny showed that a Riemannian metric on the manifold induces an N=1 superconformal structure, which is promoted to an N=2 structure if the metric is Kähler and Ricci-flat, and a hyperkähler structure induces an N=4 structure. Borisov and Libgober showed that one may obtain the two-variable elliptic genus of a compact complex manifold from the cohomology of the Chiral de Rham complex. If the manifold is Calabi–Yau, then this genus is a weak Jacobi form. === Vertex algebra associated to a surface defect === A vertex algebra can arise as a subsector of higher dimensional quantum field theory which localizes to a two real-dimensional submanifold of the space on which the higher dimensional theory is defined. A prototypical example is the construction of Beem, Lemos, Liendo, Peelaers, Rastelli, and van Rees which associates a vertex algebra to any 4d N=2 superconformal field theory. This vertex algebra has the property that its character coincides with the Schur index of the 4d superconformal theory. When the theory admits a weak coupling limit, the vertex algebra has an explicit description as a BRST reduction of a bcβγ system. == Vertex operator superalgebras == By allowing the underlying vector space to be a superspace (i.e., a Z/2Z-graded vector space V = V + ⊕ V − {\displaystyle V=V_{+}\oplus V_{-}} ) one can define a vertex superalgebra by the same data as a vertex algebra, with 1 in V+ and T an even operator. The axioms are essentially the same, but one must incorporate suitable signs into the locality axiom, or one of the equivalent formulations. That is, if a and b are homogeneous, one compares Y(a,z)Y(b,w) with εY(b,w)Y(a,z), where ε is –1 if both a and b are odd and 1 otherwise. If in addition there is a Virasoro element ω in the even part of V2, and the usual grading restrictions are satisfied, then V is called a vertex operator superalgebra. One of the simplest examples is the vertex operator superalgebra generated by a single free fermion ψ. As a Virasoro representation, it has central charge 1/2, and decomposes as a direct sum of Ising modules of lowest weight 0 and 1/2. One may also describe it as a spin representation of the Clifford algebra on the quadratic space t^{1/2}C[t,t^{−1}](dt)^{1/2} with residue pairing. The vertex operator superalgebra is holomorphic, in the sense that every module is a direct sum of copies of the superalgebra itself, i.e., the module category is equivalent to the category of vector spaces. The tensor square of the free fermion is called the free charged fermion, and by boson–fermion correspondence, it is isomorphic to the lattice vertex superalgebra attached to the odd lattice Z. This correspondence has been used by Date–Jimbo–Kashiwara–Miwa to construct soliton solutions to the KP hierarchy of nonlinear PDEs. == Superconformal structures == The Virasoro algebra has some supersymmetric extensions that naturally appear in superconformal field theory and superstring theory.
The N=1, 2, and 4 superconformal algebras are of particular importance. Infinitesimal holomorphic superconformal transformations of a supercurve (with one even local coordinate z and N odd local coordinates θ1,...,θN) are generated by the coefficients of a super-stress–energy tensor T(z, θ1, ..., θN). When N=1, T has even part given by a Virasoro field L(z), and odd part given by a field G ( z ) = ∑ n G n z − n − 3 / 2 {\displaystyle G(z)=\sum _{n}G_{n}z^{-n-3/2}} subject to the relations (where the bracket of two odd elements denotes the anticommutator) [ G m , L n ] = ( m − n / 2 ) G m + n {\displaystyle [G_{m},L_{n}]=(m-n/2)G_{m+n}} [ G m , G n ] = 2 L m + n + δ m , − n 4 m 2 − 1 12 c {\displaystyle [G_{m},G_{n}]=2L_{m+n}+\delta _{m,-n}{\frac {4m^{2}-1}{12}}c} By examining the symmetry of the operator products, one finds that there are two possibilities for the field G: the indices n are either all integers, yielding the Ramond algebra, or all half-integers, yielding the Neveu–Schwarz algebra. These algebras have unitary discrete series representations at central charge c ^ = 2 3 c = 1 − 8 m ( m + 2 ) m ≥ 3 {\displaystyle {\hat {c}}={\frac {2}{3}}c=1-{\frac {8}{m(m+2)}}\quad m\geq 3} and unitary representations for all c greater than 3/2, with lowest weight h only constrained by h ≥ 0 for Neveu–Schwarz and h ≥ c/24 for Ramond. An N=1 superconformal vector in a vertex operator algebra V of central charge c is an odd element τ ∈ V of weight 3/2, such that Y ( τ , z ) = G ( z ) = ∑ n ∈ Z + 1 / 2 G n z − n − 3 / 2 , {\displaystyle Y(\tau ,z)=G(z)=\sum _{n\in \mathbb {Z} +1/2}G_{n}z^{-n-3/2},} G−1/2τ = ω, and the coefficients of G(z) yield an action of the N=1 Neveu–Schwarz algebra at central charge c. For N=2 supersymmetry, one obtains even fields L(z) and J(z), and odd fields G+(z) and G−(z). The field J(z) generates an action of the Heisenberg algebra (described by physicists as a U(1) current). There are both Ramond and Neveu–Schwarz N=2 superconformal algebras, depending on whether the indexing on the G fields is integral or half-integral. However, the U(1) current gives rise to a one-parameter family of isomorphic superconformal algebras interpolating between Ramond and Neveu–Schwarz, and this deformation of structure is known as spectral flow. The unitary representations are given by discrete series with central charge c = 3 − 6/m for integers m at least 3, and a continuum of lowest weights for c > 3. An N=2 superconformal structure on a vertex operator algebra is a pair of odd elements τ+, τ− of weight 3/2, and an even element μ of weight 1 such that τ± generate G±(z), and μ generates J(z). For N=3 and 4, unitary representations only have central charges in a discrete family, with c=3k/2 and 6k, respectively, as k ranges over positive integers. == Additional constructions == Fixed point subalgebras: Given an action of a symmetry group on a vertex operator algebra, the subalgebra of fixed vectors is also a vertex operator algebra. In 2013, Miyamoto proved that two important finiteness properties, namely Zhu's condition C2 and regularity, are preserved when taking fixed points under finite solvable group actions. Current extensions: Given a vertex operator algebra and some modules of integral conformal weight, one may under favorable circumstances describe a vertex operator algebra structure on the direct sum. Lattice vertex algebras are a standard example of this. Another family of examples consists of framed VOAs, which start with tensor products of Ising models, and add modules that correspond to suitably even codes.
Orbifolds: Given a finite cyclic group acting on a holomorphic VOA, it is conjectured that one may construct a second holomorphic VOA by adjoining irreducible twisted modules and taking fixed points under an induced automorphism, as long as those twisted modules have suitable conformal weight. This is known to be true in special cases, e.g., groups of order at most 3 acting on lattice VOAs. The coset construction (due to Goddard, Kent, and Olive): Given a vertex operator algebra V of central charge c and a set S of vectors, one may define the commutant C(V,S) to be the subspace of vectors v that strictly commute with all fields coming from S, i.e., such that Y(s,z)v ∈ V[[z]] for all s ∈ S. This turns out to be a vertex subalgebra, with Y, T, and identity inherited from V. If S is a VOA of central charge cS, the commutant is a VOA of central charge c − cS. For example, the embedding of SU(2) at level k+1 into the tensor product of two SU(2) algebras at levels k and 1 yields the Virasoro discrete series with p=k+2, q=k+3, and this was used to prove their existence in the 1980s. Again with SU(2), the embedding of level k+2 into the tensor product of level k and level 2 yields the N=1 superconformal discrete series. BRST reduction: For any degree 1 vector v satisfying v0² = 0, the cohomology of this operator has a graded vertex superalgebra structure. More generally, one may use any weight 1 field whose residue has square zero. The usual method is to tensor with fermions, as one then has a canonical differential. An important special case is quantum Drinfeld–Sokolov reduction applied to affine Kac–Moody algebras to obtain affine W-algebras as degree 0 cohomology. These W-algebras also admit constructions as vertex subalgebras of free bosons given by kernels of screening operators. == Related algebraic structures == If one considers only the singular part of the OPE in a vertex algebra, one arrives at the definition of a Lie conformal algebra; since one is often only concerned with this singular part, Lie conformal algebras are a natural object to study. There is a functor from vertex algebras to Lie conformal algebras that forgets the regular part of OPEs, and it has a left adjoint, called the "universal vertex algebra" functor. Vacuum modules of affine Kac–Moody algebras and Virasoro vertex algebras are universal vertex algebras, and in particular, they can be described very concisely once the background theory is developed. There are several generalizations of the notion of vertex algebra in the literature. Some mild generalizations involve a weakening of the locality axiom to allow monodromy, e.g., the abelian intertwining algebras of Dong and Lepowsky. One may view these roughly as vertex algebra objects in a braided tensor category of graded vector spaces, in much the same way that a vertex superalgebra is such an object in the category of super vector spaces. More complicated generalizations relate to q-deformations and representations of quantum groups, such as in work of Frenkel–Reshetikhin, Etingof–Kazhdan, and Li. Beilinson and Drinfeld introduced a sheaf-theoretic notion of chiral algebra that is closely related to the notion of vertex algebra, but is defined without using any visible power series. Given an algebraic curve X, a chiral algebra on X is a DX-module A equipped with a multiplication operation j ∗ j ∗ ( A ⊠ A ) → Δ ∗ A {\displaystyle j_{*}j^{*}(A\boxtimes A)\to \Delta _{*}A} on X×X that satisfies an associativity condition.
They also introduced an equivalent notion of factorization algebra that is a system of quasicoherent sheaves on all finite products of the curve, together with a compatibility condition involving pullbacks to the complement of various diagonals. Any translation-equivariant chiral algebra on the affine line can be identified with a vertex algebra by taking the fiber at a point, and there is a natural way to attach a chiral algebra on a smooth algebraic curve to any vertex operator algebra. == See also == Operator algebra Zhu algebra == Notes == === Citations === == Sources ==
Wikipedia/Vertex_algebra
In physics, matrix string theory is a set of equations that describe superstring theory in a non-perturbative framework. Type IIA string theory can be shown to be equivalent to a maximally supersymmetric two-dimensional gauge theory, the gauge group of which is U(N) for a large value of N. This matrix string theory was first proposed by Luboš Motl in 1997 and later independently in a more complete paper by Robbert Dijkgraaf, Erik Verlinde, and Herman Verlinde. Another matrix string theory equivalent to Type IIB string theory was constructed in 1996 by Ishibashi, Kawai, Kitazawa and Tsuchiya. == See also == Matrix theory (physics) == References ==
Wikipedia/Matrix_string_theory
In theoretical physics, the anti-de Sitter/conformal field theory correspondence (frequently abbreviated as AdS/CFT) is a conjectured relationship between two kinds of physical theories. On one side are anti-de Sitter spaces (AdS) that are used in theories of quantum gravity, formulated in terms of string theory or M-theory. On the other side of the correspondence are conformal field theories (CFT) that are quantum field theories, including theories similar to the Yang–Mills theories that describe elementary particles. The duality represents a major advance in the understanding of string theory and quantum gravity. This is because it provides a non-perturbative formulation of string theory with certain boundary conditions and because it is the most successful realization of the holographic principle, an idea in quantum gravity originally proposed by Gerard 't Hooft and promoted by Leonard Susskind. It also provides a powerful toolkit for studying strongly coupled quantum field theories. Much of the usefulness of the duality results from the fact that it is a strong–weak duality: when the fields of the quantum field theory are strongly interacting, the ones in the gravitational theory are weakly interacting and thus more mathematically tractable. This fact has been used to study many aspects of nuclear and condensed matter physics by translating problems in those subjects into more mathematically tractable problems in string theory. The AdS/CFT correspondence was first proposed by Juan Maldacena in late 1997. Important aspects of the correspondence were soon elaborated on in two articles, one by Steven Gubser, Igor Klebanov and Alexander Polyakov, and another by Edward Witten. By 2015, Maldacena's article had over 10,000 citations, becoming the most highly cited article in the field of high energy physics. One of the most prominent examples of the AdS/CFT correspondence has been the AdS5/CFT4 correspondence: a relation between N = 4 supersymmetric Yang–Mills theory in 3+1 dimensions and type IIB superstring theory on AdS5 × S5. == Background == === Quantum gravity and strings === Current understanding of gravity is based on Albert Einstein's general theory of relativity. Formulated in 1915, general relativity explains gravity in terms of the geometry of space and time, or spacetime. It is formulated in the language of classical physics that was developed by physicists such as Isaac Newton and James Clerk Maxwell. The other nongravitational forces are explained in the framework of quantum mechanics. Developed in the first half of the twentieth century by a number of different physicists, quantum mechanics provides a radically different way of describing physical phenomena based on probability. Quantum gravity is the branch of physics that seeks to describe gravity using the principles of quantum mechanics. Currently, a popular approach to quantum gravity is string theory, which models elementary particles not as zero-dimensional points but as one-dimensional objects called strings. In the AdS/CFT correspondence, one typically considers theories of quantum gravity derived from string theory or its modern extension, M-theory. In everyday life, there are three familiar dimensions of space (up/down, left/right, and forward/backward), and there is one dimension of time. Thus, in the language of modern physics, one says that spacetime is four-dimensional. 
One peculiar feature of string theory and M-theory is that these theories require extra dimensions of spacetime for their mathematical consistency: in string theory spacetime is ten-dimensional, while in M-theory it is eleven-dimensional. The quantum gravity theories appearing in the AdS/CFT correspondence are typically obtained from string and M-theory by a process known as compactification. This produces a theory in which spacetime has effectively a lower number of dimensions and the extra dimensions are "curled up" into circles. A standard analogy for compactification is to consider a multidimensional object such as a garden hose. If the hose is viewed from a sufficient distance, it appears to have only one dimension, its length, but as one approaches the hose, one discovers that it contains a second dimension, its circumference. Thus, an ant crawling inside it would move in two dimensions. === Quantum field theory === The application of quantum mechanics to physical objects such as the electromagnetic field, which are extended in space and time, is known as quantum field theory. In particle physics, quantum field theories form the basis for our understanding of elementary particles, which are modeled as excitations in the fundamental fields. Quantum field theories are also used throughout condensed matter physics to model particle-like objects called quasiparticles. In the AdS/CFT correspondence, one considers, in addition to a theory of quantum gravity, a certain kind of quantum field theory called a conformal field theory. This is a particularly symmetric and mathematically well-behaved type of quantum field theory. Such theories are often studied in the context of string theory, where they are associated with the surface swept out by a string propagating through spacetime, and in statistical mechanics, where they model systems at a thermodynamic critical point. == Overview of the correspondence == === Geometry of anti-de Sitter space === In the AdS/CFT correspondence, one considers string theory or M-theory on an anti-de Sitter background. This means that the geometry of spacetime is described in terms of a certain vacuum solution of Einstein's equation called anti-de Sitter space. In very elementary terms, anti-de Sitter space is a mathematical model of spacetime in which the notion of distance between points (the metric) is different from the notion of distance in ordinary Euclidean geometry. It is closely related to hyperbolic space, which can be viewed as a disk as illustrated on the right. This image shows a tessellation of a disk by triangles and squares. One can define the distance between points of this disk in such a way that all the triangles and squares are the same size and the circular outer boundary is infinitely far from any point in the interior. Now imagine a stack of hyperbolic disks where each disk represents the state of the universe at a given time. The resulting geometric object is three-dimensional anti-de Sitter space. It looks like a solid cylinder in which any cross section is a copy of the hyperbolic disk. Time runs along the vertical direction in this picture. The surface of this cylinder plays an important role in the AdS/CFT correspondence. As with the hyperbolic plane, anti-de Sitter space is curved in such a way that any point in the interior is actually infinitely far from this boundary surface. This construction describes a hypothetical universe with only two space and one time dimension, but it can be generalized to any number of dimensions. 
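The statement that the boundary is infinitely far from any interior point can be made quantitative in the Poincaré disk model of hyperbolic space. The following minimal sketch (Python; the unit-curvature distance formula d = 2 artanh r is standard, and the function name is only illustrative) shows the divergence:

```python
import math

def poincare_distance_from_centre(r: float) -> float:
    """Hyperbolic distance from the centre of the Poincare disk to a
    point at Euclidean radius r (0 <= r < 1), for curvature -1."""
    return 2.0 * math.atanh(r)

# The boundary circle r = 1 is infinitely far away: the hyperbolic
# distance grows without bound as r approaches 1.
for r in (0.9, 0.99, 0.999, 0.999999):
    print(f"r = {r}: distance = {poincare_distance_from_centre(r):.2f}")
```

The distance diverges logarithmically as r → 1, which is the precise sense in which the boundary of each hyperbolic disk, and hence the cylindrical boundary of anti-de Sitter space, is infinitely far from any interior point.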
Indeed, hyperbolic space can have more than two dimensions and one can "stack up" copies of hyperbolic space to get higher-dimensional models of anti-de Sitter space. === Idea of AdS/CFT === An important feature of anti-de Sitter space is its boundary (which looks like a cylinder in the case of three-dimensional anti-de Sitter space). One property of this boundary is that, locally around any point, it looks just like Minkowski space, the model of spacetime used in nongravitational physics. One can therefore consider an auxiliary theory in which "spacetime" is given by the boundary of anti-de Sitter space. This observation is the starting point for the AdS/CFT correspondence, which states that the boundary of anti-de Sitter space can be regarded as the "spacetime" for a conformal field theory. The claim is that this conformal field theory is equivalent to the gravitational theory on the bulk anti-de Sitter space in the sense that there is a "dictionary" for translating calculations in one theory into calculations in the other. Every entity in one theory has a counterpart in the other theory. For example, a single particle in the gravitational theory might correspond to some collection of particles in the boundary theory. In addition, the predictions in the two theories are quantitatively identical so that if two particles have a 40 percent chance of colliding in the gravitational theory, then the corresponding collections in the boundary theory would also have a 40 percent chance of colliding. Notice that the boundary of anti-de Sitter space has fewer dimensions than anti-de Sitter space itself. For instance, in the three-dimensional example illustrated above, the boundary is a two-dimensional surface. The AdS/CFT correspondence is often described as a "holographic duality" because this relationship between the two theories is similar to the relationship between a three-dimensional object and its image as a hologram. Although a hologram is two-dimensional, it encodes information about all three dimensions of the object it represents. In the same way, theories that are related by the AdS/CFT correspondence are conjectured to be exactly equivalent, despite living in different numbers of dimensions. The conformal field theory is like a hologram that captures information about the higher-dimensional quantum gravity theory. === Examples of the correspondence === Following Maldacena's insight in 1997, theorists have discovered many different realizations of the AdS/CFT correspondence. These relate various conformal field theories to compactifications of string theory and M-theory in various numbers of dimensions. The theories involved are generally not viable models of the real world, but they have certain features, such as their particle content or high degree of symmetry, which make them useful for solving problems in quantum field theory and quantum gravity. The most famous example of the AdS/CFT correspondence states that type IIB string theory on the product space AdS5 × S5 is equivalent to N = 4 supersymmetric Yang–Mills theory on the four-dimensional boundary. In this example, the spacetime on which the gravitational theory lives is effectively five-dimensional (hence the notation AdS5), and there are five additional compact dimensions (encoded by the S5 factor). In the real world, spacetime is four-dimensional, at least macroscopically, so this version of the correspondence does not provide a realistic model of gravity. 
Likewise, the dual theory is not a viable model of any real-world system as it assumes a large amount of supersymmetry. Nevertheless, as explained below, this boundary theory shares some features in common with quantum chromodynamics, the fundamental theory of the strong force. It describes particles similar to the gluons of quantum chromodynamics together with certain fermions. As a result, it has found applications in nuclear physics, particularly in the study of the quark–gluon plasma. Another realization of the correspondence states that M-theory on AdS7 × S4 is equivalent to the so-called (2,0)-theory in six dimensions. In this example, the spacetime of the gravitational theory is effectively seven-dimensional. The existence of the (2,0)-theory that appears on one side of the duality is predicted by the classification of superconformal field theories. It is still poorly understood because it is a quantum mechanical theory without a classical limit. Despite the inherent difficulty in studying this theory, it is considered to be an interesting object for a variety of reasons, both physical and mathematical. Yet another realization of the correspondence states that M-theory on AdS4 × S7 is equivalent to the ABJM superconformal field theory in three dimensions. Here the gravitational theory has four noncompact dimensions, so this version of the correspondence provides a somewhat more realistic description of gravity. == Applications to quantum gravity == === A non-perturbative formulation of string theory === In quantum field theory, one typically computes the probabilities of various physical events using the techniques of perturbation theory. Developed by Richard Feynman and others in the first half of the twentieth century, perturbative quantum field theory uses special diagrams called Feynman diagrams to organize computations. One imagines that these diagrams depict the paths of point-like particles and their interactions. Although this formalism is extremely useful for making predictions, these predictions are only possible when the strength of the interactions, the coupling constant, is small enough to reliably describe the theory as being close to a theory without interactions. The starting point for string theory is the idea that the point-like particles of quantum field theory can also be modeled as one-dimensional objects called strings. The interaction of strings is most straightforwardly defined by generalizing the perturbation theory used in ordinary quantum field theory. At the level of Feynman diagrams, this means replacing the one-dimensional diagram representing the path of a point particle by a two-dimensional surface representing the motion of a string. Unlike in quantum field theory, string theory does not yet have a full non-perturbative definition, so many of the theoretical questions that physicists would like to answer remain out of reach. The problem of developing a non-perturbative formulation of string theory was one of the original motivations for studying the AdS/CFT correspondence. As explained above, the correspondence provides several examples of quantum field theories that are equivalent to string theory on anti-de Sitter space. One can alternatively view this correspondence as providing a definition of string theory in the special case where the gravitational field is asymptotically anti-de Sitter (that is, when the gravitational field resembles that of anti-de Sitter space at spatial infinity). 
Physically interesting quantities in string theory are defined in terms of quantities in the dual quantum field theory. === Black hole information paradox === In 1975, Stephen Hawking published a calculation that suggested that black holes are not completely black but emit a dim radiation due to quantum effects near the event horizon. At first, Hawking's result posed a problem for theorists because it suggested that black holes destroy information. More precisely, Hawking's calculation seemed to conflict with one of the basic postulates of quantum mechanics, which states that physical systems evolve in time according to the Schrödinger equation. This property is usually referred to as unitarity of time evolution. The apparent contradiction between Hawking's calculation and the unitarity postulate of quantum mechanics came to be known as the black hole information paradox. The AdS/CFT correspondence resolves the black hole information paradox, at least to some extent, because it shows how a black hole can evolve in a manner consistent with quantum mechanics in some contexts. Indeed, one can consider black holes in the context of the AdS/CFT correspondence, and any such black hole corresponds to a configuration of particles on the boundary of anti-de Sitter space. These particles obey the usual rules of quantum mechanics and in particular evolve in a unitary fashion, so the black hole must also evolve in a unitary fashion, respecting the principles of quantum mechanics. In 2004, Hawking announced that the paradox had been settled in favor of information conservation by the AdS/CFT correspondence, and he suggested a concrete mechanism by which black holes might preserve information. == Applications to quantum field theory == === Nuclear physics === One physical system that has been studied using the AdS/CFT correspondence is the quark–gluon plasma, an exotic state of matter produced in particle accelerators. This state of matter arises for brief instants when heavy ions such as gold or lead nuclei are collided at high energies. Such collisions cause the quarks that make up atomic nuclei to deconfine at temperatures of approximately two trillion kelvins, conditions similar to those present at around 10⁻¹¹ seconds after the Big Bang. The physics of the quark–gluon plasma is governed by quantum chromodynamics, but this theory is mathematically intractable in problems involving the quark–gluon plasma. In an article appearing in 2005, Đàm Thanh Sơn and his collaborators showed that the AdS/CFT correspondence could be used to understand some aspects of the quark–gluon plasma by describing it in the language of string theory. By applying the AdS/CFT correspondence, Sơn and his collaborators were able to describe the quark–gluon plasma in terms of black holes in five-dimensional spacetime. The calculation showed that the ratio of two quantities associated with the quark–gluon plasma, the shear viscosity η and the entropy density s, should be approximately equal to a certain universal constant: η/s ≈ ħ/(4πk), where ħ denotes the reduced Planck constant and k is the Boltzmann constant. In addition, the authors conjectured that this universal constant provides a lower bound for η/s in a large class of systems. In experiments conducted at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory, the value of η/s extracted using one phenomenological model was close to this universal constant, while extractions based on another model gave larger values.
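The conjectured bound is straightforward to evaluate numerically. A minimal sketch, assuming standard SI values for the constants (the bound is often called the KSS bound, after Kovtun, Son, and Starinets):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s (CODATA)
k_B = 1.380649e-23       # Boltzmann constant, J/K (exact in SI)

# Conjectured universal lower bound on shear viscosity over entropy
# density: eta/s >= hbar / (4*pi*k_B).
bound = hbar / (4.0 * math.pi * k_B)
print(f"hbar/(4*pi*k_B) = {bound:.3e} K*s")   # about 6.1e-13 K*s
```

A fluid saturating the bound would have η/s of order 10⁻¹³ K·s, orders of magnitude below that of familiar fluids such as water, which is why the quark–gluon plasma is often described as a nearly "perfect" fluid.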
Another important property of the quark–gluon plasma is that very high energy quarks moving through the plasma are stopped or "quenched" after traveling only a few femtometres. This phenomenon is characterized by a number q̂ called the jet quenching parameter, which relates the energy loss of such a quark to the squared distance traveled through the plasma. Calculations based on the AdS/CFT correspondence give the estimated value q̂ ≈ 4 GeV²/fm, and the experimental value of q̂ lies in the range 5–15 GeV²/fm. === Condensed matter physics === Over the decades, experimental condensed matter physicists have discovered a number of exotic states of matter, including superconductors and superfluids. These states are described using the formalism of quantum field theory, but some phenomena are difficult to explain using standard field theoretic techniques. Some condensed matter theorists including Subir Sachdev hope that the AdS/CFT correspondence will make it possible to describe these systems in the language of string theory and learn more about their behavior. So far some success has been achieved in using string theory methods to describe the transition of a superfluid to an insulator. A superfluid is a system of electrically neutral atoms that flows without any friction. Such systems are often produced in the laboratory using liquid helium, but recently experimentalists have developed new ways of producing artificial superfluids by pouring trillions of cold atoms into a lattice of criss-crossing lasers. These atoms initially behave as a superfluid, but as experimentalists increase the intensity of the lasers, they become less mobile and then suddenly transition to an insulating state. During the transition, the atoms behave in an unusual way. For example, the atoms slow to a halt at a rate that depends on the temperature and on the Planck constant, the fundamental parameter of quantum mechanics, which does not enter into the description of the other phases. This behavior has recently been understood by considering a dual description where properties of the fluid are described in terms of a higher-dimensional black hole. === Criticism === With many physicists turning towards string-based methods to solve problems in nuclear and condensed matter physics, some theorists working in these areas have expressed doubts about whether the AdS/CFT correspondence can provide the tools needed to realistically model real-world systems. In a talk at the Quark Matter conference in 2006, the American physicist Larry McLerran pointed out that the N = 4 super Yang–Mills theory that appears in the AdS/CFT correspondence differs significantly from quantum chromodynamics, making it difficult to apply these methods to nuclear physics. According to McLerran, N = 4 supersymmetric Yang–Mills is not QCD ... It has no mass scale and is conformally invariant. It has no confinement and no running coupling constant. It is supersymmetric. It has no chiral symmetry breaking or mass generation. It has six scalar fields and fermions in the adjoint representation ... It may be possible to correct some or all of the above problems, or, for various physical problems, some of the objections may not be relevant. As yet there is not consensus nor compelling arguments for the conjectured fixes or phenomena which would insure that the N = 4 supersymmetric Yang Mills results would reliably reflect QCD. In a letter to Physics Today, Nobel laureate Philip W.
Anderson voiced similar concerns about applications of AdS/CFT to condensed matter physics, stating As a very general problem with the AdS/CFT approach in condensed-matter theory, we can point to those telltale initials "CFT"—conformal field theory. Condensed-matter problems are, in general, neither relativistic nor conformal. Near a quantum critical point, both time and space may be scaling, but even there we still have a preferred coordinate system and, usually, a lattice. There is some evidence of other linear-T phases to the left of the strange metal about which they are welcome to speculate, but again in this case the condensed-matter problem is overdetermined by experimental facts. == History and development == === String theory and nuclear physics === The discovery of the AdS/CFT correspondence in late 1997 was the culmination of a long history of efforts to relate string theory to nuclear physics. In fact, string theory was originally developed during the late 1960s and early 1970s as a theory of hadrons, the subatomic particles like the proton and neutron that are held together by the strong nuclear force. The idea was that each of these particles could be viewed as a different oscillation mode of a string. In the late 1960s, experimentalists had found that hadrons fall into families called Regge trajectories with squared energy proportional to angular momentum, and theorists showed that this relationship emerges naturally from the physics of a rotating relativistic string. On the other hand, attempts to model hadrons as strings faced serious problems. One problem was that string theory includes a massless spin-2 particle whereas no such particle appears in the physics of hadrons. Such a particle would mediate a force with the properties of gravity. In 1974, Joël Scherk and John Schwarz suggested that string theory was therefore not a theory of nuclear physics as many theorists had thought but instead a theory of quantum gravity. At the same time, it was realized that hadrons are actually made of quarks, and the string theory approach was abandoned in favor of quantum chromodynamics. In quantum chromodynamics, quarks have a kind of charge that comes in three varieties called colors. In a paper from 1974, Gerard 't Hooft studied the relationship between string theory and nuclear physics from another point of view by considering theories similar to quantum chromodynamics, where the number of colors is some arbitrary number N, rather than three. In this article, 't Hooft considered a certain limit where N tends to infinity and argued that in this limit certain calculations in quantum field theory resemble calculations in string theory. === Black holes and holography === In 1975, Stephen Hawking published a calculation that suggested that black holes are not completely black but emit a dim radiation due to quantum effects near the event horizon. This work extended previous results of Jacob Bekenstein who had suggested that black holes have a well-defined entropy. At first, Hawking's result appeared to contradict one of the main postulates of quantum mechanics, namely the unitarity of time evolution. Intuitively, the unitarity postulate says that quantum mechanical systems do not destroy information as they evolve from one state to another. For this reason, the apparent contradiction came to be known as the black hole information paradox. 
Later, in 1993, Gerard 't Hooft wrote a speculative paper on quantum gravity in which he revisited Hawking's work on black hole thermodynamics, concluding that the total number of degrees of freedom in a region of spacetime surrounding a black hole is proportional to the surface area of the horizon. This idea was promoted by Leonard Susskind and is now known as the holographic principle. The holographic principle and its realization in string theory through the AdS/CFT correspondence have helped elucidate the mysteries of black holes suggested by Hawking's work and are believed to provide a resolution of the black hole information paradox. In 2004, Hawking conceded that black holes do not violate quantum mechanics, and he suggested a concrete mechanism by which they might preserve information. === Maldacena's paper === In late 1997, Juan Maldacena published a landmark paper that initiated the study of AdS/CFT. According to Alexander Markovich Polyakov, "[Maldacena's] work opened the flood gates." The conjecture immediately excited great interest in the string theory community and was considered in a paper by Steven Gubser, Igor Klebanov and Polyakov, and another paper by Edward Witten. These papers made Maldacena's conjecture more precise and showed that the conformal field theory appearing in the correspondence lives on the boundary of anti-de Sitter space. One special case of Maldacena's proposal says that N = 4 super Yang–Mills theory, a gauge theory similar in some ways to quantum chromodynamics, is equivalent to string theory in five-dimensional anti-de Sitter space. This result helped clarify the earlier work of 't Hooft on the relationship between string theory and quantum chromodynamics, taking string theory back to its roots as a theory of nuclear physics. Maldacena's results also provided a concrete realization of the holographic principle with important implications for quantum gravity and black hole physics. By the year 2015, Maldacena's paper had become the most highly cited paper in high energy physics with over 10,000 citations. These subsequent articles provided considerable evidence that the correspondence is correct, although so far it has not been rigorously proved. == Generalizations == === Three-dimensional gravity === In order to better understand the quantum aspects of gravity in our four-dimensional universe, some physicists have considered a lower-dimensional mathematical model in which spacetime has only two spatial dimensions and one time dimension. In this setting, the mathematics describing the gravitational field simplifies drastically, and one can study quantum gravity using familiar methods from quantum field theory, eliminating the need for string theory or other more radical approaches to quantum gravity in four dimensions. Beginning with the work of J. David Brown and Marc Henneaux in 1986, physicists have noticed that quantum gravity in a three-dimensional spacetime is closely related to two-dimensional conformal field theory. In 1995, Henneaux and his coworkers explored this relationship in more detail, suggesting that three-dimensional gravity in anti-de Sitter space is equivalent to the conformal field theory known as Liouville field theory. Another conjecture formulated by Edward Witten states that three-dimensional gravity in anti-de Sitter space is equivalent to a conformal field theory with monster group symmetry.
These conjectures provide examples of the AdS/CFT correspondence that do not require the full apparatus of string or M-theory. === dS/CFT correspondence === Unlike our universe, which is now known to be expanding at an accelerating rate, anti-de Sitter space is neither expanding nor contracting. Instead it looks the same at all times. In more technical language, one says that anti-de Sitter space corresponds to a universe with a negative cosmological constant, whereas the real universe has a small positive cosmological constant. Although the properties of gravity at short distances should be somewhat independent of the value of the cosmological constant, it is desirable to have a version of the AdS/CFT correspondence for positive cosmological constant. In 2001, Andrew Strominger introduced a version of the duality called the dS/CFT correspondence. This duality involves a model of spacetime called de Sitter space with a positive cosmological constant. Such a duality is interesting from the point of view of cosmology since many cosmologists believe that the very early universe was close to being de Sitter space. === Kerr/CFT correspondence === Although the AdS/CFT correspondence is often useful for studying the properties of black holes, most of the black holes considered in the context of AdS/CFT are physically unrealistic. Indeed, as explained above, most versions of the AdS/CFT correspondence involve higher-dimensional models of spacetime with unphysical supersymmetry. In 2009, Monica Guica, Thomas Hartman, Wei Song, and Andrew Strominger showed that the ideas of AdS/CFT could nevertheless be used to understand certain astrophysical black holes. More precisely, their results apply to black holes that are approximated by extremal Kerr black holes, which have the largest possible angular momentum compatible with a given mass. They showed that such black holes have an equivalent description in terms of conformal field theory. The Kerr/CFT correspondence was later extended to black holes with lower angular momentum. === Higher spin gauge theories === The AdS/CFT correspondence is closely related to another duality conjectured by Igor Klebanov and Alexander Markovich Polyakov in 2002. This duality states that certain "higher spin gauge theories" on anti-de Sitter space are equivalent to conformal field theories with O(N) symmetry. Here the theory in the bulk is a type of gauge theory describing particles of arbitrarily high spin. It is similar to string theory, where the excited modes of vibrating strings correspond to particles with higher spin, and it may help to better understand the string theoretic versions of AdS/CFT and possibly even prove the correspondence. In 2010, Simone Giombi and Xi Yin obtained further evidence for this duality by computing quantities called three-point functions. == See also == Algebraic holography Ambient construction Randall–Sundrum model == Notes == == References ==
Wikipedia/Anti-de_Sitter/conformal_field_theory_correspondence
In the general theory of relativity, the Einstein field equations (EFE; also known as Einstein's equations) relate the geometry of spacetime to the distribution of matter within it. The equations were published by Albert Einstein in 1915 in the form of a tensor equation which related the local spacetime curvature (expressed by the Einstein tensor) with the local energy, momentum and stress within that spacetime (expressed by the stress–energy tensor). Analogously to the way that electromagnetic fields are related to the distribution of charges and currents via Maxwell's equations, the EFE relate the spacetime geometry to the distribution of mass–energy, momentum and stress, that is, they determine the metric tensor of spacetime for a given arrangement of stress–energy–momentum in the spacetime. The relationship between the metric tensor and the Einstein tensor allows the EFE to be written as a set of nonlinear partial differential equations when used in this way. The solutions of the EFE are the components of the metric tensor. The inertial trajectories of particles and radiation (geodesics) in the resulting geometry are then calculated using the geodesic equation. As well as implying local energy–momentum conservation, the EFE reduce to Newton's law of gravitation in the limit of a weak gravitational field and velocities that are much less than the speed of light. Exact solutions for the EFE can only be found under simplifying assumptions such as symmetry. Special classes of exact solutions are most often studied since they model many gravitational phenomena, such as rotating black holes and the expanding universe. Further simplification is achieved in approximating the spacetime as having only small deviations from flat spacetime, leading to the linearized EFE. These equations are used to study phenomena such as gravitational waves. == Mathematical form == The Einstein field equations (EFE) may be written in the form: G_{\mu\nu} + \Lambda g_{\mu\nu} = \kappa T_{\mu\nu}, where Gμν is the Einstein tensor, gμν is the metric tensor, Tμν is the stress–energy tensor, Λ is the cosmological constant and κ is the Einstein gravitational constant. The Einstein tensor is defined as G_{\mu\nu} = R_{\mu\nu} - \frac{1}{2} R g_{\mu\nu}, where Rμν is the Ricci curvature tensor, and R is the scalar curvature. This is a symmetric second-degree tensor that depends on only the metric tensor and its first and second derivatives. The Einstein gravitational constant is defined as \kappa = \frac{8\pi G}{c^4} \approx 2.07665 \times 10^{-43}\,\mathrm{N}^{-1}, where G is the Newtonian constant of gravitation and c is the speed of light in vacuum. The EFE can thus also be written as R_{\mu\nu} - \frac{1}{2} R g_{\mu\nu} + \Lambda g_{\mu\nu} = \kappa T_{\mu\nu}. In standard units, each term on the left has quantity dimension of L⁻². The expression on the left represents the curvature of spacetime as determined by the metric; the expression on the right represents the stress–energy–momentum content of spacetime. The EFE can then be interpreted as a set of equations dictating how stress–energy–momentum determines the curvature of spacetime.
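As a quick sanity check on the numerical value quoted above, κ = 8πG/c⁴ can be evaluated directly. A minimal sketch using standard CODATA values:

```python
import math

G = 6.67430e-11      # Newtonian constant of gravitation, m^3 kg^-1 s^-2
c = 2.99792458e8     # speed of light in vacuum, m/s (exact in SI)

# Einstein gravitational constant kappa = 8*pi*G / c^4
kappa = 8.0 * math.pi * G / c**4
print(f"kappa = {kappa:.5e} N^-1")   # about 2.07665e-43 N^-1
```

The smallness of κ in SI units expresses how much stress–energy is needed to curve spacetime appreciably.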
These equations, together with the geodesic equation, which dictates how freely falling matter moves through spacetime, form the core of the mathematical formulation of general relativity. The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge-fixing degrees of freedom, which correspond to the freedom to choose a coordinate system. Although the Einstein field equations were initially formulated in the context of a four-dimensional theory, some theorists have explored their consequences in n dimensions. The equations in contexts outside of general relativity are still referred to as the Einstein field equations. The vacuum field equations (obtained when Tμν is everywhere zero) define Einstein manifolds. The equations are more complex than they appear. Given a specified distribution of matter and energy in the form of a stress–energy tensor, the EFE are understood to be equations for the metric tensor gμν, since both the Ricci tensor and scalar curvature depend on the metric in a complicated nonlinear manner. When fully written out, the EFE are a system of ten coupled, nonlinear, hyperbolic-elliptic partial differential equations. === Sign convention === The above form of the EFE is the standard established by Misner, Thorne, and Wheeler (MTW). The authors analyzed conventions that exist and classified these according to three signs ([S1] [S2] [S3]): g_{\mu\nu} = [S1] \times \operatorname{diag}(-1,+1,+1,+1), \quad {R^{\mu}}_{\alpha\beta\gamma} = [S2] \times \left(\Gamma_{\alpha\gamma,\beta}^{\mu} - \Gamma_{\alpha\beta,\gamma}^{\mu} + \Gamma_{\sigma\beta}^{\mu}\Gamma_{\gamma\alpha}^{\sigma} - \Gamma_{\sigma\gamma}^{\mu}\Gamma_{\beta\alpha}^{\sigma}\right), \quad G_{\mu\nu} = [S3] \times \kappa T_{\mu\nu}. The third sign above is related to the choice of convention for the Ricci tensor: R_{\mu\nu} = [S2] \times [S3] \times {R^{\alpha}}_{\mu\alpha\nu}. With these definitions Misner, Thorne, and Wheeler classify themselves as (+ + +), whereas Weinberg (1972) is (+ − −), Peebles (1980) and Efstathiou et al. (1990) are (− + +), and Rindler (1977), Atwater (1974), Collins, Martin & Squires (1989) and Peacock (1999) are (− + −). Authors including Einstein have used a different sign in their definition for the Ricci tensor which results in the sign of the constant on the right side being negative: R_{\mu\nu} - \frac{1}{2} R g_{\mu\nu} - \Lambda g_{\mu\nu} = -\kappa T_{\mu\nu}. The sign of the cosmological term would change in both these versions if the (+ − − −) metric sign convention is used rather than the MTW (− + + +) metric sign convention adopted here. === Equivalent formulations === Taking the trace with respect to the metric of both sides of the EFE one gets R - \frac{D}{2} R + D\Lambda = \kappa T, where D is the spacetime dimension. Solving for R and substituting this in the original EFE, one gets the following equivalent "trace-reversed" form: R_{\mu\nu} - \frac{2}{D-2}\Lambda g_{\mu\nu} = \kappa\left(T_{\mu\nu} - \frac{1}{D-2} T g_{\mu\nu}\right).
In D = 4 dimensions this reduces to R_{\mu\nu} - \Lambda g_{\mu\nu} = \kappa\left(T_{\mu\nu} - \frac{1}{2} T g_{\mu\nu}\right). Reversing the trace again would restore the original EFE. The trace-reversed form may be more convenient in some cases (for example, when one is interested in the weak-field limit and can replace gμν in the expression on the right with the Minkowski metric without significant loss of accuracy). == Cosmological constant == In the Einstein field equations G_{\mu\nu} + \Lambda g_{\mu\nu} = \kappa T_{\mu\nu}, the term containing the cosmological constant Λ was absent from the version in which Einstein originally published them. Einstein then included the term with the cosmological constant to allow for a universe that is not expanding or contracting. This effort was unsuccessful because: any desired steady state solution described by this equation is unstable, and observations by Edwin Hubble showed that our universe is expanding. Einstein then abandoned Λ, remarking to George Gamow "that the introduction of the cosmological term was the biggest blunder of his life". The inclusion of this term does not create inconsistencies. For many years the cosmological constant was almost universally assumed to be zero. More recent astronomical observations have shown an accelerating expansion of the universe, and to explain this a positive value of Λ is needed. The effect of the cosmological constant is negligible at the scale of a galaxy or smaller. Einstein thought of the cosmological constant as an independent parameter, but its term in the field equation can also be moved algebraically to the other side and incorporated as part of the stress–energy tensor: T_{\mu\nu}^{\mathrm{(vac)}} = -\frac{\Lambda}{\kappa} g_{\mu\nu}. This tensor describes a vacuum state with an energy density ρvac and isotropic pressure pvac that are fixed constants and given by \rho_{\mathrm{vac}} = -p_{\mathrm{vac}} = \frac{\Lambda}{\kappa}, where it is assumed that Λ has SI unit m⁻² and κ is defined as above. The existence of a cosmological constant is thus equivalent to the existence of a vacuum energy and a pressure of opposite sign. This has led to the terms "cosmological constant" and "vacuum energy" being used interchangeably in general relativity. == Features == === Conservation of energy and momentum === General relativity is consistent with the local conservation of energy and momentum, expressed as \nabla_{\beta} T^{\alpha\beta} = {T^{\alpha\beta}}_{;\beta} = 0, which expresses the local conservation of stress–energy. This conservation law is a physical requirement. With his field equations Einstein ensured that general relativity is consistent with this conservation condition. === Nonlinearity === The nonlinearity of the EFE distinguishes general relativity from many other fundamental physical theories. For example, Maxwell's equations of electromagnetism are linear in the electric and magnetic fields, and charge and current distributions (i.e. the sum of two solutions is also a solution); another example is the Schrödinger equation of quantum mechanics, which is linear in the wavefunction.
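Because the passage between the EFE and the trace-reversed form given above is purely algebraic, it can be sanity-checked numerically with random symmetric matrices standing in for tensor components in some chart. A minimal sketch, assuming nothing beyond linear algebra:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4                                  # spacetime dimension

# Random invertible symmetric "metric" and random symmetric Ricci tensor;
# no Lorentzian signature is needed for this purely algebraic identity.
A = rng.normal(size=(D, D))
g = A + A.T + D * np.eye(D)            # symmetric and comfortably invertible
g_inv = np.linalg.inv(g)
Ric = rng.normal(size=(D, D))
Ric = Ric + Ric.T
Lam = 0.7                              # arbitrary cosmological constant

R = np.einsum("mn,mn->", g_inv, Ric)          # scalar curvature R = g^{mn} R_{mn}
kT = Ric - 0.5 * R * g + Lam * g              # kappa*T_{mn} read off from the EFE
kT_trace = np.einsum("mn,mn->", g_inv, kT)    # kappa*T = g^{mn} kappa*T_{mn}

lhs = Ric - (2.0 / (D - 2)) * Lam * g
rhs = kT - (1.0 / (D - 2)) * kT_trace * g
print(np.allclose(lhs, rhs))                  # True
```

The check passes for any invertible symmetric g, reflecting that trace reversal is an algebraic identity rather than an additional physical assumption.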
=== Correspondence principle === The EFE reduce to Newton's law of gravity by using both the weak-field approximation and the low-velocity approximation. The constant G appearing in the EFE is determined by making these two approximations. == Vacuum field equations == If the energy–momentum tensor Tμν is zero in the region under consideration, then the field equations are also referred to as the vacuum field equations. By setting Tμν = 0 in the trace-reversed field equations, the vacuum field equations, also known as 'Einstein vacuum equations' (EVE), can be written as R_{\mu\nu} = 0. In the case of nonzero cosmological constant, the equations are R_{\mu\nu} = \frac{\Lambda}{\frac{D}{2}-1} g_{\mu\nu}. The solutions to the vacuum field equations are called vacuum solutions. Flat Minkowski space is the simplest example of a vacuum solution. Nontrivial examples include the Schwarzschild solution and the Kerr solution. Manifolds with a vanishing Ricci tensor, Rμν = 0, are referred to as Ricci-flat manifolds and manifolds with a Ricci tensor proportional to the metric as Einstein manifolds. == Einstein–Maxwell equations == If the energy–momentum tensor Tμν is that of an electromagnetic field in free space, i.e. if the electromagnetic stress–energy tensor T^{\alpha\beta} = -\frac{1}{\mu_0}\left({F^{\alpha}}^{\psi}{F_{\psi}}^{\beta} + \frac{1}{4} g^{\alpha\beta} F_{\psi\tau} F^{\psi\tau}\right) is used, then the Einstein field equations are called the Einstein–Maxwell equations (with cosmological constant Λ, taken to be zero in conventional relativity theory): G^{\alpha\beta} + \Lambda g^{\alpha\beta} = \frac{\kappa}{\mu_0}\left({F^{\alpha}}^{\psi}{F_{\psi}}^{\beta} + \frac{1}{4} g^{\alpha\beta} F_{\psi\tau} F^{\psi\tau}\right). Additionally, the covariant Maxwell equations are also applicable in free space: {F^{\alpha\beta}}_{;\beta} = 0, \quad F_{[\alpha\beta;\gamma]} = \frac{1}{3}\left(F_{\alpha\beta;\gamma} + F_{\beta\gamma;\alpha} + F_{\gamma\alpha;\beta}\right) = \frac{1}{3}\left(F_{\alpha\beta,\gamma} + F_{\beta\gamma,\alpha} + F_{\gamma\alpha,\beta}\right) = 0, where the semicolon represents a covariant derivative, and the brackets denote anti-symmetrization. The first equation asserts that the 4-divergence of the 2-form F is zero, and the second that its exterior derivative is zero. From the latter, it follows by the Poincaré lemma that in a coordinate chart it is possible to introduce an electromagnetic field potential Aα such that F_{\alpha\beta} = A_{\alpha;\beta} - A_{\beta;\alpha} = A_{\alpha,\beta} - A_{\beta,\alpha}, in which the comma denotes a partial derivative. This is often taken as equivalent to the covariant Maxwell equation from which it is derived. However, there are global solutions of the equation that may lack a globally defined potential. == Solutions == The solutions of the Einstein field equations are metrics of spacetime. These metrics describe the structure of the spacetime including the inertial motion of objects in the spacetime.
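One of the simplest exact solutions mentioned above, the Schwarzschild metric, can be checked symbolically against the vacuum equations Rμν = 0. A sketch with sympy, in geometric units G = c = 1 (an assumed convention) and signature (− + + +):

```python
import sympy as sp

t, r, th, ph = sp.symbols("t r theta phi")
M = sp.symbols("M", positive=True)     # mass, in geometric units G = c = 1
x = [t, r, th, ph]
n = 4
f = 1 - 2 * M / r

# Schwarzschild metric, signature (-,+,+,+)
g = sp.diag(-f, 1 / f, r**2, r**2 * sp.sin(th)**2)
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc} = (1/2) g^{ad} (g_{db,c} + g_{dc,b} - g_{bc,d})
Gam = [[[sum(ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                           - sp.diff(g[b, c], x[d])) for d in range(n)) / 2
         for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor R_{bc} = Gamma^a_{bc,a} - Gamma^a_{ba,c}
#                      + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
def ricci(b, c):
    return sp.simplify(sum(
        sp.diff(Gam[a][b][c], x[a]) - sp.diff(Gam[a][b][a], x[c])
        + sum(Gam[a][a][d] * Gam[d][b][c] - Gam[a][c][d] * Gam[d][b][a]
              for d in range(n))
        for a in range(n)))

print(all(ricci(b, c) == 0 for b in range(n) for c in range(n)))   # True
```

The same scaffolding can be pointed at other candidate metrics; for the Kerr solution the simplification is slower, but the Ricci tensor likewise vanishes.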
As the field equations are non-linear, they cannot always be completely solved (i.e. without making approximations). For example, there is no known complete solution for a spacetime with two massive bodies in it (which is a theoretical model of a binary star system, for example). However, approximations are usually made in these cases. These are commonly referred to as post-Newtonian approximations. Even so, there are several cases where the field equations have been solved completely, and those are called exact solutions. The study of exact solutions of Einstein's field equations is one of the activities of cosmology. It leads to the prediction of black holes and to different models of evolution of the universe. One can also discover new solutions of the Einstein field equations via the method of orthonormal frames as pioneered by Ellis and MacCallum. In this approach, the Einstein field equations are reduced to a set of coupled, nonlinear, ordinary differential equations. As discussed by Hsu and Wainwright, self-similar solutions to the Einstein field equations are fixed points of the resulting dynamical system. New solutions have been discovered using these methods by LeBlanc and by Kohli and Haslam. == Linearized EFE == The nonlinearity of the EFE makes finding exact solutions difficult. One way of solving the field equations is to make an approximation, namely, that far from the source(s) of gravitating matter, the gravitational field is very weak and the spacetime approximates that of Minkowski space. The metric is then written as the sum of the Minkowski metric and a term representing the deviation of the true metric from the Minkowski metric, ignoring higher-power terms. This linearization procedure can be used to investigate the phenomena of gravitational radiation. == Polynomial form == Despite the EFE as written containing the inverse of the metric tensor, they can be arranged in a form that contains the metric tensor in polynomial form and without its inverse. First, the determinant of the metric in 4 dimensions can be written \det(g) = \frac{1}{24} \varepsilon^{\alpha\beta\gamma\delta} \varepsilon^{\kappa\lambda\mu\nu} g_{\alpha\kappa} g_{\beta\lambda} g_{\gamma\mu} g_{\delta\nu} using the Levi-Civita symbol; and the inverse of the metric in 4 dimensions can be written as g^{\alpha\kappa} = \frac{\frac{1}{6} \varepsilon^{\alpha\beta\gamma\delta} \varepsilon^{\kappa\lambda\mu\nu} g_{\beta\lambda} g_{\gamma\mu} g_{\delta\nu}}{\det(g)}. Substituting this expression of the inverse of the metric into the equations then multiplying both sides by a suitable power of det(g) to eliminate it from the denominator results in polynomial equations in the metric tensor and its first and second derivatives. The Einstein–Hilbert action from which the equations are derived can also be written in polynomial form by suitable redefinitions of the fields. == See also == == Notes == == References == See General relativity resources. Misner, Charles W.; Thorne, Kip S.; Wheeler, John Archibald (1973). Gravitation. San Francisco: W. H. Freeman. ISBN 978-0-7167-0344-0. Weinberg, Steven (1972). Gravitation and Cosmology. John Wiley & Sons. ISBN 0-471-92567-5. Peacock, John A. (1999). Cosmological Physics. Cambridge University Press. ISBN 978-0521410724.
== External links == "Einstein equations", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Caltech Tutorial on Relativity — A simple introduction to Einstein's Field Equations. The Meaning of Einstein's Equation — An explanation of Einstein's field equation, its derivation, and some of its consequences Video Lecture on Einstein's Field Equations by MIT Physics Professor Edmund Bertschinger. Arch and scaffold: How Einstein found his field equations Physics Today November 2015, History of the Development of the Field Equations === External images === The Einstein field equation on the wall of the Museum Boerhaave in downtown Leiden Suzanne Imber, "The impact of general relativity on the Atacama Desert", Einstein field equation on the side of a train in Bolivia.
Wikipedia/Einstein's_equation
The SYZ conjecture is an attempt to understand the mirror symmetry conjecture, an issue in theoretical physics and mathematics. The original conjecture was proposed in a paper by Strominger, Yau, and Zaslow, entitled "Mirror Symmetry is T-duality". Along with the homological mirror symmetry conjecture, it is one of the most explored tools applied to understand mirror symmetry in mathematical terms. While homological mirror symmetry is based on homological algebra, the SYZ conjecture is a geometric realization of mirror symmetry. == Formulation == In string theory, mirror symmetry relates type IIA and type IIB theories. It predicts that the effective field theory of type IIA and type IIB should be the same if the two theories are compactified on mirror pair manifolds. The SYZ conjecture uses this fact to realize mirror symmetry. It starts from considering BPS states of type IIA theories compactified on X, especially 0-branes that have moduli space X. It is known that all of the BPS states of type IIB theories compactified on Y are 3-branes. Therefore, mirror symmetry will map 0-branes of type IIA theories into a subset of 3-branes of type IIB theories. By considering supersymmetric conditions, it has been shown that these 3-branes should be special Lagrangian submanifolds. On the other hand, T-duality does the same transformation in this case, thus "mirror symmetry is T-duality" (see the sketch after the mathematical statement below). == Mathematical statement == The initial proposal of the SYZ conjecture by Strominger, Yau, and Zaslow was not given as a precise mathematical statement. One part of the mathematical resolution of the SYZ conjecture is to, in some sense, correctly formulate the statement of the conjecture itself. There is no agreed upon precise statement of the conjecture within the mathematical literature, but there is a general statement that is expected to be close to the correct formulation of the conjecture, which is presented here. This statement emphasizes the topological picture of mirror symmetry, but does not precisely characterise the relationship between the complex and symplectic structures of the mirror pairs, or make reference to the associated Riemannian metrics involved. SYZ Conjecture: Every 6-dimensional Calabi–Yau manifold X has a mirror 6-dimensional Calabi–Yau manifold X̂ such that there are continuous surjections f : X → B and f̂ : X̂ → B to a compact topological manifold B of dimension 3, such that: (1) There exists a dense open subset Breg ⊂ B on which the maps f and f̂ are fibrations by nonsingular special Lagrangian 3-tori. Furthermore, for every point b ∈ Breg, the torus fibres f⁻¹(b) and f̂⁻¹(b) should be dual to each other in some sense, analogous to duality of Abelian varieties. (2) For each b ∈ B ∖ Breg, the fibres f⁻¹(b) and f̂⁻¹(b) should be singular 3-dimensional special Lagrangian submanifolds of X and X̂ respectively.
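The role of T-duality invoked in the formulation above can be illustrated on a single circle fibre. In the toy sketch below (Python; α′ is set to 1 and oscillator contributions are omitted, so only the momentum and winding part of the closed-string spectrum is kept), the spectrum on a circle of radius R agrees with that on the dual circle of radius α′/R:

```python
import itertools

alpha_p = 1.0   # the Regge slope alpha', set to 1 here (an arbitrary choice)

def spectrum(R, n_max=2, w_max=2):
    """Momentum-winding part of the closed-string mass-squared levels,
    (n/R)^2 + (w*R/alpha')^2, on a circle of radius R (oscillators omitted)."""
    return sorted((n / R)**2 + (w * R / alpha_p)**2
                  for n, w in itertools.product(range(-n_max, n_max + 1),
                                                range(-w_max, w_max + 1)))

R = 1.7
# T-duality R -> alpha'/R exchanges momentum and winding quantum numbers
# and leaves the spectrum invariant.
s1, s2 = spectrum(R), spectrum(alpha_p / R)
print(all(abs(a - b) < 1e-9 for a, b in zip(s1, s2)))   # True
```

Applying this duality fibrewise to each special Lagrangian torus, circle by circle, is the heuristic by which the SYZ proposal expects the mirror manifold to be produced.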
The situation in which Breg = B, so that there is no singular locus, is called the semi-flat limit of the SYZ conjecture, and is often used as a model situation to describe torus fibrations. The SYZ conjecture can be shown to hold in some simple cases of semi-flat limits, for example given by Abelian varieties and K3 surfaces which are fibred by elliptic curves. It is expected that the correct formulation of the SYZ conjecture will differ somewhat from the statement above. For example, the possible behaviour of the singular set B ∖ Breg is not well understood, and this set could be quite large in comparison to B. Mirror symmetry is also often phrased in terms of degenerating families of Calabi–Yau manifolds instead of for a single Calabi–Yau, and one might expect the SYZ conjecture to be reformulated more precisely in this language. == Relation to homological mirror symmetry conjecture == The SYZ mirror symmetry conjecture is one possible refinement of the original mirror symmetry conjecture relating Hodge numbers of mirror Calabi–Yau manifolds. The other is Kontsevich's homological mirror symmetry conjecture (HMS conjecture). These two conjectures encode the predictions of mirror symmetry in different ways: homological mirror symmetry in an algebraic way, and the SYZ conjecture in a geometric way. There should be a relationship between these three interpretations of mirror symmetry, but it is not yet known whether they should be equivalent or whether one proposal is stronger than the other. Progress has been made toward showing under certain assumptions that homological mirror symmetry implies Hodge theoretic mirror symmetry. Nevertheless, in simple settings there are clear ways of relating the SYZ and HMS conjectures. The key feature of HMS is that the conjecture relates objects (either submanifolds or sheaves) on mirror geometric spaces, so the required input to try to understand or prove the HMS conjecture includes a mirror pair of geometric spaces. The SYZ conjecture predicts how these mirror pairs should arise, and so whenever an SYZ mirror pair is found, it is a good candidate to try and prove the HMS conjecture on this pair. To relate the SYZ and HMS conjectures, it is convenient to work in the semi-flat limit. The important geometric feature of a pair of Lagrangian torus fibrations X, X̂ → B which encodes mirror symmetry is the dual torus fibres of the fibration. Given a Lagrangian torus T ⊂ X, the dual torus is given by the Jacobian variety of T, denoted T̂ = Jac(T). This is again a torus of the same dimension, and the duality is encoded in the fact that Jac(Jac(T)) = T, so T and T̂ are indeed dual under this construction. The Jacobian variety T̂ has the important interpretation as the moduli space of line bundles on T. This duality and the interpretation of the dual torus as a moduli space of sheaves on the original torus is what allows one to interchange the data of submanifolds and subsheaves.
There are two simple examples of this phenomenon: If p ∈ X is a point which lies inside some fibre T ⊂ X of the special Lagrangian torus fibration, then since T = Jac(T̂), the point p corresponds to a line bundle supported on T̂ ⊂ X̂. If one chooses a Lagrangian section s : B → X such that s(B) = L is a Lagrangian submanifold of X, then precisely since s chooses one point in each torus fibre of the SYZ fibration, this Lagrangian section is mirror dual to a choice of line bundle structure supported on each torus fibre of the mirror manifold X̂, and consequently a line bundle on the total space of X̂, the simplest example of a coherent sheaf appearing in the derived category of the mirror manifold. If the mirror torus fibrations are not in the semi-flat limit, then special care must be taken when crossing over the singular set of the base B. Another example of a Lagrangian submanifold is the torus fibre itself, and one sees that if the entire torus is taken as the Lagrangian T ⊂ X, with the added data of a flat unitary line bundle over it, as is often necessary in homological mirror symmetry, then in the dual torus T̂ ⊂ X̂ this corresponds to a single point which represents that line bundle over the torus. If one takes the skyscraper sheaf supported on that point in the dual torus, then we see that torus fibres of the SYZ fibration get sent to skyscraper sheaves supported on points in the mirror torus fibre. These two examples produce the most extreme kinds of coherent sheaf, locally free sheaves (of rank 1) and torsion sheaves supported on points. By more careful construction one can build up more complicated examples of coherent sheaves, analogous to building a coherent sheaf using the torsion filtration. As a simple example, a Lagrangian multisection (a union of k Lagrangian sections) should be mirror dual to a rank k vector bundle on the mirror manifold, but one must take care to account for instanton corrections by counting holomorphic discs which are bounded by the multisection, in the sense of Gromov–Witten theory. In this way enumerative geometry becomes important for understanding how mirror symmetry interchanges dual objects. By combining the geometry of mirror fibrations in the SYZ conjecture with a detailed understanding of enumerative invariants and the structure of the singular set of the base B, it is possible to use the geometry of the fibration to build the isomorphism of categories from the Lagrangian submanifolds of X to the coherent sheaves of X̂, the map Fuk(X) → DᵇCoh(X̂). By repeating this same discussion in reverse using the duality of the torus fibrations, one similarly can understand coherent sheaves on X in terms of Lagrangian submanifolds of X̂, and hope to get a complete understanding of how the HMS conjecture relates to the SYZ conjecture. == References ==
Wikipedia/SYZ_conjecture
In theoretical physics, p-form electrodynamics is a generalization of Maxwell's theory of electromagnetism. == Ordinary (viz. one-form) Abelian electrodynamics == We have a 1-form A, a gauge symmetry A → A + dα, where α is any arbitrary fixed 0-form and d is the exterior derivative, and a gauge-invariant vector current J with density 1 satisfying the continuity equation d⋆J = 0, where ⋆ is the Hodge star operator. Alternatively, we may express J as a closed (n − 1)-form, but we do not consider that case here. F is a gauge-invariant 2-form defined as the exterior derivative F = dA. F satisfies the equation of motion d⋆F = ⋆J (this equation obviously implies the continuity equation). This can be derived from the action S = ∫_M [½ F ∧ ⋆F − A ∧ ⋆J], where M is the spacetime manifold. == p-form Abelian electrodynamics == We have a p-form B, a gauge symmetry B → B + dα, where α is any arbitrary fixed (p − 1)-form and d is the exterior derivative, and a gauge-invariant p-vector J with density 1 satisfying the continuity equation d⋆J = 0, where ⋆ is the Hodge star operator. Alternatively, we may express J as a closed (n − p)-form. C is a gauge-invariant (p + 1)-form defined as the exterior derivative C = dB. B satisfies the equation of motion d⋆C = ⋆J (this equation obviously implies the continuity equation). This can be derived from the action S = ∫_M [½ C ∧ ⋆C + (−1)^p B ∧ ⋆J], where M is the spacetime manifold. Other sign conventions do exist. The Kalb–Ramond field is an example with p = 2 in string theory; the Ramond–Ramond fields whose charged sources are D-branes are examples for all values of p. In eleven-dimensional supergravity or M-theory, we have a 3-form electrodynamics. == Non-abelian generalization == Just as we have non-abelian generalizations of electrodynamics, leading to Yang–Mills theories, we also have nonabelian generalizations of p-form electrodynamics. They typically require the use of gerbes.
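The gauge invariance of F = dA follows from d² = 0, which in components is just the symmetry of mixed partial derivatives. A minimal component sketch with sympy on flat four-dimensional spacetime (the function names A0...A3 and alpha are placeholders for arbitrary smooth fields):

```python
import sympy as sp

t, x, y, z = sp.symbols("t x y z")
coords = [t, x, y, z]

# An arbitrary 1-form A_mu and an arbitrary gauge parameter alpha (a 0-form)
A = [sp.Function(f"A{i}")(*coords) for i in range(4)]
alpha = sp.Function("alpha")(*coords)

def field_strength(A):
    """Components of F = dA, i.e. F_{mn} = d_m A_n - d_n A_m."""
    return [[sp.diff(A[n], coords[m]) - sp.diff(A[m], coords[n])
             for n in range(4)] for m in range(4)]

F = field_strength(A)
A_gauged = [A[m] + sp.diff(alpha, coords[m]) for m in range(4)]  # A -> A + d(alpha)
F_gauged = field_strength(A_gauged)

# Since partial derivatives commute, the gauge term drops out of F entirely.
print(all(sp.simplify(F_gauged[m][n] - F[m][n]) == 0
          for m in range(4) for n in range(4)))   # True
```

The identical computation with a (p − 1)-form gauge parameter gives the invariance of C = dB in the p-form theory.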
53, 102501 (2012) doi:10.1063/1.4754817
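A short worked check, added here for illustration and following directly from the definitions above: the single identity d² = 0 shows both that C is gauge invariant and that the equation of motion implies the continuity equation.

```latex
\mathbf{C} \;\to\; d(\mathbf{B} + d\alpha) = d\mathbf{B} + d^{2}\alpha = d\mathbf{B} = \mathbf{C},
\qquad
d{\star}\mathbf{J} = d\bigl(d{\star}\mathbf{C}\bigr) = 0 .
```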
Wikipedia/P-form_electrodynamics
The Trouble with Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next is a 2006 book by the theoretical physicist Lee Smolin about the problems with string theory. The book strongly criticizes string theory and its prominence in contemporary theoretical physics, on the grounds that string theory has yet to come up with a single prediction that can be verified using any technology that is likely to be feasible within our lifetimes. Smolin also focuses on the difficulties faced by research in quantum gravity, and by current efforts to come up with a theory explaining all four fundamental interactions. The book is broadly concerned with the role of controversy and diversity of approaches in scientific processes and ethics. Smolin suggests both that there appear to be serious deficiencies in string theory and that string theory has an unhealthy near-monopoly on fundamental physics in the United States, and that a diversity of approaches is needed. He argues that more attention should instead be paid to background independent theories of quantum gravity. In the book, Smolin claims that string theory makes no new testable predictions; that it has no coherent mathematical formulation; and that it has not been mathematically proved finite. Some experts in the theoretical physics community disagree with these statements. Smolin states that to propose a string theory landscape having up to 10^500 string vacuum solutions is tantamount to abandoning accepted science: "The scenario of many unobserved universes plays the same logical role as the scenario of an intelligent designer. Each provides an untestable hypothesis that, if true, makes something improbable seem quite probable." == Reviews == The book generated much controversy and debate about the merits of string theory, and was criticised by some prominent physicists including Sean Carroll and string theorists Joseph Polchinski and Luboš Motl. Polchinski's review states, "In the end, these [Smolin and others'] books fail to capture much of the spirit and logic of string theory." Motl's review goes on to say "the concentration of irrational statements and anti-scientific sentiments has exceeded my expectations," and, "In the context of string theory, he literally floods the pages of his book with undefendable speculations about some basic results of string theory. Because these statements are of mathematical nature, we are sure that Lee is wrong even in the absence of any experiments." Sean Carroll's review expressed frustration because in his opinion, "The Trouble with Physics is really two books, with intertwined but ultimately independent arguments." He suggested that the arguments in the book appear divided: "[one argument is] big and abstract and likely to be ignored by most of the book's audience; the other is narrow and specific and part of a wide-ranging and heated discussion carried out between scientists, in the popular press, and on the internet." Furthermore, "The abstract argument — about academic culture and the need to nurture speculative ideas — is, in my opinion, important and largely correct, while the specific one — about the best way to set about quantizing gravity — is overstated and undersupported." Carroll fears that excessive attention paid to the specific dispute is likely to disadvantage the more general abstract argument. Sabine Hossenfelder, in a review written a year later and titled "The Trouble With Physics: Aftermath", alludes to the book's polarising effect on the scientific community.
She explores the author's views as a contrast in generations, while supporting his right to them. Hossenfelder believes that Smolin's book attempts to restore the relation physics once had with philosophy, quoting him as follows: "Philosophy used to be part of the natural sciences – for a long time. For long centuries during which our understanding of the world we live in has progressed tremendously. There is no doubt that times change, but not all changes are a priori good if left without further consideration. Here, change has resulted in a gap between the natural sciences where questioning the basis of our theories, and an embedding into the historical and sociological context used to be. Even though many new specifically designed interdisciplinary fields have been established, investigating the foundations of our current theories has basically been erased out of curricula and textbooks." == The String Wars == A discussion in 2006 took place between UCSB physicists at KITP and science journalist George Johnson regarding the controversy caused by the books of Smolin (The Trouble with Physics) and Peter Woit (Not Even Wrong). The meeting was titled "The String Wars" to reflect the impression the media had given people regarding the controversy in string theory caused by Smolin's and Woit's books. A video of the proceedings is available at UCSB's website. == See also == Loop quantum gravity Peter Woit == References == Notes Further reading Greene, Brian, 1999. The Elegant Universe. Vintage Paperbacks. A nontechnical introduction to string theory. Greene, Brian, 2004. The Fabric of the Cosmos. Penguin Books. Space, time, cosmology, and more string theory. Nontechnical. Penrose, Roger, 2004. The Road to Reality. Alfred A. Knopf. Technical. Randall, Lisa, 2005. Warped Passages. Smolin, Lee, 2001. Three Roads to Quantum Gravity. Woit, Peter, 2006. Not Even Wrong: The Failure of String Theory & the Continuing Challenge to Unify the Laws of Physics. Jonathan Cape (UK) and Basic Books (USA). The "other book" criticizing string theory and the stagnation of theoretical particle physics. == External links == The Trouble with Physics, webpage maintained by the publisher, Houghton Mifflin. Joseph Polchinski (2007) "All Strung Out?", a review of The Trouble with Physics and Not Even Wrong, American Scientist 95(1):1. Smolin's comment and Polchinski's reply. Mindmap of the fundamental concepts described in the book. Lee Smolin, Brian Greene (August 18, 2006). Physicists Debate the Merits of String Theory (Talk-Show Debate). National Public Radio, "Talk of the Nation". Retrieved April 19, 2018.
Wikipedia/The_Trouble_with_Physics:_The_Rise_of_String_Theory,_the_Fall_of_a_Science,_and_What_Comes_Next
Twistor string theory is an equivalence between N = 4 supersymmetric Yang–Mills theory and the perturbative topological B model string theory in twistor space. It was initially proposed by Edward Witten in 2003. Twistor theory was introduced by Roger Penrose from the 1960s as a new approach to the unification of quantum theory with gravity. Twistor space is a three-dimensional complex projective space in which physical quantities appear as certain structural deformations. Spacetime and the familiar physical fields emerge as consequences of this description. But twistor space is chiral (handed), with left- and right-handed objects treated differently. For example, the graviton for gravity and the gluon for the strong force are both right-handed. During this period, Edward Witten was a leading developer of string theory. In 2003, he produced a paper showing how string theory may be introduced into twistor space to provide a full physical model incorporating both left- and right-handed fields together with their full interactions. The most important contribution of twistor string theory has been in the calculation of particle–particle collision scattering amplitudes, which determine the probabilities of the possible scattering processes. Witten showed that they have a remarkably simple structure in twistor space; in particular, amplitudes are supported on algebraic curves. This has allowed both better understanding of experimental observations in particle colliders and deep insights into the natures of different quantum field theories. These insights have in turn led to new insights in pure mathematics. Such topics include Grassmannian residue formulae, the amplituhedron and holomorphic linking. == See also == BCFW recursion MHV amplitudes == References ==
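As an illustration of that "remarkably simple structure" (standard background quoted here for illustration, not text from the article): the tree-level, color-ordered maximally helicity violating (MHV) gluon amplitude, with only gluons i and j of negative helicity, is given in spinor-helicity notation by the Parke–Taylor formula, up to coupling constants and an overall momentum-conserving delta function:

```latex
A_n\bigl(1^{+},\dots,i^{-},\dots,j^{-},\dots,n^{+}\bigr)
  = \frac{\langle i\,j\rangle^{4}}{\langle 1\,2\rangle\langle 2\,3\rangle\cdots\langle n\,1\rangle}.
```

In twistor space these MHV amplitudes localize on degree-one curves (lines), the simplest case of the support on algebraic curves mentioned above.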
Wikipedia/Twistor_string_theory
In mathematics, a modular form is a holomorphic function on the complex upper half-plane, H {\displaystyle {\mathcal {H}}} , that roughly satisfies a functional equation with respect to the group action of the modular group and a growth condition. The theory of modular forms has origins in complex analysis, with important connections to number theory. Modular forms also appear in other areas, such as algebraic topology, sphere packing, and string theory. Modular form theory is a special case of the more general theory of automorphic forms, which are functions defined on Lie groups that transform nicely with respect to the action of certain discrete subgroups, generalizing the example of the modular group S L 2 ( Z ) ⊂ S L 2 ( R ) {\displaystyle \mathrm {SL} _{2}(\mathbb {Z} )\subset \mathrm {SL} _{2}(\mathbb {R} )} . Every modular form is attached to a Galois representation. The term "modular form", as a systematic description, is usually attributed to Erich Hecke. The importance of modular forms across multiple fields of mathematics has been humorously represented in a possibly apocryphal quote attributed to Martin Eichler describing modular forms as being the fifth fundamental operation in mathematics, after addition, subtraction, multiplication and division. == Definition == In general, given a subgroup Γ < SL 2 ( Z ) {\displaystyle \Gamma <{\text{SL}}_{2}(\mathbb {Z} )} of finite index (called an arithmetic group), a modular form of level Γ {\displaystyle \Gamma } and weight k {\displaystyle k} is a holomorphic function f : H → C {\displaystyle f:{\mathcal {H}}\to \mathbb {C} } from the upper half-plane satisfying the following two conditions: Automorphy condition: for any γ ∈ Γ {\displaystyle \gamma \in \Gamma } , we have f ( γ ( z ) ) = ( c z + d ) k f ( z ) {\displaystyle f(\gamma (z))=(cz+d)^{k}f(z)} , and Growth condition: for any γ ∈ SL 2 ( Z ) {\displaystyle \gamma \in {\text{SL}}_{2}(\mathbb {Z} )} , the function ( c z + d ) − k f ( γ ( z ) ) {\displaystyle (cz+d)^{-k}f(\gamma (z))} is bounded for im ( z ) → ∞ {\displaystyle {\text{im}}(z)\to \infty } . In addition, a modular form is called a cusp form if it satisfies the following growth condition: Cuspidal condition: For any γ ∈ SL 2 ( Z ) {\displaystyle \gamma \in {\text{SL}}_{2}(\mathbb {Z} )} , we have ( c z + d ) − k f ( γ ( z ) ) → 0 {\displaystyle (cz+d)^{-k}f(\gamma (z))\to 0} as im ( z ) → ∞ {\displaystyle {\text{im}}(z)\to \infty } . Note that γ {\displaystyle \gamma } is a matrix γ = ( a b c d ) ∈ SL 2 ( Z ) , {\textstyle \gamma ={\begin{pmatrix}a&b\\c&d\end{pmatrix}}\in {\text{SL}}_{2}(\mathbb {Z} ),} identified with the function γ ( z ) = ( a z + b ) / ( c z + d ) {\textstyle \gamma (z)=(az+b)/(cz+d)} . The identification of functions with matrices makes function composition equivalent to matrix multiplication. === As sections of a line bundle === Modular forms can also be interpreted as sections of a specific line bundle on modular varieties. For Γ < SL 2 ( Z ) {\displaystyle \Gamma <{\text{SL}}_{2}(\mathbb {Z} )} , a modular form of level Γ {\displaystyle \Gamma } and weight k {\displaystyle k} can be defined as an element of f ∈ H 0 ( X Γ , ω ⊗ k ) = M k ( Γ ) , {\displaystyle f\in H^{0}(X_{\Gamma },\omega ^{\otimes k})=M_{k}(\Gamma ),} where ω {\displaystyle \omega } is a canonical line bundle on the modular curve X Γ = Γ ∖ ( H ∪ P 1 ( Q ) ) .
{\displaystyle X_{\Gamma }=\Gamma \backslash ({\mathcal {H}}\cup \mathbb {P} ^{1}(\mathbb {Q} )).} The dimensions of these spaces of modular forms can be computed using the Riemann–Roch theorem. The classical modular forms for Γ = SL 2 ( Z ) {\displaystyle \Gamma ={\text{SL}}_{2}(\mathbb {Z} )} are sections of a line bundle on the moduli stack of elliptic curves. == Modular function == A modular function is a function that is invariant with respect to the modular group, but without the condition that it be holomorphic in the upper half-plane (among other requirements). Instead, modular functions are meromorphic: they are holomorphic on the complement of a set of isolated points, which are poles of the function. == Modular forms for SL(2, Z) == === Standard definition === A modular form of weight k {\displaystyle k} for the modular group SL ( 2 , Z ) = { ( a b c d ) | a , b , c , d ∈ Z , a d − b c = 1 } {\displaystyle {\text{SL}}(2,\mathbb {Z} )=\left\{\left.{\begin{pmatrix}a&b\\c&d\end{pmatrix}}\right|a,b,c,d\in \mathbb {Z} ,\ ad-bc=1\right\}} is a function f {\displaystyle f} on the upper half-plane H = { z ∈ C ∣ Im ⁡ ( z ) > 0 } {\displaystyle {\mathcal {H}}=\{z\in \mathbb {C} \mid \operatorname {Im} (z)>0\}} satisfying the following three conditions: f {\displaystyle f} is holomorphic on H {\displaystyle {\mathcal {H}}} . For any z ∈ H {\displaystyle z\in {\mathcal {H}}} and any matrix in SL ( 2 , Z ) {\displaystyle {\text{SL}}(2,\mathbb {Z} )} , we have f ( a z + b c z + d ) = ( c z + d ) k f ( z ) {\displaystyle f\left({\frac {az+b}{cz+d}}\right)=(cz+d)^{k}f(z)} . f {\displaystyle f} is bounded as Im ⁡ ( z ) → ∞ {\displaystyle \operatorname {Im} (z)\to \infty } . Remarks: The weight k {\displaystyle k} is typically a positive integer. For odd k {\displaystyle k} , only the zero function can satisfy the second condition. The third condition is also phrased by saying that f {\displaystyle f} is "holomorphic at the cusp", a terminology that is explained below. Explicitly, the condition means that there exist some M , D > 0 {\displaystyle M,D>0} such that Im ⁡ ( z ) > M ⟹ | f ( z ) | < D {\displaystyle \operatorname {Im} (z)>M\implies |f(z)|<D} , meaning f {\displaystyle f} is bounded above some horizontal line. The second condition for S = ( 0 − 1 1 0 ) , T = ( 1 1 0 1 ) {\displaystyle S={\begin{pmatrix}0&-1\\1&0\end{pmatrix}},\qquad T={\begin{pmatrix}1&1\\0&1\end{pmatrix}}} reads f ( − 1 z ) = z k f ( z ) , f ( z + 1 ) = f ( z ) {\displaystyle f\left(-{\frac {1}{z}}\right)=z^{k}f(z),\qquad f(z+1)=f(z)} respectively. Since S {\displaystyle S} and T {\displaystyle T} generate the group SL ( 2 , Z ) {\displaystyle {\text{SL}}(2,\mathbb {Z} )} , the second condition above is equivalent to these two equations. Since f ( z + 1 ) = f ( z ) {\displaystyle f(z+1)=f(z)} , modular forms are periodic functions with period 1, and thus have a Fourier series. === Definition in terms of lattices or elliptic curves === A modular form can equivalently be defined as a function F from the set of lattices in C to the set of complex numbers which satisfies certain conditions: If we consider the lattice Λ = Zα + Zz generated by a constant α and a variable z, then F(Λ) is an analytic function of z. If α is a non-zero complex number and αΛ is the lattice obtained by multiplying each element of Λ by α, then F(αΛ) = α−kF(Λ) where k is a constant (typically a positive integer) called the weight of the form. 
The absolute value of F(Λ) remains bounded above as long as the absolute value of the smallest non-zero element in Λ is bounded away from 0. The key idea in proving the equivalence of the two definitions is that such a function F is determined, because of the second condition, by its values on lattices of the form Z + Zτ, where τ ∈ H. === Examples === I. Eisenstein series The simplest examples from this point of view are the Eisenstein series. For each even integer k > 2, we define Gk(Λ) to be the sum of λ^−k over all non-zero vectors λ of Λ: G k ( Λ ) = ∑ 0 ≠ λ ∈ Λ λ − k . {\displaystyle G_{k}(\Lambda )=\sum _{0\neq \lambda \in \Lambda }\lambda ^{-k}.} Then Gk is a modular form of weight k. For Λ = Z + Zτ we have G k ( Λ ) = G k ( τ ) = ∑ ( 0 , 0 ) ≠ ( m , n ) ∈ Z 2 1 ( m + n τ ) k , {\displaystyle G_{k}(\Lambda )=G_{k}(\tau )=\sum _{(0,0)\neq (m,n)\in \mathbf {Z} ^{2}}{\frac {1}{(m+n\tau )^{k}}},} and G k ( − 1 τ ) = τ k G k ( τ ) , G k ( τ + 1 ) = G k ( τ ) . {\displaystyle {\begin{aligned}G_{k}\left(-{\frac {1}{\tau }}\right)&=\tau ^{k}G_{k}(\tau ),\\G_{k}(\tau +1)&=G_{k}(\tau ).\end{aligned}}} The condition k > 2 is needed for convergence; for odd k there is cancellation between λ^−k and (−λ)^−k, so that such series are identically zero. II. Theta functions of even unimodular lattices An even unimodular lattice L in R^n is a lattice generated by n vectors forming the columns of a matrix of determinant 1 and satisfying the condition that the square of the length of each vector in L is an even integer. The so-called theta function ϑ L ( z ) = ∑ λ ∈ L e π i ‖ λ ‖ 2 z {\displaystyle \vartheta _{L}(z)=\sum _{\lambda \in L}e^{\pi i\Vert \lambda \Vert ^{2}z}} converges when Im(z) > 0, and as a consequence of the Poisson summation formula can be shown to be a modular form of weight n/2. It is not so easy to construct even unimodular lattices, but here is one way: Let n be an integer divisible by 8 and consider all vectors v in R^n such that 2v has integer coordinates, either all even or all odd, and such that the sum of the coordinates of v is an even integer. We call this lattice Ln. When n = 8, this is the lattice generated by the roots in the root system called E8. Because there is only one modular form of weight 8 up to scalar multiplication, ϑ L 8 × L 8 ( z ) = ϑ L 16 ( z ) , {\displaystyle \vartheta _{L_{8}\times L_{8}}(z)=\vartheta _{L_{16}}(z),} even though the lattices L8 × L8 and L16 are not similar. John Milnor observed that the 16-dimensional tori obtained by dividing R^16 by these two lattices are consequently examples of compact Riemannian manifolds which are isospectral but not isometric (see Hearing the shape of a drum). III. The modular discriminant The Dedekind eta function is defined as η ( z ) = q 1 / 24 ∏ n = 1 ∞ ( 1 − q n ) , q = e 2 π i z . {\displaystyle \eta (z)=q^{1/24}\prod _{n=1}^{\infty }(1-q^{n}),\qquad q=e^{2\pi iz}.} where q is the square of the nome. Then the modular discriminant Δ(z) = (2π)^12 η(z)^24 is a modular form of weight 12. The presence of 24 is related to the fact that the Leech lattice has 24 dimensions. A celebrated conjecture of Ramanujan asserted that when Δ(z) is expanded as a power series in q, the coefficient of q^p for any prime p has absolute value ≤ 2p^(11/2). This was confirmed by the work of Eichler, Shimura, Kuga, Ihara, and Pierre Deligne as a result of Deligne's proof of the Weil conjectures, which were shown to imply Ramanujan's conjecture.
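As a concrete, runnable companion to example III (an illustrative addition, not part of the original article), the coefficients τ(n) of Δ can be computed directly from the η product by truncated power-series multiplication, and Ramanujan's bound spot-checked for small primes:

```python
# Expand Delta(z) / ((2*pi)^12 * q) = prod_{n>=1} (1 - q^n)^24 as a truncated
# q-series and read off Ramanujan's tau coefficients: tau(n) = coeff of q^(n-1).
N = 10  # truncation order; factors with n > N cannot affect coefficients up to q^N

def mul(a, b, N):
    """Multiply two q-series given as coefficient lists, truncated at q^N."""
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j > N:
                    break
                c[i + j] += ai * bj
    return c

prod = [1] + [0] * N
for n in range(1, N + 1):
    factor = [0] * (N + 1)
    factor[0], factor[n] = 1, -1          # the factor (1 - q^n)
    for _ in range(24):
        prod = mul(prod, factor, N)

tau = {n: prod[n - 1] for n in range(1, N + 1)}
print(tau)  # tau(1)=1, tau(2)=-24, tau(3)=252, tau(4)=-1472, ...
# Spot-check Ramanujan's bound |tau(p)| <= 2*p^(11/2) at small primes:
assert all(abs(tau[p]) <= 2 * p**5.5 for p in (2, 3, 5, 7))
```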
The second and third examples give some hint of the connection between modular forms and classical questions in number theory, such as representation of integers by quadratic forms and the partition function. The crucial conceptual link between modular forms and number theory is furnished by the theory of Hecke operators, which also gives the link between the theory of modular forms and representation theory. == Modular functions == When the weight k is zero, it can be shown using Liouville's theorem that the only modular forms are constant functions. However, relaxing the requirement that f be holomorphic leads to the notion of modular functions. A function f : H → C is called modular if it satisfies the following properties: f is meromorphic in the open upper half-plane H For every integer matrix ( a b c d ) {\displaystyle {\begin{pmatrix}a&b\\c&d\end{pmatrix}}} in the modular group Γ, f ( a z + b c z + d ) = f ( z ) {\displaystyle f\left({\frac {az+b}{cz+d}}\right)=f(z)} . The second condition implies that f is periodic, and therefore has a Fourier series. The third condition is that this series is of the form f ( z ) = ∑ n = − m ∞ a n e 2 i π n z . {\displaystyle f(z)=\sum _{n=-m}^{\infty }a_{n}e^{2i\pi nz}.} It is often written in terms of q = exp ⁡ ( 2 π i z ) {\displaystyle q=\exp(2\pi iz)} (the square of the nome), as: f ( z ) = ∑ n = − m ∞ a n q n . {\displaystyle f(z)=\sum _{n=-m}^{\infty }a_{n}q^{n}.} This is also referred to as the q-expansion of f (q-expansion principle). The coefficients a n {\displaystyle a_{n}} are known as the Fourier coefficients of f, and the number m is called the order of the pole of f at i∞. This condition is called "meromorphic at the cusp", meaning that only finitely many negative-n coefficients are non-zero, so the q-expansion is bounded below, guaranteeing that it is meromorphic at q = 0. Sometimes a weaker definition of modular functions is used – under the alternative definition, it is sufficient that f be meromorphic in the open upper half-plane and that f be invariant with respect to a sub-group of the modular group of finite index. This is not adhered to in this article. Another way to phrase the definition of modular functions is to use elliptic curves: every lattice Λ determines an elliptic curve C/Λ over C; two lattices determine isomorphic elliptic curves if and only if one is obtained from the other by multiplying by some non-zero complex number α. Thus, a modular function can also be regarded as a meromorphic function on the set of isomorphism classes of elliptic curves. For example, the j-invariant j(z) of an elliptic curve, regarded as a function on the set of all elliptic curves, is a modular function. More conceptually, modular functions can be thought of as functions on the moduli space of isomorphism classes of complex elliptic curves. A modular form f that vanishes at q = 0 (equivalently, a0 = 0, also paraphrased as z = i∞) is called a cusp form (Spitzenform in German). The smallest n such that an ≠ 0 is the order of the zero of f at i∞. A modular unit is a modular function whose poles and zeroes are confined to the cusps. == Modular forms for more general groups == The functional equation, i.e., the behavior of f with respect to z ↦ a z + b c z + d {\displaystyle z\mapsto {\frac {az+b}{cz+d}}} can be relaxed by requiring it only for matrices in smaller groups. === The Riemann surface G\H∗ === Let G be a subgroup of SL(2, Z) that is of finite index. Such a group G acts on H in the same way as SL(2, Z). 
The quotient topological space G\H can be shown to be a Hausdorff space. Typically it is not compact, but can be compactified by adding a finite number of points called cusps. These are points at the boundary of H, i.e. in Q∪{∞}, such that there is a parabolic element of G (a matrix with trace ±2) fixing the point. This yields a compact topological space G\H∗. What is more, it can be endowed with the structure of a Riemann surface, which allows one to speak of holo- and meromorphic functions. Important examples are, for any positive integer N, either one of the congruence subgroups Γ 0 ( N ) = { ( a b c d ) ∈ SL ( 2 , Z ) : c ≡ 0 ( mod N ) } Γ ( N ) = { ( a b c d ) ∈ SL ( 2 , Z ) : c ≡ b ≡ 0 , a ≡ d ≡ 1 ( mod N ) } . {\displaystyle {\begin{aligned}\Gamma _{0}(N)&=\left\{{\begin{pmatrix}a&b\\c&d\end{pmatrix}}\in {\text{SL}}(2,\mathbf {Z} ):c\equiv 0{\pmod {N}}\right\}\\\Gamma (N)&=\left\{{\begin{pmatrix}a&b\\c&d\end{pmatrix}}\in {\text{SL}}(2,\mathbf {Z} ):c\equiv b\equiv 0,a\equiv d\equiv 1{\pmod {N}}\right\}.\end{aligned}}} For G = Γ0(N) or Γ(N), the spaces G\H and G\H∗ are denoted Y0(N), X0(N) and Y(N), X(N), respectively. The geometry of G\H∗ can be understood by studying fundamental domains for G, i.e. subsets D ⊂ H such that D intersects each orbit of the G-action on H exactly once and such that the closure of D meets all orbits. For example, the genus of G\H∗ can be computed. === Definition === A modular form for G of weight k is a function on H satisfying the above functional equation for all matrices in G, that is holomorphic on H and at all cusps of G. Again, modular forms that vanish at all cusps are called cusp forms for G. The C-vector spaces of modular and cusp forms of weight k are denoted Mk(G) and Sk(G), respectively. Similarly, a meromorphic function on G\H∗ is called a modular function for G. In case G = Γ0(N), they are also referred to as modular/cusp forms and functions of level N. For G = Γ(1) = SL(2, Z), this gives back the aforementioned definitions. === Consequences === The theory of Riemann surfaces can be applied to G\H∗ to obtain further information about modular forms and functions. For example, the spaces Mk(G) and Sk(G) are finite-dimensional, and their dimensions can be computed thanks to the Riemann–Roch theorem in terms of the geometry of the G-action on H. For example, dim C ⁡ M k ( SL ( 2 , Z ) ) = { ⌊ k / 12 ⌋ k ≡ 2 ( mod 12 ) ⌊ k / 12 ⌋ + 1 otherwise {\displaystyle \dim _{\mathbf {C} }M_{k}\left({\text{SL}}(2,\mathbf {Z} )\right)={\begin{cases}\left\lfloor k/12\right\rfloor &k\equiv 2{\pmod {12}}\\\left\lfloor k/12\right\rfloor +1&{\text{otherwise}}\end{cases}}} where ⌊ ⋅ ⌋ {\displaystyle \lfloor \cdot \rfloor } denotes the floor function and k {\displaystyle k} is even. The modular functions constitute the field of functions of the Riemann surface, and hence form a field of transcendence degree one (over C). If a modular function f is not identically 0, then it can be shown that the number of zeroes of f is equal to the number of poles of f in the closure of the fundamental region RΓ. It can be shown that the field of modular functions of level N (N ≥ 1) is generated by the functions j(z) and j(Nz).
Unfortunately, the only such functions are constants. If we allow denominators (rational functions instead of polynomials), we can let F be the ratio of two homogeneous polynomials of the same degree. Alternatively, we can stick with polynomials and loosen the dependence on c, letting F(cv) = c^k F(v). The solutions are then the homogeneous polynomials of degree k. On the one hand, these form a finite-dimensional vector space for each k, and on the other, if we let k vary, we can find the numerators and denominators for constructing all the rational functions which are really functions on the underlying projective space P(V). One might ask, since the homogeneous polynomials are not really functions on P(V), what are they, geometrically speaking? The algebro-geometric answer is that they are sections of a sheaf (one could also say a line bundle in this case). The situation with modular forms is precisely analogous. Modular forms can also be profitably approached from this geometric direction, as sections of line bundles on the moduli space of elliptic curves. == Rings of modular forms == For a subgroup Γ of SL(2, Z), the ring of modular forms is the graded ring generated by the modular forms of Γ. In other words, if Mk(Γ) is the vector space of modular forms of weight k, then the ring of modular forms of Γ is the graded ring M ( Γ ) = ⨁ k > 0 M k ( Γ ) {\displaystyle M(\Gamma )=\bigoplus _{k>0}M_{k}(\Gamma )} . Rings of modular forms of congruence subgroups of SL(2, Z) are finitely generated due to a result of Pierre Deligne and Michael Rapoport. Such rings of modular forms are generated in weight at most 6 and the relations are generated in weight at most 12 when the congruence subgroup has nonzero odd weight modular forms, and the corresponding bounds are 5 and 10 when there are no nonzero odd weight modular forms. More generally, there are formulas for bounds on the weights of generators of the ring of modular forms and its relations for arbitrary Fuchsian groups. == Types == === New forms === New forms are a subspace of modular forms of a fixed level N {\displaystyle N} which cannot be constructed from modular forms of lower levels M {\displaystyle M} dividing N {\displaystyle N} . The other forms are called old forms. These old forms can be constructed using the following observations: if M ∣ N {\displaystyle M\mid N} then Γ 1 ( N ) ⊆ Γ 1 ( M ) {\displaystyle \Gamma _{1}(N)\subseteq \Gamma _{1}(M)} giving a reverse inclusion of modular forms M k ( Γ 1 ( M ) ) ⊆ M k ( Γ 1 ( N ) ) {\displaystyle M_{k}(\Gamma _{1}(M))\subseteq M_{k}(\Gamma _{1}(N))} . === Cusp forms === A cusp form is a modular form with a zero constant coefficient in its Fourier series. It is called a cusp form because the form vanishes at all cusps. == Generalizations == There are a number of other usages of the term "modular function", apart from this classical one; for example, in the theory of Haar measures, it is a function Δ(g) determined by the conjugation action. Maass forms are real-analytic eigenfunctions of the Laplacian but need not be holomorphic. The holomorphic parts of certain weak Maass wave forms turn out to be essentially Ramanujan's mock theta functions. Groups which are not subgroups of SL(2, Z) can be considered. Hilbert modular forms are functions in n variables, each a complex number in the upper half-plane, satisfying a modular relation for 2×2 matrices with entries in a totally real number field.
Siegel modular forms are associated to larger symplectic groups in the same way in which classical modular forms are associated to SL(2, R); in other words, they are related to abelian varieties in the same sense that classical modular forms (which are sometimes called elliptic modular forms to emphasize the point) are related to elliptic curves. Jacobi forms are a mixture of modular forms and elliptic functions. Examples of such functions are very classical - the Jacobi theta functions and the Fourier coefficients of Siegel modular forms of genus two - but it is a relatively recent observation that the Jacobi forms have an arithmetic theory very analogous to the usual theory of modular forms. Automorphic forms extend the notion of modular forms to general Lie groups. Modular integrals of weight k are meromorphic functions on the upper half plane of moderate growth at infinity which fail to be modular of weight k by a rational function. Automorphic factors are functions of the form ε ( a , b , c , d ) ( c z + d ) k {\displaystyle \varepsilon (a,b,c,d)(cz+d)^{k}} which are used to generalise the modularity relation defining modular forms, so that f ( a z + b c z + d ) = ε ( a , b , c , d ) ( c z + d ) k f ( z ) . {\displaystyle f\left({\frac {az+b}{cz+d}}\right)=\varepsilon (a,b,c,d)(cz+d)^{k}f(z).} The function ε ( a , b , c , d ) {\displaystyle \varepsilon (a,b,c,d)} is called the nebentypus of the modular form. Functions such as the Dedekind eta function, a modular form of weight 1/2, may be encompassed by the theory by allowing automorphic factors. == History == The theory of modular forms was developed in four periods: In connection with the theory of elliptic functions, in the early nineteenth century By Felix Klein and others towards the end of the nineteenth century as the automorphic form concept became understood (for one variable) By Erich Hecke from about 1925 In the 1960s, as the needs of number theory and the formulation of the modularity theorem in particular made it clear that modular forms are deeply implicated. Taniyama and Shimura identified a 1-to-1 matching between certain modular forms and elliptic curves. Robert Langlands built on this idea in the construction of his expansive Langlands program, which has become one of the most far-reaching and consequential research programs in math. In 1994 Andrew Wiles used modular forms to prove Fermat’s Last Theorem. In 2001 all elliptic curves were proven to be modular over the rational numbers. In 2013 elliptic curves were proven to be modular over real quadratic fields. In 2023 elliptic curves were proven to be modular over about half of imaginary quadratic fields, including fields formed by combining the rational numbers with the square root of integers down to −5. == See also == Wiles's proof of Fermat's Last Theorem == Notes == == Citations == == References == Apostol, Tom M. (1990), Modular functions and Dirichlet Series in Number Theory, New York: Springer-Verlag, ISBN 0-387-97127-0 Diamond, Fred; Shurman, Jerry Michael (2005), A First Course in Modular Forms, Graduate Texts in Mathematics, vol. 228, New York: Springer-Verlag, ISBN 978-0387232294 Leads up to an overview of the proof of the modularity theorem. Gelbart, Stephen S. (1975), Automorphic Forms on Adèle Groups, Annals of Mathematics Studies, vol. 83, Princeton, N.J.: Princeton University Press, MR 0379375. Provides an introduction to modular forms from the point of view of representation theory. 
Hecke, Erich (1970), Mathematische Werke, Göttingen: Vandenhoeck & Ruprecht Rankin, Robert A. (1977), Modular forms and functions, Cambridge: Cambridge University Press, ISBN 0-521-21212-X Ribet, K.; Stein, W., Lectures on Modular Forms and Hecke Operators Serre, Jean-Pierre (1973), A Course in Arithmetic, Graduate Texts in Mathematics, vol. 7, New York: Springer-Verlag. Chapter VII provides an elementary introduction to the theory of modular forms. Skoruppa, N. P.; Zagier, D. (1988), "Jacobi forms and a certain space of modular forms", Inventiones Mathematicae, 94, Springer: 113, Bibcode:1988InMat..94..113S, doi:10.1007/BF01394347 Behold Modular Forms, the ‘Fifth Fundamental Operation’ of Math
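A minimal runnable sketch (an illustrative addition, not part of the original article) of the dimension formula for M_k(SL(2, Z)) quoted in the Consequences section above:

```python
def dim_Mk(k: int) -> int:
    """Dimension of M_k(SL(2, Z)), the space of modular forms of even weight k."""
    if k < 0 or k % 2 != 0:
        return 0  # only the zero form has negative or odd weight
    return k // 12 if k % 12 == 2 else k // 12 + 1

# Weights 0, 4, 6, 8, 10, 14 each give a one-dimensional space; weight 2 gives 0;
# weight 12 gives 2 (e.g. spanned by E_4^3 and the discriminant Delta).
print({k: dim_Mk(k) for k in range(0, 16, 2)})
```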
Wikipedia/Modular_function
The term "bootstrap model" is used for a class of theories that use very general consistency criteria to determine the form of a quantum theory from some assumptions on the spectrum of particles. It is a form of S-matrix theory. == Overview == In the 1960s and '70s, the ever-growing list of strongly interacting particles — mesons and baryons — made it clear to physicists that none of these particles is elementary. Geoffrey Chew and others went so far as to question the distinction between composite and elementary particles, advocating a "nuclear democracy" in which the idea that some particles were more elementary than others was discarded. Instead, they sought to derive as much information as possible about the strong interaction from plausible assumptions about the S-matrix, which describes what happens when particles of any sort collide, an approach advocated by Werner Heisenberg two decades earlier. The reason the program had any hope of success was because of crossing, the principle that the forces between particles are determined by particle exchange. Once the spectrum of particles is known, the force law is known, and this means that the spectrum is constrained to bound states which form through the action of these forces. The simplest way to solve the consistency condition is to postulate a few elementary particles of spin less than or equal to one, and construct the scattering perturbatively through field theory, but this method does not allow for composite particles of spin greater than 1 and without the then undiscovered phenomenon of confinement, it is naively inconsistent with the observed Regge behavior of hadrons. Chew and followers believed that it would be possible to use crossing symmetry and Regge behavior to formulate a consistent S-matrix for infinitely many particle types. The Regge hypothesis would determine the spectrum, crossing and analyticity would determine the scattering amplitude (the forces), while unitarity would determine the self-consistent quantum corrections in a way analogous to including loops. The only fully successful implementation of the program required another assumption to organize the mathematics of unitarity (the narrow resonance approximation). This meant that all the hadrons were stable particles in the first approximation, so that scattering and decays could be thought of as a perturbation. This allowed a bootstrap model with infinitely many particle types to be constructed like a field theory — the lowest order scattering amplitude should show Regge behavior and unitarity would determine the loop corrections order by order. This is how Gabriele Veneziano and many others constructed string theory, which remains the only theory constructed from general consistency conditions and mild assumptions on the spectrum. Many in the bootstrap community believed that field theory, which was plagued by problems of definition, was fundamentally inconsistent at high energies. Some believed that there is only one consistent theory which requires infinitely many particle species and whose form can be found by consistency alone. This is nowadays known not to be true, since there are many theories which are nonperturbatively consistent, each with their own S-matrix. Without the narrow-resonance approximation, the bootstrap program did not have a clear expansion parameter, and the consistency equations were often complicated and unwieldy, so that the method had limited success. 
It fell out of favor with the rise of quantum chromodynamics, which described mesons and baryons in terms of elementary particles called quarks and gluons. Bootstrapping here refers to 'pulling oneself up by one's bootstraps,' as particles were surmised to be held together by forces consisting of exchanges of the particles themselves. In 2017 Quanta Magazine published an article in which the bootstrap was said to enable new discoveries in the field of quantum theories. Decades after the bootstrap seemed to have been forgotten, physicists discovered novel "bootstrap techniques" that appear to solve many problems. The bootstrap approach is said to be "a powerful tool for understanding more symmetric, perfect theories that, according to experts, serve as 'signposts' or 'building blocks' in the space of all possible quantum field theories". == See also == Tullio Regge Stanley Mandelstam Conformal bootstrap == Notes == == References == G. Chew (1962). S-Matrix theory of strong interactions. New York: W.A. Benjamin. D. Kaiser (2002). "Nuclear democracy: Political engagement, pedagogical reform, and particle physics in postwar America." Isis, 93, 229–268. == Further reading == Wolchover, Natalie (9 December 2019). "Why the Laws of Physics Are Inevitable". Quanta Magazine.
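The narrow-resonance bootstrap described above is epitomized by the Veneziano amplitude; the formula below is standard background quoted for illustration, not text from the article. With a linear Regge trajectory, the four-point amplitude is

```latex
A(s,t) = \frac{\Gamma\bigl(-\alpha(s)\bigr)\,\Gamma\bigl(-\alpha(t)\bigr)}{\Gamma\bigl(-\alpha(s)-\alpha(t)\bigr)},
\qquad \alpha(x) = \alpha(0) + \alpha' x ,
```

which has poles only at α(s), α(t) = 0, 1, 2, …, an infinite tower of narrow resonances lying on straight Regge trajectories, and is manifestly crossing symmetric under s ↔ t.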
Wikipedia/Bootstrap_model
In string theory, a heterotic string is a closed string (or loop) which is a hybrid ('heterotic') of a superstring and a bosonic string. There are two kinds of heterotic superstring theories, the heterotic SO(32) and the heterotic E8 × E8, abbreviated to HO and HE. Apart from these, there exist seven more heterotic string theories which are not supersymmetric and hence are only of secondary importance in most applications. Heterotic string theory was first developed in 1985 by David Gross, Jeffrey Harvey, Emil Martinec, and Ryan Rohm (the so-called "Princeton string quartet"), in one of the key papers that fueled the first superstring revolution. == Overview == In string theory, the left-moving and the right-moving excitations of strings are completely decoupled for a closed string, and it is possible to construct a string theory whose left-moving (counter-clockwise) excitations are treated as a bosonic string propagating in D = 26 dimensions, while the right-moving (clockwise) excitations are treated as a superstring in D = 10 dimensions. The mismatched 16 dimensions must be compactified on an even, self-dual lattice (a discrete subgroup of a linear space). There are two possible even self-dual lattices in 16 dimensions, and this leads to the two types of heterotic string. They differ by the gauge group in 10 dimensions. One gauge group is SO(32) (the HO string) while the other is E8 × E8 (the HE string). These two gauge groups also turned out to be the only two anomaly-free gauge groups that can be coupled to the N = 1 supergravity in 10 dimensions. (Although not realized for quite some time, U(1)^496 and E8 × U(1)^248 are anomalous.) Every heterotic string must be a closed string, not an open string; it is not possible to define any boundary conditions that would relate the left-moving and the right-moving excitations because they have a different character. == String duality == String duality is a class of symmetries in physics that link different string theories. In the 1990s, it was realized that the strong coupling limit of the HO theory is type I string theory — a theory that also contains open strings; this relation is called S-duality. The HO and HE theories are also related by T-duality. Because the various superstring theories were shown to be related by dualities, it was proposed that each type of string was a different limit of a single underlying theory called M-theory. == References ==
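To make the lattice statement concrete, here is a small brute-force check (an illustrative addition, not part of the original article): it counts the vectors of squared length 2 (the roots) of the E8 lattice; together with the 8 Cartan generators, these 240 roots account for the 248 dimensions of each E8 factor of the HE gauge group.

```python
from itertools import product

# Count the 240 roots (squared length 2) of the E8 lattice. Work with the
# doubled vector u = 2v, so u has integer coordinates, all even or all odd,
# sum(u) divisible by 4 (i.e. sum(v) even), and |u|^2 = 8 (i.e. |v|^2 = 2).
roots = 0
for u in product(range(-2, 3), repeat=8):   # entries beyond |2| give |u|^2 > 8
    all_even = all(c % 2 == 0 for c in u)
    all_odd = all(c % 2 == 1 for c in u)
    if not (all_even or all_odd):
        continue
    if sum(u) % 4 != 0:
        continue
    if sum(c * c for c in u) == 8:
        roots += 1
print(roots)  # expect 240: with the 8 Cartan generators, dim E8 = 248
```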
Wikipedia/Heterotic_string_theory
In theoretical physics, F-theory is a branch of string theory developed by Iranian-American physicist Cumrun Vafa. The new vacua described by F-theory, discovered by Vafa, allowed string theorists to construct realistic models in the form of F-theory compactified on elliptically fibered Calabi–Yau four-folds. The letter "F" supposedly stands for "Father" in relation to "Mother"-theory. == Compactifications == F-theory is formally a 12-dimensional theory, but the only way to obtain an acceptable background is to compactify this theory on a two-torus. By doing so, one obtains type IIB superstring theory in 10 dimensions. The SL(2,Z) S-duality symmetry of the resulting type IIB string theory is manifest because it arises as the group of large diffeomorphisms of the two-dimensional torus. More generally, one can compactify F-theory on an elliptically fibered manifold (elliptic fibration), i.e. a fiber bundle whose fiber is a two-dimensional torus (also called an elliptic curve). For example, a subclass of the K3 manifolds is elliptically fibered, and F-theory on a K3 manifold is dual to heterotic string theory on a two-torus. Also, the moduli spaces of those theories should be isomorphic. The large number of semirealistic solutions to string theory referred to as the string theory landscape, with 10 272 , 000 {\displaystyle 10^{272,000}} elements or so, is dominated by F-theory compactifications on Calabi–Yau four-folds. There are about 10 15 {\displaystyle 10^{15}} of those solutions consistent with the Standard Model of particle physics. == Phenomenology == New models of Grand Unified Theory have recently been developed using F-theory. == Extra time dimension == F-theory has the metric signature (10,2), which means that it includes a second time dimension. == See also == Dilaton Axion M-theory == References ==
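The geometric origin of that SL(2,Z) symmetry can be stated explicitly (standard background added for illustration, not text from the article): the type IIB axio-dilaton, built from the Ramond–Ramond zero-form C0 and the dilaton φ, is identified with the complex-structure modulus τ of the torus fiber, on which large diffeomorphisms of the torus act by modular transformations:

```latex
\tau = C_{0} + i\,e^{-\phi},
\qquad
\tau \;\to\; \frac{a\tau + b}{c\tau + d},
\qquad
\begin{pmatrix} a & b \\ c & d \end{pmatrix} \in SL(2,\mathbb{Z}).
```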
Wikipedia/F-theory
A chiral phenomenon is one that is not identical to its mirror image (see the article on mathematical chirality). The spin of a particle may be used to define a handedness, or helicity, for that particle, which, in the case of a massless particle, is the same as chirality. A symmetry transformation between the two is called parity transformation. Invariance under parity transformation by a Dirac fermion is called chiral symmetry. == Chirality and helicity == The helicity of a particle is positive ("right-handed") if the direction of its spin is the same as the direction of its motion. It is negative ("left-handed") if the directions of spin and motion are opposite. So a standard clock, with its spin vector defined by the rotation of its hands, has left-handed helicity if tossed with its face directed forwards. Mathematically, helicity is the sign of the projection of the spin vector onto the momentum vector: "left" is negative, "right" is positive. The chirality of a particle is more abstract: It is determined by whether the particle transforms in a right- or left-handed representation of the Poincaré group. For massless particles – photons, gluons, and (hypothetical) gravitons – chirality is the same as helicity; a given massless particle appears to spin in the same direction along its axis of motion regardless of the observer's point of view. For massive particles – such as electrons, quarks, and neutrinos – chirality and helicity must be distinguished: In the case of these particles, it is possible for an observer to change to a reference frame that is moving faster than the spinning particle is, in which case the particle will then appear to move backwards, and its helicity (which may be thought of as "apparent chirality") will be reversed. Helicity is a constant of motion, but it is not Lorentz invariant. Chirality is Lorentz invariant, but is not a constant of motion: a massive left-handed spinor, when propagating, will evolve into a right-handed spinor over time, and vice versa. A massless particle moves with the speed of light, so no real observer (who must always travel at less than the speed of light) can be in any reference frame in which the particle appears to reverse its relative direction of spin, meaning that all real observers see the same helicity. Because of this, the direction of spin of massless particles is not affected by a change of inertial reference frame (a Lorentz boost) in the direction of motion of the particle, and the sign of the projection (helicity) is fixed for all reference frames: The helicity of massless particles is a relativistic invariant (a quantity whose value is the same in all inertial reference frames) and always matches the massless particle's chirality. The discovery of neutrino oscillation implies that neutrinos have mass, leaving the photon as the only confirmed massless particle; gluons are expected to also be massless, although this has not been conclusively tested. Hence, these are the only two particles now known for which helicity could be identical to chirality, of which only the photon has been confirmed by measurement. All other observed particles have mass and thus may have different helicities in different reference frames. == Chiral theories == Particle physicists have only observed or inferred left-chiral fermions and right-chiral antifermions engaging in the charged weak interaction.
In the case of the weak interaction, which can in principle engage with both left- and right-chiral fermions, only two left-handed fermions interact. Interactions involving right-handed or opposite-handed fermions have not been shown to occur, implying that the universe has a preference for left-handed chirality. This preferential treatment of one chiral realization over another violates parity, as first noted by Chien Shiung Wu in her famous experiment known as the Wu experiment. This is a striking observation, since parity is a symmetry that holds for all other fundamental interactions. Chirality for a Dirac fermion ψ is defined through the operator γ5, which has eigenvalues ±1; the eigenvalue's sign is equal to the particle's chirality: +1 for right-handed, −1 for left-handed. Any Dirac field can thus be projected into its left- or right-handed component by acting with the projection operators 1/2(1 − γ5) or 1/2(1 + γ5) on ψ. The coupling of the charged weak interaction to fermions is proportional to the first projection operator, which is responsible for this interaction's parity symmetry violation. A common source of confusion is due to conflating the γ5 chirality operator with the helicity operator. Since the helicity of massive particles is frame-dependent, it might seem that the same particle would interact with the weak force according to one frame of reference, but not another. The resolution to this paradox is that the chirality operator is equivalent to helicity for massless fields only, for which helicity is not frame-dependent. By contrast, for massive particles, chirality is not the same as helicity: helicity is frame-dependent (not Lorentz invariant) while chirality is Lorentz invariant, so there is no frame dependence of the weak interaction; a particle that couples to the weak force in one frame does so in every frame. A theory that is asymmetric with respect to chiralities is called a chiral theory, while a non-chiral (i.e., parity-symmetric) theory is sometimes called a vector theory. Many pieces of the Standard Model of physics are non-chiral, which is traceable to anomaly cancellation in chiral theories. Quantum chromodynamics is an example of a vector theory, since both chiralities of all quarks appear in the theory, and couple to gluons in the same way. The electroweak theory, developed in the mid 20th century, is an example of a chiral theory. Originally, it assumed that neutrinos were massless, and assumed the existence of only left-handed neutrinos and right-handed antineutrinos. After the observation of neutrino oscillations, which implies that no fewer than two of the three neutrinos are massive, the revised theories of the electroweak interaction now include both right- and left-handed neutrinos. However, it is still a chiral theory, as it does not respect parity symmetry. The exact nature of the neutrino is still unsettled and so the electroweak theories that have been proposed are somewhat different, but most accommodate the chirality of neutrinos in the same way as was already done for all other fermions. == Chiral symmetry == Vector gauge theories with massless Dirac fermion fields ψ exhibit chiral symmetry, i.e., rotating the left-handed and the right-handed components independently makes no difference to the theory.
We can write this as the action of rotation on the fields: ψ L → e i θ L ψ L {\displaystyle \psi _{\rm {L}}\rightarrow e^{i\theta _{\rm {L}}}\psi _{\rm {L}}} and ψ R → ψ R {\displaystyle \psi _{\rm {R}}\rightarrow \psi _{\rm {R}}} or ψ L → ψ L {\displaystyle \psi _{\rm {L}}\rightarrow \psi _{\rm {L}}} and ψ R → e i θ R ψ R . {\displaystyle \psi _{\rm {R}}\rightarrow e^{i\theta _{\rm {R}}}\psi _{\rm {R}}.} With N flavors, we have unitary rotations instead: U(N)L × U(N)R. More generally, we write the right-handed and left-handed states as a projection operator acting on a spinor. The right-handed and left-handed projection operators are P R = 1 + γ 5 2 {\displaystyle P_{\rm {R}}={\frac {1+\gamma ^{5}}{2}}} and P L = 1 − γ 5 2 {\displaystyle P_{\rm {L}}={\frac {1-\gamma ^{5}}{2}}} Massive fermions do not exhibit chiral symmetry, as the mass term in the Lagrangian, mψ̄ψ, breaks chiral symmetry explicitly. Spontaneous chiral symmetry breaking may also occur in some theories, as it most notably does in quantum chromodynamics. The chiral symmetry transformation can be divided into a component that treats the left-handed and the right-handed parts equally, known as vector symmetry, and a component that actually treats them differently, known as axial symmetry. (cf. Current algebra.) A scalar field model encoding chiral symmetry and its breaking is the chiral model. The most common application is expressed as equal treatment of clockwise and counter-clockwise rotations from a fixed frame of reference. The general principle is often referred to by the name chiral symmetry. The rule is absolutely valid in the classical mechanics of Newton and Einstein, but results from quantum mechanical experiments show a difference in the behavior of left-chiral versus right-chiral subatomic particles. === Example: u and d quarks in QCD === Consider quantum chromodynamics (QCD) with two massless quarks u and d (massive fermions do not exhibit chiral symmetry). The Lagrangian reads L = u ¯ i ⧸ D u + d ¯ i ⧸ D d + L g l u o n s . {\displaystyle {\mathcal {L}}={\overline {u}}\,i\displaystyle {\not }D\,u+{\overline {d}}\,i\displaystyle {\not }D\,d+{\mathcal {L}}_{\mathrm {gluons} }~.} In terms of left-handed and right-handed spinors, it reads L = u ¯ L i ⧸ D u L + u ¯ R i ⧸ D u R + d ¯ L i ⧸ D d L + d ¯ R i ⧸ D d R + L g l u o n s . {\displaystyle {\mathcal {L}}={\overline {u}}_{\rm {L}}\,i\displaystyle {\not }D\,u_{\rm {L}}+{\overline {u}}_{\rm {R}}\,i\displaystyle {\not }D\,u_{\rm {R}}+{\overline {d}}_{\rm {L}}\,i\displaystyle {\not }D\,d_{\rm {L}}+{\overline {d}}_{\rm {R}}\,i\displaystyle {\not }D\,d_{\rm {R}}+{\mathcal {L}}_{\mathrm {gluons} }~.} (Here, i is the imaginary unit and ⧸ D {\displaystyle \displaystyle {\not }D} the Dirac operator.) Defining q = [ u d ] , {\displaystyle q={\begin{bmatrix}u\\d\end{bmatrix}},} it can be written as L = q ¯ L i ⧸ D q L + q ¯ R i ⧸ D q R + L g l u o n s . {\displaystyle {\mathcal {L}}={\overline {q}}_{\rm {L}}\,i\displaystyle {\not }D\,q_{\rm {L}}+{\overline {q}}_{\rm {R}}\,i\displaystyle {\not }D\,q_{\rm {R}}+{\mathcal {L}}_{\mathrm {gluons} }~.} The Lagrangian is unchanged under a rotation of qL by any 2×2 unitary matrix L, and qR by any 2×2 unitary matrix R. This symmetry of the Lagrangian is called flavor chiral symmetry, and denoted as U(2)L × U(2)R. It decomposes into S U ( 2 ) L × S U ( 2 ) R × U ( 1 ) V × U ( 1 ) A .
{\displaystyle \mathrm {SU} (2)_{\text{L}}\times \mathrm {SU} (2)_{\text{R}}\times \mathrm {U} (1)_{V}\times \mathrm {U} (1)_{A}~.} The singlet vector symmetry, U(1)V, acts as q L → e i θ ( x ) q L q R → e i θ ( x ) q R , {\displaystyle q_{\text{L}}\rightarrow e^{i\theta (x)}q_{\text{L}}\qquad q_{\text{R}}\rightarrow e^{i\theta (x)}q_{\text{R}}~,} and is thus invariant under U(1) gauge symmetry. This corresponds to baryon number conservation. The singlet axial group U(1)A transforms as the following global transformation q L → e i θ q L q R → e − i θ q R . {\displaystyle q_{\text{L}}\rightarrow e^{i\theta }q_{\text{L}}\qquad q_{\text{R}}\rightarrow e^{-i\theta }q_{\text{R}}~.} However, it does not correspond to a conserved quantity, because the associated axial current is not conserved. It is explicitly violated by a quantum anomaly. The remaining chiral symmetry SU(2)L × SU(2)R turns out to be spontaneously broken by a quark condensate ⟨ q ¯ R a q L b ⟩ = v δ a b {\displaystyle \textstyle \langle {\bar {q}}_{\text{R}}^{a}q_{\text{L}}^{b}\rangle =v\delta ^{ab}} formed through nonperturbative action of QCD gluons, into the diagonal vector subgroup SU(2)V known as isospin. The Goldstone bosons corresponding to the three broken generators are the three pions. As a consequence, the effective theory of QCD bound states, like the baryons, must now include mass terms for them, ostensibly disallowed by unbroken chiral symmetry. Thus, this chiral symmetry breaking induces the bulk of hadron masses, such as those for the nucleons — in effect, the bulk of the mass of all visible matter. In the real world, because of the nonvanishing and differing masses of the quarks, SU(2)L × SU(2)R is only an approximate symmetry to begin with, and therefore the pions are not massless, but have small masses: they are pseudo-Goldstone bosons. === More flavors === For more "light" quark species, N flavors in general, the corresponding chiral symmetries are U(N)L × U(N)R, decomposing into S U ( N ) L × S U ( N ) R × U ( 1 ) V × U ( 1 ) A , {\displaystyle \mathrm {SU} (N)_{\text{L}}\times \mathrm {SU} (N)_{\text{R}}\times \mathrm {U} (1)_{V}\times \mathrm {U} (1)_{A}~,} and exhibiting a very analogous chiral symmetry breaking pattern. Most usually, N = 3 is taken, with the u, d, and s quarks taken to be light (the eightfold way) and thus approximately massless, so that the symmetry is meaningful to lowest order, while the other three quarks are sufficiently heavy that any residual chiral symmetry is barely visible for practical purposes. === An application in particle physics === In theoretical physics, the electroweak model breaks parity maximally. All its fermions are chiral Weyl fermions, which means that the charged weak gauge bosons W+ and W− only couple to left-handed quarks and leptons. Some theorists found this objectionable, and so conjectured a GUT extension of the weak force which has new, high energy W′ and Z′ bosons, which do couple with right-handed quarks and leptons: S U ( 2 ) W × U ( 1 ) Y Z 2 {\displaystyle {\frac {\mathrm {SU} (2)_{\text{W}}\times \mathrm {U} (1)_{Y}}{\mathbb {Z} _{2}}}} to S U ( 2 ) L × S U ( 2 ) R × U ( 1 ) B − L Z 2 . {\displaystyle {\frac {\mathrm {SU} (2)_{\text{L}}\times \mathrm {SU} (2)_{\text{R}}\times \mathrm {U} (1)_{B-L}}{\mathbb {Z} _{2}}}.} Here, SU(2)L (pronounced "SU(2) left") is SU(2)W from above, while B−L is the baryon number minus the lepton number.
The electric charge formula in this model is given by Q = T 3 L + T 3 R + B − L 2 ; {\displaystyle Q=T_{\rm {3L}}+T_{\rm {3R}}+{\frac {B-L}{2}}\,;} where T 3 L {\displaystyle \ T_{\rm {3L}}\ } and T 3 R {\displaystyle \ T_{\rm {3R}}\ } are the left and right weak isospin values of the fields in the theory. There is also the chromodynamic SU(3)C. The idea was to restore parity by introducing a left-right symmetry. This is a group extension of Z 2 {\displaystyle \mathbb {Z} _{2}} (the left-right symmetry) by S U ( 3 ) C × S U ( 2 ) L × S U ( 2 ) R × U ( 1 ) B − L Z 6 {\displaystyle {\frac {\mathrm {SU} (3)_{\text{C}}\times \mathrm {SU} (2)_{\text{L}}\times \mathrm {SU} (2)_{\text{R}}\times \mathrm {U} (1)_{B-L}}{\mathbb {Z} _{6}}}} to the semidirect product S U ( 3 ) C × S U ( 2 ) L × S U ( 2 ) R × U ( 1 ) B − L Z 6 ⋊ Z 2 . {\displaystyle {\frac {\mathrm {SU} (3)_{\text{C}}\times \mathrm {SU} (2)_{\text{L}}\times \mathrm {SU} (2)_{\text{R}}\times \mathrm {U} (1)_{B-L}}{\mathbb {Z} _{6}}}\rtimes \mathbb {Z} _{2}\ .} This has two connected components where Z 2 {\displaystyle \mathbb {Z} _{2}} acts as an automorphism, which is the composition of an involutive outer automorphism of SU(3)C with the interchange of the left and right copies of SU(2) with the reversal of U(1)B−L. It was shown by Mohapatra & Senjanovic (1975) that left-right symmetry can be spontaneously broken to give a chiral low energy theory, which is the Standard Model of Glashow, Weinberg, and Salam, and also connects the small observed neutrino masses to the breaking of left-right symmetry via the seesaw mechanism. In this setting, the chiral quarks ( 3 , 2 , 1 ) + 1 3 {\displaystyle (3,2,1)_{+{1 \over 3}}} and ( 3 ¯ , 1 , 2 ) − 1 3 {\displaystyle \left({\bar {3}},1,2\right)_{-{1 \over 3}}} are unified into an irreducible representation ("irrep") ( 3 , 2 , 1 ) + 1 3 ⊕ ( 3 ¯ , 1 , 2 ) − 1 3 . {\displaystyle (3,2,1)_{+{1 \over 3}}\oplus \left({\bar {3}},1,2\right)_{-{1 \over 3}}\ .} The leptons are also unified into an irreducible representation ( 1 , 2 , 1 ) − 1 ⊕ ( 1 , 1 , 2 ) + 1 . {\displaystyle (1,2,1)_{-1}\oplus (1,1,2)_{+1}\ .} The Higgs bosons needed to implement the breaking of left-right symmetry down to the Standard Model are ( 1 , 3 , 1 ) 2 ⊕ ( 1 , 1 , 3 ) 2 . {\displaystyle (1,3,1)_{2}\oplus (1,1,3)_{2}\ .} This then provides three sterile neutrinos which are perfectly consistent with current neutrino oscillation data. Within the seesaw mechanism, the sterile neutrinos become superheavy without affecting physics at low energies. Because the left–right symmetry is spontaneously broken, left–right models predict domain walls. This left-right symmetry idea first appeared in the Pati–Salam model (1974) and Mohapatra–Pati models (1975). == Chirality in materials science == Chirality in other branches of physics is often used for classifying and studying the properties of bodies and materials under external influences. Classification by chirality, as a special case of symmetry classification, allows for a better understanding of first-principles construction of molecules, crystals, quasicrystals, and more. An example is the homochirality of amino acids in all known forms of life, which can be reproduced in physical experiments under external influence. Optical activity (including circular dichroism and magnetic circular dichroism) of materials is determined by their chirality. Chiral physical systems are characterized by the absence of invariance under the parity operator. 
An ambiguity arises in defining chirality in physics depending on whether one compares directions of motion using the reflection or spatial inversion operation. Accordingly, one distinguishes between "true" chirality (which is invariant under the time-reversal operation) and "false" chirality (non-invariant under time reversal). Many physical quantities change sign under the time-reversal operation (e.g., velocity, power, electric current, magnetization). Accordingly, "false" chirality is so typical in physics that the term can be misleading, and it is clearer to speak of T-invariant and T-non-invariant chirality. Effects related to chirality are described using pseudoscalar or axial vector physical quantities in general, and particularly, in magnetically ordered media, are described using time-direction-dependent chirality. This approach is formalized using dichromatic symmetry groups. T-invariant chirality corresponds to the absence in the symmetry group of any symmetry operations that include spatial inversion 1 ¯ {\displaystyle {\bar {1}}} or reflection m, according to international notation. The criterion for T-non-invariant chirality is the presence of these symmetry operations, but only when combined with time reversal 1 ′ {\displaystyle 1'} , such as operations m′ or 1 ¯ ′ {\displaystyle {\bar {1}}'} . At the level of atomic structure of materials, one distinguishes vector, scalar, and other types of chirality depending on the direction/sign of triple and vector products of spins. == See also == Electroweak theory Chirality (chemistry) Chirality (mathematics) Chiral symmetry breaking Handedness Spinors Fermionic field § Dirac fields Sigma model Chiral model == Notes == == References == Walter Greiner; Berndt Müller (2000). Gauge Theory of Weak Interactions. Springer. ISBN 3-540-67672-4. Gordon L. Kane (1987). Modern Elementary Particle Physics. Perseus Books. ISBN 0-201-11749-5. Kondepudi, Dilip K.; Hegstrom, Roger A. (January 1990). "The Handedness of the Universe". Scientific American. 262 (1): 108–115. Bibcode:1990SciAm.262a.108H. doi:10.1038/scientificamerican0190-108. Winters, Jeffrey (November 1995). "Looking for the Right Hand". Discover. Retrieved 12 September 2015. == External links == History of science: parity violation Helicity, Chirality, Mass, and the Higgs (Quantum Diaries blog) Chirality vs helicity chart (Robert D. Klauber)
Wikipedia/Chirality_(physics)
String theory is a branch of theoretical physics. String theory may also refer to:
Concatenation theory, a topic in symbolic logic dealing with strings of characters
Music
String Theory (band), an American electronic music band
String Theory (Hanson album), 2018
String Theory (The Selecter album), 2013
Other media
"String Theory" (Heroes), retitled "Five Years Gone", an episode of the TV series Heroes
"String Theory" (The Shield), an episode of the TV series The Shield
String Theory (novels), a trilogy of Star Trek: Voyager novels
String Theory, a webcomic graphic novel based on the TV series Heroes
String Theory (artist collective), based in Gothenburg, Sweden, and Berlin, Germany
== See also ==
String theory landscape, the large number of possible false vacua in string theory
Knot theory, a branch of mathematical topology
Wikipedia/String_theory_(disambiguation)
In general relativity, a vacuum solution is a Lorentzian manifold whose Einstein tensor vanishes identically. According to the Einstein field equation, this means that the stress–energy tensor also vanishes identically, so that no matter or non-gravitational fields are present. These are distinct from the electrovacuum solutions, which take into account the electromagnetic field in addition to the gravitational field. Vacuum solutions are also distinct from the lambdavacuum solutions, where the only term in the stress–energy tensor is the cosmological constant term (and thus, the lambdavacuums can be taken as cosmological models). More generally, a vacuum region in a Lorentzian manifold is a region in which the Einstein tensor vanishes. Vacuum solutions are a special case of the more general exact solutions in general relativity. == Equivalent conditions == It is a mathematical fact that the Einstein tensor vanishes if and only if the Ricci tensor vanishes. This follows from the fact that these two second rank tensors stand in a kind of dual relationship; they are the trace reverse of each other: G a b = R a b − R 2 g a b , R a b = G a b − G 2 g a b {\displaystyle G_{ab}=R_{ab}-{\frac {R}{2}}\,g_{ab},\;\;R_{ab}=G_{ab}-{\frac {G}{2}}\,g_{ab}} where the traces are R = R a a , G = G a a = − R {\displaystyle R={R^{a}}_{a},\;\;G={G^{a}}_{a}=-R} . A third equivalent condition follows from the Ricci decomposition of the Riemann curvature tensor as a sum of the Weyl curvature tensor plus terms built out of the Ricci tensor: the Weyl and Riemann tensors agree, R a b c d = C a b c d {\displaystyle R_{abcd}=C_{abcd}} , in some region if and only if it is a vacuum region. == Gravitational energy == Since T a b = 0 {\displaystyle T^{ab}=0} in a vacuum region, it might seem that according to general relativity, vacuum regions must contain no energy. But the gravitational field can do work, so we must expect the gravitational field itself to possess energy, and it does. However, determining the precise location of this gravitational field energy is technically problematical in general relativity, which by its very nature does not admit any clean separation into a universal gravitational interaction and "all the rest". The fact that the gravitational field itself possesses energy yields a way to understand the nonlinearity of the Einstein field equation: this gravitational field energy itself produces more gravity. (This is described as "the gravity of gravity", or by saying that "gravity gravitates".) This means that the gravitational field outside the Sun is a bit stronger according to general relativity than it is according to Newton's theory. == Examples == Well-known examples of explicit vacuum solutions include:
Minkowski spacetime (which describes empty space with no cosmological constant)
Milne model (which is a model developed by E. A. Milne describing an empty universe which has no curvature)
Schwarzschild vacuum (which describes the spacetime geometry around a spherical mass)
Kerr vacuum (which describes the geometry around a rotating object)
Taub–NUT vacuum (a famous counterexample describing the exterior gravitational field of an isolated object with strange properties)
Kerns–Wild vacuum (Robert M. Kerns and Walter J. Wild 1982) (a Schwarzschild object immersed in an ambient "almost uniform" gravitational field)
double Kerr vacuum (two Kerr objects sharing the same axis of rotation, but held apart by unphysical zero active gravitational mass "cables" going out to suspension points infinitely removed)
Khan–Penrose vacuum (K. A. Khan and Roger Penrose 1971) (a simple colliding plane wave model)
Oszváth–Schücking vacuum (the circularly polarized sinusoidal gravitational wave, another famous counterexample)
Kasner metric (an anisotropic solution, used to study gravitational chaos in three or more dimensions)
These all belong to one or more general families of solutions:
the Weyl vacua (Hermann Weyl) (the family of all static vacuum solutions)
the Beck vacua (Guido Beck 1925) (the family of all cylindrically symmetric nonrotating vacuum solutions)
the Ernst vacua (Frederick J. Ernst 1968) (the family of all stationary axisymmetric vacuum solutions)
the Ehlers vacua (Jürgen Ehlers) (the family of all cylindrically symmetric vacuum solutions)
the Szekeres vacua (George Szekeres) (the family of all colliding gravitational plane wave models)
the Gowdy vacua (Robert H. Gowdy) (cosmological models constructed using gravitational waves)
Several of the families mentioned here, members of which are obtained by solving an appropriate linear or nonlinear, real or complex partial differential equation, turn out to be very closely related, in perhaps surprising ways. In addition to these, we also have the vacuum pp-wave spacetimes, which include the gravitational plane waves. == See also == Introduction to the mathematics of general relativity Topological defect == References == === Sources === Stephani, Hans, ed. (2003). Exact solutions of Einstein's field equations (PDF). Cambridge monographs on mathematical physics (2nd ed.). Cambridge, UK; New York: Cambridge University Press. ISBN 978-0-521-46136-8.
Wikipedia/Vacuum_solution
The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory is a book by Brian Greene published in 1999, which introduces string and superstring theory, and provides a comprehensive though non-technical assessment of the theory and some of its shortcomings. In 2000, it won the Royal Society Prize for Science Books and was a finalist for the Pulitzer Prize for General Nonfiction. A new edition was released in 2003, with an updated preface. == Summary == === Part I: The Edge of Knowledge === Chapter 1, "Tied Up With String", briefly introduces the conflicts between our current theories, and how they may be resolved. He introduces the building blocks of matter, electrons and quarks, and the forces that govern them. === Part II: The Dilemma of Space, Time, and the Quantum === Chapter 2, "Space, Time, and the Eye of the Beholder" explains Albert Einstein's special relativity, which united James Clerk Maxwell's electrodynamics with Galileo's principle of relativity. Einstein established that the speed of light is a universal constant, and that the laws of physics are the same for all observers in relative motion. As a consequence, Isaac Newton's absolute time and space were replaced by a dynamic spacetime. Chapter 3, "Of Warps and Ripples", introduces Einstein's general relativity, which resolved the conflict between Newton's theory of gravity and special relativity. General relativity explains gravity as the curvature of spacetime. Chapter 4, "Microscopic Weirdness", introduces quantum mechanics. Greene begins with Max Planck's 1900 proposal that energy is absorbed and emitted in discrete units, or quanta. In 1905, Einstein used quantum theory to explain the photoelectric effect, the extraction of electrons from a metal by light. Greene uses the double-slit experiment to illustrate the wave-particle duality of light. Louis de Broglie extended this to include matter. Werner Heisenberg's uncertainty principle says that we cannot simultaneously know the position and velocity of a particle, and the more we know about one, the less we know about the other. Chapter 5, "The Need For a New Theory: General Relativity vs. Quantum Mechanics" explains the conflict between the two pillars of modern physics. === Part III: The Cosmic Symphony === Chapter 6, "Nothing But Music: The Essentials of String Theory" offers a brief history of string theory, starting with Gabriele Veneziano's work on the strong nuclear force. String theory replaces the conception of electrons and quarks as point particles with tiny, vibrating loops of string. One such vibration describes the properties predicted for the graviton, the postulated quantum of gravity. Chapter 7, "The 'Super' in Superstrings" discusses the importance of symmetry in physics, and the possibility of supersymmetry. Chapter 8, "More Dimensions than Meets the Eye" discusses Theodor Kaluza's proposed unification of general relativity and electromagnetism, which required an extra dimension of space. The idea was elaborated on by the mathematician Oskar Klein. Chapter 9, "The Smoking Gun: Experimental Signatures" discusses criticisms of string theory, namely that it has not yet yielded testable predictions. Greene explains how this may change in the near future. === Part IV: String Theory and the Fabric of Spacetime === Chapter 10, "Quantum Geometry" discusses Calabi-Yau spaces and their applications.
Chapter 11, "Tearing the Fabric of Space" discusses Greene's own work in string theory, and how strings could repair tears in the fabric of space Chapter 12, "Beyond Strings: In Search of M-Theory" discusses the different versions of string theory, and how they might be pointing towards a single theory, mysteriously called M-Theory. Chapter 13, "Black Holes: A String/ M-Theory Perspective" looks at mysteries of black holes and how they might be resolved by string theory. Greene discusses Stephen Hawking and Jacob Bekenstein's discovery of black hole thermodynamics and Hawking's discovery of Hawking radiation. Chapter 14, "Reflections on Cosmology" gives an overview of the standard Big Bang model and the refinements of inflationary cosmology. String theory could answer questions such as whether the universe began with a singularity. === Part V: Unification in the Twenty-First Century === Chapter 15,"Prospects" looks at questions string theory might answer, such as the nature of space and time. He speculates about the future of the theory. == Reception == George Johson wrote in The New York Times: Writing about this area of physics, as Greene does, without assuming that the reader has any mathematical background is the hardest challenge of popular science writing. Michio Kaku, a physicist at City College in New York, provided a very nice introduction to superstrings in Beyond Einstein: The Cosmic Quest for the Theory of the Universe. But Greene goes beyond Kaku's book, exploring the ideas and recent developments with a depth and clarity I wouldn't have thought possible. Like Simon Singh in Fermat's Enigma, he has a rare ability to explain even the most evanescent ideas in a way that gives at least the illusion of understanding, enough of a mental toehold to get on with the climb. John H. Schwarz wrote: Since he is an expert in the subject, Greene's description of the current state of understanding of string theory is reliable. I am not aware of any errors in his depiction of the subject. He writes with a flair that is rare in the scientific world, and which should make the book very appealing to the lay reader. Indeed, following the publication of this book, he has become something of a media celebrity. Ian McEwan included the book in his canon of science writing. Steven Weinberg included it on his list of the thirteen best science books for the general reader. == Adaptations == The Elegant Universe was adapted into an Emmy Award-winning three-hour program in three parts for television broadcast by David Hickman in late 2003 on the PBS series NOVA. Einstein's Dream String's The Thing Welcome To The 11th Dimension The Elegant Universe was also interpreted by choreographer Karole Armitage, of Armitage Gone! Dance, in New York City. A performance of the work-in-progress formed part of the inaugural World Science Festival. == See also == The Fabric of the Cosmos (2004) The Road to Reality (2004) == Footnotes == == References == Brian Greene, "The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory", Vintage Series, Random House Inc., February 2000 ISBN 0-375-70811-1 == External links == "The Elegant Universe". PBS. Perkowitz, Sidney (11 June 1999). "The Seductive Melody of the Strings". Science. 284 (5421): 1780. doi:10.1126/science.284.5421.1780a. JSTOR 2898035. S2CID 119020033. Brown, Laurie M. (June 2004). "Reviewed Work: The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory by Brian Greene". Isis. 95 (2): 327. 
doi:10.1086/426259. JSTOR 10.1086/426259. Santiago, Luis E. Ibáñez (November 2000). "Un viaje hacia la teoría final". Revista de Libros de la Fundación Caja Madrid (in Spanish) (47): 28. JSTOR 30229390.
Wikipedia/The_Elegant_Universe:_Superstrings,_Hidden_Dimensions,_and_the_Quest_for_the_Ultimate_Theory
The non-critical string theory describes the relativistic string without enforcing the critical dimension. Although this allows the construction of a string theory in 4 spacetime dimensions, such a theory usually does not describe a Lorentz invariant background. However, there are recent developments which make possible Lorentz invariant quantization of string theory in 4-dimensional Minkowski space-time. There are several applications of the non-critical string. Through the AdS/CFT correspondence it provides a holographic description of gauge theories which are asymptotically free. It may then have applications to the study of QCD, the theory of strong interactions between quarks. == The critical dimension and central charge == In order for a string theory to be consistent, the worldsheet theory must be conformally invariant. The obstruction to conformal symmetry is known as the Weyl anomaly and is proportional to the central charge of the worldsheet theory. In order to preserve conformal symmetry, the Weyl anomaly, and thus the central charge, must vanish. For the bosonic string this can be accomplished by a worldsheet theory consisting of 26 free bosons. Since each boson is interpreted as a flat spacetime dimension, the critical dimension of the bosonic string is 26. A similar logic for the superstring results in 10 free bosons (and 10 free fermions as required by worldsheet supersymmetry). The bosons are again interpreted as spacetime dimensions and so the critical dimension for the superstring is 10. A string theory which is formulated in the critical dimension is called a critical string. The non-critical string is not formulated with the critical dimension, but nonetheless has vanishing Weyl anomaly. A worldsheet theory with the correct central charge can be constructed by introducing a non-trivial target space, commonly by giving an expectation value to the dilaton which varies linearly along some spacetime direction. (From the point of view of the worldsheet CFT, this corresponds to having a background charge.) For this reason non-critical string theory is sometimes called the linear dilaton theory. Since the dilaton is related to the string coupling constant, this theory contains a region where the coupling is weak (and so perturbation theory is valid) and another region where the theory is strongly coupled. For a dilaton varying along a spacelike direction, the dimension of the theory is less than the critical dimension and so the theory is termed subcritical. For a dilaton varying along a timelike direction, the dimension is greater than the critical dimension and the theory is termed supercritical. The dilaton can also vary along a lightlike direction, in which case the dimension is equal to the critical dimension and the theory is a critical string theory. == Two-dimensional string theory == Perhaps the most studied example of non-critical string theory is that with two-dimensional target space. While clearly not of phenomenological interest, string theories in two dimensions serve as important toy models. They allow one to probe interesting concepts which would be computationally intractable in a more realistic scenario. These models often have fully non-perturbative descriptions in the form of the quantum mechanics of large matrices. Such a description, known as the c = 1 matrix model, captures the dynamics of bosonic string theory in two dimensions. Of much recent interest are matrix models of the two-dimensional Type 0 string theories.
These "matrix models" are understood as describing the dynamics of open strings lying on D-branes in these theories. Degrees of freedom associated with closed strings, and spacetime itself, appear as emergent phenomena, providing an important example of open string tachyon condensation in string theory. == See also == String theory, for general information about critical superstrings Weyl anomaly Central charge Liouville gravity == References ==
Wikipedia/Non-critical_string_theory
In theoretical physics, the superconformal algebra is a graded Lie algebra or superalgebra that combines the conformal algebra and supersymmetry. In two dimensions, the superconformal algebra is infinite-dimensional. In higher dimensions, superconformal algebras are finite-dimensional and generate the superconformal group (in two Euclidean dimensions, the Lie superalgebra does not generate any Lie supergroup). == Superconformal algebra in dimension greater than 2 == The conformal group of the ( p + q ) {\displaystyle (p+q)} -dimensional space R p , q {\displaystyle \mathbb {R} ^{p,q}} is S O ( p + 1 , q + 1 ) {\displaystyle SO(p+1,q+1)} and its Lie algebra is s o ( p + 1 , q + 1 ) {\displaystyle {\mathfrak {so}}(p+1,q+1)} . The superconformal algebra is a Lie superalgebra containing the bosonic factor s o ( p + 1 , q + 1 ) {\displaystyle {\mathfrak {so}}(p+1,q+1)} and whose odd generators transform in spinor representations of s o ( p + 1 , q + 1 ) {\displaystyle {\mathfrak {so}}(p+1,q+1)} . Given Kac's classification of finite-dimensional simple Lie superalgebras, this can only happen for small values of p {\displaystyle p} and q {\displaystyle q} . A (possibly incomplete) list is o s p ∗ ( 2 N | 2 , 2 ) {\displaystyle {\mathfrak {osp}}^{*}(2N|2,2)} in 3+0D thanks to u s p ( 2 , 2 ) ≃ s o ( 4 , 1 ) {\displaystyle {\mathfrak {usp}}(2,2)\simeq {\mathfrak {so}}(4,1)} ; o s p ( N | 4 ) {\displaystyle {\mathfrak {osp}}(N|4)} in 2+1D thanks to s p ( 4 , R ) ≃ s o ( 3 , 2 ) {\displaystyle {\mathfrak {sp}}(4,\mathbb {R} )\simeq {\mathfrak {so}}(3,2)} ; s u ∗ ( 2 N | 4 ) {\displaystyle {\mathfrak {su}}^{*}(2N|4)} in 4+0D thanks to s u ∗ ( 4 ) ≃ s o ( 5 , 1 ) {\displaystyle {\mathfrak {su}}^{*}(4)\simeq {\mathfrak {so}}(5,1)} ; s u ( 2 , 2 | N ) {\displaystyle {\mathfrak {su}}(2,2|N)} in 3+1D thanks to s u ( 2 , 2 ) ≃ s o ( 4 , 2 ) {\displaystyle {\mathfrak {su}}(2,2)\simeq {\mathfrak {so}}(4,2)} ; s l ( 4 | N ) {\displaystyle {\mathfrak {sl}}(4|N)} in 2+2D thanks to s l ( 4 , R ) ≃ s o ( 3 , 3 ) {\displaystyle {\mathfrak {sl}}(4,\mathbb {R} )\simeq {\mathfrak {so}}(3,3)} ; real forms of F ( 4 ) {\displaystyle F(4)} in five dimensions o s p ( 8 ∗ | 2 N ) {\displaystyle {\mathfrak {osp}}(8^{*}|2N)} in 5+1D, thanks to the fact that spinor and fundamental representations of s o ( 8 , C ) {\displaystyle {\mathfrak {so}}(8,\mathbb {C} )} are mapped to each other by outer automorphisms. == Superconformal algebra in 3+1D == According to the superconformal algebra with N {\displaystyle {\mathcal {N}}} supersymmetries in 3+1 dimensions is given by the bosonic generators P μ {\displaystyle P_{\mu }} , D {\displaystyle D} , M μ ν {\displaystyle M_{\mu \nu }} , K μ {\displaystyle K_{\mu }} , the U(1) R-symmetry A {\displaystyle A} , the SU(N) R-symmetry T j i {\displaystyle T_{j}^{i}} and the fermionic generators Q α i {\displaystyle Q^{\alpha i}} , Q ¯ i α ˙ {\displaystyle {\overline {Q}}_{i}^{\dot {\alpha }}} , S i α {\displaystyle S_{i}^{\alpha }} and S ¯ α ˙ i {\displaystyle {\overline {S}}^{{\dot {\alpha }}i}} . Here, μ , ν , ρ , … {\displaystyle \mu ,\nu ,\rho ,\dots } denote spacetime indices; α , β , … {\displaystyle \alpha ,\beta ,\dots } left-handed Weyl spinor indices; α ˙ , β ˙ , … {\displaystyle {\dot {\alpha }},{\dot {\beta }},\dots } right-handed Weyl spinor indices; and i , j , … {\displaystyle i,j,\dots } the internal R-symmetry indices. 
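A concrete way to see the bosonic part of this algebra at work is the standard realization of the conformal generators as differential operators on R^{3,1}, acting on scalar functions of vanishing scaling dimension. The following illustrative sympy sketch (with operator conventions chosen here to match the signs of the brackets listed in the next paragraph) verifies three of those brackets:

import sympy as sp

# Coordinates on R^{3,1}; Minkowski metric with signature (-,+,+,+).
X = list(sp.symbols('x0:4'))
eta = sp.diag(-1, 1, 1, 1)
f = sp.Function('f')(*X)

x_lo = lambda mu: sum(eta[mu, nu] * X[nu] for nu in range(4))  # x_mu
xsq = sum(x_lo(mu) * X[mu] for mu in range(4))                 # x^nu x_nu

# Differential-operator realization (scaling dimension zero):
P = lambda mu: (lambda h: sp.diff(h, X[mu]))
D = lambda h: sum(X[nu] * sp.diff(h, X[nu]) for nu in range(4))
M = lambda mu, nu: (lambda h: x_lo(mu) * sp.diff(h, X[nu])
                    - x_lo(nu) * sp.diff(h, X[mu]))
K = lambda mu: (lambda h: 2 * x_lo(mu) * D(h) - xsq * sp.diff(h, X[mu]))

comm = lambda A, B: A(B(f)) - B(A(f))

# [D, P_mu] = -P_mu  and  [D, K_mu] = +K_mu
assert sp.simplify(comm(D, P(1)) + P(1)(f)) == 0
assert sp.simplify(comm(D, K(1)) - K(1)(f)) == 0

# [P_mu, K_nu] = -2 M_{mu nu} + 2 eta_{mu nu} D
for mu, nu in [(0, 1), (2, 2)]:
    lhs = comm(P(mu), K(nu))
    rhs = -2 * M(mu, nu)(f) + 2 * eta[mu, nu] * D(f)
    assert sp.simplify(lhs - rhs) == 0
print("brackets verified")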
The Lie superbrackets of the bosonic conformal algebra are given by [ M μ ν , M ρ σ ] = η ν ρ M μ σ − η μ ρ M ν σ + η ν σ M ρ μ − η μ σ M ρ ν {\displaystyle [M_{\mu \nu },M_{\rho \sigma }]=\eta _{\nu \rho }M_{\mu \sigma }-\eta _{\mu \rho }M_{\nu \sigma }+\eta _{\nu \sigma }M_{\rho \mu }-\eta _{\mu \sigma }M_{\rho \nu }} [ M μ ν , P ρ ] = η ν ρ P μ − η μ ρ P ν {\displaystyle [M_{\mu \nu },P_{\rho }]=\eta _{\nu \rho }P_{\mu }-\eta _{\mu \rho }P_{\nu }} [ M μ ν , K ρ ] = η ν ρ K μ − η μ ρ K ν {\displaystyle [M_{\mu \nu },K_{\rho }]=\eta _{\nu \rho }K_{\mu }-\eta _{\mu \rho }K_{\nu }} [ M μ ν , D ] = 0 {\displaystyle [M_{\mu \nu },D]=0} [ D , P ρ ] = − P ρ {\displaystyle [D,P_{\rho }]=-P_{\rho }} [ D , K ρ ] = + K ρ {\displaystyle [D,K_{\rho }]=+K_{\rho }} [ P μ , K ν ] = − 2 M μ ν + 2 η μ ν D {\displaystyle [P_{\mu },K_{\nu }]=-2M_{\mu \nu }+2\eta _{\mu \nu }D} [ K μ , K ν ] = 0 {\displaystyle [K_{\mu },K_{\nu }]=0} [ P μ , P ν ] = 0 {\displaystyle [P_{\mu },P_{\nu }]=0} where η is the Minkowski metric; while the ones for the fermionic generators are: { Q α i , Q ¯ β ˙ j } = 2 δ i j σ α β ˙ μ P μ {\displaystyle \left\{Q_{\alpha i},{\overline {Q}}_{\dot {\beta }}^{j}\right\}=2\delta _{i}^{j}\sigma _{\alpha {\dot {\beta }}}^{\mu }P_{\mu }} { Q , Q } = { Q ¯ , Q ¯ } = 0 {\displaystyle \left\{Q,Q\right\}=\left\{{\overline {Q}},{\overline {Q}}\right\}=0} { S α i , S ¯ β ˙ j } = 2 δ j i σ α β ˙ μ K μ {\displaystyle \left\{S_{\alpha }^{i},{\overline {S}}_{{\dot {\beta }}j}\right\}=2\delta _{j}^{i}\sigma _{\alpha {\dot {\beta }}}^{\mu }K_{\mu }} { S , S } = { S ¯ , S ¯ } = 0 {\displaystyle \left\{S,S\right\}=\left\{{\overline {S}},{\overline {S}}\right\}=0} The mixed pairs of opposite chirality anticommute to zero, { Q , S ¯ } = { Q ¯ , S } = 0 {\displaystyle \left\{Q,{\overline {S}}\right\}=\left\{{\overline {Q}},S\right\}=0} , while the remaining anticommutator { Q , S } {\displaystyle \left\{Q,S\right\}} (and its conjugate { Q ¯ , S ¯ } {\displaystyle \left\{{\overline {Q}},{\overline {S}}\right\}} ) does not vanish but closes on the bosonic generators, yielding a linear combination of M μ ν {\displaystyle M_{\mu \nu }} , D {\displaystyle D} and the R-symmetry generators A {\displaystyle A} and T j i {\displaystyle T_{j}^{i}} , with coefficients depending on the normalization conventions. The bosonic conformal generators do not carry any R-charges, as they commute with the R-symmetry generators: [ A , M ] = [ A , D ] = [ A , P ] = [ A , K ] = 0 {\displaystyle [A,M]=[A,D]=[A,P]=[A,K]=0} [ T , M ] = [ T , D ] = [ T , P ] = [ T , K ] = 0 {\displaystyle [T,M]=[T,D]=[T,P]=[T,K]=0} But the fermionic generators do carry R-charge: [ A , Q ] = − 1 2 Q {\displaystyle [A,Q]=-{\frac {1}{2}}Q} [ A , Q ¯ ] = 1 2 Q ¯ {\displaystyle [A,{\overline {Q}}]={\frac {1}{2}}{\overline {Q}}} [ A , S ] = 1 2 S {\displaystyle [A,S]={\frac {1}{2}}S} [ A , S ¯ ] = − 1 2 S ¯ {\displaystyle [A,{\overline {S}}]=-{\frac {1}{2}}{\overline {S}}} [ T j i , Q k ] = − δ k i Q j {\displaystyle [T_{j}^{i},Q_{k}]=-\delta _{k}^{i}Q_{j}} [ T j i , Q ¯ k ] = δ j k Q ¯ i {\displaystyle [T_{j}^{i},{\overline {Q}}^{k}]=\delta _{j}^{k}{\overline {Q}}^{i}} [ T j i , S k ] = δ j k S i {\displaystyle [T_{j}^{i},S^{k}]=\delta _{j}^{k}S^{i}} [ T j i , S ¯ k ] = − δ k i S ¯ j {\displaystyle [T_{j}^{i},{\overline {S}}_{k}]=-\delta _{k}^{i}{\overline {S}}_{j}} Under bosonic conformal transformations, the fermionic generators transform as: [ D , Q ] = − 1 2 Q {\displaystyle [D,Q]=-{\frac {1}{2}}Q} [ D , Q ¯ ] = − 1 2 Q ¯ {\displaystyle [D,{\overline {Q}}]=-{\frac {1}{2}}{\overline {Q}}} [ D , S ] = 1 2 S {\displaystyle [D,S]={\frac {1}{2}}S} [ D , S ¯ ] = 1 2 S ¯ {\displaystyle [D,{\overline {S}}]={\frac {1}{2}}{\overline {S}}} [ P , Q ] = [ P , Q ¯ ] = 0 {\displaystyle [P,Q]=[P,{\overline {Q}}]=0} [ K , S ] = [ K , S ¯ ] = 0 {\displaystyle [K,S]=[K,{\overline {S}}]=0} == Superconformal algebra in 2D == There are two possible algebras with minimal supersymmetry in two dimensions: a Neveu–Schwarz algebra and a Ramond algebra.
Additional supersymmetry is possible, for instance the N = 2 superconformal algebra. == See also == Conformal symmetry Super Virasoro algebra Supersymmetry algebra == References ==
Wikipedia/Superconformal_algebra
In theoretical physics, Nordström's theory of gravitation was a predecessor of general relativity. Strictly speaking, there were actually two distinct theories proposed by the Finnish theoretical physicist Gunnar Nordström, in 1912 and 1913, respectively. The first was quickly dismissed, but the second became the first known example of a metric theory of gravitation, in which the effects of gravitation are treated entirely in terms of the geometry of a curved spacetime. Neither of Nordström's theories are in agreement with observation and experiment. Nonetheless, the first remains of interest insofar as it led to the second. The second remains of interest both as an important milestone on the road to the current theory of gravitation, general relativity, and as a simple example of a self-consistent relativistic theory of gravitation. As an example, this theory is particularly useful in the context of pedagogical discussions of how to derive and test the predictions of a metric theory of gravitation. == Development of the theories == Nordström's theories arose at a time when several leading physicists, including Nordström in Helsinki, Max Abraham in Milan, Gustav Mie in Greifswald, Germany, and Albert Einstein in Prague, were all trying to create competing relativistic theories of gravitation. All of these researchers began by trying to suitably modify the existing theory, the field theory version of Newton's theory of gravitation. In this theory, the field equation is the Poisson equation Δ ϕ = 4 π ρ {\displaystyle \Delta \phi =4\pi \rho } , where ϕ {\displaystyle \phi } is the gravitational potential and ρ {\displaystyle \rho } is the density of matter, augmented by an equation of motion for a test particle in an ambient gravitational field, which we can derive from Newton's force law and which states that the acceleration of the test particle is given by the gradient of the potential d u → d t = − ∇ ϕ {\displaystyle {\frac {d{\vec {u}}}{dt}}=-\nabla \phi } This theory is not relativistic because the equation of motion refers to coordinate time rather than proper time, and because, should the matter in some isolated object suddenly be redistributed by an explosion, the field equation requires that the potential everywhere in "space" must be "updated" instantaneously, which violates the principle that any "news" which has a physical effect (in this case, an effect on test particle motion far from the source of the field) cannot be transmitted faster than the speed of light. Einstein's former calculus professor, Hermann Minkowski had sketched a vector theory of gravitation as early as 1908, but in 1912, Abraham pointed out that no such theory would admit stable planetary orbits. This was one reason why Nordström turned to scalar theories of gravitation (while Einstein explored tensor theories). Nordström's first attempt to propose a suitable relativistic scalar field equation of gravitation was the simplest and most natural choice imaginable: simply replace the Laplacian in the Newtonian field equation with the D'Alembertian or wave operator, which gives ◻ ϕ = 4 π ρ {\displaystyle \Box \phi =4\pi \,\rho } . This has the result of changing the vacuum field equation from the Laplace equation to the wave equation, which means that any "news" concerning redistribution of matter in one location is transmitted at the speed of light to other locations. 
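This propagation behavior is easy to exhibit directly: any outgoing profile of the form f(t − r)/r solves the wave equation away from the origin, so a sudden redistribution of the source shows up at radius r only after the light-travel time. A minimal symbolic check (an illustrative sympy sketch, not part of the original presentation; units with c = 1):

import sympy as sp

# An outgoing spherically symmetric disturbance phi = f(t - r)/r.
t, r = sp.symbols('t r', positive=True)
f = sp.Function('f')

phi = f(t - r) / r

# Wave operator for a spherically symmetric field:
# Box(phi) = -phi_tt + (1/r^2) d/dr ( r^2 dphi/dr )
box_phi = -sp.diff(phi, t, 2) + sp.diff(r**2 * sp.diff(phi, r), r) / r**2

print(sp.simplify(box_phi))  # prints 0: any profile f travels outward at speed 1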
Correspondingly, the simplest guess for a suitable equation of motion for test particles might seem to be u ˙ a = − ϕ , a {\displaystyle {\dot {u}}_{a}=-\phi _{,a}} where the dot signifies differentiation with respect to proper time, subscripts following the comma denote partial differentiation with respect to the indexed coordinate, and where u a {\displaystyle u^{a}} is the velocity four-vector of the test particle. This force law had earlier been proposed by Abraham, but it does not preserve the norm of the four-velocity as is required by the definition of proper time, so Nordström instead proposed u ˙ a = − ϕ , a − ϕ ˙ u a {\displaystyle {\dot {u}}_{a}=-\phi _{,a}-{\dot {\phi }}\,u_{a}} . However, this theory is unacceptable for a variety of reasons. Two objections are theoretical. First, this theory is not derivable from a Lagrangian, unlike the Newtonian field theory (or most metric theories of gravitation). Second, the proposed field equation is linear. But by analogy with electromagnetism, we should expect the gravitational field to carry energy, and on the basis of Einstein's work on relativity theory, we should expect this energy to be equivalent to mass and therefore, to gravitate. This implies that the field equation should be nonlinear. Another objection is more practical: this theory disagrees drastically with observation. Einstein and von Laue proposed that the problem might lie with the field equation, which, they suggested, should have the linear form F T m a t t e r = ρ {\displaystyle FT_{\rm {matter}}=\rho } , where F is some yet unknown function of ϕ {\displaystyle \phi } , and where Tmatter is the trace of the stress–energy tensor describing the density, momentum, and stress of any matter present. In response to these criticisms, Nordström proposed his second theory in 1913. From the proportionality of inertial and gravitational mass, he deduced that the field equation should be ϕ ◻ ϕ = − 4 π T m a t t e r {\displaystyle \phi \,\Box \phi =-4\pi \,T_{\rm {matter}}} , which is nonlinear. Nordström now took the equation of motion to be d ( ϕ u a ) d s = − ϕ , a {\displaystyle {\frac {d\left(\phi \,u_{a}\right)}{ds}}=-\phi _{,a}} or ϕ u ˙ a = − ϕ , a − ϕ ˙ u a {\displaystyle \phi \,{\dot {u}}_{a}=-\phi _{,a}-{\dot {\phi }}\,u_{a}} . Einstein took the first opportunity to proclaim his approval of the new theory. In a keynote address to the annual meeting of the Society of German Scientists and Physicians, given in Vienna on September 23, 1913, Einstein surveyed the state of the art, declaring that only his own work with Marcel Grossmann and the second theory of Nordström were worthy of consideration. (Mie, who was in the audience, rose to protest, but Einstein explained his criteria and Mie was forced to admit that his own theory did not meet them.) Einstein considered the special case when the only matter present is a cloud of dust (that is, a perfect fluid in which the pressure is assumed to be negligible). 
He argued that the contribution of this matter to the stress–energy tensor should be: ( T m a t t e r ) a b = ϕ ρ u a u b {\displaystyle \left(T_{\rm {matter}}\right)_{ab}=\phi \,\rho \,u_{a}\,u_{b}} He then derived an expression for the stress–energy tensor of the gravitational field in Nordström's second theory, 4 π ( T g r a v ) a b = ϕ , a ϕ , b − 1 / 2 η a b ϕ , m ϕ , m {\displaystyle 4\pi \,\left(T_{\rm {grav}}\right)_{ab}=\phi _{,a}\,\phi _{,b}-1/2\,\eta _{ab}\,\phi _{,m}\,\phi ^{,m}} which he proposed should hold in general, and showed that the sum of the contributions to the stress–energy tensor from the gravitational field energy and from matter would be conserved, as should be the case. Furthermore, he showed, the field equation of Nordström's second theory follows from the Lagrangian L = 1 8 π η a b ϕ , a ϕ , b − ρ ϕ {\displaystyle L={\frac {1}{8\pi }}\,\eta ^{ab}\,\phi _{,a}\,\phi _{,b}-\rho \,\phi } Since Nordström's equation of motion for test particles in an ambient gravitational field also follows from a Lagrangian, this shows that Nordström's second theory can be derived from an action principle and also shows that it obeys other properties we must demand from a self-consistent field theory. Meanwhile, a gifted Dutch student, Adriaan Fokker, had written a Ph.D. thesis under Hendrik Lorentz in which he derived what is now called the Fokker–Planck equation. Lorentz, delighted by his former student's success, arranged for Fokker to pursue post-doctoral study with Einstein in Prague. The result was a historic paper which appeared in 1914, in which Einstein and Fokker observed that the Lagrangian for Nordström's equation of motion for test particles, L = ϕ 2 η a b x ˙ a x ˙ b {\displaystyle L=\phi ^{2}\,\eta _{ab}\,{\dot {x}}^{a}\,{\dot {x}}^{b}} , is the geodesic Lagrangian for a curved Lorentzian manifold with metric tensor g a b = ϕ 2 η a b {\displaystyle g_{ab}=\phi ^{2}\,\eta _{ab}} . If we adopt Cartesian coordinates with line element d σ 2 = η a b d x a d x b {\displaystyle d\sigma ^{2}=\eta _{ab}\,dx^{a}\,dx^{b}} with corresponding wave operator ◻ {\displaystyle \Box } on the flat background, or Minkowski spacetime, so that the line element of the curved spacetime is d s 2 = ϕ 2 η a b d x a d x b {\displaystyle ds^{2}=\phi ^{2}\,\eta _{ab}\,dx^{a}\,dx^{b}} , then the Ricci scalar of this curved spacetime is just R = − 6 ◻ ϕ ϕ 3 {\displaystyle R=-{\frac {6\,\Box \phi }{\phi ^{3}}}} Therefore, Nordström's field equation becomes simply R = 24 π T {\displaystyle R=24\pi \,T} where on the right hand side, we have taken the trace of the stress–energy tensor (with contributions from matter plus any non-gravitational fields) using the metric tensor g a b {\displaystyle g_{ab}} . This is a historic result, because here for the first time we have a field equation in which on the left hand side stands a purely geometrical quantity (the Ricci scalar is the trace of the Ricci tensor, which is itself a kind of trace of the fourth rank Riemann curvature tensor), and on the right hand side stands a purely physical quantity, the trace of the stress–energy tensor. Einstein gleefully pointed out that this equation now takes the form which he had earlier proposed with von Laue, and gives a concrete example of a class of theories which he had studied with Grossmann. Some time later, Hermann Weyl introduced the Weyl curvature tensor C a b c d {\displaystyle C_{abcd}} , which measures the deviation of a Lorentzian manifold from being conformally flat, i.e.
with metric tensor having the form of the product of some scalar function with the metric tensor of flat spacetime. This is exactly the special form of the metric proposed in Nordström's second theory, so the entire content of this theory can be summarized in the following two equations: R = 24 π T , C a b c d = 0 {\displaystyle R=24\pi \,T,\;\;\;C_{abcd}=0} == Features of Nordström's theory == Einstein was attracted to Nordström's second theory by its simplicity. The vacuum field equations in Nordström's theory are simply R = 0 , C a b c d = 0 {\displaystyle R=0,\;\;\;C_{abcd}=0} We can immediately write down the general vacuum solution in Nordström's theory: d s 2 = ϕ 2 η a b d x a d x b , ◻ ϕ = 0 {\displaystyle ds^{2}=\phi ^{2}\,\eta _{ab}\,dx^{a}\,dx^{b},\;\;\;\Box \phi =0} where ϕ {\displaystyle \phi } is the conformal factor (writing ϕ = exp ⁡ ( ψ ) {\displaystyle \phi =\exp(\psi )} , the exponent ψ also satisfies ◻ ψ = 0 {\displaystyle \Box \psi =0} to first order in ψ) and d σ 2 = η a b d x a d x b {\displaystyle d\sigma ^{2}=\eta _{ab}\,dx^{a}\,dx^{b}} is the line element for flat spacetime in any convenient coordinate chart (such as cylindrical, polar spherical, or double null coordinates), and where ◻ {\displaystyle \Box } is the ordinary wave operator on flat spacetime (expressed in cylindrical, polar spherical, or double null coordinates, respectively). But the general solution of the ordinary three-dimensional wave equation is well known, and can be given rather explicit form. Specifically, for certain charts such as cylindrical or polar spherical charts on flat spacetime (which induce corresponding charts on our curved Lorentzian manifold), we can write the general solution in terms of a power series, and we can write the general solution of certain Cauchy problems in the manner familiar from the Lienard-Wiechert potentials in electromagnetism. In any solution to Nordström's field equations (vacuum or otherwise), if we consider ψ {\displaystyle \psi } as controlling a conformal perturbation from flat spacetime, then to first order in ψ {\displaystyle \psi } we have d s 2 = exp ⁡ ( 2 ψ ) η a b d x a d x b ≈ ( 1 + 2 ψ ) η a b d x a d x b {\displaystyle ds^{2}=\exp(2\,\psi )\,\eta _{ab}\,dx^{a}\,dx^{b}\approx (1+2\psi )\,\eta _{ab}\,dx^{a}\,dx^{b}} Thus, in the weak field approximation, we can identify ψ {\displaystyle \psi } with the Newtonian gravitational potential, and we can regard it as controlling a small conformal perturbation from a flat spacetime background. In any metric theory of gravitation, all gravitational effects arise from the curvature of the metric. In a spacetime model in Nordström's theory (but not in general relativity), this depends only on the trace of the stress–energy tensor. But the field energy of an electromagnetic field contributes a term to the stress–energy tensor which is traceless, so in Nordström's theory, electromagnetic field energy does not gravitate! Indeed, since every solution to the field equations of this theory is a spacetime which is among other things conformally equivalent to flat spacetime, null geodesics must agree with the null geodesics of the flat background, so this theory can exhibit no light bending. Incidentally, the fact that the trace of the stress–energy tensor for an electrovacuum solution (a solution in which there is no matter present, nor any non-gravitational fields except for an electromagnetic field) vanishes shows that in the general electrovacuum solution in Nordström's theory, the metric tensor has the same form as in a vacuum solution, so we need only write down and solve the curved spacetime Maxwell field equations.
But these are conformally invariant, so we can also write down the general electrovacuum solution, say in terms of a power series. In any Lorentzian manifold (with appropriate tensor fields describing any matter and physical fields) which stands as a solution to Nordström's field equations, the conformal part of the Riemann tensor (i.e. the Weyl tensor) always vanishes. The Ricci scalar also vanishes identically in any vacuum region (or even, any region free of matter but containing an electromagnetic field). Are there any further restrictions on the Riemann tensor in Nordström's theory? To find out, note that an important identity from the theory of manifolds, the Ricci decomposition, splits the Riemann tensor into three pieces, which are each fourth-rank tensors, built out of, respectively, the Ricci scalar, the trace-free Ricci tensor S a b = R a b − 1 4 R g a b {\displaystyle S_{ab}=R_{ab}-{\frac {1}{4}}\,R\,g_{ab}} and the Weyl tensor. It immediately follows that Nordström's theory leaves the trace-free Ricci tensor entirely unconstrained by algebraic relations (other than the symmetric property, which this second rank tensor always enjoys). But taking account of the twice-contracted and detracted Bianchi identity, a differential identity which holds for the Riemann tensor in any (semi)-Riemannian manifold, we see that in Nordström's theory, as a consequence of the field equations, we have the first-order covariant differential equation S a b ; b = 6 π T ; a {\displaystyle {{S_{a}}^{b}}_{;b}=6\,\pi \,T_{;a}} which constrains the semi-traceless part of the Riemann tensor (the one built out of the trace-free Ricci tensor). Thus, according to Nordström's theory, in a vacuum region only the semi-traceless part of the Riemann tensor can be nonvanishing. Then our covariant differential constraint on S a b {\displaystyle S_{ab}} shows how variations in the trace of the stress–energy tensor in our spacetime model can generate a nonzero trace-free Ricci tensor, and thus nonzero semi-traceless curvature, which can propagate into a vacuum region. This is critically important, because otherwise gravitation would not, according to this theory, be a long-range force capable of propagating through a vacuum. In general relativity, something somewhat analogous happens, but there it is the Ricci tensor which vanishes in any vacuum region (but not in a region which is matter-free but contains an electromagnetic field), and it is the Weyl curvature which is generated (via another first order covariant differential equation) by variations in the stress–energy tensor and which then propagates into vacuum regions, rendering gravitation a long-range force capable of propagating through a vacuum. We can tabulate the most basic differences between Nordström's theory and general relativity, as follows: Another feature of Nordström's theory is that it can be written as the theory of a certain scalar field in Minkowski spacetime, and in this form enjoys the expected conservation law for nongravitational mass-energy together with gravitational field energy, but suffers from a not very memorable force law. In the curved spacetime formulation the motion of test particles is described (the world line of a free test particle is a timelike geodesic, and by an obvious limit, the world line of a laser pulse is a null geodesic), but we lose the conservation law. So which interpretation is correct? In other words, which metric is the one which according to Nordström can be measured locally by physical experiments? 
The answer is: the curved spacetime is the physically observable one in this theory (as in all metric theories of gravitation); the flat background is a mere mathematical fiction which is, however, of inestimable value for such purposes as writing down the general vacuum solution, or studying the weak field limit. At this point, we could show that in the limit of slowly moving test particles and slowly evolving weak gravitational fields, Nordström's theory of gravitation reduces to the Newtonian theory of gravitation. Rather than showing this in detail, we will proceed to a detailed study of the two most important solutions in this theory: the spherically symmetric static asymptotically flat vacuum solutions, and the general vacuum gravitational plane wave solution. We will use the first to obtain the predictions of Nordström's theory for the four classic solar system tests of relativistic gravitation theories (in the ambient field of an isolated spherically symmetric object), and we will use the second to compare gravitational radiation in Nordström's theory and in Einstein's general theory of relativity. == The static spherically symmetric asymptotically flat vacuum solution == The static vacuum solutions in Nordström's theory are the Lorentzian manifolds with metrics of the form d s 2 = ϕ 2 η a b d x a d x b , Δ ϕ = 0 {\displaystyle ds^{2}=\phi ^{2}\,\eta _{ab}\,dx^{a}\,dx^{b},\;\;\Delta \phi =0} where we can take the flat spacetime Laplace operator on the right. Writing ϕ = 1 + ψ, to first order in ψ the metric becomes d s 2 = ( 1 + 2 ψ ) η a b d x a d x b {\displaystyle ds^{2}=(1+2\,\psi )\,\eta _{ab}\,dx^{a}\,dx^{b}} where η a b d x a d x b {\displaystyle \eta _{ab}\,dx^{a}\,dx^{b}} is the metric of Minkowski spacetime (the flat background). === The metric === Adopting polar spherical coordinates, and using the known spherically symmetric asymptotically vanishing solutions of the Laplace equation, we can write the desired exact solution as d s 2 = ( 1 − m / ρ ) 2 ( − d t 2 + d ρ 2 + ρ 2 ( d θ 2 + sin ⁡ ( θ ) 2 d ϕ 2 ) ) {\displaystyle ds^{2}=(1-m/\rho )^{2}\,\left(-dt^{2}+d\rho ^{2}+\rho ^{2}\,(d\theta ^{2}+\sin(\theta )^{2}\,d\phi ^{2})\right)} that is, with harmonic conformal factor ϕ = 1 − m / ρ {\displaystyle \phi =1-m/\rho } , where we justify our choice of integration constants by the fact that this is the unique choice giving the correct Newtonian limit. This gives the solution in terms of coordinates which directly exhibit the fact that this spacetime is conformally equivalent to Minkowski spacetime, but the radial coordinate in this chart does not readily admit a direct geometric interpretation. Therefore, we adopt instead Schwarzschild coordinates, using the transformation r = ρ ( 1 − m / ρ ) {\displaystyle r=\rho \,(1-m/\rho )} , which brings the metric into the form d s 2 = ( 1 + m / r ) − 2 ( − d t 2 + d r 2 ) + r 2 ( d θ 2 + sin ⁡ ( θ ) 2 d ϕ 2 ) {\displaystyle ds^{2}=(1+m/r)^{-2}\,(-dt^{2}+dr^{2})+r^{2}\,(d\theta ^{2}+\sin(\theta )^{2}\,d\phi ^{2})} − ∞ < t < ∞ , 0 < r < ∞ , 0 < θ < π , − π < ϕ < π {\displaystyle -\infty <t<\infty ,\;0<r<\infty ,\;0<\theta <\pi ,\;-\pi <\phi <\pi } Here, r {\displaystyle r} now has the simple geometric interpretation that the surface area of the coordinate sphere r = r 0 {\displaystyle r=r_{0}} is just 4 π r 0 2 {\displaystyle 4\pi \,r_{0}^{2}} .
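That this is an exact vacuum solution can be verified by brute force: the conformal factor ϕ = 1 − m/ρ is harmonic, and by the conformal identity R = −6□ϕ/ϕ³ quoted earlier the Ricci scalar then vanishes. The sketch below is an illustrative sympy verification of the identity itself (not part of the original presentation); it recomputes the curvature from hand-rolled Christoffel symbols and may take a minute to run:

import sympy as sp

# Verify R = -6 Box(phi) / phi^3 for the conformally flat metric
# g_ab = phi^2 eta_ab (signature -,+,+,+), for an arbitrary phi(t,x,y,z).
coords = list(sp.symbols('t x y z'))
phi = sp.Function('phi')(*coords)

eta = sp.diag(-1, 1, 1, 1)
g = phi**2 * eta
ginv = g.inv()

def Gamma(a, b, c):
    # Christoffel symbol Gamma^a_{bc}
    return sp.Rational(1, 2) * sum(
        ginv[a, d] * (sp.diff(g[d, b], coords[c])
                      + sp.diff(g[d, c], coords[b])
                      - sp.diff(g[b, c], coords[d]))
        for d in range(4))

def Ricci(b, c):
    # R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ab}
    #          + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ab}
    expr = sp.S(0)
    for a in range(4):
        expr += sp.diff(Gamma(a, b, c), coords[a]) - sp.diff(Gamma(a, b, a), coords[c])
        for d in range(4):
            expr += Gamma(a, a, d) * Gamma(d, b, c) - Gamma(a, c, d) * Gamma(d, b, a)
    return expr

R = sum(ginv[b, c] * Ricci(b, c) for b in range(4) for c in range(4))

# Flat-space wave operator applied to phi
box_phi = (-sp.diff(phi, coords[0], 2) + sp.diff(phi, coords[1], 2)
           + sp.diff(phi, coords[2], 2) + sp.diff(phi, coords[3], 2))

print(sp.simplify(R + 6 * box_phi / phi**3))  # prints 0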
Just as happens in the corresponding static spherically symmetric asymptotically flat solution of general relativity, this solution admits a four-dimensional Lie group of isometries, or equivalently, a four-dimensional (real) Lie algebra of Killing vector fields. These are readily determined to be ∂ t {\displaystyle \partial _{t}} (translation in time) ∂ ϕ {\displaystyle \partial _{\phi }} (rotation about an axis through the origin) − cos ⁡ ( ϕ ) ∂ θ + cot ⁡ ( θ ) sin ⁡ ( ϕ ) ∂ ϕ {\displaystyle -\cos(\phi )\,\partial _{\theta }+\cot(\theta )\,\sin(\phi )\,\partial _{\phi }} sin ⁡ ( ϕ ) ∂ θ + cot ⁡ ( θ ) cos ⁡ ( ϕ ) ∂ ϕ {\displaystyle \sin(\phi )\,\partial _{\theta }+\cot(\theta )\,\cos(\phi )\,\partial _{\phi }} These are exactly the same vector fields which arise in the Schwarzschild coordinate chart for the Schwarzschild vacuum solution of general relativity, and they simply express the fact that this spacetime is static and spherically symmetric. === Geodesics === The geodesic equations are readily obtained from the geodesic Lagrangian. As always, these are second order nonlinear ordinary differential equations. If we set θ = π / 2 {\displaystyle \theta =\pi /2} we find that test particle motion confined to the equatorial plane is possible, and in this case first integrals (first order ordinary differential equations) are readily obtained. First, we have t ˙ = E ( 1 + m / r ) 2 ≈ E ( 1 + 2 m / r ) {\displaystyle {\dot {t}}=E\,\left(1+m/r\right)^{2}\approx E\,\left(1+2m/r\right)} where to first order in m we have the same result as for the Schwarzschild vacuum. This also shows that Nordström's theory agrees with the result of the Pound–Rebka experiment. Second, we have ϕ ˙ = L / r 2 {\displaystyle {\dot {\phi }}=L/r^{2}} which is the same result as for the Schwarzschild vacuum. This expresses conservation of orbital angular momentum of test particles moving in the equatorial plane, and shows that the period of a nearly circular orbit (as observed by a distant observer) will be same as for the Schwarzschild vacuum. Third, with ϵ = − 1 , 0 , 1 {\displaystyle \epsilon =-1,0,1} for timelike, null, spacelike geodesics, we find r ˙ 2 ( 1 + m / r ) 4 = E 2 − V {\displaystyle {\frac {{\dot {r}}^{2}}{\left(1+m/r\right)^{4}}}=E^{2}-V} where V = L 2 / r 2 − ϵ ( 1 + m / r ) 2 {\displaystyle V={\frac {L^{2}/r^{2}-\epsilon }{\left(1+m/r\right)^{2}}}} is a kind of effective potential. In the timelike case, we see from this that there exist stable circular orbits at r c = L 2 / m {\displaystyle r_{c}=L^{2}/m} , which agrees perfectly with Newtonian theory (if we ignore the fact that now the angular but not the radial distance interpretation of r agrees with flat space notions). In contrast, in the Schwarzschild vacuum we have to first order in m the expression r c ≈ L 2 / m − 3 m {\displaystyle r_{c}\approx L^{2}/m-3m} . In a sense, the extra term here results from the nonlinearity of the vacuum Einstein field equation. === Static observers === It makes sense to ask how much force is required to hold a test particle with a given mass over the massive object which we assume is the source of this static spherically symmetric gravitational field. 
To find out, we need only adopt the simple frame field e → 0 = ( 1 + m / r ) ∂ t {\displaystyle {\vec {e}}_{0}=\left(1+m/r\right)\,\partial _{t}} e → 1 = ( 1 + m / r ) ∂ r {\displaystyle {\vec {e}}_{1}=\left(1+m/r\right)\,\partial _{r}} e → 2 = 1 r ∂ θ {\displaystyle {\vec {e}}_{2}={\frac {1}{r}}\,\partial _{\theta }} e → 3 = 1 r sin ⁡ ( θ ) ∂ ϕ {\displaystyle {\vec {e}}_{3}={\frac {1}{r\,\sin(\theta )}}\,\partial _{\phi }} Then, the acceleration of the world line of our test particle is simply ∇ e → 0 e → 0 = m r 2 e → 1 {\displaystyle \nabla _{{\vec {e}}_{0}}{\vec {e}}_{0}={\frac {m}{r^{2}}}\,{\vec {e}}_{1}} Thus, the particle must maintain a radially outward acceleration in order to hold its position, with a magnitude given by the familiar Newtonian expression (but again we must bear in mind that the radial coordinate here cannot quite be identified with a flat space radial coordinate). Put in other words, this is the "gravitational acceleration" measured by a static observer who uses a rocket engine to maintain his position. In contrast, to second order in m, in the Schwarzschild vacuum the magnitude of the radially outward acceleration of a static observer is m / r 2 + m 2 / r 3 {\displaystyle m/r^{2}+m^{2}/r^{3}} ; here too, the second term expresses the fact that Einstein gravity is slightly stronger "at corresponding points" than Nordström gravity. The tidal tensor measured by a static observer is E [ X → ] a b = m r 3 d i a g ( − 2 , 1 , 1 ) + m 2 r 4 d i a g ( − 1 , 1 , 1 ) {\displaystyle E[{\vec {X}}]_{ab}={\frac {m}{r^{3}}}\,{\rm {diag}}(-2,1,1)+{\frac {m^{2}}{r^{4}}}\,{\rm {diag}}(-1,1,1)} where we take X → = e → 0 {\displaystyle {\vec {X}}={\vec {e}}_{0}} . The first term agrees with the corresponding result in both the Newtonian theory of gravitation and general relativity. The second term shows that the tidal forces are a bit stronger in Nordström gravity than in Einstein gravity. === Extra-Newtonian precession of periastria === In our discussion of the geodesic equations, we showed that in the equatorial coordinate plane θ = π / 2 {\displaystyle \theta =\pi /2} we have r ˙ 2 = ( E 2 − V ) ( 1 + m / r ) 4 {\displaystyle {\dot {r}}^{2}=(E^{2}-V)\;(1+m/r)^{4}} where V = ( 1 + L 2 / r 2 ) / ( 1 + m / r ) 2 {\displaystyle V=(1+L^{2}/r^{2})/(1+m/r)^{2}} for a timelike geodesic. Differentiating with respect to proper time s, we obtain 2 r ˙ r ¨ = d d r ( ( E 2 − V ) ( 1 + m / r ) 4 ) r ˙ {\displaystyle 2{\dot {r}}{\ddot {r}}={\frac {d}{dr}}\left((E^{2}-V)\,(1+m/r)^{4}\right)\;{\dot {r}}} Dividing both sides by r ˙ {\displaystyle {\dot {r}}} gives r ¨ = 1 2 d d r ( ( E 2 − V ) ( 1 + m / r ) 4 ) {\displaystyle {\ddot {r}}={\frac {1}{2}}\,{\frac {d}{dr}}\left((E^{2}-V)\,(1+m/r)^{4}\right)} We found earlier that the minimum of V occurs at r c = L 2 / m {\displaystyle r_{c}=L^{2}/m} where E c 2 = L 2 / ( L 2 + m 2 ) {\displaystyle E_{c}^{2}=L^{2}/(L^{2}+m^{2})} . Evaluating the derivative, using our earlier results, and setting ε = r − L 2 / m {\displaystyle \varepsilon =r-L^{2}/m} , we find ε ¨ = − m 4 L 8 ( m 2 + L 2 ) ε + O ( ε 2 ) {\displaystyle {\ddot {\varepsilon }}=-{\frac {m^{4}}{L^{8}}}\,(m^{2}+L^{2})\,\varepsilon +O(\varepsilon ^{2})} which is (to first order) the equation of simple harmonic motion. In other words, nearly circular orbits will exhibit a radial oscillation. However, unlike what happens in Newtonian gravitation, the period of this oscillation will not quite match the orbital period.
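These statements about nearly circular orbits are easy to verify with computer algebra. The following illustrative sympy sketch (not part of the original presentation) recovers r_c, E_c², and the radial oscillation frequency from the effective potential defined above:

import sympy as sp

# Effective potential for timelike equatorial geodesics (epsilon = -1):
r, m, L, eps = sp.symbols('r m L epsilon', positive=True)
V = (1 + L**2 / r**2) / (1 + m / r)**2

print(sp.solve(sp.diff(V, r), r))            # [L**2/m]  ->  r_c = L^2/m
print(sp.simplify(V.subs(r, L**2 / m)))      # L**2/(L**2 + m**2) = E_c^2

# Linearize r-ddot = (1/2) d/dr[(E^2 - V)(1 + m/r)^4] about r = r_c + epsilon:
E2 = L**2 / (L**2 + m**2)
rhs = sp.Rational(1, 2) * sp.diff((E2 - V) * (1 + m / r)**4, r)
lin = sp.series(rhs.subs(r, L**2 / m + eps), eps, 0, 2).removeO()
omega2 = -lin.coeff(eps)                     # epsilon-ddot = -omega^2 epsilon
print(sp.simplify(omega2 - m**4 * (m**2 + L**2) / L**8))  # prints 0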
This will result in slow precession of the periastria (points of closest approach) of our nearly circular orbit, or more vividly, in a slow rotation of the long axis of a quasi-Keplerian nearly elliptical orbit. Specifically, ω s h m ≈ m 2 L 4 m 2 + L 2 = 1 r 2 m 2 + m r {\displaystyle \omega _{\rm {shm}}\approx {\frac {m^{2}}{L^{4}}}\,{\sqrt {m^{2}+L^{2}}}={\frac {1}{r^{2}}}\,{\sqrt {m^{2}+mr}}} (where we used L = m r {\displaystyle L={\sqrt {mr}}} and removed the subscript from r c {\displaystyle r_{c}} ), whereas ω o r b = L r 2 = m / r 3 {\displaystyle \omega _{\rm {orb}}={\frac {L}{r^{2}}}={\sqrt {m/r^{3}}}} The discrepancy is Δ ω = ω o r b − ω s h m = m r 3 − m 2 r 4 + m r 3 ≈ − 1 2 m 3 r 5 {\displaystyle \Delta \omega =\omega _{\rm {orb}}-\omega _{\rm {shm}}={\sqrt {\frac {m}{r^{3}}}}-{\sqrt {{\frac {m^{2}}{r^{4}}}+{\frac {m}{r^{3}}}}}\approx -{\frac {1}{2}}{\sqrt {\frac {m^{3}}{r^{5}}}}} so the periastrion lag per orbit is, to first order in m, Δ ϕ = 2 π Δ ω / ω o r b ≈ − π m r {\displaystyle \Delta \phi =2\pi \,\Delta \omega /\omega _{\rm {orb}}\approx -{\frac {\pi m}{r}}} That is, the long axis of the nearly elliptical orbit rotates by about − π m / r {\displaystyle -\pi m/r} radians per orbit. This can be compared with the corresponding expression for the Schwarzschild vacuum solution in general relativity, which is (to first order in m) Δ ϕ ≈ 6 π m r {\displaystyle \Delta \phi \approx {\frac {6\pi m}{r}}} Thus, in Nordström's theory, if the nearly elliptical orbit is traversed counterclockwise, the long axis slowly rotates clockwise, whereas in general relativity, it rotates counterclockwise six times faster. In the first case we may speak of a periastrion lag and in the second case, a periastrion advance. In either theory, with more work, we can derive more general expressions, but we shall be satisfied here with treating the special case of nearly circular orbits. For example, according to Nordström's theory, the perihelia of Mercury should lag at a rate of about 7 seconds of arc per century, whereas according to general relativity, the perihelia should advance at a rate of about 43 seconds of arc per century. === Light delay === Null geodesics in the equatorial plane of our solution satisfy 0 = − d t 2 + d r 2 ( 1 + m / r ) 2 + r 2 d ϕ 2 {\displaystyle 0={\frac {-dt^{2}+dr^{2}}{(1+m/r)^{2}}}+r^{2}\,d\phi ^{2}} Consider two events on a null geodesic, before and after its point of closest approach to the origin. Let the radial coordinates of the two events be R 1 , R 2 {\displaystyle R_{1},\,R_{2}} and let that of the point of closest approach be R {\displaystyle R} , with R 1 , R 2 ≫ R {\displaystyle R_{1},\,R_{2}\gg R} .
We wish to eliminate ϕ {\displaystyle \phi } , so put R = r cos ⁡ ϕ {\displaystyle R=r\,\cos \phi } (the equation of a straight line in polar coordinates) and differentiate to obtain 0 = − r sin ⁡ ϕ d ϕ + cos ⁡ ϕ d r {\displaystyle 0=-r\sin \phi \,d\phi +\cos \phi \,dr} Thus r 2 d ϕ 2 = cot ⁡ ( ϕ ) 2 d r 2 = R 2 r 2 − R 2 d r 2 {\displaystyle r^{2}\,d\phi ^{2}=\cot(\phi )^{2}\,dr^{2}={\frac {R^{2}}{r^{2}-R^{2}}}\,dr^{2}} Plugging this into the line element and solving for dt, we obtain d t ≈ 1 r 2 − R 2 ( r + m R 2 r 2 ) d r {\displaystyle dt\approx {\frac {1}{\sqrt {r^{2}-R^{2}}}}\;\left(r+m\,{\frac {R^{2}}{r^{2}}}\right)\;dr} Thus the coordinate time from the first event to the event of closest approach is ( Δ t ) 1 = ∫ R R 1 d t ≈ m + R 1 R 1 R 1 2 − R 2 = R 1 2 − R 2 + m 1 − ( R / R 1 ) 2 {\displaystyle (\Delta t)_{1}=\int _{R}^{R_{1}}dt\approx {\frac {m+R_{1}}{R_{1}}}\,{\sqrt {R_{1}^{2}-R^{2}}}={\sqrt {R_{1}^{2}-R^{2}}}+m\,{\sqrt {1-(R/R_{1})^{2}}}} and likewise ( Δ t ) 2 = ∫ R R 2 d t ≈ m + R 2 R 2 R 2 2 − R 2 = R 2 2 − R 2 + m 1 − ( R / R 2 ) 2 {\displaystyle (\Delta t)_{2}=\int _{R}^{R_{2}}dt\approx {\frac {m+R_{2}}{R_{2}}}\,{\sqrt {R_{2}^{2}-R^{2}}}={\sqrt {R_{2}^{2}-R^{2}}}+m\,{\sqrt {1-(R/R_{2})^{2}}}} Here the elapsed coordinate time expected from Newtonian theory is of course R 1 2 − R 2 + R 2 2 − R 2 {\displaystyle {\sqrt {R_{1}^{2}-R^{2}}}+{\sqrt {R_{2}^{2}-R^{2}}}} so the relativistic time delay, according to Nordström's theory, is Δ t = m ( 1 − ( R / R 1 ) 2 + 1 − ( R / R 2 ) 2 ) {\displaystyle \Delta t=m\,\left({\sqrt {1-(R/R_{1})^{2}}}+{\sqrt {1-(R/R_{2})^{2}}}\right)} To first order in the small ratios R / R 1 , R / R 2 {\displaystyle R/R_{1},\;R/R_{2}} this is just Δ t = 2 m {\displaystyle \Delta t=2m} . The corresponding result in general relativity is Δ t = 2 m + 2 m log ⁡ ( 4 R 1 R 2 R 2 ) {\displaystyle \Delta t=2m+2m\,\log \left({\frac {4\,R_{1}\,R_{2}}{R^{2}}}\right)} which depends logarithmically on the small ratios R / R 1 , R / R 2 {\displaystyle R/R_{1},\;R/R_{2}} . For example, in the classic experiment in which, at a time when, as viewed from Earth, Venus is just about to pass behind the Sun, a radar signal emitted from Earth which grazes the limb of the Sun, bounces off Venus, and returns to Earth (once again grazing the limb of the Sun), the relativistic time delay is about 20 microseconds according to Nordström's theory and about 240 microseconds according to general relativity. === Summary of results === We can summarize the results we found above in the following table, in which the given expressions represent appropriate approximations: The last four lines in this table list the so-called four classic solar system tests of relativistic theories of gravitation. Of the three theories appearing in the table, only general relativity is in agreement with the results of experiments and observations in the Solar System. Nordström's theory gives the correct result only for the Pound–Rebka experiment; not surprisingly, Newton's theory flunks all four relativistic tests. == Vacuum gravitational plane wave == In the double null chart for Minkowski spacetime, d s 2 = − 2 d u d v + d x 2 + d y 2 , − ∞ < u , v , x , y < ∞ {\displaystyle ds^{2}=-2\,du\,dv+dx^{2}+dy^{2},\;\;\;-\infty <u,\,v,\,x,\,y<\infty } a simple solution of the wave equation − 2 ψ u v + ψ x x + ψ y y = 0 {\displaystyle -2\,\psi _{uv}+\psi _{xx}+\psi _{yy}=0} is ψ = f ( u ) {\displaystyle \psi =f(u)} , where f is an arbitrary smooth function. 
This represents a plane wave traveling in the positive z direction (in terms of the usual Minkowski coordinates, u = ( t − z ) / 2 {\displaystyle u=(t-z)/{\sqrt {2}}} and v = ( t + z ) / 2 {\displaystyle v=(t+z)/{\sqrt {2}}} ). Therefore, Nordström's theory admits the exact vacuum solution d s 2 = exp ⁡ ( 2 f ( u ) ) ( − 2 d u d v + d x 2 + d y 2 ) , − ∞ < u , v , x , y < ∞ {\displaystyle ds^{2}=\exp(2f(u))\;\left(-2\,du\,dv+dx^{2}+dy^{2}\right),\;\;\;-\infty <u,\,v,\,x,\,y<\infty } which we can interpret in terms of the propagation of a gravitational plane wave. This Lorentzian manifold admits a six-dimensional Lie group of isometries, or equivalently, a six-dimensional Lie algebra of Killing vector fields: ∂ v {\displaystyle \partial _{v}} (a null translation, "opposing" the wave vector field ∂ u {\displaystyle \partial _{u}} ) ∂ x , ∂ y {\displaystyle \partial _{x},\;\;\partial _{y}} (spatial translations parallel to the wavefronts) − y ∂ x + x ∂ y {\displaystyle -y\,\partial _{x}+x\,\partial _{y}} (rotation about the axis parallel to the direction of propagation) x ∂ v + u ∂ x , y ∂ v + u ∂ y {\displaystyle x\,\partial _{v}+u\,\partial _{x},\;\;y\,\partial _{v}+u\,\partial _{y}} For example, the Killing vector field x ∂ v + u ∂ x {\displaystyle x\,\partial _{v}+u\,\partial _{x}} integrates to give the one-parameter family of isometries ( u , v , x , y ) ⟶ ( u , v + x λ + u 2 λ 2 , x + u λ , y ) {\displaystyle (u,v,x,y)\longrightarrow (u,\;v+x\,\lambda +{\frac {u}{2}}\,\lambda ^{2},\;x+u\,\lambda ,\;y)} Just as in special relativity (and general relativity), it is always possible to change coordinates, without disturbing the form of the solution, so that the wave propagates in any desired spatial direction. Note that our isometry group is transitive on the hypersurfaces u = u 0 {\displaystyle u=u_{0}} . In contrast, the generic gravitational plane wave in general relativity has only a five-dimensional Lie group of isometries. (In both theories, special plane waves may have extra symmetries.) We'll say a bit more about why this is so in a moment. Adopting the frame field e → 0 = 1 2 ( ∂ v + exp ⁡ ( − 2 f ) ∂ u ) {\displaystyle {\vec {e}}_{0}={\frac {1}{\sqrt {2}}}\,\left(\partial _{v}+\exp(-2f)\,\partial _{u}\right)} e → 1 = 1 2 ( ∂ v − exp ⁡ ( − 2 f ) ∂ u ) {\displaystyle {\vec {e}}_{1}={\frac {1}{\sqrt {2}}}\,\left(\partial _{v}-\exp(-2f)\,\partial _{u}\right)} e → 2 = ∂ x {\displaystyle {\vec {e}}_{2}=\partial _{x}} e → 3 = ∂ y {\displaystyle {\vec {e}}_{3}=\partial _{y}} we find that the corresponding family of test particles is inertial (freely falling), since the acceleration vector vanishes ∇ e → 0 e → 0 = 0 {\displaystyle \nabla _{{\vec {e}}_{0}}{\vec {e}}_{0}=0} Notice that if f vanishes, this family becomes a family of mutually stationary test particles in flat (Minkowski) spacetime. With respect to the timelike geodesic congruence of world lines obtained by integrating the timelike unit vector field X → = e → 0 {\displaystyle {\vec {X}}={\vec {e}}_{0}} , the expansion tensor θ [ X → ] p ^ q ^ = 1 2 f ′ ( u ) exp ⁡ ( − 2 f ( u ) ) d i a g ( 0 , 1 , 1 ) {\displaystyle \theta [{\vec {X}}]_{{\hat {p}}{\hat {q}}}={\frac {1}{\sqrt {2}}}\,f'(u)\,\exp(-2\,f(u))\,{\rm {diag}}(0,1,1)} shows that our test particles are expanding or contracting isotropically and transversely to the direction of propagation. This is exactly what we would expect for a transverse spin-0 wave; the behavior of analogous families of test particles which encounter a gravitational plane wave in general relativity is quite different, because these are spin-2 waves.
This is due to the fact that Nordström's theory of gravitation is a scalar theory, whereas Einstein's theory of gravitation (general relativity) is a tensor theory. On the other hand, gravitational waves in both theories are transverse waves. Electromagnetic plane waves are of course also transverse. The tidal tensor E [ X → ] p ^ q ^ = 1 2 exp ⁡ ( − 4 f ( u ) ) ( f ′ ( u ) 2 − f ″ ( u ) ) d i a g ( 0 , 1 , 1 ) {\displaystyle E[{\vec {X}}]_{{\hat {p}}{\hat {q}}}={\frac {1}{2}}\,\exp(-4\,f(u))\;\left(f'(u)^{2}-f''(u)\right)\,{\rm {diag}}(0,1,1)} further exhibits the spin-0 character of the gravitational plane wave in Nordström's theory. (The tidal tensor and expansion tensor are three-dimensional tensors which "live" in the hyperplane elements orthogonal to e → 0 {\displaystyle {\vec {e}}_{0}} , which in this case happens to be irrotational, so we can regard these tensors as defined on orthogonal hyperslices.) The exact solution we are discussing here, which we interpret as a propagating gravitational plane wave, gives some basic insight into the propagation of gravitational radiation in Nordström's theory, but it does not yield any insight into the generation of gravitational radiation in this theory. At this point, it would be natural to discuss the analog for Nordström's theory of gravitation of the standard linearized gravitational wave theory in general relativity, but we shall not pursue this. == See also == Alternatives to general relativity Congruence (general relativity) Gunnar Nordström Obsolete physical theories General relativity == References == Ravndal, Finn (2004). "Scalar Gravitation and Extra Dimensions". arXiv:gr-qc/0405030. Pais, Abraham (1982). "13". Subtle is the Lord: The Science and the Life of Albert Einstein. Oxford: Oxford University Press. ISBN 0-19-280672-6. Lightman, Alan P.; Press, William H.; Price, Richard H. & Teukolsky, Saul A. (1975). Problem Book in Relativity and Gravitation. Princeton: Princeton University Press. ISBN 0-691-08162-X. See problem 13.2.
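The solar-system numbers quoted in the comparisons above are easy to reproduce. The following is a minimal numeric sketch, not part of the original presentation: it assumes geometric units, standard values for the Sun's mass and the orbital distances, and the nearly-circular and grazing-ray approximations derived in the text.

```python
# Numeric check of the solar-system figures quoted above (a sketch; G = c = 1
# geometric units, standard solar-system data, nearly-circular approximation).
import math

GM_sun = 1.327e20           # m^3/s^2
c = 2.998e8                 # m/s
m_len = GM_sun / c**2       # Sun's geometric mass as a length: ~1.48 km
m_time = GM_sun / c**3      # ... and as a time: ~4.93 microseconds

# Periastron shift per orbit for Mercury, nearly circular approximation.
r = 5.79e10                                  # orbital radius, m
orbits_per_century = 100 * 365.25 / 87.97    # Mercury's period is ~88 days
rad_to_arcsec = math.degrees(1) * 3600

shift_nordstrom = -math.pi * m_len / r       # lag per orbit
shift_gr = 6 * math.pi * m_len / r           # advance per orbit
print(shift_nordstrom * orbits_per_century * rad_to_arcsec)  # ~ -7 arcsec/century
print(shift_gr * orbits_per_century * rad_to_arcsec)         # ~ +41 arcsec/century

# Round-trip radar delay for the Earth -> Venus -> Earth experiment,
# with the signal grazing the Sun's limb in both directions.
R1, R2, R = 1.496e11, 1.082e11, 6.96e8       # Earth, Venus, solar radius (m)
delay_nordstrom = 2 * (2 * m_time)
delay_gr = 2 * (2 * m_time + 2 * m_time * math.log(4 * R1 * R2 / R**2))
print(delay_nordstrom * 1e6, delay_gr * 1e6)  # ~ 20 and ~ 250 microseconds
```

The nearly circular formula yields about 41 arc seconds per century for general relativity; the commonly quoted 43 arc seconds per century includes a correction for Mercury's appreciable orbital eccentricity, which this approximation neglects.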
Wikipedia/Nordström's_theory_of_gravitation
Holography is a technique that enables a wavefront to be recorded and later reconstructed. It is best known as a method of generating three-dimensional images, and has a wide range of other uses, including data storage, microscopy, and interferometry. In principle, it is possible to make a hologram for any type of wave. A hologram is a recording of an interference pattern that can reproduce a 3D light field using diffraction. In general usage, a hologram is a recording of any type of wavefront in the form of an interference pattern. It can be created by capturing light from a real scene, or it can be generated by a computer, in which case it is known as a computer-generated hologram, which can show virtual objects or scenes. Optical holography needs laser light to record the light field. The reproduced light field can generate an image that has the depth and parallax of the original scene. A hologram is usually unintelligible when viewed under diffuse ambient light. When suitably lit, the interference pattern diffracts the light into an accurate reproduction of the original light field, and the objects that were in it exhibit visual depth cues such as parallax and perspective that change realistically with the different angles of viewing. That is, the view of the image from different angles shows the subject viewed from similar angles. A hologram is traditionally generated by overlaying a second wavefront, known as the reference beam, onto a wavefront of interest. This generates an interference pattern, which is then captured on a physical medium. When the recorded interference pattern is later illuminated by the second wavefront, it is diffracted to recreate the original wavefront. The 3D image from a hologram can often be viewed with non-laser light. However, in common practice, major image quality compromises are made to remove the need for laser illumination to view the hologram. A computer-generated hologram is created by digitally modeling and combining two wavefronts to generate an interference pattern image. This image can then be printed onto a mask or film and illuminated with an appropriate light source to reconstruct the desired wavefront. Alternatively, the interference pattern image can be directly displayed on a dynamic holographic display. Holographic portraiture often resorts to a non-holographic intermediate imaging procedure, to avoid the dangerous high-powered pulsed lasers which would be needed to optically "freeze" moving subjects as perfectly as the extremely motion-intolerant holographic recording process requires. Early holography required high-power and expensive lasers. Currently, mass-produced low-cost laser diodes, such as those found in DVD recorders and used in other common applications, can be used to make holograms. They have made holography much more accessible to low-budget researchers, artists, and dedicated hobbyists. Most holograms produced are of static objects, but systems for displaying changing scenes on dynamic holographic displays are now being developed. The word holography comes from the Greek words ὅλος (holos; "whole") and γραφή (graphē; "writing" or "drawing"). == History == The Hungarian-British physicist Dennis Gabor invented holography in 1948 while he was looking for a way to improve image resolution in electron microscopes. Gabor's work was built on pioneering work in the field of X-ray microscopy by other scientists including Mieczysław Wolfke in 1920 and William Lawrence Bragg in 1939.
The formulation of holography was an unexpected result of Gabor's research into improving electron microscopes at the British Thomson-Houston Company (BTH) in Rugby, England, and the company filed a patent in December 1947 (patent GB685286). The technique as originally invented is still used in electron microscopy, where it is known as electron holography. Gabor was awarded the Nobel Prize in Physics in 1971 "for his invention and development of the holographic method". Optical holography did not really advance until the development of the laser in 1960. The development of the laser enabled the first practical optical holograms that recorded 3D objects to be made in 1962 by Yuri Denisyuk in the Soviet Union and by Emmett Leith and Juris Upatnieks at the University of Michigan, US. Early optical holograms used silver halide photographic emulsions as the recording medium. They were not very efficient as the produced diffraction grating absorbed much of the incident light. Various methods of converting the variation in transmission to a variation in refractive index (known as "bleaching") were developed, which enabled much more efficient holograms to be produced. A major advance in the field of holography was made by Stephen Benton, who invented a way to create holograms that can be viewed with natural light instead of lasers. These are called rainbow holograms. == Basics of holography == Holography is a technique for recording and reconstructing light fields. A light field is generally the result of light from a source being scattered off objects. Holography can be thought of as somewhat similar to sound recording, whereby a sound field, created by vibrating matter such as musical instruments or vocal cords, is encoded in such a way that it can be reproduced later, without the presence of the original vibrating matter. However, it is even more similar to Ambisonic sound recording, in which any listening angle of the original sound field can be reproduced. === Laser === In laser holography, the hologram is recorded using a source of laser light, which is very pure in its color and orderly in its composition. Various setups may be used, and several types of holograms can be made, but all involve the interaction of light coming from different directions and producing a microscopic interference pattern which a plate, film, or other medium photographically records. In one common arrangement, the laser beam is split into two, one known as the object beam and the other as the reference beam. The object beam is expanded by passing it through a lens and used to illuminate the subject. The recording medium is located where this light, after being reflected or scattered by the subject, will strike it. The edges of the medium will ultimately serve as a window through which the subject is seen, so its location is chosen with that in mind. The reference beam is expanded and made to shine directly on the medium, where it interacts with the light coming from the subject to create the desired interference pattern. Like conventional photography, holography requires an appropriate exposure time to correctly affect the recording medium. Unlike conventional photography, during the exposure the light source, the optical elements, the recording medium, and the subject must all remain motionless relative to each other, to within about a quarter of the wavelength of the light, or the interference pattern will be blurred and the hologram spoiled.
With living subjects and some unstable materials, that is only possible if a very intense and extremely brief pulse of laser light is used, a hazardous procedure which is rarely done outside of scientific and industrial laboratory settings. Exposures lasting several seconds to several minutes, using a much lower-powered continuously operating laser, are typical. === Apparatus === A hologram can be made by shining part of the light beam directly into the recording medium, and the other part onto the object in such a way that some of the scattered light falls onto the recording medium. A more flexible arrangement for recording a hologram requires the laser beam to be aimed through a series of elements that change it in different ways. The first element is a beam splitter that divides the beam into two identical beams, each aimed in different directions: One beam (known as the 'illumination' or 'object beam') is spread using lenses and directed onto the scene using mirrors. Some of the light scattered (reflected) from the scene then falls onto the recording medium. The second beam (known as the 'reference beam') is also spread through the use of lenses, but is directed so that it does not come in contact with the scene, and instead travels directly onto the recording medium. Several different materials can be used as the recording medium. One of the most common is a film very similar to photographic film (silver halide photographic emulsion), but with much smaller light-reactive grains (preferably with diameters less than 20 nm), making it capable of the much higher resolution that holograms require. A layer of this recording medium (e.g., silver halide) is attached to a transparent substrate, which is commonly glass, but may also be plastic. === Process === When the two laser beams reach the recording medium, their light waves intersect and interfere with each other. It is this interference pattern that is imprinted on the recording medium. The pattern itself is seemingly random, as it represents the way in which the scene's light interfered with the original light source – but not the original light source itself. The interference pattern can be considered an encoded version of the scene, requiring a particular key – the original light source – in order to view its contents. This missing key is provided later by shining a laser, identical to the one used to record the hologram, onto the developed film. When this beam illuminates the hologram, it is diffracted by the hologram's surface pattern. This produces a light field identical to the one originally produced by the scene and scattered onto the hologram. === Comparison with photography === Holography may be better understood via an examination of its differences from ordinary photography: A hologram represents a recording of information regarding the light that came from the original scene as scattered in a range of directions rather than from only one direction, as in a photograph. This allows the scene to be viewed from a range of different angles, as if it were still present. A photograph can be recorded using normal light sources (sunlight or electric lighting) whereas a laser is required to record a hologram. A lens is required in photography to record the image, whereas in holography, the light from the object is scattered directly onto the recording medium. A holographic recording requires a second light beam (the reference beam) to be directed onto the recording medium. 
A photograph can be viewed in a wide range of lighting conditions, whereas holograms can only be viewed with very specific forms of illumination. When a photograph is cut in half, each piece shows half of the scene. When a hologram is cut in half, the whole scene can still be seen in each piece. This is because, whereas each point in a photograph only represents light scattered from a single point in the scene, each point on a holographic recording includes information about light scattered from every point in the scene. It can be thought of as viewing a street outside a house through a large window, then through a smaller window. One can see all of the same things through the smaller window (by moving the head to change the viewing angle), but the viewer can see more at once through the large window. A photographic stereogram is a two-dimensional representation that can produce a three-dimensional effect but only from one point of view, whereas the reproduced viewing range of a hologram adds many more depth perception cues that were present in the original scene. These cues are recognized by the human brain and translated into the same perception of a three-dimensional image as if the original scene itself were being viewed. A photograph clearly maps out the light field of the original scene. The developed hologram's surface consists of a very fine, seemingly random pattern, which appears to bear no relationship to the scene it recorded. == Physics of holography == For a better understanding of the process, it is necessary to understand interference and diffraction. Interference occurs when two or more wavefronts are superimposed. Diffraction occurs when a wavefront encounters an object. The process of producing a holographic reconstruction is explained below purely in terms of interference and diffraction. It is somewhat simplified but is accurate enough to give an understanding of how the holographic process works. For those unfamiliar with these concepts, it is worthwhile to read those articles before reading further in this article. === Plane wavefronts === A diffraction grating is a structure with a repeating pattern. A simple example is a metal plate with slits cut at regular intervals. A light wave that is incident on a grating is split into several waves; the direction of these diffracted waves is determined by the grating spacing and the wavelength of the light. A simple hologram can be made by superimposing two plane waves from the same light source on a holographic recording medium. The two waves interfere, giving a straight-line fringe pattern whose intensity varies sinusoidally across the medium. The spacing of the fringe pattern is determined by the angle between the two waves, and by the wavelength of the light. The recorded light pattern is a diffraction grating. When it is illuminated by only one of the waves used to create it, it can be shown that one of the diffracted waves emerges at the same angle at which the second wave was originally incident, so that the second wave has been 'reconstructed'. Thus, the recorded light pattern is a holographic recording as defined above. === Point sources === If the recording medium is illuminated with a point source and a normally incident plane wave, the resulting pattern is a sinusoidal zone plate, which acts as a negative Fresnel lens whose focal length is equal to the separation of the point source and the recording plane.
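The two elementary recordings just described, the straight-line grating formed by two plane waves and the sinusoidal zone plate formed by a point source plus a plane wave, can be computed explicitly in a few lines. The sketch below is illustrative only; the wavelength, angle, and distances are assumed values, not taken from the article.

```python
# Illustrative sketch of the two elementary hologram patterns described above,
# evaluated on a 1D slice of the recording plane (assumed values throughout).
import numpy as np

lam = 633e-9                          # red HeNe wavelength, m
k = 2 * np.pi / lam
x = np.linspace(-1e-3, 1e-3, 4001)    # a 2 mm slice of the plate

# 1) Two plane waves: one at normal incidence, one tilted by theta.
theta = np.deg2rad(30)
grating = np.abs(1 + np.exp(1j * k * np.sin(theta) * x))**2   # sinusoidal fringes
fringe_spacing = lam / np.sin(theta)
print(fringe_spacing)                 # ~1.27 micrometres

# 2) Point source at distance z0 plus a normally incident plane wave:
# in the paraxial approximation the result is a sinusoidal zone plate
# whose focal length equals z0, as stated above.
z0 = 0.1                              # m
zone_plate = np.abs(1 + np.exp(1j * k * x**2 / (2 * z0)))**2

# 3) Reconstruction, algebraically: a plate with transmission ~ |O + R|^2,
# illuminated by the reference R alone, transmits
#   (|O|^2 + |R|^2) R + R^2 O* + |R|^2 O,
# and the last term is a replica of the original object wave O.
```

The micrometre-scale fringe spacing in the first example shows why the recording medium must resolve detail far finer than ordinary photographic film, as noted earlier.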
When a plane wavefront illuminates a negative lens, it is expanded into a wave that appears to diverge from the focal point of the lens. Thus, when the recorded pattern is illuminated with the original plane wave, some of the light is diffracted into a diverging beam equivalent to the original spherical wave; a holographic recording of the point source has been created. When the plane wave is incident at a non-normal angle at the time of recording, the pattern formed is more complex, but still acts as a negative lens if it is illuminated at the original angle. === Complex objects === To record a hologram of a complex object, a laser beam is first split into two beams of light. One beam illuminates the object, which then scatters light onto the recording medium. According to diffraction theory, each point in the object acts as a point source of light, so the recording medium can be considered to be illuminated by a set of point sources located at varying distances from the medium. The second (reference) beam illuminates the recording medium directly. Each point source wave interferes with the reference beam, giving rise to its own sinusoidal zone plate in the recording medium. The resulting pattern is the sum of all these 'zone plates', which combine to produce a random (speckle) pattern. When the hologram is illuminated by the original reference beam, each of the individual zone plates reconstructs the object wave that produced it, and these individual wavefronts are combined to reconstruct the whole of the object beam. The viewer perceives a wavefront that is identical with the wavefront scattered from the object onto the recording medium, so that it appears that the object is still in place even if it has been removed. == Applications == === Art === Early on, artists saw the potential of holography as a medium and gained access to science laboratories to create their work. Holographic art is often the result of collaborations between scientists and artists, although some holographers would regard themselves as both an artist and a scientist. Salvador Dalí claimed to have been the first to employ holography artistically. He was certainly the first and best-known surrealist to do so, but the 1972 New York exhibit of Dalí holograms had been preceded by the holographic art exhibition that was held at the Cranbrook Academy of Art in Michigan in 1968 and by the one at the Finch College gallery in New York in 1970, which attracted national media attention. In Great Britain, Margaret Benyon began using holography as an artistic medium in the late 1960s and had a solo exhibition at the University of Nottingham art gallery in 1969. This was followed in 1970 by a solo show at the Lisson Gallery in London, which was billed as the "first London expo of holograms and stereoscopic paintings". During the 1970s, a number of art studios and schools were established, each with their particular approach to holography. Notably, there was the San Francisco School of Holography established by Lloyd Cross, The Museum of Holography in New York founded by Rosemary (Posy) H. Jackson, the Royal College of Art in London and the Lake Forest College Symposiums organised by Tung Jeong. None of these studios still exist; however, there are the Center for the Holographic Arts in New York and the HOLOcenter in Seoul, which offer artists a place to create and exhibit work.
During the 1980s, many artists who worked with holography, such as Harriet Casdin-Silver of the United States, Dieter Jung of Germany, and Moysés Baumstein of Brazil, helped the diffusion of this so-called "new medium" in the art world, each one searching for a proper "language" to use with the three-dimensional work, avoiding the simple holographic reproduction of a sculpture or object. For instance, in Brazil, many concrete poets (Augusto de Campos, Décio Pignatari, Julio Plaza and José Wagner Garcia, associated with Moysés Baumstein) found in holography a way to express themselves and to renew concrete poetry. A small but active group of artists still integrate holographic elements into their work. Some are associated with novel holographic techniques; for example, artist Matt Brand employed computational mirror design to eliminate image distortion from specular holography. The MIT Museum and Jonathan Ross both have extensive collections of holography and on-line catalogues of art holograms. === Data storage === Holographic data storage is a technique that can store information at high density inside crystals or photopolymers. The ability to store large amounts of information in some kind of medium is of great importance, as many electronic products incorporate storage devices. As current storage techniques such as Blu-ray Disc reach the limit of possible data density (due to the diffraction-limited size of the writing beams), holographic storage has the potential to become the next generation of popular storage media. The advantage of this type of data storage is that the volume of the recording medium is used instead of just the surface. Currently available SLMs can produce about 1000 different images a second at 1024×1024-bit resolution, which would result in about one-gigabit-per-second writing speed. In 2005, companies such as Optware and Maxell produced a 120 mm disc that uses a holographic layer to store data to a potential 3.9 TB, a format called Holographic Versatile Disc. As of September 2014, no commercial product has been released. Another company, InPhase Technologies, was developing a competing format, but went bankrupt in 2011 and all its assets were sold to Akonia Holographics, LLC. While many holographic data storage models have used "page-based" storage, where each recorded hologram holds a large amount of data, more recent research into using submicrometre-sized "microholograms" has resulted in several potential 3D optical data storage solutions. While this approach to data storage cannot attain the high data rates of page-based storage, the tolerances, technological hurdles, and cost of producing a commercial product are significantly lower. === Dynamic holography === In static holography, recording, developing and reconstructing occur sequentially, and a permanent hologram is produced. There also exist holographic materials that do not need the developing process and can record a hologram in a very short time. This allows one to use holography to perform some simple operations in an all-optical way. Examples of applications of such real-time holograms include phase-conjugate mirrors ("time-reversal" of light), optical cache memories, image processing (pattern recognition of time-varying images), and optical computing. The amount of processed information can be very high (terabits/s), since the operation is performed in parallel on a whole image.
This compensates for the fact that the recording time, which is on the order of a microsecond, is still very long compared to the processing time of an electronic computer. The optical processing performed by a dynamic hologram is also much less flexible than electronic processing. On the one hand, the operation always has to be performed on the whole image; on the other hand, the operation a hologram can perform is basically either a multiplication or a phase conjugation. In optics, addition and Fourier transform are already easily performed in linear materials, the latter simply by a lens. This enables some applications, such as a device that compares images in an optical way. The search for novel nonlinear optical materials for dynamic holography is an active area of research. The most common materials are photorefractive crystals, but holograms have also been generated in semiconductors and semiconductor heterostructures (such as quantum wells), atomic vapors and gases, plasmas, and even liquids. A particularly promising application is optical phase conjugation. It allows the removal of the wavefront distortions a light beam receives when passing through an aberrating medium, by sending it back through the same aberrating medium with a conjugated phase. This is useful, for example, in free-space optical communications to compensate for atmospheric turbulence (the phenomenon that gives rise to the twinkling of starlight). === Hobbyist use === Since the beginning of holography, many holographers have explored its uses and displayed them to the public. In 1971, Lloyd Cross opened the San Francisco School of Holography and taught amateurs how to make holograms using only a small (typically 5 mW) helium-neon laser and inexpensive home-made equipment. Holography had been supposed to require a very expensive metal optical table set-up to lock all the involved elements down in place and damp any vibrations that could blur the interference fringes and ruin the hologram. Cross's home-brew alternative was a sandbox made of a cinder block retaining wall on a plywood base, supported on stacks of old tires to isolate it from ground vibrations, and filled with sand that had been washed to remove dust. The laser was securely mounted atop the cinder block wall. The mirrors and simple lenses needed for directing, splitting and expanding the laser beam were affixed to short lengths of PVC pipe, which were stuck into the sand at the desired locations. The subject and the photographic plate holder were similarly supported within the sandbox. The holographer turned off the room light, blocked the laser beam near its source using a small relay-controlled shutter, loaded a plate into the holder in the dark, left the room, waited a few minutes to let everything settle, then made the exposure by remotely operating the laser shutter. In 1979, Jason Sapan opened the Holographic Studios in New York City. Since then, they have been involved in the production of many holograms for many artists as well as companies. Sapan has been described as the "last professional holographer of New York". Many of these holographers would go on to produce art holograms. In 1983, Fred Unterseher, a co-founder of the San Francisco School of Holography and a well-known holographic artist, published the Holography Handbook, an easy-to-read guide to making holograms at home. This brought in a new wave of holographers and provided simple methods for using the then-available AGFA silver halide recording materials.
In 2000, Frank DeFreitas published the Shoebox Holography Book and introduced the use of inexpensive laser pointers to countless hobbyists. For many years, it had been assumed that certain characteristics of semiconductor laser diodes made them virtually useless for creating holograms, but when they were eventually put to the test of practical experiment, it was found that not only was this untrue, but that some actually provided a coherence length much greater than that of traditional helium-neon gas lasers. This was a very important development for amateurs, as the price of red laser diodes had dropped from hundreds of dollars in the early 1980s to about $5 after they entered the mass market as a component pulled from CD or, later, DVD players from the mid-1980s onwards. Now, there are thousands of amateur holographers worldwide. By late 2000, holography kits with inexpensive laser pointer diodes entered the mainstream consumer market. These kits enabled students, teachers, and hobbyists to make several kinds of holograms without specialized equipment, and became popular gift items by 2005. The introduction of holography kits with self-developing plates in 2003 made it possible for hobbyists to create holograms without the bother of wet chemical processing. In 2006, a large number of surplus holography-quality green lasers (Coherent C315) became available and put dichromated gelatin (DCG) holography within the reach of the amateur holographer. The holography community was surprised at the amazing sensitivity of DCG to green light. It had been assumed that this sensitivity would be uselessly slight or non-existent. Jeff Blyth responded with the G307 formulation of DCG to increase the speed and sensitivity to these new lasers. Kodak and Agfa, the former major suppliers of holography-quality silver halide plates and films, are no longer in the market. While other manufacturers have helped fill the void, many amateurs are now making their own materials. The favorite formulations are dichromated gelatin, Methylene-Blue-sensitised dichromated gelatin, and diffusion method silver halide preparations. Jeff Blyth has published very accurate methods for making these in a small lab or garage. A small group of amateurs are even constructing their own pulsed lasers to make holograms of living subjects and other unsteady or moving objects. === Holographic interferometry === Holographic interferometry (HI) is a technique that enables static and dynamic displacements of objects with optically rough surfaces to be measured to optical interferometric precision (i.e. to fractions of a wavelength of light). It can also be used to detect optical-path-length variations in transparent media, which enables, for example, fluid flow to be visualized and analyzed. It can also be used to generate contours representing the form of the surface or the isodose regions in radiation dosimetry. It has been widely used to measure stress, strain, and vibration in engineering structures. === Interferometric microscopy === The hologram keeps information on both the amplitude and the phase of the field. Several holograms may keep information about the same distribution of light, emitted in various directions. The numerical analysis of such holograms allows one to emulate a large numerical aperture, which, in turn, enables enhancement of the resolution of optical microscopy. The corresponding technique is called interferometric microscopy.
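Because the hologram preserves both amplitude and phase, the complex field can be recovered numerically from a single recorded intensity pattern. The following sketch illustrates the standard off-axis recipe (Fourier transform, isolate one diffraction order, transform back); the simulated object, carrier frequency, and filter radius are assumed values chosen for illustration.

```python
# Minimal numerical sketch: recovering amplitude and phase from a simulated
# off-axis hologram (all parameters are assumed values for illustration).
import numpy as np

N = 512
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)

obj = np.exp(1j * 2 * np.pi * (X**2 + Y**2) / 8e4)   # object wave: smooth phase profile
fc = 0.2                                             # carrier frequency, cycles/pixel
ref = np.exp(1j * 2 * np.pi * fc * X)                # tilted (off-axis) reference wave

hologram = np.abs(obj + ref)**2                      # the plate records intensity only

# The O R* cross term sits at spatial frequency -fc; isolate it in Fourier space.
H = np.fft.fftshift(np.fft.fft2(hologram))
f = np.fft.fftshift(np.fft.fftfreq(N))
FX, FY = np.meshgrid(f, f)
sideband = H * (np.hypot(FX + fc, FY) < 0.08)

field = np.fft.ifft2(np.fft.ifftshift(sideband))     # ~ O * conj(R), filtered
recovered = field * ref                              # demodulate: ~ O
phase = np.angle(recovered)                          # the recovered phase map
amplitude = np.abs(recovered)
```

In holographic interferometry, a recovered phase map of this kind, compared between two exposures, is what converts fringe counts into displacement measurements of a fraction of a wavelength.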
Recent achievements of interferometric microscopy allow one to approach the quarter-wavelength limit of resolution. === Sensors or biosensors === The hologram is made with a modified material that interacts with certain molecules, generating a change in the fringe periodicity or refractive index and therefore in the color of the holographic reflection. === Security === Holograms are commonly used for security, as they are replicated from a master hologram that requires expensive, specialized and technologically advanced equipment, and are thus difficult to forge. They are used widely in many currencies, such as the Brazilian 20, 50, and 100-reais notes; British 5, 10, 20 and 50-pound notes; South Korean 5000, 10,000, and 50,000-won notes; Japanese 5000 and 10,000 yen notes; Indian 50, 100, 500, and 2000 rupee notes; and all the currently-circulating banknotes of the Canadian dollar, Croatian kuna, Danish krone, and Euro. They can also be found in credit and bank cards as well as passports, ID cards, books, food packaging, DVDs, and sports equipment. Such holograms come in a variety of forms, from adhesive strips that are laminated on packaging for fast-moving consumer goods to holographic tags on electronic products. They often contain textual or pictorial elements to protect identities and separate genuine articles from counterfeits. Holographic scanners are in use in post offices, larger shipping firms, and automated conveyor systems to determine the three-dimensional size of a package. They are often used in tandem with checkweighers to allow automated pre-packing of given volumes, such as a truck or pallet for bulk shipment of goods. Holograms produced in elastomers can be used as stress-strain reporters due to their elasticity and compressibility; the applied pressure and force are correlated with the reflected wavelength and therefore with the observed color. Holographic techniques can also be used effectively for radiation dosimetry. ==== High security registration plates ==== High-security holograms can be used on license plates for vehicles such as cars and motorcycles. As of April 2019, holographic license plates are required on vehicles in parts of India to aid in identification and security, especially in cases of car theft. Such number plates hold electronic data of vehicles, and have a unique ID number and a sticker to indicate authenticity. ==== Extended reality (XR) ==== In March 2022, a real-time holographic communication solution was introduced by Mária Virčíková and Matúš Kirchmayer, creating the first holographic presence app requiring only a smartphone camera. Their company, MATSUKO, patented single-camera technology enabling users to transmit and interact as realistic 3D holograms in XR environments, supported by 5G networks and mixed-reality glasses. Further advancements, including a spatial computing holographic meeting experience MATSUKO developed with Telefónica and NVIDIA, were unveiled and demonstrated at Mobile World Congress (MWC) 2024 in February 2024. This iteration leveraged 5G, edge computing, and AI to enhance realism with eye contact and facial expression tracking, supporting devices like Apple Vision Pro and Meta Quest. == Holography using other types of waves == In principle, it is possible to make a hologram for any wave. Electron holography is the application of holography techniques to electron waves rather than light waves. Electron holography was invented by Dennis Gabor to improve the resolution and avoid the aberrations of the transmission electron microscope.
Today it is commonly used to study electric and magnetic fields in thin films, as magnetic and electric fields can shift the phase of the interfering wave passing through the sample. The principle of electron holography can also be applied to interference lithography. Acoustic holography enables sound maps of an object to be generated. Measurements of the acoustic field are made at many points close to the object. These measurements are digitally processed to produce the "images" of the object. Atomic holography has evolved out of the development of the basic elements of atom optics. With the Fresnel diffraction lens and atomic mirrors, atomic holography follows as a natural step in the development of the physics (and applications) of atomic beams. Recent developments, including atomic mirrors and especially ridged mirrors, have provided the tools necessary for the creation of atomic holograms, although such holograms have not yet been commercialized. Neutron beam holography has been used to see the inside of solid objects. Holograms with x-rays are generated by using synchrotrons or x-ray free-electron lasers as radiation sources and pixelated detectors such as CCDs as the recording medium. The reconstruction is then retrieved via computation. Due to the shorter wavelength of x-rays compared to visible light, this approach allows imaging objects with higher spatial resolution. As free-electron lasers can provide intense and coherent x-ray pulses in the femtosecond range, x-ray holography has been used to capture ultrafast dynamic processes. == False holograms == There are many optical effects that are often confused with holography, such as the effects produced by lenticular printing, the Pepper's ghost illusion (or modern variants such as the Musion Eyeliner), tomography and volumetric displays. Such illusions have been called "fauxlography". The Pepper's ghost technique, being the easiest to implement of these methods, is most prevalent in 3D displays that claim to be (or are referred to as) "holographic". While the original illusion, used in theater, involved actual physical objects and persons, located offstage, modern variants replace the source object with a digital screen, which displays imagery generated with 3D computer graphics to provide the necessary depth cues. The reflection, which seems to float mid-air, is still flat, however, and thus less realistic than if an actual 3D object were being reflected. Examples of this digital version of Pepper's ghost illusion include the Gorillaz performances in the 2005 MTV Europe Music Awards and the 48th Grammy Awards; and Tupac Shakur's virtual performance at Coachella Valley Music and Arts Festival in 2012, rapping alongside Snoop Dogg during his set with Dr. Dre. Digital avatars of the Swedish supergroup ABBA were displayed on stage in May 2022. The ABBA performance used technology that was an updated version of Pepper's Ghost created by Industrial Light & Magic. American rock group KISS unveiled similar digital avatars in December 2023 to tour in their place at the conclusion of the End of the Road World Tour using the same Pepper's Ghost technology as the ABBA avatars. An even simpler illusion can be created by rear-projecting realistic images onto semi-transparent screens. The rear projection is necessary because otherwise the semi-transparency of the screen would allow the background to be illuminated by the projection, which would break the illusion.
Crypton Future Media, a music software company that produced Hatsune Miku, one of many Vocaloid singing synthesizer applications, has produced concerts that have Miku, along with other Crypton Vocaloids, performing on stage as "holographic" characters. These concerts use rear projection onto a semi-transparent DILAD screen to achieve their "holographic" effect. In 2011, in Beijing, apparel company Burberry produced the "Burberry Prorsum Autumn/Winter 2011 Hologram Runway Show", which included life-size 2-D projections of models. The company's own video shows several centered and off-center shots of the main 2-dimensional projection screen, the latter revealing the flatness of the virtual models. The claim that holography was used was reported as fact in the trade media. In Madrid, on 10 April 2015, a public visual presentation called "Hologramas por la Libertad" (Holograms for Liberty), featuring a ghostly virtual crowd of demonstrators, was used to protest a new Spanish law that prohibits citizens from demonstrating in public places. Although widely called a "hologram protest" in news reports, no actual holography was involved – it was yet another technologically updated variant of the Pepper's ghost illusion. Holography is distinct from specular holography, which is a technique for making three-dimensional images by controlling the motion of specularities on a two-dimensional surface. It works by reflectively or refractively manipulating bundles of light rays, not by using interference and diffraction. == Tactile holograms == == In fiction == Holography has been widely referred to in movies, novels, and TV, usually in science fiction, starting in the late 1970s. Science fiction writers absorbed the urban legends surrounding holography that had been spread by overly-enthusiastic scientists and entrepreneurs trying to market the idea. This had the effect of giving the public overly high expectations of the capability of holography, due to the unrealistic depictions of it in most fiction, in which holograms appear as fully three-dimensional computer projections that are sometimes tactile through the use of force fields. Examples of this type of depiction include the hologram of Princess Leia in Star Wars, Arnold Rimmer from Red Dwarf, who was later converted to "hard light" to make him solid, and the Holodeck and Emergency Medical Hologram from Star Trek. Holography has served as an inspiration for many video games with science fiction elements. In many titles, fictional holographic technology has been used to reflect real-life misrepresentations of potential military use of holograms, such as the "mirage tanks" in Command & Conquer: Red Alert 2 that can disguise themselves as trees. Player characters are able to use holographic decoys in games such as Halo: Reach and Crysis 2 to confuse and distract the enemy. StarCraft ghost agent Nova has access to "holo decoy" as one of her three primary abilities in Heroes of the Storm. Fictional depictions of holograms have, however, inspired technological advances in other fields, such as augmented reality, that promise to fulfill the fictional depictions of holograms by other means. == See also == == References == == Bibliography == == Further reading == == External links == "Dennis Gabor – Autobiography", 30 September 2004, Nobelprize.org "Holography, 1948-1971 Nobel Lecture", 11 December 1971, by Dennis Gabor "How Holograms Work", How Stuff Works, by Tracy V. Wilson, 30 August 2023 "Holography" by The Strange Theory of Light, QED "Making Real Holograms!!!!!!"
at YouTube by The Thought Emporium, 19 November 2020 "How are holograms possible?" at YouTube by Grant Sanderson, 3Blue1Brown, 5 October 2024
Wikipedia/Holography
String field theory (SFT) is a formalism in string theory in which the dynamics of relativistic strings is reformulated in the language of quantum field theory. This is accomplished at the level of perturbation theory by finding a collection of vertices for joining and splitting strings, as well as string propagators, that give a Feynman diagram-like expansion for string scattering amplitudes. In most string field theories, this expansion is encoded by a classical action found by second-quantizing the free string and adding interaction terms. As is usually the case in second quantization, a classical field configuration of the second-quantized theory is given by a wave function in the original theory. In the case of string field theory, this implies that a classical configuration, usually called the string field, is given by an element of the free string Fock space. The principal advantages of the formalism are that it allows the computation of off-shell amplitudes and, when a classical action is available, gives non-perturbative information that cannot be seen directly from the standard genus expansion of string scattering. In particular, following the work of Ashoke Sen, it has been useful in the study of tachyon condensation on unstable D-branes. It has also had applications to topological string theory, non-commutative geometry, and strings in low dimensions. String field theories come in a number of varieties depending on which type of string is second quantized: Open string field theories describe the scattering of open strings, closed string field theories describe closed strings, while open-closed string field theories include both open and closed strings. In addition, depending on the method used to fix the worldsheet diffeomorphisms and conformal transformations in the original free string theory, the resulting string field theories can be very different. Using light-cone gauge yields light-cone string field theories, whereas using BRST quantization, one finds covariant string field theories. There are also hybrid string field theories, known as covariantized light-cone string field theories, which use elements of both light-cone and BRST gauge-fixed string field theories. A final form of string field theory, known as background independent open string field theory, takes a very different form; instead of second quantizing the worldsheet string theory, it second quantizes the space of two-dimensional quantum field theories. == Light-cone string field theory == Light-cone string field theories were introduced by Stanley Mandelstam and developed by Mandelstam, Michael Green, John Schwarz and Lars Brink. An explicit description of the second quantization of the light-cone string was given by Michio Kaku and Keiji Kikkawa. Light-cone string field theories were the first string field theories to be constructed and are based on the simplicity of string scattering in light-cone gauge. For example, in the bosonic closed string case, the worldsheet scattering diagrams naturally take a Feynman diagram-like form, being built from two ingredients: a propagator, and two vertices for splitting and joining strings that can be used to glue three propagators together. These vertices and propagators produce a single cover of the moduli space of n {\displaystyle n} -point closed string scattering amplitudes, so no higher-order vertices are required. Similar vertices exist for the open string.
When one considers light-cone quantized superstrings, the discussion is more subtle as divergences can arise when the light-cone vertices collide. To produce a consistent theory, it is necessary to introduce higher order vertices, called contact terms, to cancel the divergences. Light-cone string field theories have the disadvantage that they break manifest Lorentz invariance. However, in backgrounds with light-like Killing vectors, they can considerably simplify the quantization of the string action. Moreover, until the advent of the Berkovits string it was the only known method for quantizing strings in the presence of Ramond–Ramond fields. In recent research, light-cone string field theory played an important role in understanding strings in pp-wave backgrounds. == Free covariant string field theory == An important step in the construction of covariant string field theories (preserving manifest Lorentz invariance) was the construction of a covariant kinetic term. This kinetic term can be considered a string field theory in its own right: the string field theory of free strings. Since the work of Warren Siegel, it has been standard to first BRST-quantize the free string theory and then second quantize so that the classical fields of the string field theory include ghosts as well as matter fields. For example, in the case of the bosonic open string theory in 26-dimensional flat spacetime, a general element of the Fock-space of the BRST quantized string takes the form (in radial quantization in the upper half plane), | Ψ ⟩ = ∫ d 26 p ( T ( p ) c 1 e i p ⋅ X | 0 ⟩ + A μ ( p ) ∂ X μ c 1 e i p ⋅ X | 0 ⟩ + χ ( p ) c 0 e i p ⋅ X | 0 ⟩ + … ) , {\displaystyle |\Psi \rangle =\int d^{26}p\left(T(p)c_{1}e^{ip\cdot X}|0\rangle +A_{\mu }(p)\partial X^{\mu }c_{1}e^{ip\cdot X}|0\rangle +\chi (p)c_{0}e^{ip\cdot X}|0\rangle +\ldots \right),} where | 0 ⟩ {\displaystyle |0\rangle } is the free string vacuum and the dots represent more massive fields. In the language of worldsheet string theory, T ( p ) {\displaystyle T(p)} , A μ ( p ) {\displaystyle A_{\mu }(p)} , and χ ( p ) {\displaystyle \chi (p)} represent the amplitudes for the string to be found in the various basis states. After second quantization, they are interpreted instead as classical fields representing the tachyon T {\displaystyle T} , gauge field A μ {\displaystyle A_{\mu }} and a ghost field χ {\displaystyle \chi } . In the worldsheet string theory, the unphysical elements of the Fock space are removed by imposing the condition Q B | Ψ ⟩ = 0 {\displaystyle Q_{B}|\Psi \rangle =0} as well as the equivalence relation | Ψ ⟩ ∼ | Ψ ⟩ + Q B | Λ ⟩ {\displaystyle |\Psi \rangle \sim |\Psi \rangle +Q_{B}|\Lambda \rangle } . After second quantization, the equivalence relation is interpreted as a gauge invariance, whereas the condition that | Ψ ⟩ {\displaystyle |\Psi \rangle } is physical is interpreted as an equation of motion. Because the physical fields live at ghostnumber one, it is also assumed that the string field | Ψ ⟩ {\displaystyle |\Psi \rangle } is a ghostnumber one element of the Fock space. In the case of the open bosonic string a gauge-unfixed action with the appropriate symmetries and equations of motion was originally obtained by André Neveu, Hermann Nicolai and Peter C. West. It is given by S free open ( Ψ ) = 1 2 ⟨ Ψ | Q B | Ψ ⟩ , {\displaystyle S_{\text{free open}}(\Psi )={\tfrac {1}{2}}\langle \Psi |Q_{B}|\Psi \rangle \ ,} where ⟨ Ψ | {\displaystyle \langle \Psi |} is the BPZ-dual of | Ψ ⟩ {\displaystyle |\Psi \rangle } . 
For the bosonic closed string, construction of a BRST-invariant kinetic term requires additionally that one impose ( L 0 − L ~ 0 ) | Ψ ⟩ = 0 {\displaystyle (L_{0}-{\tilde {L}}_{0})|\Psi \rangle =0} and ( b 0 − b ~ 0 ) | Ψ ⟩ = 0 {\displaystyle (b_{0}-{\tilde {b}}_{0})|\Psi \rangle =0} . The kinetic term is then S free closed = 1 2 ⟨ Ψ | ( c 0 − c ~ 0 ) Q B | Ψ ⟩ . {\displaystyle S_{\text{free closed}}={\tfrac {1}{2}}\langle \Psi |(c_{0}-{\tilde {c}}_{0})Q_{B}|\Psi \rangle \ .} Additional considerations are required for the superstrings to deal with the superghost zero-modes. == Witten's cubic open string field theory == The best studied and simplest of covariant interacting string field theories was constructed by Edward Witten. It describes the dynamics of bosonic open strings and is given by adding to the free open string action a cubic vertex: S ( Ψ ) = 1 2 ⟨ Ψ | Q B | Ψ ⟩ + 1 3 ⟨ Ψ , Ψ , Ψ ⟩ {\displaystyle S(\Psi )={\tfrac {1}{2}}\langle \Psi |Q_{B}|\Psi \rangle +{\tfrac {1}{3}}\langle \Psi ,\Psi ,\Psi \rangle } , where, as in the free case, Ψ {\displaystyle \Psi } is a ghostnumber one element of the BRST-quantized free bosonic open-string Fock-space. The cubic vertex, ⟨ Ψ 1 , Ψ 2 , Ψ 3 ⟩ {\displaystyle \langle \Psi _{1},\Psi _{2},\Psi _{3}\rangle } is a trilinear map which takes three string fields of total ghostnumber three and yields a number. Following Witten, who was motivated by ideas from noncommutative geometry, it is conventional to introduce the ∗ {\displaystyle *} -product defined implicitly through ⟨ Σ | Ψ 1 ∗ Ψ 2 ⟩ = ⟨ Σ , Ψ 1 , Ψ 2 ⟩ . {\displaystyle \langle \Sigma |\Psi _{1}*\Psi _{2}\rangle =\langle \Sigma ,\Psi _{1},\Psi _{2}\rangle \ .} The ∗ {\displaystyle *} -product and cubic vertex satisfy a number of important properties (allowing the Ψ i {\displaystyle \Psi _{i}} to be general ghost number fields): the product is associative, ( Ψ 1 ∗ Ψ 2 ) ∗ Ψ 3 = Ψ 1 ∗ ( Ψ 2 ∗ Ψ 3 ) {\displaystyle (\Psi _{1}*\Psi _{2})*\Psi _{3}=\Psi _{1}*(\Psi _{2}*\Psi _{3})} ; the BRST operator acts as a graded derivation, Q B ( Ψ 1 ∗ Ψ 2 ) = ( Q B Ψ 1 ) ∗ Ψ 2 + ( − 1 ) g n ( Ψ 1 ) Ψ 1 ∗ ( Q B Ψ 2 ) {\displaystyle Q_{B}(\Psi _{1}*\Psi _{2})=(Q_{B}\Psi _{1})*\Psi _{2}+(-1)^{gn(\Psi _{1})}\Psi _{1}*(Q_{B}\Psi _{2})} ; the pairing permits integration by parts, ⟨ Q B Ψ 1 | Ψ 2 ⟩ = − ( − 1 ) g n ( Ψ 1 ) ⟨ Ψ 1 | Q B Ψ 2 ⟩ {\displaystyle \langle Q_{B}\Psi _{1}|\Psi _{2}\rangle =-(-1)^{gn(\Psi _{1})}\langle \Psi _{1}|Q_{B}\Psi _{2}\rangle } ; and the cubic vertex is cyclic in its three arguments, up to signs determined by the ghost numbers. In these equations, g n ( Ψ ) {\displaystyle gn(\Psi )} denotes the ghost number of Ψ {\displaystyle \Psi } . === Gauge invariance === These properties of the cubic vertex are sufficient to show that S ( Ψ ) {\displaystyle S(\Psi )} is invariant under the Yang–Mills-like gauge transformation, Ψ → Ψ + Q B Λ + Ψ ∗ Λ − Λ ∗ Ψ , {\displaystyle \Psi \to \Psi +Q_{B}\Lambda +\Psi *\Lambda -\Lambda *\Psi \ ,} where Λ {\displaystyle \Lambda } is an infinitesimal gauge parameter. Finite gauge transformations take the form Ψ → e − Λ ( Ψ + Q B ) e Λ {\displaystyle \Psi \to e^{-\Lambda }(\Psi +Q_{B})e^{\Lambda }} where the exponential is defined by, e Λ = 1 + Λ + 1 2 Λ ∗ Λ + 1 3 ! Λ ∗ Λ ∗ Λ + … {\displaystyle e^{\Lambda }=1+\Lambda +{\tfrac {1}{2}}\Lambda *\Lambda +{\tfrac {1}{3!}}\Lambda *\Lambda *\Lambda +\ldots } === Equations of motion === The equations of motion are given by the following equation: Q B Ψ + Ψ ∗ Ψ = 0 . {\displaystyle Q_{B}\Psi +\Psi *\Psi =0\left.\right.\ .} Because the string field Ψ {\displaystyle \Psi } is an infinite collection of ordinary classical fields, these equations represent an infinite collection of non-linear coupled differential equations. There have been two approaches to finding solutions: First, numerically, one can truncate the string field to include only fields with mass less than a fixed bound, a procedure known as "level truncation". This reduces the equations of motion to a finite number of coupled differential equations and has led to the discovery of many solutions.
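As a concrete illustration of the level-truncation approach just described, the crudest truncation keeps only the zero-momentum tachyon. The sketch below is a numerical illustration rather than part of the original presentation; it assumes the commonly used normalization in which Sen's conjecture predicts a vacuum energy of −1 in units of the D-brane tension, with the cubic coupling (3√3/4)³ generated by Witten's vertex.

```python
# Level-zero truncation of the cubic open string field theory action:
# keep only the zero-momentum tachyon t, whose potential in this truncation is
#   f(t) = 2*pi^2 * ( -t^2/2 + lam * t^3/3 ),   lam = (3*sqrt(3)/4)**3,
# normalized so that Sen's conjecture predicts f = -1 at the true vacuum.
import math

lam = (3 * math.sqrt(3) / 4) ** 3       # ~2.19, from the cubic vertex
t_star = 1 / lam                        # nontrivial stationary point of f
f_star = 2 * math.pi**2 * (-t_star**2 / 2 + lam * t_star**3 / 3)
print(t_star, f_star)                   # ~0.456 and ~-0.68
```

Even this one-field truncation recovers about 68% of the conjectured answer, and including fields of higher level pushes the ratio steadily toward 100%, part of the numerical evidence for the tachyon vacuum mentioned above.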
Second, following the work of Martin Schnabl, one can seek analytic solutions by carefully picking an ansatz which has simple behavior under star multiplication and action by the BRST operator. This has led to solutions representing marginal deformations, the tachyon vacuum solution and time-independent D-brane systems. === Quantization === To consistently quantize S ( Ψ ) {\displaystyle S(\Psi )} one has to fix a gauge. The traditional choice has been Feynman–Siegel gauge, b 0 Ψ = 0 . {\displaystyle b_{0}\Psi =0\left.\right.\ .} Because the gauge transformations are themselves redundant (there are gauge transformations of the gauge transformations), the gauge fixing procedure requires introducing an infinite number of ghosts via the BV formalism. The complete gauge fixed action is given by S gauge-fixed = 1 2 ⟨ Ψ | c 0 L 0 | Ψ ⟩ + 1 3 ⟨ Ψ , Ψ , Ψ ⟩ , {\displaystyle S_{\text{gauge-fixed}}={\tfrac {1}{2}}\langle \Psi |c_{0}L_{0}|\Psi \rangle +{\tfrac {1}{3}}\langle \Psi ,\Psi ,\Psi \rangle \ ,} where the field Ψ {\displaystyle \Psi } is now allowed to be of arbitrary ghostnumber. In this gauge, the Feynman diagrams are constructed from a single propagator and vertex. The propagator takes the form of a strip of worldsheet of width π {\displaystyle \pi } and length T {\displaystyle T} , with an insertion of an integral of the b {\displaystyle b} -ghost along a line crossing the strip. The modulus T {\displaystyle T} is integrated from 0 to ∞ {\displaystyle \infty } . The three-vertex can be described as a way of gluing three propagators together: to picture the vertex embedded in three dimensions, the propagators can be folded in half along their midpoints. The resulting geometry is completely flat except for a single curvature singularity where the midpoints of the three propagators meet. These Feynman diagrams generate a complete cover of the moduli space of open string scattering diagrams. It follows that, for on-shell amplitudes, the n-point open string amplitudes computed using Witten's open string field theory are identical to those computed using standard worldsheet methods. == Supersymmetric covariant open string field theories == There are two main constructions of supersymmetric extensions of Witten's cubic open string field theory. The first is very similar in form to its bosonic cousin and is known as modified cubic superstring field theory. The second, due to Nathan Berkovits, is very different and is based on a WZW-type action. === Modified cubic superstring field theory === The first consistent extension of Witten's bosonic open string field theory to the RNS string was constructed by Christian Preitschopf, Charles Thorn and Scott Yost and independently by Irina Aref'eva, P. B. Medvedev and A. P. Zubarev. The NS string field is taken to be a ghostnumber one picture zero string field in the small Hilbert space (i.e. η 0 | Ψ ⟩ = 0 {\displaystyle \eta _{0}|\Psi \rangle =0} ). The action takes a very similar form to the bosonic action, S ( Ψ ) = 1 2 ⟨ Ψ | Y ( i ) Y ( − i ) Q B | Ψ ⟩ + 1 3 ⟨ Ψ | Y ( i ) Y ( − i ) | Ψ ∗ Ψ ⟩ , {\displaystyle S(\Psi )={\tfrac {1}{2}}\langle \Psi |Y(i)Y(-i)Q_{B}|\Psi \rangle +{\tfrac {1}{3}}\langle \Psi |Y(i)Y(-i)|\Psi *\Psi \rangle \ ,} where Y ( z ) = − ∂ ξ e − 2 ϕ c ( z ) {\displaystyle Y(z)=-\partial \xi e^{-2\phi }c(z)} is the inverse picture changing operator. The suggested − 1 2 {\displaystyle -{\tfrac {1}{2}}} picture number extension of this theory to the Ramond sector might be problematic.
This action has been shown to reproduce tree-level amplitudes and has a tachyon vacuum solution with the correct energy. The one subtlety in the action is the insertion of picture changing operators at the midpoint, which imply that the linearized equations of motion take the form Y ( i ) Y ( − i ) Q B Ψ = 0 . {\displaystyle Y(i)Y(-i)Q_{B}\Psi =0\left.\right.\ .} Because Y ( i ) Y ( − i ) {\displaystyle Y(i)Y(-i)} has a non-trivial kernel, there are potentially extra solutions that are not in the cohomology of Q B {\displaystyle Q_{B}} . However, such solutions would have operator insertions near the midpoint and would be potentially singular, and the importance of this problem remains unclear. === Berkovits superstring field theory === A very different supersymmetric action for the open string was constructed by Nathan Berkovits. It takes the form S = 1 2 ⟨ e − Φ Q B e Φ | e − Φ η 0 e Φ ⟩ − 1 2 ∫ 0 1 d t ⟨ e − Φ ^ ∂ t e Φ ^ | { e − Φ ^ Q B e Φ ^ , e − Φ ^ η 0 e Φ ^ } ⟩ {\displaystyle S={\tfrac {1}{2}}\langle e^{-\Phi }Q_{B}e^{\Phi }|e^{-\Phi }\eta _{0}e^{\Phi }\rangle -{\tfrac {1}{2}}\int _{0}^{1}dt\langle e^{-{\hat {\Phi }}}\partial _{t}e^{\hat {\Phi }}|\{e^{-{\hat {\Phi }}}Q_{B}e^{\hat {\Phi }},e^{-{\hat {\Phi }}}\eta _{0}e^{\hat {\Phi }}\}\rangle } where all of the products are performed using the ∗ {\displaystyle *} -product including the anticommutator { , } {\displaystyle \{,\}} , and Φ ^ ( t ) {\displaystyle {\hat {\Phi }}(t)} is any string field such that Φ ^ ( 0 ) = 0 {\displaystyle {\hat {\Phi }}(0)=0} and Φ ^ ( 1 ) = Φ {\displaystyle {\hat {\Phi }}(1)=\Phi } . The string field Φ {\displaystyle \Phi } is taken to be in the NS sector of the large Hilbert space, i.e. including the zero mode of ξ {\displaystyle \xi } . It is not known how to incorporate the R sector, although some preliminary ideas exist. The equations of motion take the form η 0 ( e − Φ Q B e Φ ) = 0. {\displaystyle \eta _{0}\left(e^{-\Phi }Q_{B}e^{\Phi }\right)=0.} The action is invariant under the gauge transformation e Φ → e Q B Λ e Φ e η 0 Λ ′ . {\displaystyle e^{\Phi }\to e^{Q_{B}\Lambda }e^{\Phi }e^{\eta _{0}\Lambda '}.} The principal advantage of this action is that it is free from any insertions of picture-changing operators. It has been shown to correctly reproduce tree-level amplitudes and has been found, numerically, to have a tachyon vacuum with appropriate energy. The known analytic solutions to the classical equations of motion include the tachyon vacuum and marginal deformations. === Other formulations of covariant open superstring field theory === A formulation of superstring field theory using the non-minimal pure-spinor variables was introduced by Berkovits. The action is cubic and includes a midpoint insertion whose kernel is trivial. As always within the pure-spinor formulation, the Ramond sector can be easily treated. However, it is not known how to incorporate the GSO(−) sectors into the formalism. In an attempt to resolve the allegedly problematic midpoint insertion of the modified cubic theory, Berkovits and Siegel proposed a superstring field theory based on a non-minimal extension of the RNS string, which uses a midpoint insertion with no kernel. It is not clear if such insertions are in any way better than midpoint insertions with non-trivial kernels. == Covariant closed string field theory == Covariant closed string field theories are considerably more complicated than their open string cousins.
Even if one wants to construct a string field theory which only reproduces tree-level interactions between closed strings, the classical action must contain an infinite number of vertices consisting of string polyhedra. If one demands that on-shell scattering diagrams be reproduced to all orders in the string coupling, one must also include additional vertices arising from higher genus (and hence higher order in ℏ {\displaystyle \hbar } ). In general, a manifestly BV invariant, quantizable action takes the form S ( Ψ ) = ℏ ∑ g ≥ 0 ( ℏ g c ) g − 1 ∑ n ≥ 0 1 n ! { Ψ n } g {\displaystyle S(\Psi )=\hbar \sum _{g\geq 0}(\hbar g_{c})^{g-1}\sum _{n\geq 0}{\frac {1}{n!}}\{\Psi ^{n}\}_{g}} where { Ψ n } g {\displaystyle \{\Psi ^{n}\}_{g}} denotes an n {\displaystyle n} th order vertex arising from a genus g {\displaystyle g} surface and g c {\displaystyle g_{c}} is the closed string coupling. The structure of the vertices is in principle determined by a minimal area prescription, although, even for the polyhedral vertices, explicit computations have only been performed to quintic order. == Covariant heterotic string field theory == A formulation of the NS sector of the heterotic string was given by Berkovits, Okawa and Zwiebach. The formulation amalgamates bosonic closed string field theory with Berkovits' superstring field theory. == See also == == References ==
Wikipedia/String_field_theory
In geometry, the Poincaré disk model, also called the conformal disk model, is a model of 2-dimensional hyperbolic geometry in which all points are inside the unit disk, and straight lines are either circular arcs contained within the disk that are orthogonal to the unit circle or diameters of the unit circle. The group of orientation preserving isometries of the disk model is given by the projective special unitary group PSU(1,1), the quotient of the special unitary group SU(1,1) by its center {I, −I}. Along with the Klein model and the Poincaré half-space model, it was proposed by Eugenio Beltrami who used these models to show that hyperbolic geometry was equiconsistent with Euclidean geometry. It is named after Henri Poincaré, because his rediscovery of this representation fourteen years later became better known than the original work of Beltrami. The Poincaré ball model is the similar model for 3 or n-dimensional hyperbolic geometry in which the points of the geometry are in the n-dimensional unit ball. == History == The disk model was first described by Bernhard Riemann in an 1854 lecture (published 1868), which inspired an 1868 paper by Eugenio Beltrami. Henri Poincaré employed it in his 1882 treatment of hyperbolic, parabolic and elliptic functions, but it became widely known following Poincaré's presentation in his 1905 philosophical treatise, Science and Hypothesis. There he describes a world, now known as the Poincaré disk, in which space was Euclidean, but which appeared to its inhabitants to satisfy the axioms of hyperbolic geometry:"Suppose, for example, a world enclosed in a large sphere and subject to the following laws: The temperature is not uniform; it is greatest at their centre, and gradually decreases as we move towards the circumference of the sphere, where it is absolute zero. The law of this temperature is as follows: If R {\displaystyle R} be the radius of the sphere, and r {\displaystyle r} the distance of the point considered from the centre, the absolute temperature will be proportional to R 2 − r 2 {\displaystyle R^{2}-r^{2}} . Further, I shall suppose that in this world all bodies have the same co-efficient of dilatation, so that the linear dilatation of any body is proportional to its absolute temperature. Finally, I shall assume that a body transported from one point to another of different temperature is instantaneously in thermal equilibrium with its new environment. ... If they construct a geometry, it will not be like ours, which is the study of the movements of our invariable solids; it will be the study of the changes of position which they will have thus distinguished, and will be 'non-Euclidean displacements,' and this will be non-Euclidean geometry. So that beings like ourselves, educated in such a world, will not have the same geometry as ours." (pp.65-68)Poincaré's disk was an important piece of evidence for the hypothesis that the choice of spatial geometry is conventional rather than factual, especially in the influential philosophical discussions of Rudolf Carnap and of Hans Reichenbach. == Lines and distance == Hyperbolic straight lines or geodesics consist of all arcs of Euclidean circles contained within the disk that are orthogonal to the boundary of the disk, plus all diameters of the disk. Distances in this model are Cayley–Klein metrics. Given two distinct points p and q inside the disk, the unique hyperbolic line connecting them intersects the boundary at two ideal points, a and b. 
Label them so that the points are, in order, a, p, q, b, that is, so that |aq| > |ap| and |pb| > |qb|. The hyperbolic distance between p and q is then d ( p , q ) = ln ⁡ | a q | | p b | | a p | | q b | . {\displaystyle d(p,q)=\ln {\frac {\left|aq\right|\,\left|pb\right|}{\left|ap\right|\,\left|qb\right|}}.} The vertical bars denote the Euclidean lengths of the corresponding line segments in the model (not lengths along the circle arc); ln is the natural logarithm. Equivalently, if u and v are two vectors in real n-dimensional vector space Rn with the usual Euclidean norm, both of which have norm less than 1, then we may define an isometric invariant by δ ( u , v ) = 2 ‖ u − v ‖ 2 ( 1 − ‖ u ‖ 2 ) ( 1 − ‖ v ‖ 2 ) , {\displaystyle \delta (u,v)=2{\frac {\lVert u-v\rVert ^{2}}{(1-\lVert u\rVert ^{2})(1-\lVert v\rVert ^{2})}}\,,} where ‖ ⋅ ‖ {\displaystyle \lVert \cdot \rVert } denotes the usual Euclidean norm. Then the distance function is d ( u , v ) = arcosh ⁡ ( 1 + δ ( u , v ) ) = 2 arsinh ⁡ δ ( u , v ) 2 = 2 ln ⁡ ‖ u − v ‖ + ‖ u ‖ 2 ‖ v ‖ 2 − 2 u ⋅ v + 1 ( 1 − ‖ u ‖ 2 ) ( 1 − ‖ v ‖ 2 ) . {\displaystyle {\begin{aligned}d(u,v)&=\operatorname {arcosh} (1+\delta (u,v))\\&=2\operatorname {arsinh} {\sqrt {\frac {\delta (u,v)}{2}}}\\\,&=2\ln {\frac {\lVert u-v\rVert +{\sqrt {\lVert u\rVert ^{2}\lVert v\rVert ^{2}-2u\cdot v+1}}}{\sqrt {(1-\lVert u\rVert ^{2})(1-\lVert v\rVert ^{2})}}}.\end{aligned}}} Such a distance function is defined for any two vectors of norm less than one, and makes the set of such vectors into a metric space which is a model of hyperbolic space of constant curvature −1. The model has the conformal property that the angle between two intersecting curves in hyperbolic space is the same as the angle in the model. Specializing to the case where one of the points is the origin and the Euclidean distance between the points is r, the hyperbolic distance is: ln ⁡ ( 1 + r 1 − r ) = 2 artanh ⁡ r {\displaystyle \ln \left({\frac {1+r}{1-r}}\right)=2\operatorname {artanh} r} where artanh {\displaystyle \operatorname {artanh} } is the inverse of the hyperbolic tangent. If the two points lie on the same radius and point x ′ = ( r ′ , θ ) {\displaystyle x'=(r',\theta )} lies between the origin and point x = ( r , θ ) {\displaystyle x=(r,\theta )} , their hyperbolic distance is ln ⁡ ( 1 + r 1 − r ⋅ 1 − r ′ 1 + r ′ ) = 2 ( artanh ⁡ r − artanh ⁡ r ′ ) . {\displaystyle \ln \left({\frac {1+r}{1-r}}\cdot {\frac {1-r'}{1+r'}}\right)=2(\operatorname {artanh} r-\operatorname {artanh} r').} This reduces to the previous special case if r ′ = 0 {\displaystyle r'=0} . == Metric and curvature == The associated metric tensor of the Poincaré disk model is given by d s 2 = 4 ∑ i d x i 2 ( 1 − ∑ i x i 2 ) 2 = 4 ‖ d x ‖ l 2 ( 1 − ‖ x ‖ l 2 ) 2 {\displaystyle ds^{2}=4{\frac {\sum _{i}dx_{i}^{2}}{\left(1-\sum _{i}x_{i}^{2}\right)^{2}}}={\frac {4\,\lVert d\mathbf {x} \rVert {\vphantom {l}}^{2}}{{\bigl (}1-\lVert \mathbf {x} \rVert {\vphantom {l}}^{2}{\bigr )}^{2}}}} where the xi are the Cartesian coordinates of the ambient Euclidean space. An orthonormal frame with respect to this Riemannian metric is given by e i = 1 2 ( 1 − | x | 2 ) ∂ ∂ x i , {\displaystyle e_{i}={\frac {1}{2}}{\Bigl (}1-|\mathbf {x} |^{2}{\Bigr )}{\frac {\partial }{\partial x^{i}}},} with dual coframe of 1-forms θ i = 2 1 − | x | l 2 d x i .
{\displaystyle \theta ^{i}={\frac {2}{1-|\mathbf {x} |{\vphantom {l}}^{2}}}\,dx^{i}.} === In two dimensions === In two dimensions, with respect to these frames and the Levi-Civita connection, the connection forms are given by the unique skew-symmetric matrix of 1-forms ω {\displaystyle \omega } that is torsion-free, i.e., that satisfies the matrix equation 0 = d θ + ω ∧ θ {\displaystyle 0=d\theta +\omega \wedge \theta } . Solving this equation for ω {\displaystyle \omega } yields ω = 2 ( y d x − x d y ) 1 − | x | l 2 ( 0 1 − 1 0 ) , {\displaystyle \omega ={\frac {2(y\,dx-x\,dy)}{1-|\mathbf {x} |{\vphantom {l}}^{2}}}{\begin{pmatrix}0&1\\-1&0\end{pmatrix}},} where the curvature matrix is Ω = d ω + ω ∧ ω = d ω + 0 = − 4 d x ∧ d y ( 1 − | x | l 2 ) 2 ( 0 1 − 1 0 ) . {\displaystyle \Omega =d\omega +\omega \wedge \omega =d\omega +0={\frac {-4\,dx\wedge dy}{{\bigl (}1-|\mathbf {x} |{\vphantom {l}}^{2}{\bigr )}^{2}}}{\begin{pmatrix}0&1\\-1&0\end{pmatrix}}.} Therefore, the curvature of the hyperbolic disk is K = Ω 2 1 ( e 1 , e 2 ) = − 1. {\displaystyle K=\Omega _{2}^{1}(e_{1},e_{2})=-1.} == Construction of lines == === By compass and straightedge === The unique hyperbolic line through two points P {\displaystyle P} and Q {\displaystyle Q} not on a diameter of the boundary circle can be constructed as follows:
1. Let P ′ {\displaystyle P'} be the inversion in the boundary circle of point P {\displaystyle P} .
2. Let Q ′ {\displaystyle Q'} be the inversion in the boundary circle of point Q {\displaystyle Q} .
3. Let M {\displaystyle M} be the midpoint of segment P P ′ {\displaystyle PP'} .
4. Let N {\displaystyle N} be the midpoint of segment Q Q ′ {\displaystyle QQ'} .
5. Draw line m {\displaystyle m} through M {\displaystyle M} perpendicular to segment P P ′ {\displaystyle PP'} .
6. Draw line n {\displaystyle n} through N {\displaystyle N} perpendicular to segment Q Q ′ {\displaystyle QQ'} .
7. Let C {\displaystyle C} be the point where line m {\displaystyle m} and line n {\displaystyle n} intersect.
8. Draw circle c {\displaystyle c} with center C {\displaystyle C} going through P {\displaystyle P} (and Q {\displaystyle Q} ).
The part of circle c {\displaystyle c} that is inside the disk is the hyperbolic line. If P and Q are on a diameter of the boundary circle, that diameter is the hyperbolic line. Another way is:
1. Let M {\displaystyle M} be the midpoint of segment P Q {\displaystyle PQ} .
2. Draw line m through M {\displaystyle M} perpendicular to segment P Q {\displaystyle PQ} .
3. Let P ′ {\displaystyle P'} be the inversion in the boundary circle of point P {\displaystyle P} .
4. Let N {\displaystyle N} be the midpoint of segment P P ′ {\displaystyle PP'} .
5. Draw line n {\displaystyle n} through N {\displaystyle N} perpendicular to segment P P ′ {\displaystyle PP'} .
6. Let C {\displaystyle C} be the point where line m {\displaystyle m} and line n {\displaystyle n} intersect.
7. Draw circle c {\displaystyle c} with center C {\displaystyle C} going through P {\displaystyle P} (and Q {\displaystyle Q} ).
The part of circle c {\displaystyle c} that is inside the disk is the hyperbolic line. === By analytic geometry === A basic construction of analytic geometry is to find a line through two given points. In the Poincaré disk model, lines in the plane are defined by portions of circles having equations of the form x 2 + y 2 + a x + b y + 1 = 0 , {\displaystyle x^{2}+y^{2}+ax+by+1=0\,,} which is the general form of a circle orthogonal to the unit circle, or else by diameters.
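The orthogonality claim can be checked directly from the coefficients: the circle x² + y² + ax + by + 1 = 0 has center (−a/2, −b/2) and squared radius (a² + b²)/4 − 1, and two circles meet at right angles exactly when the squared distance between their centers equals the sum of their squared radii. A minimal numerical sketch (the values of a and b are arbitrary illustrations):

```python
import math

# A circle x^2 + y^2 + a*x + b*y + 1 = 0 has center (-a/2, -b/2) and
# radius r with r^2 = (a^2 + b^2)/4 - 1 (real whenever a^2 + b^2 > 4).
# Orthogonality against the unit circle reads |center|^2 = r^2 + 1,
# which holds identically for this family of circles.

def circle_from_coeffs(a, b):
    cx, cy = -a / 2, -b / 2
    r2 = (a * a + b * b) / 4 - 1
    if r2 <= 0:
        raise ValueError("a^2 + b^2 must exceed 4 for a real circle")
    return (cx, cy), math.sqrt(r2)

a, b = 3.0, -2.5                 # arbitrary illustrative coefficients
(cx, cy), r = circle_from_coeffs(a, b)
lhs = cx * cx + cy * cy          # squared distance of center from origin
rhs = r * r + 1.0                # r^2 plus the unit circle's squared radius
print(abs(lhs - rhs) < 1e-12)    # True: the circle meets the unit circle at right angles
```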
Given two points u = (u1,u2) and v = (v1,v2) in the disk which do not lie on a diameter, we can solve for the circle of this form passing through both points, and obtain x 2 + y 2 + u 2 ( v 1 2 + v 2 2 + 1 ) − v 2 ( u 1 2 + u 2 2 + 1 ) u 1 v 2 − u 2 v 1 x + v 1 ( u 1 2 + u 2 2 + 1 ) − u 1 ( v 1 2 + v 2 2 + 1 ) u 1 v 2 − u 2 v 1 y + 1 = 0 . {\displaystyle {\begin{aligned}x^{2}+y^{2}&{}+{\frac {u_{2}(v_{1}^{2}+v_{2}^{2}+1)-v_{2}(u_{1}^{2}+u_{2}^{2}+1)}{u_{1}v_{2}-u_{2}v_{1}}}x\\[8pt]&{}+{\frac {v_{1}(u_{1}^{2}+u_{2}^{2}+1)-u_{1}(v_{1}^{2}+v_{2}^{2}+1)}{u_{1}v_{2}-u_{2}v_{1}}}y+1=0\,.\end{aligned}}} If the points u and v are points on the boundary of the disk not lying at the endpoints of a diameter, the above simplifies to x 2 + y 2 + 2 ( u 2 − v 2 ) u 1 v 2 − u 2 v 1 x + 2 ( v 1 − u 1 ) u 1 v 2 − u 2 v 1 y + 1 = 0 . {\displaystyle x^{2}+y^{2}+{\frac {2(u_{2}-v_{2})}{u_{1}v_{2}-u_{2}v_{1}}}x+{\frac {2(v_{1}-u_{1})}{u_{1}v_{2}-u_{2}v_{1}}}y+1=0\,.} == Angles == We may compute the angle between the circular arc whose endpoints (ideal points) are given by unit vectors u and v, and the arc whose endpoints are s and t, by means of a formula. Since the ideal points are the same in the Klein model and the Poincaré disk model, the formulas are identical for each model. If both models' lines are diameters, so that v = −u and t = −s, then we are merely finding the angle between two unit vectors, and the formula for the angle θ is cos ⁡ ( θ ) = u ⋅ s . {\displaystyle \cos(\theta )=u\cdot s\,.} If v = −u but not t = −s, the formula becomes, in terms of the wedge product ( ∧ {\displaystyle \wedge } ), cos 2 ⁡ ( θ ) = P 2 Q R , {\displaystyle \cos ^{2}(\theta )={\frac {P^{2}}{QR}},} where P = u ⋅ ( s − t ) , {\displaystyle P=u\cdot (s-t)\,,} Q = u ⋅ u , {\displaystyle Q=u\cdot u\,,} R = ( s − t ) ⋅ ( s − t ) − ( s ∧ t ) ⋅ ( s ∧ t ) . {\displaystyle R=(s-t)\cdot (s-t)-(s\wedge t)\cdot (s\wedge t)\,.} If neither chord is a diameter, the general formula is cos 2 ⁡ ( θ ) = P 2 Q R , {\displaystyle \cos ^{2}(\theta )={\frac {P^{2}}{QR}}\,,} where P = ( u − v ) ⋅ ( s − t ) − ( u ∧ v ) ⋅ ( s ∧ t ) , {\displaystyle P=(u-v)\cdot (s-t)-(u\wedge v)\cdot (s\wedge t)\,,} Q = ( u − v ) ⋅ ( u − v ) − ( u ∧ v ) ⋅ ( u ∧ v ) , {\displaystyle Q=(u-v)\cdot (u-v)-(u\wedge v)\cdot (u\wedge v)\,,} R = ( s − t ) ⋅ ( s − t ) − ( s ∧ t ) ⋅ ( s ∧ t ) . {\displaystyle R=(s-t)\cdot (s-t)-(s\wedge t)\cdot (s\wedge t)\,.} Using the Binet–Cauchy identity and the fact that these are unit vectors, we may rewrite the above expressions purely in terms of the dot product, as P = ( u − v ) ⋅ ( s − t ) + ( u ⋅ t ) ( v ⋅ s ) − ( u ⋅ s ) ( v ⋅ t ) . {\displaystyle P=(u-v)\cdot (s-t)+(u\cdot t)(v\cdot s)-(u\cdot s)(v\cdot t)\,.} Q = ( 1 − u ⋅ v ) 2 , {\displaystyle Q=(1-u\cdot v)^{2}\,,} R = ( 1 − s ⋅ t ) 2 . {\displaystyle R=(1-s\cdot t)^{2}\,.} == Cycles == In the Euclidean plane the generalized circles (curves of constant curvature) are lines and circles. On the sphere, they are great and small circles. In the hyperbolic plane, there are 4 distinct types of generalized circles or cycles: circles, horocycles, hypercycles, and geodesics (or "hyperbolic lines"). In the Poincaré disk model, all of these are represented by straight lines or circles. A Euclidean circle: that is completely inside the disk is a hyperbolic circle; that is inside the disk and tangent to the boundary is a horocycle; that intersects the boundary orthogonally is a hyperbolic line; and that intersects the boundary non-orthogonally is a hypercycle.
A Euclidean chord of the boundary circle: that goes through the center is a hyperbolic line; and that does not go through the center is a hypercycle. === Circles === A circle (the set of all points in a plane that are at a given distance from a given point, its center) is a circle completely inside the disk not touching or intersecting its boundary. The hyperbolic center of the circle in the model does not in general correspond to the Euclidean center of the circle, but they are on the same radius of the Poincaré disk. (The Euclidean center is always closer to the center of the disk than the hyperbolic center.) === Hypercycles === A hypercycle (the set of all points in a plane that are on one side and at a given distance from a given line, its axis) is a Euclidean circle arc or chord of the boundary circle that intersects the boundary circle at a positive but non-right angle. Its axis is the hyperbolic line that shares the same two ideal points. This is also known as an equidistant curve. === Horocycles === A horocycle (a curve whose normal or perpendicular geodesics are limiting parallels, all converging asymptotically to the same ideal point) is a circle inside the disk that is tangent to the boundary circle of the disk. The point where it touches the boundary circle is not part of the horocycle. It is an ideal point and is the hyperbolic center of the horocycle. It is also the point to which all the perpendicular geodesics converge. In the Poincaré disk model, the Euclidean points representing opposite "ends" of a horocycle converge to its center on the boundary circle, but in the hyperbolic plane every point of a horocycle is infinitely far from its center, and opposite ends of the horocycle are not connected. (Euclidean intuition can be misleading because the scale of the model increases to infinity at the boundary circle.) == Relation to other models of hyperbolic geometry == === Relation to the Klein disk model === The Beltrami–Klein model (or Klein disk model) and the Poincaré disk are both models that project the whole hyperbolic plane into a disk. The two models are related through a projection on or from the hemisphere model. The Klein disk model is an orthographic projection to the hemisphere model while the Poincaré disk model is a stereographic projection. An advantage of the Klein disk model is that lines in this model are Euclidean straight chords. A disadvantage is that the Klein disk model is not conformal (circles and angles are distorted). When the same lines are projected in both models onto one disk, both lines go through the same two ideal points (the ideal points remain in the same spot); also, the pole of the chord in the Klein disk model is the center of the circle that contains the arc in the Poincaré disk model. A point (x,y) in the Poincaré disk model maps to ( 2 x 1 + x 2 + y 2 , 2 y 1 + x 2 + y 2 ) {\textstyle \left({\frac {2x}{1+x^{2}+y^{2}}}\ ,\ {\frac {2y}{1+x^{2}+y^{2}}}\right)} in the Klein model. A point (x,y) in the Klein model maps to ( x 1 + 1 − x 2 − y 2 , y 1 + 1 − x 2 − y 2 ) {\textstyle \left({\frac {x}{1+{\sqrt {1-x^{2}-y^{2}}}}}\ ,\ \ {\frac {y}{1+{\sqrt {1-x^{2}-y^{2}}}}}\right)} in the Poincaré disk model. For ideal points, x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} and the formulas become x = x , y = y {\displaystyle x=x\ ,\ y=y} , so ideal points are fixed. If u {\displaystyle u} is a vector of norm less than one representing a point of the Poincaré disk model, then the corresponding point of the Klein disk model is given by: s = 2 u 1 + u ⋅ u .
{\displaystyle s={\frac {2u}{1+u\cdot u}}.} Conversely, from a vector s {\displaystyle s} of norm less than one representing a point of the Beltrami–Klein model, the corresponding point of the Poincaré disk model is given by: u = s 1 + 1 − s ⋅ s = ( 1 − 1 − s ⋅ s ) s s ⋅ s . {\displaystyle u={\frac {s}{1+{\sqrt {1-s\cdot s}}}}={\frac {\left(1-{\sqrt {1-s\cdot s}}\right)s}{s\cdot s}}.} === Relation to the Poincaré half-plane model === The Poincaré disk model and the Poincaré half-plane model are related by a Möbius transformation. If u ∈ D {\displaystyle u\in \mathbb {D} } is a complex number of norm less than one representing a point of the Poincaré disk model, then the corresponding point z ∈ H {\displaystyle z\in \mathbb {H} } of the upper half plane is given by the inverse of the Cayley transform C : H → D {\textstyle C:\mathbb {H} \to \mathbb {D} } : C − 1 ( u ) = z = i 1 + u 1 − u . {\displaystyle C^{-1}(u)=z=i{\frac {1+u}{1-u}}.} Under C − 1 {\displaystyle C^{-1}} , the points { 0 , 1 , − i , i } ∈ D {\displaystyle \{0,1,-i,i\}\in \mathbb {D} } are mapped to { i , ∞ , 1 , − 1 } ∈ H {\displaystyle \{i,\infty ,1,-1\}\in \mathbb {H} } . In terms of real coordinates, a point (x,y) in the disk model maps to ( 2 x x 2 + ( 1 − y ) 2 , 1 − x 2 − y 2 x 2 + ( 1 − y ) 2 ) {\textstyle \left({\frac {2x}{x^{2}+(1-y)^{2}}}\ ,\ {\frac {1-x^{2}-y^{2}}{x^{2}+(1-y)^{2}}}\right)\,} in the halfplane model. A point (x,y) in the halfplane model maps to ( 2 x x 2 + ( 1 + y ) 2 , x 2 + y 2 − 1 x 2 + ( 1 + y ) 2 ) {\textstyle \left({\frac {2x}{x^{2}+(1+y)^{2}}}\ ,\ {\frac {x^{2}+y^{2}-1}{x^{2}+(1+y)^{2}}}\right)\,} in the disk model. === Relation to the hyperboloid model === The Poincaré disk model, as well as the Beltrami–Klein model, are related to the hyperboloid model projectively. If we have a point [t, x1, ..., xn] on the upper sheet of the hyperboloid of the hyperboloid model, thereby defining a point in the hyperboloid model, we may project it onto the hyperplane t = 0 by intersecting it with a line drawn through [−1, 0, ..., 0]. The result is the corresponding point of the Poincaré disk model. For Cartesian coordinates (t, xi) on the hyperboloid and (yi) on the plane, the conversion formulas are: y i = x i 1 + t {\displaystyle y_{i}={\frac {x_{i}}{1+t}}} ( t , x i ) = ( 1 + ∑ y i 2 , 2 y i ) 1 − ∑ y i 2 . {\displaystyle (t,x_{i})={\frac {\left(1+\sum {y_{i}^{2}},\,2y_{i}\right)}{1-\sum {y_{i}^{2}}}}\,.} Compare the formulas for stereographic projection between a sphere and a plane. == Artistic realizations == M. C. Escher explored the concept of representing infinity on a two-dimensional plane. Discussions with Canadian mathematician Harold Scott MacDonald Coxeter around 1956 inspired Escher's interest in hyperbolic tessellations, which are regular tilings of the hyperbolic plane. Escher's wood engravings Circle Limit I–IV demonstrate this concept between 1958 and 1960, the final one being Circle Limit IV: Heaven and Hell in 1960. According to Bruno Ernst, the best of them is Circle Limit III. HyperRogue, a roguelike game, uses the hyperbolic plane for its world geometry, and also uses the Poincaré disk model. == See also == Hyperbolic geometry Beltrami–Klein model Poincaré half-plane model Poincaré metric Pseudosphere Hyperboloid model Inversive geometry Uniform tilings in hyperbolic plane == References == == Further reading == James W. Anderson, Hyperbolic Geometry, second edition, Springer, 2005. Eugenio Beltrami, Teoria fondamentale degli spazii di curvatura costante, Annali. 
di Mat., ser II 2 (1868), 232–255. Saul Stahl, The Poincaré Half-Plane, Jones and Bartlett, 1993. == External links == Media related to Poincaré disk models at Wikimedia Commons
Wikipedia/Poincaré_disk_model
In theoretical physics, type I string theory is one of five consistent supersymmetric string theories in ten dimensions. It is the only one whose strings are unoriented (both orientations of a string are equivalent) and the only one which perturbatively contains not only closed strings, but also open strings. The terminology of type I and type II was coined by John Henry Schwarz in 1982 to classify the three string theories known at the time. == Overview == The classic 1976 work of Ferdinando Gliozzi, Joël Scherk and David Olive paved the way to a systematic understanding of the rules behind string spectra in cases where only closed strings are present via modular invariance. It did not lead to similar progress for models with open strings, despite the fact that the original discussion was based on the type I string theory. As first proposed by Augusto Sagnotti in 1988, the type I string theory can be obtained as an orientifold of type IIB string theory, with 32 half-D9-branes added in the vacuum to cancel various anomalies giving it a gauge group of SO(32) via Chan–Paton factors. At low energies, type I string theory is described by the type I supergravity in ten dimensions coupled to the SO(32) supersymmetric Yang–Mills theory. The discovery in 1984 by Michael Green and John H. Schwarz that anomalies in type I string theory cancel sparked the first superstring revolution. However, a key property of these models, shown by A. Sagnotti in 1992, is that in general the Green–Schwarz mechanism takes a more general form, and involves several two forms in the cancellation mechanism. The relation between the type IIB string theory and the type I string theory has a large number of surprising consequences, both in ten and in lower dimensions, that were first displayed by the String Theory Group at the University of Rome Tor Vergata in the early 1990s. It opened the way to the construction of entire new classes of string spectra with or without supersymmetry. Joseph Polchinski's work on D-branes provided a geometrical interpretation for these results in terms of extended objects (D-brane, orientifold). In the 1990s it was first argued by Edward Witten that type I string theory with the string coupling constant g {\displaystyle g} is equivalent to the SO(32) heterotic string with the coupling 1 / g {\displaystyle 1/g} . This equivalence is known as S-duality. == Notes == == References == E. Witten, "String theory dynamics in various dimensions", Nucl. Phys. B 443 (1995) 85. arXiv:hep-th/9503124. J. Polchinski, S. Chaudhuri and C.V. Johnson, "Notes on D-Branes", arXiv:hep-th/9602052. C. Angelantonj and A. Sagnotti, "Open strings", Phys. Rep. 1 [(Erratum-ibid.) 339] arXiv:hep-th/0204089.
Wikipedia/Type_I_string_theory
The mathematical formulations of quantum mechanics are those mathematical formalisms that permit a rigorous description of quantum mechanics. This mathematical formalism uses mainly a part of functional analysis, especially Hilbert spaces, which are a kind of linear space. Such are distinguished from mathematical formalisms for physics theories developed prior to the early 1900s by the use of abstract mathematical structures, such as infinite-dimensional Hilbert spaces (L2 space mainly), and operators on these spaces. In brief, values of physical observables such as energy and momentum were no longer considered as values of functions on phase space, but as eigenvalues; more precisely as spectral values of linear operators in Hilbert space. These formulations of quantum mechanics continue to be used today. At the heart of the description are ideas of quantum state and quantum observables, which are radically different from those used in previous models of physical reality. While the mathematics permits calculation of many quantities that can be measured experimentally, there is a definite theoretical limit to values that can be simultaneously measured. This limitation was first elucidated by Heisenberg through a thought experiment, and is represented mathematically in the new formalism by the non-commutativity of operators representing quantum observables. Prior to the development of quantum mechanics as a separate theory, the mathematics used in physics consisted mainly of formal mathematical analysis, beginning with calculus, and increasing in complexity up to differential geometry and partial differential equations. Probability theory was used in statistical mechanics. Geometric intuition played a strong role in the first two and, accordingly, theories of relativity were formulated entirely in terms of differential geometric concepts. The phenomenology of quantum physics arose roughly between 1895 and 1915, and for the 10 to 15 years before the development of quantum mechanics (around 1925) physicists continued to think of quantum theory within the confines of what is now called classical physics, and in particular within the same mathematical structures. The most sophisticated example of this is the Sommerfeld–Wilson–Ishiwara quantization rule, which was formulated entirely on the classical phase space. == History of the formalism == === The "old quantum theory" and the need for new mathematics === In the 1890s, Planck was able to derive the blackbody spectrum, which was later used to avoid the classical ultraviolet catastrophe by making the unorthodox assumption that, in the interaction of electromagnetic radiation with matter, energy could only be exchanged in discrete units which he called quanta. Planck postulated a direct proportionality between the frequency of radiation and the quantum of energy at that frequency. The proportionality constant, h, is now called the Planck constant in his honor. In 1905, Einstein explained certain features of the photoelectric effect by assuming that Planck's energy quanta were actual particles, which were later dubbed photons. All of these developments were phenomenological and challenged the theoretical physics of the time. Bohr and Sommerfeld went on to modify classical mechanics in an attempt to deduce the Bohr model from first principles. They proposed that, of all closed classical orbits traced by a mechanical system in its phase space, only the ones that enclosed an area which was a multiple of the Planck constant were actually allowed. 
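For instance (a standard textbook check, added here only as illustration), applying this rule to a one-dimensional harmonic oscillator of mass m and frequency ω: the orbit of energy E is an ellipse in phase space whose enclosed area is 2πE/ω, so requiring the area to be a multiple of h yields the old-quantum-theory energy levels:

```latex
% Orbit of energy E: p^2/(2m) + (1/2) m \omega^2 q^2 = E, an ellipse with
% semi-axes \sqrt{2mE} and \sqrt{2E/(m\omega^2)}; its enclosed area is \pi a b.
\oint p\,\mathrm{d}q
  \;=\; \pi\,\sqrt{2mE}\,\sqrt{\frac{2E}{m\omega^{2}}}
  \;=\; \frac{2\pi E}{\omega}
  \;\overset{!}{=}\; nh
\qquad\Longrightarrow\qquad
E_{n} \;=\; n\hbar\omega .
```

This reproduces Planck's quanta, though without the zero-point energy ħω/2 that only the full quantum theory supplies.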
The most sophisticated version of this formalism was the so-called Sommerfeld–Wilson–Ishiwara quantization. Although the Bohr model of the hydrogen atom could be explained in this way, the spectrum of the helium atom (classically an unsolvable 3-body problem) could not be predicted. The mathematical status of quantum theory remained uncertain for some time. In 1923, de Broglie proposed that wave–particle duality applied not only to photons but to electrons and every other physical system. The situation changed rapidly in the years 1925–1930, when working mathematical foundations were found through the groundbreaking work of Erwin Schrödinger, Werner Heisenberg, Max Born, Pascual Jordan, and the foundational work of John von Neumann, Hermann Weyl and Paul Dirac, and it became possible to unify several different approaches in terms of a fresh set of ideas. The physical interpretation of the theory was also clarified in these years after Werner Heisenberg discovered the uncertainty relations and Niels Bohr introduced the idea of complementarity. === The "new quantum theory" === Werner Heisenberg's matrix mechanics was the first successful attempt at replicating the observed quantization of atomic spectra. Later in the same year, Schrödinger created his wave mechanics. Schrödinger's formalism was considered easier to understand, visualize and calculate as it led to differential equations, which physicists were already familiar with solving. Within a year, it was shown that the two theories were equivalent. Schrödinger himself initially did not understand the fundamental probabilistic nature of quantum mechanics, as he thought that the absolute square of the wave function of an electron should be interpreted as the charge density of an object smeared out over an extended, possibly infinite, volume of space. It was Max Born who introduced the interpretation of the absolute square of the wave function as the probability distribution of the position of a pointlike object. Born's idea was soon taken over by Niels Bohr in Copenhagen who then became the "father" of the Copenhagen interpretation of quantum mechanics. Schrödinger's wave function can be seen to be closely related to the classical Hamilton–Jacobi equation. The correspondence to classical mechanics was even more explicit, although somewhat more formal, in Heisenberg's matrix mechanics. In his PhD thesis project, Paul Dirac discovered that the equation for the operators in the Heisenberg representation, as it is now called, closely translates to classical equations for the dynamics of certain quantities in the Hamiltonian formalism of classical mechanics, when one expresses them through Poisson brackets, a procedure now known as canonical quantization. Already before Schrödinger, the young postdoctoral fellow Werner Heisenberg invented his matrix mechanics, which was the first correct quantum mechanics – the essential breakthrough. Heisenberg's matrix mechanics formulation was based on algebras of infinite matrices, a very radical formulation in light of the mathematics of classical physics, although he started from the index-terminology of the experimentalists of that time, not even aware that his "index-schemes" were matrices, as Born soon pointed out to him. In fact, in these early years, linear algebra was not generally popular with physicists in its present form. 
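The point about infinite matrices can be made concrete with a finite truncation of the harmonic-oscillator matrices in modern notation (a sketch with ħ = m = ω = 1; the truncation size N is an arbitrary choice):

```python
import numpy as np

# Truncated matrix mechanics for the harmonic oscillator.
# The annihilation operator has matrix elements a[n, n+1] = sqrt(n+1);
# position and momentum are x = (a + a†)/sqrt(2), p = i(a† - a)/sqrt(2).
N = 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
x = (a + a.T) / np.sqrt(2)
p = 1j * (a.T - a) / np.sqrt(2)

comm = x @ p - p @ x          # equals i*identity only for infinite matrices
print(np.round(comm.diagonal(), 6))
# prints i in every slot except the last (which is -(N-1)*i): the canonical
# commutation relation [x, p] = i can hold exactly only for infinite matrices.
```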
Although Schrödinger himself after a year proved the equivalence of his wave-mechanics and Heisenberg's matrix mechanics, the reconciliation of the two approaches and their modern abstraction as motions in Hilbert space is generally attributed to Paul Dirac, who wrote a lucid account in his 1930 classic The Principles of Quantum Mechanics. He is the third, and possibly most important, pillar of that field (he soon was the only one to have discovered a relativistic generalization of the theory). In his above-mentioned account, he introduced the bra–ket notation, together with an abstract formulation in terms of the Hilbert space used in functional analysis; he showed that Schrödinger's and Heisenberg's approaches were two different representations of the same theory, and found a third, most general one, which represented the dynamics of the system. His work was particularly fruitful in many types of generalizations of the field. The first complete mathematical formulation of this approach, known as the Dirac–von Neumann axioms, is generally credited to John von Neumann's 1932 book Mathematical Foundations of Quantum Mechanics, although Hermann Weyl had already referred to Hilbert spaces (which he called unitary spaces) in his 1927 classic paper and 1928 book. It was developed in parallel with a new approach to the mathematical spectral theory based on linear operators rather than the quadratic forms that were David Hilbert's approach a generation earlier. Though theories of quantum mechanics continue to evolve to this day, there is a basic framework for the mathematical formulation of quantum mechanics which underlies most approaches and can be traced back to the mathematical work of John von Neumann. In other words, discussions about interpretation of the theory, and extensions to it, are now mostly conducted on the basis of shared assumptions about the mathematical foundations. === Later developments === The application of the new quantum theory to electromagnetism resulted in quantum field theory, which was developed starting around 1930. Quantum field theory has driven the development of more sophisticated formulations of quantum mechanics, of which the ones presented here are simple special cases. Path integral formulation Phase-space formulation of quantum mechanics & geometric quantization quantum field theory in curved spacetime axiomatic, algebraic and constructive quantum field theory C*-algebra formalism Generalized statistical model of quantum mechanics A related topic is the relationship to classical mechanics. Any new physical theory is supposed to reduce to successful old theories in some approximation. For quantum mechanics, this translates into the need to study the so-called classical limit of quantum mechanics. Also, as Bohr emphasized, human cognitive abilities and language are inextricably linked to the classical realm, and so classical descriptions are intuitively more accessible than quantum ones. In particular, quantization, namely the construction of a quantum theory whose classical limit is a given and known classical theory, becomes an important area of quantum physics in itself. Finally, some of the originators of quantum theory (notably Einstein and Schrödinger) were unhappy with what they thought were the philosophical implications of quantum mechanics. In particular, Einstein took the position that quantum mechanics must be incomplete, which motivated research into so-called hidden-variable theories. 
The issue of hidden variables has become in part an experimental issue with the help of quantum optics. == Postulates of quantum mechanics == A physical system is generally described by three basic ingredients: states; observables; and dynamics (or law of time evolution) or, more generally, a group of physical symmetries. A classical description can be given in a fairly direct way by a phase space model of mechanics: states are points in a phase space formulated as a symplectic manifold, observables are real-valued functions on it, time evolution is given by a one-parameter group of symplectic transformations of the phase space, and physical symmetries are realized by symplectic transformations. A quantum description normally consists of a Hilbert space of states, observables are self-adjoint operators on the space of states, time evolution is given by a one-parameter group of unitary transformations on the Hilbert space of states, and physical symmetries are realized by unitary transformations. (It is possible to map this Hilbert-space picture invertibly to a phase space formulation; see below.) The following summary of the mathematical framework of quantum mechanics can be partly traced back to the Dirac–von Neumann axioms. === Description of the state of a system === Each isolated physical system is associated with a (topologically) separable complex Hilbert space H with inner product ⟨φ|ψ⟩. Separability is a mathematically convenient hypothesis, with the physical interpretation that the state is uniquely determined by countably many observations. Quantum states can be identified with equivalence classes in H, where two vectors (of length 1) represent the same state if they differ only by a phase factor: | ψ k ⟩ ∼ | ψ l ⟩ ⇔ | ψ k ⟩ = e i α | ψ l ⟩ , α ∈ R . {\displaystyle |\psi _{k}\rangle \sim |\psi _{l}\rangle \;\;\Leftrightarrow \;\;|\psi _{k}\rangle =e^{i\alpha }|\psi _{l}\rangle ,\quad \ \alpha \in \mathbb {R} .} As such, a quantum state is an element of a projective Hilbert space, conventionally termed a "ray". Accompanying Postulate I is the composite system postulate: the Hilbert space of a composite system is the Hilbert space tensor product of the state spaces associated with the component subsystems. In the presence of quantum entanglement, the quantum state of the composite system cannot be factored as a tensor product of states of its local constituents; instead, it is expressed as a sum, or superposition, of tensor products of states of component subsystems. A subsystem in an entangled composite system generally cannot be described by a state vector (or a ray), but instead is described by a density operator; such a quantum state is known as a mixed state. The density operator of a mixed state is a trace class, nonnegative (positive semi-definite) self-adjoint operator ρ normalized to be of trace 1. In turn, any density operator of a mixed state can be represented as a subsystem of a larger composite system in a pure state (see purification theorem). In the absence of quantum entanglement, the quantum state of the composite system is called a separable state. The density matrix of a bipartite system in a separable state can be expressed as ρ = ∑ k p k ρ 1 k ⊗ ρ 2 k {\displaystyle \rho =\sum _{k}p_{k}\rho _{1}^{k}\otimes \rho _{2}^{k}} , where ∑ k p k = 1 {\displaystyle \;\sum _{k}p_{k}=1} . If there is only a single non-zero p k {\displaystyle p_{k}} , then the state can be expressed just as ρ = ρ 1 ⊗ ρ 2 , {\textstyle \rho =\rho _{1}\otimes \rho _{2},} and is called simply separable, or a product state.
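A minimal finite-dimensional sketch of these statements for two qubits (standard textbook states, used here purely as illustration):

```python
import numpy as np

# Two-qubit illustration of product vs. entangled states; the
# finite-dimensional space C^2 (x) C^2 stands in for a general H.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

def reduced_purity(psi):
    """Partial trace over the second qubit, then tr(rho_A^2)."""
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    rho_A = np.trace(rho, axis1=1, axis2=3)
    return np.trace(rho_A @ rho_A).real

product = np.kron(ket0, ket1)                                    # |0>|1>
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)  # entangled

print(reduced_purity(product))  # 1.0: the subsystem is still a pure state (a ray)
print(reduced_purity(bell))     # 0.5: the subsystem is the mixed state I/2
```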
=== Measurement on a system === ==== Description of physical quantities ==== Physical observables are represented by Hermitian (self-adjoint) operators on H. Since these operators are Hermitian, their eigenvalues are always real, and represent the possible outcomes/results from measuring the corresponding observable. If the spectrum of the observable is discrete, then the possible results are quantized. ==== Results of measurement ==== By spectral theory, we can associate a probability measure to the values of A in any state ψ. We can also show that the possible values of the observable A in any state must belong to the spectrum of A. The expectation value (in the sense of probability theory) of the observable A for the system in state represented by the unit vector ψ ∈ H is ⟨ ψ | A | ψ ⟩ {\displaystyle \langle \psi |A|\psi \rangle } . If we represent the state ψ in the basis formed by the eigenvectors of A, then the square of the modulus of the component attached to a given eigenvector is the probability of observing its corresponding eigenvalue. For a mixed state ρ, the expected value of A in the state ρ is tr ⁡ ( A ρ ) {\displaystyle \operatorname {tr} (A\rho )} , and the probability of obtaining an eigenvalue a n {\displaystyle a_{n}} in a discrete, nondegenerate spectrum of the corresponding observable A {\displaystyle A} is given by P ( a n ) = tr ⁡ ( | a n ⟩ ⟨ a n | ρ ) = ⟨ a n | ρ | a n ⟩ {\displaystyle \mathbb {P} (a_{n})=\operatorname {tr} (|a_{n}\rangle \langle a_{n}|\rho )=\langle a_{n}|\rho |a_{n}\rangle } . If the eigenvalue a n {\displaystyle a_{n}} has degenerate, orthonormal eigenvectors { | a n 1 ⟩ , | a n 2 ⟩ , … , | a n m ⟩ } {\displaystyle \{|a_{n1}\rangle ,|a_{n2}\rangle ,\dots ,|a_{nm}\rangle \}} , then the projection operator onto the eigensubspace can be defined as the identity operator in the eigensubspace: P n = | a n 1 ⟩ ⟨ a n 1 | + | a n 2 ⟩ ⟨ a n 2 | + ⋯ + | a n m ⟩ ⟨ a n m | , {\displaystyle P_{n}=|a_{n1}\rangle \langle a_{n1}|+|a_{n2}\rangle \langle a_{n2}|+\dots +|a_{nm}\rangle \langle a_{nm}|,} and then P ( a n ) = tr ⁡ ( P n ρ ) {\displaystyle \mathbb {P} (a_{n})=\operatorname {tr} (P_{n}\rho )} . Postulates II.a and II.b are collectively known as the Born rule of quantum mechanics. ==== Effect of measurement on the state ==== When a measurement is performed, only one result is obtained (according to some interpretations of quantum mechanics). This is modeled mathematically as the processing of additional information from the measurement, confining the probabilities of an immediate second measurement of the same observable. In the case of a discrete, non-degenerate spectrum, two sequential measurements of the same observable will always give the same value assuming the second immediately follows the first. Therefore, the state vector must change as a result of measurement, and collapse onto the eigensubspace associated with the eigenvalue measured. For a mixed state ρ, after obtaining an eigenvalue a n {\displaystyle a_{n}} in a discrete, nondegenerate spectrum of the corresponding observable A {\displaystyle A} , the updated state is given by ρ ′ = P n ρ P n † tr ⁡ ( P n ρ P n † ) {\textstyle \rho '={\frac {P_{n}\rho P_{n}^{\dagger }}{\operatorname {tr} (P_{n}\rho P_{n}^{\dagger })}}} .
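A sketch of the Born rule and the state-update rule for a nondegenerate observable on C²; the Hermitian matrix A below is an arbitrary illustrative choice:

```python
import numpy as np

# Born rule and projective state update for a single qubit.
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])                 # eigenvalues +sqrt(2), -sqrt(2)
evals, evecs = np.linalg.eigh(A)

psi = np.array([1.0, 0.0], dtype=complex)   # prepared pure state
rho = np.outer(psi, psi.conj())

expectation = 0.0
for lam, v in zip(evals, evecs.T):
    P = np.outer(v, v.conj())               # projector |a_n><a_n|
    prob = np.trace(P @ rho).real           # Born rule: P(a_n) = tr(P_n rho)
    rho_after = P @ rho @ P / prob          # state-update (collapse) rule, trace 1
    expectation += lam * prob
    print(f"outcome {lam:+.4f}  probability {prob:.4f}")

# Consistency check: sum_n a_n P(a_n) equals <psi|A|psi> = tr(A rho).
print(np.isclose(expectation, np.vdot(psi, A @ psi).real))   # True
```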
If the eigenvalue a n {\displaystyle a_{n}} has degenerate, orthonormal eigenvectors { | a n 1 ⟩ , | a n 2 ⟩ , … , | a n m ⟩ } {\displaystyle \{|a_{n1}\rangle ,|a_{n2}\rangle ,\dots ,|a_{nm}\rangle \}} , then the projection operator onto the eigensubspace is P n = | a n 1 ⟩ ⟨ a n 1 | + | a n 2 ⟩ ⟨ a n 2 | + ⋯ + | a n m ⟩ ⟨ a n m | {\displaystyle P_{n}=|a_{n1}\rangle \langle a_{n1}|+|a_{n2}\rangle \langle a_{n2}|+\dots +|a_{nm}\rangle \langle a_{nm}|} . Postulate II.c is sometimes called the "state update rule" or "collapse rule"; together with the Born rule (Postulates II.a and II.b), it forms a complete representation of measurements, and the three are sometimes collectively called the measurement postulate(s). Note that the projection-valued measures (PVM) described in the measurement postulate(s) can be generalized to positive operator-valued measures (POVM), which are the most general kind of measurement in quantum mechanics. A POVM can be understood as the effect on a component subsystem when a PVM is performed on a larger, composite system (see Naimark's dilation theorem). === Time evolution of a system === The Schrödinger equation describes how a state vector evolves in time. Depending on the text, it may be derived from some other assumptions, motivated on heuristic grounds, or asserted as a postulate. Derivations include using the de Broglie relation between wavelength and momentum or path integrals. Equivalently, the time evolution postulate can be stated as: For a closed system in a mixed state ρ, the time evolution is ρ ( t ) = U ( t ; t 0 ) ρ ( t 0 ) U † ( t ; t 0 ) {\displaystyle \rho (t)=U(t;t_{0})\rho (t_{0})U^{\dagger }(t;t_{0})} . The evolution of an open quantum system can be described by quantum operations (in an operator sum formalism) and quantum instruments, and generally does not have to be unitary. === Other implications of the postulates === Physical symmetries act on the Hilbert space of quantum states unitarily or antiunitarily due to Wigner's theorem (supersymmetry is another matter entirely). Density operators are those that are in the closure of the convex hull of the one-dimensional orthogonal projectors. Conversely, one-dimensional orthogonal projectors are extreme points of the set of density operators. Physicists also call one-dimensional orthogonal projectors pure states and other density operators mixed states. One can in this formalism state Heisenberg's uncertainty principle and prove it as a theorem, although the exact historical sequence of events, concerning who derived what and under which framework, is the subject of historical investigations outside the scope of this article. Furthermore, to the postulates of quantum mechanics one should also add basic statements on the properties of spin and Pauli's exclusion principle, see below. === Spin === In addition to their other properties, all particles possess a quantity called spin, an intrinsic angular momentum. Despite the name, particles do not literally spin around an axis, and quantum mechanical spin has no correspondence in classical physics. In the position representation, a spinless wavefunction has position r and time t as continuous variables, ψ = ψ(r, t). For spin wavefunctions the spin is an additional discrete variable: ψ = ψ(r, t, σ), where σ takes the values σ = − S ℏ , − ( S − 1 ) ℏ , … , 0 , … , + ( S − 1 ) ℏ , + S ℏ .
{\displaystyle \sigma =-S\hbar ,-(S-1)\hbar ,\dots ,0,\dots ,+(S-1)\hbar ,+S\hbar \,.} That is, the state of a single particle with spin S is represented by a (2S + 1)-component spinor of complex-valued wave functions. Two classes of particles with very different behaviour are bosons, which have integer spin (S = 0, 1, 2, ...), and fermions, which possess half-integer spin (S = 1⁄2, 3⁄2, 5⁄2, ...). === Symmetrization postulate === In quantum mechanics, two particles can be distinguished from one another using two methods. By performing a measurement of intrinsic properties of each particle, particles of different types can be distinguished. Otherwise, if the particles are identical, their trajectories can be tracked, which distinguishes the particles based on the locality of each particle. While the second method is permitted in classical mechanics (i.e. all classical particles are treated as distinguishable), the same cannot be said for quantum mechanical particles, since the process is infeasible due to the fundamental uncertainty principles that govern small scales. Hence the requirement of indistinguishability of quantum particles is presented by the symmetrization postulate. The postulate is applicable to a system of bosons or fermions, for example, in predicting the spectra of the helium atom. The postulate, explained in the following sections, can be stated as follows: the states of a system of N identical particles are necessarily either totally symmetric (bosons) or totally antisymmetric (fermions) under the exchange of any two particles. Exceptions can occur when the particles are constrained to two spatial dimensions, where the existence of particles known as anyons is possible; these are said to have a continuum of statistical properties spanning the range between fermions and bosons. The connection between the behaviour of identical particles and their spin is given by the spin–statistics theorem. It can be shown that two particles localized in different regions of space can still be represented using a symmetrized/antisymmetrized wavefunction and that independent treatment of these wavefunctions gives the same result. Hence the symmetrization postulate is applicable in the general case of a system of identical particles. ==== Exchange Degeneracy ==== In a system of identical particles, let P be known as the exchange operator that acts on the wavefunction as: P ( ⋯ | ψ ⟩ | ϕ ⟩ ⋯ ) ≡ ⋯ | ϕ ⟩ | ψ ⟩ ⋯ {\displaystyle P{\bigg (}\cdots |\psi \rangle |\phi \rangle \cdots {\bigg )}\equiv \cdots |\phi \rangle |\psi \rangle \cdots } If a physical system of identical particles is given, the wavefunctions of the particles can be well known from observation, but they cannot be labelled and assigned to individual particles. Thus, the above exchanged wavefunction represents the same physical state as the original state, which implies that the wavefunction is not unique. This is known as exchange degeneracy. More generally, consider a linear combination of such states, | Ψ ⟩ {\displaystyle |\Psi \rangle } . For the best representation of the physical system, we expect this to be an eigenvector of P, since the exchange operator is not expected to give completely different vectors in projective Hilbert space. Since P 2 = 1 {\displaystyle P^{2}=1} , the possible eigenvalues of P are +1 and −1.
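For two identical two-state particles the exchange operator is just the SWAP matrix on C² ⊗ C², so the statements above can be verified directly (a minimal sketch; the two-qubit space stands in for a general identical-particle Hilbert space):

```python
import numpy as np

# Exchange operator P for two two-state particles: the SWAP matrix.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

print(np.allclose(SWAP @ SWAP, np.eye(4)))   # True: P^2 = 1
print(sorted(np.linalg.eigvalsh(SWAP)))      # [-1, 1, 1, 1]: eigenvalues +/-1

# Symmetrizer and antisymmetrizer project onto the +/-1 eigenspaces:
S = (np.eye(4) + SWAP) / 2    # symmetric (bosonic) subspace
Aop = (np.eye(4) - SWAP) / 2  # antisymmetric (fermionic) subspace
print(int(round(np.trace(S))), int(round(np.trace(Aop))))   # dimensions 3 and 1
```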
For a system of identical particles, the states | Ψ ⟩ {\displaystyle |\Psi \rangle } are represented as symmetric (eigenvalue +1) or antisymmetric (eigenvalue −1) as follows: P | ⋯ n i , n j ⋯ ; S ⟩ = + | ⋯ n i , n j ⋯ ; S ⟩ {\displaystyle P|\cdots n_{i},n_{j}\cdots ;S\rangle =+|\cdots n_{i},n_{j}\cdots ;S\rangle } P | ⋯ n i , n j ⋯ ; A ⟩ = − | ⋯ n i , n j ⋯ ; A ⟩ {\displaystyle P|\cdots n_{i},n_{j}\cdots ;A\rangle =-|\cdots n_{i},n_{j}\cdots ;A\rangle } The explicit symmetric/antisymmetric form of | Ψ ⟩ {\displaystyle |\Psi \rangle } is constructed using a symmetrizer or antisymmetrizer operator. Particles that form symmetric states are called bosons and those that form antisymmetric states are called fermions. The relation of spin to this classification is given by the spin–statistics theorem, which shows that integer-spin particles are bosons and half-integer-spin particles are fermions. ==== Pauli exclusion principle ==== The property of spin relates to another basic property concerning systems of N identical particles: the Pauli exclusion principle, which is a consequence of the following permutation behaviour of an N-particle wave function; again in the position representation one must postulate that for the transposition of any two of the N particles one always should have ψ ( … , x i , … , x j , … ) = ( − 1 ) 2 S ψ ( … , x j , … , x i , … ) , i.e., on transposition of the arguments of any two particles the wavefunction should reproduce itself, apart from a prefactor (−1)^{2S} which is +1 for bosons, but −1 for fermions. Electrons are fermions with S = 1/2; quanta of light are bosons with S = 1. Due to the form of the anti-symmetrized wavefunction: Ψ n 1 ⋯ n N ( A ) ( x 1 , … , x N ) = 1 N ! | ψ n 1 ( x 1 ) ψ n 1 ( x 2 ) ⋯ ψ n 1 ( x N ) ψ n 2 ( x 1 ) ψ n 2 ( x 2 ) ⋯ ψ n 2 ( x N ) ⋮ ⋮ ⋱ ⋮ ψ n N ( x 1 ) ψ n N ( x 2 ) ⋯ ψ n N ( x N ) | {\displaystyle \Psi _{n_{1}\cdots n_{N}}^{(A)}(x_{1},\ldots ,x_{N})={\frac {1}{\sqrt {N!}}}\left|{\begin{matrix}\psi _{n_{1}}(x_{1})&\psi _{n_{1}}(x_{2})&\cdots &\psi _{n_{1}}(x_{N})\\\psi _{n_{2}}(x_{1})&\psi _{n_{2}}(x_{2})&\cdots &\psi _{n_{2}}(x_{N})\\\vdots &\vdots &\ddots &\vdots \\\psi _{n_{N}}(x_{1})&\psi _{n_{N}}(x_{2})&\cdots &\psi _{n_{N}}(x_{N})\\\end{matrix}}\right|} if the wavefunction of each particle is completely determined by a set of quantum numbers, then two fermions cannot share the same set of quantum numbers since the resulting function cannot be anti-symmetrized (i.e. the above formula gives zero). The same cannot be said of bosons, since their wavefunction is: | x 1 x 2 ⋯ x N ; S ⟩ = ∏ j n j ! N ! ∑ p | x p ( 1 ) ⟩ | x p ( 2 ) ⟩ ⋯ | x p ( N ) ⟩ {\displaystyle |x_{1}x_{2}\cdots x_{N};S\rangle ={\frac {\prod _{j}n_{j}!}{N!}}\sum _{p}\left|x_{p(1)}\right\rangle \left|x_{p(2)}\right\rangle \cdots \left|x_{p(N)}\right\rangle } where n j {\displaystyle n_{j}} is the number of particles with the same wavefunction. ==== Exceptions for symmetrization postulate ==== In nonrelativistic quantum mechanics all particles are either bosons or fermions; in relativistic quantum theories "supersymmetric" theories also exist, where a particle is a linear combination of a bosonic and a fermionic part. Only in dimension d = 2 can one construct entities where (−1)^{2S} is replaced by an arbitrary complex number with magnitude 1, called anyons. In relativistic quantum mechanics, the spin–statistics theorem shows that, under a certain set of assumptions, integer-spin particles are classified as bosons and half-integer-spin particles as fermions. Anyons, which form neither symmetric nor antisymmetric states, are said to have fractional spin.
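The determinant argument can be seen numerically; in the sketch below the single-particle orbitals are arbitrary placeholders chosen only so that the code runs, not physical wavefunctions:

```python
import numpy as np

# Antisymmetrized (Slater-determinant) wavefunction for N = 2 fermions.
def psi(n, x):
    return np.exp(-((x - n) ** 2))     # placeholder orbital labelled n

def slater(n1, n2, x1, x2):
    """(1/sqrt(2!)) det [[psi_n1(x1), psi_n1(x2)], [psi_n2(x1), psi_n2(x2)]]"""
    m = np.array([[psi(n1, x1), psi(n1, x2)],
                  [psi(n2, x1), psi(n2, x2)]])
    return np.linalg.det(m) / np.sqrt(2.0)

x1, x2 = 0.3, 1.1
print(slater(0, 1, x1, x2))    # generically nonzero
print(slater(0, 1, x2, x1))    # sign flips under exchange of the arguments
print(slater(0, 0, x1, x2))    # 0.0: equal quantum numbers give a vanishing
                               # determinant -- the Pauli exclusion principle
```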
Although spin and the Pauli principle can only be derived from relativistic generalizations of quantum mechanics, the properties mentioned in the last two paragraphs belong to the basic postulates already in the non-relativistic limit. In particular, many important properties in natural science, e.g. the periodic system of chemistry, are consequences of the two properties. == Mathematical structure of quantum mechanics == === Pictures of dynamics === In summary: in the Schrödinger picture the states carry the time dependence and the observables are constant; in the Heisenberg picture the observables carry the time dependence and the states are constant; and in the interaction (Dirac) picture the time dependence is split between the two. The three pictures are related by unitary transformations and give the same expectation values. === Representations === The original form of the Schrödinger equation depends on choosing a particular representation of Heisenberg's canonical commutation relations. The Stone–von Neumann theorem dictates that all irreducible representations of the finite-dimensional Heisenberg commutation relations are unitarily equivalent. A systematic understanding of its consequences has led to the phase space formulation of quantum mechanics, which works in full phase space instead of Hilbert space, and thus with a more intuitive link to the classical limit thereof. This picture also simplifies considerations of quantization, the deformation extension from classical to quantum mechanics. The quantum harmonic oscillator is an exactly solvable system where the different representations are easily compared. There, apart from the Heisenberg, or Schrödinger (position or momentum), or phase-space representations, one also encounters the Fock (number) representation and the Segal–Bargmann (Fock-space or coherent state) representation (named after Irving Segal and Valentine Bargmann). All four are unitarily equivalent. === Time as an operator === The framework presented so far singles out time as the parameter that everything depends on. It is possible to formulate mechanics in such a way that time becomes itself an observable associated with a self-adjoint operator. At the classical level, it is possible to arbitrarily parameterize the trajectories of particles in terms of an unphysical parameter s, and in that case the time t becomes an additional generalized coordinate of the physical system. At the quantum level, translations in s would be generated by a "Hamiltonian" H − E, where E is the energy operator and H is the "ordinary" Hamiltonian. However, since s is an unphysical parameter, physical states must be left invariant by "s-evolution", and so the physical state space is the kernel of H − E (this requires the use of a rigged Hilbert space and a renormalization of the norm). This is related to the quantization of constrained systems and quantization of gauge theories. It is also possible to formulate a quantum theory of "events" where time becomes an observable. == Problem of measurement == The picture given in the preceding paragraphs is sufficient for description of a completely isolated system. However, it fails to account for one of the main differences between quantum mechanics and classical mechanics, that is, the effects of measurement.
The von Neumann description of quantum measurement of an observable A, when the system is prepared in a pure state ψ, is the following (note, however, that von Neumann's description dates back to the 1930s and is based on experiments as performed during that time – more specifically the Compton–Simon experiment; it is not applicable to most present-day measurements within the quantum domain): Let A have spectral resolution A = ∫ λ d E A ⁡ ( λ ) , {\displaystyle A=\int \lambda \,d\operatorname {E} _{A}(\lambda ),} where EA is the resolution of the identity (also called projection-valued measure) associated with A. Then the probability of the measurement outcome lying in an interval B of R is |EA(B)ψ|². In other words, the probability is obtained by integrating the characteristic function of B against the countably additive measure ⟨ ψ ∣ E A ⁡ ψ ⟩ . {\displaystyle \langle \psi \mid \operatorname {E} _{A}\psi \rangle .} If the measured value is contained in B, then immediately after the measurement, the system will be in the (generally non-normalized) state EA(B)ψ. If the measured value does not lie in B, replace B by its complement for the above state. For example, suppose the state space is the n-dimensional complex Hilbert space Cn and A is a Hermitian matrix with eigenvalues λi, with corresponding eigenvectors ψi. The projection-valued measure associated with A, EA, is then E A ⁡ ( B i ) = | ψ i ⟩ ⟨ ψ i | , {\displaystyle \operatorname {E} _{A}(B_{i})=|\psi _{i}\rangle \langle \psi _{i}|,} where Bi is a Borel set containing only the single eigenvalue λi. If the system is prepared in state | ψ ⟩ {\displaystyle |\psi \rangle } , then the probability of a measurement returning the value λi can be calculated by integrating the spectral measure ⟨ ψ ∣ E A ⁡ ψ ⟩ {\displaystyle \langle \psi \mid \operatorname {E} _{A}\psi \rangle } over Bi. This trivially gives ⟨ ψ | ψ i ⟩ ⟨ ψ i ∣ ψ ⟩ = | ⟨ ψ ∣ ψ i ⟩ | 2 . {\displaystyle \langle \psi |\psi _{i}\rangle \langle \psi _{i}\mid \psi \rangle =|\langle \psi \mid \psi _{i}\rangle |^{2}.} The characteristic property of the von Neumann measurement scheme is that repeating the same measurement will give the same results. This is also called the projection postulate. A more general formulation replaces the projection-valued measure with a positive-operator valued measure (POVM). To illustrate, take again the finite-dimensional case. Here we would replace the rank-1 projections | ψ i ⟩ ⟨ ψ i | {\displaystyle |\psi _{i}\rangle \langle \psi _{i}|} by a finite set of positive operators F i F i ∗ {\displaystyle F_{i}F_{i}^{*}} whose sum is still the identity operator as before (the resolution of identity). Just as a set of possible outcomes {λ1 ... λn} is associated to a projection-valued measure, the same can be said for a POVM. Suppose the measurement outcome is λi. Instead of collapsing to the (unnormalized) state | ψ i ⟩ ⟨ ψ i | ψ ⟩ {\displaystyle |\psi _{i}\rangle \langle \psi _{i}|\psi \rangle } after the measurement, the system now will be in the state F i | ψ ⟩ . {\displaystyle F_{i}|\psi \rangle .} Since the Fi Fi* operators need not be mutually orthogonal projections, the projection postulate of von Neumann no longer holds. The same formulation applies to general mixed states. In von Neumann's approach, the state transformation due to measurement is distinct from that due to time evolution in several ways. For example, time evolution is deterministic and unitary whereas measurement is non-deterministic and non-unitary (the projective case is illustrated numerically in the sketch below).
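A minimal numerical sketch of the finite-dimensional projective case described above. The observable A is an arbitrary 2×2 Hermitian matrix chosen purely for illustration; the Born probabilities |⟨ψi∣ψ⟩|² and the normalized post-measurement states follow directly (each amplitude is assumed non-zero, so the renormalization is well defined).

```python
import numpy as np

# An arbitrary Hermitian observable on C^2 (assumption for illustration).
A = np.array([[1.0, 2.0],
              [2.0, -1.0]])
eigvals, eigvecs = np.linalg.eigh(A)        # columns are the eigenvectors psi_i

psi = np.array([1.0, 1.0j]) / np.sqrt(2.0)  # prepared pure state

total = 0.0
for i, lam in enumerate(eigvals):
    psi_i = eigvecs[:, i]
    amp = np.vdot(psi_i, psi)               # <psi_i | psi>
    p = abs(amp) ** 2                       # Born probability |<psi_i|psi>|^2
    post = amp * psi_i                      # unnormalized state E_A(B_i) psi
    post = post / np.linalg.norm(post)      # projection postulate: renormalize
    total += p
    print(f"outcome {lam:+.4f}: probability {p:.4f}, post-state {np.round(post, 4)}")

print(f"probabilities sum to {total:.4f}")  # 1.0: resolution of the identity
```

Repeating the same measurement on the post-measurement state returns the same eigenvalue with probability 1, which is the characteristic property of the scheme noted above.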
However, since both types of state transformation take one quantum state to another, this difference was viewed by many as unsatisfactory. The POVM formalism views measurement as one among many other quantum operations, which are described by completely positive maps which do not increase the trace. == List of mathematical tools == Part of the folklore of the subject concerns the mathematical physics textbook Methods of Mathematical Physics put together by Richard Courant from David Hilbert's Göttingen University courses. The story is told (by mathematicians) that physicists had dismissed the material as not interesting in the current research areas, until the advent of Schrödinger's equation. At that point it was realised that the mathematics of the new quantum mechanics was already laid out in it. It is also said that Heisenberg had consulted Hilbert about his matrix mechanics, and Hilbert observed that his own experience with infinite-dimensional matrices had derived from differential equations, advice which Heisenberg ignored, missing the opportunity to unify the theory as Weyl and Dirac did a few years later. Whatever the basis of the anecdotes, the mathematics of the theory was conventional at the time, whereas the physics was radically new. The main tools include: linear algebra (complex numbers, eigenvectors, eigenvalues); functional analysis (Hilbert spaces, linear operators, spectral theory); differential equations (partial differential equations, separation of variables, ordinary differential equations, Sturm–Liouville theory, eigenfunctions); and harmonic analysis (Fourier transforms). == See also == List of mathematical topics in quantum theory Quantum foundations Symmetry in quantum mechanics == Notes == == References == Bäuerle, Gerard G. A.; de Kerf, Eddy A. (1990). Lie Algebras, Part 1: Finite and Infinite Dimensional Lie Algebras and Applications in Physics. Studies in Mathematical Physics. Amsterdam: North Holland. ISBN 0-444-88776-8. Byron, Frederick W.; Fuller, Robert W. (1992). Mathematics of Classical and Quantum Physics. New York: Courier Corporation. ISBN 978-0-486-67164-2. Cohen-Tannoudji, Claude; Diu, Bernard; Laloë, Franck (2020). Quantum mechanics. Volume 2: Angular momentum, spin, and approximation methods. Weinheim: Wiley-VCH Verlag GmbH & Co. KGaA. ISBN 978-3-527-82272-0. Dirac, P. A. M. (1925). "The Fundamental Equations of Quantum Mechanics". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 109 (752): 642–653. Bibcode:1925RSPSA.109..642D. doi:10.1098/rspa.1925.0150. Edwards, David A. (1979). "The mathematical foundations of quantum mechanics". Synthese. 42 (1). Springer Science and Business Media LLC: 1–70. doi:10.1007/bf00413704. ISSN 0039-7857. S2CID 46969028. Greenstein, George; Zajonc, Arthur (2006). The Quantum Challenge. Sudbury, Mass.: Jones & Bartlett Learning. ISBN 978-0-7637-2470-2. Jauch, J. M.; Wigner, E. P.; Yanase, M. M. (1997). "Some Comments Concerning Measurements in Quantum Mechanics". Part I: Particles and Fields. Part II: Foundations of Quantum Mechanics. Berlin, Heidelberg: Springer Berlin Heidelberg. pp. 475–482. doi:10.1007/978-3-662-09203-3_52. ISBN 978-3-642-08179-8. Solem, J. C.; Biedenharn, L. C. (1993). "Understanding geometrical phases in quantum mechanics: An elementary example". Foundations of Physics. 23 (2): 185–195. Bibcode:1993FoPh...23..185S. doi:10.1007/BF01883623. S2CID 121930907. Streater, Raymond Frederick; Wightman, Arthur Strong (2000). PCT, Spin and Statistics, and All that.
Princeton, NJ: Princeton University Press. ISBN 978-0-691-07062-9. Sakurai, Jun John; Napolitano, Jim (2021). Modern quantum mechanics (3rd ed.). Cambridge: Cambridge University Press. ISBN 978-1-108-47322-4. Weyl, Hermann (1950) [1931]. The Theory of Groups and Quantum Mechanics. Translated by Robertson, H. P. Dover. == Further reading == Auyang, Sunny Y. (1995). How is Quantum Field Theory Possible?. New York, NY: Oxford University Press on Demand. ISBN 978-0-19-509344-5. Emch, Gérard G. (1972). Algebraic Methods in Statistical Mechanics and Quantum Field Theory. New York: John Wiley & Sons. ISBN 0-471-23900-3. Giachetta, Giovanni; Mangiarotti, Luigi; Sardanashvily, Gennadi (2005). Geometric and Algebraic Topological Methods in Quantum Mechanics. WORLD SCIENTIFIC. arXiv:math-ph/0410040. doi:10.1142/5731. ISBN 978-981-256-129-9. Gleason, Andrew M. (1957). "Measures on the Closed Subspaces of a Hilbert Space". Journal of Mathematics and Mechanics. 6 (6). Indiana University Mathematics Department: 885–893. JSTOR 24900629. Hall, Brian C. (2013). Quantum Theory for Mathematicians. Graduate Texts in Mathematics. Vol. 267. New York, NY: Springer New York. Bibcode:2013qtm..book.....H. doi:10.1007/978-1-4614-7116-5. ISBN 978-1-4614-7115-8. ISSN 0072-5285. S2CID 117837329. Jauch, Josef Maria (1968). Foundations of Quantum Mechanics. Reading, Mass.: Addison-Wesley. ISBN 0-201-03298-8. Jost, R. (1965). The General Theory of Quantized Fields. Lectures in applied mathematics. American Mathematical Society. Kuhn, Thomas S. (1987). Black-Body Theory and the Quantum Discontinuity, 1894-1912. Chicago: University of Chicago Press. ISBN 978-0-226-45800-7. Landsman, Klaas (2017). Foundations of Quantum Theory. Fundamental Theories of Physics. Vol. 188. Cham: Springer International Publishing. doi:10.1007/978-3-319-51777-3. ISBN 978-3-319-51776-6. ISSN 0168-1222. Mackey, George W. (2004). Mathematical Foundations of Quantum Mechanics. Mineola, N.Y: Courier Corporation. ISBN 978-0-486-43517-6. McMahon, David (2013). Quantum Mechanics Demystified, 2nd Edition (PDF). New York, NY: McGraw-Hill Prof Med/Tech. ISBN 978-0-07-176563-3. Moretti, Valter (2017). Spectral Theory and Quantum Mechanics. Unitext. Vol. 110. Cham: Springer International Publishing. doi:10.1007/978-3-319-70706-8. ISBN 978-3-319-70705-1. ISSN 2038-5714. S2CID 125121522. Moretti, Valter (2019). Fundamental Mathematical Structures of Quantum Theory. Cham: Springer International Publishing. doi:10.1007/978-3-030-18346-2. ISBN 978-3-030-18345-5. S2CID 197485828. Prugovecki, Eduard (2006). Quantum Mechanics in Hilbert Space. Mineola, NY: Courier Dover Publications. ISBN 978-0-486-45327-9. Reed, Michael; Simon, Barry (1972). Methods of Modern Mathematical Physics. New York: Academic Press. ISBN 978-0-12-585001-8. Shankar, R. (2013). Principles of Quantum Mechanics (PDF). Springer. ISBN 978-1-4615-7675-4. Teschl, Gerald (2009). Mathematical Methods in Quantum Mechanics (PDF). Providence, R.I: American Mathematical Soc. ISBN 978-0-8218-4660-5. von Neumann, John (2018). Mathematical Foundations of Quantum Mechanics. Princeton Oxford: Princeton University Press. ISBN 978-0-691-17856-1. Weaver, Nik (2001). Mathematical Quantization. Chapman and Hall/CRC. doi:10.1201/9781420036237. ISBN 978-0-429-07514-8.
Wikipedia/Postulates_of_quantum_mechanics
In theoretical physics, matrix theory is a quantum mechanical model proposed in 1997 by Tom Banks, Willy Fischler, Stephen Shenker, and Leonard Susskind; it is also known as the BFSS matrix model, after the authors' initials. == Overview == This theory describes the behavior of a set of nine large matrices. In their original paper, these authors showed, among other things, that the low energy limit of this matrix model is described by eleven-dimensional supergravity. These calculations led them to propose that the BFSS matrix model is exactly equivalent to M-theory. The BFSS matrix model can therefore be used as a prototype for a correct formulation of M-theory and a tool for investigating the properties of M-theory in a relatively simple setting. The BFSS matrix model is also considered the worldvolume theory of a large number of D0-branes in Type IIA string theory. == Noncommutative geometry == In geometry, it is often useful to introduce coordinates. For example, in order to study the geometry of the Euclidean plane, one defines the coordinates x and y as the distances between any point in the plane and a pair of axes. In ordinary geometry, the coordinates of a point are numbers, so they can be multiplied, and the product of two coordinates does not depend on the order of multiplication. That is, xy = yx. This property of multiplication is known as the commutative law, and this relationship between geometry and the commutative algebra of coordinates is the starting point for much of modern geometry. Noncommutative geometry is a branch of mathematics that attempts to generalize this situation. Rather than working with ordinary numbers, one considers some similar objects, such as matrices, whose multiplication does not satisfy the commutative law (that is, objects for which xy is not necessarily equal to yx). One imagines that these noncommuting objects are coordinates on some more general notion of "space" and proves theorems about these generalized spaces by exploiting the analogy with ordinary geometry. In a paper from 1998, Alain Connes, Michael R. Douglas, and Albert Schwarz showed that some aspects of matrix models and M-theory are described by a noncommutative quantum field theory, a special kind of physical theory in which the coordinates on spacetime do not satisfy the commutativity property. This established a link between matrix models and M-theory on the one hand, and noncommutative geometry on the other hand. It quickly led to the discovery of other important links between noncommutative geometry and various physical theories. == Related models == Another notable matrix model capturing aspects of Type IIB string theory, the IKKT matrix model, was constructed in 1996–97 by N. Ishibashi, H. Kawai, Y. Kitazawa, and A. Tsuchiya. More recently, its relationship to Nambu dynamics has been discussed (see Nambu dynamics § Quantization). == See also == Matrix string theory == Notes == == References == Banks, Tom; Fischler, Willy; Shenker, Stephen; Susskind, Leonard (1997). "M theory as a matrix model: A conjecture". Physical Review D. 55 (8): 5112–5128. arXiv:hep-th/9610043. Bibcode:1997PhRvD..55.5112B. doi:10.1103/physrevd.55.5112. S2CID 13073785. Connes, Alain (1994). Noncommutative Geometry. Academic Press. ISBN 978-0-12-185860-5. Connes, Alain; Douglas, Michael; Schwarz, Albert (1998). "Noncommutative geometry and matrix theory". Journal of High Energy Physics. 1998 (2): 003. arXiv:hep-th/9711162. Bibcode:1998JHEP...02..003C. doi:10.1088/1126-6708/1998/02/003. S2CID 7562354.
Nekrasov, Nikita; Schwarz, Albert (1998). "Instantons on noncommutative R4 and (2,0) superconformal six dimensional theory". Communications in Mathematical Physics. 198 (3): 689–703. arXiv:hep-th/9802068. Bibcode:1998CMaPh.198..689N. doi:10.1007/s002200050490. S2CID 14125789. Seiberg, Nathan; Witten, Edward (1999). "String Theory and Noncommutative Geometry". Journal of High Energy Physics. 1999 (9): 032. arXiv:hep-th/9908142. Bibcode:1999JHEP...09..032S. doi:10.1088/1126-6708/1999/09/032. S2CID 668885.
Wikipedia/Matrix_theory_(physics)
The circuit topology of an electronic circuit is the form taken by the network of interconnections of the circuit components. Networks that differ only in the specific values or ratings of their components are regarded as having the same topology. Topology is not concerned with the physical layout of components in a circuit, nor with their positions on a circuit diagram; similarly to the mathematical concept of topology, it is only concerned with what connections exist between the components. Numerous physical layouts and circuit diagrams may all amount to the same topology. Strictly speaking, replacing a component with one of an entirely different type is still the same topology. In some contexts, however, these can loosely be described as different topologies. For instance, interchanging inductors and capacitors in a low-pass filter results in a high-pass filter. These might be described as high-pass and low-pass topologies even though the network topology is identical. A more correct term for these classes of object (that is, a network where the type of component is specified but not the absolute value) is prototype network. Electronic network topology is related to mathematical topology. In particular, for networks which contain only two-terminal devices, circuit topology can be viewed as an application of graph theory. In a network analysis of such a circuit from a topological point of view, the network nodes are the vertices of graph theory, and the network branches are the edges of graph theory. Standard graph theory can be extended to deal with active components and multi-terminal devices such as integrated circuits. Graphs can also be used in the analysis of infinite networks. == Circuit diagrams == The circuit diagrams in this article follow the usual conventions in electronics; lines represent conductors, filled small circles represent junctions of conductors, and open small circles represent terminals for connection to the outside world. In most cases, impedances are represented by rectangles. A practical circuit diagram would use the specific symbols for resistors, inductors, capacitors etc., but topology is not concerned with the type of component in the network, so the symbol for a general impedance has been used instead. The Graph theory section of this article gives an alternative method of representing networks. == Topology names == Many topology names relate to their appearance when drawn diagrammatically. Most circuits can be drawn in a variety of ways and consequently have a variety of names. For instance, the three circuits shown in Figure 1.1 all look different but have identical topologies. This example also demonstrates a common convention of naming topologies after a letter of the alphabet to which they have a resemblance. Greek alphabet letters can also be used in this way, for example Π (pi) topology and Δ (delta) topology. == Series and parallel topologies == A network with two components or branches has only two possible topologies: series and parallel. Even for these simplest of topologies, the circuit can be presented in varying ways. A network with three branches has four possible topologies. Note that the parallel-series topology is another representation of the Delta topology discussed later. Series and parallel topologies can continue to be constructed with greater and greater numbers of branches ad infinitum.
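Because a series–parallel network is built from just two combining rules (impedances in series add; impedances in parallel add as reciprocals), its topology can be captured as a nested expression and its total impedance evaluated recursively. The sketch below is a minimal illustration with arbitrary component values.

```python
# A series-parallel network as a nested expression tree.
# Leaves are impedances (complex values allowed); internal nodes
# combine their children either in series or in parallel.

def series(*zs):
    # Impedances in series simply add.
    return sum(zs)

def parallel(*zs):
    # Impedances in parallel add as reciprocals (admittances add).
    return 1.0 / sum(1.0 / z for z in zs)

# Example: a 100-ohm resistor in series with the parallel combination
# of 50 ohms and (20 ohms in series with 30 ohms).  Values are arbitrary.
z_total = series(100.0, parallel(50.0, series(20.0, 30.0)))
print(z_total)   # 100 + (50*50)/(50+50) = 125.0
```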
The number of unique topologies that can be obtained from n ∈ N {\displaystyle n\in \mathbb {N} } series or parallel branches is 1, 2, 4, 10, 24, 66, 180, 522, 1532, 4624, … {\displaystyle \dots } (sequence A000084 in the OEIS). == Y and Δ topologies == Y and Δ are important topologies in linear network analysis because they are the simplest possible three-terminal networks. A Y-Δ transform is available for linear circuits. This transform is important because some networks cannot be analysed in terms of series and parallel combinations. These networks often arise in 3-phase power circuits as they are the two most common topologies for 3-phase motor or transformer windings. An example of this is the network of figure 1.6, consisting of a Y network connected in parallel with a Δ network. Say it is desired to calculate the impedance between two nodes of the network. In many networks this can be done by successive applications of the rules for combination of series or parallel impedances. This is not, however, possible in this case, where the Y-Δ transform is needed in addition to the series and parallel rules. The Y topology is also called star topology. However, star topology may also refer to the more general case of many branches connected to the same node rather than just three. == Simple filter topologies == The topologies shown in figure 1.7 are commonly used for filter and attenuator designs. The L-section has the same topology as the potential divider. The T-section has the same topology as the Y topology. The Π-section has the same topology as the Δ topology. All these topologies can be viewed as a short section of a ladder topology. Longer sections would normally be described as ladder topology. These kinds of circuits are commonly analysed and characterised in terms of a two-port network. == Bridge topology == Bridge topology is an important topology with many uses in both linear and non-linear applications, including, amongst many others, the bridge rectifier, the Wheatstone bridge and the lattice phase equaliser. Bridge topology is rendered in circuit diagrams in several ways. The first rendering in figure 1.8 is the traditional depiction of a bridge circuit. The second rendering clearly shows the equivalence between the bridge topology and a topology derived by series and parallel combinations. The third rendering is more commonly known as lattice topology. It is not so obvious that this is topologically equivalent. It can be seen that this is indeed so by visualising the top left node moved to the right of the top right node. It is normal to call a network bridge topology only if it is being used as a two-port network with the input and output ports each consisting of a pair of diagonally opposite nodes. The box topology in figure 1.7 can be seen to be identical to bridge topology but in the case of the filter the input and output ports are each a pair of adjacent nodes. Sometimes the loading (or null indication) component on the output port of the bridge will be included in the bridge topology as shown in figure 1.9. == Bridged T and twin-T topologies == Bridged T topology is derived from bridge topology in a way explained in the Zobel network article. Many derivative topologies are also discussed in the same article. There is also a twin-T topology, which has practical applications where it is desirable to have the input and output share a common (ground) terminal. This may be, for instance, because the input and output connections are made with co-axial topology.
Connecting an input and output terminal is not allowable with normal bridge topology, so the twin-T is used where a bridge would otherwise be used for balance or null measurement applications. The topology is also used in the twin-T oscillator as a sine-wave generator. The lower part of figure 1.11 shows twin-T topology redrawn to emphasise the connection with bridge topology. == Infinite topologies == Ladder topology can be extended without limit and is much used in filter designs. There are many variations on ladder topology, some of which are discussed in the Electronic filter topology and Composite image filter articles. The balanced form of ladder topology can be viewed as the graph of the side of a prism of arbitrary order. The side of an antiprism forms a topology which, in this sense, is an anti-ladder. Anti-ladder topology finds an application in voltage multiplier circuits, in particular the Cockcroft-Walton generator. There is also a full-wave version of the Cockcroft-Walton generator which uses a double anti-ladder topology. Infinite topologies can also be formed by cascading multiple sections of some other simple topology, such as lattice or bridge-T sections. Such infinite chains of lattice sections occur in the theoretical analysis and artificial simulation of transmission lines, but are rarely used as a practical circuit implementation. == Components with more than two terminals == Circuits containing components with three or more terminals greatly increase the number of possible topologies. Conversely, the number of different circuits represented by a topology diminishes and in many cases the circuit is easily recognisable from the topology even when specific components are not identified. With more complex circuits the description may proceed by specification of a transfer function between the ports of the network rather than the topology of the components. == Graph theory == Graph theory is the branch of mathematics dealing with graphs. In network analysis, graphs are used extensively to represent a network being analysed. The graph of a network captures only certain aspects of a network: those aspects related to its connectivity, or, in other words, its topology. This can be a useful representation and generalisation of a network because many network equations are invariant across networks with the same topology. This includes equations derived from Kirchhoff's laws and Tellegen's theorem. === History === Graph theory has been used in the network analysis of linear, passive networks almost from the moment that Kirchhoff's laws were formulated. Gustav Kirchhoff himself, in 1847, used graphs as an abstract representation of a network in his loop analysis of resistive circuits. This approach was later generalised to RLC circuits, replacing resistances with impedances. In 1873 James Clerk Maxwell provided the dual of this analysis with node analysis. Maxwell is also responsible for the topological theorem that the determinant of the node-admittance matrix is equal to the sum of all the tree admittance products. In 1900 Henri Poincaré introduced the idea of representing a graph by its incidence matrix, hence founding the field of algebraic topology. In 1916 Oswald Veblen applied the algebraic topology of Poincaré to Kirchhoff's analysis. Veblen is also responsible for the introduction of the spanning tree to aid choosing a compatible set of network variables.
Comprehensive cataloguing of network graphs as they apply to electrical circuits began with Percy MacMahon in 1891 (with an engineer-friendly article in The Electrician in 1892) who limited his survey to series and parallel combinations. MacMahon called these graphs yoke-chains. Ronald M. Foster in 1932 categorised graphs by their nullity or rank and provided charts of all those with a small number of nodes. This work grew out of an earlier survey by Foster while collaborating with George Campbell in 1920 on 4-port telephone repeaters and produced 83,539 distinct graphs. For a long time topology in electrical circuit theory remained concerned only with linear passive networks. The more recent developments of semiconductor devices and circuits have required new tools in topology to deal with them. Enormous increases in circuit complexity have led to the use of combinatorics in graph theory to improve the efficiency of computer calculation. === Graphs and circuit diagrams === Networks are commonly classified by the kind of electrical elements making them up. In a circuit diagram these element-kinds are specifically drawn, each with its own unique symbol. Resistive networks are one-element-kind networks, consisting only of R elements. Likewise capacitive or inductive networks are one-element-kind. The RC, RL and LC circuits are simple two-element-kind networks. The RLC circuit is the simplest three-element-kind network. The LC ladder network commonly used for low-pass filters can have many elements but is another example of a two-element-kind network. Conversely, topology is concerned only with the geometric relationship between the elements of a network, not with the kind of elements themselves. The heart of a topological representation of a network is the graph of the network. Elements are represented as the edges of the graph. An edge is drawn as a line, terminating on dots or small circles from which other edges (elements) may emanate. In circuit analysis, the edges of the graph are called branches. The dots are called the vertices of the graph and represent the nodes of the network. Node and vertex are terms that can be used interchangeably when discussing graphs of networks. Figure 2.2 shows a graph representation of the circuit in figure 2.1. Graphs used in network analysis are usually, in addition, both directed graphs, to capture the direction of current flow and voltage, and labelled graphs, to capture the uniqueness of the branches and nodes. For instance, a graph consisting of a square of branches would still be the same topological graph if two branches were interchanged unless the branches were uniquely labelled. In directed graphs, the two nodes that a branch connects to are designated the source and target nodes. Typically, these will be indicated by an arrow drawn on the branch. === Incidence === Incidence is one of the basic properties of a graph. An edge that is connected to a vertex is said to be incident on that vertex. The incidence of a graph can be captured in matrix format with a matrix called an incidence matrix. In fact, the incidence matrix is an alternative mathematical representation of the graph which dispenses with the need for any kind of drawing. Matrix rows correspond to nodes and matrix columns correspond to branches. The elements of the matrix are either zero, for no incidence, or one, for incidence between the node and branch. Direction in directed graphs is indicated by the sign of the element. 
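As a concrete illustration of this description, the sketch below builds the directed incidence matrix of a small assumed example graph, a square of branches with one diagonal chord (the graph and the branch directions are chosen arbitrarily). Rows are nodes, columns are branches; the sign convention (+1 for the source node of a branch, −1 for its target) is a free choice.

```python
import numpy as np

nodes = ["n0", "n1", "n2", "n3"]
# Each branch is (source node, target node); directions chosen arbitrarily.
branches = [("n0", "n1"), ("n1", "n2"), ("n2", "n3"), ("n3", "n0"), ("n1", "n3")]

A = np.zeros((len(nodes), len(branches)), dtype=int)
for b, (src, tgt) in enumerate(branches):
    A[nodes.index(src), b] = +1   # branch b leaves its source node
    A[nodes.index(tgt), b] = -1   # branch b enters its target node

print(A)
# Every column has exactly two non-zero entries, since each two-terminal
# element is incident on exactly two nodes.  For a connected graph of
# n nodes the matrix rank is n - 1 (here 3):
print(np.linalg.matrix_rank(A))
```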
=== Equivalence === Graphs are equivalent if one can be transformed into the other by deformation. Deformation can include the operations of translation, rotation and reflection; bending and stretching the branches; and crossing or knotting the branches. Two graphs which are equivalent through deformation are said to be congruent. In the field of electrical networks, two additional transforms are considered to result in equivalent graphs which do not produce congruent graphs. The first of these is the interchange of series-connected branches. This is the dual of interchange of parallel-connected branches which can be achieved by deformation without the need for a special rule. The second is concerned with graphs divided into two or more separate parts, that is, a graph with two sets of nodes such that no branch is incident on a node in each set. Two such separate parts are considered an equivalent graph to one where the parts are joined by combining a node from each into a single node. Likewise, a graph that can be split into two separate parts by splitting a node in two is also considered equivalent. === Trees and links === A tree is a graph in which all the nodes are connected, either directly or indirectly, by branches, but without forming any closed loops. Since there are no closed loops, a tree by itself can carry no circulating currents. In network analysis, we are interested in spanning trees, that is, trees that connect every node in the graph of the network. In this article, an unqualified tree means a spanning tree unless otherwise stated. A given network graph can contain a number of different trees. The branches removed from a graph in order to form a tree are called links; the branches remaining in the tree are called twigs. For a graph with n nodes, the number of branches in each tree, t, must be: t = n − 1 {\displaystyle t=n-1\ } An important relationship for circuit analysis is: b = ℓ + t {\displaystyle b=\ell +t\ } where b is the number of branches in the graph and ℓ is the number of links removed to form the tree. === Tie sets and cut sets === The goal of circuit analysis is to determine all the branch currents and voltages in the network. These network variables are not all independent. The branch voltages are related to the branch currents by the transfer function of the elements of which they are composed. A complete solution of the network can therefore be either in terms of branch currents or branch voltages only. Nor are all the branch currents independent of each other. The minimum number of branch currents required for a complete solution is ℓ. This is a consequence of the fact that the tree was formed by removing ℓ links: if all ℓ link currents are set to zero, the remaining tree, having no closed loops, can carry no current, so the tree branch currents cannot be independent of the link currents. The branch currents chosen as a set of independent variables must be a set associated with the links of a tree: one cannot choose any ℓ branches arbitrarily. In terms of branch voltages, a complete solution of the network can be obtained with t branch voltages. This is a consequence of the fact that short-circuiting all the branches of a tree results in the voltage being zero everywhere. The link voltages cannot, therefore, be independent of the tree branch voltages. A common analysis approach is to solve for loop currents rather than branch currents. The branch currents are then found in terms of the loop currents. Again, the set of loop currents cannot be chosen arbitrarily.
To guarantee a set of independent variables the loop currents must be those associated with a certain set of loops. This set of loops consists of those loops formed by restoring a single one of the removed links to a given tree of the graph of the circuit to be analysed. Since restoring a single link to a tree forms exactly one unique loop, the number of loop currents so defined is equal to ℓ. The term loop in this context is not the same as the usual meaning of loop in graph theory. The set of branches forming a given loop is called a tie set. The set of network equations is formed by equating the loop currents to the algebraic sum of the tie set branch currents. It is possible to choose a set of independent loop currents without reference to the trees and tie sets. A sufficient, but not necessary, condition for choosing a set of independent loops is to ensure that each chosen loop includes at least one branch that was not previously included by loops already chosen. A particularly straightforward choice is that used in mesh analysis, in which the loops are all chosen to be meshes. Mesh analysis can only be applied if it is possible to map the graph onto a plane or a sphere without any of the branches crossing over. Such graphs are called planar graphs. The ability to map onto a plane and the ability to map onto a sphere are equivalent conditions. Any finite graph mapped onto a plane can be shrunk until it will map onto a small region of a sphere. Conversely, a mesh of any graph mapped onto a sphere can be stretched until the space inside it occupies nearly all of the sphere. The entire graph then occupies only a small region of the sphere. This is the same as the first case, hence the graph will also map onto a plane. There is an approach to choosing network variables with voltages which is analogous and dual to the loop current method. Here the voltages associated with pairs of nodes are the primary variables and the branch voltages are found in terms of them. In this method also, a particular tree of the graph must be chosen in order to ensure that all the variables are independent. The dual of the tie set is the cut set. A tie set is formed by allowing all but one of the graph links to be open circuit. A cut set is formed by allowing all but one of the tree branches to be short circuit. The cut set consists of the tree branch which was not short-circuited and any of the links which are not short-circuited by the other tree branches. A cut set of a graph produces two disjoint subgraphs, that is, it cuts the graph into two parts, and is the minimum set of branches needed to do so. The set of network equations is formed by equating the node pair voltages to the algebraic sum of the cut set branch voltages. The dual of the special case of mesh analysis is nodal analysis. === Nullity and rank === The nullity, N, of a graph with n nodes, s separate parts and b branches is defined by: N = b − n + s {\displaystyle N=b-n+s\ } The nullity of a graph represents the number of degrees of freedom of its set of network equations. For a planar graph, the nullity is equal to the number of meshes in the graph. The rank, R, of a graph is defined by: R = n − s {\displaystyle R=n-s\ } Rank plays the same role in nodal analysis as nullity plays in mesh analysis. That is, it gives the number of node voltage equations required.
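Both quantities are straightforward to compute for any graph. The sketch below does so for the same assumed square-with-a-chord example graph used in the incidence-matrix sketch earlier, counting separate parts with a plain depth-first search; the final check anticipates the relation between rank and nullity quoted just below.

```python
# Nullity N = b - n + s and rank R = n - s of a graph, where b is the
# number of branches, n the number of nodes and s the number of
# separate (connected) parts.  Example graph assumed for illustration.

def connected_parts(nodes, branches):
    # Count connected components with a simple depth-first search.
    adjacency = {v: set() for v in nodes}
    for a, b in branches:
        adjacency[a].add(b)
        adjacency[b].add(a)
    seen, parts = set(), 0
    for v in nodes:
        if v not in seen:
            parts += 1
            stack = [v]
            while stack:
                u = stack.pop()
                if u not in seen:
                    seen.add(u)
                    stack.extend(adjacency[u])
    return parts

nodes = ["n0", "n1", "n2", "n3"]
branches = [("n0", "n1"), ("n1", "n2"), ("n2", "n3"), ("n3", "n0"), ("n1", "n3")]

s = connected_parts(nodes, branches)
b, n = len(branches), len(nodes)
N, R = b - n + s, n - s
print(N, R, N + R == b)   # 2 3 True: two meshes, three node equations
```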
Rank and nullity are dual concepts and are related by: R + N = b {\displaystyle R+N=b\ } === Solving the network variables === Once a set of geometrically independent variables has been chosen, the state of the network is expressed in terms of these. The result is a set of independent linear equations which need to be solved simultaneously in order to find the values of the network variables. This set of equations can be expressed in a matrix format which leads to a characteristic parameter matrix for the network. Parameter matrices take the form of an impedance matrix if the equations have been formed on a loop-analysis basis, or of an admittance matrix if the equations have been formed on a node-analysis basis. These equations can be solved in a number of well-known ways. One method is the systematic elimination of variables. Another method involves the use of determinants. This is known as Cramer's rule and provides a direct expression for the unknown variable in terms of determinants. This is useful in that it provides a compact expression for the solution. However, for anything more than the most trivial networks, a greater calculation effort is required for this method when working manually. === Duality === Two graphs are dual when the relationship between branches and node pairs in one is the same as the relationship between branches and loops in the other. The dual of a graph can be found entirely by a graphical method. The dual of a graph is another graph. For a given tree in a graph, the complementary set of branches (i.e., the branches not in the tree) form a tree in the dual graph. The set of current loop equations associated with the tie sets of the original graph and tree is identical to the set of voltage node-pair equations associated with the cut sets of the dual graph. Dual concepts in topology related to circuit theory include, for example, current and voltage, loop and node pair, tie set and cut set, and mesh analysis and nodal analysis. The dual of a tree is sometimes called a maze. It consists of spaces connected by links in the same way that the tree consists of nodes connected by tree branches. Duals cannot be formed for every graph. Duality requires that every tie set has a dual cut set in the dual graph. This condition is met if and only if the graph is mappable onto a sphere with no branches crossing. To see this, note that a tie set is required to "tie off" a graph into two portions and its dual, the cut set, is required to cut a graph into two portions. The graph of a finite network which will not map onto a sphere will require an n-fold torus. A tie set that passes through a hole in a torus will fail to tie the graph into two parts. Consequently, the dual graph will not be cut into two parts and will not contain the required cut set. Consequently, only planar graphs have duals. Duals also cannot be formed for networks containing mutual inductances since there is no corresponding capacitive element. Equivalent circuits can be developed which do have duals, but the dual cannot be formed of a mutual inductance directly. === Node and mesh elimination === Operations on a set of network equations have a topological meaning which can aid visualisation of what is happening. Elimination of a node voltage from a set of network equations corresponds topologically to the elimination of that node from the graph. For a node connected to three other nodes, this corresponds to the well-known Y-Δ transform. The transform can be extended to greater numbers of connected nodes and is then known as the star-mesh transform.
The inverse of this transform is the Δ-Y transform, which analytically corresponds to the elimination of a mesh current and topologically corresponds to the elimination of a mesh. However, elimination of a mesh current whose mesh has branches in common with an arbitrary number of other meshes will not, in general, result in a realisable graph. This is because the graph of the transform of the general star is a graph which will not map onto a sphere (it contains star polygons and hence multiple crossovers). The dual of such a graph cannot exist, but is the graph required to represent a generalised mesh elimination. === Mutual coupling === In the conventional graph representation of circuits, there is no means of explicitly representing mutual inductive couplings, such as occur in a transformer, and such components may result in a disconnected graph with more than one separate part. For convenience of analysis, a graph with multiple parts can be combined into a single graph by unifying one node in each part into a single node. This makes no difference to the theoretical behaviour of the circuit, so analysis carried out on it is still valid. It would, however, make a practical difference if a circuit were to be implemented this way in that it would destroy the isolation between the parts. An example would be a transformer earthed on both the primary and secondary side. The transformer still functions as a transformer with the same voltage ratio but can now no longer be used as an isolation transformer. More recent techniques in graph theory are able to deal with active components, which are also problematic in conventional theory. These new techniques are also able to deal with mutual couplings. === Active components === There are two basic approaches available for dealing with mutual couplings and active components. In the first of these, Samuel Jefferson Mason in 1953 introduced signal-flow graphs. Signal-flow graphs are weighted, directed graphs. He used these to analyse circuits containing mutual couplings and active networks. The weight of a directed edge in these graphs represents a gain, such as possessed by an amplifier. In general, signal-flow graphs, unlike the regular directed graphs described above, do not correspond to the topology of the physical arrangement of components. The second approach is to extend the classical method so that it includes mutual couplings and active components. Several methods have been proposed for achieving this. In one of these, two graphs are constructed, one representing the currents in the circuit and the other representing the voltages. Passive components will have identical branches in both graphs but active components may not. The method relies on identifying spanning trees that are common to both graphs. An alternative method of extending the classical approach which requires only one graph was proposed by Chen in 1965. Chen's method is based on a rooted tree. ==== Hypergraphs ==== Another way of extending classical graph theory for active components is through the use of hypergraphs. Some electronic components are not represented naturally using graphs. The transistor has three connection points, but a normal graph branch may only connect to two nodes. Modern integrated circuits have many more connections than this. This problem can be overcome by using hypergraphs instead of regular graphs. In a conventional representation components are represented by edges, each of which connects to two nodes.
In a hypergraph, components are represented by hyperedges which can connect to an arbitrary number of nodes. Hyperedges have tentacles which connect the hyperedge to the nodes. The graphical representation of a hyperedge may be a box (compared to the edge which is a line) and the representations of its tentacles are lines from the box to the connected nodes. In a directed hypergraph, the tentacles carry labels which are determined by the hyperedge's label. A conventional directed graph can be thought of as a hypergraph with hyperedges each of which has two tentacles. These two tentacles are labelled source and target and usually indicated by an arrow. In a general hypergraph with more tentacles, more complex labelling will be required. Hypergraphs can be characterised by their incidence matrices. With the convention above of rows corresponding to nodes and columns to branches, a regular graph containing only two-terminal components will have exactly two non-zero entries in each column. Any incidence matrix with more than two non-zero entries in any column is a representation of a hypergraph. The number of non-zero entries in a column is the rank of the corresponding branch, and the highest branch rank is the rank of the incidence matrix. === Non-homogeneous variables === Classical network analysis develops a set of network equations whose network variables are homogeneous in either current (loop analysis) or voltage (node analysis). The set of network variables so found is not necessarily the minimum necessary to form a set of independent equations. There may be a difference between the number of variables in a loop analysis and the number in a node analysis. In some cases the minimum number possible may be less than either of these if the requirement for homogeneity is relaxed and a mix of current and voltage variables is allowed. A result from Kishi and Kajitani in 1967 is that the absolute minimum number of variables required to describe the behaviour of the network is given by the maximum distance between any two spanning forests of the network graph. === Network synthesis === Graph theory can be applied to network synthesis. Classical network synthesis realises the required network in one of a number of canonical forms. Examples of canonical forms are the realisation of a driving-point impedance by Cauer's canonical ladder network or Foster's canonical form or Brune's realisation of an immittance from his positive-real functions. Topological methods, on the other hand, do not start from a given canonical form. Rather, the form is a result of the mathematical representation. Some canonical forms require mutual inductances for their realisation. A major aim of topological methods of network synthesis has been to eliminate the need for these mutual inductances. One theorem to come out of topology is that a realisation of a driving-point impedance without mutual couplings is minimal if and only if there are no all-inductor or all-capacitor loops. Graph theory is at its most powerful in network synthesis when the elements of the network can be represented by real numbers (one-element-kind networks such as resistive networks) or binary states (such as switching networks). === Infinite networks === Perhaps the earliest network with an infinite graph to be studied was the ladder network used to represent transmission lines developed, in its final form, by Oliver Heaviside in 1881. Certainly all early studies of infinite networks were limited to periodic structures such as ladders or grids with the same elements repeated over and over.
It was not until the late 20th century that tools for analysing infinite networks with an arbitrary topology became available. Infinite networks are largely of only theoretical interest and are the plaything of mathematicians. Infinite networks that are not constrained by real-world restrictions can have some very unphysical properties. For instance, Kirchhoff's laws can fail in some cases, and infinite resistor ladders can be defined whose driving-point impedance depends on the termination at infinity. Another unphysical property of theoretical infinite networks is that, in general, they will dissipate infinite power unless constraints are placed on them in addition to the usual network laws such as Ohm's and Kirchhoff's laws. There are, however, some real-world applications. The transmission line example is one of a class of practical problems that can be modelled by infinitesimal elements (the distributed-element model). Other examples are launching waves into a continuous medium, fringing field problems, and measurement of resistance between points of a substrate or down a borehole. Transfinite networks extend the idea of infinite networks even further. A node at an extremity of an infinite network can have another branch connected to it leading to another network. This new network can itself be infinite. Thus, topologies can be constructed which have pairs of nodes with no finite path between them. Such networks of infinite networks are called transfinite networks. == Notes == == See also == Symbolic circuit analysis Network topology Topological quantum computer == References == == Bibliography == Brittain, James E., "The introduction of the loading coil: George A. Campbell and Michael I. Pupin", Technology and Culture, vol. 11, no. 1, pp. 36–57, The Johns Hopkins University Press, January 1970 doi:10.2307/3102809. Campbell, G. A., "Physical theory of the electric wave-filter", Bell System Technical Journal, November 1922, vol. 1, no. 2, pp. 1–32. Cederbaum, I., "Some applications of graph theory to network analysis and synthesis", IEEE Transactions on Circuits and Systems, vol. 31, iss. 1, pp. 64–68, January 1984. Farago, P. S., An Introduction to Linear Network Analysis, The English Universities Press Ltd, 1961. Foster, Ronald M., "Geometrical circuits of electrical networks", Transactions of the American Institute of Electrical Engineers, vol. 51, iss. 2, pp. 309–317, June 1932. Foster, Ronald M.; Campbell, George A., "Maximum output networks for telephone substation and repeater circuits", Transactions of the American Institute of Electrical Engineers, vol. 39, iss. 1, pp. 230–290, January 1920. Guillemin, Ernst A., Introductory Circuit Theory, New York: John Wiley & Sons, 1953 OCLC 535111 Kind, Dieter; Feser, Kurt, High-voltage Test Techniques, translator Y. Narayana Rao, Newnes, 2001 ISBN 0-7506-5183-0. Kishi, Genya; Kajitani, Yoji, "Maximally distant trees and principal partition of a linear graph", IEEE Transactions on Circuit Theory, vol. 16, iss. 3, pp. 323–330, August 1969. MacMahon, Percy A., "Yoke-chains and multipartite compositions in connexion with the analytical forms called “Trees”", Proceedings of the London Mathematical Society, vol. 22 (1891), pp. 330–346 doi:10.1112/plms/s1-22.1.330. MacMahon, Percy A., "Combinations of resistances", The Electrician, vol. 28, pp. 601–602, 8 April 1892. Reprinted in Discrete Applied Mathematics, vol. 54, iss. 2–3, pp. 225–228, 17 October 1994 doi:10.1016/0166-218X(94)90024-8.
Minas, M., "Creating semantic representations of diagrams", Applications of Graph Transformations with Industrial Relevance: international workshop, AGTIVE'99, Kerkrade, The Netherlands, September 1–3, 1999: proceedings, pp. 209–224, Springer, 2000 ISBN 3-540-67658-9. Redifon Radio Diary, 1970, William Collins Sons & Co, 1969. Skiena, Steven S., The Algorithm Design Manual, Springer, 2008, ISBN 1-84800-069-3. Suresh, Kumar K. S., "Introduction to network topology" chapter 11 in Electric Circuits And Networks, Pearson Education India, 2010 ISBN 81-317-5511-8. Tooley, Mike, BTEC First Engineering: Mandatory and Selected Optional Units for BTEC Firsts in Engineering, Routledge, 2010 ISBN 1-85617-685-1. Wildes, Karl L.; Lindgren, Nilo A., "Network analysis and synthesis: Ernst A. Guillemin", A Century of Electrical Engineering and Computer Science at MIT, 1882–1982, pp. 154–159, MIT Press, 1985 ISBN 0-262-23119-0. Zemanian, Armen H., Infinite Electrical Networks, Cambridge University Press, 1991 ISBN 0-521-40153-4.
Wikipedia/Circuit_topology_(electrical)
In chemistry, topology provides a way of describing and predicting the molecular structure within the constraints of three-dimensional (3-D) space. Given the determinants of chemical bonding and the chemical properties of the atoms, topology provides a model for explaining how the atoms' ethereal wave functions must fit together. Molecular topology is a part of mathematical chemistry dealing with the algebraic description of chemical compounds, thus allowing a unique and easy characterization of them. Topology is insensitive to the details of a scalar field, and can often be determined using simplified calculations. Scalar fields such as electron density, Madelung field, covalent field and the electrostatic potential can be used to model topology. Each scalar field has its own distinctive topology and each provides different information about the nature of chemical bonding and structure. The analysis of these topologies, when combined with simple electrostatic theory and a few empirical observations, leads to a quantitative model of localized chemical bonding. In the process, the analysis provides insights into the nature of chemical bonding. Applied topology explains how large molecules reach their final shapes and how biological molecules achieve their activity. Circuit topology is a topological property of folded linear polymers. It describes the arrangement of intra-chain contacts. Contacts can be established by intra-chain interactions, the so-called hard contacts (h-contacts), or via chain entanglement, the so-called soft contacts (s-contacts). This notion has been applied to structural analysis of biomolecules such as proteins, RNAs, and genomes. == Topological indices == It is possible to set up equations correlating numerical descriptors of molecular structure, usually referred to as topological indices (TIs), with experimental properties. Topological indices are used in the development of quantitative structure–activity relationships (QSARs) in which the biological activity or other properties of molecules are correlated with their chemical structure. == See also == Circuit topology Topological index Theoretical chemistry Molecular geometry Molecular graph Molecular knot Molecular Borromean rings == References == Francl, Michelle, "Stretching topology", Nature Chemistry 1, 334–335 (2009) doi:10.1038/nchem.302 Rouvray, D. H., "A rationale for the topological approach to chemistry", Journal of Molecular Structure: THEOCHEM, vol. 336, iss. 2–3, 30 June 1995, pp. 101–114
Wikipedia/Topology_(chemistry)
In category theory, a branch of mathematics, a presheaf on a category C {\displaystyle C} is a functor F : C o p → S e t {\displaystyle F\colon C^{\mathrm {op} }\to \mathbf {Set} } . If C {\displaystyle C} is the poset of open sets in a topological space, interpreted as a category, then one recovers the usual notion of presheaf on a topological space. A morphism of presheaves is defined to be a natural transformation of functors. This makes the collection of all presheaves on C {\displaystyle C} into a category, and is an example of a functor category. It is often written as C ^ = S e t C o p {\displaystyle {\widehat {C}}=\mathbf {Set} ^{C^{\mathrm {op} }}} and it is called the category of presheaves on C {\displaystyle C} . A functor into C ^ {\displaystyle {\widehat {C}}} is sometimes called a profunctor. A presheaf that is naturally isomorphic to the contravariant hom-functor Hom(–, A) for some object A of C is called a representable presheaf. Some authors refer to a functor F : C o p → V {\displaystyle F\colon C^{\mathrm {op} }\to \mathbf {V} } as a V {\displaystyle \mathbf {V} } -valued presheaf. == Examples == A simplicial set is a Set-valued presheaf on the simplex category C = Δ {\displaystyle C=\Delta } . A directed multigraph is a presheaf on the category with two objects and two parallel morphisms between them, i.e. C = ( E ⟶ t s V ) {\displaystyle C=(E{\overset {s}{\underset {t}{\longrightarrow }}}V)} . An arrow category is a presheaf on the category with two objects and one morphism between them, i.e. C = ( E ⟶ f V ) {\displaystyle C=(E{\overset {f}{\longrightarrow }}V)} . A right group action is a presheaf on the category created from a group G {\displaystyle G} , i.e. a category with a single object whose morphisms are all invertible. == Properties == When C {\displaystyle C} is a small category, the functor category C ^ = S e t C o p {\displaystyle {\widehat {C}}=\mathbf {Set} ^{C^{\mathrm {op} }}} is cartesian closed. The poset of subobjects of P {\displaystyle P} forms a Heyting algebra, whenever P {\displaystyle P} is an object of C ^ = S e t C o p {\displaystyle {\widehat {C}}=\mathbf {Set} ^{C^{\mathrm {op} }}} for small C {\displaystyle C} . For any morphism f : X → Y {\displaystyle f:X\to Y} of C ^ {\displaystyle {\widehat {C}}} , the pullback functor of subobjects f ∗ : S u b C ^ ( Y ) → S u b C ^ ( X ) {\displaystyle f^{*}:\mathrm {Sub} _{\widehat {C}}(Y)\to \mathrm {Sub} _{\widehat {C}}(X)} has a right adjoint, denoted ∀ f {\displaystyle \forall _{f}} , and a left adjoint, ∃ f {\displaystyle \exists _{f}} . These are the universal and existential quantifiers. A locally small category C {\displaystyle C} embeds fully and faithfully into the category C ^ {\displaystyle {\widehat {C}}} of set-valued presheaves via the Yoneda embedding, which to every object A {\displaystyle A} of C {\displaystyle C} associates the hom functor C ( − , A ) {\displaystyle C(-,A)} . The category C ^ {\displaystyle {\widehat {C}}} admits small limits and small colimits. See limit and colimit of presheaves for further discussion. The density theorem states that every presheaf is a colimit of representable presheaves; in fact, C ^ {\displaystyle {\widehat {C}}} is the colimit completion of C {\displaystyle C} (see § Universal property below).
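Unpacking the directed multigraph example above: a Set-valued presheaf on the two-object category with a parallel pair of morphisms amounts to two sets together with two parallel functions between them, which may be read as the source and target maps of a multigraph; a morphism of such presheaves (a natural transformation) is then exactly a graph homomorphism. The sketch below is one minimal packaging of this data; the names and the example graph are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class Multigraph:
    # A Set-valued presheaf on the "parallel pair" category, unpacked:
    # two sets (vertices, edges) and two parallel functions edges -> vertices.
    vertices: set
    edges: set
    src: dict   # edge -> its source vertex
    tgt: dict   # edge -> its target vertex

# Two parallel edges from a to b plus a loop at b: data that a plain
# "set of vertex pairs" graph could not record, but a presheaf does.
g = Multigraph(
    vertices={"a", "b"},
    edges={"e1", "e2", "e3"},
    src={"e1": "a", "e2": "a", "e3": "b"},
    tgt={"e1": "b", "e2": "b", "e3": "b"},
)

def is_morphism(G, H, on_vertices, on_edges):
    # A natural transformation between two such presheaves is a pair of
    # maps commuting with src and tgt -- exactly a graph homomorphism.
    return all(
        on_vertices[G.src[e]] == H.src[on_edges[e]]
        and on_vertices[G.tgt[e]] == H.tgt[on_edges[e]]
        for e in G.edges
    )

identity = ({"a": "a", "b": "b"}, {"e1": "e1", "e2": "e2", "e3": "e3"})
print(is_morphism(g, g, *identity))   # True: the identity is a morphism
```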
== Universal property == The construction C ↦ C ^ = F c t ( C op , S e t ) {\displaystyle C\mapsto {\widehat {C}}=\mathbf {Fct} (C^{\text{op}},\mathbf {Set} )} is called the colimit completion of C because of the following universal property: let D be a category admitting all small colimits; then every functor η : C → D {\displaystyle \eta :C\to D} factors, uniquely up to natural isomorphism, as η = η ~ ∘ y {\displaystyle \eta ={\widetilde {\eta }}\circ y} , where y : C → C ^ {\displaystyle y:C\to {\widehat {C}}} is the Yoneda embedding and η ~ : C ^ → D {\displaystyle {\widetilde {\eta }}:{\widehat {C}}\to D} commutes with small colimits. Proof: Given a presheaf F, by the density theorem, we can write F = lim → ⁡ y U i {\displaystyle F=\varinjlim yU_{i}} where U i {\displaystyle U_{i}} are objects in C. Then let η ~ F = lim → ⁡ η U i , {\displaystyle {\widetilde {\eta }}F=\varinjlim \eta U_{i},} which exists by assumption. Since lim → − {\displaystyle \varinjlim -} is functorial, this determines the functor η ~ : C ^ → D {\displaystyle {\widetilde {\eta }}:{\widehat {C}}\to D} . Succinctly, η ~ {\displaystyle {\widetilde {\eta }}} is the left Kan extension of η {\displaystyle \eta } along y; hence, the name "Yoneda extension". To see η ~ {\displaystyle {\widetilde {\eta }}} commutes with small colimits, we show η ~ {\displaystyle {\widetilde {\eta }}} is a left-adjoint (to some functor). Define H o m ( η , − ) : D → C ^ {\displaystyle {\mathcal {H}}om(\eta ,-):D\to {\widehat {C}}} to be the functor given by: for each object M in D and each object U in C, H o m ( η , M ) ( U ) = Hom D ⁡ ( η U , M ) . {\displaystyle {\mathcal {H}}om(\eta ,M)(U)=\operatorname {Hom} _{D}(\eta U,M).} Then, for each object M in D, since H o m ( η , M ) ( U i ) = Hom ⁡ ( y U i , H o m ( η , M ) ) {\displaystyle {\mathcal {H}}om(\eta ,M)(U_{i})=\operatorname {Hom} (yU_{i},{\mathcal {H}}om(\eta ,M))} by the Yoneda lemma, we have: Hom D ⁡ ( η ~ F , M ) = Hom D ⁡ ( lim → ⁡ η U i , M ) = lim ← ⁡ Hom D ⁡ ( η U i , M ) = lim ← ⁡ H o m ( η , M ) ( U i ) = Hom C ^ ⁡ ( F , H o m ( η , M ) ) , {\displaystyle {\begin{aligned}\operatorname {Hom} _{D}({\widetilde {\eta }}F,M)&=\operatorname {Hom} _{D}(\varinjlim \eta U_{i},M)=\varprojlim \operatorname {Hom} _{D}(\eta U_{i},M)=\varprojlim {\mathcal {H}}om(\eta ,M)(U_{i})\\&=\operatorname {Hom} _{\widehat {C}}(F,{\mathcal {H}}om(\eta ,M)),\end{aligned}}} which is to say η ~ {\displaystyle {\widetilde {\eta }}} is a left-adjoint to H o m ( η , − ) {\displaystyle {\mathcal {H}}om(\eta ,-)} . ◻ {\displaystyle \square } The proposition yields several corollaries. For example, the proposition implies that the construction C ↦ C ^ {\displaystyle C\mapsto {\widehat {C}}} is functorial: i.e., each functor C → D {\displaystyle C\to D} determines the functor C ^ → D ^ {\displaystyle {\widehat {C}}\to {\widehat {D}}} . == Variants == A presheaf of spaces on an ∞-category C is a contravariant functor from C to the ∞-category of spaces (for example, the nerve of the category of CW-complexes). It is an ∞-category version of a presheaf of sets, as a "set" is replaced by a "space". The notion is used, among other things, in the ∞-category formulation of Yoneda's lemma that says: C → C ^ {\displaystyle C\to {\widehat {C}}} is fully faithful (here C can be just a simplicial set). A copresheaf of a category C is a presheaf on Cop. In other words, it is a covariant functor from C to Set. == See also == Topos Category of elements Simplicial presheaf (this notion is obtained by replacing "set" with "simplicial set") Presheaf with transfers == Notes == == References == == Further reading == Presheaf at the nLab category of presheaves at the nLab Free cocompletion at the nLab Daniel Dugger, Sheaves and Homotopy Theory, the pdf file provided by nlab.
Wikipedia/Presheaf_(category_theory)
In mathematics, a Lawvere–Tierney topology is an analog of a Grothendieck topology for an arbitrary topos, used to construct a topos of sheaves. A Lawvere–Tierney topology is sometimes also called a local operator or coverage or topology or geometric modality. They were introduced by William Lawvere (1971) and Myles Tierney. == Definition == If E is a topos, then a topology on E is a morphism j from the subobject classifier Ω to Ω such that j preserves truth ( j ∘ true = true {\displaystyle j\circ {\mbox{true}}={\mbox{true}}} ), preserves intersections ( j ∘ ∧ = ∧ ∘ ( j × j ) {\displaystyle j\circ \wedge =\wedge \circ (j\times j)} ), and is idempotent ( j ∘ j = j {\displaystyle j\circ j=j} ). == j-closure == Given a subobject s : S ↣ A {\displaystyle s:S\rightarrowtail A} of an object A with classifier χ s : A → Ω {\displaystyle \chi _{s}:A\rightarrow \Omega } , then the composition j ∘ χ s {\displaystyle j\circ \chi _{s}} defines another subobject s ¯ : S ¯ ↣ A {\displaystyle {\bar {s}}:{\bar {S}}\rightarrowtail A} of A such that s is a subobject of s ¯ {\displaystyle {\bar {s}}} , and s ¯ {\displaystyle {\bar {s}}} is said to be the j-closure of s. Some theorems related to j-closure are (for some subobjects s and w of A): inflationary property: s ⊆ s ¯ {\displaystyle s\subseteq {\bar {s}}} idempotence: s ¯ ≡ s ¯ ¯ {\displaystyle {\bar {s}}\equiv {\bar {\bar {s}}}} preservation of intersections: s ∩ w ¯ ≡ s ¯ ∩ w ¯ {\displaystyle {\overline {s\cap w}}\equiv {\bar {s}}\cap {\bar {w}}} preservation of order: s ⊆ w ⟹ s ¯ ⊆ w ¯ {\displaystyle s\subseteq w\Longrightarrow {\bar {s}}\subseteq {\bar {w}}} stability under pullback: f − 1 ( s ) ¯ ≡ f − 1 ( s ¯ ) {\displaystyle {\overline {f^{-1}(s)}}\equiv f^{-1}({\bar {s}})} . == Examples == Grothendieck topologies on a small category C are essentially the same as Lawvere–Tierney topologies on the topos of presheaves of sets over C. == References == Lawvere, F. W. (1971), "Quantifiers and sheaves" (PDF), Actes du Congrès International des Mathématiciens (Nice, 1970), vol. 1, Paris: Gauthier-Villars, pp. 329–334, MR 0430021, S2CID 2337874, archived from the original (PDF) on 2018-03-17 Mac Lane, Saunders; Moerdijk, Ieke (2012) [1994], Sheaves in geometry and logic. A first introduction to topos theory, Universitext, Springer, ISBN 978-1-4612-0927-0 McLarty, Colin (1995) [1992], Elementary Categories, Elementary Toposes, Oxford Logic Guides, vol. 21, Oxford University Press, p. 196, ISBN 978-0-19-158949-2
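Since the subobject classifier of the topos Set is just the two-element set, the definition above can be checked exhaustively there. A minimal Python sketch (names are ours, not a library API), enumerating all maps Ω → Ω and keeping those that satisfy the three axioms:

```python
from itertools import product

# Exhaustive check of the Lawvere–Tierney axioms in the topos Set,
# where the subobject classifier Omega is the two-element set.

Omega = (False, True)

def meet(p, q):
    return p and q

def is_topology(j):
    preserves_truth = j[True]                                  # j . true = true
    idempotent = all(j[j[p]] == j[p] for p in Omega)           # j . j = j
    preserves_meets = all(j[meet(p, q)] == meet(j[p], j[q])    # j . ^ = ^ . (j x j)
                          for p, q in product(Omega, repeat=2))
    return preserves_truth and idempotent and preserves_meets

# All four maps Omega -> Omega, encoded as dicts.
candidates = [{False: a, True: b} for a, b in product(Omega, repeat=2)]
print([j for j in candidates if is_topology(j)])
# Exactly two survive: the identity map and the constantly-true map,
# the only two Lawvere–Tierney topologies on Set.
```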
Wikipedia/Lawvere–Tierney_topology
In mathematics, the idea of descent extends the intuitive idea of 'gluing' in topology. Since the topologists' glue is the use of equivalence relations on topological spaces, the theory starts with some ideas on identification. == Descent of vector bundles == The case of the construction of vector bundles from data on a disjoint union of topological spaces is a straightforward place to start. Suppose X is a topological space covered by open sets Xi. Let Y be the disjoint union of the Xi, so that there is a natural mapping p : Y → X . {\displaystyle p:Y\rightarrow X.} We think of Y as 'above' X, with the Xi projecting 'down' onto X. With this language, descent starts from a vector bundle on Y (that is, a bundle given on each Xi), and our concern is to 'glue' those bundles Vi to make a single bundle V on X. What we mean is that V should, when restricted to Xi, give back Vi, up to a bundle isomorphism. The data needed is then this: on each overlap X i j , {\displaystyle X_{ij},} the intersection of Xi and Xj, we'll require mappings f i j : V i → V j {\displaystyle f_{ij}:V_{i}\rightarrow V_{j}} to use to identify Vi and Vj there, fiber by fiber. Further, the fij must satisfy conditions based on the reflexive, symmetric and transitive properties of an equivalence relation (gluing conditions). For example, the composition f j k ∘ f i j = f i k {\displaystyle f_{jk}\circ f_{ij}=f_{ik}} for transitivity (and choosing apt notation). The fii should be identity maps and hence symmetry becomes f i j = f j i − 1 {\displaystyle f_{ij}=f_{ji}^{-1}} (so that it is fiberwise an isomorphism). These are indeed standard conditions in fiber bundle theory (see transition map); a small computational check of them appears below. One important application to note is change of fiber: if the fij are all you need to make a bundle, then there are many ways to make an associated bundle. That is, we can take essentially the same fij, acting on various fibers. Another major point is the relation with the chain rule: the discussion there of the way tensor fields are constructed can be summed up as "once you learn to descend the tangent bundle, for which transitivity is the Jacobian chain rule, the rest is just 'naturality of tensor constructions'". To move closer towards the abstract theory we need to interpret the disjoint union of the X i j {\displaystyle X_{ij}} now as Y × X Y , {\displaystyle Y\times _{X}Y,} the fiber product (here an equalizer) of two copies of the projection p. The bundles on the Xij that we must control are Vi and Vj, the pullbacks to the fiber of V via the two different projection maps to X. Therefore, by going to a more abstract level one can eliminate the combinatorial side (that is, leave out the indices) and get something that makes sense for p not of the special form of covering with which we began. This then allows a category theory approach: what remains to do is to re-express the gluing conditions. == History == The ideas were developed in the period 1955–1965 (which was roughly the time at which the requirements of algebraic topology were met but those of algebraic geometry were not). From the point of view of abstract category theory the work of comonads of Beck was a summation of those ideas; see Beck's monadicity theorem. The difficulties of algebraic geometry with passage to the quotient are acute.
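Returning to the gluing conditions of the vector-bundle section above: once concrete fiberwise maps are chosen, they can be verified mechanically. A minimal sketch, with invertible 2x2 integer matrices standing in for the f_ij over a three-piece cover; the specific matrices and names are illustrative assumptions, not data from the text:

```python
from itertools import product

# Checking the gluing (cocycle) conditions from the vector-bundle
# discussion, with 2x2 integer matrices as fiberwise identifications.

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(A):  # inverse of an integer 2x2 matrix of determinant +-1
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    assert d in (1, -1)
    return [[A[1][1] // d, -A[0][1] // d],
            [-A[1][0] // d, A[0][0] // d]]

I = [[1, 0], [0, 1]]
A = [[1, 1], [0, 1]]   # a shear, det 1
B = [[2, 1], [1, 1]]   # det 1

# Prescribe f_12 and f_23 freely, *define* f_13 = f_23 . f_12 so the
# cocycle condition can hold, then fill in f_ii = id (reflexivity)
# and f_ji = f_ij^{-1} (symmetry).
f = {(1, 2): A, (2, 3): B, (1, 3): mul(B, A)}
for i in (1, 2, 3):
    f[(i, i)] = I
for (i, j) in list(f):
    f[(j, i)] = inv(f[(i, j)])

# Transitivity f_jk . f_ij = f_ik on every ordered triple of indices.
assert all(mul(f[(j, k)], f[(i, j)]) == f[(i, k)]
           for i, j, k in product((1, 2, 3), repeat=3))
print("gluing conditions verified")
```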
The urgency (to put it that way) of the problem for the geometers accounts for the title of the 1959 Grothendieck seminar TDTE on theorems of descent and techniques of existence (see FGA) connecting the descent question with the representable functor question in algebraic geometry in general, and the moduli problem in particular. == Fully faithful descent == Let p : X ′ → X {\displaystyle p:X'\to X} . Each sheaf F on X gives rise to a descent datum ( F ′ = p ∗ F , α : p 0 ∗ F ′ ≃ p 1 ∗ F ′ ) , p i : X ″ = X ′ × X X ′ → X ′ {\displaystyle (F'=p^{*}F,\alpha :p_{0}^{*}F'\simeq p_{1}^{*}F'),\,p_{i}:X''=X'\times _{X}X'\to X'} , where α {\displaystyle \alpha } satisfies the cocycle condition p 02 ∗ α = p 12 ∗ α ∘ p 01 ∗ α , p i j : X ′ × X X ′ × X X ′ → X ′ × X X ′ {\displaystyle p_{02}^{*}\alpha =p_{12}^{*}\alpha \circ p_{01}^{*}\alpha ,\,p_{ij}:X'\times _{X}X'\times _{X}X'\to X'\times _{X}X'} . The fully faithful descent says: The functor F ↦ ( F ′ , α ) {\displaystyle F\mapsto (F',\alpha )} is fully faithful. Descent theory gives conditions under which fully faithful descent holds, and under which this functor is an equivalence of categories. == See also == Grothendieck connection Stack (mathematics) Galois descent Grothendieck topology Fibered category Beck's monadicity theorem Cohomological descent Faithfully flat descent == References == SGA 1, Ch VIII – this is the main reference Siegfried Bosch; Werner Lütkebohmert; Michel Raynaud (1990). Néron Models. Ergebnisse der Mathematik und Ihrer Grenzgebiete. 3. Folge. Vol. 21. Springer-Verlag. ISBN 3540505873. Its chapter on descent theory is more accessible than SGA. Pedicchio, Maria Cristina; Tholen, Walter, eds. (2004). Categorical foundations. Special topics in order, topology, algebra, and sheaf theory. Encyclopedia of Mathematics and Its Applications. Vol. 97. Cambridge: Cambridge University Press. ISBN 0-521-83414-7. Zbl 1034.18001. == Further reading == Other possible sources include: Angelo Vistoli, Notes on Grothendieck topologies, fibered categories and descent theory arXiv:math.AG/0412512 Matthieu Romagny, A straight way to algebraic stacks == External links == What is descent theory?
Wikipedia/Descent_(category_theory)
In ring theory, a branch of mathematics, a ring is called a reduced ring if it has no non-zero nilpotent elements. Equivalently, a ring is reduced if it has no non-zero elements with square zero, that is, x2 = 0 implies x = 0. A commutative algebra over a commutative ring is called a reduced algebra if its underlying ring is reduced. The nilpotent elements of a commutative ring R form an ideal of R, called the nilradical of R; therefore a commutative ring is reduced if and only if its nilradical is zero. Moreover, a commutative ring is reduced if and only if the only element contained in all prime ideals is zero. A quotient ring R/I is reduced if and only if I is a radical ideal. Let N R {\displaystyle {\mathcal {N}}_{R}} denote the nilradical of a commutative ring R {\displaystyle R} . There is a functor R ↦ R / N R {\displaystyle R\mapsto R/{\mathcal {N}}_{R}} of the category of commutative rings Crng {\displaystyle {\text{Crng}}} into the category of reduced rings Red {\displaystyle {\text{Red}}} and it is left adjoint to the inclusion functor I {\displaystyle I} of Red {\displaystyle {\text{Red}}} into Crng {\displaystyle {\text{Crng}}} . The natural bijection Hom Red ( R / N R , S ) ≅ Hom Crng ( R , I ( S ) ) {\displaystyle {\text{Hom}}_{\text{Red}}(R/{\mathcal {N}}_{R},S)\cong {\text{Hom}}_{\text{Crng}}(R,I(S))} is induced from the universal property of quotient rings. Let D be the set of all zero-divisors in a reduced ring R. Then D is the union of all minimal prime ideals. Over a Noetherian ring R, we say a finitely generated module M has locally constant rank if p ↦ dim k ( p ) ⁡ ( M ⊗ k ( p ) ) {\displaystyle {\mathfrak {p}}\mapsto \operatorname {dim} _{k({\mathfrak {p}})}(M\otimes k({\mathfrak {p}}))} is a locally constant (or equivalently continuous) function on Spec R. Then R is reduced if and only if every finitely generated module of locally constant rank is projective. == Examples and non-examples == Subrings, products, and localizations of reduced rings are again reduced rings. The ring of integers Z is a reduced ring. Every field and every polynomial ring over a field (in arbitrarily many variables) is a reduced ring. More generally, every integral domain is a reduced ring since a nilpotent element is a fortiori a zero-divisor. On the other hand, not every reduced ring is an integral domain; for example, the ring Z[x, y]/(xy) contains x + (xy) and y + (xy) as zero-divisors, but no non-zero nilpotent elements. As another example, the ring Z × Z contains (1, 0) and (0, 1) as zero-divisors, but contains no non-zero nilpotent elements. The ring Z/6Z is reduced; however, Z/4Z is not reduced: the class 2 + 4Z is nilpotent. In general, Z/nZ is reduced if and only if n = 0 or n is square-free. If R is a commutative ring and N is its nilradical, then the quotient ring R/N is reduced. A commutative ring R of prime characteristic p is reduced if and only if its Frobenius endomorphism is injective (cf. perfect field). == Generalizations == Reduced rings play an elementary role in algebraic geometry, where this concept is generalized to the notion of a reduced scheme. == See also == Total quotient ring § The total ring of fractions of a reduced ring == Notes == == References == N. Bourbaki, Commutative Algebra, Hermann Paris 1972, Chap. II, § 2.7 N. Bourbaki, Algebra, Springer 1990, Chap. V, § 6.7 Eisenbud, David (1995). Commutative Algebra with a View Toward Algebraic Geometry. Graduate Texts in Mathematics. Springer-Verlag. ISBN 0-387-94268-8.
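The Z/nZ criterion above lends itself to a brute-force check. A minimal Python sketch (illustrative names), using the article's equivalence that reducedness only requires ruling out nonzero elements of square zero:

```python
# Brute-force check: Z/nZ is reduced iff n is squarefree (the case
# n = 0, i.e. Z itself, is excluded here).  By the equivalence stated
# at the top of the article, it suffices to look for nonzero x with
# x^2 = 0 mod n.

def is_reduced_mod(n):
    return all(x * x % n != 0 for x in range(1, n))

def is_squarefree(n):
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

assert all(is_reduced_mod(n) == is_squarefree(n) for n in range(2, 200))
print("Z/4Z reduced?", is_reduced_mod(4))   # False: 2 + 4Z squares to zero
print("Z/6Z reduced?", is_reduced_mod(6))   # True
```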
Wikipedia/Reduced_(ring_theory)
In mathematics, and more particularly in set theory, a cover (or covering) of a set X {\displaystyle X} is a family of subsets of X {\displaystyle X} whose union is all of X {\displaystyle X} . More formally, if C = { U α : α ∈ A } {\displaystyle C=\lbrace U_{\alpha }:\alpha \in A\rbrace } is an indexed family of subsets U α ⊂ X {\displaystyle U_{\alpha }\subset X} (indexed by the set A {\displaystyle A} ), then C {\displaystyle C} is a cover of X {\displaystyle X} if ⋃ α ∈ A U α = X . {\displaystyle \bigcup _{\alpha \in A}U_{\alpha }=X.} Thus the collection { U α : α ∈ A } {\displaystyle \lbrace U_{\alpha }:\alpha \in A\rbrace } is a cover of X {\displaystyle X} if each element of X {\displaystyle X} belongs to at least one of the subsets U α {\displaystyle U_{\alpha }} . == Definition == Covers are commonly used in the context of topology. If the set X {\displaystyle X} is a topological space, then a cover C {\displaystyle C} of X {\displaystyle X} is a collection of subsets { U α } α ∈ A {\displaystyle \{U_{\alpha }\}_{\alpha \in A}} of X {\displaystyle X} whose union is the whole space X = ⋃ α ∈ A U α {\displaystyle X=\bigcup _{\alpha \in A}U_{\alpha }} . In this case C {\displaystyle C} is said to cover X {\displaystyle X} , or the sets U α {\displaystyle U_{\alpha }} are said to cover X {\displaystyle X} . If Y {\displaystyle Y} is a (topological) subspace of X {\displaystyle X} , then a cover of Y {\displaystyle Y} is a collection of subsets C = { U α } α ∈ A {\displaystyle C=\{U_{\alpha }\}_{\alpha \in A}} of X {\displaystyle X} whose union contains Y {\displaystyle Y} . That is, C {\displaystyle C} is a cover of Y {\displaystyle Y} if Y ⊆ ⋃ α ∈ A U α . {\displaystyle Y\subseteq \bigcup _{\alpha \in A}U_{\alpha }.} Here, Y {\displaystyle Y} may be covered with either sets in Y {\displaystyle Y} itself or sets in the parent space X {\displaystyle X} . A cover of X {\displaystyle X} is said to be locally finite if every point of X {\displaystyle X} has a neighborhood that intersects only finitely many sets in the cover. Formally, C = { U α } {\displaystyle C=\{U_{\alpha }\}} is locally finite if, for any x ∈ X {\displaystyle x\in X} , there exists some neighborhood N ( x ) {\displaystyle N(x)} of x {\displaystyle x} such that the set { α ∈ A : U α ∩ N ( x ) ≠ ∅ } {\displaystyle \left\{\alpha \in A:U_{\alpha }\cap N(x)\neq \varnothing \right\}} is finite. A cover of X {\displaystyle X} is said to be point finite if every point of X {\displaystyle X} is contained in only finitely many sets in the cover. Every locally finite cover is point finite, though the converse is not necessarily true. == Subcover == Let C {\displaystyle C} be a cover of a topological space X {\displaystyle X} . A subcover of C {\displaystyle C} is a subset of C {\displaystyle C} that still covers X {\displaystyle X} . The cover C {\displaystyle C} is said to be an open cover if each of its members is an open set. That is, each U α {\displaystyle U_{\alpha }} belongs to T {\displaystyle T} , where T {\displaystyle T} is the topology on X. A simple way to get a subcover is to omit the sets contained in another set in the cover. Consider specifically open covers. Let B {\displaystyle {\mathcal {B}}} be a topological basis of X {\displaystyle X} and O {\displaystyle {\mathcal {O}}} be an open cover of X {\displaystyle X} . First, take A = { A ∈ B : there exists U ∈ O such that A ⊆ U } {\displaystyle {\mathcal {A}}=\{A\in {\mathcal {B}}:{\text{ there exists }}U\in {\mathcal {O}}{\text{ such that }}A\subseteq U\}} .
Then A {\displaystyle {\mathcal {A}}} is a refinement of O {\displaystyle {\mathcal {O}}} . Next, for each A ∈ A , {\displaystyle A\in {\mathcal {A}},} one may select a U A ∈ O {\displaystyle U_{A}\in {\mathcal {O}}} containing A {\displaystyle A} (requiring the axiom of choice). Then C = { U A ∈ O : A ∈ A } {\displaystyle {\mathcal {C}}=\{U_{A}\in {\mathcal {O}}:A\in {\mathcal {A}}\}} is a subcover of O . {\displaystyle {\mathcal {O}}.} Hence every open cover has a subcover whose cardinality is at most that of any topological basis; in particular, second countability implies that the space is Lindelöf. == Refinement == A refinement of a cover C {\displaystyle C} of a topological space X {\displaystyle X} is a new cover D {\displaystyle D} of X {\displaystyle X} such that every set in D {\displaystyle D} is contained in some set in C {\displaystyle C} . Formally, D = { V β } β ∈ B {\displaystyle D=\{V_{\beta }\}_{\beta \in B}} is a refinement of C = { U α } α ∈ A {\displaystyle C=\{U_{\alpha }\}_{\alpha \in A}} if for all β ∈ B {\displaystyle \beta \in B} there exists α ∈ A {\displaystyle \alpha \in A} such that V β ⊆ U α . {\displaystyle V_{\beta }\subseteq U_{\alpha }.} In other words, there is a refinement map ϕ : B → A {\displaystyle \phi :B\to A} satisfying V β ⊆ U ϕ ( β ) {\displaystyle V_{\beta }\subseteq U_{\phi (\beta )}} for every β ∈ B . {\displaystyle \beta \in B.} This map is used, for instance, in the Čech cohomology of X {\displaystyle X} . Every subcover is also a refinement, but the opposite is not always true. A subcover is made from the sets that are in the cover, but omitting some of them; whereas a refinement is made from any sets that are subsets of the sets in the cover. The refinement relation on the set of covers of X {\displaystyle X} is transitive and reflexive, i.e. it is a preorder. It is never asymmetric for X ≠ ∅ {\displaystyle X\neq \emptyset } . Generally speaking, a refinement of a given structure is another that in some sense contains it. Examples are to be found when partitioning an interval (one refinement of a 0 < a 1 < ⋯ < a n {\displaystyle a_{0}<a_{1}<\cdots <a_{n}} being a 0 < b 0 < a 1 < a 2 < ⋯ < a n − 1 < b 1 < a n {\displaystyle a_{0}<b_{0}<a_{1}<a_{2}<\cdots <a_{n-1}<b_{1}<a_{n}} ), considering topologies (the standard topology in Euclidean space being a refinement of the trivial topology). When subdividing simplicial complexes (the first barycentric subdivision of a simplicial complex is a refinement), the situation is slightly different: every simplex in the finer complex is a face of some simplex in the coarser one, and both have equal underlying polyhedra. Yet another notion of refinement is that of star refinement. == Compactness == The language of covers is often used to define several topological properties related to compactness. A topological space X {\displaystyle X} is said to be: compact if every open cover has a finite subcover (or equivalently, every open cover has a finite refinement); Lindelöf if every open cover has a countable subcover (or equivalently, every open cover has a countable refinement); metacompact if every open cover has a point-finite open refinement; paracompact if every open cover admits a locally finite open refinement; and orthocompact if every open cover has an interior-preserving open refinement. For some more variations see the above articles.
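In the finite case the notions of cover, subcover, and refinement defined above can be tested directly. A small Python sketch, with sets of integers standing in for open sets; all names are ours:

```python
from itertools import chain

# Finite sanity checks of the cover / subcover / refinement definitions.

X = set(range(10))

def is_cover(family):
    return set(chain.from_iterable(family)) == X      # union is all of X

def is_refinement(D, C):
    return all(any(V <= U for U in C) for V in D)     # each V inside some U

C = [set(range(0, 6)), set(range(4, 10)), {4, 5}]
assert is_cover(C)

# The "simple way to get a subcover" mentioned above: omit any set
# properly contained in another set of the cover.
sub = [U for U in C if not any(U < W for W in C)]
assert is_cover(sub) and all(U in C for U in sub)     # still a cover, still a subset

# A refinement that is not a subcover: the cover by singletons.
D = [{x} for x in X]
assert is_cover(D) and is_refinement(D, C)
assert not all(V in C for V in D)                     # refines C without being part of it
print("subcover:", sub)
```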
== Covering dimension == A topological space X is said to be of covering dimension n if every open cover of X has a point-finite open refinement such that no point of X is included in more than n+1 sets in the refinement and if n is the minimum value for which this is true. If no such minimal n exists, the space is said to be of infinite covering dimension. == See also == Atlas (topology) – Set of charts that describes a manifold Bornology – Mathematical generalization of boundedness Covering space – Type of continuous map in topology Grothendieck topology – Structure on a category C that makes the objects of C act like the open sets of a topological space Partition of a set – Mathematical ways to group elements of a set Set cover problem – Classical problem in combinatorics Star refinement Subpaving – Geometrical object == References == Introduction to Topology, Second Edition, Theodore W. Gamelin & Robert Everist Greene. Dover Publications 1999. ISBN 0-486-40680-6 Kelley, John L. (1975) [1955]. General Topology. Graduate Texts in Mathematics. Vol. 27 (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-90125-1. OCLC 1365153. == External links == "Covering (of a set)", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia/Cover_(topology)
In category theory, a branch of mathematics, a sieve is a way of choosing arrows with a common codomain. It is a categorical analogue of a collection of open subsets of a fixed open set in topology. In a Grothendieck topology, certain sieves become categorical analogues of open covers in topology. Sieves were introduced by Giraud (1964) in order to reformulate the notion of a Grothendieck topology. == Definition == Let C be a category, and let c be an object of C. A sieve S : C o p → S e t {\displaystyle S\colon C^{\rm {op}}\to {\rm {Set}}} on c is a subfunctor of Hom(−, c), i.e., for all objects c′ of C, S(c′) ⊆ Hom(c′, c), and for all arrows f:c″→c′, S(f) is the restriction of Hom(f, c), the pullback by f (in the sense of precomposition, not of fiber products), to S(c′); see the next section, below. Put another way, a sieve is a collection S of arrows with a common codomain that satisfies the condition, "If g:c′→c is an arrow in S, and if f:c″→c′ is any other arrow in C, then gf is in S." Consequently, sieves are similar to right ideals in ring theory or filters in order theory. == Pullback of sieves == The most common operation on a sieve is pullback. Pulling back a sieve S on c by an arrow f:c′→c gives a new sieve f*S on c′. This new sieve consists of all the arrows in S that factor through c′. There are several equivalent ways of defining f*S. The simplest is: For any object d of C, f*S(d) = { g:d→c′ | fg ∈ S(d)} A more abstract formulation is: f*S is the image of the fibered product S ×_{Hom(−, c)} Hom(−, c′) under the natural projection S ×_{Hom(−, c)} Hom(−, c′) → Hom(−, c′). Here the map Hom(−, c′) → Hom(−, c) is Hom(−, f), the push forward by f. The latter formulation suggests that we can also take the image of S ×_{Hom(−, c)} Hom(−, c′) under the natural map to Hom(−, c). This will be the image of f*S under composition with f. For each object d of C, this sieve will consist of all arrows fg, where g:d→c′ is an arrow of f*S(d). In other words, it consists of all arrows in S that can be factored through f. If we denote by ∅c the empty sieve on c, that is, the sieve for which ∅c(d) is always the empty set, then for any f:c′→c, f*∅c is ∅c′. Furthermore, f*Hom(−, c) = Hom(−, c′). == Properties of sieves == Let S and S′ be two sieves on c. We say that S ⊆ S′ if for all objects c′ of C, S(c′) ⊆ S′(c′). For all objects d of C, we define (S ∪ S′)(d) to be S(d) ∪ S′(d) and (S ∩ S′)(d) to be S(d) ∩ S′(d). We can clearly extend this definition to infinite unions and intersections as well. If we define SieveC(c) (or Sieve(c) for short) to be the set of all sieves on c, then Sieve(c) becomes partially ordered under ⊆. It is easy to see from the definition that the union or intersection of any family of sieves on c is a sieve on c, so Sieve(c) is a complete lattice. A Grothendieck topology is a collection of sieves subject to certain properties. These sieves are called covering sieves. The set of all covering sieves on an object c is a subset J(c) of Sieve(c). J(c) satisfies several properties in addition to those required by the definition: If S and S′ are sieves on c, S ⊆ S′, and S ∈ J(c), then S′ ∈ J(c). Finite intersections of elements of J(c) are in J(c). Consequently, J(c) is also a distributive lattice, and it is cofinal in Sieve(c). == References == Artin, Michael; Alexandre Grothendieck; Jean-Louis Verdier, eds. (1972). Séminaire de Géométrie Algébrique du Bois Marie - 1963-64 - Théorie des topos et cohomologie étale des schémas - (SGA 4) - vol. 1. Lecture notes in mathematics (in French). Vol. 269.
Berlin; New York: Springer-Verlag. xix+525. doi:10.1007/BFb0081551. ISBN 978-3-540-05896-0. Giraud, Jean (1964), "Analysis situs", Séminaire Bourbaki, 1962/63. Fasc. 3, Paris: Secrétariat mathématique, MR 0193122 Pedicchio, Maria Cristina; Tholen, Walter, eds. (2004). Categorical foundations. Special topics in order, topology, algebra, and sheaf theory. Encyclopedia of Mathematics and Its Applications. Vol. 97. Cambridge: Cambridge University Press. ISBN 0-521-83414-7. Zbl 1034.18001.
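A poset, viewed as a category, makes the definition above very concrete: a sieve on c is just a downward-closed set of elements below c, and pullback is restriction. A minimal Python sketch on the divisibility poset of the divisors of 12; the choice of poset and all names are illustrative:

```python
# In a poset regarded as a category there is an arrow x -> y exactly
# when x <= y, so a sieve on c is a downward-closed subset of the
# elements below c, and the right-ideal condition is down-closure.

def divisors(n):
    return {d for d in range(1, n + 1) if n % d == 0}

def is_sieve(S, c):
    below_c = all(x in divisors(c) for x in S)                 # arrows into c
    down_closed = all(d in S for x in S for d in divisors(x))  # right-ideal condition
    return below_c and down_closed

S = {1, 2, 4}                       # the divisors of 4: a sieve on c = 12
assert is_sieve(S, 12)
assert not is_sieve({1, 4}, 12)     # misses 2, so not downward closed

# Pullback along the arrow 6 -> 12 (that is, 6 | 12): keep what lies below 6.
def pullback(S, c_prime):
    return {x for x in S if x in divisors(c_prime)}

assert is_sieve(pullback(S, 6), 6)  # {1, 2}, a sieve on 6
print(pullback(S, 6))
```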
Wikipedia/Sieve_(category_theory)
In mathematics, the flat topology is a Grothendieck topology used in algebraic geometry. It is used to define the theory of flat cohomology; it also plays a fundamental role in the theory of descent (faithfully flat descent). The term flat here comes from flat modules. There are several slightly different flat topologies, the most common of which are the fppf topology and the fpqc topology. fppf stands for fidèlement plate de présentation finie, and in this topology, a morphism of affine schemes is a covering morphism if it is faithfully flat and of finite presentation. fpqc stands for fidèlement plate et quasi-compacte, and in this topology, a morphism of affine schemes is a covering morphism if it is faithfully flat. In both categories, a covering family is defined to be a family which is a cover on Zariski open subsets. In the fpqc topology, any faithfully flat and quasi-compact morphism is a cover. These topologies are closely related to descent. The "pure" faithfully flat topology without any further finiteness conditions such as quasi-compactness or finite presentation is not used much as it is not subcanonical; in other words, representable functors need not be sheaves. Unfortunately the terminology for flat topologies is not standardized. Some authors use the term "topology" for a pretopology, and there are several slightly different pretopologies sometimes called the fppf or fpqc (pre)topology, which sometimes give the same topology. Flat cohomology was introduced by Grothendieck in about 1960. == The big and small fppf sites == Let X be an affine scheme. We define an fppf cover of X to be a finite and jointly surjective family of morphisms (φa : Xa → X) with each Xa affine and each φa flat, finitely presented. This generates a pretopology: for X arbitrary, we define an fppf cover of X to be a family (φa : Xa → X) which is an fppf cover after base changing to an open affine subscheme of X. This pretopology generates a topology called the fppf topology. (This is not the same as the topology we would get if we started with arbitrary X and Xa and took covering families to be jointly surjective families of flat, finitely presented morphisms.) We write Fppf for the category of schemes with the fppf topology. The small fppf site of X is the category O(Xfppf) whose objects are schemes U with a fixed morphism U → X which is part of some covering family. (This does not imply that the morphism is flat, finitely presented.) The morphisms are morphisms of schemes compatible with the fixed maps to X. The large fppf site of X is the category Fppf/X, that is, the category of schemes with a fixed map to X, considered with the fppf topology. "Fppf" is an abbreviation for "fidèlement plate de présentation finie", that is, "faithfully flat and of finite presentation". Every surjective family of flat and finitely presented morphisms is a covering family for this topology, hence the name. The definition of the fppf pretopology can also be given with an extra quasi-finiteness condition; it follows from Corollary 17.16.2 in EGA IV4 that this gives the same topology. == The big and small fpqc sites == Let X be an affine scheme. We define an fpqc cover of X to be a finite and jointly surjective family of morphisms {uα : Xα → X} with each Xα affine and each uα flat. This generates a pretopology: For X arbitrary, we define an fpqc cover of X to be a family {uα : Xα → X} which is an fpqc cover after base changing to an open affine subscheme of X. This pretopology generates a topology called the fpqc topology.
(This is not the same as the topology we would get if we started with arbitrary X and Xα and took covering families to be jointly surjective families of flat morphisms.) We write Fpqc for the category of schemes with the fpqc topology. The small fpqc site of X is the category O(Xfpqc) whose objects are schemes U with a fixed morphism U → X which is part of some covering family. The morphisms are morphisms of schemes compatible with the fixed maps to X. The large fpqc site of X is the category Fpqc/X, that is, the category of schemes with a fixed map to X, considered with the fpqc topology. "Fpqc" is an abbreviation for "fidèlement plate quasi-compacte", that is, "faithfully flat and quasi-compact". Every surjective family of flat and quasi-compact morphisms is a covering family for this topology, hence the name. == Flat cohomology == The procedure for defining the cohomology groups is the standard one: cohomology is defined as the sequence of derived functors of the functor taking the sections of a sheaf of abelian groups. While such groups have a number of applications, they are not in general easy to compute, except in cases where they reduce to other theories, such as the étale cohomology. == Example == The following example shows why the "faithfully flat topology" without any finiteness conditions does not behave well. Suppose X is the affine line over an algebraically closed field k. For each closed point x of X we can consider the local ring Rx at this point, which is a discrete valuation ring whose spectrum has one closed point and one open (generic) point. We glue these spectra together by identifying their open points to get a scheme Y. There is a natural map from Y to X. The affine line X is covered by the sets Spec(Rx) which are open in the faithfully flat topology, and each of these sets has a natural map to Y, and these maps are the same on intersections. However they cannot be combined to give a map from X to Y, because the underlying spaces of X and Y have different topologies. == See also == fpqc morphism == Notes == == References == Éléments de géométrie algébrique, Vol. IV. 2 Milne, James S. (1980), Étale Cohomology, Princeton University Press, ISBN 978-0-691-08238-7 Michael Artin and J. S. Milne, "Duality in the flat cohomology of curves", Inventiones Mathematicae, Volume 35, Number 1, December, 1976 == External links == Arithmetic Duality Theorems (PDF), online book by James Milne, explains at the level of flat cohomology duality theorems originating in the Tate–Poitou duality of Galois cohomology
Wikipedia/Flat_topology
In algebraic geometry, the h topology is a Grothendieck topology introduced by Vladimir Voevodsky to study the homology of schemes. It combines several good properties possessed by its related "sub"topologies, such as the qfh and cdh topologies. It has subsequently been used by Beilinson to study p-adic Hodge theory, in Bhatt and Scholze's work on projectivity of the affine Grassmannian, Huber and Jörder's study of differential forms, etc. == Definition == Voevodsky defined the h topology to be the topology associated to finite families { p i : U i → X } {\displaystyle \{p_{i}:U_{i}\to X\}} of morphisms of finite type such that ⨿ U i → X {\displaystyle \amalg U_{i}\to X} is a universal topological epimorphism (i.e., a subset of the target is open if and only if its preimage is open, and any base change also has this property). Voevodsky worked with this topology exclusively on categories S c h / S f t {\displaystyle Sch_{/S}^{ft}} of schemes of finite type over a Noetherian base scheme S. Bhatt-Scholze define the h topology on the category S c h / S f p {\displaystyle Sch_{/S}^{fp}} of schemes of finite presentation over a qcqs base scheme S {\displaystyle S} to be generated by v {\displaystyle v} -covers of finite presentation. They show (generalising results of Voevodsky) that the h topology is generated by: fppf-coverings, and families of the form { X ′ → X , Z → X } {\displaystyle \{X'\to X,Z\to X\}} where X ′ → X {\displaystyle X'\to X} is a proper morphism of finite presentation, Z → X {\displaystyle Z\to X} is a closed immersion of finite presentation, and X ′ → X {\displaystyle X'\to X} is an isomorphism over X ∖ Z {\displaystyle X\setminus Z} . Note that X ′ = ∅ {\displaystyle X'=\varnothing } is allowed in an abstract blowup, in which case Z is a nilimmersion of finite presentation. == Examples == The h-topology is not subcanonical, so representable presheaves are almost never h-sheaves. However, the h-sheafifications of representable sheaves are interesting and useful objects; while presheaves of relative cycles are not representable, their associated h-sheaves are representable in the sense that there exists a disjoint union of quasi-projective schemes whose h-sheafifications agree with these h-sheaves of relative cycles. Any h-sheaf in positive characteristic satisfies F ( X ) = F ( X p e r f ) {\displaystyle F(X)=F(X^{perf})} where we interpret X p e r f {\displaystyle X^{perf}} as the colimit colim ⁡ ( F ( X ) → Frob F ( X ) → Frob … ) {\displaystyle \operatorname {colim} (F(X){\stackrel {\text{Frob}}{\to }}F(X){\stackrel {\text{Frob}}{\to }}\dots )} over the Frobenii (if the Frobenius is of finite presentation, and if not, use an analogous colimit consisting of morphisms of finite presentation). In fact, (in positive characteristic) the h-sheafification O h {\displaystyle {\mathcal {O}}_{h}} of the structure sheaf O {\displaystyle {\mathcal {O}}} is given by O h ( X ) = O ( X p e r f ) {\displaystyle {\mathcal {O}}_{h}(X)={\mathcal {O}}(X^{perf})} . So the structure sheaf "is an h-sheaf on the category of perfect schemes" (although this sentence doesn't really make sense mathematically since morphisms between perfect schemes are almost never of finite presentation). In characteristic zero similar results hold with perfection replaced by semi-normalisation.
Huber-Jörder study the h-sheafification Ω h n {\displaystyle \Omega _{h}^{n}} of the presheaf X ↦ Γ ( X , Ω X / k n ) {\displaystyle X\mapsto \Gamma (X,\Omega _{X/k}^{n})} of Kähler differentials on categories of schemes of finite type over a characteristic zero base field k {\displaystyle k} . They show that if X is smooth, then Ω h n ( X ) = Γ ( X , Ω X / k n ) {\displaystyle \Omega _{h}^{n}(X)=\Gamma (X,\Omega _{X/k}^{n})} , and for various nice non-smooth X, the sheaf Ω h n {\displaystyle \Omega _{h}^{n}} recovers objects such as reflexive differentials and torsion-free differentials. Since the Frobenius is an h-covering, in positive characteristic we get Ω h n = 0 {\displaystyle \Omega _{h}^{n}=0} for n > 0 {\displaystyle n>0} , but analogous results are true if we replace the h-topology with the cdh-topology. By the Nullstellensatz, a morphism of finite presentation X → Spec ⁡ ( k ) {\displaystyle X\to \operatorname {Spec} (k)} towards the spectrum of a field k {\displaystyle k} admits a section up to finite extension. That is, there exists a finite field extension L / k {\displaystyle L/k} and a factorisation Spec ⁡ ( L ) → X → Spec ⁡ ( k ) {\displaystyle \operatorname {Spec} (L)\to X\to \operatorname {Spec} (k)} . Consequently, for any presheaf F {\displaystyle F} and field k {\displaystyle k} we have F h ( k ) = F e t ( k p e r f ) {\displaystyle F_{h}(k)=F_{et}(k^{perf})} where F h {\displaystyle F_{h}} , resp. F e t {\displaystyle F_{et}} , denotes the h-sheafification, resp. etale sheafification. == Properties == As mentioned above, in positive characteristic, any h-sheaf satisfies F ( X ) = F ( X p e r f ) {\displaystyle F(X)=F(X^{perf})} . In characteristic zero, we have F ( X ) = F ( X s n ) {\displaystyle F(X)=F(X^{sn})} where X s n {\displaystyle X^{sn}} is the semi-normalisation (the scheme with the same underlying topological space, but the structure sheaf is replaced with its termwise seminormalisation). Since the h-topology is finer than the Zariski topology, every scheme admits an h-covering by affine schemes. Using abstract blowups and Noetherian induction, if k {\displaystyle k} is a field admitting resolution of singularities (e.g., a characteristic zero field) then any scheme of finite type over k {\displaystyle k} admits an h-covering by smooth k {\displaystyle k} -schemes. More generally, in any situation where de Jong's theorem on alterations is valid we can find h-coverings by regular schemes. Since finite morphisms are h-coverings, algebraic correspondences are finite sums of morphisms. == cdh topology == The cdh topology on the category S c h / S f p {\displaystyle Sch_{/S}^{fp}} of schemes of finite presentation over a qcqs base scheme S {\displaystyle S} is generated by: Nisnevich coverings, and families of the form { X ′ → X , Z → X } {\displaystyle \{X'\to X,Z\to X\}} where X ′ → X {\displaystyle X'\to X} is a proper morphism of finite presentation, Z → X {\displaystyle Z\to X} is a closed immersion of finite presentation, and X ′ → X {\displaystyle X'\to X} is an isomorphism over X ∖ Z {\displaystyle X\setminus Z} . It is the universal topology with a "good" theory of compact supports. The cd stands for completely decomposed (in the same sense it is used for the Nisnevich topology). As mentioned in the examples section, over a field admitting resolution of singularities, any variety admits a cdh-covering by smooth varieties. 
This topology is heavily used in the study of Voevodsky motives with integral coefficients (with rational coefficients the h-topology together with de Jong alterations is used). Since the Frobenius is not a cdh-covering, the cdh-topology is also a useful replacement for the h-topology in the study of differentials in positive characteristic. Rather confusingly, there are completely decomposed h-coverings which are not cdh-coverings, for example the completely decomposed family of flat morphisms { A 1 → x ↦ x 2 A 1 , A 1 ∖ { 0 } → x ↦ x A 1 } {\displaystyle \{\mathbb {A} ^{1}{\stackrel {x\mapsto x^{2}}{\to }}\mathbb {A} ^{1},\mathbb {A} ^{1}\setminus \{0\}{\stackrel {x\mapsto x}{\to }}\mathbb {A} ^{1}\}} . == Relation to v-topology and arc-topology == The v-topology (or universally subtrusive topology) is equivalent to the h-topology on the category S c h S f t {\displaystyle Sch_{S}^{ft}} of schemes of finite type over a Noetherian base scheme S. Indeed, a morphism in S c h S f t {\displaystyle Sch_{S}^{ft}} is universally subtrusive if and only if it is universally submersive Rydh (2010, Cor.2.10). In other words, S h v h ( S c h S f t ) = S h v v ( S c h S f t ) , ( S Noetherian ) {\displaystyle Shv_{h}(Sch_{S}^{ft})=Shv_{v}(Sch_{S}^{ft}),\qquad (S\ {\textrm {Noetherian}})} More generally, on the category S c h {\displaystyle Sch} of all qcqs schemes, neither the v- nor the h-topology is finer than the other: S h v h ( S c h ) ⊄ S h v v ( S c h ) {\displaystyle Shv_{h}(Sch)\not \subset Shv_{v}(Sch)} and S h v v ( S c h ) ⊄ S h v h ( S c h ) {\displaystyle Shv_{v}(Sch)\not \subset Shv_{h}(Sch)} . There are v-covers which are not h-covers (e.g., S p e c ( C ( x ) ) → S p e c ( C ) {\displaystyle Spec(\mathbb {C} (x))\to Spec(\mathbb {C} )} ) and h-covers which are not v-covers (e.g., S p e c ( R / p ) ⊔ S p e c ( R p ) → S p e c ( R ) {\displaystyle Spec(R/{\mathfrak {p}})\sqcup Spec(R_{\mathfrak {p}})\to Spec(R)} where R is a valuation ring of rank 2 and p {\displaystyle {\mathfrak {p}}} is the non-open, non-closed prime Rydh (2010, Example 4.3)). However, we could define an h-analogue of the fpqc topology by saying that an hqc-covering is a family { T i → T } i ∈ I {\displaystyle \{T_{i}\to T\}_{i\in I}} such that for each affine open U ⊆ T {\displaystyle U\subseteq T} there exists a finite set K, a map i : K → I {\displaystyle i:K\to I} and affine opens U i ( k ) ⊆ T i ( k ) × T U {\displaystyle U_{i(k)}\subseteq T_{i(k)}\times _{T}U} such that ⊔ k ∈ K U i ( k ) → U {\displaystyle \sqcup _{k\in K}U_{i(k)}\to U} is universally submersive (with no finiteness conditions). Then every v-covering is an hqc-covering. S h v h q c ( S c h ) ⊊ S h v v ( S c h ) . {\displaystyle Shv_{hqc}(Sch)\subsetneq Shv_{v}(Sch).} Indeed, any subtrusive morphism is submersive (this is an easy exercise using Rydh (2010, Cor.1.5 and Def.2.2)). By a theorem of Rydh, for a map f : Y → X {\displaystyle f:Y\to X} of qcqs schemes with X {\displaystyle X} Noetherian, f {\displaystyle f} is a v-cover if and only if it is an arc-cover (for the statement in this form see Bhatt & Mathew (2018, Prop.2.6)). That is, in the Noetherian setting everything said above for the v-topology is valid for the arc-topology. == Notes == == Further reading == Bhatt, Bhargav; Mathew, Akhil (2018), The arc-topology, arXiv:1807.04725v2 Rydh, David (2010), "Submersions and effective descent of étale morphisms", Bull. Soc. Math. France, 138 (2): 181–230, arXiv:0710.2488, doi:10.24033/bsmf.2588, MR 2679038, S2CID 17484591
Wikipedia/Cdh_topology
The Science Citation Index Expanded (SCIE) is a citation index owned by Clarivate and previously by Thomson Reuters. It was created by Eugene Garfield at the Institute for Scientific Information, launched in 1964 as Science Citation Index (SCI). It was later distributed via CD/DVD and became available online in 1997, when it acquired the current name. The indexing database covers more than 9,200 notable and significant journals, across 178 disciplines, from 1900 to the present. These are alternatively described as the world's leading journals of science and technology because of a rigorous selection process. == Accessibility == The index is available online within Web of Science, as part of its Core Collection (there are also CD and printed editions, covering a smaller number of journals). The database allows researchers to search through over 53 million records from thousands of academic journals that were published by publishers from around the world. == Specialty citation indexes == Clarivate previously marketed several subsets of this database, termed "Specialty Citation Indexes", such as the Neuroscience Citation Index and the Chemistry Citation Index; however, these databases are no longer actively maintained. The Chemistry Citation Index was first introduced by Eugene Garfield, a chemist by training. His original "search examples were based on [his] experience as a chemist". In 1992, an electronic and print form of the index was derived from a core of 330 chemistry journals, within which all areas were covered. Additional information was provided from articles selected from 4,000 other journals. All chemistry subdisciplines were covered: organic, inorganic, analytical, physical chemistry, polymer, computational, organometallic, materials chemistry, and electrochemistry. By 2002, the core journal coverage increased to 500 and related article coverage increased to 8,000 other journals. One 1980 study reported the overall citation indexing benefits for chemistry, examining the use of citations as a tool for the study of the sociology of chemistry and illustrating the use of citation data to "observe" chemistry subfields over time. == See also == Arts and Humanities Citation Index, which covers 1,130 journals, beginning with 1975. Emerging Sources Citation Index (ESCI) Google Scholar Impact factor List of academic databases and search engines Journal Citation Reports Social Sciences Citation Index, which covers 1,700 journals, beginning with 1956. == References == == Further reading == Borgman, Christine L.; Furner, Jonathan (2005). "Scholarly Communication and Bibliometrics" (PDF). Annual Review of Information Science and Technology. 36 (1): 3–72. CiteSeerX 10.1.1.210.6040. doi:10.1002/aris.1440360102. Meho, Lokman I.; Yang, Kiduk (2007). "Impact of data sources on citation counts and rankings of LIS faculty: Web of science versus scopus and google scholar" (PDF). Journal of the American Society for Information Science and Technology. 58 (13): 2105. doi:10.1002/asi.20677. Garfield, E.; Sher, I. H. (1963). "New factors in the evaluation of scientific literature through citation indexing" (PDF). American Documentation. 14 (3): 195. doi:10.1002/asi.5090140304. Garfield, E. (1970). "Citation Indexing for Studying Science" (PDF). Nature. 227 (5259): 669–71. Bibcode:1970Natur.227..669G. doi:10.1038/227669a0. PMID 4914589. S2CID 4200369. Garfield, E. (1979). Citation Indexing: Its Theory and Application in Science, Technology, and Humanities. Information Sciences Series.
New York: Wiley-Interscience. ISBN 978-0-89495-024-7. == External links == Introduction to SCIE Master journal list Chemical Information Sources/ Author and Citation Searches. on WikiBooks. Cited Reference Searching: An Introduction. Thomson Reuters. Chemistry Citation Index. Chinweb.
Wikipedia/Science_Citation_Index
Geometry (from Ancient Greek γεωμετρία (geōmetría) 'land measurement'; from γῆ (gê) 'earth, land' and μέτρον (métron) 'a measure') is a branch of mathematics concerned with properties of space such as the distance, shape, size, and relative position of figures. Geometry is, along with arithmetic, one of the oldest branches of mathematics. A mathematician who works in the field of geometry is called a geometer. Until the 19th century, geometry was almost exclusively devoted to Euclidean geometry, which includes the notions of point, line, plane, distance, angle, surface, and curve, as fundamental concepts. Originally developed to model the physical world, geometry has applications in almost all sciences, and also in art, architecture, and other activities that are related to graphics. Geometry also has applications in areas of mathematics that are apparently unrelated. For example, methods of algebraic geometry are fundamental in Wiles's proof of Fermat's Last Theorem, a problem that was stated in terms of elementary arithmetic, and remained unsolved for several centuries. During the 19th century several discoveries enlarged dramatically the scope of geometry. One of the oldest such discoveries is Carl Friedrich Gauss's Theorema Egregium ("remarkable theorem") that asserts roughly that the Gaussian curvature of a surface is independent from any specific embedding in a Euclidean space. This implies that surfaces can be studied intrinsically, that is, as stand-alone spaces, and has been expanded into the theory of manifolds and Riemannian geometry. Later in the 19th century, it appeared that geometries without the parallel postulate (non-Euclidean geometries) could be developed without introducing any contradiction. The geometry that underlies general relativity is a famous application of non-Euclidean geometry. Since the late 19th century, the scope of geometry has been greatly expanded, and the field has been split in many subfields that depend on the underlying methods—differential geometry, algebraic geometry, computational geometry, algebraic topology, discrete geometry (also known as combinatorial geometry), etc.—or on the properties of Euclidean spaces that are disregarded—projective geometry that considers only alignment of points but not distance and parallelism, affine geometry that omits the concepts of angle and distance, finite geometry that omits continuity, and others. This enlargement of the scope of geometry led to a change of meaning of the word "space", which originally referred to the three-dimensional space of the physical world and its model provided by Euclidean geometry; presently a geometric space, or simply a space, is a mathematical structure on which some geometry is defined. == History == The earliest recorded beginnings of geometry can be traced to ancient Mesopotamia and Egypt in the 2nd millennium BC. Early geometry was a collection of empirically discovered principles concerning lengths, angles, areas, and volumes, which were developed to meet some practical need in surveying, construction, astronomy, and various crafts. The earliest known texts on geometry are the Egyptian Rhind Papyrus (2000–1800 BC) and Moscow Papyrus (c. 1890 BC), and the Babylonian clay tablets, such as Plimpton 322 (1900 BC). For example, the Moscow Papyrus gives a formula for calculating the volume of a truncated pyramid, or frustum.
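In modern notation, the Moscow Papyrus rule for the truncated pyramid amounts to V = (h/3)(a² + ab + b²) for a square frustum with base side a, top side b, and height h; the papyrus' worked problem takes a = 4, b = 2, h = 6 and obtains 56. A one-line check in Python:

```python
# The Moscow Papyrus frustum rule in modern notation: for the frustum
# of a square pyramid with base side a, top side b and height h,
#     V = (h / 3) * (a**2 + a*b + b**2).
# Problem 14 of the papyrus works this out for a = 4, b = 2, h = 6.

def frustum_volume(a, b, h):
    return h * (a * a + a * b + b * b) / 3

print(frustum_volume(4, 2, 6))   # 56.0, the papyrus' answer
```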
Later clay tablets (350–50 BC) demonstrate that Babylonian astronomers implemented trapezoid procedures for computing Jupiter's position and motion within time-velocity space. These geometric procedures anticipated the Oxford Calculators, including the mean speed theorem, by 14 centuries. South of Egypt the ancient Nubians established a system of geometry including early versions of sun clocks. In the 7th century BC, the Greek mathematician Thales of Miletus used geometry to solve problems such as calculating the height of pyramids and the distance of ships from the shore. He is credited with the first use of deductive reasoning applied to geometry, by deriving four corollaries to Thales's theorem. Pythagoras established the Pythagorean School, which is credited with the first proof of the Pythagorean theorem, though the statement of the theorem has a long history. Eudoxus (408–c. 355 BC) developed the method of exhaustion, which allowed the calculation of areas and volumes of curvilinear figures, as well as a theory of ratios that avoided the problem of incommensurable magnitudes, which enabled subsequent geometers to make significant advances. Around 300 BC, geometry was revolutionized by Euclid, whose Elements, widely considered the most successful and influential textbook of all time, introduced mathematical rigor through the axiomatic method and is the earliest example of the format still used in mathematics today, that of definition, axiom, theorem, and proof. Although most of the contents of the Elements were already known, Euclid arranged them into a single, coherent logical framework. The Elements was known to all educated people in the West until the middle of the 20th century and its contents are still taught in geometry classes today. Archimedes (c. 287–212 BC) of Syracuse, Italy, used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, and gave remarkably accurate approximations of pi. He also studied the spiral bearing his name and obtained formulas for the volumes of surfaces of revolution. Indian mathematicians also made many important contributions in geometry. The Shatapatha Brahmana (3rd century BC) contains rules for ritual geometric constructions that are similar to the Sulba Sutras. According to Hayashi (2005, p. 363), the Śulba Sūtras contain "the earliest extant verbal expression of the Pythagorean Theorem in the world, although it had already been known to the Old Babylonians." They contain lists of Pythagorean triples, which are particular cases of Diophantine equations. In the Bakhshali manuscript, there are a handful of geometric problems (including problems about volumes of irregular solids). The Bakhshali manuscript also "employs a decimal place value system with a dot for zero." Aryabhata's Aryabhatiya (499) includes the computation of areas and volumes. Brahmagupta wrote his astronomical work Brāhmasphuṭasiddhānta in 628. Chapter 12, containing 66 Sanskrit verses, was divided into two sections: "basic operations" (including cube roots, fractions, ratio and proportion, and barter) and "practical mathematics" (including mixture, mathematical series, plane figures, stacking bricks, sawing of timber, and piling of grain). In the latter section, he stated his famous theorem on the diagonals of a cyclic quadrilateral. Chapter 12 also included a formula for the area of a cyclic quadrilateral (a generalization of Heron's formula), as well as a complete description of rational triangles (i.e.
triangles with rational sides and rational areas). In the Middle Ages, mathematics in medieval Islam contributed to the development of geometry, especially algebraic geometry. Al-Mahani (b. 853) conceived the idea of reducing geometrical problems such as duplicating the cube to problems in algebra. Thābit ibn Qurra (known as Thebit in Latin) (836–901) dealt with arithmetic operations applied to ratios of geometrical quantities, and contributed to the development of analytic geometry. Omar Khayyam (1048–1131) found geometric solutions to cubic equations. The theorems of Ibn al-Haytham (Alhazen), Omar Khayyam and Nasir al-Din al-Tusi on quadrilaterals, including the Lambert quadrilateral and Saccheri quadrilateral, were part of a line of research on the parallel postulate continued by later European geometers, including Vitello (c. 1230 – c. 1314), Gersonides (1288–1344), Alfonso, John Wallis, and Giovanni Girolamo Saccheri, that by the 19th century led to the discovery of hyperbolic geometry. In the early 17th century, there were two important developments in geometry. The first was the creation of analytic geometry, or geometry with coordinates and equations, by René Descartes (1596–1650) and Pierre de Fermat (1601–1665). This was a necessary precursor to the development of calculus and a precise quantitative science of physics. The second geometric development of this period was the systematic study of projective geometry by Girard Desargues (1591–1661). Projective geometry studies properties of shapes which are unchanged under projections and sections, especially as they relate to artistic perspective. Two developments in geometry in the 19th century changed the way it had been studied previously. These were the discovery of non-Euclidean geometries by Nikolai Ivanovich Lobachevsky, János Bolyai and Carl Friedrich Gauss and of the formulation of symmetry as the central consideration in the Erlangen programme of Felix Klein (which generalized the Euclidean and non-Euclidean geometries). Two of the master geometers of the time were Bernhard Riemann (1826–1866), working primarily with tools from mathematical analysis, and introducing the Riemann surface, and Henri Poincaré, the founder of algebraic topology and the geometric theory of dynamical systems. As a consequence of these major changes in the conception of geometry, the concept of "space" became something rich and varied, and the natural background for theories as different as complex analysis and classical mechanics. == Main concepts == The following are some of the most important concepts in geometry. === Axioms === Euclid took an abstract approach to geometry in his Elements, one of the most influential books ever written. Euclid introduced certain axioms, or postulates, expressing primary or self-evident properties of points, lines, and planes. He proceeded to rigorously deduce other properties by mathematical reasoning. The characteristic feature of Euclid's approach to geometry was its rigor, and it has come to be known as axiomatic or synthetic geometry. At the start of the 19th century, the discovery of non-Euclidean geometries by Nikolai Ivanovich Lobachevsky (1792–1856), János Bolyai (1802–1860), Carl Friedrich Gauss (1777–1855) and others led to a revival of interest in this discipline, and in the 20th century, David Hilbert (1862–1943) employed axiomatic reasoning in an attempt to provide a modern foundation of geometry. 
=== Spaces and subspaces === ==== Points ==== Points are generally considered fundamental objects for building geometry. They may be defined by the properties that they must have, as in Euclid's definition as "that which has no part", or in synthetic geometry. In modern mathematics, they are generally defined as elements of a set called space, which is itself axiomatically defined. With these modern definitions, every geometric shape is defined as a set of points; this is not the case in synthetic geometry, where a line is another fundamental object that is not viewed as the set of the points through which it passes. However, there are modern geometries in which points are not primitive objects, or which even have no points at all. One of the oldest such geometries is Whitehead's point-free geometry, formulated by Alfred North Whitehead in 1919–1920. ==== Lines ==== Euclid described a line as "breadthless length" which "lies equally with respect to the points on itself". In modern mathematics, given the multitude of geometries, the concept of a line is closely tied to the way the geometry is described. For instance, in analytic geometry, a line in the plane is often defined as the set of points whose coordinates satisfy a given linear equation, but in a more abstract setting, such as incidence geometry, a line may be an independent object, distinct from the set of points which lie on it. In differential geometry, a geodesic is a generalization of the notion of a line to curved spaces. ==== Planes ==== In Euclidean geometry a plane is a flat, two-dimensional surface that extends infinitely; the definitions for other types of geometries are generalizations of that. Planes are used in many areas of geometry. For instance, the plane can be studied as a topological surface without reference to distances or angles; it can be studied as an affine space, where collinearity and ratios can be studied but not distances; it can be studied as the complex plane using techniques of complex analysis; and so on. ==== Curves ==== A curve is a 1-dimensional object that may be straight (like a line) or not; curves in 2-dimensional space are called plane curves and those in 3-dimensional space are called space curves. In topology, a curve is defined by a function from an interval of the real numbers to another space. In differential geometry, the same definition is used, but the defining function is required to be differentiable. Algebraic geometry studies algebraic curves, which are defined as algebraic varieties of dimension one. ==== Surfaces ==== A surface is a two-dimensional object, such as a sphere or paraboloid. In differential geometry and topology, surfaces are described by two-dimensional 'patches' (or neighborhoods) that are assembled by diffeomorphisms or homeomorphisms, respectively. In algebraic geometry, surfaces are described by polynomial equations. ==== Solids ==== A solid is a three-dimensional object bounded by a closed surface; for example, a ball is the volume bounded by a sphere. ==== Manifolds ==== A manifold is a generalization of the concepts of curve and surface. In topology, a manifold is a topological space where every point has a neighborhood that is homeomorphic to Euclidean space. In differential geometry, a differentiable manifold is a space where each neighborhood is diffeomorphic to Euclidean space. Manifolds are used extensively in physics, including in general relativity and string theory.
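The topological definition of a curve given above (a function from an interval of the real numbers into a space) is easy to make numeric. A minimal Python sketch (names and step count are ours): parametrize the unit circle on [0, 2π] and approximate its length by summing chord lengths over a fine partition.

```python
import math

# A curve as a map from an interval into the plane: gamma traces the
# unit circle, and a polygonal sum over a fine partition approximates
# its length 2*pi.

def gamma(t):
    return (math.cos(t), math.sin(t))

n = 10_000
points = [gamma(2 * math.pi * k / n) for k in range(n + 1)]
length = sum(math.dist(p, q) for p, q in zip(points, points[1:]))
print(length, 2 * math.pi)   # agree to roughly 1e-7
```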
=== Angles === Euclid defines a plane angle as the inclination to each other, in a plane, of two lines which meet each other, and do not lie straight with respect to each other. In modern terms, an angle is the figure formed by two rays, called the sides of the angle, sharing a common endpoint, called the vertex of the angle. The size of an angle is formalized as an angular measure. In Euclidean geometry, angles are used to study polygons and triangles, as well as forming an object of study in their own right. The study of the angles of a triangle or of angles in a unit circle forms the basis of trigonometry. In differential geometry and calculus, the angles between plane curves or space curves or surfaces can be calculated using the derivative. === Measures: length, area, and volume === Length, area, and volume describe the size or extent of an object in one, two, and three dimensions respectively. In Euclidean geometry and analytic geometry, the length of a line segment can often be calculated by the Pythagorean theorem. Area and volume can be defined as fundamental quantities separate from length, or they can be described and calculated in terms of lengths in a plane or 3-dimensional space. Mathematicians have found many explicit formulas for area and formulas for volume of various geometric objects. In calculus, area and volume can be defined in terms of integrals, such as the Riemann integral or the Lebesgue integral. Other geometrical measures include curvature and compactness. ==== Metrics and measures ==== The concept of length or distance can be generalized, leading to the idea of metrics. For instance, the Euclidean metric measures the distance between points in the Euclidean plane, while the hyperbolic metric measures the distance in the hyperbolic plane. Other important examples of metrics include the Lorentz metric of special relativity and the semi-Riemannian metrics of general relativity. In a different direction, the concepts of length, area and volume are extended by measure theory, which studies methods of assigning a size or measure to sets, where the measures follow rules similar to those of classical area and volume. === Congruence and similarity === Congruence and similarity are concepts that describe when two shapes have similar characteristics. In Euclidean geometry, similarity is used to describe objects that have the same shape, while congruence is used to describe objects that are the same in both size and shape. Hilbert, in his work on creating a more rigorous foundation for geometry, treated congruence as an undefined term whose properties are defined by axioms. Congruence and similarity are generalized in transformation geometry, which studies the properties of geometric objects that are preserved by different kinds of transformations. === Compass and straightedge constructions === Classical geometers paid special attention to constructing geometric objects that had been described in some other way. Classically, the only instruments used in most geometric constructions are the compass and straightedge. Also, every construction had to be complete in a finite number of steps. However, some problems turned out to be difficult or impossible to solve by these means alone, and ingenious constructions using neusis, parabolas and other curves, or mechanical devices, were found. === Rotation and orientation === The geometrical concepts of rotation and orientation define part of the placement of objects embedded in the plane or in space.
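As a concrete illustration of rotation in the plane (a sketch added here, not part of the original article; it assumes the usual convention of counterclockwise rotation about the origin), multiplying a point's coordinates by a 2 × 2 rotation matrix rotates it by a given angle.

```python
import numpy as np

# Rotate the point (1, 0) by 90 degrees counterclockwise about the origin.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # standard 2D rotation matrix
p = np.array([1.0, 0.0])
print(R @ p)  # approximately [0., 1.]
```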
=== Dimension === Traditional geometry allowed dimensions 1 (a line or curve), 2 (a plane or surface), and 3 (our ambient world conceived of as three-dimensional space). Furthermore, mathematicians and physicists have used higher dimensions for nearly two centuries. One example of a mathematical use for higher dimensions is the configuration space of a physical system, which has a dimension equal to the system's degrees of freedom. For instance, the configuration of a screw can be described by five coordinates. In general topology, the concept of dimension has been extended from natural numbers, to infinite dimension (Hilbert spaces, for example) and positive real numbers (in fractal geometry). In algebraic geometry, the dimension of an algebraic variety has received a number of apparently different definitions, which are all equivalent in the most common cases. === Symmetry === The theme of symmetry in geometry is nearly as old as the science of geometry itself. Symmetric shapes such as the circle, regular polygons and Platonic solids held deep significance for many ancient philosophers and were investigated in detail before the time of Euclid. Symmetric patterns occur in nature and were artistically rendered in a multitude of forms, including the graphics of Leonardo da Vinci, M. C. Escher, and others. In the second half of the 19th century, the relationship between symmetry and geometry came under intense scrutiny. Felix Klein's Erlangen program proclaimed that, in a very precise sense, symmetry, expressed via the notion of a transformation group, determines what geometry is. Symmetry in classical Euclidean geometry is represented by congruences and rigid motions, whereas in projective geometry an analogous role is played by collineations, geometric transformations that take straight lines into straight lines. However, it was in the new geometries of Bolyai and Lobachevsky, Riemann, Clifford and Klein, and Sophus Lie that Klein's idea to 'define a geometry via its symmetry group' found its inspiration. Both discrete and continuous symmetries play prominent roles in geometry, the former in topology and geometric group theory, the latter in Lie theory and Riemannian geometry. A different type of symmetry is the principle of duality in projective geometry, among other fields. This meta-phenomenon can roughly be described as follows: in any theorem, exchange point with plane, join with meet, lies in with contains, and the result is an equally true theorem. A similar and closely related form of duality exists between a vector space and its dual space. == Contemporary geometry == === Euclidean geometry === Euclidean geometry is geometry in its classical sense. As it models the space of the physical world, it is used in many scientific areas, such as mechanics, astronomy, crystallography, and many technical fields, such as engineering, architecture, geodesy, aerodynamics, and navigation. The mandatory educational curriculum of the majority of nations includes the study of Euclidean concepts such as points, lines, planes, angles, triangles, congruence, similarity, solid figures, circles, and analytic geometry. ==== Euclidean vectors ==== Euclidean vectors are used for a myriad of applications in physics and engineering, such as position, displacement, deformation, velocity, acceleration, force, etc. === Differential geometry === Differential geometry uses techniques of calculus and linear algebra to study problems in geometry.
It has applications in physics, econometrics, and bioinformatics, among others. In particular, differential geometry is of importance to mathematical physics due to Albert Einstein's postulate in general relativity that the universe is curved. Differential geometry can either be intrinsic (meaning that the spaces it considers are smooth manifolds whose geometric structure is governed by a Riemannian metric, which determines how distances are measured near each point) or extrinsic (where the object under study is a part of some ambient flat Euclidean space). ==== Non-Euclidean geometry ==== === Topology === Topology is the field concerned with the properties of continuous mappings, and can be considered a generalization of Euclidean geometry. In practice, topology often means dealing with large-scale properties of spaces, such as connectedness and compactness. The field of topology, which saw massive development in the 20th century, is in a technical sense a type of transformation geometry, in which transformations are homeomorphisms. This has often been expressed in the form of the saying 'topology is rubber-sheet geometry'. Subfields of topology include geometric topology, differential topology, algebraic topology and general topology. === Algebraic geometry === Algebraic geometry is fundamentally the study by means of algebraic methods of some geometrical shapes, called algebraic sets, and defined as common zeros of multivariate polynomials. Algebraic geometry became an autonomous subfield of geometry c. 1900, with a theorem called Hilbert's Nullstellensatz that establishes a strong correspondence between algebraic sets and ideals of polynomial rings. This led to a parallel development of algebraic geometry, and its algebraic counterpart, called commutative algebra. From the late 1950s through the mid-1970s, algebraic geometry underwent major foundational development, with the introduction by Alexander Grothendieck of scheme theory, which allows the use of topological methods, including cohomology theories, in a purely algebraic context. Scheme theory made it possible to solve many difficult problems not only in geometry, but also in number theory. Wiles' proof of Fermat's Last Theorem is a famous example of a long-standing problem of number theory whose solution uses scheme theory and its extensions such as stack theory. One of the seven Millennium Prize Problems, the Hodge conjecture, is a question in algebraic geometry. Algebraic geometry has applications in many areas, including cryptography and string theory. === Complex geometry === Complex geometry studies the nature of geometric structures modelled on, or arising out of, the complex plane. Complex geometry lies at the intersection of differential geometry, algebraic geometry, and analysis of several complex variables, and has found applications to string theory and mirror symmetry. Complex geometry first appeared as a distinct area of study in the work of Bernhard Riemann in his study of Riemann surfaces. Work in the spirit of Riemann was carried out by the Italian school of algebraic geometry in the early 1900s. Contemporary treatment of complex geometry began with the work of Jean-Pierre Serre, who introduced the concept of sheaves to the subject, and illuminated the relations between complex geometry and algebraic geometry. The primary objects of study in complex geometry are complex manifolds, complex algebraic varieties, and complex analytic varieties, and holomorphic vector bundles and coherent sheaves over these spaces.
Special examples of spaces studied in complex geometry include Riemann surfaces and Calabi–Yau manifolds, and these spaces find uses in string theory. In particular, worldsheets of strings are modelled by Riemann surfaces, and superstring theory predicts that the extra 6 dimensions of 10-dimensional spacetime may be modelled by Calabi–Yau manifolds. === Discrete geometry === Discrete geometry is a subject that has close connections with convex geometry. It is concerned mainly with questions of relative position of simple geometric objects, such as points, lines and circles. Examples include the study of sphere packings, triangulations, the Kneser–Poulsen conjecture, etc. It shares many methods and principles with combinatorics. === Computational geometry === Computational geometry deals with algorithms and their implementations for manipulating geometrical objects. Important problems historically have included the travelling salesman problem, minimum spanning trees, hidden-line removal, and linear programming. Although a young area of geometry, it has many applications in computer vision, image processing, computer-aided design, medical imaging, etc. === Geometric group theory === Groups have been understood as geometric objects since Klein's Erlangen programme. Geometric group theory studies group actions on objects that are regarded as geometric (significantly, isometric actions on metric spaces) to study finitely generated groups, often involving large-scale geometric techniques and borrowing from topology, geometry, dynamics and analysis. It had a significant impact on low-dimensional topology, a celebrated result being Agol's proof of the virtually Haken conjecture, which combines Perelman's geometrization with cubulation techniques. Group actions on their Cayley graphs are foundational examples of isometric group actions. Other major topics include quasi-isometries, Gromov-hyperbolic groups and their generalizations (relatively and acylindrically hyperbolic groups), free groups and their automorphisms, groups acting on trees, various notions of nonpositive curvature for groups (CAT(0) groups, Dehn functions, automaticity...), right-angled Artin groups, and topics close to combinatorial group theory such as small cancellation theory and algorithmic problems (e.g. the word, conjugacy, and isomorphism problems). Other group-theoretic topics like mapping class groups, property (T), solvability, amenability and lattices in Lie groups are sometimes regarded as strongly geometric as well. === Convex geometry === Convex geometry investigates convex shapes in Euclidean space and its more abstract analogues, often using techniques of real analysis and discrete mathematics. It has close connections to convex analysis, optimization and functional analysis and important applications in number theory. Convex geometry dates back to antiquity. Archimedes gave the first known precise definition of convexity. The isoperimetric problem, a recurring concept in convex geometry, was studied by the Greeks as well, including Zenodorus. Archimedes, Plato, Euclid, and later Kepler and Coxeter all studied convex polytopes and their properties. From the 19th century on, mathematicians have studied other areas of convex mathematics, including higher-dimensional polytopes, volume and surface area of convex bodies, Gaussian curvature, algorithms, tilings and lattices. == Applications == Geometry has found applications in many fields, some of which are described below.
=== Art === Mathematics and art are related in a variety of ways. For instance, the theory of perspective showed that there is more to geometry than just the metric properties of figures: perspective is the origin of projective geometry. Artists have long used concepts of proportion in design. Vitruvius developed a complicated theory of ideal proportions for the human figure. These concepts have been used and adapted by artists from Michelangelo to modern comic book artists. The golden ratio is a particular proportion that has had a controversial role in art. Often claimed to be the most aesthetically pleasing ratio of lengths, it is frequently stated to be incorporated into famous works of art, though the most reliable and unambiguous examples were made deliberately by artists aware of this legend. Tilings, or tessellations, have been used in art throughout history. Islamic art makes frequent use of tessellations, as did the art of M. C. Escher. Escher's work also made use of hyperbolic geometry. Cézanne advanced the theory that all images can be built up from the sphere, the cone, and the cylinder. This is still used in art theory today, although the exact list of shapes varies from author to author. === Architecture === Geometry has many applications in architecture. In fact, it has been said that geometry lies at the core of architectural design. Applications of geometry to architecture include the use of projective geometry to create forced perspective, the use of conic sections in constructing domes and similar objects, the use of tessellations, and the use of symmetry. === Physics === The field of astronomy, especially as it relates to mapping the positions of stars and planets on the celestial sphere and describing the relationship between movements of celestial bodies, has served as an important source of geometric problems throughout history. Riemannian geometry and pseudo-Riemannian geometry are used in general relativity. String theory makes use of several variants of geometry, as does quantum information theory. === Other fields of mathematics === Calculus was strongly influenced by geometry. For instance, the introduction of coordinates by René Descartes and the concurrent developments of algebra marked a new stage for geometry, since geometric figures such as plane curves could now be represented analytically in the form of functions and equations. This played a key role in the emergence of infinitesimal calculus in the 17th century. Analytic geometry continues to be a mainstay of the pre-calculus and calculus curriculum. Another important area of application is number theory. In ancient Greece the Pythagoreans considered the role of numbers in geometry. However, the discovery of incommensurable lengths contradicted their philosophical views. Since the 19th century, geometry has been used for solving problems in number theory, for example through the geometry of numbers or, more recently, scheme theory, which is used in Wiles's proof of Fermat's Last Theorem.
== See also == Lists List of geometers Category:Algebraic geometers Category:Differential geometers Category:Geometers Category:Topologists List of formulas in elementary geometry List of geometry topics List of important publications in geometry Lists of mathematics topics Related topics Descriptive geometry Flatland, a book written by Edwin Abbott Abbott about two- and three-dimensional space, to understand the concept of four dimensions List of interactive geometry software Other applications Molecular geometry == Notes == == References == === Sources === == Further reading == == External links == "Geometry". Encyclopædia Britannica. Vol. 11 (11th ed.). 1911. pp. 675–736. A geometry course from Wikiversity Unusual Geometry Problems The Math Forum – Geometry The Math Forum – K–12 Geometry The Math Forum – College Geometry The Math Forum – Advanced Geometry Nature Precedings – Pegs and Ropes Geometry at Stonehenge The Mathematical Atlas – Geometric Areas of Mathematics "4000 Years of Geometry", lecture by Robin Wilson given at Gresham College, 3 October 2007 (available for MP3 and MP4 download as well as a text file) Finitism in Geometry at the Stanford Encyclopedia of Philosophy The Geometry Junkyard Interactive geometry reference with hundreds of applets Dynamic Geometry Sketches (with some Student Explorations) Geometry classes at Khan Academy
Wikipedia/geometry
In mathematics, the derivative is a fundamental tool that quantifies the sensitivity to change of a function's output with respect to its input. The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. The tangent line is the best linear approximation of the function near that input value. For this reason, the derivative is often described as the instantaneous rate of change, the ratio of the instantaneous change in the dependent variable to that of the independent variable. The process of finding a derivative is called differentiation. There are multiple different notations for differentiation. Leibniz notation, named after Gottfried Wilhelm Leibniz, is represented as the ratio of two differentials, whereas prime notation is written by adding a prime mark. Higher order notations represent repeated differentiation, and they are usually denoted in Leibniz notation by adding superscripts to the differentials, and in prime notation by adding additional prime marks. The higher order derivatives can be applied in physics; for example, while the first derivative of the position of a moving object with respect to time is the object's velocity, how the position changes as time advances, the second derivative is the object's acceleration, how the velocity changes as time advances. Derivatives can be generalized to functions of several real variables. In this case, the derivative is reinterpreted as a linear transformation whose graph is (after an appropriate translation) the best linear approximation to the graph of the original function. The Jacobian matrix is the matrix that represents this linear transformation with respect to the basis given by the choice of independent and dependent variables. It can be calculated in terms of the partial derivatives with respect to the independent variables. For a real-valued function of several variables, the Jacobian matrix reduces to the gradient vector. == Definition == === As a limit === A function of a real variable f ( x ) {\displaystyle f(x)} is differentiable at a point a {\displaystyle a} of its domain, if its domain contains an open interval containing ⁠ a {\displaystyle a} ⁠, and the limit L = lim h → 0 f ( a + h ) − f ( a ) h {\displaystyle L=\lim _{h\to 0}{\frac {f(a+h)-f(a)}{h}}} exists. This means that, for every positive real number ⁠ ε {\displaystyle \varepsilon } ⁠, there exists a positive real number δ {\displaystyle \delta } such that, for every h {\displaystyle h} such that | h | < δ {\displaystyle |h|<\delta } and h ≠ 0 {\displaystyle h\neq 0} , f ( a + h ) {\displaystyle f(a+h)} is defined, and | L − f ( a + h ) − f ( a ) h | < ε , {\displaystyle \left|L-{\frac {f(a+h)-f(a)}{h}}\right|<\varepsilon ,} where the vertical bars denote the absolute value. This is an example of the (ε, δ)-definition of limit. If the function f {\displaystyle f} is differentiable at ⁠ a {\displaystyle a} ⁠, that is if the limit L {\displaystyle L} exists, then this limit is called the derivative of f {\displaystyle f} at a {\displaystyle a} . Multiple notations for the derivative exist.
The derivative of f {\displaystyle f} at a {\displaystyle a} can be denoted ⁠ f ′ ( a ) {\displaystyle f'(a)} ⁠, read as "⁠ f {\displaystyle f} ⁠ prime of ⁠ a {\displaystyle a} ⁠"; or it can be denoted ⁠ d f d x ( a ) {\displaystyle \textstyle {\frac {df}{dx}}(a)} ⁠, read as "the derivative of f {\displaystyle f} with respect to x {\displaystyle x} at ⁠ a {\displaystyle a} ⁠" or "⁠ d f {\displaystyle df} ⁠ by (or over) d x {\displaystyle dx} at ⁠ a {\displaystyle a} ⁠". See § Notation below. If f {\displaystyle f} is a function that has a derivative at every point in its domain, then a function can be defined by mapping every point x {\displaystyle x} to the value of the derivative of f {\displaystyle f} at x {\displaystyle x} . This function is written f ′ {\displaystyle f'} and is called the derivative function or the derivative of ⁠ f {\displaystyle f} ⁠. The function f {\displaystyle f} sometimes has a derivative at most, but not all, points of its domain. The function whose value at a {\displaystyle a} equals f ′ ( a ) {\displaystyle f'(a)} whenever f ′ ( a ) {\displaystyle f'(a)} is defined and elsewhere is undefined is also called the derivative of ⁠ f {\displaystyle f} ⁠. It is still a function, but its domain may be smaller than the domain of f {\displaystyle f} . For example, let f {\displaystyle f} be the squaring function: f ( x ) = x 2 {\displaystyle f(x)=x^{2}} . Then the quotient in the definition of the derivative is f ( a + h ) − f ( a ) h = ( a + h ) 2 − a 2 h = a 2 + 2 a h + h 2 − a 2 h = 2 a + h . {\displaystyle {\frac {f(a+h)-f(a)}{h}}={\frac {(a+h)^{2}-a^{2}}{h}}={\frac {a^{2}+2ah+h^{2}-a^{2}}{h}}=2a+h.} The division in the last step is valid as long as h ≠ 0 {\displaystyle h\neq 0} . The closer h {\displaystyle h} is to ⁠ 0 {\displaystyle 0} ⁠, the closer this expression becomes to the value 2 a {\displaystyle 2a} . The limit exists, and for every input a {\displaystyle a} the limit is 2 a {\displaystyle 2a} . So, the derivative of the squaring function is the doubling function: ⁠ f ′ ( x ) = 2 x {\displaystyle f'(x)=2x} ⁠. The ratio in the definition of the derivative is the slope of the line through two points on the graph of the function ⁠ f {\displaystyle f} ⁠, specifically the points ( a , f ( a ) ) {\displaystyle (a,f(a))} and ( a + h , f ( a + h ) ) {\displaystyle (a+h,f(a+h))} . As h {\displaystyle h} is made smaller, these points grow closer together, and the slope of this line approaches the limiting value, the slope of the tangent to the graph of f {\displaystyle f} at a {\displaystyle a} . In other words, the derivative is the slope of the tangent. === Using infinitesimals === One way to think of the derivative d f d x ( a ) {\textstyle {\frac {df}{dx}}(a)} is as the ratio of an infinitesimal change in the output of the function f {\displaystyle f} to an infinitesimal change in its input. In order to make this intuition rigorous, a system of rules for manipulating infinitesimal quantities is required. The system of hyperreal numbers is a way of treating infinite and infinitesimal quantities. The hyperreals are an extension of the real numbers that contain numbers greater than anything of the form 1 + 1 + ⋯ + 1 {\displaystyle 1+1+\cdots +1} for any finite number of terms. Such numbers are infinite, and their reciprocals are infinitesimals. The application of hyperreal numbers to the foundations of calculus is called nonstandard analysis. 
This provides a way to define the basic concepts of calculus such as the derivative and integral in terms of infinitesimals, thereby giving a precise meaning to the d {\displaystyle d} in the Leibniz notation. Thus, the derivative of f ( x ) {\displaystyle f(x)} becomes f ′ ( x ) = st ⁡ ( f ( x + d x ) − f ( x ) d x ) {\displaystyle f'(x)=\operatorname {st} \left({\frac {f(x+dx)-f(x)}{dx}}\right)} for an arbitrary infinitesimal ⁠ d x {\displaystyle dx} ⁠, where st {\displaystyle \operatorname {st} } denotes the standard part function, which "rounds off" each finite hyperreal to the nearest real. Taking the squaring function f ( x ) = x 2 {\displaystyle f(x)=x^{2}} as an example again, f ′ ( x ) = st ⁡ ( x 2 + 2 x ⋅ d x + ( d x ) 2 − x 2 d x ) = st ⁡ ( 2 x ⋅ d x + ( d x ) 2 d x ) = st ⁡ ( 2 x ⋅ d x d x + ( d x ) 2 d x ) = st ⁡ ( 2 x + d x ) = 2 x . {\displaystyle {\begin{aligned}f'(x)&=\operatorname {st} \left({\frac {x^{2}+2x\cdot dx+(dx)^{2}-x^{2}}{dx}}\right)\\&=\operatorname {st} \left({\frac {2x\cdot dx+(dx)^{2}}{dx}}\right)\\&=\operatorname {st} \left({\frac {2x\cdot dx}{dx}}+{\frac {(dx)^{2}}{dx}}\right)\\&=\operatorname {st} \left(2x+dx\right)\\&=2x.\end{aligned}}} == Continuity and differentiability == If f {\displaystyle f} is differentiable at ⁠ a {\displaystyle a} ⁠, then f {\displaystyle f} must also be continuous at a {\displaystyle a} . As an example, choose a point a {\displaystyle a} and let f {\displaystyle f} be the step function that returns the value 1 for all x {\displaystyle x} less than ⁠ a {\displaystyle a} ⁠, and returns a different value 10 for all x {\displaystyle x} greater than or equal to a {\displaystyle a} . The function f {\displaystyle f} cannot have a derivative at a {\displaystyle a} . If h {\displaystyle h} is negative, then a + h {\displaystyle a+h} is on the low part of the step, so the secant line from a {\displaystyle a} to a + h {\displaystyle a+h} is very steep; as h {\displaystyle h} tends to zero, the slope tends to infinity. If h {\displaystyle h} is positive, then a + h {\displaystyle a+h} is on the high part of the step, so the secant line from a {\displaystyle a} to a + h {\displaystyle a+h} has slope zero. Consequently, the secant lines do not approach any single slope, so the limit of the difference quotient does not exist. However, even if a function is continuous at a point, it may not be differentiable there. For example, the absolute value function given by f ( x ) = | x | {\displaystyle f(x)=|x|} is continuous at ⁠ x = 0 {\displaystyle x=0} ⁠, but it is not differentiable there. If h {\displaystyle h} is positive, then the slope of the secant line from 0 to h {\displaystyle h} is one; if h {\displaystyle h} is negative, then the slope of the secant line from 0 {\displaystyle 0} to h {\displaystyle h} is ⁠ − 1 {\displaystyle -1} ⁠. This can be seen graphically as a "kink" or a "cusp" in the graph at x = 0 {\displaystyle x=0} . Even a function with a smooth graph is not differentiable at a point where its tangent is vertical: For instance, the function given by f ( x ) = x 1 / 3 {\displaystyle f(x)=x^{1/3}} is not differentiable at x = 0 {\displaystyle x=0} . In summary, a function that has a derivative is continuous, but there are continuous functions that do not have a derivative. Most functions that occur in practice have derivatives at all points or almost every point. Early in the history of calculus, many mathematicians assumed that a continuous function was differentiable at most points. 
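To illustrate the preceding discussion numerically, the following short sketch (an added example, not part of the original text) evaluates difference quotients for the squaring function, where they settle toward the derivative, and for the absolute value function at 0, where the one-sided slopes never agree.

```python
def quotient(f, a, h):
    """Difference quotient (f(a + h) - f(a)) / h from the limit definition."""
    return (f(a + h) - f(a)) / h

# f(x) = x^2 at a = 3: the quotients approach f'(3) = 6 from both sides.
for h in [0.1, -0.1, 0.001, -0.001]:
    print(quotient(lambda x: x ** 2, 3.0, h))  # 6.1, 5.9, 6.001, 5.999

# f(x) = |x| at a = 0: the one-sided quotients stay at +1 and -1,
# so no single limit exists and the derivative is undefined there.
for h in [0.1, -0.1, 0.001, -0.001]:
    print(quotient(abs, 0.0, h))  # 1.0, -1.0, 1.0, -1.0
```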
Under mild conditions (for example, if the function is monotone or Lipschitz), this assumption holds. However, in 1872, Weierstrass found the first example of a function that is continuous everywhere but differentiable nowhere. This example is now known as the Weierstrass function. In 1931, Stefan Banach proved that the set of functions that have a derivative at some point is a meager set in the space of all continuous functions. Informally, this means that hardly any random continuous functions have a derivative at even one point. == Notation == One common way of writing the derivative of a function is Leibniz notation, introduced by Gottfried Wilhelm Leibniz in 1675, which denotes a derivative as the quotient of two differentials, such as d y {\displaystyle dy} and ⁠ d x {\displaystyle dx} ⁠. It is still commonly used when the equation y = f ( x ) {\displaystyle y=f(x)} is viewed as a functional relationship between dependent and independent variables. The first derivative is denoted by ⁠ d y d x {\displaystyle \textstyle {\frac {dy}{dx}}} ⁠, read as "the derivative of y {\displaystyle y} with respect to ⁠ x {\displaystyle x} ⁠". This derivative can alternately be treated as the application of a differential operator to a function, d y d x = d d x f ( x ) . {\textstyle {\frac {dy}{dx}}={\frac {d}{dx}}f(x).} Higher derivatives are expressed using the notation d n y d x n {\textstyle {\frac {d^{n}y}{dx^{n}}}} for the n {\displaystyle n} -th derivative of y = f ( x ) {\displaystyle y=f(x)} . These are abbreviations for multiple applications of the derivative operator; for example, d 2 y d x 2 = d d x ( d d x f ( x ) ) . {\textstyle {\frac {d^{2}y}{dx^{2}}}={\frac {d}{dx}}{\Bigl (}{\frac {d}{dx}}f(x){\Bigr )}.} Unlike some alternatives, Leibniz notation involves explicit specification of the variable for differentiation, in the denominator, which removes ambiguity when working with multiple interrelated quantities. The derivative of a composed function can be expressed using the chain rule: if u = g ( x ) {\displaystyle u=g(x)} and y = f ( g ( x ) ) {\displaystyle y=f(g(x))} then d y d x = d y d u ⋅ d u d x . {\textstyle {\frac {dy}{dx}}={\frac {dy}{du}}\cdot {\frac {du}{dx}}.} Another common notation for differentiation is by using the prime mark in the symbol of a function ⁠ f ( x ) {\displaystyle f(x)} ⁠. This notation, due to Joseph-Louis Lagrange, is now known as prime notation. The first derivative is written as ⁠ f ′ ( x ) {\displaystyle f'(x)} ⁠, read as "⁠ f {\displaystyle f} ⁠ prime of ⁠ x {\displaystyle x} ⁠", or ⁠ y ′ {\displaystyle y'} ⁠, read as "⁠ y {\displaystyle y} ⁠ prime". Similarly, the second and the third derivatives can be written as f ″ {\displaystyle f''} and ⁠ f ‴ {\displaystyle f'''} ⁠, respectively. For denoting the number of higher derivatives beyond this point, some authors use Roman numerals in superscript, whereas others place the number in parentheses, such as f i v {\displaystyle f^{\mathrm {iv} }} or ⁠ f ( 4 ) {\displaystyle f^{(4)}} ⁠. The latter notation generalizes to yield the notation f ( n ) {\displaystyle f^{(n)}} for the ⁠ n {\displaystyle n} ⁠th derivative of ⁠ f {\displaystyle f} ⁠. In Newton's notation or the dot notation, a dot is placed over a symbol to represent a time derivative. If y {\displaystyle y} is a function of ⁠ t {\displaystyle t} ⁠, then the first and second derivatives can be written as y ˙ {\displaystyle {\dot {y}}} and ⁠ y ¨ {\displaystyle {\ddot {y}}} ⁠, respectively.
This notation is used exclusively for derivatives with respect to time or arc length. It is typically used in differential equations in physics and differential geometry. However, the dot notation becomes unmanageable for high-order derivatives (of order 4 or more) and cannot deal with multiple independent variables. Another notation is D-notation, which represents the differential operator by the symbol ⁠ D {\displaystyle D} ⁠. The first derivative is written D f ( x ) {\displaystyle Df(x)} and higher derivatives are written with a superscript, so the n {\displaystyle n} -th derivative is ⁠ D n f ( x ) {\displaystyle D^{n}f(x)} ⁠. This notation is sometimes called Euler notation, although it seems that Leonhard Euler did not use it, and the notation was introduced by Louis François Antoine Arbogast. To indicate a partial derivative, the variable differentiated by is indicated with a subscript, for example given the function ⁠ u = f ( x , y ) {\displaystyle u=f(x,y)} ⁠, its partial derivative with respect to x {\displaystyle x} can be written D x u {\displaystyle D_{x}u} or ⁠ D x f ( x , y ) {\displaystyle D_{x}f(x,y)} ⁠. Higher partial derivatives can be indicated by superscripts or multiple subscripts, e.g. D x y f ( x , y ) = ∂ ∂ y ( ∂ ∂ x f ( x , y ) ) {\textstyle D_{xy}f(x,y)={\frac {\partial }{\partial y}}{\Bigl (}{\frac {\partial }{\partial x}}f(x,y){\Bigr )}} and ⁠ D x 2 f ( x , y ) = ∂ ∂ x ( ∂ ∂ x f ( x , y ) ) {\displaystyle \textstyle D_{x}^{2}f(x,y)={\frac {\partial }{\partial x}}{\Bigl (}{\frac {\partial }{\partial x}}f(x,y){\Bigr )}} ⁠. == Rules of computation == In principle, the derivative of a function can be computed from the definition by considering the difference quotient and computing its limit. Once the derivatives of a few simple functions are known, the derivatives of other functions are more easily computed using rules for obtaining derivatives of more complicated functions from simpler ones. This process of finding a derivative is known as differentiation. === Rules for basic functions === The following are the rules for the derivatives of the most common basic functions. Here, a {\displaystyle a} is a real number, and e {\displaystyle e} is the base of the natural logarithm, approximately 2.71828. 
Derivatives of powers: d d x x a = a x a − 1 {\displaystyle {\frac {d}{dx}}x^{a}=ax^{a-1}} Functions of exponential, natural logarithm, and logarithm with general base: d d x e x = e x {\displaystyle {\frac {d}{dx}}e^{x}=e^{x}} d d x a x = a x ln ⁡ ( a ) {\displaystyle {\frac {d}{dx}}a^{x}=a^{x}\ln(a)} , for a > 0 {\displaystyle a>0} d d x ln ⁡ ( x ) = 1 x {\displaystyle {\frac {d}{dx}}\ln(x)={\frac {1}{x}}} , for x > 0 {\displaystyle x>0} d d x log a ⁡ ( x ) = 1 x ln ⁡ ( a ) {\displaystyle {\frac {d}{dx}}\log _{a}(x)={\frac {1}{x\ln(a)}}} , for x , a > 0 {\displaystyle x,a>0} Trigonometric functions: d d x sin ⁡ ( x ) = cos ⁡ ( x ) {\displaystyle {\frac {d}{dx}}\sin(x)=\cos(x)} d d x cos ⁡ ( x ) = − sin ⁡ ( x ) {\displaystyle {\frac {d}{dx}}\cos(x)=-\sin(x)} d d x tan ⁡ ( x ) = sec 2 ⁡ ( x ) = 1 cos 2 ⁡ ( x ) = 1 + tan 2 ⁡ ( x ) {\displaystyle {\frac {d}{dx}}\tan(x)=\sec ^{2}(x)={\frac {1}{\cos ^{2}(x)}}=1+\tan ^{2}(x)} Inverse trigonometric functions: d d x arcsin ⁡ ( x ) = 1 1 − x 2 {\displaystyle {\frac {d}{dx}}\arcsin(x)={\frac {1}{\sqrt {1-x^{2}}}}} , for − 1 < x < 1 {\displaystyle -1<x<1} d d x arccos ⁡ ( x ) = − 1 1 − x 2 {\displaystyle {\frac {d}{dx}}\arccos(x)=-{\frac {1}{\sqrt {1-x^{2}}}}} , for − 1 < x < 1 {\displaystyle -1<x<1} d d x arctan ⁡ ( x ) = 1 1 + x 2 {\displaystyle {\frac {d}{dx}}\arctan(x)={\frac {1}{1+x^{2}}}} === Rules for combined functions === In the following, f {\displaystyle f} and g {\displaystyle g} are functions. The following are some of the most basic rules for deducing the derivative of functions from derivatives of basic functions. Constant rule: if f {\displaystyle f} is constant, then for all ⁠ x {\displaystyle x} ⁠, f ′ ( x ) = 0. {\displaystyle f'(x)=0.} Sum rule: ( α f + β g ) ′ = α f ′ + β g ′ {\displaystyle (\alpha f+\beta g)'=\alpha f'+\beta g'} for all functions f {\displaystyle f} and g {\displaystyle g} and all real numbers α {\displaystyle \alpha } and ⁠ β {\displaystyle \beta } ⁠. Product rule: ( f g ) ′ = f ′ g + f g ′ {\displaystyle (fg)'=f'g+fg'} for all functions f {\displaystyle f} and ⁠ g {\displaystyle g} ⁠. As a special case, this rule includes the fact ( α f ) ′ = α f ′ {\displaystyle (\alpha f)'=\alpha f'} whenever α {\displaystyle \alpha } is a constant because α ′ f = 0 ⋅ f = 0 {\displaystyle \alpha 'f=0\cdot f=0} by the constant rule. Quotient rule: ( f g ) ′ = f ′ g − f g ′ g 2 {\displaystyle \left({\frac {f}{g}}\right)'={\frac {f'g-fg'}{g^{2}}}} for all functions f {\displaystyle f} and g {\displaystyle g} at all inputs where g ≠ 0. Chain rule for composite functions: If ⁠ f ( x ) = h ( g ( x ) ) {\displaystyle f(x)=h(g(x))} ⁠, then f ′ ( x ) = h ′ ( g ( x ) ) ⋅ g ′ ( x ) . {\displaystyle f'(x)=h'(g(x))\cdot g'(x).} === Computation example === The derivative of the function given by f ( x ) = x 4 + sin ⁡ ( x 2 ) − ln ⁡ ( x ) e x + 7 {\displaystyle f(x)=x^{4}+\sin \left(x^{2}\right)-\ln(x)e^{x}+7} is f ′ ( x ) = 4 x ( 4 − 1 ) + d ( x 2 ) d x cos ⁡ ( x 2 ) − d ( ln ⁡ x ) d x e x − ln ⁡ ( x ) d ( e x ) d x + 0 = 4 x 3 + 2 x cos ⁡ ( x 2 ) − 1 x e x − ln ⁡ ( x ) e x . {\displaystyle {\begin{aligned}f'(x)&=4x^{(4-1)}+{\frac {d\left(x^{2}\right)}{dx}}\cos \left(x^{2}\right)-{\frac {d\left(\ln {x}\right)}{dx}}e^{x}-\ln(x){\frac {d\left(e^{x}\right)}{dx}}+0\\&=4x^{3}+2x\cos \left(x^{2}\right)-{\frac {1}{x}}e^{x}-\ln(x)e^{x}.\end{aligned}}} Here the second term was computed using the chain rule and the third term using the product rule.
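As an illustrative check of this computation, one can differentiate the same expression symbolically; the sketch below uses the SymPy library (an assumption of this example, not something the article prescribes; any computer algebra system would serve).

```python
import sympy as sp

x = sp.symbols('x', positive=True)  # positive=True keeps log(x) well-defined
f = x**4 + sp.sin(x**2) - sp.log(x) * sp.exp(x) + 7
print(sp.diff(f, x))
# 4*x**3 + 2*x*cos(x**2) - exp(x)*log(x) - exp(x)/x  (up to term ordering)
```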
In this computation, the known derivatives of the elementary functions x 2 {\displaystyle x^{2}} , x 4 {\displaystyle x^{4}} , sin ⁡ ( x ) {\displaystyle \sin(x)} , ln ⁡ ( x ) {\displaystyle \ln(x)} , and exp ⁡ ( x ) = e x {\displaystyle \exp(x)=e^{x}} , as well as the constant 7 {\displaystyle 7} , were also used. == Higher-order derivatives == Higher order derivatives are the result of differentiating a function repeatedly. Given that f {\displaystyle f} is a differentiable function, the derivative of f {\displaystyle f} is the first derivative, denoted as ⁠ f ′ {\displaystyle f'} ⁠. The derivative of f ′ {\displaystyle f'} is the second derivative, denoted as ⁠ f ″ {\displaystyle f''} ⁠, and the derivative of f ″ {\displaystyle f''} is the third derivative, denoted as ⁠ f ‴ {\displaystyle f'''} ⁠. By continuing this process, if it exists, the ⁠ n {\displaystyle n} ⁠th derivative is the derivative of the ⁠ ( n − 1 ) {\displaystyle (n-1)} ⁠th derivative or the derivative of order ⁠ n {\displaystyle n} ⁠. As discussed above, the ⁠ n {\displaystyle n} ⁠th derivative of a function f {\displaystyle f} may be denoted ⁠ f ( n ) {\displaystyle f^{(n)}} ⁠. A function that has k {\displaystyle k} successive derivatives is called k {\displaystyle k} times differentiable. If the k {\displaystyle k} -th derivative is continuous, then the function is said to be of differentiability class ⁠ C k {\displaystyle C^{k}} ⁠. A function that has infinitely many derivatives is called infinitely differentiable or smooth. Any polynomial function is infinitely differentiable; taking derivatives repeatedly will eventually result in a constant function, and all subsequent derivatives of that function are zero. One application of higher-order derivatives is in physics. Suppose that a function represents the position of an object over time. The first derivative of that function is the velocity of the object with respect to time, the second derivative of the function is the acceleration of the object with respect to time, and the third derivative is the jerk. == In other dimensions == === Vector-valued functions === A vector-valued function y {\displaystyle \mathbf {y} } of a real variable sends real numbers to vectors in some vector space R n {\displaystyle \mathbb {R} ^{n}} . A vector-valued function can be split up into its coordinate functions y 1 ( t ) , y 2 ( t ) , … , y n ( t ) {\displaystyle y_{1}(t),y_{2}(t),\dots ,y_{n}(t)} , meaning that y = ( y 1 ( t ) , y 2 ( t ) , … , y n ( t ) ) {\displaystyle \mathbf {y} =(y_{1}(t),y_{2}(t),\dots ,y_{n}(t))} . This includes, for example, parametric curves in R 2 {\displaystyle \mathbb {R} ^{2}} or R 3 {\displaystyle \mathbb {R} ^{3}} . The coordinate functions are real-valued functions, so the above definition of derivative applies to them. The derivative of y ( t ) {\displaystyle \mathbf {y} (t)} is defined to be the vector, called the tangent vector, whose coordinates are the derivatives of the coordinate functions. That is, y ′ ( t ) = lim h → 0 y ( t + h ) − y ( t ) h , {\displaystyle \mathbf {y} '(t)=\lim _{h\to 0}{\frac {\mathbf {y} (t+h)-\mathbf {y} (t)}{h}},} if the limit exists. The subtraction in the numerator is the subtraction of vectors, not scalars. If the derivative of y {\displaystyle \mathbf {y} } exists for every value of ⁠ t {\displaystyle t} ⁠, then y ′ {\displaystyle \mathbf {y} '} is another vector-valued function. === Partial derivatives === Functions can depend upon more than one variable.
A partial derivative of a function of several variables is its derivative with respect to one of those variables, with the others held constant. Partial derivatives are used in vector calculus and differential geometry. As with ordinary derivatives, multiple notations exist: the partial derivative of a function f ( x , y , … ) {\displaystyle f(x,y,\dots )} with respect to the variable x {\displaystyle x} is variously denoted by ∂ f ∂ x {\displaystyle \textstyle {\frac {\partial f}{\partial x}}} , f x {\displaystyle f_{x}} , or D x f {\displaystyle D_{x}f} , among other possibilities. It can be thought of as the rate of change of the function in the x {\displaystyle x} -direction. Here ∂ is a rounded d called the partial derivative symbol. To distinguish it from the letter d, ∂ is sometimes pronounced "der", "del", or "partial" instead of "dee". For example, let ⁠ f ( x , y ) = x 2 + x y + y 2 {\displaystyle f(x,y)=x^{2}+xy+y^{2}} ⁠; then the partial derivatives of f {\displaystyle f} with respect to the variables x {\displaystyle x} and y {\displaystyle y} are, respectively: ∂ f ∂ x = 2 x + y , ∂ f ∂ y = x + 2 y . {\displaystyle {\frac {\partial f}{\partial x}}=2x+y,\qquad {\frac {\partial f}{\partial y}}=x+2y.} In general, the partial derivative of a function f ( x 1 , … , x n ) {\displaystyle f(x_{1},\dots ,x_{n})} in the direction x i {\displaystyle x_{i}} at the point ( a 1 , … , a n ) {\displaystyle (a_{1},\dots ,a_{n})} is defined to be: ∂ f ∂ x i ( a 1 , … , a n ) = lim h → 0 f ( a 1 , … , a i + h , … , a n ) − f ( a 1 , … , a i , … , a n ) h . {\displaystyle {\frac {\partial f}{\partial x_{i}}}(a_{1},\ldots ,a_{n})=\lim _{h\to 0}{\frac {f(a_{1},\ldots ,a_{i}+h,\ldots ,a_{n})-f(a_{1},\ldots ,a_{i},\ldots ,a_{n})}{h}}.} This is fundamental for the study of the functions of several real variables. Let f ( x 1 , … , x n ) {\displaystyle f(x_{1},\dots ,x_{n})} be such a real-valued function. If all partial derivatives of f {\displaystyle f} with respect to x j {\displaystyle x_{j}} are defined at the point ⁠ ( a 1 , … , a n ) {\displaystyle (a_{1},\dots ,a_{n})} ⁠, these partial derivatives define the vector ∇ f ( a 1 , … , a n ) = ( ∂ f ∂ x 1 ( a 1 , … , a n ) , … , ∂ f ∂ x n ( a 1 , … , a n ) ) , {\displaystyle \nabla f(a_{1},\ldots ,a_{n})=\left({\frac {\partial f}{\partial x_{1}}}(a_{1},\ldots ,a_{n}),\ldots ,{\frac {\partial f}{\partial x_{n}}}(a_{1},\ldots ,a_{n})\right),} which is called the gradient of f {\displaystyle f} at a {\displaystyle a} . If f {\displaystyle f} is differentiable at every point in some domain, then the gradient is a vector-valued function ∇ f {\displaystyle \nabla f} that maps the point ( a 1 , … , a n ) {\displaystyle (a_{1},\dots ,a_{n})} to the vector ∇ f ( a 1 , … , a n ) {\displaystyle \nabla f(a_{1},\dots ,a_{n})} . Consequently, the gradient determines a vector field. === Directional derivatives === If f {\displaystyle f} is a real-valued function on ⁠ R n {\displaystyle \mathbb {R} ^{n}} ⁠, then the partial derivatives of f {\displaystyle f} measure its variation in the direction of the coordinate axes. For example, if f {\displaystyle f} is a function of x {\displaystyle x} and ⁠ y {\displaystyle y} ⁠, then its partial derivatives measure the variation in f {\displaystyle f} in the x {\displaystyle x} and y {\displaystyle y} direction. However, they do not directly measure the variation of f {\displaystyle f} in any other direction, such as along the diagonal line ⁠ y = x {\displaystyle y=x} ⁠. These are measured using directional derivatives.
Given a vector ⁠ v = ( v 1 , … , v n ) {\displaystyle \mathbf {v} =(v_{1},\ldots ,v_{n})} ⁠, the directional derivative of f {\displaystyle f} in the direction of v {\displaystyle \mathbf {v} } at the point x {\displaystyle \mathbf {x} } is: D v f ( x ) = lim h → 0 f ( x + h v ) − f ( x ) h . {\displaystyle D_{\mathbf {v} }{f}(\mathbf {x} )=\lim _{h\rightarrow 0}{\frac {f(\mathbf {x} +h\mathbf {v} )-f(\mathbf {x} )}{h}}.} If all the partial derivatives of f {\displaystyle f} exist and are continuous at ⁠ x {\displaystyle \mathbf {x} } ⁠, then they determine the directional derivative of f {\displaystyle f} in the direction v {\displaystyle \mathbf {v} } by the formula: D v f ( x ) = ∑ j = 1 n v j ∂ f ∂ x j . {\displaystyle D_{\mathbf {v} }{f}(\mathbf {x} )=\sum _{j=1}^{n}v_{j}{\frac {\partial f}{\partial x_{j}}}.} === Total derivative and Jacobian matrix === When f {\displaystyle f} is a function from an open subset of R n {\displaystyle \mathbb {R} ^{n}} to ⁠ R m {\displaystyle \mathbb {R} ^{m}} ⁠, then the directional derivative of f {\displaystyle f} in a chosen direction is the best linear approximation to f {\displaystyle f} at that point and in that direction. However, when ⁠ n > 1 {\displaystyle n>1} ⁠, no single directional derivative can give a complete picture of the behavior of f {\displaystyle f} . The total derivative gives a complete picture by considering all directions at once. That is, for any vector v {\displaystyle \mathbf {v} } starting at ⁠ a {\displaystyle \mathbf {a} } ⁠, the linear approximation formula holds: f ( a + v ) ≈ f ( a ) + f ′ ( a ) v . {\displaystyle f(\mathbf {a} +\mathbf {v} )\approx f(\mathbf {a} )+f'(\mathbf {a} )\mathbf {v} .} As with the single-variable derivative, f ′ ( a ) {\displaystyle f'(\mathbf {a} )} is chosen so that the error in this approximation is as small as possible. The total derivative of f {\displaystyle f} at a {\displaystyle \mathbf {a} } is the unique linear transformation f ′ ( a ) : R n → R m {\displaystyle f'(\mathbf {a} )\colon \mathbb {R} ^{n}\to \mathbb {R} ^{m}} such that lim h → 0 ‖ f ( a + h ) − ( f ( a ) + f ′ ( a ) h ) ‖ ‖ h ‖ = 0. {\displaystyle \lim _{\mathbf {h} \to 0}{\frac {\lVert f(\mathbf {a} +\mathbf {h} )-(f(\mathbf {a} )+f'(\mathbf {a} )\mathbf {h} )\rVert }{\lVert \mathbf {h} \rVert }}=0.} Here h {\displaystyle \mathbf {h} } is a vector in ⁠ R n {\displaystyle \mathbb {R} ^{n}} ⁠, so the norm in the denominator is the standard length on R n {\displaystyle \mathbb {R} ^{n}} . However, f ′ ( a ) h {\displaystyle f'(\mathbf {a} )\mathbf {h} } is a vector in ⁠ R m {\displaystyle \mathbb {R} ^{m}} ⁠, and the norm in the numerator is the standard length on R m {\displaystyle \mathbb {R} ^{m}} . If v {\displaystyle \mathbf {v} } is a vector starting at ⁠ a {\displaystyle \mathbf {a} } ⁠, then f ′ ( a ) v {\displaystyle f'(\mathbf {a} )\mathbf {v} } is called the pushforward of v {\displaystyle \mathbf {v} } by f {\displaystyle f} . If the total derivative exists at ⁠ a {\displaystyle \mathbf {a} } ⁠, then all the partial derivatives and directional derivatives of f {\displaystyle f} exist at ⁠ a {\displaystyle \mathbf {a} } ⁠, and for all ⁠ v {\displaystyle \mathbf {v} } ⁠, f ′ ( a ) v {\displaystyle f'(\mathbf {a} )\mathbf {v} } is the directional derivative of f {\displaystyle f} in the direction ⁠ v {\displaystyle \mathbf {v} } ⁠.
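To make the directional-derivative formula concrete, the following sketch (an added illustration; the point and direction are arbitrary choices, not from the article) applies it to the earlier example f(x, y) = x² + xy + y², whose partial derivatives were computed above.

```python
import numpy as np

def grad_f(x, y):
    # Partial derivatives of f(x, y) = x^2 + x*y + y^2 computed earlier:
    # f_x = 2x + y and f_y = x + 2y.
    return np.array([2 * x + y, x + 2 * y])

point = (1.0, 2.0)
v = np.array([1.0, 1.0]) / np.sqrt(2)  # unit vector along the diagonal y = x

# Directional derivative D_v f = sum_j v_j * (partial f / partial x_j).
print(grad_f(*point))             # [4. 5.]
print(np.dot(v, grad_f(*point)))  # (4 + 5)/sqrt(2), about 6.364
```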
If f {\displaystyle f} is written using coordinate functions, so that ⁠ f = ( f 1 , f 2 , … , f m ) {\displaystyle f=(f_{1},f_{2},\dots ,f_{m})} ⁠, then the total derivative can be expressed using the partial derivatives as a matrix. This matrix is called the Jacobian matrix of f {\displaystyle f} at a {\displaystyle \mathbf {a} } : f ′ ( a ) = Jac a = ( ∂ f i ∂ x j ) i j . {\displaystyle f'(\mathbf {a} )=\operatorname {Jac} _{\mathbf {a} }=\left({\frac {\partial f_{i}}{\partial x_{j}}}\right)_{ij}.} == Generalizations == The concept of a derivative can be extended to many other settings. The common thread is that the derivative of a function at a point serves as a linear approximation of the function at that point. An important generalization of the derivative concerns complex functions of complex variables, such as functions from (a domain in) the complex numbers C {\displaystyle \mathbb {C} } to ⁠ C {\displaystyle \mathbb {C} } ⁠. The notion of the derivative of such a function is obtained by replacing real variables with complex variables in the definition. If C {\displaystyle \mathbb {C} } is identified with R 2 {\displaystyle \mathbb {R} ^{2}} by writing a complex number z {\displaystyle z} as ⁠ x + i y {\displaystyle x+iy} ⁠, then a differentiable function from C {\displaystyle \mathbb {C} } to C {\displaystyle \mathbb {C} } is certainly differentiable as a function from R 2 {\displaystyle \mathbb {R} ^{2}} to R 2 {\displaystyle \mathbb {R} ^{2}} (in the sense that its partial derivatives all exist), but the converse is not true in general: the complex derivative only exists if the real derivative is complex linear and this imposes relations between the partial derivatives called the Cauchy–Riemann equations – see holomorphic functions. Another generalization concerns functions between differentiable or smooth manifolds. Intuitively speaking such a manifold M {\displaystyle M} is a space that can be approximated near each point x {\displaystyle x} by a vector space called its tangent space: the prototypical example is a smooth surface in ⁠ R 3 {\displaystyle \mathbb {R} ^{3}} ⁠. The derivative (or differential) of a (differentiable) map f : M → N {\displaystyle f:M\to N} between manifolds, at a point x {\displaystyle x} in ⁠ M {\displaystyle M} ⁠, is then a linear map from the tangent space of M {\displaystyle M} at x {\displaystyle x} to the tangent space of N {\displaystyle N} at ⁠ f ( x ) {\displaystyle f(x)} ⁠. The derivative function becomes a map between the tangent bundles of M {\displaystyle M} and ⁠ N {\displaystyle N} ⁠. This definition is used in differential geometry. Differentiation can also be defined for maps between vector spaces, such as Banach spaces; generalizations in this setting include the Gateaux derivative and the Fréchet derivative. One deficiency of the classical derivative is that very many functions are not differentiable. Nevertheless, there is a way of extending the notion of the derivative so that all continuous functions and many other functions can be differentiated using a concept known as the weak derivative. The idea is to embed the continuous functions in a larger space called the space of distributions and only require that a function is differentiable "on average". Properties of the derivative have inspired the introduction and study of many similar objects in algebra and topology; an example is differential algebra, which studies derivations on structures from abstract algebra such as rings, ideals, and fields.
The discrete equivalent of differentiation is finite differences. The study of differential calculus is unified with the calculus of finite differences in time scale calculus. The arithmetic derivative is a function defined on the integers in terms of their prime factorization, by analogy with the product rule. == See also == Covariant derivative Derivation Exterior derivative Functional derivative Integral Lie derivative == Notes == == References == == External links == "Derivative", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Khan Academy: "Newton, Leibniz, and Usain Bolt" Weisstein, Eric W. "Derivative". MathWorld. Online Derivative Calculator from Wolfram Alpha.
Wikipedia/Derivative_(calculus)
Quantum information is the information of the state of a quantum system. It is the basic entity of study in quantum information theory, and can be manipulated using quantum information processing techniques. Quantum information refers to both the technical definition in terms of von Neumann entropy and the general computational term. It is an interdisciplinary field that involves quantum mechanics, computer science, information theory, philosophy and cryptography among other fields. Its study is also relevant to disciplines such as cognitive science, psychology and neuroscience. Its main focus is extracting information from matter at the microscopic scale. Observation in science is one of the most important ways of acquiring information, and measurement is required in order to quantify the observation, making this crucial to the scientific method. In quantum mechanics, due to the uncertainty principle, non-commuting observables cannot be precisely measured simultaneously, as an eigenstate in one basis is not an eigenstate in the other basis. According to the eigenstate–eigenvalue link, an observable is well-defined (definite) when the state of the system is an eigenstate of the observable. Since any two non-commuting observables are not simultaneously well-defined, a quantum state can never contain definitive information about both non-commuting observables. Data can be encoded into the quantum state of a quantum system as quantum information. While quantum mechanics deals with examining properties of matter at the microscopic level, quantum information science focuses on extracting information from those properties, and quantum computation manipulates and processes information – performs logical operations – using quantum information processing techniques. Quantum information, like classical information, can be processed using digital computers, transmitted from one location to another, manipulated with algorithms, and analyzed with computer science and mathematics. Just as the basic unit of classical information is the bit, quantum information deals with qubits. Quantum information can be measured using von Neumann entropy. Recently, the field of quantum computing has become an active research area because of its potential to disrupt modern computation, communication, and cryptography. == History and development == === Development from fundamental quantum mechanics === The history of quantum information theory began at the turn of the 20th century when classical physics was revolutionized into quantum physics. The theories of classical physics predicted absurdities such as the ultraviolet catastrophe, or electrons spiraling into the nucleus. At first these problems were brushed aside by adding ad hoc hypotheses to classical physics. Soon, it became apparent that a new theory must be created in order to make sense of these absurdities, and the theory of quantum mechanics was born. Quantum mechanics was formulated by Erwin Schrödinger using wave mechanics and Werner Heisenberg using matrix mechanics. The equivalence of these methods was proven later. Their formulations described the dynamics of microscopic systems but had several unsatisfactory aspects in describing measurement processes. Von Neumann formulated quantum theory using operator algebra in a way that described measurement as well as dynamics. These studies emphasized the philosophical aspects of measurement rather than a quantitative approach to extracting information via measurements.
==== Development from communication ==== In the 1960s, Ruslan Stratonovich, Carl Helstrom and Gordon proposed a formulation of optical communications using quantum mechanics. This was the first historical appearance of quantum information theory. They mainly studied error probabilities and channel capacities for communication. Later, Alexander Holevo obtained an upper bound on the communication speed in the transmission of a classical message via a quantum channel. ==== Development from atomic physics and relativity ==== In the 1970s, techniques for manipulating single-atom quantum states, such as the atom trap and the scanning tunneling microscope, began to be developed, making it possible to isolate single atoms and arrange them in arrays. Prior to these developments, precise control over single quantum systems was not possible, and experiments used coarser, simultaneous control over a large number of quantum systems. The development of viable single-state manipulation techniques led to increased interest in the field of quantum information and computation. In the 1980s, interest arose in whether it might be possible to use quantum effects to disprove Einstein's theory of relativity. If it were possible to clone an unknown quantum state, it would be possible to use entangled quantum states to transmit information faster than the speed of light, disproving Einstein's theory. However, the no-cloning theorem showed that such cloning is impossible. The theorem was one of the earliest results of quantum information theory. ==== Development from cryptography ==== Despite all the excitement and interest over studying isolated quantum systems and trying to find a way to circumvent the theory of relativity, research in quantum information theory became stagnant in the 1980s. However, around the same time another avenue began to engage with quantum information and computation: cryptography. In a general sense, cryptography is the problem of doing communication or computation involving two or more parties who may not trust one another. Bennett and Brassard developed a communication channel on which it is impossible to eavesdrop without being detected, a way of communicating secretly at long distances using the BB84 quantum cryptographic protocol. The key idea was the use of the fundamental principle of quantum mechanics that observation disturbs the observed, and the introduction of an eavesdropper in a secure communication line will immediately let the two parties trying to communicate know of the presence of the eavesdropper. ==== Development from computer science and mathematics ==== Alan Turing introduced the revolutionary idea of a programmable computer, the Turing machine, and showed that any real-world computation can be translated into an equivalent computation involving a Turing machine. This is known as the Church–Turing thesis. Soon enough, the first computers were made, and computer hardware grew at such a fast pace that the growth, through experience in production, was codified into an empirical relationship called Moore's law. This 'law' is an empirical projection that states that the number of transistors in an integrated circuit doubles every two years. As transistors began to become smaller and smaller in order to pack more power per surface area, quantum effects started to show up in the electronics resulting in inadvertent interference. This led to the advent of quantum computing, which uses quantum mechanics to design algorithms.
At this point, quantum computers showed promise of being much faster than classical computers for certain specific problems. One such example problem was developed by David Deutsch and Richard Jozsa, known as the Deutsch–Jozsa algorithm. This problem, however, had little to no practical application. In 1994, Peter Shor turned to a very important and practical problem, that of finding the prime factors of an integer. He showed that the integer factorization problem, along with the closely related discrete logarithm problem, could theoretically be solved efficiently on a quantum computer, while no efficient classical algorithm is known, suggesting that quantum computers are more powerful than classical computers for certain tasks. ==== Development from information theory ==== Around the time computer science was undergoing a revolution, so were information theory and communication, through Claude Shannon. Shannon developed two fundamental theorems of information theory: the noiseless channel coding theorem and the noisy channel coding theorem. He also showed that error correcting codes could be used to protect information being sent. Quantum information theory followed a similar trajectory: in 1995, Benjamin Schumacher proved an analogue of Shannon's noiseless coding theorem using the qubit. A theory of error correction also developed, which allows quantum computers to make efficient computations in the presence of noise and to communicate reliably over noisy quantum channels. == Qubits and information theory == Quantum information differs strongly from classical information, epitomized by the bit, in many striking and unfamiliar ways. While the fundamental unit of classical information is the bit, the most basic unit of quantum information is the qubit. Classical information is measured using Shannon entropy, while the quantum mechanical analogue is von Neumann entropy. Given a statistical ensemble of quantum mechanical systems with the density matrix ρ {\displaystyle \rho } , it is given by S ( ρ ) = − Tr ⁡ ( ρ ln ⁡ ρ ) . {\displaystyle S(\rho )=-\operatorname {Tr} (\rho \ln \rho ).} Many of the same entropy measures in classical information theory can also be generalized to the quantum case, such as Holevo entropy and the conditional quantum entropy. Unlike classical digital states (which are discrete), a qubit is continuous-valued, describable by a direction on the Bloch sphere. Despite being continuously valued in this way, a qubit is the smallest possible unit of quantum information, and its value cannot be measured precisely. Five famous theorems describe the limits on manipulation of quantum information. no-teleportation theorem, which states that a qubit cannot be (wholly) converted into classical bits; that is, it cannot be fully "read". no-cloning theorem, which prevents an arbitrary qubit from being copied. no-deleting theorem, which prevents an arbitrary qubit from being deleted. no-broadcast theorem, which prevents an arbitrary qubit from being delivered to multiple recipients, although it can be transported from place to place (e.g. via quantum teleportation). no-hiding theorem, which demonstrates the conservation of quantum information. These theorems are proven from unitarity, which according to Leonard Susskind is the technical term for the statement that quantum information within the universe is conserved. The five theorems open possibilities in quantum information processing. == Quantum information processing == The state of a qubit contains all of its information.
This state is frequently expressed as a vector on the Bloch sphere. This state can be changed by applying linear transformations or quantum gates to it. These unitary transformations are described as rotations on the Bloch sphere. While classical gates correspond to the familiar operations of Boolean logic, quantum gates are physical unitary operators. Due to the volatility of quantum systems and the impossibility of copying states, the storing of quantum information is much more difficult than storing classical information. Nevertheless, with the use of quantum error correction, quantum information can still be reliably stored in principle. The existence of quantum error correcting codes has also led to the possibility of fault-tolerant quantum computation. Classical bits can be encoded into and subsequently retrieved from configurations of qubits, through the use of quantum gates. By itself, a single qubit can convey no more than one bit of accessible classical information about its preparation. This is Holevo's theorem. However, in superdense coding a sender, by acting on one of two entangled qubits, can convey two bits of accessible information about their joint state to a receiver. Quantum information can be moved about, in a quantum channel, analogous to the concept of a classical communications channel. Quantum messages have a finite size, measured in qubits; quantum channels have a finite channel capacity, measured in qubits per second. Quantum information, and changes in quantum information, can be quantitatively measured by using an analogue of Shannon entropy, called the von Neumann entropy. In some cases, quantum algorithms can be used to perform computations faster than in any known classical algorithm. The most famous example of this is Shor's algorithm, which can factor numbers in polynomial time, compared to the best classical algorithms that take sub-exponential time. As factorization is an important part of the security of RSA encryption, Shor's algorithm sparked the new field of post-quantum cryptography, which tries to find encryption schemes that remain safe even when quantum computers are in play. Other examples of algorithms that demonstrate a quantum speed-up include Grover's search algorithm, where the quantum algorithm gives a quadratic speed-up over the best possible classical algorithm. The complexity class of problems efficiently solvable by a quantum computer is known as BQP. Quantum key distribution (QKD) allows unconditionally secure transmission of classical information, unlike classical encryption, which can always be broken in principle, if not in practice. Note that certain subtle points regarding the security of QKD are debated. The study of the above topics and differences comprises quantum information theory. == Relation to quantum mechanics == Quantum mechanics is the study of how microscopic physical systems change dynamically in nature. In the field of quantum information theory, the quantum systems studied are abstracted away from any real world counterpart. A qubit might for instance physically be a photon in a linear optical quantum computer, an ion in a trapped ion quantum computer, or it might be a large collection of atoms as in a superconducting quantum computer. Regardless of the physical implementation, the limits and features of qubits implied by quantum information theory hold as all these systems are mathematically described by the same apparatus of density matrices over the complex numbers.
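As a brief aside, this density-matrix formalism is easy to experiment with numerically. The following is a minimal sketch in Python with NumPy (illustrative only; it follows no particular quantum-computing library): it forms the density matrix of a pure qubit state, applies a quantum gate as a unitary conjugation ρ → UρU†, and checks that the result is still a valid state.

```python
import numpy as np

# Pure qubit state |0> and the Hadamard gate H (a 2x2 unitary).
ket0 = np.array([[1.0], [0.0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# Density matrix of the pure state: rho = |psi><psi|.
rho = ket0 @ ket0.conj().T

# A gate acts on a density matrix by unitary conjugation.
rho_after = H @ rho @ H.conj().T

# Sanity checks: unit trace and Hermiticity are preserved.
assert np.isclose(np.trace(rho_after).real, 1.0)
assert np.allclose(rho_after, rho_after.conj().T)

# The diagonal gives measurement probabilities in the 0/1 basis.
print(np.real(np.diag(rho_after)))  # [0.5 0.5] for the |+> state
```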
Another important difference with quantum mechanics is that while quantum mechanics often studies infinite-dimensional systems such as a harmonic oscillator, quantum information theory is concerned with both continuous-variable systems and finite-dimensional systems. == Entropy and information == Entropy measures the uncertainty in the state of a physical system. Entropy can be studied from the point of view of both the classical and quantum information theories. === Classical information theory === Classical information is based on the concepts of information laid out by Claude Shannon. Classical information, in principle, can be stored in bits, that is, in binary strings. Any system having two distinguishable states can serve as a bit. ==== Shannon entropy ==== Shannon entropy is the quantification of the information gained by measuring the value of a random variable. Another way of thinking about it is by looking at the uncertainty of a system prior to measurement. As a result, entropy, as pictured by Shannon, can be seen either as a measure of the uncertainty prior to making a measurement or as a measure of information gained after making said measurement. Shannon entropy, written as a function of a discrete probability distribution, P ( x 1 ) , P ( x 2 ) , . . . , P ( x n ) {\displaystyle P(x_{1}),P(x_{2}),...,P(x_{n})} associated with events x 1 , . . . , x n {\displaystyle x_{1},...,x_{n}} , can be seen as the average information associated with this set of events, in units of bits: H ( X ) = H [ P ( x 1 ) , P ( x 2 ) , . . . , P ( x n ) ] = − ∑ i = 1 n P ( x i ) log 2 ⁡ P ( x i ) {\displaystyle H(X)=H[P(x_{1}),P(x_{2}),...,P(x_{n})]=-\sum _{i=1}^{n}P(x_{i})\log _{2}P(x_{i})} This definition of entropy can be used to quantify the physical resources required to store the output of an information source. The ways of interpreting Shannon entropy discussed above are usually only meaningful when the number of samples of an experiment is large. ==== Rényi entropy ==== The Rényi entropy is a generalization of the Shannon entropy defined above. The Rényi entropy of order r, written as a function of a discrete probability distribution, P ( a 1 ) , P ( a 2 ) , . . . , P ( a n ) {\displaystyle P(a_{1}),P(a_{2}),...,P(a_{n})} , associated with events a 1 , . . . , a n {\displaystyle a_{1},...,a_{n}} , is defined as: H r ( A ) = 1 1 − r log 2 ⁡ ∑ i = 1 n P r ( a i ) {\displaystyle H_{r}(A)={1 \over 1-r}\log _{2}\sum _{i=1}^{n}P^{r}(a_{i})} for 0 < r < ∞ {\displaystyle 0<r<\infty } and r ≠ 1 {\displaystyle r\neq 1} . We arrive at the definition of Shannon entropy from Rényi when r → 1 {\displaystyle r\rightarrow 1} , of Hartley entropy (or max-entropy) when r → 0 {\displaystyle r\rightarrow 0} , and of min-entropy when r → ∞ {\displaystyle r\rightarrow \infty } . === Quantum information theory === Quantum information theory is largely an extension of classical information theory to quantum systems. Classical information is produced when measurements of quantum systems are made. ==== Von Neumann entropy ==== One interpretation of Shannon entropy was the uncertainty associated with a probability distribution.
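These entropies are straightforward to compute numerically. A minimal Python sketch (NumPy assumed; the function names are illustrative) implements the Shannon and Rényi formulas above, together with the eigenvalue form of the von Neumann entropy defined next (base-2 logarithms throughout, matching that definition):

```python
import numpy as np

def shannon_entropy(p):
    """H(X) = -sum p_i log2 p_i, ignoring zero-probability events."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def renyi_entropy(p, r):
    """H_r(A) = (1/(1-r)) log2 sum p_i^r, for 0 < r < inf, r != 1."""
    p = np.asarray(p, dtype=float)
    return np.log2(np.sum(p ** r)) / (1.0 - r)

def von_neumann_entropy(rho):
    """S(rho) = -sum lambda_i log2 lambda_i over eigenvalues of rho."""
    lam = np.linalg.eigvalsh(rho)   # rho is Hermitian
    lam = lam[lam > 1e-12]
    return -np.sum(lam * np.log2(lam))

p = [0.5, 0.25, 0.25]
print(shannon_entropy(p))            # 1.5 bits
print(renyi_entropy(p, 0.999))       # ~1.5: Renyi -> Shannon as r -> 1
rho = np.array([[0.5, 0.0], [0.0, 0.5]])  # maximally mixed qubit
print(von_neumann_entropy(rho))      # 1.0 bit
```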
When we want to describe the information or the uncertainty of a quantum state, the probability distributions are simply replaced by density operators ρ {\displaystyle \rho } : S ( ρ ) ≡ − t r ( ρ log 2 ⁡ ρ ) = − ∑ i λ i log 2 ⁡ λ i , {\displaystyle S(\rho )\equiv -\mathrm {tr} (\rho \ \log _{2}\ \rho )=-\sum _{i}\lambda _{i}\ \log _{2}\ \lambda _{i},} where λ i {\displaystyle \lambda _{i}} are the eigenvalues of ρ {\displaystyle \rho } . Von Neumann entropy plays a role in quantum information similar to the role Shannon entropy plays in classical information. == Applications == === Quantum communication === Quantum communication is one of the applications of quantum physics and quantum information. There are some famous theorems, such as the no-cloning theorem, that illustrate important properties in quantum communication. Dense coding and quantum teleportation are also applications of quantum communication. They are two opposite ways to communicate using qubits. While teleportation transfers one qubit from Alice to Bob by communicating two classical bits, dense coding transfers two classical bits from Alice to Bob by using one qubit; both rely on the assumption that Alice and Bob have a pre-shared Bell state. === Quantum key distribution === One of the best known applications of quantum cryptography is quantum key distribution, which provides a theoretical solution to the security issue of a classical key. The advantage of quantum key distribution is that it is impossible to copy a quantum key because of the no-cloning theorem. If someone tries to read encoded data, the quantum state being transmitted will change. This could be used to detect eavesdropping. ==== BB84 ==== The first quantum key distribution scheme, BB84, was developed by Charles Bennett and Gilles Brassard in 1984. It is usually explained as a method of securely communicating a private key from one party to another for use in one-time pad encryption. ==== E91 ==== E91 was created by Artur Ekert in 1991. His scheme uses entangled pairs of photons. These two photons can be created by Alice, by Bob, or by a third party (including the eavesdropper Eve). One of the photons is distributed to Alice and the other to Bob so that each one ends up with one photon from the pair. This scheme relies on two properties of quantum entanglement: The entangled states are perfectly correlated, which means that if Alice and Bob both measure their particles having either a vertical or horizontal polarization, they always get the same answer with 100% probability. The same is true if they both measure any other pair of complementary (orthogonal) polarizations. This necessitates that the two distant parties have exact directionality synchronization. However, according to quantum mechanics the individual outcomes are completely random, so that it is impossible for Alice to predict whether she will get a vertical or a horizontal polarization result. Any attempt at eavesdropping by Eve destroys this quantum entanglement in a way that Alice and Bob can detect. ==== B92 ==== B92 is a simpler version of BB84. The main difference between B92 and BB84 is that B92 needs only two states, whereas BB84 needs four polarization states. As in BB84, Alice transmits to Bob a string of photons encoded with randomly chosen bits, but this time her bit values determine the bases she must use.
Bob still randomly chooses a basis by which to measure, but if he chooses the wrong basis, he will not measure anything, an outcome guaranteed by quantum mechanics. Bob can simply tell Alice after each bit she sends whether he measured it correctly. === Quantum computation === The most widely used model in quantum computation is the quantum circuit, which is based on the quantum bit, or "qubit". The qubit is somewhat analogous to the bit in classical computation. Qubits can be in a 1 or 0 quantum state, or they can be in a superposition of the 1 and 0 states. However, when qubits are measured, the result of the measurement is always either a 0 or a 1; the probabilities of these two outcomes depend on the quantum state that the qubits were in immediately prior to the measurement. Any quantum computation algorithm can be represented as a network of quantum logic gates. === Quantum decoherence === If a quantum system were perfectly isolated, it would maintain coherence perfectly, but it would be impossible to test the entire system. If it is not perfectly isolated, for example during a measurement, coherence is shared with the environment and appears to be lost with time; this process is called quantum decoherence. As a result of this process, quantum behavior is apparently lost, just as energy appears to be lost by friction in classical mechanics. === Quantum error correction === Quantum error correction (QEC) is used in quantum computing to protect quantum information from errors due to decoherence and other quantum noise. Quantum error correction is essential if one is to achieve fault-tolerant quantum computation that can deal not only with noise on stored quantum information, but also with faulty quantum gates, faulty quantum preparation, and faulty measurements. Peter Shor first discovered this method of formulating a quantum error correcting code by storing the information of one qubit onto a highly entangled state of ancilla qubits. A quantum error correcting code protects quantum information against errors. == Journals == Many journals publish research in quantum information science, although only a few are dedicated to this area. Among these are: International Journal of Quantum Information; npj Quantum Information; Quantum; Quantum Information & Computation; Quantum Information Processing; and Quantum Science and Technology. == See also == == References ==
Wikipedia/Quantum_information_theory
Bass–Serre theory is a part of the mathematical subject of group theory that deals with analyzing the algebraic structure of groups acting by automorphisms on simplicial trees. The theory relates group actions on trees with decomposing groups as iterated applications of the operations of free product with amalgamation and HNN extension, via the notion of the fundamental group of a graph of groups. Bass–Serre theory can be regarded as a one-dimensional version of orbifold theory. == History == Bass–Serre theory was developed by Jean-Pierre Serre in the 1970s and formalized in Trees, Serre's 1977 monograph (developed in collaboration with Hyman Bass) on the subject. Serre's original motivation was to understand the structure of certain algebraic groups whose Bruhat–Tits buildings are trees. However, the theory quickly became a standard tool of geometric group theory and geometric topology, particularly the study of 3-manifolds. Subsequent work of Bass contributed substantially to the formalization and development of basic tools of the theory, and currently the term "Bass–Serre theory" is widely used to describe the subject. Mathematically, Bass–Serre theory builds on exploiting and generalizing the properties of two older group-theoretic constructions: free product with amalgamation and HNN extension. However, unlike the traditional algebraic study of these two constructions, Bass–Serre theory uses the geometric language of covering theory and fundamental groups. Graphs of groups, which are the basic objects of Bass–Serre theory, can be viewed as one-dimensional versions of orbifolds. Apart from Serre's book, the basic treatment of Bass–Serre theory is available in the article of Bass, the article of G. Peter Scott and C. T. C. Wall, and the books of Allen Hatcher, Gilbert Baumslag, Warren Dicks and Martin Dunwoody, and Daniel E. Cohen. == Basic set-up == === Graphs in the sense of Serre === Serre's formalism of graphs is slightly different from the standard formalism from graph theory. Here a graph A consists of a vertex set V, an edge set E, an edge reversal map E → E , e ↦ e ¯ {\displaystyle E\to E,\ e\mapsto {\overline {e}}} such that e ¯ ≠ e {\displaystyle {\overline {e}}\neq e} and e ¯ ¯ = e {\displaystyle {\overline {\overline {e}}}=e} for every e in E, and an initial vertex map o : E → V {\displaystyle o\colon E\to V} . Thus in A every edge e comes equipped with its formal inverse e ¯ {\displaystyle {\overline {e}}} . The vertex o(e) is called the origin or the initial vertex of e, and the vertex o( e ¯ {\displaystyle {\overline {e}}} ) is called the terminus of e and is denoted t(e). Both loop-edges (that is, edges e such that o(e) = t(e)) and multiple edges are allowed. An orientation on A is a partition of E into the union of two disjoint subsets E+ and E− so that for every edge e exactly one of the edges from the pair e, e ¯ {\displaystyle {\overline {e}}} belongs to E+ and the other belongs to E−. === Graphs of groups === A graph of groups A consists of the following data: A connected graph A; An assignment of a vertex group Av to every vertex v of A. An assignment of an edge group Ae to every edge e of A so that we have A e = A e ¯ {\displaystyle A_{e}=A_{\overline {e}}} for every e ∈ E. Boundary monomorphisms α e : A e → A o ( e ) {\displaystyle \alpha _{e}:A_{e}\to A_{o(e)}} for all edges e of A, so that each α e {\displaystyle \alpha _{e}} is an injective group homomorphism. For every e ∈ E {\displaystyle e\in E} the map α e ¯ : A e → A t ( e ) {\displaystyle \alpha _{\overline {e}}\colon A_{e}\to A_{t(e)}} is also denoted by ω e {\displaystyle \omega _{e}} .
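As a brief computational aside, Serre's graph formalism translates directly into a data structure. A rough Python sketch (the class and names are illustrative, not from any standard library) stores the edge-reversal and origin maps, derives the terminus as t(e) = o(ē), and checks that reversal is a fixed-point-free involution:

```python
class SerreGraph:
    """A graph in the sense of Serre: vertex set V, edge set E, an
    edge-reversal involution rev, and an origin map o: E -> V."""

    def __init__(self, vertices, edges, rev, origin):
        self.V = set(vertices)
        self.E = set(edges)
        self.rev = rev     # dict: edge -> reversed edge
        self.o = origin    # dict: edge -> initial vertex
        for e in self.E:
            assert self.rev[e] != e, "reversal must be fixed-point free"
            assert self.rev[self.rev[e]] == e, "reversal must be an involution"
            assert self.o[e] in self.V

    def t(self, e):
        """Terminus of e, defined as the origin of the reversed edge."""
        return self.o[self.rev[e]]

# A single loop-edge at one vertex v, so that o(e) = t(e) = v
# (the shape underlying the HNN extension example further below):
A = SerreGraph(
    vertices={"v"},
    edges={"e", "e_bar"},
    rev={"e": "e_bar", "e_bar": "e"},
    origin={"e": "v", "e_bar": "v"},
)
assert A.t("e") == "v"
```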
=== Fundamental group of a graph of groups === There are two equivalent definitions of the notion of the fundamental group of a graph of groups: the first is a direct algebraic definition via an explicit group presentation (as a certain iterated application of amalgamated free products and HNN extensions), and the second using the language of groupoids. The algebraic definition is easier to state: First, choose a spanning tree T in A. The fundamental group of A with respect to T, denoted π1(A, T), is defined as the quotient of the free product ( ∗ v ∈ V A v ) ∗ F ( E ) {\displaystyle (\ast _{v\in V}A_{v})\ast F(E)} where F(E) is a free group with free basis E, subject to the following relations: e ¯ α e ( g ) e = α e ¯ ( g ) {\displaystyle {\overline {e}}\alpha _{e}(g)e=\alpha _{\overline {e}}(g)} for every e in E and every g ∈ A e {\displaystyle g\in A_{e}} . (The so-called Bass–Serre relation.) e ¯ e = 1 {\displaystyle {\overline {e}}e=1} for every e in E. e = 1 for every edge e of the spanning tree T. There is also a notion of the fundamental group of A with respect to a base-vertex v in V, denoted π1(A, v), which is defined using the formalism of groupoids. It turns out that for every choice of a base-vertex v and every spanning tree T in A the groups π1(A, T) and π1(A, v) are naturally isomorphic. The fundamental group of a graph of groups has a natural topological interpretation as well: it is the fundamental group of a graph of spaces whose vertex spaces and edge spaces have the fundamental groups of the vertex groups and edge groups, respectively, and whose gluing maps induce the homomorphisms of the edge groups into the vertex groups. One can therefore take this as a third definition of the fundamental group of a graph of groups. ==== Fundamental groups of graphs of groups as iterations of amalgamated products and HNN-extensions ==== The group G = π1(A, T) defined above admits an algebraic description in terms of iterated amalgamated free products and HNN extensions. First, form a group B as a quotient of the free product ( ∗ v ∈ V A v ) ∗ F ( E + T ) {\displaystyle (\ast _{v\in V}A_{v})*F(E^{+}T)} subject to the relations e−1αe(g)e = ωe(g) for every e in E+T and every g ∈ A e {\displaystyle g\in A_{e}} . e = 1 for every e in E+T. This presentation can be rewritten as B = ∗ v ∈ V A v / n c l { α e ( g ) = ω e ( g ) , where e ∈ E + T , g ∈ G e } {\displaystyle B=\ast _{v\in V}A_{v}/{\rm {ncl}}\{\alpha _{e}(g)=\omega _{e}(g),{\text{ where }}e\in E^{+}T,g\in G_{e}\}} which shows that B is an iterated amalgamated free product of the vertex groups Av. Then the group G = π1(A, T) has the presentation ⟨ B , E + ( A − T ) | e − 1 α e ( g ) e = ω e ( g ) where e ∈ E + ( A − T ) , g ∈ G e ⟩ , {\displaystyle \langle B,E^{+}(A-T)|e^{-1}\alpha _{e}(g)e=\omega _{e}(g){\text{ where }}e\in E^{+}(A-T),g\in G_{e}\rangle ,} which shows that G = π1(A, T) is a multiple HNN extension of B with stable letters { e | e ∈ E + ( A − T ) } {\displaystyle \{e|e\in E^{+}(A-T)\}} . === Splittings === An isomorphism between a group G and the fundamental group of a graph of groups is called a splitting of G. If the edge groups in the splitting come from a particular class of groups (e.g. finite, cyclic, abelian, etc.), the splitting is said to be a splitting over that class. Thus a splitting where all edge groups are finite is called a splitting over finite groups.
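As a worked instance of the presentation above (a classical example: the resulting group is isomorphic to SL(2,Z)), consider a graph of groups with a single edge, vertex groups Z/4 = ⟨a⟩ and Z/6 = ⟨b⟩, edge group Z/2 = ⟨c⟩, and boundary monomorphisms c ↦ a² and c ↦ b³. Taking T = A kills the edge generator, and the Bass–Serre relation identifies α(c) with ω(c), giving, in LaTeX form:

```latex
\pi_1(\mathbf{A},T)
  \;\cong\; \langle\, a,\, b \mid a^{4}=1,\; b^{6}=1,\; a^{2}=b^{3} \,\rangle
  \;\cong\; \mathbb{Z}/4 \ast_{\mathbb{Z}/2} \mathbb{Z}/6
  \;\cong\; \mathrm{SL}(2,\mathbb{Z}).
```

In the terminology just introduced, this exhibits a splitting of SL(2,Z) over the finite group Z/2.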
Algebraically, a splitting of G with trivial edge groups corresponds to a free product decomposition G = ( ∗ A v ) ∗ F ( X ) {\displaystyle G=(\ast A_{v})\ast F(X)} where F(X) is a free group with free basis X = E+(A−T) consisting of all positively oriented edges (with respect to some orientation on A) in the complement of some spanning tree T of A. === The normal forms theorem === Let g be an element of G = π1(A, T) represented as a product of the form g = a 0 e 1 a 1 … e n a n , {\displaystyle g=a_{0}e_{1}a_{1}\dots e_{n}a_{n},} where e1, ..., en is a closed edge-path in A with the vertex sequence v0, v1, ..., vn = v0 (that is v0=o(e1), vn = t(en) and vi = t(ei) = o(ei+1) for 0 < i < n) and where a i ∈ A v i {\displaystyle a_{i}\in A_{v_{i}}} for i = 0, ..., n. Suppose that g = 1 in G. Then either n = 0 and a0 = 1 in A v 0 {\displaystyle A_{v_{0}}} , or n > 0 and there is some 0 < i < n such that e i + 1 = e ¯ i {\displaystyle e_{i+1}={\overline {e}}_{i}} and a i ∈ ω e i ( A e i ) {\displaystyle a_{i}\in \omega _{e_{i}}(A_{e_{i}})} . The normal forms theorem immediately implies that the canonical homomorphisms Av → π1(A, T) are injective, so that we can think of the vertex groups Av as subgroups of G. Higgins has given a nice version of the normal form using the fundamental groupoid of a graph of groups. This avoids choosing a base point or tree, and has been exploited by Moore. == Bass–Serre covering trees == To every graph of groups A, with a specified choice of a base-vertex, one can associate a Bass–Serre covering tree A ~ {\displaystyle {\tilde {\mathbf {A} }}} , which is a tree that comes equipped with a natural group action of the fundamental group π1(A, v) without edge-inversions. Moreover, the quotient graph A ~ / π 1 ( A , v ) {\displaystyle {\tilde {\mathbf {A} }}/\pi _{1}(\mathbf {A} ,v)} is isomorphic to A. Similarly, if G is a group acting on a tree X without edge-inversions (that is, so that for every edge e of X and every g in G we have g e ≠ e ¯ {\displaystyle ge\neq {\overline {e}}} ), one can define the natural notion of a quotient graph of groups A. The underlying graph A of A is the quotient graph X/G. The vertex groups of A are isomorphic to vertex stabilizers in G of vertices of X and the edge groups of A are isomorphic to edge stabilizers in G of edges of X. Moreover, if X was the Bass–Serre covering tree of a graph of groups A and if G = π1(A, v) then the quotient graph of groups for the action of G on X can be chosen to be naturally isomorphic to A. == Fundamental theorem of Bass–Serre theory == Let G be a group acting on a tree X without inversions. Let A be the quotient graph of groups and let v be a base-vertex in A. Then G is isomorphic to the group π1(A, v) and there is an equivariant isomorphism between the tree X and the Bass–Serre covering tree A ~ {\displaystyle {\tilde {\mathbf {A} }}} . More precisely, there is a group isomorphism σ: G → π1(A, v) and a graph isomorphism j : X → A ~ {\displaystyle j:X\to {\tilde {\mathbf {A} }}} such that for every g in G, for every vertex x of X and for every edge e of X we have j(gx) = g j(x) and j(ge) = g j(e). This result is also known as the structure theorem. One of the immediate consequences is the classic Kurosh subgroup theorem describing the algebraic structure of subgroups of free products.
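To make the covering tree concrete, the following rough Python sketch (illustrative; it encodes cosets by normal-form words) locally builds the Bass–Serre tree of the free product Z/2 ∗ Z/3 ≅ PSL(2,Z) with trivial edge groups, and verifies that vertices of the two types have degrees 2 and 3, in line with the index formulas in the examples below:

```python
from collections import deque

# Bass-Serre tree of PSL(2,Z) = Z/2 * Z/3 = <s> * <t> with trivial
# edge group.  Vertices are cosets wH (H = <s>) and wK (K = <t>),
# where w is a reduced alternating word in s, t, T (T stands for t^-1).

def canon_H(w):   # coset representative of wH: drop a trailing s
    return w[:-1] if w.endswith("s") else w

def canon_K(w):   # coset representative of wK: drop a trailing t or T
    return w[:-1] if w and w[-1] in "tT" else w

def neighbors(vertex):
    kind, w = vertex
    if kind == "H":   # edges wH -- whK for coset reps h in {1, s}
        return {("K", canon_K(w + h)) for h in ("", "s")}
    else:             # edges wK -- wkH for coset reps k in {1, t, t^-1}
        return {("H", canon_H(w + k)) for k in ("", "t", "T")}

# Breadth-first exploration of the tree around the base vertex H.
seen = {("H", "")}
queue = deque([(("H", ""), 0)])
while queue:
    v, depth = queue.popleft()
    if depth < 4:
        for u in neighbors(v):
            if u not in seen:
                seen.add(u)
                queue.append((u, depth + 1))

# Every H-vertex has degree [H:1] = 2 and every K-vertex [K:1] = 3.
assert all(len(neighbors(v)) == (2 if v[0] == "H" else 3) for v in seen)
print(len(seen), "vertices explored")
```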
== Examples == === Amalgamated free product === Consider a graph of groups A consisting of a single non-loop edge e (together with its formal inverse e ¯ {\displaystyle {\overline {e}}} ) with two distinct end-vertices u = o(e) and v = t(e), vertex groups H = Au, K = Av, an edge group C = Ae and the boundary monomorphisms α = α e : C → H , ω = ω e : C → K {\displaystyle \alpha =\alpha _{e}:C\to H,\omega =\omega _{e}:C\to K} . Then T = A is a spanning tree in A and the fundamental group π1(A, T) is isomorphic to the amalgamated free product G = H ∗ C K = H ∗ K / n c l { α ( c ) = ω ( c ) , c ∈ C } . {\displaystyle G=H\ast _{C}K=H\ast K/{\rm {ncl}}\{\alpha (c)=\omega (c),c\in C\}.} In this case the Bass–Serre tree X = A ~ {\displaystyle X={\tilde {\mathbf {A} }}} can be described as follows. The vertex set of X is the set of cosets V X = { g K : g ∈ G } ⊔ { g H : g ∈ G } . {\displaystyle VX=\{gK:g\in G\}\sqcup \{gH:g\in G\}.} Two vertices gK and fH are adjacent in X whenever there exists k ∈ K such that fH = gkH (or, equivalently, whenever there is h ∈ H such that gK = fhK). The G-stabilizer of every vertex of X of type gK is equal to gKg−1 and the G-stabilizer of every vertex of X of type gH is equal to gHg−1. For an edge [gH, ghK] of X its G-stabilizer is equal to ghα(C)h−1g−1. For every c ∈ C and h ∈ H the edges [gH, ghK] and [gH, ghα(c)K] are equal and the degree of the vertex gH in X is equal to the index [H:α(C)]. Similarly, every vertex of type gK has degree [K:ω(C)] in X. === HNN extension === Let A be a graph of groups consisting of a single loop-edge e (together with its formal inverse e ¯ {\displaystyle {\overline {e}}} ), a single vertex v = o(e) = t(e), a vertex group B = Av, an edge group C = Ae and the boundary monomorphisms α = α e : C → B , ω = ω e : C → B {\displaystyle \alpha =\alpha _{e}:C\to B,\omega =\omega _{e}:C\to B} . Then T = v is a spanning tree in A and the fundamental group π1(A, T) is isomorphic to the HNN extension G = ⟨ B , e | e − 1 α ( c ) e = ω ( c ) , c ∈ C ⟩ . {\displaystyle G=\langle B,e|e^{-1}\alpha (c)e=\omega (c),c\in C\rangle .} with the base group B, stable letter e and the associated subgroups H = α(C), K = ω(C) in B. The composition ϕ = ω ∘ α − 1 : H → K {\displaystyle \phi =\omega \circ \alpha ^{-1}:H\to K} is an isomorphism and the above HNN-extension presentation of G can be rewritten as G = ⟨ B , e | e − 1 h e = ϕ ( h ) , h ∈ H ⟩ . {\displaystyle G=\langle B,e|e^{-1}he=\phi (h),h\in H\rangle .\,} In this case the Bass–Serre tree X = A ~ {\displaystyle X={\tilde {\mathbf {A} }}} can be described as follows. The vertex set of X is the set of cosets VX = {gB : g ∈ G}. Two vertices gB and fB are adjacent in X whenever there exists b in B such that either fB = gbeB or fB = gbe−1B. The G-stabilizer of every vertex of X is conjugate to B in G and the stabilizer of every edge of X is conjugate to H in G. Every vertex of X has degree equal to [B : H] + [B : K]. === A graph with the trivial graph of groups structure === Let A be a graph of groups with underlying graph A such that all the vertex and edge groups in A are trivial. Let v be a base-vertex in A. Then π1(A,v) is equal to the fundamental group π1(A,v) of the underlying graph A in the standard sense of algebraic topology and the Bass–Serre covering tree A ~ {\displaystyle {\tilde {\mathbf {A} }}} is equal to the standard universal covering space A ~ {\displaystyle {\tilde {A}}} of A.
Moreover, the action of π1(A,v) on A ~ {\displaystyle {\tilde {\mathbf {A} }}} is exactly the standard action of π1(A,v) on A ~ {\displaystyle {\tilde {A}}} by deck transformations. == Basic facts and properties == If A is a graph of groups with a spanning tree T and if G = π1(A, T), then for every vertex v of A the canonical homomorphism from Av to G is injective. If g ∈ G is an element of finite order then g is conjugate in G to an element of finite order in some vertex group Av. If F ≤ G is a finite subgroup then F is conjugate in G to a subgroup of some vertex group Av. If the graph A is finite and all vertex groups Av are finite then the group G is virtually free, that is, G contains a free subgroup of finite index. If A is finite and all the vertex groups Av are finitely generated then G is finitely generated. If A is finite and all the vertex groups Av are finitely presented and all the edge groups Ae are finitely generated then G is finitely presented. == Trivial and nontrivial actions == A graph of groups A is called trivial if A = T is already a tree and there is some vertex v of A such that Av = π1(A, A). This is equivalent to the condition that A is a tree and that for every edge e = [u, z] of A (with o(e) = u, t(e) = z) such that u is closer to v than z we have [Az : ωe(Ae)] = 1, that is Az = ωe(Ae). An action of a group G on a tree X without edge-inversions is called trivial if there exists a vertex x of X that is fixed by G, that is, such that Gx = x. It is known that an action of G on X is trivial if and only if the quotient graph of groups for that action is trivial. Typically, only nontrivial actions on trees are studied in Bass–Serre theory since trivial graphs of groups do not carry any interesting algebraic information, although trivial actions in the above sense (e.g. actions of groups by automorphisms on rooted trees) may also be interesting for other mathematical reasons. One of the classic and still important results of the theory is a theorem of Stallings about ends of groups. The theorem states that a finitely generated group has more than one end if and only if this group admits a nontrivial splitting over finite subgroups, that is, if and only if the group admits a nontrivial action without inversions on a tree with finite edge stabilizers. An important general result of the theory states that if G is a group with Kazhdan's property (T) then G does not admit any nontrivial splitting, that is, any action of G on a tree X without edge-inversions has a global fixed vertex. == Hyperbolic length functions == Let G be a group acting on a tree X without edge-inversions. For every g ∈ G put ℓ X ( g ) = min { d ( x , g x ) | x ∈ V X } . {\displaystyle \ell _{X}(g)=\min\{d(x,gx)|x\in VX\}.} Then ℓX(g) is called the translation length of g on X. The function ℓ X : G → Z , g ∈ G ↦ ℓ X ( g ) {\displaystyle \ell _{X}:G\to \mathbf {Z} ,\quad g\in G\mapsto \ell _{X}(g)} is called the hyperbolic length function or the translation length function for the action of G on X. === Basic facts regarding hyperbolic length functions === For g ∈ G exactly one of the following holds: (a) ℓX(g) = 0 and g fixes a vertex of X. In this case g is called an elliptic element of G. (b) ℓX(g) > 0 and there is a unique bi-infinite embedded line in X, called the axis of g and denoted Lg, which is g-invariant. In this case g acts on Lg by translation of magnitude ℓX(g) and the element g ∈ G is called hyperbolic. If ℓX(G) ≠ 0 then there exists a unique minimal G-invariant subtree XG of X.
Moreover, XG is equal to the union of axes of hyperbolic elements of G. The length-function ℓX : G → Z is said to be abelian if it is a group homomorphism from G to Z and non-abelian otherwise. Similarly, the action of G on X is said to be abelian if the associated hyperbolic length function is abelian and is said to be non-abelian otherwise. In general, an action of G on a tree X without edge-inversions is said to be minimal if there are no proper G-invariant subtrees in X. An important fact in the theory says that minimal non-abelian tree actions are uniquely determined by their hyperbolic length functions: === Uniqueness theorem === Let G be a group with two non-abelian minimal actions without edge-inversions on trees X and Y. Suppose that the hyperbolic length functions ℓX and ℓY on G are equal, that is ℓX(g) = ℓY(g) for every g ∈ G. Then the actions of G on X and Y are equal in the sense that there exists a graph isomorphism f : X → Y which is G-equivariant, that is f(gx) = g f(x) for every g ∈ G and every x ∈ VX. == Important developments in Bass–Serre theory == Important developments in Bass–Serre theory in the last 30 years include: Various accessibility results for finitely presented groups that bound the complexity (that is, the number of edges) in a graph of groups decomposition of a finitely presented group, where some algebraic or geometric restrictions on the types of groups considered are imposed. These results include: Dunwoody's theorem about accessibility of finitely presented groups stating that for any finitely presented group G there exists a bound on the complexity of splittings of G over finite subgroups (the splittings are required to satisfy a technical assumption of being "reduced"); the Bestvina–Feighn generalized accessibility theorem stating that for any finitely presented group G there is a bound on the complexity of reduced splittings of G over small subgroups (the class of small groups includes, in particular, all groups that do not contain non-abelian free subgroups); acylindrical accessibility results for finitely presented (Sela, Delzant) and finitely generated (Weidmann) groups which bound the complexity of the so-called acylindrical splittings, that is splittings where for their Bass–Serre covering trees the diameters of fixed subsets of nontrivial elements of G are uniformly bounded. The theory of JSJ-decompositions for finitely presented groups. This theory was motivated by the classic notion of JSJ decomposition in 3-manifold topology and was initiated, in the context of word-hyperbolic groups, by the work of Sela. JSJ decompositions are splittings of finitely presented groups over some classes of small subgroups (cyclic, abelian, noetherian, etc., depending on the version of the theory) that provide a canonical description, in terms of some standard moves, of all splittings of the group over subgroups of the class. There are a number of versions of JSJ-decomposition theories: The initial version of Sela for cyclic splittings of torsion-free word-hyperbolic groups. Bowditch's version of JSJ theory for word-hyperbolic groups (with possible torsion) encoding their splittings over virtually cyclic subgroups. The version of Rips and Sela of JSJ decompositions of torsion-free finitely presented groups encoding their splittings over free abelian subgroups. The version of Dunwoody and Sageev of JSJ decompositions of finitely presented groups over noetherian subgroups.
The version of Fujiwara and Papasoglu, also of JSJ decompositions of finitely presented groups over noetherian subgroups. A version of JSJ decomposition theory for finitely presented groups developed by Scott and Swarup. The theory of lattices in automorphism groups of trees. The theory of tree lattices was developed by Bass, Kulkarni and Lubotzky by analogy with the theory of lattices in Lie groups (that is, discrete subgroups of Lie groups of finite co-volume). For a discrete subgroup G of the automorphism group of a locally finite tree X one can define a natural notion of volume for the quotient graph of groups A as v o l ( A ) = ∑ v ∈ V 1 | A v | . {\displaystyle vol(\mathbf {A} )=\sum _{v\in V}{\frac {1}{|A_{v}|}}.} The group G is called an X-lattice if vol(A) < ∞. The theory of tree lattices turns out to be useful in the study of discrete subgroups of algebraic groups over non-archimedean local fields and in the study of Kac–Moody groups. Development of foldings and Nielsen methods for approximating group actions on trees and analyzing their subgroup structure. The theory of ends and relative ends of groups, particularly various generalizations of Stallings theorem about groups with more than one end. Quasi-isometric rigidity results for groups acting on trees. == Generalizations == There have been several generalizations of Bass–Serre theory: The theory of complexes of groups (see Haefliger, Corson, and Bridson–Haefliger) provides a higher-dimensional generalization of Bass–Serre theory. The notion of a graph of groups is replaced by that of a complex of groups, where groups are assigned to each cell in a simplicial complex, together with monomorphisms between these groups corresponding to face inclusions (these monomorphisms are required to satisfy certain compatibility conditions). One can then define an analog of the fundamental group of a graph of groups for a complex of groups. However, in order for this notion to have good algebraic properties (such as embeddability of the vertex groups in it) and in order for a good analog for the notion of the Bass–Serre covering tree to exist in this context, one needs to require some sort of "non-positive curvature" condition for the complex of groups in question. The theory of isometric group actions on real trees (or R-trees), which are metric spaces generalizing the graph-theoretic notion of a tree. The theory was developed largely in the 1990s, when the Rips machine of Eliyahu Rips on the structure theory of stable group actions on R-trees played a key role (see Bestvina–Feighn). This structure theory assigns to a stable isometric action of a finitely generated group G a certain "normal form" approximation of that action by a stable action of G on a simplicial tree and hence a splitting of G in the sense of Bass–Serre theory. Group actions on real trees arise naturally in several contexts in geometric topology: for example as boundary points of the Teichmüller space (every point in the Thurston boundary of the Teichmüller space is represented by a measured geodesic lamination on the surface; this lamination lifts to the universal cover of the surface and a naturally dual object to that lift is an R-tree endowed with an isometric action of the fundamental group of the surface), as Gromov-Hausdorff limits of, appropriately rescaled, Kleinian group actions, and so on. The use of R-trees machinery provides substantial shortcuts in modern proofs of Thurston's Hyperbolization Theorem for Haken 3-manifolds.
Similarly, R-trees play a key role in the study of Culler-Vogtmann's Outer space as well as in other areas of geometric group theory; for example, asymptotic cones of groups often have a tree-like structure and give rise to group actions on real trees. The use of R-trees, together with Bass–Serre theory, is a key tool in the work of Sela on solving the isomorphism problem for (torsion-free) word-hyperbolic groups, Sela's version of the JSJ-decomposition theory and the work of Sela on the Tarski Conjecture for free groups and the theory of limit groups. The theory of group actions on Λ-trees, where Λ is an ordered abelian group (such as R or Z) provides a further generalization of both the Bass–Serre theory and the theory of group actions on R-trees (see Morgan, Alperin-Bass, Chiswell). == See also == Geometric group theory == References ==
Wikipedia/Bass–Serre_theory
Forced perspective is a technique that employs optical illusion to make an object appear farther away, closer, larger or smaller than it actually is. It manipulates human visual perception through the use of scaled objects and the correlation between them and the vantage point of the spectator or camera. It has uses in photography, filmmaking and architecture. == In filmmaking == Forced perspective had been a feature of German silent films, and Citizen Kane revived the practice. Movies, especially B-movies in the 1950s and 1960s, were produced on limited budgets and often featured forced perspective shots. Forced perspective can be made more believable when environmental conditions obscure the difference in perspective. For example, the final scene of the famous movie Casablanca takes place at an airport in the middle of a storm, although the entire scene was shot in a studio. This was accomplished by using a painted backdrop of an aircraft, which was "serviced" by dwarfs standing next to the backdrop. A downpour (created in the studio) draws much of the viewer's attention away from the backdrop and extras, making the simulated perspective less noticeable. === Role of light === Early instances of forced perspective used in low-budget motion pictures showed objects that were clearly different from their surroundings, often blurred or at a different light level. The principal cause of this was geometric. Light from a point source travels in a spherical wave, decreasing in intensity (or illuminance) as the inverse square of the distance travelled. This means that a light source must be four times as bright to produce the same illuminance at an object twice as far away. Thus to create the illusion of a distant object being at the same distance as a near object and scaled accordingly, much more light is required. When shooting with forced perspective, it's important to have the aperture stopped down sufficiently to achieve proper depth of field (DOF), so that the foreground object and background are both sharp. Since miniature models would need to be subjected to far greater lighting than the main focus of the camera, the area of action, it is important to ensure that these can withstand the significant heat generated by the incandescent light sources typically used in film and TV production. === In motion === Peter Jackson's film adaptations of The Lord of the Rings make extended use of forced perspective. Characters apparently standing next to each other would be displaced by several feet in depth from the camera. This, in a still shot, makes some characters (Dwarves and Hobbits) appear much smaller than others. If the camera's point of view were moved, then parallax would reveal the true relative positions of the characters in space. Even if the camera is just rotated, its point of view may move accidentally if the camera is not rotated about the correct point. This point of view is called the 'zero-parallax-point' (or front nodal point), and is approximated in practice as the centre of the entrance pupil. An extensively used technique in The Lord of the Rings: The Fellowship of the Ring was an enhancement of this principle, which could be used in moving shots. Portions of sets were mounted on movable platforms which would move precisely according to the movement of the camera, so that the optical illusion would be preserved at all times for the duration of the shot. The same techniques were used in the Harry Potter movies to make the character Rubeus Hagrid look like a giant. 
Props around Harry and his friends are of normal size, while seemingly identical props placed around Hagrid are in fact smaller. === Comic effects === As with many film genres and effects, forced perspective can be used to visual-comedy effect. Typically, when an object or character is portrayed in a scene, its size is defined by its surroundings. A character then interacts with the object or character, in the process showing that the viewer has been fooled and there is forced perspective in use. The 1930 Laurel and Hardy movie Brats used forced perspective to depict Stan and Ollie simultaneously as adults and as their own sons. An example used for comic effect can be found in the slapstick comedy Top Secret! in a scene which appears to begin as a close-up of a ringing phone with the characters in the distance. However, when the character walks up to the phone (towards the camera) and picks it up, it becomes apparent that the phone is extremely oversized instead of being close to the camera. Another scene in the same movie begins with a close-up of a wristwatch. The next cut shows that the character actually has a gargantuan wristwatch. The same technique is also used in the Dennis Waterman sketch in the British BBC sketch show Little Britain. In the television version, larger-than-life props are used to make the caricatured Waterman look just three feet tall or less. In History of the World, Part I, while escaping the French peasants, Mel Brooks' character, Jacques, who is doubling for King Louis, runs down a hall of the palace, which turns into a ramp, showing the smaller forced-perspective door at the end. As he backs down into the normal part of the room, he mutters, "Who designed this place?" One of the recurring The Kids in the Hall sketches featured Mr. Tyzik, "The Headcrusher", who used forced perspective (from his own point of view) to "crush" other people's heads between his fingers. This is also done by the character Sheldon Cooper in the TV show The Big Bang Theory to his friends when they displease him. In the making of Season 5 of Red vs. Blue, the creators used forced perspective to make the character of Tucker's baby, Junior, look small. In the game, the alien character used as Junior is the same height as other characters. The short-lived 2013 Internet meme "baby mugging" used forced perspective to make babies look like they were inside items like mugs and teacups. == In architecture == In architecture, a structure can be made to seem larger, taller, farther away or otherwise by adjusting the scale of objects in relation to the spectator, increasing or decreasing perceived depth. When forced perspective is supposed to make an object appear farther away, the following method can be used: by constantly decreasing the scale of objects from expectancy and convention toward the farthest point from the spectator, an illusion is created that the scale of said objects is decreasing due to their distant location. In contrast, the opposite technique was sometimes used in classical garden designs and other follies to shorten the perceived distances of points of interest along a path. The Statue of Liberty is built with a slight forced perspective so that it appears more correctly proportioned when viewed from its base. When the statue was designed in the late 19th century (before easy air flight), there were few other angles from which to view the statue.
This caused a difficulty for special effects technicians working on the movie Ghostbusters II, who had to reduce the amount of forced perspective used when replicating the statue for the movie so that their model (which was photographed head-on) would not look top-heavy. This effect can also be seen in Michelangelo's statue of David. == Through depth perception == The technique takes advantage of the visual cues humans use to perceive depth, such as angular size, aerial perspective, shading, and relative size. In film, photography and art, perceived object distance is manipulated by altering fundamental monocular cues used to discern the depth of an object in the scene, such as aerial perspective, blurring, relative size and lighting. Using these monocular cues in concert with angular size, the eyes can perceive the distance of an object. Artists are able to freely move the visual plane of objects by obscuring these cues to their advantage. Increasing an object's distance from the audience makes it appear smaller; its apparent size decreases as the distance increases. This is the manipulation of angular, and hence apparent, size. The Ames room attraction in some museums and amusement parks takes advantage of distance to make people appear different sizes in corners of a room that appears rectangular to the viewer. A person perceives the size of an object based on the size of the object's image on the retina. This depends solely on the angle created by the rays coming from the topmost and bottommost part of the object that pass through the center of the lens of the eye. The larger the angle an object subtends, the larger the apparent size of the object. The subtended angle increases as the object moves closer to the lens. Two objects with different actual sizes have the same apparent size when they subtend the same angle. Similarly, two objects of the same actual size can have drastically varying apparent size when they are moved to different distances from the lens. === Calculating angular size === The formula for calculating angular size is as follows: θ = 2 ⋅ arctan ⁡ h 2 D {\displaystyle \theta =2\cdot \arctan {\frac {h}{2D}}} in which θ is the subtended angle, h is the actual size of the object and D is the distance from the lens to the object. === Techniques employed === Solely manipulating angular size by moving objects closer and farther away cannot fully trick the eye. Objects that are farther away from the eye have a lower luminance contrast due to atmospheric scattering of rays. Fewer rays of light reach the eye from more distant objects. Using the monocular cue of aerial perspective, the eye uses the relative luminance of objects in a scene to discern relative distance. Filmmakers and photographers combat this cue by manually increasing the luminance of objects farther away to equal that of objects in the desired plane. This effect is achieved by making the more distant object brighter by shining more light on it. Because illuminance falls off as the inverse square of the distance d from the light source (as noted above), artists can calculate the exact amount of light needed to counter the cue of aerial perspective. Similarly, blurring can create the opposite effect by giving the impression of depth. Selectively blurring an object moves it out of its original visual plane without having to manually move the object.
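The two calculations described above, matching angular size and compensating illumination, can be sketched concretely. In the following hedged Python example (the numbers and function names are purely illustrative), a replica placed at distance D2 must be scaled by the factor D2/D1 to subtend the same angle as the original at distance D1, and, by the inverse-square law noted earlier, it needs roughly (D2/D1)² times the light to match in brightness:

```python
import math

def angular_size(h, D):
    """theta = 2 * arctan(h / (2 * D)), in radians."""
    return 2.0 * math.atan(h / (2.0 * D))

def replica_height(h, D1, D2):
    """Height a replica at distance D2 needs in order to subtend the
    same angle as an object of height h at distance D1.  The tangent
    relation makes this exactly h * D2 / D1."""
    return 2.0 * D2 * math.tan(angular_size(h, D1) / 2.0)

def light_ratio(D1, D2):
    """Inverse-square compensation factor for the farther object."""
    return (D2 / D1) ** 2

# A 2 m tall actor at 4 m, and a "giant" replica prop placed at 12 m:
h2 = replica_height(2.0, 4.0, 12.0)
assert math.isclose(angular_size(2.0, 4.0), angular_size(h2, 12.0))
print(h2)                      # 6.0 m replica subtends the same angle
print(light_ratio(4.0, 12.0))  # 9.0x the light to match brightness
```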
A perceptual principle that is often exploited in film is the idea of Gestalt psychology, which holds that people tend to perceive the whole of an object rather than the sum of its individual parts. Another monocular cue of depth perception is that of lighting and shading. Shading in a scene or on an object allows the audience to locate the light source relative to the object. Making two objects at different distances have the same shading gives the impression that they are in similar positions relative to the light source; therefore, they appear closer to each other than they actually are. Artists may also employ the simpler technique of manipulating relative size. Once the audience becomes acquainted with the size of an object in proportion to the rest of the objects in a scene, the photographer or filmmaker can replace the object with a larger or smaller replica to change another part of the scene's apparent size. This is done frequently in movies. For example, to aid in the appearance of a person as a giant next to a "regular sized" person, a filmmaker might have a shot of two identical glasses together, then follow with the person who is supposed to play the giant holding a much smaller replica of the glass and the person who is playing the regular-sized person holding a much larger replica. Because the audience sees that the glasses are the same size in the original shot, the difference in relation to the two characters allows the audience to perceive the characters as different sizes based on their relative size to the glasses they hold. A painter can give the illusion of distance by adding blue or red tinting to the color of the object being painted. This monocular cue takes advantage of the trend for the color of distant objects to shift towards the blue end of the spectrum, while the colors of closer objects shift toward the red end of the spectrum. The optical phenomenon is known as chromostereopsis. === Examples === ==== In film ==== Forced perspective has been employed to realize characters in film. One notable example is Rubeus Hagrid, the half-giant in the Harry Potter series. The technique is used in the Lord of the Rings series for depicting the apparent heights of the hobbit characters, such as Frodo, who are supposed to be around half the height or less of the humans and wizards, such as Gandalf. In reality, the difference in height between the respective actors playing those roles is only 5 inches (13 cm): Elijah Wood, as the hobbit Frodo, is 5 ft 6 in (1.68 m) tall, and Ian McKellen, as the wizard Gandalf, is 5 ft 11 in (1.80 m). The use of camera angles and trick scenery and props creates the illusion of a much greater difference in size and height. Numerous camera angle tricks are played in the comedy film Elf (2003) to make the elf characters in the movie appear smaller than the human characters. ==== In art ==== In his painting entitled Still life with a curtain, Paul Cézanne creates the illusion of depth by using brighter colors on objects closer to the viewer and dimmer colors and shading to distance the "light source" from objects that he wanted to appear farther away. His shading technique allows the audience to discern the distance between objects due to their relative distances from a stationary light source that illuminates the scene. Furthermore, he uses a blue tint on objects that should be farther away and a redder tint on objects in the foreground. ==== Full size dioramas ==== Modern museum dioramas may be seen in most major natural history museums.
Typically, these displays use a tilted plane to represent what would otherwise be a level surface, incorporate a painted background of distant objects, and often employ false perspective, carefully modifying the scale of objects placed on the plane. This reinforces the illusion through depth perception: objects of identical real-world size placed farther from the observer appear smaller than those closer. Often the distant painted background or sky will be painted upon a continuous curved surface so that the viewer is not distracted by corners, seams, or edges. All of these techniques are means of presenting a realistic view of a large scene in a compact space. A photograph or single-eye view of such a diorama can be especially convincing since in this case there is no distraction by the binocular perception of depth. Carl Akeley, a naturalist, sculptor, and taxidermist, is credited with creating the first habitat diorama, in 1889. Akeley's diorama featured taxidermied beavers in a three-dimensional habitat with a realistic, painted background. With the support of curator Frank M. Chapman, Akeley designed the popular habitat dioramas featured at the American Museum of Natural History. Combining art with science, these exhibitions were intended to educate the public about the growing need for habitat conservation. The modern AMNH Exhibitions Lab is charged with the creation of all dioramas and otherwise immersive environments in the museum. ==== Theme parks ==== Forced perspective is extensively employed at theme parks and other such architecture as found in Disneyland and Las Vegas, often to make structures seem larger than they are in reality where physically larger structures would not be feasible or desirable, or to otherwise provide an optical illusion for entertainment value. Most notably, it is used by Walt Disney Imagineering in the Disney theme parks. Some notable examples of forced perspective in the parks, used to make the objects bigger, are the castles (Sleeping Beauty, Cinderella, Belle, Magical Dreams, and Enchanted Storybook). One of the most notable examples of forced perspective being used to make the object appear smaller is The American Adventure pavilion in Epcot. == See also == Ames room Anamorphosis Depth perception Perspective distortion Trompe-l'œil Vista paradox == References == == External links == Media related to Forced perspectives at Wikimedia Commons
Wikipedia/Forced_perspective
In the mathematical subject of group theory, small cancellation theory studies groups given by group presentations satisfying small cancellation conditions, that is, conditions under which the defining relations have "small overlaps" with each other. Small cancellation conditions imply algebraic, geometric and algorithmic properties of the group. Finitely presented groups satisfying sufficiently strong small cancellation conditions are word hyperbolic and have word problem solvable by Dehn's algorithm. Small cancellation methods are also used for constructing Tarski monsters, and for solutions of Burnside's problem. == History == Some of the ideas underlying small cancellation theory go back to the work of Max Dehn in the 1910s. Dehn proved that fundamental groups of closed orientable surfaces of genus at least two have word problem solvable by what is now called Dehn's algorithm. His proof involved drawing the Cayley graph of such a group in the hyperbolic plane and performing curvature estimates via the Gauss–Bonnet theorem for a closed loop in the Cayley graph to conclude that such a loop must contain a large portion (more than a half) of a defining relation. A 1949 paper of Tartakovskii was an immediate precursor for small cancellation theory: this paper provided a solution of the word problem for a class of groups satisfying a complicated set of combinatorial conditions, where small cancellation type assumptions played a key role. The standard version of small cancellation theory, as it is used today, was developed by Martin Greendlinger in a series of papers in the early 1960s; he dealt primarily with the "metric" small cancellation conditions. In particular, Greendlinger proved that finitely presented groups satisfying the C′(1/6) small cancellation condition have word problem solvable by Dehn's algorithm. The theory was further refined and formalized in the subsequent work of Lyndon, Schupp and Lyndon–Schupp, who also treated the case of non-metric small cancellation conditions and developed a version of small cancellation theory for amalgamated free products and HNN-extensions. Small cancellation theory was further generalized by Alexander Ol'shanskii, who developed a "graded" version of the theory where the set of defining relations comes equipped with a filtration and where a defining relator of a particular grade is allowed to have a large overlap with a defining relator of a higher grade. Ol'shanskii used graded small cancellation theory to construct various "monster" groups, including the Tarski monster, and also to give a new proof that free Burnside groups of large odd exponent are infinite (this result was originally proved by Adian and Novikov in 1968 using more combinatorial methods). Small cancellation theory supplied a basic set of examples and ideas for the theory of word-hyperbolic groups that was put forward by Gromov in a seminal 1987 monograph "Hyperbolic groups". == Main definitions == The exposition below largely follows Ch. V of the book of Lyndon and Schupp. === Pieces === Let G = ⟨ X ∣ R ⟩ ( ∗ ) {\displaystyle G=\langle X\mid R\rangle \qquad (*)} be a group presentation where R ⊆ F(X) is a set of freely reduced and cyclically reduced words in the free group F(X) such that R is symmetrized, that is, closed under taking cyclic permutations and inverses. A nontrivial freely reduced word u in F(X) is called a piece with respect to (∗) if there exist two distinct elements r1, r2 in R that have u as maximal common initial segment.
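To make the definition concrete, the following Python sketch (illustrative only; the encoding and helper names are our own) represents words as strings in which an uppercase letter denotes the inverse of the corresponding lowercase generator, builds the symmetrized closure of a set of relators, and lists the pieces as maximal common initial segments of pairs of distinct relators.

```python
# Words in the free group F(X) are encoded as strings; an uppercase
# letter denotes the inverse of the lowercase generator, e.g. "A" = a^(-1).

def inverse(w):
    """Inverse of a word: reverse it and swap the case of every letter."""
    return w[::-1].swapcase()

def symmetrize(relators):
    """Symmetrized closure: all cyclic permutations of the relators
    and of their inverses."""
    closure = set()
    for r in relators:
        for s in (r, inverse(r)):
            closure.update(s[i:] + s[:i] for i in range(len(s)))
    return closure

def pieces(symmetrized):
    """Maximal common initial segments of pairs of distinct relators."""
    rels = sorted(symmetrized)
    found = set()
    for r1 in rels:
        for r2 in rels:
            if r1 == r2:
                continue
            k = 0  # length of the longest common prefix of r1 and r2
            while k < min(len(r1), len(r2)) and r1[k] == r2[k]:
                k += 1
            if k > 0:
                found.add(r1[:k])
    return found

# Standard presentation of the free abelian group of rank two,
# with the single relator a b a^(-1) b^(-1):
R = symmetrize(["abAB"])
print(sorted(pieces(R)))  # ['A', 'B', 'a', 'b'] -- every piece has length 1
```

This matches the example treated below: for the symmetrized closure of aba⁻¹b⁻¹ the only pieces are single letters.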
Note that if G = ⟨ X ∣ S ⟩ {\displaystyle G=\langle X\mid S\rangle } is a group presentation where the set of defining relators S is not symmetrized, we can always take the symmetrized closure R of S, where R consists of all cyclic permutations of elements of S and S−1. Then R is symmetrized and G = ⟨ X ∣ R ⟩ {\displaystyle G=\langle X\mid R\rangle } is also a presentation of G. === Metric small cancellation conditions === Let 0 < λ < 1. A presentation (∗) as above is said to satisfy the C′(λ) small cancellation condition if whenever u is a piece with respect to (∗) and u is a subword of some r ∈ R, then |u| < λ|r|. Here |v| is the length of a word v. The condition C′(λ) is sometimes called a metric small cancellation condition. === Non-metric small cancellation conditions === Let p ≥ 3 be an integer. A group presentation (∗) as above is said to satisfy the C(p) small cancellation condition if whenever r ∈ R and r = u 1 … u m {\displaystyle r=u_{1}\dots u_{m}} where ui are pieces and where the above product is freely reduced as written, then m ≥ p. That is, no defining relator can be written as a reduced product of fewer than p pieces. Let q ≥ 3 be an integer. A group presentation (∗) as above is said to satisfy the T(q) small cancellation condition if whenever 3 ≤ t < q and r1,...,rt in R are such that r1 ≠ r2−1,..., rt ≠ r1−1 then at least one of the products r1r2,...,rt−1rt, rtr1 is freely reduced as written. Geometrically, condition T(q) essentially means that if D is a reduced van Kampen diagram over (∗) then every interior vertex of D of degree at least three actually has degree at least q. === Examples === Let G = ⟨ a , b ∣ a b a − 1 b − 1 ⟩ {\displaystyle G=\langle a,b\mid aba^{-1}b^{-1}\rangle } be the standard presentation of the free abelian group of rank two. Then for the symmetrized closure of this presentation the only pieces are words of length 1. This symmetrized form satisfies the C(4)–T(4) small cancellation conditions and the C′(λ) condition for any λ with 1/4 < λ < 1. Let G = ⟨ a 1 , b 1 , … , a k , b k ∣ [ a 1 , b 1 ] ⋅ ⋯ ⋅ [ a k , b k ] ⟩ {\displaystyle G=\langle a_{1},b_{1},\dots ,a_{k},b_{k}\mid [a_{1},b_{1}]\cdot \dots \cdot [a_{k},b_{k}]\rangle } , where k ≥ 2, be the standard presentation of the fundamental group of a closed orientable surface of genus k. Then for the symmetrization of this presentation the only pieces are words of length 1, and this symmetrization satisfies the C′(1/7) and C(8) small cancellation conditions. Let G = ⟨ a , b ∣ a b a b 2 a b 3 … a b 100 ⟩ {\displaystyle G=\langle a,b\mid abab^{2}ab^{3}\dots ab^{100}\rangle } . Then, up to inversion, every piece for the symmetrized version of this presentation has the form biabj or bi, where 0 ≤ i,j ≤ 100. This symmetrization satisfies the C′(1/20) small cancellation condition. If a symmetrized presentation satisfies the C′(1/m) condition then it also satisfies the C(m) condition. Let r ∈ F(X) be a nontrivial cyclically reduced word which is not a proper power in F(X) and let n ≥ 2. Then the symmetrized closure of the presentation G = ⟨ X ∣ r n ⟩ {\displaystyle G=\langle X\mid r^{n}\rangle } satisfies the C(2n) and C′(1/n) small cancellation conditions.
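Building on the sketch above, the metric condition C′(λ) can be tested mechanically. Since the symmetrized set is closed under cyclic permutations, every subword of a relator occurs as a prefix of some relator of the same length, so it suffices to inspect prefixes; a prefix is a piece exactly when at least two distinct relators begin with it. (Again a rough illustration under the same encoding as before, not an optimized implementation.)

```python
def satisfies_C_prime(symmetrized, lam):
    """Check the metric condition C'(lam): every piece u occurring as a
    subword of a relator r must satisfy |u| < lam * |r|.  The set being
    symmetrized, it is enough to test prefixes of each relator."""
    rels = sorted(symmetrized)
    for r in rels:
        for k in range(1, len(r) + 1):
            prefix = r[:k]
            if sum(s.startswith(prefix) for s in rels) >= 2:  # a piece
                if not k < lam * len(r):
                    return False
    return True

R = symmetrize(["abAB"])            # free abelian group of rank two
print(satisfies_C_prime(R, 0.26))   # True:  1 < 0.26 * 4
print(satisfies_C_prime(R, 0.25))   # False: C'(1/4) just fails, since 1 < 1 is false
```

The two printed lines reproduce the first example above: the presentation satisfies C′(λ) precisely for λ > 1/4.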
== Basic results of small cancellation theory == === Greendlinger's lemma === The main result regarding the metric small cancellation condition is the following statement (see Theorem 4.4 in Ch. V of the book of Lyndon and Schupp), which is usually called Greendlinger's lemma: Let (∗) be a group presentation as above satisfying the C′(λ) small cancellation condition where 0 ≤ λ ≤ 1/6. Let w ∈ F(X) be a nontrivial freely reduced word such that w = 1 in G. Then there is a subword v of w and a defining relator r ∈ R such that v is also a subword of r and such that | v | > ( 1 − 3 λ ) | r | {\displaystyle \left|v\right|>\left(1-3\lambda \right)\left|r\right|} Note that the assumption λ ≤ 1/6 implies that (1 − 3λ) ≥ 1/2, so that w contains a subword constituting more than half of some defining relator. Greendlinger's lemma is obtained as a corollary of the following geometric statement: Under the assumptions of Greendlinger's lemma, let D be a reduced van Kampen diagram over (∗) with a cyclically reduced boundary label such that D contains at least two regions. Then there exist two distinct regions D1 and D2 in D such that for j = 1, 2 the region Dj intersects the boundary cycle ∂D of D in a simple arc whose length is bigger than (1 − 3λ)|∂Dj|. This result in turn is proved by considering a dual diagram for D. There one defines a combinatorial notion of curvature (which, by the small cancellation assumptions, is negative at every interior vertex), and one then obtains a combinatorial version of the Gauss–Bonnet theorem. Greendlinger's lemma is proved as a consequence of this analysis, and in this way the proof evokes the ideas of the original proof of Dehn for the case of surface groups. === Dehn's algorithm === For any symmetrized group presentation (∗), the following abstract procedure is called Dehn's algorithm: Given a freely reduced word w on X±1, construct a sequence of freely reduced words w = w0, w1, w2,..., as follows. Suppose wj is already constructed. If it is the empty word, terminate the algorithm. Otherwise check if wj contains a subword v such that v is also a subword of some defining relator r = vu ∈ R such that |v| > |r|/2. If no, terminate the algorithm with output wj. If yes, replace v by u−1 in wj, then freely reduce; denote the resulting freely reduced word by wj+1 and go to the next step of the algorithm. Note that we always have |w0| > |w1| > |w2| > ..., which implies that the process must terminate in at most |w| steps. Moreover, all the words wj represent the same element of G as does w, and hence if the process terminates with the empty word, then w represents the identity element of G. One says that for a symmetrized presentation (∗) Dehn's algorithm solves the word problem in G if the converse is also true, that is, if for any freely reduced word w in F(X) this word represents the identity element of G if and only if Dehn's algorithm, starting from w, terminates in the empty word. Greendlinger's lemma implies that for a C′(1/6) presentation Dehn's algorithm solves the word problem. If a C′(1/6) presentation (∗) is finite (that is, both X and R are finite), then Dehn's algorithm is an actual non-deterministic algorithm in the sense of recursion theory. However, even if (∗) is an infinite C′(1/6) presentation, Dehn's algorithm, understood as an abstract procedure, still correctly decides whether or not a word in the generators X±1 represents the identity element of G.
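The abstract procedure lends itself to a direct implementation. The following Python sketch (reusing inverse and symmetrize from the sketches above; a deterministic refinement of the non-deterministic procedure, since it always applies the first replacement found) runs Dehn's algorithm on a freely reduced input word.

```python
def free_reduce(w):
    """Freely reduce a word by repeatedly cancelling adjacent
    mutually inverse letters."""
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def dehn_algorithm(w, symmetrized):
    """Dehn's algorithm: while w contains a subword v that is more than
    half of some relator r = v u, replace v by u^(-1) and freely reduce.
    Returns the final word; "" means w = 1 in G was detected."""
    w = free_reduce(w)
    while w:
        replaced = False
        for r in sorted(symmetrized):
            # candidate prefixes v of r with |v| > |r| / 2, longest first
            for k in range(len(r), len(r) // 2, -1):
                v, u = r[:k], r[k:]
                i = w.find(v)
                if i >= 0:
                    w = free_reduce(w[:i] + inverse(u) + w[i + k:])
                    replaced = True
                    break
            if replaced:
                break
        if not replaced:
            return w  # no subword covers more than half of a relator
    return w

# Genus-2 surface group <a,b,c,d | [a,b][c,d]>, a C'(1/7) presentation:
R = symmetrize(["abABcdCD"])
print(dehn_algorithm("a" + "abABcdCD" + "A", R))  # "" -- the conjugate is trivial
print(dehn_algorithm("abc", R))  # "abc" -- nonempty output, so abc is not 1 in G
```

Since this presentation satisfies C′(1/6), Greendlinger's lemma guarantees that the output is empty exactly when the input represents the identity.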
=== Asphericity === Let (∗) be a C′(1/6) or, more generally, C(6) presentation where every r ∈ R is not a proper power in F(X). Then G is aspherical in the following sense. Consider a minimal subset S of R such that the symmetrized closure of S is equal to R. Thus if r and s are distinct elements of S then r is not a cyclic permutation of s±1, and G = ⟨ X ∣ S ⟩ {\displaystyle G=\langle X\mid S\rangle } is another presentation for G. Let Y be the presentation complex for this presentation. Then, under the above assumptions on (∗), Y is a classifying space for G; that is, G = π1(Y) and the universal cover of Y is contractible. In particular, this implies that G is torsion-free and has cohomological dimension two. === More general curvature === More generally, it is possible to define various sorts of local "curvature" on any van Kampen diagram to be — very roughly — the average excess of vertices + faces − edges (which, by Euler's formula, must total 2) and, by showing, in a particular group, that this is always non-positive (or, even better, negative) internally, show that the curvature must all be on or near the boundary and thereby try to obtain a solution of the word problem. Furthermore, one can restrict attention to diagrams that do not contain any of a set of "regions" such that there is a "smaller" region with the same boundary. === Other basic properties of small cancellation groups === Let (∗) be a C′(1/6) presentation. Then an element g in G has order n > 1 if and only if there is a relator r in R of the form r = sn in F(X) such that g is conjugate to s in G. In particular, if all elements of R are not proper powers in F(X) then G is torsion-free. If (∗) is a finite C′(1/6) presentation, the group G is word-hyperbolic. If R and S are finite symmetrized subsets of F(X) with equal normal closures in F(X) such that both presentations ⟨ X ∣ R ⟩ {\displaystyle \langle X\mid R\rangle } and ⟨ X ∣ S ⟩ {\displaystyle \langle X\mid S\rangle } satisfy the C′(1/6) condition then R = S. If a finite presentation (∗) satisfies one of C′(1/6), C′(1/4)–T(4), C(6), C(4)–T(4), C(3)–T(6), then the group G has solvable word problem and solvable conjugacy problem. == Applications == Examples of applications of small cancellation theory include: Solution of the conjugacy problem for groups of alternating knots (see Chapter V, Theorem 8.5 in the book of Lyndon and Schupp), via showing that for such knots augmented knot groups admit C(4)–T(4) presentations. Finitely presented C′(1/6) small cancellation groups are basic examples of word-hyperbolic groups. One of the equivalent characterizations of word-hyperbolic groups is as those admitting finite presentations where Dehn's algorithm solves the word problem. Finitely presented groups given by finite C(4)–T(4) presentations where every piece has length one are basic examples of CAT(0) groups: for such a presentation the universal cover of the presentation complex is a CAT(0) square complex. Early applications of small cancellation theory involve obtaining various embeddability results. Examples include a 1974 paper of Sacerdote and Schupp with a proof that every one-relator group with at least three generators is SQ-universal, and a 1976 paper of Schupp with a proof that every countable group can be embedded into a simple group generated by an element of order two and an element of order three. The so-called Rips construction, due to Eliyahu Rips, provides a rich source of counter-examples regarding various subgroup properties of word-hyperbolic groups: Given an arbitrary finitely presented group Q, the construction produces a short exact sequence 1 → K → G → Q → 1 {\displaystyle 1\to K\to G\to Q\to 1} where K is two-generated and where G is torsion-free and given by a finite C′(1/6) presentation (and thus G is word-hyperbolic). The construction yields proofs of unsolvability of several algorithmic problems for word-hyperbolic groups, including the subgroup membership problem, the generation problem and the rank problem.
Also, with a few exceptions, the group K in the Rips construction is not finitely presentable. This implies that there exist word-hyperbolic groups that are not coherent, that is, which contain subgroups that are finitely generated but not finitely presentable. Small cancellation methods (for infinite presentations) were used by Ol'shanskii to construct various "monster" groups, including the Tarski monster, and also to give a proof that free Burnside groups of large odd exponent are infinite (a similar result was originally proved by Adian and Novikov in 1968 using more combinatorial methods). Some other "monster" groups constructed by Ol'shanskii using these methods include: an infinite simple Noetherian group; an infinite group in which every proper subgroup has prime order and any two subgroups of the same order are conjugate; a nonamenable group where every proper subgroup is cyclic; and others. Bowditch used infinite small cancellation presentations to prove that there exist continuum many quasi-isometry types of two-generator groups. Thomas and Velickovic used small cancellation theory to construct a finitely generated group with two non-homeomorphic asymptotic cones, thus answering a question of Gromov. McCammond and Wise showed how to overcome difficulties posed by the Rips construction and produce large classes of small cancellation groups that are coherent (that is, where all finitely generated subgroups are finitely presented) and, moreover, locally quasiconvex (that is, where all finitely generated subgroups are quasiconvex). Small cancellation methods play a key role in the study of various models of "generic" or "random" finitely presented groups. In particular, for a fixed number m ≥ 2 of generators and a fixed number t ≥ 1 of defining relations and for any λ with 0 < λ < 1, a random m-generator t-relator group satisfies the C′(λ) small cancellation condition. Even if the number of defining relations t is not fixed but grows as (2m − 1)εn (where ε ≥ 0 is the fixed density parameter in Gromov's density model of "random" groups, and where n → ∞ {\displaystyle n\to \infty } is the length of the defining relations), then an ε-random group satisfies the C′(1/6) condition provided ε < 1/12. Gromov used a version of small cancellation theory with respect to a graph to prove the existence of a finitely presented group that "contains" (in the appropriate sense) an infinite sequence of expanders and therefore does not admit a uniform embedding into a Hilbert space. This result provides a direction (the only one available so far) for looking for counter-examples to the Novikov conjecture. Osin used a generalization of small cancellation theory to obtain an analog of Thurston's hyperbolic Dehn surgery theorem for relatively hyperbolic groups. == Generalizations == A version of small cancellation theory for quotient groups of amalgamated free products and HNN extensions was developed in the paper of Sacerdote and Schupp and then in the book of Lyndon and Schupp. Rips and Ol'shanskii developed a "stratified" version of small cancellation theory where the set of relators is filtered as an ascending union of strata (each stratum satisfying a small cancellation condition) and where, for a relator r from some stratum and a relator s from a higher stratum, their overlap is required to be small with respect to |s| but is allowed to be large with respect to |r|.
This theory allowed Ol'shanskii to construct various "monster" groups, including the Tarski monster, and to give a new proof that free Burnside groups of large odd exponent are infinite. Ol'shanskii and Delzant later developed versions of small cancellation theory for quotients of word-hyperbolic groups. McCammond provided a higher-dimensional version of small cancellation theory. McCammond and Wise substantially extended the basic results of the standard small cancellation theory (such as Greendlinger's lemma) regarding the geometry of van Kampen diagrams over small cancellation presentations. Gromov used a version of small cancellation theory with respect to a graph to prove the existence of a finitely presented group that "contains" (in the appropriate sense) an infinite sequence of expanders and therefore does not admit a uniform embedding into a Hilbert space. Osin gave a version of small cancellation theory for quotients of relatively hyperbolic groups and used it to obtain a relatively hyperbolic generalization of Thurston's hyperbolic Dehn surgery theorem. == Basic references == Roger Lyndon and Paul Schupp, Combinatorial group theory. Reprint of the 1977 edition. Classics in Mathematics. Springer-Verlag, Berlin, 2001. ISBN 3-540-41158-5. Alexander Yu. Olʹshanskii, Geometry of defining relations in groups. Translated from the 1989 Russian original by Yu. A. Bakhturin. Mathematics and its Applications (Soviet Series), 70. Kluwer Academic Publishers Group, Dordrecht, 1991. ISBN 0-7923-1394-1. Ralph Strebel, Appendix. Small cancellation groups. Sur les groupes hyperboliques d'après Mikhael Gromov (Bern, 1988), pp. 227–273, Progress in Mathematics, 83, Birkhäuser Boston, Boston, Massachusetts, 1990. ISBN 0-8176-3508-4. Milé Krajčevski, Tilings of the plane, hyperbolic groups and small cancellation conditions. Memoirs of the American Mathematical Society, vol. 154 (2001), no. 733. == See also == Geometric group theory Word-hyperbolic group Tarski monster group Burnside problem Finitely presented group Word problem for groups Van Kampen diagram == Notes ==
Wikipedia/Small_cancellation_theory
In the mathematical subject of geometric group theory, a Dehn function, named after Max Dehn, is an optimal function associated to a finite group presentation which bounds the area of a relation in that group (that is, a freely reduced word in the generators representing the identity element of the group) in terms of the length of that relation. The growth type of the Dehn function is a quasi-isometry invariant of a finitely presented group. The Dehn function of a finitely presented group is also closely connected with the non-deterministic algorithmic complexity of the word problem in groups. In particular, a finitely presented group has solvable word problem if and only if the Dehn function for a finite presentation of this group is recursive. The notion of a Dehn function is motivated by isoperimetric problems in geometry, such as the classic isoperimetric inequality for the Euclidean plane and, more generally, the notion of a filling area function that estimates the area of a minimal surface in a Riemannian manifold in terms of the length of the boundary curve of that surface. == History == The idea of an isoperimetric function for a finitely presented group goes back to the work of Max Dehn in the 1910s. Dehn proved that the word problem for the standard presentation of the fundamental group of a closed oriented surface of genus at least two is solvable by what is now called Dehn's algorithm. A direct consequence of this fact is that for this presentation the Dehn function satisfies Dehn(n) ≤ n. This result was extended in the 1960s by Martin Greendlinger to finitely presented groups satisfying the C'(1/6) small cancellation condition. The formal notion of an isoperimetric function and a Dehn function as it is used today appeared in the late 1980s and early 1990s together with the introduction and development of the theory of word-hyperbolic groups. In his 1987 monograph "Hyperbolic groups", Gromov proved that a finitely presented group is word-hyperbolic if and only if it satisfies a linear isoperimetric inequality, that is, if and only if the Dehn function of this group is equivalent to the function f(n) = n. Gromov's proof was in large part informed by analogy with filling area functions for compact Riemannian manifolds, where the area of a minimal surface bounding a null-homotopic closed curve is bounded in terms of the length of that curve. The study of isoperimetric and Dehn functions quickly developed into a separate major theme in geometric group theory, especially since the growth types of these functions are natural quasi-isometry invariants of finitely presented groups. One of the major results in the subject was obtained by Sapir, Birget and Rips, who showed that most "reasonable" time complexity functions of Turing machines can be realized, up to natural equivalence, as Dehn functions of finitely presented groups. == Formal definition == Let G = ⟨ X | R ⟩ ( ∗ ) {\displaystyle G=\langle X|R\rangle \qquad (*)} be a finite group presentation where X is a finite alphabet and where R ⊆ F(X) is a finite set of cyclically reduced words. === Area of a relation === Let w ∈ F(X) be a relation in G, that is, a freely reduced word such that w = 1 in G.
Note that this is equivalent to saying that w belongs to the normal closure of R in F(X), that is, that there exists a representation of w as w = u 1 r 1 u 1 − 1 ⋯ u m r m u m − 1 in F ( X ) , {\displaystyle w=u_{1}r_{1}u_{1}^{-1}\cdots u_{m}r_{m}u_{m}^{-1}{\text{ in }}F(X),} (♠) where m ≥ 0 and where ri ∈ R± 1 for i = 1, ..., m. For w ∈ F(X) satisfying w = 1 in G, the area of w with respect to (∗), denoted Area(w), is the smallest m ≥ 0 such that there exists a representation (♠) for w as the product in F(X) of m conjugates of elements of R± 1. A freely reduced word w ∈ F(X) satisfies w = 1 in G if and only if the loop labeled by w in the presentation complex for G corresponding to (∗) is null-homotopic. This fact can be used to show that Area(w) is the smallest number of 2-cells in a van Kampen diagram over (∗) with boundary cycle labelled by w. === Isoperimetric function === An isoperimetric function for a finite presentation (∗) is a monotone non-decreasing function f : N → [ 0 , ∞ ) {\displaystyle f:\mathbb {N} \to [0,\infty )} such that whenever w ∈ F(X) is a freely reduced word satisfying w = 1 in G, then Area(w) ≤ f(|w|), where |w| is the length of the word w. === Dehn function === Then the Dehn function of a finite presentation (∗) is defined as D e h n ( n ) = max { A r e a ( w ) : w = 1 in G , | w | ≤ n , w freely reduced . } {\displaystyle {\rm {Dehn}}(n)=\max\{{\rm {Area}}(w):w=1{\text{ in }}G,|w|\leq n,w{\text{ freely reduced}}.\}} Equivalently, Dehn(n) is the smallest isoperimetric function for (∗), that is, Dehn(n) is an isoperimetric function for (∗) and for any other isoperimetric function f(n) we have Dehn(n) ≤ f(n) for every n ≥ 0. === Growth types of functions === Because the exact Dehn function usually depends on the presentation, one studies its asymptotic growth type as n tends to infinity, which depends only on the group. For two monotone non-decreasing functions f , g : N → [ 0 , ∞ ) {\displaystyle f,g:\mathbb {N} \to [0,\infty )} one says that f is dominated by g if there exists C ≥ 1 such that f ( n ) ≤ C g ( C n + C ) + C n + C {\displaystyle f(n)\leq Cg(Cn+C)+Cn+C} for every integer n ≥ 0. Say that f ≈ g if f is dominated by g and g is dominated by f. Then ≈ is an equivalence relation, and Dehn functions and isoperimetric functions are usually studied up to this equivalence relation. Thus for any a,b > 1 we have an ≈ bn. Similarly, if f(n) is a polynomial of degree d (where d ≥ 1 is a real number) with non-negative coefficients, then f(n) ≈ nd. Also, 1 ≈ n. If a finite group presentation admits an isoperimetric function f(n) that is equivalent to a linear (respectively, quadratic, cubic, polynomial, exponential, etc.) function in n, the presentation is said to satisfy a linear (respectively, quadratic, cubic, polynomial, exponential, etc.) isoperimetric inequality. == Basic properties == If G and H are quasi-isometric finitely presented groups and some finite presentation of G has an isoperimetric function f(n), then for any finite presentation of H there is an isoperimetric function equivalent to f(n). In particular, this fact holds for G = H, where the same group is given by two different finite presentations. Consequently, for a finitely presented group the growth type of its Dehn function, in the sense of the above definition, does not depend on the choice of a finite presentation for that group. More generally, if two finitely presented groups are quasi-isometric then their Dehn functions are equivalent.
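The representation (♠) gives a practical way to certify upper bounds on the area: exhibiting m conjugates of relators whose product freely reduces to w proves Area(w) ≤ m. A small Python sketch of such a check (helper names are illustrative; words are strings in which an uppercase letter encodes the inverse of the lowercase generator):

```python
def inverse(w):
    """Inverse of a word: reverse it and swap the case of every letter."""
    return w[::-1].swapcase()

def free_reduce(w):
    """Freely reduce a word by cancelling adjacent inverse letters."""
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def certifies_area(w, conjugators, relators):
    """Verify a representation (♠): does u_1 r_1 u_1^(-1) ... u_m r_m u_m^(-1)
    freely reduce to w?  If so, Area(w) <= m for this presentation."""
    product = "".join(u + r + inverse(u)
                      for u, r in zip(conjugators, relators))
    return free_reduce(product) == free_reduce(w)

# Z^2 = <a,b | aba^(-1)b^(-1)>: the identity [a^2,b] = a[a,b]a^(-1) * [a,b]
# certifies Area([a^2,b]) <= 2.
r = "abAB"        # the commutator [a,b] = a b a^(-1) b^(-1)
w = "aabAAB"      # [a^2,b] = a^2 b a^(-2) b^(-1)
print(certifies_area(w, ["a", ""], [r, r]))  # True
```

More generally, for this presentation Area([aⁿ,bⁿ]) grows like n², consistent with the quadratic Dehn function of free abelian groups listed among the examples below.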
For a finitely presented group G given by a finite presentation (∗) the following conditions are equivalent: G has a recursive Dehn function with respect to (∗). There exists a recursive isoperimetric function f(n) for (∗). The group G has solvable word problem. In particular, this implies that solvability of the word problem is a quasi-isometry invariant for finitely presented groups. Knowing the area Area(w) of a relation w allows one to bound, in terms of |w|, not only the number of conjugates of the defining relators in (♠) but the lengths of the conjugating elements ui as well. As a consequence, it is known that if a finitely presented group G given by a finite presentation (∗) has computable Dehn function Dehn(n), then the word problem for G is solvable with non-deterministic time complexity Dehn(n) and deterministic time complexity Exp(Dehn(n)). However, in general there is no reasonable bound on the Dehn function of a finitely presented group in terms of the deterministic time complexity of the word problem, and the gap between the two functions can be quite large. == Examples == For any finite presentation of a finite group G we have Dehn(n) ≈ n. For the closed oriented surface of genus 2, the standard presentation of its fundamental group G = ⟨ a 1 , a 2 , b 1 , b 2 | [ a 1 , b 1 ] [ a 2 , b 2 ] = 1 ⟩ {\displaystyle G=\langle a_{1},a_{2},b_{1},b_{2}|[a_{1},b_{1}][a_{2},b_{2}]=1\rangle } satisfies Dehn(n) ≤ n and Dehn(n) ≈ n. For every integer k ≥ 2 the free abelian group Z k {\displaystyle \mathbb {Z} ^{k}} has Dehn(n) ≈ n2. The Baumslag–Solitar group B ( 1 , 2 ) = ⟨ a , b | b − 1 a b = a 2 ⟩ {\displaystyle B(1,2)=\langle a,b|b^{-1}ab=a^{2}\rangle } has Dehn(n) ≈ 2n. The 3-dimensional discrete Heisenberg group H 3 = ⟨ a , b , t | [ a , t ] = [ b , t ] = 1 , [ a , b ] = t 2 ⟩ {\displaystyle H_{3}=\langle a,b,t|[a,t]=[b,t]=1,[a,b]=t^{2}\rangle } satisfies a cubic but no quadratic isoperimetric inequality. Higher-dimensional Heisenberg groups H 2 k + 1 = ⟨ a 1 , b 1 , … , a k , b k , t | [ a i , b i ] = t , [ a i , t ] = [ b i , t ] = 1 , i = 1 , … , k , [ a i , b j ] = 1 , i ≠ j ⟩ {\displaystyle H_{2k+1}=\langle a_{1},b_{1},\dots ,a_{k},b_{k},t|[a_{i},b_{i}]=t,[a_{i},t]=[b_{i},t]=1,i=1,\dots ,k,[a_{i},b_{j}]=1,i\neq j\rangle } , where k ≥ 2, satisfy quadratic isoperimetric inequalities. If G is a "Novikov–Boone group", that is, a finitely presented group with unsolvable word problem, then the Dehn function of G grows faster than any recursive function. For the Thompson group F the Dehn function is quadratic, that is, equivalent to n2. The so-called Baumslag–Gersten group G = ⟨ a , t | ( t − 1 a − 1 t ) a ( t − 1 a t ) = a 2 ⟩ {\displaystyle G=\langle a,t|(t^{-1}a^{-1}t)a(t^{-1}at)=a^{2}\rangle } has a Dehn function growing faster than any fixed iterated tower of exponentials. Specifically, for this group Dehn(n) ≈ exp(exp(exp(...(exp(1))...))) where the number of exponentials is equal to the integer part of log2(n). == Known results == A finitely presented group is word-hyperbolic if and only if its Dehn function is equivalent to n, that is, if and only if every finite presentation of this group satisfies a linear isoperimetric inequality. Isoperimetric gap: If a finitely presented group satisfies a subquadratic isoperimetric inequality then it is word-hyperbolic. Thus there are no finitely presented groups with Dehn functions equivalent to nd with d ∈ (1,2).
Automatic groups and, more generally, combable groups satisfy quadratic isoperimetric inequalities. A finitely generated nilpotent group has a Dehn function equivalent to nd where d ≥ 1, and all positive integers d are realized in this way. Moreover, every finitely generated nilpotent group G admits a polynomial isoperimetric inequality of degree c + 1, where c is the nilpotency class of G. The set of real numbers d ≥ 1 such that there exists a finitely presented group with Dehn function equivalent to nd is dense in the interval [ 2 , ∞ ) {\displaystyle [2,\infty )} . If all asymptotic cones of a finitely presented group are simply connected, then the group satisfies a polynomial isoperimetric inequality. If a finitely presented group satisfies a quadratic isoperimetric inequality, then all asymptotic cones of this group are simply connected. If (M,g) is a closed Riemannian manifold and G = π1(M), then the Dehn function of G is equivalent to the filling area function of the manifold. If G is a group acting properly discontinuously and cocompactly by isometries on a CAT(0) space, then G satisfies a quadratic isoperimetric inequality. In particular, this applies to the case where G is the fundamental group of a closed Riemannian manifold of non-positive sectional curvature (not necessarily constant). The Dehn function of SL(m, Z) is at most exponential for any m ≥ 3. For SL(3,Z) this bound is sharp, and it is known in that case that the Dehn function does not admit a subexponential upper bound. The Dehn functions for SL(m,Z), where m > 4, are quadratic. The Dehn function of SL(4,Z) has been conjectured to be quadratic by Thurston. This and, more generally, Gromov's conjecture that lattices in higher-rank Lie groups have a quadratic Dehn function have been proved by Leuzinger and Young. Mapping class groups of surfaces of finite type are automatic and satisfy quadratic isoperimetric inequalities. The Dehn functions for the groups Aut(Fk) and Out(Fk) are exponential for every k ≥ 3. Exponential isoperimetric inequalities for Aut(Fk) and Out(Fk) when k ≥ 3 were found by Hatcher and Vogtmann. These bounds are sharp, and the groups Aut(Fk) and Out(Fk) do not satisfy subexponential isoperimetric inequalities, as shown for k = 3 by Bridson and Vogtmann, and for k ≥ 4 by Handel and Mosher. For every automorphism φ of a finitely generated free group Fk the mapping torus group F k ⋊ ϕ Z {\displaystyle F_{k}\rtimes _{\phi }\mathbb {Z} } of φ satisfies a quadratic isoperimetric inequality. Most "reasonable" computable functions that are ≥ n4 can be realized, up to equivalence, as Dehn functions of finitely presented groups. In particular, if f(n) ≥ n4 is a superadditive function whose binary representation is computable in time O ( f ( n ) 4 ) {\displaystyle O\left({\sqrt[{4}]{f(n)}}\right)} by a Turing machine, then f(n) is equivalent to the Dehn function of a finitely presented group. Although one cannot reasonably bound the Dehn function of a group in terms of the complexity of its word problem, Birget, Olʹshanskii, Rips and Sapir obtained the following result, providing a far-reaching generalization of Higman's embedding theorem: The word problem of a finitely generated group is decidable in nondeterministic polynomial time if and only if this group can be embedded into a finitely presented group with a polynomial isoperimetric function. Moreover, every group with the word problem solvable in time T(n) can be embedded into a group with isoperimetric function equivalent to n2T(n2)4.
== Generalizations == There are several companion notions closely related to the notion of an isoperimetric function. Thus an isodiametric function bounds the smallest diameter (with respect to the simplicial metric where every edge has length one) of a van Kampen diagram for a particular relation w in terms of the length of w. A filling length function bounds the smallest filling length of a van Kampen diagram for a particular relation w in terms of the length of w. Here the filling length of a diagram is the minimum, over all combinatorial null-homotopies of the diagram, of the maximal length of intermediate loops bounding intermediate diagrams along such null-homotopies. The filling length function is closely related to the non-deterministic space complexity of the word problem for finitely presented groups. There are several general inequalities connecting the Dehn function, the optimal isodiametric function and the optimal filling length function, but the precise relationship between them is not yet understood. There are also higher-dimensional generalizations of isoperimetric and Dehn functions. For k ≥ 1 the k-dimensional isoperimetric function of a group bounds the minimal combinatorial volume of (k + 1)-dimensional ball-fillings of k-spheres mapped into a k-connected space on which the group acts properly and cocompactly; the bound is given as a function of the combinatorial volume of the k-sphere. The standard notion of an isoperimetric function corresponds to the case k = 1. Compared to the classical case, only little is known about these higher-dimensional filling functions. One chief result is that lattices in higher-rank semisimple Lie groups are undistorted in dimensions below the rank, i.e. they satisfy the same filling functions as their associated symmetric space. In his monograph Asymptotic invariants of infinite groups, Gromov proposed a probabilistic or averaged version of the Dehn function and suggested that for many groups averaged Dehn functions should have strictly slower asymptotics than the standard Dehn functions. More precise treatments of the notion of an averaged Dehn function or mean Dehn function were given later by other researchers, who also proved that averaged Dehn functions are indeed subasymptotic to standard Dehn functions in a number of cases (such as nilpotent and abelian groups). A relative version of the notion of an isoperimetric function plays a central role in Osin's approach to relatively hyperbolic groups. Grigorchuk and Ivanov explored several natural generalizations of the Dehn function for group presentations on finitely many generators but with infinitely many defining relations. == See also == van Kampen diagram Word-hyperbolic group Automatic group Small cancellation theory Geometric group theory == Notes == == Further reading == Noel Brady, Tim Riley and Hamish Short. The Geometry of the Word Problem for Finitely Generated Groups. Advanced Courses in Mathematics CRM Barcelona, Birkhäuser, Basel, 2007. ISBN 3-7643-7949-9. Martin R. Bridson. The geometry of the word problem. Invitations to geometry and topology, pp. 29–91, Oxford Graduate Texts in Mathematics, 7, Oxford University Press, Oxford, 2002. ISBN 0-19-850772-0. == External links == The Isoperimetric Inequality for SL(n,Z). A September 2008 Workshop at the American Institute of Mathematics. PDF of Bridson's article The geometry of the word problem.
Wikipedia/Dehn_function
In topology, an area of mathematics, the virtually Haken conjecture states that every compact, orientable, irreducible three-dimensional manifold with infinite fundamental group is virtually Haken. That is, it has a finite cover (a covering space with a finite-to-one covering map) that is a Haken manifold. After the proof of the geometrization conjecture by Perelman, the conjecture remained open only for hyperbolic 3-manifolds. The conjecture is usually attributed to Friedhelm Waldhausen in a paper from 1968, although he did not formally state it. This problem is formally stated as Problem 3.2 in Kirby's problem list. A proof of the conjecture was announced on March 12, 2012 by Ian Agol in a seminar lecture he gave at the Institut Henri Poincaré. The proof appeared shortly thereafter in a preprint which was eventually published in Documenta Mathematica. The proof was obtained via a strategy developed in previous work of Daniel Wise and collaborators, relying on actions of the fundamental group on certain auxiliary spaces (CAT(0) cube complexes, also known as median graphs). It used as an essential ingredient the freshly obtained solution to the surface subgroup conjecture by Jeremy Kahn and Vladimir Markovic. Other results which are directly used in Agol's proof include the Malnormal Special Quotient Theorem of Wise and a criterion of Nicolas Bergeron and Wise for the cubulation of groups. In 2018, related results were obtained by Piotr Przytycki and Daniel Wise, proving that mixed 3-manifolds are also virtually special, that is, they can be cubulated into a cube complex with a finite cover where all the hyperplanes are embedded, which by the previously mentioned work can be made virtually Haken. == See also == Virtually fibered conjecture Surface subgroup conjecture Ehrenpreis conjecture == Notes == == References == Dunfield, Nathan; Thurston, William (2003), "The virtual Haken conjecture: experiments and examples", Geometry and Topology, 7: 399–441, arXiv:math/0209214, doi:10.2140/gt.2003.7.399, MR 1988291, S2CID 6265421. Kirby, Robion (1978), "Problems in low dimensional manifold theory.", Algebraic and geometric topology (Proc. Sympos. Pure Math., Stanford Univ., Stanford, Calif., 1976), vol. 7, pp. 273–312, ISBN 9780821867891, MR 0520548. == External links == Klarreich, Erica (2012-10-02). "Getting Into Shapes: From Hyperbolic Geometry to Cube Complexes and Back". Quanta Magazine.
Wikipedia/Virtually_Haken_conjecture
In algebra, a finitely generated group is a group G that has some finite generating set S, so that every element of G can be written as the combination (under the group operation) of finitely many elements of S and of inverses of such elements. By definition, every finite group is finitely generated, since S can be taken to be G itself. Every infinite finitely generated group must be countable, but countable groups need not be finitely generated. The additive group of rational numbers Q is an example of a countable group that is not finitely generated. == Examples == Every quotient of a finitely generated group G is finitely generated; the quotient group is generated by the images of the generators of G under the canonical projection. A group that is generated by a single element is called cyclic. Every infinite cyclic group is isomorphic to the additive group of the integers Z. A locally cyclic group is a group in which every finitely generated subgroup is cyclic. The free group on a finite set is finitely generated by the elements of that set. A fortiori, every finitely presented group is finitely generated. == Finitely generated abelian groups == Every abelian group can be seen as a module over the ring of integers Z, and in a finitely generated abelian group with generators x1, ..., xn, every group element x can be written as a linear combination of these generators, x = α1⋅x1 + α2⋅x2 + ... + αn⋅xn with integers α1, ..., αn. Subgroups of a finitely generated abelian group are themselves finitely generated. The fundamental theorem of finitely generated abelian groups states that a finitely generated abelian group is the direct sum of a free abelian group of finite rank and a finite abelian group, each of which is unique up to isomorphism. == Subgroups == A subgroup of a finitely generated group need not be finitely generated. The commutator subgroup of the free group F 2 {\displaystyle F_{2}} on two generators is an example of a subgroup of a finitely generated group that is not finitely generated. On the other hand, all subgroups of a finitely generated abelian group are finitely generated. A subgroup of finite index in a finitely generated group is always finitely generated, and the Schreier index formula gives a bound on the number of generators required. In 1954, Albert G. Howson showed that the intersection of two finitely generated subgroups of a free group is again finitely generated. Furthermore, if m {\displaystyle m} and n {\displaystyle n} are the numbers of generators of the two finitely generated subgroups, then their intersection is generated by at most 2 m n − m − n + 1 {\displaystyle 2mn-m-n+1} generators. This upper bound was then significantly improved by Hanna Neumann to 2 ( m − 1 ) ( n − 1 ) + 1 {\displaystyle 2(m-1)(n-1)+1} ; see the Hanna Neumann conjecture. The lattice of subgroups of a group satisfies the ascending chain condition if and only if all subgroups of the group are finitely generated. A group such that all its subgroups are finitely generated is called Noetherian. A group such that every finitely generated subgroup is finite is called locally finite. Every locally finite group is periodic, i.e., every element has finite order. Conversely, every periodic abelian group is locally finite.
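For concreteness, the two intersection bounds just mentioned can be compared numerically (a trivial sketch; the function names are ours):

```python
def howson_bound(m, n):
    """Howson's 1954 bound on the number of generators of the intersection
    of subgroups of a free group generated by m and n elements."""
    return 2 * m * n - m - n + 1

def hanna_neumann_bound(m, n):
    """Hanna Neumann's sharper bound for the same intersection."""
    return 2 * (m - 1) * (n - 1) + 1

print(howson_bound(2, 2), hanna_neumann_bound(2, 2))  # 5 3
print(howson_bound(3, 4), hanna_neumann_bound(3, 4))  # 18 13
```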
== Applications == Finitely generated groups arise in diverse mathematical and scientific contexts. A frequent way they do so is via the Švarc–Milnor lemma, or more generally through an action by which a group inherits some finiteness property of a space. Geometric group theory studies the connections between algebraic properties of finitely generated groups and topological and geometric properties of spaces on which these groups act. === Differential geometry and topology === Fundamental groups of compact manifolds are finitely generated. Their geometry coarsely reflects the possible geometries of the manifold: for instance, non-positively curved compact manifolds have CAT(0) fundamental groups, whereas uniformly positively curved manifolds have finite fundamental group (see Myers' theorem). Mostow's rigidity theorem: for compact hyperbolic manifolds of dimension at least 3, an isomorphism between their fundamental groups extends to a Riemannian isometry. Mapping class groups of surfaces are also important finitely generated groups in low-dimensional topology. === Algebraic geometry and number theory === Lattices in Lie groups, in p-adic groups...; superrigidity and Margulis' arithmeticity theorem. === Combinatorics, algorithmics and cryptography === Infinite families of expander graphs can be constructed thanks to finitely generated groups with property (T). Algorithmic problems in combinatorial group theory. Group-based cryptography attempts to make use of hard algorithmic problems related to group presentations in order to construct quantum-resilient cryptographic protocols. === Analysis === === Probability theory === Random walks on Cayley graphs of finitely generated groups provide approachable examples of random walks on graphs. Percolation on Cayley graphs. === Physics and chemistry === Crystallographic groups. Mapping class groups appear in topological quantum field theories. === Biology === Knot groups are used to study molecular knots. == Related notions == The word problem for a finitely generated group is the decision problem of whether two words in the generators of the group represent the same element. The word problem for a given finitely generated group is solvable if and only if the group can be embedded in every algebraically closed group. The rank of a group is often defined to be the smallest cardinality of a generating set for the group. By definition, the rank of a finitely generated group is finite. == See also == Finitely generated module Presentation of a group == Notes == == References == Rose, John S. (2012) [unabridged and unaltered republication of a work first published by the Cambridge University Press, Cambridge, England, in 1978]. A Course on Group Theory. Dover Publications. ISBN 978-0-486-68194-8.
Wikipedia/Finitely_generated_group