Dataset columns: id (int64, 39 to 79M), url (string, length 32 to 168), text (string, length 7 to 145k), source (string, length 2 to 105), categories (list, 1 to 6 items), token_count (int64, 3 to 32.2k), subcategories (list, 0 to 27 items)
2,410,571
https://en.wikipedia.org/wiki/Kepler%20problem
In classical mechanics, the Kepler problem is a special case of the two-body problem, in which the two bodies interact by a central force that varies in strength as the inverse square of the distance between them. The force may be either attractive or repulsive. The problem is to find the position or speed of the two bodies over time given their masses, positions, and velocities. Using classical mechanics, the solution can be expressed as a Kepler orbit using six orbital elements. The Kepler problem is named after Johannes Kepler, who proposed Kepler's laws of planetary motion (which are part of classical mechanics and solved the problem for the orbits of the planets) and investigated the types of forces that would result in orbits obeying those laws (called Kepler's inverse problem). For a discussion of the Kepler problem specific to radial orbits, see Radial trajectory. General relativity provides more accurate solutions to the two-body problem, especially in strong gravitational fields. Applications The inverse square law behind the Kepler problem is the most important central force law. The Kepler problem is important in celestial mechanics, since Newtonian gravity obeys an inverse square law. Examples include a satellite moving about a planet, a planet about its sun, or two binary stars about each other. The Kepler problem is also important in the motion of two charged particles, since Coulomb's law of electrostatics also obeys an inverse square law. The Kepler problem and the simple harmonic oscillator problem are the two most fundamental problems in classical mechanics. They are the only two problems that have closed orbits for every possible set of initial conditions, i.e., return to their starting point with the same velocity (Bertrand's theorem). The Kepler problem also conserves the Laplace–Runge–Lenz vector, which has since been generalized to include other interactions. The solution of the Kepler problem allowed scientists to show that planetary motion could be explained entirely by classical mechanics and Newton's law of gravity; the scientific explanation of planetary motion played an important role in ushering in the Enlightenment. History The Kepler problem begins with the empirical results of Johannes Kepler, arduously derived by analysis of the astronomical observations of Tycho Brahe. After some 70 attempts to match the data to circular orbits, Kepler hit upon the idea of the elliptic orbit. He eventually summarized his results in the form of three laws of planetary motion. What is now called the Kepler problem was first discussed by Isaac Newton as a major part of his Principia. His "Theorema I" begins with the first two of his three axioms or laws of motion and results in Kepler's second law of planetary motion. Next Newton proves his "Theorema II", which shows that if Kepler's second law holds, then the force involved must be along the line between the two bodies. In other words, Newton proves what today might be called the "inverse Kepler problem": the orbit characteristics require the force to depend on the inverse square of the distance. Mathematical definition The central force F between two objects varies in strength as the inverse square of the distance r between them: F = (k/r^2) r̂, where k is a constant and r̂ represents the unit vector along the line between them. The force may be either attractive (k < 0) or repulsive (k > 0). 
The corresponding scalar potential is: Solution of the Kepler problem The equation of motion for the radius of a particle of mass moving in a central potential is given by Lagrange's equations and the angular momentum is conserved. For illustration, the first term on the left-hand side is zero for circular orbits, and the applied inwards force equals the centripetal force requirement , as expected. If L is not zero the definition of angular momentum allows a change of independent variable from to giving the new equation of motion that is independent of time The expansion of the first term is This equation becomes quasilinear on making the change of variables and multiplying both sides by After substitution and rearrangement: For an inverse-square force law such as the gravitational or electrostatic potential, the scalar potential can be written The orbit can be derived from the general equation whose solution is the constant plus a simple sinusoid where (the eccentricity) and (the phase offset) are constants of integration. This is the general formula for a conic section that has one focus at the origin; corresponds to a circle, corresponds to an ellipse, corresponds to a parabola, and corresponds to a hyperbola. The eccentricity is related to the total energy (cf. the Laplace–Runge–Lenz vector) Comparing these formulae shows that corresponds to an ellipse (all solutions which are closed orbits are ellipses), corresponds to a parabola, and corresponds to a hyperbola. In particular, for perfectly circular orbits (the central force exactly equals the centripetal force requirement, which determines the required angular velocity for a given circular radius). For a repulsive force (k > 0) only e > 1 applies. See also Action-angle coordinates Bertrand's theorem Binet equation Hamilton–Jacobi equation Laplace–Runge–Lenz vector Kepler orbit Kepler problem in general relativity Kepler's equation Kepler's laws of planetary motion References Classical mechanics Johannes Kepler
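The formulas in the two passages above were dropped during text extraction. As a hedge against that loss, the block below restates the standard equations of the Kepler problem in the notation the prose implies (m for the mass, L for the angular momentum, E for the total energy, e for the eccentricity); it is a reconstruction of textbook results, not the article's original markup.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Standard Kepler-problem relations (a reconstruction; m = mass, L = angular momentum,
% E = total energy, e = eccentricity; k < 0 attractive, k > 0 repulsive).
\begin{align}
  \mathbf{F}(r) &= \frac{k}{r^{2}}\,\hat{\mathbf{r}}, \qquad V(r) = \frac{k}{r}
      && \text{inverse-square force and its scalar potential}\\
  \frac{d^{2}u}{d\theta^{2}} + u &= -\frac{mk}{L^{2}}, \qquad u \equiv \frac{1}{r}
      && \text{orbit equation after the change of variables}\\
  \frac{1}{r(\theta)} &= -\frac{mk}{L^{2}}\bigl[\,1 + e\cos(\theta-\theta_{0})\,\bigr]
      && \text{conic section with one focus at the origin}\\
  e &= \sqrt{1 + \frac{2EL^{2}}{mk^{2}}}
      && \text{eccentricity in terms of the total energy}
\end{align}
% e = 0: circle; 0 < e < 1: ellipse (E < 0); e = 1: parabola (E = 0); e > 1: hyperbola (E > 0).
\end{document}
```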
Kepler problem
[ "Physics" ]
1,076
[ "Mechanics", "Classical mechanics" ]
2,411,486
https://en.wikipedia.org/wiki/Acoustic%20lubrication
Acoustic lubrication or sonic lubrication occurs when sound (measurable in a vacuum by placing a microphone on one element of the sliding system) permits vibration to introduce separation between the sliding faces. This could happen between two plates or between a series of particles. The frequency of sound required to induce optimal vibration, and thus cause sonic lubrication, varies with the size of the particles (high frequencies will have the desired, or undesired, effect on sand and lower frequencies will have this effect on boulders). Examples If there is a dynamic coefficient of friction between two objects of 0.20, and vibration causes them to be in contact only half of the time, that would be equivalent to a constant coefficient of friction of 0.10. This substantial reduction in friction can have a profound effect on the system. According to anecdote, World War II Panzer tank treads may have been lubricated by their own squeak providing a serendipitous example of acoustic lubrication. Another example occurs during landslides. Most landslides do not involve this effect, but occasionally the frequency of vibrations caused by the landslide is optimal to cause the boulders to vibrate. In this case, feedback causes the boulders to slide much farther and more quickly than typical, which can pose an increased danger to those in their path. One notable feature of such a landslide is that it appears to resemble flowing water, or mud, and not the dry sliding rocks that they were seconds earlier. Applications Besides the study of landslides, there could be many other applications for acoustic lubrication, particularly where variable friction is required or traditional lubricants can't be used. One case might be drilling wells (for water, oil, etc.) through sand. The optimal pitch of the sound (measurement of frequency) could reduce the friction between the drill bit and sand considerably. New razors with a vibrating head may also be an example. In fiction The protagonist in the videogame Shadow Complex can acquire a "friction dampener" that uses acoustic lubrication; this enables him to run at very high speeds. See also Friction References Acoustics Mechanics
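The friction figure quoted above (a dynamic coefficient of 0.20 acting only half the time behaving like a constant 0.10) is simply the coefficient scaled by the fraction of time the faces are actually in contact. A minimal sketch of that averaging, with function names and numbers chosen here purely for illustration:

```python
def effective_friction(mu_dynamic: float, contact_fraction: float) -> float:
    """Time-averaged coefficient of friction when vibration keeps the sliding
    faces in contact for only a fraction of the time."""
    if not 0.0 <= contact_fraction <= 1.0:
        raise ValueError("contact_fraction must be between 0 and 1")
    return mu_dynamic * contact_fraction

# The example from the text: mu = 0.20 with contact half the time.
print(effective_friction(0.20, 0.5))  # 0.1
```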
Acoustic lubrication
[ "Physics", "Engineering" ]
435
[ "Mechanical engineering", "Mechanics", "Classical mechanics", "Acoustics" ]
2,413,037
https://en.wikipedia.org/wiki/Isolated%20system
In physical science, an isolated system is either of the following: a physical system so far removed from other systems that it does not interact with them. a thermodynamic system enclosed by rigid immovable walls through which neither mass nor energy can pass. Though subject internally to its own gravity, an isolated system is usually taken to be outside the reach of external gravitational and other long-range forces. This can be contrasted with what (in the more common terminology used in thermodynamics) is called a closed system, being enclosed by selective walls through which energy can pass as heat or work, but not matter; and with an open system, which both matter and energy can enter or exit, though it may have variously impermeable walls in parts of its boundaries. An isolated system obeys the conservation law that its total energy–mass stays constant. Most often, in thermodynamics, mass and energy are treated as separately conserved. Because of the requirement of enclosure, and the near ubiquity of gravity, strictly and ideally isolated systems do not actually occur in experiments or in nature. Though very useful, they are strictly hypothetical. Classical thermodynamics is usually presented as postulating the existence of isolated systems. It is also usually presented as the fruit of experience. Obviously, no experience has been reported of an ideally isolated system. It is, however, the fruit of experience that some physical systems, including isolated ones, do seem to reach their own states of internal thermodynamic equilibrium. Classical thermodynamics postulates the existence of systems in their own states of internal thermodynamic equilibrium. This postulate is a very useful idealization. In the attempt to explain the idea of a gradual approach to thermodynamic equilibrium after a thermodynamic operation, with entropy increasing according to the second law of thermodynamics, Boltzmann’s H-theorem used equations, which assumed a system (for example, a gas) was isolated. That is, all the mechanical degrees of freedom could be specified, treating the enclosing walls simply as mirror boundary conditions. This led to Loschmidt's paradox. If, however, the stochastic behavior of the molecules and thermal radiation in real enclosing walls is considered, then the system is in effect in a heat bath. Then Boltzmann’s assumption of molecular chaos can be justified. The concept of an isolated system can serve as a useful model approximating many real-world situations. It is an acceptable idealization used in constructing mathematical models of certain natural phenomena; e.g., the planets in the Solar System, and the proton and electron in a hydrogen atom are often treated as isolated systems. But, from time to time, a hydrogen atom will interact with electromagnetic radiation and go to an excited state. Radiative isolation For radiative isolation, the walls should be perfectly conductive, so as to perfectly reflect the radiation within the cavity, as for example imagined by Planck. He was considering the internal thermal radiative equilibrium of a thermodynamic system in a cavity initially devoid of substance. He did not mention what he imagined to surround his perfectly reflective and thus perfectly conductive walls. Presumably, since they are perfectly reflective, they isolate the cavity from any external electromagnetic effect. Planck held that for radiative equilibrium within the isolated cavity, it needed to have added to its interior a speck of carbon. 
If the cavity with perfectly reflective walls contains enough radiative energy to sustain a temperature of cosmological magnitude, then the speck of carbon is not needed because the radiation generates particles of substance, such as for example electron-positron pairs, and thereby reaches thermodynamic equilibrium. A different approach is taken by Roger Balian. For quantizing the radiation in the cavity, he imagines his radiatively isolating walls to be perfectly conductive. Though he does not mention mass outside, and it seems from his context that he intends the reader to suppose the interior of the cavity to be devoid of mass, he does imagine that some factor causes currents in the walls. If that factor is internal to the cavity, it can be only the radiation, which would thereby be perfectly reflected. For the thermal equilibrium problem, however, he considers walls that contain charged particles that interact with the radiation inside the cavity; such cavities are of course not isolated, but may be regarded as in a heat bath. See also Closed system Dynamical system Open system Thermodynamic system Open system (thermodynamics) References Thermodynamic systems
Isolated system
[ "Physics", "Chemistry", "Mathematics" ]
951
[ "Physical systems", "Thermodynamic systems", "Thermodynamics", "Dynamical systems" ]
2,414,080
https://en.wikipedia.org/wiki/Degree%20of%20polymerization
The degree of polymerization, or DP, is the number of monomeric units in a macromolecule or polymer or oligomer molecule. For a homopolymer, there is only one type of monomeric unit and the number-average degree of polymerization is given by DPn = Mn/M0, where Mn is the number-average molecular weight and M0 is the molecular weight of the monomer unit. The overlines indicate arithmetic mean values. For most industrial purposes, degrees of polymerization in the thousands or tens of thousands are desired. This number does not reflect the variation in molecule size of the polymer that typically occurs; it only represents the mean number of monomeric units. Some authors, however, define DP as the number of repeat units, where for copolymers the repeat unit may not be identical to the monomeric unit. For example, in nylon-6,6, the repeat unit contains the two monomeric units —NH(CH2)6NH— and —OC(CH2)4CO—, so that a chain of 1000 monomeric units corresponds to 500 repeat units. The degree of polymerization or chain length is then 1000 by the first (IUPAC) definition, but 500 by the second. Step-growth and chain-growth polymerization In step-growth polymerization, in order to achieve a high degree of polymerization (and hence molecular weight), Xn, a high fractional monomer conversion, p, is required, according to Carothers' equation Xn = 1/(1 − p). For example, a monomer conversion of p = 99% would be required to achieve Xn = 100. For chain-growth free radical polymerization, however, Carothers' equation does not apply. Instead long chains are formed from the beginning of the reaction. Long reaction times increase the polymer yield, but have little effect on the average molecular weight. The degree of polymerization is related to the kinetic chain length, which is the average number of monomer molecules polymerized per chain initiated. However, it often differs from the kinetic chain length for several reasons: chain termination may occur wholly or partly by recombination of two chain radicals, which doubles the degree of polymerization; chain transfer to monomer starts a new macromolecule for the same kinetic chain (of reaction steps), corresponding to a decrease of the degree of polymerization; and chain transfer to solvent or to another solute (a modifier or regulator) also decreases the degree of polymerization. Correlation with physical properties Polymers with identical composition but different molecular weights may exhibit different physical properties. In general, increasing degree of polymerization correlates with higher melting temperature and higher mechanical strength. Number-average and weight-average Synthetic polymers invariably consist of a mixture of macromolecular species with different degrees of polymerization and therefore of different molecular weights. There are different types of average polymer molecular weight, which can be measured in different experiments. The two most important are the number average (Xn) and the weight average (Xw). The number-average degree of polymerization is a weighted mean of the degrees of polymerization of polymer species, weighted by the mole fractions (or the number of molecules) of the species. It is typically determined by measurements of the osmotic pressure of the polymer. The weight-average degree of polymerization is a weighted mean of the degrees of polymerization, weighted by the weight fractions (or the overall weight of the molecules) of the species. It is typically determined by measurements of Rayleigh light scattering by the polymer. 
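A quick numerical illustration of the two relations above, the number-average definition and Carothers' equation (the function names and the polyethylene-style numbers below are ours, chosen only for demonstration):

```python
def number_average_dp(m_n: float, m_0: float) -> float:
    """Number-average degree of polymerization, DPn = Mn / M0."""
    return m_n / m_0

def carothers_dp(p: float) -> float:
    """Number-average degree of polymerization reached in step-growth
    polymerization at fractional monomer conversion p: Xn = 1 / (1 - p)."""
    return 1.0 / (1.0 - p)

# A polyethylene-like chain: Mn = 280,000 g/mol, monomer unit 28 g/mol.
print(number_average_dp(280_000, 28))  # 10000.0
# Step growth: 99% conversion is needed just to reach Xn = 100.
print(carothers_dp(0.99))              # ~100.0
```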
See also Anhydroglucose unit References Polymer chemistry
Degree of polymerization
[ "Chemistry", "Materials_science", "Engineering" ]
724
[ "Materials science", "Polymer chemistry" ]
7,765,005
https://en.wikipedia.org/wiki/Fenske%20equation
The Fenske equation in continuous fractional distillation is an equation used for calculating the minimum number of theoretical plates required for the separation of a binary feed stream by a fractionation column that is being operated at total reflux (i.e., no overhead product distillate is being withdrawn from the column). The equation was derived in 1932 by Merrell Fenske, a professor who served as the head of the chemical engineering department at the Pennsylvania State University from 1959 to 1969. When designing large-scale, continuous industrial distillation towers, it is very useful to first calculate the minimum number of theoretical plates required to obtain the desired overhead product composition. Common versions of the Fenske equation This is one of the many different but equivalent versions of the Fenske equation, valid only for binary mixtures: N = log[(Xd/(1 − Xd))·((1 − Xb)/Xb)] / log(αavg), where: N is the minimum number of theoretical plates required at total reflux (of which the reboiler is one), Xd is the mole fraction of more volatile component in the overhead distillate, Xb is the mole fraction of more volatile component in the bottoms, αavg is the average relative volatility of the more volatile component to the less volatile component. A corresponding formula holds for a multi-component mixture. For ease of expression, the more volatile and the less volatile components are commonly referred to as the light key (LK) and the heavy key (HK), respectively. Using that terminology, the above equation may be expressed in terms of the light-key and heavy-key mole fractions. If the relative volatility of the light key to the heavy key is constant from the column top to the column bottom, then αavg is simply that constant value. If the relative volatility is not constant from top to bottom of the column, then the following approximation may be used: αavg = (αt · αb)^1/2, where: αt is the relative volatility of light key to heavy key at top of column, αb is the relative volatility of light key to heavy key at bottom of column. The above forms of the Fenske equation can be modified for use in the total reflux distillation of multi-component feeds. It is also helpful in solving liquid–liquid extraction problems, because an extraction system can also be represented as a series of equilibrium stages and relative solubility can be substituted for relative volatility. Another form of the Fenske equation A derivation of another form of the Fenske equation for use in gas chromatography is available on the U.S. Naval Academy's web site. Using Raoult's law and Dalton's law for a series of condensation and evaporation cycles (i.e., equilibrium stages), a form of the Fenske equation is obtained in which: N is the number of equilibrium stages, yn is the mole fraction of component n in the vapor phase, xn is the mole fraction of component n in the liquid phase, and Pn is the vapor pressure of pure component n. See also References External links Lecture Notes (R.M. Price, Christian Brothers University, Tennessee) Studies in Chemical Process Design and Synthesis, Y. A. Liu, T.E. Quantrille, and S. Chengt, Ind. Eng. Chem. Res., Volume 29, 1990 Multi-component Distillation (M.B. Jennings, San Jose State University) Chemical engineering Distillation Equations
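A minimal numerical sketch of the binary form of the equation above; the function names and the overhead/bottoms compositions are invented here purely for illustration, and the geometric-mean helper implements the top/bottom averaging approximation mentioned in the text:

```python
import math

def fenske_min_stages(x_distillate: float, x_bottoms: float, alpha_avg: float) -> float:
    """Minimum number of theoretical plates at total reflux for a binary separation:
    N = log[(xD / (1 - xD)) * ((1 - xB) / xB)] / log(alpha_avg).
    The reboiler counts as one of these stages."""
    separation = (x_distillate / (1.0 - x_distillate)) * ((1.0 - x_bottoms) / x_bottoms)
    return math.log(separation) / math.log(alpha_avg)

def alpha_geometric_mean(alpha_top: float, alpha_bottom: float) -> float:
    """Average relative volatility when it varies between the top and bottom of the column."""
    return math.sqrt(alpha_top * alpha_bottom)

# Illustrative numbers: 95 mol% light key overhead, 5 mol% in the bottoms,
# relative volatility 2.3 at the top of the column and 2.7 at the bottom.
alpha = alpha_geometric_mean(2.3, 2.7)
print(round(fenske_min_stages(0.95, 0.05, alpha), 1))
```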
Fenske equation
[ "Chemistry", "Mathematics", "Engineering" ]
660
[ "Separation processes", "Chemical engineering", "Mathematical objects", "Equations", "Distillation", "nan" ]
7,768,355
https://en.wikipedia.org/wiki/Fas%20receptor
The Fas receptor, also known as Fas, FasR, apoptosis antigen 1 (APO-1 or APT), cluster of differentiation 95 (CD95) or tumor necrosis factor receptor superfamily member 6 (TNFRSF6), is a protein that in humans is encoded by the FAS gene. Fas was first identified using a monoclonal antibody generated by immunizing mice with the FS-7 cell line. Thus, the name Fas is derived from FS-7-associated surface antigen. The Fas receptor is a death receptor on the surface of cells that leads to programmed cell death (apoptosis) if it binds its ligand, Fas ligand (FasL). It is one of two apoptosis pathways, the other being the mitochondrial pathway. Gene The FAS receptor gene is located on the long arm of chromosome 10 (10q24.1) in humans and on chromosome 19 in mice. The gene lies on the plus (Watson) strand and is 25,255 bases in length, organized into nine protein-encoding exons. Similar sequences related by evolution (orthologs) are found in most mammals. Protein Previous reports have identified as many as eight splice variants, which are translated into seven isoforms of the protein. The apoptosis-inducing Fas receptor is dubbed isoform 1 and is a type 1 transmembrane protein. Many of the other isoforms are rare haplotypes that are usually associated with a state of disease. However, two isoforms, the apoptosis-inducing membrane-bound form and the soluble form, are normal products whose production via alternative splicing is regulated by the cytotoxic RNA binding protein TIA1. The mature Fas protein has 319 amino acids, has a predicted molecular weight of 48 kilodaltons and is divided into three domains: an extracellular domain, a transmembrane domain, and a cytoplasmic domain. The extracellular domain has 157 amino acids and is rich in cysteine residues. The transmembrane and cytoplasmic domains have 17 and 145 amino acids respectively. Exons 1 through 5 encode the extracellular region. Exon 6 encodes the transmembrane region. Exons 7-9 encode the intracellular region. Function Fas forms the death-inducing signaling complex (DISC) upon ligand binding. Membrane-anchored Fas ligand trimer on the surface of an adjacent cell causes oligomerization of Fas. Recent studies could not validate the earlier suggestion that Fas itself forms a trimer; other models suggest oligomerization of up to 5–7 Fas molecules in the DISC. This event is also mimicked by binding of an agonistic Fas antibody, though some evidence suggests that the apoptotic signal induced by the antibody is unreliable in the study of Fas signaling. To this end, several clever ways of trimerizing the antibody for in vitro research have been employed. Upon ensuing death domain (DD) aggregation, the receptor complex is internalized via the cellular endosomal machinery. This allows the adaptor molecule FADD to bind the death domain of Fas through its own death domain. FADD also contains a death effector domain (DED) near its amino terminus, which facilitates binding to the DED of FADD-like interleukin-1 beta-converting enzyme (FLICE), more commonly referred to as caspase-8. FLICE can then self-activate through proteolytic cleavage into p10 and p18 subunits, two each of which form the active heterotetramer enzyme. Active caspase-8 is then released from the DISC into the cytosol, where it cleaves other effector caspases, eventually leading to DNA degradation, membrane blebbing, and other hallmarks of apoptosis. 
Recently, Fas has also been shown to promote tumor growth, since during tumor progression, it is frequently downregulated or cells are rendered apoptosis resistant. Cancer cells in general, regardless of their Fas apoptosis sensitivity, depend on constitutive activity of Fas. This is stimulated by cancer-produced Fas ligand for optimal growth. Although Fas has been shown to promote tumor growth in the above mouse models, analysis of the human cancer genomics database revealed that FAS is not significantly focally amplified across a dataset of 3131 tumors (FAS is not an oncogene), but is significantly focally deleted across the entire dataset of these 3131 tumors, suggesting that FAS functions as a tumor suppressor in humans. In cultured cells, FasL induces various types of cancer cell apoptosis through the Fas receptor. In AOM-DSS-induced colon carcinoma and MCA-induced sarcoma mouse models, it has been shown that Fas acts as a tumor suppressor. Furthermore, the Fas receptor also mediates tumor-specific cytotoxic T lymphocyte (CTL) anti-tumor cytotoxicity. In addition to the well-described on-target CTL anti-tumor cytotoxicity, Fas has been ascribed with a distinct function – the induction of bystander tumor cell death even amongst cognate antigen non-expressing (bystander) cells. CTL-mediated bystander killing was described by the Fleischer Lab in 1986 and later attributed to fas-mediated lysis in vitro by the Austin Research Institute, Cellular Cytotoxicity Laboratory. More recently, fas-mediated bystander tumor cell killing was demonstrated in vivo by the Lymphoma Immunotherapy Program at Mount Sinai School of Medicine using T cells and CAR-T cells, similar to additional in vitro work using bispecific antibodies performed at Amgen. Role in apoptosis Some reports have suggested that the extrinsic Fas pathway is sufficient to induce complete apoptosis in certain cell types through DISC assembly and subsequent caspase-8 activation. These cells are dubbed Type 1 cells and are characterized by the inability of anti-apoptotic members of the Bcl-2 family (namely Bcl-2 and Bcl-xL) to protect from Fas-mediated apoptosis. Characterized Type 1 cells include H9, CH1, SKW6.4 and SW480, all of which are lymphocyte lineages except the latter, which is a colon adenocarcinoma lineage. However, evidence for crosstalk between the extrinsic and intrinsic pathways exists in the Fas signal cascade. In most cell types, caspase-8 catalyzes the cleavage of the pro-apoptotic BH3-only protein Bid into its truncated form, tBid. BH-3 only members of the Bcl-2 family exclusively engage anti-apoptotic members of the family (Bcl-2, Bcl-xL), allowing Bak and Bax to translocate to the outer mitochondrial membrane, thus permeabilizing it and facilitating release of pro-apoptotic proteins such as cytochrome c and Smac/DIABLO, an antagonist of inhibitors of apoptosis proteins (IAPs). Interactions Fas receptor has been shown to interact with: Caspase 8, Caspase 10, CFLAR, FADD, Fas ligand, PDCD6, and Small ubiquitin-related modifier 1. References Further reading External links Immune system Programmed cell death Signal transduction
Fas receptor
[ "Chemistry", "Biology" ]
1,578
[ "Immune system", "Signal transduction", "Senescence", "Organ systems", "Biochemistry", "Neurochemistry", "Programmed cell death" ]
4,477,636
https://en.wikipedia.org/wiki/Ultraluminous%20X-ray%20source
In astronomy and astrophysics, an ultraluminous X-ray source (ULX) is less luminous than an active galactic nucleus but more consistently luminous than any known stellar process (over 10^39 erg/s, or 10^32 watts), assuming that it radiates isotropically (the same in all directions). Typically there is about one ULX per galaxy in galaxies which host them, but some galaxies contain many. The Milky Way has not been shown to contain a ULX, although SS 433 is a candidate. The main interest in ULXs stems from their luminosity exceeding the Eddington luminosity of neutron stars and even stellar black holes. It is not known what powers ULXs; models include beamed emission of stellar mass objects, accreting intermediate-mass black holes, and super-Eddington emission. Observational facts ULXs were first discovered in the 1980s by the Einstein Observatory. Later observations were made by ROSAT. Great progress has been made by the X-ray observatories XMM-Newton and Chandra, which have a much greater spectral and angular resolution. A survey of ULXs by Chandra observations shows that there is approximately one ULX per galaxy in galaxies which host ULXs (most do not). ULXs are found in all types of galaxies, including elliptical galaxies, but are more ubiquitous in star-forming galaxies and in gravitationally interacting galaxies. Tens of percent of ULXs are in fact background quasars; the probability for a ULX to be a background source is larger in elliptical galaxies than in spiral galaxies. Models The fact that ULXs have Eddington luminosities larger than that of stellar mass objects implies that they are different from normal X-ray binaries. There are several models for ULXs, and it is likely that different models apply for different sources. Beamed emission — If the emission of the sources is strongly beamed, the Eddington argument is circumvented twice: first because the actual luminosity of the source is lower than inferred, and second because the accreted gas may come from a different direction than that in which the photons are emitted. Modelling indicates that stellar mass sources may reach luminosities up to 10^40 erg/s (10^33 W), enough to explain most of the sources, but too low for the most luminous sources. If the source is stellar mass and has a thermal spectrum, its temperature should be high, temperature times the Boltzmann constant kT ≈ 1 keV, and quasi-periodic oscillations are not expected. Intermediate-mass black holes — Black holes are observed in nature with masses of the order of ten times the mass of the Sun, and with masses of millions to billions times the solar mass. The former are 'stellar black holes', the end product of massive stars, while the latter are supermassive black holes, and exist in the centers of galaxies. Intermediate-mass black holes (IMBHs) are a hypothetical third class of objects, with masses in the range of hundreds to thousands of solar masses. Intermediate-mass black holes are light enough not to sink to the center of their host galaxies by dynamical friction, but sufficiently massive to be able to emit at ULX luminosities without exceeding the Eddington limit. If a ULX is an intermediate-mass black hole, in the high/soft state it should have a thermal component from an accretion disk peaking at a relatively low temperature (kT ≈ 0.1 keV) and it may exhibit quasi-periodic oscillation at relatively low frequencies. 
An argument made in favor of some sources as possible IMBHs is the analogy of the X-ray spectra as scaled-up stellar mass black hole X-ray binaries. The spectra of X-ray binaries have been observed to go through various transition states. The most notable of these states are the low/hard state and the high/soft state (see Remillard & McClintock 2006). The low/hard state or power-law dominated state is characterized by an absorbed power-law X-ray spectrum with spectral index from 1.5 to 2.0 (hard X-ray spectrum). Historically, this state was associated with a lower luminosity, though with better observations with satellites such as RXTE, this is not necessarily the case. The high/soft state is characterized by an absorbed thermal component (blackbody with a disk temperature of kT ≈ 1.0 keV) and a power-law (spectral index ≈ 2.5). At least one ULX source, Holmberg II X-1, has been observed in states with spectra characteristic of both the high and low state. This suggests that some ULXs may be accreting IMBHs (see Winter, Mushotzky, Reynolds 2006). Background quasars — A significant fraction of observed ULXs are in fact background sources. Such sources may be identified by a very low temperature (e.g. the soft excess in PG quasars). Supernova remnants — Bright supernova (SN) remnants may perhaps reach luminosities as high as 10^39 erg/s (10^32 W). If a ULX is a SN remnant it is not variable on short time-scales, and fades on a time-scale of the order of a few years. Notable ULXs Holmberg II X-1: This famous ULX resides in a dwarf galaxy. Multiple observations with XMM have revealed the source in both a low/hard and high/soft state, suggesting that this source could be a scaled-up X-ray binary or accreting IMBH. M74: Possibly containing an intermediate-mass black hole, as observed by Chandra in 2005. M82 X-1: This is the most luminous known ULX (as of Oct 2004), and has often been marked as the best candidate to host an intermediate-mass black hole. M82 X-1 is associated with a star cluster, exhibits quasi-periodic oscillations (QPOs), and has a modulation of 62 days in its X-ray amplitude. M82 X-2: An unusual ULX that was discovered in 2014 to be a pulsar rather than a black hole. M101-X1: One of the brightest ULXs, with luminosities up to 10^41 erg/s (10^34 W). This ULX coincides with an optical source that has been interpreted to be a supergiant star, thus supporting the case that this may be an X-ray binary. NGC 1313 X1 and X2: NGC 1313, a spiral galaxy in the constellation Reticulum, contains two ultraluminous X-ray sources. These two sources had low temperature disk components, which was interpreted as possible evidence for the presence of an intermediate-mass black hole. However, while the low-energy emission can be modelled as a low-temperature disk, the high-energy emission is at odds with the intermediate-mass black hole hypothesis. Moreover, X-ray pulsations have been detected in NGC 1313 X-2, identifying the object as a neutron star. At the same time, the intermediate-mass black hole hypothesis cannot account for the presence of large optical bubbles surrounding each of the ULXs. It is more likely that these two ULXs host stellar-mass neutron stars or black holes accreting at super-Eddington mass-transfer rates and that the powerful winds from the accretion disk have blown away the cavity that surrounds them. 
RX J0209.6-7427: A transient Be X-ray binary system last detected in 1993 in the Magellanic bridge that was found to be an ULX pulsar when it woke up from deep slumber after 26 years in 2019. See also Astronomical X-ray source X-ray astronomy References X-ray astronomy
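Since the defining feature of a ULX is emission above the Eddington luminosity of a stellar-mass accretor, a quick back-of-the-envelope check is often useful. The sketch below uses the standard Eddington luminosity for hydrogen accretion, L_Edd ≈ 1.26 × 10^38 (M/M_sun) erg/s, together with the 10^39 erg/s threshold quoted in the article; the function names are ours and the code is an illustration, not part of any ULX analysis pipeline.

```python
EDDINGTON_COEFF_ERG_S = 1.26e38  # Eddington luminosity per solar mass for hydrogen accretion, erg/s
ULX_THRESHOLD_ERG_S = 1e39       # luminosity above which a source is usually called a ULX

def eddington_luminosity(mass_solar: float) -> float:
    """Eddington luminosity (erg/s) of an accretor of the given mass in solar masses."""
    return EDDINGTON_COEFF_ERG_S * mass_solar

def min_mass_for_isotropic_luminosity(luminosity_erg_s: float) -> float:
    """Smallest accretor mass (solar masses) that could radiate this luminosity
    isotropically without exceeding its Eddington limit."""
    return luminosity_erg_s / EDDINGTON_COEFF_ERG_S

# A 10 solar-mass black hole caps out near 1.3e39 erg/s...
print(f"{eddington_luminosity(10):.2e} erg/s")
# ...so a source observed at 1e40 erg/s would need roughly 80 solar masses if it
# radiates isotropically -- hence the beaming and intermediate-mass black hole models.
print(f"{min_mass_for_isotropic_luminosity(1e40):.0f} solar masses")
```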
Ultraluminous X-ray source
[ "Astronomy" ]
1,652
[ "Astronomical X-ray sources", "Astronomical objects", "Astronomical sub-disciplines", "X-ray astronomy" ]
4,477,923
https://en.wikipedia.org/wiki/Arc%20mapping
Arc mapping is a technique used in fire investigation that relies on finding the locations of electrical arcs and other electrical faults that occurred during a fire; the locations of the electrical faults can then, under some circumstances, indicate the progression of the fire over time. It is usually performed by a forensic electrical engineer. The technique relies on the assumption that, when heat from fire (or from hot gases caused by the fire) impinges on an electrical line (whether or not protected by a conduit), it will melt the wire insulation and cause an electrical fault at the first point that it reaches on the electrical line. For this to occur, the electrical line must be energized at the time that fire hits it. Origin of fire Arc mapping often aims to determine the point of origin of the fire. A common assumption used in reaching this goal is that the fire expands uniformly in all directions as it burns, and maintains a circular shape centered on the point of origin; therefore, when an electrical fault occurs on an electrical line, the point of origin lies on a perpendicular to the line at that point. Contrary to this assumption, however, in many cases fire does not extend uniformly; in particular, large local fuel loads, venting, and air currents have a strong effect on fire progression. Several other factors affect the locations where electrical faults occur. Electrical faults do not always occur where fire first reaches a conduit, but preferentially occur at bends in a conduit or at locations where wires are pressed together. The elevation of an electrical line has a strong effect on its exposure to heat, since temperatures in a fire are generally highest near ceiling level, except in the immediate vicinity of the point of origin. Protection from the fire is an important consideration: being located within a wall or being covered in fiberglass insulation will offer some protection to an electrical line, and will delay electrical faults. Electrical faults An electrical fault will only occur if an electrical line is energized - more specifically, if two conductors are at different potentials, and have the capacity to source/sink significant current. An electrical arcing event will often (though not always) cause a loss of power downstream on the electrical line, such as by severing the conductors or tripping a circuit breaker; as a result, a significant part of arc-map analysis is determining the order in which sections of electrical lines lost power. In most cases, the electrical faults that occurred furthest downstream occurred first. Electrical faults can also energize conductors or components that are not normally energized, such as conduits (normally grounded). ("Energized" usually refers to a conductor being connected to hot power, as opposed to neutral. More generally, electrical faults occur between points at different potentials, such as between hot and neutral, between hot and ground, or between hots of two different phases.) Electrical arcing Investigators distinguish between electrical arcing and copper that melted due to the high temperatures of the fire. The term "electrical arcing", in fire investigation, often refers to "melted copper that indicates that electrical arcing occurred". "Melted copper" generally refers not to copper that is currently melted, but to "previously melted, resolidified copper". 
NFPA 921 and Kirk's Fire Investigation give some guidelines and illustrations on distinguishing between electrical arcing and other melted copper; however, these strict guidelines occasion much debate. References Fire Forensic techniques
Arc mapping
[ "Chemistry" ]
706
[ "Combustion", "Fire" ]
4,478,605
https://en.wikipedia.org/wiki/Organization%20for%20Women%20in%20Science%20for%20the%20Developing%20World
The Organization for Women in Science for the Developing World (OWSD) is an international organisation that provides research training, career development and networking opportunities for women scientists throughout the developing world at different stages in their career. It was founded in 1987 and was officially launched in 1993. The organisation was formerly known as the Third World Organization for Women in Science (TWOWS). It is a program unit of UNESCO and based at the offices of The World Academy of Sciences in Trieste, Italy. The organisation aims to unite eminent scientists from the developing and developed world with an objective of supporting their efforts in the development process. It also aims at promoting the representation of its members in the sphere of scientific and technological leadership. It does so through different programs like memberships, fellowships, and awards. Objectives Scientific research and advancement is important to generate knowledge and products aimed at solving problems faced by society. The generation of such knowledge is especially important in the developing countries, which face numerous problems like poverty, food scarcity, disease, climate change, and many more. Scientific innovation is not only important to find solutions to these problems but can also contribute to local economies. The inclusion of women in this innovation process provides a unique perspective on the local problems. In many developing countries, women have daily needs and routines oriented to their roles as main care-givers to the elderly and children. Women make up the majority of agricultural workers too, growing and harvesting food for their families, as well as collecting fresh water for drinking. If women are included as both participants in scientific research and as the beneficiaries of scientific research, the impact on children, on the elderly and on local communities will be direct and highly effective. OWSD aims to pursue the following objectives: a. Increase the participation of women in developing countries in scientific and technological research, teaching and leadership; b. Promote the recognition of the scientific and technological achievements of women scientists and technologists in developing countries; c. Promote collaboration and communication among women scientists and technologists in developing countries and with the international scientific community as a whole; d. Increase access of women in developing countries to the socio-economic benefits of science and technology; e. Promote the participation of women scientists and technologists in the sustainable and economic development of their country; and f. Increase understanding of the role of science and technology in supporting women's development activities. History The idea for OWSD was first raised at a conference on The Role of Women in the Development of Science and Technology in the Third World in 1988, organized by the World Academy of Sciences, where more than 200 leading women scientists from 63 developing countries participated. A study group consisting of top women scientists and experts was formed to explore the possibility of creating an organization that would champion the experience, needs and skills of women scientists in the developing world. At a further meeting in Trieste in 1989, the Third World Organization for Women in Science (TWOWS) was established and a constitution adopted. TWOWS was officially launched four years later in 1993, at the First General Assembly in Cairo, Egypt. 
Name change On 29 June 2010, at the organisation's Fourth General Assembly in Beijing, China members voted to adopt the name Organization for Women in Science for the Developing World (OWSD). Funding OWSD is funded by external donors. The Swedish International Development Cooperation Agency (Sida) has supported OWSD financially since 1998 and provides full funding for the postgraduate fellowship programme. Since 2010, the Elsevier Foundation has provided funding for the OWSD-Elsevier Foundation Awards for Early Career Women Scientists, given to five women scientists from the developing world each year. In 2017, an agreement was signed with Canada's International Development Research Centre (IDRC) to fully fund a new OWSD fellowship for Early Career Women Scientists. National chapters OWSD has national chapters in 30 countries. National chapters organize regional conferences, seminars and workshops, lead national and regional initiatives for women in science, and provide OWSD members with networking opportunities. Programmes PhD fellowships OWSD PhD fellowships are offered to women from selected science and technology-lagging countries in the developing world to undertake PhD research in the natural sciences, including engineering and information technology, at host institutes in another developing country. These scholarships cover all costs related to undertaking research in a host country that are not covered by the host institute, including travel, visa and health costs, tuition and bench fees as well as a monthly stipend for the awardees' board, accommodation and living expenses. The fellowship also includes additional funding for each PhD fellow to travel to international workshops and conferences of relevance. The fellowship is offered as either a full-time (up to four years) or sandwich option, in which the fellow is a registered PhD student in her home country and undertakes a maximum of 3 research visits at the host institute for minimum 6 up to 20 months. More than 250 women scientists have graduated from the fellowship programme with PhDs since 1998. The programme is funded by Sida, the Swedish International Development Cooperation Agency. Early career fellowships The OWSD Early Career fellowship is a prestigious award of up to US$50,000 offered to women who have completed their PhDs in science, technology, engineering and mathematics (STEM) subjects and are employed at an academic or scientific research institute in selected science and technology-lagging countries in the developing world. Early career fellows are supported to continue their research at an international level while based at their home institutes and to build up research groups that will attract international visitors. The Early Career fellowship programme is funded by the International Development Research Centre (IDRC) of Canada. The first cohort of Early Career fellows was awarded in 2018. OWSD-Elsevier Foundation Awards for Early Career Women Scientists The OWSD-Elsevier Foundation Awards are an annual prize given to reward and encourage women scientists working and living in developing countries who are in the early stages of their careers. Initially launched in 2010, the Awards are presented to five scientists each year, one from each of the four OWSD regions plus an additional exceptional winner from any region. The eligible scientific disciplines for the Awards rotate between the biological sciences, physical sciences and engineering. 
Each winner receives US$5,000 and presents her research during a special awards ceremony at the American Association for the Advancement of Science (AAAS) annual meeting. Awardees must have made a demonstrable impact on the research environment both at a regional and international level. Membership OWSD has more than 8,000 members from 137 countries. Over 90% of OWSD members are women living and working in developing countries who have master's or doctorate degrees in scientific subjects. Leadership OWSD is governed by an executive board, composed of a president, four regional vice presidents and four regional members. All the members of the executive board are elected by the members of the general assembly, who vote from a shortlist of candidates selected through an online voting system. All members are invited to attend the general assembly and have the right to participate in its discussions, but only full members have the right to vote. List of presidents 2016–2020: Jennifer A. Thomson (South Africa) 2010–2015: Fang Xin (China) 2005–2010: Kaiser Jamil (India) 1999–2004: Lydia Makhubu (Swaziland) 1993–1998: Lydia Makhubu (Swaziland) Impact Since its inception in 1987 and launch in 1993, OWSD has grown considerably in both scope and impact. The organisation celebrated its 25th anniversary in 2018 and released its first Annual Report. The report highlighted the membership of over 7,100 scientists from more than 150 countries. It also detailed the work of the organization through its headquarters in Italy and the 20 other national chapters in developing countries across the globe. OWSD's flagship programme has been the South-to-South PhD fellowship programme, which aims at supporting the mobility of women scientists. The fellowship provides support to women from scientifically- and technologically-lagging countries (STLCs) to undertake PhD research at a host institution of recognized research excellence in another developing country. In the past ten years, a total of 314 fellowships have been awarded. By 2018, 251 fellows had successfully graduated and a further 193 were enrolled and completing their studies. In 2018, OWSD launched an Early-Career fellowship program. This fellowship is funded by the Canadian International Development Research Centre (IDRC) and greatly increases the impact of the organization. It expands the scope of OWSD's programmes by providing fellowships to scientists in the early stages of their career. The first cohort included 19 candidates from 11 different countries. References External links Women's organisations based in Italy Organizations for women in science and technology Organizations established in 1993 International women's organizations
Organization for Women in Science for the Developing World
[ "Technology" ]
1,761
[ "Organizations for women in science and technology", "Women in science and technology" ]
4,479,365
https://en.wikipedia.org/wiki/Stream%20ecology
Stream ecology is the scientific study of aquatic species, their interactions with one another, and their connection with the biological, chemical, and physical processes that operate across multiple dimensions within streams. Streams display great variability in their force and generate spatial and temporal gradients in abiotic and biotic activities. The physical structure of stream networks shows that headwater systems behave differently from mid- and lower-order systems, with mean annual discharge, channel size, alluvial habitat and contributing area all key factors. Importance Streams, along with lakes, rivers, and wetlands within protected areas, were viewed as an afterthought. However, freshwater aquatic ecosystems such as streams are connected by flowing water that changes on a spatial and temporal scale. Streams (as well as rivers) are a crucial part of many past and present cities, and potentially of any future ones. Early settlements and the development of cities grew up around streams because they provide many services, such as transportation, commerce, recreation, water supply, and much more. Therefore, there has been a shift in research from the biophysical structure of stream ecosystems to their functional properties. Streams carry sediments and nutrients into rivers, lakes, and subsequently into oceans. Understanding streams Streams have two primary functions: to transport water from higher elevations to lower elevations, and to transport sediment. A healthy stream has an equal amount of sediment being picked up and moved downstream and sediment being deposited in the stream. This may also result in healthy lakes, rivers, and estuaries. Excess deposition in a stream can lead to mid-channel sediment bars. Excess channel erosion can result in rapid deepening or widening of the stream channel, all of which affect stream ecosystem functions and services. Streams also provide essential services to their aquatic life and associated organisms, such as horses, which may depend on the stream for drinking water, or fish, which depend on it for habitat. Therefore, stream ecosystems cannot be studied in isolation. Streams are dynamic and therefore carry a great deal of energy, owing to their movement of water and sediment through a stream system. "The faster the stream flows, the greater the power it has to erode and carry sediment." One way a stream can dissipate the energy of flowing water is by altering its flow pattern, or meandering, by forming curves along the distance the flow travels. Therefore, it is normal for stream channels to move slowly over time. Riparian zones are equally important in streams, as the deep or densely rooted, water-loving plants in this zone along the stream channel add another layer of buffering against this energy. Communities Microbial communities in stream ecosystems are the trophic foundation, playing a large role in nutrient turnover and recycling. The discharge and velocity of a stream typically determine the species (especially algal) that occupy a system. Prokaryotes and eukaryotes decompose organic matter and are consumed by other organisms at a higher trophic level. Together, the productivity of these microbial communities represents the overall productivity of stream ecosystems. Other organisms, such as plants, aquatic insects, fungi, fish, and mammals, also inhabit stream ecosystems. The riparian zone The area alongside a stream covered in vegetation is called the riparian zone. 
The vegetation that thrives there depends on the geographic location of the stream, such as the continent, climate, stream hydrology, etc. These zones contribute nutrients, shade, organic materials, habitats, protection for the stream, and much more. Human impacts Urbanization The field of urban stream ecology has evolved rapidly over the past thirty years, showcasing the growing need to set regulations and spread educational resources on streams. However, increasing urban development has altered stream systems, including their catchment land cover and flow paths, riparian zones, and channels, bringing devastation to their natural ecosystem structures and functions. Urbanization replaces natural landscapes with impermeable surfaces such as roads. As urbanization spreads to rural areas and people leave inner-city living, infrastructure demand rises and natural spaces are removed. One of the most detrimental effects has been the introduction of complex chemical mixtures of contaminants and nutrients into natural ecosystems, including streams. This typically leads to excess nutrient levels in these systems, most notably of nitrogen and phosphorus. Chemical and biogeochemical processes There are many different hydrological and biogeochemical processes that work separately or in tandem to contribute to nitrogen and phosphorus removal. Common processes occurring in stream ecosystems include denitrification, nitrification, sediment retention, assimilation, and adsorption, among others. Freshwater salinization Freshwater salinization is a growing threat to urban streams, watersheds, and other sources of freshwater, primarily due to freshwater salinization syndrome. Major drivers in the mobilization of salts, nutrients, and metals are anthropogenic factors such as road salting, sewage systems, and the addition of impervious surfaces. Stream restoration A large reason for stream restoration is to remove nitrogen and phosphorus pollution. Excess nitrogen and phosphorus from anthropogenic activities have contributed to stream and river quality concerns such as drinking water contamination, hypoxia, and algal blooms. There are different approaches one may take to offset these effects. However, the approach used will vary based on the stream ecosystem in question. It is also important to note that restoration projects do not produce only positive outcomes. There are many trade-offs and co-benefits that arise with each approach. Floodplain reconnection Employed to attempt to influence water quality by slowing down stream flow via reconnection of a stream to its floodplain. This approach works best in locations where excess stormwater can temporarily be stored in the floodplain, reducing peak flows and therefore improving nutrient processing such as denitrification. However, a limitation is that it may deteriorate over time due to erosion, failure of restoration features, or lack of maintenance of the site. Streambed (hyporheic) reconnection This approach targets the reconnection of the stream to its hyporheic zone. It aims to improve water exchange between the stream and the hyporheic zone. Therefore, it works best when there is a need to increase nutrient processing, enhance water quality, or even slow down water flow, allowing for more interaction between the zones. However, a limitation is that it can be less effective in urban areas, which have more impermeable surfaces and less space to implement reconnection. 
Increasing stream surface area This includes changing a stream's morphology by adding features to the stream such as meanders, step pools, oxbows, et cetera, to create more surface area for nutrient processes and interaction. Similar to the streambed reconnection approach, a limitation is a lack of space to implement it effectively. However, it works best when wanting to facilitate biological interactions, sediment trapping, nutrient uptake, et cetera, to improve nutrient processes. Stormwater management This approach entails infrastructure that can slow down stormwater so it can infiltrate, and reduce flashy flows and nutrient loads. Therefore, it is best implemented for long-term goals and planning, as the time for a stream to recover following stormwater construction projects can vary and may take several years. It does aim to mitigate the negative impacts of storm flow/runoff on stream health. However, a limitation to this approach is that existing infrastructure can complicate it. Thus, there could be contaminated water and it can be costly. Increasing the surface area of wetlands This approach aims to increase the area of wetlands through restoration or creation, since wetlands are highly effective at retaining nutrients. For instance, creating wetlands within the stream channel can increase the retention of nutrients. Wetlands can also store floodwaters and filter nutrients before they enter streams. Therefore, it is best to be considered when you want to create a type of buffer zone to enhance nutrient processes. A limitation to this approach is that wetlands can dry out or suffer from degradation if in-stream features or hydrologic connectivity is not maintained. References Water streams Aquatic ecology
Stream ecology
[ "Biology" ]
1,646
[ "Aquatic ecology", "Ecosystems" ]
4,480,666
https://en.wikipedia.org/wiki/Ceramic%20engineering
Ceramic engineering is the science and technology of creating objects from inorganic, non-metallic materials. This is done either by the action of heat, or at lower temperatures using precipitation reactions from high-purity chemical solutions. The term includes the purification of raw materials, the study and production of the chemical compounds concerned, their formation into components and the study of their structure, composition and properties. Ceramic materials may have a crystalline or partly crystalline structure, with long-range order on atomic scale. Glass-ceramics may have an amorphous or glassy structure, with limited or short-range atomic order. They are either formed from a molten mass that solidifies on cooling, formed and matured by the action of heat, or chemically synthesized at low temperatures using, for example, hydrothermal or sol-gel synthesis. The special character of ceramic materials gives rise to many applications in materials engineering, electrical engineering, chemical engineering and mechanical engineering. As ceramics are heat resistant, they can be used for many tasks for which materials like metal and polymers are unsuitable. Ceramic materials are used in a wide range of industries, including mining, aerospace, medicine, refinery, food and chemical industries, packaging science, electronics, industrial and transmission electricity, and guided lightwave transmission. History The word "ceramic" is derived from the Greek word () meaning pottery. It is related to the older Indo-European language root "to burn". "Ceramic" may be used as a noun in the singular to refer to a ceramic material or the product of ceramic manufacture, or as an adjective. Ceramics is the making of things out of ceramic materials. Ceramic engineering, like many sciences, evolved from a different discipline by today's standards. Materials science engineering is grouped with ceramics engineering to this day. Abraham Darby first used coke in 1709 in Shropshire, England, to improve the yield of a smelting process. Coke is now widely used to produce carbide ceramics. Potter Josiah Wedgwood opened the first modern ceramics factory in Stoke-on-Trent, England, in 1759. Austrian chemist Carl Josef Bayer, working for the textile industry in Russia, developed a process to separate alumina from bauxite ore in 1888. The Bayer process is still used to purify alumina for the ceramic and aluminium industries. Brothers Pierre and Jacques Curie discovered piezoelectricity in Rochelle salt . Piezoelectricity is one of the key properties of electroceramics. E.G. Acheson heated a mixture of coke and clay in 1893, and invented carborundum, or synthetic silicon carbide. Henri Moissan also synthesized SiC and tungsten carbide in his electric arc furnace in Paris about the same time as Acheson. Karl Schröter used liquid-phase sintering to bond or "cement" Moissan's tungsten carbide particles with cobalt in 1923 in Germany. Cemented (metal-bonded) carbide edges greatly increase the durability of hardened steel cutting tools. W.H. Nernst developed cubic-stabilized zirconia in the 1920s in Berlin. This material is used as an oxygen sensor in exhaust systems. The main limitation on the use of ceramics in engineering is brittleness. Military The military requirements of World War II encouraged developments, which created a need for high-performance materials and helped speed the development of ceramic science and engineering. 
Throughout the 1960s and 1970s, new types of ceramics were developed in response to advances in atomic energy, electronics, communications, and space travel. The discovery of ceramic superconductors in 1986 has spurred intense research to develop superconducting ceramic parts for electronic devices, electric motors, and transportation equipment. There is an increasing need in the military sector for high-strength, robust materials which have the capability to transmit light around the visible (0.4–0.7 micrometers) and mid-infrared (1–5 micrometers) regions of the spectrum. These materials are needed for applications requiring transparent armour. Transparent armour is a material or system of materials designed to be optically transparent, yet protect from fragmentation or ballistic impacts. The primary requirement for a transparent armour system is to not only defeat the designated threat but also provide a multi-hit capability with minimized distortion of surrounding areas. Transparent armour windows must also be compatible with night vision equipment. New materials that are thinner, lightweight, and offer better ballistic performance are being sought. Such solid-state components have found widespread use for various applications in the electro-optical field including: optical fibres for guided lightwave transmission, optical switches, laser amplifiers and lenses, hosts for solid-state lasers and optical window materials for gas lasers, and infrared (IR) heat seeking devices for missile guidance systems and IR night vision. Modern industry Now a multibillion-dollar a year industry, ceramic engineering and research has established itself as an important field of science. Applications continue to expand as researchers develop new kinds of ceramics to serve different purposes. Zirconium dioxide ceramics are used in the manufacture of knives. The blade of the ceramic knife will stay sharp for much longer than that of a steel knife, although it is more brittle and can be snapped by dropping it on a hard surface. Ceramics such as alumina, boron carbide and silicon carbide have been used in bulletproof vests to repel small arms rifle fire. Such plates are known commonly as ballistic plates. Similar material is used to protect cockpits of some military aircraft, because of the low weight of the material. Silicon nitride parts are used in ceramic ball bearings. Their higher hardness means that they are much less susceptible to wear and can offer more than triple lifetimes. They also deform less under load meaning they have less contact with the bearing retainer walls and can roll faster. In very high speed applications, heat from friction during rolling can cause problems for metal bearings; problems which are reduced by the use of ceramics. Ceramics are also more chemically resistant and can be used in wet environments where steel bearings would rust. The major drawback to using ceramics is a significantly higher cost. In many cases their electrically insulating properties may also be valuable in bearings. In the early 1980s, Toyota researched production of an adiabatic ceramic engine which can run at a temperature of over 6000 °F (3300 °C). Ceramic engines do not require a cooling system and hence allow a major weight reduction and therefore greater fuel efficiency. Fuel efficiency of the engine is also higher at high temperature, as shown by Carnot's theorem. 
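The Carnot argument referenced above can be made concrete with a short calculation. The sketch below is illustrative only: the operating and ambient temperatures are assumed values chosen for demonstration, not actual specifications of Toyota's ceramic engine.

```python
def carnot_efficiency(t_hot_k, t_cold_k):
    """Upper bound on heat-engine efficiency set by Carnot's theorem."""
    return 1.0 - t_cold_k / t_hot_k

t_ambient = 300.0          # K, assumed heat-rejection temperature
t_metal_engine = 1100.0    # K, illustrative peak temperature for a cooled metal engine
t_ceramic_engine = 3573.0  # K, roughly the 3300 C adiabatic ceramic engine temperature cited above

for label, t_hot in [("metal (cooled)", t_metal_engine),
                     ("ceramic (adiabatic)", t_ceramic_engine)]:
    print(f"{label:20s} Carnot limit: {carnot_efficiency(t_hot, t_ambient):.0%}")

# Raising the hot-side temperature raises the Carnot limit, which is the sense
# in which fuel efficiency is higher at high operating temperature.
```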
In a conventional metallic engine, much of the energy released from the fuel must be dissipated as waste heat in order to prevent a meltdown of the metallic parts. Despite all of these desirable properties, such engines are not in production because the manufacturing of ceramic parts in the requisite precision and durability is difficult. Imperfection in the ceramic leads to cracks, which can lead to potentially dangerous equipment failure. Such engines are possible in laboratory settings, but mass-production is not feasible with current technology. Work is being done in developing ceramic parts for gas turbine engines. Currently, even blades made of advanced metal alloys used in the engines' hot section require cooling and careful limiting of operating temperatures. Turbine engines made with ceramics could operate more efficiently, giving aircraft greater range and payload for a set amount of fuel. Recently, there have been advances in ceramics which include bio-ceramics, such as dental implants and synthetic bones. Hydroxyapatite, the natural mineral component of bone, has been made synthetically from a number of biological and chemical sources and can be formed into ceramic materials. Orthopedic implants made from these materials bond readily to bone and other tissues in the body without rejection or inflammatory reactions. Because of this, they are of great interest for gene delivery and tissue engineering scaffolds. Most hydroxyapatite ceramics are very porous and lack mechanical strength and are used to coat metal orthopedic devices to aid in forming a bond to bone or as bone fillers. They are also used as fillers for orthopedic plastic screws to aid in reducing the inflammation and increase absorption of these plastic materials. Work is being done to make strong, fully dense nano crystalline hydroxyapatite ceramic materials for orthopedic weight bearing devices, replacing foreign metal and plastic orthopedic materials with a synthetic, but naturally occurring, bone mineral. Ultimately these ceramic materials may be used as bone replacements or with the incorporation of protein collagens, synthetic bones. Durable actinide-containing ceramic materials have many applications such as in nuclear fuels for burning excess Pu and in chemically-inert sources of alpha irradiation for power supply of unmanned space vehicles or to produce electricity for microelectronic devices. Both use and disposal of radioactive actinides require their immobilization in a durable host material. Nuclear waste long-lived radionuclides such as actinides are immobilized using chemically-durable crystalline materials based on polycrystalline ceramics and large single crystals. Alumina ceramics are widely utilized in the chemical industry due to their excellent chemical stability and high resistance to corrosion. It is used as acid-resistant pump impellers and pump bodies, ensuring long-lasting performance in transferring aggressive fluids. They are also used in acid-carrying pipe linings to prevent contamination and maintain fluid purity, which is crucial in industries like pharmaceuticals and food processing. Valves made from alumina ceramics demonstrate exceptional durability and resistance to chemical attack, making them reliable for controlling the flow of corrosive liquids. Glass-ceramics Glass-ceramic materials share many properties with both glasses and ceramics. 
Glass-ceramics have an amorphous phase and one or more crystalline phases and are produced by a so-called "controlled crystallization", which is typically avoided in glass manufacturing. Glass-ceramics often contain a crystalline phase which constitutes anywhere from 30% [m/m] to 90% [m/m] of its composition by volume, yielding an array of materials with interesting thermomechanical properties. In the processing of glass-ceramics, molten glass is cooled down gradually before reheating and annealing. In this heat treatment the glass partly crystallizes. In many cases, so-called 'nucleation agents' are added in order to regulate and control the crystallization process. Because there is usually no pressing and sintering, glass-ceramics do not contain the volume fraction of porosity typically present in sintered ceramics. The term mainly refers to a mix of lithium and aluminosilicates which yields an array of materials with interesting thermomechanical properties. The most commercially important of these have the distinction of being impervious to thermal shock. Thus, glass-ceramics have become extremely useful for countertop cooking. The negative thermal expansion coefficient (TEC) of the crystalline ceramic phase can be balanced with the positive TEC of the glassy phase. At a certain point (~70% crystalline) the glass-ceramic has a net TEC near zero. This type of glass-ceramic exhibits excellent mechanical properties and can sustain repeated and quick temperature changes up to 1000 °C. Processing steps The traditional ceramic process generally follows this sequence: Milling → Batching → Mixing → Forming → Drying → Firing → Assembly. Milling is the process by which materials are reduced from a large size to a smaller size. Milling may involve breaking up cemented material (in which case individual particles retain their shape) or pulverization (which involves grinding the particles themselves to a smaller size). Milling is generally done by mechanical means, including attrition (which is particle-to-particle collision that results in agglomerate break up or particle shearing), compression (which applies forces that result in fracturing), and impact (which employs a milling medium or the particles themselves to cause fracturing). Attrition milling equipment includes the wet scrubber (also called the planetary mill or wet attrition mill), which has paddles in water creating vortexes in which the material collides and breaks up. Compression mills include the jaw crusher, roller crusher and cone crusher. Impact mills include the ball mill, which has media that tumble and fracture the material, or the ResonantAcoustic mixer. Shaft impactors cause particle-to-particle attrition and compression. Batching is the process of weighing the oxides according to recipes, and preparing them for mixing and drying. Mixing occurs after batching and is performed with various machines, such as dry mixing ribbon mixers (a type of cement mixer), ResonantAcoustic mixers, Mueller mixers, and pug mills. Wet mixing generally involves the same equipment. Forming is making the mixed material into shapes, ranging from toilet bowls to spark plug insulators. Forming can involve: (1) Extrusion, such as extruding "slugs" to make bricks, (2) Pressing to make shaped parts, (3) Slip casting, as in making toilet bowls, wash basins and ornamentals like ceramic statues. Forming produces a "green" part, ready for drying. Green parts are soft, pliable, and over time will lose shape. Handling the green product will change its shape. 
For example, a green brick can be "squeezed", and after squeezing it will stay that way. Drying is removing the water or binder from the formed material. Spray drying is widely used to prepare powder for pressing operations. Other dryers are tunnel dryers and periodic dryers. Controlled heat is applied in this two-stage process. First, heat removes water. This step needs careful control, as rapid heating causes cracks and surface defects. The dried part is smaller than the green part, and is brittle, necessitating careful handling, since a small impact will cause crumbling and breaking. Sintering is where the dried parts pass through a controlled heating process, and the oxides are chemically changed to cause bonding and densification. The fired part will be smaller than the dried part. Forming methods Ceramic forming techniques include throwing, slipcasting, tape casting, freeze-casting, injection molding, dry pressing, isostatic pressing, hot isostatic pressing (HIP), 3D printing and others. Methods for forming ceramic powders into complex shapes are desirable in many areas of technology. Such methods are required for producing advanced, high-temperature structural parts such as heat engine components and turbines. Materials other than ceramics which are used in these processes may include: wood, metal, water, plaster and epoxy—most of which will be eliminated upon firing. A ceramic-filled epoxy, such as Martyte, is sometimes used to protect structural steel under conditions of rocket exhaust impingement. These forming techniques are well known for providing tools and other components with dimensional stability, surface quality, high (near theoretical) density and microstructural uniformity. The increasing use and diversity of specialty forms of ceramics adds to the diversity of process technologies to be used. Thus, reinforcing fibers and filaments are mainly made by polymer, sol-gel, or CVD processes, but melt processing also has applicability. The most widely used specialty form is layered structures, with tape casting for electronic substrates and packages being pre-eminent. Photo-lithography is of increasing interest for precise patterning of conductors and other components for such packaging. Tape casting or forming processes are also of increasing interest for other applications, ranging from open structures such as fuel cells to ceramic composites. The other major layer structure is coating, where thermal spraying is very important, but chemical and physical vapor deposition and chemical (e.g., sol-gel and polymer pyrolysis) methods are all seeing increased use. Besides open structures from formed tape, extruded structures, such as honeycomb catalyst supports, and highly porous structures, including various foams, for example, reticulated foam, are of increasing use. Densification of consolidated powder bodies continues to be achieved predominantly by (pressureless) sintering. However, the use of pressure sintering by hot pressing is increasing, especially for non-oxides and parts of simple shapes where higher quality (mainly microstructural homogeneity) is needed, and larger size or multiple parts per pressing can be an advantage. The sintering process The principles of sintering-based methods are simple ("sinter" has roots in the English "cinder"). The firing is done at a temperature below the melting point of the ceramic. 
Once a roughly-held-together object called a "green body" is made, it is fired in a kiln, where atomic and molecular diffusion processes give rise to significant changes in the primary microstructural features. This includes the gradual elimination of porosity, which is typically accompanied by a net shrinkage and overall densification of the component. Thus, the pores in the object may close up, resulting in a denser product of significantly greater strength and fracture toughness. Another major change in the body during the firing or sintering process will be the establishment of the polycrystalline nature of the solid. Significant grain growth tends to occur during sintering, with this growth depending on temperature and duration of the sintering process. The growth of grains will result in some form of grain size distribution, which will have a significant impact on the ultimate physical properties of the material. In particular, abnormal grain growth in which certain grains grow very large in a matrix of finer grains will significantly alter the physical and mechanical properties of the obtained ceramic. In the sintered body, grain sizes are a product of the thermal processing parameters as well as the initial particle size, or possibly the sizes of aggregates or particle clusters which arise during the initial stages of processing. The ultimate microstructure (and thus the physical properties) of the final product will be limited by and subject to the form of the structural template or precursor which is created in the initial stages of chemical synthesis and physical forming. Hence the importance of chemical powder and polymer processing as it pertains to the synthesis of industrial ceramics, glasses and glass-ceramics. There are numerous possible refinements of the sintering process. Some of the most common involve pressing the green body to give the densification a head start and reduce the sintering time needed. Sometimes organic binders such as polyvinyl alcohol are added to hold the green body together; these burn out during the firing (at 200–350 °C). Sometimes organic lubricants are added during pressing to increase densification. It is common to combine these, and add binders and lubricants to a powder, then press. (The formulation of these organic chemical additives is an art in itself. This is particularly important in the manufacture of high performance ceramics such as those used by the billions for electronics, in capacitors, inductors, sensors, etc.) A slurry can be used in place of a powder, and then cast into a desired shape, dried and then sintered. Indeed, traditional pottery is done with this type of method, using a plastic mixture worked with the hands. If a mixture of different materials is used together in a ceramic, the sintering temperature is sometimes above the melting point of one minor component – a liquid phase sintering. This results in shorter sintering times compared to solid state sintering. Such liquid phase sintering involves in faster diffusion processes and may result in abnormal grain growth. Strength of ceramics A material's strength is dependent on its microstructure. The engineering processes to which a material is subjected can alter its microstructure. The variety of strengthening mechanisms that alter the strength of a material include the mechanism of grain boundary strengthening. Thus, although yield strength is maximized with decreasing grain size, ultimately, very small grain sizes make the material brittle. 
Considered in tandem with the fact that the yield strength is the parameter that predicts plastic deformation in the material, one can make informed decisions on how to increase the strength of a material depending on its microstructural properties and the desired end effect. The relation between yield stress and grain size is described mathematically by the Hall-Petch equation, σy = σo + ky/√d, where ky is the strengthening coefficient (a constant unique to each material), σo is a materials constant for the starting stress for dislocation movement (or the resistance of the lattice to dislocation motion), d is the grain diameter, and σy is the yield stress. Theoretically, a material could be made infinitely strong if the grains are made infinitely small. This is, unfortunately, impossible because the lower limit of grain size is a single unit cell of the material. Even then, if the grains of a material are the size of a single unit cell, then the material is in fact amorphous, not crystalline, since there is no long-range order, and dislocations cannot be defined in an amorphous material. It has been observed experimentally that the microstructure with the highest yield strength is a grain size of about 10 nanometers, because grains smaller than this undergo another yielding mechanism, grain boundary sliding. Producing engineering materials with this ideal grain size is difficult because of the limitations of initial particle sizes inherent to nanomaterials and nanotechnology. Faber-Evans model The Faber-Evans model, developed by Katherine Faber and Anthony G. Evans, predicts the increase in fracture toughness in ceramics due to crack deflection around second-phase particles that are prone to microcracking in a matrix. The model considers particle morphology, aspect ratio, spacing, and volume fraction of the second phase, as well as the reduction in local stress intensity at the crack tip when the crack is deflected or the crack plane bows. Actual crack tortuosity is obtained through imaging techniques, which allows for the direct input of deflection and bowing angles into the model. The model calculates the average strain energy release rate and compares the resulting increase in fracture toughness to that of a flat crack through the plain matrix. The magnitude of the toughening is determined by the mismatch strain caused by thermal contraction incompatibility and the microfracture resistance of the particle/matrix interface. The toughening becomes noticeable with a narrow size distribution of appropriately sized particles, and researchers typically accept that deflection effects in materials with roughly equiaxial grains may increase the fracture toughness by about twice the grain boundary value. The model reveals that the increase in toughness is dependent on particle shape and the volume fraction of the second phase, with the most effective morphology being the rod of high aspect ratio, which can account for a fourfold increase in fracture toughness. The toughening arises primarily from the twist of the crack front between particles, as indicated by deflection profiles. Disc-shaped particles and spheres are less effective in toughening. Fracture toughness, regardless of morphology, is determined by the twist of the crack front at its most severe configuration, rather than the initial tilt of the crack front. 
Only for disc-shaped particles does the initial tilting of the crack front provide significant toughening; however, the twist component still overrides the tilt-derived toughening. Additional important features of the deflection analysis include the appearance of asymptotic toughening for the three morphologies at volume fractions in excess of 0.2. It is also noted that a significant influence on the toughening by spherical particles is exerted by the interparticle spacing distribution; greater toughening is afforded when spheres are nearly contacting such that twist angles approach π/2. These predictions provide the basis for the design of high-toughness two-phase ceramic materials. The ideal second phase, in addition to maintaining chemical compatibility, should be present in amounts of 10 to 20 volume percent. Greater amounts may diminish the toughness increase due to overlapping particles. Particles with high aspect ratios, especially those with rod-shaped morphologies, are most suitable for maximum toughening. This model is often used to determine the factors that contribute to the increase in fracture toughness in ceramics, which is ultimately useful in the development of advanced ceramic materials with improved performance. Theory of chemical processing Microstructural uniformity In the processing of fine ceramics, the irregular particle sizes and shapes in a typical powder often lead to non-uniform packing morphologies that result in packing density variations in the powder compact. Uncontrolled agglomeration of powders due to attractive van der Waals forces can also give rise to microstructural inhomogeneities. Differential stresses that develop as a result of non-uniform drying shrinkage are directly related to the rate at which the solvent can be removed, and thus highly dependent upon the distribution of porosity. Such stresses have been associated with a plastic-to-brittle transition in consolidated bodies, and can lead to crack propagation in the unfired body if not relieved. In addition, any fluctuations in packing density in the compact as it is prepared for the kiln are often amplified during the sintering process, yielding inhomogeneous densification. Some pores and other structural defects associated with density variations have been shown to play a detrimental role in the sintering process by growing and thus limiting end-point densities. Differential stresses arising from inhomogeneous densification have also been shown to result in the propagation of internal cracks, thus becoming the strength-controlling flaws. It would therefore appear desirable to process a material in such a way that it is physically uniform with regard to the distribution of components and porosity, rather than using particle size distributions which will maximize the green density. The containment of a uniformly dispersed assembly of strongly interacting particles in suspension requires total control over particle-particle interactions. Monodisperse colloids provide this potential. Monodisperse powders of colloidal silica, for example, may therefore be stabilized sufficiently to ensure a high degree of order in the colloidal crystal or polycrystalline colloidal solid which results from aggregation. The degree of order appears to be limited by the time and space allowed for longer-range correlations to be established. 
Such defective polycrystalline colloidal structures would appear to be the basic elements of sub-micrometer colloidal materials science, and, therefore, provide the first step in developing a more rigorous understanding of the mechanisms involved in microstructural evolution in inorganic systems such as polycrystalline ceramics. Self-assembly Self-assembly is the most common term in use in the modern scientific community to describe the spontaneous aggregation of particles (atoms, molecules, colloids, micelles, etc.) without the influence of any external forces. Large groups of such particles are known to assemble themselves into thermodynamically stable, structurally well-defined arrays, quite reminiscent of one of the 7 crystal systems found in metallurgy and mineralogy (e.g. face-centered cubic, body-centered cubic, etc.). The fundamental difference in equilibrium structure is in the spatial scale of the unit cell (or lattice parameter) in each particular case. Thus, self-assembly is emerging as a new strategy in chemical synthesis and nanotechnology. Molecular self-assembly has been observed in various biological systems and underlies the formation of a wide variety of complex biological structures. Molecular crystals, liquid crystals, colloids, micelles, emulsions, phase-separated polymers, thin films and self-assembled monolayers all represent examples of the types of highly ordered structures which are obtained using these techniques. The distinguishing feature of these methods is self-organization in the absence of any external forces. In addition, the principal mechanical characteristics and structures of biological ceramics, polymer composites, elastomers, and cellular materials are being re-evaluated, with an emphasis on bioinspired materials and structures. Traditional approaches focus on design methods of biological materials using conventional synthetic materials. This includes an emerging class of mechanically superior biomaterials based on microstructural features and designs found in nature. The new horizons have been identified in the synthesis of bioinspired materials through processes that are characteristic of biological systems in nature. This includes the nanoscale self-assembly of the components and the development of hierarchical structures. Ceramic composites Substantial interest has arisen in recent years in fabricating ceramic composites. While there is considerable interest in composites with one or more non-ceramic constituents, the greatest attention is on composites in which all constituents are ceramic. These typically comprise two ceramic constituents: a continuous matrix, and a dispersed phase of ceramic particles, whiskers, or short (chopped) or continuous ceramic fibers. The challenge, as in wet chemical processing, is to obtain a uniform or homogeneous distribution of the dispersed particle or fiber phase. Consider first the processing of particulate composites. The particulate phase of greatest interest is tetragonal zirconia because of the toughening that can be achieved from the phase transformation from the metastable tetragonal to the monoclinic crystalline phase, aka transformation toughening. There is also substantial interest in dispersion of hard, non-oxide phases such as SiC, TiB, TiC, boron, carbon and especially oxide matrices like alumina and mullite. There is also interest too incorporating other ceramic particulates, especially those of highly anisotropic thermal expansion. Examples include Al2O3, TiO2, graphite, and boron nitride. 
In processing particulate composites, the issue is not only homogeneity of the size and spatial distribution of the dispersed and matrix phases, but also control of the matrix grain size. However, there is some built-in self-control due to inhibition of matrix grain growth by the dispersed phase. Particulate composites, though generally offer increased resistance to damage, failure, or both, are still quite sensitive to inhomogeneities of composition as well as other processing defects such as pores. Thus they need good processing to be effective. Particulate composites have been made on a commercial basis by simply mixing powders of the two constituents. Although this approach is inherently limited in the homogeneity that can be achieved, it is the most readily adaptable for existing ceramic production technology. However, other approaches are of interest. From the technological standpoint, a particularly desirable approach to fabricating particulate composites is to coat the matrix or its precursor onto fine particles of the dispersed phase with good control of the starting dispersed particle size and the resultant matrix coating thickness. One should in principle be able to achieve the ultimate in homogeneity of distribution and thereby optimize composite performance. This can also have other ramifications, such as allowing more useful composite performance to be achieved in a body having porosity, which might be desired for other factors, such as limiting thermal conductivity. There are also some opportunities to utilize melt processing for fabrication of ceramic, particulate, whisker and short-fiber, and continuous-fiber composites. Clearly, both particulate and whisker composites are conceivable by solid-state precipitation after solidification of the melt. This can also be obtained in some cases by sintering, as for precipitation-toughened, partially stabilized zirconia. Similarly, it is known that one can directionally solidify ceramic eutectic mixtures and hence obtain uniaxially aligned fiber composites. Such composite processing has typically been limited to very simple shapes and thus suffers from serious economic problems due to high machining costs. Clearly, there are possibilities of using melt casting for many of these approaches. Potentially even more desirable is using melt-derived particles. In this method, quenching is done in a solid solution or in a fine eutectic structure, in which the particles are then processed by more typical ceramic powder processing methods into a useful body. There have also been preliminary attempts to use melt spraying as a means of forming composites by introducing the dispersed particulate, whisker, or fiber phase in conjunction with the melt spraying process. Other methods besides melt infiltration to manufacture ceramic composites with long fiber reinforcement are chemical vapor infiltration and the infiltration of fiber preforms with organic precursor, which after pyrolysis yield an amorphous ceramic matrix, initially with a low density. With repeated cycles of infiltration and pyrolysis one of those types of ceramic matrix composites is produced. Chemical vapor infiltration is used to manufacture carbon/carbon and silicon carbide reinforced with carbon or silicon carbide fibers. Besides many process improvements, the first of two major needs for fiber composites is lower fiber costs. 
The second major need is fiber compositions or coatings, or composite processing, to reduce degradation that results from high-temperature composite exposure under oxidizing conditions. Applications The products of technical ceramics include tiles used in the Space Shuttle program, gas burner nozzles, ballistic protection, nuclear fuel uranium oxide pellets, bio-medical implants, jet engine turbine blades, and missile nose cones. Its products are often made from materials other than clay, chosen for their particular physical properties. These may be classified as follows: Oxides: silica, alumina, zirconia Non-oxides: carbides, borides, nitrides, silicides Composites: particulate or whisker reinforced matrices, combinations of oxides and non-oxides (e.g. polymers). Ceramics can be used in many technological industries. One application is the ceramic tiles on NASA's Space Shuttle, used to protect it and the future supersonic space planes from the searing heat of re-entry into the Earth's atmosphere. They are also used widely in electronics and optics. In addition to the applications listed here, ceramics are also used as a coating in various engineering cases. An example would be a ceramic bearing coating over a titanium frame used for an aircraft. Recently the field has come to include the studies of single crystals or glass fibers, in addition to traditional polycrystalline materials, and the applications of these have been overlapping and changing rapidly. Aerospace Engines: shielding a hot running aircraft engine from damaging other components. Airframes: used as a high-stress, high-temp and lightweight bearing and structural component. Missile nose-cones: shielding the missile internals from heat. Space Shuttle tiles Space-debris ballistic shields: ceramic fiber woven shields offer better protection to hypervelocity (~7 km/s) particles than aluminum shields of equal weight. Rocket nozzles: focusing high-temperature exhaust gases from the rocket booster. Unmanned Air Vehicles: ceramic engine utilization in aeronautical applications (such as Unmanned Air Vehicles) may result in enhanced performance characteristics and less operational costs. Biomedical Artificial bone; Dentistry applications, teeth. Biodegradable splints; Reinforcing bones recovering from osteoporosis Implant material Electronics Capacitors Integrated circuit packages Transducers Insulators Optical Optical fibers, guided light wave transmission Switches Laser amplifiers Lenses Infrared heat-seeking devices Automotive Heat shield Exhaust heat management Biomaterials Silicification is quite common in the biological world and occurs in bacteria, single-celled organisms, plants, and animals (invertebrates and vertebrates). Crystalline minerals formed in such environment often show exceptional physical properties (e.g. strength, hardness, fracture toughness) and tend to form hierarchical structures that exhibit microstructural order over a range of length or spatial scales. The minerals are crystallized from an environment that is undersaturated with respect to silicon, and under conditions of neutral pH and low temperature (0–40 °C). Formation of the mineral may occur either within or outside of the cell wall of an organism, and specific biochemical reactions for mineral deposition exist that include lipids, proteins and carbohydrates. Most natural (or biological) materials are complex composites whose mechanical properties are often outstanding, considering the weak constituents from which they are assembled. 
These complex structures, which have risen from hundreds of million years of evolution, are inspiring the design of novel materials with exceptional physical properties for high performance in adverse conditions. Their defining characteristics such as hierarchy, multifunctionality, and the capacity for self-healing, are currently being investigated. The basic building blocks begin with the 20 amino acids and proceed to polypeptides, polysaccharides, and polypeptides–saccharides. These, in turn, compose the basic proteins, which are the primary constituents of the 'soft tissues' common to most biominerals. With well over 1000 proteins possible, current research emphasizes the use of collagen, chitin, keratin, and elastin. The 'hard' phases are often strengthened by crystalline minerals, which nucleate and grow in a bio-mediated environment that determines the size, shape and distribution of individual crystals. The most important mineral phases have been identified as hydroxyapatite, silica, and aragonite. Using the classification of Wegst and Ashby, the principal mechanical characteristics and structures of biological ceramics, polymer composites, elastomers, and cellular materials have been presented. Selected systems in each class are being investigated with emphasis on the relationship between their microstructure over a range of length scales and their mechanical response. Thus, the crystallization of inorganic materials in nature generally occurs at ambient temperature and pressure. Yet the vital organisms through which these minerals form are capable of consistently producing extremely precise and complex structures. Understanding the processes in which living organisms control the growth of crystalline minerals such as silica could lead to significant advances in the field of materials science, and open the door to novel synthesis techniques for nanoscale composite materials, or nanocomposites. High-resolution scanning electron microscope (SEM) observations were performed of the microstructure of the mother-of-pearl (or nacre) portion of the abalone shell. Those shells exhibit the highest mechanical strength and fracture toughness of any non-metallic substance known. The nacre from the shell of the abalone has become one of the more intensively studied biological structures in materials science. Clearly visible in these images are the neatly stacked (or ordered) mineral tiles separated by thin organic sheets along with a macrostructure of larger periodic growth bands which collectively form what scientists are currently referring to as a hierarchical composite structure. (The term hierarchy simply implies that there are a range of structural features which exist over a wide range of length scales). Future developments reside in the synthesis of bio-inspired materials through processing methods and strategies that are characteristic of biological systems. These involve nanoscale self-assembly of the components and the development of hierarchical structures. See also References External links The American Ceramic Society Ceramic Tile Institute of America Materials science Ceramic materials Engineering disciplines Industrial processes
Ceramic engineering
[ "Physics", "Materials_science", "Engineering" ]
7,976
[ "Applied and interdisciplinary physics", "Materials science", "Ceramic materials", "nan", "Ceramic engineering" ]
4,481,005
https://en.wikipedia.org/wiki/Cole%E2%80%93Cole%20equation
The Cole–Cole equation is a relaxation model that is often used to describe dielectric relaxation in polymers. It is given by the equation

ε*(ω) = ε∞ + (εs − ε∞) / (1 + (iωτ)^(1−α))

where ε* is the complex dielectric constant, εs and ε∞ are the "static" and "infinite frequency" dielectric constants, ω is the angular frequency and τ is a dielectric relaxation time constant. The exponent parameter α, which takes a value between 0 and 1, allows the description of different spectral shapes. When α = 0, the Cole-Cole model reduces to the Debye model. When α > 0, the relaxation is stretched. That is, it extends over a wider range on a logarithmic scale than Debye relaxation. The separation of the complex dielectric constant into its real part ε′ and imaginary part ε″ was reported in the original paper by Kenneth Stewart Cole and Robert Hugh Cole as follows:

ε′ = ε∞ + (εs − ε∞) [1 + (ωτ)^(1−α) sin(απ/2)] / [1 + 2(ωτ)^(1−α) sin(απ/2) + (ωτ)^(2(1−α))]

ε″ = (εs − ε∞) (ωτ)^(1−α) cos(απ/2) / [1 + 2(ωτ)^(1−α) sin(απ/2) + (ωτ)^(2(1−α))]

Upon introduction of hyperbolic functions, the above expressions reduce to:

ε′ = ε∞ + (εs − ε∞)/2 [1 − sinh((1−α)x) / (cosh((1−α)x) + sin(απ/2))]

ε″ = (εs − ε∞)/2 · cos(απ/2) / (cosh((1−α)x) + sin(απ/2))

Here x = ln(ωτ). These equations reduce to the Debye expression when α = 0. The Cole-Cole equation's time domain current response corresponds to the Curie–von Schweidler law and the charge response corresponds to the stretched exponential function or the Kohlrausch–Williams–Watts (KWW) function, for small time arguments. Cole–Cole relaxation constitutes a special case of Havriliak–Negami relaxation when the symmetry parameter β is equal to 1, that is, when the relaxation peaks are symmetric. Another special case of Havriliak–Negami relaxation, with β < 1 and no symmetric broadening (α = 0 in the notation used here), is known as Cole–Davidson relaxation. For an abridged and updated review of anomalous dielectric relaxation in disordered systems, see Kalmykov. See also Debye relaxation Cole–Davidson relaxation Havriliak–Negami relaxation Curie–von Schweidler law References Further reading Electric and magnetic fields in matter
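A minimal numerical sketch of the Cole–Cole expression defined above follows; the parameter values (εs, ε∞, τ, α) are arbitrary illustrative choices, not data for any particular polymer.

```python
import numpy as np

def cole_cole(omega, eps_s, eps_inf, tau, alpha):
    """Complex permittivity from the Cole-Cole model; alpha = 0 recovers Debye."""
    return eps_inf + (eps_s - eps_inf) / (1.0 + (1j * omega * tau) ** (1.0 - alpha))

# Illustrative (assumed) parameters
eps_s, eps_inf, tau, alpha = 10.0, 2.5, 1e-6, 0.3

omega = np.logspace(2, 10, 5)   # angular frequencies, rad/s
eps = cole_cole(omega, eps_s, eps_inf, tau, alpha)

# Convention eps* = eps' - i*eps'', so the loss part is minus the imaginary part
for w, e in zip(omega, eps):
    print(f"omega = {w:9.2e}  eps' = {e.real:6.3f}  eps'' = {-e.imag:6.3f}")
```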
Cole–Cole equation
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
359
[ "Condensed matter physics", "Electric and magnetic fields in matter", "Materials science" ]
4,481,194
https://en.wikipedia.org/wiki/Response%20surface%20methodology
In statistics, response surface methodology (RSM) explores the relationships between several explanatory variables and one or more response variables. RSM is an empirical modelling approach that employs mathematical and statistical techniques to relate input variables, otherwise known as factors, to the response. RSM became very useful because alternative methods, such as purely theoretical models, could be cumbersome to use, time-consuming, inefficient, error-prone, and unreliable. The method was introduced by George E. P. Box and K. B. Wilson in 1951. The main idea of RSM is to use a sequence of designed experiments to obtain an optimal response. Box and Wilson suggest using a second-degree polynomial model to do this. They acknowledge that this model is only an approximation, but they use it because such a model is easy to estimate and apply, even when little is known about the process. Statistical approaches such as RSM can be employed to maximize the production of a particular substance by optimizing operational factors. More recently, RSM, combined with a proper design of experiments (DoE), has become extensively used for formulation optimization. In contrast to conventional methods, the interaction among process variables can be determined by statistical techniques. Basic approach of response surface methodology An easy way to estimate a first-degree polynomial model is to use a factorial experiment or a fractional factorial design. This is sufficient to determine which explanatory variables affect the response variable(s) of interest. Once it is suspected that only significant explanatory variables are left, then a more complicated design, such as a central composite design, can be implemented to estimate a second-degree polynomial model, which is still only an approximation at best. However, the second-degree model can be used to optimize (maximize, minimize, or attain a specific target for) the response variable(s) of interest. Important RSM properties and features Orthogonality: The property that allows individual effects of the k factors to be estimated independently without (or with minimal) confounding. Orthogonality also provides minimum-variance estimates of the model coefficients so that they are uncorrelated. Rotatability: The property of rotating points of the design about the center of the factor space. The moments of the distribution of the design points are constant. Uniformity: A third property of CCD designs, used to control the number of center points, is uniform precision (or uniformity). Special geometries Cube Cubic designs are discussed by Kiefer, by Atkinson, Donev, and Tobias and by Hardin and Sloane. Sphere Spherical designs are discussed by Kiefer and by Hardin and Sloane. Simplex geometry and mixture experiments Mixture experiments are discussed in many books on the design of experiments, and in the response-surface methodology textbooks of Box and Draper and of Atkinson, Donev and Tobias. An extensive discussion and survey appears in the advanced textbook by John Cornell. Extensions Multiple objective functions Some extensions of response surface methodology deal with the multiple response problem. Multiple response variables create difficulty because what is optimal for one response may not be optimal for other responses. Other extensions are used to reduce variability in a single response while targeting a specific value, or attaining a near maximum or minimum while preventing variability in that response from getting too large. 
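As a sketch of the basic sequential approach described above, the following Python example fits a full second-degree polynomial response-surface model to a two-factor central composite design using ordinary least squares. The response values are synthetic, generated here purely for illustration, and the coefficient names are arbitrary labels.

```python
import numpy as np

# Two-factor central composite design (coded units): factorial, axial, and center points
X = np.array([
    [-1, -1], [1, -1], [-1, 1], [1, 1],                 # factorial points
    [-1.414, 0], [1.414, 0], [0, -1.414], [0, 1.414],   # axial points
    [0, 0], [0, 0], [0, 0],                             # center points
])
x1, x2 = X[:, 0], X[:, 1]

# Synthetic response (illustrative only): a noisy quadratic surface
rng = np.random.default_rng(0)
y = 60 + 4*x1 + 6*x2 - 3*x1**2 - 2*x2**2 + 1.5*x1*x2 + rng.normal(0, 0.5, x1.size)

# Design matrix for the full second-degree model and least-squares fit
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

for name, c in zip(["b0", "b1", "b2", "b11", "b22", "b12"], coeffs):
    print(f"{name}: {c:6.2f}")
# The fitted quadratic can then be optimized (for example by setting its gradient
# to zero) to locate a candidate optimum of the response within the design region.
```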
Practical concerns Response surface methodology uses statistical models, and therefore practitioners need to be aware that even the best statistical model is an approximation to reality. In practice, both the models and the parameter values are unknown, and subject to uncertainty on top of ignorance. Of course, an estimated optimum point need not be optimum in reality, because of the errors of the estimates and of the inadequacies of the model. Nonetheless, response surface methodology has an effective track-record of helping researchers improve products and services: For example, Box's original response-surface modeling enabled chemical engineers to improve a process that had been stuck at a saddle-point for years. The engineers had not been able to afford to fit a cubic three-level design to estimate a quadratic model, and their biased linear-models estimated the gradient to be zero. Box's design reduced the costs of experimentation so that a quadratic model could be fit, which led to a (long-sought) ascent direction. See also Box–Behnken design Central composite design Gradient-enhanced kriging (GEK) IOSO method based on response-surface methodology Optimal designs Plackett–Burman design Polynomial and rational function modeling Polynomial regression Probabilistic design Surrogate model Bayesian Optimization References Box, G. E. P. and Draper, Norman. 2007. Response Surfaces, Mixtures, and Ridge Analyses, Second Edition [of Empirical Model-Building and Response Surfaces, 1987], Wiley. Historical Reprinted in paragraphs 139–157, and in External links Response surface designs Sequential experiments Design of experiments Optimal decisions Mathematical optimization Industrial engineering Systems engineering Statistical process control
Response surface methodology
[ "Mathematics", "Engineering" ]
1,003
[ "Systems engineering", "Mathematical analysis", "Statistical process control", "Industrial engineering", "Engineering statistics", "Mathematical optimization" ]
4,481,904
https://en.wikipedia.org/wiki/Van%20%27t%20Hoff%20equation
The Van 't Hoff equation relates the change in the equilibrium constant, Keq, of a chemical reaction to the change in temperature, T, given the standard enthalpy change, ΔrH°, for the process. The subscript r means "reaction" and the superscript ° means "standard". It was proposed by Dutch chemist Jacobus Henricus van 't Hoff in 1884 in his book Études de Dynamique chimique (Studies in Dynamic Chemistry). The Van 't Hoff equation has been widely utilized to explore the changes in state functions in a thermodynamic system. The Van 't Hoff plot, which is derived from this equation, is especially effective in estimating the change in enthalpy and entropy of a chemical reaction. Equation Summary and uses The standard pressure, P°, is used to define the reference state for the Van 't Hoff equation, which is

d(ln Keq)/dT = ΔrH°/(RT²)

where ln denotes the natural logarithm, Keq is the thermodynamic equilibrium constant, and R is the ideal gas constant. This equation is exact at any one temperature and all pressures, derived from the requirement that the Gibbs free energy of reaction be stationary in a state of chemical equilibrium. In practice, the equation is often integrated between two temperatures under the assumption that the standard reaction enthalpy ΔrH° is constant (and furthermore, this is also often assumed to be equal to its value at standard temperature). Since in reality ΔrH° and the standard reaction entropy ΔrS° do vary with temperature for most processes, the integrated equation is only approximate. Approximations are also made in practice to the activity coefficients within the equilibrium constant. A major use of the integrated equation is to estimate a new equilibrium constant at a new absolute temperature assuming a constant standard enthalpy change over the temperature range. To obtain the integrated equation, it is convenient to first rewrite the Van 't Hoff equation as

d(ln Keq)/d(1/T) = −ΔrH°/R

The definite integral between temperatures T1 and T2 is then

ln(K2/K1) = −(ΔrH°/R) (1/T2 − 1/T1)

In this equation K1 is the equilibrium constant at absolute temperature T1, and K2 is the equilibrium constant at absolute temperature T2. Development from thermodynamics Combining the well-known formula for the Gibbs free energy of reaction

ΔrG° = ΔrH° − T ΔrS°

where ΔrS° is the entropy change of the system, with the Gibbs free energy isotherm equation

ΔrG° = −RT ln Keq

we obtain

ln Keq = −ΔrH°/(RT) + ΔrS°/R

Differentiation of this expression with respect to the variable T, while assuming that both ΔrH° and ΔrS° are independent of T, yields the Van 't Hoff equation. These assumptions are expected to break down somewhat for large temperature variations. Provided that ΔrH° and ΔrS° are constant, the preceding equation gives ln Keq as a linear function of 1/T and hence is known as the linear form of the Van 't Hoff equation. Therefore, when the range in temperature is small enough that the standard reaction enthalpy and reaction entropy are essentially constant, a plot of the natural logarithm of the equilibrium constant versus the reciprocal temperature gives a straight line. The slope of the line may be multiplied by the gas constant R (with a change of sign) to obtain the standard enthalpy change of the reaction, and the intercept may be multiplied by R to obtain the standard entropy change. Van 't Hoff isotherm The Van 't Hoff isotherm can be used to determine the temperature dependence of the Gibbs free energy of reaction for non-standard state reactions at a constant temperature:

ΔrG = ΔrG° + RT ln Qr

where ΔrG is the Gibbs free energy of reaction under non-standard states at temperature T, ΔrG° is the Gibbs free energy for the reaction at standard state, ξ is the extent of reaction, and Qr is the thermodynamic reaction quotient. 
The temperature dependence of both terms can be described by Van 't Hoff equations as a function of T. This finds applications in the field of electrochemistry, particularly in the study of the temperature dependence of voltaic cells. The isotherm can also be used at fixed temperature to describe the Law of Mass Action. When a reaction is at equilibrium, Qr = Keq and ΔrG = 0. Otherwise, the Van 't Hoff isotherm predicts the direction that the system must shift in order to achieve equilibrium; when Qr < Keq, the reaction moves in the forward direction, whereas when Qr > Keq, the reaction moves in the backwards direction. See Chemical equilibrium. Van 't Hoff plot For a reversible reaction, the equilibrium constant can be measured at a variety of temperatures. This data can be plotted on a graph with ln Keq on the y-axis and 1/T on the x-axis. The data should have a linear relationship, the equation for which can be found by fitting the data using the linear form of the Van 't Hoff equation

ln Keq = −ΔrH°/(RT) + ΔrS°/R

This graph is called the "Van 't Hoff plot" and is widely used to estimate the enthalpy and entropy of a chemical reaction. From this plot, −ΔrH°/R is the slope, and ΔrS°/R is the intercept of the linear fit. By measuring the equilibrium constant, Keq, at different temperatures, the Van 't Hoff plot can be used to assess a reaction when temperature changes. Knowing the slope and intercept from the Van 't Hoff plot, the enthalpy and entropy of a reaction can be easily obtained using

ΔrH° = −R × slope, ΔrS° = R × intercept

The Van 't Hoff plot can be used to quickly determine the enthalpy of a chemical reaction both qualitatively and quantitatively. This change in enthalpy can be positive or negative, leading to two major forms of the Van 't Hoff plot. Endothermic reactions For an endothermic reaction, heat is absorbed, making the net enthalpy change positive. Thus, according to the definition of the slope, the slope equals −ΔrH°/R. When the reaction is endothermic, ΔrH° > 0 (and the gas constant R > 0), so the slope is negative. Thus, for an endothermic reaction, the Van 't Hoff plot should always have a negative slope. Exothermic reactions For an exothermic reaction, heat is released, making the net enthalpy change negative. Thus, according to the definition of the slope, for an exothermic reaction ΔrH° < 0, so the slope is positive. Thus, for an exothermic reaction, the Van 't Hoff plot should always have a positive slope. Error propagation At first glance, using the fact that

ΔrH° = −R ln(K2/K1) / (1/T2 − 1/T1)

it would appear that two measurements of Keq would suffice to be able to obtain an accurate value of ΔrH°, where K1 and K2 are the equilibrium constant values obtained at temperatures T1 and T2 respectively. However, the precision of ΔrH° values obtained in this way is highly dependent on the precision of the measured equilibrium constant values. The use of error propagation shows that the error in ΔrH° will be about 76 kJ/mol times the experimental uncertainty in ln Keq, or about 110 kJ/mol times the uncertainty in the measured values. Similar considerations apply to the entropy of reaction obtained from the intercept. Notably, when equilibrium constants are measured at three or more temperatures, values of ΔrH° and ΔrS° are often obtained by straight-line fitting. The expectation is that the error will be reduced by this procedure, although the assumption that the enthalpy and entropy of reaction are constant may or may not prove to be correct. If there is significant temperature dependence in either or both quantities, it should manifest itself in nonlinear behavior in the Van 't Hoff plot; however, more than three data points would presumably be needed in order to observe this. 
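A minimal sketch of constructing a Van 't Hoff plot and extracting ΔrH° and ΔrS° from the slope and intercept of the straight-line fit, as described above. The equilibrium constants here are synthetic values generated from assumed parameters purely to illustrate the procedure; they are not experimental data.

```python
import numpy as np

R = 8.314  # J/(mol K), gas constant

# Synthetic "measurements": ln K at several temperatures, generated from
# assumed dH = -40 kJ/mol and dS = -100 J/(mol K) with a little noise.
T = np.array([280.0, 300.0, 320.0, 340.0, 360.0])   # K
dH_true, dS_true = -40e3, -100.0
rng = np.random.default_rng(1)
lnK = -dH_true / (R * T) + dS_true / R + rng.normal(0, 0.02, T.size)

# Van 't Hoff plot: ln K versus 1/T, fitted with a straight line
slope, intercept = np.polyfit(1.0 / T, lnK, 1)

dH_fit = -slope * R      # slope = -dH/R
dS_fit = intercept * R   # intercept = dS/R
print(f"dH ~= {dH_fit/1e3:.1f} kJ/mol, dS ~= {dS_fit:.1f} J/(mol K)")

# The fitted line can also be used with the integrated equation to
# estimate the equilibrium constant at a temperature outside the data set.
T_new = 380.0
print(f"predicted K at {T_new} K ~= {np.exp(slope / T_new + intercept):.3g}")
```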
Applications of the Van 't Hoff plot Van 't Hoff analysis In biological research, the Van 't Hoff plot is also called Van 't Hoff analysis. It is most effective in determining the favored product in a reaction. It may obtain results different from direct calorimetry such as differential scanning calorimetry or isothermal titration calorimetry due to various effects other than experimental error. Assume two products B and C form in a reaction: a A + d D → b B, a A + d D → c C. In this case, Keq can be defined as the ratio of B to C rather than the equilibrium constant. When Keq > 1, B is the favored product, and the data on the Van 't Hoff plot will be in the positive region. When Keq < 1, C is the favored product, and the data on the Van 't Hoff plot will be in the negative region. Using this information, a Van 't Hoff analysis can help determine the most suitable temperature for a favored product. In 2010, a Van 't Hoff analysis was used to determine whether water preferentially forms a hydrogen bond with the C-terminus or the N-terminus of the amino acid proline. The equilibrium constant for each reaction was found at a variety of temperatures, and a Van 't Hoff plot was created. This analysis showed that enthalpically, the water preferred to hydrogen bond to the C-terminus, but entropically it was more favorable to hydrogen bond with the N-terminus. Specifically, they found that C-terminus hydrogen bonding was favored by 4.2–6.4 kJ/mol. The N-terminus hydrogen bonding was favored by 31–43 J/(K mol). These data alone could not determine which site water will preferentially hydrogen-bond to, so additional experiments were used. It was determined that at lower temperatures, the enthalpically favored species, the water hydrogen-bonded to the C-terminus, was preferred. At higher temperatures, the entropically favored species, the water hydrogen-bonded to the N-terminus, was preferred. Mechanistic studies A chemical reaction may undergo different reaction mechanisms at different temperatures. In this case, a Van 't Hoff plot with two or more linear fits may be exploited. Each linear fit has a different slope and intercept, which indicates different changes in enthalpy and entropy for each distinct mechanism. The Van 't Hoff plot can be used to find the enthalpy and entropy change for each mechanism and the favored mechanism under different temperatures. In the example figure, the reaction undergoes mechanism 1 at high temperature and mechanism 2 at low temperature. Temperature dependence If the enthalpy and entropy are roughly constant as temperature varies over a certain range, then the Van 't Hoff plot is approximately linear when plotted over that range. However, in some cases the enthalpy and entropy do change dramatically with temperature. A first-order approximation is to assume that the two different reaction products have different heat capacities. Incorporating this assumption yields an additional term in the expression for the equilibrium constant as a function of temperature. A polynomial fit can then be used to analyze data that exhibit a non-constant standard enthalpy of reaction. Thus, the enthalpy and entropy of a reaction can still be determined at specific temperatures even when a temperature dependence exists. 
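A hedged sketch of the non-constant-enthalpy case discussed above: under the common assumption of a temperature-independent heat-capacity difference ΔCp° between products and reactants, integrating the Van 't Hoff equation gives a fit of the form ln K = a + b/T + c·ln T, with c = ΔCp°/R. The data below are synthetic, generated under exactly that assumption for illustration; this is not a reconstruction of any specific published analysis.

```python
import numpy as np

R = 8.314  # J/(mol K)

# Synthetic ln K data generated under a constant-dCp assumption (illustrative values)
T = np.linspace(280.0, 400.0, 7)
dH0, dS0, dCp, T0 = -30e3, -60.0, 150.0, 298.15
dH = dH0 + dCp * (T - T0)                 # Kirchhoff's law for the enthalpy
dS = dS0 + dCp * np.log(T / T0)           # corresponding entropy variation
lnK = -dH / (R * T) + dS / R

# Fit ln K = a + b/T + c*ln T, the form obtained by integrating the
# Van 't Hoff equation with a temperature-independent dCp
A = np.column_stack([np.ones_like(T), 1.0 / T, np.log(T)])
a, b, c = np.linalg.lstsq(A, lnK, rcond=None)[0]

dCp_fit = c * R
dH_at_T0 = (c * T0 - b) * R               # since b = -(dH0 - dCp*T0)/R
print(f"fitted dCp ~= {dCp_fit:.1f} J/(mol K), dH(T0) ~= {dH_at_T0/1e3:.1f} kJ/mol")
```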
Surfactant self-assembly The Van 't Hoff relation is particularly useful for the determination of the micellization enthalpy of surfactants from the temperature dependence of the critical micelle concentration (CMC): However, the relation loses its validity when the aggregation number is also temperature-dependent, and the following relation should be used instead: with and being the free energies of the surfactant in a micelle with aggregation number and respectively. This effect is particularly relevant for nonionic ethoxylated surfactants or polyoxypropylene–polyoxyethylene block copolymers (Poloxamers, Pluronics, Synperonics). The extended equation can be exploited for the extraction of aggregation numbers of self-assembled micelles from differential scanning calorimetric thermograms. See also Clausius–Clapeyron relation Van 't Hoff factor () Gibbs–Helmholtz equation Solubility equilibrium Arrhenius equation References Equilibrium chemistry Eponymous equations of physics Thermochemistry Jacobus Henricus van 't Hoff
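A minimal sketch of how the CMC-based relation described in the surfactant passage above might be applied in practice, assuming the simple form ΔHmic = −RT² d(ln xCMC)/dT that is commonly used for nonionic surfactants with a temperature-independent aggregation number; the CMC values below are invented.

import numpy as np

R = 8.314  # J/(K mol)

# Hypothetical CMC values (mole fraction) at several temperatures (K).
T = np.array([288.0, 298.0, 308.0, 318.0])
x_cmc = np.array([1.8e-5, 1.5e-5, 1.3e-5, 1.2e-5])

# Fit ln(x_cmc) as a smooth function of T, then differentiate the fit.
coeffs = np.polyfit(T, np.log(x_cmc), 2)
dlnx_dT = np.polyval(np.polyder(coeffs), T)

# Assumed relation for nonionic surfactants: dH_mic = -R * T^2 * d(ln x_cmc)/dT
dH_mic = -R * T**2 * dlnx_dT
for Ti, dHi in zip(T, dH_mic):
    print(f"T = {Ti:.0f} K: dH_mic = {dHi/1000:.1f} kJ/mol")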
Van 't Hoff equation
[ "Physics", "Chemistry" ]
2,337
[ "Thermochemistry", "Eponymous equations of physics", "Equations of physics", "Equilibrium chemistry" ]
4,482,505
https://en.wikipedia.org/wiki/Frascati%20Tokamak%20Upgrade
The Frascati Tokamak Upgrade (FTU) is a tokamak operating at Frascati, Italy. Building on the Frascati Tokamak experiment, FTU is a compact, high-magnetic-field tokamak (Btor = 8 T). It began operation in 1990 and has since achieved its operating goals of 1.6 MA at 8 T and average electron density greater than 4 per cubic meter. The poloidal section of FTU is circular, with a limiter. External links Official website Tokamaks Science and technology in Italy
Frascati Tokamak Upgrade
[ "Physics" ]
117
[ "Plasma physics stubs", "Plasma physics" ]
4,482,716
https://en.wikipedia.org/wiki/Metabolic%20waste
Metabolic wastes or excrements are substances left over from metabolic processes (such as cellular respiration) which cannot be used by the organism (they are surplus or toxic), and must therefore be excreted. This includes nitrogen compounds, water, CO2, phosphates, sulphates, etc. Animals treat these compounds as excreta. Plants have metabolic pathways which transform some of them (primarily the oxygen compounds) into useful substances. All the metabolic wastes are excreted in the form of solutes in water through the excretory organs (nephridia, Malpighian tubules, kidneys), with the exception of CO2, which is excreted together with water vapor through the lungs. The elimination of these compounds enables the chemical homeostasis of the organism. Nitrogen wastes The nitrogen compounds through which excess nitrogen is eliminated from organisms are called nitrogenous wastes or nitrogen wastes. They are ammonia, urea, uric acid, and creatinine. All of these substances are produced from protein metabolism. In many animals, the urine is the main route of excretion for such wastes; in some, it is the feces. Ammonotelism Ammonotelism is the excretion of ammonia and ammonium ions. Ammonia (NH3) forms by the oxidation of amino groups (-NH2), which are removed from proteins when they are converted into carbohydrates. It is very toxic to tissues and extremely soluble in water. Only one nitrogen atom is removed with it. A lot of water is needed for the excretion of ammonia: about 0.5 L of water is needed per 1 g of nitrogen to keep ammonia levels in the excretory fluid below the level in body fluids and so prevent toxicity. Thus, marine organisms excrete ammonia directly into the water and are called ammonotelic. Ammonotelic animals include crustaceans, platyhelminths, cnidarians, poriferans, echinoderms, and other aquatic invertebrates. Ureotelism The excretion of urea is called ureotelism. Land animals, mainly amphibians and mammals, convert ammonia into urea, a process which occurs in the liver and kidney. These animals are called ureotelic. Urea is a less toxic compound than ammonia; two nitrogen atoms are eliminated through it and less water is needed for its excretion. It requires 0.05 L of water to excrete 1 g of nitrogen, only about 10% of that required in ammonotelic organisms. Uricotelism Uricotelism is the excretion of excess nitrogen in the form of uric acid. Uricotelic animals include insects, birds and most reptiles. Though requiring more metabolic energy to make than urea, uric acid's low toxicity and low solubility in water allow it to be concentrated into a small volume of pasty white suspension in feces, compared to the liquid urine of mammals. Notably however, great apes and humans, while ureotelic, are also uricotelic to a small extent, with uric acid potentially causing problems such as kidney stones and gout, but also functioning as a blood antioxidant. Water and gases These compounds form during the catabolism of carbohydrates and lipids in condensation reactions, and in some other metabolic reactions of the amino acids. Oxygen is produced by plants and some bacteria in photosynthesis, while CO2 is a waste product of all animals and plants. Nitrogen gases are produced by denitrifying bacteria as a waste product, and decay bacteria yield ammonia, as do most invertebrates and vertebrates. Water is the only liquid waste from animals and photosynthesizing plants.
Solids Nitrates and nitrites are wastes produced by nitrifying bacteria, just as sulfur and sulfates are produced by the sulfur-reducing and sulfate-reducing bacteria. Insoluble iron waste can be produced by iron bacteria from soluble forms. Plants exude resins, fats, waxes, and complex organic chemicals, e.g., the latex from rubber trees and milkweeds. Solid waste products may be manufactured as organic pigments derived from the breakdown of pigments like hemoglobin, and inorganic salts like carbonates, bicarbonates, and phosphates, whether in ionic or in molecular form, are excreted as solids. Animals dispose of solid waste as feces. See also Ammonia poisoning Deamination References Excretion Metabolism Waste
Metabolic waste
[ "Physics", "Chemistry", "Biology" ]
961
[ "Excretion", "Materials", "Cellular processes", "Biochemistry", "Waste", "Metabolism", "Matter" ]
25,010
https://en.wikipedia.org/wiki/Proton%E2%80%93proton%20chain
The proton–proton chain, also commonly referred to as the p–p chain, is one of two known sets of nuclear fusion reactions by which stars convert hydrogen to helium. It dominates in stars with masses less than or equal to that of the Sun, whereas the CNO cycle, the other known reaction, is suggested by theoretical models to dominate in stars with masses greater than about 1.3 solar masses. In general, proton–proton fusion can occur only if the kinetic energy (temperature) of the protons is high enough to overcome their mutual electrostatic repulsion. In the Sun, deuteron-producing events are rare. Diprotons are the much more common result of proton–proton reactions within the star, and diprotons almost immediately decay back into two protons. Since the conversion of hydrogen to helium is slow, the complete conversion of the hydrogen initially in the core of the Sun is calculated to take more than ten billion years. Although sometimes called the "proton–proton chain reaction", it is not a chain reaction in the normal sense. In most nuclear reactions, a chain reaction designates a reaction that produces a product, such as neutrons given off during fission, that quickly induces another such reaction. The proton–proton chain is, like a decay chain, a series of reactions. The product of one reaction is the starting material of the next reaction. There are two main chains leading from hydrogen to helium in the Sun. One chain has five reactions, the other chain has six. History of the theory The theory that proton–proton reactions are the basic principle by which the Sun and other stars burn was advocated by Arthur Eddington in the 1920s. At the time, the temperature of the Sun was considered to be too low to overcome the Coulomb barrier. After the development of quantum mechanics, it was discovered that tunneling of the wavefunctions of the protons through the repulsive barrier allows for fusion at a lower temperature than the classical prediction. In 1939, Hans Bethe attempted to calculate the rates of various reactions in stars. Starting with two protons combining to give a deuterium nucleus and a positron, he found what we now call Branch II of the proton–proton chain. But he did not consider the reaction of two ³He nuclei (Branch I), which we now know to be important. This was part of the body of work in stellar nucleosynthesis for which Bethe won the Nobel Prize in Physics in 1967. The proton–proton chain The first step in all the branches is the fusion of two protons into a deuteron. As the protons fuse, one of them undergoes beta plus decay, converting into a neutron by emitting a positron and an electron neutrino (though a small fraction of deuterium nuclei is produced by the "pep" reaction, see below): p + p → ²H + e⁺ + νe. The positron will annihilate with an electron from the environment into two gamma rays. Including this annihilation and the energy of the neutrino, the net reaction p + p + e⁻ → ²H + νe (which is the same as the PEP reaction, see below) has a Q value (released energy) of 1.442 MeV. The relative amounts of energy going to the neutrino and to the other products are variable. This is the rate-limiting reaction and is extremely slow because it is initiated by the weak nuclear force. The average proton in the core of the Sun waits 9 billion years before it successfully fuses with another proton.
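The 1.442 MeV figure quoted above can be checked from tabulated atomic masses via E = Δm c²; using atomic (rather than nuclear) masses automatically includes the energy from the positron annihilation. The short Python check below uses rounded standard mass values.

# Mass difference between two hydrogen atoms and one deuterium atom,
# converted to energy. Atomic masses (in unified atomic mass units, u)
# include the electrons, so the positron annihilation energy is built in.
M_H1 = 1.00782503   # atomic mass of 1H, u
M_H2 = 2.01410178   # atomic mass of 2H (deuterium), u
U_TO_MEV = 931.494  # energy equivalent of 1 u, MeV

q_value = (2 * M_H1 - M_H2) * U_TO_MEV
print(f"Q = {q_value:.3f} MeV")  # ~1.442 MeV, shared among the deuteron, gammas, and neutrino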
It has not been possible to measure the cross-section of this reaction experimentally because it is so low, but it can be calculated from theory. After it is formed, the deuteron produced in the first stage can fuse with another proton to produce the stable, light isotope of helium, ³He: ²H + p → ³He + γ. This process, mediated by the strong nuclear force rather than the weak force, is extremely fast by comparison to the first step. It is estimated that, under the conditions in the Sun's core, each newly created deuterium nucleus exists for only about one second before it is converted into helium-3. In the Sun, each helium-3 nucleus produced in these reactions exists for only about 400 years before it is converted into helium-4. Once the helium-3 has been produced, there are four possible paths to generate ⁴He. In p–p I, helium-4 is produced by fusing two helium-3 nuclei; the p–p II and p–p III branches fuse ³He with pre-existing ⁴He to form beryllium-7, which undergoes further reactions to produce two helium-4 nuclei. About 99% of the energy output of the Sun comes from the various p–p chains, with the other 1% coming from the CNO cycle. According to one model of the Sun, 83.3 percent of the ⁴He produced by the various branches is produced via branch I, while branch II produces 16.68 percent and branch III 0.02 percent. Since half the neutrinos produced in branches II and III are produced in the first step (synthesis of a deuteron), only about 8.35 percent of neutrinos come from the later steps (see below), and about 91.65 percent are from deuteron synthesis. However, another solar model from around the same time gives only 7.14 percent of neutrinos from the later steps and 92.86 percent from the synthesis of deuterium nuclei. The difference is apparently due to slightly different assumptions about the composition and metallicity of the Sun. There is also the extremely rare p–p IV (Hep) branch. Other even rarer reactions may occur. The rate of these reactions is very low due to very small cross-sections, or because the number of reacting particles is so low that any reactions that might happen are statistically insignificant. The overall reaction is: 4 p → ⁴He + 2 e⁺ + 2 νe, releasing 26.73 MeV of energy, some of which is lost to the neutrinos. The p–p I branch: ³He + ³He → ⁴He + 2 p. The complete chain releases a net energy of 26.73 MeV, but 2.2 percent of this energy (0.59 MeV) is lost to the neutrinos that are produced. The p–p I branch is dominant at temperatures of 10 to 18 million kelvin. Below 10 million kelvin, the chain proceeds at a slow rate, resulting in a low production of ⁴He. The p–p II branch: ³He + ⁴He → ⁷Be + γ; ⁷Be + e⁻ → ⁷Li + νe (0.861 MeV / 0.383 MeV); ⁷Li + p → 2 ⁴He. The p–p II branch is dominant at temperatures of 18 to 25 million kelvin. Note that the energies given for the second reaction above are the energies of the neutrinos that are produced by the reaction. 90 percent of the neutrinos produced in the reaction of ⁷Be to ⁷Li carry an energy of 0.861 MeV, while the remaining 10 percent carry 0.383 MeV. The difference is whether the lithium-7 produced is in the ground state or an excited (metastable) state, respectively. The total energy released going from ⁷Be to stable ⁷Li is about 0.862 MeV, almost all of which is lost to the neutrino if the decay goes directly to the stable lithium.
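The 8.35 percent figure above follows from the branch percentages by simple counting, if one assumes (consistent with the reactions listed) that every completed chain emits two neutrinos per helium-4 produced: both in the deuteron-producing step for branch I, and one there plus one in a later step for branches II and III. A small arithmetic check:

# Fraction of solar neutrinos coming from the later (non-deuteron) steps,
# computed from the helium-4 production shares quoted above for one solar model.
branch_share = {"pp I": 83.3, "pp II": 16.68, "pp III": 0.02}  # percent of 4He produced

# Assumption: every completed chain emits two neutrinos per 4He. Branch I emits
# both in the deuteron-producing step; branches II and III emit one there and
# one later (7Be electron capture or 8B beta decay).
total_neutrinos = 2.0 * sum(branch_share.values())
later_step_neutrinos = branch_share["pp II"] + branch_share["pp III"]

print(f"later-step fraction: {100 * later_step_neutrinos / total_neutrinos:.2f} %")            # ~8.35
print(f"deuteron-step fraction: {100 * (1 - later_step_neutrinos / total_neutrinos):.2f} %")   # ~91.65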
The p–p III branch: ³He + ⁴He → ⁷Be + γ; ⁷Be + p → ⁸B + γ; ⁸B → ⁸Be + e⁺ + νe; ⁸Be → 2 ⁴He. The last three stages of this chain, plus the positron annihilation, contribute a total of 18.209 MeV, though much of this is lost to the neutrino. The p–p III chain is dominant if the temperature exceeds 25 million kelvin. The p–p III chain is not a major source of energy in the Sun, but it was very important in the solar neutrino problem because it generates very high energy neutrinos (up to 14.06 MeV). The p–p IV (Hep) branch This reaction is predicted theoretically, but it has never been observed in the Sun due to its rarity. In this reaction, helium-3 captures a proton directly to give helium-4, with an even higher possible neutrino energy (up to 18.8 MeV): ³He + p → ⁴He + e⁺ + νe. The mass–energy relationship gives the energy released by this reaction plus the ensuing annihilation, some of which is lost to the neutrino. Energy release Comparing the mass of the final helium-4 atom with the masses of the four protons reveals that 0.7 percent of the mass of the original protons has been lost. This mass has been converted into energy, in the form of kinetic energy of produced particles, gamma rays, and neutrinos released during each of the individual reactions. The total energy yield of one whole chain is 26.73 MeV. Energy released as gamma rays will interact with electrons and protons and heat the interior of the Sun. The kinetic energy of fusion products (e.g. of the two protons and the ⁴He from the p–p I reaction) also adds energy to the plasma in the Sun. This heating keeps the core of the Sun hot and prevents it from collapsing under its own weight as it would if the Sun were to cool down. Neutrinos do not interact significantly with matter and therefore do not heat the interior, and so do not help support the Sun against gravitational collapse. Their energy is lost: the neutrinos in the p–p I, p–p II, and p–p III chains carry away 2.0%, 4.0%, and 28.3% of the energy in those reactions, respectively. The following table calculates the amount of energy lost to neutrinos and the amount of "solar luminosity" coming from the three branches. "Luminosity" here means the amount of energy given off by the Sun as electromagnetic radiation rather than as neutrinos. The starting figures used are the ones mentioned higher in this article. The table concerns only the 99% of the power and neutrinos that come from the p–p reactions, not the 1% coming from the CNO cycle. The PEP reaction A deuteron can also be produced by the rare pep (proton–electron–proton) reaction (electron capture): p + e⁻ + p → ²H + νe. In the Sun, the frequency ratio of the pep reaction versus the p–p reaction is 1:400. However, the neutrinos released by the pep reaction are far more energetic: while neutrinos produced in the first step of the p–p reaction range in energy up to 0.42 MeV, the pep reaction produces sharp-energy-line neutrinos of 1.44 MeV. Detection of solar neutrinos from this reaction was reported by the Borexino collaboration in 2012. Both the pep and p–p reactions can be seen as two different Feynman representations of the same basic interaction, where the electron passes to the right side of the reaction as a positron. This is represented in the figure of proton–proton and electron-capture reactions in a star, available at the NDM'06 web site.
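The 0.7 percent mass loss and the 26.73 MeV total yield quoted above can be reproduced from standard atomic masses; the sketch below is a back-of-the-envelope check with rounded values.

# Rough check of the 0.7 percent mass loss and ~26.7 MeV yield quoted above,
# using tabulated atomic masses (u); atomic masses keep the electron bookkeeping consistent.
M_H1 = 1.00782503    # 1H
M_HE4 = 4.00260325   # 4He
U_TO_MEV = 931.494

delta_m = 4 * M_H1 - M_HE4
print(f"mass converted: {100 * delta_m / (4 * M_H1):.2f} %")   # ~0.71 %
print(f"energy released: {delta_m * U_TO_MEV:.2f} MeV")        # ~26.7 MeV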
See also CNO cycle Triple-alpha process References External links Nuclear fusion reactions Proton
Proton–proton chain
[ "Chemistry" ]
2,582
[ "Nuclear fusion", "Nuclear fusion reactions" ]
25,021
https://en.wikipedia.org/wiki/Pearl%20Index
The Pearl Index, also called the Pearl rate, is the most common technique used in clinical trials for reporting the effectiveness of a birth control method. It is a very approximate measure of the number of unintended pregnancies in 100 woman-years of exposure that is simple to calculate, but has a number of methodological deficiencies. The index was introduced by Raymond Pearl in 1934. It has remained popular for over eighty years, in large part because of the simplicity of the calculation. Calculation Several kinds of information are needed to calculate a Pearl Index for a particular study: the total number of months or cycles of exposure by women in the study the number of pregnancies the reason for leaving the study (pregnancy or other reason) the number of children in a single pregnancy (twins or triplets can affect the final number) There are two calculation methods for determining the Pearl Index: in the first method, the relative number of pregnancies in the study is divided by the number of months of exposure, and then multiplied by 1200 in the second method, the number of pregnancies in the study is divided by the number of menstrual cycles experienced by women in the study, and then multiplied by 1300. 1300 instead of 1200 is used on the basis that the length of the average menstrual cycle is 28 days, or 13 cycles per year Usage The Pearl Index is sometimes used as a statistical estimation of the number of unintended pregnancies in 100 woman-years of exposure (e.g. 100 women over one year of use, or 10 women over 10 years). It is also sometimes used to compare birth control methods, a lower Pearl index representing a lower chance of getting unintentionally pregnant. Usually two Pearl Indexes are published from studies of birth control methods: the actual use Pearl Index, which includes all pregnancies in a study and all months (or cycles) of exposure the perfect use or method Pearl Index, which includes only pregnancies that resulted from correct and consistent use of the method, and only includes months or cycles in which the method was correctly and consistently used Criticisms Like all measures of birth control effectiveness, the Pearl Index is a calculation based on the observations of a given sample population. Thus, studies of different populations using the same contraceptive will yield different values for the index. The culture and demographics of the population being studied, and the instruction technique used to teach the method, have significant effects on its failure rate. The Pearl Index has unique shortcomings, however. It assumes a constant failure rate over time. That is an incorrect assumption for two reasons: first, the most fertile couples will get pregnant first. Couples remaining later in the study are, on average, of lower fertility. Second, most birth control methods have better effectiveness in more experienced users. The longer a couple is in the study, the better they are at using the method. So the longer the study length, the lower the Pearl Index will be – and comparisons of Pearl Indexes from studies of different lengths cannot be accurate. The Pearl Index also provides no information on factors other than accidental pregnancy which may influence effectiveness calculations, such as: dissatisfaction with the method trying to achieve pregnancy medical side effects being lost to follow-up A common misperception is that the highest possible Pearl Index is 100 – i.e. 100% of women in the study conceive in the first year. 
However, if all the women in the study conceived in the first month, the study would yield a Pearl Index of 1200 or 1300. The Pearl Index is only accurate as a statistical estimation of per-year risk of pregnancy if the pregnancy rate in the study was very low. In 1966, two birth control statisticians advocated abandonment of the Pearl Index: See also Comparison of birth control methods Decrement table Life table Footnotes Birth control Methods of birth control Clinical research
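The two calculation methods described in the Calculation section translate directly into a short function. The sketch below is an illustrative implementation only, with invented example numbers, and is not a clinically validated tool.

def pearl_index(pregnancies, exposure, unit="months"):
    """Pearl Index: unintended pregnancies per 100 woman-years of exposure.

    exposure is the total woman-months (multiplied by 1200) or woman-cycles
    (multiplied by 1300, assuming 13 cycles of about 28 days per year).
    """
    if unit == "months":
        factor = 1200
    elif unit == "cycles":
        factor = 1300
    else:
        raise ValueError("unit must be 'months' or 'cycles'")
    return pregnancies * factor / exposure

# Invented example: 4 pregnancies over 1,920 woman-months of exposure.
print(f"Pearl Index = {pearl_index(4, 1920):.1f}")  # 2.5 pregnancies per 100 woman-years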
Pearl Index
[ "Biology" ]
787
[ "Methods of birth control", "Medical technology" ]
25,178
https://en.wikipedia.org/wiki/Applications%20of%20quantum%20mechanics
Quantum physics is a branch of modern physics in which energy and matter are described at their most fundamental level, that of energy quanta, elementary particles, and quantum fields. Quantum physics encompasses any discipline concerned with systems that exhibit notable quantum-mechanical effects, where waves have properties of particles, and particles behave like waves. Applications of quantum mechanics include explaining phenomena found in nature as well as developing technologies that rely upon quantum effects, like integrated circuits and lasers. Quantum mechanics is also critically important for understanding how individual atoms are joined by covalent bonds to form molecules. The application of quantum mechanics to chemistry is known as quantum chemistry. Quantum mechanics can also provide quantitative insight into ionic and covalent bonding processes by explicitly showing which molecules are energetically favorable to which others and the magnitudes of the energies involved. Historically, the first applications of quantum mechanics to physical systems were the algebraic determination of the hydrogen spectrum by Wolfgang Pauli and the treatment of diatomic molecules by Lucy Mensing. In many aspects modern technology operates at a scale where quantum effects are significant. Important applications of quantum theory include quantum chemistry, quantum optics, quantum computing, superconducting magnets, light-emitting diodes, the optical amplifier and the laser, the transistor and semiconductors such as the microprocessor, medical and research imaging such as magnetic resonance imaging and electron microscopy. Explanations for many biological and physical phenomena are rooted in the nature of the chemical bond, most notably the macro-molecule DNA. Electronics Many modern electronic devices are designed using quantum mechanics. Examples include lasers, electron microscopes, magnetic resonance imaging (MRI) devices and the components used in computing hardware. The study of semiconductors led to the invention of the diode and the transistor, which are indispensable parts of modern electronics systems, computer and telecommunications devices. Another application is for making laser diodes and light-emitting diodes, which are a high-efficiency source of light. The global positioning system (GPS) makes use of atomic clocks to measure precise time differences and therefore determine a user's location. Many electronic devices operate using the effect of quantum tunneling. Flash memory chips found in USB drives use quantum tunneling to erase their memory cells. Some negative differential resistance devices also utilize the quantum tunneling effect, such as resonant tunneling diodes. Unlike classical diodes, its current is carried by resonant tunneling through two or more potential barriers (see figure at right). Its negative resistance behavior can only be understood with quantum mechanics: As the confined state moves close to Fermi level, tunnel current increases. As it moves away, the current decreases. Quantum mechanics is necessary to understand and design such electronic devices. Cryptography Many scientists are currently seeking robust methods of directly manipulating quantum states. Efforts are being made to more fully develop quantum cryptography, which will theoretically allow guaranteed secure transmission of information. An inherent advantage yielded by quantum cryptography when compared to classical cryptography is the detection of passive eavesdropping. 
This is a natural result of the behavior of quantum bits; due to the observer effect, if a bit in a superposition state were to be observed, the superposition state would collapse into an eigenstate. Because the intended recipient was expecting to receive the bit in a superposition state, the intended recipient would know there was an attack, because the bit's state would no longer be in a superposition. Quantum computing Another goal is the development of quantum computers, which are expected to perform certain computational tasks exponentially faster than classical computers. Instead of using classical bits, quantum computers use qubits, which can be in superpositions of states. Quantum programmers are able to manipulate the superposition of qubits in order to solve problems that classical computing cannot do effectively, such as searching unsorted databases or integer factorization. IBM claims that the advent of quantum computing may progress the fields of medicine, logistics, financial services, artificial intelligence and cloud security. Another active research topic is quantum teleportation, which deals with techniques to transmit quantum information over arbitrary distances. Macroscale quantum effects While quantum mechanics primarily applies to the smaller atomic regimes of matter and energy, some systems exhibit quantum mechanical effects on a large scale. Superfluidity, the frictionless flow of a liquid at temperatures near absolute zero, is one well-known example. So is the closely related phenomenon of superconductivity, the frictionless flow of an electron gas in a conducting material (an electric current) at sufficiently low temperatures. The fractional quantum Hall effect is a topological ordered state which corresponds to patterns of long-range quantum entanglement. States with different topological orders (or different patterns of long range entanglements) cannot change into each other without a phase transition. Other phenomena Quantum theory also provides accurate descriptions for many previously unexplained phenomena, such as black-body radiation and the stability of the orbitals of electrons in atoms. It has also given insight into the workings of many different biological systems, including smell receptors and protein structures. Recent work on photosynthesis has provided evidence that quantum correlations play an essential role in this fundamental process of plants and many other organisms. Even so, classical physics can often provide good approximations to results otherwise obtained by quantum physics, typically in circumstances with large numbers of particles or large quantum numbers. Since classical formulas are much simpler and easier to compute than quantum formulas, classical approximations are used and preferred when the system is large enough to render the effects of quantum mechanics insignificant. Notes References Quantum mechanics
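The observer-effect argument made above for eavesdropping detection can be illustrated with a toy single-qubit simulation: measuring a superposed state in the computational basis returns 0 or 1 with Born-rule probabilities and leaves the qubit in the corresponding basis state, so the superposition is destroyed. This is a pedagogical sketch in numpy, not a model of any particular quantum key distribution protocol.

import numpy as np

rng = np.random.default_rng(0)

# A qubit in an equal superposition of |0> and |1>.
state = np.array([1.0, 1.0]) / np.sqrt(2)

# Born rule: measurement probabilities are the squared amplitudes.
probs = np.abs(state) ** 2

# Measuring in the computational basis collapses the superposition.
outcome = rng.choice([0, 1], p=probs)
collapsed = np.zeros(2)
collapsed[outcome] = 1.0

print("P(0), P(1) =", probs)          # [0.5, 0.5]
print("outcome:", outcome, "post-measurement state:", collapsed)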
Applications of quantum mechanics
[ "Physics" ]
1,130
[ "Applied and interdisciplinary physics", "Quantum mechanics", "Applications of quantum mechanics" ]
25,182
https://en.wikipedia.org/wiki/Quantization%20%28physics%29
Quantization (in British English quantisation) is the systematic transition procedure from a classical understanding of physical phenomena to a newer understanding known as quantum mechanics. It is a procedure for constructing quantum mechanics from classical mechanics. A generalization involving infinite degrees of freedom is field quantization, as in the "quantization of the electromagnetic field", referring to photons as field "quanta" (for instance as light quanta). This procedure is basic to theories of atomic physics, chemistry, particle physics, nuclear physics, condensed matter physics, and quantum optics. Historical overview In 1901, when Max Planck was developing the distribution function of statistical mechanics to solve the ultraviolet catastrophe problem, he realized that the properties of blackbody radiation can be explained by the assumption that the amount of energy must be in countable fundamental units, i.e. amount of energy is not continuous but discrete. That is, a minimum unit of energy exists and the following relationship holds for the frequency . Here, is called the Planck constant, which represents the amount of the quantum mechanical effect. It means a fundamental change of mathematical model of physical quantities. In 1905, Albert Einstein published a paper, "On a heuristic viewpoint concerning the emission and transformation of light", which explained the photoelectric effect on quantized electromagnetic waves. The energy quantum referred to in this paper was later called "photon".  In July 1913, Niels Bohr used quantization to describe the spectrum of a hydrogen atom in his paper "On the constitution of atoms and molecules". The preceding theories have been successful, but they are very phenomenological theories.  However, the French mathematician Henri Poincaré first gave a systematic and rigorous definition of what quantization is in his 1912 paper "Sur la théorie des quanta". The term "quantum physics" was first used in Johnston's Planck's Universe in Light of Modern Physics.  (1931). Canonical quantization Canonical quantization develops quantum mechanics from classical mechanics. One introduces a commutation relation among canonical coordinates. Technically, one converts coordinates to operators, through combinations of creation and annihilation operators. The operators act on quantum states of the theory. The lowest energy state is called the vacuum state. Quantization schemes Even within the setting of canonical quantization, there is difficulty associated to quantizing arbitrary observables on the classical phase space. This is the ordering ambiguity: classically, the position and momentum variables x and p commute, but their quantum mechanical operator counterparts do not. Various quantization schemes have been proposed to resolve this ambiguity, of which the most popular is the Weyl quantization scheme. Nevertheless, the Groenewold–van Hove theorem dictates that no perfect quantization scheme exists. Specifically, if the quantizations of x and p are taken to be the usual position and momentum operators, then no quantization scheme can perfectly reproduce the Poisson bracket relations among the classical observables. See Groenewold's theorem for one version of this result. Covariant canonical quantization There is a way to perform a canonical quantization without having to resort to the non covariant approach of foliating spacetime and choosing a Hamiltonian. 
This method is based upon a classical action, but is different from the functional integral approach. The method does not apply to all possible actions (for instance, actions with a noncausal structure or actions with gauge "flows"). It starts with the classical algebra of all (smooth) functionals over the configuration space. This algebra is quotiented over by the ideal generated by the Euler–Lagrange equations. Then, this quotient algebra is converted into a Poisson algebra by introducing a Poisson bracket derivable from the action, called the Peierls bracket. This Poisson algebra is then ℏ -deformed in the same way as in canonical quantization. In quantum field theory, there is also a way to quantize actions with gauge "flows". It involves the Batalin–Vilkovisky formalism, an extension of the BRST formalism. Deformation quantization One of the earliest attempts at a natural quantization was Weyl quantization, proposed by Hermann Weyl in 1927. Here, an attempt is made to associate a quantum-mechanical observable (a self-adjoint operator on a Hilbert space) with a real-valued function on classical phase space. The position and momentum in this phase space are mapped to the generators of the Heisenberg group, and the Hilbert space appears as a group representation of the Heisenberg group. In 1946, H. J. Groenewold considered the product of a pair of such observables and asked what the corresponding function would be on the classical phase space. This led him to discover the phase-space star-product of a pair of functions. More generally, this technique leads to deformation quantization, where the ★-product is taken to be a deformation of the algebra of functions on a symplectic manifold or Poisson manifold. However, as a natural quantization scheme (a functor), Weyl's map is not satisfactory. For example, the Weyl map of the classical angular-momentum-squared is not just the quantum angular momentum squared operator, but it further contains a constant term . (This extra term offset is pedagogically significant, since it accounts for the nonvanishing angular momentum of the ground-state Bohr orbit in the hydrogen atom, even though the standard QM ground state of the atom has vanishing .) As a mere representation change, however, Weyl's map is useful and important, as it underlies the alternate equivalent phase space formulation of conventional quantum mechanics. Geometric quantization In mathematical physics, geometric quantization is a mathematical approach to defining a quantum theory corresponding to a given classical theory. It attempts to carry out quantization, for which there is in general no exact recipe, in such a way that certain analogies between the classical theory and the quantum theory remain manifest. For example, the similarity between the Heisenberg equation in the Heisenberg picture of quantum mechanics and the Hamilton equation in classical physics should be built in. A more geometric approach to quantization, in which the classical phase space can be a general symplectic manifold, was developed in the 1970s by Bertram Kostant and Jean-Marie Souriau. The method proceeds in two stages. First, once constructs a "prequantum Hilbert space" consisting of square-integrable functions (or, more properly, sections of a line bundle) over the phase space. Here one can construct operators satisfying commutation relations corresponding exactly to the classical Poisson-bracket relations. On the other hand, this prequantum Hilbert space is too big to be physically meaningful. 
One then restricts to functions (or sections) depending on half the variables on the phase space, yielding the quantum Hilbert space. Path integral quantization A classical mechanical theory is given by an action with the permissible configurations being the ones which are extremal with respect to functional variations of the action. A quantum-mechanical description of the classical system can also be constructed from the action of the system by means of the path integral formulation. Other types Loop quantum gravity (loop quantization) Uncertainty principle (quantum statistical mechanics approach) Schwinger's quantum action principle See also First quantization Feynman path integral Light front quantization Photon polarization Quantum decoherence Quantum Hall effect Quantum number Stochastic quantization References Ali, S. T., & Engliš, M. (2005). "Quantization methods: a guide for physicists and analysts". Reviews in Mathematical Physics 17 (04), 391-490. Abraham, R. & Marsden (1985): Foundations of Mechanics, ed. Addison–Wesley, M. Peskin, D. Schroeder, An Introduction to Quantum Field Theory (Westview Press, 1995) Weinberg, Steven, The Quantum Theory of Fields (3 volumes) G. Giachetta, L. Mangiarotti, G. Sardanashvily, Geometric and Algebraic Topological Methods in Quantum Mechanics (World Scientific, 2005) Notes Physical phenomena Quantum field theory Mathematical quantization Mathematical physics
Quantization (physics)
[ "Physics", "Mathematics" ]
1,737
[ "Quantum field theory", "Physical phenomena", "Applied mathematics", "Theoretical physics", "Quantum mechanics", "Mathematical quantization", "Mathematical physics" ]
25,202
https://en.wikipedia.org/wiki/Quantum%20mechanics
Quantum mechanics is a fundamental theory that describes the behavior of nature at and below the scale of atoms. It is the foundation of all quantum physics, which includes quantum chemistry, quantum field theory, quantum technology, and quantum information science. Quantum mechanics can describe many systems that classical physics cannot. Classical physics can describe many aspects of nature at an ordinary (macroscopic and (optical) microscopic) scale, but is not sufficient for describing them at very small submicroscopic (atomic and subatomic) scales. Most theories in classical physics can be derived from quantum mechanics as an approximation, valid at large (macroscopic/microscopic) scale. Quantum systems have bound states that are quantized to discrete values of energy, momentum, angular momentum, and other quantities, in contrast to classical systems where these quantities can be measured continuously. Measurements of quantum systems show characteristics of both particles and waves (wave–particle duality), and there are limits to how accurately the value of a physical quantity can be predicted prior to its measurement, given a complete set of initial conditions (the uncertainty principle). Quantum mechanics arose gradually from theories to explain observations that could not be reconciled with classical physics, such as Max Planck's solution in 1900 to the black-body radiation problem, and the correspondence between energy and frequency in Albert Einstein's 1905 paper, which explained the photoelectric effect. These early attempts to understand microscopic phenomena, now known as the "old quantum theory", led to the full development of quantum mechanics in the mid-1920s by Niels Bohr, Erwin Schrödinger, Werner Heisenberg, Max Born, Paul Dirac and others. The modern theory is formulated in various specially developed mathematical formalisms. In one of them, a mathematical entity called the wave function provides information, in the form of probability amplitudes, about what measurements of a particle's energy, momentum, and other physical properties may yield. Overview and fundamental concepts Quantum mechanics allows the calculation of properties and behaviour of physical systems. It is typically applied to microscopic systems: molecules, atoms and sub-atomic particles. It has been demonstrated to hold for complex molecules with thousands of atoms, but its application to human beings raises philosophical problems, such as Wigner's friend, and its application to the universe as a whole remains speculative. Predictions of quantum mechanics have been verified experimentally to an extremely high degree of accuracy. For example, the refinement of quantum mechanics for the interaction of light and matter, known as quantum electrodynamics (QED), has been shown to agree with experiment to within 1 part in 1012 when predicting the magnetic properties of an electron. A fundamental feature of the theory is that it usually cannot predict with certainty what will happen, but only give probabilities. Mathematically, a probability is found by taking the square of the absolute value of a complex number, known as a probability amplitude. This is known as the Born rule, named after physicist Max Born. For example, a quantum particle like an electron can be described by a wave function, which associates to each point in space a probability amplitude. 
Applying the Born rule to these amplitudes gives a probability density function for the position that the electron will be found to have when an experiment is performed to measure it. This is the best the theory can do; it cannot say for certain where the electron will be found. The Schrödinger equation relates the collection of probability amplitudes that pertain to one moment of time to the collection of probability amplitudes that pertain to another. One consequence of the mathematical rules of quantum mechanics is a tradeoff in predictability between measurable quantities. The most famous form of this uncertainty principle says that no matter how a quantum particle is prepared or how carefully experiments upon it are arranged, it is impossible to have a precise prediction for a measurement of its position and also at the same time for a measurement of its momentum. Another consequence of the mathematical rules of quantum mechanics is the phenomenon of quantum interference, which is often illustrated with the double-slit experiment. In the basic version of this experiment, a coherent light source, such as a laser beam, illuminates a plate pierced by two parallel slits, and the light passing through the slits is observed on a screen behind the plate. The wave nature of light causes the light waves passing through the two slits to interfere, producing bright and dark bands on the screen – a result that would not be expected if light consisted of classical particles. However, the light is always found to be absorbed at the screen at discrete points, as individual particles rather than waves; the interference pattern appears via the varying density of these particle hits on the screen. Furthermore, versions of the experiment that include detectors at the slits find that each detected photon passes through one slit (as would a classical particle), and not through both slits (as would a wave). However, such experiments demonstrate that particles do not form the interference pattern if one detects which slit they pass through. This behavior is known as wave–particle duality. In addition to light, electrons, atoms, and molecules are all found to exhibit the same dual behavior when fired towards a double slit. Another non-classical phenomenon predicted by quantum mechanics is quantum tunnelling: a particle that goes up against a potential barrier can cross it, even if its kinetic energy is smaller than the maximum of the potential. In classical mechanics this particle would be trapped. Quantum tunnelling has several important consequences, enabling radioactive decay, nuclear fusion in stars, and applications such as scanning tunnelling microscopy, tunnel diode and tunnel field-effect transistor. When quantum systems interact, the result can be the creation of quantum entanglement: their properties become so intertwined that a description of the whole solely in terms of the individual parts is no longer possible. Erwin Schrödinger called entanglement "...the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought". Quantum entanglement enables quantum computing and is part of quantum communication protocols, such as quantum key distribution and superdense coding. Contrary to popular misconception, entanglement does not allow sending signals faster than light, as demonstrated by the no-communication theorem. 
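A small numerical illustration of the last point above: for a maximally entangled pair, everything one party can observe locally is described by a reduced density matrix that is featureless (proportional to the identity), which is why entanglement by itself carries no signal. The sketch below builds a Bell state with numpy and takes the partial trace; the index bookkeeping shown is one conventional choice.

import numpy as np

# Two-qubit Bell state (|00> + |11>)/sqrt(2), built in the tensor (Kronecker) product basis.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

# Density matrix of the pair, then partial trace over the second qubit.
rho = np.outer(bell, bell.conj())
rho = rho.reshape(2, 2, 2, 2)               # indices: (a, b, a', b')
rho_A = np.trace(rho, axis1=1, axis2=3)     # trace over b, b'

print(rho_A)  # 0.5 * identity: the local statistics are featureless,
              # so no signal can be read off from one side's measurements alone.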
Another possibility opened by entanglement is testing for "hidden variables", hypothetical properties more fundamental than the quantities addressed in quantum theory itself, knowledge of which would allow more exact predictions than quantum theory provides. A collection of results, most significantly Bell's theorem, has demonstrated that broad classes of such hidden-variable theories are in fact incompatible with quantum physics. According to Bell's theorem, if nature actually operates in accord with any theory of local hidden variables, then the results of a Bell test will be constrained in a particular, quantifiable way. Many Bell tests have been performed and they have shown results incompatible with the constraints imposed by local hidden variables. It is not possible to present these concepts in more than a superficial way without introducing the mathematics involved; understanding quantum mechanics requires not only manipulating complex numbers, but also linear algebra, differential equations, group theory, and other more advanced subjects. Accordingly, this article will present a mathematical formulation of quantum mechanics and survey its application to some useful and oft-studied examples. Mathematical formulation In the mathematically rigorous formulation of quantum mechanics, the state of a quantum mechanical system is a vector belonging to a (separable) complex Hilbert space . This vector is postulated to be normalized under the Hilbert space inner product, that is, it obeys , and it is well-defined up to a complex number of modulus 1 (the global phase), that is, and represent the same physical system. In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system – for example, for describing position and momentum the Hilbert space is the space of complex square-integrable functions , while the Hilbert space for the spin of a single proton is simply the space of two-dimensional complex vectors with the usual inner product. Physical quantities of interest (position, momentum, energy, spin) are represented by observables, which are Hermitian (more precisely, self-adjoint) linear operators acting on the Hilbert space. A quantum state can be an eigenvector of an observable, in which case it is called an eigenstate, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. More generally, a quantum state will be a linear combination of the eigenstates, known as a quantum superposition. When an observable is measured, the result will be one of its eigenvalues with probability given by the Born rule: in the simplest case the eigenvalue is non-degenerate and the probability is given by , where is its associated eigenvector. More generally, the eigenvalue is degenerate and the probability is given by , where is the projector onto its associated eigenspace. In the continuous case, these formulas give instead the probability density. After the measurement, if result was obtained, the quantum state is postulated to collapse to , in the non-degenerate case, or to , in the general case. The probabilistic nature of quantum mechanics thus stems from the act of measurement. This is one of the most difficult aspects of quantum systems to understand. It was the central topic in the famous Bohr–Einstein debates, in which the two scientists attempted to clarify these fundamental principles by way of thought experiments.
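The Born-rule recipe just described is easy to state concretely in finite dimensions: diagonalize the observable and take squared overlaps of the state with its eigenvectors. The sketch below uses the Pauli-X matrix purely as an example observable.

import numpy as np

# Observable: a Hermitian matrix (here the Pauli-X operator as a simple example).
X = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# A normalized state vector.
psi = np.array([0.8, 0.6 + 0.0j])

eigvals, eigvecs = np.linalg.eigh(X)   # eigenvalues and orthonormal eigenvectors

# Born rule: P(lambda_i) = |<v_i|psi>|^2 for non-degenerate eigenvalues.
probs = np.abs(eigvecs.conj().T @ psi) ** 2
for lam, p in zip(eigvals, probs):
    print(f"outcome {lam:+.0f} with probability {p:.2f}")

print("expectation value:", np.real(psi.conj() @ X @ psi))  # equals sum(lam * p)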
In the decades after the formulation of quantum mechanics, the question of what constitutes a "measurement" has been extensively studied. Newer interpretations of quantum mechanics have been formulated that do away with the concept of "wave function collapse" (see, for example, the many-worlds interpretation). The basic idea is that when a quantum system interacts with a measuring apparatus, their respective wave functions become entangled so that the original quantum system ceases to exist as an independent entity (see Measurement in quantum mechanics). Time evolution of a quantum state The time evolution of a quantum state is described by the Schrödinger equation: Here denotes the Hamiltonian, the observable corresponding to the total energy of the system, and is the reduced Planck constant. The constant is introduced so that the Hamiltonian is reduced to the classical Hamiltonian in cases where the quantum system can be approximated by a classical system; the ability to make such an approximation in certain limits is called the correspondence principle. The solution of this differential equation is given by The operator is known as the time-evolution operator, and has the crucial property that it is unitary. This time evolution is deterministic in the sense that – given an initial quantum state – it makes a definite prediction of what the quantum state will be at any later time. Some wave functions produce probability distributions that are independent of time, such as eigenstates of the Hamiltonian. Many systems that are treated dynamically in classical mechanics are described by such "static" wave functions. For example, a single electron in an unexcited atom is pictured classically as a particle moving in a circular trajectory around the atomic nucleus, whereas in quantum mechanics, it is described by a static wave function surrounding the nucleus. For example, the electron wave function for an unexcited hydrogen atom is a spherically symmetric function known as an s orbital (Fig. 1). Analytic solutions of the Schrödinger equation are known for very few relatively simple model Hamiltonians including the quantum harmonic oscillator, the particle in a box, the dihydrogen cation, and the hydrogen atom. Even the helium atom – which contains just two electrons – has defied all attempts at a fully analytic treatment, admitting no solution in closed form. However, there are techniques for finding approximate solutions. One method, called perturbation theory, uses the analytic result for a simple quantum mechanical model to create a result for a related but more complicated model by (for example) the addition of a weak potential energy. Another approximation method applies to systems for which quantum mechanics produces only small deviations from classical behavior. These deviations can then be computed based on the classical motion. Uncertainty principle One consequence of the basic quantum formalism is the uncertainty principle. In its most familiar form, this states that no preparation of a quantum particle can imply simultaneously precise predictions both for a measurement of its position and for a measurement of its momentum. Both position and momentum are observables, meaning that they are represented by Hermitian operators. The position operator and momentum operator do not commute, but rather satisfy the canonical commutation relation: Given a quantum state, the Born rule lets us compute expectation values for both and , and moreover for powers of them. 
Defining the uncertainty for an observable by a standard deviation, we have and likewise for the momentum: The uncertainty principle states that Either standard deviation can in principle be made arbitrarily small, but not both simultaneously. This inequality generalizes to arbitrary pairs of self-adjoint operators and . The commutator of these two operators is and this provides the lower bound on the product of standard deviations: Another consequence of the canonical commutation relation is that the position and momentum operators are Fourier transforms of each other, so that a description of an object according to its momentum is the Fourier transform of its description according to its position. The fact that dependence in momentum is the Fourier transform of the dependence in position means that the momentum operator is equivalent (up to an factor) to taking the derivative according to the position, since in Fourier analysis differentiation corresponds to multiplication in the dual space. This is why in quantum equations in position space, the momentum is replaced by , and in particular in the non-relativistic Schrödinger equation in position space the momentum-squared term is replaced with a Laplacian times . Composite systems and entanglement When two different quantum systems are considered together, the Hilbert space of the combined system is the tensor product of the Hilbert spaces of the two components. For example, let and be two quantum systems, with Hilbert spaces and , respectively. The Hilbert space of the composite system is then If the state for the first system is the vector and the state for the second system is , then the state of the composite system is Not all states in the joint Hilbert space can be written in this form, however, because the superposition principle implies that linear combinations of these "separable" or "product states" are also valid. For example, if and are both possible states for system , and likewise and are both possible states for system , then is a valid joint state that is not separable. States that are not separable are called entangled. If the state for a composite system is entangled, it is impossible to describe either component system or system by a state vector. One can instead define reduced density matrices that describe the statistics that can be obtained by making measurements on either component system alone. This necessarily causes a loss of information, though: knowing the reduced density matrices of the individual systems is not enough to reconstruct the state of the composite system. Just as density matrices specify the state of a subsystem of a larger system, analogously, positive operator-valued measures (POVMs) describe the effect on a subsystem of a measurement performed on a larger system. POVMs are extensively used in quantum information theory. As described above, entanglement is a key feature of models of measurement processes in which an apparatus becomes entangled with the system being measured. Systems interacting with the environment in which they reside generally become entangled with that environment, a phenomenon known as quantum decoherence. This can explain why, in practice, quantum effects are difficult to observe in systems larger than microscopic. Equivalence between formulations There are many mathematically equivalent formulations of quantum mechanics. 
One of the oldest and most common is the "transformation theory" proposed by Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics – matrix mechanics (invented by Werner Heisenberg) and wave mechanics (invented by Erwin Schrödinger). An alternative formulation of quantum mechanics is Feynman's path integral formulation, in which a quantum-mechanical amplitude is considered as a sum over all possible classical and non-classical paths between the initial and final states. This is the quantum-mechanical counterpart of the action principle in classical mechanics. Symmetries and conservation laws The Hamiltonian is known as the generator of time evolution, since it defines a unitary time-evolution operator for each value of . From this relation between and , it follows that any observable that commutes with will be conserved: its expectation value will not change over time. This statement generalizes, as mathematically, any Hermitian operator can generate a family of unitary operators parameterized by a variable . Under the evolution generated by , any observable that commutes with will be conserved. Moreover, if is conserved by evolution under , then is conserved under the evolution generated by . This implies a quantum version of the result proven by Emmy Noether in classical (Lagrangian) mechanics: for every differentiable symmetry of a Hamiltonian, there exists a corresponding conservation law. Examples Free particle The simplest example of a quantum system with a position degree of freedom is a free particle in a single spatial dimension. A free particle is one which is not subject to external influences, so that its Hamiltonian consists only of its kinetic energy: The general solution of the Schrödinger equation is given by which is a superposition of all possible plane waves , which are eigenstates of the momentum operator with momentum . The coefficients of the superposition are , which is the Fourier transform of the initial quantum state . It is not possible for the solution to be a single momentum eigenstate, or a single position eigenstate, as these are not normalizable quantum states. Instead, we can consider a Gaussian wave packet: which has Fourier transform, and therefore momentum distribution We see that as we make smaller the spread in position gets smaller, but the spread in momentum gets larger. Conversely, by making larger we make the spread in momentum smaller, but the spread in position gets larger. This illustrates the uncertainty principle. As we let the Gaussian wave packet evolve in time, we see that its center moves through space at a constant velocity (like a classical particle with no forces acting on it). However, the wave packet will also spread out as time progresses, which means that the position becomes more and more uncertain. The uncertainty in momentum, however, stays constant. Particle in a box The particle in a one-dimensional potential energy box is the most mathematically simple example where restraints lead to the quantization of energy levels. The box is defined as having zero potential energy everywhere inside a certain region, and therefore infinite potential energy everywhere outside that region. For the one-dimensional case in the direction, the time-independent Schrödinger equation may be written With the differential operator defined by the previous equation is evocative of the classic kinetic energy analogue, with state in this case having energy coincident with the kinetic energy of the particle. 
The general solutions of the Schrödinger equation for the particle in a box are or, from Euler's formula, The infinite potential walls of the box determine the values of and at and where must be zero. Thus, at , and . At , in which cannot be zero as this would conflict with the postulate that has norm 1. Therefore, since , must be an integer multiple of , This constraint on implies a constraint on the energy levels, yielding A finite potential well is the generalization of the infinite potential well problem to potential wells having finite depth. The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem as the wave function is not pinned to zero at the walls of the well. Instead, the wave function must satisfy more complicated mathematical boundary conditions as it is nonzero in regions outside the well. Another related problem is that of the rectangular potential barrier, which furnishes a model for the quantum tunneling effect that plays an important role in the performance of modern technologies such as flash memory and scanning tunneling microscopy. Harmonic oscillator As in the classical case, the potential for the quantum harmonic oscillator is given by This problem can either be treated by directly solving the Schrödinger equation, which is not trivial, or by using the more elegant "ladder method" first proposed by Paul Dirac. The eigenstates are given by where Hn are the Hermite polynomials and the corresponding energy levels are This is another example illustrating the discretization of energy for bound states. Mach–Zehnder interferometer The Mach–Zehnder interferometer (MZI) illustrates the concepts of superposition and interference with linear algebra in dimension 2, rather than differential equations. It can be seen as a simplified version of the double-slit experiment, but it is of interest in its own right, for example in the delayed choice quantum eraser, the Elitzur–Vaidman bomb tester, and in studies of quantum entanglement. We can model a photon going through the interferometer by considering that at each point it can be in a superposition of only two paths: the "lower" path which starts from the left, goes straight through both beam splitters, and ends at the top, and the "upper" path which starts from the bottom, goes straight through both beam splitters, and ends at the right. The quantum state of the photon is therefore a vector that is a superposition of the "lower" path and the "upper" path , that is, for complex . In order to respect the postulate that we require that . Both beam splitters are modelled as the unitary matrix , which means that when a photon meets the beam splitter it will either stay on the same path with a probability amplitude of , or be reflected to the other path with a probability amplitude of . The phase shifter on the upper arm is modelled as the unitary matrix , which means that if the photon is on the "upper" path it will gain a relative phase of , and it will stay unchanged if it is in the lower path. A photon that enters the interferometer from the left will then be acted upon with a beam splitter , a phase shifter , and another beam splitter , and so end up in the state and the probabilities that it will be detected at the right or at the top are given respectively by One can therefore use the Mach–Zehnder interferometer to estimate the phase shift by estimating these probabilities. 
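Because the interferometer analysis above involves only 2-by-2 matrices, it can be verified in a few lines. The sketch below is an illustration added here; the beam-splitter and phase-shifter matrices are written in one common convention, which may not match the convention of the matrices originally used in this section, so the labelling of the two output ports should be read accordingly.

```python
import numpy as np

# A 50/50 beam splitter in one common convention: transmission amplitude 1/sqrt(2),
# reflection amplitude i/sqrt(2).
B = np.array([[1, 1j],
              [1j, 1]]) / np.sqrt(2)

def output_probabilities(phi, first_splitter=B):
    """Detection probabilities at the two output ports for a photon entering on the 'lower' path."""
    P = np.diag([1, np.exp(1j * phi)])             # phase shifter acting on the 'upper' path only
    psi_out = B @ P @ first_splitter @ np.array([1, 0], dtype=complex)
    return np.abs(psi_out)**2

for phi in (0.0, np.pi / 2, np.pi):
    p = output_probabilities(phi)
    print(f"phi = {phi:.2f}: probabilities = {np.round(p, 3)}")   # sin^2(phi/2) and cos^2(phi/2)

# Removing the first beam splitter (no superposition of paths) kills the interference:
print(output_probabilities(np.pi / 3, first_splitter=np.eye(2)))  # 0.5 and 0.5, independent of phi
```

The last line anticipates the point made in the following paragraph: without the first beam splitter the photon is not in a superposition of the two paths, and the detection probabilities become one half at each port regardless of the phase.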
It is interesting to consider what would happen if the photon were definitely in either the "lower" or "upper" paths between the beam splitters. This can be accomplished by blocking one of the paths, or equivalently by removing the first beam splitter (and feeding the photon from the left or the bottom, as desired). In both cases, there will be no interference between the paths anymore, and the probabilities are given by , independently of the phase . From this we can conclude that the photon does not take one path or another after the first beam splitter, but rather that it is in a genuine quantum superposition of the two paths. Applications Quantum mechanics has had enormous success in explaining many of the features of our universe, with regard to small-scale and discrete quantities and interactions which cannot be explained by classical methods. Quantum mechanics is often the only theory that can reveal the individual behaviors of the subatomic particles that make up all forms of matter (electrons, protons, neutrons, photons, and others). Solid-state physics and materials science are dependent upon quantum mechanics. In many aspects, modern technology operates at a scale where quantum effects are significant. Important applications of quantum theory include quantum chemistry, quantum optics, quantum computing, superconducting magnets, light-emitting diodes, the optical amplifier and the laser, the transistor and semiconductors such as the microprocessor, medical and research imaging such as magnetic resonance imaging and electron microscopy. Explanations for many biological and physical phenomena are rooted in the nature of the chemical bond, most notably the macro-molecule DNA. Relation to other scientific theories Classical mechanics The rules of quantum mechanics assert that the state space of a system is a Hilbert space and that observables of the system are Hermitian operators acting on vectors in that space – although they do not tell us which Hilbert space or which operators. These can be chosen appropriately in order to obtain a quantitative description of a quantum system, a necessary step in making physical predictions. An important guide for making these choices is the correspondence principle, a heuristic which states that the predictions of quantum mechanics reduce to those of classical mechanics in the regime of large quantum numbers. One can also start from an established classical model of a particular system, and then try to guess the underlying quantum model that would give rise to the classical model in the correspondence limit. This approach is known as quantization. When quantum mechanics was originally formulated, it was applied to models whose correspondence limit was non-relativistic classical mechanics. For instance, the well-known model of the quantum harmonic oscillator uses an explicitly non-relativistic expression for the kinetic energy of the oscillator, and is thus a quantum version of the classical harmonic oscillator. Complications arise with chaotic systems, which do not have good quantum numbers, and quantum chaos studies the relationship between classical and quantum descriptions in these systems. Quantum decoherence is a mechanism through which quantum systems lose coherence, and thus become incapable of displaying many typically quantum effects: quantum superpositions become simply probabilistic mixtures, and quantum entanglement becomes simply classical correlations. 
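This loss of coherence can be made concrete in a two-qubit toy model. In the Python/NumPy sketch below, which is an illustration added here (the choice of a single "environment" qubit and of the particular states is for demonstration only), a system qubit prepared in an equal superposition becomes entangled with an environment qubit, and tracing the environment out leaves a reduced density matrix whose off-diagonal coherences have vanished.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)                  # system qubit in an equal superposition

def reduced_state_of_system(joint):
    """Partial trace over the second (environment) qubit of a pure two-qubit state."""
    rho = np.outer(joint, joint.conj()).reshape(2, 2, 2, 2)   # indices (s, e, s', e')
    return np.einsum('aebe->ab', rho)

# Before any interaction: system in |+>, environment in |0>, a separable product state.
before = np.kron(plus, ket0)

# After an interaction that records the system's basis state in the environment,
# the joint state is the entangled state (|00> + |11>) / sqrt(2).
after = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

print("reduced state before interaction:\n", reduced_state_of_system(before).round(3))
print("reduced state after interaction:\n", reduced_state_of_system(after).round(3))
```

Before the interaction the reduced matrix still has off-diagonal entries of 1/2, the signature of the superposition; afterwards it is diag(1/2, 1/2), statistically indistinguishable from a classical mixture, even though the composite system as a whole remains in a pure entangled state. This is the same partial-trace construction as the reduced density matrices introduced earlier for composite systems.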
Quantum coherence is not typically evident at macroscopic scales, though at temperatures approaching absolute zero quantum behavior may manifest macroscopically. Many macroscopic properties of a classical system are a direct consequence of the quantum behavior of its parts. For example, the stability of bulk matter (consisting of atoms and molecules which would quickly collapse under electric forces alone), the rigidity of solids, and the mechanical, thermal, chemical, optical and magnetic properties of matter are all results of the interaction of electric charges under the rules of quantum mechanics. Special relativity and electrodynamics Early attempts to merge quantum mechanics with special relativity involved the replacement of the Schrödinger equation with a covariant equation such as the Klein–Gordon equation or the Dirac equation. While these theories were successful in explaining many experimental results, they had certain unsatisfactory qualities stemming from their neglect of the relativistic creation and annihilation of particles. A fully relativistic quantum theory required the development of quantum field theory, which applies quantization to a field (rather than a fixed set of particles). The first complete quantum field theory, quantum electrodynamics, provides a fully quantum description of the electromagnetic interaction. Quantum electrodynamics is, along with general relativity, one of the most accurate physical theories ever devised. The full apparatus of quantum field theory is often unnecessary for describing electrodynamic systems. A simpler approach, one that has been used since the inception of quantum mechanics, is to treat charged particles as quantum mechanical objects being acted on by a classical electromagnetic field. For example, the elementary quantum model of the hydrogen atom describes the electric field of the hydrogen atom using a classical Coulomb potential. Likewise, in a Stern–Gerlach experiment, a charged particle is modeled as a quantum system, while the background magnetic field is described classically. This "semi-classical" approach fails if quantum fluctuations in the electromagnetic field play an important role, such as in the emission of photons by charged particles. Quantum field theories for the strong nuclear force and the weak nuclear force have also been developed. The quantum field theory of the strong nuclear force is called quantum chromodynamics, and describes the interactions of subnuclear particles such as quarks and gluons. The weak nuclear force and the electromagnetic force were unified, in their quantized forms, into a single quantum field theory (known as electroweak theory), by the physicists Abdus Salam, Sheldon Glashow and Steven Weinberg. Relation to general relativity Even though the predictions of both quantum theory and general relativity have been supported by rigorous and repeated empirical evidence, their abstract formalisms contradict each other and they have proven extremely difficult to incorporate into one consistent, cohesive model. Gravity is negligible in many areas of particle physics, so that unification between general relativity and quantum mechanics is not an urgent issue in those particular applications. However, the lack of a correct theory of quantum gravity is an important issue in physical cosmology and the search by physicists for an elegant "Theory of Everything" (TOE). Consequently, resolving the inconsistencies between both theories has been a major goal of 20th- and 21st-century physics. 
This TOE would combine not only the models of subatomic physics but also derive the four fundamental forces of nature from a single force or phenomenon. One proposal for doing so is string theory, which posits that the point-like particles of particle physics are replaced by one-dimensional objects called strings. String theory describes how these strings propagate through space and interact with each other. On distance scales larger than the string scale, a string looks just like an ordinary particle, with its mass, charge, and other properties determined by the vibrational state of the string. In string theory, one of the many vibrational states of the string corresponds to the graviton, a quantum mechanical particle that carries gravitational force. Another popular theory is loop quantum gravity (LQG), which describes quantum properties of gravity and is thus a theory of quantum spacetime. LQG is an attempt to merge and adapt standard quantum mechanics and standard general relativity. This theory describes space as an extremely fine fabric "woven" of finite loops called spin networks. The evolution of a spin network over time is called a spin foam. The characteristic length scale of a spin foam is the Planck length, approximately 1.616×10−35 m, and so lengths shorter than the Planck length are not physically meaningful in LQG. Philosophical implications Since its inception, the many counter-intuitive aspects and results of quantum mechanics have provoked strong philosophical debates and many interpretations. The arguments centre on the probabilistic nature of quantum mechanics, the difficulties with wavefunction collapse and the related measurement problem, and quantum nonlocality. Perhaps the only consensus that exists about these issues is that there is no consensus. Richard Feynman once said, "I think I can safely say that nobody understands quantum mechanics." According to Steven Weinberg, "There is now in my opinion no entirely satisfactory interpretation of quantum mechanics." The views of Niels Bohr, Werner Heisenberg and other physicists are often grouped together as the "Copenhagen interpretation". According to these views, the probabilistic nature of quantum mechanics is not a temporary feature which will eventually be replaced by a deterministic theory, but is instead a final renunciation of the classical idea of "causality". Bohr in particular emphasized that any well-defined application of the quantum mechanical formalism must always make reference to the experimental arrangement, due to the complementary nature of evidence obtained under different experimental situations. Copenhagen-type interpretations were adopted by Nobel laureates in quantum physics, including Bohr, Heisenberg, Schrödinger, Feynman, and Zeilinger as well as 21st-century researchers in quantum foundations. Albert Einstein, himself one of the founders of quantum theory, was troubled by its apparent failure to respect some cherished metaphysical principles, such as determinism and locality. Einstein's long-running exchanges with Bohr about the meaning and status of quantum mechanics are now known as the Bohr–Einstein debates. Einstein believed that underlying quantum mechanics must be a theory that explicitly forbids action at a distance. He argued that quantum mechanics was incomplete, a theory that was valid but not fundamental, analogous to how thermodynamics is valid, but the fundamental theory behind it is statistical mechanics. 
In 1935, Einstein and his collaborators Boris Podolsky and Nathan Rosen published an argument that the principle of locality implies the incompleteness of quantum mechanics, a thought experiment later termed the Einstein–Podolsky–Rosen paradox. In 1964, John Bell showed that EPR's principle of locality, together with determinism, was actually incompatible with quantum mechanics: they implied constraints on the correlations produced by distant systems, now known as Bell inequalities, that can be violated by entangled particles. Since then several experiments have been performed to obtain these correlations, with the result that they do in fact violate Bell inequalities, and thus falsify the conjunction of locality with determinism. Bohmian mechanics shows that it is possible to reformulate quantum mechanics to make it deterministic, at the price of making it explicitly nonlocal. It attributes not only a wave function to a physical system, but in addition a real position that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation; there is never a collapse of the wave function. This solves the measurement problem. Everett's many-worlds interpretation, formulated in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes. This is a consequence of removing the axiom of the collapse of the wave packet. All possible states of the measured system and the measuring apparatus, together with the observer, are present in a real physical quantum superposition. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we do not observe the multiverse as a whole, but only one parallel universe at a time. Exactly how this is supposed to work has been the subject of much debate. Several attempts have been made to make sense of this and derive the Born rule, with no consensus on whether they have been successful. Relational quantum mechanics appeared in the late 1990s as a modern derivative of Copenhagen-type ideas, and QBism was developed some years later. History Quantum mechanics was developed in the early decades of the 20th century, driven by the need to explain phenomena that, in some cases, had been observed in earlier times. Scientific inquiry into the wave nature of light began in the 17th and 18th centuries, when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations. In 1803 English polymath Thomas Young described the famous double-slit experiment. This experiment played a major role in the general acceptance of the wave theory of light. During the early 19th century, chemical research by John Dalton and Amedeo Avogadro lent weight to the atomic theory of matter, an idea that James Clerk Maxwell, Ludwig Boltzmann and others built upon to establish the kinetic theory of gases. The successes of kinetic theory gave further credence to the idea that matter is composed of atoms, yet the theory also had shortcomings that would only be resolved by the development of quantum mechanics. While the early conception of atoms from Greek philosophy had been that they were indivisible units (the word "atom" derives from the Greek for "uncuttable"), the 19th century saw the formulation of hypotheses about subatomic structure. 
One important discovery in that regard was Michael Faraday's 1838 observation of a glow caused by an electrical discharge inside a glass tube containing gas at low pressure. Julius Plücker, Johann Wilhelm Hittorf and Eugen Goldstein carried on and improved upon Faraday's work, leading to the identification of cathode rays, which J. J. Thomson found to consist of subatomic particles that would be called electrons. The black-body radiation problem was discovered by Gustav Kirchhoff in 1859. In 1900, Max Planck proposed the hypothesis that energy is radiated and absorbed in discrete "quanta" (or energy packets), yielding a calculation that precisely matched the observed patterns of black-body radiation. The word quantum derives from the Latin, meaning "how great" or "how much". According to Planck, quantities of energy could be thought of as divided into "elements" whose size (E) would be proportional to their frequency (ν): E = hν, where h is the Planck constant. Planck cautiously insisted that this was only an aspect of the processes of absorption and emission of radiation and was not the physical reality of the radiation. In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery. However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material. Niels Bohr then developed Planck's ideas about radiation into a model of the hydrogen atom that successfully predicted the spectral lines of hydrogen. Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle (later called the photon), with a discrete amount of energy that depends on its frequency. In his paper "On the Quantum Theory of Radiation", Einstein expanded on the interaction between energy and matter to explain the absorption and emission of energy by atoms. Although overshadowed at the time by his general theory of relativity, this paper articulated the mechanism underlying the stimulated emission of radiation, which became the basis of the laser. This phase is known as the old quantum theory. Never complete or self-consistent, the old quantum theory was rather a set of heuristic corrections to classical mechanics. The theory is now understood as a semi-classical approximation to modern quantum mechanics. Notable results from this period include, in addition to the work of Planck, Einstein and Bohr mentioned above, Einstein and Peter Debye's work on the specific heat of solids, Bohr and Hendrika Johanna van Leeuwen's proof that classical physics cannot account for diamagnetism, and Arnold Sommerfeld's extension of the Bohr model to include special-relativistic effects. In the mid-1920s quantum mechanics was developed to become the standard formulation for atomic physics. In 1923, the French physicist Louis de Broglie put forward his theory of matter waves by stating that particles can exhibit wave characteristics and vice versa. Building on de Broglie's approach, modern quantum mechanics was born in 1925, when the German physicists Werner Heisenberg, Max Born, and Pascual Jordan developed matrix mechanics and the Austrian physicist Erwin Schrödinger invented wave mechanics. Born introduced the probabilistic interpretation of Schrödinger's wave function in July 1926. 
Thus, the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927. By 1930, quantum mechanics had been further unified and formalized by David Hilbert, Paul Dirac and John von Neumann with greater emphasis on measurement, the statistical nature of our knowledge of reality, and philosophical speculation about the 'observer'. It has since permeated many disciplines, including quantum chemistry, quantum electronics, quantum optics, and quantum information science. It also provides a useful framework for many features of the modern periodic table of elements, and describes the behaviors of atoms during chemical bonding and the flow of electrons in computer semiconductors, and therefore plays a crucial role in many modern technologies. While quantum mechanics was constructed to describe the world of the very small, it is also needed to explain some macroscopic phenomena such as superconductors and superfluids. See also Bra–ket notation Einstein's thought experiments List of textbooks on classical and quantum mechanics Macroscopic quantum phenomena Phase-space formulation Regularization (physics) Two-state quantum system Explanatory notes References Further reading The following titles, all by working physicists, attempt to communicate quantum theory to lay people, using a minimum of technical apparatus. Chester, Marvin (1987). Primer of Quantum Mechanics. John Wiley. Richard Feynman, 1985. QED: The Strange Theory of Light and Matter, Princeton University Press. . Four elementary lectures on quantum electrodynamics and quantum field theory, yet containing many insights for the expert. Ghirardi, GianCarlo, 2004. Sneaking a Look at God's Cards, Gerald Malsbary, trans. Princeton Univ. Press. The most technical of the works cited here. Passages using algebra, trigonometry, and bra–ket notation can be passed over on a first reading. N. David Mermin, 1990, "Spooky actions at a distance: mysteries of the QT" in his Boojums All the Way Through. Cambridge University Press: 110–76. Victor Stenger, 2000. Timeless Reality: Symmetry, Simplicity, and Multiple Universes. Buffalo, NY: Prometheus Books. Chpts. 5–8. Includes cosmological and philosophical considerations. More technical: Bryce DeWitt, R. Neill Graham, eds., 1973. The Many-Worlds Interpretation of Quantum Mechanics, Princeton Series in Physics, Princeton University Press. D. Greenberger, K. Hentschel, F. Weinert, eds., 2009. Compendium of quantum physics, Concepts, experiments, history and philosophy, Springer-Verlag, Berlin, Heidelberg. Short articles on many QM topics. A standard undergraduate text. Max Jammer, 1966. The Conceptual Development of Quantum Mechanics. McGraw Hill. Hagen Kleinert, 2004. Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 3rd ed. Singapore: World Scientific. Draft of 4th edition. Online copy Gunther Ludwig, 1968. Wave Mechanics. London: Pergamon Press. George Mackey (2004). The mathematical foundations of quantum mechanics. Dover Publications. . Albert Messiah, 1966. Quantum Mechanics (Vol. I), English translation from French by G.M. Temmer. North Holland, John Wiley & Sons. Cf. chpt. IV, section III. online Considers the extent to which chemistry and the periodic system have been reduced to quantum mechanics. Veltman, Martinus J.G. (2003), Facts and Mysteries in Elementary Particle Physics. External links J. O'Connor and E. F. Robertson: A history of quantum mechanics. Introduction to Quantum Theory at Quantiki. 
Quantum Physics Made Relatively Simple: three video lectures by Hans Bethe. Course material Quantum Cook Book and PHYS 201: Fundamentals of Physics II by Ramamurti Shankar, Yale OpenCourseware. Modern Physics: With waves, thermodynamics, and optics – an online textbook. MIT OpenCourseWare: Chemistry and Physics. See 8.04, 8.05 and 8.06. Examples in Quantum Mechanics.
Quantum mechanics
[ "Physics" ]
8,871
[ "Theoretical physics", "Quantum mechanics" ]
25,211
https://en.wikipedia.org/wiki/Quantum%20chemistry
Quantum chemistry, also called molecular quantum mechanics, is a branch of physical chemistry focused on the application of quantum mechanics to chemical systems, particularly towards the quantum-mechanical calculation of electronic contributions to physical and chemical properties of molecules, materials, and solutions at the atomic level. These calculations include systematically applied approximations intended to make calculations computationally feasible while still capturing as much information about important contributions to the computed wave functions as well as to observable properties such as structures, spectra, and thermodynamic properties. Quantum chemistry is also concerned with the computation of quantum effects on molecular dynamics and chemical kinetics. Chemists rely heavily on spectroscopy through which information regarding the quantization of energy on a molecular scale can be obtained. Common methods are infra-red (IR) spectroscopy, nuclear magnetic resonance (NMR) spectroscopy, and scanning probe microscopy. Quantum chemistry may be applied to the prediction and verification of spectroscopic data as well as other experimental data. Many quantum chemistry studies are focused on the electronic ground state and excited states of individual atoms and molecules as well as the study of reaction pathways and transition states that occur during chemical reactions. Spectroscopic properties may also be predicted. Typically, such studies assume the electronic wave function is adiabatically parameterized by the nuclear positions (i.e., the Born–Oppenheimer approximation). A wide variety of approaches are used, including semi-empirical methods, density functional theory, Hartree–Fock calculations, quantum Monte Carlo methods, and coupled cluster methods. Understanding electronic structure and molecular dynamics through the development of computational solutions to the Schrödinger equation is a central goal of quantum chemistry. Progress in the field depends on overcoming several challenges, including the need to increase the accuracy of the results for small molecular systems, and to also increase the size of large molecules that can be realistically subjected to computation, which is limited by scaling considerations — the computation time increases as a power of the number of atoms. History Some view the birth of quantum chemistry as starting with the discovery of the Schrödinger equation and its application to the hydrogen atom. However, a 1927 article of Walter Heitler (1904–1981) and Fritz London is often recognized as the first milestone in the history of quantum chemistry. This was the first application of quantum mechanics to the diatomic hydrogen molecule, and thus to the phenomenon of the chemical bond. However, prior to this a critical conceptual framework was provided by Gilbert N. Lewis in his 1916 paper The Atom and the Molecule, wherein Lewis developed the first working model of valence electrons. Important contributions were also made by Yoshikatsu Sugiura and S.C. Wang. A series of articles by Linus Pauling, written throughout the 1930s, integrated the work of Heitler, London, Sugiura, Wang, Lewis, and John C. Slater on the concept of valence and its quantum-mechanical basis into a new theoretical framework. 
Many chemists were introduced to the field of quantum chemistry by Pauling's 1939 text The Nature of the Chemical Bond and the Structure of Molecules and Crystals: An Introduction to Modern Structural Chemistry, wherein he summarized this work (referred to widely now as valence bond theory) and explained quantum mechanics in a way which could be followed by chemists. The text soon became a standard text at many universities. In 1937, Hans Hellmann appears to have been the first to publish a book on quantum chemistry, in the Russian and German languages. In the years to follow, this theoretical basis slowly began to be applied to chemical structure, reactivity, and bonding. In addition to the investigators mentioned above, important progress and critical contributions were made in the early years of this field by Irving Langmuir, Robert S. Mulliken, Max Born, J. Robert Oppenheimer, Hans Hellmann, Maria Goeppert Mayer, Erich Hückel, Douglas Hartree, John Lennard-Jones, and Vladimir Fock. Electronic structure The electronic structure of an atom or molecule is the quantum state of its electrons. The first step in solving a quantum chemical problem is usually solving the Schrödinger equation (or Dirac equation in relativistic quantum chemistry) with the electronic molecular Hamiltonian, usually making use of the Born–Oppenheimer (B–O) approximation. This is called determining the electronic structure of the molecule. An exact solution for the non-relativistic Schrödinger equation can only be obtained for the hydrogen atom (though exact solutions for the bound state energies of the hydrogen molecular ion within the B-O approximation have been identified in terms of the generalized Lambert W function). Since all other atomic and molecular systems involve the motions of three or more "particles", their Schrödinger equations cannot be solved analytically and so approximate and/or computational solutions must be sought. The process of seeking computational solutions to these problems is part of the field known as computational chemistry. Valence bond theory As mentioned above, Heitler and London's method was extended by Slater and Pauling to become the valence-bond (VB) method. In this method, attention is primarily devoted to the pairwise interactions between atoms, and this method therefore correlates closely with classical chemists' drawings of bonds. It focuses on how the atomic orbitals of an atom combine to give individual chemical bonds when a molecule is formed, incorporating the two key concepts of orbital hybridization and resonance. Molecular orbital theory An alternative approach to valence bond theory was developed in 1929 by Friedrich Hund and Robert S. Mulliken, in which electrons are described by mathematical functions delocalized over an entire molecule. The Hund–Mulliken approach or molecular orbital (MO) method is less intuitive to chemists, but has turned out capable of predicting spectroscopic properties better than the VB method. This approach is the conceptual basis of the Hartree–Fock method and further post-Hartree–Fock methods. Density functional theory The Thomas–Fermi model was developed independently by Thomas and Fermi in 1927. This was the first attempt to describe many-electron systems on the basis of electronic density instead of wave functions, although it was not very successful in the treatment of entire molecules. The method did provide the basis for what is now known as density functional theory (DFT). 
Modern day DFT uses the Kohn–Sham method, where the density functional is split into four terms; the Kohn–Sham kinetic energy, an external potential, exchange and correlation energies. A large part of the focus on developing DFT is on improving the exchange and correlation terms. Though this method is less developed than post Hartree–Fock methods, its significantly lower computational requirements (scaling typically no worse than n3 with respect to n basis functions, for the pure functionals) allow it to tackle larger polyatomic molecules and even macromolecules. This computational affordability and often comparable accuracy to MP2 and CCSD(T) (post-Hartree–Fock methods) has made it one of the most popular methods in computational chemistry. Chemical dynamics A further step can consist of solving the Schrödinger equation with the total molecular Hamiltonian in order to study the motion of molecules. Direct solution of the Schrödinger equation is called quantum dynamics, whereas its solution within the semiclassical approximation is called semiclassical dynamics. Purely classical simulations of molecular motion are referred to as molecular dynamics (MD). Another approach to dynamics is a hybrid framework known as mixed quantum-classical dynamics; yet another hybrid framework uses the Feynman path integral formulation to add quantum corrections to molecular dynamics, which is called path integral molecular dynamics. Statistical approaches, using for example classical and quantum Monte Carlo methods, are also possible and are particularly useful for describing equilibrium distributions of states. Adiabatic chemical dynamics In adiabatic dynamics, interatomic interactions are represented by single scalar potentials called potential energy surfaces. This is the Born–Oppenheimer approximation introduced by Born and Oppenheimer in 1927. Pioneering applications of this in chemistry were performed by Rice and Ramsperger in 1927 and Kassel in 1928, and generalized into the RRKM theory in 1952 by Marcus who took the transition state theory developed by Eyring in 1935 into account. These methods enable simple estimates of unimolecular reaction rates from a few characteristics of the potential surface. Non-adiabatic chemical dynamics Non-adiabatic dynamics consists of taking the interaction between several coupled potential energy surfaces (corresponding to different electronic quantum states of the molecule). The coupling terms are called vibronic couplings. The pioneering work in this field was done by Stueckelberg, Landau, and Zener in the 1930s, in their work on what is now known as the Landau–Zener transition. Their formula allows the transition probability between two adiabatic potential curves in the neighborhood of an avoided crossing to be calculated. Spin-forbidden reactions are one type of non-adiabatic reactions where at least one change in spin state occurs when progressing from reactant to product. 
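As a minimal, concrete example of the electronic-structure calculations that underpin both the static and dynamical studies described above, the sketch below uses the open-source PySCF package. The package itself, the geometry, and the choice of basis set and functional are assumptions of this illustration rather than anything drawn from the text, and the exact API may differ between PySCF versions. It performs a restricted Hartree–Fock and a Kohn–Sham DFT calculation on the hydrogen molecule with the nuclei held fixed, i.e. within the Born–Oppenheimer approximation.

```python
# Illustrative sketch only; assumes PySCF is installed (pip install pyscf).
from pyscf import gto, scf, dft

# H2 near its equilibrium bond length in a minimal STO-3G basis, nuclei held fixed.
mol = gto.M(atom="H 0 0 0; H 0 0 0.74", basis="sto-3g", unit="Angstrom")

# Restricted Hartree-Fock: a mean-field wave-function method.
hf_energy = scf.RHF(mol).kernel()

# Kohn-Sham density functional theory with the PBE exchange-correlation functional.
ks = dft.RKS(mol)
ks.xc = "pbe"
dft_energy = ks.kernel()

print(f"RHF total energy: {hf_energy:.6f} Hartree")
print(f"PBE total energy: {dft_energy:.6f} Hartree")
```

Correlated wave-function methods such as MP2 or coupled cluster follow the same pattern but at steeply increasing cost with system size, which is the scaling trade-off noted in the introduction; density functional approaches are popular precisely because they keep that cost comparatively low.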
See also Atomic physics Computational chemistry Condensed matter physics Car–Parrinello molecular dynamics Electron localization function International Academy of Quantum Molecular Science Molecular modelling Physical chemistry Quantum computational chemistry List of quantum chemistry and solid-state physics software QMC@Home Quantum Aspects of Life Quantum electrochemistry Relativistic quantum chemistry Theoretical physics Spin forbidden reactions References Sources Gavroglu, Kostas; Ana Simões: Neither Physics nor Chemistry: A History of Quantum Chemistry, MIT Press, 2011, Karplus M., Porter R.N. (1971). Atoms and Molecules. An introduction for students of physical chemistry, Benjamin–Cummings Publishing Company, Considers the extent to which chemistry and especially the periodic system has been reduced to quantum mechanics. External links The Sherrill Group – Notes ChemViz Curriculum Support Resources Early ideas in the history of quantum chemistry
Quantum chemistry
[ "Physics", "Chemistry" ]
2,028
[ "Quantum chemistry", "Quantum mechanics", "Theoretical chemistry", " molecular", "Atomic", " and optical physics" ]
25,267
https://en.wikipedia.org/wiki/Quantum%20field%20theory
In theoretical physics, quantum field theory (QFT) is a theoretical framework that combines field theory and the principle of relativity with ideas behind quantum mechanics. QFT is used in particle physics to construct physical models of subatomic particles and in condensed matter physics to construct models of quasiparticles. The current standard model of particle physics is based on quantum field theory. History Quantum field theory emerged from the work of generations of theoretical physicists spanning much of the 20th century. Its development began in the 1920s with the description of interactions between light and electrons, culminating in the first quantum field theory—quantum electrodynamics. A major theoretical obstacle soon followed with the appearance and persistence of various infinities in perturbative calculations, a problem only resolved in the 1950s with the invention of the renormalization procedure. A second major barrier came with QFT's apparent inability to describe the weak and strong interactions, to the point where some theorists called for the abandonment of the field theoretic approach. The development of gauge theory and the completion of the Standard Model in the 1970s led to a renaissance of quantum field theory. Theoretical background Quantum field theory results from the combination of classical field theory, quantum mechanics, and special relativity. A brief overview of these theoretical precursors follows. The earliest successful classical field theory is one that emerged from Newton's law of universal gravitation, despite the complete absence of the concept of fields from his 1687 treatise Philosophiæ Naturalis Principia Mathematica. The force of gravity as described by Isaac Newton is an "action at a distance"—its effects on faraway objects are instantaneous, no matter the distance. In an exchange of letters with Richard Bentley, however, Newton stated that "it is inconceivable that inanimate brute matter should, without the mediation of something else which is not material, operate upon and affect other matter without mutual contact". It was not until the 18th century that mathematical physicists discovered a convenient description of gravity based on fields—a numerical quantity (a vector in the case of gravitational field) assigned to every point in space indicating the action of gravity on any particle at that point. However, this was considered merely a mathematical trick. Fields began to take on an existence of their own with the development of electromagnetism in the 19th century. Michael Faraday coined the English term "field" in 1845. He introduced fields as properties of space (even when it is devoid of matter) having physical effects. He argued against "action at a distance", and proposed that interactions between objects occur via space-filling "lines of force". This description of fields remains to this day. The theory of classical electromagnetism was completed in 1864 with Maxwell's equations, which described the relationship between the electric field, the magnetic field, electric current, and electric charge. Maxwell's equations implied the existence of electromagnetic waves, a phenomenon whereby electric and magnetic fields propagate from one spatial point to another at a finite speed, which turns out to be the speed of light. Action-at-a-distance was thus conclusively refuted. 
Despite the enormous success of classical electromagnetism, it was unable to account for the discrete lines in atomic spectra, nor for the distribution of blackbody radiation in different wavelengths. Max Planck's study of blackbody radiation marked the beginning of quantum mechanics. He treated atoms, which absorb and emit electromagnetic radiation, as tiny oscillators with the crucial property that their energies can only take on a series of discrete, rather than continuous, values. These are known as quantum harmonic oscillators. This process of restricting energies to discrete values is called quantization. Building on this idea, Albert Einstein proposed in 1905 an explanation for the photoelectric effect, that light is composed of individual packets of energy called photons (the quanta of light). This implied that the electromagnetic radiation, while being waves in the classical electromagnetic field, also exists in the form of particles. In 1913, Niels Bohr introduced the Bohr model of atomic structure, wherein electrons within atoms can only take on a series of discrete, rather than continuous, energies. This is another example of quantization. The Bohr model successfully explained the discrete nature of atomic spectral lines. In 1924, Louis de Broglie proposed the hypothesis of wave–particle duality, that microscopic particles exhibit both wave-like and particle-like properties under different circumstances. Uniting these scattered ideas, a coherent discipline, quantum mechanics, was formulated between 1925 and 1926, with important contributions from Max Planck, Louis de Broglie, Werner Heisenberg, Max Born, Erwin Schrödinger, Paul Dirac, and Wolfgang Pauli. In the same year as his paper on the photoelectric effect, Einstein published his theory of special relativity, built on Maxwell's electromagnetism. New rules, called Lorentz transformations, were given for the way time and space coordinates of an event change under changes in the observer's velocity, and the distinction between time and space was blurred. It was proposed that all physical laws must be the same for observers at different velocities, i.e. that physical laws be invariant under Lorentz transformations. Two difficulties remained. Observationally, the Schrödinger equation underlying quantum mechanics could explain the stimulated emission of radiation from atoms, where an electron emits a new photon under the action of an external electromagnetic field, but it was unable to explain spontaneous emission, where an electron spontaneously decreases in energy and emits a photon even without the action of an external electromagnetic field. Theoretically, the Schrödinger equation could not describe photons and was inconsistent with the principles of special relativity—it treats time as an ordinary number while promoting spatial coordinates to linear operators. Quantum electrodynamics Quantum field theory naturally began with the study of electromagnetic interactions, as the electromagnetic field was the only known classical field as of the 1920s. Through the works of Born, Heisenberg, and Pascual Jordan in 1925–1926, a quantum theory of the free electromagnetic field (one with no interactions with matter) was developed via canonical quantization by treating the electromagnetic field as a set of quantum harmonic oscillators. With the exclusion of interactions, however, such a theory was yet incapable of making quantitative predictions about the real world. 
In his seminal 1927 paper The quantum theory of the emission and absorption of radiation, Dirac coined the term quantum electrodynamics (QED), a theory that adds upon the terms describing the free electromagnetic field an additional interaction term between electric current density and the electromagnetic vector potential. Using first-order perturbation theory, he successfully explained the phenomenon of spontaneous emission. According to the uncertainty principle in quantum mechanics, quantum harmonic oscillators cannot remain stationary, but they have a non-zero minimum energy and must always be oscillating, even in the lowest energy state (the ground state). Therefore, even in a perfect vacuum, there remains an oscillating electromagnetic field having zero-point energy. It is this quantum fluctuation of electromagnetic fields in the vacuum that "stimulates" the spontaneous emission of radiation by electrons in atoms. Dirac's theory was hugely successful in explaining both the emission and absorption of radiation by atoms; by applying second-order perturbation theory, it was able to account for the scattering of photons, resonance fluorescence and non-relativistic Compton scattering. Nonetheless, the application of higher-order perturbation theory was plagued with problematic infinities in calculations. In 1928, Dirac wrote down a wave equation that described relativistic electrons: the Dirac equation. It had the following important consequences: the spin of an electron is 1/2; the electron g-factor is 2; it led to the correct Sommerfeld formula for the fine structure of the hydrogen atom; and it could be used to derive the Klein–Nishina formula for relativistic Compton scattering. Although the results were fruitful, the theory also apparently implied the existence of negative energy states, which would cause atoms to be unstable, since they could always decay to lower energy states by the emission of radiation. The prevailing view at the time was that the world was composed of two very different ingredients: material particles (such as electrons) and quantum fields (such as photons). Material particles were considered to be eternal, with their physical state described by the probabilities of finding each particle in any given region of space or range of velocities. On the other hand, photons were considered merely the excited states of the underlying quantized electromagnetic field, and could be freely created or destroyed. It was between 1928 and 1930 that Jordan, Eugene Wigner, Heisenberg, Pauli, and Enrico Fermi discovered that material particles could also be seen as excited states of quantum fields. Just as photons are excited states of the quantized electromagnetic field, so each type of particle had its corresponding quantum field: an electron field, a proton field, etc. Given enough energy, it would now be possible to create material particles. Building on this idea, Fermi proposed in 1932 an explanation for beta decay known as Fermi's interaction. Atomic nuclei do not contain electrons per se, but in the process of decay, an electron is created out of the surrounding electron field, analogous to the photon created from the surrounding electromagnetic field in the radiative decay of an excited atom. It was realized in 1929 by Dirac and others that negative energy states implied by the Dirac equation could be removed by assuming the existence of particles with the same mass as electrons but opposite electric charge. 
This not only ensured the stability of atoms, but it was also the first proposal of the existence of antimatter. Indeed, the evidence for positrons was discovered in 1932 by Carl David Anderson in cosmic rays. With enough energy, such as by absorbing a photon, an electron-positron pair could be created, a process called pair production; the reverse process, annihilation, could also occur with the emission of a photon. This showed that particle numbers need not be fixed during an interaction. Historically, however, positrons were at first thought of as "holes" in an infinite electron sea, rather than a new kind of particle, and this theory was referred to as the Dirac hole theory. QFT naturally incorporated antiparticles in its formalism. Infinities and renormalization Robert Oppenheimer showed in 1930 that higher-order perturbative calculations in QED always resulted in infinite quantities, such as the electron self-energy and the vacuum zero-point energy of the electron and photon fields, suggesting that the computational methods at the time could not properly deal with interactions involving photons with extremely high momenta. It was not until 20 years later that a systematic approach to remove such infinities was developed. A series of papers was published between 1934 and 1938 by Ernst Stueckelberg that established a relativistically invariant formulation of QFT. In 1947, Stueckelberg also independently developed a complete renormalization procedure. Such achievements were not understood and recognized by the theoretical community. Faced with these infinities, John Archibald Wheeler and Heisenberg proposed, in 1937 and 1943 respectively, to supplant the problematic QFT with the so-called S-matrix theory. Since the specific details of microscopic interactions are inaccessible to observations, the theory should only attempt to describe the relationships between a small number of observables (e.g. the energy of an atom) in an interaction, rather than be concerned with the microscopic minutiae of the interaction. In 1945, Richard Feynman and Wheeler daringly suggested abandoning QFT altogether and proposed action-at-a-distance as the mechanism of particle interactions. In 1947, Willis Lamb and Robert Retherford measured the minute difference in the 2S1/2 and 2P1/2 energy levels of the hydrogen atom, also called the Lamb shift. By ignoring the contribution of photons whose energy exceeds the electron mass, Hans Bethe successfully estimated the numerical value of the Lamb shift. Subsequently, Norman Myles Kroll, Lamb, James Bruce French, and Victor Weisskopf again confirmed this value using an approach in which infinities cancelled other infinities to result in finite quantities. However, this method was clumsy and unreliable and could not be generalized to other calculations. The breakthrough eventually came around 1950 when a more robust method for eliminating infinities was developed by Julian Schwinger, Richard Feynman, Freeman Dyson, and Shinichiro Tomonaga. The main idea is to replace the calculated values of mass and charge, infinite though they may be, by their finite measured values. This systematic computational procedure is known as renormalization and can be applied to arbitrary order in perturbation theory. As Tomonaga said in his Nobel lecture:Since those parts of the modified mass and charge due to field reactions [become infinite], it is impossible to calculate them by the theory. 
However, the mass and charge observed in experiments are not the original mass and charge but the mass and charge as modified by field reactions, and they are finite. On the other hand, the mass and charge appearing in the theory are… the values modified by field reactions. Since this is so, and particularly since the theory is unable to calculate the modified mass and charge, we may adopt the procedure of substituting experimental values for them phenomenologically... This procedure is called the renormalization of mass and charge… After long, laborious calculations, less skillful than Schwinger's, we obtained a result... which was in agreement with [the] Americans'. By applying the renormalization procedure, calculations were finally made to explain the electron's anomalous magnetic moment (the deviation of the electron g-factor from 2) and vacuum polarization. These results agreed with experimental measurements to a remarkable degree, thus marking the end of a "war against infinities". At the same time, Feynman introduced the path integral formulation of quantum mechanics and Feynman diagrams. The latter can be used to visually and intuitively organize and to help compute terms in the perturbative expansion. Each diagram can be interpreted as paths of particles in an interaction, with each vertex and line having a corresponding mathematical expression, and the product of these expressions gives the scattering amplitude of the interaction represented by the diagram. It was with the invention of the renormalization procedure and Feynman diagrams that QFT finally arose as a complete theoretical framework. Non-renormalizability Given the tremendous success of QED, many theorists believed, in the few years after 1949, that QFT could soon provide an understanding of all microscopic phenomena, not only the interactions between photons, electrons, and positrons. Contrary to this optimism, QFT entered yet another period of depression that lasted for almost two decades. The first obstacle was the limited applicability of the renormalization procedure. In perturbative calculations in QED, all infinite quantities could be eliminated by redefining a small (finite) number of physical quantities (namely the mass and charge of the electron). Dyson proved in 1949 that this is only possible for a small class of theories called "renormalizable theories", of which QED is an example. However, most theories, including the Fermi theory of the weak interaction, are "non-renormalizable". Any perturbative calculation in these theories beyond the first order would result in infinities that could not be removed by redefining a finite number of physical quantities. The second major problem stemmed from the limited validity of the Feynman diagram method, which is based on a series expansion in perturbation theory. In order for the series to converge and low-order calculations to be a good approximation, the coupling constant, in which the series is expanded, must be a sufficiently small number. The coupling constant in QED is the fine-structure constant , which is small enough that only the simplest, lowest order, Feynman diagrams need to be considered in realistic calculations. In contrast, the coupling constant in the strong interaction is roughly of the order of one, making complicated, higher order, Feynman diagrams just as important as simple ones. There was thus no way of deriving reliable quantitative predictions for the strong interaction using perturbative QFT methods. 
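The contrast between the two coupling strengths can be illustrated with a few lines of arithmetic. The toy comparison below is an added illustration that deliberately ignores the numerical coefficients and the growing number of diagrams at each order, so it indicates only the rough size of successive terms in the two expansions.

```python
# Order-of-magnitude toy: one extra power of the coupling per order in perturbation theory.
alpha_qed = 1 / 137.036     # fine-structure constant, the QED expansion parameter
g_strong = 1.0              # strong-interaction coupling, roughly of order one at low energies

for n in range(1, 5):
    print(f"order {n}: QED term ~ {alpha_qed**n:.1e},  strong term ~ {g_strong**n:.1e}")
```

Each additional order in QED is suppressed by roughly two further orders of magnitude, so the lowest-order diagrams dominate; for the strong interaction the terms do not shrink at all, which is why a perturbative expansion gives no reliable answers there.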
With these difficulties looming, many theorists began to turn away from QFT. Some focused on symmetry principles and conservation laws, while others picked up the old S-matrix theory of Wheeler and Heisenberg. QFT was used heuristically as guiding principles, but not as a basis for quantitative calculations. Source theory Schwinger, however, took a different route. For more than a decade he and his students had been nearly the only exponents of field theory, but in 1951 he found a way around the problem of the infinities with a new method using external sources as currents coupled to gauge fields. Motivated by the former findings, Schwinger kept pursuing this approach in order to "quantumly" generalize the classical process of coupling external forces to the configuration space parameters known as Lagrange multipliers. He summarized his source theory in 1966 then expanded the theory's applications to quantum electrodynamics in his three volume-set titled: Particles, Sources, and Fields. Developments in pion physics, in which the new viewpoint was most successfully applied, convinced him of the great advantages of mathematical simplicity and conceptual clarity that its use bestowed. In source theory there are no divergences, and no renormalization. It may be regarded as the calculational tool of field theory, but it is more general. Using source theory, Schwinger was able to calculate the anomalous magnetic moment of the electron, which he had done in 1947, but this time with no ‘distracting remarks’ about infinite quantities. Schwinger also applied source theory to his QFT theory of gravity, and was able to reproduce all four of Einstein's classic results: gravitational red shift, deflection and slowing of light by gravity, and the perihelion precession of Mercury. The neglect of source theory by the physics community was a major disappointment for Schwinger:The lack of appreciation of these facts by others was depressing, but understandable. -J. SchwingerSee "the shoes incident" between J. Schwinger and S. Weinberg. Standard model In 1954, Yang Chen-Ning and Robert Mills generalized the local symmetry of QED, leading to non-Abelian gauge theories (also known as Yang–Mills theories), which are based on more complicated local symmetry groups. In QED, (electrically) charged particles interact via the exchange of photons, while in non-Abelian gauge theory, particles carrying a new type of "charge" interact via the exchange of massless gauge bosons. Unlike photons, these gauge bosons themselves carry charge. Sheldon Glashow developed a non-Abelian gauge theory that unified the electromagnetic and weak interactions in 1960. In 1964, Abdus Salam and John Clive Ward arrived at the same theory through a different path. This theory, nevertheless, was non-renormalizable. Peter Higgs, Robert Brout, François Englert, Gerald Guralnik, Carl Hagen, and Tom Kibble proposed in their famous Physical Review Letters papers that the gauge symmetry in Yang–Mills theories could be broken by a mechanism called spontaneous symmetry breaking, through which originally massless gauge bosons could acquire mass. By combining the earlier theory of Glashow, Salam, and Ward with the idea of spontaneous symmetry breaking, Steven Weinberg wrote down in 1967 a theory describing electroweak interactions between all leptons and the effects of the Higgs boson. His theory was at first mostly ignored, until it was brought back to light in 1971 by Gerard 't Hooft's proof that non-Abelian gauge theories are renormalizable. 
The electroweak theory of Weinberg and Salam was extended from leptons to quarks in 1970 by Glashow, John Iliopoulos, and Luciano Maiani, marking its completion. Harald Fritzsch, Murray Gell-Mann, and Heinrich Leutwyler discovered in 1971 that certain phenomena involving the strong interaction could also be explained by non-Abelian gauge theory. Quantum chromodynamics (QCD) was born. In 1973, David Gross, Frank Wilczek, and Hugh David Politzer showed that non-Abelian gauge theories are "asymptotically free", meaning that under renormalization, the coupling constant of the strong interaction decreases as the interaction energy increases. (Similar discoveries had been made numerous times previously, but they had been largely ignored.) Therefore, at least in high-energy interactions, the coupling constant in QCD becomes sufficiently small to warrant a perturbative series expansion, making quantitative predictions for the strong interaction possible. These theoretical breakthroughs brought about a renaissance in QFT. The full theory, which includes the electroweak theory and chromodynamics, is referred to today as the Standard Model of elementary particles. The Standard Model successfully describes all fundamental interactions except gravity, and its many predictions have been met with remarkable experimental confirmation in subsequent decades. The Higgs boson, central to the mechanism of spontaneous symmetry breaking, was finally detected in 2012 at CERN, marking the complete verification of the existence of all constituents of the Standard Model. Other developments The 1970s saw the development of non-perturbative methods in non-Abelian gauge theories. The 't Hooft–Polyakov monopole was discovered theoretically by 't Hooft and Alexander Polyakov, flux tubes by Holger Bech Nielsen and Poul Olesen, and instantons by Polyakov and coauthors. These objects are inaccessible through perturbation theory. Supersymmetry also appeared in the same period. The first supersymmetric QFT in four dimensions was built by Yuri Golfand and Evgeny Likhtman in 1970, but their result failed to garner widespread interest due to the Iron Curtain. Supersymmetry theories only took off in the theoretical community after the work of Julius Wess and Bruno Zumino in 1973, but to date have not been widely accepted as part of the Standard Model due to lack of experimental evidence. Among the four fundamental interactions, gravity remains the only one that lacks a consistent QFT description. Various attempts at a theory of quantum gravity led to the development of string theory, itself a type of two-dimensional QFT with conformal symmetry. Joël Scherk and John Schwarz first proposed in 1974 that string theory could be the quantum theory of gravity. Condensed-matter-physics Although quantum field theory arose from the study of interactions between elementary particles, it has been successfully applied to other physical systems, particularly to many-body systems in condensed matter physics. Historically, the Higgs mechanism of spontaneous symmetry breaking was a result of Yoichiro Nambu's application of superconductor theory to elementary particles, while the concept of renormalization came out of the study of second-order phase transitions in matter. Soon after the introduction of photons, Einstein performed the quantization procedure on vibrations in a crystal, leading to the first quasiparticle—phonons. 
Lev Landau claimed that low-energy excitations in many condensed matter systems could be described in terms of interactions between a set of quasiparticles. The Feynman diagram method of QFT was naturally well suited to the analysis of various phenomena in condensed matter systems. Gauge theory is used to describe the quantization of magnetic flux in superconductors, the resistivity in the quantum Hall effect, as well as the relation between frequency and voltage in the AC Josephson effect. Principles For simplicity, natural units are used in the following sections, in which the reduced Planck constant and the speed of light are both set to one. Classical fields A classical field is a function of spatial and time coordinates. Examples include the gravitational field in Newtonian gravity and the electric field and magnetic field in classical electromagnetism. A classical field can be thought of as a numerical quantity assigned to every point in space that changes in time. Hence, it has infinitely many degrees of freedom. Many phenomena exhibiting quantum mechanical properties cannot be explained by classical fields alone. Phenomena such as the photoelectric effect are best explained by discrete particles (photons), rather than a spatially continuous field. The goal of quantum field theory is to describe various quantum mechanical phenomena using a modified concept of fields. Canonical quantization and path integrals are two common formulations of QFT. To motivate the fundamentals of QFT, an overview of classical field theory follows. The simplest classical field is a real scalar field — a real number at every point in space that changes in time. It is denoted as , where is the position vector, and is the time. Suppose the Lagrangian of the field, , is where is the Lagrangian density, is the time-derivative of the field, is the gradient operator, and is a real parameter (the "mass" of the field). Applying the Euler–Lagrange equation on the Lagrangian: we obtain the equations of motion for the field, which describe the way it varies in time and space: This is known as the Klein–Gordon equation. The Klein–Gordon equation is a wave equation, so its solutions can be expressed as a sum of normal modes (obtained via Fourier transform) as follows: where is a complex number (normalized by convention), denotes complex conjugation, and is the frequency of the normal mode: Thus each normal mode corresponding to a single can be seen as a classical harmonic oscillator with frequency . Canonical quantization The quantization procedure for the above classical field to a quantum operator field is analogous to the promotion of a classical harmonic oscillator to a quantum harmonic oscillator. The displacement of a classical harmonic oscillator is described by where is a complex number (normalized by convention), and is the oscillator's frequency. Note that is the displacement of a particle in simple harmonic motion from the equilibrium position, not to be confused with the spatial label of a quantum field. For a quantum harmonic oscillator, is promoted to a linear operator : Complex numbers and are replaced by the annihilation operator and the creation operator , respectively, where denotes Hermitian conjugation. The commutation relation between the two is The Hamiltonian of the simple harmonic oscillator can be written as The vacuum state , which is the lowest energy state, is defined by and has energy One can easily check that which implies that increases the energy of the simple harmonic oscillator by . 
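In standard notation, the expressions the preceding passage refers to take the following form; this is a sketch of the usual textbook conventions (normalizations vary between references) for a free real scalar field of mass m in natural units:

L = \int d^3x\,\mathcal{L}, \qquad \mathcal{L} = \tfrac{1}{2}\dot\phi^2 - \tfrac{1}{2}(\nabla\phi)^2 - \tfrac{1}{2}m^2\phi^2

\frac{\partial}{\partial t}\!\left(\frac{\partial\mathcal{L}}{\partial\dot\phi}\right) + \nabla\cdot\!\left(\frac{\partial\mathcal{L}}{\partial(\nabla\phi)}\right) - \frac{\partial\mathcal{L}}{\partial\phi} = 0 \;\;\Longrightarrow\;\; \ddot\phi - \nabla^2\phi + m^2\phi = 0 \quad\text{(Klein–Gordon equation)}

\phi(\mathbf{x},t) = \int\frac{d^3p}{(2\pi)^3}\,\frac{1}{\sqrt{2\omega_{\mathbf p}}}\left(a_{\mathbf p}\,e^{-i\omega_{\mathbf p}t + i\mathbf p\cdot\mathbf x} + a^*_{\mathbf p}\,e^{+i\omega_{\mathbf p}t - i\mathbf p\cdot\mathbf x}\right), \qquad \omega_{\mathbf p} = \sqrt{|\mathbf p|^2 + m^2}

For the harmonic-oscillator analogy invoked above, the corresponding standard forms are

x(t) = \frac{1}{\sqrt{2\omega}}\left(a\,e^{-i\omega t} + a^\dagger e^{+i\omega t}\right), \qquad [a, a^\dagger] = 1, \qquad H = \omega\left(a^\dagger a + \tfrac{1}{2}\right), \qquad a|0\rangle = 0, \quad E_0 = \tfrac{\omega}{2}

so that the creation operator a† raises the oscillator energy by ω, as stated at the end of the passage above.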
For example, the state is an eigenstate of energy . Any energy eigenstate state of a single harmonic oscillator can be obtained from by successively applying the creation operator : and any state of the system can be expressed as a linear combination of the states A similar procedure can be applied to the real scalar field , by promoting it to a quantum field operator , while the annihilation operator , the creation operator and the angular frequency are now for a particular : Their commutation relations are: where is the Dirac delta function. The vacuum state is defined by Any quantum state of the field can be obtained from by successively applying creation operators (or by a linear combination of such states), e.g. While the state space of a single quantum harmonic oscillator contains all the discrete energy states of one oscillating particle, the state space of a quantum field contains the discrete energy levels of an arbitrary number of particles. The latter space is known as a Fock space, which can account for the fact that particle numbers are not fixed in relativistic quantum systems. The process of quantizing an arbitrary number of particles instead of a single particle is often also called second quantization. The foregoing procedure is a direct application of non-relativistic quantum mechanics and can be used to quantize (complex) scalar fields, Dirac fields, vector fields (e.g. the electromagnetic field), and even strings. However, creation and annihilation operators are only well defined in the simplest theories that contain no interactions (so-called free theory). In the case of the real scalar field, the existence of these operators was a consequence of the decomposition of solutions of the classical equations of motion into a sum of normal modes. To perform calculations on any realistic interacting theory, perturbation theory would be necessary. The Lagrangian of any quantum field in nature would contain interaction terms in addition to the free theory terms. For example, a quartic interaction term could be introduced to the Lagrangian of the real scalar field: where is a spacetime index, , etc. The summation over the index has been omitted following the Einstein notation. If the parameter is sufficiently small, then the interacting theory described by the above Lagrangian can be considered as a small perturbation from the free theory. Path integrals The path integral formulation of QFT is concerned with the direct computation of the scattering amplitude of a certain interaction process, rather than the establishment of operators and state spaces. To calculate the probability amplitude for a system to evolve from some initial state at time to some final state at , the total time is divided into small intervals. The overall amplitude is the product of the amplitude of evolution within each interval, integrated over all intermediate states. Let be the Hamiltonian (i.e. generator of time evolution), then Taking the limit , the above product of integrals becomes the Feynman path integral: where is the Lagrangian involving and its derivatives with respect to spatial and time coordinates, obtained from the Hamiltonian via Legendre transformation. The initial and final conditions of the path integral are respectively In other words, the overall amplitude is the sum over the amplitude of every possible path between the initial and final states, where the amplitude of a path is given by the exponential in the integrand. 
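For reference, the standard forms of the quantities discussed in the passage above are sketched here (conventions, in particular the delta-function normalization and the sign of the exponent, vary between references): the field-operator commutation relations, the quartic interaction, and the path integral read

[a_{\mathbf p}, a^\dagger_{\mathbf p'}] = (2\pi)^3\,\delta^3(\mathbf p - \mathbf p'), \qquad [a_{\mathbf p}, a_{\mathbf p'}] = [a^\dagger_{\mathbf p}, a^\dagger_{\mathbf p'}] = 0, \qquad a_{\mathbf p}|0\rangle = 0

\mathcal{L} = \tfrac{1}{2}\,\partial_\mu\phi\,\partial^\mu\phi - \tfrac{1}{2}m^2\phi^2 - \frac{\lambda}{4!}\,\phi^4

\langle \phi_b, t_f \,|\, \phi_a, t_i\rangle = \int\!\mathcal{D}\phi\;\exp\!\left[\,i\!\int_{t_i}^{t_f}\! dt \int d^3x\;\mathcal{L}(\phi,\partial_\mu\phi)\right], \qquad \phi(t_i,\mathbf x) = \phi_a(\mathbf x), \quad \phi(t_f,\mathbf x) = \phi_b(\mathbf x)

with the quartic coupling λ assumed small when the interacting theory is treated as a perturbation of the free theory.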
Two-point correlation function In calculations, one often encounters expressions like $\langle 0|T\{\phi(x)\phi(y)\}|0\rangle$ or $\langle\Omega|T\{\phi(x)\phi(y)\}|\Omega\rangle$ in the free or interacting theory, respectively. Here, $x$ and $y$ are position four-vectors, $T$ is the time ordering operator that shuffles its operands so the time-components $x^0$ and $y^0$ increase from right to left, and $|\Omega\rangle$ is the ground state (vacuum state) of the interacting theory, different from the free ground state $|0\rangle$. This expression represents the probability amplitude for the field to propagate from $y$ to $x$, and goes by multiple names, like the two-point propagator, two-point correlation function, two-point Green's function or two-point function for short. The free two-point function, also known as the Feynman propagator, can be found for the real scalar field by either canonical quantization or path integrals to be $\langle 0|T\{\phi(x)\phi(y)\}|0\rangle \equiv D_F(x-y) = \int \frac{d^4p}{(2\pi)^4}\,\frac{i}{p^2-m^2+i\epsilon}\,e^{-ip\cdot(x-y)}$. In an interacting theory, where the Lagrangian or Hamiltonian contains terms $\mathcal{L}_{\mathrm{int}}$ or $H_{\mathrm{int}}$ that describe interactions, the two-point function is more difficult to define. However, through both the canonical quantization formulation and the path integral formulation, it is possible to express it through an infinite perturbation series of the free two-point function. In canonical quantization, the two-point correlation function can be written as: $\langle\Omega|T\{\phi(x)\phi(y)\}|\Omega\rangle = \lim_{T\to\infty(1-i\epsilon)}\frac{\langle 0|T\{\phi_I(x)\phi_I(y)\exp[-i\int_{-T}^{T}dt\,H_I(t)]\}|0\rangle}{\langle 0|T\{\exp[-i\int_{-T}^{T}dt\,H_I(t)]\}|0\rangle}$, where $\epsilon$ is an infinitesimal number and $\phi_I$ is the field operator under the free theory. Here, the exponential should be understood as its power series expansion. For example, in $\phi^4$-theory, the interacting term of the Hamiltonian is $H_I(t) = \int d^3x\,\frac{\lambda}{4!}\phi_I^4$, and the expansion of the two-point correlator in terms of $\lambda$ becomes a series of free-theory expectation values of products of $\phi_I$, order by order in $\lambda$. This perturbation expansion expresses the interacting two-point function in terms of quantities that are evaluated in the free theory. In the path integral formulation, the two-point correlation function can be written $\langle\Omega|T\{\phi(x)\phi(y)\}|\Omega\rangle = \frac{\int\mathcal{D}\phi\;\phi(x)\phi(y)\,\exp[i\int d^4x'\,\mathcal{L}]}{\int\mathcal{D}\phi\;\exp[i\int d^4x'\,\mathcal{L}]}$, where $\mathcal{L}$ is the Lagrangian density. As in the previous paragraph, the exponential can be expanded as a series in $\lambda$, reducing the interacting two-point function to quantities in the free theory. Wick's theorem further reduces any $n$-point correlation function in the free theory to a sum of products of two-point correlation functions. For example, $\langle 0|T\{\phi(x_1)\phi(x_2)\phi(x_3)\phi(x_4)\}|0\rangle = D_F(x_1-x_2)D_F(x_3-x_4) + D_F(x_1-x_3)D_F(x_2-x_4) + D_F(x_1-x_4)D_F(x_2-x_3)$. Since interacting correlation functions can be expressed in terms of free correlation functions, only the latter need to be evaluated in order to calculate all physical quantities in the (perturbative) interacting theory. This makes the Feynman propagator one of the most important quantities in quantum field theory. Feynman diagram Correlation functions in the interacting theory can be written as a perturbation series. Each term in the series is a product of Feynman propagators in the free theory and can be represented visually by a Feynman diagram. For example, the first-order term in the two-point correlation function in the $\phi^4$ theory involves a single interaction vertex; after applying Wick's theorem, each resulting term is a product of Feynman propagators connecting the external points and the internal vertex. Such a term can instead be obtained from a Feynman diagram. The diagram consists of external vertices connected with one edge and represented by dots (here labeled $x$ and $y$), internal vertices connected with four edges and represented by dots (here labeled $z_i$), and edges connecting the vertices and represented by lines. Every vertex corresponds to a single field factor $\phi$ at the corresponding point in spacetime, while the edges correspond to the propagators between the spacetime points. The term in the perturbation series corresponding to the diagram is obtained by writing down the expression that follows from the so-called Feynman rules: For every internal vertex $z_i$, write down a factor $-i\lambda\int d^4z_i$. For every edge that connects two vertices $z_i$ and $z_j$, write down a factor $D_F(z_i - z_j)$.
Divide by the symmetry factor of the diagram. With the symmetry factor , following these rules yields exactly the expression above. By Fourier transforming the propagator, the Feynman rules can be reformulated from position space into momentum space. In order to compute the -point correlation function to the -th order, list all valid Feynman diagrams with external points and or fewer vertices, and then use Feynman rules to obtain the expression for each term. To be precise, is equal to the sum of (expressions corresponding to) all connected diagrams with external points. (Connected diagrams are those in which every vertex is connected to an external point through lines. Components that are totally disconnected from external lines are sometimes called "vacuum bubbles".) In the interaction theory discussed above, every vertex must have four legs. In realistic applications, the scattering amplitude of a certain interaction or the decay rate of a particle can be computed from the S-matrix, which itself can be found using the Feynman diagram method. Feynman diagrams devoid of "loops" are called tree-level diagrams, which describe the lowest-order interaction processes; those containing loops are referred to as -loop diagrams, which describe higher-order contributions, or radiative corrections, to the interaction. Lines whose end points are vertices can be thought of as the propagation of virtual particles. Renormalization Feynman rules can be used to directly evaluate tree-level diagrams. However, naïve computation of loop diagrams such as the one shown above will result in divergent momentum integrals, which seems to imply that almost all terms in the perturbative expansion are infinite. The renormalisation procedure is a systematic process for removing such infinities. Parameters appearing in the Lagrangian, such as the mass and the coupling constant , have no physical meaning — , , and the field strength are not experimentally measurable quantities and are referred to here as the bare mass, bare coupling constant, and bare field, respectively. The physical mass and coupling constant are measured in some interaction process and are generally different from the bare quantities. While computing physical quantities from this interaction process, one may limit the domain of divergent momentum integrals to be below some momentum cut-off , obtain expressions for the physical quantities, and then take the limit . This is an example of regularization, a class of methods to treat divergences in QFT, with being the regulator. The approach illustrated above is called bare perturbation theory, as calculations involve only the bare quantities such as mass and coupling constant. A different approach, called renormalized perturbation theory, is to use physically meaningful quantities from the very beginning. In the case of theory, the field strength is first redefined: where is the bare field, is the renormalized field, and is a constant to be determined. The Lagrangian density becomes: where and are the experimentally measurable, renormalized, mass and coupling constant, respectively, and are constants to be determined. The first three terms are the Lagrangian density written in terms of the renormalized quantities, while the latter three terms are referred to as "counterterms". As the Lagrangian now contains more terms, so the Feynman diagrams should include additional elements, each with their own Feynman rules. The procedure is outlined as follows. 
First select a regularization scheme (such as the cut-off regularization introduced above or dimensional regularization); call the regulator . Compute Feynman diagrams, in which divergent terms will depend on . Then, define , , and such that Feynman diagrams for the counterterms will exactly cancel the divergent terms in the normal Feynman diagrams when the limit is taken. In this way, meaningful finite quantities are obtained. It is only possible to eliminate all infinities to obtain a finite result in renormalizable theories, whereas in non-renormalizable theories infinities cannot be removed by the redefinition of a small number of parameters. The Standard Model of elementary particles is a renormalizable QFT, while quantum gravity is non-renormalizable. Renormalization group The renormalization group, developed by Kenneth Wilson, is a mathematical apparatus used to study the changes in physical parameters (coefficients in the Lagrangian) as the system is viewed at different scales. The way in which each parameter changes with scale is described by its β function. Correlation functions, which underlie quantitative physical predictions, change with scale according to the Callan–Symanzik equation. As an example, the coupling constant in QED, namely the elementary charge , has the following β function: where is the energy scale under which the measurement of is performed. This differential equation implies that the observed elementary charge increases as the scale increases. The renormalized coupling constant, which changes with the energy scale, is also called the running coupling constant. The coupling constant in quantum chromodynamics, a non-Abelian gauge theory based on the symmetry group , has the following β function: where is the number of quark flavours. In the case where (the Standard Model has ), the coupling constant decreases as the energy scale increases. Hence, while the strong interaction is strong at low energies, it becomes very weak in high-energy interactions, a phenomenon known as asymptotic freedom. Conformal field theories (CFTs) are special QFTs that admit conformal symmetry. They are insensitive to changes in the scale, as all their coupling constants have vanishing β function. (The converse is not true, however — the vanishing of all β functions does not imply conformal symmetry of the theory.) Examples include string theory and supersymmetric Yang–Mills theory. According to Wilson's picture, every QFT is fundamentally accompanied by its energy cut-off , i.e. that the theory is no longer valid at energies higher than , and all degrees of freedom above the scale are to be omitted. For example, the cut-off could be the inverse of the atomic spacing in a condensed matter system, and in elementary particle physics it could be associated with the fundamental "graininess" of spacetime caused by quantum fluctuations in gravity. The cut-off scale of theories of particle interactions lies far beyond current experiments. Even if the theory were very complicated at that scale, as long as its couplings are sufficiently weak, it must be described at low energies by a renormalizable effective field theory. The difference between renormalizable and non-renormalizable theories is that the former are insensitive to details at high energies, whereas the latter do depend on them. According to this view, non-renormalizable theories are to be seen as low-energy effective theories of a more fundamental theory. 
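For reference, the one-loop β functions quoted in the renormalization-group discussion above have the standard forms (the QCD expression assumes N_f quark flavours):

\beta(e) = \frac{e^3}{12\pi^2}, \qquad \beta(g_s) = -\left(11 - \tfrac{2}{3}N_f\right)\frac{g_s^3}{16\pi^2}

The statement that the strong coupling weakens at high energy can be made quantitative with the one-loop solution $\alpha_s(\mu) = \alpha_s(\mu_0)\,/\,[1 + \tfrac{b_0}{2\pi}\,\alpha_s(\mu_0)\ln(\mu/\mu_0)]$ with $b_0 = 11 - \tfrac{2}{3}N_f$. The following Python sketch is purely illustrative: the reference values mu0 = 91.19 GeV and alpha0 = 0.118 are assumed, and a fixed nf = 5 is used for simplicity rather than a flavour-threshold treatment.

import math

def alpha_s(mu_gev, mu0_gev=91.19, alpha0=0.118, nf=5):
    """One-loop running QCD coupling (illustrative sketch; fixed number of flavours)."""
    b0 = 11.0 - 2.0 * nf / 3.0
    return alpha0 / (1.0 + b0 / (2.0 * math.pi) * alpha0 * math.log(mu_gev / mu0_gev))

for mu in (10.0, 91.19, 1000.0):
    # The printed coupling decreases as the energy scale mu increases (asymptotic freedom).
    print(f"alpha_s({mu} GeV) = {alpha_s(mu):.4f}")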
The failure to remove the cut-off from calculations in such a theory merely indicates that new physical phenomena appear at scales above , where a new theory is necessary. Other theories The quantization and renormalization procedures outlined in the preceding sections are performed for the free theory and theory of the real scalar field. A similar process can be done for other types of fields, including the complex scalar field, the vector field, and the Dirac field, as well as other types of interaction terms, including the electromagnetic interaction and the Yukawa interaction. As an example, quantum electrodynamics contains a Dirac field representing the electron field and a vector field representing the electromagnetic field (photon field). (Despite its name, the quantum electromagnetic "field" actually corresponds to the classical electromagnetic four-potential, rather than the classical electric and magnetic fields.) The full QED Lagrangian density is: where are Dirac matrices, , and is the electromagnetic field strength. The parameters in this theory are the (bare) electron mass and the (bare) elementary charge . The first and second terms in the Lagrangian density correspond to the free Dirac field and free vector fields, respectively. The last term describes the interaction between the electron and photon fields, which is treated as a perturbation from the free theories. Shown above is an example of a tree-level Feynman diagram in QED. It describes an electron and a positron annihilating, creating an off-shell photon, and then decaying into a new pair of electron and positron. Time runs from left to right. Arrows pointing forward in time represent the propagation of electrons, while those pointing backward in time represent the propagation of positrons. A wavy line represents the propagation of a photon. Each vertex in QED Feynman diagrams must have an incoming and an outgoing fermion (positron/electron) leg as well as a photon leg. Gauge symmetry If the following transformation to the fields is performed at every spacetime point (a local transformation), then the QED Lagrangian remains unchanged, or invariant: where is any function of spacetime coordinates. If a theory's Lagrangian (or more precisely the action) is invariant under a certain local transformation, then the transformation is referred to as a gauge symmetry of the theory. Gauge symmetries form a group at every spacetime point. In the case of QED, the successive application of two different local symmetry transformations and is yet another symmetry transformation . For any , is an element of the group, thus QED is said to have gauge symmetry. The photon field may be referred to as the gauge boson. is an Abelian group, meaning that the result is the same regardless of the order in which its elements are applied. QFTs can also be built on non-Abelian groups, giving rise to non-Abelian gauge theories (also known as Yang–Mills theories). Quantum chromodynamics, which describes the strong interaction, is a non-Abelian gauge theory with an gauge symmetry. It contains three Dirac fields representing quark fields as well as eight vector fields representing gluon fields, which are the gauge bosons. The QCD Lagrangian density is: where is the gauge covariant derivative: where is the coupling constant, are the eight generators of in the fundamental representation ( matrices), and are the structure constants of . Repeated indices are implicitly summed over following Einstein notation. 
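The Lagrangian densities discussed in the passage above take the following standard forms; this is a sketch in one common sign and normalization convention (others differ by signs and by whether the interaction is absorbed into a covariant derivative):

\mathcal{L}_{\mathrm{QED}} = \bar\psi\left(i\gamma^\mu\partial_\mu - m\right)\psi \;-\; \tfrac{1}{4}F_{\mu\nu}F^{\mu\nu} \;-\; e\,\bar\psi\gamma^\mu\psi\,A_\mu, \qquad F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu

\mathcal{L}_{\mathrm{QCD}} = \bar\psi^i\left(i\gamma^\mu (D_\mu)_{ij} - m\,\delta_{ij}\right)\psi^j \;-\; \tfrac{1}{4}\,G^a_{\mu\nu}G^{a\,\mu\nu}

(D_\mu)_{ij} = \partial_\mu\,\delta_{ij} - i g_s\,A^a_\mu\,(t^a)_{ij}, \qquad G^a_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu + g_s f^{abc} A^b_\mu A^c_\nu

with i, j = 1, 2, 3 colour indices, a = 1, ..., 8, t^a the SU(3) generators in the fundamental representation, and f^{abc} the structure constants of SU(3).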
This Lagrangian is invariant under the transformation: where is an element of at every spacetime point : The preceding discussion of symmetries is on the level of the Lagrangian. In other words, these are "classical" symmetries. After quantization, some theories will no longer exhibit their classical symmetries, a phenomenon called anomaly. For instance, in the path integral formulation, despite the invariance of the Lagrangian density under a certain local transformation of the fields, the measure of the path integral may change. For a theory describing nature to be consistent, it must not contain any anomaly in its gauge symmetry. The Standard Model of elementary particles is a gauge theory based on the group , in which all anomalies exactly cancel. The theoretical foundation of general relativity, the equivalence principle, can also be understood as a form of gauge symmetry, making general relativity a gauge theory based on the Lorentz group. Noether's theorem states that every continuous symmetry, i.e. the parameter in the symmetry transformation being continuous rather than discrete, leads to a corresponding conservation law. For example, the symmetry of QED implies charge conservation. Gauge-transformations do not relate distinct quantum states. Rather, it relates two equivalent mathematical descriptions of the same quantum state. As an example, the photon field , being a four-vector, has four apparent degrees of freedom, but the actual state of a photon is described by its two degrees of freedom corresponding to the polarization. The remaining two degrees of freedom are said to be "redundant" — apparently different ways of writing can be related to each other by a gauge transformation and in fact describe the same state of the photon field. In this sense, gauge invariance is not a "real" symmetry, but a reflection of the "redundancy" of the chosen mathematical description. To account for the gauge redundancy in the path integral formulation, one must perform the so-called Faddeev–Popov gauge fixing procedure. In non-Abelian gauge theories, such a procedure introduces new fields called "ghosts". Particles corresponding to the ghost fields are called ghost particles, which cannot be detected externally. A more rigorous generalization of the Faddeev–Popov procedure is given by BRST quantization. Spontaneous symmetry-breaking Spontaneous symmetry breaking is a mechanism whereby the symmetry of the Lagrangian is violated by the system described by it. To illustrate the mechanism, consider a linear sigma model containing real scalar fields, described by the Lagrangian density: where and are real parameters. The theory admits an global symmetry: The lowest energy state (ground state or vacuum state) of the classical theory is any uniform field satisfying Without loss of generality, let the ground state be in the -th direction: The original fields can be rewritten as: and the original Lagrangian density as: where . The original global symmetry is no longer manifest, leaving only the subgroup . The larger symmetry before spontaneous symmetry breaking is said to be "hidden" or spontaneously broken. Goldstone's theorem states that under spontaneous symmetry breaking, every broken continuous global symmetry leads to a massless field called the Goldstone boson. In the above example, has continuous symmetries (the dimension of its Lie algebra), while has . The number of broken symmetries is their difference, , which corresponds to the massless fields . 
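As a sketch of the formulas the gauge-symmetry and symmetry-breaking passages above rely on, in standard notation: the QED gauge transformation and the linear sigma model read

\psi(x) \to e^{i\alpha(x)}\psi(x), \qquad A_\mu(x) \to A_\mu(x) + \frac{1}{e}\,\partial_\mu\alpha(x)

\mathcal{L} = \tfrac{1}{2}\,\partial_\mu\phi^i\,\partial^\mu\phi^i + \tfrac{1}{2}\,\mu^2\,\phi^i\phi^i - \frac{\lambda}{4}\left(\phi^i\phi^i\right)^2, \qquad i = 1,\dots,N

The ground state satisfies $\phi^i_0\phi^i_0 = \mu^2/\lambda$; choosing $\phi_0 = (0,\dots,0,v)$ with $v = \mu/\sqrt{\lambda}$ and writing $\phi^i(x) = (\pi^1(x),\dots,\pi^{N-1}(x),\, v+\sigma(x))$ leaves N−1 massless π fields (the Goldstone bosons) and one massive field σ, with the residual O(N−1) symmetry acting on the π fields. The broken-generator count is $\tfrac{N(N-1)}{2} - \tfrac{(N-1)(N-2)}{2} = N-1$, matching the number of Goldstone bosons stated above.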
On the other hand, when a gauge (as opposed to global) symmetry is spontaneously broken, the resulting Goldstone boson is "eaten" by the corresponding gauge boson by becoming an additional degree of freedom for the gauge boson. The Goldstone boson equivalence theorem states that at high energy, the amplitude for emission or absorption of a longitudinally polarized massive gauge boson becomes equal to the amplitude for emission or absorption of the Goldstone boson that was eaten by the gauge boson. In the QFT of ferromagnetism, spontaneous symmetry breaking can explain the alignment of magnetic dipoles at low temperatures. In the Standard Model of elementary particles, the W and Z bosons, which would otherwise be massless as a result of gauge symmetry, acquire mass through spontaneous symmetry breaking of the Higgs boson, a process called the Higgs mechanism. Supersymmetry All experimentally known symmetries in nature relate bosons to bosons and fermions to fermions. Theorists have hypothesized the existence of a type of symmetry, called supersymmetry, that relates bosons and fermions. The Standard Model obeys Poincaré symmetry, whose generators are the spacetime translations and the Lorentz transformations . In addition to these generators, supersymmetry in (3+1)-dimensions includes additional generators , called supercharges, which themselves transform as Weyl fermions. The symmetry group generated by all these generators is known as the super-Poincaré group. In general there can be more than one set of supersymmetry generators, , which generate the corresponding supersymmetry, supersymmetry, and so on. Supersymmetry can also be constructed in other dimensions, most notably in (1+1) dimensions for its application in superstring theory. The Lagrangian of a supersymmetric theory must be invariant under the action of the super-Poincaré group. Examples of such theories include: Minimal Supersymmetric Standard Model (MSSM), supersymmetric Yang–Mills theory, and superstring theory. In a supersymmetric theory, every fermion has a bosonic superpartner and vice versa. If supersymmetry is promoted to a local symmetry, then the resultant gauge theory is an extension of general relativity called supergravity. Supersymmetry is a potential solution to many current problems in physics. For example, the hierarchy problem of the Standard Model—why the mass of the Higgs boson is not radiatively corrected (under renormalization) to a very high scale such as the grand unified scale or the Planck scale—can be resolved by relating the Higgs field and its super-partner, the Higgsino. Radiative corrections due to Higgs boson loops in Feynman diagrams are cancelled by corresponding Higgsino loops. Supersymmetry also offers answers to the grand unification of all gauge coupling constants in the Standard Model as well as the nature of dark matter. Nevertheless, experiments have yet to provide evidence for the existence of supersymmetric particles. If supersymmetry were a true symmetry of nature, then it must be a broken symmetry, and the energy of symmetry breaking must be higher than those achievable by present-day experiments. Other spacetimes The theory, QED, QCD, as well as the whole Standard Model all assume a (3+1)-dimensional Minkowski space (3 spatial and 1 time dimensions) as the background on which the quantum fields are defined. However, QFT a priori imposes no restriction on the number of dimensions nor the geometry of spacetime. 
In condensed matter physics, QFT is used to describe (2+1)-dimensional electron gases. In high-energy physics, string theory is a type of (1+1)-dimensional QFT, while Kaluza–Klein theory uses gravity in extra dimensions to produce gauge theories in lower dimensions. In Minkowski space, the flat metric is used to raise and lower spacetime indices in the Lagrangian, e.g. where is the inverse of satisfying . For QFTs in curved spacetime on the other hand, a general metric (such as the Schwarzschild metric describing a black hole) is used: where is the inverse of . For a real scalar field, the Lagrangian density in a general spacetime background is where , and denotes the covariant derivative. The Lagrangian of a QFT, hence its calculational results and physical predictions, depends on the geometry of the spacetime background. Topological quantum field theory The correlation functions and physical predictions of a QFT depend on the spacetime metric . For a special class of QFTs called topological quantum field theories (TQFTs), all correlation functions are independent of continuous changes in the spacetime metric. QFTs in curved spacetime generally change according to the geometry (local structure) of the spacetime background, while TQFTs are invariant under spacetime diffeomorphisms but are sensitive to the topology (global structure) of spacetime. This means that all calculational results of TQFTs are topological invariants of the underlying spacetime. Chern–Simons theory is an example of TQFT and has been used to construct models of quantum gravity. Applications of TQFT include the fractional quantum Hall effect and topological quantum computers. The world line trajectory of fractionalized particles (known as anyons) can form a link configuration in the spacetime, which relates the braiding statistics of anyons in physics to the link invariants in mathematics. Topological quantum field theories (TQFTs) applicable to the frontier research of topological quantum matters include Chern-Simons-Witten gauge theories in 2+1 spacetime dimensions, other new exotic TQFTs in 3+1 spacetime dimensions and beyond. Perturbative and non-perturbative methods Using perturbation theory, the total effect of a small interaction term can be approximated order by order by a series expansion in the number of virtual particles participating in the interaction. Every term in the expansion may be understood as one possible way for (physical) particles to interact with each other via virtual particles, expressed visually using a Feynman diagram. The electromagnetic force between two electrons in QED is represented (to first order in perturbation theory) by the propagation of a virtual photon. In a similar manner, the W and Z bosons carry the weak interaction, while gluons carry the strong interaction. The interpretation of an interaction as a sum of intermediate states involving the exchange of various virtual particles only makes sense in the framework of perturbation theory. In contrast, non-perturbative methods in QFT treat the interacting Lagrangian as a whole without any series expansion. Instead of particles that carry interactions, these methods have spawned such concepts as 't Hooft–Polyakov monopole, domain wall, flux tube, and instanton. Examples of QFTs that are completely solvable non-perturbatively include minimal models of conformal field theory and the Thirring model. 
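For completeness, the metric and Lagrangian expressions referenced earlier in this passage (under "Other spacetimes") have the following standard forms; the (+,−,−,−) signature is assumed here, and other conventions flip signs:

\eta_{\mu\nu} = \mathrm{diag}(+1,-1,-1,-1), \qquad \mathcal{L} = \tfrac{1}{2}\,\eta^{\mu\nu}\partial_\mu\phi\,\partial_\nu\phi - \tfrac{1}{2}m^2\phi^2 \quad\text{(flat spacetime)}

\mathcal{L} = \sqrt{|g|}\left(\tfrac{1}{2}\,g^{\mu\nu}\nabla_\mu\phi\,\nabla_\nu\phi - \tfrac{1}{2}m^2\phi^2\right) \quad\text{(general background metric } g_{\mu\nu}\text{)}

where g is the determinant of $g_{\mu\nu}$ and $\nabla_\mu$ is the covariant derivative, which for a scalar field reduces to the ordinary partial derivative.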
Mathematical rigor In spite of its overwhelming success in particle physics and condensed matter physics, QFT itself lacks a formal mathematical foundation. For example, according to Haag's theorem, there does not exist a well-defined interaction picture for QFT, which implies that perturbation theory of QFT, which underlies the entire Feynman diagram method, is fundamentally ill-defined. However, perturbative quantum field theory, which only requires that quantities be computable as a formal power series without any convergence requirements, can be given a rigorous mathematical treatment. In particular, Kevin Costello's monograph Renormalization and Effective Field Theory provides a rigorous formulation of perturbative renormalization that combines the effective-field-theory approaches of Kadanoff, Wilson, and Polchinski with the Batalin–Vilkovisky approach to quantizing gauge theories. Furthermore, perturbative path-integral methods, typically understood as formal computational methods inspired by finite-dimensional integration theory, can be given a sound mathematical interpretation from their finite-dimensional analogues. Since the 1950s, theoretical physicists and mathematicians have attempted to organize all QFTs into a set of axioms, in order to establish the existence of concrete models of relativistic QFT in a mathematically rigorous way and to study their properties. This line of study is called constructive quantum field theory, a subfield of mathematical physics, which has led to such results as the CPT theorem, the spin–statistics theorem, and Goldstone's theorem, and also to mathematically rigorous constructions of many interacting QFTs in two and three spacetime dimensions, e.g. two-dimensional scalar field theories with arbitrary polynomial interactions, the three-dimensional scalar field theories with a quartic interaction, etc. Compared to ordinary QFT, topological quantum field theory and conformal field theory are better supported mathematically — both can be classified in the framework of representations of cobordisms. Algebraic quantum field theory is another approach to the axiomatization of QFT, in which the fundamental objects are local operators and the algebraic relations between them. Axiomatic systems following this approach include the Wightman axioms and the Haag–Kastler axioms. One way to construct theories satisfying the Wightman axioms is to use the Osterwalder–Schrader axioms, which give the necessary and sufficient conditions for a real time theory to be obtained from an imaginary time theory by analytic continuation (Wick rotation). Yang–Mills existence and mass gap, one of the Millennium Prize Problems, concerns the well-defined existence of Yang–Mills theories as set out by the above axioms. In outline, the problem asks for a proof that, for any compact simple gauge group, a non-trivial quantum Yang–Mills theory exists on four-dimensional space and has a mass gap, with the construction meeting axiomatic standards at least as strong as those cited above.
See also Abraham–Lorentz force AdS/CFT correspondence Axiomatic quantum field theory Introduction to quantum mechanics Common integrals in quantum field theory Conformal field theory Constructive quantum field theory Dirac's equation Form factor (quantum field theory) Feynman diagram Green–Kubo relations Green's function (many-body theory) Group field theory Lattice field theory List of quantum field theories Local quantum field theory Maximally helicity violating amplitudes Noncommutative quantum field theory Quantization of a field Quantum electrodynamics Quantum field theory in curved spacetime Quantum chromodynamics Quantum flavordynamics Quantum hadrodynamics Quantum hydrodynamics Quantum triviality Relation between Schrödinger's equation and the path integral formulation of quantum mechanics Relationship between string theory and quantum field theory Schwinger–Dyson equation Static forces and virtual-particle exchange Symmetry in quantum mechanics Topological quantum field theory Ward–Takahashi identity Wheeler–Feynman absorber theory Wigner's classification Wigner's theorem References Bibliography Further reading Introductory texts Kaku, Michio (1993). Quantum Field Theory. Oxford University Press. ISBN 0-19-509158-2. Advanced texts Heitler, W. (1953). The Quantum Theory of Radiation. Dover Publications, Inc. ISBN 0-486-64558-4. Umezawa, H. (1956). Quantum Field Theory. North Holland Publishing. Barton, G. (1963). Introduction to Advanced Field Theory. Interscience Publishers. External links Stanford Encyclopedia of Philosophy: "Quantum Field Theory", by Meinard Kuhlmann. Siegel, Warren (2005). Fields. Quantum Field Theory by P. J. Mulders Quantum mechanics Mathematical physics
Quantum field theory
[ "Physics", "Mathematics" ]
12,198
[ "Quantum field theory", "Applied mathematics", "Theoretical physics", "Quantum mechanics", "Mathematical physics" ]
25,312
https://en.wikipedia.org/wiki/Quantum%20gravity
Quantum gravity (QG) is a field of theoretical physics that seeks to describe gravity according to the principles of quantum mechanics. It deals with environments in which neither gravitational nor quantum effects can be ignored, such as in the vicinity of black holes or similar compact astrophysical objects, as well as in the early stages of the universe moments after the Big Bang. Three of the four fundamental forces of nature are described within the framework of quantum mechanics and quantum field theory: the electromagnetic interaction, the strong force, and the weak force; this leaves gravity as the only interaction that has not been fully accommodated. The current understanding of gravity is based on Albert Einstein's general theory of relativity, which incorporates his theory of special relativity and deeply modifies the understanding of concepts like time and space. Although general relativity is highly regarded for its elegance and accuracy, it has limitations: the gravitational singularities inside black holes, the ad hoc postulation of dark matter, as well as dark energy and its relation to the cosmological constant are among the current unsolved mysteries regarding gravity, all of which signal the collapse of the general theory of relativity at different scales and highlight the need for a gravitational theory that goes into the quantum realm. At distances close to the Planck length, like those near the center of a black hole, quantum fluctuations of spacetime are expected to play an important role. Finally, the discrepancies between the predicted value for the vacuum energy and the observed values (which, depending on considerations, can be of 60 or 120 orders of magnitude) highlight the necessity for a quantum theory of gravity. The field of quantum gravity is actively developing, and theorists are exploring a variety of approaches to the problem of quantum gravity, the most popular being M-theory and loop quantum gravity. All of these approaches aim to describe the quantum behavior of the gravitational field, which does not necessarily include unifying all fundamental interactions into a single mathematical framework. However, many approaches to quantum gravity, such as string theory, try to develop a framework that describes all fundamental forces. Such a theory is often referred to as a theory of everything. Some of the approaches, such as loop quantum gravity, make no such attempt; instead, they make an effort to quantize the gravitational field while it is kept separate from the other forces. Other lesser-known but no less important theories include causal dynamical triangulation, noncommutative geometry, and twistor theory. One of the difficulties of formulating a quantum gravity theory is that direct observation of quantum gravitational effects is thought to only appear at length scales near the Planck scale, around 10−35 meters, a scale far smaller, and hence only accessible with far higher energies, than those currently available in high energy particle accelerators. Therefore, physicists lack experimental data which could distinguish between the competing theories which have been proposed. Thought experiment approaches have been suggested as a testing tool for quantum gravity theories. 
In the field of quantum gravity there are several open questions – e.g., it is not known how spin of elementary particles sources gravity, and thought experiments could provide a pathway to explore possible resolutions to these questions, even in the absence of lab experiments or physical observations. In the early 21st century, new experiment designs and technologies have arisen which suggest that indirect approaches to testing quantum gravity may be feasible over the next few decades. This field of study is called phenomenological quantum gravity. Overview Much of the difficulty in meshing these theories at all energy scales comes from the different assumptions that these theories make on how the universe works. General relativity models gravity as curvature of spacetime: in the slogan of John Archibald Wheeler, "Spacetime tells matter how to move; matter tells spacetime how to curve." On the other hand, quantum field theory is typically formulated in the flat spacetime used in special relativity. No theory has yet proven successful in describing the general situation where the dynamics of matter, modeled with quantum mechanics, affect the curvature of spacetime. If one attempts to treat gravity as simply another quantum field, the resulting theory is not renormalizable. Even in the simpler case where the curvature of spacetime is fixed a priori, developing quantum field theory becomes more mathematically challenging, and many ideas physicists use in quantum field theory on flat spacetime are no longer applicable. It is widely hoped that a theory of quantum gravity would allow us to understand problems of very high energy and very small dimensions of space, such as the behavior of black holes, and the origin of the universe. One major obstacle is that for quantum field theory in curved spacetime with a fixed metric, bosonic/fermionic operator fields supercommute for spacelike separated points. (This is a way of imposing a principle of locality.) However, in quantum gravity, the metric is dynamical, so that whether two points are spacelike separated depends on the state. In fact, they can be in a quantum superposition of being spacelike and not spacelike separated. Quantum mechanics and general relativity Graviton The observation that all fundamental forces except gravity have one or more known messenger particles leads researchers to believe that at least one must exist for gravity. This hypothetical particle is known as the graviton. These particles act as a force particle similar to the photon of the electromagnetic interaction. Under mild assumptions, the structure of general relativity requires them to follow the quantum mechanical description of interacting theoretical spin-2 massless particles. Many of the accepted notions of a unified theory of physics since the 1970s assume, and to some degree depend upon, the existence of the graviton. The Weinberg–Witten theorem places some constraints on theories in which the graviton is a composite particle. While gravitons are an important theoretical step in a quantum mechanical description of gravity, they are generally believed to be undetectable because they interact too weakly. Nonrenormalizability of gravity General relativity, like electromagnetism, is a classical field theory. One might expect that, as with electromagnetism, the gravitational force should also have a corresponding quantum field theory. However, gravity is perturbatively nonrenormalizable. 
For a quantum field theory to be well defined according to this understanding of the subject, it must be asymptotically free or asymptotically safe. The theory must be characterized by a choice of finitely many parameters, which could, in principle, be set by experiment. For example, in quantum electrodynamics these parameters are the charge and mass of the electron, as measured at a particular energy scale. On the other hand, in quantizing gravity there are, in perturbation theory, infinitely many independent parameters (counterterm coefficients) needed to define the theory. For a given choice of those parameters, one could make sense of the theory, but since it is impossible to conduct infinite experiments to fix the values of every parameter, it has been argued that one does not, in perturbation theory, have a meaningful physical theory. At low energies, the logic of the renormalization group tells us that, despite the unknown choices of these infinitely many parameters, quantum gravity will reduce to the usual Einstein theory of general relativity. On the other hand, if we could probe very high energies where quantum effects take over, then every one of the infinitely many unknown parameters would begin to matter, and we could make no predictions at all. It is conceivable that, in the correct theory of quantum gravity, the infinitely many unknown parameters will reduce to a finite number that can then be measured. One possibility is that normal perturbation theory is not a reliable guide to the renormalizability of the theory, and that there really is a UV fixed point for gravity. Since this is a question of non-perturbative quantum field theory, finding a reliable answer is difficult; it is pursued in the asymptotic safety program. Another possibility is that there are new, undiscovered symmetry principles that constrain the parameters and reduce them to a finite set. This is the route taken by string theory, where all of the excitations of the string essentially manifest themselves as new symmetries. Quantum gravity as an effective field theory In an effective field theory, all but the first few of the infinite set of parameters in a nonrenormalizable theory are suppressed by huge energy scales and hence can be neglected when computing low-energy effects. Thus, at least in the low-energy regime, the model is a predictive quantum field theory. Furthermore, many theorists argue that the Standard Model should be regarded as an effective field theory itself, with "nonrenormalizable" interactions suppressed by large energy scales and whose effects have consequently not been observed experimentally. By treating general relativity as an effective field theory, one can actually make legitimate predictions for quantum gravity, at least for low-energy phenomena. An example is the well-known calculation of the tiny first-order quantum-mechanical correction to the classical Newtonian gravitational potential between two masses (a representative form of this correction is sketched at the end of this passage). Another example is the calculation of the corrections to the Bekenstein–Hawking entropy formula. Spacetime background dependence A fundamental lesson of general relativity is that there is no fixed spacetime background, as found in Newtonian mechanics and special relativity; the spacetime geometry is dynamic. While simple to grasp in principle, this is a complex idea to understand about general relativity, and its consequences are profound and not fully explored, even at the classical level.
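As referenced in the effective-field-theory paragraph above, a representative form of the one-loop-corrected Newtonian potential is sketched here; the coefficients follow the value commonly attributed to Donoghue and collaborators, and since the exact numerical factors have been revised in the literature over time they should be read as indicative rather than definitive:

V(r) = -\frac{G\,m_1 m_2}{r}\left[\,1 + 3\,\frac{G\,(m_1+m_2)}{r\,c^2} + \frac{41}{10\pi}\,\frac{G\hbar}{r^2 c^3}\,\right]

The second term is a classical post-Newtonian correction, while the final, ħ-dependent term is the genuinely quantum contribution; both are minuscule at laboratory distances.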
To a certain extent, general relativity can be seen to be a relational theory, in which the only physically relevant information is the relationship between different events in spacetime. On the other hand, quantum mechanics has depended since its inception on a fixed background (non-dynamic) structure. In the case of quantum mechanics, it is time that is given and not dynamic, just as in Newtonian classical mechanics. In relativistic quantum field theory, just as in classical field theory, Minkowski spacetime is the fixed background of the theory. String theory String theory can be seen as a generalization of quantum field theory where instead of point particles, string-like objects propagate in a fixed spacetime background, although the interactions among closed strings give rise to space-time in a dynamic way. Although string theory had its origins in the study of quark confinement and not of quantum gravity, it was soon discovered that the string spectrum contains the graviton, and that "condensation" of certain vibration modes of strings is equivalent to a modification of the original background. In this sense, string perturbation theory exhibits exactly the features one would expect of a perturbation theory that may exhibit a strong dependence on asymptotics (as seen, for example, in the AdS/CFT correspondence) which is a weak form of background dependence. Background independent theories Loop quantum gravity is the fruit of an effort to formulate a background-independent quantum theory. Topological quantum field theory provided an example of background-independent quantum theory, but with no local degrees of freedom, and only finitely many degrees of freedom globally. This is inadequate to describe gravity in 3+1 dimensions, which has local degrees of freedom according to general relativity. In 2+1 dimensions, however, gravity is a topological field theory, and it has been successfully quantized in several different ways, including spin networks. Semi-classical quantum gravity Quantum field theory on curved (non-Minkowskian) backgrounds, while not a full quantum theory of gravity, has shown many promising early results. In an analogous way to the development of quantum electrodynamics in the early part of the 20th century (when physicists considered quantum mechanics in classical electromagnetic fields), the consideration of quantum field theory on a curved background has led to predictions such as black hole radiation. Phenomena such as the Unruh effect, in which particles exist in certain accelerating frames but not in stationary ones, do not pose any difficulty when considered on a curved background (the Unruh effect occurs even in flat Minkowskian backgrounds). The vacuum state is the state with the least energy (and may or may not contain particles). Problem of time A conceptual difficulty in combining quantum mechanics with general relativity arises from the contrasting role of time within these two frameworks. In quantum theories, time acts as an independent background through which states evolve, with the Hamiltonian operator acting as the generator of infinitesimal translations of quantum states through time. In contrast, general relativity treats time as a dynamical variable which relates directly with matter and moreover requires the Hamiltonian constraint to vanish. 
Because this variability of time has been observed macroscopically, it removes any possibility of employing a fixed notion of time, similar to the conception of time in quantum theory, at the macroscopic level. Candidate theories There are a number of proposed quantum gravity theories. Currently, there is still no complete and consistent quantum theory of gravity, and the candidate models still need to overcome major formal and conceptual problems. They also face the common problem that, as yet, there is no way to put quantum gravity predictions to experimental tests, although there is hope for this to change as future data from cosmological observations and particle physics experiments become available. String theory The central idea of string theory is to replace the classical concept of a point particle in quantum field theory with a quantum theory of one-dimensional extended objects: string theory. At the energies reached in current experiments, these strings are indistinguishable from point-like particles, but, crucially, different modes of oscillation of one and the same type of fundamental string appear as particles with different (electric and other) charges. In this way, string theory promises to be a unified description of all particles and interactions. The theory is successful in that one mode will always correspond to a graviton, the messenger particle of gravity; however, the price of this success is unusual features such as six extra dimensions of space in addition to the usual three for space and one for time. In what is called the second superstring revolution, it was conjectured that both string theory and a unification of general relativity and supersymmetry known as supergravity form part of a hypothesized eleven-dimensional model known as M-theory, which would constitute a uniquely defined and consistent theory of quantum gravity. As presently understood, however, string theory admits a very large number (10^500 by some estimates) of consistent vacua, comprising the so-called "string landscape". Sorting through this large family of solutions remains a major challenge. Loop quantum gravity Loop quantum gravity seriously considers general relativity's insight that spacetime is a dynamical field and is therefore a quantum object. Its second idea is that the quantum discreteness that determines the particle-like behavior of other field theories (for instance, the photons of the electromagnetic field) also affects the structure of space. The main result of loop quantum gravity is the derivation of a granular structure of space at the Planck length. This is derived from the following considerations: In the case of electromagnetism, the quantum operator representing the energy of each frequency of the field has a discrete spectrum. Thus the energy of each frequency is quantized, and the quanta are the photons. In the case of gravity, the operators representing the area and the volume of each surface or space region likewise have discrete spectra. Thus area and volume of any portion of space are also quantized, where the quanta are elementary quanta of space. It follows, then, that spacetime has an elementary quantum granular structure at the Planck scale, which cuts off the ultraviolet infinities of quantum field theory. The quantum state of spacetime is described in the theory by means of a mathematical structure called spin networks.
Spin networks were initially introduced by Roger Penrose in abstract form, and later shown by Carlo Rovelli and Lee Smolin to derive naturally from a non-perturbative quantization of general relativity. Spin networks do not represent quantum states of a field in spacetime: they represent directly quantum states of spacetime. The theory is based on the reformulation of general relativity known as Ashtekar variables, which represent geometric gravity using mathematical analogues of electric and magnetic fields. In the quantum theory, space is represented by a network structure called a spin network, evolving over time in discrete steps. The dynamics of the theory is today constructed in several versions. One version starts with the canonical quantization of general relativity. The analogue of the Schrödinger equation is a Wheeler–DeWitt equation, which can be defined within the theory. In the covariant, or spinfoam formulation of the theory, the quantum dynamics is obtained via a sum over discrete versions of spacetime, called spinfoams. These represent histories of spin networks. Other theories There are a number of other approaches to quantum gravity. The theories differ depending on which features of general relativity and quantum theory are accepted unchanged, and which features are modified. Such theories include, for example, the asymptotic safety program, causal dynamical triangulation, noncommutative geometry, and twistor theory, mentioned earlier in this article. Experimental tests As was emphasized above, quantum gravitational effects are extremely weak and therefore difficult to test. For this reason, the possibility of experimentally testing quantum gravity had not received much attention prior to the late 1990s. However, since the 2000s, physicists have realized that evidence for quantum gravitational effects can guide the development of the theory. Since theoretical development has been slow, the field of phenomenological quantum gravity, which studies the possibility of experimental tests, has obtained increased attention. The most widely pursued possibilities for quantum gravity phenomenology include gravitationally mediated entanglement, violations of Lorentz invariance, imprints of quantum gravitational effects in the cosmic microwave background (in particular its polarization), and decoherence induced by fluctuations in the space-time foam. The latter scenario has been searched for in light from gamma-ray bursts and both astrophysical and atmospheric neutrinos, placing limits on phenomenological quantum gravity parameters. ESA's INTEGRAL satellite measured polarization of photons of different wavelengths and was able to place a limit on the granularity of space that is less than 10^−48 m, or 13 orders of magnitude below the Planck scale. The BICEP2 experiment detected what was initially thought to be primordial B-mode polarization caused by gravitational waves in the early universe. Had the signal in fact been primordial in origin, it could have been an indication of quantum gravitational effects, but it soon transpired that the polarization was due to interstellar dust interference. See also De Sitter relativity Dilaton Doubly special relativity Gravitational decoherence Gravitomagnetism Hawking radiation List of quantum gravity researchers Orders of magnitude (length) Penrose interpretation Planck epoch Planck units Swampland (physics) Virtual black hole Weak Gravity Conjecture External links "Planck Era" and "Planck Time" (up to 10^−43 seconds after birth of Universe) (University of Oregon).
"Quantum Gravity", BBC Radio 4 discussion with John Gribbin, Lee Smolin and Janna Levin (In Our Time, February 22, 2001) General relativity Physics beyond the Standard Model Theories of gravity
Quantum gravity
[ "Physics" ]
3,906
[ "Theoretical physics", "Unsolved problems in physics", "Theories of gravity", "General relativity", "Quantum gravity", "Particle physics", "Theory of relativity", "Physics beyond the Standard Model" ]
25,336
https://en.wikipedia.org/wiki/Quantum%20entanglement
Quantum entanglement is the phenomenon of a group of particles being generated, interacting, or sharing spatial proximity in such a way that the quantum state of each particle of the group cannot be described independently of the state of the others, including when the particles are separated by a large distance. The topic of quantum entanglement is at the heart of the disparity between classical physics and quantum physics: entanglement is a primary feature of quantum mechanics not present in classical mechanics. Measurements of physical properties such as position, momentum, spin, and polarization performed on entangled particles can, in some cases, be found to be perfectly correlated. For example, if a pair of entangled particles is generated such that their total spin is known to be zero, and one particle is found to have clockwise spin on a first axis, then the spin of the other particle, measured on the same axis, is found to be anticlockwise. However, this behavior gives rise to seemingly paradoxical effects: any measurement of a particle's properties results in an apparent and irreversible wave function collapse of that particle and changes the original quantum state. With entangled particles, such measurements affect the entangled system as a whole. Such phenomena were the subject of a 1935 paper by Albert Einstein, Boris Podolsky, and Nathan Rosen, and several papers by Erwin Schrödinger shortly thereafter, describing what came to be known as the EPR paradox. Einstein and others considered such behavior impossible, as it violated the local realism view of causality (Einstein referring to it as "spooky action at a distance") and argued that the accepted formulation of quantum mechanics must therefore be incomplete. Later, however, the counterintuitive predictions of quantum mechanics were verified in tests where polarization or spin of entangled particles were measured at separate locations, statistically violating Bell's inequality. This established that the correlations produced from quantum entanglement cannot be explained in terms of local hidden variables, i.e., properties contained within the individual particles themselves. However, despite the fact that entanglement can produce statistical correlations between events in widely separated places, it cannot be used for faster-than-light communication. Quantum entanglement has been demonstrated experimentally with photons, electrons, top quarks, molecules and even small diamonds. The use of quantum entanglement in communication and computation is an active area of research and development. History Albert Einstein and Niels Bohr engaged in a long-running collegial dispute about the meaning of quantum mechanics, now known as the Bohr–Einstein debates. During these debates, Einstein introduced a thought experiment about a box that emits a photon. He noted that the experimenter's choice of what measurement to make upon the box will change what can be predicted about the photon, even if the photon is very far away. This argument, which Einstein had formulated by 1931, was an early recognition of the phenomenon that would later be called entanglement. That same year, Hermann Weyl observed in his textbook on group theory and quantum mechanics that quantum systems made of multiple interacting pieces exhibit a kind of Gestalt, in which "the whole is greater than the sum of its parts". In 1932, Erwin Schrödinger wrote down the defining equations of quantum entanglement but set them aside, unpublished. 
In 1935, Grete Hermann studied the mathematics of an electron interacting with a photon and noted the phenomenon that would come to be called entanglement. Later that same year, Einstein, Boris Podolsky and Nathan Rosen published a paper on what is now known as the Einstein–Podolsky–Rosen (EPR) paradox, a thought experiment that attempted to show that "the quantum-mechanical description of physical reality given by wave functions is not complete". Their thought experiment had two systems interact, then separate, and they showed that afterwards quantum mechanics cannot describe the two systems individually. Shortly after this paper appeared, Erwin Schrödinger wrote a letter to Einstein in German in which he used the word Verschränkung (translated by himself as entanglement) to describe situations like that of the EPR scenario. Schrödinger followed up with a full paper defining and discussing the notion of entanglement, saying "I would not call [entanglement] one but rather the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought." Like Einstein, Schrödinger was dissatisfied with the concept of entanglement, because it seemed to violate the speed limit on the transmission of information implicit in the theory of relativity. Einstein later referred to the effects of entanglement as "spukhafte Fernwirkung" or "spooky action at a distance", meaning the acquisition of a value of a property at one location resulting from a measurement at a distant location. In 1946, John Archibald Wheeler suggested studying the polarization of pairs of gamma-ray photons produced by electron–positron annihilation. Chien-Shiung Wu and I. Shaknov carried out this experiment in 1949, thereby demonstrating that the entangled particle pairs considered by EPR could be created in the laboratory. Despite Schrödinger's claim of its importance, little work on entanglement was published for decades after his paper was published. In 1964 John S. Bell demonstrated an upper limit, seen in Bell's inequality, regarding the strength of correlations that can be produced in any theory obeying local realism, and showed that quantum theory predicts violations of this limit for certain entangled systems. His inequality is experimentally testable, and there have been numerous relevant experiments, starting with the pioneering work of Stuart Freedman and John Clauser in 1972 and Alain Aspect's experiments in 1982. While Bell actively discouraged students from pursuing work like his as too esoteric, after a talk at Oxford a student named Artur Ekert suggested that the violation of a Bell inequality could be used as a resource for communication. Ekert followed up by publishing a quantum key distribution protocol called E91 based on it. In 1992, the entanglement concept was leveraged to propose quantum teleportation, an effect that was realized experimentally in 1997. Beginning in the mid-1990s, Anton Zeilinger used the generation of entanglement via parametric down-conversion to develop entanglement swapping and demonstrate quantum cryptography with entangled photons. In 2022, the Nobel Prize in Physics was awarded to Aspect, Clauser, and Zeilinger "for experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science". 
Concept Meaning of entanglement Just as energy is a resource that facilitates mechanical operations, entanglement is a resource that facilitates performing tasks that involve communication and computation. The mathematical definition of entanglement can be paraphrased as saying that maximal knowledge about the whole of a system does not imply maximal knowledge about the individual parts of that system. If the quantum state that describes a pair of particles is entangled, then the results of measurements upon one half of the pair can be strongly correlated with the results of measurements upon the other. However, entanglement is not the same as "correlation" as understood in classical probability theory and in daily life. Instead, entanglement can be thought of as potential correlation that can be used to generate actual correlation in an appropriate experiment. The correlations generated from an entangled quantum state cannot in general be replicated by classical probability. An example of entanglement is a subatomic particle that decays into an entangled pair of other particles. The decay events obey the various conservation laws, and as a result, the measurement outcomes of one daughter particle must be highly correlated with the measurement outcomes of the other daughter particle (so that the total momenta, angular momenta, energy, and so forth remains roughly the same before and after this process). For instance, a spin-zero particle could decay into a pair of spin-1/2 particles. Since the total spin before and after this decay must be zero (by the conservation of angular momentum), whenever the first particle is measured to be spin up on some axis, the other, when measured on the same axis, is always found to be spin down. This is called the spin anti-correlated case; and if the prior probabilities for measuring each spin are equal, the pair is said to be in the singlet state. Perfect anti-correlations like this could be explained by "hidden variables" within the particles. For example, we could hypothesize that the particles are made in pairs such that one carries a value of "up" while the other carries a value of "down". Then, knowing the result of the spin measurement upon one particle, we could predict that the other will have the opposite value. Bell illustrated this with a story about a colleague, Bertlmann, who always wore socks with mismatching colors. "Which colour he will have on a given foot on a given day is quite unpredictable," Bell wrote, but upon observing "that the first sock is pink you can be already sure that the second sock will not be pink." Revealing the remarkable features of quantum entanglement requires considering multiple distinct experiments, such as spin measurements along different axes, and comparing the correlations obtained in these different configurations. Quantum systems can become entangled through various types of interactions. For some ways in which entanglement may be achieved for experimental purposes, see the section below on methods. Entanglement is broken when the entangled particles decohere through interaction with the environment; for example, when a measurement is made. In more detail, this process involves the particles becoming entangled with the environment, as a consequence of which, the quantum state describing the particles themselves is no longer entangled. 
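For concreteness, the spin-anticorrelated singlet state discussed above can be written in standard notation as
\[ |\psi^{-}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|{\uparrow}\rangle_{A}|{\downarrow}\rangle_{B} - |{\downarrow}\rangle_{A}|{\uparrow}\rangle_{B}\bigr). \]
Measuring both spins along the same axis always yields opposite results, even though each individual outcome is random with probability 1/2.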
Mathematically, an entangled system can be defined to be one whose quantum state cannot be factored as a product of states of its local constituents; that is to say, they are not individual particles but are an inseparable whole. When entanglement is present, one constituent cannot be fully described without considering the other(s). The state of a composite system is always expressible as a sum, or superposition, of products of states of local constituents; it is entangled if this sum cannot be written as a single product term. Paradox The singlet state described above is the basis for one version of the EPR paradox. In this variant, introduced by David Bohm, a source emits particles and sends them in opposite directions. The state describing each pair is entangled. In the standard textbook presentation of quantum mechanics, performing a spin measurement on one of the particles causes the wave function for the whole pair to collapse into a state in which each particle has a definite spin (either up or down) along the axis of measurement. The outcome is random, with each possibility having a probability of 50%. However, if both spins are measured along the same axis, they are found to be anti-correlated. This means that the random outcome of the measurement made on one particle seems to have been transmitted to the other, so that it can make the "right choice" when it too is measured. The distance and timing of the measurements can be chosen so as to make the interval between the two measurements spacelike, hence, any causal effect connecting the events would have to travel faster than light. According to the principles of special relativity, it is not possible for any information to travel between two such measuring events. It is not even possible to say which of the measurements came first. For two spacelike separated events and there are inertial frames in which is first and others in which is first. Therefore, the correlation between the two measurements cannot be explained as one measurement determining the other: different observers would disagree about the role of cause and effect. Failure of local hidden-variable theories A possible resolution to the paradox is to assume that quantum theory is incomplete, and the result of measurements depends on predetermined "hidden variables". The state of the particles being measured contains some hidden variables, whose values effectively determine, right from the moment of separation, what the outcomes of the spin measurements are going to be. This would mean that each particle carries all the required information with it, and nothing needs to be transmitted from one particle to the other at the time of measurement. Einstein and others (see the previous section) originally believed this was the only way out of the paradox, and the accepted quantum mechanical description (with a random measurement outcome) must be incomplete. Local hidden variable theories fail, however, when measurements of the spin of entangled particles along different axes are considered. If a large number of pairs of such measurements are made (on a large number of pairs of entangled particles), then statistically, if the local realist or hidden variables view were correct, the results would always satisfy Bell's inequality. A number of experiments have shown in practice that Bell's inequality is not satisfied. 
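The most commonly tested form of Bell's inequality is the CHSH inequality; in its standard form, for measurement settings a, a′ on one particle and b, b′ on the other, with correlation functions E,
\[ S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2 \]
for any local hidden-variable theory, whereas quantum mechanics predicts values up to \(2\sqrt{2} \approx 2.83\) for suitably chosen settings on the singlet state.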
Moreover, when measurements of the entangled particles are made in moving relativistic reference frames, in which each measurement (in its own relativistic time frame) occurs before the other, the measurement results remain correlated. The fundamental issue about measuring spin along different axes is that these measurements cannot have definite values at the same time―they are incompatible in the sense that these measurements' maximum simultaneous precision is constrained by the uncertainty principle. This is contrary to what is found in classical physics, where any number of properties can be measured simultaneously with arbitrary accuracy. It has been proven mathematically that compatible measurements cannot show Bell-inequality-violating correlations, and thus entanglement is a fundamentally non-classical phenomenon. Nonlocality and entanglement As discussed above, entanglement is necessary to produce a violation of a Bell inequality. However, the mere presence of entanglement alone is insufficient, as Bell himself noted in his 1964 paper. This is demonstrated, for example, by Werner states, which are a family of states describing pairs of particles. For appropriate choices of the key parameter that identifies a given Werner state within the full set thereof, the Werner states exhibit entanglement. Yet pairs of particles described by Werner states always admit a local hidden variable model. In other words, these states cannot power the violation of a Bell inequality, despite possessing entanglement. This can be generalized from pairs of particles to larger collections as well. The violation of Bell inequalities is often called quantum nonlocality. This term is not without controversy. It is sometimes argued that using the term nonlocality carries the unwarranted implication that the violation of Bell inequalities must be explained by physical, faster-than-light signals. In other words, the failure of local hidden-variable models to reproduce quantum mechanics is not necessarily a sign of true nonlocality in quantum mechanics itself. Despite these reservations, the term nonlocality has become a widespread convention. The term nonlocality is also sometimes applied to other concepts besides the nonexistence of a local hidden-variable model, such as whether states can be distinguished by local measurements. Moreover, quantum field theory is often said to be local because observables defined within spacetime regions that are spacelike separated must commute. These other uses of local and nonlocal are not discussed further here. Mathematical details The following subsections use the formalism and theoretical framework developed in the articles bra–ket notation and mathematical formulation of quantum mechanics. Pure states Consider two arbitrary quantum systems and , with respective Hilbert spaces and . The Hilbert space of the composite system is the tensor product If the first system is in state and the second in state , the state of the composite system is States of the composite system that can be represented in this form are called separable states, or product states. However, not all states of the composite system are separable. Fix a basis for and a basis for . The most general state in is of the form . This state is separable if there exist vectors so that yielding and It is inseparable if for any vectors at least for one pair of coordinates we have If a state is inseparable, it is called an 'entangled state'. 
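In standard notation, the definitions just given read as follows: the composite Hilbert space is \(H_{A} \otimes H_{B}\); a product (separable) state has the form \(|\psi\rangle_{A} \otimes |\phi\rangle_{B}\); and, with bases \(\{|i\rangle_{A}\}\) and \(\{|j\rangle_{B}\}\), the most general pure state is
\[ |\Psi\rangle = \sum_{i,j} c_{ij}\, |i\rangle_{A} \otimes |j\rangle_{B}, \]
which is separable exactly when the coefficients factor as \(c_{ij} = c^{A}_{i} c^{B}_{j}\) for some pair of coefficient vectors, and entangled otherwise.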
For example, given two basis vectors of and two basis vectors of , the following is an entangled state: If the composite system is in this state, it is impossible to attribute to either system or system a definite pure state. Another way to say this is that while the von Neumann entropy of the whole state is zero (as it is for any pure state), the entropy of the subsystems is greater than zero. In this sense, the systems are "entangled". The above example is one of four Bell states, which are (maximally) entangled pure states (pure states of the space, but which cannot be separated into pure states of each and ). Now suppose Alice is an observer for system , and Bob is an observer for system . If in the entangled state given above Alice makes a measurement in the eigenbasis of , there are two possible outcomes, occurring with equal probability: Alice can obtain the outcome 0, or she can obtain the outcome 1. If she obtains the outcome 0, then she can predict with certainty that Bob's result will be 1. Likewise, if she obtains the outcome 1, then she can predict with certainty that Bob's result will be 0. In other words, the results of measurements on the two qubits will be perfectly anti-correlated. This remains true even if the systems and are spatially separated. This is the foundation of the EPR paradox. The outcome of Alice's measurement is random. Alice cannot decide which state to collapse the composite system into, and therefore cannot transmit information to Bob by acting on her system. Causality is thus preserved, in this particular scheme. For the general argument, see no-communication theorem. Ensembles As mentioned above, a state of a quantum system is given by a unit vector in a Hilbert space. More generally, if one has less information about the system, then one calls it an 'ensemble' and describes it by a density matrix, which is a positive-semidefinite matrix, or a trace class when the state space is infinite-dimensional, and which has trace 1. By the spectral theorem, such a matrix takes the general form: where the wi are positive-valued probabilities (they sum up to 1), the vectors are unit vectors, and in the infinite-dimensional case, we would take the closure of such states in the trace norm. We can interpret as representing an ensemble where is the proportion of the ensemble whose states are . When a mixed state has rank 1, it therefore describes a 'pure ensemble'. When there is less than total information about the state of a quantum system we need density matrices to represent the state. Experimentally, a mixed ensemble might be realized as follows. Consider a "black box" apparatus that spits electrons towards an observer. The electrons' Hilbert spaces are identical. The apparatus might produce electrons that are all in the same state; in this case, the electrons received by the observer are then a pure ensemble. However, the apparatus could produce electrons in different states. For example, it could produce two populations of electrons: one with state with spins aligned in the positive direction, and the other with state with spins aligned in the negative direction. Generally, this is a mixed ensemble, as there can be any number of populations, each corresponding to a different state. Following the definition above, for a bipartite composite system, mixed states are just density matrices on . That is, it has the general form where the wi are positively valued probabilities, , and the vectors are unit vectors. This is self-adjoint and positive and has trace 1. 
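For reference, the four Bell states mentioned above have the standard form
\[ |\Phi^{\pm}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|0\rangle_{A}|0\rangle_{B} \pm |1\rangle_{A}|1\rangle_{B}\bigr), \qquad |\Psi^{\pm}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|0\rangle_{A}|1\rangle_{B} \pm |1\rangle_{A}|0\rangle_{B}\bigr), \]
and the perfectly anti-correlated example described above is of the \(|\Psi^{\pm}\rangle\) type. Likewise, the spectral decomposition of a density matrix referred to in the ensemble discussion is \(\rho = \sum_{i} w_{i}\, |\alpha_{i}\rangle\langle\alpha_{i}|\) with \(w_{i} \ge 0\) and \(\sum_{i} w_{i} = 1\).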
Extending the definition of separability from the pure case, we say that a mixed state is separable if it can be written as where the are positively valued probabilities and the s and s are themselves mixed states (density operators) on the subsystems and respectively. In other words, a state is separable if it is a probability distribution over uncorrelated states, or product states. By writing the density matrices as sums of pure ensembles and expanding, we may assume without loss of generality that and are themselves pure ensembles. A state is then said to be entangled if it is not separable. In general, finding out whether or not a mixed state is entangled is considered difficult. The general bipartite case has been shown to be NP-hard. For the and cases, a necessary and sufficient criterion for separability is given by the famous Positive Partial Transpose (PPT) condition. Reduced density matrices The idea of a reduced density matrix was introduced by Paul Dirac in 1930. Consider as above systems and each with a Hilbert space . Let the state of the composite system be As indicated above, in general there is no way to associate a pure state to the component system . However, it still is possible to associate a density matrix. Let . which is the projection operator onto this state. The state of is the partial trace of over the basis of system : The sum occurs over and the identity operator in . is sometimes called the reduced density matrix of on subsystem . Colloquially, we "trace out" or "trace over" system to obtain the reduced density matrix on . For example, the reduced density matrix of for the entangled state discussed above is This demonstrates that the reduced density matrix for an entangled pure ensemble is a mixed ensemble. In contrast, the density matrix of for the pure product state discussed above is the projection operator onto . In general, a bipartite pure state ρ is entangled if and only if its reduced states are mixed rather than pure. Entanglement as a resource In quantum information theory, entangled states are considered a 'resource', i.e., something costly to produce and that allows implementing valuable transformations. The setting in which this perspective is most evident is that of "distant labs", i.e., two quantum systems labelled "A" and "B" on each of which arbitrary quantum operations can be performed, but which do not interact with each other quantum mechanically. The only interaction allowed is the exchange of classical information, which combined with the most general local quantum operations gives rise to the class of operations called LOCC (local operations and classical communication). These operations do not allow the production of entangled states between systems A and B. But if A and B are provided with a supply of entangled states, then these, together with LOCC operations can enable a larger class of transformations. If Alice and Bob share an entangled state, Alice can tell Bob over a telephone call how to reproduce a quantum state she has in her lab. Alice performs a joint measurement on together with her half of the entangled state and tells Bob the results. Using Alice's results Bob operates on his half of the entangled state to make it equal to . Since Alice's measurement necessarily erases the quantum state of the system in her lab, the state is not copied, but transferred: it is said to be "teleported" to Bob's laboratory through this protocol. 
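In the same notation, a bipartite mixed state is separable when it can be written as
\[ \rho = \sum_{i} p_{i}\, \rho^{A}_{i} \otimes \rho^{B}_{i}, \qquad p_{i} \ge 0,\ \sum_{i} p_{i} = 1, \]
and the reduced density matrix is the partial trace \(\rho_{A} = \operatorname{Tr}_{B}\,\rho\). For a maximally entangled two-qubit state such as \(\tfrac{1}{\sqrt{2}}\bigl(|0\rangle_{A}|1\rangle_{B} - |1\rangle_{A}|0\rangle_{B}\bigr)\) this gives \(\rho_{A} = \tfrac{1}{2} I\), the maximally mixed single-qubit state, which is the sense in which the reduced state of an entangled pure state is mixed. The Positive Partial Transpose condition mentioned above is also straightforward to check numerically; the following is a minimal sketch (illustrative code, not drawn from any cited source) that transposes only the B subsystem of a two-qubit density matrix and looks for negative eigenvalues, which certify entanglement (and whose absence, for 2 × 2 and 2 × 3 systems, certifies separability).

import numpy as np

def partial_transpose(rho, dims=(2, 2)):
    # Transpose only the second (B) subsystem of a bipartite density matrix.
    dA, dB = dims
    r = rho.reshape(dA, dB, dA, dB)      # r[i, j, k, l] = rho_{(i,j),(k,l)}
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

# Singlet state (|01> - |10>)/sqrt(2): entangled
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
rho = np.outer(psi, psi)
print(np.linalg.eigvalsh(partial_transpose(rho)))      # includes -0.5, so entangled

# A product (separable) state: no negative eigenvalues after partial transposition
rho_sep = np.kron(np.diag([1.0, 0.0]), np.diag([0.5, 0.5]))
print(np.linalg.eigvalsh(partial_transpose(rho_sep)))  # all >= 0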
Entanglement swapping is a variant of teleportation that allows two parties that have never interacted to share an entangled state. The swapping protocol begins with two EPR sources. One source emits an entangled pair of particles A and B, while the other emits a second entangled pair of particles C and D. Particles B and C are subjected to a measurement in the basis of Bell states. The state of the remaining particles, A and D, collapses to a Bell state, leaving them entangled despite never having interacted with each other. An interaction between a qubit of A and a qubit of B can be realized by first teleporting A's qubit to B, then letting it interact with B's qubit (which is now a LOCC operation, since both qubits are in B's lab) and then teleporting the qubit back to A. Two maximally entangled states of two qubits are used up in this process. Thus entangled states are a resource that enables the realization of quantum interactions (or of quantum channels) in a setting where only LOCC are available, but they are consumed in the process. There are other applications where entanglement can be seen as a resource, e.g., private communication or distinguishing quantum states. Multipartite entanglement Quantum states describing systems made of more than two pieces can also be entangled. An example for a three-qubit system is the Greenberger–Horne–Zeilinger (GHZ) state, Another three-qubit example is the W state: Tracing out any one of the three qubits turns the GHZ state into a separable state, whereas the result of tracing over any of the three qubits in the W state is still entangled. This illustrates how multipartite entanglement is a more complicated topic than bipartite entanglement: systems composed of three or more parts can exhibit multiple qualitatively different types of entanglement. A single particle cannot be maximally entangled with more than one particle at a time, a property called monogamy. Classification of entanglement Not all quantum states are equally valuable as a resource. One method to quantify this value is to use an entanglement measure that assigns a numerical value to each quantum state. However, it is often interesting to settle for a coarser way to compare quantum states. This gives rise to different classification schemes. Most entanglement classes are defined based on whether states can be converted to other states using LOCC or a subclass of these operations. The smaller the set of allowed operations, the finer the classification. Important examples are: If two states can be transformed into each other by a local unitary operation, they are said to be in the same LU class. This is the finest of the usually considered classes. Two states in the same LU class have the same value for entanglement measures and the same value as a resource in the distant-labs setting. There is an infinite number of different LU classes (even in the simplest case of two qubits in a pure state). If two states can be transformed into each other by local operations including measurements with probability larger than 0, they are said to be in the same 'SLOCC class' ("stochastic LOCC"). Qualitatively, two states and in the same SLOCC class are equally powerful, since one can transform each into the other, but since the transformations and may succeed with different probability, they are no longer equally valuable.
E.g., for two pure qubits there are only two SLOCC classes: the entangled states (which contains both the (maximally entangled) Bell states and weakly entangled states like ) and the separable ones (i.e., product states like ). Instead of considering transformations of single copies of a state (like ) one can define classes based on the possibility of multi-copy transformations. E.g., there are examples when is impossible by LOCC, but is possible. A very important (and very coarse) classification is based on the property whether it is possible to transform an arbitrarily large number of copies of a state into at least one pure entangled state. States that have this property are called distillable. These states are the most useful quantum states since, given enough of them, they can be transformed (with local operations) into any entangled state and hence allow for all possible uses. It came initially as a surprise that not all entangled states are distillable; those that are not are called 'bound entangled'. A different entanglement classification is based on what the quantum correlations present in a state allow A and B to do: one distinguishes three subsets of entangled states: (1) the non-local states, which produce correlations that cannot be explained by a local hidden variable model and thus violate a Bell inequality, (2) the steerable states that contain sufficient correlations for A to modify ("steer") by local measurements the conditional reduced state of B in such a way, that A can prove to B that the state they possess is indeed entangled, and finally (3) those entangled states that are neither non-local nor steerable. All three sets are non-empty. Entropy In this section, the entropy of a mixed state is discussed as well as how it can be viewed as a measure of quantum entanglement. Definition In classical information theory , the Shannon entropy, is associated to a probability distribution, , in the following way: Since a mixed state is a probability distribution over an ensemble, this leads naturally to the definition of the von Neumann entropy: which can be expressed in terms of the eigenvalues of : . Since an event of probability 0 should not contribute to the entropy, and given that the convention is adopted. When a pair of particles is described by the spin singlet state discussed above, the von Neumann entropy of either particle is , which can be shown to be the maximum entropy for mixed states. As a measure of entanglement Entropy provides one tool that can be used to quantify entanglement, although other entanglement measures exist. If the overall system is pure, the entropy of one subsystem can be used to measure its degree of entanglement with the other subsystems. For bipartite pure states, the von Neumann entropy of reduced states is the unique measure of entanglement in the sense that it is the only function on the family of states that satisfies certain axioms required of an entanglement measure. It is a classical result that the Shannon entropy achieves its maximum at, and only at, the uniform probability distribution . Therefore, a bipartite pure state is said to be a maximally entangled state if the reduced state of each subsystem of is the diagonal matrix For mixed states, the reduced von Neumann entropy is not the only reasonable entanglement measure. Rényi entropy also can be used as a measure of entanglement. Entanglement measures Entanglement measures quantify the amount of entanglement in a (often viewed as a bipartite) quantum state. 
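In standard form, the Shannon entropy of a probability distribution p is \(H(p) = -\sum_{i} p_{i} \log p_{i}\), and the von Neumann entropy of a density matrix is
\[ S(\rho) = -\operatorname{Tr}(\rho \log \rho) = -\sum_{i} \lambda_{i} \log \lambda_{i}, \]
where the \(\lambda_{i}\) are the eigenvalues of \(\rho\) and \(0 \log 0\) is taken to be 0, following the convention noted above. For either half of the spin singlet, \(\rho = \tfrac{1}{2} I\) and \(S = \log 2\) (one bit), the maximum possible for a qubit.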
As aforementioned, entanglement entropy is the standard measure of entanglement for pure states (but no longer a measure of entanglement for mixed states). For mixed states, there are some entanglement measures in the literature and no single one is standard. Entanglement cost Distillable entanglement Entanglement of formation Concurrence Relative entropy of entanglement Squashed entanglement Logarithmic negativity Most (but not all) of these entanglement measures reduce for pure states to entanglement entropy, and are difficult (NP-hard) to compute for mixed states as the dimension of the entangled system grows. Quantum field theory The Reeh–Schlieder theorem of quantum field theory is sometimes interpreted as saying that entanglement is omnipresent in the quantum vacuum. Applications Entanglement has many applications in quantum information theory. With the aid of entanglement, otherwise impossible tasks may be achieved. Among the best-known applications of entanglement are superdense coding and quantum teleportation. Most researchers believe that entanglement is necessary to realize quantum computing (although this is disputed by some). Entanglement is used in some protocols of quantum cryptography, but to prove the security of quantum key distribution (QKD) under standard assumptions does not require entanglement. However, the device independent security of QKD is shown exploiting entanglement between the communication partners. In August 2014, Brazilian researcher Gabriela Barreto Lemos, from the University of Vienna, and team were able to "take pictures" of objects using photons that had not interacted with the subjects, but were entangled with photons that did interact with such objects. The idea has been adapted to make infrared images using only standard cameras that are insensitive to infrared. Entangled states There are several canonical entangled states that appear often in theory and experiments. For two qubits, the Bell states are These four pure states are all maximally entangled and form an orthonormal basis of the Hilbert space of the two qubits. They provide examples of how quantum mechanics can violate Bell-type inequalities. For qubits, the GHZ state is which reduces to the Bell state for . The traditional GHZ state was defined for . GHZ states are occasionally extended to qudits, i.e., systems of d rather than 2 dimensions. Also for qubits, there are spin squeezed states, a class of squeezed coherent states satisfying certain restrictions on the uncertainty of spin measurements, which are necessarily entangled. Spin squeezed states are good candidates for enhancing precision measurements using quantum entanglement. For two bosonic modes, a NOON state is This is like the Bell state except the basis states and have been replaced with "the N photons are in one mode" and "the N photons are in the other mode". Finally, there also exist twin Fock states for bosonic modes, which can be created by feeding a Fock state into two arms leading to a beam splitter. They are the sum of multiple NOON states, and can be used to achieve the Heisenberg limit. For the appropriately chosen measures of entanglement, Bell, GHZ, and NOON states are maximally entangled while spin squeezed and twin Fock states are only partially entangled. Methods of creating entanglement Entanglement is usually created by direct interactions between subatomic particles. These interactions can take numerous forms. 
One of the most commonly used methods is spontaneous parametric down-conversion to generate a pair of photons entangled in polarization. Other methods include the use of a fibre coupler to confine and mix photons, photons emitted from decay cascade of the bi-exciton in a quantum dot, or the use of the Hong–Ou–Mandel effect. Quantum entanglement of a particle and its antiparticle, such as an electron and a positron, can be created by partial overlap of the corresponding quantum wave functions in Hardy's interferometer. In the earliest tests of Bell's theorem, the entangled particles were generated using atomic cascades. It is also possible to create entanglement between quantum systems that never directly interacted, through the use of entanglement swapping. Two independently prepared, identical particles may also be entangled if their wave functions merely spatially overlap, at least partially. Testing a system for entanglement A density matrix ρ is called separable if it can be written as a convex sum of product states, namely with probabilities. By definition, a state is entangled if it is not separable. For 2-qubit and qubit-qutrit systems (2 × 2 and 2 × 3 respectively) the simple Peres–Horodecki criterion provides both a necessary and a sufficient criterion for separability, and thus—inadvertently—for detecting entanglement. However, for the general case, the criterion is merely a necessary one for separability, as the problem becomes NP-hard when generalized. Other separability criteria include (but not limited to) the range criterion, reduction criterion, and those based on uncertainty relations. See Ref. for a review of separability criteria in discrete-variable systems and Ref. for a review on techniques and challenges in experimental entanglement certification in discrete-variable systems. A numerical approach to the problem is suggested by Jon Magne Leinaas, Jan Myrheim and Eirik Ovrum in their paper "Geometrical aspects of entanglement". Leinaas et al. offer a numerical approach, iteratively refining an estimated separable state towards the target state to be tested, and checking if the target state can indeed be reached. In continuous variable systems, the Peres–Horodecki criterion also applies. Specifically, Simon formulated a particular version of the Peres–Horodecki criterion in terms of the second-order moments of canonical operators and showed that it is necessary and sufficient for -mode Gaussian states (see Ref. for a seemingly different but essentially equivalent approach). It was later found that Simon's condition is also necessary and sufficient for -mode Gaussian states, but no longer sufficient for -mode Gaussian states. Simon's condition can be generalized by taking into account the higher order moments of canonical operators or by using entropic measures. In quantum gravity There is a fundamental conflict, referred to as the problem of time, between the way the concept of time is used in quantum mechanics, and the role it plays in general relativity. In standard quantum theories time acts as an independent background through which states evolve, while general relativity treats time as a dynamical variable which relates directly with matter. Part of the effort to reconcile these approaches to time results in the Wheeler–DeWitt equation, which predicts the state of the universe is timeless or static, contrary to ordinary experience. 
Work started by Don Page and William Wootters suggests that the universe appears to evolve for observers on the inside because of energy entanglement between an evolving system and a clock system, both within the universe. In this way the overall system can remain timeless while parts experience time via entanglement. The issue remains an open question closely related to attempts at theories of quantum gravity. In general relativity, gravity arises from the curvature of spacetime and that curvature derives from the distribution of matter. However, matter is governed by quantum mechanics. Integration of these two theories faces many problems. In an (unrealistic) model space called the anti-de Sitter space, the AdS/CFT correspondence allows a quantum gravitational system to be related to a quantum field theory without gravity. Using this correspondence, Mark Van Raamsdonk suggested that spacetime arises as an emergent phenomenon of the quantum degrees of freedom that are entangled and live in the boundary of the spacetime. Experiments demonstrating and using entanglement Bell tests A Bell test, also known as Bell inequality test or Bell experiment, is a real-world physics experiment designed to test the theory of quantum mechanics against the hypothesis of local hidden variables. These tests empirically evaluate the implications of Bell's theorem. To date, all Bell tests have found that the hypothesis of local hidden variables is inconsistent with the way that physical systems behave. Many types of Bell tests have been performed in physics laboratories, often with the goal of ameliorating problems of experimental design or set-up that could in principle affect the validity of the findings of earlier Bell tests. This is known as "closing loopholes in Bell tests". In earlier tests, it could not be ruled out that the result at one point could have been subtly transmitted to the remote point, affecting the outcome at the second location. However, so-called "loophole-free" Bell tests have since been performed where the locations were sufficiently separated that communications at the speed of light would have taken longer—in one case, 10,000 times longer—than the interval between the measurements. In 2017, Yin et al. reported setting a new quantum entanglement distance record of 1,203 km, demonstrating the survival of a two-photon pair and a violation of a Bell inequality, reaching a CHSH valuation of , under strict Einstein locality conditions, from the Micius satellite to bases in Lijiang, Yunnan and Delingha, Qinghai, increasing the efficiency of transmission over prior fiberoptic experiments by an order of magnitude. Entanglement of top quarks In 2023, the LHC, using techniques from quantum tomography, measured entanglement at the highest energy so far, a rare intersection between quantum information and high-energy physics based on theoretical work first proposed in 2021. The experiment was carried out by the ATLAS detector, which measured the spin of top-quark pair production, and the effect was observed with a more than 5σ level of significance. The top quark is the heaviest known particle and therefore has a very short lifetime (), and it is the only quark that decays before undergoing hadronization (~ ) and spin decorrelation (~ ), so the spin information is transferred without much loss to the leptonic decay products that will be caught by the detector.
The spin polarization and correlation of the particles was measured and tested for entanglement with concurrence as well as the Peres–Horodecki criterion and subsequently the effect has been confirmed too in the CMS detector. Entanglement of macroscopic objects In 2020, researchers reported the quantum entanglement between the motion of a millimetre-sized mechanical oscillator and a disparate distant spin system of a cloud of atoms. Later work complemented this work by quantum-entangling two mechanical oscillators. Entanglement of elements of living systems In October 2018, physicists reported producing quantum entanglement using living organisms, particularly between photosynthetic molecules within living bacteria and quantized light. Living organisms (green sulphur bacteria) have been studied as mediators to create quantum entanglement between otherwise non-interacting light modes, showing high entanglement between light and bacterial modes, and to some extent, even entanglement within the bacteria. Entanglement of quarks and gluons in protons Physicists at Brookhaven National Laboratory demonstrated quantum entanglement within protons, showing quarks and gluons are interdependent rather than isolated particles. Using high-energy electron-proton collisions, they revealed maximal entanglement, reshaping our understanding of proton structure. See also Concurrence CNOT gate Einstein's thought experiments Entanglement witness ER = EPR Multipartite entanglement Normally distributed and uncorrelated does not imply independent Pauli exclusion principle Quantum coherence Quantum discord Quantum network Quantum phase transition Quantum pseudo-telepathy Retrocausality Squashed entanglement Stern–Gerlach experiment Ward's probability amplitude References Further reading External links Explanatory video by Scientific American magazine Entanglement experiment with photon pairs – interactive Audio – Cain/Gay (2009) Astronomy Cast Entanglement "Spooky Actions at a Distance?": Oppenheimer Lecture, Prof. David Mermin (Cornell University) Univ. California, Berkeley, 2008. Non-mathematical popular lecture on YouTube, posted Mar 2008 "Quantum Entanglement versus Classical Correlation" (Interactive demonstration) Quantum information science Quantum measurement
Quantum entanglement
[ "Physics" ]
8,826
[ "Quantum measurement", "Quantum mechanics" ]
25,350
https://en.wikipedia.org/wiki/Quasicrystal
A quasiperiodic crystal, or quasicrystal, is a structure that is ordered but not periodic. A quasicrystalline pattern can continuously fill all available space, but it lacks translational symmetry. While crystals, according to the classical crystallographic restriction theorem, can possess only two-, three-, four-, and six-fold rotational symmetries, the Bragg diffraction pattern of quasicrystals shows sharp peaks with other symmetry orders—for instance, five-fold. Aperiodic tilings were discovered by mathematicians in the early 1960s, and, some twenty years later, they were found to apply to the study of natural quasicrystals. The discovery of these aperiodic forms in nature has produced a paradigm shift in the field of crystallography. In crystallography the quasicrystals were predicted in 1981 by a five-fold symmetry study of Alan Lindsay Mackay,—that also brought in 1982, with the crystallographic Fourier transform of a Penrose tiling, the possibility of identifying quasiperiodic order in a material through diffraction. Quasicrystals had been investigated and observed earlier, but, until the 1980s, they were disregarded in favor of the prevailing views about the atomic structure of matter. In 2009, after a dedicated search, a mineralogical finding, icosahedrite, offered evidence for the existence of natural quasicrystals. Roughly, an ordering is non-periodic if it lacks translational symmetry, which means that a shifted copy will never match exactly with its original. The more precise mathematical definition is that there is never translational symmetry in more than n – 1 linearly independent directions, where n is the dimension of the space filled, e.g., the three-dimensional tiling displayed in a quasicrystal may have translational symmetry in two directions. Symmetrical diffraction patterns result from the existence of an indefinitely large number of elements with a regular spacing, a property loosely described as long-range order. Experimentally, the aperiodicity is revealed in the unusual symmetry of the diffraction pattern, that is, symmetry of orders other than two, three, four, or six. In 1982, materials scientist Dan Shechtman observed that certain aluminium–manganese alloys produced unusual diffractograms, which today are seen as revelatory of quasicrystal structures. Due to fear of the scientific community's reaction, it took him two years to publish the results. Shechtman's discovery challenged the long-held belief that all crystals are periodic. Observed in a rapidly solidified Al-Mn alloy, quasicrystals exhibited icosahedral symmetry, which was previously thought impossible in crystallography. This breakthrough, supported by theoretical models and experimental evidence, led to a paradigm shift in the understanding of solid-state matter. Despite initial skepticism, the discovery gained widespread acceptance, prompting the International Union of Crystallography to redefine the term "crystal." The work ultimately earned Shechtman the 2011 Nobel Prize in Chemistry and inspired significant advancements in materials science and mathematics. On 25 October 2018, Luca Bindi and Paul Steinhardt were awarded the Aspen Institute 2018 Prize for collaboration and scientific research between Italy and the United States, after they discovered icosahedrite, the first quasicrystal known to occur naturally. 
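The restriction to two-, three-, four-, and six-fold symmetry for ordinary crystals follows from a short standard argument, sketched here for context: a rotation that maps a lattice to itself is represented by an integer matrix in a lattice basis, so its trace is an integer; in two dimensions a rotation by \(2\pi/n\) has trace \(2\cos(2\pi/n)\), which is an integer only for \(n = 1, 2, 3, 4, 6\). Five-fold symmetry would require \(2\cos 72^{\circ} = \varphi - 1 \approx 0.618\), which is not an integer; this is precisely the symmetry that quasicrystals realize without a periodic lattice.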
History The first representations of perfect quasicrystalline patterns can be found in several early Islamic works of art and architecture such as the Gunbad-i-Kabud tomb tower, the Darb-e Imam shrine and the Al-Attarine Madrasa. On July 16, 1945, in Alamogordo, New Mexico, the Trinity nuclear bomb test produced icosahedral quasicrystals. They went unnoticed at the time of the test but were later identified in samples of red trinitite, a glass-like substance formed from fused sand and copper transmission lines. Identified in 2021, they are the oldest known anthropogenic quasicrystals. In 1961, Hao Wang asked whether determining if a set of tiles admits a tiling of the plane is an algorithmically unsolvable problem or not. He conjectured that it is solvable, relying on the hypothesis that every set of tiles that can tile the plane can do it periodically (hence, it would suffice to try to tile bigger and bigger patterns until obtaining one that tiles periodically). Nevertheless, two years later, his student Robert Berger constructed a set of some 20,000 square tiles (now called Wang tiles) that can tile the plane but not in a periodic fashion. As further aperiodic sets of tiles were discovered, sets with fewer and fewer shapes were found. In 1974 Roger Penrose discovered a set of just two tiles, now referred to as Penrose tiles, that produced only non-periodic tilings of the plane. These tilings displayed instances of fivefold symmetry. One year later Alan Mackay showed theoretically that the diffraction pattern from the Penrose tiling had a two-dimensional Fourier transform consisting of sharp 'delta' peaks arranged in a fivefold symmetric pattern. Around the same time, Robert Ammann created a set of aperiodic tiles that produced eightfold symmetry. In 1972, R. M. de Wolf and W. van Aalst reported that the diffraction pattern produced by a crystal of sodium carbonate cannot be labeled with three indices but needed one more, which implied that the underlying structure had four dimensions in reciprocal space. Other puzzling cases have been reported, but until the concept of quasicrystal came to be established, they were explained away or denied. Dan Shechtman first observed ten-fold electron diffraction patterns in 1982, while conducting a routine study of an aluminium–manganese alloy, Al6Mn, at the US National Bureau of Standards (later NIST). Shechtman related his observation to Ilan Blech, who responded that such diffractions had been seen before. Around that time, Shechtman also related his finding to John W. Cahn of the NIST, who did not offer any explanation and challenged him to solve the observation. Shechtman quoted Cahn as saying: "Danny, this material is telling us something, and I challenge you to find out what it is". The observation of the ten-fold diffraction pattern lay unexplained for two years until the spring of 1984, when Blech asked Shechtman to show him his results again. A quick study of Shechtman's results showed that the common explanation for a ten-fold symmetrical diffraction pattern, a type of crystal twinning, was ruled out by his experiments. Therefore, Blech looked for a new structure containing cells connected to each other by defined angles and distances but without translational periodicity. He decided to use a computer simulation to calculate the diffraction intensity from a cluster of such a material, which he termed as "multiple polyhedral", and found a ten-fold structure similar to what was observed. 
The multiple polyhedral structure was termed later by many researchers as icosahedral glass. Shechtman accepted Blech's discovery of a new type of material and chose to publish his observation in a paper entitled "The Microstructure of Rapidly Solidified Al6Mn", which was written around June 1984 and published in a 1985 edition of Metallurgical Transactions A. Meanwhile, on seeing the draft of the paper, John Cahn suggested that Shechtman's experimental results merit a fast publication in a more appropriate scientific journal. Shechtman agreed and, in hindsight, called this fast publication "a winning move". This paper, published in the Physical Review Letters, repeated Shechtman's observation and used the same illustrations as the original paper. Originally, the new form of matter was dubbed "Shechtmanite". The term "quasicrystal" was first used in print by Paul Steinhardt and Dov Levine shortly after Shechtman's paper was published. Also in 1985, T. Ishimasa et al. reported twelvefold symmetry in Ni-Cr particles. Soon, eightfold diffraction patterns were recorded in V-Ni-Si and Cr-Ni-Si alloys. Over the years, hundreds of quasicrystals with various compositions and different symmetries have been discovered. The first quasicrystalline materials were thermodynamically unstable: when heated, they formed regular crystals. However, in 1987, the first of many stable quasicrystals were discovered, making it possible to produce large samples for study and applications. In 1992, the International Union of Crystallography altered its definition of a crystal, reducing it to the ability to produce a clear-cut diffraction pattern and acknowledging the possibility of the ordering to be either periodic or aperiodic. In 2001, Steinhardt hypothesized that quasicrystals could exist in nature and developed a method of recognition, inviting all the mineralogical collections of the world to identify any badly cataloged crystals. In 2007 Steinhardt received a reply by Luca Bindi, who found a quasicrystalline specimen from Khatyrka in the University of Florence Mineralogical Collection. The crystal samples were sent to Princeton University for other tests, and in late 2009, Steinhardt confirmed its quasicrystalline character. This quasicrystal, with a composition of Al63Cu24Fe13, was named icosahedrite and it was approved by the International Mineralogical Association in 2010. Analysis indicates it may be meteoritic in origin, possibly delivered from a carbonaceous chondrite asteroid. In 2011, Bindi, Steinhardt, and a team of specialists found more icosahedrite samples from Khatyrka. A further study of Khatyrka meteorites revealed micron-sized grains of another natural quasicrystal, which has a ten-fold symmetry and a chemical formula of Al71Ni24Fe5. This quasicrystal is stable in a narrow temperature range, from 1120 to 1200 K at ambient pressure, which suggests that natural quasicrystals are formed by rapid quenching of a meteorite heated during an impact-induced shock. Shechtman was awarded the Nobel Prize in Chemistry in 2011 for his work on quasicrystals. "His discovery of quasicrystals revealed a new principle for packing of atoms and molecules," stated the Nobel Committee and pointed that "this led to a paradigm shift within chemistry." In 2014, Post of Israel issued a stamp dedicated to quasicrystals and the 2011 Nobel Prize. While the first quasicrystals discovered were made out of intermetallic components, later on quasicrystals were also discovered in soft-matter and molecular systems. 
Soft quasicrystal structures have been found in supramolecular dendrimer liquids and ABC star polymers in 2004 and 2007. In 2009, it was found that thin-film quasicrystals can be formed by self-assembly of uniformly shaped, nano-sized molecular units at an air-liquid interface. It was demonstrated that these units can be both inorganic and organic. Additionally, in the 2010s, two-dimensional molecular quasicrystals were discovered, driven by intermolecular interactions and interface interactions. In 2018, chemists from Brown University announced the successful creation of a self-constructing lattice structure based on a strangely shaped quantum dot. While single-component quasicrystal lattices have been previously predicted mathematically and in computer simulations, they had not been demonstrated prior to this. Mathematics There are several ways to mathematically define quasicrystalline patterns. One definition, the "cut and project" construction, is based on the work of Harald Bohr (mathematician brother of Niels Bohr). The concept of an almost periodic function (also called a quasiperiodic function) was studied by Bohr, including work of Bohl and Esclangon. He introduced the notion of a superspace. Bohr showed that quasiperiodic functions arise as restrictions of high-dimensional periodic functions to an irrational slice (an intersection with one or more hyperplanes), and discussed their Fourier point spectrum. These functions are not exactly periodic, but they are arbitrarily close in some sense, as well as being a projection of an exactly periodic function. In order that the quasicrystal itself be aperiodic, this slice must avoid any lattice plane of the higher-dimensional lattice. De Bruijn showed that Penrose tilings can be viewed as two-dimensional slices of five-dimensional hypercubic structures; similarly, icosahedral quasicrystals in three dimensions are projected from a six-dimensional hypercubic lattice, as first described by Peter Kramer and Roberto Neri in 1984. Equivalently, the Fourier transform of such a quasicrystal is nonzero only at a dense set of points spanned by integer multiples of a finite set of basis vectors, which are the projections of the primitive reciprocal lattice vectors of the higher-dimensional lattice. Classical theory of crystals reduces crystals to point lattices where each point is the center of mass of one of the identical units of the crystal. The structure of crystals can be analyzed by defining an associated group. Quasicrystals, on the other hand, are composed of more than one type of unit, so, instead of lattices, quasilattices must be used. Instead of groups, groupoids, the mathematical generalization of groups in category theory, are the appropriate tool for studying quasicrystals. Using mathematics for construction and analysis of quasicrystal structures is a difficult task. Computer modeling, based on the existing theories of quasicrystals, has, however, greatly facilitated this task. Advanced programs have been developed allowing one to construct, visualize and analyze quasicrystal structures and their diffraction patterns. The aperiodic nature of quasicrystals can also make theoretical studies of physical properties, such as electronic structure, difficult due to the inapplicability of Bloch's theorem. However, spectra of quasicrystals can still be computed with error control. Study of quasicrystals may shed light on the most basic notions related to the quantum critical point observed in heavy fermion metals.
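The cut-and-project construction described above can be illustrated in one dimension with a short numerical sketch (illustrative code under simplifying assumptions, not taken from any cited work): points of the ordinary square lattice lying in a narrow strip around a line of irrational slope 1/φ are kept and projected onto that line, producing a Fibonacci-type chain with two interval lengths, whose ratio is the golden ratio and whose sequence never repeats periodically.

import numpy as np

phi = (1 + np.sqrt(5)) / 2          # golden ratio
slope = 1 / phi                     # irrational slope of the "physical" line

# The cut: for each column p of the square lattice, keep the point (p, q)
# lying just below the line q = slope * p (a strip of unit vertical width).
p = np.arange(0, 40)
q = np.floor(slope * p)

# The projection: orthogonal projection onto the direction (1, slope).
x = (p + slope * q) / np.sqrt(1 + slope ** 2)

# Only two spacings occur, with ratio phi, arranged aperiodically.
gaps = np.diff(x)
labels = ''.join('L' if g > 1.2 * gaps.min() else 'S' for g in gaps)
print(labels)    # an aperiodic sequence of long (L) and short (S) intervals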
Experimental measurements on an Au–Al–Yb quasicrystal have revealed a quantum critical point defining the divergence of the magnetic susceptibility as temperature tends to zero. It is suggested that the electronic system of some quasicrystals is located at a quantum critical point without tuning, while quasicrystals exhibit the typical scaling behaviour of their thermodynamic properties and belong to the well-known family of heavy fermion metals. Materials science Since the original discovery by Dan Shechtman, hundreds of quasicrystals have been reported and confirmed. Quasicrystals are found most often in aluminium alloys (Al–Li–Cu, Al–Mn–Si, Al–Ni–Co, Al–Pd–Mn, Al–Cu–Fe, Al–Cu–V, etc.), but numerous other compositions are also known (Cd–Yb, Ti–Zr–Ni, Zn–Mg–Ho, Zn–Mg–Sc, In–Ag–Yb, Pd–U–Si, etc.). Two types of quasicrystals are known. The first type, polygonal (dihedral) quasicrystals, have an axis of 8-, 10-, or 12-fold local symmetry (octagonal, decagonal, or dodecagonal quasicrystals, respectively). They are periodic along this axis and quasiperiodic in planes normal to it. The second type, icosahedral quasicrystals, are aperiodic in all directions. Icosahedral quasicrystals have a three-dimensional quasiperiodic structure and possess fifteen 2-fold, ten 3-fold and six 5-fold axes in accordance with their icosahedral symmetry. Quasicrystals fall into three groups of different thermal stability: stable quasicrystals grown by slow cooling or casting with subsequent annealing, metastable quasicrystals prepared by melt spinning, and metastable quasicrystals formed by the crystallization of the amorphous phase. Except for the Al–Li–Cu system, all the stable quasicrystals are almost free of defects and disorder, as evidenced by X-ray and electron diffraction revealing peak widths as sharp as those of perfect crystals such as Si. Diffraction patterns exhibit fivefold, threefold, and twofold symmetries, and reflections are arranged quasiperiodically in three dimensions. The origin of the stabilization mechanism is different for the stable and metastable quasicrystals. Nevertheless, there is a common feature observed in most quasicrystal-forming liquid alloys or their undercooled liquids: a local icosahedral order. The icosahedral order is in equilibrium in the liquid state for the stable quasicrystals, whereas the icosahedral order prevails in the undercooled liquid state for the metastable quasicrystals. A nanoscale icosahedral phase was formed in Zr-, Cu- and Hf-based bulk metallic glasses alloyed with noble metals. Most quasicrystals have ceramic-like properties including high thermal and electrical resistance, hardness and brittleness, resistance to corrosion, and non-stick properties. Many metallic quasicrystalline substances are impractical for most applications due to their thermal instability; the Al–Cu–Fe ternary system and the Al–Cu–Fe–Cr and Al–Co–Fe–Cr quaternary systems, thermally stable up to 700 °C, are notable exceptions. Quasi-ordered droplet crystals can also form under dipolar forces in a Bose–Einstein condensate. While a softcore Rydberg dressing interaction forms triangular droplet crystals, adding a Gaussian peak to the plateau-type interaction produces multiple roton-unstable points in the Bogoliubov spectrum. The excitations around the roton instabilities then grow exponentially and favour multiple allowed lattice constants, leading to quasi-ordered periodic droplet crystals. 
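The cut-and-project construction described in the Mathematics section above can be made concrete in one dimension. The sketch below is illustrative only: it assumes the standard textbook choice of projecting the square lattice Z² onto a line of slope 1/φ (φ the golden ratio), with an acceptance window equal to the projection of a unit cell, which yields the Fibonacci chain; the function name and parameter values are arbitrary.

```python
# Minimal cut-and-project sketch: the Fibonacci chain as a 1D quasicrystal.
# Lattice points of Z^2 falling inside a strip are projected onto the
# "physical" line; the result is aperiodic but has only two interval lengths.
import math

phi = (1 + math.sqrt(5)) / 2              # golden ratio
norm = math.sqrt(1 + phi ** 2)            # normalization for the projections
window = (1 + phi) / norm                 # width of the acceptance window

def fibonacci_chain(extent: int) -> list[float]:
    points = []
    for m in range(-extent, extent):
        for k in range(-extent, extent):
            perp = (-m + k * phi) / norm             # coordinate in "internal" space
            if 0 <= perp < window:                   # keep points inside the strip
                points.append((m * phi + k) / norm)  # coordinate on the physical line
    return sorted(points)

chain = fibonacci_chain(15)
gaps = {round(b - a, 3) for a, b in zip(chain, chain[1:])}
print(gaps)   # two distinct spacings, whose ratio is the golden ratio
```

Repeating the same projection from a six-dimensional hypercubic lattice, rather than Z², is what produces the icosahedral quasicrystals discussed above.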
Applications Quasicrystalline substances have potential applications in several forms. Metallic quasicrystalline coatings can be applied by thermal spraying or magnetron sputtering. A problem that must be resolved is the tendency for cracking due to the materials' extreme brittleness. Cracking can be suppressed by reducing sample dimensions or coating thickness. Recent studies show that typically brittle quasicrystals can exhibit remarkable ductility of over 50% strain at room temperature and sub-micrometer scales (<500 nm). An application was the use of low-friction Al–Cu–Fe–Cr quasicrystals as a coating for frying pans. Food did not stick to it as much as to stainless steel, making the pan moderately non-stick and easy to clean; heat transfer and durability were better than PTFE non-stick cookware and the pan was free from perfluorooctanoic acid (PFOA); the surface was very hard, claimed to be ten times harder than stainless steel, and not harmed by metal utensils or cleaning in a dishwasher; and the pan could withstand high temperatures without harm. However, after an initial introduction the pans were made with chrome steel instead, probably because of the difficulty of controlling thin films of the quasicrystal. The Nobel citation said that quasicrystals, while brittle, could reinforce steel "like armor". When Shechtman was asked about potential applications of quasicrystals he said that a precipitation-hardened stainless steel is produced that is strengthened by small quasicrystalline particles. It does not corrode and is extremely strong, suitable for razor blades and surgical instruments. The small quasicrystalline particles impede the motion of dislocations in the material. Quasicrystals were also being used to develop heat insulation, LEDs, diesel engines, and new materials that convert heat to electricity. Shechtman suggested new applications taking advantage of the low coefficient of friction and the hardness of some quasicrystalline materials, for example embedding particles in plastic to make strong, hard-wearing, low-friction plastic gears. The low heat conductivity of some quasicrystals makes them good for heat-insulating coatings. One of the special properties of quasicrystals is that, despite their irregular atomic structure, their surfaces can be smooth and flat. Other potential applications include selective solar absorbers for power conversion, broad-wavelength reflectors, and bone repair and prostheses applications where biocompatibility, low friction and corrosion resistance are required. Magnetron sputtering can be readily applied to other stable quasicrystalline alloys such as Al–Pd–Mn. Non-material science applications Applications in macroscopic engineering have been suggested, building quasi-crystal-like large-scale engineering structures, which could have interesting physical properties. Also, aperiodic tiling lattice structures may be used instead of isogrid or honeycomb patterns. None of these seem to have been put to use in practice. See also References Further reading External links A Partial Bibliography of Literature on Quasicrystals (1996–2008). Quasicrystals: an introduction by R. Lifshitz Quasicrystals: an introduction by S. 
Weber Steinhardt's proposal Quasicrystal Research – Documentary 2011 on the research of the University of Stuttgart "Indiana Steinhardt and the Quest for Quasicrystals – A Conversation with Paul Steinhardt" , Ideas Roadshow, 2016 BBC webpage showing pictures of Quasicrystals Quasicrystal Blocks: Description and Cut & Fold Instructions Space-filling models Crystallography Condensed matter physics Tessellation
Quasicrystal
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
4,532
[ "Symmetry", "Tessellation", "Euclidean plane geometry", "Phases of matter", "Materials science", "Crystallography", "Condensed matter physics", "Planes (geometry)", "Quasicrystals", "Matter" ]
25,418
https://en.wikipedia.org/wiki/Proof%20by%20contradiction
In logic, proof by contradiction is a form of proof that establishes the truth or the validity of a proposition by showing that assuming the proposition to be false leads to a contradiction. Although it is quite freely used in mathematical proofs, not every school of mathematical thought accepts this kind of nonconstructive proof as universally valid. More broadly, proof by contradiction is any form of argument that establishes a statement by arriving at a contradiction, even when the initial assumption is not the negation of the statement to be proved. In this general sense, proof by contradiction is also known as indirect proof, proof by assuming the opposite, and reductio ad impossibile. A mathematical proof employing proof by contradiction usually proceeds as follows: The proposition to be proved is P. We assume P to be false, i.e., we assume ¬P. It is then shown that ¬P implies falsehood. This is typically accomplished by deriving two mutually contradictory assertions, Q and ¬Q, and appealing to the law of noncontradiction. Since assuming P to be false leads to a contradiction, it is concluded that P is in fact true. An important special case is the existence proof by contradiction: in order to demonstrate that an object with a given property exists, we derive a contradiction from the assumption that all objects satisfy the negation of the property. Formalization The principle may be formally expressed as the propositional formula ¬¬P ⇒ P, equivalently (¬P ⇒ ⊥) ⇒ P, which reads: "If assuming P to be false implies falsehood, then P is true." In natural deduction the principle takes the form of the rule of inference which reads: "If ¬P ⇒ ⊥ is proved, then P may be concluded." In sequent calculus the principle is expressed by the sequent Γ, ¬¬P ⊢ P, Δ, which reads: "Hypotheses Γ and ¬¬P entail the conclusion P or Δ." Justification In classical logic the principle may be justified by the examination of the truth table of the proposition ¬¬P ⇒ P, which demonstrates it to be a tautology. Another way to justify the principle is to derive it from the law of the excluded middle, as follows. We assume ¬¬P and seek to prove P. By the law of excluded middle P either holds or it does not: if P holds, then of course P holds; if ¬P holds, then we derive falsehood by applying the law of noncontradiction to ¬P and ¬¬P, after which the principle of explosion allows us to conclude P. In either case, we have established P. It turns out that, conversely, proof by contradiction can be used to derive the law of excluded middle. In the classical sequent calculus LK, proof by contradiction is derivable from the inference rules for negation. Relationship with other proof techniques Refutation by contradiction Proof by contradiction is similar to refutation by contradiction, also known as proof of negation, which states that ¬P is proved as follows: The proposition to be proved is ¬P. Assume P. Derive falsehood. Conclude ¬P. In contrast, proof by contradiction proceeds as follows: The proposition to be proved is P. Assume ¬P. Derive falsehood. Conclude P. Formally these are not the same, as refutation by contradiction applies only when the proposition to be proved is negated, whereas proof by contradiction may be applied to any proposition whatsoever. In classical logic, where P and ¬¬P may be freely interchanged, the distinction is largely obscured. Thus in mathematical practice, both principles are referred to as "proof by contradiction". 
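The contrast between proof by contradiction and refutation by contradiction can be stated compactly in a proof assistant. The following is a minimal sketch in Lean 4 (the theorem names are arbitrary); it assumes only the core library, where `Classical.byContradiction` embodies exactly the classical principle ¬¬P ⇒ P, while the second lemma needs no classical axiom at all.

```lean
-- Proof by contradiction: from a proof that ¬P leads to falsehood, conclude P.
-- This requires the classical axiom packaged in `Classical.byContradiction`.
theorem proof_by_contradiction (P : Prop) (h : ¬P → False) : P :=
  Classical.byContradiction h

-- Refutation by contradiction (proof of negation): from "P leads to falsehood",
-- conclude ¬P. Negation is defined as P → False, so this is intuitionistically valid.
theorem refutation_by_contradiction (P : Prop) (h : P → False) : ¬P :=
  h
```

The asymmetry is visible in the proofs: the second theorem is definitionally immediate, while the first genuinely needs classical reasoning, matching the discussion of intuitionistic validity below.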
Law of the excluded middle Proof by contradiction is equivalent to the law of the excluded middle, first formulated by Aristotle, which states that either an assertion or its negation is true, P ∨ ¬P. Law of non-contradiction The law of noncontradiction was first stated as a metaphysical principle by Aristotle. It posits that a proposition and its negation cannot both be true, or equivalently, that a proposition cannot be both true and false. Formally the law of non-contradiction is written as ¬(P ∧ ¬P) and read as "it is not the case that a proposition is both true and false". The law of non-contradiction neither follows from nor implies the principle of proof by contradiction. The laws of excluded middle and non-contradiction together mean that exactly one of P and ¬P is true. Proof by contradiction in intuitionistic logic In intuitionistic logic proof by contradiction is not generally valid, although some particular instances can be derived. In contrast, proof of negation and the principle of noncontradiction are both intuitionistically valid. The Brouwer–Heyting–Kolmogorov interpretation gives proof by contradiction an intuitionistic validity condition phrased in terms of a "method" for establishing the proposition. If we take "method" to mean algorithm, then the condition is not acceptable, as it would allow us to solve the Halting problem. To see how, consider the statement H(M) stating "Turing machine M halts or does not halt". Its negation ¬H(M) states that "M neither halts nor does not halt", which is false by the law of noncontradiction (which is intuitionistically valid). If proof by contradiction were intuitionistically valid, we would obtain an algorithm for deciding whether an arbitrary Turing machine M halts, thereby violating the (intuitionistically valid) proof of non-solvability of the Halting problem. A proposition P which satisfies ¬¬P ⇒ P is known as a ¬¬-stable proposition. Thus in intuitionistic logic proof by contradiction is not universally valid, but can only be applied to the ¬¬-stable propositions. An instance of such a proposition is a decidable one, i.e., one satisfying P ∨ ¬P. Indeed, the above proof that the law of excluded middle implies proof by contradiction can be repurposed to show that a decidable proposition is ¬¬-stable. A typical example of a decidable proposition is a statement that can be checked by direct computation, such as "n is prime" or "a divides b". Examples of proofs by contradiction Euclid's Elements An early occurrence of proof by contradiction can be found in Euclid's Elements, Book 1, Proposition 6: If in a triangle two angles equal one another, then the sides opposite the equal angles also equal one another. The proof proceeds by assuming that the opposite sides are not equal, and derives a contradiction. Hilbert's Nullstellensatz An influential proof by contradiction was given by David Hilbert. His Nullstellensatz states: If p1, …, pn are polynomials in finitely many indeterminates with complex coefficients which have no common complex zeros, then there are polynomials q1, …, qn such that p1q1 + ⋯ + pnqn = 1. Hilbert proved the statement by assuming that there are no such polynomials and derived a contradiction. Infinitude of primes Euclid's theorem states that there are infinitely many primes. In Euclid's Elements the theorem is stated in Book IX, Proposition 20: Prime numbers are more than any assigned multitude of prime numbers. Depending on how we formally write the above statement, the usual proof takes either the form of a proof by contradiction or a refutation by contradiction. 
We present here the former; see below how the proof is done as refutation by contradiction. If we formally express Euclid's theorem as saying that for every natural number there is a prime bigger than it, then we employ proof by contradiction, as follows. Given any number n, we seek to prove that there is a prime larger than n. Suppose to the contrary that no such p exists (an application of proof by contradiction). Then all primes are smaller than or equal to n, and we may form the list p1, …, pk of them all. Let P = p1 · … · pk be the product of all primes and N = P + 1. Because N is larger than all prime numbers it is not prime, hence it must be divisible by one of them, say pi. Now both P and N are divisible by pi, hence so is their difference N − P = 1, but this cannot be because 1 is not divisible by any primes. Hence we have a contradiction and so there is a prime number bigger than n. Examples of refutations by contradiction The following examples are commonly referred to as proofs by contradiction, but formally employ refutation by contradiction (and therefore are intuitionistically valid). Infinitude of primes Let us take a second look at Euclid's theorem – Book IX, Proposition 20: Prime numbers are more than any assigned multitude of prime numbers. We may read the statement as saying that for every finite list of primes, there is another prime not on that list, which is arguably closer to and in the same spirit as Euclid's original formulation. In this case Euclid's proof applies refutation by contradiction at one step, as follows. Given any finite list of prime numbers p1, …, pn, it will be shown that at least one additional prime number not in this list exists. Let P = p1 · … · pn be the product of all the listed primes and p a prime factor of P + 1, possibly P + 1 itself. We claim that p is not in the given list of primes. Suppose to the contrary that it were (an application of refutation by contradiction). Then p would divide both P and P + 1, therefore also their difference, which is 1. This gives a contradiction, since no prime number divides 1. Irrationality of the square root of 2 The classic proof that the square root of 2 is irrational is a refutation by contradiction. Indeed, we set out to prove the negation ¬∃ a, b ∈ ℕ. a/b = √2 by assuming that there exist natural numbers a and b whose ratio is the square root of two, and derive a contradiction. Proof by infinite descent Proof by infinite descent is a method of proof whereby a smallest object with a desired property is shown not to exist as follows: Assume that there is a smallest object with the desired property. Demonstrate that an even smaller object with the desired property exists, thereby deriving a contradiction. Such a proof is again a refutation by contradiction. A typical example is the proof of the proposition "there is no smallest positive rational number": assume there is a smallest positive rational number q and derive a contradiction by observing that q/2 is even smaller than q and still positive. Russell's paradox Russell's paradox, stated set-theoretically as "there is no set whose elements are precisely those sets that do not contain themselves", is a negated statement whose usual proof is a refutation by contradiction. Notation Proofs by contradiction sometimes end with the word "Contradiction!". Isaac Barrow and Baermann used the notation Q.E.A., for "quod est absurdum" ("which is absurd"), along the lines of Q.E.D., but this notation is rarely used today. A graphical symbol sometimes used for contradictions is a downwards zigzag arrow "lightning" symbol (U+21AF: ↯), for example in Davey and Priestley. 
Other symbols sometimes used include a pair of opposing arrows, struck-out arrows, a stylized form of hash (such as U+2A33: ⨳), or the "reference mark" (U+203B: ※). Hardy's view G. H. Hardy described proof by contradiction as "one of a mathematician's finest weapons", saying "It is a far finer gambit than any chess gambit: a chess player may offer the sacrifice of a pawn or even a piece, but a mathematician offers the game." Automated theorem proving In automated theorem proving the method of resolution is based on proof by contradiction. That is, in order to show that a given statement is entailed by given hypotheses, the automated prover assumes the hypotheses and the negation of the statement, and attempts to derive a contradiction. See also Law of excluded middle Law of noncontradiction Proof by exhaustion Proof by infinite descent Modus tollens Reductio ad absurdum References Further reading and external links Proof by Contradiction from Larry W. Cusick's How To Write Proofs Reductio ad Absurdum Internet Encyclopedia of Philosophy; ISSN 2161-0002 Mathematical proofs Methods of proof Theorems in propositional logic
Proof by contradiction
[ "Mathematics" ]
2,496
[ "Proof theory", "Methods of proof", "Theorems in propositional logic", "nan", "Theorems in the foundations of mathematics" ]
25,453
https://en.wikipedia.org/wiki/Rheology
Rheology is the study of the flow of matter, primarily in a fluid (liquid or gas) state but also as "soft solids" or solids under conditions in which they respond with plastic flow rather than deforming elastically in response to an applied force. Rheology is the branch of physics that deals with the deformation and flow of materials, both solids and liquids. The term rheology was coined by Eugene C. Bingham, a professor at Lafayette College, in 1920 from a suggestion by a colleague, Markus Reiner. The term was inspired by the aphorism of Simplicius (often attributed to Heraclitus), panta rhei ('everything flows'), and was first used to describe the flow of liquids and the deformation of solids. It applies to substances that have a complex microstructure, such as muds, sludges, suspensions, and polymers and other glass formers (e.g., silicates), as well as many foods and additives, bodily fluids (e.g., blood) and other biological materials, and other materials that belong to the class of soft matter such as food. Newtonian fluids can be characterized by a single coefficient of viscosity for a specific temperature. Although this viscosity will change with temperature, it does not change with the strain rate. Only a small group of fluids exhibit such constant viscosity. The large class of fluids whose viscosity changes with the strain rate (the relative flow velocity) are called non-Newtonian fluids. Rheology generally accounts for the behavior of non-Newtonian fluids by characterizing the minimum number of functions that are needed to relate stresses with rate of change of strain or strain rates. For example, ketchup can have its viscosity reduced by shaking (or other forms of mechanical agitation, where the relative movement of different layers in the material actually causes the reduction in viscosity), but water cannot. Ketchup is a shear-thinning material, like yogurt and emulsion paint (US terminology latex paint or acrylic paint), exhibiting thixotropy, where an increase in relative flow velocity will cause a reduction in viscosity, for example, by stirring. Some other non-Newtonian materials show the opposite behavior, rheopecty (viscosity increasing with relative deformation), and are called shear-thickening or dilatant materials. Since Sir Isaac Newton originated the concept of viscosity, the study of liquids with strain-rate-dependent viscosity is also often called non-Newtonian fluid mechanics. The experimental characterisation of a material's rheological behaviour is known as rheometry, although the term rheology is frequently used synonymously with rheometry, particularly by experimentalists. Theoretical aspects of rheology are the relation of the flow/deformation behaviour of material and its internal structure (e.g., the orientation and elongation of polymer molecules) and the flow/deformation behaviour of materials that cannot be described by classical fluid mechanics or elasticity. Scope In practice, rheology is principally concerned with extending continuum mechanics to characterize the flow of materials that exhibit a combination of elastic, viscous and plastic behavior by properly combining elasticity and (Newtonian) fluid mechanics. It is also concerned with predicting mechanical behavior (on the continuum mechanical scale) based on the micro- or nanostructure of the material, e.g. the molecular size and architecture of polymers in solution or the particle size distribution in a solid suspension. 
Materials with the characteristics of a fluid will flow when subjected to a stress, which is defined as the force per area. There are different sorts of stress (e.g. shear, torsional, etc.), and materials can respond differently under different stresses. Much of theoretical rheology is concerned with associating external forces and torques with internal stresses, internal strain gradients, and flow velocities. Rheology unites the seemingly unrelated fields of plasticity and non-Newtonian fluid dynamics by recognizing that materials undergoing these types of deformation are unable to support a stress (particularly a shear stress, since it is easier to analyze shear deformation) in static equilibrium. In this sense, a solid undergoing plastic deformation is a fluid, although no viscosity coefficient is associated with this flow. Granular rheology refers to the continuum mechanical description of granular materials. One of the major tasks of rheology is to establish by measurement the relationships between strains (or rates of strain) and stresses, although a number of theoretical developments (such as assuring frame invariants) are also required before using the empirical data. These experimental techniques are known as rheometry and are concerned with the determination of well-defined rheological material functions. Such relationships are then amenable to mathematical treatment by the established methods of continuum mechanics. The characterization of flow or deformation originating from a simple shear stress field is called shear rheometry (or shear rheology). The study of extensional flows is called extensional rheology. Shear flows are much easier to study and thus much more experimental data are available for shear flows than for extensional flows. Viscoelasticity Fluid and solid character are relevant at long times: we consider the application of a constant stress (a so-called creep experiment). If the material, after some deformation, eventually resists further deformation, it is considered a solid; if, by contrast, the material flows indefinitely, it is considered a fluid. By contrast, elastic and viscous (or intermediate, viscoelastic) behaviour is relevant at short times (transient behaviour). We again consider the application of a constant stress. If the material deformation strain increases linearly with increasing applied stress, then the material is linear elastic within the range in which it shows recoverable strains; elasticity is essentially a time-independent process, as the strains appear the moment the stress is applied, without any time delay. If the material deformation strain rate increases linearly with increasing applied stress, then the material is viscous in the Newtonian sense; these materials are characterized by the time delay between the applied constant stress and the maximum strain. If the material behaves as a combination of viscous and elastic components, then the material is viscoelastic; theoretically such materials can show both instantaneous deformation, as an elastic material, and a delayed time-dependent deformation, as in fluids. Plasticity is the behavior observed after the material is subjected to a yield stress: a material that behaves as a solid under low applied stresses may start to flow above a certain level of stress, called the yield stress of the material. The term plastic solid is often used when this plasticity threshold is rather high, while yield stress fluid is used when the threshold stress is rather low. 
However, there is no fundamental difference between the two concepts. Dimensionless numbers Deborah number On one end of the spectrum we have an inviscid or a simple Newtonian fluid and on the other end, a rigid solid; thus the behavior of all materials falls somewhere in between these two ends. The difference in material behavior is characterized by the level and nature of elasticity present in the material when it deforms, which takes the material behavior to the non-Newtonian regime. The non-dimensional Deborah number is designed to account for the degree of non-Newtonian behavior in a flow. The Deborah number is defined as the ratio of the characteristic time of relaxation (which purely depends on the material and other conditions like the temperature) to the characteristic time of experiment or observation. Small Deborah numbers represent Newtonian flow, while non-Newtonian (with both viscous and elastic effects present) behavior occurs for intermediate-range Deborah numbers, and high Deborah numbers indicate an elastic/rigid solid. Since the Deborah number is a relative quantity, the numerator or the denominator can alter the number. A very small Deborah number can be obtained for a fluid with extremely small relaxation time or a very large experimental time, for example. Reynolds number In fluid mechanics, the Reynolds number is a measure of the ratio of inertial forces to viscous forces and consequently it quantifies the relative importance of these two types of effect for given flow conditions. Under low Reynolds numbers viscous effects dominate and the flow is laminar, whereas at high Reynolds numbers inertia predominates and the flow may be turbulent. However, since rheology is concerned with fluids which do not have a fixed viscosity, but one which can vary with flow and time, calculation of the Reynolds number can be complicated. It is one of the most important dimensionless numbers in fluid dynamics and is used, usually along with other dimensionless numbers, to provide a criterion for determining dynamic similitude. When two geometrically similar flow patterns, in perhaps different fluids with possibly different flow rates, have the same values for the relevant dimensionless numbers, they are said to be dynamically similar. Typically it is given as follows: Re = ρusL/μ = usL/ν, where: us – mean flow velocity, [m s−1] L – characteristic length, [m] μ – (absolute) dynamic fluid viscosity, [N s m−2] or [Pa s] ν – kinematic fluid viscosity: ν = μ/ρ, [m2 s−1] ρ – fluid density, [kg m−3]. Measurement Rheometers are instruments used to characterize the rheological properties of materials, typically fluids that are melts or solutions. These instruments impose a specific stress field or deformation to the fluid, and monitor the resultant deformation or stress. Instruments can be run in steady flow or oscillatory flow, in both shear and extension. Applications Rheology has applications in materials science, engineering, geophysics, physiology, human biology and pharmaceutics. Materials science is utilized in the production of many industrially important substances, such as cement, paint, and chocolate, which have complex flow characteristics. In addition, plasticity theory has been similarly important for the design of metal forming processes. The science of rheology and the characterization of viscoelastic properties in the production and use of polymeric materials has been critical for the production of many products for use in both the industrial and military sectors. 
Study of flow properties of liquids is important for pharmacists working in the manufacture of several dosage forms, such as simple liquids, ointments, creams, pastes etc. The flow behavior of liquids under applied stress is of great relevance in the field of pharmacy. Flow properties are used as important quality control tools to maintain the superiority of the product and reduce batch-to-batch variations. Materials science Polymers Examples may be given to illustrate the potential applications of these principles to practical problems in the processing and use of rubbers, plastics, and fibers. Polymers constitute the basic materials of the rubber and plastic industries and are of vital importance to the textile, petroleum, automobile, paper, and pharmaceutical industries. Their viscoelastic properties determine the mechanical performance of the final products of these industries, and also the success of processing methods at intermediate stages of production. In viscoelastic materials, such as most polymers and plastics, the presence of liquid-like behaviour depends on, and so varies with, the rate of applied load, i.e., how quickly a force is applied. The silicone toy 'Silly Putty' behaves quite differently depending on the time rate of applying a force. Pull on it slowly and it exhibits continuous flow, similar to that evidenced in a highly viscous liquid. Alternatively, when hit hard and directly, it shatters like a silicate glass. In addition, conventional rubber undergoes a glass transition (often called a rubber-glass transition). For example, the Space Shuttle Challenger disaster was caused by rubber O-rings that were being used well below their glass transition temperature on an unusually cold Florida morning, and thus could not flex adequately to form proper seals between sections of the two solid-fuel rocket boosters. Biopolymers Sol-gel With the viscosity of a sol adjusted into a proper range, both optical-quality glass fiber and refractory ceramic fiber can be drawn, which are used for fiber-optic sensors and thermal insulation, respectively. The mechanisms of hydrolysis and condensation, and the rheological factors that bias the structure toward linear or branched structures, are the most critical issues of sol-gel science and technology. Geophysics The scientific discipline of geophysics includes study of the flow of molten lava and study of debris flows (fluid mudslides). This disciplinary branch also deals with solid Earth materials which only exhibit flow over extended time-scales. Those that display viscous behaviour are known as rheids. For example, granite can flow plastically with a negligible yield stress at room temperatures (i.e. a viscous flow). Long-term creep experiments (~10 years) indicate that the viscosity of granite and glass under ambient conditions are on the order of 10²⁰ poises. Physiology Physiology includes the study of many bodily fluids that have complex structure and composition, and thus exhibit a wide range of viscoelastic flow characteristics. In particular there is a specialist study of blood flow called hemorheology. This is the study of flow properties of blood and its elements (plasma and formed elements, including red blood cells, white blood cells and platelets). Blood viscosity is determined by plasma viscosity, hematocrit (volume fraction of red blood cells, which constitute 99.9% of the cellular elements) and the mechanical behaviour of red blood cells. 
Therefore, red blood cell mechanics is the major determinant of the flow properties of blood. (The ocular vitreous humor is subject to rheologic observations, particularly during studies of age-related vitreous liquefaction, or synaeresis.) The leading characteristic for hemorheology has been shear thinning in steady shear flow. Other non-Newtonian rheological characteristics that blood can demonstrate include pseudoplasticity, viscoelasticity, and thixotropy. Red blood cell aggregation There are two current major hypotheses to explain blood flow predictions and shear thinning responses. The two models also attempt to demonstrate the drive for reversible red blood cell aggregation, although the mechanism is still being debated. There is a direct effect of red blood cell aggregation on blood viscosity and circulation. The foundation of hemorheology can also provide information for modeling of other biofluids. The bridging or "cross-bridging" hypothesis suggests that macromolecules physically crosslink adjacent red blood cells into rouleaux structures. This occurs through adsorption of macromolecules onto the red blood cell surfaces. The depletion layer hypothesis suggests the opposite mechanism. The surfaces of the red blood cells are bound together by an osmotic pressure gradient that is created by overlapping depletion layers. The effect of rouleaux aggregation tendency can be explained by hematocrit and fibrinogen concentration in whole blood rheology. Techniques researchers use to measure cell interaction in vitro include optical trapping and microfluidics. Disease and diagnostics Changes to viscosity have been shown to be linked with diseases like hyperviscosity, hypertension, sickle cell anemia, and diabetes. Hemorheological measurements and genomic testing technologies act as preventative measures and diagnostic tools. Hemorheology has also been correlated with aging effects, especially with impaired blood fluidity, and studies have shown that physical activity may improve the thickening of blood rheology. Zoology Many animals make use of rheological phenomena, for example sandfish that exploit the granular rheology of dry sand to "swim" in it or land gastropods that use snail slime for adhesive locomotion. Certain animals produce specialized endogenous complex fluids, such as the sticky slime produced by velvet worms to immobilize prey or the fast-gelling underwater slime secreted by hagfish to deter predators. Food rheology Food rheology is important in the manufacture and processing of food products, such as cheese and gelato. An adequate rheology is important for the indulgence of many common foods, particularly in the case of sauces, dressings, yogurt, or fondue. Thickening agents, or thickeners, are substances which, when added to an aqueous mixture, increase its viscosity without substantially modifying its other properties, such as taste. They provide body, increase stability, and improve the suspension of added ingredients. Thickening agents are often used as food additives and in cosmetics and personal hygiene products. Some thickening agents are gelling agents, forming a gel. The agents are materials used to thicken and stabilize liquid solutions, emulsions, and suspensions. They dissolve in the liquid phase as a colloid mixture that forms a weakly cohesive internal structure. Food thickeners frequently are based on either polysaccharides (starches, vegetable gums, and pectin), or proteins. 
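The shear-thinning behaviour described above for sauces, dressings, and yogurt is often summarized by the power-law (Ostwald–de Waele) model, in which apparent viscosity varies as a power of the shear rate; this named model is not taken from the text above but is a standard rheological description. The sketch below is illustrative only: the consistency index K and flow-behaviour index n are made-up values, not measured data for any real food.

```python
# Apparent viscosity of a power-law fluid: eta(gamma_dot) = K * gamma_dot**(n - 1).
# n < 1 gives shear thinning (viscosity drops as stirring gets faster),
# n = 1 recovers a Newtonian fluid, and n > 1 gives shear thickening.
def apparent_viscosity(shear_rate: float, K: float, n: float) -> float:
    return K * shear_rate ** (n - 1.0)

for shear_rate in (0.1, 1.0, 10.0, 100.0):                     # 1/s, illustrative range
    thinning = apparent_viscosity(shear_rate, K=5.0, n=0.4)    # hypothetical sauce
    newtonian = apparent_viscosity(shear_rate, K=1.0, n=1.0)   # Newtonian comparison
    print(f"{shear_rate:7.1f} 1/s  shear-thinning: {thinning:7.3f} Pa·s  Newtonian: {newtonian:5.3f} Pa·s")
```

The printed table makes the qualitative point of the preceding paragraphs: the power-law material's apparent viscosity falls steadily with shear rate, while the Newtonian reference stays constant.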
Concrete rheology Concrete's and mortar's workability is related to the rheological properties of the fresh cement paste. The mechanical properties of hardened concrete increase if less water is used in the concrete mix design; however, reducing the water-to-cement ratio may decrease the ease of mixing and application. To avoid these undesired effects, superplasticizers are typically added to decrease the apparent yield stress and the viscosity of the fresh paste. Their addition highly improves concrete and mortar properties. Filled polymer rheology The incorporation of various types of fillers into polymers is a common means of reducing cost and of imparting certain desirable mechanical, thermal, electrical and magnetic properties to the resulting material. The advantages that filled polymer systems have to offer come with an increased complexity in the rheological behavior. Usually when the use of fillers is considered, a compromise has to be made between the improved mechanical properties in the solid state on one side and the increased difficulty in melt processing, the problem of achieving uniform dispersion of the filler in the polymer matrix and the economics of the process due to the added step of compounding on the other. The rheological properties of filled polymers are determined not only by the type and amount of filler, but also by the shape, size and size distribution of its particles. The viscosity of filled systems generally increases with increasing filler fraction. This can be partially ameliorated via broad particle size distributions via the Farris effect. An additional factor is the stress transfer at the filler-polymer interface. The interfacial adhesion can be substantially enhanced via a coupling agent that adheres well to both the polymer and the filler particles. The type and amount of surface treatment on the filler are thus additional parameters affecting the rheological and material properties of filled polymeric systems. It is important to take into consideration wall slip when performing the rheological characterization of highly filled materials, as there can be a large difference between the actual strain and the measured strain. Rheologist A rheologist is an interdisciplinary scientist or engineer who studies the flow of complex liquids or the deformation of soft solids. It is not a primary degree subject; there is no qualification of rheologist as such. Most rheologists have a qualification in mathematics, the physical sciences (e.g. chemistry, physics, geology, biology), engineering (e.g. mechanical, chemical, materials science and engineering, plastics engineering, or civil engineering), medicine, or certain technologies, notably materials or food. Typically, a small amount of rheology may be studied when obtaining a degree, but a person working in rheology will extend this knowledge during postgraduate research or by attending short courses and by joining a professional association. See also Bingham plastic Die swell Fluid dynamics Glass transition Interfacial rheology Liquid List of rheologists Microrheology Nordic Rheology Society Rheological weldability for thermoplastics Rheopectic Solid Transport phenomena μ(I) rheology: one model of the rheology of a granular flow. 
References External links "The Origins of Rheology: A short historical excursion" by Deepak Doraiswamy, DuPont iTechnologies RHEOTEST Medingen GmbH – Short history and collection of rheological instruments from the time of Fritz Höppler - On the Rheology of Cats Societies American Society of Rheology Australian Society of Rheology British Society of Rheology European Society of Rheology French Society of Rheology Nordic Rheology Society Romanian Society of Rheology Korean Society of Rheology Journals Applied Rheology Journal of Non-Newtonian Fluid Mechanics Journal of Rheology Rheologica Acta Tribology
Rheology
[ "Chemistry", "Materials_science", "Engineering" ]
4,299
[ "Tribology", "Materials science", "Surface science", "Mechanical engineering", "Rheology", "Fluid dynamics" ]
25,599
https://en.wikipedia.org/wiki/Rubidium
Rubidium is a chemical element; it has symbol Rb and atomic number 37. It is a very soft, whitish-grey solid in the alkali metal group, similar to potassium and caesium. Rubidium is the first alkali metal in the group to have a density higher than that of water. On Earth, natural rubidium comprises two isotopes: 72% is the stable isotope 85Rb, and 28% is the slightly radioactive 87Rb, with a half-life of 48.8 billion years – more than three times as long as the estimated age of the universe. German chemists Robert Bunsen and Gustav Kirchhoff discovered rubidium in 1861 by the newly developed technique of flame spectroscopy. The name comes from the Latin word rubidus, meaning deep red, the color of its emission spectrum. Rubidium's compounds have various chemical and electronic applications. Rubidium metal is easily vaporized and has a convenient spectral absorption range, making it a frequent target for laser manipulation of atoms. Rubidium is not a known nutrient for any living organisms. However, rubidium ions have similar properties and the same charge as potassium ions, and are actively taken up and treated by animal cells in similar ways. Characteristics Physical properties Rubidium is a very soft, ductile, silvery-white metal. It has a melting point of 39.3 °C and a boiling point of 688 °C. It forms amalgams with mercury and alloys with gold, iron, caesium, sodium, and potassium, but not lithium (despite rubidium and lithium being in the same periodic group). Rubidium and potassium show a very similar purple color in the flame test, and distinguishing the two elements requires more sophisticated analysis, such as spectroscopy. Chemical properties Rubidium is the second most electropositive of the stable alkali metals and has a very low first ionization energy of only 403 kJ/mol. It has an electron configuration of [Kr]5s1 and is photosensitive. Due to its strong electropositive nature, rubidium reacts explosively with water to produce rubidium hydroxide and hydrogen gas. As with all the alkali metals, the reaction is usually vigorous enough to ignite the metal or the hydrogen gas produced by the reaction, potentially causing an explosion. Rubidium, being denser than potassium, sinks in water, reacting violently; caesium explodes on contact with water. However, the reaction rates of all alkali metals depend upon the surface area of metal in contact with water, with small metal droplets giving explosive rates. Rubidium has also been reported to ignite spontaneously in air. Compounds Rubidium chloride (RbCl) is probably the most used rubidium compound: among several other chlorides, it is used to induce living cells to take up DNA; it is also used as a biomarker, because in nature, it is found only in small quantities in living organisms and when present, replaces potassium. Other common rubidium compounds are the corrosive rubidium hydroxide (RbOH), the starting material for most rubidium-based chemical processes; rubidium carbonate (Rb2CO3), used in some optical glasses, and rubidium copper sulfate, Rb2SO4·CuSO4·6H2O. Rubidium silver iodide (RbAg4I5) has the highest room temperature conductivity of any known ionic crystal, a property exploited in thin film batteries and other applications. Rubidium forms a number of oxides when exposed to air, including rubidium monoxide (Rb2O), Rb6O, and Rb9O2; rubidium in excess oxygen gives the superoxide RbO2. Rubidium forms salts with halogens, producing rubidium fluoride, rubidium chloride, rubidium bromide, and rubidium iodide. 
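The explosive reaction with water noted under Chemical properties follows the usual alkali-metal pattern, giving the hydroxide and hydrogen gas. Written out as a routine textbook equation (a sketch for illustration, not taken from any specific source cited here):

```latex
2\,\mathrm{Rb} + 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{RbOH} + \mathrm{H_2}\uparrow
```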
Isotopes Although rubidium has only one stable isotope, rubidium in the Earth's crust is composed of two isotopes: the stable 85Rb (72.2%) and the radioactive 87Rb (27.8%). Natural rubidium is radioactive, with specific activity of about 670 Bq/g, enough to significantly expose a photographic film in 110 days. Thirty additional rubidium isotopes have been synthesized with half-lives of less than 3 months; most are highly radioactive and have few uses. Rubidium-87 has a half-life of 48.8 billion years, which is more than three times the age of the universe of about 13.8 billion years, making it a primordial nuclide. It readily substitutes for potassium in minerals, and is therefore fairly widespread. 87Rb has been used extensively in dating rocks; 87Rb beta decays to stable 87Sr. During fractional crystallization, Sr tends to concentrate in plagioclase, leaving Rb in the liquid phase. Hence, the Rb/Sr ratio in residual magma may increase over time, and the progressing differentiation results in rocks with elevated Rb/Sr ratios. The highest ratios (10 or more) occur in pegmatites. If the initial amount of Sr is known or can be extrapolated, then the age can be determined by measurement of the Rb and Sr concentrations and of the 87Sr/86Sr ratio. The dates indicate the true age of the minerals only if the rocks have not been subsequently altered (see rubidium–strontium dating). Rubidium-82, one of the element's non-natural isotopes, is produced by electron-capture decay of strontium-82 with a half-life of 25.36 days. With a half-life of 76 seconds, rubidium-82 decays by positron emission to stable krypton-82. Occurrence Rubidium is not abundant, being one of 56 elements that combined make up 0.05% of the Earth's crust; as roughly the 23rd most abundant element in the Earth's crust, it is more abundant than zinc or copper. It occurs naturally in the minerals leucite, pollucite, carnallite, and zinnwaldite, which contain as much as 1% rubidium oxide. Lepidolite contains between 0.3% and 3.5% rubidium, and is the commercial source of the element. Some potassium minerals and potassium chlorides also contain the element in commercially significant quantities. Seawater contains an average of 125 μg/L of rubidium compared to the much higher value for potassium of 408 mg/L and the much lower value of 0.3 μg/L for caesium. Rubidium is the 18th most abundant element in seawater. Because of its large ionic radius, rubidium is one of the "incompatible elements". During magma crystallization, rubidium is concentrated together with its heavier analogue caesium in the liquid phase and crystallizes last. Therefore, the largest deposits of rubidium and caesium are zone pegmatite ore bodies formed by this enrichment process. Because rubidium substitutes for potassium in the crystallization of magma, the enrichment is far less effective than that of caesium. Zone pegmatite ore bodies containing mineable quantities of caesium as pollucite or the lithium minerals lepidolite are also a source for rubidium as a by-product. Two notable sources of rubidium are the rich deposits of pollucite at Bernic Lake, Manitoba, Canada, and the rubicline found as impurities in pollucite on the Italian island of Elba, with a rubidium content of 17.5%. Both of those deposits are also sources of caesium. Production Although rubidium is more abundant in Earth's crust than caesium, the limited applications and the lack of a mineral rich in rubidium limit the production of rubidium compounds to 2 to 4 tonnes per year. 
Several methods are available for separating potassium, rubidium, and caesium. The fractional crystallization of a rubidium and caesium alum yields pure rubidium alum after 30 successive steps. Two other methods are reported, the chlorostannate process and the ferrocyanide process. For several years in the 1950s and 1960s, a by-product of potassium production called Alkarb was a main source for rubidium. Alkarb contained 21% rubidium, with the rest being potassium and a small amount of caesium. Today the largest producers of caesium produce rubidium as a by-product from pollucite. History Rubidium was discovered in 1861 by Robert Bunsen and Gustav Kirchhoff, in Heidelberg, Germany, in the mineral lepidolite through flame spectroscopy. Because of the bright red lines in its emission spectrum, they chose a name derived from the Latin word rubidus, meaning "deep red". Rubidium is a minor component in lepidolite. Kirchhoff and Bunsen processed 150 kg of a lepidolite containing only 0.24% rubidium monoxide (Rb2O). Both potassium and rubidium form insoluble salts with chloroplatinic acid, but those salts show a slight difference in solubility in hot water. Therefore, the less soluble rubidium hexachloroplatinate (Rb2PtCl6) could be obtained by fractional crystallization. After reduction of the hexachloroplatinate with hydrogen, the process yielded 0.51 grams of rubidium chloride (RbCl) for further studies. Bunsen and Kirchhoff began their first large-scale isolation of caesium and rubidium compounds with a large volume of mineral water, which yielded 7.3 grams of caesium chloride and 9.2 grams of rubidium chloride. Rubidium was the second element, shortly after caesium, to be discovered by spectroscopy, just one year after the invention of the spectroscope by Bunsen and Kirchhoff. The two scientists used the rubidium chloride to estimate that the atomic weight of the new element was 85.36 (the currently accepted value is 85.47). They tried to generate elemental rubidium by electrolysis of molten rubidium chloride, but instead of a metal, they obtained a blue homogeneous substance, which "neither under the naked eye nor under the microscope showed the slightest trace of metallic substance". They presumed that it was a subchloride (Rb2Cl); however, the product was probably a colloidal mixture of the metal and rubidium chloride. In a second attempt to produce metallic rubidium, Bunsen was able to reduce rubidium by heating charred rubidium tartrate. Although the distilled rubidium was pyrophoric, they were able to determine the density and the melting point. The quality of this research in the 1860s can be appraised by the fact that their determined density differs by less than 0.1 g/cm3 and the melting point by less than 1 °C from the presently accepted values. The slight radioactivity of rubidium was discovered in 1908, but that was before the theory of isotopes was established in 1910, and the low level of activity (half-life greater than 10¹⁰ years) made interpretation complicated. The now proven decay of 87Rb to stable 87Sr through beta decay was still under discussion in the late 1940s. Rubidium had minimal industrial value before the 1920s. Since then, the most important use of rubidium is research and development, primarily in chemical and electronic applications. In 1995, rubidium-87 was used to produce a Bose–Einstein condensate, for which the discoverers, Eric Allin Cornell, Carl Edwin Wieman and Wolfgang Ketterle, won the 2001 Nobel Prize in Physics. 
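As a brief aside to the rubidium–strontium dating described in the Isotopes section above, the age of a rock follows from the standard radioactive-decay law once the fraction of 87Rb that has decayed to 87Sr is known. The sketch below is illustrative only: the half-life is the 48.8-billion-year value quoted above, while the daughter-to-parent ratio is an arbitrary example, not a measurement.

```python
# Age from the decay law N(t) = N0 * exp(-lambda * t):
# if D daughter atoms (radiogenic 87Sr) accumulated per P surviving parent atoms
# (87Rb), then t = (1 / lambda) * ln(1 + D / P), with lambda = ln(2) / half-life.
import math

HALF_LIFE_YR = 48.8e9                      # 87Rb half-life quoted in the text
DECAY_CONST = math.log(2) / HALF_LIFE_YR   # lambda, per year

def rb_sr_age(daughter_per_parent: float) -> float:
    return math.log(1.0 + daughter_per_parent) / DECAY_CONST

print(f"{rb_sr_age(0.065):.3e} years")     # hypothetical ratio, roughly 4.4 billion years
```

In practice the measured 87Sr/86Sr ratio must first be corrected for the initial strontium, which is why the isochron method described above uses several cogenetic samples rather than a single ratio.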
Applications Rubidium compounds are sometimes used in fireworks to give them a purple color. Rubidium has also been considered for use in a thermoelectric generator using the magnetohydrodynamic principle, whereby hot rubidium ions are passed through a magnetic field. These conduct electricity and act like an armature of a generator, thereby generating an electric current. Rubidium, particularly vaporized 87Rb, is one of the most commonly used atomic species employed for laser cooling and Bose–Einstein condensation. Its desirable features for this application include the ready availability of inexpensive diode laser light at the relevant wavelength and the moderate temperatures required to obtain substantial vapor pressures. For cold-atom applications requiring tunable interactions, 85Rb is preferred for its rich Feshbach spectrum. Rubidium has been used for polarizing 3He, producing volumes of magnetized 3He gas, with the nuclear spins aligned rather than random. Rubidium vapor is optically pumped by a laser, and the polarized Rb polarizes 3He through the hyperfine interaction. Such spin-polarized 3He cells are useful for neutron polarization measurements and for producing polarized neutron beams for other purposes. The resonant element in atomic clocks utilizes the hyperfine structure of rubidium's energy levels, and rubidium is useful for high-precision timing. It is used as the main component of secondary frequency references (rubidium oscillators) in cell site transmitters and other electronic transmitting, networking, and test equipment. These rubidium standards are often used with GNSS to produce a "primary frequency standard" that has greater accuracy and is less expensive than caesium standards. Such rubidium standards are often mass-produced for the telecommunications industry. Other potential or current uses of rubidium include use as a working fluid in vapor turbines, as a getter in vacuum tubes, and as a photocell component. Rubidium is also used as an ingredient in special types of glass, in the production of superoxide by burning in oxygen, in the study of potassium ion channels in biology, and as the vapor in atomic magnetometers. In particular, 87Rb is used with other alkali metals in the development of spin-exchange relaxation-free (SERF) magnetometers. Rubidium-82 is used for positron emission tomography. Rubidium is very similar to potassium, and tissue with high potassium content will also accumulate the radioactive rubidium. One of the main uses is myocardial perfusion imaging. As a result of changes in the blood–brain barrier in brain tumors, rubidium collects more in brain tumors than normal brain tissue, allowing the use of radioisotope rubidium-82 in nuclear medicine to locate and image brain tumors. Rubidium-82 has a very short half-life of 76 seconds, and the production from decay of strontium-82 must be done close to the patient. Rubidium has been tested for its influence on manic depression and depression. Dialysis patients suffering from depression show a depletion in rubidium, and therefore supplementation may help during depression. In some tests the rubidium was administered as rubidium chloride with up to 720 mg per day for 60 days. Precautions and biological effects Rubidium reacts violently with water and can cause fires. To ensure safety and purity, this metal is usually kept under dry mineral oil or sealed in glass ampoules in an inert atmosphere. 
Rubidium forms peroxides on exposure even to a small amount of air diffused into the oil, and storage is subject to similar precautions as the storage of metallic potassium. Rubidium, like sodium and potassium, almost always has +1 oxidation state when dissolved in water, even in biological contexts. The human body tends to treat Rb+ ions as if they were potassium ions, and therefore concentrates rubidium in the body's intracellular fluid (i.e., inside cells). The ions are not particularly toxic; a 70 kg person contains on average 0.36 g of rubidium, and an increase in this value by 50 to 100 times did not show negative effects in test persons. The biological half-life of rubidium in humans measures 31–46 days. Although a partial substitution of potassium by rubidium is possible, when more than 50% of the potassium in the muscle tissue of rats was replaced with rubidium, the rats died. References Further reading Meites, Louis (1963). Handbook of Analytical Chemistry (New York: McGraw-Hill Book Company, 1963) External links Rubidium at The Periodic Table of Videos (University of Nottingham) Chemical elements Alkali metals Reducing agents Chemical elements with body-centered cubic structure Pyrophoric materials
Rubidium
[ "Physics", "Chemistry", "Technology" ]
3,312
[ "Chemical elements", "Redox", "Reducing agents", "Atoms", "Matter" ]
25,604
https://en.wikipedia.org/wiki/Radon
Radon is a chemical element; it has symbol Rn and atomic number 86. It is a radioactive noble gas and is colorless and odorless. Of the three naturally occurring radon isotopes, only 222Rn has a sufficiently long half-life (3.825 days) for it to be released from the soil and rock where it is generated. Radon isotopes are the immediate decay products of radium isotopes. The instability of 222Rn, its most stable isotope, makes radon one of the rarest elements. Radon will be present on Earth for several billion more years despite its short half-life, because it is constantly being produced as a step in the decay chains of 238U and 232Th, both of which are abundant radioactive nuclides with half-lives of at least several billion years. The decay of radon produces many other short-lived nuclides, known as "radon daughters", ending at stable isotopes of lead. 222Rn occurs in significant quantities as a step in the normal radioactive decay chain of 238U, also known as the uranium series, which slowly decays into a variety of radioactive nuclides and eventually decays into stable 206Pb. 220Rn occurs in minute quantities as an intermediate step in the decay chain of 232Th, also known as the thorium series, which eventually decays into stable 208Pb. Radon was discovered in 1899 by Ernest Rutherford and Robert B. Owens at McGill University in Montreal, and was the fifth radioactive element to be discovered. First known as "emanation", the radioactive gas was identified during experiments with radium, thorium oxide, and actinium by Friedrich Ernst Dorn, Rutherford and Owens, and André-Louis Debierne, respectively, and each element's emanation was considered to be a separate substance: radon, thoron, and actinon. Sir William Ramsay and Robert Whytlaw-Gray considered that the radioactive emanations may contain a new element of the noble gas family, and isolated "radium emanation" in 1909 to determine its properties. In 1911, the element Ramsay and Whytlaw-Gray isolated was accepted by the International Commission for Atomic Weights, and in 1923, the International Committee for Chemical Elements and the International Union of Pure and Applied Chemistry (IUPAC) chose radon as the accepted name for the element's most stable isotope, 222Rn; thoron and actinon were also recognized by IUPAC as distinct isotopes of the element. Under standard conditions, radon is gaseous and can be easily inhaled, posing a health hazard. However, the primary danger comes not from radon itself, but from its decay products, known as radon daughters. These decay products, often existing as single atoms or ions, can attach themselves to airborne dust particles. Although radon is a noble gas and does not adhere to lung tissue (meaning it is often exhaled before decaying), the radon daughters attached to dust are more likely to stick to the lungs. This increases the risk of harm, as the radon daughters can cause damage to lung tissue. Radon and its daughters are, taken together, often the single largest contributor to an individual's background radiation dose, but due to local differences in geology, the level of exposure to radon gas differs by location. A common source of environmental radon is uranium-containing minerals in the ground; it therefore accumulates in subterranean areas such as basements. Radon can also occur in ground water, such as spring waters and hot springs. 
Radon trapped in permafrost may be released by climate-change-induced thawing, and radon may also be released into groundwater and the atmosphere following seismic events, which has led to its investigation in the field of earthquake prediction. It is possible to test for radon in buildings, and to use techniques such as sub-slab depressurization for mitigation. Epidemiological studies have shown a clear association between breathing high concentrations of radon and incidence of lung cancer. Radon is a contaminant that affects indoor air quality worldwide. According to the United States Environmental Protection Agency (EPA), radon is the second most frequent cause of lung cancer, after cigarette smoking, causing 21,000 lung cancer deaths per year in the United States. About 2,900 of these deaths occur among people who have never smoked. While radon is the second most frequent cause of lung cancer overall, it is the number one cause among non-smokers, according to EPA policy-oriented estimates. Significant uncertainties exist for the health effects of low-dose exposures. Characteristics Physical properties Radon is a colorless, odorless, and tasteless gas and therefore is not detectable by human senses alone. At standard temperature and pressure, it forms a monatomic gas with a density of 9.73 kg/m3, about 8 times the density of the Earth's atmosphere at sea level, 1.217 kg/m3. It is one of the densest gases at room temperature (a few are denser, e.g. CF3(CF2)2CF3 and WF6) and is the densest of the noble gases. Although colorless at standard temperature and pressure, when cooled below its freezing point it emits a brilliant radioluminescence that turns from yellow to orange-red as the temperature lowers. Upon condensation, it glows because of the intense radiation it produces. It is sparingly soluble in water, but more soluble than the lighter noble gases, and appreciably more soluble in organic liquids than in water. Its solubility equation is as follows: ln x = A + B/T, where x is the molar fraction of radon, T is the absolute temperature, and A and B are solvent constants. Chemical properties Radon is a member of the zero-valence elements that are called noble gases, and is chemically not very reactive. The 3.8-day half-life of 222Rn makes it useful in physical sciences as a natural tracer. Because radon is a gas at standard conditions, unlike its decay-chain parents, it can readily be extracted from them for research. It is inert to most common chemical reactions, such as combustion, because the outer valence shell contains eight electrons. This produces a stable, minimum-energy configuration in which the outer electrons are tightly bound. Its first ionization energy, the minimum energy required to extract one electron from it, is 1037 kJ/mol. In accordance with periodic trends, radon has a lower electronegativity than the element one period before it, xenon, and is therefore more reactive. Early studies concluded that the stability of radon hydrate should be of the same order as that of the hydrates of chlorine (Cl2) or sulfur dioxide (SO2), and significantly higher than the stability of the hydrate of hydrogen sulfide (H2S). Because of its cost and radioactivity, experimental chemical research is seldom performed with radon, and as a result there are very few reported compounds of radon, all either fluorides or oxides. Radon can be oxidized by powerful oxidizing agents such as fluorine, thus forming radon difluoride (RnF2).
It decomposes back to its elements at a temperature of above , and is reduced by water to radon gas and hydrogen fluoride: it may also be reduced back to its elements by hydrogen gas. It has a low volatility and was thought to be . Because of the short half-life of radon and the radioactivity of its compounds, it has not been possible to study the compound in any detail. Theoretical studies on this molecule predict that it should have a Rn–F bond distance of 2.08 ångströms (Å), and that the compound is thermodynamically more stable and less volatile than its lighter counterpart xenon difluoride (). The octahedral molecule was predicted to have an even lower enthalpy of formation than the difluoride. The [RnF]+ ion is believed to form by the following reaction: Rn (g) + 2 (s) → (s) + 2 (g) For this reason, antimony pentafluoride together with chlorine trifluoride and have been considered for radon gas removal in uranium mines due to the formation of radon–fluorine compounds. Radon compounds can be formed by the decay of radium in radium halides, a reaction that has been used to reduce the amount of radon that escapes from targets during irradiation. Additionally, salts of the [RnF]+ cation with the anions , , and are known. Radon is also oxidised by dioxygen difluoride to at . Radon oxides are among the few other reported compounds of radon; only the trioxide () has been confirmed. The higher fluorides and have been claimed and are calculated to be stable, but their identification is unclear. They may have been observed in experiments where unknown radon-containing products distilled together with xenon hexafluoride: these may have been , , or both. Trace-scale heating of radon with xenon, fluorine, bromine pentafluoride, and either sodium fluoride or nickel fluoride was claimed to produce a higher fluoride as well which hydrolysed to form . While it has been suggested that these claims were really due to radon precipitating out as the solid complex [RnF][NiF6]2−, the fact that radon coprecipitates from aqueous solution with has been taken as confirmation that was formed, which has been supported by further studies of the hydrolysed solution. That [RnO3F]− did not form in other experiments may have been due to the high concentration of fluoride used. Electromigration studies also suggest the presence of cationic [HRnO3]+ and anionic [HRnO4]− forms of radon in weakly acidic aqueous solution (pH > 5), the procedure having previously been validated by examination of the homologous xenon trioxide. The decay technique has also been used. Avrorin et al. reported in 1982 that 212Fr compounds cocrystallised with their caesium analogues appeared to retain chemically bound radon after electron capture; analogies with xenon suggested the formation of RnO3, but this could not be confirmed. It is likely that the difficulty in identifying higher fluorides of radon stems from radon being kinetically hindered from being oxidised beyond the divalent state because of the strong ionicity of radon difluoride () and the high positive charge on radon in RnF+; spatial separation of molecules may be necessary to clearly identify higher fluorides of radon, of which is expected to be more stable than due to spin–orbit splitting of the 6p shell of radon (RnIV would have a closed-shell 6s6p configuration). 
Therefore, while should have a similar stability to xenon tetrafluoride (), would likely be much less stable than xenon hexafluoride (): radon hexafluoride would also probably be a regular octahedral molecule, unlike the distorted octahedral structure of , because of the inert pair effect. Because radon is quite electropositive for a noble gas, it is possible that radon fluorides actually take on highly fluorine-bridged structures and are not volatile. Extrapolation down the noble gas group would suggest also the possible existence of RnO, RnO2, and RnOF4, as well as the first chemically stable noble gas chlorides RnCl2 and RnCl4, but none of these have yet been found. Radon carbonyl (RnCO) has been predicted to be stable and to have a linear molecular geometry. The molecules and RnXe were found to be significantly stabilized by spin-orbit coupling. Radon caged inside a fullerene has been proposed as a drug for tumors. Despite the existence of Xe(VIII), no Rn(VIII) compounds have been claimed to exist; should be highly unstable chemically (XeF8 is thermodynamically unstable). It is predicted that the most stable Rn(VIII) compound would be barium perradonate (Ba2RnO6), analogous to barium perxenate. The instability of Rn(VIII) is due to the relativistic stabilization of the 6s shell, also known as the inert pair effect. Radon reacts with the liquid halogen fluorides ClF, , , , , and to form . In halogen fluoride solution, radon is nonvolatile and exists as the RnF+ and Rn2+ cations; addition of fluoride anions results in the formation of the complexes and , paralleling the chemistry of beryllium(II) and aluminium(III). The standard electrode potential of the Rn2+/Rn couple has been estimated as +2.0 V, although there is no evidence for the formation of stable radon ions or compounds in aqueous solution. Isotopes Radon has no stable isotopes. Thirty-nine radioactive isotopes have been characterized, with mass numbers ranging from 193 to 231. Six of them, from 217 to 222 inclusive, occur naturally. The most stable isotope is Rn (half-life 3.82 days), which is a decay product of Ra, the latter being itself a decay product of U. A trace amount of the (highly unstable) isotope Rn (half-life about 35 milliseconds) is also among the daughters of Rn. The isotope Rn would be produced by the double beta decay of natural Po; while energetically possible, this process has however never been seen. Three other radon isotopes have a half-life of over an hour: Rn (about 15 hours), Rn (2.4 hours) and Rn (about 1.8 hours). However, none of these three occur naturally. Rn, also called thoron, is a natural decay product of the most stable thorium isotope (Th). It has a half-life of 55.6 seconds and also emits alpha radiation. Similarly, Rn is derived from the most stable isotope of actinium (Ac)—named "actinon"—and is an alpha emitter with a half-life of 3.96 seconds. Daughters Rn belongs to the radium and uranium-238 decay chain, and has a half-life of 3.8235 days. Its first four products (excluding marginal decay schemes) are very short-lived, meaning that the corresponding disintegrations are indicative of the initial radon distribution. Its decay goes through the following sequence: Rn, 3.82 days, alpha decaying to... Po, 3.10 minutes, alpha decaying to... Pb, 26.8 minutes, beta decaying to... Bi, 19.9 minutes, beta decaying to... Po, 0.1643 ms, alpha decaying to... Pb, which has a much longer half-life of 22.3 years, beta decaying to... Bi, 5.013 days, beta decaying to... 
Po, 138.376 days, alpha decaying to... Pb, stable. The radon equilibrium factor is the ratio between the activity of all short-period radon progenies (which are responsible for most of radon's biological effects), and the activity that would be at equilibrium with the radon parent. If a closed volume is constantly supplied with radon, the concentration of short-lived isotopes will increase until an equilibrium is reached where the overall decay rate of the decay products equals that of the radon itself. The equilibrium factor is 1 when both activities are equal, meaning that the decay products have stayed close to the radon parent long enough for the equilibrium to be reached, within a couple of hours. Under these conditions, each additional pCi/L of radon will increase exposure by 0.01 working level (WL, a measure of radioactivity commonly used in mining). These conditions are not always met; in many homes, the equilibrium factor is typically 40%; that is, there will be 0.004 WL of daughters for each pCi/L of radon in the air. Pb takes much longer to come in equilibrium with radon, dependent on environmental factors, but if the environment permits accumulation of dust over extended periods of time, 210Pb and its decay products may contribute to overall radiation levels as well. Several studies on the radioactive equilibrium of elements in the environment find it more useful to use the ratio of other Rn decay products with Pb, such as Po, in measuring overall radiation levels. Because of their electrostatic charge, radon progenies adhere to surfaces or dust particles, whereas gaseous radon does not. Attachment removes them from the air, usually causing the equilibrium factor in the atmosphere to be less than 1. The equilibrium factor is also lowered by air circulation or air filtration devices, and is increased by airborne dust particles, including cigarette smoke. The equilibrium factor found in epidemiological studies is 0.4. History and etymology Radon was discovered in 1899 by Ernest Rutherford and Robert B. Owens at McGill University in Montreal. It was the fifth radioactive element to be discovered, after uranium, thorium, radium, and polonium. In 1899, Pierre and Marie Curie observed that the gas emitted by radium remained radioactive for a month. Later that year, Rutherford and Owens noticed variations when trying to measure radiation from thorium oxide. Rutherford noticed that the compounds of thorium continuously emit a radioactive gas that remains radioactive for several minutes, and called this gas "emanation" (from , to flow out, and , expiration), and later "thorium emanation" ("Th Em"). In 1900, Friedrich Ernst Dorn reported some experiments in which he noticed that radium compounds emanate a radioactive gas he named "radium emanation" ("Ra Em"). In 1901, Rutherford and Harriet Brooks demonstrated that the emanations are radioactive, but credited the Curies for the discovery of the element. In 1903, similar emanations were observed from actinium by André-Louis Debierne, and were called "actinium emanation" ("Ac Em"). Several shortened names were soon suggested for the three emanations: exradio, exthorio, and exactinio in 1904; radon (Ro), thoron (To), and akton or acton (Ao) in 1918; radeon, thoreon, and actineon in 1919, and eventually radon, thoron, and actinon in 1920. (The name radon is not related to that of the Austrian mathematician Johann Radon.) 
The likeness of the spectra of these three gases with those of argon, krypton, and xenon, and their observed chemical inertia led Sir William Ramsay to suggest in 1904 that the "emanations" might contain a new element of the noble-gas family. In 1909, Ramsay and Robert Whytlaw-Gray isolated radon and determined its melting temperature and approximate density. In 1910, they determined that it was the heaviest known gas. They wrote that "" ("the expression 'radium emanation' is very awkward") and suggested the new name niton (Nt) (from , shining) to emphasize the radioluminescence property, and in 1912 it was accepted by the International Commission for Atomic Weights. In 1923, the International Committee for Chemical Elements and International Union of Pure and Applied Chemistry (IUPAC) chose the name of the most stable isotope, radon, as the name of the element. The isotopes thoron and actinon were later renamed Rn and Rn. This has caused some confusion in the literature regarding the element's discovery as while Dorn had discovered radon the isotope, he was not the first to discover radon the element. As late as the 1960s, the element was also referred to simply as emanation. The first synthesized compound of radon, radon fluoride, was obtained in 1962. Even today, the word radon may refer to either the element or its isotope 222Rn, with thoron remaining in use as a short name for 220Rn to stem this ambiguity. The name actinon for 219Rn is rarely encountered today, probably due to the short half-life of that isotope. The danger of high exposure to radon in mines, where exposures can reach 1,000,000 Bq/m3, has long been known. In 1530, Paracelsus described a wasting disease of miners, the mala metallorum, and Georg Agricola recommended ventilation in mines to avoid this mountain sickness (Bergsucht). In 1879, this condition was identified as lung cancer by Harting and Hesse in their investigation of miners from Schneeberg, Germany. The first major studies with radon and health occurred in the context of uranium mining in the Joachimsthal region of Bohemia. In the US, studies and mitigation only followed decades of health effects on uranium miners of the Southwestern US employed during the early Cold War; standards were not implemented until 1971. In the early 20th century in the US, gold contaminated with the radon daughter 210Pb entered the jewelry industry. This was from gold brachytherapy seeds that had held 222Rn, which were melted down after the radon had decayed. The presence of radon in indoor air was documented as early as 1950. Beginning in the 1970s, research was initiated to address sources of indoor radon, determinants of concentration, health effects, and mitigation approaches. In the US, the problem of indoor radon received widespread publicity and intensified investigation after a widely publicized incident in 1984. During routine monitoring at a Pennsylvania nuclear power plant, a worker was found to be contaminated with radioactivity. A high concentration of radon in his home was subsequently identified as responsible. Occurrence Concentration units Discussions of radon concentrations in the environment refer to 222Rn, the decay product of uranium and radium. While the average rate of production of 220Rn (from the thorium decay series) is about the same as that of 222Rn, the amount of 220Rn in the environment is much less than that of 222Rn because of the short half-life of 220Rn (55 seconds, versus 3.8 days respectively). 
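That scarcity follows directly from the half-lives: if the two isotopes were produced at comparable rates, the steady-state number of atoms of each would be proportional to its mean lifetime. A minimal sketch of the comparison, assuming for illustration an equal production rate for both isotopes (the half-lives are the ones quoted above):

```python
import math

HALF_LIFE_S = {"Rn-222": 3.8 * 24 * 3600, "Rn-220": 55.0}  # values quoted in the text

def steady_state_atoms(production_rate_per_s, half_life_s):
    """Atoms present when production balances decay: N = production_rate / lambda."""
    decay_constant = math.log(2) / half_life_s
    return production_rate_per_s / decay_constant

# Assume, for illustration only, the same production rate for both isotopes.
n222 = steady_state_atoms(1.0, HALF_LIFE_S["Rn-222"])
n220 = steady_state_atoms(1.0, HALF_LIFE_S["Rn-220"])
print(f"Rn-220 / Rn-222 steady-state abundance ratio: {n220 / n222:.1e}")  # ~1.7e-4
```

Under this idealization the environmental inventory of 220Rn would be several thousand times smaller than that of 222Rn, which is the disparity the passage describes.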
Radon concentration in the atmosphere is usually measured in becquerels per cubic meter (Bq/m3), the SI derived unit. Another unit of measurement common in the US is picocuries per liter (pCi/L); 1 pCi/L = 37 Bq/m3. Typical domestic exposures average about 48 Bq/m3 indoors, though this varies widely, and 15 Bq/m3 outdoors. In the mining industry, the exposure is traditionally measured in working levels (WL), and the cumulative exposure in working level months (WLM); 1 WL equals any combination of short-lived 222Rn daughters (218Po, 214Pb, 214Bi, and 214Po) in 1 liter of air that releases 1.3 × 10⁵ MeV of potential alpha energy; 1 WL is equivalent to 2.08 × 10⁻⁵ joules per cubic meter of air (J/m3). The SI unit of cumulative exposure is the joule-hour per cubic meter (J·h/m3). One WLM is equivalent to 3.6 × 10⁻³ J·h/m3. An exposure to 1 WL for 1 working month (170 hours) equals 1 WLM of cumulative exposure. The International Commission on Radiological Protection recommends an annual limit of 4.8 WLM for miners. Assuming 2000 hours of work per year, this corresponds to a concentration of 1500 Bq/m3. 222Rn decays to 210Pb and other radioisotopes. The levels of 210Pb can be measured. The rate of deposition of this radioisotope is weather-dependent. Radon concentrations found in natural environments are much too low to be detected by chemical means. A 1,000 Bq/m3 (relatively high) concentration corresponds to 0.17 picogram per cubic meter (pg/m3). The average concentration of radon in the atmosphere is about 6 × 10⁻¹⁶ molar percent, or about 150 atoms in each milliliter of air. The radon activity of the entire Earth's atmosphere originates from only a few tens of grams of radon, consistently replaced by decay of larger amounts of radium, thorium, and uranium. Natural Radon is produced by the radioactive decay of radium-226, which is found in uranium ores, phosphate rock, shales, igneous and metamorphic rocks such as granite, gneiss, and schist, and to a lesser degree, in common rocks such as limestone. Every square mile of surface soil, to a depth of 6 inches (2.6 km2 to a depth of 15 cm), contains about 1 gram of radium, which releases radon in small amounts to the atmosphere. It is estimated that 2.4 billion curies (90 EBq) of radon are released from soil annually worldwide. Radon concentration can differ widely from place to place. In the open air, it ranges from 1 to 100 Bq/m3, and is even lower (0.1 Bq/m3) above the ocean. In caves, ventilated mines, or poorly ventilated houses, its concentration climbs to 20–2,000 Bq/m3. Radon concentration can be much higher in mining contexts. Ventilation regulations require that radon concentration in uranium mines be kept under the "working level", with 95th percentile levels ranging up to nearly 3 WL (546 pCi of 222Rn per liter of air; 20.2 kBq/m3, measured from 1976 to 1985). The concentration in the air at the (unventilated) Gastein Healing Gallery averages 43 kBq/m3 (1.2 nCi/L) with a maximum value of 160 kBq/m3 (4.3 nCi/L). Radon mostly appears with the radium/uranium series (decay chain) (222Rn), and marginally with the thorium series (220Rn). The element emanates naturally from the ground and some building materials all over the world, wherever traces of uranium or thorium are found, and particularly in regions with soils containing granite or shale, which have a higher concentration of uranium. Not all granitic regions are prone to high emissions of radon.
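The unit relationships used in this section (1 pCi/L = 37 Bq/m3, 0.01 WL per pCi/L of radon in full equilibrium with its daughters, and 170 hours per working level month) can be checked with a few lines of arithmetic. The sketch below is illustrative only: the equilibrium-factor rule of thumb is the one quoted earlier for radon daughters, the 0.41 WL value is simply back-calculated from the 4.8 WLM annual limit over 2000 working hours, and Avogadro's number and the molar mass of 222Rn are standard constants rather than figures from the article.

```python
import math

BQ_PER_M3_PER_PCI_PER_L = 37.0          # 1 pCi/L = 37 Bq/m3 (quoted above)
WL_PER_PCI_PER_L_AT_EQUILIBRIUM = 0.01  # rule of thumb quoted earlier for radon daughters
HOURS_PER_WLM = 170.0                   # 1 WLM = 1 WL sustained for one working month
RN222_HALF_LIFE_S = 3.8 * 24 * 3600
AVOGADRO = 6.022e23                     # standard constants, not from the article
RN222_MOLAR_MASS_G = 222.0

def to_pci_per_l(bq_per_m3):
    return bq_per_m3 / BQ_PER_M3_PER_PCI_PER_L

def working_level(radon_pci_per_l, equilibrium_factor=1.0):
    return radon_pci_per_l * equilibrium_factor * WL_PER_PCI_PER_L_AT_EQUILIBRIUM

def wlm(average_wl, exposed_hours):
    return average_wl * exposed_hours / HOURS_PER_WLM

def mass_pg_per_m3(bq_per_m3):
    """Mass of radon corresponding to an activity concentration, via N = A / lambda."""
    atoms = bq_per_m3 / (math.log(2) / RN222_HALF_LIFE_S)
    return atoms / AVOGADRO * RN222_MOLAR_MASS_G * 1e12

print(to_pci_per_l(43_000))             # ~1160 pCi/L, i.e. the 1.2 nCi/L cited for the Gastein gallery
print(working_level(1.0, 1.0))          # 0.01 WL per pCi/L at full equilibrium
print(working_level(1.0, 0.4))          # 0.004 WL at the typical indoor equilibrium factor
print(wlm(average_wl=0.41, exposed_hours=2000))  # ~4.8 WLM, the ICRP annual limit for miners
print(mass_pg_per_m3(1000))             # ~0.17 pg/m3, matching the figure above
```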
Being a rare gas, it usually migrates freely through faults and fragmented soils, and may accumulate in caves or water. Owing to its very short half-life (four days for Rn), radon concentration decreases very quickly when the distance from the production area increases. Radon concentration varies greatly with season and atmospheric conditions. For instance, it has been shown to accumulate in the air if there is a meteorological inversion and little wind. High concentrations of radon can be found in some spring waters and hot springs. The towns of Boulder, Montana; Misasa; Bad Kreuznach, Germany; and the country of Japan have radium-rich springs that emit radon. To be classified as a radon mineral water, radon concentration must be above 2 nCi/L (74 kBq/m). The activity of radon mineral water reaches 2 MBq/m in Merano and 4 MBq/m in Lurisia (Italy). Natural radon concentrations in the Earth's atmosphere are so low that radon-rich water in contact with the atmosphere will continually lose radon by volatilization. Hence, ground water has a higher concentration of Rn than surface water, because radon is continuously produced by radioactive decay of Ra present in rocks. Likewise, the saturated zone of a soil frequently has a higher radon content than the unsaturated zone because of diffusional losses to the atmosphere. In 1971, Apollo 15 passed above the Aristarchus plateau on the Moon, and detected a significant rise in alpha particles thought to be caused by the decay of Rn. The presence of Rn has been inferred later from data obtained from the Lunar Prospector alpha particle spectrometer. Radon is found in some petroleum. Because radon has a similar pressure and temperature curve to propane, and oil refineries separate petrochemicals based on their boiling points, the piping carrying freshly separated propane in oil refineries can become contaminated because of decaying radon and its products. Residues from the petroleum and natural gas industry often contain radium and its daughters. The sulfate scale from an oil well can be radium rich, while the water, oil, and gas from a well often contains radon. Radon decays to form solid radioisotopes that form coatings on the inside of pipework. Accumulation in buildings Measurement of radon levels in the first decades of its discovery was mainly done to determine the presence of radium and uranium in geological surveys. In 1956, most likely the first indoor survey of radon decay products was performed in Sweden, with the intent of estimating the public exposure to radon and its decay products. From 1975 up until 1984, small studies in Sweden, Austria, the United States and Norway aimed to measure radon indoors and in metropolitan areas. High concentrations of radon in homes were discovered by chance in 1984 after the stringent radiation testing conducted at the new Limerick Generating Station nuclear power plant in Montgomery County, Pennsylvania, United States revealed that Stanley Watras, a construction engineer at the plant, was contaminated by radioactive substances even though the reactor had never been fueled and Watras had been decontaminated each evening. It was determined that radon levels in his home's basement were in excess of 100,000 Bq/m3 (2.7 nCi/L); he was told that living in the home was the equivalent of smoking 135 packs of cigarettes a day, and he and his family had increased their risk of developing lung cancer by 13 or 14 percent. 
The incident dramatized the fact that radon levels in particular dwellings can occasionally be orders of magnitude higher than typical. Since the incident in Pennsylvania, millions of short-term radon measurements have been taken in homes in the United States. Outside the United States, radon measurements are typically performed over the long term. In the United States, typical domestic exposures are of approximately 100 Bq/m3 (2.7 pCi/L) indoors. Some level of radon will be found in all buildings. Radon mostly enters a building directly from the soil through the lowest level in the building that is in contact with the ground. High levels of radon in the water supply can also increase indoor radon air levels. Typical entry points of radon into buildings are cracks in solid foundations and walls, construction joints, gaps in suspended floors and around service pipes, cavities inside walls, and the water supply. Radon concentrations in the same place may differ by double/half over one hour, and the concentration in one room of a building may be significantly different from the concentration in an adjoining room. The distribution of radon concentrations will generally differ from room to room, and the readings are averaged according to regulatory protocols. Indoor radon concentration is usually assumed to follow a log-normal distribution on a given territory. Thus, the geometric mean is generally used for estimating the "average" radon concentration in an area. The mean concentration ranges from less than 10 Bq/m3 to over 100 Bq/m3 in some European countries. Some of the highest radon hazard in the US is found in Iowa and in the Appalachian Mountain areas in southeastern Pennsylvania. Iowa has the highest average radon concentrations in the US due to significant glaciation that ground the granitic rocks from the Canadian Shield and deposited it as soils making up the rich Iowa farmland. Many cities within the state, such as Iowa City, have passed requirements for radon-resistant construction in new homes. The second highest readings in Ireland were found in office buildings in the Irish town of Mallow, County Cork, prompting local fears regarding lung cancer. Since radon is a colorless, odorless gas, the only way to know how much is present in the air or water is to perform tests. In the US, radon test kits are available to the public at retail stores, such as hardware stores, for home use, and testing is available through licensed professionals, who are often home inspectors. Efforts to reduce indoor radon levels are called radon mitigation. In the US, the EPA recommends all houses be tested for radon. In the UK, under the Housing Health & Safety Rating System, property owners have an obligation to evaluate potential risks and hazards to health and safety in a residential property. Alpha-radiation monitoring over the long term is a method of testing for radon that is more common in countries outside the United States. Industrial production Radon is obtained as a by-product of uraniferous ores processing after transferring into 1% solutions of hydrochloric or hydrobromic acids. The gas mixture extracted from the solutions contains , , He, Rn, , and hydrocarbons. The mixture is purified by passing it over copper at to remove the and the , and then KOH and are used to remove the acids and moisture by sorption. Radon is condensed by liquid nitrogen and purified from residue gases by sublimation. 
Radon commercialization is regulated, but it is available in small quantities for the calibration of 222Rn measurement systems. In 2008 it was priced at almost per milliliter of radium solution (which only contains about 15 picograms of actual radon at any given moment). Radon is produced commercially by a solution of radium-226 (half-life of 1,600 years). Radium-226 decays by alpha-particle emission, producing radon that collects over samples of radium-226 at a rate of about 1 mm3/day per gram of radium; equilibrium is quickly achieved and radon is produced in a steady flow, with an activity equal to that of the radium (50 Bq). Gaseous 222Rn (half-life of about four days) escapes from the capsule through diffusion. Concentration scale Applications Medical Hormesis An early-20th-century form of quackery was the treatment of maladies in a radiotorium. It was a small, sealed room for patients to be exposed to radon for its "medicinal effects". The carcinogenic nature of radon due to its ionizing radiation became apparent later. Radon's molecule-damaging radioactivity has been used to kill cancerous cells, but it does not increase the health of healthy cells. The ionizing radiation causes the formation of free radicals, which results in cell damage, causing increased rates of illness, including cancer. Exposure to radon has been suggested to mitigate autoimmune diseases such as arthritis in a process known as radiation hormesis. As a result, in the late 20th century and early 21st century, "health mines" established in Basin, Montana, attracted people seeking relief from health problems such as arthritis through limited exposure to radioactive mine water and radon. The practice is discouraged because of the well-documented ill effects of high doses of radiation on the body. Radioactive water baths have been applied since 1906 in Jáchymov, Czech Republic, but even before radon discovery they were used in Bad Gastein, Austria. Radium-rich springs are also used in traditional Japanese onsen in Misasa, Tottori Prefecture. Drinking therapy is applied in Bad Brambach, Germany, and during the early 20th century, water from springs with radon in them was bottled and sold (this water had little to no radon in it by the time it got to consumers due to radon's short half-life). Inhalation therapy is carried out in Gasteiner-Heilstollen, Austria; Świeradów-Zdrój, Czerniawa-Zdrój, Kowary, Lądek-Zdrój, Poland; Harghita Băi, Romania; and Boulder, Montana. In the US and Europe, there are several "radon spas", where people sit for minutes or hours in a high-radon atmosphere, such as at Bad Schmiedeberg, Germany. Nuclear medicine Radon has been produced commercially for use in radiation therapy, but for the most part has been replaced by radionuclides made in particle accelerators and nuclear reactors. Radon has been used in implantable seeds, made of gold or glass, primarily used to treat cancers, known as brachytherapy. The gold seeds were produced by filling a long tube with radon pumped from a radium source, the tube being then divided into short sections by crimping and cutting. The gold layer keeps the radon within, and filters out the alpha and beta radiations, while allowing the gamma rays to escape (which kill the diseased tissue). The activities might range from 0.05 to 5 millicuries per seed (2 to 200 MBq). The gamma rays are produced by radon and the first short-lived elements of its decay chain (218Po, 214Pb, 214Bi, 214Po). 
After 11 half-lives (42 days), radon radioactivity is at 1/2,048 of its original level. At this stage, the predominant residual activity of the seed originates from the radon decay product 210Pb, whose half-life (22.3 years) is 2,000 times that of radon and its descendants 210Bi and 210Po. 211Rn can be used to generate 211At, which has uses in targeted alpha therapy. Scientific Radon emanation from the soil varies with soil type and with surface uranium content, so outdoor radon concentrations can be used to track air masses to a limited degree. Because of radon's rapid loss to air and comparatively rapid decay, radon is used in hydrologic research that studies the interaction between groundwater and streams. Any significant concentration of radon in a river may be an indicator that there are local inputs of groundwater. Radon soil concentration has been used to map buried close-subsurface geological faults because concentrations are generally higher over the faults. Similarly, it has found some limited use in prospecting for geothermal gradients. Some researchers have investigated changes in groundwater radon concentrations for earthquake prediction. Increases in radon were noted before the 1966 Tashkent and 1994 Mindoro earthquakes. Radon has a half-life of approximately 3.8 days, which means that it can be found only shortly after it has been produced in the radioactive decay chain. For this reason, it has been hypothesized that increases in radon concentration is due to the generation of new cracks underground, which would allow increased groundwater circulation, flushing out radon. The generation of new cracks might not unreasonably be assumed to precede major earthquakes. In the 1970s and 1980s, scientific measurements of radon emissions near faults found that earthquakes often occurred with no radon signal, and radon was often detected with no earthquake to follow. It was then dismissed by many as an unreliable indicator. As of 2009, it was under investigation as a possible earthquake precursor by NASA; further research into the subject has suggested that abnormalities in atmospheric radon concentrations can be an indicator of seismic movement. Radon is a known pollutant emitted from geothermal power stations because it is present in the material pumped from deep underground. It disperses rapidly, and no radiological hazard has been demonstrated in various investigations. In addition, typical systems re-inject the material deep underground rather than releasing it at the surface, so its environmental impact is minimal. In 1989, a survey of the collective dose received due to radon in geothermal fluids was measured at 2 man-sieverts per gigawatt-year of electricity produced, in comparison to the 2.5 man-sieverts per gigawatt-year produced from C emissions in nuclear power plants. In the 1940s and 1950s, radon produced from a radium source was used for industrial radiography. Other X-ray sources such as Co and Ir became available after World War II and quickly replaced radium and thus radon for this purpose, being of lower cost and hazard. Health risks In mines Rn decay products have been classified by the International Agency for Research on Cancer as being carcinogenic to humans, and as a gas that can be inhaled, lung cancer is a particular concern for people exposed to elevated levels of radon for sustained periods. 
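The figures used above for implanted seeds (activity falling to roughly 1/2,048 after 11 half-lives) and the observation that radon is only found close in time to its production both follow from the same exponential decay law. A minimal sketch, using the approximately 3.8-day half-life quoted in the article:

```python
def fraction_remaining(elapsed_days, half_life_days=3.82):
    """Fraction of the original 222Rn activity remaining after a given time."""
    return 0.5 ** (elapsed_days / half_life_days)

print(fraction_remaining(42))   # ~5e-4, i.e. roughly 1/2,048 after about 11 half-lives
print(fraction_remaining(7))    # ~0.28: after a week, under a third of the activity remains
print(fraction_remaining(30))   # ~0.004: why radon signals fade quickly far from their source
```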
During the 1940s and 1950s, when safety standards requiring expensive ventilation in mines were not widely implemented, radon exposure was linked to lung cancer among non-smoking miners of uranium and other hard rock materials in what is now the Czech Republic, and later among miners from the Southwestern US and South Australia. Despite these hazards being known in the early 1950s, this occupational hazard remained poorly managed in many mines until the 1970s. During this period, several entrepreneurs opened former uranium mines in the US to the general public and advertised alleged health benefits from breathing radon gas underground. Health benefits claimed included relief from pain, sinus problems, asthma, and arthritis, but the government banned such advertisements in 1975, and subsequent works have debated the truth of such claimed health effects, citing the documented ill effects of radiation on the body. Since that time, ventilation and other measures have been used to reduce radon levels in most affected mines that continue to operate. In recent years, the average annual exposure of uranium miners has fallen to levels similar to the concentrations inhaled in some homes. This has reduced the risk of occupationally-induced cancer from radon, although health issues may persist for those who are currently employed in affected mines and for those who have been employed in them in the past. As the relative risk for miners has decreased, so has the ability to detect excess risks among that population. Residues from processing of uranium ore can also be a source of radon. Radon resulting from the high radium content in uncovered dumps and tailing ponds can be easily released into the atmosphere and affect people living in the vicinity. The release of radon may be mitigated by covering tailings with soil or clay, though other decay products may leach into groundwater supplies. Non-uranium mines may pose higher risks of radon exposure, as workers are not continuously monitored for radiation, and regulations specific to uranium mines do not apply. A review of radon level measurements across non-uranium mines found the highest concentrations of radon in non-metal mines, such as phosphorus and salt mines. However, older or abandoned uranium mines without ventilation may still have extremely high radon levels. In addition to lung cancer, researchers have theorized a possible increased risk of leukemia due to radon exposure. Empirical support from studies of the general population is inconsistent; a study of uranium miners found a correlation between radon exposure and chronic lymphocytic leukemia, and current research supports a link between indoor radon exposure and poor health outcomes (i.e., an increased risk of lung cancer or childhood leukemia). Legal actions taken by those involved in nuclear industries, including miners, millers, transporters, nuclear site workers, and their respective unions have resulted in compensation for those affected by radon and radiation exposure under programs such as the compensation scheme for radiation-linked diseases (in the United Kingdom) and the Radiation Exposure Compensation Act (in the United States). Domestic-level exposure Radon has been considered the second leading cause of lung cancer in the United States and leading environmental cause of cancer mortality by the EPA, with the first one being smoking. Others have reached similar conclusions for the United Kingdom and France. 
Radon exposure in buildings may arise from subsurface rock formations and certain building materials (e.g., some granites). The greatest risk of radon exposure arises in buildings that are airtight, insufficiently ventilated, and have foundation leaks that allow air from the soil into basements and dwelling rooms. In some regions, such as Niška Banja, Serbia and Ullensvang, Norway, outdoor radon concentrations may be exceptionally high, though compared to indoors, where people spend more time and air is not dispersed and exchanged as often, outdoor exposure to radon is not considered a significant health risk. Radon exposure (mostly radon daughters) has been linked to lung cancer in case-control studies performed in the US, Europe and China. There are approximately 21,000 deaths per year in the US (0.0063% of a population of 333 million) due to radon-induced lung cancers. In Europe, 2% of all cancers have been attributed to radon; in Slovenia in particular, a country with a high concentration of radon, about 120 people (0.0057% of a population of 2.11 million) die yearly because of radon. One of the most comprehensive radon studies performed in the US by epidemiologist R. William Field and colleagues found a 50% increased lung cancer risk even at the protracted exposures at the EPA's action level of 4 pCi/L. North American and European pooled analyses further support these findings. However, the conclusion that exposure to low levels of radon leads to elevated risk of lung cancer has been disputed, and analyses of the literature point towards elevated risk only when radon accumulates indoors and at levels above 100 Bq/m3. Thoron (220Rn) is less studied than Rn in regards to domestic exposure due to its shorter half-life. However, it has been measured at comparatively high concentrations in buildings with earthen architecture, such as traditional half-timbered houses and modern houses with clay wall finishes, and in regions with thorium- and monazite-rich soil and sand. Thoron is a minor contributor to the overall radiation dose received due to indoor radon exposure, and can interfere with Rn measurements when not taken into account. Action and reference level WHO presented in 2009 a recommended reference level (the national reference level), 100 Bq/m3, for radon in dwellings. The recommendation also says that where this is not possible, 300 Bq/m3 should be selected as the highest level. A national reference level should not be a limit, but should represent the maximum acceptable annual average radon concentration in a dwelling. The actionable concentration of radon in a home varies depending on the organization doing the recommendation, for example, the EPA encourages that action be taken at concentrations as low as 74 Bq/m3 (2 pCi/L), and the European Union recommends action be taken when concentrations reach 400 Bq/m3 (11 pCi/L) for old houses and 200 Bq/m3 (5 pCi/L) for new ones. On 8 July 2010, the UK's Health Protection Agency issued new advice setting a "Target Level" of 100 Bq/m3 whilst retaining an "Action Level" of 200 Bq/m3. Similar levels (as in the UK) are published by Norwegian Radiation and Nuclear Safety Authority (DSA) with the maximum limit for schools, kindergartens, and new dwellings set at 200 Bq/m3, where 100 Bq/m3 is set as the action level. Inhalation and smoking Results from epidemiological studies indicate that the risk of lung cancer increases with exposure to residential radon. 
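For comparison, the national and international levels quoted above can be collected in one place. The sketch below simply encodes the figures from this passage; it ignores context such as the distinction between old and new dwellings, and it is an illustration rather than official guidance from any of the bodies named.

```python
# Reference and action levels quoted in the passage, in Bq/m3.
REFERENCE_LEVELS = {
    "WHO reference level": 100,
    "WHO maximum where 100 is not achievable": 300,
    "EPA action encouraged (2 pCi/L)": 74,
    "EU recommendation, new houses": 200,
    "EU recommendation, old houses": 400,
    "UK target level": 100,
    "UK action level": 200,
    "Norway action level": 100,
    "Norway limit (schools, kindergartens, new dwellings)": 200,
}

def levels_exceeded(annual_average_bq_per_m3):
    """Return the quoted levels that a measured annual-average concentration exceeds."""
    return sorted(name for name, limit in REFERENCE_LEVELS.items()
                  if annual_average_bq_per_m3 > limit)

print(levels_exceeded(150))   # above the 74 and 100 Bq/m3 levels, below the 200-400 Bq/m3 ones
print(levels_exceeded(50))    # below every level quoted above
```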
In epidemiological studies of residential radon, a well-known source of error is smoking, the main risk factor for lung cancer. In the US, cigarette smoking is estimated to cause 80% to 90% of all lung cancers. According to the EPA, the risk of lung cancer for smokers is significant due to the synergistic effects of radon and smoking: about 62 of every 1,000 smokers will die of lung cancer, compared to 7 of every 1,000 people who have never smoked. It cannot be excluded that the risk for non-smokers is primarily explained by an effect of radon. Radon, like other known or suspected external risk factors for lung cancer, is a threat for smokers and former smokers. This was demonstrated by the European pooling study. A commentary on the pooling study stated: "it is not appropriate to talk simply of a risk from radon in homes. The risk is from smoking, compounded by a synergistic effect of radon for smokers. Without smoking, the effect seems to be so small as to be insignificant." According to the European pooling study, the risk from radon exposure differs between the histological subtypes of lung cancer. Small-cell lung carcinoma, which has a high correlation with smoking, shows a higher risk after radon exposure. For other histological subtypes, such as adenocarcinoma, the type that primarily affects non-smokers, the risk from radon appears to be lower. A study of radiation from post-mastectomy radiotherapy shows that the simple models previously used to assess the combined and separate risks from radiation and smoking need further development. This is also supported by renewed discussion of the routinely used calculation method, the linear no-threshold model. A study from 2001, which included 436 non-smokers with lung cancer and a control group of 1,649 non-smokers without lung cancer, showed that exposure to radon increased the risk of lung cancer in non-smokers. The group that had been exposed to tobacco smoke in the home appeared to have a much higher risk, while those who were not exposed to passive smoking did not show any increased risk with increasing radon exposure. Absorption and ingestion from water The effects of radon if ingested are unknown, although studies have found that its biological half-life ranges from 30 to 70 minutes, with 90% removal at 100 minutes. In 1999, the US National Research Council investigated the issue of radon in drinking water and considered the risk associated with ingestion to be almost negligible. Water from underground sources may contain significant amounts of radon depending on the surrounding rock and soil conditions, whereas surface sources generally do not. Radon is also released from water when the temperature is increased, the pressure is decreased, and the water is aerated. In domestic settings, the optimum conditions for radon release and exposure from water occur during showering. Water with a radon concentration of 10,000 pCi/L can increase the indoor airborne radon concentration by 1 pCi/L under normal conditions. However, the concentration of radon released from contaminated groundwater to the air has been measured at 5 orders of magnitude less than the original concentration in the water. Radon at the ocean surface exchanges with the atmosphere, producing a flux of 222Rn across the air–sea interface. Although the areas tested were very shallow, additional measurements in a wide variety of coastal regimes should help define the nature of the 222Rn observed. Testing and mitigation There are relatively simple tests for radon gas.
In some countries these tests are methodically done in areas of known systematic hazards. Radon detection devices are commercially available. Digital radon detectors provide ongoing measurements giving both daily, weekly, short-term and long-term average readouts via a digital display. Short-term radon test devices used for initial screening purposes are inexpensive, in some cases free. There are important protocols for taking short-term radon tests and it is imperative that they be strictly followed. The kit includes a collector that the user hangs in the lowest habitable floor of the house for two to seven days. The user then sends the collector to a laboratory for analysis. Long term kits, taking collections for up to one year or more, are also available. An open-land test kit can test radon emissions from the land before construction begins. Radon concentrations can vary daily, and accurate radon exposure estimates require long-term average radon measurements in the spaces where an individual spends a significant amount of time. Radon levels fluctuate naturally, due to factors like transient weather conditions, so an initial test might not be an accurate assessment of a home's average radon level. Radon levels are at a maximum during the coolest part of the day when pressure differentials are greatest. Therefore, a high result (over 4 pCi/L) justifies repeating the test before undertaking more expensive abatement projects. Measurements between 4 and 10 pCi/L warrant a long-term radon test. Measurements over 10 pCi/L warrant only another short-term test so that abatement measures are not unduly delayed. The EPA has advised purchasers of real estate to delay or decline a purchase if the seller has not successfully abated radon to 4 pCi/L or less. Because the half-life of radon is only 3.8 days, removing or isolating the source will greatly reduce the hazard within a few weeks. Another method of reducing radon levels is to modify the building's ventilation. Generally, the indoor radon concentrations increase as ventilation rates decrease. In a well-ventilated place, the radon concentration tends to align with outdoor values (typically 10 Bq/m3, ranging from 1 to 100 Bq/m3). The four principal ways of reducing the amount of radon accumulating in a house are: Sub-slab depressurization (soil suction) by increasing under-floor ventilation; Improving the ventilation of the house and avoiding the transport of radon from the basement into living rooms; Installing a radon sump system in the basement; Installing a positive pressurization or positive supply ventilation system. According to the EPA, the method to reduce radon "...primarily used is a vent pipe system and fan, which pulls radon from beneath the house and vents it to the outside", which is also called sub-slab depressurization, active soil depressurization, or soil suction. Generally indoor radon can be mitigated by sub-slab depressurization and exhausting such radon-laden air to the outdoors, away from windows and other building openings. "[The] EPA generally recommends methods which prevent the entry of radon. Soil suction, for example, prevents radon from entering your home by drawing the radon from below the home and venting it through a pipe, or pipes, to the air above the home where it is quickly diluted" and the "EPA does not recommend the use of sealing alone to reduce radon because, by itself, sealing has not been shown to lower radon levels significantly or consistently". 
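The relationship described above between ventilation and indoor concentration can be illustrated with a simple single-zone mass balance, in which radon enters at some rate and is removed by air exchange and by its own decay. This is a textbook-style approximation rather than anything specified in the article, and the entry rate, room volume, air-exchange rates, and outdoor value used below are hypothetical.

```python
import math

RN222_DECAY_PER_HOUR = math.log(2) / (3.82 * 24)   # ~0.0076 per hour

def steady_state_indoor_radon(entry_rate_bq_per_h, volume_m3,
                              air_changes_per_hour, outdoor_bq_per_m3=10.0):
    """Steady-state indoor radon concentration from a single-zone mass balance.

    Supply (soil-gas entry plus incoming outdoor air) is balanced against removal
    by air exchange and radioactive decay; all input values are hypothetical.
    """
    removal = air_changes_per_hour + RN222_DECAY_PER_HOUR
    supply = entry_rate_bq_per_h / volume_m3 + air_changes_per_hour * outdoor_bq_per_m3
    return supply / removal

for ach in (0.1, 0.5, 2.0):   # tightly sealed -> well ventilated
    c = steady_state_indoor_radon(entry_rate_bq_per_h=5000, volume_m3=250,
                                  air_changes_per_hour=ach)
    print(f"{ach:.1f} air changes per hour -> {c:.0f} Bq/m3")
```

As the air-exchange rate rises, the computed concentration falls toward the outdoor value, which is the behaviour the passage describes for well-ventilated spaces.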
Positive-pressure ventilation systems can be combined with a heat exchanger to recover energy in the process of exchanging air with the outside, and simply exhausting basement air to the outside is not necessarily a viable solution as this can actually draw radon gas into a dwelling. Homes built on a crawl space may benefit from a radon collector installed under a "radon barrier" (a sheet of plastic that covers the crawl space). For crawl spaces, the EPA states that "[a]n effective method to reduce radon levels in crawl space homes involves covering the earth floor with a high-density plastic sheet. A vent pipe and fan are used to draw the radon from under the sheet and vent it to the outdoors. This form of soil suction is called submembrane suction, and when properly applied is the most effective way to reduce radon levels in crawl space homes." See also International Radon Project Lucas cell Pleochroic halo (aka radiohalo) Radiation Exposure Compensation Act Notes References External links Radon at the United States Environmental Protection Agency Global Radon Map Radon at The Periodic Table of Videos (University of Nottingham) Radon and Lung Health from the American Lung Association The Geology of Radon, James K. Otton, Linda C.S. Gundersen, and R. Randall Schumann Home Buyer's and Seller's Guide to Radon An article by the International Association of Certified Home Inspectors (InterNACHI) Toxicological Profile for Radon, Draft for Public Comment, Agency for Toxic Substances and Disease Registry, September 2008 Chemical elements Hazardous materials Noble gases Building biology Soil contamination IARC Group 1 carcinogens Carcinogens Industrial gases
Radon
[ "Physics", "Chemistry", "Materials_science", "Technology", "Engineering", "Environmental_science" ]
11,737
[ "Noble gases", "Toxicology", "Chemical elements", "Building engineering", "Environmental chemistry", "Nonmetals", "Building biology", "Materials", "Soil contamination", "Industrial gases", "Chemical process engineering", "Carcinogens", "Hazardous materials", "Atoms", "Matter" ]
25,758
https://en.wikipedia.org/wiki/RNA
Ribonucleic acid (RNA) is a polymeric molecule that is essential for most biological functions, either by performing the function itself (non-coding RNA) or by forming a template for the production of proteins (messenger RNA). RNA and deoxyribonucleic acid (DNA) are nucleic acids. The nucleic acids constitute one of the four major macromolecules essential for all known forms of life. RNA is assembled as a chain of nucleotides. Cellular organisms use messenger RNA (mRNA) to convey genetic information (using the nitrogenous bases of guanine, uracil, adenine, and cytosine, denoted by the letters G, U, A, and C) that directs synthesis of specific proteins. Many viruses encode their genetic information using an RNA genome. Some RNA molecules play an active role within cells by catalyzing biological reactions, controlling gene expression, or sensing and communicating responses to cellular signals. One of these active processes is protein synthesis, a universal function in which RNA molecules direct the synthesis of proteins on ribosomes. This process uses transfer RNA (tRNA) molecules to deliver amino acids to the ribosome, where ribosomal RNA (rRNA) then links amino acids together to form coded proteins. It has become widely accepted in science that early in the history of life on Earth, prior to the evolution of DNA and possibly of protein-based enzymes as well, an "RNA world" existed in which RNA served as both living organisms' storage method for genetic information—a role fulfilled today by DNA, except in the case of RNA viruses—and potentially performed catalytic functions in cells—a function performed today by protein enzymes, with the notable and important exception of the ribosome, which is a ribozyme. Chemical structure of RNA Basic chemical composition Each nucleotide in RNA contains a ribose sugar, with carbons numbered 1' through 5'. A base is attached to the 1' position, in general, adenine (A), cytosine (C), guanine (G), or uracil (U). Adenine and guanine are purines, and cytosine and uracil are pyrimidines. A phosphate group is attached to the 3' position of one ribose and the 5' position of the next. The phosphate groups have a negative charge each, making RNA a charged molecule (polyanion). The bases form hydrogen bonds between cytosine and guanine, between adenine and uracil and between guanine and uracil. However, other interactions are possible, such as a group of adenine bases binding to each other in a bulge, or the GNRA tetraloop that has a guanine–adenine base-pair. Differences between DNA and RNA The chemical structure of RNA is very similar to that of DNA, but differs in three primary ways: Unlike double-stranded DNA, RNA is usually a single-stranded molecule (ssRNA) in many of its biological roles and consists of much shorter chains of nucleotides. However, double-stranded RNA (dsRNA) can form and (moreover) a single RNA molecule can, by complementary base pairing, form intrastrand double helixes, as in tRNA. While the sugar-phosphate "backbone" of DNA contains deoxyribose, RNA contains ribose instead. Ribose has a hydroxyl group attached to the pentose ring in the 2' position, whereas deoxyribose does not. The hydroxyl groups in the ribose backbone make RNA more chemically labile than DNA by lowering the activation energy of hydrolysis. The complementary base to adenine in DNA is thymine, whereas in RNA, it is uracil, which is an unmethylated form of thymine. 
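The complementarity rules just described (adenine pairing with uracil in RNA rather than with thymine, and guanine with cytosine) are what allow an mRNA sequence to be read off a DNA template strand. A minimal sketch, with a purely hypothetical template sequence:

```python
DNA_TO_RNA_COMPLEMENT = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe_template_strand(template_dna_5_to_3):
    """Return the mRNA (written 5'->3') synthesized from a DNA template strand.

    The template is given 5'->3'; the polymerase reads it 3'->5', pairing each
    DNA base with its RNA complement (A-U, T-A, G-C, C-G).
    """
    return "".join(DNA_TO_RNA_COMPLEMENT[base] for base in reversed(template_dna_5_to_3))

# Hypothetical template strand, written 5'->3'; purely illustrative.
template = "TACGGATTC"
print(transcribe_template_strand(template))   # GAAUCCGUA
```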
Like DNA, most biologically active RNAs, including mRNA, tRNA, rRNA, snRNAs, and other non-coding RNAs, contain self-complementary sequences that allow parts of the RNA to fold and pair with itself to form double helices. Analysis of these RNAs has revealed that they are highly structured. Unlike DNA, their structures do not consist of long double helices, but rather collections of short helices packed together into structures akin to proteins. In this fashion, RNAs can achieve chemical catalysis (like enzymes). For instance, determination of the structure of the ribosome—an RNA-protein complex that catalyzes the assembly of proteins—revealed that its active site is composed entirely of RNA. An important structural component of RNA that distinguishes it from DNA is the presence of a hydroxyl group at the 2' position of the ribose sugar. The presence of this functional group causes the helix to mostly take the A-form geometry, although in single strand dinucleotide contexts, RNA can rarely also adopt the B-form most commonly observed in DNA. The A-form geometry results in a very deep and narrow major groove and a shallow and wide minor groove. A second consequence of the presence of the 2'-hydroxyl group is that in conformationally flexible regions of an RNA molecule (that is, not involved in formation of a double helix), it can chemically attack the adjacent phosphodiester bond to cleave the backbone. Secondary and tertiary structures The functional form of single-stranded RNA molecules, just like proteins, frequently requires a specific spatial tertiary structure. The scaffold for this structure is provided by secondary structural elements that are hydrogen bonds within the molecule. This leads to several recognizable "domains" of secondary structure like hairpin loops, bulges, and internal loops. In order to create, i.e., design, RNA for any given secondary structure, two or three bases would not be enough, but four bases are enough. This is likely why nature has "chosen" a four base alphabet: fewer than four would not allow the creation of all structures, while more than four bases are not necessary to do so. Since RNA is charged, metal ions such as Mg2+ are needed to stabilise many secondary and tertiary structures. The naturally occurring enantiomer of RNA is D-RNA composed of D-ribonucleotides. All chirality centers are located in the D-ribose. By the use of L-ribose or rather L-ribonucleotides, L-RNA can be synthesized. L-RNA is much more stable against degradation by RNase. Like other structured biopolymers such as proteins, one can define topology of a folded RNA molecule. This is often done based on arrangement of intra-chain contacts within a folded RNA, termed as circuit topology. Chemical modifications RNA is transcribed with only four bases (adenine, cytosine, guanine and uracil), but these bases and attached sugars can be modified in numerous ways as the RNAs mature. Pseudouridine (Ψ), in which the linkage between uracil and ribose is changed from a C–N bond to a C–C bond, and ribothymidine (T) are found in various places (the most notable ones being in the TΨC loop of tRNA). Another notable modified base is hypoxanthine, a deaminated adenine base whose nucleoside is called inosine (I). Inosine plays a key role in the wobble hypothesis of the genetic code. There are more than 100 other naturally occurring modified nucleosides. 
The greatest structural diversity of modifications can be found in tRNA, while pseudouridine and nucleosides with 2'-O-methylribose often present in rRNA are the most common. The specific roles of many of these modifications in RNA are not fully understood. However, it is notable that, in ribosomal RNA, many of the post-transcriptional modifications occur in highly functional regions, such as the peptidyl transferase center and the subunit interface, implying that they are important for normal function. Types of RNA Messenger RNA (mRNA) is the type of RNA that carries information from DNA to the ribosome, the sites of protein synthesis (translation) in the cell cytoplasm. The coding sequence of the mRNA determines the amino acid sequence in the protein that is produced. However, many RNAs do not code for protein (about 97% of the transcriptional output is non-protein-coding in eukaryotes). These so-called non-coding RNAs ("ncRNA") can be encoded by their own genes (RNA genes), but can also derive from mRNA introns. The most prominent examples of non-coding RNAs are transfer RNA (tRNA) and ribosomal RNA (rRNA), both of which are involved in the process of translation. There are also non-coding RNAs involved in gene regulation, RNA processing and other roles. Certain RNAs are able to catalyse chemical reactions such as cutting and ligating other RNA molecules, and the catalysis of peptide bond formation in the ribosome; these are known as ribozymes. According to the length of RNA chain, RNA includes small RNA and long RNA. Usually, small RNAs are shorter than 200 nt in length, and long RNAs are greater than 200 nt long. Long RNAs, also called large RNAs, mainly include long non-coding RNA (lncRNA) and mRNA. Small RNAs mainly include 5.8S ribosomal RNA (rRNA), 5S rRNA, transfer RNA (tRNA), microRNA (miRNA), small interfering RNA (siRNA), small nucleolar RNA (snoRNAs), Piwi-interacting RNA (piRNA), tRNA-derived small RNA (tsRNA) and small rDNA-derived RNA (srRNA). There are certain exceptions as in the case of the 5S rRNA of the members of the genus Halococcus (Archaea), which have an insertion, thus increasing its size. RNAs involved in protein synthesis Messenger RNA (mRNA) carries information about a protein sequence to the ribosomes, the protein synthesis factories in the cell. It is coded so that every three nucleotides (a codon) corresponds to one amino acid. In eukaryotic cells, once precursor mRNA (pre-mRNA) has been transcribed from DNA, it is processed to mature mRNA. This removes its introns—non-coding sections of the pre-mRNA. The mRNA is then exported from the nucleus to the cytoplasm, where it is bound to ribosomes and translated into its corresponding protein form with the help of tRNA. In prokaryotic cells, which do not have nucleus and cytoplasm compartments, mRNA can bind to ribosomes while it is being transcribed from DNA. After a certain amount of time, the message degrades into its component nucleotides with the assistance of ribonucleases. Transfer RNA (tRNA) is a small RNA chain of about 80 nucleotides that transfers a specific amino acid to a growing polypeptide chain at the ribosomal site of protein synthesis during translation. It has sites for amino acid attachment and an anticodon region for codon recognition that binds to a specific sequence on the messenger RNA chain through hydrogen bonding. Ribosomal RNA (rRNA) is the catalytic component of the ribosomes. The rRNA is the component of the ribosome that hosts translation. 
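The rule stated above, that each successive nucleotide triplet (codon) of an mRNA coding sequence specifies one amino acid, can be shown with a toy translation routine. Only a handful of entries from the 64-codon genetic code are included below; a real implementation would use the full table, and the coding sequence is an invented example.
<syntaxhighlight lang="python">
# Illustration of codon-by-codon reading of an mRNA coding sequence. Only a
# few codons of the standard genetic code are listed here.

CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "AAA": "Lys",
    "UAA": "Stop", "UAG": "Stop", "UGA": "Stop",
}

def translate(cds: str) -> list[str]:
    peptide = []
    for i in range(0, len(cds) - 2, 3):
        aa = CODON_TABLE.get(cds[i:i + 3], "Xaa")   # Xaa: codon not in toy table
        if aa == "Stop":
            break
        peptide.append(aa)
    return peptide

print(translate("AUGUUUGGCAAAUAA"))   # -> ['Met', 'Phe', 'Gly', 'Lys']
</syntaxhighlight>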
Eukaryotic ribosomes contain four different rRNA molecules: 18S, 5.8S, 28S and 5S rRNA. Three of the rRNA molecules are synthesized in the nucleolus, and one is synthesized elsewhere. In the cytoplasm, ribosomal RNA and protein combine to form a nucleoprotein called a ribosome. The ribosome binds mRNA and carries out protein synthesis. Several ribosomes may be attached to a single mRNA at any time. Nearly all the RNA found in a typical eukaryotic cell is rRNA. Transfer-messenger RNA (tmRNA) is found in many bacteria and plastids. It tags proteins encoded by mRNAs that lack stop codons for degradation and prevents the ribosome from stalling.
Regulatory RNA
The earliest known regulators of gene expression were proteins known as repressors and activators – regulators with specific short binding sites within enhancer regions near the genes to be regulated. Later studies have shown that RNAs also regulate genes. There are several kinds of RNA-dependent processes in eukaryotes regulating the expression of genes at various points, such as RNAi repressing genes post-transcriptionally, long non-coding RNAs shutting down blocks of chromatin epigenetically, and enhancer RNAs inducing increased gene expression. Bacteria and archaea have also been shown to use regulatory RNA systems such as bacterial small RNAs and CRISPR. Andrew Fire and Craig Mello were awarded the 2006 Nobel Prize in Physiology or Medicine for discovering RNA interference, gene silencing mediated by specific short RNA molecules that base-pair with mRNAs.
MicroRNA (miRNA) and small interfering RNA (siRNA)
Post-transcriptional expression levels of many genes can be controlled by RNA interference, in which miRNAs, specific short RNA molecules, pair with mRNA regions and target them for degradation. This antisense-based process involves steps that first process the RNA so that it can base-pair with a region of its target mRNAs. Once the base pairing occurs, other proteins direct the mRNA to be destroyed by nucleases.
Long non-coding RNAs
Next to be linked to regulation were Xist and other long noncoding RNAs associated with X chromosome inactivation. Their roles, at first mysterious, were shown by Jeannie T. Lee and others to be the silencing of blocks of chromatin via recruitment of the Polycomb complex so that messenger RNA could not be transcribed from them. Additional lncRNAs, currently defined as RNAs of more than 200 nucleotides that do not appear to have coding potential, have been found associated with regulation of stem cell pluripotency and cell division.
Enhancer RNAs
The third major group of regulatory RNAs is called enhancer RNAs. It is not clear at present whether they are a unique category of RNAs of various lengths or constitute a distinct subset of lncRNAs. In any case, they are transcribed from enhancers, which are known regulatory sites in the DNA near genes they regulate. They up-regulate the transcription of the gene(s) under control of the enhancer from which they are transcribed.
Small RNA in prokaryotes
Small RNA
At first, regulatory RNA was thought to be a eukaryotic phenomenon, a part of the explanation for why so much more transcription in higher organisms was seen than had been predicted. But as soon as researchers began to look for possible RNA regulators in bacteria, they turned up there as well, termed small RNAs (sRNA). Currently, the ubiquitous nature of systems of RNA regulation of genes has been discussed as support for the RNA World theory.
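The antisense pairing described above for miRNAs and siRNAs (and, in the next section, for bacterial small RNAs) amounts to searching a target mRNA for stretches complementary to the small RNA. The sketch below looks only for matches to the miRNA "seed", commonly taken as nucleotides 2–8; real target prediction weighs many additional features. The miRNA shown resembles a let-7 family sequence and the UTR fragment is invented, both purely for illustration.
<syntaxhighlight lang="python">
# Sketch of antisense target recognition: find sites in an mRNA complementary
# to a miRNA "seed" (positions 2-8 of the miRNA). A deliberate simplification
# of how real target prediction works.

COMP = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    return "".join(COMP[b] for b in reversed(rna))

def seed_matches(mirna: str, mrna: str) -> list[int]:
    """Return 0-based positions in mrna that match the seed's complement."""
    site = reverse_complement(mirna[1:8])   # seed = miRNA nucleotides 2-8
    return [i for i in range(len(mrna) - len(site) + 1)
            if mrna[i:i + len(site)] == site]

mirna = "UGAGGUAGUAGGUUGUAUAGUU"    # let-7-like sequence, for illustration only
utr = "AAACUACCUCAAAAGCUACCUCAA"    # hypothetical 3' UTR fragment
print(seed_matches(mirna, utr))     # -> [3, 15]
</syntaxhighlight>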
There are indications that the enterobacterial sRNAs are involved in various cellular processes and seem to have a significant role in stress responses such as membrane stress, starvation stress, phosphosugar stress and DNA damage. It has also been suggested that sRNAs evolved to play an important role in stress responses because their kinetic properties allow for rapid response and stabilisation of the physiological state. Bacterial small RNAs generally act via antisense pairing with mRNA to down-regulate its translation, either by affecting stability or affecting cis-binding ability. Riboswitches have also been discovered. They are cis-acting regulatory RNA sequences acting allosterically: they change shape when they bind metabolites, and the resulting conformational change typically forms or disrupts a transcription terminator, or exposes or occludes the ribosome-binding site, thereby switching expression of the downstream gene on or off.
CRISPR RNA
Archaea also have systems of regulatory RNA. The CRISPR system, now also used to edit DNA in situ, acts via regulatory RNAs in archaea and bacteria to provide protection against virus invaders.
RNA synthesis and processing
Synthesis
Synthesis of RNA typically occurs in the cell nucleus and is usually catalyzed by an enzyme, RNA polymerase, using DNA as a template, a process known as transcription. Initiation of transcription begins with the binding of the enzyme to a promoter sequence in the DNA (usually found "upstream" of a gene). The DNA double helix is unwound by the helicase activity of the enzyme. The enzyme then progresses along the template strand in the 3' to 5' direction, synthesizing a complementary RNA molecule with elongation occurring in the 5' to 3' direction. The DNA sequence also dictates where termination of RNA synthesis will occur. Primary transcript RNAs are often modified by enzymes after transcription. For example, a poly(A) tail and a 5' cap are added to eukaryotic pre-mRNA and introns are removed by the spliceosome. There are also a number of RNA-dependent RNA polymerases that use RNA as their template for synthesis of a new strand of RNA. For instance, a number of RNA viruses (such as poliovirus) use this type of enzyme to replicate their genetic material. Also, RNA-dependent RNA polymerase is part of the RNA interference pathway in many organisms.
RNA processing
Many RNAs are involved in modifying other RNAs. Introns are spliced out of pre-mRNA by spliceosomes, which contain several small nuclear RNAs (snRNA), or the introns can be ribozymes that are spliced by themselves. RNA can also be altered by having its nucleotides modified to nucleotides other than A, C, G and U. In eukaryotes, modifications of RNA nucleotides are in general directed by small nucleolar RNAs (snoRNA; 60–300 nt), found in the nucleolus and Cajal bodies. snoRNAs associate with enzymes and guide them to a spot on an RNA by base-pairing to that RNA. These enzymes then perform the nucleotide modification. rRNAs and tRNAs are extensively modified, but snRNAs and mRNAs can also be the target of base modification. RNA can also be methylated.
RNA in genetics
RNA genomes
Like DNA, RNA can carry genetic information. RNA viruses have genomes composed of RNA that encodes a number of proteins. The viral genome is replicated by some of those proteins, while other proteins protect the genome as the virus particle moves to a new host cell. Viroids are another group of pathogens, but they consist only of RNA, do not encode any protein and are replicated by a host plant cell's polymerase.
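The eukaryotic processing steps described above (intron removal, 5' capping and polyadenylation) can be caricatured as follows. Intron boundaries are supplied as explicit coordinates rather than being recognised from GU...AG splice-site signals, and the pre-mRNA sequence is hypothetical.
<syntaxhighlight lang="python">
# Sketch of pre-mRNA processing: splice out introns given as half-open
# [start, end) coordinates, then add a 5' cap marker and a poly(A) tail.
# Coordinates and sequence are invented for illustration.

def process_pre_mrna(pre: str, introns: list[tuple[int, int]], tail_len: int = 10) -> str:
    exons, prev = [], 0
    for start, end in sorted(introns):
        exons.append(pre[prev:start])
        prev = end
    exons.append(pre[prev:])
    return "m7G-" + "".join(exons) + "A" * tail_len   # cap shown as a text marker

pre_mrna = "AUGGCCGUAAGUUUUCAGGCGUGA"   # exon1 + GU...AG intron + exon2
print(process_pre_mrna(pre_mrna, introns=[(6, 18)]))
# prints the spliced mRNA "AUGGCCGCGUGA" with a cap marker and 10-nt poly(A) tail
</syntaxhighlight>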
Reverse transcription Reverse transcribing viruses replicate their genomes by reverse transcribing DNA copies from their RNA; these DNA copies are then transcribed to new RNA. Retrotransposons also spread by copying DNA and RNA from one another, and telomerase contains an RNA that is used as template for building the ends of eukaryotic chromosomes. Double-stranded RNA Double-stranded RNA (dsRNA) is RNA with two complementary strands, similar to the DNA found in all cells, but with the replacement of thymine by uracil and the adding of one oxygen atom. dsRNA forms the genetic material of some viruses (double-stranded RNA viruses). Double-stranded RNA, such as viral RNA or siRNA, can trigger RNA interference in eukaryotes, as well as interferon response in vertebrates. In eukaryotes, double-stranded RNA (dsRNA) plays a role in the activation of the innate immune system against viral infections. Circular RNA In the late 1970s, it was shown that there is a single stranded covalently closed, i.e. circular form of RNA expressed throughout the animal and plant kingdom (see circRNA). circRNAs are thought to arise via a "back-splice" reaction where the spliceosome joins a upstream 3' acceptor to a downstream 5' donor splice site. So far the function of circRNAs is largely unknown, although for few examples a microRNA sponging activity has been demonstrated. Key discoveries in RNA biology Research on RNA has led to many important biological discoveries and numerous Nobel Prizes. Nucleic acids were discovered in 1868 by Friedrich Miescher, who called the material 'nuclein' since it was found in the nucleus. It was later discovered that prokaryotic cells, which do not have a nucleus, also contain nucleic acids. The role of RNA in protein synthesis was suspected already in 1939. Severo Ochoa won the 1959 Nobel Prize in Medicine (shared with Arthur Kornberg) after he discovered an enzyme that can synthesize RNA in the laboratory. However, the enzyme discovered by Ochoa (polynucleotide phosphorylase) was later shown to be responsible for RNA degradation, not RNA synthesis. In 1956 Alex Rich and David Davies hybridized two separate strands of RNA to form the first crystal of RNA whose structure could be determined by X-ray crystallography. The sequence of the 77 nucleotides of a yeast tRNA was found by Robert W. Holley in 1965, winning Holley the 1968 Nobel Prize in Medicine (shared with Har Gobind Khorana and Marshall Nirenberg). In the early 1970s, retroviruses and reverse transcriptase were discovered, showing for the first time that enzymes could copy RNA into DNA (the opposite of the usual route for transmission of genetic information). For this work, David Baltimore, Renato Dulbecco and Howard Temin were awarded a Nobel Prize in 1975. In 1976, Walter Fiers and his team determined the first complete nucleotide sequence of an RNA virus genome, that of bacteriophage MS2. In 1977, introns and RNA splicing were discovered in both mammalian viruses and in cellular genes, resulting in a 1993 Nobel to Philip Sharp and Richard Roberts. Catalytic RNA molecules (ribozymes) were discovered in the early 1980s, leading to a 1989 Nobel award to Thomas Cech and Sidney Altman. In 1990, it was found in Petunia that introduced genes can silence similar genes of the plant's own, now known to be a result of RNA interference. At about the same time, 22 nt long RNAs, now called microRNAs, were found to have a role in the development of C. elegans. 
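The reverse transcription described at the start of this section, copying an RNA template into complementary DNA, reduces in outline to taking the antiparallel DNA complement of the RNA. The sketch below omits all enzymatic detail (primers, RNase H digestion of the template, second-strand synthesis) and uses an invented sequence.
<syntaxhighlight lang="python">
# Sketch of reverse transcription: an RNA template is copied into a
# complementary DNA (cDNA) strand. Enzymatic steps are deliberately omitted.

RNA_TO_CDNA = {"A": "T", "U": "A", "G": "C", "C": "G"}

def reverse_transcribe(rna_5to3: str) -> str:
    """Return the cDNA strand written 5'->3' (antiparallel to the RNA)."""
    return "".join(RNA_TO_CDNA[b] for b in reversed(rna_5to3))

print(reverse_transcribe("AUGGCCUUU"))   # -> AAAGGCCAT
</syntaxhighlight>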
Studies on RNA interference earned a Nobel Prize for Andrew Fire and Craig Mello in 2006, and another Nobel for studies on the transcription of RNA to Roger Kornberg in the same year. The discovery of gene regulatory RNAs has led to attempts to develop drugs made of RNA, such as siRNA, to silence genes. Adding to the Nobel prizes for research on RNA, in 2009 it was awarded for the elucidation of the atomic structure of the ribosome to Venki Ramakrishnan, Thomas A. Steitz, and Ada Yonath. In 2023 the Nobel Prize in Physiology or Medicine was awarded to Katalin Karikó and Drew Weissman for their discoveries concerning modified nucleosides that enabled the development of effective mRNA vaccines against COVID-19. Relevance for prebiotic chemistry and abiogenesis In 1968, Carl Woese hypothesized that RNA might be catalytic and suggested that the earliest forms of life (self-replicating molecules) could have relied on RNA both to carry genetic information and to catalyze biochemical reactions—an RNA world. In May 2022, scientists discovered that RNA can form spontaneously on prebiotic basalt lava glass, presumed to have been abundant on the early Earth. In March 2015, DNA and RNA nucleobases, including uracil, cytosine and thymine, were reportedly formed in the laboratory under outer space conditions, using starter chemicals such as pyrimidine, an organic compound commonly found in meteorites. Pyrimidine, like polycyclic aromatic hydrocarbons (PAHs), is one of the most carbon-rich compounds found in the universe and may have been formed in red giants or in interstellar dust and gas clouds. In July 2022, astronomers reported massive amounts of prebiotic molecules, including possible RNA precursors, in the galactic center of the Milky Way Galaxy. Medical applications RNA, initially deemed unsuitable for therapeutics due to its short half-life, has been made useful through advances in stabilization. Therapeutic applications arise as RNA folds into complex conformations and binds proteins, nucleic acids, and small molecules to form catalytic centers. RNA-based vaccines are thought to be easier to produce than traditional vaccines derived from killed or altered pathogens, because it can take months or years to grow and study a pathogen and determine which molecular parts to extract, inactivate, and use in a vaccine. Small molecules with conventional therapeutic properties can target RNA and DNA structures, thereby treating novel diseases. However, research is scarce on small molecules targeting RNA and approved drugs for human illness. Ribavirin, branaplam, and ataluren are currently available medications that stabilize double-stranded RNA structures and control splicing in a variety of disorders. Protein-coding mRNAs have emerged as new therapeutic candidates, with RNA replacement being particularly beneficial for brief but torrential protein expression. In vitro transcribed mRNAs (IVT-mRNA) have been used to deliver proteins for bone regeneration, pluripotency, and heart function in animal models. SiRNAs, short RNA molecules, play a crucial role in innate defense against viruses and chromatin structure. They can be artificially introduced to silence specific genes, making them valuable for gene function studies, therapeutic target validation, and drug development. mRNA vaccines have emerged as an important new class of vaccines, using mRNA to manufacture proteins which provoke an immune response. 
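The use of siRNAs to silence specific genes, mentioned above, in practice involves choosing a target region within the mRNA. The sketch below applies one commonly cited rule of thumb, a moderate GC content in the 21-nucleotide target window; the 30–52% range, the window length and the target sequence are assumptions for illustration only, not a validated design protocol.
<syntaxhighlight lang="python">
# Illustrative siRNA candidate picker: slide a 21-nt window along a target
# mRNA, keep windows with moderate GC content (range used here is a heuristic
# treated as an assumption), and report the antisense "guide" strand for each.

COMP = {"A": "U", "U": "A", "G": "C", "C": "G"}

def gc_fraction(seq: str) -> float:
    return sum(b in "GC" for b in seq) / len(seq)

def sirna_candidates(mrna: str, low: float = 0.30, high: float = 0.52) -> list[tuple[int, str]]:
    hits = []
    for i in range(len(mrna) - 20):
        window = mrna[i:i + 21]
        if low <= gc_fraction(window) <= high:
            guide = "".join(COMP[b] for b in reversed(window))   # antisense strand
            hits.append((i, guide))
    return hits

target = "AUGGCUAAACGUAUUGCUGAUAAGCUUACGGAUUAA"   # hypothetical mRNA fragment
for pos, guide in sirna_candidates(target)[:3]:
    print(pos, guide)
</syntaxhighlight>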
Their first successful large-scale application came in the form of COVID-19 vaccines during the COVID-19 pandemic.
See also
Biomolecular structure
RNA virus
DNA
History of RNA Biology
List of RNA Biologists
RNA Society
Macromolecule
RNA-based evolution
Aptamer
RNA origami
Transcriptome
RNA world hypothesis
References
External links
RNA World website
Link collection (structures, sequences, tools, journals)
Nucleic Acid Database
Images of DNA, RNA, and complexes
Anna Marie Pyle's Seminar: RNA Structure, Function, and Recognition
RNA splicing
Molecular biology
Biotechnology
Nucleic acids
RNA
[ "Chemistry", "Biology" ]
5,509
[ "Biomolecules by chemical classification", "Biotechnology", "nan", "Molecular biology", "Biochemistry", "Nucleic acids" ]
25,765
https://en.wikipedia.org/wiki/RNA%20world
The RNA world is a hypothetical stage in the evolutionary history of life on Earth in which self-replicating RNA molecules proliferated before the evolution of DNA and proteins. The term also refers to the hypothesis that posits the existence of this stage. Alexander Rich first proposed the concept of the RNA world in 1962, and Walter Gilbert coined the term in 1986. Among the characteristics of RNA that suggest its original prominence are that: Like DNA, RNA can store and replicate genetic information. Although RNA is considerably more fragile than DNA, some ancient RNAs may have evolved the ability to methylate other RNAs to protect them. The concurrent formation of all four RNA building blocks further strengthens the hypothesis. Enzymes made of RNA (ribozymes) can catalyze (start or accelerate) chemical reactions that are critical for life, so it is conceivable that in an RNA world, ribozymes might have preceded enzymes made of protein. Many coenzymes that have fundamental roles in cellular life, such as acetyl-CoA, NADH, FADH, and F420, are structurally strikingly similar to RNA and so may be surviving remnants of covalently bound coenzymes in an RNA world. One of the most critical components of cells, the ribosome, is composed primarily of RNA. Although alternative chemical paths to life have been proposed, and RNA-based life may not have been the first life to exist, the RNA world hypothesis seems to be the most favored abiogenesis paradigm. However, even proponents agree that there is still not conclusive evidence to completely falsify other paradigms and hypotheses. Regardless of its plausibility in a prebiotic scenario, the RNA world can serve as a model system for studying the origin of life. If the RNA world existed, it was probably followed by an age characterized by the evolution of ribonucleoproteins (RNP world), which in turn ushered in the era of DNA and longer proteins. DNA has greater stability and durability than RNA, which may explain why it became the predominant information storage molecule. Protein enzymes may have replaced RNA-based ribozymes as biocatalysts because the greater abundance and diversity of the monomers of which they are built makes them more versatile. As some cofactors contain both nucleotide and amino-acid characteristics, it may be that amino acids, peptides, and finally proteins initially were cofactors for ribozymes. History One of the challenges in studying abiogenesis is that the system of reproduction and metabolism utilized by all extant life involves three distinct types of interdependent macromolecules (DNA, RNA, and proteins). This suggests that life could not have arisen in its current form, which has led researchers to hypothesize mechanisms whereby the current system might have arisen from a simpler precursor system. American molecular biologist Alexander Rich was the first to posit a coherent hypothesis on the origin of nucleotides as precursors of life. In an article he contributed to a volume issued in honor of Nobel-laureate physiologist Albert Szent-Györgyi, he explained that the primitive Earth's environment could have produced RNA molecules (polynucleotide monomers) that eventually acquired enzymatic and self-replicating functions. Other mentions of RNA as a primordial molecule can be found in papers by Francis Crick and Leslie Orgel, as well as in Carl Woese's 1967 book The Genetic Code. 
Hans Kuhn in 1972 laid out a possible process by which the modern genetic system might have arisen from a nucleotide-based precursor, and this led Harold White in 1976 to observe that many of the cofactors essential for enzymatic function are either nucleotides or could have been derived from nucleotides. He proposed a scenario whereby the critical electrochemistry of enzymatic reactions would have necessitated retention of the specific nucleotide moieties of the original RNA-based enzymes carrying out the reactions, while the remaining structural elements of the enzymes were gradually replaced by protein, until all that remained of the original RNAs were these nucleotide cofactors, "fossils of nucleic acid enzymes". Properties of RNA The properties of RNA make the idea of the RNA world hypothesis conceptually plausible, though its general acceptance as an explanation for the origin of life requires further evidence. RNA is known to form efficient catalysts, and its similarity to DNA makes clear its ability to store information. Opinions differ, however, as to whether RNA constituted the first autonomous self-replicating system or was a derivative of a still-earlier system. One version of the hypothesis is that a different type of nucleic acid, termed pre-RNA, was the first one to emerge as a self-reproducing molecule, to be replaced by RNA only later. On the other hand, the discovery in 2009 that activated pyrimidine ribonucleotides can be synthesized under plausible prebiotic conditions suggests that it is premature to dismiss the RNA-first scenarios. Suggestions for 'simple' pre-RNA nucleic acids have included peptide nucleic acid (PNA), threose nucleic acid (TNA) or glycol nucleic acid (GNA). Despite their structural simplicity and possession of properties comparable with RNA, the chemically plausible generation of "simpler" nucleic acids under prebiotic conditions has yet to be demonstrated. RNA as an enzyme In the 1980s, RNA structures capable of self-processing were discovered, with the RNA moiety of RNase P acting as its catalytic subunit. These catalytic RNAs were referred to as RNA enzymes, or ribozymes, are found in today's DNA-based life and could be examples of living fossils. Ribozymes play vital roles, such as that of the ribosome. The large subunit of the ribosome includes an rRNA responsible for the peptide bond-forming peptidyl transferase activity of protein synthesis. Many other ribozyme activities exist; for example, the hammerhead ribozyme performs self-cleavage and an RNA polymerase ribozyme can synthesize a short RNA strand from a primed RNA template. Among the enzymatic properties important for the beginning of life are: Self-replication The ability to self-replicate or synthesize other RNA molecules; relatively short RNA molecules that can synthesize others have been artificially produced in the lab. The shortest was 165 bases long, though it has been estimated that only part of the molecule was crucial for this function. One version, 189 bases long, had an error rate of just 1.1% per nucleotide when synthesizing an 11-nucleotide long RNA strand from primed template strands. This 189-base pair ribozyme could polymerize a template of at most 14 nucleotides in length, which is too short for self-replication, but is a potential lead for further investigation. The longest primer extension performed by a ribozyme polymerase was 20 bases. 
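The 189-base ribozyme described above copies RNA with an error rate of about 1.1% per nucleotide. Under the simplifying assumption that errors at each position are independent, the chance of producing a perfect copy falls off exponentially with template length, which illustrates why polymerase fidelity is a central obstacle to self-replication. The calculation below uses only the figures quoted above.
<syntaxhighlight lang="python">
# Given a per-nucleotide error rate p, the chance of copying a length-L
# template without any error is (1 - p)**L, assuming independent errors.
# p = 1.1% is the ribozyme error rate quoted above.

def error_free_fraction(p: float, length: int) -> float:
    return (1.0 - p) ** length

p = 0.011
for length in (11, 100, 189):
    print(length, round(error_free_fraction(p, length), 3))
# -> roughly 0.885, 0.331 and 0.124: ~89% of 11-nt copies are perfect, but
# only ~12% of copies of a ribozyme-sized 189-nt template would be error-free.
</syntaxhighlight>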
In 2016, researchers reported the use of in vitro evolution to improve dramatically the activity and generality of an RNA polymerase ribozyme by selecting variants that can synthesize functional RNA molecules from an RNA template. Each RNA polymerase ribozyme was engineered to remain linked to its new, synthesized RNA strand; this allowed the team to isolate successful polymerases. The isolated RNA polymerases were again used for another round of evolution. After several rounds of evolution, they obtained one RNA polymerase ribozyme called 24-3 that was able to copy almost any other RNA, from small catalysts to long RNA-based enzymes. Particular RNAs were amplified up to 10,000 times, a first RNA version of the polymerase chain reaction (PCR). Catalysis The ability to catalyze simple chemical reactions—which would enhance creation of molecules that are building blocks of RNA molecules (i.e., a strand of RNA that would make creating more strands of RNA easier). Relatively short RNA molecules with such abilities have been artificially formed in the lab. A recent study showed that almost any nucleic acid can evolve into a catalytic sequence under appropriate selection. For instance, an arbitrarily chosen 50-nucleotide DNA fragment encoding for the Bos taurus (cattle) albumin mRNA was subjected to test-tube evolution to derive a catalytic DNA (Deoxyribozyme, also called DNAzyme) with RNA-cleavage activity. After only a few weeks, a DNAzyme with significant catalytic activity had evolved. In general, DNA is much more chemically inert than RNA and hence much more resistant to obtaining catalytic properties. If in vitro evolution works for DNA it will happen much more easily with RNA. In 2022, Nick Lane and coauthors showed in a computational simulation that short RNA sequences could have been capable of catalyzing fixation which supported protocell replication and growth. Amino acid-RNA ligation The ability to conjugate an amino acid to the 3'-end of an RNA in order to use its chemical groups or provide a long-branched aliphatic sidechain. Peptide bond formation The ability to catalyse the formation of peptide bonds between amino acids to produce short peptides or longer proteins. This is done in modern cells by ribosomes, a complex of several RNA molecules known as rRNA together with many proteins. The rRNA molecules are thought responsible for its enzymatic activity, as no amino-acid residues lie within 18Å of the enzyme's active site, and, when the majority of the amino-acid residues in the ribosome were stringently removed, the resulting ribosome retained its full peptidyl transferase activity, fully able to catalyze the formation of peptide bonds between amino acids. A pseudo 2 fold symmetry of the region surrounding the peptidyl transferase center led to the hypothesis of the Proto-Ribosome, that a vestige of an ancient dimeric molecule from the RNA world is functioning within the ribosome. An RNA molecule with the ribosomal RNA sequence has been synthesized in the lab to test the Proto-ribosome hypothesis and was able to dimerize and to form peptide bonds. A much shorter RNA molecule has been synthesized in the laboratory with the ability to form peptide bonds, and it has been suggested that rRNA has evolved from a similar molecule. It has also been suggested that amino acids may have initially been involved with RNA molecules as cofactors enhancing or diversifying their enzymatic capabilities, before evolving into more complex peptides. 
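The selection experiments described above repeatedly mutate a pool of ribozymes and retain the variants that perform best. The toy loop below mimics only that mutate-and-select structure; "fitness" is just similarity to an arbitrary invented motif, whereas the real experiments selected for catalytic activity, which this sketch does not model.
<syntaxhighlight lang="python">
# Toy mutate-and-select loop echoing in vitro evolution. Fitness is defined as
# similarity to an arbitrary target motif, purely for illustration.
import random

random.seed(0)
BASES = "ACGU"
TARGET = "GGAUCCGAUC"                      # arbitrary 10-nt motif

def mutate(seq: str, rate: float = 0.05) -> str:
    return "".join(random.choice(BASES) if random.random() < rate else b for b in seq)

def fitness(seq: str) -> int:
    return sum(a == b for a, b in zip(seq, TARGET))

pool = ["".join(random.choice(BASES) for _ in TARGET) for _ in range(200)]
for generation in range(15):
    pool.sort(key=fitness, reverse=True)
    survivors = pool[:20]                  # keep the best 10%
    pool = [mutate(random.choice(survivors)) for _ in range(200)]

print(max(fitness(s) for s in pool), "of", len(TARGET), "positions match after selection")
</syntaxhighlight>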
Similarly, tRNA is suggested to have evolved from RNA molecules that began to catalyze amino acid transfer. Cofactors Protein enzymes catalyze various chemical reactions, but over half of them incorporate cofactors to facilitate and diversify their catalytic activities. Cofactors are essential in biology, as they are based largely on nucleotides rather than amino acids. Ribozymes use nucleotide cofactors to create metabolism, with two basic choices: non-covalent binding or covalent attachment. Both approaches have been demonstrated using directed evolution to reinvent RNA dupes of protein-catalyzed processes. Lorsch and Szostak investigated ribozymes that could phosphorylate themselves and use ATP-γS as a substrate. However, only one of the seven classes of selected ribozymes had detectable ATP affinity, indicating that the ability to bind ATP was compromised. NAD+- dependent redox ribozymes were also evaluated. The select ribozyme had a rate of enhancement of more than 107 fold and was proven to catalyze the reverse reaction - benzaldehyde reduction by NADH. Since the usage of adenosine as a cofactor is prevalent in current metabolism and is likely to have been common in the RNA world, these discoveries are essential for the evolution of metabolism in the RNA world. RNA in information storage RNA is a very similar molecule to DNA, with only two significant chemical differences (the backbone of RNA uses ribose instead of deoxyribose and its nucleobases include uracil instead of thymine). The overall structure of RNA and DNA are immensely similar—one strand of DNA and one of RNA can bind to form a double helical structure. This makes the storage of information in RNA possible in a very similar way to the storage of information in DNA. However, RNA is less stable, being more prone to hydrolysis due to the presence of a hydroxyl group at the ribose 2' position. Comparison of DNA and RNA structure The major difference between RNA and DNA is the presence of a hydroxyl group at the 2'-position of the ribose sugar in RNA (illustration, right). This group makes the molecule less stable because, when not constrained in a double helix, the 2' hydroxyl can chemically attack the adjacent phosphodiester bond to cleave the phosphodiester backbone. The hydroxyl group also forces the ribose into the C3'-endo sugar conformation unlike the C2'-endo conformation of the deoxyribose sugar in DNA. This forces an RNA double helix to change from a B-DNA structure to one more closely resembling A-DNA. RNA also uses a different set of bases than DNA—adenine, guanine, cytosine and uracil, instead of adenine, guanine, cytosine and thymine. Chemically, uracil is similar to thymine, differing only by a methyl group, and its production requires less energy. In terms of base pairing, this has no effect. Adenine readily binds uracil or thymine. Uracil is, however, one product of damage to cytosine that makes RNA particularly susceptible to mutations that can replace a GC base pair with a GU (wobble) or AU base pair. RNA is thought to have preceded DNA, because of their ordering in the biosynthetic pathways. The deoxyribonucleotides used to make DNA are made from ribonucleotides, the building blocks of RNA, by removing the 2'-hydroxyl group. As a consequence, a cell must have the ability to make RNA before it can make DNA. 
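As noted above, deamination of cytosine to uracil can convert a G:C pair into a G:U wobble pair. The short sketch below re-classifies the pairs of a small duplex before and after every C in one strand is deaminated; the sequences are hypothetical and the blanket C-to-U conversion is an exaggeration used only to make the effect visible.
<syntaxhighlight lang="python">
# Illustration of the point above: C -> U deamination turns G:C pairs into
# G:U wobble pairs. Sequences are invented.

WATSON_CRICK = {("G", "C"), ("C", "G"), ("A", "U"), ("U", "A")}
WOBBLE = {("G", "U"), ("U", "G")}

def classify_pairs(strand: str, partner: str) -> dict:
    counts = {"canonical": 0, "wobble": 0, "mismatch": 0}
    for a, b in zip(strand, reversed(partner)):   # antiparallel duplex
        if (a, b) in WATSON_CRICK:
            counts["canonical"] += 1
        elif (a, b) in WOBBLE:
            counts["wobble"] += 1
        else:
            counts["mismatch"] += 1
    return counts

top = "GGCACUC"
bottom = "GAGUGCC"                                     # perfect complement of top
print(classify_pairs(top, bottom))                     # -> 7 canonical pairs
print(classify_pairs(top.replace("C", "U"), bottom))   # -> 4 canonical, 3 wobble
</syntaxhighlight>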
Limitations of information storage in RNA The chemical properties of RNA make large RNA molecules inherently fragile, and they can easily be broken down into their constituent nucleotides through hydrolysis. These limitations do not make use of RNA as an information storage system impossible, simply energy intensive (to repair or replace damaged RNA molecules) and prone to mutation. While this makes it unsuitable for current 'DNA optimised' life, it may have been acceptable for more primitive life. RNA as a regulator Riboswitches have been found to act as regulators of gene expression, particularly in bacteria, but also in plants and archaea. Riboswitches alter their secondary structure in response to the binding of a metabolite. Riboswitch classes have highly conserved aptamer domains, even among diverse organisms. When a target metabolite is bound to this aptamer, conformational changes occur, modulating the expression of genes carried by mRNA. These changes occur in an expression platform, located downstream from the aptamer. This change in structure can result in the formation or disruption of a terminator, truncating or permitting transcription respectively. Alternatively, riboswitches may bind or occlude the Shine–Dalgarno sequence, affecting translation. It has been suggested that these originated in an RNA-based world. In addition, RNA thermometers regulate gene expression in response to temperature changes. Support and difficulties The RNA world hypothesis is supported by RNA's ability to do all three of to store, to transmit, and to duplicate genetic information, as DNA does, and to perform enzymatic reactions, like protein-based enzymes. Because it can carry out the types of tasks now performed by proteins and DNA, RNA is believed to have once been capable of supporting independent life on its own. Some viruses use RNA as their genetic material, rather than DNA. Further, while nucleotides were not found in experiments based on Miller-Urey experiment, their formation in prebiotically plausible conditions was reported in 2009; a purine base, adenine, is merely a pentamer of hydrogen cyanide, and it happens that this particular base is used as omnipresent energy vehicle in the cell: adenosine triphosphate is used everywhere in preference to guanosine triphosphate, cytidine triphosphate, uridine triphosphate or even deoxythymidine triphosphate, which could serve just as well but are practically never used except as building blocks for nucleic acid chains. Experiments with basic ribozymes, like Bacteriophage Qβ RNA, have shown that simple self-replicating RNA structures can withstand even strong selective pressures (e.g., opposite-chirality chain terminators). Since there were no known chemical pathways for the abiogenic synthesis of nucleotides from pyrimidine nucleobases cytosine and uracil under prebiotic conditions, it is thought by some that nucleic acids did not contain these nucleobases seen in life's nucleic acids. The nucleoside cytosine has a half-life in isolation of 19 days at and 17,000 years in freezing water, which some argue is too short on the geologic time scale for accumulation. Others have questioned whether ribose and other backbone sugars could be stable enough to be found in the original genetic material, and have raised the issue that all ribose molecules would have had to be the same enantiomer, as any nucleotide of the wrong chirality acts as a chain terminator. 
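The stability figures quoted above, for example a cytosine half-life of 19 days under the cited conditions, translate into surviving fractions by ordinary first-order decay: the fraction remaining after time t is 0.5 raised to the power t divided by the half-life. The times in the sketch below are arbitrary.
<syntaxhighlight lang="python">
# First-order decay applied to the 19-day cytosine half-life quoted above
# (valid only under the harsh conditions cited there).

def surviving_fraction(t_days: float, t_half_days: float) -> float:
    return 0.5 ** (t_days / t_half_days)

for t in (19, 190, 365):
    print(t, "days:", f"{surviving_fraction(t, 19):.2e}")
# 19 days -> 0.5; 190 days -> ~1e-3; one year -> ~1.6e-6 of the molecules left
</syntaxhighlight>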
Pyrimidine ribonucleosides and their respective nucleotides have been prebiotically synthesised by a sequence of reactions that by-pass free sugars and assemble in a stepwise fashion by including nitrogenous and oxygenous chemistries. In a series of publications, John Sutherland and his team at the School of Chemistry, University of Manchester, have demonstrated high yielding routes to cytidine and uridine ribonucleotides built from small 2- and 3-carbon fragments such as glycolaldehyde, glyceraldehyde or glyceraldehyde-3-phosphate, cyanamide, and cyanoacetylene. One of the steps in this sequence allows the isolation of enantiopure ribose aminooxazoline if the enantiomeric excess of glyceraldehyde is 60% or greater, of possible interest toward biological homochirality. This can be viewed as a prebiotic purification step, where the said compound spontaneously crystallised out from a mixture of the other pentose aminooxazolines. Aminooxazolines can react with cyanoacetylene in a mild and highly efficient manner, controlled by inorganic phosphate, to give the cytidine ribonucleotides. Photoanomerization with UV light allows for inversion about the 1' anomeric centre to give the correct beta stereochemistry; one problem with this chemistry is the selective phosphorylation of alpha-cytidine at the 2' position. However, in 2009, they showed that the same simple building blocks allow access, via phosphate controlled nucleobase elaboration, to 2',3'-cyclic pyrimidine nucleotides directly, which are known to be able to polymerise into RNA. Organic chemist Donna Blackmond described this finding as "strong evidence" in favour of the RNA world. However, John Sutherland said that while his team's work suggests that nucleic acids played an early and central role in the origin of life, it did not necessarily support the RNA world hypothesis in the strict sense, which he described as a "restrictive, hypothetical arrangement". The Sutherland group's 2009 paper also highlighted the possibility for the photo-sanitization of the pyrimidine-2',3'-cyclic phosphates. A potential weakness of these routes is the generation of enantioenriched glyceraldehyde, or its 3-phosphate derivative (glyceraldehyde prefers to exist as its keto tautomer dihydroxyacetone). On August 8, 2011, a report, based on NASA studies with meteorites found on Earth, was published suggesting building blocks of RNA (adenine, guanine, and related organic molecules) may have been formed in outer space. In 2017, research using a numerical model suggested that a RNA world may have emerged in warm ponds on the early Earth, and that meteorites were a plausible and probable source of the RNA building blocks (ribose and nucleic acids) to these environments. On August 29, 2012, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422, which is located 400 light years from Earth. Because glycolaldehyde is needed to form RNA, this finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation. Nitriles, key molecular precursors of the RNA World scenario, are among the most abundant chemical families in the universe and have been found in molecular clouds in the center of the Milky Way, protostars of different masses, meteorites and comets, and also in the atmosphere of Titan, the largest moon of Saturn. 
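The route above requires glyceraldehyde with an enantiomeric excess of 60% or greater. Enantiomeric excess is simply the difference between the amounts of the two enantiomers divided by their sum, expressed as a percentage; the 80:20 mixture below is an invented example used only to show the calculation.
<syntaxhighlight lang="python">
# Enantiomeric excess (ee), the quantity referred to above:
# ee = |major - minor| / (major + minor), as a percentage.

def enantiomeric_excess(major: float, minor: float) -> float:
    return 100.0 * abs(major - minor) / (major + minor)

ee = enantiomeric_excess(80, 20)   # an 80:20 mixture of the two enantiomers
print(ee, "% ee ->", "meets" if ee >= 60 else "below", "the 60% threshold cited above")
</syntaxhighlight>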
A study in 2001 shows that nicotinic acid and its precursor, quinolinic acid can be "produced in yields as high as 7% in a six-step nonenzymatic sequence from aspartic acid and dihydroxyacetone phosphate (DHAP). The biosynthesis of ribose phosphate could have produced DHAP and other three carbon compounds. Aspartic acid could have been available from prebiotic synthesis or from the ribozyme synthesis of pyrimidines." This supports that NAD could have originated in the RNA world. RNA sequences at lengths of 30 nucleotides, 60 nucleotides, 100 nucleotides, and 140 nucleotides, were capable of catalysis of "the synthesis of three common coenzymes, CoA, NAD, and FAD, from their precursors, 4‘-phosphopantetheine, NMN, and FMN, respectively". Prebiotic RNA synthesis Nucleotides are the fundamental molecules that combine in series to form RNA. They consist of a nitrogenous base attached to a sugar-phosphate backbone. RNA is made of long stretches of specific nucleotides arranged so that their sequence of bases carries information. The RNA world hypothesis holds that in the primordial soup (or sandwich), there existed free-floating nucleotides. These nucleotides regularly formed bonds with one another, which often broke because the change in energy was so low. However, certain sequences of base pairs have catalytic properties that lower the energy of their chain being created, enabling them to stay together for longer periods of time. As each chain grew longer, it attracted more matching nucleotides faster, causing chains to now form faster than they were breaking down. These chains have been proposed by some as the first, primitive forms of life. In an RNA world, different sets of RNA strands would have had different replication outputs, which would have increased or decreased their frequency in the population, i.e., natural selection. As the fittest sets of RNA molecules expanded their numbers, novel catalytic properties added by mutation, which benefitted their persistence and expansion, could accumulate in the population. Such an autocatalytic set of ribozymes, capable of self-replication in about an hour, has been identified. It was produced by molecular competition (in vitro evolution) of candidate enzyme mixtures. Competition between RNA may have favored the emergence of cooperation between different RNA chains, opening the way for the formation of the first protocell. Eventually, RNA chains developed with catalytic properties that help amino acids bind together (a process called peptide-bonding). These amino acids could then assist with RNA synthesis, giving those RNA chains that could serve as ribozymes the selective advantage. The ability to catalyze one step in protein synthesis, aminoacylation of RNA, has been demonstrated in a short (five-nucleotide) segment of RNA. In March 2015, NASA scientists reported that, for the first time, complex DNA and RNA organic compounds of life, including uracil, cytosine, and thymine, have been formed in the laboratory under conditions found only in outer space, using starting chemicals, like pyrimidine, found in meteorites. Pyrimidine, like polycyclic aromatic hydrocarbons (PAHs), may have been formed in red giant stars or in interstellar dust and gas clouds, according to the scientists. In 2018, researchers at Georgia Institute of Technology identified three molecular candidates for the bases that might have formed an earliest version of proto-RNA: barbituric acid, melamine, and 2,4,6-triaminopyrimidine (TAP). 
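The scenario sketched above, in which chains with sequence-dependent replication rates change their frequency in the population, can be caricatured with two competing replicators. The growth rates below are invented solely to show how a modest rate advantage comes to dominate the pool; no chemistry is modelled.
<syntaxhighlight lang="python">
# Toy illustration of selection among replicators: species with different
# per-cycle replication rates change in relative frequency over time.

species = {"with_catalytic_motif": 1.0, "without_motif": 1.0}   # initial amounts
rates = {"with_catalytic_motif": 1.3, "without_motif": 1.1}     # invented growth rates

for cycle in range(1, 21):
    for name in species:
        species[name] *= rates[name]
    total = sum(species.values())
    if cycle % 5 == 0:
        freqs = {k: round(v / total, 3) for k, v in species.items()}
        print(f"cycle {cycle}: {freqs}")
# The faster replicator's frequency rises toward 1 even though both keep growing.
</syntaxhighlight>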
These three molecules are simpler versions of the four bases in current RNA, which could have been present in larger amounts and could still be forward-compatible with them but may have been discarded by evolution in exchange for more optimal base pairs. Specifically, TAP can form nucleotides with a large range of sugars. Both TAP and melamine base pair with barbituric acid. All three spontaneously form nucleotides with ribose. Evolution of DNA One of the challenges posed by the RNA world hypothesis is to discover the pathway by which an RNA-based system transitioned to one based on DNA. Geoffrey Diemer and Ken Stedman, at Portland State University in Oregon, may have found a solution. While conducting a survey of viruses in a hot acidic lake in Lassen Volcanic National Park, California, they uncovered evidence that a simple DNA virus had acquired a gene from a completely unrelated RNA-based virus. Virologist Luis Villareal of the University of California Irvine also suggests that viruses capable of converting an RNA-based gene into DNA and then incorporating it into a more complex DNA-based genome might have been common in the virus world during the RNA to DNA transition some 4 billion years ago. This finding bolsters the argument for the transfer of information from the RNA world to the emerging DNA world before the emergence of the last universal common ancestor. From the research, the diversity of this virus world is still with us. Viroids Additional evidence supporting the concept of an RNA world has resulted from research on viroids, the first representatives of a novel domain of "subviral pathogens". Viroids infect plants, where most are pathogens, and consist of short stretches of highly complementary, circular, single-stranded and non-coding RNA without a protein coat. They are extremely small, ranging from 246 to 467 nucleobases, compared to the smallest known viruses capable of causing an infection, with genomes about 2,000 nucleobases in length. Based on their characteristic properties, in 1989 plant biologist Theodor Diener argued that viroids are more plausible living relics of the RNA world than introns and other RNAs considered candidates at the time. Diener's hypothesis would be expanded by the research group of Ricardo Flores, and gained a broader audience when in 2014, a New York Times science writer published a popularized version of the proposal. The characteristics of viroids highlighted as consistent with an RNA world were their small size, high guanine and cytosine content, circular structure, structural periodicity, the lack of protein-coding ability and, in some cases, ribozyme-mediated replication. One aspect critics of the hypothesis have focused on is that the exclusive hosts of all known viroids, angiosperms, did not evolve until billions of years after the RNA world was replaced, making viroids more likely to have arisen through later evolutionary mechanisms unrelated to the RNA world than to have survived via a cryptic host over that extended period. Whether they are relics of that world or of more recent origin, their function as autonomous naked RNA is seen as analogous to that envisioned for an RNA world. Origin of sexual reproduction Eigen et al. and Woese proposed that the genomes of early protocells were composed of single-stranded RNA, and that individual genes corresponded to separate RNA segments, rather than being linked end-to-end as in present-day DNA genomes. 
A protocell that was haploid (one copy of each RNA gene) would be vulnerable to damage, since a single lesion in any RNA segment would be potentially lethal to the protocell (e.g., by blocking replication or inhibiting the function of an essential gene). Vulnerability to damage could be reduced by maintaining two or more copies of each RNA segment in each protocell, i.e., by maintaining diploidy or polyploidy. Genome redundancy would allow a damaged RNA segment to be replaced by an additional replication of its homolog. However, for such a simple organism, the proportion of available resources tied up in the genetic material would be a large fraction of the total resource budget. Under limited resource conditions, the protocell reproductive rate would likely be inversely related to ploidy number. The protocell's fitness would be reduced by the costs of redundancy. Consequently, coping with damaged RNA genes while minimizing the costs of redundancy would likely have been a fundamental problem for early protocells. A cost-benefit analysis was carried out in which the costs of maintaining redundancy were balanced against the costs of genome damage. This analysis led to the conclusion that, under a wide range of circumstances, the selected strategy would be for each protocell to be haploid, but to periodically fuse with another haploid protocell to form a transient diploid. The retention of the haploid state maximizes the growth rate. The periodic fusions permit mutual reactivation of otherwise lethally damaged protocells. If at least one damage-free copy of each RNA gene is present in the transient diploid, viable progeny can be formed. For two, rather than one, viable daughter cells to be produced would require an extra replication of the intact RNA gene homologous to any RNA gene that had been damaged prior to the division of the fused protocell. The cycle of haploid reproduction, with occasional fusion to a transient diploid state, followed by splitting to the haploid state, can be considered to be the sexual cycle in its most primitive form. In the absence of this sexual cycle, haploid protocells with damage in an essential RNA gene would simply die. This model for the early sexual cycle is hypothetical, but it is very similar to the known sexual behavior of the segmented RNA viruses, which are among the simplest organisms known. Influenza virus, whose genome consists of 8 physically separated single-stranded RNA segments, is an example of this type of virus. In segmented RNA viruses, "mating" can occur when a host cell is infected by at least two virus particles. If these viruses each contain an RNA segment with a lethal damage, multiple infection can lead to reactivation providing that at least one undamaged copy of each virus gene is present in the infected cell. This phenomenon is known as "multiplicity reactivation". Multiplicity reactivation has been reported to occur in influenza virus infections after induction of RNA damage by UV-irradiation, and ionizing radiation. Further developments Patrick Forterre has been working on a novel hypothesis, called "three viruses, three domains": that viruses were instrumental in the transition from RNA to DNA and the evolution of Bacteria, Archaea, and Eukaryota. He believes the last universal common ancestor was RNA-based and evolved RNA viruses. Some of the viruses evolved into DNA viruses to protect their genes from attack. Through the process of viral infection into hosts the three domains of life evolved. 
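Multiplicity reactivation, as described above for influenza virus with its 8 genome segments, succeeds when the co-infecting particles together supply at least one undamaged copy of every segment. Assuming each segment is damaged independently with the same probability (an idealisation), the probability of reactivation can be computed directly; the 40% damage rate below is arbitrary and purely illustrative.
<syntaxhighlight lang="python">
# Probability of multiplicity reactivation under simple assumptions: each of
# n_particles co-infecting virions carries n_segments segments, each segment
# independently damaged with probability p_damage. Infection is productive if
# every segment has at least one undamaged copy among the particles.

def reactivation_probability(n_segments: int, n_particles: int, p_damage: float) -> float:
    p_segment_ok = 1.0 - p_damage ** n_particles   # at least one good copy
    return p_segment_ok ** n_segments

for k in (1, 2, 3, 5):
    print(k, "particles:", round(reactivation_probability(8, k, 0.4), 3))
# With 40% per-segment damage, a single particle rarely suffices (~0.017),
# but co-infection by a few particles makes reactivation likely.
</syntaxhighlight>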
Another interesting proposal is the idea that RNA synthesis might have been driven by temperature gradients, in the process of thermosynthesis. Single nucleotides have been shown to catalyze organic reactions. Steven Benner has argued that chemical conditions on the planet Mars, such as the presence of boron, molybdenum, and oxygen, may have been better for initially producing RNA molecules than those on Earth. If so, life-suitable molecules, originating on Mars, may have later migrated to Earth via mechanisms of panspermia or similar process. Alternative hypotheses The hypothesized existence of an RNA world does not exclude a "Pre-RNA world", where a metabolic system based on a different nucleic acid is proposed to pre-date RNA. A candidate nucleic acid is peptide nucleic acid (PNA), which uses simple peptide bonds to link nucleobases. PNA is more stable than RNA, but its ability to be generated under prebiological conditions has yet to be demonstrated experimentally. Threose nucleic acid (TNA) or glycol nucleic acid (GNA) have also been proposed as a starting point, and like PNA, also lack experimental evidence for their respective abiogenesis. An alternative—or complementary—theory of RNA origin is proposed in the PAH world hypothesis, whereby polycyclic aromatic hydrocarbons (PAHs) mediate the synthesis of RNA molecules. PAHs are the most common and abundant of the known polyatomic molecules in the visible Universe and are a likely constituent of the primordial sea. PAHs and fullerenes (also implicated in the origin of life) have been detected in nebulae. The iron-sulfur world theory proposes that simple metabolic processes developed before genetic materials did, and these energy-producing cycles catalyzed the production of genes. Some of the difficulties of producing the precursors on earth are bypassed by another alternative or complementary theory for their origin, panspermia. It discusses the possibility that the earliest life on this planet was carried here from somewhere else in the galaxy, possibly on meteorites similar to the Murchison meteorite. Sugar molecules, including ribose, have been found in meteorites. Panspermia does not invalidate the concept of an RNA world, but posits that this world or its precursors originated not on Earth but rather another, probably older, planet. The relative chemical complexity of the nucleotide and the unlikelihood of it spontaneously arising, along with the limited number of combinations possible among four base forms, as well as the need for RNA polymers of some length before seeing enzymatic activity, have led some to reject the RNA world hypothesis in favor of a metabolism-first hypothesis, where the chemistry underlying cellular function arose first, along with the ability to replicate and facilitate this metabolism. RNA-peptide coevolution Another proposal is that the dual-molecule system we see today, where a nucleotide-based molecule is needed to synthesize protein, and a peptide-based (protein) molecule is needed to make nucleic acid polymers, represents the original form of life. This theory is called RNA-peptide coevolution, or the Peptide-RNA world, and offers a possible explanation for the rapid evolution of high-quality replication in RNA (since proteins are catalysts), with the disadvantage of having to postulate the coincident formation of two complex molecules, an enzyme (from peptides) and a RNA (from nucleotides). 
In this Peptide-RNA World scenario, RNA would have contained the instructions for life, while peptides (simple protein enzymes) would have accelerated key chemical reactions to carry out those instructions. The study leaves open the question of exactly how those primitive systems managed to replicate themselves — something neither the RNA World hypothesis nor the Peptide-RNA World theory can yet explain, unless polymerases (enzymes that rapidly assemble the RNA molecule) played a role. A research project completed in March 2015 by the Sutherland group found that a network of reactions beginning with hydrogen cyanide and hydrogen sulfide, in streams of water irradiated by UV light, could produce the chemical components of proteins and lipids, alongside those of RNA. The researchers used the term "cyanosulfidic" to describe this network of reactions. In November 2017, a team at the Scripps Research Institute identified reactions involving the compound diamidophosphate which could have linked the chemical components into short peptide and lipid chains as well as short RNA-like chains of nucleotides. Implications The RNA world hypothesis, if true, has important implications for the definition of life and the origin of life. For most of the time that followed Franklin, Watson and Crick's elucidation of DNA structure in 1953, life was largely defined in terms of DNA and proteins: DNA and proteins seemed the dominant macromolecules in the living cell, with RNA only aiding in creating proteins from the DNA blueprint. The RNA world hypothesis places RNA at center-stage when life originated. The RNA world hypothesis is supported by the observations that ribosomes are ribozymes: the catalytic site is composed of RNA, and proteins hold no major structural role and are of peripheral functional importance. This was confirmed with the deciphering of the 3-dimensional structure of the ribosome in 2001. Specifically, peptide bond formation, the reaction that binds amino acids together into proteins, is now known to be catalyzed by an adenine residue in the rRNA. RNAs are known to play roles in other cellular catalytic processes, specifically in the targeting of enzymes to specific RNA sequences. In eukaryotes, the processing of pre-mRNA and RNA editing take place at sites determined by the base pairing between the target RNA and RNA constituents of small nuclear ribonucleoproteins (snRNPs). Such enzyme targeting is also responsible for gene down regulation through RNA interference (RNAi), where an enzyme-associated guide RNA targets specific mRNA for selective destruction. Likewise, in eukaryotes the maintenance of telomeres involves copying of an RNA template that is a constituent part of the telomerase ribonucleoprotein enzyme. Another cellular organelle, the vault, includes a ribonucleoprotein component, although the function of this organelle remains to be elucidated. See also GADV-protein world hypothesis The Major Transitions in Evolution RNA-based evolution Protocell or Pre-cell, the primordial version of a cell which confined RNA and later, DNA First universal common ancestor (FUCA) References Further reading External links Biological hypotheses Origin of life Prebiotic chemistry RNA 1962 in biology
RNA world
[ "Chemistry", "Biology" ]
8,093
[ "Biological hypotheses", "Origin of life", "Prebiotic chemistry" ]
25,766
https://en.wikipedia.org/wiki/Ribosome
Ribosomes () are macromolecular machines, found within all cells, that perform biological protein synthesis (messenger RNA translation). Ribosomes link amino acids together in the order specified by the codons of messenger RNA molecules to form polypeptide chains. Ribosomes consist of two major components: the small and large ribosomal subunits. Each subunit consists of one or more ribosomal RNA molecules and many ribosomal proteins (). The ribosomes and associated molecules are also known as the translational apparatus. Overview The sequence of DNA that encodes the sequence of the amino acids in a protein is transcribed into a messenger RNA (mRNA) chain. Ribosomes bind to the messenger RNA molecules and use the RNA's sequence of nucleotides to determine the sequence of amino acids needed to generate a protein. Amino acids are selected and carried to the ribosome by transfer RNA (tRNA) molecules, which enter the ribosome and bind to the messenger RNA chain via an anticodon stem loop. For each coding triplet (codon) in the messenger RNA, there is a unique transfer RNA that must have the exact anti-codon match, and carries the correct amino acid for incorporating into a growing polypeptide chain. Once the protein is produced, it can then fold to produce a functional three-dimensional structure. A ribosome is made from complexes of RNAs and proteins and is therefore a ribonucleoprotein complex. In prokaryotes each ribosome is composed of small (30S) and large (50S) components, called subunits, which are bound to each other: (30S) has mainly a decoding function and is also bound to the mRNA (50S) has mainly a catalytic function and is also bound to the aminoacylated tRNAs. The synthesis of proteins from their building blocks takes place in four phases: initiation, elongation, termination, and recycling. The start codon in all mRNA molecules has the sequence AUG. The stop codon is one of UAA, UAG, or UGA; since there are no tRNA molecules that recognize these codons, the ribosome recognizes that translation is complete. When a ribosome finishes reading an mRNA molecule, the two subunits separate and are usually broken up but can be reused. Ribosomes are a kind of enzyme, called ribozymes because the catalytic peptidyl transferase activity that links amino acids together is performed by the ribosomal RNA. In eukaryotic cells, ribosomes are often associated with the intracellular membranes that make up the rough endoplasmic reticulum. Ribosomes from bacteria, archaea, and eukaryotes (in the three-domain system) resemble each other to a remarkable degree, evidence of a common origin. They differ in their size, sequence, structure, and the ratio of protein to RNA. The differences in structure allow some antibiotics to kill bacteria by inhibiting their ribosomes while leaving human ribosomes unaffected. In all species, more than one ribosome may move along a single mRNA chain at one time (as a polysome), each "reading" a specific sequence and producing a corresponding protein molecule. The mitochondrial ribosomes of eukaryotic cells are distinct from their other ribosomes. They functionally resemble those in bacteria, reflecting the evolutionary origin of mitochondria as endosymbiotic bacteria. Discovery Ribosomes were first observed in the mid-1950s by Romanian-American cell biologist George Emil Palade, using an electron microscope, as dense particles or granules. They were initially called Palade granules due to their granular structure. The term "ribosome" was proposed in 1958 by Howard M. 
Dintzis: Albert Claude, Christian de Duve, and George Emil Palade were jointly awarded the Nobel Prize in Physiology or Medicine, in 1974, for the discovery of the ribosome. The Nobel Prize in Chemistry 2009 was awarded to Venkatraman Ramakrishnan, Thomas A. Steitz and Ada E. Yonath for determining the detailed structure and mechanism of the ribosome. Structure The ribosome is a complex cellular machine. It is largely made up of specialized RNA known as ribosomal RNA (rRNA) as well as dozens of distinct proteins (the exact number varies slightly between species). The ribosomal proteins and rRNAs are arranged into two distinct ribosomal pieces of different sizes, known generally as the large and small subunits of the ribosome. Ribosomes consist of two subunits that fit together and work as one to translate the mRNA into a polypeptide chain during protein synthesis. Because they are formed from two subunits of non-equal size, they are slightly longer on the axis than in diameter. Prokaryotic ribosomes Prokaryotic ribosomes are around 20 nm (200 Å) in diameter and are composed of 65% rRNA and 35% ribosomal proteins. Eukaryotic ribosomes are between 25 and 30 nm (250–300 Å) in diameter with an rRNA-to-protein ratio that is close to 1. Crystallographic work has shown that there are no ribosomal proteins close to the reaction site for polypeptide synthesis. This suggests that the protein components of ribosomes do not directly participate in peptide bond formation catalysis, but rather that these proteins act as a scaffold that may enhance the ability of rRNA to synthesize protein (see: Ribozyme). The ribosomal subunits of prokaryotes and eukaryotes are quite similar. The unit of measurement used to describe the ribosomal subunits and the rRNA fragments is the Svedberg unit, a measure of the rate of sedimentation in centrifugation rather than size. This accounts for why fragment names do not add up: for example, bacterial 70S ribosomes are made of 50S and 30S subunits. Prokaryotes have 70S ribosomes, each consisting of a small (30S) and a large (50S) subunit. E. coli, for example, has a 16S RNA subunit (consisting of 1540 nucleotides) that is bound to 21 proteins. The large subunit is composed of a 5S RNA subunit (120 nucleotides), a 23S RNA subunit (2900 nucleotides) and 31 proteins. {| class="wikitable float-right" style="text-align:center" |+ Ribosome of E. coli (a bacterium) |- ! width="25%"| ribosome ! width="25%"| subunit ! width="25%"| rRNAs ! width="25%"| r-proteins |- | rowspan="3" | 70S || rowspan="2" | 50S || 23S (2904 nt) || rowspan="2" | 31 |- | 5S (120 nt) |- | 30S || 16S (1542 nt) || 21 |} Affinity label for the tRNA binding sites on the E. coli ribosome allowed the identification of A and P site proteins most likely associated with the peptidyltransferase activity; labelled proteins are L27, L14, L15, L16, L2; at least L27 is located at the donor site, as shown by E. Collatz and A.P. Czernilofsky. Additional research has demonstrated that the S1 and S21 proteins, in association with the 3′-end of 16S ribosomal RNA, are involved in the initiation of translation. Archaeal ribosomes Archaeal ribosomes share the same general dimensions of bacteria ones, being a 70S ribosome made up from a 50S large subunit, a 30S small subunit, and containing three rRNA chains. However, on the sequence level, they are much closer to eukaryotic ones than to bacterial ones. 
Every extra ribosomal protein archaea have compared to bacteria has a eukaryotic counterpart, while no such relation applies between archaea and bacteria. Eukaryotic ribosomes Eukaryotes have 80S ribosomes located in their cytosol, each consisting of a small (40S) and large (60S) subunit. Their 40S subunit has an 18S RNA (1900 nucleotides) and 33 proteins. The large subunit is composed of a 5S RNA (120 nucleotides), 28S RNA (4700 nucleotides), a 5.8S RNA (160 nucleotides) subunits and 49 proteins. {| class="wikitable float-right" style="text-align:center" |+ eukaryotic cytosolic ribosomes (R. norvegicus) |- ! width="25%"| ribosome ! width="25%"| subunit ! width="25%"| rRNAs ! width="25%"| r-proteins |- | rowspan="4" | 80S || rowspan="3" | 60S || 28S (4718 nt) || rowspan="3" | 49 |- | 5.8S (160 nt) |- | 5S (120 nt) |- | 40S || 18S (1874 nt) || 33 |} During 1977, Czernilofsky published research that used affinity labeling to identify tRNA-binding sites on rat liver ribosomes. Several proteins, including L32/33, L36, L21, L23, L28/29 and L13 were implicated as being at or near the peptidyl transferase center. Plastoribosomes and mitoribosomes In eukaryotes, ribosomes are present in mitochondria (sometimes called mitoribosomes) and in plastids such as chloroplasts (also called plastoribosomes). They also consist of large and small subunits bound together with proteins into one 70S particle. These ribosomes are similar to those of bacteria and these organelles are thought to have originated as symbiotic bacteria. Of the two, chloroplastic ribosomes are closer to bacterial ones than mitochondrial ones are. Many pieces of ribosomal RNA in the mitochondria are shortened, and in the case of 5S rRNA, replaced by other structures in animals and fungi. In particular, Leishmania tarentolae has a minimalized set of mitochondrial rRNA. In contrast, plant mitoribosomes have both extended rRNA and additional proteins as compared to bacteria, in particular, many pentatricopetide repeat proteins. The cryptomonad and chlorarachniophyte algae may contain a nucleomorph that resembles a vestigial eukaryotic nucleus. Eukaryotic 80S ribosomes may be present in the compartment containing the nucleomorph. Making use of the differences The differences between the bacterial and eukaryotic ribosomes are exploited by pharmaceutical chemists to create antibiotics that can destroy a bacterial infection without harming the cells of the infected person. Due to the differences in their structures, the bacterial 70S ribosomes are vulnerable to these antibiotics while the eukaryotic 80S ribosomes are not. Even though mitochondria possess ribosomes similar to the bacterial ones, mitochondria are not affected by these antibiotics because they are surrounded by a double membrane that does not easily admit these antibiotics into the organelle. A noteworthy counterexample is the antineoplastic antibiotic chloramphenicol, which inhibits bacterial 50S and eukaryotic mitochondrial 50S ribosomes. Ribosomes in chloroplasts, however, are different: Antibiotic resistance in chloroplast ribosomal proteins is a trait that has to be introduced as a marker, with genetic engineering. Common properties The various ribosomes share a core structure, which is quite similar despite the large differences in size. Much of the RNA is highly organized into various tertiary structural motifs, for example pseudoknots that exhibit coaxial stacking. 
The extra RNA in the larger ribosomes is in several long continuous insertions, such that they form loops out of the core structure without disrupting or changing it. All of the catalytic activity of the ribosome is carried out by the RNA; the proteins reside on the surface and seem to stabilize the structure. High-resolution structure The general molecular structure of the ribosome has been known since the early 1970s. In the early 2000s, the structure has been achieved at high resolutions, of the order of a few ångströms. The first papers giving the structure of the ribosome at atomic resolution were published almost simultaneously in late 2000. The 50S (large prokaryotic) subunit was determined from the archaeon Haloarcula marismortui and the bacterium Deinococcus radiodurans, and the structure of the 30S subunit was determined from the bacterium Thermus thermophilus. These structural studies were awarded the Nobel Prize in Chemistry in 2009. In May 2001 these coordinates were used to reconstruct the entire T. thermophilus 70S particle at 5.5 Å resolution. Two papers were published in November 2005 with structures of the Escherichia coli 70S ribosome. The structures of a vacant ribosome were determined at 3.5 Å resolution using X-ray crystallography. Then, two weeks later, a structure based on cryo-electron microscopy was published, which depicts the ribosome at 11–15 Å resolution in the act of passing a newly synthesized protein strand into the protein-conducting channel. The first atomic structures of the ribosome complexed with tRNA and mRNA molecules were solved by using X-ray crystallography by two groups independently, at 2.8 Å and at 3.7 Å. These structures allow one to see the details of interactions of the Thermus thermophilus ribosome with mRNA and with tRNAs bound at classical ribosomal sites. Interactions of the ribosome with long mRNAs containing Shine-Dalgarno sequences were visualized soon after that at 4.5–5.5 Å resolution. In 2011, the first complete atomic structure of the eukaryotic 80S ribosome from the yeast Saccharomyces cerevisiae was obtained by crystallography. The model reveals the architecture of eukaryote-specific elements and their interaction with the universally conserved core. At the same time, the complete model of a eukaryotic 40S ribosomal structure in Tetrahymena thermophila was published and described the structure of the 40S subunit, as well as much about the 40S subunit's interaction with eIF1 during translation initiation. Similarly, the eukaryotic 60S subunit structure was also determined from Tetrahymena thermophila in complex with eIF6. Function Ribosomes are minute particles consisting of RNA and associated proteins that function to synthesize proteins. Proteins are needed for many cellular functions, such as repairing damage or directing chemical processes. Ribosomes can be found floating within the cytoplasm or attached to the endoplasmic reticulum. Their main function is to convert genetic code into an amino acid sequence and to build protein polymers from amino acid monomers. Ribosomes act as catalysts in two extremely important biological processes called peptidyl transfer and peptidyl hydrolysis. The "PT center is responsible for producing protein bonds during protein elongation". In summary, ribosomes have two main functions: Decoding the message, and the formation of peptide bonds. These two functions reside in the ribosomal subunits. Each subunit is made of one or more rRNAs and many r-proteins. 
The small subunit (30S in bacteria and archaea, 40S in eukaryotes) has the decoding function, whereas the large subunit (50S in bacteria and archaea, 60S in eukaryotes) catalyzes the formation of peptide bonds, referred to as the peptidyl-transferase activity. The bacterial (and archaeal) small subunit contains the 16S rRNA and 21 r-proteins (Escherichia coli), whereas the eukaryotic small subunit contains the 18S rRNA and 32 r-proteins (Saccharomyces cerevisiae, although the numbers vary between species). The bacterial large subunit contains the 5S and 23S rRNAs and 34 r-proteins (E. coli), with the eukaryotic large subunit containing the 5S, 5.8S, and 25S/28S rRNAs and 46 r-proteins (S. cerevisiae; again, the exact numbers vary between species). Translation Ribosomes are the workplaces of protein biosynthesis, the process of translating mRNA into protein. The mRNA comprises a series of codons which are decoded by the ribosome to make the protein. Using the mRNA as a template, the ribosome traverses each codon (3 nucleotides) of the mRNA, pairing it with the appropriate amino acid provided by an aminoacyl-tRNA. Aminoacyl-tRNA contains a complementary anticodon on one end and the appropriate amino acid on the other. For fast and accurate recognition of the appropriate tRNA, the ribosome utilizes large conformational changes (conformational proofreading). The small ribosomal subunit, typically bound to an aminoacyl-tRNA containing the first amino acid methionine, binds to an AUG codon on the mRNA and recruits the large ribosomal subunit. The ribosome contains three RNA binding sites, designated A, P, and E. The A-site binds an aminoacyl-tRNA or termination release factors; the P-site binds a peptidyl-tRNA (a tRNA bound to the poly-peptide chain); and the E-site (exit) binds a free tRNA. Protein synthesis begins at a start codon AUG near the 5' end of the mRNA. mRNA binds to the P site of the ribosome first. The ribosome recognizes the start codon by using the Shine-Dalgarno sequence of the mRNA in prokaryotes and Kozak box in eukaryotes. Although catalysis of the peptide bond involves the C2 hydroxyl of RNA's P-site adenosine in a proton shuttle mechanism, other steps in protein synthesis (such as translocation) are caused by changes in protein conformations. Since their catalytic core is made of RNA, ribosomes are classified as "ribozymes," and it is thought that they might be remnants of the RNA world. In Figure 5, both ribosomal subunits (small and large) assemble at the start codon (towards the 5' end of the mRNA). The ribosome uses tRNA that matches the current codon (triplet) on the mRNA to append an amino acid to the polypeptide chain. This is done for each triplet on the mRNA, while the ribosome moves towards the 3' end of the mRNA. Usually in bacterial cells, several ribosomes are working parallel on a single mRNA, forming what is called a polyribosome or polysome. Cotranslational folding The ribosome is known to actively participate in the protein folding. The structures obtained in this way are usually identical to the ones obtained during protein chemical refolding; however, the pathways leading to the final product may be different. In some cases, the ribosome is crucial in obtaining the functional protein form. For example, one of the possible mechanisms of folding of the deeply knotted proteins relies on the ribosome pushing the chain through the attached loop. 
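To make the codon-by-codon decoding described in the Translation section above concrete, the following minimal Python sketch walks an mRNA string 5'→3' from its first AUG and appends one residue per codon until a stop codon is reached. The codon table is deliberately truncated to a handful of entries for illustration (a real genetic-code table has 64 codons), and the sequence, dictionary, and function names are arbitrary placeholders.

```python
# Minimal sketch of the codon-by-codon decoding a ribosome performs.
# Only a few codon -> amino-acid assignments are included for illustration.
GENETIC_CODE = {
    "AUG": "Met",  # start codon; also encodes methionine
    "UUU": "Phe", "GCU": "Ala", "GAA": "Glu", "UGG": "Trp",
    "UAA": None, "UAG": None, "UGA": None,  # stop codons: no matching tRNA
}

def translate(mrna: str) -> list[str]:
    """Scan 5'->3' from the first AUG, appending one residue per codon
    until a stop codon (or the end of the message) is reached."""
    start = mrna.find("AUG")
    if start == -1:
        return []  # no start codon, no initiation
    peptide = []
    for i in range(start, len(mrna) - 2, 3):
        residue = GENETIC_CODE.get(mrna[i:i + 3])
        if residue is None:  # stop codon (or codon missing from this toy table)
            break
        peptide.append(residue)
    return peptide

print(translate("GGAUGUUUGCUGAAUGGUAAGG"))  # ['Met', 'Phe', 'Ala', 'Glu', 'Trp']
```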
Addition of translation-independent amino acids Presence of a ribosome quality control protein Rqc2 is associated with mRNA-independent protein elongation. This elongation is a result of ribosomal addition (via tRNAs brought by Rqc2) of CAT tails: ribosomes extend the C-terminus of a stalled protein with random, translation-independent sequences of alanines and threonines. Ribosome locations Ribosomes are classified as being either "free" or "membrane-bound". Free and membrane-bound ribosomes differ only in their spatial distribution; they are identical in structure. Whether the ribosome exists in a free or membrane-bound state depends on the presence of an ER-targeting signal sequence on the protein being synthesized, so an individual ribosome might be membrane-bound when it is making one protein, but free in the cytosol when it makes another protein. Ribosomes are sometimes referred to as organelles, but the use of the term organelle is often restricted to describing sub-cellular components that include a phospholipid membrane, which ribosomes, being entirely particulate, do not. For this reason, ribosomes may sometimes be described as "non-membranous organelles". Free ribosomes Free ribosomes can move about anywhere in the cytosol, but are excluded from the cell nucleus and other organelles. Proteins that are formed from free ribosomes are released into the cytosol and used within the cell. Since the cytosol contains high concentrations of glutathione and is, therefore, a reducing environment, proteins containing disulfide bonds, which are formed from oxidized cysteine residues, cannot be produced within it. Membrane-bound ribosomes When a ribosome begins to synthesize proteins that are needed in some organelles, the ribosome making this protein can become "membrane-bound". In eukaryotic cells this happens in a region of the endoplasmic reticulum (ER) called the "rough ER". The newly produced polypeptide chains are inserted directly into the ER by the ribosome undertaking vectorial synthesis and are then transported to their destinations, through the secretory pathway. Bound ribosomes usually produce proteins that are used within the plasma membrane or are expelled from the cell via exocytosis. Biogenesis In bacterial cells, ribosomes are synthesized in the cytoplasm through the transcription of multiple ribosome gene operons. In eukaryotes, the process takes place both in the cell cytoplasm and in the nucleolus, which is a region within the cell nucleus. The assembly process involves the coordinated function of over 200 proteins in the synthesis and processing of the four rRNAs, as well as assembly of those rRNAs with the ribosomal proteins. Origin The ribosome may have first originated as a protoribosome, possibly containing a peptidyl transferase centre (PTC), in an RNA world, appearing as a self-replicating complex that only later evolved the ability to synthesize proteins when amino acids began to appear. Studies suggest that ancient ribosomes constructed solely of rRNA could have developed the ability to synthesize peptide bonds. In addition, evidence strongly points to ancient ribosomes as self-replicating complexes, where the rRNA in the ribosomes had informational, structural, and catalytic purposes because it could have coded for tRNAs and proteins needed for ribosomal self-replication. Hypothetical cellular organisms with self-replicating RNA but without DNA are called ribocytes (or ribocells). 
As amino acids gradually appeared in the RNA world under prebiotic conditions, their interactions with catalytic RNA would increase both the range and efficiency of function of catalytic RNA molecules. Thus, the driving force for the evolution of the ribosome from an ancient self-replicating machine into its current form as a translational machine may have been the selective pressure to incorporate proteins into the ribosome's self-replicating mechanisms, so as to increase its capacity for self-replication. Heterogeneous ribosomes Ribosomes are compositionally heterogeneous between species and even within the same cell, as evidenced by the existence of cytoplasmic and mitochondria ribosomes within the same eukaryotic cells. Certain researchers have suggested that heterogeneity in the composition of ribosomal proteins in mammals is important for gene regulation, i.e., the specialized ribosome hypothesis. However, this hypothesis is controversial and the topic of ongoing research. Heterogeneity in ribosome composition was first proposed to be involved in translational control of protein synthesis by Vince Mauro and Gerald Edelman. They proposed the ribosome filter hypothesis to explain the regulatory functions of ribosomes. Evidence has suggested that specialized ribosomes specific to different cell populations may affect how genes are translated. Some ribosomal proteins exchange from the assembled complex with cytosolic copies suggesting that the structure of the in vivo ribosome can be modified without synthesizing an entire new ribosome. Certain ribosomal proteins are absolutely critical for cellular life while others are not. In budding yeast, 14/78 ribosomal proteins are non-essential for growth, while in humans this depends on the cell of study. Other forms of heterogeneity include post-translational modifications to ribosomal proteins such as acetylation, methylation, and phosphorylation. Arabidopsis, Viral internal ribosome entry sites (IRESs) may mediate translations by compositionally distinct ribosomes. For example, 40S ribosomal units without eS25 in yeast and mammalian cells are unable to recruit the CrPV IGR IRES. Heterogeneity of ribosomal RNA modifications plays a significant role in structural maintenance and/or function and most mRNA modifications are found in highly conserved regions. The most common rRNA modifications are pseudouridylation and 2'-O-methylation of ribose. See also Aminoglycosides Biological machines Posttranslational modification Protein dynamics Ribosome-associated vesicle RNA tertiary structure Translation (genetics) Wobble base pair Ada Yonath—Israeli crystallographer known for her pioneering work on the structure of the ribosome, for which she won the Nobel Prize. References External links Lab computer simulates ribosome in motion Role of the Ribosome, Gwen V. Childs, copied here Ribosome in Proteopedia—The free, collaborative 3D encyclopedia of proteins & other molecules Ribosomal proteins families in ExPASy Molecule of the Month © RCSB Protein Data Bank: Ribosome Elongation Factors Palade 3D electron microscopy structures of ribosomes at the EM Data Bank (EMDB) Ribozymes Protein biosynthesis
Ribosome
[ "Chemistry" ]
5,602
[ "Catalysis", "Protein biosynthesis", "Gene expression", "Biosynthesis", "Ribozymes" ]
25,809
https://en.wikipedia.org/wiki/Riemann%20zeta%20function
The Riemann zeta function or Euler–Riemann zeta function, denoted by the Greek letter (zeta), is a mathematical function of a complex variable defined as for and its analytic continuation elsewhere. The Riemann zeta function plays a pivotal role in analytic number theory and has applications in physics, probability theory, and applied statistics. Leonhard Euler first introduced and studied the function over the reals in the first half of the eighteenth century. Bernhard Riemann's 1859 article "On the Number of Primes Less Than a Given Magnitude" extended the Euler definition to a complex variable, proved its meromorphic continuation and functional equation, and established a relation between its zeros and the distribution of prime numbers. This paper also contained the Riemann hypothesis, a conjecture about the distribution of complex zeros of the Riemann zeta function that many mathematicians consider the most important unsolved problem in pure mathematics. The values of the Riemann zeta function at even positive integers were computed by Euler. The first of them, , provides a solution to the Basel problem. In 1979 Roger Apéry proved the irrationality of . The values at negative integer points, also found by Euler, are rational numbers and play an important role in the theory of modular forms. Many generalizations of the Riemann zeta function, such as Dirichlet series, Dirichlet -functions and -functions, are known. Definition The Riemann zeta function is a function of a complex variable , where and are real numbers. (The notation , , and is used traditionally in the study of the zeta function, following Riemann.) When , the function can be written as a converging summation or as an integral: where is the gamma function. The Riemann zeta function is defined for other complex values via analytic continuation of the function defined for . Leonhard Euler considered the above series in 1740 for positive integer values of , and later Chebyshev extended the definition to The above series is a prototypical Dirichlet series that converges absolutely to an analytic function for such that and diverges for all other values of . Riemann showed that the function defined by the series on the half-plane of convergence can be continued analytically to all complex values . For , the series is the harmonic series which diverges to , and Thus the Riemann zeta function is a meromorphic function on the whole complex plane, which is holomorphic everywhere except for a simple pole at with residue . Euler's product formula In 1737, the connection between the zeta function and prime numbers was discovered by Euler, who proved the identity where, by definition, the left hand side is and the infinite product on the right hand side extends over all prime numbers (such expressions are called Euler products): Both sides of the Euler product formula converge for . The proof of Euler's identity uses only the formula for the geometric series and the fundamental theorem of arithmetic. Since the harmonic series, obtained when , diverges, Euler's formula (which becomes ) implies that there are infinitely many primes. Since the logarithm of is approximately , the formula can also be used to prove the stronger result that the sum of the reciprocals of the primes is infinite. On the other hand, combining that with the sieve of Eratosthenes shows that the density of the set of primes within the set of positive integers is zero. 
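As a rough numerical illustration of the Euler product identity above, the sketch below compares a truncated sum over the integers with a truncated product over the primes for s = 2. The function names and truncation limits are arbitrary choices made for this example.

```python
# Numerical check of the Euler product: for Re(s) > 1 a truncated sum over n
# and a truncated product over primes should both approach zeta(s).
def zeta_partial_sum(s: float, n_terms: int = 100_000) -> float:
    return sum(n ** -s for n in range(1, n_terms + 1))

def primes_up_to(limit: int) -> list[int]:
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def zeta_euler_product(s: float, prime_limit: int = 100_000) -> float:
    result = 1.0
    for p in primes_up_to(prime_limit):
        result *= 1.0 / (1.0 - p ** -s)
    return result

s = 2.0
print(zeta_partial_sum(s))    # ~1.64492
print(zeta_euler_product(s))  # ~1.64493
```

Both values should agree with ζ(2) = π²/6 ≈ 1.6449 to a few decimal places.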
The Euler product formula can be used to calculate the asymptotic probability that randomly selected integers are set-wise coprime. Intuitively, the probability that any single number is divisible by a prime (or any integer) is . Hence the probability that numbers are all divisible by this prime is , and the probability that at least one of them is not is . Now, for distinct primes, these divisibility events are mutually independent because the candidate divisors are coprime (a number is divisible by coprime divisors and if and only if it is divisible by , an event which occurs with probability ). Thus the asymptotic probability that numbers are coprime is given by a product over all primes, Riemann's functional equation This zeta function satisfies the functional equation where is the gamma function. This is an equality of meromorphic functions valid on the whole complex plane. The equation relates values of the Riemann zeta function at the points and , in particular relating even positive integers with odd negative integers. Owing to the zeros of the sine function, the functional equation implies that has a simple zero at each even negative integer , known as the trivial zeros of . When is an even positive integer, the product on the right is non-zero because has a simple pole, which cancels the simple zero of the sine factor. The functional equation was established by Riemann in his 1859 paper "On the Number of Primes Less Than a Given Magnitude" and used to construct the analytic continuation in the first place. Equivalencies An equivalent relationship had been conjectured by Euler over a hundred years earlier, in 1749, for the Dirichlet eta function (the alternating zeta function): Incidentally, this relation gives an equation for calculating in the region i.e. where the η-series is convergent (albeit non-absolutely) in the larger half-plane (for a more detailed survey on the history of the functional equation, see e.g. Blagouchine). Riemann also found a symmetric version of the functional equation applying to the -function: which satisfies: (Riemann's original was slightly different.) The factor was not well-understood at the time of Riemann, until John Tate's (1950) thesis, in which it was shown that this so-called "Gamma factor" is in fact the local L-factor corresponding to the Archimedean place, the other factors in the Euler product expansion being the local L-factors of the non-Archimedean places. Zeros, the critical line, and the Riemann hypothesis The functional equation shows that the Riemann zeta function has zeros at . These are called the trivial zeros. They are trivial in the sense that their existence is relatively easy to prove, for example, from being 0 in the functional equation. The non-trivial zeros have captured far more attention because their distribution not only is far less understood but, more importantly, their study yields important results concerning prime numbers and related objects in number theory. It is known that any non-trivial zero lies in the open strip , which is called the critical strip. The set is called the critical line. The Riemann hypothesis, considered one of the greatest unsolved problems in mathematics, asserts that all non-trivial zeros are on the critical line. In 1989, Conrey proved that more than 40% of the non-trivial zeros of the Riemann zeta function are on the critical line. For the Riemann zeta function on the critical line, see -function. 
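A quick numerical sanity check of the functional equation is sketched below, assuming its standard form ζ(s) = 2^s π^(s−1) sin(πs/2) Γ(1−s) ζ(1−s). For s < 0 the factor ζ(1−s) can be taken from the defining series; s = −2 illustrates a trivial zero. The truncation limit and function names are arbitrary.

```python
from math import gamma, pi, sin

def zeta_sum(s: float, terms: int = 200_000) -> float:
    # defining Dirichlet series, valid for s > 1
    return sum(n ** -s for n in range(1, terms + 1))

def zeta_via_functional_equation(s: float) -> float:
    # assumes zeta(s) = 2^s * pi^(s-1) * sin(pi*s/2) * Gamma(1-s) * zeta(1-s);
    # used here only for s < 0, where zeta(1 - s) comes from the series above
    return 2 ** s * pi ** (s - 1) * sin(pi * s / 2) * gamma(1 - s) * zeta_sum(1 - s)

print(zeta_via_functional_equation(-3.0))  # ~0.008333 = 1/120
print(zeta_via_functional_equation(-2.0))  # ~0, a trivial zero
```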
Number of zeros in the critical strip Let be the number of zeros of in the critical strip , whose imaginary parts are in the interval . Trudgian proved that, if , then . The Hardy–Littlewood conjectures In 1914, G. H. Hardy proved that has infinitely many real zeros. Hardy and J. E. Littlewood formulated two conjectures on the density and distance between the zeros of on intervals of large positive real numbers. In the following, is the total number of real zeros and the total number of zeros of odd order of the function lying in the interval . These two conjectures opened up new directions in the investigation of the Riemann zeta function. Zero-free region The location of the Riemann zeta function's zeros is of great importance in number theory. The prime number theorem is equivalent to the fact that there are no zeros of the zeta function on the line. A better result that follows from an effective form of Vinogradov's mean-value theorem is that whenever and . In 2015, Mossinghoff and Trudgian proved that zeta has no zeros in the region for . This is the largest known zero-free region in the critical strip for . The strongest result of this kind one can hope for is the truth of the Riemann hypothesis, which would have many profound consequences in the theory of numbers. Other results It is known that there are infinitely many zeros on the critical line. Littlewood showed that if the sequence () contains the imaginary parts of all zeros in the upper half-plane in ascending order, then The critical line theorem asserts that a positive proportion of the nontrivial zeros lies on the critical line. (The Riemann hypothesis would imply that this proportion is 1.) In the critical strip, the zero with smallest non-negative imaginary part is (). The fact that for all complex implies that the zeros of the Riemann zeta function are symmetric about the real axis. Combining this symmetry with the functional equation, furthermore, one sees that the non-trivial zeros are symmetric about the critical line . It is also known that no zeros lie on the line with real part 1. Specific values For any positive even integer , where is the -th Bernoulli number. For odd positive integers, no such simple expression is known, although these values are thought to be related to the algebraic -theory of the integers; see Special values of -functions. For nonpositive integers, one has for (using the convention that ). In particular, vanishes at the negative even integers because for all odd other than 1. These are the so-called "trivial zeros" of the zeta function. Via analytic continuation, one can show that This gives a pretext for assigning a finite value to the divergent series 1 + 2 + 3 + 4 + ⋯, which has been used in certain contexts (Ramanujan summation) such as string theory. Analogously, the particular value can be viewed as assigning a finite result to the divergent series 1 + 1 + 1 + 1 + ⋯. The value is employed in calculating kinetic boundary layer problems of linear kinetic equations. Although diverges, its Cauchy principal value exists and is equal to the Euler–Mascheroni constant . The demonstration of the particular value is known as the Basel problem. The reciprocal of this sum answers the question: What is the probability that two numbers selected at random are relatively prime? The value is Apéry's constant. Taking the limit through the real numbers, one obtains . But at complex infinity on the Riemann sphere the zeta function has an essential singularity. 
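The closed form for even positive arguments quoted above can be checked numerically. The sketch below assumes the standard statement ζ(2n) = (−1)^(n+1) B_{2n} (2π)^(2n) / (2·(2n)!), hard-codes the first few Bernoulli numbers, and compares against direct partial sums; names and truncation limits are illustrative.

```python
from fractions import Fraction
from math import factorial, pi

# standard Bernoulli numbers B_2, B_4, B_6, B_8
BERNOULLI = {2: Fraction(1, 6), 4: Fraction(-1, 30),
             6: Fraction(1, 42), 8: Fraction(-1, 30)}

def zeta_even(two_n: int) -> float:
    # zeta(2n) = (-1)^(n+1) * B_{2n} * (2*pi)^(2n) / (2 * (2n)!)
    n = two_n // 2
    b = float(BERNOULLI[two_n])
    return (-1) ** (n + 1) * b * (2 * pi) ** two_n / (2 * factorial(two_n))

def zeta_sum(s: int, terms: int = 200_000) -> float:
    return sum(k ** -s for k in range(1, terms + 1))

for two_n in (2, 4, 6, 8):
    print(two_n, zeta_even(two_n), zeta_sum(two_n))
# e.g. zeta(2) = pi^2/6 ~ 1.6449, zeta(4) = pi^4/90 ~ 1.0823
```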
Various properties For sums involving the zeta function at integer and half-integer values, see rational zeta series. Reciprocal The reciprocal of the zeta function may be expressed as a Dirichlet series over the Möbius function : for every complex number with real part greater than 1. There are a number of similar relations involving various well-known multiplicative functions; these are given in the article on the Dirichlet series. The Riemann hypothesis is equivalent to the claim that this expression is valid when the real part of is greater than . Universality The critical strip of the Riemann zeta function has the remarkable property of universality. This zeta function universality states that there exists some location on the critical strip that approximates any holomorphic function arbitrarily well. Since holomorphic functions are very general, this property is quite remarkable. The first proof of universality was provided by Sergei Mikhailovitch Voronin in 1975. More recent work has included effective versions of Voronin's theorem and extending it to Dirichlet L-functions. Estimates of the maximum of the modulus of the zeta function Let the functions and be defined by the equalities Here is a sufficiently large positive number, , , , . Estimating the values and from below shows, how large (in modulus) values can take on short intervals of the critical line or in small neighborhoods of points lying in the critical strip . The case was studied by Kanakanahalli Ramachandra; the case , where is a sufficiently large constant, is trivial. Anatolii Karatsuba proved, in particular, that if the values and exceed certain sufficiently small constants, then the estimates hold, where and are certain absolute constants. The argument of the Riemann zeta function The function is called the argument of the Riemann zeta function. Here is the increment of an arbitrary continuous branch of along the broken line joining the points , and . There are some theorems on properties of the function . Among those results are the mean value theorems for and its first integral on intervals of the real line, and also the theorem claiming that every interval for contains at least points where the function changes sign. Earlier similar results were obtained by Atle Selberg for the case Representations Dirichlet series An extension of the area of convergence can be obtained by rearranging the original series. The series converges for , while converge even for . In this way, the area of convergence can be extended to for any negative integer . The recurrence connection is clearly visible from the expression valid for enabling further expansion by integration by parts. Mellin-type integrals The Mellin transform of a function is defined as in the region where the integral is defined. There are various expressions for the zeta function as Mellin transform-like integrals. If the real part of is greater than one, we have and , where denotes the gamma function. By modifying the contour, Riemann showed that for all (where denotes the Hankel contour). We can also find expressions which relate to prime numbers and the prime number theorem. If is the prime-counting function, then for values with . A similar Mellin transform involves the Riemann function , which counts prime powers with a weight of , so that Now These expressions can be used to prove the prime number theorem by means of the inverse Mellin transform. Riemann's prime-counting function is easier to work with, and can be recovered from it by Möbius inversion. 
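The Dirichlet series for the reciprocal can likewise be checked numerically. The sketch below computes the Möbius function μ(n) with a small sieve and verifies that a truncated series for 1/ζ(s) times a truncated series for ζ(s) is close to 1; the sieve limit and function names are arbitrary choices for illustration.

```python
# Check of 1/zeta(s) = sum_{n>=1} mu(n) / n^s  for Re(s) > 1.
def moebius_up_to(limit: int) -> list[int]:
    mu = [1] * (limit + 1)
    is_prime = [True] * (limit + 1)
    for p in range(2, limit + 1):
        if is_prime[p]:
            for m in range(p, limit + 1, p):
                if m > p:
                    is_prime[m] = False
                mu[m] *= -1                 # one sign flip per prime factor
            for m in range(p * p, limit + 1, p * p):
                mu[m] = 0                   # squarefull numbers get mu = 0
    return mu

def inverse_zeta(s: float, limit: int = 100_000) -> float:
    mu = moebius_up_to(limit)
    return sum(mu[n] / n ** s for n in range(1, limit + 1))

s = 3.0
zeta_s = sum(n ** -s for n in range(1, 100_001))
print(inverse_zeta(s) * zeta_s)  # ~1.0
```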
Theta functions The Riemann zeta function can be given by a Mellin transform in terms of Jacobi's theta function However, this integral only converges if the real part of is greater than 1, but it can be regularized. This gives the following expression for the zeta function, which is well defined for all except 0 and 1: Laurent series The Riemann zeta function is meromorphic with a single pole of order one at . It can therefore be expanded as a Laurent series about ; the series development is then The constants here are called the Stieltjes constants and can be defined by the limit The constant term is the Euler–Mascheroni constant. Integral For all , , the integral relation (cf. Abel–Plana formula) holds true, which may be used for a numerical evaluation of the zeta function. Rising factorial Another series development using the rising factorial valid for the entire complex plane is This can be used recursively to extend the Dirichlet series definition to all complex numbers. The Riemann zeta function also appears in a form similar to the Mellin transform in an integral over the Gauss–Kuzmin–Wirsing operator acting on ; that context gives rise to a series expansion in terms of the falling factorial. Hadamard product On the basis of Weierstrass's factorization theorem, Hadamard gave the infinite product expansion where the product is over the non-trivial zeros of and the letter again denotes the Euler–Mascheroni constant. A simpler infinite product expansion is This form clearly displays the simple pole at , the trivial zeros at −2, −4, ... due to the gamma function term in the denominator, and the non-trivial zeros at . (To ensure convergence in the latter formula, the product should be taken over "matching pairs" of zeros, i.e. the factors for a pair of zeros of the form and should be combined.) Globally convergent series A globally convergent series for the zeta function, valid for all complex numbers except for some integer , was conjectured by Konrad Knopp in 1926 and proven by Helmut Hasse in 1930 (cf. Euler summation): The series appeared in an appendix to Hasse's paper, and was published for the second time by Jonathan Sondow in 1994. Hasse also proved the globally converging series in the same publication. Research by Iaroslav Blagouchine has found that a similar, equivalent series was published by Joseph Ser in 1926. In 1997 K. Maślanka gave another globally convergent (except ) series for the Riemann zeta function: where real coefficients are given by: Here are the Bernoulli numbers and denotes the Pochhammer symbol. Note that this representation of the zeta function is essentially an interpolation with nodes, where the nodes are points , i.e. exactly those where the zeta values are precisely known, as Euler showed. An elegant and very short proof of this representation of the zeta function, based on Carlson's theorem, was presented by Philippe Flajolet in 2006. The asymptotic behavior of the coefficients is rather curious: for growing values, we observe regular oscillations with a nearly exponentially decreasing amplitude and slowly decreasing frequency (roughly as ). Using the saddle point method, we can show that where stands for: (see for details). On the basis of this representation, in 2003 Luis Báez-Duarte provided a new criterion for the Riemann hypothesis. 
Namely, if we define the coefficients as then the Riemann hypothesis is equivalent to Rapidly convergent series Peter Borwein developed an algorithm that applies Chebyshev polynomials to the Dirichlet eta function to produce a very rapidly convergent series suitable for high precision numerical calculations. Series representation at positive integers via the primorial Here is the primorial sequence and is Jordan's totient function. Series representation by the incomplete poly-Bernoulli numbers The function can be represented, for , by the infinite series where , is the th branch of the Lambert -function, and is an incomplete poly-Bernoulli number. The Mellin transform of the Engel map The function is iterated to find the coefficients appearing in Engel expansions. The Mellin transform of the map is related to the Riemann zeta function by the formula Thue-Morse sequence Certain linear combinations of Dirichlet series whose coefficients are terms of the Thue-Morse sequence give rise to identities involving the Riemann Zeta function (Tóth, 2022 ). For instance: where is the term of the Thue-Morse sequence. In fact, for all with real part greater than , we have In nth dimensions The zeta function can also be represented as an nth amount of integrals: and it only works for Numerical algorithms A classical algorithm, in use prior to about 1930, proceeds by applying the Euler-Maclaurin formula to obtain, for n and m positive integers, where, letting denote the indicated Bernoulli number, and the error satisfies with σ = Re(s). A modern numerical algorithm is the Odlyzko–Schönhage algorithm. Applications The zeta function occurs in applied statistics including Zipf's law, Zipf–Mandelbrot law, and Lotka's law. Zeta function regularization is used as one possible means of regularization of divergent series and divergent integrals in quantum field theory. In one notable example, the Riemann zeta function shows up explicitly in one method of calculating the Casimir effect. The zeta function is also useful for the analysis of dynamical systems. Musical tuning In the theory of musical tunings, the zeta function can be used to find equal divisions of the octave (EDOs) that closely approximate the intervals of the harmonic series. For increasing values of , the value of peaks near integers that correspond to such EDOs. Examples include popular choices such as 12, 19, and 53. Infinite series The zeta function evaluated at equidistant positive integers appears in infinite series representations of a number of constants. In fact the even and odd terms give the two sums and Parametrized versions of the above sums are given by and with and where and are the polygamma function and Euler's constant, respectively, as well as all of which are continuous at . Other sums include where denotes the imaginary part of a complex number. Another interesting series that relates to the natural logarithm of the lemniscate constant is the following There are yet more formulas in the article Harmonic number. Generalizations There are a number of related zeta functions that can be considered to be generalizations of the Riemann zeta function. These include the Hurwitz zeta function (the convergent series representation was given by Helmut Hasse in 1930, cf. Hurwitz zeta function), which coincides with the Riemann zeta function when (the lower limit of summation in the Hurwitz zeta function is 0, not 1), the Dirichlet -functions and the Dedekind zeta function. 
For other related functions see the articles zeta function and L-function. The polylogarithm is given by Li_s(z) = ∑_{k=1}^∞ z^k / k^s, which coincides with the Riemann zeta function when z = 1. The Clausen function Cl_s(θ) can be chosen as the real or imaginary part of Li_s(e^{iθ}). The Lerch transcendent is given by Φ(z, s, q) = ∑_{k=0}^∞ z^k / (k + q)^s, which coincides with the Riemann zeta function when z = 1 and q = 1 (the lower limit of summation in the Lerch transcendent is 0, not 1). The multiple zeta functions are defined by ζ(s₁, s₂, …, s_n) = ∑_{k₁ > k₂ > ⋯ > k_n > 0} k₁^{−s₁} k₂^{−s₂} ⋯ k_n^{−s_n}. One can analytically continue these functions to the n-dimensional complex space. The special values taken by these functions at positive integer arguments are called multiple zeta values by number theorists and have been connected to many different branches in mathematics and physics. See also 1 + 2 + 3 + 4 + ··· Arithmetic zeta function Generalized Riemann hypothesis Lehmer pair Particular values of the Riemann zeta function Prime zeta function Riemann Xi function Renormalization Riemann–Siegel theta function ZetaGrid External links Riemann Zeta Function, in Wolfram Mathworld — an explanation with a more mathematical approach Tables of selected zeros Prime Numbers Get Hitched A general, non-technical description of the significance of the zeta function in relation to prime numbers. X-Ray of the Zeta Function Visually oriented investigation of where zeta is real or purely imaginary. Formulas and identities for the Riemann Zeta function functions.wolfram.com Riemann Zeta Function and Other Sums of Reciprocal Powers, section 23.2 of Abramowitz and Stegun Mellin transform and the functional equation of the Riemann Zeta function—Computational examples of Mellin transform methods involving the Riemann Zeta Function Visualizing the Riemann zeta function and analytic continuation a video from 3Blue1Brown Zeta and L-functions Analytic number theory Meromorphic functions Articles containing video clips Bernhard Riemann
Riemann zeta function
[ "Mathematics" ]
4,758
[ "Analytic number theory", "Number theory" ]
25,880
https://en.wikipedia.org/wiki/Refractive%20index
In optics, the refractive index (or refraction index) of an optical medium is the ratio of the apparent speed of light in air or vacuum to the speed in the medium. The refractive index determines how much the path of light is bent, or refracted, when entering a material. This is described by Snell's law of refraction, n₁ sin θ₁ = n₂ sin θ₂, where θ₁ and θ₂ are the angle of incidence and angle of refraction, respectively, of a ray crossing the interface between two media with refractive indices n₁ and n₂. The refractive indices also determine the amount of light that is reflected when reaching the interface and its intensity (Fresnel equations), as well as the critical angle for total internal reflection and Brewster's angle. The refractive index, n, can be seen as the factor by which the speed and the wavelength of the radiation are reduced with respect to their vacuum values: the speed of light in a medium is v = c/n, and similarly the wavelength in that medium is λ = λ₀/n, where λ₀ is the wavelength of that light in vacuum. This implies that vacuum has a refractive index of 1, and assumes that the frequency (f) of the wave is not affected by the refractive index. The refractive index may vary with wavelength. This causes white light to split into constituent colors when refracted. This is called dispersion. This effect can be observed in prisms and rainbows, and as chromatic aberration in lenses. Light propagation in absorbing materials can be described using a complex-valued refractive index. The imaginary part then handles the attenuation, while the real part accounts for refraction. For most materials the refractive index changes with wavelength by several percent across the visible spectrum. Consequently, refractive indices for materials reported using a single value for n must specify the wavelength used in the measurement. The concept of refractive index applies across the full electromagnetic spectrum, from X-rays to radio waves. It can also be applied to wave phenomena such as sound. In this case, the speed of sound is used instead of that of light, and a reference medium other than vacuum must be chosen. For lenses (such as eyeglasses), a lens made from a high refractive index material will be thinner, and hence lighter, than a conventional lens with a lower refractive index. Such lenses are generally more expensive to manufacture than conventional ones. Definition The relative refractive index of an optical medium 2 with respect to another reference medium 1 (n₂₁) is given by the ratio of the speed of light in medium 1 to that in medium 2. This can be expressed as n₂₁ = v₁/v₂. If the reference medium 1 is vacuum, then the refractive index of medium 2 is considered with respect to vacuum. It is simply represented as n₂ and is called the absolute refractive index of medium 2. The absolute refractive index n of an optical medium is defined as the ratio of the speed of light in vacuum, c, and the phase velocity v of light in the medium: n = c/v. Since c is constant, n is inversely proportional to v. The phase velocity is the speed at which the crests or the phase of the wave moves, which may be different from the group velocity, the speed at which the pulse of light or the envelope of the wave moves. Historically, air at a standardized pressure and temperature has been common as a reference medium. History Thomas Young was presumably the person who first used, and invented, the name "index of refraction", in 1807. At the same time he changed this value of refractive power into a single number, instead of the traditional ratio of two numbers. 
The ratio had the disadvantage of different appearances. Newton, who called it the "proportion of the sines of incidence and refraction", wrote it as a ratio of two numbers, like "529 to 396" (or "nearly 4 to 3"; for water). Hauksbee, who called it the "ratio of refraction", wrote it as a ratio with a fixed numerator, like "10000 to 7451.9" (for urine). Hutton wrote it as a ratio with a fixed denominator, like 1.3358 to 1 (water). Young did not use a symbol for the index of refraction, in 1807. In the later years, others started using different symbols: , , and . The symbol gradually prevailed. Typical values Refractive index also varies with wavelength of the light as given by Cauchy's equation. The most general form of this equation is where is the refractive index, is the wavelength, and , , , etc., are coefficients that can be determined for a material by fitting the equation to measured refractive indices at known wavelengths. The coefficients are usually quoted for as the vacuum wavelength in micrometres. Usually, it is sufficient to use a two-term form of the equation: where the coefficients and are determined specifically for this form of the equation. For visible light most transparent media have refractive indices between 1 and 2. A few examples are given in the adjacent table. These values are measured at the yellow doublet D-line of sodium, with a wavelength of 589 nanometers, as is conventionally done. Gases at atmospheric pressure have refractive indices close to 1 because of their low density. Almost all solids and liquids have refractive indices above 1.3, with aerogel as the clear exception. Aerogel is a very low density solid that can be produced with refractive index in the range from 1.002 to 1.265. Moissanite lies at the other end of the range with a refractive index as high as 2.65. Most plastics have refractive indices in the range from 1.3 to 1.7, but some high-refractive-index polymers can have values as high as 1.76. For infrared light refractive indices can be considerably higher. Germanium is transparent in the wavelength region from and has a refractive index of about 4. A type of new materials termed "topological insulators", was recently found which have high refractive index of up to 6 in the near to mid infrared frequency range. Moreover, topological insulators are transparent when they have nanoscale thickness. These properties are potentially important for applications in infrared optics. Refractive index below unity According to the theory of relativity, no information can travel faster than the speed of light in vacuum, but this does not mean that the refractive index cannot be less than 1. The refractive index measures the phase velocity of light, which does not carry information. The phase velocity is the speed at which the crests of the wave move and can be faster than the speed of light in vacuum, and thereby give a refractive index This can occur close to resonance frequencies, for absorbing media, in plasmas, and for X-rays. In the X-ray regime the refractive indices are lower than but very (exceptions close to some resonance frequencies). As an example, water has a refractive index of for X-ray radiation at a photon energy of ( wavelength). An example of a plasma with an index of refraction less than unity is Earth's ionosphere. 
Since the refractive index of the ionosphere (a plasma), is less than unity, electromagnetic waves propagating through the plasma are bent "away from the normal" (see Geometric optics) allowing the radio wave to be refracted back toward earth, thus enabling long-distance radio communications. See also Radio Propagation and Skywave. Negative refractive index Recent research has also demonstrated the "existence" of materials with a negative refractive index, which can occur if permittivity and permeability have simultaneous negative values. This can be achieved with periodically constructed metamaterials. The resulting negative refraction (i.e., a reversal of Snell's law) offers the possibility of the superlens and other new phenomena to be actively developed by means of metamaterials. Microscopic explanation At the atomic scale, an electromagnetic wave's phase velocity is slowed in a material because the electric field creates a disturbance in the charges of each atom (primarily the electrons) proportional to the electric susceptibility of the medium. (Similarly, the magnetic field creates a disturbance proportional to the magnetic susceptibility.) As the electromagnetic fields oscillate in the wave, the charges in the material will be "shaken" back and forth at the same frequency. The charges thus radiate their own electromagnetic wave that is at the same frequency, but usually with a phase delay, as the charges may move out of phase with the force driving them (see sinusoidally driven harmonic oscillator). The light wave traveling in the medium is the macroscopic superposition (sum) of all such contributions in the material: the original wave plus the waves radiated by all the moving charges. This wave is typically a wave with the same frequency but shorter wavelength than the original, leading to a slowing of the wave's phase velocity. Most of the radiation from oscillating material charges will modify the incoming wave, changing its velocity. However, some net energy will be radiated in other directions or even at other frequencies (see scattering). Depending on the relative phase of the original driving wave and the waves radiated by the charge motion, there are several possibilities: If the electrons emit a light wave which is 90° out of phase with the light wave shaking them, it will cause the total light wave to travel slower. This is the normal refraction of transparent materials like glass or water, and corresponds to a refractive index which is real and greater than 1. If the electrons emit a light wave which is 270° out of phase with the light wave shaking them, it will cause the wave to travel faster. This is called "anomalous refraction", and is observed close to absorption lines (typically in infrared spectra), with X-rays in ordinary materials, and with radio waves in Earth's ionosphere. It corresponds to a permittivity less than 1, which causes the refractive index to be also less than unity and the phase velocity of light greater than the speed of light in vacuum (note that the signal velocity is still less than , as discussed above). If the response is sufficiently strong and out-of-phase, the result is a negative value of permittivity and imaginary index of refraction, as observed in metals or plasma. If the electrons emit a light wave which is 180° out of phase with the light wave shaking them, it will destructively interfere with the original light to reduce the total light intensity. 
This is light absorption in opaque materials and corresponds to an imaginary refractive index. If the electrons emit a light wave which is in phase with the light wave shaking them, it will amplify the light wave. This is rare, but occurs in lasers due to stimulated emission. It corresponds to an imaginary index of refraction, with the opposite sign to that of absorption. For most materials at visible-light frequencies, the phase is somewhere between 90° and 180°, corresponding to a combination of both refraction and absorption. Dispersion The refractive index of materials varies with the wavelength (and frequency) of light. This is called dispersion and causes prisms and rainbows to divide white light into its constituent spectral colors. As the refractive index varies with wavelength, so will the refraction angle as light goes from one material to another. Dispersion also causes the focal length of lenses to be wavelength dependent. This is a type of chromatic aberration, which often needs to be corrected for in imaging systems. In regions of the spectrum where the material does not absorb light, the refractive index tends to with increasing wavelength, and thus with frequency. This is called "normal dispersion", in contrast to "anomalous dispersion", where the refractive index with wavelength. For visible light normal dispersion means that the refractive index is higher for blue light than for red. For optics in the visual range, the amount of dispersion of a lens material is often quantified by the Abbe number: For a more accurate description of the wavelength dependence of the refractive index, the Sellmeier equation can be used. It is an empirical formula that works well in describing dispersion. Sellmeier coefficients are often quoted instead of the refractive index in tables. Principal refractive index wavelength ambiguity Because of dispersion, it is usually important to specify the vacuum wavelength of light for which a refractive index is measured. Typically, measurements are done at various well-defined spectral emission lines. Manufacturers of optical glass in general define principal index of refraction at yellow spectral line of helium () and alternatively at a green spectral line of mercury (), called and lines respectively. Abbe number is defined for both and denoted and . The spectral data provided by glass manufacturers is also often more precise for these two wavelengths. Both, and spectral lines are singlets and thus are suitable to perform a very precise measurements, such as spectral goniometric method. In practical applications, measurements of refractive index are performed on various refractometers, such as Abbe refractometer. Measurement accuracy of such typical commercial devices is in the order of 0.0002. Refractometers usually measure refractive index , defined for sodium doublet (), which is actually a midpoint between two adjacent yellow spectral lines of sodium. Yellow spectral lines of helium () and sodium () are apart, which can be considered negligible for typical refractometers, but can cause confusion and lead to errors if accuracy is critical. All three typical principle refractive indices definitions can be found depending on application and region, so a proper subscript should be used to avoid ambiguity. Complex refractive index When light passes through a medium, some part of it will always be absorbed. 
This can be conveniently taken into account by defining a complex refractive index, Here, the real part is the refractive index and indicates the phase velocity, while the imaginary part, called the extinction coefficient, indicates the amount of attenuation when the electromagnetic wave propagates through the material. It is related to the absorption coefficient, , through: These values depend upon the frequency of the light used in the measurement. That the imaginary part corresponds to absorption can be seen by inserting this refractive index into the expression for the electric field of a plane electromagnetic wave traveling in the -direction. This can be done by relating the complex wave number to the complex refractive index through , with being the vacuum wavelength; this can be inserted into the plane wave expression for a wave traveling in the -direction as: Here we see that gives an exponential decay, as expected from the Beer–Lambert law. Since intensity is proportional to the square of the electric field, intensity will depend on the depth into the material as and thus the absorption coefficient is , and the penetration depth (the distance after which the intensity is reduced by a factor of ) is . Both and are dependent on the frequency. In most circumstances (light is absorbed) or (light travels forever without loss). In special situations, especially in the gain medium of lasers, it is also possible that , corresponding to an amplification of the light. An alternative convention uses instead of , but where still corresponds to loss. Therefore, these two conventions are inconsistent and should not be confused. The difference is related to defining sinusoidal time dependence as versus . See Mathematical descriptions of opacity. Dielectric loss and non-zero DC conductivity in materials cause absorption. Good dielectric materials such as glass have extremely low DC conductivity, and at low frequencies the dielectric loss is also negligible, resulting in almost no absorption. However, at higher frequencies (such as visible light), dielectric loss may increase absorption significantly, reducing the material's transparency to these frequencies. The real and imaginary parts of the complex refractive index are related through the Kramers–Kronig relations. In 1986, A.R. Forouhi and I. Bloomer deduced an equation describing as a function of photon energy, , applicable to amorphous materials. Forouhi and Bloomer then applied the Kramers–Kronig relation to derive the corresponding equation for as a function of . The same formalism was applied to crystalline materials by Forouhi and Bloomer in 1988. The refractive index and extinction coefficient, and , are typically measured from quantities that depend on them, such as reflectance, , or transmittance, , or ellipsometric parameters, and . The determination of and from such measured quantities will involve developing a theoretical expression for or , or and in terms of a valid physical model for and . By fitting the theoretical model to the measured or , or and using regression analysis, and can be deduced. X-ray and extreme UV For X-ray and extreme ultraviolet radiation the complex refractive index deviates only slightly from unity and usually has a real part smaller than 1. It is therefore normally written as (or with the alternative convention mentioned above). Far above the atomic resonance frequency delta can be given by where is the classical electron radius, is the X-ray wavelength, and is the electron density. 
One may assume the electron density is simply the number of electrons per atom multiplied by the atomic density, but more accurate calculation of the refractive index requires replacing with the complex atomic form factor It follows that with and typically of the order of and . Relations to other quantities Optical path length Optical path length (OPL) is the product of the geometric length of the path light follows through a system, and the index of refraction of the medium through which it propagates, This is an important concept in optics because it determines the phase of the light and governs interference and diffraction of light as it propagates. According to Fermat's principle, light rays can be characterized as those curves that optimize the optical path length. Refraction When light moves from one medium to another, it changes direction, i.e. it is refracted. If it moves from a medium with refractive index to one with refractive index , with an incidence angle to the surface normal of , the refraction angle can be calculated from Snell's law: When light enters a material with higher refractive index, the angle of refraction will be smaller than the angle of incidence and the light will be refracted towards the normal of the surface. The higher the refractive index, the closer to the normal direction the light will travel. When passing into a medium with lower refractive index, the light will instead be refracted away from the normal, towards the surface. Total internal reflection If there is no angle fulfilling Snell's law, i.e., the light cannot be transmitted and will instead undergo total internal reflection. This occurs only when going to a less optically dense material, i.e., one with lower refractive index. To get total internal reflection the angles of incidence must be larger than the critical angle Reflectivity Apart from the transmitted light there is also a reflected part. The reflection angle is equal to the incidence angle, and the amount of light that is reflected is determined by the reflectivity of the surface. The reflectivity can be calculated from the refractive index and the incidence angle with the Fresnel equations, which for normal incidence reduces to For common glass in air, and , and thus about 4% of the incident power is reflected. At other incidence angles the reflectivity will also depend on the polarization of the incoming light. At a certain angle called Brewster's angle, p-polarized light (light with the electric field in the plane of incidence) will be totally transmitted. Brewster's angle can be calculated from the two refractive indices of the interface as Lenses The focal length of a lens is determined by its refractive index and the radii of curvature and of its surfaces. The power of a thin lens in air is given by the simplified version of the Lensmaker's formula: where is the focal length of the lens. Microscope resolution The resolution of a good optical microscope is mainly determined by the numerical aperture () of its objective lens. The numerical aperture in turn is determined by the refractive index of the medium filling the space between the sample and the lens and the half collection angle of light according to Carlsson (2007): For this reason oil immersion is commonly used to obtain high resolution in microscopy. In this technique the objective is dipped into a drop of high refractive index immersion oil on the sample under study. 
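The geometric-optics relations collected above (Snell's law, the critical angle for total internal reflection, the normal-incidence Fresnel reflectivity and Brewster's angle) are easy to evaluate numerically. The following Python sketch is only an illustration: the index values of 1.0 for air and 1.5 for a common glass are assumed round numbers, and the function names are ad hoc rather than taken from any library.

```python
import math

def snell_angle(n1, n2, theta1_deg):
    """Refraction angle from Snell's law, n1*sin(theta1) = n2*sin(theta2).

    Returns None when no transmitted ray exists (total internal reflection).
    """
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None  # no angle satisfies Snell's law
    return math.degrees(math.asin(s))

def critical_angle(n1, n2):
    """Critical angle (degrees) when going from a denser to a rarer medium."""
    if n1 <= n2:
        return None  # total internal reflection is impossible in this direction
    return math.degrees(math.asin(n2 / n1))

def normal_incidence_reflectivity(n1, n2):
    """Fresnel reflectivity R = ((n1 - n2)/(n1 + n2))**2 at normal incidence."""
    return ((n1 - n2) / (n1 + n2)) ** 2

def brewster_angle(n1, n2):
    """Brewster's angle (degrees), at which p-polarized light is fully transmitted."""
    return math.degrees(math.atan(n2 / n1))

if __name__ == "__main__":
    n_air, n_glass = 1.0, 1.5  # assumed illustrative values
    print(snell_angle(n_air, n_glass, 30.0))               # ~19.5 deg, bent towards the normal
    print(critical_angle(n_glass, n_air))                  # ~41.8 deg inside the glass
    print(normal_incidence_reflectivity(n_air, n_glass))   # ~0.04, i.e. about 4%
    print(brewster_angle(n_air, n_glass))                  # ~56.3 deg
```

For glass in air this reproduces the roughly 4% normal-incidence reflection quoted above.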
Relative permittivity and permeability The refractive index of electromagnetic radiation equals where is the material's relative permittivity, and is its relative permeability. The refractive index is used for optics in Fresnel equations and Snell's law, while the relative permittivity and permeability are used in Maxwell's equations and electronics. Most naturally occurring materials are non-magnetic at optical frequencies, that is is very close to 1, therefore is approximately . In this particular case, the complex relative permittivity , with real and imaginary parts and , and the complex refractive index , with real and imaginary parts and (the latter called the "extinction coefficient"), follow the relation and their components are related by: and: where is the complex modulus. Wave impedance The wave impedance of a plane electromagnetic wave in a non-conductive medium is given by where is the vacuum wave impedance, and are the absolute permeability and permittivity of the medium, is the material's relative permittivity, and is its relative permeability. In non-magnetic media (that is, in materials with ), and Thus the refractive index in a non-magnetic medium is the ratio of the vacuum wave impedance to the wave impedance of the medium. The reflectivity between two media can thus be expressed both by the wave impedances and the refractive indices as Density In general, it is assumed that the refractive index of a glass increases with its density. However, there does not exist an overall linear relationship between the refractive index and the density for all silicate and borosilicate glasses. A relatively high refractive index and low density can be obtained with glasses containing light metal oxides such as and , while the opposite trend is observed with glasses containing and , as seen in the diagram at the right. Many oils (such as olive oil) and ethanol are examples of liquids that are more refractive, but less dense, than water, contrary to the general correlation between density and refractive index. For air, is proportional to the density of the gas as long as the chemical composition does not change. This means that it is also proportional to the pressure and inversely proportional to the temperature for ideal gases. For liquids the same observation can be made as for gases; for instance, the refractive index in alkanes increases nearly perfectly linearly with the density. On the other hand, for carboxylic acids, the density decreases with increasing number of C-atoms within the homologous series. The simple explanation of this finding is that it is not density, but the molar concentration of the chromophore that counts. In homologous series, this is the excitation of the C-H bond. August Beer must have known this intuitively when, in 1862, he suggested to Hans H. Landolt that he investigate the refractive index of compounds of homologous series. While Landolt did not find this relationship, since dispersion theory was in its infancy at the time, he had the idea of molar refractivity, which can even be assigned to single atoms. Based on this concept, the refractive indices of organic materials can be calculated. Bandgap The optical refractive index of a semiconductor tends to increase as the bandgap energy decreases. Many attempts have been made to model this relationship, beginning with T. S. Moss in 1949. Empirical models can match experimental data over a wide range of materials and yet fail for important cases like InSb, PbS, and Ge. 
This negative correlation between refractive index and bandgap energy, along with a negative correlation between bandgap and temperature, means that many semiconductors exhibit a positive correlation between refractive index and temperature. This is the opposite of most materials, where the refractive index decreases with temperature as a result of a decreasing material density. Group index Sometimes, a "group velocity refractive index", usually called the group index is defined: where is the group velocity. This value should not be confused with , which is always defined with respect to the phase velocity. When the dispersion is small, the group velocity can be linked to the phase velocity by the relation where is the wavelength in the medium. In this case the group index can thus be written in terms of the wavelength dependence of the refractive index as When the refractive index of a medium is known as a function of the vacuum wavelength (instead of the wavelength in the medium), the corresponding expressions for the group velocity and index are (for all values of dispersion) where is the wavelength in vacuum. Velocity, momentum, and polarizability As shown in the Fizeau experiment, when light is transmitted through a moving medium, its speed relative to an observer traveling with speed in the same direction as the light is: The momentum of photons in a medium of refractive index is a complex and controversial issue with two different values having different physical interpretations. The refractive index of a substance can be related to its polarizability with the Lorentz–Lorenz equation or to the molar refractivities of its constituents by the Gladstone–Dale relation. Refractivity In atmospheric applications, refractivity is defined as , often rescaled as either or ; the multiplication factors are used because the refractive index for air, deviates from unity by at most a few parts per ten thousand. Molar refractivity, on the other hand, is a measure of the total polarizability of a mole of a substance and can be calculated from the refractive index as where is the density, and is the molar mass. Nonscalar, nonlinear, or nonhomogeneous refraction So far, we have assumed that refraction is given by linear equations involving a spatially constant, scalar refractive index. These assumptions can break down in different ways, to be described in the following subsections. Birefringence In some materials, the refractive index depends on the polarization and propagation direction of the light. This is called birefringence or optical anisotropy. In the simplest form, uniaxial birefringence, there is only one special direction in the material. This axis is known as the optical axis of the material. Light with linear polarization perpendicular to this axis will experience an ordinary refractive index while light polarized in parallel will experience an extraordinary refractive index . The birefringence of the material is the difference between these indices of refraction, . Light propagating in the direction of the optical axis will not be affected by the birefringence since the refractive index will be independent of polarization. For other propagation directions the light will split into two linearly polarized beams. For light traveling perpendicularly to the optical axis the beams will have the same direction. This can be used to change the polarization direction of linearly polarized light or to convert between linear, circular, and elliptical polarizations with waveplates. 
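The group index defined above, which in terms of the vacuum wavelength reduces to n_g = n - lambda * dn/dlambda, can be evaluated numerically from any dispersion model, such as the Sellmeier equation mentioned in the dispersion section. The Python sketch below is a minimal illustration: the Sellmeier coefficients are assumed, BK7-like values quoted only for demonstration and should be checked against a manufacturer's datasheet before any real use.

```python
import math

# Sellmeier coefficients resembling those of a common borosilicate crown glass
# (BK7-like); treat them as illustrative assumptions, not datasheet values.
B = (1.03961212, 0.231792344, 1.01046945)
C = (0.00600069867, 0.0200179144, 103.560653)   # in micrometres squared

def sellmeier_n(lam_um):
    """Phase refractive index n(lambda) from the Sellmeier equation."""
    lam2 = lam_um ** 2
    n2 = 1.0 + sum(b * lam2 / (lam2 - c) for b, c in zip(B, C))
    return math.sqrt(n2)

def group_index(lam_um, dlam=1e-4):
    """Group index n_g = n - lambda * dn/dlambda (vacuum wavelength),
    with dn/dlambda estimated by a central finite difference."""
    dn_dlam = (sellmeier_n(lam_um + dlam) - sellmeier_n(lam_um - dlam)) / (2 * dlam)
    return sellmeier_n(lam_um) - lam_um * dn_dlam

if __name__ == "__main__":
    for lam in (0.4861, 0.5876, 0.6563):   # blue, yellow and red visible wavelengths, in micrometres
        print(f"{lam:.4f} um  n = {sellmeier_n(lam):.5f}  n_g = {group_index(lam):.5f}")
```

Because dn/dlambda is negative in regions of normal dispersion, the group index comes out larger than the phase index, which is why a pulse travels more slowly than the phase fronts in ordinary glass.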
Many crystals are naturally birefringent, but isotropic materials such as plastics and glass can also often be made birefringent by introducing a preferred direction through, e.g., an external force or electric field. This effect is called photoelasticity, and can be used to reveal stresses in structures. The birefringent material is placed between crossed polarizers. A change in birefringence alters the polarization and thereby the fraction of light that is transmitted through the second polarizer. In the more general case of trirefringent materials described by the field of crystal optics, the dielectric constant is a rank-2 tensor (a 3 by 3 matrix). In this case the propagation of light cannot simply be described by refractive indices except for polarizations along principal axes. Nonlinearity The strong electric field of high intensity light (such as the output of a laser) may cause a medium's refractive index to vary as the light passes through it, giving rise to nonlinear optics. If the index varies quadratically with the field (linearly with the intensity), it is called the optical Kerr effect and causes phenomena such as self-focusing and self-phase modulation. If the index varies linearly with the field (a nontrivial linear coefficient is only possible in materials that do not possess inversion symmetry), it is known as the Pockels effect. Inhomogeneity If the refractive index of a medium is not constant but varies gradually with the position, the material is known as a gradient-index (GRIN) medium and is described by gradient index optics. Light traveling through such a medium can be bent or focused, and this effect can be exploited to produce lenses, some optical fibers, and other devices. Introducing elements in the design of an optical system can greatly simplify the system, reducing the number of elements by as much as a third while maintaining overall performance. The crystalline lens of the human eye is an example of a lens with a refractive index varying from about 1.406 in the inner core to approximately 1.386 at the less dense cortex. Some common mirages are caused by a spatially varying refractive index of air. Refractive index measurement Homogeneous media The refractive index of liquids or solids can be measured with refractometers. They typically measure some angle of refraction or the critical angle for total internal reflection. The first laboratory refractometers sold commercially were developed by Ernst Abbe in the late 19th century. The same principles are still used today. In this instrument, a thin layer of the liquid to be measured is placed between two prisms. Light is shone through the liquid at incidence angles all the way up to 90°, i.e., light rays parallel to the surface. The second prism should have an index of refraction higher than that of the liquid, so that light only enters the prism at angles smaller than the critical angle for total reflection. This angle can then be measured either by looking through a telescope, or with a digital photodetector placed in the focal plane of a lens. The refractive index of the liquid can then be calculated from the maximum transmission angle as , where is the refractive index of the prism. This type of device is commonly used in chemical laboratories for identification of substances and for quality control. 
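The refractometer reading described above converts a measured maximum transmission angle into a refractive index through a relation of the form n_liquid = n_prism * sin(theta_max). A minimal sketch, in which both the prism index and the angle reading are hypothetical values chosen only to illustrate the arithmetic:

```python
import math

def liquid_index_from_refractometer(n_prism, theta_max_deg):
    """Index of the liquid from the maximum transmission angle, n = n_prism * sin(theta_max)."""
    return n_prism * math.sin(math.radians(theta_max_deg))

# Purely illustrative reading: a dense prism of assumed index 1.75
# and an assumed maximum transmission angle of 49.5 degrees.
print(round(liquid_index_from_refractometer(1.75, 49.5), 4))   # ~1.331, close to water
```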
Handheld variants are used in agriculture by, e.g., wine makers to determine sugar content in grape juice, and inline process refractometers are used in, e.g., the chemical and pharmaceutical industries for process control. In gemology, a different type of refractometer is used to measure the index of refraction and birefringence of gemstones. The gem is placed on a high refractive index prism and illuminated from below. A high refractive index contact liquid is used to achieve optical contact between the gem and the prism. At small incidence angles most of the light will be transmitted into the gem, but at high angles total internal reflection will occur in the prism. The critical angle is normally measured by looking through a telescope. Refractive index variations Unstained biological structures appear mostly transparent under bright-field microscopy as most cellular structures do not attenuate appreciable quantities of light. Nevertheless, the variation in the materials that constitute these structures also corresponds to a variation in the refractive index. The following techniques convert such variation into measurable amplitude differences: To measure the spatial variation of the refractive index in a sample, phase-contrast imaging methods are used. These methods measure the variations in phase of the light wave exiting the sample. The phase is proportional to the optical path length the light ray has traversed, and thus gives a measure of the integral of the refractive index along the ray path. The phase cannot be measured directly at optical or higher frequencies, and therefore needs to be converted into intensity by interference with a reference beam. In the visual spectrum this is done using Zernike phase-contrast microscopy, differential interference contrast microscopy (DIC), or interferometry. Zernike phase-contrast microscopy introduces a phase shift to the low spatial frequency components of the image with a phase-shifting annulus in the Fourier plane of the sample, so that high-spatial-frequency parts of the image can interfere with the low-frequency reference beam. In DIC the illumination is split up into two beams that are given different polarizations, are phase shifted differently, and are shifted transversely by slightly different amounts. After the specimen, the two parts are made to interfere, giving an image of the derivative of the optical path length in the direction of the difference in the transverse shift. In interferometry the illumination is split up into two beams by a partially reflective mirror. One of the beams is let through the sample before they are combined to interfere and give a direct image of the phase shifts. If the optical path length variations are more than a wavelength the image will contain fringes. There exist several phase-contrast X-ray imaging techniques to determine the 2D or 3D spatial distribution of the refractive index of samples in the X-ray regime. Applications The refractive index is an important property of the components of any optical instrument. It determines the focusing power of lenses, the dispersive power of prisms, the reflectivity of lens coatings, and the light-guiding nature of optical fiber. Since the refractive index is a fundamental physical property of a substance, it is often used to identify a particular substance, confirm its purity, or measure its concentration. The refractive index is used to measure solids, liquids, and gases. Most commonly it is used to measure the concentration of a solute in an aqueous solution. 
It can also be used as a useful tool to differentiate between different types of gemstone, due to the unique chatoyance each individual stone displays. A refractometer is the instrument used to measure the refractive index. For a solution of sugar, the refractive index can be used to determine the sugar content (see Brix). See also Calculation of glass properties Clausius–Mossotti relation Ellipsometry Fermat's principle Index ellipsoid Index-matching material Laser Schlieren deflectometry Optical properties of water and ice Phase-contrast X-ray imaging Prism-coupling refractometry Velocity factor Footnotes References External links NIST calculator for determining the refractive index of air Dielectric materials Science World Filmetrics' online database Free database of refractive index and absorption coefficient information RefractiveIndex.INFO Refractive index database featuring online plotting and parameterisation of data LUXPOP Thin film and bulk index of refraction and photonics calculations The Feynman Lectures on Physics Vol. II Ch. 32: Refractive Index of Dense Materials Dimensionless quantities Refraction Optical quantities
Refractive index
[ "Physics", "Mathematics" ]
7,280
[ "Physical phenomena", "Physical quantities", "Refraction", "Quantity", "Optical phenomena", "Dimensionless quantities", "Optical quantities" ]
25,929
https://en.wikipedia.org/wiki/Regiomontanus
Johannes Müller von Königsberg (6 June 1436 – 6 July 1476), better known as Regiomontanus (), was a mathematician, astrologer and astronomer of the German Renaissance, active in Vienna, Buda and Nuremberg. His contributions were instrumental in the development of Copernican heliocentrism in the decades following his death. Regiomontanus wrote under the Latinized name of Ioannes de Monteregio (or Monte Regio; Regio Monte); the toponym Regiomontanus was first used by Philipp Melanchthon in 1534. He is named after Königsberg in Lower Franconia, not the larger Königsberg (modern Kaliningrad) in Prussia. Life Although little is known of Regiomontanus' early life, it is believed that at eleven years of age, he became a student at the University of Leipzig, Saxony. In 1451 he continued his studies at Alma Mater Rudolfina, the university in Vienna, in the Duchy of Austria, where he became a pupil and friend of Georg von Peuerbach. In 1452 he was awarded his bachelor's degree (baccalaureus), and he was awarded his master's degree (magister artium) at the age of 21 in 1457. He lectured in optics and ancient literature. In 1460 the papal legate Basilios Bessarion came to Vienna on a diplomatic mission. Being a humanist scholar with a great interest in the mathematical sciences, Bessarion sought out Peuerbach's company. George of Trebizond who was Bessarion's philosophical rival had recently produced a new Latin translation of Ptolemy's Almagest from the Greek, which Bessarion, correctly, regarded as inaccurate and badly translated, so he asked Peuerbach to produce a new one. Peuerbach's Greek was not good enough to do a translation but he knew the Almagest intimately so instead he started work on a modernised, improved abridgement of the work. Bessarion also invited Peuerbach to become part of his household and to accompany him back to Italy when his work in Vienna was finished. Peuerbach accepted the invitation on the condition that Regiomontanus could also accompany them. However Peuerbach fell ill in 1461 and died having completed only the first six books of his abridgement of the Almagest. On his death bed Peuerbach made Regiomontanus promise to finish the book and publish it. In 1461 Regiomontanus left Vienna with Bessarion and spent the next four years travelling around Northern Italy as a member of Bessarion's household, looking for and copying mathematical and astronomical manuscripts for Bessarion, who possessed the largest private library in Europe at the time. Regiomontanus also made the acquaintance of the leading Italian mathematicians of the age such as Giovanni Bianchini and Paolo dal Pozzo Toscanelli who had also been friends of Peuerbach during his prolonged stay in Italy more than twenty years earlier. In 1467, he went to work for János Vitéz, archbishop of Esztergom. There he calculated extensive astronomical tables and built astronomical instruments. Next he went to Buda, and the court of Matthias Corvinus of Hungary, for whom he built an astrolabe, and where he collated Greek manuscripts for a handsome salary. The trigonometric tables that he created while living in Hungary, his Tabulae directionum profectionumque (printed posthum., 1490), were designed for astrology, including finding astrological houses. The Tabulae also contained several tangent tables. In 1471 Regiomontanus moved to the Free City of Nuremberg, in Franconia, then one of the Empire's important seats of learning, publication, commerce and art, where he worked with the humanist and merchant Bernhard Walther. 
Here he founded the world's first scientific printing press, and in 1472 he published the first printed astronomical textbook, the Theoricae novae Planetarum of his teacher Georg von Peurbach. Regiomontanus and Bernhard Walther observed the comet of 1472. Regiomontanus tried to estimate its distance from Earth, using the angle of parallax. According to David A. Seargeant: The 1472 comet was visible from Christmas Day 1471 to 1 March 1472 (Julian Calendar), a total of 59 days. In 1475, Regiomontanus was called to Rome by Pope Sixtus IV on to work on the planned calendar reform. Sixtus promised substantial rewards, including the title of bishop of Regensburg, but it is unlikely that he was actually appointed to the role. On his way to Rome, stopping in Venice, he commissioned the publication of his Calendarium with Erhard Ratdolt (printed in 1476). Regiomontanus reached Rome, but he died there after only a few months, in his 41st year, on 6 July 1476. According to a rumor repeated by Gassendi in his Regiomontanus biography, he was poisoned by relatives of George of Trebizond whom he had criticized in his writing; it is however considered more likely that he died from the plague. Work During his time in Italy he completed Peuerbach's abridgement of Almagest, Epytoma in almagesti Ptolemei. In 1464, he completed De triangulis omnimodis ("On Triangles of All Kinds"). De triangulis omnimodis was one of the first textbooks presenting the current state of trigonometry and included lists of questions for review of individual chapters. In it he wrote: In 1465, he built a portable sundial for Pope Paul II. In Epytoma in almagesti Ptolemei, he critiqued the translation of Almagest by George of Trebizond, pointing out inaccuracies. Later Nicolaus Copernicus would refer to this book as an influence on his own work. A prolific author, Regiomontanus was internationally famous in his lifetime. Despite having completed only a quarter of what he had intended to write, he left a substantial body of work. Nicolaus Copernicus' teacher, Domenico Maria Novara da Ferrara, referred to Regiomontanus as having been his own teacher. There is speculation that Regiomontanus had arrived at a theory of heliocentrism before he died; a manuscript shows particular attention to the heliocentric theory of the Pythagorean Aristarchus, mention was also given to the motion of the earth in a letter to a friend. Much of the material on spherical trigonometry in Regiomontanus' On Triangles was taken directly from the twelfth-century work of Jabir ibn Aflah otherwise known as Geber, as noted in the sixteenth century by Gerolamo Cardano. Publications Legacy Simon Stevin, in his book describing decimal representation of fractions (De Thiende), cites the trigonometric tables of Regiomontanus as suggestive of positional notation. Regiomontanus designed his own astrological house system, which became one of the most popular systems in Europe. In 1561, Daniel Santbech compiled a collected edition of the works of Regiomontanus, De triangulis planis et sphaericis libri quinque (first published in 1533) and Compositio tabularum sinum recto, as well as Santbech's own Problematum astronomicorum et geometricorum sectiones septem. It was published in Basel by Henrich Petri and Petrus Perna. There is an image of him in Hartmann Schedel's 1493 Nuremberg Chronicle. He is holding an astrolabe. Yet, although there are thirteen illustrations of comets in the Chronicle (from 471 to 1472), they are stylized, rather than representing the actual objects. 
The crater Regiomontanus on the Moon is named after him. See also List of unsolved deaths Regiomontanus' angle maximization problem Notes References Further reading Irmela Bues, Johannes Regiomontanus (1436–1476). In: Fränkische Lebensbilder 11. Neustadt/Aisch 1984, pp. 28–43 Rudolf Mett: Regiomontanus. Wegbereiter des neuen Weltbildes. Teubner / Vieweg, Stuttgart / Leipzig 1996, Helmuth Gericke: Mathematik im Abendland: Von den römischen Feldmessern bis zu Descartes. Springer-Verlag, Berlin 1990, Günther Harmann (Hrsg.): Regiomontanus-Studien. (= Österreichische Akademie der Wissenschaften, Philosophisch-historische Klasse, Sitzungsberichte, Bd. 364; Veröffentlichungen der Kommission für Geschichte der Mathematik, Naturwissenschaften und Medizin, volumes 28–30), Vienna 1980. Samuel Eliot Morison, Christopher Columbus, Mariner, Boston, Little, Brown and Company, 1955. Ralf Kern: Wissenschaftliche Instrumente in ihrer Zeit/Band 1. Vom Astrolab zum mathematischen Besteck. Köln, 2010. Michela Malpangotto, Regiomontano e il rinnovamento del sapere matematico e astronomico nel Quattrocento, Cacucci, 2008 (with the critical edition of Oratio in praelectione Alfragani, Editorial Programm, Preface to the Dialogus inter Viennensem et Cracoviensem adversus Gerardi Cremonensis in planetarum theoricas deliramenta) Ernst Zinner: Leben und Wirken des Joh. Müller von Königsberg, genannt Regiomontanus; Translated into English by Ezra A. Brown as Regiomontanus: His Life and Work External links Adam Mosley, Regiomontanus Biography, web site at the Department of History and Philosophy of Science of the University of Cambridge (1999). Electronic facsimile-editions of the rare book collection at the Vienna Institute of Astronomy Regiomontanus and Calendar Reform Polybiblio: Regiomontanus, Johannes/Santbech, Daniel, ed. De triangulis planis et sphaericis libri. Basel Henrich Petri & Petrus Perna 1561 Joannes Regiomontanus: Calendarium, Venedig 1485, Digitalisat Beitrag bei „Astronomie in Nürnberg“ "" Digitalisierte Werke von Regiomontanus—SICD der Universitäten von Strasbourg Online Galleries, History of Science Collections, University of Oklahoma Libraries (). High resolution images of works by and/or portraits of Regiomontanus in JPEG and TIFF formats. Regiomontanus, Joannes, 1436–1476. Calendarium. Venice, Bernhard Maler Pictor, Erhard Ratdolt, Peter Löslein, 1476. [32] leaves. woodcuts: border, diagrs. (1 movable, 1 with brass pointer) 29.6 cm. (4to). From the Lessing J. Rosenwald Collection in the Rare Book and Special Collections Division at the Library of Congress Doctissimi viri et mathematicarum disciplinarum eximii professoris Ioannis de Regio Monte De triangvlis omnímodis libri qvinqve From the Rare Book and Special Collection Division at the Library of Congress Regiomontanus' Defensio Theonis digital edition (scans and transcription) 1436 births 1476 deaths 15th-century apocalypticists 15th-century astrologers 15th-century German astronomers 15th-century German mathematicians 15th-century German writers 15th-century writers in Latin Catholic clergy scientists Christian astrologers German astrological writers German male writers German Roman Catholics Medieval German astrologers People from Königsberg, Bavaria German scientific instrument makers Unsolved deaths Astronomical instrument makers
Regiomontanus
[ "Astronomy" ]
2,542
[ "Astronomical instrument makers", "Astronomical instruments" ]
26,118
https://en.wikipedia.org/wiki/Roof
A roof (: roofs or rooves) is the top covering of a building, including all materials and constructions necessary to support it on the walls of the building or on uprights, providing protection against rain, snow, sunlight, extremes of temperature, and wind. A roof is part of the building envelope. The characteristics of a roof are dependent upon the purpose of the building that it covers, the available roofing materials and the local traditions of construction and wider concepts of architectural design and practice, and may also be governed by local or national legislation. In most countries, a roof protects primarily against rain. A verandah may be roofed with material that protects against sunlight but admits the other elements. The roof of a garden conservatory protects plants from cold, wind, and rain, but admits light. A roof may also provide additional living space, for example, a roof garden. Etymology Old English 'roof, ceiling, top, summit; heaven, sky', also figuratively, 'highest point of something', from Proto-Germanic (cf. Dutch 'deckhouse, cabin, coffin-lid', Middle High German 'penthouse', Old Norse 'boat shed'). There are no apparent connections outside the Germanic family. "English alone has retained the word in a general sense, for which the other languages use forms corresponding to OE. thatch". Design elements The elements in the design of a roof are: the material the construction the durability The material of a roof may range from banana leaves, wheaten straw or seagrass to laminated glass, copper (see: copper roofing), aluminium sheeting and pre-cast concrete. In many parts of the world ceramic roof tiles have been the predominant roofing material for centuries, if not millennia. Other roofing materials include asphalt, coal tar pitch, EPDM rubber, Hypalon, polyurethane foam, PVC, slate, Teflon fabric, TPO, and wood shakes and shingles. The construction of a roof is determined by its method of support and how the underneath space is bridged and whether or not the roof is pitched. The pitch is the angle at which the roof rises from its lowest to its highest point. Most US domestic architecture, except in very dry regions, has roofs that are sloped, or pitched. Although modern construction elements such as drainpipes may remove the need for pitch, roofs are pitched for reasons of tradition and aesthetics. So the pitch is partly dependent upon stylistic factors, and partially to do with practicalities. Some types of roofing, for example thatch, require a steep pitch in order to be waterproof and durable. Other types of roofing, for example pantiles, are unstable on a steeply pitched roof but provide excellent weather protection at a relatively low angle. In regions where there is little rain, an almost flat roof with a slight run-off provides adequate protection against an occasional downpour. Drainpipes also remove the need for a sloping roof. A person that specializes in roof construction is called a roofer. The durability of a roof is a matter of concern because the roof is often the least accessible part of a building for purposes of repair and renewal, while its damage or destruction can have serious effects. Form The shape of roofs differs greatly from region to region. The main factors which influence the shape of roofs are the climate and the materials available for roof structure and the outer covering. The basic shapes of roofs are flat, mono-pitched, gabled, mansard, hipped, butterfly, arched and domed. There are many variations on these types. 
Roofs constructed of flat sections that are sloped are referred to as pitched roofs (generally if the angle exceeds 10 degrees). Pitched roofs, including gabled, hipped and skillion roofs, make up the greatest number of domestic roofs. Some roofs follow organic shapes, either by architectural design or because a flexible material such as thatch has been used in the construction. Parts There are two parts to a roof: its supporting structure and its outer skin, or uppermost weatherproof layer. In a minority of buildings, the outer layer is also a self-supporting structure. The roof structure is generally supported upon walls, although some building styles, for example, geodesic and A-frame, blur the distinction between wall and roof. Support The supporting structure of a roof usually comprises beams that are long and of strong, fairly rigid material such as timber, and since the mid-19th century, cast iron or steel. In countries that use bamboo extensively, the flexibility of the material causes a distinctive curving line to the roof, characteristic of Oriental architecture. Timber lends itself to a great variety of roof shapes. The timber structure can fulfil an aesthetic as well as practical function, when left exposed to view. Stone lintels have been used to support roofs since prehistoric times, but cannot bridge large distances. The stone arch came into extensive use in the ancient Roman period and in variant forms could be used to span spaces up to across. The stone arch or vault, with or without ribs, dominated the roof structures of major architectural works for about 2,000 years, only giving way to iron beams with the Industrial Revolution and the designing of such buildings as Paxton's Crystal Palace, completed 1851. With continual improvements in steel girders, these became the major structural support for large roofs, and eventually for ordinary houses as well. Another form of girder is the reinforced concrete beam, in which metal rods are encased in concrete, giving it greater strength under tension. Roof support can also serve as living spaces as can be seen in roof decking. Roof decking are spaces within the roof structure that is converted into a room of some sort. Outer layer This part of the roof shows great variation dependent upon availability of material. In vernacular architecture, roofing material is often vegetation, such as thatches, the most durable being sea grass with a life of perhaps 40 years. In many Asian countries bamboo is used both for the supporting structure and the outer layer where split bamboo stems are laid turned alternately and overlapped. In areas with an abundance of timber, wooden shingles, shakes and boards are used, while in some countries the bark of certain trees can be peeled off in thick, heavy sheets and used for roofing. The 20th century saw the manufacture of composition asphalt shingles which can last from a thin 20-year shingle to the thickest which are limited lifetime shingles, the cost depending on the thickness and durability of the shingle. When a layer of shingles wears out, they are usually stripped, along with the underlay and roofing nails, allowing a new layer to be installed. An alternative method is to install another layer directly over the worn layer. While this method is faster, it does not allow the roof sheathing to be inspected and water damage, often associated with worn shingles, to be repaired. 
Having multiple layers of old shingles under a new layer causes roofing nails to be located further from the sheathing, weakening their hold. The greatest concern with this method is that the weight of the extra material could exceed the dead load capacity of the roof structure and cause collapse. Because of this, jurisdictions which use the International Building Code prohibit the installation of new roofing on top of an existing roof that has two or more applications of any type of roof covering; the existing roofing material must be removed before installing a new roof. Slate is an ideal, and durable material, while in the Swiss Alps roofs are made from huge slabs of stone, several inches thick. The slate roof is often considered the best type of roofing. A slate roof may last 75 to 150 years, and even longer. However, slate roofs are often expensive to install – in the US, for example, a slate roof may have the same cost as the rest of the house. Often, the first part of a slate roof to fail is the fixing nails; they corrode, allowing the slates to slip. In the UK, this condition is known as "nail sickness". Because of this problem, fixing nails made of stainless steel or copper are recommended, and even these must be protected from the weather. Asbestos, usually in bonded corrugated panels, has been used widely in the 20th century as an inexpensive, non-flammable roofing material with excellent insulating properties. Health and legal issues involved in the mining and handling of asbestos products means that it is no longer used as a new roofing material. However, many asbestos roofs continue to exist, particularly in South America and Asia. Roofs made of cut turf (modern ones known as green roofs, traditional ones as sod roofs) have good insulating properties and are increasingly encouraged as a way of "greening" the Earth. The soil and vegetation function as living insulation, moderating building temperatures. Adobe roofs are roofs of clay, mixed with binding material such as straw or animal hair, and plastered on lathes to form a flat or gently sloped roof, usually in areas of low rainfall. In areas where clay is plentiful, roofs of baked tiles have been the major form of roofing. The casting and firing of roof tiles is an industry that is often associated with brickworks. While the shape and colour of tiles was once regionally distinctive, now tiles of many shapes and colours are produced commercially, to suit the taste and pocketbook of the purchaser. Concrete roof tiles are also a common choice, being available in many different styles and shapes. Sheet metal in the form of copper and lead has also been used for many hundreds of years. Both are expensive but durable, the vast copper roof of Chartres Cathedral, oxidised to a pale green colour, having been in place for hundreds of years. Lead, which is sometimes used for church roofs, was most commonly used as flashing in valleys and around chimneys on domestic roofs, particularly those of slate. Copper was used for the same purpose. In the 19th century, iron, electroplated with zinc to improve its resistance to rust, became a light-weight, easily transported, waterproofing material. Its low cost and easy application made it the most accessible commercial roofing, worldwide. Since then, many types of metal roofing have been developed. 
Steel shingle or standing-seam roofs last about 50 years or more depending on both the method of installation and the moisture barrier (underlayment) used and are between the cost of shingle roofs and slate roofs. In the 20th century, a large number of roofing materials were developed, including roofs based on bitumen (already used in previous centuries), on rubber and on a range of synthetics such as thermoplastic and on fibreglass. Functions A roof assembly has more than one function. It may provide any or all of the following functions: 1. To shed water i.e., prevent water from standing on the roof surface. Water standing on the roof surface increases the live load on the roof structure, which is a safety issue. Standing water also contributes to premature deterioration of most roofing materials. Some roofing manufacturers' warranties are rendered void due to standing water. 2. To protect the building interior from the effects of weather elements such as rain, wind, sun, heat and snow. 3. To provide thermal insulation. Most modern commercial/industrial roof assemblies incorporate insulation boards or batt insulation. In most cases, the International Building Code and International Residential Code establish the minimum R-value required within the roof assembly. 4. To perform for the expected service life. All standard roofing materials have established histories of their respective longevity, based on anecdotal evidence. Most roof materials will last long after the manufacturer's warranty has expired, given adequate ongoing maintenance, and absent storm damage. Metal and tile roofs may last fifty years or more. Asphalt shingles may last 30–50 years. Coal tar built-up roofs may last forty or more years. Single-ply roofs may last twenty or more years. 5. Provide a desired, unblemished appearance. Some roofs are selected not only for the above functions, but also for aesthetics, similar to wall cladding. Premium prices are often paid for certain systems because of their attractive appearance and "curb appeal." Insulation Because the purpose of a roof is to secure people and their possessions from climatic elements, the insulating properties of a roof are a consideration in its structure and the choice of roofing material. Some roofing materials, particularly those of natural fibrous material, such as thatch, have excellent insulating properties. For those that do not, extra insulation is often installed under the outer layer. In developed countries, the majority of dwellings have a ceiling installed under the structural members of the roof. The purpose of a ceiling is to insulate against heat and cold, noise, dirt and often from the droppings and lice of birds who frequently choose roofs as nesting places. Concrete tiles can be used as insulation. When installed leaving a space between the tiles and the roof surface, it can reduce heating caused by the sun. Forms of insulation are felt or plastic sheeting, sometimes with a reflective surface, installed directly below the tiles or other material; synthetic foam batting laid above the ceiling and recycled paper products and other such materials that can be inserted or sprayed into roof cavities. Cool roofs are becoming increasingly popular, and in some cases are mandated by local codes. Cool roofs are defined as roofs with both high reflectivity and high thermal emittance. 
Poorly insulated and ventilated roofing can suffer from problems such as the formation of ice dams around the overhanging eaves in cold weather, causing water from melted snow on upper parts of the roof to penetrate the roofing material. Ice dams occur when heat escapes through the uppermost part of the roof, and the snow at those points melts, refreezing as it drips along the shingles, and collecting in the form of ice at the lower points. This can result in structural damage from stress, including the destruction of gutter and drainage systems. Drainage The primary job of most roofs is to keep out water. The large area of a roof repels a lot of water, which must be directed in some suitable way, so that it does not cause damage or inconvenience. Flat roof of adobe dwellings generally have a very slight slope. In a Middle Eastern country, where the roof may be used for recreation, it is often walled, and drainage holes must be provided to stop water from pooling and seeping through the porous roofing material. While flat roofs are more prone to drainage issues, poorly designed or textured sloping roofs can face similar problems. Standing water on a roof can lead to mold growth, which is highly damaging to both the building’s structure and the health of its occupants. Repairing drainage issues is significantly less costly than fixing the damage caused by mold. Similar problems, although on a very much larger scale, confront the builders of modern commercial properties which often have flat roofs. Because of the very large nature of such roofs, it is essential that the outer skin be of a highly impermeable material. Most industrial and commercial structures have conventional roofs of low pitch. In general, the pitch of the roof is proportional to the amount of precipitation. Houses in areas of low rainfall frequently have roofs of low pitch while those in areas of high rainfall and snow, have steep roofs. The longhouses of Papua New Guinea, for example, being roof-dominated architecture, the high roofs sweeping almost to the ground. The high steeply-pitched roofs of Germany and Holland are typical in regions of snowfall. In parts of North America such as Buffalo, New York, United States, or Montreal, Quebec, Canada, there is a required minimum slope of 6 in 12 (1:2, a pitch of 30°). There are regional building styles which contradict this trend, the stone roofs of the Alpine chalets being usually of gentler incline. These buildings tend to accumulate a large amount of snow on them, which is seen as a factor in their insulation. The pitch of the roof is in part determined by the roofing material available, a pitch of 3 in 12 (1:4) or greater slope generally being covered with asphalt shingles, wood shake, corrugated steel, slate or tile. The water repelled by the roof during a rainstorm is potentially damaging to the building that the roof protects. If it runs down the walls, it may seep into the mortar or through panels. If it lies around the foundations it may cause seepage to the interior, rising damp or dry rot. For this reason most buildings have a system in place to protect the walls of a building from most of the roof water. Overhanging eaves are commonly employed for this purpose. Most modern roofs and many old ones have systems of valleys, gutters, waterspouts, waterheads and drainpipes to remove the water from the vicinity of the building. In many parts of the world, roofwater is collected and stored for domestic use. 
Areas prone to heavy snow benefit from a metal roof because their smooth surfaces shed the weight of snow more easily and resist the force of wind better than a wood shingle or a concrete tile roof. Solar roofs Newer systems include solar shingles which generate electricity as well as cover the roof. There are also solar systems available that generate hot water or hot air and which can also act as a roof covering. More complex systems may carry out all of these functions: generate electricity, recover thermal energy, and also act as a roof covering. Solar systems can be integrated with roofs by: integration in the covering of pitched roofs, e.g. solar shingles, mounting on an existing roof, e.g. solar panel on a tile roof, integration in a flat roof membrane using heat welding (e.g. PVC) or mounting on a flat roof with a construction and additional weight to prevent uplift from wind. Gallery of roof shapes Gallery of significant roofs See also Blue roof Building-integrated photovoltaics Domestic roof construction List of Greco-Roman roofs List of roof shapes Roof cleaning Rubber shingle roof Solar shingle Tensile architecture Thin-shell structure References Structural engineering Structural system
Roof
[ "Technology", "Engineering" ]
3,713
[ "Structural engineering", "Building engineering", "Structural system", "Construction", "Civil engineering", "Roofs" ]
26,262
https://en.wikipedia.org/wiki/Redshift
In physics, a redshift is an increase in the wavelength, and corresponding decrease in the frequency and photon energy, of electromagnetic radiation (such as light). The opposite change, a decrease in wavelength and increase in frequency and energy, is known as a blueshift, or negative redshift. The terms derive from the colours red and blue which form the extremes of the visible light spectrum. The main causes of electromagnetic redshift in astronomy and cosmology are the relative motions of radiation sources, which give rise to the relativistic Doppler effect, and gravitational potentials, which gravitationally redshift escaping radiation. All sufficiently distant light sources show cosmological redshift corresponding to recession speeds proportional to their distances from Earth, a fact known as Hubble's law that implies the universe is expanding. All redshifts can be understood under the umbrella of frame transformation laws. Gravitational waves, which also travel at the speed of light, are subject to the same redshift phenomena. The value of a redshift is often denoted by the letter , corresponding to the fractional change in wavelength (positive for redshifts, negative for blueshifts), and by the wavelength ratio (which is greater than 1 for redshifts and less than 1 for blueshifts). Examples of strong redshifting are a gamma ray perceived as an X-ray, or initially visible light perceived as radio waves. Subtler redshifts are seen in the spectroscopic observations of astronomical objects, and are used in terrestrial technologies such as Doppler radar and radar guns. Other physical processes exist that can lead to a shift in the frequency of electromagnetic radiation, including scattering and optical effects; however, the resulting changes are distinguishable from (astronomical) redshift and are not generally referred to as such (see section on physical optics and radiative transfer). History The history of the subject began in the 19th century, with the development of classical wave mechanics and the exploration of phenomena which are associated with the Doppler effect. The effect is named after the Austrian mathematician, Christian Doppler, who offered the first known physical explanation for the phenomenon in 1842. In 1845, the hypothesis was tested and confirmed for sound waves by the Dutch scientist Christophorus Buys Ballot. Doppler correctly predicted that the phenomenon would apply to all waves and, in particular, suggested that the varying colors of stars could be attributed to their motion with respect to the Earth. Before this was verified, it was found that stellar colors were primarily due to a star's temperature, not motion. Only later was Doppler vindicated by verified redshift observations. The Doppler redshift was first described by French physicist Hippolyte Fizeau in 1848, who noted the shift in spectral lines seen in stars as being due to the Doppler effect. The effect is sometimes called the "Doppler–Fizeau effect". In 1868, British astronomer William Huggins was the first to determine the velocity of a star moving away from the Earth by the method. In 1871, optical redshift was confirmed when the phenomenon was observed in Fraunhofer lines, using solar rotation, about 0.1 Å in the red. In 1887, Vogel and Scheiner discovered the "annual Doppler effect", the yearly change in the Doppler shift of stars located near the ecliptic, due to the orbital velocity of the Earth. 
In 1901, Aristarkh Belopolsky verified optical redshift in the laboratory using a system of rotating mirrors. In the earlier part of the twentieth century, Slipher, Wirtz and others made the first measurements of the redshifts and blueshifts of galaxies beyond the Milky Way. They initially interpreted these redshifts and blueshifts as being due to random motions, but later Lemaître (1927) and Hubble (1929), using previous data, discovered a roughly linear correlation between the increasing redshifts of, and distances to, galaxies. Lemaître realized that these observations could be explained by a mechanism of producing redshifts seen in Friedmann's solutions to Einstein's equations of general relativity. The correlation between redshifts and distances arises in all expanding models. Beginning with observations in 1912, Vesto Slipher discovered that most spiral galaxies, then mostly thought to be spiral nebulae, had considerable redshifts. Slipher first reported on his measurement in the inaugural volume of the Lowell Observatory Bulletin. Three years later, he wrote a review in the journal Popular Astronomy. In it he stated that "the early discovery that the great Andromeda spiral had the quite exceptional velocity of –300 km(/s) showed the means then available, capable of investigating not only the spectra of the spirals but their velocities as well." Slipher reported the velocities for 15 spiral nebulae spread across the entire celestial sphere, all but three having observable "positive" (that is recessional) velocities. Subsequently, Edwin Hubble discovered an approximate relationship between the redshifts of such "nebulae", and the distances to them, with the formulation of his eponymous Hubble's law. Milton Humason worked on those observations with Hubble. These observations corroborated Alexander Friedmann's 1922 work, in which he derived the Friedmann–Lemaître equations. They are now considered to be strong evidence for an expanding universe and the Big Bang theory. Arthur Eddington used the term "red-shift" as early as 1923, although the word does not appear unhyphenated until about 1934, when Willem de Sitter used it. Measurement, characterization, and interpretation The spectrum of light that comes from a source (see idealized spectrum illustration top-right) can be measured. To determine the redshift, one searches for features in the spectrum such as absorption lines, emission lines, or other variations in light intensity. If found, these features can be compared with known features in the spectrum of various chemical compounds found in experiments where that compound is located on Earth. A very common atomic element in space is hydrogen. The spectrum of originally featureless light shone through hydrogen will show a signature spectrum specific to hydrogen that has features at regular intervals. If restricted to absorption lines it would look similar to the illustration (top right). If the same pattern of intervals is seen in an observed spectrum from a distant source but occurring at shifted wavelengths, it can be identified as hydrogen too. If the same spectral line is identified in both spectra—but at different wavelengths—then the redshift can be calculated using the table below. Determining the redshift of an object in this way requires a frequency or wavelength range. 
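The line-matching procedure described above reduces to comparing an identified feature's observed wavelength with its laboratory rest-frame wavelength. The short Python sketch below illustrates the arithmetic; the rest wavelength used is the well-known hydrogen-alpha line near 656.3 nm, while the observed wavelength is an assumed value invented purely for the example.

```python
def redshift_from_line(lambda_observed_nm, lambda_rest_nm):
    """Dimensionless redshift z = (lambda_obs - lambda_rest) / lambda_rest.

    Positive z means redshift, negative z means blueshift.
    """
    return (lambda_observed_nm - lambda_rest_nm) / lambda_rest_nm

# Rest-frame H-alpha line of hydrogen (about 656.3 nm) identified in a spectrum
# at an assumed, purely illustrative observed wavelength of 721.9 nm.
z = redshift_from_line(721.9, 656.3)
print(round(z, 4))   # ~0.1 -> the source is receding
```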
In order to calculate the redshift, one has to know the wavelength of the emitted light in the rest frame of the source: in other words, the wavelength that would be measured by an observer located adjacent to and comoving with the source. Since in astronomical applications this measurement cannot be done directly, because that would require traveling to the distant star of interest, the method using spectral lines described here is used instead. Redshifts cannot be calculated by looking at unidentified features whose rest-frame frequency is unknown, or with a spectrum that is featureless or white noise (random fluctuations in a spectrum). Redshift (and blueshift) may be characterized by the relative difference between the observed and emitted wavelengths (or frequency) of an object. In astronomy, it is customary to refer to this change using a dimensionless quantity called $z$. If $\lambda$ represents wavelength and $f$ represents frequency (note that $\lambda f = c$, where $c$ is the speed of light), then $z$ is defined by the equations: $z = \frac{\lambda_{\mathrm{obsv}} - \lambda_{\mathrm{emit}}}{\lambda_{\mathrm{emit}}} = \frac{f_{\mathrm{emit}} - f_{\mathrm{obsv}}}{f_{\mathrm{obsv}}}$. After $z$ is measured, the distinction between redshift and blueshift is simply a matter of whether $z$ is positive or negative. For example, Doppler effect blueshifts ($z < 0$) are associated with objects approaching (moving closer to) the observer with the light shifting to greater energies. Conversely, Doppler effect redshifts ($z > 0$) are associated with objects receding (moving away) from the observer with the light shifting to lower energies. Likewise, gravitational blueshifts are associated with light emitted from a source residing within a weaker gravitational field as observed from within a stronger gravitational field, while gravitational redshifting implies the opposite conditions. Physical origins A redshift can occur due to relative motion of the source and observer, due to the expansion of the cosmos after emission, or due to the effect of mass-energy density on the space between the emitter and observer. The following sections explain each origin. Doppler effect If a source of the light is moving away from an observer, then redshift ($z > 0$) occurs; if the source moves towards the observer, then blueshift ($z < 0$) occurs. This is true for all electromagnetic waves and is explained by the Doppler effect. Consequently, this type of redshift is called the Doppler redshift. If the source moves away from the observer with velocity $v$, which is much less than the speed of light ($v \ll c$), the redshift is given by $z \approx \frac{v}{c}$ (since $\gamma \approx 1$), where $c$ is the speed of light. In the classical Doppler effect, the frequency of the source is not modified, but the recessional motion causes the illusion of a lower frequency. A more complete treatment of the Doppler redshift requires considering relativistic effects associated with motion of sources close to the speed of light. A complete derivation of the effect can be found in the article on the relativistic Doppler effect. In brief, objects moving close to the speed of light will experience deviations from the above formula due to the time dilation of special relativity, which can be corrected for by introducing the Lorentz factor $\gamma$ into the classical Doppler formula as follows (for motion solely in the line of sight): $1 + z = \left(1 + \frac{v}{c}\right)\gamma$. This phenomenon was first observed in a 1938 experiment performed by Herbert E. Ives and G.R. Stilwell, called the Ives–Stilwell experiment. Since the Lorentz factor is dependent only on the magnitude of the velocity, this causes the redshift associated with the relativistic correction to be independent of the orientation of the source movement.
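These relations translate directly into a short calculation. The sketch below (Python; the velocities and wavelengths are assumed example values, and the function names are illustrative rather than from any library) computes $z$ from a wavelength pair and compares the low-velocity approximation with the line-of-sight relativistic formula.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def z_from_wavelength(lambda_obs, lambda_emit):
    """Fractional wavelength change: z > 0 is a redshift, z < 0 a blueshift."""
    return (lambda_obs - lambda_emit) / lambda_emit

def z_doppler_classical(v):
    """Low-velocity approximation z ~= v/c (v positive for recession)."""
    return v / C

def z_doppler_relativistic(v):
    """Line-of-sight relativistic Doppler: 1 + z = sqrt((1 + v/c) / (1 - v/c))."""
    beta = v / C
    return math.sqrt((1 + beta) / (1 - beta)) - 1

print(z_from_wavelength(689.1, 656.28))     # ~0.05 for the H-alpha example above
print(z_doppler_classical(15_000_000))      # ~0.05 for a recession speed of 15,000 km/s
print(z_doppler_relativistic(15_000_000))   # slightly larger, showing the relativistic correction
```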
In contrast, the classical part of the formula is dependent on the projection of the movement of the source into the line-of-sight, which yields different results for different orientations. If $\theta$ is the angle between the direction of relative motion and the direction of emission in the observer's frame (zero angle is directly away from the observer), the full form for the relativistic Doppler effect becomes: $1 + z = \frac{1 + \frac{v}{c}\cos\theta}{\sqrt{1 - \frac{v^2}{c^2}}}$, and for motion solely in the line of sight ($\theta = 0$), this equation reduces to: $1 + z = \sqrt{\frac{1 + \frac{v}{c}}{1 - \frac{v}{c}}}$. For the special case that the light is moving at right angle ($\theta = 90°$) to the direction of relative motion in the observer's frame, the relativistic redshift is known as the transverse redshift, and a redshift $1 + z = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}$ is measured, even though the object is not moving away from the observer. Even when the source is moving towards the observer, if there is a transverse component to the motion then there is some speed at which the dilation just cancels the expected blueshift, and at higher speed the approaching source will be redshifted. Cosmic expansion The observations of increasing redshifts from more and more distant galaxies can be modeled assuming a homogeneous and isotropic universe combined with general relativity. This cosmological redshift can be written as a function of $a(t)$, the time-dependent cosmic scale factor: $1 + z = \frac{a(t_{\mathrm{now}})}{a(t_{\mathrm{then}})}$. The scale factor is monotonically increasing as time passes. Thus $z$ is positive, close to zero for local stars, and increasing for distant galaxies that appear redshifted. Using a model of the expansion of the universe, redshift can be related to the age of an observed object, the so-called cosmic time–redshift relation. Denote a density ratio as $\Omega_0 = \frac{\rho}{\rho_{\mathrm{crit}}}$, with the critical density $\rho_{\mathrm{crit}}$ demarcating a universe that eventually crunches from one that simply expands. This density is about three hydrogen atoms per cubic meter of space. At large redshifts, $1 + z > \Omega_0^{-1}$, one finds: $t(z) \approx \frac{2}{3 H_0 \Omega_0^{1/2} (1 + z)^{3/2}}$, where $H_0$ is the present-day Hubble constant, and $z$ is the redshift. The cosmological redshift is commonly attributed to stretching of the wavelengths of photons due to the stretching of space. This interpretation can be misleading. As required by general relativity, the cosmological expansion of space has no effect on local physics. There is no term related to expansion in Maxwell's equations that govern light propagation. The cosmological redshift can instead be interpreted as an accumulation of infinitesimal Doppler shifts along the trajectory of the light. There are several websites for calculating various times and distances from redshift, as the precise calculations require numerical integrals for most values of the parameters. Distinguishing between cosmological and local effects For cosmological redshifts of $z < 0.01$, additional Doppler redshifts and blueshifts due to the peculiar motions of the galaxies relative to one another cause a wide scatter from the standard Hubble law. The resulting situation can be illustrated by the Expanding Rubber Sheet Universe, a common cosmological analogy used to describe the expansion of the universe. If two objects are represented by ball bearings and spacetime by a stretching rubber sheet, the Doppler effect is caused by rolling the balls across the sheet to create peculiar motion. The cosmological redshift occurs when the ball bearings are stuck to the sheet and the sheet is stretched. The redshifts of galaxies include both a component related to recessional velocity from expansion of the universe, and a component related to peculiar motion (Doppler shift).
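To show how the two components just mentioned are computed, the sketch below (Python; the velocity, angle and scale-factor values are arbitrary assumed numbers) evaluates the angle-dependent relativistic Doppler shift, including the purely transverse case, and the cosmological redshift implied by a given scale-factor ratio.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def z_doppler_angle(v, theta):
    """Full relativistic Doppler: 1 + z = (1 + (v/c) cos(theta)) * gamma,
    with theta = 0 meaning motion directly away from the observer."""
    beta = v / C
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return gamma * (1.0 + beta * math.cos(theta)) - 1.0

def z_cosmological(a_then, a_now=1.0):
    """Cosmological redshift from the scale factor: 1 + z = a(now) / a(then)."""
    return a_now / a_then - 1.0

print(z_doppler_angle(0.1 * C, 0.0))          # purely radial recession at 0.1 c
print(z_doppler_angle(0.1 * C, math.pi / 2))  # transverse case: redshift from time dilation alone
print(z_cosmological(0.5))                    # light emitted when the universe was half its present size: z = 1
```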
The redshift due to expansion of the universe depends upon the recessional velocity in a fashion determined by the cosmological model chosen to describe the expansion of the universe, which is very different from how Doppler redshift depends upon local velocity. Describing the cosmological expansion origin of redshift, cosmologist Edward Robert Harrison said, "Light leaves a galaxy, which is stationary in its local region of space, and is eventually received by observers who are stationary in their own local region of space. Between the galaxy and the observer, light travels through vast regions of expanding space. As a result, all wavelengths of the light are stretched by the expansion of space. It is as simple as that..." Steven Weinberg clarified, "The increase of wavelength from emission to absorption of light does not depend on the rate of change of [the scale factor] at the times of emission or absorption, but on the increase of [the scale factor] in the whole period from emission to absorption." If the universe were contracting instead of expanding, we would see distant galaxies blueshifted by an amount proportional to their distance instead of redshifted. Gravitational redshift In the theory of general relativity, there is time dilation within a gravitational well. This is known as the gravitational redshift or Einstein shift. The theoretical derivation of this effect follows from the Schwarzschild solution of the Einstein equations, which yields the following formula for redshift associated with a photon traveling in the gravitational field of an uncharged, nonrotating, spherically symmetric mass: $1 + z = \frac{1}{\sqrt{1 - \frac{2GM}{rc^2}}}$, where $G$ is the gravitational constant, $M$ is the mass of the object creating the gravitational field, $r$ is the radial coordinate of the source (which is analogous to the classical distance from the center of the object, but is actually a Schwarzschild coordinate), and $c$ is the speed of light. This gravitational redshift result can be derived from the assumptions of special relativity and the equivalence principle; the full theory of general relativity is not required. The effect is very small but measurable on Earth using the Mössbauer effect and was first observed in the Pound–Rebka experiment. However, it is significant near a black hole, and as an object approaches the event horizon the redshift becomes infinite. It is also the dominant cause of large angular-scale temperature fluctuations in the cosmic microwave background radiation (see Sachs–Wolfe effect). Summary table Several important special-case formulae for redshift in certain special spacetime geometries are summarized in the following table. In all cases the magnitude of the shift (the value of $z$) is independent of the wavelength. Observations in astronomy The redshift observed in astronomy can be measured because the emission and absorption spectra for atoms are distinctive and well known, calibrated from spectroscopic experiments in laboratories on Earth. When the redshift of various absorption and emission lines from a single astronomical object is measured, $z$ is found to be remarkably constant. Although distant objects may be slightly blurred and lines broadened, it is by no more than can be explained by thermal or mechanical motion of the source. For these reasons and others, the consensus among astronomers is that the redshifts they observe are due to some combination of the three established forms of Doppler-like redshifts. Alternative hypotheses and explanations for redshift such as tired light are not generally considered plausible.
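Returning to the Schwarzschild formula above, it can be evaluated directly. Here is a minimal sketch in Python; the solar mass and radius are approximate reference values, and the result is the familiar few-parts-per-million gravitational redshift of sunlight as seen by a distant observer.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0    # speed of light, m/s

def z_gravitational(mass_kg, r_m):
    """Redshift of light escaping from Schwarzschild radial coordinate r
    to an observer far away: 1 + z = 1 / sqrt(1 - 2GM / (r c^2))."""
    return 1.0 / math.sqrt(1.0 - 2.0 * G * mass_kg / (r_m * C * C)) - 1.0

# Approximate solar values:
M_SUN, R_SUN = 1.989e30, 6.957e8
print(z_gravitational(M_SUN, R_SUN))   # ~2e-6, the classic solar gravitational redshift
```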
Spectroscopy, as a measurement, is considerably more difficult than simple photometry, which measures the brightness of astronomical objects through certain filters. When photometric data is all that is available (for example, the Hubble Deep Field and the Hubble Ultra Deep Field), astronomers rely on a technique for measuring photometric redshifts. Due to the broad wavelength ranges in photometric filters and the necessary assumptions about the nature of the spectrum at the light-source, errors for these sorts of measurements can range up to , and are much less reliable than spectroscopic determinations. However, photometry does at least allow a qualitative characterization of a redshift. For example, if a Sun-like spectrum had a redshift of , it would be brightest in the infrared(1000nm) rather than at the blue-green(500nm) color associated with the peak of its blackbody spectrum, and the light intensity will be reduced in the filter by a factor of four, . Both the photon count rate and the photon energy are redshifted. (See K correction for more details on the photometric consequences of redshift.) Local observations In nearby objects (within our Milky Way galaxy) observed redshifts are almost always related to the line-of-sight velocities associated with the objects being observed. Observations of such redshifts and blueshifts have enabled astronomers to measure velocities and parametrize the masses of the orbiting stars in spectroscopic binaries, a method first employed in 1868 by British astronomer William Huggins. Similarly, small redshifts and blueshifts detected in the spectroscopic measurements of individual stars are one way astronomers have been able to diagnose and measure the presence and characteristics of planetary systems around other stars and have even made very detailed differential measurements of redshifts during planetary transits to determine precise orbital parameters. Finely detailed measurements of redshifts are used in helioseismology to determine the precise movements of the photosphere of the Sun. Redshifts have also been used to make the first measurements of the rotation rates of planets, velocities of interstellar clouds, the rotation of galaxies, and the dynamics of accretion onto neutron stars and black holes which exhibit both Doppler and gravitational redshifts. The temperatures of various emitting and absorbing objects can be obtained by measuring Doppler broadening—effectively redshifts and blueshifts over a single emission or absorption line. By measuring the broadening and shifts of the 21-centimeter hydrogen line in different directions, astronomers have been able to measure the recessional velocities of interstellar gas, which in turn reveals the rotation curve of our Milky Way. Similar measurements have been performed on other galaxies, such as Andromeda. As a diagnostic tool, redshift measurements are one of the most important spectroscopic measurements made in astronomy. Extragalactic observations The most distant objects exhibit larger redshifts corresponding to the Hubble flow of the universe. The largest-observed redshift, corresponding to the greatest distance and furthest back in time, is that of the cosmic microwave background radiation; the numerical value of its redshift is about ( corresponds to present time), and it shows the state of the universe about 13.8 billion years ago, and 379,000 years after the initial moments of the Big Bang. 
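As a quick numerical check on the cosmic microwave background figures just quoted, blackbody radiation redshifted by a factor (1 + z) remains a blackbody with its temperature lowered by the same factor. The sketch below assumes an emission temperature of roughly 3000 K at recombination and a CMB redshift of about 1089; both are standard approximate values supplied here as assumptions, not taken from the text.

```python
def redshifted_temperature(t_emit_kelvin, z):
    """A blackbody spectrum redshifted by (1 + z) is again a blackbody,
    with its temperature reduced by the same factor."""
    return t_emit_kelvin / (1.0 + z)

# Assumed approximate values: recombination temperature ~3000 K, CMB redshift ~1089.
print(redshifted_temperature(3000.0, 1089.0))   # ~2.75 K, close to the measured CMB temperature of ~2.73 K
```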
The luminous point-like cores of quasars were the first "high-redshift" () objects discovered before the improvement of telescopes allowed for the discovery of other high-redshift galaxies. For galaxies more distant than the Local Group and the nearby Virgo Cluster, but within a thousand megaparsecs or so, the redshift is approximately proportional to the galaxy's distance. This correlation was first observed by Edwin Hubble and has come to be known as Hubble's law. Vesto Slipher was the first to discover galactic redshifts, in about 1912, while Hubble correlated Slipher's measurements with distances he measured by other means to formulate his Law. Hubble's law follows in part from the Copernican principle. Because it is usually not known how luminous objects are, measuring the redshift is easier than more direct distance measurements, so redshift is sometimes in practice converted to a crude distance measurement using Hubble's law. Gravitational interactions of galaxies with each other and clusters cause a significant scatter in the normal plot of the Hubble diagram. The peculiar velocities associated with galaxies superimpose a rough trace of the mass of virialized objects in the universe. This effect leads to such phenomena as nearby galaxies (such as the Andromeda Galaxy) exhibiting blueshifts as we fall towards a common barycenter, and redshift maps of clusters showing a fingers of god effect due to the scatter of peculiar velocities in a roughly spherical distribution. This added component gives cosmologists a chance to measure the masses of objects independent of the mass-to-light ratio (the ratio of a galaxy's mass in solar masses to its brightness in solar luminosities), an important tool for measuring dark matter. The Hubble law's linear relationship between distance and redshift assumes that the rate of expansion of the universe is constant. However, when the universe was much younger, the expansion rate, and thus the Hubble "constant", was larger than it is today. For more distant galaxies, then, whose light has been travelling to us for much longer times, the approximation of constant expansion rate fails, and the Hubble law becomes a non-linear integral relationship and dependent on the history of the expansion rate since the emission of the light from the galaxy in question. Observations of the redshift-distance relationship can be used, then, to determine the expansion history of the universe and thus the matter and energy content. While it was long believed that the expansion rate has been continuously decreasing since the Big Bang, observations beginning in 1988 of the redshift-distance relationship using Type Ia supernovae have suggested that in comparatively recent times the expansion rate of the universe has begun to accelerate. Highest redshifts Currently, the objects with the highest known redshifts are galaxies and the objects producing gamma ray bursts. The most reliable redshifts are from spectroscopic data, and the highest-confirmed spectroscopic redshift of a galaxy is that of JADES-GS-z14-0 with a redshift of , corresponding to 290 million years after the Big Bang. The previous record was held by GN-z11, with a redshift of , corresponding to 400 million years after the Big Bang, and by UDFy-38135539 at a redshift of , corresponding to 600 million years after the Big Bang. Slightly less reliable are Lyman-break redshifts, the highest of which is the lensed galaxy A1689-zD1 at a redshift and the next highest being . 
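The crude redshift-to-distance conversion via Hubble's law mentioned above can be sketched as follows (Python; the value of the Hubble constant is an assumption, and the linear form is trustworthy only at low redshift, which is exactly why the non-linear treatment described at the end of the paragraph is needed for distant galaxies).

```python
C_KM_S = 299_792.458   # speed of light, km/s
H0 = 70.0              # assumed Hubble constant, km/s/Mpc; the exact value is still debated

def hubble_distance_mpc(z):
    """Crude low-redshift distance estimate from Hubble's law, d ~= c z / H0.
    Valid only for z << 1, where the expansion rate can be treated as constant."""
    return C_KM_S * z / H0

print(hubble_distance_mpc(0.01))   # ~43 Mpc
print(hubble_distance_mpc(0.1))    # ~430 Mpc, already straining the approximation
```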
The most distant-observed gamma-ray burst with a spectroscopic redshift measurement was GRB 090423, which had a redshift of . The most distant-known quasar, ULAS J1342+0928, is at . The highest-known redshift radio galaxy (TGSS1530) is at a redshift and the highest-known redshift molecular material is the detection of emission from the CO molecule from the quasar SDSS J1148+5251 at . Extremely red objects (EROs) are astronomical sources of radiation that radiate energy in the red and near infrared part of the electromagnetic spectrum. These may be starburst galaxies that have a high redshift accompanied by reddening from intervening dust, or they could be highly redshifted elliptical galaxies with an older (and therefore redder) stellar population. Objects that are even redder than EROs are termed hyper extremely red objects (HEROs). The cosmic microwave background has a redshift of , corresponding to an age of approximately 379,000 years after the Big Bang and a proper distance of more than 46 billion light-years. The yet-to-be-observed first light from the oldest Population III stars, not long after atoms first formed and the CMB ceased to be absorbed almost completely, may have redshifts in the range of . Other high-redshift events predicted by physics but not presently observable are the cosmic neutrino background from about two seconds after the Big Bang (and a redshift in excess of ) and the cosmic gravitational wave background emitted directly from inflation at a redshift in excess of . In June 2015, astronomers reported evidence for Population III stars in the Cosmos Redshift 7 galaxy at . Such stars are likely to have existed in the very early universe (i.e., at high redshift), and may have started the production of chemical elements heavier than hydrogen that are needed for the later formation of planets and life as we know it. Redshift surveys With the advent of automated telescopes and improvements in spectroscopes, a number of collaborations have been undertaken to map the universe in redshift space. By combining redshift with angular position data, a redshift survey maps the 3D distribution of matter within a field of the sky. These observations are used to measure properties of the large-scale structure of the universe. The Great Wall, a vast supercluster of galaxies over 500 million light-years wide, provides a dramatic example of a large-scale structure that redshift surveys can detect. The first redshift survey was the CfA Redshift Survey, started in 1977 with the initial data collection completed in 1982. More recently, the 2dF Galaxy Redshift Survey determined the large-scale structure of one section of the universe, measuring redshifts for over 220,000 galaxies; data collection was completed in 2002, and the final data set was released 30 June 2003. The Sloan Digital Sky Survey (SDSS) is ongoing as of 2013 and aims to measure the redshifts of around 3 million objects. SDSS has recorded redshifts for galaxies as high as 0.8, and has been involved in the detection of quasars beyond . The DEEP2 Redshift Survey uses the Keck telescopes with the new "DEIMOS" spectrograph; a follow-up to the pilot program DEEP1, DEEP2 is designed to measure faint galaxies with redshifts 0.7 and above, and it is therefore planned to provide a high-redshift complement to SDSS and 2dF.
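As a toy illustration of what combining redshift with angular position means in practice, the sketch below (Python; the Hubble constant and the linear Hubble-law distance are simplifying assumptions that only hold at low redshift) converts a galaxy's sky coordinates and redshift into an approximate 3D position, which is the basic operation behind a redshift-survey map.

```python
import math

C_KM_S = 299_792.458
H0 = 70.0  # assumed Hubble constant, km/s/Mpc

def survey_position_mpc(ra_deg, dec_deg, z):
    """Toy redshift-survey coordinate: place a galaxy in 3D using its angular
    position and a crude Hubble-law distance d = c z / H0 (adequate only at low z)."""
    d = C_KM_S * z / H0
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    return (d * math.cos(dec) * math.cos(ra),
            d * math.cos(dec) * math.sin(ra),
            d * math.sin(dec))

print(survey_position_mpc(150.0, 2.0, 0.05))  # one galaxy's Cartesian position in megaparsecs
```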
Effects from physical optics or radiative transfer The interactions and phenomena summarized in the subjects of radiative transfer and physical optics can result in shifts in the wavelength and frequency of electromagnetic radiation. In such cases, the shifts correspond to a physical energy transfer to matter or other photons rather than arising from a transformation between reference frames. Such shifts can result from physical phenomena such as coherence effects or the scattering of electromagnetic radiation, whether from charged elementary particles, from particulates, or from fluctuations of the index of refraction in a dielectric medium, as occurs in the phenomenon of radio whistlers. While such phenomena are sometimes referred to as "redshifts" and "blueshifts", in astrophysics light-matter interactions that result in energy shifts in the radiation field are generally referred to as "reddening" rather than "redshifting", which, as a term, is normally reserved for the effects discussed above. In many circumstances scattering causes radiation to redden because entropy results in the predominance of many low-energy photons over few high-energy ones (while conserving total energy). Except possibly under carefully controlled conditions, scattering does not produce the same relative change in wavelength across the whole spectrum; that is, any calculated $z$ is generally a function of wavelength. Furthermore, scattering from random media generally occurs at many angles, and $z$ is a function of the scattering angle. If multiple scattering occurs, or the scattering particles have relative motion, then there is generally distortion of spectral lines as well. In interstellar astronomy, visible spectra can appear redder due to scattering processes in a phenomenon referred to as interstellar reddening; similarly, Rayleigh scattering causes the atmospheric reddening of the Sun seen in the sunrise or sunset and causes the rest of the sky to have a blue color. This phenomenon is distinct from redshifting because the spectroscopic lines are not shifted to other wavelengths in reddened objects and there is an additional dimming and distortion associated with the phenomenon due to photons being scattered in and out of the line of sight. Blueshift The opposite of a redshift is a blueshift. A blueshift is any decrease in wavelength (increase in energy), with a corresponding increase in frequency, of an electromagnetic wave. In visible light, this shifts a color towards the blue end of the spectrum. Doppler blueshift Doppler blueshift is caused by movement of a source towards the observer. The term applies to any decrease in wavelength and increase in frequency caused by relative motion, even outside the visible spectrum. Only objects moving at near-relativistic speeds toward the observer are noticeably bluer to the naked eye, but the wavelength of any reflected or emitted photon or other particle is shortened in the direction of travel. Doppler blueshift is used in astronomy to determine relative motion: the Andromeda Galaxy is moving toward our own Milky Way galaxy within the Local Group, so, when observed from Earth, its light is undergoing a blueshift; components of a binary star system will be blueshifted when moving towards Earth; and, when observing spiral galaxies, the side spinning toward us will have a slight blueshift relative to the side spinning away from us (see Tully–Fisher relation).
Blazars are known to propel relativistic jets toward us, emitting synchrotron radiation and bremsstrahlung that appears blueshifted. Nearby stars such as Barnard's Star are moving toward us, resulting in a very small blueshift. Doppler blueshift of distant objects with a high z can be subtracted from the much larger cosmological redshift to determine relative motion in the expanding universe. Gravitational blueshift Unlike the relative Doppler blueshift, caused by movement of a source towards the observer and thus dependent on the received angle of the photon, gravitational blueshift is absolute and does not depend on the received angle of the photon: It is a natural consequence of conservation of energy and mass–energy equivalence, and was confirmed experimentally in 1959 with the Pound–Rebka experiment. Gravitational blueshift contributes to cosmic microwave background (CMB) anisotropy via the Sachs–Wolfe effect: when a gravitational well evolves while a photon is passing, the amount of blueshift on approach will differ from the amount of gravitational redshift as it leaves the region. Blue outliers There are faraway active galaxies that show a blueshift in their [O III] emission lines. One of the largest blueshifts is found in the narrow-line quasar, PG 1543+489, which has a relative velocity of -1150 km/s. These types of galaxies are called "blue outliers". Cosmological blueshift In a hypothetical universe undergoing a runaway Big Crunch contraction, a cosmological blueshift would be observed, with galaxies further away being increasingly blueshifted—the exact opposite of the actually observed cosmological redshift in the present expanding universe. See also Gravitational potential Relativistic Doppler effect References Sources Articles Odenwald, S. & Fienberg, RT. 1993; "Galaxy Redshifts Reconsidered" in Sky & Telescope Feb. 2003; pp31–35 (This article is useful further reading in distinguishing between the 3 types of redshift and their causes.) Lineweaver, Charles H. and Tamara M. Davis, "Misconceptions about the Big Bang", Scientific American, March 2005. (This article is useful for explaining the cosmological redshift mechanism as well as clearing up misconceptions regarding the physics of the expansion of space.) Books See also physical cosmology textbooks for applications of the cosmological and gravitational redshifts. External links Ned Wright's Cosmology tutorial Cosmic reference guide entry on redshift Mike Luciuk's Astronomical Redshift tutorial Animated GIF of Cosmological Redshift by Wayne Hu Astronomical spectroscopy Doppler effects Effects of gravity Physical cosmology Physical quantities Concepts in astronomy
Redshift
[ "Physics", "Chemistry", "Astronomy", "Mathematics" ]
6,992
[ "Physical phenomena", "Astronomical sub-disciplines", "Physical quantities", "Spectrum (physical sciences)", "Concepts in astronomy", "Quantity", "Theoretical physics", "Astrophysics", "Astronomical spectroscopy", "Doppler effects", "Spectroscopy", "Physical properties", "Physical cosmology"...
26,301
https://en.wikipedia.org/wiki/Rocket
A rocket (from , and so named for its shape) is a vehicle that uses jet propulsion to accelerate without using any surrounding air. A rocket engine produces thrust by reaction to exhaust expelled at high speed. Rocket engines work entirely from propellant carried within the vehicle; therefore a rocket can fly in the vacuum of space. Rockets work more efficiently in a vacuum and incur a loss of thrust due to the opposing pressure of the atmosphere. Multistage rockets are capable of attaining escape velocity from Earth and therefore can achieve unlimited maximum altitude. Compared with airbreathing engines, rockets are lightweight and powerful and capable of generating large accelerations. To control their flight, rockets rely on momentum, airfoils, auxiliary reaction engines, gimballed thrust, momentum wheels, deflection of the exhaust stream, propellant flow, spin, or gravity. Rockets for military and recreational uses date back to at least 13th-century China. Significant scientific, interplanetary and industrial use did not occur until the 20th century, when rocketry was the enabling technology for the Space Age, including setting foot on the Moon. Rockets are now used for fireworks, missiles and other weaponry, ejection seats, launch vehicles for artificial satellites, human spaceflight, and space exploration. Chemical rockets are the most common type of high power rocket, typically creating a high speed exhaust by the combustion of fuel with an oxidizer. The stored propellant can be a simple pressurized gas or a single liquid fuel that disassociates in the presence of a catalyst (monopropellant), two liquids that spontaneously react on contact (hypergolic propellants), two liquids that must be ignited to react (like kerosene (RP1) and liquid oxygen, used in most liquid-propellant rockets), a solid combination of fuel with oxidizer (solid fuel), or solid fuel with liquid or gaseous oxidizer (hybrid propellant system). Chemical rockets store a large amount of energy in an easily released form, and can be very dangerous. However, careful design, testing, construction and use minimizes risks. History In China, gunpowder-powered rockets evolved in medieval China under the Song dynasty by the 13th century. They also developed an early form of multiple rocket launcher during this time. The Mongols adopted Chinese rocket technology and the invention spread via the Mongol invasions to the Middle East and to Europe in the mid-13th century. According to Joseph Needham, the Song navy used rockets in a military exercise dated to 1245. Internal-combustion rocket propulsion is mentioned in a reference to 1264, recording that the "ground-rat", a type of firework, had frightened the Empress-Mother Gongsheng at a feast held in her honor by her son the Emperor Lizong. Subsequently, rockets are included in the military treatise Huolongjing, also known as the Fire Drake Manual, written by the Chinese artillery officer Jiao Yu in the mid-14th century. This text mentions the first known multistage rocket, the 'fire-dragon issuing from the water' (Huo long chu shui), thought to have been used by the Chinese navy. Medieval and early modern rockets were used militarily as incendiary weapons in sieges. Between 1270 and 1280, Hasan al-Rammah wrote al-furusiyyah wa al-manasib al-harbiyya (The Book of Military Horsemanship and Ingenious War Devices), which included 107 gunpowder recipes, 22 of them for rockets. 
In Europe, Roger Bacon mentioned firecrackers made in various parts of the world in the Opus Majus of 1267. Between 1280 and 1300, the Liber Ignium gave instructions for constructing devices that are similar to firecrackers based on second hand accounts. Konrad Kyeser described rockets in his military treatise Bellifortis around 1405. Giovanni Fontana, a Paduan engineer in 1420, created rocket-propelled animal figures. The name "rocket" comes from the Italian rocchetta, meaning "bobbin" or "little spindle", given due to the similarity in shape to the bobbin or spool used to hold the thread from a spinning wheel. Leonhard Fronsperger and Conrad Haas adopted the Italian term into German in the mid-16th century; "rocket" appears in English by the early 17th century. Artis Magnae Artilleriae pars prima, an important early modern work on rocket artillery, by Casimir Siemienowicz, was first printed in Amsterdam in 1650. The Mysorean rockets were the first successful iron-cased rockets, developed in the late 18th century in the Kingdom of Mysore (part of present-day India) under the rule of Hyder Ali. The Congreve rocket was a British weapon designed and developed by Sir William Congreve in 1804. This rocket was based directly on the Mysorean rockets, used compressed powder and was fielded in the Napoleonic Wars. It was Congreve rockets to which Francis Scott Key was referring, when he wrote of the "rockets' red glare" while held captive on a British ship that was laying siege to Fort McHenry in 1814. Together, the Mysorean and British innovations increased the effective range of military rockets from . The first mathematical treatment of the dynamics of rocket propulsion is due to William Moore (1813). In 1814, Congreve published a book in which he discussed the use of multiple rocket launching apparatus. In 1815 Alexander Dmitrievich Zasyadko constructed rocket-launching platforms, which allowed rockets to be fired in salvos (6 rockets at a time), and gun-laying devices. William Hale in 1844 greatly increased the accuracy of rocket artillery. Edward Mounier Boxer further improved the Congreve rocket in 1865. William Leitch first proposed the concept of using rockets to enable human spaceflight in 1861. Leitch's rocket spaceflight description was first provided in his 1861 essay "A Journey Through Space", which was later published in his book God's Glory in the Heavens (1862). Konstantin Tsiolkovsky later (in 1903) also conceived this idea, and extensively developed a body of theory that has provided the foundation for subsequent spaceflight development. The British Royal Flying Corps designed a guided rocket during World War I. Archibald Low stated "...in 1917 the Experimental Works designed an electrically steered rocket… Rocket experiments were conducted under my own patents with the help of Cdr. Brock." The patent "Improvements in Rockets" was raised in July 1918 but not published until February 1923 for security reasons. Firing and guidance controls could be either wire or wireless. The propulsion and guidance rocket eflux emerged from the deflecting cowl at the nose. In 1920, Professor Robert Goddard of Clark University published proposed improvements to rocket technology in A Method of Reaching Extreme Altitudes. In 1923, Hermann Oberth (1894–1989) published Die Rakete zu den Planetenräumen (The Rocket into Planetary Space). Modern rockets originated in 1926 when Goddard attached a supersonic (de Laval) nozzle to a high pressure combustion chamber. 
These nozzles turn the hot gas from the combustion chamber into a cooler, hypersonic, highly directed jet of gas, more than doubling the thrust and raising the engine efficiency from 2% to 64%. His use of liquid propellants instead of gunpowder greatly lowered the weight and increased the effectiveness of rockets. In 1921, the Soviet research and development laboratory Gas Dynamics Laboratory began developing solid-propellant rockets, which resulted in the first launch in 1928, which flew for approximately 1,300 metres. These rockets were used in 1931 for the world's first successful use of rockets for jet-assisted takeoff of aircraft and became the prototypes for the Katyusha rocket launcher, which were used during World War II. In 1929, Fritz Lang's German science fiction film Woman in the Moon was released. It showcased the use of a multi-stage rocket, and also pioneered the concept of a rocket launch pad (a rocket standing upright against a tall building before launch having been slowly rolled into place) and the rocket-launch countdown clock. The Guardian film critic Stephen Armstrong states Lang "created the rocket industry". Lang was inspired by the 1923 book The Rocket into Interplanetary Space by Hermann Oberth, who became the film's scientific adviser and later an important figure in the team that developed the V-2 rocket. The film was thought to be so realistic that it was banned by the Nazis when they came to power for fear it would reveal secrets about the V-2 rockets. In 1943 production of the V-2 rocket began in Germany. It was designed by the Peenemünde Army Research Center with Wernher von Braun serving as the technical director. The V-2 became the first artificial object to travel into space by crossing the Kármán line with the vertical launch of MW 18014 on 20 June 1944. Doug Millard, space historian and curator of space technology at the Science Museum, London, where a V-2 is exhibited in the main exhibition hall, states: "The V-2 was a quantum leap of technological change. We got to the Moon using V-2 technology but this was technology that was developed with massive resources, including some particularly grim ones. The V-2 programme was hugely expensive in terms of lives, with the Nazis using slave labour to manufacture these rockets". In parallel with the German guided-missile programme, rockets were also used on aircraft, either for assisting horizontal take-off (RATO), vertical take-off (Bachem Ba 349 "Natter") or for powering them (Me 163, see list of World War II guided missiles of Germany). The Allies' rocket programs were less technological, relying mostly on unguided missiles like the Soviet Katyusha rocket in the artillery role, and the American anti tank bazooka projectile. These used solid chemical propellants. The Americans captured a large number of German rocket scientists, including Wernher von Braun, in 1945, and brought them to the United States as part of Operation Paperclip. After World War II scientists used rockets to study high-altitude conditions, by radio telemetry of temperature and pressure of the atmosphere, detection of cosmic rays, and further techniques; note too the Bell X-1, the first crewed vehicle to break the sound barrier (1947). Independently, in the Soviet Union's space program research continued under the leadership of the chief designer Sergei Korolev (1907–1966). During the Cold War rockets became extremely important militarily with the development of modern intercontinental ballistic missiles (ICBMs). 
The 1960s saw rapid development of rocket technology, particularly in the Soviet Union (Vostok, Soyuz, Proton) and in the United States (e.g. the X-15). Rockets came into use for space exploration. American crewed programs (Project Mercury, Project Gemini and later the Apollo programme) culminated in 1969 with the first crewed landing on the Moon – using equipment launched by the Saturn V rocket. Types Vehicle configurations Rocket vehicles are often constructed in the archetypal tall thin "rocket" shape that takes off vertically, but there are actually many different types of rockets including: tiny models such as balloon rockets, water rockets, skyrockets or small solid rockets that can be purchased at a hobby store missiles space rockets such as the enormous Saturn V used for the Apollo program rocket cars rocket bike rocket-powered aircraft (including rocket-assisted takeoff of conventional aircraft – RATO) rocket sleds rocket trains rocket torpedoes rocket-powered jet packs rapid escape systems such as ejection seats and launch escape systems space probes Design A rocket design can be as simple as a cardboard tube filled with black powder, but to make an efficient, accurate rocket or missile involves overcoming a number of difficult problems. The main difficulties include cooling the combustion chamber, pumping the fuel (in the case of a liquid fuel), and controlling and correcting the direction of motion. Components Rockets consist of a propellant, a place to put propellant (such as a propellant tank), and a nozzle. They may also have one or more rocket engines, directional stabilization device(s) (such as fins, vernier engines or engine gimbals for thrust vectoring, gyroscopes) and a structure (typically monocoque) to hold these components together. Rockets intended for high speed atmospheric use also have an aerodynamic fairing such as a nose cone, which usually holds the payload. As well as these components, rockets can have any number of other components, such as wings (rocketplanes), parachutes, wheels (rocket cars), even, in a sense, a person (rocket belt). Vehicles frequently possess navigation systems and guidance systems that typically use satellite navigation and inertial navigation systems. Engines Rocket engines employ the principle of jet propulsion. The rocket engines powering rockets come in a great variety of different types; a comprehensive list can be found in the main article, Rocket engine. Most current rockets are chemically powered rockets (usually internal combustion engines, but some employ a decomposing monopropellant) that emit a hot exhaust gas. A rocket engine can use gas propellants, solid propellant, liquid propellant, or a hybrid mixture of both solid and liquid. Some rockets use heat or pressure that is supplied from a source other than the chemical reaction of propellant(s), such as steam rockets, solar thermal rockets, nuclear thermal rocket engines or simple pressurized rockets such as water rocket or cold gas thrusters. With combustive propellants a chemical reaction is initiated between the fuel and the oxidizer in the combustion chamber, and the resultant hot gases accelerate out of a rocket engine nozzle (or nozzles) at the rearward-facing end of the rocket. The acceleration of these gases through the engine exerts force ("thrust") on the combustion chamber and nozzle, propelling the vehicle (according to Newton's Third Law). 
This actually happens because the force (pressure times area) on the combustion chamber wall is unbalanced by the nozzle opening; this is not the case in any other direction. The shape of the nozzle also generates force by directing the exhaust gas along the axis of the rocket. Propellant Rocket propellant is mass that is stored, usually in some form of propellant tank or casing, prior to being used as the propulsive mass that is ejected from a rocket engine in the form of a fluid jet to produce thrust. For chemical rockets often the propellants are a fuel such as liquid hydrogen or kerosene burned with an oxidizer such as liquid oxygen or nitric acid to produce large volumes of very hot gas. The oxidiser is either kept separate and mixed in the combustion chamber, or comes premixed, as with solid rockets. Sometimes the propellant is not burned but still undergoes a chemical reaction, and can be a 'monopropellant' such as hydrazine, nitrous oxide or hydrogen peroxide that can be catalytically decomposed to hot gas. Alternatively, an inert propellant can be used that can be externally heated, such as in steam rocket, solar thermal rocket or nuclear thermal rockets. For smaller, low performance rockets such as attitude control thrusters where high performance is less necessary, a pressurised fluid is used as propellant that simply escapes the spacecraft through a propelling nozzle. Pendulum rocket fallacy The first liquid-fuel rocket, constructed by Robert H. Goddard, differed significantly from modern rockets. The rocket engine was at the top and the fuel tank at the bottom of the rocket, based on Goddard's belief that the rocket would achieve stability by "hanging" from the engine like a pendulum in flight. However, the rocket veered off course and crashed away from the launch site, indicating that the rocket was no more stable than one with the rocket engine at the base. Uses Rockets or other similar reaction devices carrying their own propellant must be used when there is no other substance (land, water, or air) or force (gravity, magnetism, light) that a vehicle may usefully employ for propulsion, such as in space. In these circumstances, it is necessary to carry all the propellant to be used. However, they are also useful in other situations: Military Some military weapons use rockets to propel warheads to their targets. A rocket and its payload together are generally referred to as a missile when the weapon has a guidance system (not all missiles use rocket engines, some use other engines such as jets) or as a rocket if it is unguided. Anti-tank and anti-aircraft missiles use rocket engines to engage targets at high speed at a range of several miles, while intercontinental ballistic missiles can be used to deliver multiple nuclear warheads from thousands of miles, and anti-ballistic missiles try to stop them. Rockets have also been tested for reconnaissance, such as the Ping-Pong rocket, which was launched to surveil enemy targets, however, recon rockets have never come into wide use in the military. Science and research Sounding rockets are commonly used to carry instruments that take readings from to above the surface of the Earth. The first images of Earth from space were obtained from a V-2 rocket in 1946 (flight #13). Rocket engines are also used to propel rocket sleds along a rail at extremely high speed. The world record for this is Mach 8.5. Spaceflight Larger rockets are normally launched from a launch pad that provides stable support until a few seconds after ignition. 
Due to their high exhaust velocity——rockets are particularly useful when very high speeds are required, such as orbital speed at approximately . Spacecraft delivered into orbital trajectories become artificial satellites, which are used for many commercial purposes. Indeed, rockets remain the only way to launch spacecraft into orbit and beyond. They are also used to rapidly accelerate spacecraft when they change orbits or de-orbit for landing. Also, a rocket may be used to soften a hard parachute landing immediately before touchdown (see retrorocket). Rescue Rockets were used to propel a line to a stricken ship so that a Breeches buoy can be used to rescue those on board. Rockets are also used to launch emergency flares. Some crewed rockets, notably the Saturn V and Soyuz, have launch escape systems. This is a small, usually solid rocket that is capable of pulling the crewed capsule away from the main vehicle towards safety at a moments notice. These types of systems have been operated several times, both in testing and in flight, and operated correctly each time. This was the case when the Safety Assurance System (Soviet nomenclature) successfully pulled away the L3 capsule during three of the four failed launches of the Soviet Moon rocket, N1 vehicles 3L, 5L and 7L. In all three cases the capsule, albeit uncrewed, was saved from destruction. Only the three aforementioned N1 rockets had functional Safety Assurance Systems. The outstanding vehicle, 6L, had dummy upper stages and therefore no escape system giving the N1 booster a 100% success rate for egress from a failed launch. A successful escape of a crewed capsule occurred when Soyuz T-10, on a mission to the Salyut 7 space station, exploded on the pad. Solid rocket propelled ejection seats are used in many military aircraft to propel crew away to safety from a vehicle when flight control is lost. Hobby, sport, and entertainment A model rocket is a small rocket designed to reach low altitudes (e.g., for model) and be recovered by a variety of means. According to the United States National Association of Rocketry (nar) Safety Code, model rockets are constructed of paper, wood, plastic and other lightweight materials. The code also provides guidelines for motor use, launch site selection, launch methods, launcher placement, recovery system design and deployment and more. Since the early 1960s, a copy of the Model Rocket Safety Code has been provided with most model rocket kits and motors. Despite its inherent association with extremely flammable substances and objects with a pointed tip traveling at high speeds, model rocketry historically has proven to be a very safe hobby and has been credited as a significant source of inspiration for children who eventually become scientists and engineers. Hobbyists build and fly a wide variety of model rockets. Many companies produce model rocket kits and parts but due to their inherent simplicity some hobbyists have been known to make rockets out of almost anything. Rockets are also used in some types of consumer and professional fireworks. A water rocket is a type of model rocket using water as its reaction mass. The pressure vessel (the engine of the rocket) is usually a used plastic soft drink bottle. The water is forced out by a pressurized gas, typically compressed air. It is an example of Newton's third law of motion. The scale of amateur rocketry can range from a small rocket launched in one's own backyard to a rocket that reached space. 
Amateur rocketry is split into three categories according to total engine impulse: low-power, mid-power, and high-power. Hydrogen peroxide rockets are used to power jet packs, and have been used to power cars and a rocket car holds the all time (albeit unofficial) drag racing record. Corpulent Stump is the most powerful non-commercial rocket ever launched on an Aerotech engine in the United Kingdom. Flight Launches for orbital spaceflights, or into interplanetary space, are usually from a fixed location on the ground, but would also be possible from an aircraft or ship. Rocket launch technologies include the entire set of systems needed to successfully launch a vehicle, not just the vehicle itself, but also the firing control systems, mission control center, launch pad, ground stations, and tracking stations needed for a successful launch or recovery or both. These are often collectively referred to as the "ground segment". Orbital launch vehicles commonly take off vertically, and then begin to progressively lean over, usually following a gravity turn trajectory. Once above the majority of the atmosphere, the vehicle then angles the rocket jet, pointing it largely horizontally but somewhat downwards, which permits the vehicle to gain and then maintain altitude while increasing horizontal speed. As the speed grows, the vehicle will become more and more horizontal until at orbital speed, the engine will cut off. All current vehicles stage, that is, jettison hardware on the way to orbit. Although vehicles have been proposed which would be able to reach orbit without staging, none have ever been constructed, and, if powered only by rockets, the exponentially increasing fuel requirements of such a vehicle would make its useful payload tiny or nonexistent. Most current and historical launch vehicles "expend" their jettisoned hardware, typically by allowing it to crash into the ocean, but some have recovered and reused jettisoned hardware, either by parachute or by propulsive landing. When launching a spacecraft to orbit, a "" is a guided, powered turn during ascent phase that causes a rocket's flight path to deviate from a "straight" path. A dogleg is necessary if the desired launch azimuth, to reach a desired orbital inclination, would take the ground track over land (or over a populated area, e.g. Russia usually does launch over land, but over unpopulated areas), or if the rocket is trying to reach an orbital plane that does not reach the latitude of the launch site. Doglegs are undesirable due to extra onboard fuel required, causing heavier load, and a reduction of vehicle performance. Noise Rocket exhaust generates a significant amount of acoustic energy. As the supersonic exhaust collides with the ambient air, shock waves are formed. The sound intensity from these shock waves depends on the size of the rocket as well as the exhaust velocity. The sound intensity of large, high performance rockets could potentially kill at close range. The Space Shuttle generated 180 dB of noise around its base. To combat this, NASA developed a sound suppression system which can flow water at rates up to 900,000 gallons per minute (57 m3/s) onto the launch pad. The water reduces the noise level from 180 dB down to 142 dB (the design requirement is 145 dB). Without the sound suppression system, acoustic waves would reflect off of the launch pad towards the rocket, vibrating the sensitive payload and crew. These acoustic waves can be so severe as to damage or destroy the rocket. 
Noise is generally most intense when a rocket is close to the ground, since the noise from the engines radiates up away from the jet, as well as reflecting off the ground. This noise can be reduced somewhat by flame trenches with roofs, by water injection around the jet and by deflecting the jet at an angle. For crewed rockets various methods are used to reduce the sound intensity for the passengers, and typically the placement of the astronauts far away from the rocket engines helps significantly. For the passengers and crew, when a vehicle goes supersonic the sound cuts off as the sound waves are no longer able to keep up with the vehicle. Physics Operation The effect of the combustion of propellant in the rocket engine is to increase the internal energy of the resulting gases, utilizing the stored chemical energy in the fuel. As the internal energy increases, pressure increases, and a nozzle is used to convert this energy into a directed kinetic energy. This produces thrust against the ambient environment to which these gases are released. The ideal direction of motion of the exhaust is in the direction so as to cause thrust. At the top end of the combustion chamber the hot, energetic gas fluid cannot move forward, and so, it pushes upward against the top of the rocket engine's combustion chamber. As the combustion gases approach the exit of the combustion chamber, they increase in speed. The effect of the convergent part of the rocket engine nozzle on the high pressure fluid of combustion gases, is to cause the gases to accelerate to high speed. The higher the speed of the gases, the lower the pressure of the gas (Bernoulli's principle or conservation of energy) acting on that part of the combustion chamber. In a properly designed engine, the flow will reach Mach 1 at the throat of the nozzle. At which point the speed of the flow increases. Beyond the throat of the nozzle, a bell shaped expansion part of the engine allows the gases that are expanding to push against that part of the rocket engine. Thus, the bell part of the nozzle gives additional thrust. Simply expressed, for every action there is an equal and opposite reaction, according to Newton's third law with the result that the exiting gases produce the reaction of a force on the rocket causing it to accelerate the rocket. In a closed chamber, the pressures are equal in each direction and no acceleration occurs. If an opening is provided in the bottom of the chamber then the pressure is no longer acting on the missing section. This opening permits the exhaust to escape. The remaining pressures give a resultant thrust on the side opposite the opening, and these pressures are what push the rocket along. The shape of the nozzle is important. Consider a balloon propelled by air coming out of a tapering nozzle. In such a case the combination of air pressure and viscous friction is such that the nozzle does not push the balloon but is pulled by it. Using a convergent/divergent nozzle gives more force since the exhaust also presses on it as it expands outwards, roughly doubling the total force. If propellant gas is continuously added to the chamber then these pressures can be maintained for as long as propellant remains. Note that in the case of liquid propellant engines, the pumps moving the propellant into the combustion chamber must maintain a pressure larger than the combustion chamber—typically on the order of 100 atmospheres. 
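To make the nozzle's energy-conversion role concrete, the sketch below uses the standard ideal (isentropic, frozen-flow) exhaust-velocity expression for a de Laval nozzle. The chamber temperature, exhaust molar mass, ratio of specific heats and pressure ratio are assumed, roughly kerosene/liquid-oxygen-like values; real engines fall somewhat short of the ideal figure, so this is an illustration rather than a design tool.

```python
import math

R_UNIVERSAL = 8.314462618  # universal gas constant, J/(mol K)

def ideal_exhaust_velocity(t_chamber, molar_mass, gamma, p_exit_over_p_chamber):
    """Ideal exhaust velocity of a de Laval nozzle: thermal energy of the
    chamber gas converted into directed kinetic energy by expansion."""
    term = 1.0 - p_exit_over_p_chamber ** ((gamma - 1.0) / gamma)
    return math.sqrt(2.0 * gamma / (gamma - 1.0)
                     * R_UNIVERSAL * t_chamber / molar_mass * term)

# Assumed, loosely kerosene/LOX-like numbers: 3500 K chamber, 22 g/mol exhaust,
# gamma = 1.2, expansion from ~70 atm chamber pressure down to ~1 atm.
print(ideal_exhaust_velocity(t_chamber=3500.0, molar_mass=0.022, gamma=1.2,
                             p_exit_over_p_chamber=1.0 / 70.0))   # roughly 2.8 km/s
```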
As a side effect, these pressures on the rocket also act on the exhaust in the opposite direction and accelerate this exhaust to very high speeds (according to Newton's Third Law). From the principle of conservation of momentum, the speed of the exhaust of a rocket determines how much momentum increase is created for a given amount of propellant. This is called the rocket's specific impulse. Because a rocket, propellant and exhaust in flight, without any external perturbations, may be considered as a closed system, the total momentum is always constant. Therefore, the faster the net speed of the exhaust in one direction, the greater the speed the rocket can achieve in the opposite direction. This is especially true since the rocket body's mass is typically far lower than the final total exhaust mass. Forces on a rocket in flight The general study of the forces on a rocket is part of the field of ballistics. Spacecraft are further studied in the subfield of astrodynamics. Flying rockets are primarily affected by the following: thrust from the engine(s); gravity from celestial bodies; drag, if moving in atmosphere; and lift, usually a relatively small effect except for rocket-powered aircraft. In addition, the inertia and centrifugal pseudo-force can be significant due to the path of the rocket around the center of a celestial body; when high enough speeds in the right direction and altitude are achieved a stable orbit or escape velocity is obtained. These forces, with a stabilizing tail (the empennage) present, will, unless deliberate control efforts are made, naturally cause the vehicle to follow a roughly parabolic trajectory termed a gravity turn, and this trajectory is often used at least during the initial part of a launch. (This is true even if the rocket engine is mounted at the nose.) Vehicles can thus maintain low or even zero angle of attack, which minimizes transverse stress on the launch vehicle, permitting a weaker, and hence lighter, launch vehicle. Drag Drag is a force opposite to the direction of the rocket's motion relative to any air it is moving through. This slows the speed of the vehicle and produces structural loads. The deceleration forces for fast-moving rockets are calculated using the drag equation. Drag can be minimised by an aerodynamic nose cone and by using a shape with a high ballistic coefficient (the "classic" rocket shape: long and thin), and by keeping the rocket's angle of attack as low as possible. During a launch, as the vehicle speed increases, and the atmosphere thins, there is a point of maximum aerodynamic drag called max Q. This determines the minimum aerodynamic strength of the vehicle, as the rocket must avoid buckling under these forces. Net thrust A typical rocket engine can handle a significant fraction of its own mass in propellant each second, with the propellant leaving the nozzle at several kilometres per second. This means that the thrust-to-weight ratio of a rocket engine, and often the entire vehicle, can be very high, in extreme cases over 100. This compares with other jet propulsion engines that can exceed 5 for some of the better engines. The net thrust of a rocket is $F_n = \dot{m}\, v_e$, where $\dot{m}$ is the rate of propellant flow and $v_e$ is the effective exhaust velocity. The effective exhaust velocity is more or less the speed the exhaust leaves the vehicle, and in the vacuum of space, the effective exhaust velocity is often equal to the actual average exhaust speed along the thrust axis. However, the effective exhaust velocity allows for various losses, and notably, is reduced when operated within an atmosphere.
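A hedged numerical sketch of the net-thrust relation follows (Python; all numbers are assumed, illustrative values, not data for any particular engine). Writing the thrust as momentum flux plus a pressure-correction term is the standard way the ambient atmosphere is accounted for, and it shows why the effective exhaust velocity, thrust divided by mass flow, is lower at sea level than in vacuum.

```python
def net_thrust(mdot, v_exhaust_actual, p_exit, p_ambient, area_exit):
    """Net thrust F = mdot * v_e_actual + (p_exit - p_ambient) * A_exit.
    Dividing F by mdot gives the effective exhaust velocity."""
    return mdot * v_exhaust_actual + (p_exit - p_ambient) * area_exit

# Assumed, illustrative numbers only:
f_sea_level = net_thrust(mdot=250.0, v_exhaust_actual=3000.0,
                         p_exit=40_000.0, p_ambient=101_325.0, area_exit=0.9)
f_vacuum = net_thrust(mdot=250.0, v_exhaust_actual=3000.0,
                      p_exit=40_000.0, p_ambient=0.0, area_exit=0.9)
print(f_sea_level, f_sea_level / 250.0)   # thrust (N) and effective exhaust velocity (m/s) at sea level
print(f_vacuum, f_vacuum / 250.0)         # both are higher in vacuum
```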
The rate of propellant flow through a rocket engine is often deliberately varied over a flight, to provide a way to control the thrust and thus the airspeed of the vehicle. This, for example, allows minimization of aerodynamic losses and can limit the increase of g-forces due to the reduction in propellant load. Total impulse Impulse is defined as a force acting on an object over time, which in the absence of opposing forces (gravity and aerodynamic drag), changes the momentum (integral of mass and velocity) of the object. As such, it is the best performance class (payload mass and terminal velocity capability) indicator of a rocket, rather than takeoff thrust, mass, or "power". The total impulse of a rocket (stage) burning its propellant is the integral of the thrust over the burning time, I = ∫ F dt. When there is fixed thrust, this is simply I = F t. The total impulse of a multi-stage rocket is the sum of the impulses of the individual stages. Specific impulse As can be seen from the thrust equation, the effective speed of the exhaust controls the amount of thrust produced from a particular quantity of fuel burnt per second. An equivalent measure, the net impulse per weight unit of propellant expelled, is called specific impulse, I_sp, and this is one of the most important figures that describes a rocket's performance. It is defined such that it is related to the effective exhaust velocity by v_e = g_0 I_sp, where g_0 is the standard acceleration due to gravity at the Earth's surface. Thus, the greater the specific impulse, the greater the net thrust and performance of the engine. I_sp is determined by measurement while testing the engine. In practice the effective exhaust velocities of rockets vary but can be extremely high, ~4500 m/s, about 15 times the sea level speed of sound in air. Delta-v (rocket equation) The delta-v capacity of a rocket is the theoretical total change in velocity that a rocket can achieve without any external interference (without air drag or gravity or other forces). When v_e is constant, the delta-v that a rocket vehicle can provide can be calculated from the Tsiolkovsky rocket equation: Δv = v_e ln(m_0 / m_1), where m_0 is the initial total mass including propellant and m_1 is the final total mass. When launched from the Earth, practical delta-vs for single rockets carrying payloads can be a few km/s. Some theoretical designs have rockets with delta-vs over 9 km/s. The required delta-v can also be calculated for a particular manoeuvre; for example the delta-v to launch from the surface of the Earth to low Earth orbit is about 9.7 km/s, which leaves the vehicle with a sideways speed of about 7.8 km/s at an altitude of around 200 km. In this manoeuvre about 1.9 km/s is lost in air drag, gravity drag and gaining altitude. The ratio m_0 / m_1 is sometimes called the mass ratio. Mass ratios Almost all of a launch vehicle's mass consists of propellant. Mass ratio is, for any 'burn', the ratio between the rocket's initial mass and its final mass. Everything else being equal, a high mass ratio is desirable for good performance, since it indicates that the rocket is lightweight and hence performs better, for essentially the same reasons that low weight is desirable in sports cars. Rockets as a group have the highest thrust-to-weight ratio of any type of engine, and this helps vehicles achieve high mass ratios, which improves the performance of flights. The higher the ratio, the less engine mass needs to be carried. This permits the carrying of even more propellant, enormously improving the delta-v. Alternatively, some rockets, such as those for rescue scenarios or racing, carry relatively little propellant and payload and thus need only a lightweight structure, and instead achieve high accelerations.
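The Tsiolkovsky relation above is easy to evaluate directly; the sketch below computes the ideal delta-v of a single-stage vehicle from its specific impulse and mass ratio. The vehicle numbers are made up purely for illustration and do not describe any particular rocket.

```python
import math

G0 = 9.80665  # m/s^2, standard gravity used to convert specific impulse to exhaust velocity

def delta_v(isp_seconds, m_initial, m_final):
    """Ideal delta-v from the Tsiolkovsky rocket equation: dv = g0 * Isp * ln(m0 / m1)."""
    v_e = G0 * isp_seconds  # effective exhaust velocity, m/s
    return v_e * math.log(m_initial / m_final)

# Illustrative single-stage vehicle (made-up numbers):
m_initial = 100_000.0  # kg at ignition, mostly propellant
m_final = 12_000.0     # kg after burnout (structure, engines, payload)
isp = 350.0            # s, a typical value for an efficient chemical engine

dv = delta_v(isp, m_initial, m_final)
print(f"Mass ratio: {m_initial / m_final:.1f}")
print(f"Ideal delta-v: {dv / 1000:.2f} km/s")
```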
For example, the Soyuz escape system can produce 20 g. Achievable mass ratios are highly dependent on many factors such as propellant type, the engine design the vehicle uses, structural safety margins and construction techniques. The highest mass ratios are generally achieved with liquid rockets, and these types are usually used for orbital launch vehicles, a situation which calls for a high delta-v. Liquid propellants generally have densities similar to water (with the notable exceptions of liquid hydrogen and liquid methane), and these types are able to use lightweight, low pressure tanks and typically run high-performance turbopumps to force the propellant into the combustion chamber. Some notable mass fractions are found in the following table (some aircraft are included for comparison purposes): Staging Thus far, the required velocity (delta-v) to achieve orbit has not been attained by any single rocket, because the propellant, tankage, structure, guidance, valves, engines and so on take a particular minimum percentage of take-off mass that is too great for the propellant carried to achieve that delta-v with a reasonable payload. Since single-stage-to-orbit has so far not been achievable, orbital rockets always have more than one stage. For example, the first stage of the Saturn V, carrying the weight of the upper stages, was able to achieve a mass ratio of about 10, and achieved a specific impulse of 263 seconds. This gives a delta-v of around 5.9 km/s, whereas around 9.4 km/s delta-v is needed to achieve orbit with all losses allowed for. This problem is frequently solved by staging: the rocket sheds excess weight (usually empty tankage and associated engines) during launch. Staging is either serial, where the rockets light after the previous stage has fallen away, or parallel, where rockets are burning together and then detach when they burn out. The maximum speeds that can be achieved with staging are theoretically limited only by the speed of light. However, the payload that can be carried goes down geometrically with each extra stage needed, while the additional delta-v for each stage is simply additive. Acceleration and thrust-to-weight ratio From Newton's second law, the acceleration, a, of a vehicle is simply a = F_n / m, where m is the instantaneous mass of the vehicle and F_n is the net force acting on the rocket (mostly thrust, but air drag and other forces can play a part). As the remaining propellant decreases, rocket vehicles become lighter and their acceleration tends to increase until the propellant is exhausted. This means that much of the speed change occurs towards the end of the burn when the vehicle is much lighter. However, the thrust can be throttled to offset or vary this if needed. Discontinuities in acceleration also occur when stages burn out, often starting at a lower acceleration with each new stage firing. Peak accelerations can be increased by designing the vehicle with a reduced mass, usually achieved by a reduction in the fuel load and tankage and associated structures, but obviously this reduces range, delta-v and burn time. Still, for some applications that rockets are used for, a high peak acceleration applied for just a short time is highly desirable. The minimal mass of a vehicle consists of a rocket engine with minimal fuel and structure to carry it. In that case the thrust-to-weight ratio of the rocket engine limits the maximum acceleration that can be designed.
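Because the delta-v contributions of successive stages simply add, a staged vehicle can reach a total delta-v that no single stage could. The sketch below applies the rocket equation stage by stage and sums the results; the first entry uses the mass ratio of about 10 and specific impulse of 263 seconds quoted above for the Saturn V first stage, while the upper-stage rows are invented purely for illustration.

```python
import math

G0 = 9.80665  # m/s^2

def stage_delta_v(isp_seconds, mass_ratio):
    """Delta-v contributed by one stage: dv = g0 * Isp * ln(mass ratio)."""
    return G0 * isp_seconds * math.log(mass_ratio)

# (name, Isp in seconds, mass ratio for that burn). The first stage matches
# the figures quoted in the text; the other two rows are illustrative only.
stages = [
    ("first stage", 263.0, 10.0),
    ("second stage (illustrative)", 420.0, 4.0),
    ("third stage (illustrative)", 420.0, 3.0),
]

total = 0.0
for name, isp, ratio in stages:
    dv = stage_delta_v(isp, ratio)
    total += dv
    print(f"{name}: {dv / 1000:.1f} km/s")

print(f"total delta-v: {total / 1000:.1f} km/s")
```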
It turns out that rocket engines generally have truly excellent thrust-to-weight ratios (137 for the NK-33 engine; some solid rockets are over 1000), and nearly all really high-g vehicles employ or have employed rockets. The high accelerations that rockets naturally possess mean that rocket vehicles are often capable of vertical takeoff, and in some cases, with suitable guidance and control of the engines, also vertical landing. For these operations to be possible, a vehicle's engines must provide an acceleration greater than the local gravitational acceleration. Energy Energy efficiency The energy density of a typical rocket propellant is often around one-third that of conventional hydrocarbon fuels; the bulk of the mass is (often relatively inexpensive) oxidizer. Nevertheless, at take-off the rocket has a great deal of energy in the fuel and oxidizer stored within the vehicle. It is of course desirable that as much of the energy of the propellant as possible end up as kinetic or potential energy of the body of the rocket. Energy from the fuel is lost in air drag and gravity drag and is used for the rocket to gain altitude and speed. However, much of the lost energy ends up in the exhaust. In a chemical propulsion device, the engine efficiency is simply the ratio of the kinetic power of the exhaust gases to the power available from the chemical reaction. 100% efficiency within the engine (an engine efficiency of 1) would mean that all the heat energy of the combustion products is converted into kinetic energy of the jet. This is not possible, but the near-adiabatic high expansion ratio nozzles that can be used with rockets come surprisingly close: when the nozzle expands the gas, the gas is cooled and accelerated, and an energy efficiency of up to 70% can be achieved. Most of the rest is heat energy in the exhaust that is not recovered. The high efficiency is a consequence of the fact that rocket combustion can be performed at very high temperatures and the gas is finally released at much lower temperatures, so giving good Carnot efficiency. However, engine efficiency is not the whole story. In common with other jet-based engines, but particularly in rockets due to their high and typically fixed exhaust speeds, rocket vehicles are extremely inefficient at low speeds irrespective of the engine efficiency. The problem is that at low speeds, the exhaust carries away a huge amount of kinetic energy rearward. The fraction of the available power that ends up in the vehicle rather than in the exhaust is termed the propulsive efficiency. As speeds rise, the resultant exhaust speed goes down, and the overall vehicle energetic efficiency rises, reaching a peak of around 100% of the engine efficiency when the vehicle is travelling at exactly the same speed at which the exhaust is emitted. In this case the exhaust would ideally stop dead in space behind the moving vehicle, taking away zero energy, and from conservation of energy, all the energy would end up in the vehicle. The efficiency then drops off again at even higher speeds as the exhaust ends up traveling forwards, trailing behind the vehicle. From these principles it can be shown that the propulsive efficiency of a rocket moving at a given speed with a given exhaust velocity depends only on the ratio of the two, and that the overall (instantaneous) energy efficiency is the product of the engine efficiency and the propulsive efficiency. For example, with an engine efficiency of 0.7, a rocket flying at Mach 0.85 (which most aircraft cruise at) with an exhaust velocity of Mach 10 would have a predicted overall energy efficiency of 5.9%, whereas a conventional, modern, air-breathing jet engine achieves closer to 35% efficiency.
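The "exhaust stops dead" argument above can be checked numerically: in the ground frame the exhaust leaves at roughly the exhaust velocity minus the vehicle speed, so the kinetic energy it carries away shrinks as the vehicle speed approaches the exhaust velocity. The sketch below illustrates this with made-up numbers; it is a simplified bookkeeping exercise, not the article's efficiency formula.

```python
# Simplified illustration of where the kinetic energy goes, per kilogram of
# propellant, as vehicle speed approaches the exhaust velocity. Numbers are
# illustrative; drag, gravity and the changing vehicle mass are ignored.

v_exhaust = 4500.0  # m/s, exhaust speed relative to the vehicle

for v_vehicle in (0.0, 1000.0, 2250.0, 4500.0):
    v_exhaust_ground = v_exhaust - v_vehicle   # exhaust speed in the ground frame
    ke_wasted = 0.5 * v_exhaust_ground ** 2    # J/kg carried away by the exhaust
    print(f"vehicle at {v_vehicle:5.0f} m/s: exhaust ground speed "
          f"{v_exhaust_ground:5.0f} m/s, wasted KE {ke_wasted / 1e6:.2f} MJ/kg")
# At a vehicle speed equal to the exhaust velocity the exhaust is left at rest
# and carries away no kinetic energy.
```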
Thus a rocket would need about six times more energy; and allowing for the specific energy of rocket propellant being around one third that of conventional air fuel, roughly 18 times more mass of propellant would need to be carried for the same journey. This is why rockets are rarely if ever used for general aviation. Since the energy ultimately comes from fuel, these considerations mean that rockets are mainly useful when a very high speed is required, such as ICBMs or orbital launch. For example, NASA's Space Shuttle fired its engines for around 8.5 minutes, consuming 1,000 tonnes of solid propellant (containing 16% aluminium) and an additional 2,000,000 litres of liquid propellant (106,261 kg of liquid hydrogen fuel) to lift the 100,000 kg vehicle (including the 25,000 kg payload) to an altitude of 111 km and an orbital velocity of 30,000 km/h. At this altitude and velocity, the vehicle had a kinetic energy of about 3 TJ and a potential energy of roughly 200 GJ. Given the initial energy of 20 TJ, the Space Shuttle was about 16% energy efficient at launching the orbiter. Thus jet engines, with a better match between vehicle speed and jet exhaust speed (such as turbofans, in spite of their worse specific impulse), dominate for subsonic and supersonic atmospheric use, while rockets work best at hypersonic speeds. On the other hand, rockets serve in many short-range, relatively low speed military applications where their low-speed inefficiency is outweighed by their extremely high thrust and hence high accelerations. Oberth effect One subtle feature of rockets relates to energy. A rocket stage, while carrying a given load, is capable of giving a particular delta-v. This delta-v means that the speed increases (or decreases) by a particular amount, independent of the initial speed. However, because kinetic energy scales with the square of speed, this means that the faster the rocket is travelling before the burn, the more orbital energy it gains or loses. This fact is used in interplanetary travel. It means that the amount of delta-v to reach other planets, over and above that needed to reach escape velocity, can be much less if the delta-v is applied when the rocket is travelling at high speeds, close to the Earth or another planetary surface; whereas waiting until the rocket has slowed at altitude greatly multiplies the effort required to achieve the desired trajectory. Safety, reliability and accidents The reliability of rockets, as for all physical systems, is dependent on the quality of engineering design and construction. Because of the enormous chemical energy in rocket propellants (greater energy by weight than explosives, but lower than gasoline), consequences of accidents can be severe. Most space missions have some problems. In 1986, following the Space Shuttle Challenger disaster, American physicist Richard Feynman, having served on the Rogers Commission, estimated that the chance of an unsafe condition for a launch of the Shuttle was very roughly 1%; more recently the historical per person-flight risk in orbital spaceflight has been calculated to be around 2% or 4%. In May 2003 the astronaut office made clear its position on the need and feasibility of improving crew safety for future NASA crewed missions, indicating their "consensus that an order of magnitude reduction in the risk of human life during ascent, compared to the Space Shuttle, is both achievable with current technology and consistent with NASA's focus on steadily improving rocket reliability".
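The Oberth effect described above follows directly from kinetic energy scaling with the square of speed: the same delta-v adds more kinetic energy when applied at a higher starting speed. The sketch below shows this with illustrative numbers that do not correspond to any particular mission.

```python
def kinetic_energy_gain(mass, v_before, delta_v):
    """Kinetic energy gained by applying the same delta-v at a given starting speed."""
    v_after = v_before + delta_v
    return 0.5 * mass * (v_after ** 2 - v_before ** 2)

mass = 1000.0     # kg, illustrative spacecraft mass
delta_v = 1000.0  # m/s, the same burn in both cases

slow = kinetic_energy_gain(mass, 2_000.0, delta_v)   # burn applied far from the planet
fast = kinetic_energy_gain(mass, 11_000.0, delta_v)  # burn applied deep in the gravity well

print(f"Energy gained burning at 2 km/s:  {slow / 1e9:.2f} GJ")
print(f"Energy gained burning at 11 km/s: {fast / 1e9:.2f} GJ")
# The propellant spent is identical, but the burn at higher speed yields far
# more orbital energy: the Oberth effect.
```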
Costs and economics The costs of rockets can be roughly divided into propellant costs, the costs of obtaining and/or producing the 'dry mass' of the rocket, and the costs of any required support equipment and facilities. Most of the takeoff mass of a rocket is normally propellant. However, propellant is seldom more than a few times more expensive than gasoline per kilogram (as of 2009), and although substantial amounts are needed, for all but the very cheapest rockets it turns out that the propellant costs are usually comparatively small, although not completely negligible. Counting the cost of its liquid oxygen and liquid hydrogen, the Space Shuttle in 2009 had a liquid propellant expense of approximately $1.4 million for each launch, against roughly $450 million of other expenses per launch (with 40% of the mass of propellants used by it being liquids in the external fuel tank and 60% solids in the SRBs). Even though a rocket's non-propellant dry mass is often only between 5% and 20% of total mass, its cost nevertheless dominates. For hardware with the performance used in orbital launch vehicles, expenses of $2000–$10,000+ per kilogram of dry weight are common, primarily from engineering, fabrication, and testing; raw materials typically amount to around 2% of total expense. For most rockets, except reusable ones such as the Shuttle's engines, the engines need not function more than a few minutes, which simplifies design. Extreme performance requirements for rockets reaching orbit correlate with high cost, including intensive quality control to ensure reliability despite the limited safety factors allowable for weight reasons. Components produced in small numbers, if not individually machined, can prevent amortization of R&D and facility costs over mass production to the degree seen in more pedestrian manufacturing. Amongst liquid-fueled rockets, complexity can be influenced by how much hardware must be lightweight: pressure-fed engines can have a part count two orders of magnitude smaller than pump-fed engines, but they lead to more weight by needing greater tank pressure, and as a consequence are most often used only for small maneuvering thrusters. To change the preceding factors for orbital launch vehicles, proposed methods have included mass-producing simple rockets in large quantities or on a large scale, developing reusable rockets meant to fly very frequently to amortize their up-front expense over many payloads, or reducing rocket performance requirements by constructing a non-rocket spacelaunch system for part of the velocity to orbit (or all of it, though most such methods still involve some rocket use). The costs of support equipment, range costs and launch pads generally scale up with the size of the rocket, but vary less with launch rate, and so may be considered to be approximately a fixed cost. Rockets in applications other than launch to orbit (such as military rockets and rocket-assisted takeoff), which commonly do not need comparable performance and are sometimes mass-produced, are often relatively inexpensive. 2010s emerging private competition Since the early 2010s, new private options for obtaining spaceflight services have emerged, bringing substantial price pressure into the existing market.
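To see why propellant is a comparatively small part of launch cost, the quick calculation below uses the Shuttle-era figures quoted above ($1.4 million of liquid propellant against roughly $450 million of other per-launch expenses); the arithmetic is only as good as those rounded numbers.

```python
# Rough propellant-cost fraction using the per-launch figures quoted in the text.
liquid_propellant_cost = 1.4e6  # USD, liquid propellant per Shuttle launch (quoted above)
other_launch_costs = 450e6      # USD, other per-launch expenses (quoted above)

fraction = liquid_propellant_cost / (liquid_propellant_cost + other_launch_costs)
print(f"Liquid propellant is roughly {fraction * 100:.1f}% of the per-launch cost")
# About 0.3%: even with the solid propellant added, propellant remains a small
# share, which is why dry-mass hardware costs dominate.
```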
Rocket
[ "Engineering" ]
10,016
[ "Rocketry", "Aerospace engineering" ]
26,350
https://en.wikipedia.org/wiki/Radiation%20therapy
Radiation therapy or radiotherapy (RT, RTx, or XRT) is a treatment using ionizing radiation, generally provided as part of cancer therapy to either kill or control the growth of malignant cells. It is normally delivered by a linear particle accelerator. Radiation therapy may be curative in a number of types of cancer if they are localized to one area of the body, and have not spread to other parts. It may also be used as part of adjuvant therapy, to prevent tumor recurrence after surgery to remove a primary malignant tumor (for example, early stages of breast cancer). Radiation therapy is synergistic with chemotherapy, and has been used before, during, and after chemotherapy in susceptible cancers. The subspecialty of oncology concerned with radiotherapy is called radiation oncology. A physician who practices in this subspecialty is a radiation oncologist. Radiation therapy is commonly applied to the cancerous tumor because of its ability to control cell growth. Ionizing radiation works by damaging the DNA of cancerous tissue leading to cellular death. To spare normal tissues (such as skin or organs which radiation must pass through to treat the tumor), shaped radiation beams are aimed from several angles of exposure to intersect at the tumor, providing a much larger absorbed dose there than in the surrounding healthy tissue. Besides the tumor itself, the radiation fields may also include the draining lymph nodes if they are clinically or radiologically involved with the tumor, or if there is thought to be a risk of subclinical malignant spread. It is necessary to include a margin of normal tissue around the tumor to allow for uncertainties in daily set-up and internal tumor motion. These uncertainties can be caused by internal movement (for example, respiration and bladder filling) and movement of external skin marks relative to the tumor position. Radiation oncology is the medical specialty concerned with prescribing radiation, and is distinct from radiology, the use of radiation in medical imaging and diagnosis. Radiation may be prescribed by a radiation oncologist with intent to cure or for adjuvant therapy. It may also be used as palliative treatment (where cure is not possible and the aim is for local disease control or symptomatic relief) or as therapeutic treatment (where the therapy has survival benefit and can be curative). It is also common to combine radiation therapy with surgery, chemotherapy, hormone therapy, immunotherapy or some mixture of the four. Most common cancer types can be treated with radiation therapy in some way. The precise treatment intent (curative, adjuvant, neoadjuvant therapeutic, or palliative) will depend on the tumor type, location, and stage, as well as the general health of the patient. Total body irradiation (TBI) is a radiation therapy technique used to prepare the body to receive a bone marrow transplant. Brachytherapy, in which a radioactive source is placed inside or next to the area requiring treatment, is another form of radiation therapy that minimizes exposure to healthy tissue during procedures to treat cancers of the breast, prostate, and other organs. Radiation therapy has several applications in non-malignant conditions, such as the treatment of trigeminal neuralgia, acoustic neuromas, severe thyroid eye disease, pterygium, pigmented villonodular synovitis, and prevention of keloid scar growth, vascular restenosis, and heterotopic ossification. 
The use of radiation therapy in non-malignant conditions is limited partly by worries about the risk of radiation-induced cancers. Medical uses It is estimated that half of the 1.2 million invasive cancer cases diagnosed in the United States in 2022 received radiation therapy as part of their treatment program. Different cancers respond to radiation therapy in different ways. The response of a cancer to radiation is described by its radiosensitivity. Highly radiosensitive cancer cells are rapidly killed by modest doses of radiation. These include leukemias, most lymphomas, and germ cell tumors. The majority of epithelial cancers are only moderately radiosensitive, and require a significantly higher dose of radiation (60–70 Gy) to achieve a radical cure. Some types of cancer are notably radioresistant, that is, much higher doses are required to produce a radical cure than may be safe in clinical practice. Renal cell cancer and melanoma are generally considered to be radioresistant, but radiation therapy is still a palliative option for many patients with metastatic melanoma. Combining radiation therapy with immunotherapy is an active area of investigation and has shown some promise for melanoma and other cancers. It is important to distinguish the radiosensitivity of a particular tumor, which to some extent is a laboratory measure, from the radiation "curability" of a cancer in actual clinical practice. For example, leukemias are not generally curable with radiation therapy, because they are disseminated through the body. Lymphoma may be radically curable if it is localized to one area of the body. Similarly, many of the common, moderately radioresponsive tumors are routinely treated with curative doses of radiation therapy if they are at an early stage; examples include non-melanoma skin cancer, head and neck cancer, breast cancer, non-small cell lung cancer, cervical cancer, anal cancer, and prostate cancer. With the exception of oligometastatic disease, metastatic cancers are incurable with radiation therapy because it is not possible to treat the whole body. Modern radiation therapy relies on a CT scan to identify the tumor and surrounding normal structures and to perform dose calculations for the creation of a complex radiation treatment plan. The patient receives small skin marks to guide the placement of treatment fields. Patient positioning is crucial at this stage, as the patient will have to be placed in an identical position during each treatment. Many patient positioning devices have been developed for this purpose, including masks and cushions which can be molded to the patient. Image-guided radiation therapy is a method that uses imaging to correct for positional errors of each treatment session. The response of a tumor to radiation therapy is also related to its size. Due to complex radiobiology, very large tumors are less affected by radiation than smaller tumors or microscopic disease. Various strategies are used to overcome this effect. The most common technique is surgical resection prior to radiation therapy. This is most commonly seen in the treatment of breast cancer with wide local excision or mastectomy followed by adjuvant radiation therapy. Another method is to shrink the tumor with neoadjuvant chemotherapy prior to radical radiation therapy. A third technique is to enhance the radiosensitivity of the cancer by giving certain drugs during a course of radiation therapy. Examples of radiosensitizing drugs include cisplatin, nimorazole, and cetuximab.
The impact of radiotherapy varies between different types of cancer and different groups. For example, for breast cancer after breast-conserving surgery, radiotherapy has been found to halve the rate at which the disease recurs. In pancreatic cancer, radiotherapy has increased survival times for inoperable tumors. Side effects Radiation therapy (RT) is in itself painless, but has iatrogenic side effect risks. Many low-dose palliative treatments (for example, radiation therapy to bony metastases) cause minimal or no side effects, although short-term pain flare-up can be experienced in the days following treatment due to oedema compressing nerves in the treated area. Higher doses can cause varying side effects during treatment (acute side effects), in the months or years following treatment (long-term side effects), or after re-treatment (cumulative side effects). The nature, severity, and longevity of side effects depends on the organs that receive the radiation, the treatment itself (type of radiation, dose, fractionation, concurrent chemotherapy), and the patient. Serious radiation complications may occur in 5% of RT cases. Acute (near immediate) or sub-acute (2 to 3 months post RT) radiation side effects may develop after 50 Gy RT dosing. Late or delayed radiation injury (6 months to decades) may develop after 65 Gy. Most side effects are predictable and expected. Side effects from radiation are usually limited to the area of the patient's body that is under treatment. Side effects are dose-dependent; for example, higher doses of head and neck radiation can be associated with cardiovascular complications, thyroid dysfunction, and pituitary axis dysfunction. Modern radiation therapy aims to reduce side effects to a minimum and to help the patient understand and deal with side effects that are unavoidable. The main side effects reported are fatigue and skin irritation, like a mild to moderate sun burn. The fatigue often sets in during the middle of a course of treatment and can last for weeks after treatment ends. The irritated skin will heal, but may not be as elastic as it was before. Acute side effects Nausea and vomiting This is not a general side effect of radiation therapy, and mechanistically is associated only with treatment of the stomach or abdomen (which commonly react a few hours after treatment), or with radiation therapy to certain nausea-producing structures in the head during treatment of certain head and neck tumors, most commonly the vestibules of the inner ears. As with any distressing treatment, some patients vomit immediately during radiotherapy, or even in anticipation of it, but this is considered a psychological response. Nausea for any reason can be treated with antiemetics. Damage to the epithelial surfaces Epithelial surfaces may sustain damage from radiation therapy. Depending on the area being treated, this may include the skin, oral mucosa, pharyngeal, bowel mucosa, and ureter. The rates of onset of damage and recovery from it depend upon the turnover rate of epithelial cells. Typically the skin starts to become pink and sore several weeks into treatment. The reaction may become more severe during the treatment and for up to about one week following the end of radiation therapy, and the skin may break down. Although this moist desquamation is uncomfortable, recovery is usually quick. Skin reactions tend to be worse in areas where there are natural folds in the skin, such as underneath the female breast, behind the ear, and in the groin. 
Mouth, throat and stomach sores If the head and neck area is treated, temporary soreness and ulceration commonly occur in the mouth and throat. If severe, this can affect swallowing, and the patient may need painkillers and nutritional support/food supplements. The esophagus can also become sore if it is treated directly, or if, as commonly occurs, it receives a dose of collateral radiation during treatment of lung cancer. When treating liver malignancies and metastases, it is possible for collateral radiation to cause gastric (stomach) or duodenal ulcers. This collateral radiation is commonly caused by non-targeted delivery (reflux) of the radioactive agents being infused. Methods, techniques and devices are available to lower the occurrence of this type of adverse side effect. Intestinal discomfort The lower bowel may be treated directly with radiation (treatment of rectal or anal cancer) or be exposed during radiation therapy to other pelvic structures (prostate, bladder, female genital tract). Typical symptoms are soreness, diarrhoea, and nausea. Nutritional interventions may be able to help with diarrhoea associated with radiotherapy. Studies in people having pelvic radiotherapy as part of anticancer treatment for a primary pelvic cancer found that changes in dietary fat, fibre and lactose during radiotherapy reduced diarrhoea at the end of treatment. Swelling As part of the general inflammation that occurs, swelling of soft tissues may cause problems during radiation therapy. This is a concern during treatment of brain tumors and brain metastases, especially where there is pre-existing raised intracranial pressure or where the tumor is causing near-total obstruction of a lumen (e.g., trachea or main bronchus). Surgical intervention may be considered prior to treatment with radiation. If surgery is deemed unnecessary or inappropriate, the patient may receive steroids during radiation therapy to reduce swelling. Infertility The gonads (ovaries and testicles) are very sensitive to radiation. They may be unable to produce gametes following direct exposure to most normal treatment doses of radiation. Treatment planning for all body sites is designed to minimize, if not completely exclude, dose to the gonads if they are not the primary area of treatment. Late side effects Late side effects occur months to years after treatment and are generally limited to the area that has been treated. They are often due to damage of blood vessels and connective tissue cells. Many late effects are reduced by fractionating treatment into smaller parts. Fibrosis Tissues which have been irradiated tend to become less elastic over time due to a diffuse scarring process. Epilation Epilation (hair loss) may occur on any hair-bearing skin with doses above 1 Gy. It only occurs within the radiation field(s). Hair loss may be permanent with a single dose of 10 Gy, but if the dose is fractionated, permanent hair loss may not occur until the dose exceeds 45 Gy. Dryness The salivary glands and tear glands have a radiation tolerance of about 30 Gy in 2 Gy fractions, a dose which is exceeded by most radical head and neck cancer treatments. Dry mouth (xerostomia) and dry eyes (xerophthalmia) can become irritating long-term problems and severely reduce the patient's quality of life. Similarly, sweat glands in treated skin (such as the armpit) tend to stop working, and the naturally moist vaginal mucosa is often dry following pelvic irradiation.
Chronic sinus drainage Radiation therapy treatments to the head and neck regions for soft tissue, palate or bone cancer can cause chronic sinus tract draining and fistulae from the bone. Lymphedema Lymphedema, a condition of localized fluid retention and tissue swelling, can result from damage to the lymphatic system sustained during radiation therapy. It is the most commonly reported complication in breast radiation therapy patients who receive adjuvant axillary radiotherapy following surgery to clear the axillary lymph nodes. Cancer Radiation is a potential cause of cancer, and secondary malignancies are seen in some patients. Cancer survivors are already more likely than the general population to develop malignancies due to a number of factors including lifestyle choices, genetics, and previous radiation treatment. It is difficult to directly quantify the rates of these secondary cancers from any single cause. Studies have found radiation therapy to be the cause of secondary malignancies for only a small minority of patients; for example, exposure to ionizing radiation is an identified risk factor for subsequent glioma (see main topic Glioma#Causes). The combined risk of a radiation-induced glioblastoma or astrocytoma within 15 years of the initial radiotherapy is 0.5–2.7%. New techniques, such as proton beam therapy and carbon ion radiotherapy, which aim to reduce dose to healthy tissues, will lower these risks. Radiation-induced second cancer typically starts to occur 4–6 years following treatment, although some haematological malignancies may develop within 3 years. In the vast majority of cases, this risk is greatly outweighed by the reduction in risk conferred by treating the primary cancer, even in pediatric malignancies, which carry a higher burden of secondary malignancies. Cardiovascular disease Radiation can increase the risk of heart disease and death, as observed in previous breast cancer RT regimens. Therapeutic radiation increases the risk of a subsequent cardiovascular event (i.e., heart attack or stroke) by 1.5 to 4 times a person's normal rate, aggravating factors included. The increase is dose dependent, related to the RT's dose strength, volume and location. Use of concomitant chemotherapy, e.g. anthracyclines, is an aggravating risk factor. The occurrence rate of RT-induced cardiovascular disease is estimated at between 10 and 30%. Cardiovascular late side effects have been termed radiation-induced heart disease (RIHD) and radiation-induced cardiovascular disease (RIVD). Symptoms are dose dependent and include cardiomyopathy, myocardial fibrosis, valvular heart disease, coronary artery disease, heart arrhythmia and peripheral artery disease. Radiation-induced fibrosis, vascular cell damage and oxidative stress can lead to these and other late side effect symptoms. Most radiation-induced cardiovascular diseases occur 10 or more years post treatment, making causality determinations more difficult. Cognitive decline In cases of radiation applied to the head, radiation therapy may cause cognitive decline. Cognitive decline was especially apparent in young children, between the ages of 5 and 11. Studies found, for example, that the IQ of 5-year-old children declined each year after treatment by several IQ points. Radiation enteropathy The gastrointestinal tract can be damaged following abdominal and pelvic radiotherapy. Atrophy, fibrosis and vascular changes produce malabsorption, diarrhea, steatorrhea and bleeding, with bile acid diarrhea and vitamin B12 malabsorption commonly found due to ileal involvement.
Pelvic radiation disease includes radiation proctitis, producing bleeding, diarrhoea and urgency, and can also cause radiation cystitis when the bladder is affected. Lung injury Radiation-induced lung injury (RILI) encompasses radiation pneumonitis and pulmonary fibrosis. Lung tissue is sensitive to ionizing radiation, tolerating only 18–20 Gy, a fraction of typical therapeutic dosage levels. The lung's terminal airways and associated alveoli can become damaged, preventing effective respiratory gas exchange. The adverse effects of radiation are often asymptomatic with clinically significant RILI occurrence rates varying widely in literature, affecting 5–25% of those treated for thoracic and mediastinal malignancies and 1–5% of those treated for breast cancer. Radiation-induced polyneuropathy Radiation treatments may damage nerves near the target area or within the delivery path as nerve tissue is also radiosensitive. Nerve damage from ionizing radiation occurs in phases, the initial phase from microvascular injury, capillary damage and nerve demyelination. Subsequent damage occurs from vascular constriction and nerve compression due to uncontrolled fibrous tissue growth caused by radiation. Radiation-induced polyneuropathy, ICD-10-CM Code G62.82, occurs in approximately 1–5% of those receiving radiation therapy. Depending upon the irradiated zone, late effect neuropathy may occur in either the central nervous system (CNS) or the peripheral nervous system (PNS). In the CNS for example, cranial nerve injury typically presents as a visual acuity loss 1–14 years post treatment. In the PNS, injury to the plexus nerves presents as radiation-induced brachial plexopathy or radiation-induced lumbosacral plexopathy appearing up to 3 decades post treatment. Myokymia (muscle cramping, spasms or twitching) may develop. Radiation-induced nerve injury, chronic compressive neuropathies and polyradiculopathies are the most common cause of myokymic discharges. Clinically, the majority of patients receiving radiation therapy have measurable myokymic discharges within their field of radiation which present as focal or segmental myokymia. Common areas affected include the arms, legs or face depending upon the location of nerve injury. Myokymia is more frequent when radiation doses exceed 10 gray (Gy). Radiation necrosis Radiation necrosis is the death of healthy tissue near the irradiated site. It is a type of coagulative necrosis that occurs because the radiation directly or indirectly damages blood vessels in the area, which reduces the blood supply to the remaining healthy tissue, causing it to die by ischemia, similar to what happens in an ischemic stroke. Because it is an indirect effect of the treatment, it occurs months to decades after radiation exposure. Radiation necrosis most commonly presents as osteoradionecrosis, vaginal radionecrosis, soft tissue radionecrosis, or laryngeal radionecrosis. Cumulative side effects Cumulative effects from this process should not be confused with long-term effects – when short-term effects have disappeared and long-term effects are subclinical, reirradiation can still be problematic. These doses are calculated by the radiation oncologist and many factors are taken into account before the subsequent radiation takes place. Effects on reproduction During the first two weeks after fertilization, radiation therapy is lethal but not teratogenic. 
High doses of radiation during pregnancy induce anomalies, impaired growth and intellectual disability, and there may be an increased risk of childhood leukemia and other tumors in the offspring. In males previously having undergone radiotherapy, there appears to be no increase in genetic defects or congenital malformations in their children conceived after therapy. However, the use of assisted reproductive technologies and micromanipulation techniques might increase this risk. Effects on pituitary system Hypopituitarism commonly develops after radiation therapy for sellar and parasellar neoplasms, extrasellar brain tumors, head and neck tumors, and following whole body irradiation for systemic malignancies. 40–50% of children treated for childhood cancer develop some endocrine side effect. Radiation-induced hypopituitarism mainly affects growth hormone and gonadal hormones. In contrast, adrenocorticotrophic hormone (ACTH) and thyroid stimulating hormone (TSH) deficiencies are the least common among people with radiation-induced hypopituitarism. Changes in prolactin-secretion is usually mild, and vasopressin deficiency appears to be very rare as a consequence of radiation. Effects on subsequent surgery Delayed tissue injury with impaired wound healing capability often develops after receiving doses in excess of 65 Gy. A diffuse injury pattern due to the external beam radiotherapy's holographic isodosing occurs. While the targeted tumor receives the majority of radiation, healthy tissue at incremental distances from the center of the tumor are also irradiated in a diffuse pattern due to beam divergence. These wounds demonstrate progressive, proliferative endarteritis, inflamed arterial linings that disrupt the tissue's blood supply. Such tissue ends up chronically hypoxic, fibrotic, and without an adequate nutrient and oxygen supply. Surgery of previously irradiated tissue has a very high failure rate, e.g. women who have received radiation for breast cancer develop late effect chest wall tissue fibrosis and hypovascularity, making successful reconstruction and healing difficult, if not impossible. Radiation therapy accidents There are rigorous procedures in place to minimise the risk of accidental overexposure of radiation therapy to patients. However, mistakes do occasionally occur; for example, the radiation therapy machine Therac-25 was responsible for at least six accidents between 1985 and 1987, where patients were given up to one hundred times the intended dose; two people were killed directly by the radiation overdoses. From 2005 to 2010, a hospital in Missouri overexposed 76 patients (most with brain cancer) during a five-year period because new radiation equipment had been set up incorrectly. Although medical errors are exceptionally rare, radiation oncologists, medical physicists and other members of the radiation therapy treatment team are working to eliminate them. In 2010 the American Society for Radiation Oncology (ASTRO) launched a safety initiative called Target Safely that, among other things, aimed to record errors nationwide so that doctors can learn from each and every mistake and prevent them from recurring. ASTRO also publishes a list of questions for patients to ask their doctors about radiation safety to ensure every treatment is as safe as possible. Use in non-cancerous diseases Radiation therapy is used to treat early stage Dupuytren's disease and Ledderhose disease. 
When Dupuytren's disease is at the nodules and cords stage or fingers are at a minimal deformation stage of less than 10 degrees, then radiation therapy is used to prevent further progress of the disease. Radiation therapy is also used post surgery in some cases to prevent the disease continuing to progress. Low doses of radiation are used typically three gray of radiation for five days, with a break of three months followed by another phase of three gray of radiation for five days. Technique Mechanism of action Radiation therapy works by damaging the DNA of cancer cells and can cause them to undergo mitotic catastrophe. This DNA damage is caused by one of two types of energy, photon or charged particle. This damage is either direct or indirect ionization of the atoms which make up the DNA chain. Indirect ionization happens as a result of the ionization of water, forming free radicals, notably hydroxyl radicals, which then damage the DNA. In photon therapy, most of the radiation effect is through free radicals. Cells have mechanisms for repairing single-strand DNA damage and double-stranded DNA damage. However, double-stranded DNA breaks are much more difficult to repair, and can lead to dramatic chromosomal abnormalities and genetic deletions. Targeting double-stranded breaks increases the probability that cells will undergo cell death. Cancer cells are generally less differentiated and more stem cell-like; they reproduce more than most healthy differentiated cells, and have a diminished ability to repair sub-lethal damage. Single-strand DNA damage is then passed on through cell division; damage to the cancer cells' DNA accumulates, causing them to die or reproduce more slowly. One of the major limitations of photon radiation therapy is that the cells of solid tumors become deficient in oxygen. Solid tumors can outgrow their blood supply, causing a low-oxygen state known as hypoxia. Oxygen is a potent radiosensitizer, increasing the effectiveness of a given dose of radiation by forming DNA-damaging free radicals. Tumor cells in a hypoxic environment may be as much as 2 to 3 times more resistant to radiation damage than those in a normal oxygen environment. Much research has been devoted to overcoming hypoxia including the use of high pressure oxygen tanks, hyperthermia therapy (heat therapy which dilates blood vessels to the tumor site), blood substitutes that carry increased oxygen, hypoxic cell radiosensitizer drugs such as misonidazole and metronidazole, and hypoxic cytotoxins (tissue poisons), such as tirapazamine. Newer research approaches are currently being studied, including preclinical and clinical investigations into the use of an oxygen diffusion-enhancing compound such as trans sodium crocetinate as a radiosensitizer. Charged particles such as protons and boron, carbon, and neon ions can cause direct damage to cancer cell DNA through high-LET (linear energy transfer) and have an antitumor effect independent of tumor oxygen supply because these particles act mostly via direct energy transfer usually causing double-stranded DNA breaks. Due to their relatively large mass, protons and other charged particles have little lateral side scatter in the tissue – the beam does not broaden much, stays focused on the tumor shape, and delivers small dose side-effects to surrounding tissue. They also more precisely target the tumor using the Bragg peak effect. See proton therapy for a good example of the different effects of intensity-modulated radiation therapy (IMRT) vs. 
charged particle therapy. This procedure reduces damage to healthy tissue between the charged particle radiation source and the tumor and sets a finite range for tissue damage after the tumor has been reached. In contrast, IMRT's use of uncharged particles causes its energy to damage healthy cells when it exits the body. This exiting damage is not therapeutic, can increase treatment side effects, and increases the probability of secondary cancer induction. This difference is very important in cases where the close proximity of other organs makes any stray ionization very damaging (example: head and neck cancers). This X-ray exposure is especially bad for children, due to their growing bodies, and while depending on a multitude of factors, they are around 10 times more sensitive to developing secondary malignancies after radiotherapy as compared to adults. Dose The amount of radiation used in photon radiation therapy is measured in grays (Gy), and varies depending on the type and stage of cancer being treated. For curative cases, the typical dose for a solid epithelial tumor ranges from 60 to 80 Gy, while lymphomas are treated with 20 to 40 Gy. Preventive (adjuvant) doses are typically around 45–60 Gy in 1.8–2 Gy fractions (for breast, head, and neck cancers.) Many other factors are considered by radiation oncologists when selecting a dose, including whether the patient is receiving chemotherapy, patient comorbidities, whether radiation therapy is being administered before or after surgery, and the degree of success of surgery. Delivery parameters of a prescribed dose are determined during treatment planning (part of dosimetry). Treatment planning is generally performed on dedicated computers using specialized treatment planning software. Depending on the radiation delivery method, several angles or sources may be used to sum to the total necessary dose. The planner will try to design a plan that delivers a uniform prescription dose to the tumor and minimizes dose to surrounding healthy tissues. In radiation therapy, three-dimensional dose distributions may be evaluated using the dosimetry technique known as gel dosimetry. Fractionation The total dose is fractionated (spread out over time) for several important reasons. Fractionation allows normal cells time to recover, while tumor cells are generally less efficient in repair between fractions. Fractionation also allows tumor cells that were in a relatively radio-resistant phase of the cell cycle during one treatment to cycle into a sensitive phase of the cycle before the next fraction is given. Similarly, tumor cells that were chronically or acutely hypoxic (and therefore more radioresistant) may reoxygenate between fractions, improving the tumor cell kill. Fractionation regimens are individualised between different radiation therapy centers and even between individual doctors. In North America, Australia, and Europe, the typical fractionation schedule for adults is 1.8 to 2 Gy per day, five days a week. In some cancer types, prolongation of the fraction schedule over too long can allow for the tumor to begin repopulating, and for these tumor types, including head-and-neck and cervical squamous cell cancers, radiation treatment is preferably completed within a certain amount of time. For children, a typical fraction size may be 1.5 to 1.8 Gy per day, as smaller fraction sizes are associated with reduced incidence and severity of late-onset side effects in normal tissues. 
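Since a prescribed course is simply the total dose divided into daily fractions, the number of fractions and the calendar length of a course follow directly. The sketch below works this out for an illustrative curative prescription using the typical values quoted above; it is an arithmetic illustration only, not clinical guidance.

```python
import math

def fractionation_schedule(total_dose_gy, dose_per_fraction_gy, fractions_per_week=5):
    """Number of fractions and approximate duration for a simple daily schedule."""
    n_fractions = math.ceil(total_dose_gy / dose_per_fraction_gy)
    weeks = n_fractions / fractions_per_week
    return n_fractions, weeks

# Illustrative curative prescription within the ranges quoted in the text:
total_dose = 70.0        # Gy, within the 60-80 Gy range for a solid epithelial tumor
dose_per_fraction = 2.0  # Gy, a typical adult fraction size

n, weeks = fractionation_schedule(total_dose, dose_per_fraction)
print(f"{total_dose:.0f} Gy at {dose_per_fraction:.0f} Gy/fraction: "
      f"{n} fractions, about {weeks:.0f} weeks at 5 fractions per week")
```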
In some cases, two fractions per day are used near the end of a course of treatment. This schedule, known as a concomitant boost regimen or hyperfractionation, is used on tumors that regenerate more quickly when they are smaller. In particular, tumors in the head-and-neck demonstrate this behavior. Patients receiving palliative radiation to treat uncomplicated painful bone metastasis should not receive more than a single fraction of radiation. A single treatment gives comparable pain relief and morbidity outcomes to multiple-fraction treatments, and for patients with limited life expectancy, a single treatment is best to improve patient comfort. Schedules for fractionation One fractionation schedule that is increasingly being used and continues to be studied is hypofractionation. This is a radiation treatment in which the total dose of radiation is divided into large doses. Typical doses vary significantly by cancer type, from 2.2 Gy/fraction to 20 Gy/fraction, the latter being typical of stereotactic treatments (stereotactic ablative body radiotherapy, or SABR – also known as SBRT, or stereotactic body radiotherapy) for subcranial lesions, or SRS (stereotactic radiosurgery) for intracranial lesions. The rationale of hypofractionation is to reduce the probability of local recurrence by denying clonogenic cells the time they require to reproduce and also to exploit the radiosensitivity of some tumors. In particular, stereotactic treatments are intended to destroy clonogenic cells by a process of ablation, i.e., the delivery of a dose intended to destroy clonogenic cells directly, rather than to interrupt the process of clonogenic cell division repeatedly (apoptosis), as in routine radiotherapy. Estimation of dose based on target sensitivity Different cancer types have different radiation sensitivity. While predicting the sensitivity based on genomic or proteomic analyses of biopsy samples has proven challenging, the predictions of radiation effect on individual patients from genomic signatures of intrinsic cellular radiosensitivity have been shown to associate with clinical outcome. An alternative approach to genomics and proteomics was offered by the discovery that radiation protection in microbes is offered by non-enzymatic complexes of manganese and small organic metabolites. The content and variation of manganese (measurable by electron paramagnetic resonance) were found to be good predictors of radiosensitivity, and this finding extends also to human cells. An association was confirmed between total cellular manganese contents and their variation, and clinically inferred radioresponsiveness in different tumor cells, a finding that may be useful for more precise radiodosages and improved treatment of cancer patients. Types Historically, the three main divisions of radiation therapy are: external beam radiation therapy (EBRT or XRT) or teletherapy; brachytherapy or sealed source radiation therapy; and systemic radioisotope therapy or unsealed source radiotherapy. The differences relate to the position of the radiation source; external is outside the body, brachytherapy uses sealed radioactive sources placed precisely in the area under treatment, and systemic radioisotopes are given by infusion or oral ingestion. Brachytherapy can use temporary or permanent placement of radioactive sources. The temporary sources are usually placed by a technique called afterloading. 
In afterloading a hollow tube or applicator is placed surgically in the organ to be treated, and the sources are loaded into the applicator after the applicator is implanted. This minimizes radiation exposure to health care personnel. Particle therapy is a special case of external beam radiation therapy where the particles are protons or heavier ions. A review of radiation therapy randomised clinical trials from 2018 to 2021 found many practice-changing data and new concepts that emerge from RCTs, identifying techniques that improve the therapeutic ratio, techniques that lead to more tailored treatments, stressing the importance of patient satisfaction, and identifying areas that require further study. External beam radiation therapy The following three sections refer to treatment using X-rays. Conventional external beam radiation therapy Historically conventional external beam radiation therapy (2DXRT) was delivered via two-dimensional beams using kilovoltage therapy X-ray units, medical linear accelerators that generate high-energy X-rays, or with machines that were similar to a linear accelerator in appearance, but used a sealed radioactive source like the one shown above. 2DXRT mainly consists of a single beam of radiation delivered to the patient from several directions: often front or back, and both sides. Conventional refers to the way the treatment is planned or simulated on a specially calibrated diagnostic X-ray machine known as a simulator because it recreates the linear accelerator actions (or sometimes by eye), and to the usually well-established arrangements of the radiation beams to achieve a desired plan. The aim of simulation is to accurately target or localize the volume which is to be treated. This technique is well established and is generally quick and reliable. The worry is that some high-dose treatments may be limited by the radiation toxicity capacity of healthy tissues which lie close to the target tumor volume. An example of this problem is seen in radiation of the prostate gland, where the sensitivity of the adjacent rectum limited the dose which could be safely prescribed using 2DXRT planning to such an extent that tumor control may not be easily achievable. Prior to the invention of the CT, physicians and physicists had limited knowledge about the true radiation dosage delivered to both cancerous and healthy tissue. For this reason, 3-dimensional conformal radiation therapy has become the standard treatment for almost all tumor sites. More recently other forms of imaging are used including MRI, PET, SPECT and Ultrasound. Stereotactic radiation Stereotactic radiation is a specialized type of external beam radiation therapy. It uses focused radiation beams targeting a well-defined tumor using extremely detailed imaging scans. Radiation oncologists perform stereotactic treatments, often with the help of a neurosurgeon for tumors in the brain or spine. There are two types of stereotactic radiation. Stereotactic radiosurgery (SRS) is when doctors use a single or several stereotactic radiation treatments of the brain or spine. Stereotactic body radiation therapy (SBRT) refers to one or several stereotactic radiation treatments with the body, such as the lungs. Some doctors say an advantage to stereotactic treatments is that they deliver the right amount of radiation to the cancer in a shorter amount of time than traditional treatments, which can often take 6 to 11 weeks. 
Plus treatments are given with extreme accuracy, which should limit the effect of the radiation on healthy tissues. One problem with stereotactic treatments is that they are only suitable for certain small tumors. Stereotactic treatments can be confusing because many hospitals call the treatments by the name of the manufacturer rather than calling it SRS or SBRT. Brand names for these treatments include Axesse, Cyberknife, Gamma Knife, Novalis, Primatom, Synergy, X-Knife, TomoTherapy, Trilogy and Truebeam. This list changes as equipment manufacturers continue to develop new, specialized technologies to treat cancers. Virtual simulation, and 3-dimensional conformal radiation therapy The planning of radiation therapy treatment has been revolutionized by the ability to delineate tumors and adjacent normal structures in three dimensions using specialized CT and/or MRI scanners and planning software. Virtual simulation, the most basic form of planning, allows more accurate placement of radiation beams than is possible using conventional X-rays, where soft-tissue structures are often difficult to assess and normal tissues difficult to protect. An enhancement of virtual simulation is 3-dimensional conformal radiation therapy (3DCRT), in which the profile of each radiation beam is shaped to fit the profile of the target from a beam's eye view (BEV) using a multileaf collimator (MLC) and a variable number of beams. When the treatment volume conforms to the shape of the tumor, the relative toxicity of radiation to the surrounding normal tissues is reduced, allowing a higher dose of radiation to be delivered to the tumor than conventional techniques would allow. Intensity-modulated radiation therapy (IMRT) Intensity-modulated radiation therapy (IMRT) is an advanced type of high-precision radiation that is the next generation of 3DCRT. IMRT also improves the ability to conform the treatment volume to concave tumor shapes, for example when the tumor is wrapped around a vulnerable structure such as the spinal cord or a major organ or blood vessel. Computer-controlled X-ray accelerators distribute precise radiation doses to malignant tumors or specific areas within the tumor. The pattern of radiation delivery is determined using highly tailored computing applications to perform optimization and treatment simulation (Treatment Planning). The radiation dose is consistent with the 3-D shape of the tumor by controlling, or modulating, the radiation beam's intensity. The radiation dose intensity is elevated near the gross tumor volume while radiation among the neighboring normal tissues is decreased or avoided completely. This results in better tumor targeting, lessened side effects, and improved treatment outcomes than even 3DCRT. 3DCRT is still used extensively for many body sites but the use of IMRT is growing in more complicated body sites such as CNS, head and neck, prostate, breast, and lung. Unfortunately, IMRT is limited by its need for additional time from experienced medical personnel. This is because physicians must manually delineate the tumors one CT image at a time through the entire disease site which can take much longer than 3DCRT preparation. Then, medical physicists and dosimetrists must be engaged to create a viable treatment plan. 
Also, the IMRT technology has only been used commercially since the late 1990s even at the most advanced cancer centers, so radiation oncologists who did not learn it as part of their residency programs must find additional sources of education before implementing IMRT. Proof of improved survival benefit from either of these two techniques over conventional radiation therapy (2DXRT) is growing for many tumor sites, but the ability to reduce toxicity is generally accepted. This is particularly the case for head and neck cancers in a series of pivotal trials performed by Professor Christopher Nutting of the Royal Marsden Hospital. Both techniques enable dose escalation, potentially increasing usefulness. There has been some concern, particularly with IMRT, about increased exposure of normal tissue to radiation and the consequent potential for secondary malignancy. Overconfidence in the accuracy of imaging may increase the chance of missing lesions that are invisible on the planning scans (and therefore not included in the treatment plan) or that move between or during a treatment (for example, due to respiration or inadequate patient immobilization). New techniques are being developed to better control this uncertainty – for example, real-time imaging combined with real-time adjustment of the therapeutic beams. This new technology is called image-guided radiation therapy or four-dimensional radiation therapy. Another technique is the real-time tracking and localization of one or more small implantable electric devices implanted inside or close to the tumor. There are various types of medical implantable devices that are used for this purpose. It can be a magnetic transponder which senses the magnetic field generated by several transmitting coils, and then transmits the measurements back to the positioning system to determine the location. The implantable device can also be a small wireless transmitter sending out an RF signal which then will be received by a sensor array and used for localization and real-time tracking of the tumor position. A well-studied issue with IMRT is the "tongue and groove effect" which results in unwanted underdosing, due to irradiating through extended tongues and grooves of overlapping MLC (multileaf collimator) leaves. While solutions to this issue have been developed, which either reduce the TG effect to negligible amounts or remove it completely, they depend upon the method of IMRT being used and some of them carry costs of their own. Some texts distinguish "tongue and groove error" from "tongue or groove error", according as both or one side of the aperture is occluded. Volumetric modulated arc therapy (VMAT) Volumetric modulated arc therapy (VMAT) is a radiation technique introduced in 2007 which can achieve highly conformal dose distributions on target volume coverage and sparing of normal tissues. The specificity of this technique is to modify three parameters during the treatment. VMAT delivers radiation by rotating gantry (usually 360° rotating fields with one or more arcs), changing speed and shape of the beam with a multileaf collimator (MLC) ("sliding window" system of moving) and fluence output rate (dose rate) of the medical linear accelerator. VMAT has an advantage in patient treatment, compared with conventional static field intensity modulated radiotherapy (IMRT), of reduced radiation delivery times. Comparisons between VMAT and conventional IMRT for their sparing of healthy tissues and Organs at Risk (OAR) depends upon the cancer type. 
In the treatment of nasopharyngeal, oropharyngeal and hypopharyngeal carcinomas, VMAT provides equivalent or better protection of the organ at risk (OAR). In the treatment of prostate cancer the OAR protection results are mixed, with some studies favoring VMAT and others favoring IMRT. Temporally feathered radiation therapy (TFRT) Temporally feathered radiation therapy (TFRT) is a radiation technique introduced in 2018 which aims to use the inherent non-linearities in normal tissue repair to allow for sparing of these tissues without affecting the dose delivered to the tumor. The application of this technique, which has yet to be automated, has been described carefully to enhance the ability of departments to perform it, and in 2021 it was reported as feasible in a small clinical trial, though its efficacy has yet to be formally studied. Automated planning Automated treatment planning has become an integral part of radiotherapy treatment planning. In general there are two approaches to automated planning. 1) Knowledge-based planning, where the treatment planning system has a library of high-quality plans from which it can predict achievable dose-volume histograms for the target and the organs at risk. 2) The other approach is commonly called protocol-based planning, where the treatment planning system tries to mimic an experienced treatment planner and, through an iterative process, evaluates plan quality on the basis of the protocol. Particle therapy In particle therapy (proton therapy being one example), energetic ionizing particles (protons or carbon ions) are directed at the target tumor. The dose increases while the particle penetrates the tissue, up to a maximum (the Bragg peak) that occurs near the end of the particle's range, and it then drops to (almost) zero. The advantage of this energy deposition profile is that less energy is deposited into the healthy tissue surrounding the target tissue. Auger therapy Auger therapy (AT) makes use of a very high dose of ionizing radiation in situ that provides molecular modifications at an atomic scale. AT differs from conventional radiation therapy in several aspects; it neither relies upon radioactive nuclei to cause cellular radiation damage at a cellular dimension, nor engages multiple external pencil-beams from different directions to zero in on and deliver a dose to the targeted area with reduced dose outside the targeted tissue/organ locations. Instead, the in situ delivery of a very high dose at the molecular level using AT aims for in situ molecular modifications involving molecular breakages and molecular re-arrangements, such as a change of stacking structures as well as of cellular metabolic functions related to the said molecular structures. Motion compensation In many types of external beam radiotherapy, motion can negatively impact the treatment delivery by moving target tissue out of, or other healthy tissue into, the intended beam path. Some form of patient immobilisation is common, to prevent the large movements of the body during treatment; however, this cannot prevent all motion, for example as a result of breathing. Several techniques have been developed to account for motion like this. Deep inspiration breath-hold (DIBH) is commonly used for breast treatments where it is important to avoid irradiating the heart. In DIBH the patient holds their breath after breathing in to provide a stable position for the treatment beam to be turned on. 
This can be done automatically using an external monitoring system such as a spirometer or a camera and markers. The same monitoring techniques, as well as 4DCT imaging, can also be used for respiratory-gated treatment, where the patient breathes freely and the beam is only engaged at certain points in the breathing cycle. Other techniques include using 4DCT imaging to plan treatments with margins that account for motion, and active movement of the treatment couch, or beam, to follow motion. Contact X-ray brachytherapy Contact X-ray brachytherapy (also called "CXB", "electronic brachytherapy" or the "Papillon Technique") is a type of radiation therapy using low-energy (50 kVp) kilovoltage X-rays applied directly to the tumor to treat rectal cancer. The process involves an endoscopic examination first to identify the tumor in the rectum, then insertion of a treatment applicator through the anus into the rectum, placing it against the cancerous tissue. Finally, a treatment tube is inserted into the applicator to deliver a high dose of X-rays (30 Gy) directly onto the tumor; three such fractions are given at two-week intervals over a four-week period. It is typically used for treating early rectal cancer in patients who may not be candidates for surgery. A 2015 NICE review found the main side effects to be bleeding, which occurred in about 38% of cases, and radiation-induced ulcers, which occurred in 27% of cases. Brachytherapy (sealed source radiotherapy) Brachytherapy is delivered by placing radiation source(s) inside or next to the area requiring treatment. Brachytherapy is commonly used as an effective treatment for cervical, prostate, breast, and skin cancer and can also be used to treat tumors in many other body sites. In brachytherapy, radiation sources are precisely placed directly at the site of the cancerous tumor. This means that the irradiation only affects a very localized area – exposure to radiation of healthy tissues further away from the sources is reduced. These characteristics of brachytherapy provide advantages over external beam radiation therapy – the tumor can be treated with very high doses of localized radiation, whilst reducing the probability of unnecessary damage to surrounding healthy tissues. A course of brachytherapy can often be completed in less time than other radiation therapy techniques. This can help reduce the chance of surviving cancer cells dividing and growing in the intervals between each radiation therapy dose. As one example of the localized nature of breast brachytherapy, the SAVI device delivers the radiation dose through multiple catheters, each of which can be individually controlled. This approach decreases the exposure of healthy tissue and resulting side effects, compared both to external beam radiation therapy and older methods of breast brachytherapy. Radionuclide therapy Radionuclide therapy (also known as systemic radioisotope therapy, radiopharmaceutical therapy, or molecular radiotherapy) is a form of targeted therapy. Targeting can be due to the chemical properties of the isotope, such as radioiodine, which is specifically absorbed by the thyroid gland a thousandfold better than by other bodily organs. Targeting can also be achieved by attaching the radioisotope to another molecule or antibody to guide it to the target tissue. The radioisotopes are delivered through infusion (into the bloodstream) or ingestion. 
Examples are the infusion of metaiodobenzylguanidine (MIBG) to treat neuroblastoma, of oral iodine-131 to treat thyroid cancer or thyrotoxicosis, and of hormone-bound lutetium-177 and yttrium-90 to treat neuroendocrine tumors (peptide receptor radionuclide therapy). Another example is the injection of radioactive yttrium-90 or holmium-166 microspheres into the hepatic artery to radioembolize liver tumors or liver metastases. These microspheres are used for the treatment approach known as selective internal radiation therapy. The microspheres are approximately 30 μm in diameter (about one-third of a human hair) and are delivered directly into the artery supplying blood to the tumors. These treatments begin by guiding a catheter up through the femoral artery in the leg, navigating to the desired target site and administering treatment. The blood feeding the tumor will carry the microspheres directly to the tumor enabling a more selective approach than traditional systemic chemotherapy. There are currently three different kinds of microspheres: SIR-Spheres, TheraSphere and QuiremSpheres. A major use of systemic radioisotope therapy is in the treatment of bone metastasis from cancer. The radioisotopes travel selectively to areas of damaged bone, and spare normal undamaged bone. Isotopes commonly used in the treatment of bone metastasis are radium-223, strontium-89 and samarium (153Sm) lexidronam. In 2002, the United States Food and Drug Administration (FDA) approved ibritumomab tiuxetan (Zevalin), which is an anti-CD20 monoclonal antibody conjugated to yttrium-90. In 2003, the FDA approved the tositumomab/iodine (131I) tositumomab regimen (Bexxar), which is a combination of an iodine-131 labelled and an unlabelled anti-CD20 monoclonal antibody. These medications were the first agents of what is known as radioimmunotherapy, and they were approved for the treatment of refractory non-Hodgkin's lymphoma. Intraoperative radiotherapy Intraoperative radiation therapy (IORT) is applying therapeutic levels of radiation to a target area, such as a cancer tumor, while the area is exposed during surgery. Rationale The rationale for IORT is to deliver a high dose of radiation precisely to the targeted area with minimal exposure of surrounding tissues which are displaced or shielded during the IORT. Conventional radiation techniques such as external beam radiotherapy (EBRT) following surgical removal of the tumor have several drawbacks: The tumor bed where the highest dose should be applied is frequently missed due to the complex localization of the wound cavity even when modern radiotherapy planning is used. Additionally, the usual delay between the surgical removal of the tumor and EBRT may allow a repopulation of the tumor cells. These potentially harmful effects can be avoided by delivering the radiation more precisely to the targeted tissues leading to immediate sterilization of residual tumor cells. Another aspect is that wound fluid has a stimulating effect on tumor cells. IORT was found to inhibit the stimulating effects of wound fluid. History Medicine has used radiation therapy as a treatment for cancer for more than 100 years, with its earliest roots traced from the discovery of X-rays in 1895 by Wilhelm Röntgen. Emil Grubbe of Chicago was possibly the first American physician to use X-rays to treat cancer, beginning in 1896. 
The field of radiation therapy began to grow in the early 1900s largely due to the groundbreaking work of Nobel Prize–winning scientist Marie Curie (1867–1934), who discovered the radioactive elements polonium and radium in 1898. This began a new era in medical treatment and research. Through the 1920s the hazards of radiation exposure were not understood, and little protection was used. Radium was believed to have wide curative powers and radiotherapy was applied to many diseases. Prior to World War 2, the only practical sources of radiation for radiotherapy were radium, its "emanation", radon gas, and the X-ray tube. External beam radiotherapy (teletherapy) began at the turn of the century with relatively low voltage (<150 kV) X-ray machines. It was found that while superficial tumors could be treated with low voltage X-rays, more penetrating, higher energy beams were required to reach tumors inside the body, requiring higher voltages. Orthovoltage X-rays, which used tube voltages of 200-500 kV, began to be used during the 1920s. To reach the most deeply buried tumors without exposing intervening skin and tissue to dangerous radiation doses required rays with energies of 1 MV or above, called "megavolt" radiation. Producing megavolt X-rays required voltages on the X-ray tube of 3 to 5 million volts, which required huge expensive installations. Megavoltage X-ray units were first built in the late 1930s but because of cost were limited to a few institutions. One of the first, installed at St. Bartholomew's hospital, London in 1937 and used until 1960, used a 30 foot long X-ray tube and weighed 10 tons. Radium produced megavolt gamma rays, but was extremely rare and expensive due to its low occurrence in ores. In 1937 the entire world supply of radium for radiotherapy was 50 grams, valued at £800,000, or $50 million in 2005 dollars. The invention of the nuclear reactor in the Manhattan Project during World War 2 made possible the production of artificial radioisotopes for radiotherapy. Cobalt therapy, teletherapy machines using megavolt gamma rays emitted by cobalt-60, a radioisotope produced by irradiating ordinary cobalt metal in a reactor, revolutionized the field between the 1950s and the early 1980s. Cobalt machines were relatively cheap, robust and simple to use, although due to its 5.27 year half-life the cobalt had to be replaced about every 5 years. Medical linear particle accelerators, developed since the 1940s, began replacing X-ray and cobalt units in the 1980s and these older therapies are now declining. The first medical linear accelerator was used at the Hammersmith Hospital in London in 1953. Linear accelerators can produce higher energies, have more collimated beams, and do not produce radioactive waste with its attendant disposal problems like radioisotope therapies. With Godfrey Hounsfield's invention of computed tomography (CT) in 1971, three-dimensional planning became a possibility and created a shift from 2-D to 3-D radiation delivery. CT-based planning allows physicians to more accurately determine the dose distribution using axial tomographic images of the patient's anatomy. The advent of new imaging technologies, including magnetic resonance imaging (MRI) in the 1970s and positron emission tomography (PET) in the 1980s, has moved radiation therapy from 3-D conformal to intensity-modulated radiation therapy (IMRT) and to image-guided radiation therapy tomotherapy. 
These advances allowed radiation oncologists to better see and target tumors, resulting in better treatment outcomes, more organ preservation and fewer side effects. While access to radiotherapy is improving globally, more than half of patients in low- and middle-income countries still did not have access to the therapy as of 2017. See also Beam spoiler Cancer and nausea Fast neutron therapy Neutron capture therapy of cancer Particle beam Radiation therapist Selective internal radiation therapy Treatment of cancer References Further reading External links Information Human Health Campus The official website of the International Atomic Energy Agency dedicated to Professionals in Radiation Medicine. This site is managed by the Division of Human Health, Department of Nuclear Sciences and Applications RT Answers – ASTRO: patient information site The Radiation Therapy Oncology Group: an organisation for radiation oncology research RadiologyInfo – The radiology information resource for patients: Radiation Therapy Source of cancer stem cells' resistance to radiation explained on YouTube. Biologically equivalent dose calculator Radiobiology Treatment Gap Compensator Calculator About the profession PROS (Paediatric Radiation Oncology Society) American Society for Radiation Oncology European Society for Radiotherapy and Oncology Who does what in Radiation Oncology? – Responsibilities of the various personnel within Radiation Oncology in the United States Accidents and QA Verification of dose calculations in radiation therapy Radiation Safety in External Beam Radiotherapy (IAEA) Radioactivity Radiation health effects Medical physics Radiobiology
Radiation therapy
[ "Physics", "Chemistry", "Materials_science", "Biology" ]
12,201
[ "Radiation health effects", "Applied and interdisciplinary physics", "Radiobiology", "Medical physics", "Nuclear physics", "Radiation effects", "Radioactivity" ]
26,452
https://en.wikipedia.org/wiki/Riesz%20representation%20theorem
The Riesz representation theorem, sometimes called the Riesz–Fréchet representation theorem after Frigyes Riesz and Maurice René Fréchet, establishes an important connection between a Hilbert space and its continuous dual space. If the underlying field is the real numbers, the two are isometrically isomorphic; if the underlying field is the complex numbers, the two are isometrically anti-isomorphic. The (anti-) isomorphism is a particular natural isomorphism. Preliminaries and notation Let be a Hilbert space over a field where is either the real numbers or the complex numbers If (resp. if ) then is called a (resp. a ). Every real Hilbert space can be extended to be a dense subset of a unique (up to bijective isometry) complex Hilbert space, called its complexification, which is why Hilbert spaces are often automatically assumed to be complex. Real and complex Hilbert spaces have in common many, but by no means all, properties and results/theorems. This article is intended for both mathematicians and physicists and will describe the theorem for both. In both mathematics and physics, if a Hilbert space is assumed to be real (that is, if ) then this will usually be made clear. Often in mathematics, and especially in physics, unless indicated otherwise, "Hilbert space" is usually automatically assumed to mean "complex Hilbert space." Depending on the author, in mathematics, "Hilbert space" usually means either (1) a complex Hilbert space, or (2) a real complex Hilbert space. Linear and antilinear maps By definition, an (also called a ) is a map between vector spaces that is : and (also called or ): where is the conjugate of the complex number , given by . In contrast, a map is linear if it is additive and : Every constant map is always both linear and antilinear. If then the definitions of linear maps and antilinear maps are completely identical. A linear map from a Hilbert space into a Banach space (or more generally, from any Banach space into any topological vector space) is continuous if and only if it is bounded; the same is true of antilinear maps. The inverse of any antilinear (resp. linear) bijection is again an antilinear (resp. linear) bijection. The composition of two linear maps is a map. Continuous dual and anti-dual spaces A on is a function whose codomain is the underlying scalar field Denote by (resp. by the set of all continuous linear (resp. continuous antilinear) functionals on which is called the (resp. the ) of If then linear functionals on are the same as antilinear functionals and consequently, the same is true for such continuous maps: that is, One-to-one correspondence between linear and antilinear functionals Given any functional the is the functional This assignment is most useful when because if then and the assignment reduces down to the identity map. The assignment defines an antilinear bijective correspondence from the set of all functionals (resp. all linear functionals, all continuous linear functionals ) on onto the set of all functionals (resp. all linear functionals, all continuous linear functionals ) on Mathematics vs. physics notations and definitions of inner product The Hilbert space has an associated inner product valued in 's underlying scalar field that is linear in one coordinate and antilinear in the other (as specified below). If is a complex Hilbert space (), then there is a crucial difference between the notations prevailing in mathematics versus physics, regarding which of the two variables is linear. 
However, for real Hilbert spaces (), the inner product is a symmetric map that is linear in each coordinate (bilinear), so there can be no such confusion. In mathematics, the inner product on a Hilbert space is often denoted by or while in physics, the bra–ket notation or is typically used. In this article, these two notations will be related by the equality: These have the following properties:The map is linear in its first coordinate; equivalently, the map is linear in its second coordinate. That is, for fixed the map with is a linear functional on This linear functional is continuous, so The map is antilinear in its coordinate; equivalently, the map is antilinear in its coordinate. That is, for fixed the map with is an antilinear functional on This antilinear functional is continuous, so In computations, one must consistently use either the mathematics notation , which is (linear, antilinear); or the physics notation , whch is (antilinear | linear). Canonical norm and inner product on the dual space and anti-dual space If then is a non-negative real number and the map defines a canonical norm on that makes into a normed space. As with all normed spaces, the (continuous) dual space carries a canonical norm, called the , that is defined by The canonical norm on the (continuous) anti-dual space denoted by is defined by using this same equation: This canonical norm on satisfies the parallelogram law, which means that the polarization identity can be used to define a which this article will denote by the notations where this inner product turns into a Hilbert space. There are now two ways of defining a norm on the norm induced by this inner product (that is, the norm defined by ) and the usual dual norm (defined as the supremum over the closed unit ball). These norms are the same; explicitly, this means that the following holds for every As will be described later, the Riesz representation theorem can be used to give an equivalent definition of the canonical norm and the canonical inner product on The same equations that were used above can also be used to define a norm and inner product on 's anti-dual space Canonical isometry between the dual and antidual The complex conjugate of a functional which was defined above, satisfies for every and every This says exactly that the canonical antilinear bijection defined by as well as its inverse are antilinear isometries and consequently also homeomorphisms. The inner products on the dual space and the anti-dual space denoted respectively by and are related by and If then and this canonical map reduces down to the identity map. Riesz representation theorem Two vectors and are if which happens if and only if for all scalars The orthogonal complement of a subset is which is always a closed vector subspace of The Hilbert projection theorem guarantees that for any nonempty closed convex subset of a Hilbert space there exists a unique vector such that that is, is the (unique) global minimum point of the function defined by Statement Historically, the theorem is often attributed simultaneously to Riesz and Fréchet in 1907 (see references). Let denote the underlying scalar field of Fix Define by which is a linear functional on since is in the linear argument. By the Cauchy–Schwarz inequality, which shows that is bounded (equivalently, continuous) and that It remains to show that By using in place of it follows that (the equality holds because is real and non-negative). 
Thus that The proof above did not use the fact that is complete, which shows that the formula for the norm holds more generally for all inner product spaces. Suppose are such that and for all Then which shows that is the constant linear functional. Consequently which implies that Let If (or equivalently, if ) then taking completes the proof so assume that and The continuity of implies that is a closed subspace of (because and is a closed subset of ). Let denote the orthogonal complement of in Because is closed and is a Hilbert space, can be written as the direct sum (a proof of this is given in the article on the Hilbert projection theorem). Because there exists some non-zero For any which shows that where now implies Solving for shows that which proves that the vector satisfies Applying the norm formula that was proved above with shows that Also, the vector has norm and satisfies It can now be deduced that is -dimensional when Let be any non-zero vector. Replacing with in the proof above shows that the vector satisfies for every The uniqueness of the (non-zero) vector representing implies that which in turn implies that and Thus every vector in is a scalar multiple of The formulas for the inner products follow from the polarization identity. Observations If then So in particular, is always real and furthermore, if and only if if and only if Linear functionals as affine hyperplanes A non-trivial continuous linear functional is often interpreted geometrically by identifying it with the affine hyperplane (the kernel is also often visualized alongside although knowing is enough to reconstruct because if then and otherwise ). In particular, the norm of should somehow be interpretable as the "norm of the hyperplane ". When then the Riesz representation theorem provides such an interpretation of in terms of the affine hyperplane as follows: using the notation from the theorem's statement, from it follows that and so implies and thus This can also be seen by applying the Hilbert projection theorem to and concluding that the global minimum point of the map defined by is The formulas provide the promised interpretation of the linear functional's norm entirely in terms of its associated affine hyperplane (because with this formula, knowing only the is enough to describe the norm of its associated linear ). Defining the infimum formula will also hold when When the supremum is taken in (as is typically assumed), then the supremum of the empty set is but if the supremum is taken in the non-negative reals (which is the image/range of the norm when ) then this supremum is instead in which case the supremum formula will also hold when (although the atypical equality is usually unexpected and so risks causing confusion). Constructions of the representing vector Using the notation from the theorem above, several ways of constructing from are now described. If then ; in other words, This special case of is henceforth assumed to be known, which is why some of the constructions given below start by assuming Orthogonal complement of kernel If then for any If is a unit vector (meaning ) then (this is true even if because in this case ). 
If is a unit vector satisfying the above condition then the same is true of which is also a unit vector in However, so both these vectors result in the same Orthogonal projection onto kernel If is such that and if is the orthogonal projection of onto then Orthonormal basis Given an orthonormal basis of and a continuous linear functional the vector can be constructed uniquely by where all but at most countably many will be equal to and where the value of does not actually depend on choice of orthonormal basis (that is, using any other orthonormal basis for will result in the same vector). If is written as then and If the orthonormal basis is a sequence then this becomes and if is written as then Example in finite dimensions using matrix transformations Consider the special case of (where is an integer) with the standard inner product where are represented as column matrices and with respect to the standard orthonormal basis on (here, is at its th coordinate and everywhere else; as usual, will now be associated with the dual basis) and where denotes the conjugate transpose of Let be any linear functional and let be the unique scalars such that where it can be shown that for all Then the Riesz representation of is the vector To see why, identify every vector in with the column matrix so that is identified with As usual, also identify the linear functional with its transformation matrix, which is the row matrix so that and the function is the assignment where the right hand side is matrix multiplication. Then for all which shows that satisfies the defining condition of the Riesz representation of The bijective antilinear isometry defined in the corollary to the Riesz representation theorem is the assignment that sends to the linear functional on defined by where under the identification of vectors in with column matrices and vector in with row matrices, is just the assignment As described in the corollary, 's inverse is the antilinear isometry which was just shown above to be: where in terms of matrices, is the assignment Thus in terms of matrices, each of and is just the operation of conjugate transposition (although between different spaces of matrices: if is identified with the space of all column (respectively, row) matrices then is identified with the space of all row (respectively, column matrices). This example used the standard inner product, which is the map but if a different inner product is used, such as where is any Hermitian positive-definite matrix, or if a different orthonormal basis is used then the transformation matrices, and thus also the above formulas, will be different. Relationship with the associated real Hilbert space Assume that is a complex Hilbert space with inner product When the Hilbert space is reinterpreted as a real Hilbert space then it will be denoted by where the (real) inner-product on is the real part of 's inner product; that is: The norm on induced by is equal to the original norm on and the continuous dual space of is the set of all -valued bounded -linear functionals on (see the article about the polarization identity for additional details about this relationship). Let and denote the real and imaginary parts of a linear functional so that The formula expressing a linear functional in terms of its real part is where for all It follows that and that if and only if It can also be shown that where and are the usual operator norms. In particular, a linear functional is bounded if and only if its real part is bounded. 
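The finite-dimensional matrix example above can be checked numerically. The following sketch (Python with NumPy is assumed here purely for illustration; the array names are hypothetical and not part of the theorem) builds a random linear functional on C^n, with the standard inner product, as a 1-by-n row matrix A, takes its conjugate transpose as the Riesz representative z, and verifies both the defining identity f(x) = ⟨x, z⟩ and the equality of the dual norm of f with the norm of z.

import numpy as np

n = 4
rng = np.random.default_rng(0)

# A continuous linear functional f on C^n with the standard inner product
# <x, y> = y^H x (linear in the first argument), represented by a row matrix A.
A = rng.normal(size=(1, n)) + 1j * rng.normal(size=(1, n))

# Riesz representative: the conjugate transpose of A, viewed as a column vector.
z = A.conj().T

# Check f(x) = <x, z> = z^H x for a few random vectors x.
for _ in range(3):
    x = rng.normal(size=(n, 1)) + 1j * rng.normal(size=(n, 1))
    f_x = (A @ x).item()             # value of the functional at x
    inner = (z.conj().T @ x).item()  # <x, z>
    assert np.isclose(f_x, inner)

# The dual norm of f equals the Hilbert-space norm of z.
assert np.isclose(np.linalg.norm(A), np.linalg.norm(z))

Changing the inner product (for example to one induced by a Hermitian positive-definite matrix) would change the representative, as noted above; this sketch only covers the standard case.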
Representing a functional and its real part The Riesz representation of a continuous linear function on a complex Hilbert space is equal to the Riesz representation of its real part on its associated real Hilbert space. Explicitly, let and as above, let be the Riesz representation of obtained in so it is the unique vector that satisfies for all The real part of is a continuous real linear functional on and so the Riesz representation theorem may be applied to and the associated real Hilbert space to produce its Riesz representation, which will be denoted by That is, is the unique vector in that satisfies for all The conclusion is This follows from the main theorem because and if then and consequently, if then which shows that Moreover, being a real number implies that In other words, in the theorem and constructions above, if is replaced with its real Hilbert space counterpart and if is replaced with then This means that vector obtained by using and the real linear functional is the equal to the vector obtained by using the origin complex Hilbert space and original complex linear functional (with identical norm values as well). Furthermore, if then is perpendicular to with respect to where the kernel of is be a proper subspace of the kernel of its real part Assume now that Then because and is a proper subset of The vector subspace has real codimension in while has codimension in and That is, is perpendicular to with respect to Canonical injections into the dual and anti-dual Induced linear map into anti-dual The map defined by placing into the coordinate of the inner product and letting the variable vary over the coordinate results in an functional: This map is an element of which is the continuous anti-dual space of The is the operator which is also an injective isometry. The Fundamental theorem of Hilbert spaces, which is related to Riesz representation theorem, states that this map is surjective (and thus bijective). Consequently, every antilinear functional on can be written (uniquely) in this form. If is the canonical linear bijective isometry that was defined above, then the following equality holds: Extending the bra–ket notation to bras and kets Let be a Hilbert space and as before, let Let which is a bijective antilinear isometry that satisfies Bras Given a vector let denote the continuous linear functional ; that is, so that this functional is defined by This map was denoted by earlier in this article. The assignment is just the isometric antilinear isomorphism which is why holds for all and all scalars The result of plugging some given into the functional is the scalar which may be denoted by Bra of a linear functional Given a continuous linear functional let denote the vector ; that is, The assignment is just the isometric antilinear isomorphism which is why holds for all and all scalars The defining condition of the vector is the technically correct but unsightly equality which is why the notation is used in place of With this notation, the defining condition becomes Kets For any given vector the notation is used to denote ; that is, The assignment is just the identity map which is why holds for all and all scalars The notation and is used in place of and respectively. 
As expected, and really is just the scalar Adjoints and transposes Let be a continuous linear operator between Hilbert spaces and As before, let and Denote by the usual bijective antilinear isometries that satisfy: Definition of the adjoint For every the scalar-valued map on defined by is a continuous linear functional on and so by the Riesz representation theorem, there exists a unique vector in denoted by such that or equivalently, such that The assignment thus induces a function called the of whose defining condition is The adjoint is necessarily a continuous (equivalently, a bounded) linear operator. If is finite dimensional with the standard inner product and if is the transformation matrix of with respect to the standard orthonormal basis then 's conjugate transpose is the transformation matrix of the adjoint Adjoints are transposes It is also possible to define the or of which is the map defined by sending a continuous linear functionals to where the composition is always a continuous linear functional on and it satisfies (this is true more generally, when and are merely normed spaces). So for example, if then sends the continuous linear functional (defined on by ) to the continuous linear functional (defined on by ); using bra-ket notation, this can be written as where the juxtaposition of with on the right hand side denotes function composition: The adjoint is actually just to the transpose when the Riesz representation theorem is used to identify with and with Explicitly, the relationship between the adjoint and transpose is: which can be rewritten as: Alternatively, the value of the left and right hand sides of () at any given can be rewritten in terms of the inner products as: so that holds if and only if holds; but the equality on the right holds by definition of The defining condition of can also be written if bra-ket notation is used. Descriptions of self-adjoint, normal, and unitary operators Assume and let Let be a continuous (that is, bounded) linear operator. Whether or not is self-adjoint, normal, or unitary depends entirely on whether or not satisfies certain defining conditions related to its adjoint, which was shown by () to essentially be just the transpose Because the transpose of is a map between continuous linear functionals, these defining conditions can consequently be re-expressed entirely in terms of linear functionals, as the remainder of subsection will now describe in detail. 
The linear functionals that are involved are the simplest possible continuous linear functionals on that can be defined entirely in terms of the inner product on and some given vector Specifically, these are and where Self-adjoint operators A continuous linear operator is called self-adjoint if it is equal to its own adjoint; that is, if Using (), this happens if and only if: where this equality can be rewritten in the following two equivalent forms: Unraveling notation and definitions produces the following characterization of self-adjoint operators in terms of the aforementioned continuous linear functionals: is self-adjoint if and only if for all the linear functional is equal to the linear functional ; that is, if and only if where if bra-ket notation is used, this is Normal operators A continuous linear operator is called normal if which happens if and only if for all Using () and unraveling notation and definitions produces the following characterization of normal operators in terms of inner products of continuous linear functionals: is a normal operator if and only if where the left hand side is also equal to The left hand side of this characterization involves only linear functionals of the form while the right hand side involves only linear functions of the form (defined as above). So in plain English, characterization () says that an operator is normal when the inner product of any two linear functions of the first form is equal to the inner product of their second form (using the same vectors for both forms). In other words, if it happens to be the case (and when is injective or self-adjoint, it is) that the assignment of linear functionals is well-defined (or alternatively, if is well-defined) where ranges over then is a normal operator if and only if this assignment preserves the inner product on The fact that every self-adjoint bounded linear operator is normal follows readily by direct substitution of into either side of This same fact also follows immediately from the direct substitution of the equalities () into either side of (). Alternatively, for a complex Hilbert space, the continuous linear operator is a normal operator if and only if for every which happens if and only if Unitary operators An invertible bounded linear operator is said to be unitary if its inverse is its adjoint: By using (), this is seen to be equivalent to Unraveling notation and definitions, it follows that is unitary if and only if The fact that a bounded invertible linear operator is unitary if and only if (or equivalently, ) produces another (well-known) characterization: an invertible bounded linear map is unitary if and only if Because is invertible (and so in particular a bijection), this is also true of the transpose This fact also allows the vector in the above characterizations to be replaced with or thereby producing many more equalities. Similarly, can be replaced with or See also Citations Notes Proofs Bibliography P. Halmos Measure Theory, D. van Nostrand and Co., 1950. P. Halmos, A Hilbert Space Problem Book, Springer, New York 1982 (problem 3 contains version for vector spaces with coordinate systems). Walter Rudin, Real and Complex Analysis, McGraw-Hill, 1966, . Articles containing proofs Duality theories Hilbert spaces Integral representations Linear functionals Theorems in functional analysis
Riesz representation theorem
[ "Physics", "Mathematics" ]
4,726
[ "Theorems in mathematical analysis", "Mathematical structures", "Articles containing proofs", "Quantum mechanics", "Theorems in functional analysis", "Category theory", "Duality theories", "Geometry", "Hilbert spaces" ]
26,469
https://en.wikipedia.org/wiki/General%20recursive%20function
In mathematical logic and computer science, a general recursive function, partial recursive function, or μ-recursive function is a partial function from natural numbers to natural numbers that is "computable" in an intuitive sense – as well as in a formal one. If the function is total, it is also called a total recursive function (sometimes shortened to recursive function). In computability theory, it is shown that the μ-recursive functions are precisely the functions that can be computed by Turing machines (this is one of the theorems that supports the Church–Turing thesis). The μ-recursive functions are closely related to primitive recursive functions, and their inductive definition (below) builds upon that of the primitive recursive functions. However, not every total recursive function is a primitive recursive function—the most famous example is the Ackermann function. Other equivalent classes of functions are the functions of lambda calculus and the functions that can be computed by Markov algorithms. The subset of all total recursive functions with values in is known in computational complexity theory as the complexity class R. Definition The μ-recursive functions (or general recursive functions) are partial functions that take finite tuples of natural numbers and return a single natural number. They are the smallest class of partial functions that includes the initial functions and is closed under composition, primitive recursion, and the minimization operator . The smallest class of functions including the initial functions and closed under composition and primitive recursion (i.e. without minimisation) is the class of primitive recursive functions. While all primitive recursive functions are total, this is not true of partial recursive functions; for example, the minimisation of the successor function is undefined. The primitive recursive functions are a subset of the total recursive functions, which are a subset of the partial recursive functions. For example, the Ackermann function can be proven to be total recursive, and to be non-primitive. Primitive or "basic" functions: Constant functions : For each natural number and every Alternative definitions use instead a zero function as a primitive function that always returns zero, and build the constant functions from the zero function, the successor function and the composition operator. Successor function S: Projection function (also called the Identity function): For all natural numbers such that : Operators (the domain of a function defined by an operator is the set of the values of the arguments such that every function application that must be done during the computation provides a well-defined result): Composition operator (also called the substitution operator): Given an m-ary function and m k-ary functions : This means that is defined only if and are all defined. Primitive recursion operator : Given the k-ary function and k+2 -ary function : This means that is defined only if and are defined for all Minimization operator : Given a (k+1)-ary function , the k-ary function is defined by: Intuitively, minimisation seeks—beginning the search from 0 and proceeding upwards—the smallest argument that causes the function to return zero; if there is no such argument, or if one encounters an argument for which is not defined, then the search never terminates, and is not defined for the argument While some textbooks use the μ-operator as defined here, others demand that the μ-operator is applied to total functions only. 
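The operators in the definition above can be sketched directly in code. The following small example (Python is assumed here purely as an illustration; helper names such as primitive_recursion and mu are hypothetical and not standard notation) builds addition from the successor and projection functions by primitive recursion, and implements unbounded minimisation with a while-loop whose possible non-termination mirrors the partiality of μ-recursive functions.

# Initial functions
def succ(a):
    # successor: S(a) = a + 1
    return a + 1

def proj(i):
    # projection: U_i(x1, ..., xk) = x_i (1-indexed)
    return lambda *xs: xs[i - 1]

def primitive_recursion(f, g):
    # builds h with h(0, x...) = f(x...) and h(y+1, x...) = g(y, h(y, x...), x...)
    def h(y, *xs):
        acc = f(*xs)
        for i in range(y):
            acc = g(i, acc, *xs)
        return acc
    return h

def mu(f):
    # minimisation: returns the least y with f(y, x...) == 0;
    # if no such y exists, the loop never terminates (a partial function).
    def m(*xs):
        y = 0
        while f(y, *xs) != 0:
            y += 1
        return y
    return m

# Addition by primitive recursion: add(0, a) = a, add(b + 1, a) = succ(add(b, a))
add = primitive_recursion(proj(1), lambda b, c, a: succ(c))

# Subtraction via minimisation: sub(a, b) is the least y with b + y = a,
# defined only when a >= b.
sub = mu(lambda y, a, b: 0 if b + y == a else 1)

print(add(3, 4))   # 7
print(sub(7, 3))   # 4
# sub(3, 7) would search forever, illustrating an undefined (partial) value.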
Although this restricts the μ-operator as compared to the definition given here, the class of μ-recursive functions remains the same, which follows from Kleene's Normal Form Theorem (see below). The only difference is, that it becomes undecidable whether a specific function definition defines a μ-recursive function, as it is undecidable whether a computable (i.e. μ-recursive) function is total. The strong equality relation can be used to compare partial μ-recursive functions. This is defined for all partial functions f and g so that holds if and only if for any choice of arguments either both functions are defined and their values are equal or both functions are undefined. Examples Examples not involving the minimization operator can be found at Primitive recursive function#Examples. The following examples are intended just to demonstrate the use of the minimization operator; they could also be defined without it, albeit in a more complicated way, since they are all primitive recursive. The following examples define general recursive functions that are not primitive recursive; hence they cannot avoid using the minimization operator. Total recursive function A general recursive function is called total recursive function if it is defined for every input, or, equivalently, if it can be computed by a total Turing machine. There is no way to computably tell if a given general recursive function is total - see Halting problem. Equivalence with other models of computability In the equivalence of models of computability, a parallel is drawn between Turing machines that do not terminate for certain inputs and an undefined result for that input in the corresponding partial recursive function. The unbounded search operator is not definable by the rules of primitive recursion as those do not provide a mechanism for "infinite loops" (undefined values). Normal form theorem A normal form theorem due to Kleene says that for each k there are primitive recursive functions and such that for any μ-recursive function with k free variables there is an e such that . The number e is called an index or Gödel number for the function f. A consequence of this result is that any μ-recursive function can be defined using a single instance of the μ operator applied to a (total) primitive recursive function. Minsky observes the defined above is in essence the μ-recursive equivalent of the universal Turing machine: Symbolism A number of different symbolisms are used in the literature. An advantage to using the symbolism is a derivation of a function by "nesting" of the operators one inside the other is easier to write in a compact form. In the following the string of parameters x1, ..., xn is abbreviated as x: Constant function: Kleene uses " C(x) = q " and Boolos-Burgess-Jeffrey (2002) (B-B-J) use the abbreviation " constn( x) = n ": e.g. C ( r, s, t, u, v, w, x ) = 13 e.g. const13 ( r, s, t, u, v, w, x ) = 13 Successor function: Kleene uses x' and S for "Successor". As "successor" is considered to be primitive, most texts use the apostrophe as follows: S(a) = a +1 =def a', where 1 =def 0', 2 =def 0 ' ', etc. Identity function: Kleene (1952) uses " U " to indicate the identity function over the variables xi; B-B-J use the identity function id over the variables x1 to xn: U( x ) = id( x ) = xi e.g. U = id ( r, s, t, u, v, w, x ) = t Composition (Substitution) operator: Kleene uses a bold-face S (not to be confused with his S for "successor" ! ). 
The superscript "m" refers to the mth of function "fm", whereas the subscript "n" refers to the nth variable "xn": If we are given h( x )= g( f1(x), ... , fm(x) ) h(x) = S(g, f1, ... , fm ) In a similar manner, but without the sub- and superscripts, B-B-J write: h(x)= Cn[g, f1 ,..., fm](x) Primitive Recursion: Kleene uses the symbol " Rn(base step, induction step) " where n indicates the number of variables, B-B-J use " Pr(base step, induction step)(x)". Given: base step: h( 0, x )= f( x ), and induction step: h( y+1, x ) = g( y, h(y, x),x ) Example: primitive recursion definition of a + b: base step: f( 0, a ) = a = U(a) induction step: f( b' , a ) = ( f ( b, a ) )' = g( b, f( b, a), a ) = g( b, c, a ) = c' = S(U( b, c, a )) R2 { U(a), S [ (U( b, c, a ) ] } Pr{ U(a), S[ (U( b, c, a ) ] } Example: Kleene gives an example of how to perform the recursive derivation of f(b, a) = b + a (notice reversal of variables a and b). He starts with 3 initial functions S(a) = a' U(a) = a U( b, c, a ) = c g(b, c, a) = S(U( b, c, a )) = c' base step: h( 0, a ) = U(a) induction step: h( b', a ) = g( b, h( b, a ), a ) He arrives at: a+b = R2[ U, S'(S, U) ] Examples Fibonacci number McCarthy 91 function See also Recursion theory Recursion Recursion (computer science) References On pages 210-215 Minsky shows how to create the μ-operator using the register machine model, thus demonstrating its equivalence to the general recursive functions. External links Stanford Encyclopedia of Philosophy entry A compiler for transforming a recursive function into an equivalent Turing machine Computability theory Theory of computation ru:Рекурсивная функция (теория вычислимости)#Частично рекурсивная функция
General recursive function
[ "Mathematics" ]
2,215
[ "Computability theory", "Mathematical logic" ]
26,477
https://en.wikipedia.org/wiki/Rust
Rust is an iron oxide, a usually reddish-brown oxide formed by the reaction of iron and oxygen in the catalytic presence of water or air moisture. Rust consists of hydrous iron(III) oxides (Fe2O3·nH2O) and iron(III) oxide-hydroxide (FeO(OH), Fe(OH)3), and is typically associated with the corrosion of refined iron. Given sufficient time, any iron mass, in the presence of water and oxygen, could eventually convert entirely to rust. Surface rust is commonly flaky and friable, and provides no passivating protection to the underlying iron, unlike the formation of patina on copper surfaces. Rusting is the common term for corrosion of elemental iron and its alloys such as steel. Many other metals undergo similar corrosion, but the resulting oxides are not commonly called "rust". Several forms of rust are distinguishable both visually and by spectroscopy, and form under different circumstances. Other forms of rust include the result of reactions between iron and chloride in an environment deprived of oxygen. Rebar used in underwater concrete pillars, which generates green rust, is an example. Although rusting is generally a negative aspect of iron, a particular form of rusting, known as stable rust, causes the object to have a thin coating of rust over the top. If kept in low relative humidity, this "stable" layer is protective to the iron below, but not to the extent of other oxides such as aluminium oxide on aluminium. History Rust is thought to have formed when dissolved oxygen reacted with iron in the early oceans; it sank to the seafloor, forming banded iron formations from 2.5 to 2.2 billion years ago. These deposits were later uplifted and became the iron ores from which iron and steel were made, which effectively fuelled the Industrial Revolution. Chemical reactions Rust is a general name for a complex of oxides and hydroxides of iron, which occur when iron or some alloys that contain iron are exposed to oxygen and moisture for a long period of time. Over time, the oxygen combines with the metal, forming new compounds collectively called rust, in a process called rusting. Rusting is an oxidation reaction specifically occurring with iron. Other metals also corrode via similar oxidation, but such corrosion is not called rusting. The main catalyst for the rusting process is water. Iron or steel structures might appear to be solid, but water molecules can penetrate the microscopic pits and cracks in any exposed metal. The hydrogen atoms present in water molecules can combine with other elements to form acids, which will eventually cause more metal to be exposed. If chloride ions are present, as is the case with saltwater, the corrosion is likely to occur more quickly. Meanwhile, the oxygen atoms combine with metallic atoms to form the destructive oxide compound. These iron compounds are brittle and crumbly and replace strong metallic iron, reducing the strength of the object. Oxidation of iron When iron is in contact with water and oxygen, it rusts. If salt is present, for example in seawater or salt spray, the iron tends to rust more quickly, as a result of chemical reactions. Iron metal is relatively unaffected by pure water or by dry oxygen. As with other metals, like aluminium, a tightly adhering oxide coating, a passivation layer, protects the bulk iron from further oxidation. The conversion of the passivating ferrous oxide layer to rust results from the combined action of two agents, usually oxygen and water. 
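As a rough overall summary (an illustrative net stoichiometry consistent with the hydrous oxide formula given in the lead, not a single elementary reaction), rusting in the presence of oxygen and water can be written as: 4 Fe + 3 O2 + 2x H2O → 2 Fe2O3·xH2O. The electrochemical steps underlying this net change are described under Associated reactions below.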
Other degrading solutions are sulfur dioxide in water and carbon dioxide in water. Under these corrosive conditions, iron hydroxide species are formed. Unlike ferrous oxides, the hydroxides do not adhere to the bulk metal. As they form and flake off from the surface, fresh iron is exposed, and the corrosion process continues until either all of the iron is consumed or all of the oxygen, water, carbon dioxide or sulfur dioxide in the system are removed or consumed. When iron rusts, the oxides take up more volume than the original metal; this expansion can generate enormous forces, damaging structures made with iron. See economic effect for more details. Associated reactions The rusting of iron is an electrochemical process that begins with the transfer of electrons from iron to oxygen. The iron is the reducing agent (gives up electrons) while the oxygen is the oxidizing agent (gains electrons). The rate of corrosion is affected by water and accelerated by electrolytes, as illustrated by the effects of road salt on the corrosion of automobiles. The key reaction is the reduction of oxygen: O2 + 4 e− + 2 H2O → 4 OH− Because it forms hydroxide ions, this process is strongly affected by the presence of acid. Likewise, the corrosion of most metals by oxygen is accelerated at low pH. Providing the electrons for the above reaction is the oxidation of iron that may be described as follows: Fe → Fe2+ + 2 e− The following redox reaction also occurs in the presence of water and is crucial to the formation of rust: 4 Fe2+ + O2 → 4 Fe3+ + 2 O2− In addition, the following multistep acid–base reactions affect the course of rust formation: Fe2+ + 2 H2O ⇌ Fe(OH)2 + 2 H+ Fe3+ + 3 H2O ⇌ Fe(OH)3 + 3 H+ as do the following dehydration equilibria: Fe(OH)2 ⇌ FeO + H2O Fe(OH)3 ⇌ FeO(OH) + H2O 2 FeO(OH) ⇌ Fe2O3 + H2O From the above equations, it is also seen that the corrosion products are dictated by the availability of water and oxygen. With limited dissolved oxygen, iron(II)-containing materials are favoured, including FeO and black lodestone or magnetite (Fe3O4). High oxygen concentrations favour ferric materials with the nominal formulae Fe(OH)3−xO. The nature of rust changes with time, reflecting the slow rates of the reactions of solids. Furthermore, these complex processes are affected by the presence of other ions, such as Ca2+, which serve as electrolytes which accelerate rust formation, or combine with the hydroxides and oxides of iron to precipitate a variety of Ca, Fe, O, OH species. The onset of rusting can also be detected in the laboratory with the use of ferroxyl indicator solution. The solution detects both Fe2+ ions and hydroxyl ions. Formation of Fe2+ ions and hydroxyl ions is indicated by blue and pink patches respectively. 
Special "weathering steel" alloys such as Cor-Ten rust at a much slower rate than normal, because the rust adheres to the surface of the metal in a protective layer. Designs using this material must include measures that avoid worst-case exposures since the material still continues to rust slowly even under near-ideal conditions. Galvanization Galvanization consists of an application on the object to be protected of a layer of metallic zinc by either hot-dip galvanizing or electroplating. Zinc is traditionally used because it is cheap, adheres well to steel, and provides cathodic protection to the steel surface in case of damage of the zinc layer. In more corrosive environments (such as salt water), cadmium plating is preferred instead of the underlying protected metal. The protective zinc layer is consumed by this action, and thus galvanization provides protection only for a limited period of time. More modern coatings add aluminium to the coating as zinc-alume; aluminium will migrate to cover scratches and thus provide protection for a longer period. These approaches rely on the aluminium and zinc oxides protecting a once-scratched surface, rather than oxidizing as a sacrificial anode as in traditional galvanized coatings. In some cases, such as very aggressive environments or long design life, both zinc and a coating are applied to provide enhanced corrosion protection. Typical galvanization of steel products that are to be subjected to normal day-to-day weathering in an outside environment consists of a hot-dipped 85 μm zinc coating. Under normal weather conditions, this will deteriorate at a rate of 1 μm per year, giving approximately 85 years of protection. Cathodic protection Cathodic protection is a technique used to inhibit corrosion on buried or immersed structures by supplying an electrical charge that suppresses the electrochemical reaction. If correctly applied, corrosion can be stopped completely. In its simplest form, it is achieved by attaching a sacrificial anode, thereby making the iron or steel the cathode in the cell formed. The sacrificial anode must be made from something with a more negative electrode potential than the iron or steel, commonly zinc, aluminium, or magnesium. The sacrificial anode will eventually corrode away, ceasing its protective action unless it is replaced in a timely manner. Cathodic protection can also be provided by using an applied electrical current. This would then be known as ICCP Impressed Current Cathodic Protection. Coatings and painting Rust formation can be controlled with coatings, such as paint, lacquer, varnish, or wax tapes that isolate the iron from the environment. Large structures with enclosed box sections, such as ships and modern automobiles, often have a wax-based product (technically a "slushing oil") injected into these sections. Such treatments usually also contain rust inhibitors. Covering steel with concrete can provide some protection to steel because of the alkaline pH environment at the steel–concrete interface. However, rusting of steel in concrete can still be a problem, as expanding rust can fracture concrete from within. As a closely related example, iron clamps were used to join marble blocks during a restoration attempt of the Parthenon in Athens, Greece, in 1898, but caused extensive damage to the marble by the rusting and swelling of unprotected iron. 
The ancient Greek builders had used a similar fastening system for the marble blocks during the original construction; however, they also poured molten lead over the iron joints for protection from seismic shocks as well as from corrosion. This method was successful for the 2500-year-old structure, but in less than a century the crude repairs were in imminent danger of collapse. When only temporary protection is needed for storage or transport, a thin layer of oil, grease or a special mixture such as Cosmoline can be applied to an iron surface. Such treatments are extensively used when "mothballing" a steel ship, automobile, or other equipment for long-term storage. Special anti-seize lubricant mixtures are available and are applied to metallic threads and other precision machined surfaces to protect them from rust. These compounds usually contain grease mixed with copper, zinc, or aluminium powder, and other proprietary ingredients. Bluing Bluing is a technique that can provide limited resistance to rusting for small steel items, such as firearms; for it to be successful, a water-displacing oil is rubbed onto the blued steel. Inhibitors Corrosion inhibitors, such as gas-phase or volatile inhibitors, can be used to prevent corrosion inside sealed systems. They are not effective when air circulation disperses them, and brings in fresh oxygen and moisture. Humidity control Rust can be avoided by controlling the moisture in the atmosphere. An example of this is the use of silica gel packets to control humidity in equipment shipped by sea. Treatment Rust removal from small iron or steel objects by electrolysis can be done in a home workshop using simple materials: a plastic bucket filled with an electrolyte consisting of washing soda dissolved in tap water; a length of rebar suspended vertically in the solution to act as an anode; another length laid across the top of the bucket as a support for suspending the object; baling wire to suspend the object in the solution from the horizontal rebar; and a battery charger as a power source, with the positive terminal clamped to the anode and the negative terminal clamped to the object to be treated, which becomes the cathode. Hydrogen and oxygen gases are produced at the cathode and anode respectively; this mixture is flammable and potentially explosive. Care should also be taken to avoid hydrogen embrittlement. Overvoltage also produces small amounts of ozone, which is highly toxic, so a low-voltage phone charger is a far safer source of DC current. The effects of hydrogen on global warming have also recently come under scrutiny. Rust may be treated with commercial products known as rust converters, which contain tannic acid or phosphoric acid that combines with the rust; removed with organic acids such as citric acid and vinegar or with the stronger hydrochloric acid; or removed with chelating agents, as in some commercial formulations or even a solution of molasses. Economic effect Rust is associated with the degradation of iron-based tools and structures. As rust has a much higher volume than the originating mass of iron, its buildup can also cause failure by forcing apart adjacent parts, a phenomenon sometimes known as "rust packing". It was the cause of the collapse of the Mianus River Bridge in 1983, when the bearings rusted internally and pushed one corner of the road slab off its support.
Rust was an important factor in the Silver Bridge disaster of 1967 in West Virginia, when a steel suspension bridge collapsed in less than a minute, killing 46 drivers and passengers on the bridge at the time. The Kinzua Bridge in Pennsylvania was blown down by a tornado in 2003, largely because the central base bolts holding the structure to the ground had rusted away, leaving the bridge anchored by gravity alone. Reinforced concrete is also vulnerable to rust damage. Internal pressure caused by expanding corrosion of concrete-covered steel and iron can cause the concrete to spall, creating severe structural problems. It is one of the most common failure modes of reinforced concrete bridges and buildings. Cultural symbolism Rust is a commonly used metaphor for slow decay due to neglect, since it gradually converts robust iron and steel metal into a soft crumbling powder. A wide section of the industrialized American Midwest and American Northeast, once dominated by steel foundries, the automotive industry, and other manufacturers, has experienced harsh economic cutbacks that have caused the region to be dubbed the "Rust Belt". In music, literature, and art, rust is associated with images of faded glory, neglect, decay, and ruin. See also Corrosion engineering References Further reading Waldman, J. (2015). Rust: The Longest War. Simon & Schuster, New York. Shreir, L. L. (ed.). Corrosion, 2nd Edition, Volumes 1 and 2. Elsevier.
Rust
[ "Chemistry", "Materials_science" ]
3,159
[ "Materials degradation", "Electrochemistry", "Metallurgy", "Corrosion" ]
26,561
https://en.wikipedia.org/wiki/Rank%20%28linear%20algebra%29
In linear algebra, the rank of a matrix is the dimension of the vector space generated (or spanned) by its columns. This corresponds to the maximal number of linearly independent columns of . This, in turn, is identical to the dimension of the vector space spanned by its rows. Rank is thus a measure of the "nondegenerateness" of the system of linear equations and linear transformation encoded by . There are multiple equivalent definitions of rank. A matrix's rank is one of its most fundamental characteristics. The rank is commonly denoted by or ; sometimes the parentheses are not written, as in . Main definitions In this section, we give some definitions of the rank of a matrix. Many definitions are possible; see Alternative definitions for several of these. The column rank of is the dimension of the column space of , while the row rank of is the dimension of the row space of . A fundamental result in linear algebra is that the column rank and the row rank are always equal. (Three proofs of this result are given in , below.) This number (i.e., the number of linearly independent rows or columns) is simply called the rank of . A matrix is said to have full rank if its rank equals the largest possible for a matrix of the same dimensions, which is the lesser of the number of rows and columns. A matrix is said to be rank-deficient if it does not have full rank. The rank deficiency of a matrix is the difference between the lesser of the number of rows and columns, and the rank. The rank of a linear map or operator is defined as the dimension of its image:where is the dimension of a vector space, and is the image of a map. Examples The matrix has rank 2: the first two columns are linearly independent, so the rank is at least 2, but since the third is a linear combination of the first two (the first column plus the second), the three columns are linearly dependent so the rank must be less than 3. The matrix has rank 1: there are nonzero columns, so the rank is positive, but any pair of columns is linearly dependent. Similarly, the transpose of has rank 1. Indeed, since the column vectors of are the row vectors of the transpose of , the statement that the column rank of a matrix equals its row rank is equivalent to the statement that the rank of a matrix is equal to the rank of its transpose, i.e., . Computing the rank of a matrix Rank from row echelon forms A common approach to finding the rank of a matrix is to reduce it to a simpler form, generally row echelon form, by elementary row operations. Row operations do not change the row space (hence do not change the row rank), and, being invertible, map the column space to an isomorphic space (hence do not change the column rank). Once in row echelon form, the rank is clearly the same for both row rank and column rank, and equals the number of pivots (or basic columns) and also the number of non-zero rows. For example, the matrix given by can be put in reduced row-echelon form by using the following elementary row operations: The final matrix (in reduced row echelon form) has two non-zero rows and thus the rank of matrix is 2. Computation When applied to floating point computations on computers, basic Gaussian elimination (LU decomposition) can be unreliable, and a rank-revealing decomposition should be used instead. 
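As a concrete illustration of the row-reduction procedure just described, the following Python sketch counts pivots after Gaussian elimination and compares the result with NumPy's numpy.linalg.matrix_rank, which determines the rank from the singular values. This is a minimal sketch, assuming NumPy is available; rank_by_row_reduction is an illustrative helper written for this example rather than a library routine, and the sample matrix is likewise my own, built so that its third column is the sum of the first two, mirroring the rank-2 example described above (the original example matrix is not reproduced here).

import numpy as np

def rank_by_row_reduction(A, tol=1e-12):
    # Count pivots after Gaussian elimination with partial pivoting.
    # Adequate for small exact or integer matrices; as noted above,
    # plain elimination can be unreliable for general floating-point data.
    A = np.array(A, dtype=float)
    m, n = A.shape
    rank, row = 0, 0
    for col in range(n):
        pivot = row + int(np.argmax(np.abs(A[row:, col])))  # largest remaining entry as pivot
        if abs(A[pivot, col]) < tol:
            continue                        # no usable pivot in this column
        A[[row, pivot]] = A[[pivot, row]]   # move the pivot row up
        A[row] = A[row] / A[row, col]       # scale the pivot to 1
        for r in range(m):
            if r != row:
                A[r] -= A[r, col] * A[row]  # clear the rest of the column
        rank += 1
        row += 1
        if row == m:
            break
    return rank

M = np.array([[1, 0, 1],
              [2, 1, 3],
              [3, 1, 4]])  # third column = first column + second column, so rank 2
print(rank_by_row_reduction(M))   # 2
print(np.linalg.matrix_rank(M))   # 2, computed from the SVD

For well-conditioned input the two agree; for noisy floating-point data the SVD-based routine with an appropriate tolerance is the safer choice, as discussed next.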
An effective alternative is the singular value decomposition (SVD), but there are other less computationally expensive choices, such as QR decomposition with pivoting (so-called rank-revealing QR factorization), which are still more numerically robust than Gaussian elimination. Numerical determination of rank requires a criterion for deciding when a value, such as a singular value from the SVD, should be treated as zero, a practical choice which depends on both the matrix and the application. Proofs that column rank = row rank Proof using row reduction The fact that the column and row ranks of any matrix are equal forms is fundamental in linear algebra. Many proofs have been given. One of the most elementary ones has been sketched in . Here is a variant of this proof: It is straightforward to show that neither the row rank nor the column rank are changed by an elementary row operation. As Gaussian elimination proceeds by elementary row operations, the reduced row echelon form of a matrix has the same row rank and the same column rank as the original matrix. Further elementary column operations allow putting the matrix in the form of an identity matrix possibly bordered by rows and columns of zeros. Again, this changes neither the row rank nor the column rank. It is immediate that both the row and column ranks of this resulting matrix is the number of its nonzero entries. We present two other proofs of this result. The first uses only basic properties of linear combinations of vectors, and is valid over any field. The proof is based upon Wardlaw (2005). The second uses orthogonality and is valid for matrices over the real numbers; it is based upon Mackiw (1995). Both proofs can be found in the book by Banerjee and Roy (2014). Proof using linear combinations Let be an matrix. Let the column rank of be , and let be any basis for the column space of . Place these as the columns of an matrix . Every column of can be expressed as a linear combination of the columns in . This means that there is an matrix such that . is the matrix whose th column is formed from the coefficients giving the th column of as a linear combination of the columns of . In other words, is the matrix which contains the multiples for the bases of the column space of (which is ), which are then used to form as a whole. Now, each row of is given by a linear combination of the rows of . Therefore, the rows of form a spanning set of the row space of and, by the Steinitz exchange lemma, the row rank of cannot exceed . This proves that the row rank of is less than or equal to the column rank of . This result can be applied to any matrix, so apply the result to the transpose of . Since the row rank of the transpose of is the column rank of and the column rank of the transpose of is the row rank of , this establishes the reverse inequality and we obtain the equality of the row rank and the column rank of . (Also see Rank factorization.) Proof using orthogonality Let be an matrix with entries in the real numbers whose row rank is . Therefore, the dimension of the row space of is . Let be a basis of the row space of . We claim that the vectors are linearly independent. To see why, consider a linear homogeneous relation involving these vectors with scalar coefficients : where . 
We make two observations: (a) is a linear combination of vectors in the row space of , which implies that belongs to the row space of , and (b) since , the vector is orthogonal to every row vector of and, hence, is orthogonal to every vector in the row space of . The facts (a) and (b) together imply that is orthogonal to itself, which proves that or, by the definition of , But recall that the were chosen as a basis of the row space of and so are linearly independent. This implies that . It follows that are linearly independent. Now, each is obviously a vector in the column space of . So, is a set of linearly independent vectors in the column space of and, hence, the dimension of the column space of (i.e., the column rank of ) must be at least as big as . This proves that row rank of is no larger than the column rank of . Now apply this result to the transpose of to get the reverse inequality and conclude as in the previous proof. Alternative definitions In all the definitions in this section, the matrix is taken to be an matrix over an arbitrary field . Dimension of image Given the matrix , there is an associated linear mapping defined by The rank of is the dimension of the image of . This definition has the advantage that it can be applied to any linear map without need for a specific matrix. Rank in terms of nullity Given the same linear mapping as above, the rank is minus the dimension of the kernel of . The rank–nullity theorem states that this definition is equivalent to the preceding one. Column rank – dimension of column space The rank of is the maximal number of linearly independent columns of ; this is the dimension of the column space of (the column space being the subspace of generated by the columns of , which is in fact just the image of the linear map associated to ). Row rank – dimension of row space The rank of is the maximal number of linearly independent rows of ; this is the dimension of the row space of . Decomposition rank The rank of is the smallest integer such that can be factored as , where is an matrix and is a matrix. In fact, for all integers , the following are equivalent: the column rank of is less than or equal to , there exist columns of size such that every column of is a linear combination of , there exist an matrix and a matrix such that (when is the rank, this is a rank factorization of ), there exist rows of size such that every row of is a linear combination of , the row rank of is less than or equal to . Indeed, the following equivalences are obvious: . For example, to prove (3) from (2), take to be the matrix whose columns are from (2). To prove (2) from (3), take to be the columns of . It follows from the equivalence that the row rank is equal to the column rank. As in the case of the "dimension of image" characterization, this can be generalized to a definition of the rank of any linear map: the rank of a linear map is the minimal dimension of an intermediate space such that can be written as the composition of a map and a map . Unfortunately, this definition does not suggest an efficient manner to compute the rank (for which it is better to use one of the alternative definitions). See rank factorization for details. Rank in terms of singular values The rank of equals the number of non-zero singular values, which is the same as the number of non-zero diagonal elements in Σ in the singular value decomposition Determinantal rank – size of largest non-vanishing minor The rank of is the largest order of any non-zero minor in . 
(The order of a minor is the side-length of the square sub-matrix of which it is the determinant.) Like the decomposition rank characterization, this does not give an efficient way of computing the rank, but it is useful theoretically: a single non-zero minor witnesses a lower bound (namely its order) for the rank of the matrix, which can be useful (for example) to prove that certain operations do not lower the rank of a matrix. A non-vanishing -minor ( submatrix with non-zero determinant) shows that the rows and columns of that submatrix are linearly independent, and thus those rows and columns of the full matrix are linearly independent (in the full matrix), so the row and column rank are at least as large as the determinantal rank; however, the converse is less straightforward. The equivalence of determinantal rank and column rank is a strengthening of the statement that if the span of vectors has dimension , then of those vectors span the space (equivalently, that one can choose a spanning set that is a subset of the vectors): the equivalence implies that a subset of the rows and a subset of the columns simultaneously define an invertible submatrix (equivalently, if the span of vectors has dimension , then of these vectors span the space and there is a set of coordinates on which they are linearly independent). Tensor rank – minimum number of simple tensors The rank of is the smallest number such that can be written as a sum of rank 1 matrices, where a matrix is defined to have rank 1 if and only if it can be written as a nonzero product of a column vector and a row vector . This notion of rank is called tensor rank; it can be generalized in the separable models interpretation of the singular value decomposition. Properties We assume that is an matrix, and we define the linear map by as above. The rank of an matrix is a nonnegative integer and cannot be greater than either or . That is, A matrix that has rank is said to have full rank; otherwise, the matrix is rank deficient. Only a zero matrix has rank zero. is injective (or "one-to-one") if and only if has rank (in this case, we say that has full column rank). is surjective (or "onto") if and only if has rank (in this case, we say that has full row rank). If is a square matrix (i.e., ), then is invertible if and only if has rank (that is, has full rank). If is any matrix, then If is an matrix of rank , then If is an matrix of rank , then The rank of is equal to if and only if there exists an invertible matrix and an invertible matrix such that where denotes the identity matrix. Sylvester’s rank inequality: if is an matrix and is , then This is a special case of the next inequality. The inequality due to Frobenius: if , and are defined, then Subadditivity: when and are of the same dimension. As a consequence, a rank- matrix can be written as the sum of rank-1 matrices, but not fewer. The rank of a matrix plus the nullity of the matrix equals the number of columns of the matrix. (This is the rank–nullity theorem.) If is a matrix over the real numbers then the rank of and the rank of its corresponding Gram matrix are equal. Thus, for real matrices This can be shown by proving equality of their null spaces. 
The null space of the Gram matrix is given by vectors for which If this condition is fulfilled, we also have If is a matrix over the complex numbers and denotes the complex conjugate of and the conjugate transpose of (i.e., the adjoint of ), then Applications One useful application of calculating the rank of a matrix is the computation of the number of solutions of a system of linear equations. According to the Rouché–Capelli theorem, the system is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If on the other hand, the ranks of these two matrices are equal, then the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has free parameters where is the difference between the number of variables and the rank. In this case (and assuming the system of equations is in the real or complex numbers) the system of equations has infinitely many solutions. In control theory, the rank of a matrix can be used to determine whether a linear system is controllable, or observable. In the field of communication complexity, the rank of the communication matrix of a function gives bounds on the amount of communication needed for two parties to compute the function. Generalization There are different generalizations of the concept of rank to matrices over arbitrary rings, where column rank, row rank, dimension of column space, and dimension of row space of a matrix may be different from the others or may not exist. Thinking of matrices as tensors, the tensor rank generalizes to arbitrary tensors; for tensors of order greater than 2 (matrices are order 2 tensors), rank is very hard to compute, unlike for matrices. There is a notion of rank for smooth maps between smooth manifolds. It is equal to the linear rank of the derivative. Matrices as tensors Matrix rank should not be confused with tensor order, which is called tensor rank. Tensor order is the number of indices required to write a tensor, and thus matrices all have tensor order 2. More precisely, matrices are tensors of type (1,1), having one row index and one column index, also called covariant order 1 and contravariant order 1; see Tensor (intrinsic definition) for details. The tensor rank of a matrix can also mean the minimum number of simple tensors necessary to express the matrix as a linear combination, and that this definition does agree with matrix rank as here discussed. See also Matroid rank Nonnegative rank (linear algebra) Rank (differential topology) Multicollinearity Linear dependence Notes References Sources Further reading Kaw, Autar K. Two Chapters from the book Introduction to Matrix Algebra: 1. Vectors and System of Equations Mike Brookes: Matrix Reference Manual. Linear algebra
Rank (linear algebra)
[ "Mathematics" ]
3,503
[ "Linear algebra", "Algebra" ]
26,826
https://en.wikipedia.org/wiki/Sodium
Sodium is a chemical element; it has symbol Na (from Neo-Latin ) and atomic number 11. It is a soft, silvery-white, highly reactive metal. Sodium is an alkali metal, being in group 1 of the periodic table. Its only stable isotope is 23Na. The free metal does not occur in nature and must be prepared from compounds. Sodium is the sixth most abundant element in the Earth's crust and exists in numerous minerals such as feldspars, sodalite, and halite (NaCl). Many salts of sodium are highly water-soluble: sodium ions have been leached by the action of water from the Earth's minerals over eons, and thus sodium and chlorine are the most common dissolved elements by weight in the oceans. Sodium was first isolated by Humphry Davy in 1807 by the electrolysis of sodium hydroxide. Among many other useful sodium compounds, sodium hydroxide (lye) is used in soap manufacture, and sodium chloride (edible salt) is a de-icing agent and a nutrient for animals including humans. Sodium is an essential element for all animals and some plants. Sodium ions are the major cation in the extracellular fluid (ECF) and as such are the major contributor to the ECF osmotic pressure. Animal cells actively pump sodium ions out of the cells by means of the sodium–potassium pump, an enzyme complex embedded in the cell membrane, in order to maintain a roughly ten-times higher concentration of sodium ions outside the cell than inside. In nerve cells, the sudden flow of sodium ions into the cell through voltage-gated sodium channels enables transmission of a nerve impulse in a process called the action potential. Characteristics Physical Sodium at standard temperature and pressure is a soft silvery metal that combines with oxygen in the air, forming sodium oxides. Bulk sodium is usually stored in oil or an inert gas. Sodium metal can be easily cut with a knife. It is a good conductor of electricity and heat. Due to having low atomic mass and large atomic radius, sodium is third-least dense of all elemental metals and is one of only three metals that can float on water, the other two being lithium and potassium. The melting (98 °C) and boiling (883 °C) points of sodium are lower than those of lithium but higher than those of the heavier alkali metals potassium, rubidium, and caesium, following periodic trends down the group. These properties change dramatically at elevated pressures: at 1.5 Mbar, the color changes from silvery metallic to black; at 1.9 Mbar the material becomes transparent with a red color; and at 3 Mbar, sodium is a clear and transparent solid. All of these high-pressure allotropes are insulators and electrides. In a flame test, sodium and its compounds glow yellow because the excited 3s electrons of sodium emit a photon when they fall from 3p to 3s; the wavelength of this photon corresponds to the D line at about 589.3 nm. Spin-orbit interactions involving the electron in the 3p orbital split the D line into two, at 589.0 and 589.6 nm; hyperfine structures involving both orbitals cause many more lines. Isotopes Twenty isotopes of sodium are known, but only 23Na is stable. 23Na is created in the carbon-burning process in stars by fusing two carbon atoms together; this requires temperatures above 600 megakelvins and a star of at least three solar masses. Two radioactive, cosmogenic isotopes are the byproduct of cosmic ray spallation: 22Na has a half-life of 2.6 years and 24Na, a half-life of 15 hours; all other isotopes have a half-life of less than one minute. 
Two nuclear isomers have been discovered, the longer-lived one being 24mNa with a half-life of around 20.2 milliseconds. Acute neutron radiation, as from a nuclear criticality accident, converts some of the stable 23Na in human blood to 24Na; the neutron radiation dosage of a victim can be calculated by measuring the concentration of 24Na relative to 23Na. Chemistry Sodium atoms have 11 electrons, one more than the stable configuration of the noble gas neon. The first and second ionization energies are 495.8 kJ/mol and 4562 kJ/mol, respectively. As a result, sodium usually forms ionic compounds involving the Na+ cation. Metallic sodium Metallic sodium is generally less reactive than potassium and more reactive than lithium. Sodium metal is highly reducing, with the standard reduction potential for the Na+/Na couple being −2.71 volts, though potassium and lithium have even more negative potentials. Salts and oxides Sodium compounds are of immense commercial importance, being particularly central to industries producing glass, paper, soap, and textiles. The most important sodium compounds are table salt (NaCl), soda ash (Na2CO3), baking soda (NaHCO3), caustic soda (NaOH), sodium nitrate (NaNO3), di- and tri-sodium phosphates, sodium thiosulfate (Na2S2O3·5H2O), and borax (Na2B4O7·10H2O). In compounds, sodium is usually ionically bonded to water and anions and is viewed as a hard Lewis acid. Most soaps are sodium salts of fatty acids. Sodium soaps have a higher melting temperature (and seem "harder") than potassium soaps. Like all the alkali metals, sodium reacts exothermically with water. The reaction produces caustic soda (sodium hydroxide) and flammable hydrogen gas. When burned in air, it forms primarily sodium peroxide with some sodium oxide. Aqueous solutions Sodium tends to form water-soluble compounds, such as halides, sulfates, nitrates, carboxylates and carbonates. The main aqueous species are the aquo complexes [Na(H2O)n]+, where n = 4–8; with n = 6 indicated from X-ray diffraction data and computer simulations. Direct precipitation of sodium salts from aqueous solutions is rare because sodium salts typically have a high affinity for water. An exception is sodium bismuthate (NaBiO3), which is insoluble in cold water and decomposes in hot water. Because of the high solubility of its compounds, sodium salts are usually isolated as solids by evaporation or by precipitation with an organic antisolvent, such as ethanol; for example, only 0.35 g/L of sodium chloride will dissolve in ethanol. A crown ether such as 15-crown-5 may be used as a phase-transfer catalyst. Sodium content of samples is determined by atomic absorption spectrophotometry or by potentiometry using ion-selective electrodes. Electrides and sodides Like the other alkali metals, sodium dissolves in ammonia and some amines to give deeply colored solutions; evaporation of these solutions leaves a shiny film of metallic sodium. The solutions contain the coordination complex [Na(NH3)6]+, with the positive charge counterbalanced by electrons as anions; cryptands permit the isolation of these complexes as crystalline solids. Sodium forms complexes with crown ethers, cryptands and other ligands. For example, 15-crown-5 has a high affinity for sodium because the cavity size of 15-crown-5 is 1.7–2.2 Å, which is enough to fit the sodium ion (1.9 Å). 
Cryptands, like crown ethers and other ionophores, also have a high affinity for the sodium ion; derivatives of the alkalide Na− are obtainable by the addition of cryptands to solutions of sodium in ammonia via disproportionation. Organosodium compounds Many organosodium compounds have been prepared. Because of the high polarity of the C-Na bonds, they behave like sources of carbanions (salts with organic anions). Some well-known derivatives include sodium cyclopentadienide (NaC5H5) and trityl sodium ((C6H5)3CNa). Sodium naphthalene, Na+[C10H8•]−, a strong reducing agent, forms upon mixing Na and naphthalene in ethereal solutions. Intermetallic compounds Sodium forms alloys with many metals, such as potassium, calcium, lead, and the group 11 and 12 elements. Sodium and potassium form KNa2 and NaK. NaK is 40–90% potassium and it is liquid at ambient temperature. It is an excellent thermal and electrical conductor. Sodium-calcium alloys are by-products of the electrolytic production of sodium from a binary salt mixture of NaCl-CaCl2 and ternary mixture NaCl-CaCl2-BaCl2. Calcium is only partially miscible with sodium, and the 1–2% of it dissolved in the sodium obtained from said mixtures can be precipitated by cooling to 120 °C and filtering. In a liquid state, sodium is completely miscible with lead. There are several methods to make sodium-lead alloys. One is to melt them together and another is to deposit sodium electrolytically on molten lead cathodes. NaPb3, NaPb, Na9Pb4, Na5Pb2, and Na15Pb4 are some of the known sodium-lead alloys. Sodium also forms alloys with gold (NaAu2) and silver (NaAg2). Group 12 metals (zinc, cadmium and mercury) are known to make alloys with sodium. NaZn13 and NaCd2 are alloys of zinc and cadmium. Sodium and mercury form NaHg, NaHg4, NaHg2, Na3Hg2, and Na3Hg. History Because of its importance in human health, salt has long been an important commodity. In medieval Europe, a compound of sodium with the Latin name of sodanum was used as a headache remedy. The name sodium is thought to originate from the Arabic suda, meaning headache, as the headache-alleviating properties of sodium carbonate or soda were well known in early times. Although sodium, sometimes called soda, had long been recognized in compounds, the metal itself was not isolated until 1807 by Sir Humphry Davy through the electrolysis of sodium hydroxide. In 1809, the German physicist and chemist Ludwig Wilhelm Gilbert proposed the names Natronium for Humphry Davy's "sodium" and Kalium for Davy's "potassium". The chemical abbreviation for sodium was first published in 1814 by Jöns Jakob Berzelius in his system of atomic symbols, and is an abbreviation of the element's Neo-Latin name natrium, which refers to the Egyptian natron, a natural mineral salt mainly consisting of hydrated sodium carbonate. Natron historically had several important industrial and household uses, later eclipsed by other sodium compounds. Sodium imparts an intense yellow color to flames. As early as 1860, Kirchhoff and Bunsen noted the high sensitivity of a sodium flame test, and stated in Annalen der Physik und Chemie: In a corner of our 60 m3 room farthest away from the apparatus, we exploded 3 mg of sodium chlorate with milk sugar while observing the nonluminous flame before the slit. After a while, it glowed a bright yellow and showed a strong sodium line that disappeared only after 10 minutes. 
From the weight of the sodium salt and the volume of air in the room, we easily calculate that one part by weight of air could not contain more than 1/20 millionth weight of sodium. Occurrence The Earth's crust contains 2.27% sodium, making it the sixth most abundant element on Earth and the fourth most abundant metal, behind aluminium, iron, and calcium, and ahead of magnesium and potassium. Sodium's estimated oceanic abundance is 10.8 grams per liter. Because of its high reactivity, it is never found as a pure element. It is found in many minerals, some very soluble, such as halite and natron, others much less soluble, such as amphibole and zeolite. The insolubility of certain sodium minerals such as cryolite and feldspar arises from their polymeric anions, which in the case of feldspar is a polysilicate. In the universe, sodium is the 15th most abundant element with a 20,000 parts-per-billion abundance, making sodium 0.002% of the total atoms in the universe. Astronomical observations Atomic sodium has a very strong spectral line in the yellow-orange part of the spectrum (the same line as is used in sodium-vapour street lights). This appears as an absorption line in many types of stars, including the Sun. The line was first studied in 1814 by Joseph von Fraunhofer during his investigation of the lines in the solar spectrum, now known as the Fraunhofer lines. Fraunhofer named it the "D" line, although it is now known to actually be a group of closely spaced lines split by a fine and hyperfine structure. The strength of the D line allows its detection in many other astronomical environments. In stars, it is seen in any whose surfaces are cool enough for sodium to exist in atomic form (rather than ionised). This corresponds to stars of roughly F-type and cooler. Many other stars appear to have a sodium absorption line, but this is actually caused by gas in the foreground interstellar medium. The two can be distinguished via high-resolution spectroscopy, because interstellar lines are much narrower than those broadened by stellar rotation. Sodium has also been detected in numerous Solar System environments, including the exospheres of Mercury and the Moon, and numerous other bodies. Some comets have a sodium tail, which was first detected in observations of Comet Hale–Bopp in 1997. Sodium has even been detected in the atmospheres of some extrasolar planets via transit spectroscopy. Commercial production Employed in rather specialized applications, about 100,000 tonnes of metallic sodium are produced annually. Metallic sodium was first produced commercially in the late nineteenth century by carbothermal reduction of sodium carbonate at 1100 °C, as the first step of the Deville process for the production of aluminium: Na2CO3 + 2 C → 2 Na + 3 CO The high demand for aluminium created the need for the production of sodium. The introduction of the Hall–Héroult process for the production of aluminium by electrolysing a molten salt bath ended the need for large quantities of sodium. A related process based on the reduction of sodium hydroxide was developed in 1886. Sodium is now produced commercially through the electrolysis of molten sodium chloride (common salt), based on a process patented in 1924. This is done in a Downs cell in which the NaCl is mixed with calcium chloride to lower the melting point below 700 °C. As calcium is less electropositive than sodium, no calcium will be deposited at the cathode.
This method is less expensive than the previous Castner process (the electrolysis of sodium hydroxide). If sodium of high purity is required, it can be distilled once or several times. The market for sodium is volatile due to the difficulty in its storage and shipping; it must be stored under a dry inert gas atmosphere or anhydrous mineral oil to prevent the formation of a surface layer of sodium oxide or sodium superoxide. Uses Though metallic sodium has some important uses, the major applications for sodium use compounds; millions of tons of sodium chloride, hydroxide, and carbonate are produced annually. Sodium chloride is extensively used for anti-icing and de-icing and as a preservative; examples of the uses of sodium bicarbonate include baking, as a raising agent, and sodablasting. Along with potassium, many important medicines have sodium added to improve their bioavailability; though potassium is the better ion in most cases, sodium is chosen for its lower price and atomic weight. Sodium hydride is used as a base for various reactions (such as the aldol reaction) in organic chemistry. Metallic sodium is used mainly for the production of sodium borohydride, sodium azide, indigo, and triphenylphosphine. A once-common use was the making of tetraethyllead and titanium metal; because of the move away from TEL and new titanium production methods, the production of sodium declined after 1970. Sodium is also used as an alloying metal, an anti-scaling agent, and as a reducing agent for metals when other materials are ineffective. Note the free element is not used as a scaling agent, ions in the water are exchanged for sodium ions. Sodium plasma ("vapor") lamps are often used for street lighting in cities, shedding light that ranges from yellow-orange to peach as the pressure increases. By itself or with potassium, sodium is a desiccant; it gives an intense blue coloration with benzophenone when the desiccate is dry. In organic synthesis, sodium is used in various reactions such as the Birch reduction, and the sodium fusion test is conducted to qualitatively analyse compounds. Sodium reacts with alcohols and gives alkoxides, and when sodium is dissolved in ammonia solution, it can be used to reduce alkynes to trans-alkenes. Lasers emitting light at the sodium D line are used to create artificial laser guide stars that assist in the adaptive optics for land-based visible-light telescopes. Heat transfer Liquid sodium is used as a heat transfer fluid in sodium-cooled fast reactors because it has the high thermal conductivity and low neutron absorption cross section required to achieve a high neutron flux in the reactor. The high boiling point of sodium allows the reactor to operate at ambient (normal) pressure, but drawbacks include its opacity, which hinders visual maintenance, and its strongly reducing properties. Sodium will explode in contact with water, although it will only burn gently in air. Radioactive sodium-24 may be produced by neutron bombardment during operation, posing a slight radiation hazard; the radioactivity stops within a few days after removal from the reactor. If a reactor needs to be shut down frequently, sodium-potassium alloy (NaK) is used. Because NaK is a liquid at room temperature, the coolant does not solidify in the pipes. The pyrophoricity of the NaK means extra precautions must be taken to prevent and detect leaks. 
Another heat transfer application of sodium is in poppet valves in high-performance internal combustion engines; the valve stems are partially filled with sodium and work as a heat pipe to cool the valves. Biological role Biological role in humans In humans, sodium is an essential mineral that regulates blood volume, blood pressure, osmotic equilibrium and pH. The minimum physiological requirement for sodium is estimated to range from about 120 milligrams per day in newborns to 500 milligrams per day over the age of 10. Diet Sodium chloride, also known as 'edible salt' or 'table salt' (chemical formula NaCl), is the principal source of sodium (Na) in the diet and is used as seasoning and preservative in such commodities as pickled preserves and jerky. For Americans, most sodium chloride comes from processed foods. Other sources of sodium are its natural occurrence in food and such food additives as monosodium glutamate (MSG), sodium nitrite, sodium saccharin, baking soda (sodium bicarbonate), and sodium benzoate. The U.S. Institute of Medicine set its tolerable upper intake level for sodium at 2.3 grams per day, but the average person in the United States consumes 3.4 grams per day. The American Heart Association recommends no more than 1.5 g of sodium per day. The Committee to Review the Dietary Reference Intakes for Sodium and Potassium, which is part of the National Academies of Sciences, Engineering, and Medicine, has determined that there is not enough evidence from research studies to establish Estimated Average Requirement (EAR) and Recommended Dietary Allowance (RDA) values for sodium. As a result, the committee has established Adequate Intake (AI) levels instead, as follows. The sodium AI for infants of 0–6 months is established at 110 mg/day, 7–12 months: 370 mg/day; for children 1–3 years: 800 mg/day, 4–8 years: 1,000 mg/day; for adolescents: 9–13 years – 1,200 mg/day, 14–18 years: 1,500 mg/day; for adults regardless of their age or sex: 1,500 mg/day. Sodium chloride (NaCl) contains approximately 39.34% of its total mass as elemental sodium (Na). This means that 1,000 mg of sodium chloride contains approximately 393.4 mg of elemental sodium. For example, to find out how much sodium chloride contains 1500 mg of elemental sodium (1500 mg of sodium being the adequate intake (AI) for an adult), we can use the proportion: 393.4 mg Na : 1000 mg NaCl = 1500 mg Na : x mg NaCl. Solving for x gives the amount of sodium chloride that contains 1500 mg of elemental sodium: x = (1500 mg Na × 1000 mg NaCl) / 393.4 mg Na = 3812.91 mg. This means that 3812.91 mg of sodium chloride contains 1500 mg of elemental sodium. High sodium consumption High sodium consumption is unhealthy, and can lead to alteration in the mechanical performance of the heart. High sodium consumption is also associated with chronic kidney disease, high blood pressure, cardiovascular diseases, and stroke. High blood pressure There is a strong correlation between higher sodium intake and higher blood pressure. Studies have found that lowering sodium intake by 2 g per day tends to lower systolic blood pressure by about two to four mm Hg. It has been estimated that such a decrease in sodium intake would lead to 9–17% fewer cases of hypertension. Hypertension causes 7.6 million premature deaths worldwide each year. Edible salt is about 39.3% sodium by mass, the rest being chlorine and trace chemicals; thus, 2.3 g of sodium corresponds to about 5.9 g, or 5.3 ml, of salt, which is about one US teaspoon.
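The conversion just described is simple proportional arithmetic. The short Python sketch below restates it; it is only an illustration (the function names are mine, not from any standard library), and it derives the roughly 39.3% mass fraction from the approximate molar masses of sodium and chlorine.

NA_MOLAR = 22.99   # g/mol, sodium
CL_MOLAR = 35.45   # g/mol, chlorine
NA_FRACTION = NA_MOLAR / (NA_MOLAR + CL_MOLAR)   # ≈ 0.3934, i.e. about 39.34% of NaCl is Na

def nacl_containing_sodium(sodium_mg):
    # Mass of sodium chloride (mg) that contains the given mass of elemental sodium.
    return sodium_mg / NA_FRACTION

def sodium_in_nacl(nacl_mg):
    # Mass of elemental sodium (mg) in the given mass of sodium chloride.
    return nacl_mg * NA_FRACTION

print(round(nacl_containing_sodium(1500), 2))  # ≈ 3812.96 mg, matching the ~3812.91 mg worked out above
print(round(sodium_in_nacl(5900), 0))          # ≈ 2321 mg, i.e. about 2.3 g of sodium in 5.9 g of salt

Either direction of the conversion is just multiplication or division by that mass fraction.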
One scientific review found that people with or without hypertension who excreted less than 3 grams of sodium per day in their urine (and therefore were taking in less than 3 g/d) had a higher risk of death, stroke, or heart attack than those excreting 4 to 5 grams per day. Levels of 7 g per day or more in people with hypertension were associated with higher mortality and cardiovascular events, but this was not found to be true for people without hypertension. The US FDA states that adults with hypertension and prehypertension should reduce daily sodium intake to 1.5 g. Physiology The renin–angiotensin system regulates the amount of fluid and sodium concentration in the body. Reduction of blood pressure and sodium concentration in the kidney result in the production of renin, which in turn produces aldosterone and angiotensin, which stimulates the reabsorption of sodium back into the bloodstream. When the concentration of sodium increases, the production of renin decreases, and the sodium concentration returns to normal. The sodium ion (Na+) is an important electrolyte in neuron function, and in osmoregulation between cells and the extracellular fluid. This is accomplished in all animals by Na+/K+-ATPase, an active transporter pumping ions against the gradient, and sodium/potassium channels. The difference in extracellular and intracellular ion concentration, maintained by the sodium-potassium pump, produce electrical signals in the form of action potentials that supports cardiac muscle contraction and promote long distance communication between neurons. Sodium is the most prevalent metallic ion in extracellular fluid. In humans, unusually low or high sodium levels in the blood is recognized in medicine as hyponatremia and hypernatremia. These conditions may be caused by genetic factors, ageing, or prolonged vomiting or diarrhea. Biological role in plants In C4 plants, sodium is a micronutrient that aids metabolism, specifically in regeneration of phosphoenolpyruvate and synthesis of chlorophyll. In others, it substitutes for potassium in several roles, such as maintaining turgor pressure and aiding in the opening and closing of stomata. Excess sodium in the soil can limit the uptake of water by decreasing the water potential, which may result in plant wilting; excess concentrations in the cytoplasm can lead to enzyme inhibition, which in turn causes necrosis and chlorosis. In response, some plants have developed mechanisms to limit sodium uptake in the roots, to store it in cell vacuoles, and restrict salt transport from roots to leaves. Excess sodium may also be stored in old plant tissue, limiting the damage to new growth. Halophytes have adapted to be able to flourish in sodium rich environments. Safety and precautions Sodium forms flammable hydrogen and caustic sodium hydroxide on contact with water; ingestion and contact with moisture on skin, eyes or mucous membranes can cause severe burns. Sodium spontaneously explodes in the presence of water due to the formation of hydrogen (highly explosive) and sodium hydroxide (which dissolves in the water, liberating more surface). However, sodium exposed to air and ignited or reaching autoignition (reported to occur when a molten pool of sodium reaches about ) displays a relatively mild fire. In the case of massive (non-molten) pieces of sodium, the reaction with oxygen eventually becomes slow due to formation of a protective layer. Fire extinguishers based on water accelerate sodium fires. 
Those based on carbon dioxide and bromochlorodifluoromethane should not be used on sodium fires. Metal fires are Class D, but not all Class D extinguishers are effective when used to extinguish sodium fires. An effective extinguishing agent for sodium fires is Met-L-X. Other effective agents include Lith-X, which has graphite powder and an organophosphate flame retardant, and dry sand. Sodium fires are prevented in nuclear reactors by isolating sodium from oxygen with surrounding pipes containing inert gas. Pool-type sodium fires are prevented using diverse design measures called catch pan systems. They collect leaking sodium into a leak-recovery tank where it is isolated from oxygen. Liquid sodium fires are more dangerous to handle than solid sodium fires, particularly if there is insufficient experience with the safe handling of molten sodium; these hazards are discussed by R. J. Gordon in a technical report for the United States Fire Administration. See also References Bibliography External links Sodium at The Periodic Table of Videos (University of Nottingham) Etymology of "natrium" – source of symbol Na The Wooden Periodic Table Table's Entry on Sodium Sodium isotopes data from The Berkeley Laboratory Isotopes Project
Sodium
[ "Physics", "Chemistry" ]
5,624
[ "Chemical elements", "Redox", "Reducing agents", "Desiccants", "Materials", "Atoms", "Matter" ]
26,872
https://en.wikipedia.org/wiki/SI%20base%20unit
The SI base units are the standard units of measurement defined by the International System of Units (SI) for the seven base quantities of what is now known as the International System of Quantities: they are notably a basic set from which all other SI units can be derived. The units and their physical quantities are the second for time, the metre (sometimes spelled meter) for length or distance, the kilogram for mass, the ampere for electric current, the kelvin for thermodynamic temperature, the mole for amount of substance, and the candela for luminous intensity. The SI base units are a fundamental part of modern metrology, and thus part of the foundation of modern science and technology. The SI base units form a set of mutually independent dimensions as required by dimensional analysis commonly employed in science and technology. The names and symbols of SI base units are written in lowercase, except the symbols of those named after a person, which are written with an initial capital letter. For example, the metre has the symbol m, but the kelvin has symbol K, because it is named after Lord Kelvin and the ampere with symbol A is named after André-Marie Ampère. Definitions On 20 May 2019, as the final act of the 2019 revision of the SI, the BIPM officially introduced the following new definitions, replacing the preceding definitions of the SI base units. 2019 revision of the SI New base unit definitions were adopted on 16 November 2018, and they became effective on 20 May 2019. The definitions of the base units have been modified several times since the Metre Convention in 1875, and new additions of base units have occurred. Since the redefinition of the metre in 1960, the kilogram had been the only base unit still defined directly in terms of a physical artefact, rather than a property of nature. This led to a number of the other SI base units being defined indirectly in terms of the mass of the same artefact; the mole, the ampere, and the candela were linked through their definitions to the mass of the International Prototype of the Kilogram, a roughly golfball-sized platinum–iridium cylinder stored in a vault near Paris. It has long been an objective in metrology to define the kilogram in terms of a fundamental constant, in the same way that the metre is now defined in terms of the speed of light. The 21st General Conference on Weights and Measures (CGPM, 1999) placed these efforts on an official footing, and recommended "that national laboratories continue their efforts to refine experiments that link the unit of mass to fundamental or atomic constants with a view to a future redefinition of the kilogram". Two possibilities attracted particular attention: the Planck constant and the Avogadro constant. In 2005, the International Committee for Weights and Measures (CIPM) approved preparation of new definitions for the kilogram, the ampere, and the kelvin and it noted the possibility of a new definition of the mole based on the Avogadro constant. The 23rd CGPM (2007) decided to postpone any formal change until the next General Conference in 2011. In a note to the CIPM in October 2009, Ian Mills, the President of the CIPM Consultative Committee – Units (CCU) catalogued the uncertainties of the fundamental constants of physics according to the current definitions and their values under the proposed new definition. 
He urged the CIPM to accept the proposed changes in the definition of the kilogram, ampere, kelvin, and mole so that they are referenced to the values of the fundamental constants, namely the Planck constant (h), the elementary charge (e), the Boltzmann constant (k), and the Avogadro constant (NA). This approach was approved in 2018, only after measurements of these constants were achieved with sufficient accuracy. See also International vocabulary of metrology International System of Quantities Non-SI units mentioned in the SI Metric prefix Physical constant References External links International Bureau of Weights and Measures National Physical Laboratory NIST – SI Base unit
SI base unit
[ "Physics", "Mathematics", "Engineering" ]
827
[ "Dimensional analysis", "Physical quantities", "SI base quantities", "Quantity", "Mechanical engineering" ]
26,884
https://en.wikipedia.org/wiki/Superconductivity
Superconductivity is a set of physical properties observed in superconductors: materials where electrical resistance vanishes and magnetic fields are expelled from the material. Unlike an ordinary metallic conductor, whose resistance decreases gradually as its temperature is lowered, even down to near absolute zero, a superconductor has a characteristic critical temperature below which the resistance drops abruptly to zero. An electric current through a loop of superconducting wire can persist indefinitely with no power source. The superconductivity phenomenon was discovered in 1911 by Dutch physicist Heike Kamerlingh Onnes. Like ferromagnetism and atomic spectral lines, superconductivity is a phenomenon which can only be explained by quantum mechanics. It is characterized by the Meissner effect, the complete cancellation of the magnetic field in the interior of the superconductor during its transition into the superconducting state. The occurrence of the Meissner effect indicates that superconductivity cannot be understood simply as the idealization of perfect conductivity in classical physics. In 1986, it was discovered that some cuprate-perovskite ceramic materials have a critical temperature above 30 K. It was soon found (by Ching-Wu Chu) that replacing the lanthanum with yttrium, i.e. making YBCO, raised the critical temperature to about 92 K, which was important because liquid nitrogen could then be used as a refrigerant. Such a high transition temperature is theoretically impossible for a conventional superconductor, leading the materials to be termed high-temperature superconductors. The cheaply available coolant liquid nitrogen boils at 77 K, and thus the existence of superconductivity at higher temperatures than this facilitates many experiments and applications that are less practical at lower temperatures. History Superconductivity was discovered on April 8, 1911, by Heike Kamerlingh Onnes, who was studying the resistance of solid mercury at cryogenic temperatures using the recently produced liquid helium as a refrigerant. At the temperature of 4.2 K, he observed that the resistance abruptly disappeared. In the same experiment, he also observed the superfluid transition of helium at 2.2 K, without recognizing its significance. The precise date and circumstances of the discovery were only reconstructed a century later, when Onnes's notebook was found. In subsequent decades, superconductivity was observed in several other materials. In 1913, lead was found to superconduct at 7 K, and in 1941 niobium nitride was found to superconduct at 16 K. Great efforts have been devoted to finding out how and why superconductivity works; the important step occurred in 1933, when Meissner and Ochsenfeld discovered that superconductors expelled applied magnetic fields, a phenomenon which has come to be known as the Meissner effect. In 1935, Fritz and Heinz London showed that the Meissner effect was a consequence of the minimization of the electromagnetic free energy carried by superconducting current.
A major triumph of the equations of this theory is their ability to explain the Meissner effect, wherein a material exponentially expels all internal magnetic fields as it crosses the superconducting threshold. By using the London equation, one can obtain the dependence of the magnetic field inside the superconductor on the distance to the surface. The two constitutive equations for a superconductor by London are ∂js/∂t = (ns e²/m) E and ∇ × js = −(ns e²/m) B, where js is the superconducting current density, E and B are the electric and magnetic fields inside the superconductor, e is the charge and m the mass of an electron, and ns is the density of superconducting electrons. The first equation follows from Newton's second law for superconducting electrons. Conventional theories (1950s) During the 1950s, theoretical condensed matter physicists arrived at an understanding of "conventional" superconductivity, through a pair of remarkable and important theories: the phenomenological Ginzburg–Landau theory (1950) and the microscopic BCS theory (1957). In 1950, the phenomenological Ginzburg–Landau theory of superconductivity was devised by Landau and Ginzburg. This theory, which combined Landau's theory of second-order phase transitions with a Schrödinger-like wave equation, had great success in explaining the macroscopic properties of superconductors. In particular, Abrikosov showed that Ginzburg–Landau theory predicts the division of superconductors into the two categories now referred to as Type I and Type II. Abrikosov and Ginzburg were awarded the 2003 Nobel Prize for their work (Landau had received the 1962 Nobel Prize for other work, and died in 1968). The four-dimensional extension of the Ginzburg–Landau theory, the Coleman–Weinberg model, is important in quantum field theory and cosmology. Also in 1950, Maxwell and Reynolds et al. found that the critical temperature of a superconductor depends on the isotopic mass of the constituent element. This important discovery pointed to the electron–phonon interaction as the microscopic mechanism responsible for superconductivity. The complete microscopic theory of superconductivity was finally proposed in 1957 by Bardeen, Cooper and Schrieffer. This BCS theory explained the superconducting current as a superfluid of Cooper pairs, pairs of electrons interacting through the exchange of phonons. For this work, the authors were awarded the Nobel Prize in 1972. The BCS theory was set on a firmer footing in 1958, when N. N. Bogolyubov showed that the BCS wavefunction, which had originally been derived from a variational argument, could be obtained using a canonical transformation of the electronic Hamiltonian. In 1959, Lev Gor'kov showed that the BCS theory reduced to the Ginzburg–Landau theory close to the critical temperature. Generalizations of BCS theory for conventional superconductors form the basis for the understanding of the phenomenon of superfluidity, because they fall into the lambda transition universality class. The extent to which such generalizations can be applied to unconventional superconductors is still controversial. Further history The first practical application of superconductivity was developed in 1954 with Dudley Allen Buck's invention of the cryotron. Two superconductors with greatly different values of the critical magnetic field are combined to produce a fast, simple switch for computer elements. Soon after discovering superconductivity in 1911, Kamerlingh Onnes attempted to make an electromagnet with superconducting windings but found that relatively low magnetic fields destroyed superconductivity in the materials he investigated. Much later, in 1955, G. B. Yntema succeeded in constructing a small 0.7-tesla iron-core electromagnet with superconducting niobium wire windings.
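Taking the curl of the second London equation and combining it with Ampère's law gives ∇²B = B/λ², so an applied field falls off exponentially inside the material over the London penetration depth λ = √(m/(μ0 ns e²)). The following is a minimal Python sketch of this relation; the constants are standard SI values, but the superfluid density ns is an assumed, purely illustrative number.

    import math

    # Physical constants (SI)
    mu0 = 4e-7 * math.pi          # vacuum permeability, T*m/A
    e = 1.602176634e-19           # elementary charge, C
    m_e = 9.1093837015e-31        # electron mass, kg

    def london_penetration_depth(n_s):
        """London penetration depth: lambda = sqrt(m / (mu0 * n_s * e^2))."""
        return math.sqrt(m_e / (mu0 * n_s * e ** 2))

    def field_inside(B_surface, depth, lam):
        """Exponentially screened field B(x) = B(0) * exp(-x / lambda)."""
        return B_surface * math.exp(-depth / lam)

    n_s = 1e28                    # superconducting carriers per m^3 (assumed, illustrative)
    lam = london_penetration_depth(n_s)
    print(f"lambda ≈ {lam * 1e9:.0f} nm")                                     # tens of nanometres
    print(f"B at depth 5*lambda ≈ {field_inside(1.0, 5 * lam, lam):.3f} of the surface value")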
Then, in 1961, J. E. Kunzler, E. Buehler, F. S. L. Hsu, and J. H. Wernick made the startling discovery that, at 4.2 kelvin, niobium–tin, a compound consisting of three parts niobium and one part tin, was capable of supporting a current density of more than 100,000 amperes per square centimeter in a magnetic field of 8.8 tesla. Despite being brittle and difficult to fabricate, niobium–tin has since proved extremely useful in supermagnets generating magnetic fields as high as 20 tesla. In 1962, T. G. Berlincourt and R. R. Hake discovered that more ductile alloys of niobium and titanium are suitable for applications up to 10 tesla. Promptly thereafter, commercial production of niobium–titanium supermagnet wire commenced at Westinghouse Electric Corporation and at Wah Chang Corporation. Although niobium–titanium boasts less-impressive superconducting properties than those of niobium–tin, niobium–titanium has, nevertheless, become the most widely used "workhorse" supermagnet material, in large measure a consequence of its very high ductility and ease of fabrication. However, both niobium–tin and niobium–titanium find wide application in MRI medical imagers, bending and focusing magnets for enormous high-energy-particle accelerators, and a host of other applications. Conectus, a European superconductivity consortium, estimated that in 2014, global economic activity for which superconductivity was indispensable amounted to about five billion euros, with MRI systems accounting for about 80% of that total. In 1962, Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator. This phenomenon, now called the Josephson effect, is exploited by superconducting devices such as SQUIDs. It is used in the most accurate available measurements of the magnetic flux quantum Φ0 = h/(2e), where h is the Planck constant. Coupled with the quantum Hall resistivity, this leads to a precise measurement of the Planck constant. Josephson was awarded the Nobel Prize for this work in 1973. In 2008, it was proposed that the same mechanism that produces superconductivity could produce a superinsulator state in some materials, with almost infinite electrical resistance. The first development and study of superconducting Bose–Einstein condensate (BEC) in 2020 suggests that there is a "smooth transition between" BEC and Bardeen-Cooper-Shrieffer regimes. Classification There are many criteria by which superconductors are classified. The most common are: Response to a magnetic field A superconductor can be Type I, meaning it has a single critical field, above which all superconductivity is lost and below which the magnetic field is completely expelled from the superconductor; or Type II, meaning it has two critical fields, between which it allows partial penetration of the magnetic field through isolated points. These points are called vortices. Furthermore, in multicomponent superconductors it is possible to have a combination of the two behaviours. In that case the superconductor is of Type-1.5. By theory of operation A superconductor is conventional if it is driven by electron–phonon interaction and explained by the usual BCS theory or its extension, the Eliashberg theory. Otherwise, it is unconventional. Alternatively, a superconductor is called unconventional if the superconducting order parameter transforms according to a non-trivial irreducible representation of the point group or space group of the system. 
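Since the flux quantum Φ0 = h/(2e) mentioned above is fixed by constants that are exact in the SI, its value follows in one line; a trivial sketch:

    # Magnetic flux quantum from the exact SI values of h and e
    h = 6.62607015e-34      # Planck constant, J*s
    e = 1.602176634e-19     # elementary charge, C

    phi_0 = h / (2 * e)     # magnetic flux quantum, Wb
    print(f"Phi_0 = {phi_0:.6e} Wb")   # ≈ 2.0678e-15 Wb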
By critical temperature A superconductor is generally considered high-temperature if it reaches a superconducting state above a temperature of 30 K (−243.15 °C); as in the initial discovery by Georg Bednorz and K. Alex Müller. It may also reference materials that transition to superconductivity when cooled using liquid nitrogen – that is, at only Tc > 77 K, although this is generally used only to emphasize that liquid nitrogen coolant is sufficient. Low temperature superconductors refer to materials with a critical temperature below 30 K, and are cooled mainly by liquid helium (Tc > 4.2 K). One exception to this rule is the iron pnictide group of superconductors which display behaviour and properties typical of high-temperature superconductors, yet some of the group have critical temperatures below 30 K. By material Superconductor material classes include chemical elements (e.g. mercury or lead), alloys (such as niobium–titanium, germanium–niobium, and niobium nitride), ceramics (YBCO and magnesium diboride), superconducting pnictides (like fluorine-doped LaOFeAs) or organic superconductors (fullerenes and carbon nanotubes; though perhaps these examples should be included among the chemical elements, as they are composed entirely of carbon). Elementary properties Several physical properties of superconductors vary from material to material, such as the critical temperature, the value of the superconducting gap, the critical magnetic field, and the critical current density at which superconductivity is destroyed. On the other hand, there is a class of properties that are independent of the underlying material. The Meissner effect, the quantization of the magnetic flux or permanent currents, i.e. the state of zero resistance are the most important examples. The existence of these "universal" properties is rooted in the nature of the broken symmetry of the superconductor and the emergence of off-diagonal long range order. Superconductivity is a thermodynamic phase, and thus possesses certain distinguishing properties which are largely independent of microscopic details. Off diagonal long range order is closely connected to the formation of Cooper pairs. Zero electrical DC resistance The simplest method to measure the electrical resistance of a sample of some material is to place it in an electrical circuit in series with a current source I and measure the resulting voltage V across the sample. The resistance of the sample is given by Ohm's law as R = V / I. If the voltage is zero, this means that the resistance is zero. Superconductors are also able to maintain a current with no applied voltage whatsoever, a property exploited in superconducting electromagnets such as those found in MRI machines. Experiments have demonstrated that currents in superconducting coils can persist for years without any measurable degradation. Experimental evidence points to a lifetime of at least 100,000 years. Theoretical estimates for the lifetime of a persistent current can exceed the estimated lifetime of the universe, depending on the wire geometry and the temperature. In practice, currents injected in superconducting coils persisted for 28 years, 7 months, 27 days in a superconducting gravimeter in Belgium, from August 4, 1995 until March 31, 2024. In such instruments, the measurement is based on the monitoring of the levitation of a superconducting niobium sphere with a mass of four grams. 
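The statement that persistent currents decay on timescales of at least 100,000 years can be related to the residual resistance of the loop through the ordinary decay law I(t) = I0 exp(−Rt/L). A rough sketch, in which the loop inductance L is an assumed illustrative value, shows how small R would have to be:

    import math

    def decay_time_constant(L, R):
        """Time constant tau = L / R for a current circulating in a closed loop (seconds)."""
        return L / R

    L = 1.0                        # loop inductance in henry (assumed, illustrative)
    year = 365.25 * 24 * 3600      # seconds per year
    tau_target = 1e5 * year        # a 100,000-year decay time

    R_max = L / tau_target         # largest resistance compatible with that lifetime
    print(f"R must be below ≈ {R_max:.2e} ohm")   # ~3e-13 ohm for L = 1 H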
In a normal conductor, an electric current may be visualized as a fluid of electrons moving across a heavy ionic lattice. The electrons are constantly colliding with the ions in the lattice, and during each collision some of the energy carried by the current is absorbed by the lattice and converted into heat, which is essentially the vibrational kinetic energy of the lattice ions. As a result, the energy carried by the current is constantly being dissipated. This is the phenomenon of electrical resistance and Joule heating. The situation is different in a superconductor. In a conventional superconductor, the electronic fluid cannot be resolved into individual electrons. Instead, it consists of bound pairs of electrons known as Cooper pairs. This pairing is caused by an attractive force between electrons from the exchange of phonons. This pairing is very weak, and small thermal vibrations can fracture the bond. Due to quantum mechanics, the energy spectrum of this Cooper pair fluid possesses an energy gap, meaning there is a minimum amount of energy ΔE that must be supplied in order to excite the fluid. Therefore, if ΔE is larger than the thermal energy of the lattice, given by kT, where k is the Boltzmann constant and T is the temperature, the fluid will not be scattered by the lattice. The Cooper pair fluid is thus a superfluid, meaning it can flow without energy dissipation. In the class of superconductors known as type II superconductors, including all known high-temperature superconductors, an extremely low but non-zero resistivity appears at temperatures not too far below the nominal superconducting transition when an electric current is applied in conjunction with a strong magnetic field, which may be caused by the electric current. This is due to the motion of magnetic vortices in the electronic superfluid, which dissipates some of the energy carried by the current. If the current is sufficiently small, the vortices are stationary, and the resistivity vanishes. The resistance due to this effect is minuscule compared with that of non-superconducting materials, but must be taken into account in sensitive experiments. However, as the temperature decreases far enough below the nominal superconducting transition, these vortices can become frozen into a disordered but stationary phase known as a "vortex glass". Below this vortex glass transition temperature, the resistance of the material becomes truly zero. Phase transition In superconducting materials, the characteristics of superconductivity appear when the temperature T is lowered below a critical temperature Tc. The value of this critical temperature varies from material to material. Conventional superconductors usually have critical temperatures ranging from around 20 K to less than 1 K. Solid mercury, for example, has a critical temperature of 4.2 K. As of 2015, the highest critical temperature found for a conventional superconductor is 203 K for H2S, although high pressures of approximately 90 gigapascals were required. Cuprate superconductors can have much higher critical temperatures: YBa2Cu3O7, one of the first cuprate superconductors to be discovered, has a critical temperature above 90 K, and mercury-based cuprates have been found with critical temperatures in excess of 130 K. The basic physical mechanism responsible for the high critical temperature is not yet clear. However, it is clear that a two-electron pairing is involved, although the nature of the pairing ( wave vs. wave) remains controversial. 
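The criterion that the energy gap ΔE exceed the thermal energy kT can be put in rough numbers using the weak-coupling BCS estimate Δ(0) ≈ 1.764 kB Tc. The sketch below applies this estimate to critical temperatures quoted in this section; for the cuprates the BCS factor is only a crude guide, so the outputs are indicative rather than measured values.

    K_B = 1.380649e-23           # Boltzmann constant, J/K
    EV = 1.602176634e-19         # joules per electronvolt

    def bcs_gap(Tc):
        """Weak-coupling BCS zero-temperature gap: Delta(0) ≈ 1.764 * k_B * Tc (joules)."""
        return 1.764 * K_B * Tc

    for name, Tc in [("solid mercury", 4.2), ("H2S under pressure", 203.0), ("YBa2Cu3O7", 90.0)]:
        gap_mev = bcs_gap(Tc) / EV * 1e3
        thermal_mev = K_B * Tc / EV * 1e3
        print(f"{name}: Tc = {Tc} K, Delta(0) ≈ {gap_mev:.2f} meV vs k_B*Tc ≈ {thermal_mev:.2f} meV")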
Similarly, at a fixed temperature below the critical temperature, superconducting materials cease to superconduct when an external magnetic field is applied which is greater than the critical magnetic field. This is because the Gibbs free energy of the superconducting phase increases quadratically with the magnetic field while the free energy of the normal phase is roughly independent of the magnetic field. If the material superconducts in the absence of a field, then the superconducting phase free energy is lower than that of the normal phase and so for some finite value of the magnetic field (proportional to the square root of the difference of the free energies at zero magnetic field) the two free energies will be equal and a phase transition to the normal phase will occur. More generally, a higher temperature and a stronger magnetic field lead to a smaller fraction of electrons that are superconducting and consequently to a longer London penetration depth of external magnetic fields and currents. The penetration depth becomes infinite at the phase transition. The onset of superconductivity is accompanied by abrupt changes in various physical properties, which is the hallmark of a phase transition. For example, the electronic heat capacity is proportional to the temperature in the normal (non-superconducting) regime. At the superconducting transition, it suffers a discontinuous jump and thereafter ceases to be linear. At low temperatures, it varies instead as e−α/T for some constant, α. This exponential behavior is one of the pieces of evidence for the existence of the energy gap. The order of the superconducting phase transition was long a matter of debate. Experiments indicate that the transition is second-order, meaning there is no latent heat. However, in the presence of an external magnetic field there is latent heat, because the superconducting phase has a lower entropy below the critical temperature than the normal phase. It has been experimentally demonstrated that, as a consequence, when the magnetic field is increased beyond the critical field, the resulting phase transition leads to a decrease in the temperature of the superconducting material. Calculations in the 1970s suggested that it may actually be weakly first-order due to the effect of long-range fluctuations in the electromagnetic field. In the 1980s it was shown theoretically with the help of a disorder field theory, in which the vortex lines of the superconductor play a major role, that the transition is of second order within the type II regime and of first order (i.e., latent heat) within the type I regime, and that the two regions are separated by a tricritical point. The results were strongly supported by Monte Carlo computer simulations. Meissner effect When a superconductor is placed in a weak external magnetic field H, and cooled below its transition temperature, the magnetic field is ejected. The Meissner effect does not cause the field to be completely ejected but instead, the field penetrates the superconductor but only to a very small distance, characterized by a parameter λ, called the London penetration depth, decaying exponentially to zero within the bulk of the material. The Meissner effect is a defining characteristic of superconductivity. For most superconductors, the London penetration depth is on the order of 100 nm. 
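The temperature dependence of the critical field discussed above is often summarized by the empirical parabolic approximation Hc(T) ≈ Hc(0)[1 − (T/Tc)²]. This is an approximation rather than an exact result; the short sketch below evaluates it with illustrative, lead-like values of Tc and Hc(0).

    def critical_field(T, Tc, Hc0):
        """Empirical parabolic approximation: Hc(T) = Hc(0) * (1 - (T/Tc)^2), zero above Tc."""
        if T >= Tc:
            return 0.0
        return Hc0 * (1.0 - (T / Tc) ** 2)

    # Illustrative, lead-like numbers: Tc ≈ 7.2 K, Hc(0) ≈ 0.08 T
    Tc, Hc0 = 7.2, 0.08
    for T in (0.0, 2.0, 4.0, 6.0, 7.0):
        print(f"T = {T:>3} K  ->  Hc ≈ {critical_field(T, Tc, Hc0) * 1e3:.1f} mT")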
The Meissner effect is sometimes confused with the kind of diamagnetism one would expect in a perfect electrical conductor: according to Lenz's law, when a changing magnetic field is applied to a conductor, it will induce an electric current in the conductor that creates an opposing magnetic field. In a perfect conductor, an arbitrarily large current can be induced, and the resulting magnetic field exactly cancels the applied field. The Meissner effect is distinct from this: it is the spontaneous expulsion that occurs during transition to superconductivity. Suppose we have a material in its normal state, containing a constant internal magnetic field. When the material is cooled below the critical temperature, we would observe the abrupt expulsion of the internal magnetic field, which we would not expect based on Lenz's law. The Meissner effect was given a phenomenological explanation by the brothers Fritz and Heinz London, who showed that the electromagnetic free energy in a superconductor is minimized provided ∇²H = H/λ², where H is the magnetic field and λ is the London penetration depth. This equation, which is known as the London equation, predicts that the magnetic field in a superconductor decays exponentially from whatever value it possesses at the surface. A superconductor with little or no magnetic field within it is said to be in the Meissner state. The Meissner state breaks down when the applied magnetic field is too large. Superconductors can be divided into two classes according to how this breakdown occurs. In Type I superconductors, superconductivity is abruptly destroyed when the strength of the applied field rises above a critical value Hc. Depending on the geometry of the sample, one may obtain an intermediate state consisting of a baroque pattern of regions of normal material carrying a magnetic field mixed with regions of superconducting material containing no field. In Type II superconductors, raising the applied field past a critical value Hc1 leads to a mixed state (also known as the vortex state) in which an increasing amount of magnetic flux penetrates the material, but there remains no resistance to the flow of electric current as long as the current is not too large. At a second critical field strength Hc2, superconductivity is destroyed. The mixed state is actually caused by vortices in the electronic superfluid, sometimes called fluxons because the flux carried by these vortices is quantized. Most pure elemental superconductors, except niobium and carbon nanotubes, are Type I, while almost all impure and compound superconductors are Type II. London moment Conversely, a spinning superconductor generates a magnetic field, precisely aligned with the spin axis. The effect, the London moment, was put to good use in Gravity Probe B. This experiment measured the magnetic fields of four superconducting gyroscopes to determine their spin axes. This was critical to the experiment since it is one of the few ways to accurately determine the spin axis of an otherwise featureless sphere. High-temperature superconductivity Until 1986, physicists had believed that BCS theory forbade superconductivity at temperatures above about 30 K. In that year, Bednorz and Müller discovered superconductivity in lanthanum barium copper oxide (LBCO), a lanthanum-based cuprate perovskite material, which had a transition temperature of 35 K (Nobel Prize in Physics, 1987). It was soon found that replacing the lanthanum with yttrium (i.e., making YBCO) raised the critical temperature above 90 K.
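In Ginzburg–Landau theory the Type I / Type II distinction described above is governed by the parameter κ = λ/ξ, the ratio of the penetration depth to the coherence length: κ < 1/√2 gives Type I behaviour and κ > 1/√2 gives Type II. A minimal sketch with order-of-magnitude length scales (assumed, not tabulated values):

    import math

    def classify(lambda_nm, xi_nm):
        """Classify a superconductor by the Ginzburg–Landau parameter kappa = lambda / xi."""
        kappa = lambda_nm / xi_nm
        kind = "Type I" if kappa < 1 / math.sqrt(2) else "Type II"
        return kappa, kind

    # Rough, illustrative length scales in nanometres
    for name, lam, xi in [("clean elemental metal", 50, 1500), ("cuprate-like material", 200, 2)]:
        kappa, kind = classify(lam, xi)
        print(f"{name}: kappa ≈ {kappa:.3f} -> {kind}")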
This temperature jump is of particular engineering significance, since it allows liquid nitrogen as a refrigerant, replacing liquid helium. Liquid nitrogen can be produced relatively cheaply, even on-site. The higher temperatures additionally help to avoid some of the problems that arise at liquid helium temperatures, such as the formation of plugs of frozen air that can block cryogenic lines and cause unanticipated and potentially hazardous pressure buildup. Many other cuprate superconductors have since been discovered, and the theory of superconductivity in these materials is one of the major outstanding challenges of theoretical condensed matter physics. There are currently two main hypotheses – the resonating-valence-bond theory, and spin fluctuation, which has the most support in the research community. The second hypothesis proposed that electron pairing in high-temperature superconductors is mediated by short-range spin waves known as paramagnons. In 2008, holographic superconductivity, which uses holographic duality or AdS/CFT correspondence theory, was proposed by Gubser, Hartnoll, Herzog, and Horowitz, as a possible explanation of high-temperature superconductivity in certain materials. From about 1993, the highest-temperature superconductor known was a ceramic material consisting of mercury, barium, calcium, copper and oxygen (HgBa2Ca2Cu3O8+δ), with a critical temperature of about 133 K. In February 2008, an iron-based family of high-temperature superconductors was discovered. Hideo Hosono, of the Tokyo Institute of Technology, and colleagues found lanthanum oxygen fluorine iron arsenide (LaO1−xFxFeAs), an oxypnictide that superconducts below 26 K. Replacing the lanthanum in LaO1−xFxFeAs with samarium leads to superconductors that work at 55 K. In 2014 and 2015, hydrogen sulfide (H2S) at extremely high pressures (around 150 gigapascals) was first predicted and then confirmed to be a high-temperature superconductor with a transition temperature of 80 K. Additionally, in 2019 it was discovered that lanthanum hydride (LaH10) becomes a superconductor at 250 K under a pressure of 170 gigapascals. In 2018, a research team from the Department of Physics, Massachusetts Institute of Technology, discovered superconductivity in bilayer graphene with one layer twisted at an angle of approximately 1.1 degrees, upon cooling and applying a small electric charge. Although the experiments were not carried out in a high-temperature environment, the results resemble those of high-temperature rather than classical superconductors, given that no foreign atoms need to be introduced. The superconductivity effect came about as a result of electrons twisted into a vortex between the graphene layers, called "skyrmions". These act as a single particle and can pair up across the graphene's layers, leading to the basic conditions required for superconductivity. In 2020, a room-temperature superconductor (critical temperature 288 K) made from hydrogen, carbon and sulfur under pressures of around 270 gigapascals was described in a paper in Nature. However, in 2022 the article was retracted by the editors because the validity of background subtraction procedures had been called into question. All nine authors maintain that the raw data strongly support the main claims of the paper.
On 31 December 2023 "Global Room-Temperature Superconductivity in Graphite" was published in the journal "Advanced Quantum Technologies" claiming to demonstrate superconductivity at room temperature and ambient pressure in Highly oriented pyrolytic graphite with dense arrays of nearly parallel line defects. Applications Superconductors are promising candidate materials for devising fundamental circuit elements of electronic, spintronic, and quantum technologies. One such example is a superconducting diode, in which supercurrent flows along one direction only, that promise dissipationless superconducting and semiconducting-superconducting hybrid technologies. Superconducting magnets are some of the most powerful electromagnets known. They are used in MRI/NMR machines, mass spectrometers, the beam-steering magnets used in particle accelerators and plasma confining magnets in some tokamaks. They can also be used for magnetic separation, where weakly magnetic particles are extracted from a background of less or non-magnetic particles, as in the pigment industries. They can also be used in large wind turbines to overcome the restrictions imposed by high electrical currents, with an industrial grade 3.6 megawatt superconducting windmill generator having been tested successfully in Denmark. In the 1950s and 1960s, superconductors were used to build experimental digital computers using cryotron switches. More recently, superconductors have been used to make digital circuits based on rapid single flux quantum technology and RF and microwave filters for mobile phone base stations. Superconductors are used to build Josephson junctions which are the building blocks of SQUIDs (superconducting quantum interference devices), the most sensitive magnetometers known. SQUIDs are used in scanning SQUID microscopes and magnetoencephalography. Series of Josephson devices are used to realize the SI volt. Superconducting photon detectors can be realised in a variety of device configurations. Depending on the particular mode of operation, a superconductor–insulator–superconductor Josephson junction can be used as a photon detector or as a mixer. The large resistance change at the transition from the normal to the superconducting state is used to build thermometers in cryogenic micro-calorimeter photon detectors. The same effect is used in ultrasensitive bolometers made from superconducting materials. Superconducting nanowire single-photon detectors offer high speed, low noise single-photon detection and have been employed widely in advanced photon-counting applications. Other early markets are arising where the relative efficiency, size and weight advantages of devices based on high-temperature superconductivity outweigh the additional costs involved. For example, in wind turbines the lower weight and volume of superconducting generators could lead to savings in construction and tower costs, offsetting the higher costs for the generator and lowering the total levelized cost of electricity (LCOE). Promising future applications include high-performance smart grid, electric power transmission, transformers, power storage devices, compact fusion power devices, electric motors (e.g. for vehicle propulsion, as in vactrains or maglev trains), magnetic levitation devices, fault current limiters, enhancing spintronic devices with superconducting materials, and superconducting magnetic refrigeration. However, superconductivity is sensitive to moving magnetic fields, so applications that use alternating current (e.g. 
transformers) will be more difficult to develop than those that rely upon direct current. Compared to traditional power lines, superconducting transmission lines are more efficient and require only a fraction of the space, which would not only lead to a better environmental performance but could also improve public acceptance for expansion of the electric grid. Another attractive industrial aspect is the ability for high power transmission at lower voltages. Advancements in the efficiency of cooling systems and use of cheap coolants such as liquid nitrogen have also significantly decreased cooling costs needed for superconductivity. Nobel Prizes As of 2022, there have been five Nobel Prizes in Physics for superconductivity related subjects: Heike Kamerlingh Onnes (1913), "for his investigations on the properties of matter at low temperatures which led, inter alia, to the production of liquid helium". John Bardeen, Leon N. Cooper, and J. Robert Schrieffer (1972), "for their jointly developed theory of superconductivity, usually called the BCS-theory". Leo Esaki, Ivar Giaever, and Brian D. Josephson (1973), "for their experimental discoveries regarding tunneling phenomena in semiconductors and superconductors, respectively" and "for his theoretical predictions of the properties of a supercurrent through a tunnel barrier, in particular those phenomena which are generally known as the Josephson effects". Georg Bednorz and K. Alex Müller (1987), "for their important break-through in the discovery of superconductivity in ceramic materials". Alexei A. Abrikosov, Vitaly L. Ginzburg, and Anthony J. Leggett (2003), "for pioneering contributions to the theory of superconductors and superfluids". See also References Further reading IEC standard 60050-815:2000, International Electrotechnical Vocabulary (IEV) – Part 815: Superconductivity . Charlie Wood, Quanta Magazine (2022). "High-Temperature Superconductivity Understood at Last". External links Video about Type I Superconductors: R=0/transition temperatures/ B is a state variable/ Meissner effect/ Energy gap(Giaever)/ BCS model Lectures on Superconductivity (series of videos, including interviews with leading experts) YouTube Video Levitating magnet DoITPoMS Teaching and Learning Package – "Superconductivity" The Schrödinger Equation in a Classical Context: A Seminar on Superconductivity – The Feynman Lectures on Physics. Phases of matter Exotic matter Unsolved problems in physics Magnetic levitation Physical phenomena Spintronics Phase transitions Articles containing video clips Science and technology in the Netherlands Dutch inventions 1911 in science
Superconductivity
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
7,109
[ "Physical phenomena", "Phase transitions", "Matter", "Physical quantities", "Spintronics", "Superconductivity", "Phases of matter", "Critical phenomena", "Unsolved problems in physics", "Materials science", "Condensed matter physics", "Exotic matter", "Statistical mechanics", "Electrical r...
26,962
https://en.wikipedia.org/wiki/Special%20relativity
In physics, the special theory of relativity, or special relativity for short, is a scientific theory of the relationship between space and time. In Albert Einstein's 1905 paper, On the Electrodynamics of Moving Bodies, the theory is presented as being based on just two postulates: The laws of physics are invariant (identical) in all inertial frames of reference (that is, frames of reference with no acceleration). This is known as the principle of relativity. The speed of light in vacuum is the same for all observers, regardless of the motion of light source or observer. This is known as the principle of light constancy, or the principle of light speed invariance. The first postulate was first formulated by Galileo Galilei (see Galilean invariance). Origins and significance Special relativity was described by Albert Einstein in a paper published on 26 September 1905 titled "On the Electrodynamics of Moving Bodies". Maxwell's equations of electromagnetism appeared to be incompatible with Newtonian mechanics, and the Michelson–Morley experiment failed to detect the Earth's motion against the hypothesized luminiferous aether. These led to the development of the Lorentz transformations, by Hendrik Lorentz, which adjust distances and times for moving objects. Special relativity corrects the previously accepted laws of mechanics to handle situations involving all motions and especially those at a speed close to that of light (known as relativistic velocities). Today, special relativity has proven to be the most accurate model of motion at any speed when gravitational and quantum effects are negligible. Even so, the Newtonian model is still valid as a simple and accurate approximation at low velocities (relative to the speed of light), for example, everyday motions on Earth. Special relativity has a wide range of consequences that have been experimentally verified. These include the relativity of simultaneity, length contraction, time dilation, the relativistic velocity addition formula, the relativistic Doppler effect, relativistic mass, a universal speed limit, mass–energy equivalence, the speed of causality and the Thomas precession. It has, for example, replaced the conventional notion of an absolute universal time with the notion of a time that is dependent on reference frame and spatial position. Rather than an invariant time interval between two events, there is an invariant spacetime interval. Combined with other laws of physics, the two postulates of special relativity predict the equivalence of mass and energy, as expressed in the mass–energy equivalence formula E = mc², where c is the speed of light in vacuum. It also explains how the phenomena of electricity and magnetism are related. A defining feature of special relativity is the replacement of the Galilean transformations of Newtonian mechanics with the Lorentz transformations. Time and space cannot be defined separately from each other (as was previously thought to be the case). Rather, space and time are interwoven into a single continuum known as "spacetime". Events that occur at the same time for one observer can occur at different times for another. The phrase "special relativity" was not used until several years later, when Einstein developed general relativity, which introduced a curved spacetime to incorporate gravity. A translation sometimes used is "restricted relativity"; "special" really means "special case". Some of the work of Albert Einstein in special relativity is built on the earlier work by Hendrik Lorentz and Henri Poincaré.
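As a numerical illustration of the mass–energy equivalence E = mc² stated above, the rest energy locked up in even a small mass is enormous; a short sketch (the one-gram mass is an arbitrary example):

    c = 299_792_458.0          # speed of light in vacuum, m/s
    m = 1.0e-3                 # one gram of mass, kg
    E = m * c ** 2             # rest energy, joules
    print(f"E = {E:.3e} J")    # ≈ 9.0e13 J, roughly the energy released by ~21 kilotons of TNT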
The theory became essentially complete in 1907, with Hermann Minkowski's papers on spacetime. The theory is "special" in that it only applies in the special case where the spacetime is "flat", that is, where the curvature of spacetime (a consequence of the energy–momentum tensor and representing gravity) is negligible. To correctly accommodate gravity, Einstein formulated general relativity in 1915. Special relativity, contrary to some historical descriptions, does accommodate accelerations as well as accelerating frames of reference. Just as Galilean relativity is now accepted to be an approximation of special relativity that is valid for low speeds, special relativity is considered an approximation of general relativity that is valid for weak gravitational fields, that is, at a sufficiently small scale (e.g., when tidal forces are negligible) and in conditions of free fall. But general relativity incorporates non-Euclidean geometry to represent gravitational effects as the geometric curvature of spacetime. Special relativity is restricted to the flat spacetime known as Minkowski space. As long as the universe can be modeled as a pseudo-Riemannian manifold, a Lorentz-invariant frame that abides by special relativity can be defined for a sufficiently small neighborhood of each point in this curved spacetime. Galileo Galilei had already postulated that there is no absolute and well-defined state of rest (no privileged reference frames), a principle now called Galileo's principle of relativity. Einstein extended this principle so that it accounted for the constant speed of light, a phenomenon that had been observed in the Michelson–Morley experiment. He also postulated that it holds for all the laws of physics, including both the laws of mechanics and of electrodynamics. Traditional "two postulates" approach to special relativity Einstein discerned two fundamental propositions that seemed to be the most assured, regardless of the exact validity of the (then) known laws of either mechanics or electrodynamics. These propositions were the constancy of the speed of light in vacuum and the independence of physical laws (especially the constancy of the speed of light) from the choice of inertial system. In his initial presentation of special relativity in 1905 he expressed these postulates as: The principle of relativity – the laws by which the states of physical systems undergo change are not affected, whether these changes of state be referred to the one or the other of two systems in uniform translatory motion relative to each other. The principle of invariant light speed – "... light is always propagated in empty space with a definite velocity [speed] c which is independent of the state of motion of the emitting body" (from the preface). That is, light in vacuum propagates with the speed c (a fixed constant, independent of direction) in at least one system of inertial coordinates (the "stationary system"), regardless of the state of motion of the light source. The constancy of the speed of light was motivated by Maxwell's theory of electromagnetism and the lack of evidence for the luminiferous ether. There is conflicting evidence on the extent to which Einstein was influenced by the null result of the Michelson–Morley experiment. In any case, the null result of the Michelson–Morley experiment helped the notion of the constancy of the speed of light gain widespread and rapid acceptance. 
The derivation of special relativity depends not only on these two explicit postulates, but also on several tacit assumptions (made in almost all theories of physics), including the isotropy and homogeneity of space and the independence of measuring rods and clocks from their past history. Following Einstein's original presentation of special relativity in 1905, many different sets of postulates have been proposed in various alternative derivations. But the most common set of postulates remains those employed by Einstein in his original paper. A more mathematical statement of the principle of relativity made later by Einstein, which introduces the concept of simplicity not mentioned above is: Henri Poincaré provided the mathematical framework for relativity theory by proving that Lorentz transformations are a subset of his Poincaré group of symmetry transformations. Einstein later derived these transformations from his axioms. Many of Einstein's papers present derivations of the Lorentz transformation based upon these two principles. Principle of relativity Reference frames and relative motion Reference frames play a crucial role in relativity theory. The term reference frame as used here is an observational perspective in space that is not undergoing any change in motion (acceleration), from which a position can be measured along 3 spatial axes (so, at rest or constant velocity). In addition, a reference frame has the ability to determine measurements of the time of events using a "clock" (any reference device with uniform periodicity). An event is an occurrence that can be assigned a single unique moment and location in space relative to a reference frame: it is a "point" in spacetime. Since the speed of light is constant in relativity irrespective of the reference frame, pulses of light can be used to unambiguously measure distances and refer back to the times that events occurred to the clock, even though light takes time to reach the clock after the event has transpired. For example, the explosion of a firecracker may be considered to be an "event". We can completely specify an event by its four spacetime coordinates: The time of occurrence and its 3-dimensional spatial location define a reference point. Let's call this reference frame S. In relativity theory, we often want to calculate the coordinates of an event from differing reference frames. The equations that relate measurements made in different frames are called transformation equations. Standard configuration To gain insight into how the spacetime coordinates measured by observers in different reference frames compare with each other, it is useful to work with a simplified setup with frames in a standard configuration. With care, this allows simplification of the math with no loss of generality in the conclusions that are reached. In Fig. 2-1, two Galilean reference frames (i.e., conventional 3-space frames) are displayed in relative motion. Frame S belongs to a first observer O, and frame (pronounced "S prime" or "S dash") belongs to a second observer . The x, y, z axes of frame S are oriented parallel to the respective primed axes of frame . Frame moves, for simplicity, in a single direction: the x-direction of frame S with a constant velocity v as measured in frame S. The origins of frames S and are coincident when time for frame S and for frame . 
Since there is no absolute reference frame in relativity theory, a concept of "moving" does not strictly exist, as everything may be moving with respect to some other reference frame. Instead, any two frames that move at the same speed in the same direction are said to be comoving. Therefore, S and are not comoving. Lack of an absolute reference frame The principle of relativity, which states that physical laws have the same form in each inertial reference frame, dates back to Galileo, and was incorporated into Newtonian physics. But in the late 19th century the existence of electromagnetic waves led some physicists to suggest that the universe was filled with a substance they called "aether", which, they postulated, would act as the medium through which these waves, or vibrations, propagated (in many respects similar to the way sound propagates through air). The aether was thought to be an absolute reference frame against which all speeds could be measured, and could be considered fixed and motionless relative to Earth or some other fixed reference point. The aether was supposed to be sufficiently elastic to support electromagnetic waves, while those waves could interact with matter, yet offering no resistance to bodies passing through it (its one property was that it allowed electromagnetic waves to propagate). The results of various experiments, including the Michelson–Morley experiment in 1887 (subsequently verified with more accurate and innovative experiments), led to the theory of special relativity, by showing that the aether did not exist. Einstein's solution was to discard the notion of an aether and the absolute state of rest. In relativity, any reference frame moving with uniform motion will observe the same laws of physics. In particular, the speed of light in vacuum is always measured to be c, even when measured by multiple systems that are moving at different (but constant) velocities. Relativity without the second postulate From the principle of relativity alone without assuming the constancy of the speed of light (i.e., using the isotropy of space and the symmetry implied by the principle of special relativity) it can be shown that the spacetime transformations between inertial frames are either Euclidean, Galilean, or Lorentzian. In the Lorentzian case, one can then obtain relativistic interval conservation and a certain finite limiting speed. Experiments suggest that this speed is the speed of light in vacuum. Lorentz invariance as the essential core of special relativity Alternative approaches to special relativity Einstein consistently based the derivation of Lorentz invariance (the essential core of special relativity) on just the two basic principles of: relativity and invariance of the speed of light. He wrote: Thus many modern treatments of special relativity base it on the single postulate of universal Lorentz covariance, or, equivalently, on the single postulate of Minkowski spacetime. Rather than considering universal Lorentz covariance to be a derived principle, this article considers it to be the fundamental postulate of special relativity. The traditional two-postulate approach to special relativity is presented in innumerable college textbooks and popular presentations. Textbooks starting with the single postulate of Minkowski spacetime include those by Taylor and Wheeler and by Callahan. This is also the approach followed by the Wikipedia articles Spacetime and Minkowski diagram. 
Lorentz transformation and its inverse Define an event to have spacetime coordinates in system S and in a reference frame moving at a velocity v on the x-axis with respect to that frame, . Then the Lorentz transformation specifies that these coordinates are related in the following way: where is the Lorentz factor and c is the speed of light in vacuum, and the velocity v of , relative to S, is parallel to the x-axis. For simplicity, the y and z coordinates are unaffected; only the x and t coordinates are transformed. These Lorentz transformations form a one-parameter group of linear mappings, that parameter being called rapidity. Solving the four transformation equations above for the unprimed coordinates yields the inverse Lorentz transformation: This shows that the unprimed frame is moving with the velocity −v, as measured in the primed frame. There is nothing special about the x-axis. The transformation can apply to the y- or z-axis, or indeed in any direction parallel to the motion (which are warped by the γ factor) and perpendicular; see the article Lorentz transformation for details. A quantity that is invariant under Lorentz transformations is known as a Lorentz scalar. Writing the Lorentz transformation and its inverse in terms of coordinate differences, where one event has coordinates and , another event has coordinates and , and the differences are defined as       we get       If we take differentials instead of taking differences, we get Graphical representation of the Lorentz transformation Spacetime diagrams (Minkowski diagrams) are an extremely useful aid to visualizing how coordinates transform between different reference frames. Although it is not as easy to perform exact computations using them as directly invoking the Lorentz transformations, their main power is their ability to provide an intuitive grasp of the results of a relativistic scenario. To draw a spacetime diagram, begin by considering two Galilean reference frames, S and S′, in standard configuration, as shown in Fig. 2-1. Fig. 3-1a. Draw the and axes of frame S. The axis is horizontal and the (actually ) axis is vertical, which is the opposite of the usual convention in kinematics. The axis is scaled by a factor of so that both axes have common units of length. In the diagram shown, the gridlines are spaced one unit distance apart. The 45° diagonal lines represent the worldlines of two photons passing through the origin at time The slope of these worldlines is 1 because the photons advance one unit in space per unit of time. Two events, and have been plotted on this graph so that their coordinates may be compared in the S and S' frames. Fig. 3-1b. Draw the and axes of frame S'. The axis represents the worldline of the origin of the S' coordinate system as measured in frame S. In this figure, Both the and axes are tilted from the unprimed axes by an angle where The primed and unprimed axes share a common origin because frames S and S' had been set up in standard configuration, so that when Fig. 3-1c. Units in the primed axes have a different scale from units in the unprimed axes. From the Lorentz transformations, we observe that coordinates of in the primed coordinate system transform to in the unprimed coordinate system. Likewise, coordinates of in the primed coordinate system transform to in the unprimed system. Draw gridlines parallel with the axis through points as measured in the unprimed frame, where is an integer. 
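For frames in the standard configuration used above, the transformation reads t′ = γ(t − vx/c²), x′ = γ(x − vt) with γ = 1/√(1 − v²/c²), and the inverse is obtained by replacing v with −v. The minimal sketch below implements this, checks that the inverse undoes the forward transformation, and verifies that c²t² − x² is unchanged; the event coordinates and the speed 0.6c are arbitrary illustrative choices.

    import math

    C = 299_792_458.0   # speed of light, m/s

    def gamma(v):
        """Lorentz factor gamma = 1 / sqrt(1 - v^2 / c^2)."""
        return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

    def boost(t, x, v):
        """Transform event (t, x) from S to a frame moving at +v along the x-axis."""
        g = gamma(v)
        return g * (t - v * x / C ** 2), g * (x - v * t)

    # Arbitrary illustrative event and relative speed
    v = 0.6 * C
    t, x = 2.0, 3.0e8                        # 2 s, 3e8 m in frame S
    t_p, x_p = boost(t, x, v)                # coordinates in the primed frame
    t_back, x_back = boost(t_p, x_p, -v)     # the inverse boost uses -v

    print(f"primed frame: t' = {t_p:.6f} s, x' = {x_p:.4e} m")
    print(f"round trip:   t  = {t_back:.6f} s, x  = {x_back:.4e} m")
    # The combination c^2 t^2 - x^2 is the same in both frames (Lorentz invariance)
    print(f"c^2 t^2  - x^2  = {(C * t) ** 2 - x ** 2:.6e}")
    print(f"c^2 t'^2 - x'^2 = {(C * t_p) ** 2 - x_p ** 2:.6e}")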
Likewise, draw gridlines parallel with the axis through as measured in the unprimed frame. Using the Pythagorean theorem, we observe that the spacing between units equals times the spacing between units, as measured in frame S. This ratio is always greater than 1, and ultimately it approaches infinity as Fig. 3-1d. Since the speed of light is an invariant, the worldlines of two photons passing through the origin at time still plot as 45° diagonal lines. The primed coordinates of and are related to the unprimed coordinates through the Lorentz transformations and could be approximately measured from the graph (assuming that it has been plotted accurately enough), but the real merit of a Minkowski diagram is its granting us a geometric view of the scenario. For example, in this figure, we observe that the two timelike-separated events that had different x-coordinates in the unprimed frame are now at the same position in space. While the unprimed frame is drawn with space and time axes that meet at right angles, the primed frame is drawn with axes that meet at acute or obtuse angles. This asymmetry is due to unavoidable distortions in how spacetime coordinates map onto a Cartesian plane, but the frames are actually equivalent. Consequences derived from the Lorentz transformation The consequences of special relativity can be derived from the Lorentz transformation equations. These transformations, and hence special relativity, lead to different physical predictions than those of Newtonian mechanics at all relative velocities, and most pronounced when relative velocities become comparable to the speed of light. The speed of light is so much larger than anything most humans encounter that some of the effects predicted by relativity are initially counterintuitive. Invariant interval In Galilean relativity, the spatial separation, (), and the temporal separation, (), between two events are independent invariants, the values of which do not change when observed from different frames of reference. In special relativity, however, the interweaving of spatial and temporal coordinates generates the concept of an invariant interval, denoted as : In considering the physical significance of , there are three cases to note: Δs2 > 0: In this case, the two events are separated by more time than space, and they are hence said to be timelike separated. This implies that , and given the Lorentz transformation , it is evident that there exists a less than for which (in particular, ). In other words, given two events that are timelike separated, it is possible to find a frame in which the two events happen at the same place. In this frame, the separation in time, , is called the proper time. Δs2 < 0: In this case, the two events are separated by more space than time, and they are hence said to be spacelike separated. This implies that , and given the Lorentz transformation , there exists a less than for which (in particular, ). In other words, given two events that are spacelike separated, it is possible to find a frame in which the two events happen at the same time. In this frame, the separation in space, , is called the proper distance, or proper length. For values of greater than and less than , the sign of changes, meaning that the temporal order of spacelike-separated events changes depending on the frame in which the events are viewed. But the temporal order of timelike-separated events is absolute, since the only way that could be greater than would be if . 
Δs2 = 0: In this case, the two events are said to be lightlike separated. This implies that , and this relationship is frame independent due to the invariance of . From this, we observe that the speed of light is in every inertial frame. In other words, starting from the assumption of universal Lorentz covariance, the constant speed of light is a derived result, rather than a postulate as in the two-postulates formulation of the special theory. The interweaving of space and time revokes the implicitly assumed concepts of absolute simultaneity and synchronization across non-comoving frames. The form of , being the difference of the squared time lapse and the squared spatial distance, demonstrates a fundamental discrepancy between Euclidean and spacetime distances. The invariance of this interval is a property of the general Lorentz transform (also called the Poincaré transformation), making it an isometry of spacetime. The general Lorentz transform extends the standard Lorentz transform (which deals with translations without rotation, that is, Lorentz boosts, in the x-direction) with all other translations, reflections, and rotations between any Cartesian inertial frame. In the analysis of simplified scenarios, such as spacetime diagrams, a reduced-dimensionality form of the invariant interval is often employed: Demonstrating that the interval is invariant is straightforward for the reduced-dimensionality case and with frames in standard configuration: The value of is hence independent of the frame in which it is measured. Relativity of simultaneity Consider two events happening in two different locations that occur simultaneously in the reference frame of one inertial observer. They may occur non-simultaneously in the reference frame of another inertial observer (lack of absolute simultaneity). From (the forward Lorentz transformation in terms of coordinate differences) It is clear that the two events that are simultaneous in frame S (satisfying ), are not necessarily simultaneous in another inertial frame (satisfying ). Only if these events are additionally co-local in frame S (satisfying ), will they be simultaneous in another frame . The Sagnac effect can be considered a manifestation of the relativity of simultaneity. Since relativity of simultaneity is a first order effect in , instruments based on the Sagnac effect for their operation, such as ring laser gyroscopes and fiber optic gyroscopes, are capable of extreme levels of sensitivity. Time dilation The time lapse between two events is not invariant from one observer to another, but is dependent on the relative speeds of the observers' reference frames. Suppose a clock is at rest in the unprimed system S. The location of the clock on two different ticks is then characterized by . To find the relation between the times between these ticks as measured in both systems, can be used to find: for events satisfying This shows that the time (Δ) between the two ticks as seen in the frame in which the clock is moving (), is longer than the time (Δt) between these ticks as measured in the rest frame of the clock (S). Time dilation explains a number of physical phenomena; for example, the lifetime of high speed muons created by the collision of cosmic rays with particles in the Earth's outer atmosphere and moving towards the surface is greater than the lifetime of slowly moving muons, created and decaying in a laboratory. 
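The muon example at the end of the preceding paragraph can be made quantitative: with a proper (rest-frame) mean lifetime of about 2.2 μs and an illustrative speed of 0.98c, time dilation stretches the lifetime observed from the ground by the Lorentz factor γ, which is what allows many muons to reach the surface. A short sketch:

    import math

    C = 299_792_458.0            # speed of light, m/s
    TAU_0 = 2.2e-6               # muon mean lifetime in its rest frame, s

    def lorentz_gamma(v):
        """Lorentz factor for speed v."""
        return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

    v = 0.98 * C                          # illustrative muon speed
    g = lorentz_gamma(v)                  # ≈ 5.0
    tau_lab = g * TAU_0                   # dilated mean lifetime seen from the ground
    print(f"gamma ≈ {g:.2f}")
    print(f"mean decay length without dilation ≈ {v * TAU_0 / 1e3:.2f} km")    # ≈ 0.65 km
    print(f"mean decay length with dilation    ≈ {v * tau_lab / 1e3:.2f} km")  # ≈ 3.2 km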
Whenever one hears a statement to the effect that "moving clocks run slow", one should envision an inertial reference frame thickly populated with identical, synchronized clocks. As a moving clock travels through this array, its reading at any particular point is compared with a stationary clock at the same point. The measurements that we would get if we actually looked at a moving clock would, in general, not at all be the same thing, because the time that we would see would be delayed by the finite speed of light, i.e. the times that we see would be distorted by the Doppler effect. Measurements of relativistic effects must always be understood as having been made after finite speed-of-light effects have been factored out. Langevin's light-clock Paul Langevin, an early proponent of the theory of relativity, did much to popularize the theory in the face of resistance by many physicists to Einstein's revolutionary concepts. Among his numerous contributions to the foundations of special relativity were independent work on the mass–energy relationship, a thorough examination of the twin paradox, and investigations into rotating coordinate systems. His name is frequently attached to a hypothetical construct called a "light-clock" (originally developed by Lewis and Tolman in 1909), which he used to perform a novel derivation of the Lorentz transformation. A light-clock is imagined to be a box of perfectly reflecting walls wherein a light signal reflects back and forth from opposite faces. The concept of time dilation is frequently taught using a light-clock that is traveling in uniform inertial motion perpendicular to a line connecting the two mirrors. (Langevin himself made use of a light-clock oriented parallel to its line of motion.) Consider the scenario illustrated in Observer A holds a light-clock of length as well as an electronic timer with which she measures how long it takes a pulse to make a round trip up and down along the light-clock. Although observer A is traveling rapidly along a train, from her point of view the emission and receipt of the pulse occur at the same place, and she measures the interval using a single clock located at the precise position of these two events. For the interval between these two events, observer A finds . A time interval measured using a single clock that is motionless in a particular reference frame is called a proper time interval. Fig. 4-3B illustrates these same two events from the standpoint of observer B, who is parked by the tracks as the train goes by at a speed of . Instead of making straight up-and-down motions, observer B sees the pulses moving along a zig-zag line. However, because of the postulate of the constancy of the speed of light, the speed of the pulses along these diagonal lines is the same that observer A saw for her up-and-down pulses. B measures the speed of the vertical component of these pulses as so that the total round-trip time of the pulses is . Note that for observer B, the emission and receipt of the light pulse occurred at different places, and he measured the interval using two stationary and synchronized clocks located at two different positions in his reference frame. The interval that B measured was therefore not a proper time interval because he did not measure it with a single resting clock. Reciprocal time dilation In the above description of the Langevin light-clock, the labeling of one observer as stationary and the other as in motion was completely arbitrary. 
One could just as well have observer B carrying the light-clock and moving at a speed of to the left, in which case observer A would perceive B's clock as running slower than her local clock. There is no paradox here, because there is no independent observer C who will agree with both A and B. Observer C necessarily makes his measurements from his own reference frame. If that reference frame coincides with A's reference frame, then C will agree with A's measurement of time. If C's reference frame coincides with B's reference frame, then C will agree with B's measurement of time. If C's reference frame coincides with neither A's frame nor B's frame, then C's measurement of time will disagree with both A's and B's measurement of time. Twin paradox The reciprocity of time dilation between two observers in separate inertial frames leads to the so-called twin paradox, articulated in its present form by Langevin in 1911. Langevin imagined an adventurer wishing to explore the future of the Earth. This traveler boards a projectile capable of traveling at 99.995% of the speed of light. After making a round-trip journey to and from a nearby star lasting only two years of his own life, he returns to an Earth that is two hundred years older. This result appears puzzling because both the traveler and an Earthbound observer would see the other as moving, and so, because of the reciprocity of time dilation, one might initially expect that each should have found the other to have aged less. In reality, there is no paradox at all, because in order for the two observers to perform side-by-side comparisons of their elapsed proper times, the symmetry of the situation must be broken: At least one of the two observers must change their state of motion to match that of the other. Knowing the general resolution of the paradox, however, does not immediately yield the ability to calculate correct quantitative results. Many solutions to this puzzle have been provided in the literature and have been reviewed in the Twin paradox article. We will examine in the following one such solution to the paradox. Our basic aim will be to demonstrate that, after the trip, both twins are in perfect agreement about who aged by how much, regardless of their different experiences. illustrates a scenario where the traveling twin flies at to and from a star distant. During the trip, each twin sends yearly time signals (measured in their own proper times) to the other. After the trip, the cumulative counts are compared. On the outward phase of the trip, each twin receives the other's signals at the lowered rate of . Initially, the situation is perfectly symmetric: note that each twin receives the other's one-year signal at two years measured on their own clock. The symmetry is broken when the traveling twin turns around at the four-year mark as measured by her clock. During the remaining four years of her trip, she receives signals at the enhanced rate of . The situation is quite different with the stationary twin. Because of light-speed delay, he does not see his sister turn around until eight years have passed on his own clock. Thus, he receives enhanced-rate signals from his sister for only a relatively brief period. Although the twins disagree in their respective measures of total time, we see in the following table, as well as by simple observation of the Minkowski diagram, that each twin is in total agreement with the other as to the total number of signals sent from one to the other. There is hence no paradox. 
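The description above leaves the trip's speed and distance unstated, but they can be inferred from the numbers quoted: receiving the one-year signals every two years on the outward leg implies a Doppler factor of 1/2 and hence a relative speed of 0.6c, and a turnaround at the four-year mark on the traveller's clock then implies a star roughly 3 light-years away. Under those inferred values, the following sketch reproduces the signal bookkeeping described above.

```python
import math

beta = 0.6            # v/c, implied by the outbound Doppler factor of 1/2
dist = 3.0            # distance to the star in light-years, implied by the 4-year turnaround

gamma = 1 / math.sqrt(1 - beta**2)                  # 1.25
f_recede   = math.sqrt((1 - beta) / (1 + beta))     # received-signal rate while separating: 0.5
f_approach = math.sqrt((1 + beta) / (1 - beta))     # received-signal rate while closing: 2.0

tau_out = (dist / beta) / gamma     # traveller's proper time to reach the star: 4 years
t_out   = dist / beta               # Earth-frame time for the outbound leg: 5 years

# Signals counted by the travelling twin: the lowered rate for 4 of her years,
# then the enhanced rate for the remaining 4 years of her trip.
received_by_traveller = f_recede * tau_out + f_approach * tau_out
print(received_by_traveller)        # 10.0 -> the Earth twin aged 10 years

# Signals counted by the Earth twin: he sees the turnaround only after
# t_out + dist = 8 years, then enhanced-rate signals for the final 2 years.
received_by_earth = f_recede * (t_out + dist) + f_approach * (2 * t_out - (t_out + dist))
print(received_by_earth)            # 8.0 -> the travelling twin aged 8 years
```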
Length contraction The dimensions (e.g., length) of an object as measured by one observer may be smaller than the results of measurements of the same object made by another observer (e.g., the ladder paradox involves a long ladder traveling near the speed of light and being contained within a smaller garage). Similarly, suppose a measuring rod is at rest and aligned along the x-axis in the unprimed system S. In this system, the length of this rod is written as Δx. To measure the length of this rod in the system , in which the rod is moving, the distances to the end points of the rod must be measured simultaneously in that system . In other words, the measurement is characterized by , which can be combined with to find the relation between the lengths Δx and Δ: for events satisfying This shows that the length (Δ) of the rod as measured in the frame in which it is moving (), is shorter than its length (Δx) in its own rest frame (S). Time dilation and length contraction are not merely appearances. Time dilation is explicitly related to our way of measuring time intervals between events that occur at the same place in a given coordinate system (called "co-local" events). These time intervals (which can be, and are, actually measured experimentally by relevant observers) are different in another coordinate system moving with respect to the first, unless the events, in addition to being co-local, are also simultaneous. Similarly, length contraction relates to our measured distances between separated but simultaneous events in a given coordinate system of choice. If these events are not co-local, but are separated by distance (space), they will not occur at the same spatial distance from each other when seen from another moving coordinate system. Lorentz transformation of velocities Consider two frames S and in standard configuration. A particle in S moves in the x direction with velocity vector . What is its velocity in frame ? We can write Substituting expressions for and from into , followed by straightforward mathematical manipulations and back-substitution from yields the Lorentz transformation of the speed to : The inverse relation is obtained by interchanging the primed and unprimed symbols and replacing with . For not aligned along the x-axis, we write: The forward and inverse transformations for this case are: and can be interpreted as giving the resultant of the two velocities and , and they replace the formula . which is valid in Galilean relativity. Interpreted in such a fashion, they are commonly referred to as the relativistic velocity addition (or composition) formulas, valid for the three axes of S and being aligned with each other (although not necessarily in standard configuration). We note the following points: If an object (e.g., a photon) were moving at the speed of light in one frame , then it would also be moving at the speed of light in any other frame, moving at . The resultant speed of two velocities with magnitude less than c is always a velocity with magnitude less than c. If both and (and then also and ) are small with respect to the speed of light (that is, e.g., , then the intuitive Galilean transformations are recovered from the transformation equations for special relativity Attaching a frame to a photon (riding a light beam like Einstein considers) requires special treatment of the transformations. There is nothing special about the x direction in the standard configuration. 
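A minimal numerical sketch of the collinear composition formula discussed above (the sample speeds are arbitrary and given in units of c): composing two sub-light speeds stays below c, a speed of c composes to c, and the Galilean sum is recovered for small speeds.

```python
def add_velocities(u, v, c=1.0):
    """Relativistic composition of two collinear velocities u and v (in units of c by default)."""
    return (u + v) / (1 + u * v / c**2)

print(add_velocities(0.9, 0.9))     # ~0.9945 : still below c
print(add_velocities(1.0, 0.5))     # 1.0     : light moves at c in every frame
print(add_velocities(1e-6, 2e-6))   # ~3e-6   : the Galilean sum is recovered for u, v << c
```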
The above formalism applies to any direction; and three orthogonal directions allow dealing with all directions in space by decomposing the velocity vectors to their components in these directions. See Velocity-addition formula for details. Thomas rotation The composition of two non-collinear Lorentz boosts (i.e., two non-collinear Lorentz transformations, neither of which involve rotation) results in a Lorentz transformation that is not a pure boost but is the composition of a boost and a rotation. Thomas rotation results from the relativity of simultaneity. In Fig. 4-5a, a rod of length in its rest frame (i.e., having a proper length of ) rises vertically along the y-axis in the ground frame. In Fig. 4-5b, the same rod is observed from the frame of a rocket moving at speed to the right. If we imagine two clocks situated at the left and right ends of the rod that are synchronized in the frame of the rod, relativity of simultaneity causes the observer in the rocket frame to observe (not see) the clock at the right end of the rod as being advanced in time by , and the rod is correspondingly observed as tilted. Unlike second-order relativistic effects such as length contraction or time dilation, this effect becomes quite significant even at fairly low velocities. For example, this can be seen in the spin of moving particles, where Thomas precession is a relativistic correction that applies to the spin of an elementary particle or the rotation of a macroscopic gyroscope, relating the angular velocity of the spin of a particle following a curvilinear orbit to the angular velocity of the orbital motion. Thomas rotation provides the resolution to the well-known "meter stick and hole paradox". Causality and prohibition of motion faster than light In Fig. 4-6, the time interval between the events A (the "cause") and B (the "effect") is 'timelike'; that is, there is a frame of reference in which events A and B occur at the same location in space, separated only by occurring at different times. If A precedes B in that frame, then A precedes B in all frames accessible by a Lorentz transformation. It is possible for matter (or information) to travel (below light speed) from the location of A, starting at the time of A, to the location of B, arriving at the time of B, so there can be a causal relationship (with A the cause and B the effect). The interval AC in the diagram is 'spacelike'; that is, there is a frame of reference in which events A and C occur simultaneously, separated only in space. There are also frames in which A precedes C (as shown) and frames in which C precedes A. But no frames are accessible by a Lorentz transformation, in which events A and C occur at the same location. If it were possible for a cause-and-effect relationship to exist between events A and C, paradoxes of causality would result. For example, if signals could be sent faster than light, then signals could be sent into the sender's past (observer B in the diagrams). A variety of causal paradoxes could then be constructed. Consider the spacetime diagrams in Fig. 4-7. A and B stand alongside a railroad track, when a high-speed train passes by, with C riding in the last car of the train and D riding in the leading car. The world lines of A and B are vertical (ct), distinguishing the stationary position of these observers on the ground, while the world lines of C and D are tilted forwards (), reflecting the rapid motion of the observers C and D stationary in their train, as observed from the ground. Fig. 4-7a. 
The event of "B passing a message to D", as the leading car passes by, is at the origin of D's frame. D sends the message along the train to C in the rear car, using a fictitious "instantaneous communicator". The worldline of this message is the fat red arrow along the axis, which is a line of simultaneity in the primed frames of C and D. In the (unprimed) ground frame the signal arrives earlier than it was sent. Fig. 4-7b. The event of "C passing the message to A", who is standing by the railroad tracks, is at the origin of their frames. Now A sends the message along the tracks to B via an "instantaneous communicator". The worldline of this message is the blue fat arrow, along the axis, which is a line of simultaneity for the frames of A and B. As seen from the spacetime diagram, in the primed frames of C and D, B will receive the message before it was sent out, a violation of causality. It is not necessary for signals to be instantaneous to violate causality. Even if the signal from D to C were slightly shallower than the axis (and the signal from A to B slightly steeper than the axis), it would still be possible for B to receive his message before he had sent it. By increasing the speed of the train to near light speeds, the and axes can be squeezed very close to the dashed line representing the speed of light. With this modified setup, it can be demonstrated that even signals only slightly faster than the speed of light will result in causality violation. Therefore, if causality is to be preserved, one of the consequences of special relativity is that no information signal or material object can travel faster than light in vacuum. This is not to say that all faster than light speeds are impossible. Various trivial situations can be described where some "things" (not actual matter or energy) move faster than light. For example, the location where the beam of a search light hits the bottom of a cloud can move faster than light when the search light is turned rapidly (although this does not violate causality or any other relativistic phenomenon). Optical effects Dragging effects In 1850, Hippolyte Fizeau and Léon Foucault independently established that light travels more slowly in water than in air, thus validating a prediction of Fresnel's wave theory of light and invalidating the corresponding prediction of Newton's corpuscular theory. The speed of light was measured in still water. What would be the speed of light in flowing water? In 1851, Fizeau conducted an experiment to answer this question, a simplified representation of which is illustrated in Fig. 5-1. A beam of light is divided by a beam splitter, and the split beams are passed in opposite directions through a tube of flowing water. They are recombined to form interference fringes, indicating a difference in optical path length, that an observer can view. The experiment demonstrated that dragging of the light by the flowing water caused a displacement of the fringes, showing that the motion of the water had affected the speed of the light. According to the theories prevailing at the time, light traveling through a moving medium would be a simple sum of its speed through the medium plus the speed of the medium. Contrary to expectation, Fizeau found that although light appeared to be dragged by the water, the magnitude of the dragging was much lower than expected. 
If is the speed of light in still water, and is the speed of the water, and is the water-borne speed of light in the lab frame with the flow of water adding to or subtracting from the speed of light, then Fizeau's results, although consistent with Fresnel's earlier hypothesis of partial aether dragging, were extremely disconcerting to physicists of the time. Among other things, the presence of an index of refraction term meant that, since depends on wavelength, the aether must be capable of sustaining different motions at the same time. A variety of theoretical explanations were proposed to explain Fresnel's dragging coefficient, that were completely at odds with each other. Even before the Michelson–Morley experiment, Fizeau's experimental results were among a number of observations that created a critical situation in explaining the optics of moving bodies. From the point of view of special relativity, Fizeau's result is nothing but an approximation to , the relativistic formula for composition of velocities. Relativistic aberration of light Because of the finite speed of light, if the relative motions of a source and receiver include a transverse component, then the direction from which light arrives at the receiver will be displaced from the geometric position in space of the source relative to the receiver. The classical calculation of the displacement takes two forms and makes different predictions depending on whether the receiver, the source, or both are in motion with respect to the medium. (1) If the receiver is in motion, the displacement would be the consequence of the aberration of light. The incident angle of the beam relative to the receiver would be calculable from the vector sum of the receiver's motions and the velocity of the incident light. (2) If the source is in motion, the displacement would be the consequence of light-time correction. The displacement of the apparent position of the source from its geometric position would be the result of the source's motion during the time that its light takes to reach the receiver. The classical explanation failed experimental test. Since the aberration angle depends on the relationship between the velocity of the receiver and the speed of the incident light, passage of the incident light through a refractive medium should change the aberration angle. In 1810, Arago used this expected phenomenon in a failed attempt to measure the speed of light, and in 1870, George Airy tested the hypothesis using a water-filled telescope, finding that, against expectation, the measured aberration was identical to the aberration measured with an air-filled telescope. A "cumbrous" attempt to explain these results used the hypothesis of partial aether-drag, but was incompatible with the results of the Michelson–Morley experiment, which apparently demanded complete aether-drag. Assuming inertial frames, the relativistic expression for the aberration of light is applicable to both the receiver moving and source moving cases. A variety of trigonometrically equivalent formulas have been published. Expressed in terms of the variables in Fig. 5-2, these include   OR     OR Relativistic Doppler effect Relativistic longitudinal Doppler effect The classical Doppler effect depends on whether the source, receiver, or both are in motion with respect to the medium. The relativistic Doppler effect is independent of any medium. 
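The longitudinal and transverse formulas derived in the following paragraphs can be sketched numerically as follows. The source frequency and the recession speed of 0.5c are arbitrary illustrative values, and the sign convention matches the text: a positive speed means the source and receiver are separating.

```python
import math

def doppler_longitudinal(f_source, beta):
    """Received frequency for source and receiver separating at speed beta*c
    (beta negative if they approach): the relativistic longitudinal Doppler formula."""
    return f_source * math.sqrt((1 - beta) / (1 + beta))

def doppler_transverse(f_source, beta):
    """Received frequency for light emitted at the moment of closest approach in the
    receiver's frame: a pure time-dilation redshift by 1/gamma."""
    gamma = 1 / math.sqrt(1 - beta**2)
    return f_source / gamma

f0 = 1.0e14          # source frequency in Hz (arbitrary)
beta = 0.5           # recession at 0.5 c (arbitrary)

print(doppler_longitudinal(f0, beta))    # ~5.77e13 Hz, redshifted (receding)
print(doppler_longitudinal(f0, -beta))   # ~1.73e14 Hz, blueshifted (approaching)
print(doppler_transverse(f0, beta))      # ~8.66e13 Hz, redshifted by 1/gamma
```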
Nevertheless, relativistic Doppler shift for the longitudinal case, with source and receiver moving directly towards or away from each other, can be derived as if it were the classical phenomenon, but modified by the addition of a time dilation term, and that is the treatment described here. Assume the receiver and the source are moving away from each other with a relative speed as measured by an observer on the receiver or the source (The sign convention adopted here is that is negative if the receiver and the source are moving towards each other). Assume that the source is stationary in the medium. Then where is the speed of sound. For light, and with the receiver moving at relativistic speeds, clocks on the receiver are time dilated relative to clocks at the source. The receiver will measure the received frequency to be where   and is the Lorentz factor. An identical expression for relativistic Doppler shift is obtained when performing the analysis in the reference frame of the receiver with a moving source. Transverse Doppler effect The transverse Doppler effect is one of the main novel predictions of the special theory of relativity. Classically, one might expect that if source and receiver are moving transversely with respect to each other with no longitudinal component to their relative motions, that there should be no Doppler shift in the light arriving at the receiver. Special relativity predicts otherwise. Fig. 5-3 illustrates two common variants of this scenario. Both variants can be analyzed using simple time dilation arguments. In Fig. 5-3a, the receiver observes light from the source as being blueshifted by a factor of . In Fig. 5-3b, the light is redshifted by the same factor. Measurement versus visual appearance Time dilation and length contraction are not optical illusions, but genuine effects. Measurements of these effects are not an artifact of Doppler shift, nor are they the result of neglecting to take into account the time it takes light to travel from an event to an observer. Scientists make a fundamental distinction between measurement or observation on the one hand, versus visual appearance, or what one sees. The measured shape of an object is a hypothetical snapshot of all of the object's points as they exist at a single moment in time. But the visual appearance of an object is affected by the varying lengths of time that light takes to travel from different points on the object to one's eye. For many years, the distinction between the two had not been generally appreciated, and it had generally been thought that a length contracted object passing by an observer would in fact actually be seen as length contracted. In 1959, James Terrell and Roger Penrose independently pointed out that differential time lag effects in signals reaching the observer from the different parts of a moving object result in a fast moving object's visual appearance being quite different from its measured shape. For example, a receding object would appear contracted, an approaching object would appear elongated, and a passing object would have a skew appearance that has been likened to a rotation. A sphere in motion retains the circular outline for all speeds, for any distance, and for all view angles, although the surface of the sphere and the images on it will appear distorted. Both Fig. 5-4 and Fig. 5-5 illustrate objects moving transversely to the line of sight. In Fig. 5-4, a cube is viewed from a distance of four times the length of its sides. 
At high speeds, the sides of the cube that are perpendicular to the direction of motion appear hyperbolic in shape. The cube is actually not rotated. Rather, light from the rear of the cube takes longer to reach one's eyes compared with light from the front, during which time the cube has moved to the right. At high speeds, the sphere in Fig. 5-5 takes on the appearance of a flattened disk tilted up to 45° from the line of sight. If the objects' motions are not strictly transverse but instead include a longitudinal component, exaggerated distortions in perspective may be seen. This illusion has come to be known as Terrell rotation or the Terrell–Penrose effect. Another example where visual appearance is at odds with measurement comes from the observation of apparent superluminal motion in various radio galaxies, BL Lac objects, quasars, and other astronomical objects that eject relativistic-speed jets of matter at narrow angles with respect to the viewer. An apparent optical illusion results giving the appearance of faster than light travel. In Fig. 5-6, galaxy M87 streams out a high-speed jet of subatomic particles almost directly towards us, but Penrose–Terrell rotation causes the jet to appear to be moving laterally in the same manner that the appearance of the cube in Fig. 5-4 has been stretched out. Dynamics Section dealt strictly with kinematics, the study of the motion of points, bodies, and systems of bodies without considering the forces that caused the motion. This section discusses masses, forces, energy and so forth, and as such requires consideration of physical effects beyond those encompassed by the Lorentz transformation itself. Equivalence of mass and energy As an object's speed approaches the speed of light from an observer's point of view, its relativistic mass increases thereby making it more and more difficult to accelerate it from within the observer's frame of reference. The energy content of an object at rest with mass m equals mc2. Conservation of energy implies that, in any reaction, a decrease of the sum of the masses of particles must be accompanied by an increase in kinetic energies of the particles after the reaction. Similarly, the mass of an object can be increased by taking in kinetic energies. In addition to the papers referenced above – which give derivations of the Lorentz transformation and describe the foundations of special relativity—Einstein also wrote at least four papers giving heuristic arguments for the equivalence (and transmutability) of mass and energy, for . Mass–energy equivalence is a consequence of special relativity. The energy and momentum, which are separate in Newtonian mechanics, form a four-vector in relativity, and this relates the time component (the energy) to the space components (the momentum) in a non-trivial way. For an object at rest, the energy–momentum four-vector is : it has a time component, which is the energy, and three space components, which are zero. By changing frames with a Lorentz transformation in the x direction with a small value of the velocity v, the energy momentum four-vector becomes . The momentum is equal to the energy multiplied by the velocity divided by c2. As such, the Newtonian mass of an object, which is the ratio of the momentum to the velocity for slow velocities, is equal to E/c2. 
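A minimal numerical check of the statement just made, assuming an arbitrary 1 kg mass and a 10 m/s boost: applying a small Lorentz boost to the rest-frame energy–momentum four-vector (mc, 0, 0, 0) yields a momentum equal to the energy times the velocity divided by c2, i.e. a Newtonian mass of E/c2.

```python
import math

c = 299_792_458.0
m = 1.0                       # rest mass in kg (arbitrary)
v = 10.0                      # small boost speed in m/s, v << c

beta = v / c
gamma = 1 / math.sqrt(1 - beta**2)

# Energy-momentum four-vector of the object at rest: (E/c, px, py, pz) = (m c, 0, 0, 0)
E_over_c, px = m * c, 0.0

# Lorentz boost along x applied to the four-vector components
E_over_c_prime = gamma * (E_over_c - beta * px)
px_prime       = gamma * (px - beta * E_over_c)

E_prime = E_over_c_prime * c
print(abs(px_prime))          # ~10.0 kg m/s, i.e. m*v to first order
print(E_prime * v / c**2)     # ~10.0 : momentum = energy * velocity / c^2
print(E_prime / c**2)         # ~1.0 kg : the Newtonian mass equals E / c^2
```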
The energy and momentum are properties of matter and radiation, and it is impossible to deduce that they form a four-vector just from the two basic postulates of special relativity by themselves, because these do not talk about matter or radiation, they only talk about space and time. The derivation therefore requires some additional physical reasoning. In his 1905 paper, Einstein used the additional principles that Newtonian mechanics should hold for slow velocities, so that there is one energy scalar and one three-vector momentum at slow velocities, and that the conservation law for energy and momentum is exactly true in relativity. Furthermore, he assumed that the energy of light is transformed by the same Doppler-shift factor as its frequency, which he had previously shown to be true based on Maxwell's equations. The first of Einstein's papers on this subject was "Does the Inertia of a Body Depend upon its Energy Content?" in 1905. Although Einstein's argument in this paper is nearly universally accepted by physicists as correct, even self-evident, many authors over the years have suggested that it is wrong. Other authors suggest that the argument was merely inconclusive because it relied on some implicit assumptions. Einstein acknowledged the controversy over his derivation in his 1907 survey paper on special relativity. There he notes that it is problematic to rely on Maxwell's equations for the heuristic mass–energy argument. The argument in his 1905 paper can be carried out with the emission of any massless particles, but the Maxwell equations are implicitly used to make it obvious that the emission of light in particular can be achieved only by doing work. To emit electromagnetic waves, all you have to do is shake a charged particle, and this is clearly doing work, so that the emission is of energy. Einstein's 1905 demonstration of E = mc2 In his fourth of his 1905 Annus mirabilis papers, Einstein presented a heuristic argument for the equivalence of mass and energy. Although, as discussed above, subsequent scholarship has established that his arguments fell short of a broadly definitive proof, the conclusions that he reached in this paper have stood the test of time. Einstein took as starting assumptions his recently discovered formula for relativistic Doppler shift, the laws of conservation of energy and conservation of momentum, and the relationship between the frequency of light and its energy as implied by Maxwell's equations. Fig. 6-1 (top). Consider a system of plane waves of light having frequency traveling in direction relative to the x-axis of reference frame S. The frequency (and hence energy) of the waves as measured in frame that is moving along the x-axis at velocity is given by the relativistic Doppler shift formula that Einstein had developed in his 1905 paper on special relativity: Fig. 6-1 (bottom). Consider an arbitrary body that is stationary in reference frame S. Let this body emit a pair of equal-energy light-pulses in opposite directions at angle with respect to the x-axis. Each pulse has energy . Because of conservation of momentum, the body remains stationary in S after emission of the two pulses. Let be the energy of the body before emission of the two pulses and after their emission. Next, consider the same system observed from frame that is moving along the x-axis at speed relative to frame S. In this frame, light from the forwards and reverse pulses will be relativistically Doppler-shifted. 
Let be the energy of the body measured in reference frame before emission of the two pulses and after their emission. We obtain the following relationships: From the above equations, we obtain the following: The two differences of form seen in the above equation have a straightforward physical interpretation. Since and are the energies of the arbitrary body in the moving and stationary frames, and represents the kinetic energies of the bodies before and after the emission of light (except for an additive constant that fixes the zero point of energy and is conventionally set to zero). Hence, Taking a Taylor series expansion and neglecting higher order terms, he obtained Comparing the above expression with the classical expression for kinetic energy, K.E. = ½mv2, Einstein then noted: "If a body gives off the energy L in the form of radiation, its mass diminishes by L/c2." Rindler has observed that Einstein's heuristic argument suggested merely that energy contributes to mass. In 1905, Einstein's cautious expression of the mass–energy relationship allowed for the possibility that "dormant" mass might exist that would remain behind after all the energy of a body was removed. By 1907, however, Einstein was ready to assert that all inertial mass represented a reserve of energy. "To equate all mass with energy required an act of aesthetic faith, very characteristic of Einstein." Einstein's bold hypothesis has been amply confirmed in the years subsequent to his original proposal. For a variety of reasons, Einstein's original derivation is currently seldom taught. Besides the vigorous debate that continues to this day as to the formal correctness of his original derivation, the recognition of special relativity as being what Einstein called a "principle theory" has led to a shift away from reliance on electromagnetic phenomena to purely dynamic methods of proof. How far can you travel from the Earth? Since nothing can travel faster than light, one might conclude that a human can never travel farther from Earth than about 100 light years. One might therefore think that a traveler would never be able to reach more than the few star systems that lie within 100 light years of Earth. However, because of time dilation, a hypothetical spaceship can travel thousands of light years during a passenger's lifetime. If a spaceship could be built that accelerates at a constant 1g, it will, after one year, be travelling at almost the speed of light as seen from Earth. This motion is described by v(t) = at / √(1 + (at/c)2), where v(t) is the velocity at a time t, a is the acceleration of the spaceship and t is the coordinate time as measured by people on Earth. Therefore, after one year of accelerating at 9.81 m/s2 the spaceship is travelling at a substantial fraction of the speed of light relative to Earth, and after three years of this acceleration, with the spaceship reaching a velocity of 94.6% of the speed of light relative to Earth, time dilation results in each second experienced on the spaceship corresponding to about 3.1 seconds back on Earth. During their journey, people on Earth will experience more time than the travelers do, since Earth clocks (and all physical phenomena there) would be ticking 3.1 times faster than those on the spaceship. A 5-year round trip for the traveller will take 6.5 Earth years and cover a distance of over 6 light-years. A 20-year round trip for them (5 years accelerating, 5 decelerating, twice each) will land them back on Earth having travelled for 335 Earth years and a distance of 331 light years.
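The coordinate-velocity formula quoted above can be evaluated directly. The sketch below uses a Julian year and g = 9.81 m/s2; the printed values differ slightly from the rounded percentages in the text depending on the exact year length and g assumed.

```python
import math

c = 299_792_458.0        # m/s
g = 9.81                 # proper acceleration, m/s^2
year = 365.25 * 24 * 3600

def coord_velocity(t):
    """Velocity relative to Earth after coordinate time t of constant 1 g proper acceleration:
    v(t) = a t / sqrt(1 + (a t / c)^2)."""
    at = g * t
    return at / math.sqrt(1 + (at / c) ** 2)

print(coord_velocity(1 * year) / c)    # ~0.72 : a substantial fraction of c after one year
print(coord_velocity(3 * year) / c)    # ~0.95 : close to the ~94.6% figure quoted above

# Time-dilation factor at 94.6% of c: each shipboard second corresponds to ~3.1 Earth seconds.
beta = 0.946
print(1 / math.sqrt(1 - beta**2))      # ~3.09
```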
A full 40-year trip at 1g will appear on Earth to last 58,000 years and cover a distance of 55,000 light years. A 40-year trip at will take years and cover about light years. A one-way 28 year (14 years accelerating, 14 decelerating as measured with the astronaut's clock) trip at 1g acceleration could reach 2,000,000 light-years to the Andromeda Galaxy. This same time dilation is why a muon travelling close to c is observed to travel much farther than c times its half-life (when at rest). Elastic collisions Examination of the collision products generated by particle accelerators around the world provides scientists evidence of the structure of the subatomic world and the natural laws governing it. Analysis of the collision products, the sum of whose masses may vastly exceed the masses of the incident particles, requires special relativity. In Newtonian mechanics, analysis of collisions involves use of the conservation laws for mass, momentum and energy. In relativistic mechanics, mass is not independently conserved, because it has been subsumed into the total relativistic energy. We illustrate the differences that arise between the Newtonian and relativistic treatments of particle collisions by examining the simple case of two perfectly elastic colliding particles of equal mass. (Inelastic collisions are discussed in Spacetime#Conservation laws. Radioactive decay may be considered a sort of time-reversed inelastic collision.) Elastic scattering of charged elementary particles deviates from ideality due to the production of Bremsstrahlung radiation. Newtonian analysis Fig. 6-2 provides a demonstration of the result, familiar to billiard players, that if a stationary ball is struck elastically by another one of the same mass (assuming no sidespin, or "English"), then after collision, the diverging paths of the two balls will subtend a right angle. (a) In the stationary frame, an incident sphere traveling at 2v strikes a stationary sphere. (b) In the center of momentum frame, the two spheres approach each other symmetrically at ±v. After elastic collision, the two spheres rebound from each other with equal and opposite velocities ±u. Energy conservation requires that = . (c) Reverting to the stationary frame, the rebound velocities are . The dot product , indicating that the vectors are orthogonal. Relativistic analysis Consider the elastic collision scenario in Fig. 6-3 between a moving particle colliding with an equal mass stationary particle. 
Unlike the Newtonian case, the angle between the two particles after collision is less than 90°, is dependent on the angle of scattering, and becomes smaller and smaller as the velocity of the incident particle approaches the speed of light: The relativistic momentum and total relativistic energy of a particle are given by Conservation of momentum dictates that the sum of the momenta of the incoming particle and the stationary particle (which initially has momentum = 0) equals the sum of the momenta of the emergent particles: Likewise, the sum of the total relativistic energies of the incoming particle and the stationary particle (which initially has total energy mc2) equals the sum of the total energies of the emergent particles: Breaking down () into its components, replacing with the dimensionless , and factoring out common terms from () and () yields the following: From these we obtain the following relationships: For the symmetrical case in which and , () takes on the simpler form: Beyond the basics Rapidity Lorentz transformations relate coordinates of events in one reference frame to those of another frame. Relativistic composition of velocities is used to add two velocities together. The formulas to perform the latter computations are nonlinear, making them more complex than the corresponding Galilean formulas. This nonlinearity is an artifact of our choice of parameters. We have previously noted that in an spacetime diagram, the points at some constant spacetime interval from the origin form an invariant hyperbola. We have also noted that the coordinate systems of two spacetime reference frames in standard configuration are hyperbolically rotated with respect to each other. The natural functions for expressing these relationships are the hyperbolic analogs of the trigonometric functions. Fig. 7-1a shows a unit circle with sin(a) and cos(a), the only difference between this diagram and the familiar unit circle of elementary trigonometry being that a is interpreted, not as the angle between the ray and the , but as twice the area of the sector swept out by the ray from the . Numerically, the angle and measures for the unit circle are identical. Fig. 7-1b shows a unit hyperbola with sinh(a) and cosh(a), where a is likewise interpreted as twice the tinted area. Fig. 7-2 presents plots of the sinh, cosh, and tanh functions. For the unit circle, the slope of the ray is given by In the Cartesian plane, rotation of point into point by angle θ is given by In a spacetime diagram, the velocity parameter is the analog of slope. The rapidity, φ, is defined by where The rapidity defined above is very useful in special relativity because many expressions take on a considerably simpler form when expressed in terms of it. For example, rapidity is simply additive in the collinear velocity-addition formula; or in other words, . The Lorentz transformations take a simple form when expressed in terms of rapidity. The γ factor can be written as Transformations describing relative motion with uniform velocity and without rotation of the space coordinate axes are called boosts. Substituting γ and γβ into the transformations as previously presented and rewriting in matrix form, the Lorentz boost in the may be written as and the inverse Lorentz boost in the may be written as In other words, Lorentz boosts represent hyperbolic rotations in Minkowski spacetime. 
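The additivity of rapidity and the reading of a boost as a hyperbolic rotation can be checked numerically; the two sample speeds are arbitrary, and the 2×2 matrices act on the (ct, x) components only.

```python
import numpy as np

def boost(phi):
    """2x2 Lorentz boost acting on (ct, x), written as a hyperbolic rotation by rapidity phi."""
    return np.array([[ np.cosh(phi), -np.sinh(phi)],
                     [-np.sinh(phi),  np.cosh(phi)]])

beta1, beta2 = 0.6, 0.7                           # two collinear boost speeds (arbitrary, units of c)
phi1, phi2 = np.arctanh(beta1), np.arctanh(beta2)

# Rapidity is additive: composing the two boosts gives a single boost with rapidity phi1 + phi2 ...
composed = boost(phi1) @ boost(phi2)
print(np.allclose(composed, boost(phi1 + phi2)))  # True

# ... which reproduces the relativistic velocity-addition formula:
print(np.tanh(phi1 + phi2))                       # ~0.9155
print((beta1 + beta2) / (1 + beta1 * beta2))      # ~0.9155
```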
The advantages of using hyperbolic functions are such that some textbooks such as the classic ones by Taylor and Wheeler introduce their use at a very early stage. 4‑vectors Four‑vectors have been mentioned above in context of the energy–momentum , but without any great emphasis. Indeed, none of the elementary derivations of special relativity require them. But once understood, , and more generally tensors, greatly simplify the mathematics and conceptual understanding of special relativity. Working exclusively with such objects leads to formulas that are manifestly relativistically invariant, which is a considerable advantage in non-trivial contexts. For instance, demonstrating relativistic invariance of Maxwell's equations in their usual form is not trivial, while it is merely a routine calculation, really no more than an observation, using the field strength tensor formulation. On the other hand, general relativity, from the outset, relies heavily on , and more generally tensors, representing physically relevant entities. Relating these via equations that do not rely on specific coordinates requires tensors, capable of connecting such even within a curved spacetime, and not just within a flat one as in special relativity. The study of tensors is outside the scope of this article, which provides only a basic discussion of spacetime. Definition of 4-vectors A 4-tuple, is a "4-vector" if its component Ai transform between frames according to the Lorentz transformation. If using coordinates, A is a if it transforms (in the ) according to which comes from simply replacing ct with A0 and x with A1 in the earlier presentation of the Lorentz transformation. As usual, when we write x, t, etc. we generally mean Δx, Δt etc. The last three components of a must be a standard vector in three-dimensional space. Therefore, a must transform like under Lorentz transformations as well as rotations. Properties of 4-vectors Closure under linear combination: If A and B are , then is also a . Inner-product invariance: If A and B are , then their inner product (scalar product) is invariant, i.e. their inner product is independent of the frame in which it is calculated. Note how the calculation of inner product differs from the calculation of the inner product of a . In the following, and are : In addition to being invariant under Lorentz transformation, the above inner product is also invariant under rotation in . Two vectors are said to be orthogonal if . Unlike the case with , orthogonal are not necessarily at right angles to each other. The rule is that two are orthogonal if they are offset by equal and opposite angles from the 45° line, which is the world line of a light ray. This implies that a lightlike is orthogonal to itself. Invariance of the magnitude of a vector: The magnitude of a vector is the inner product of a with itself, and is a frame-independent property. As with intervals, the magnitude may be positive, negative or zero, so that the vectors are referred to as timelike, spacelike or null (lightlike). Note that a null vector is not the same as a zero vector. A null vector is one for which , while a zero vector is one whose components are all zero. Special cases illustrating the invariance of the norm include the invariant interval and the invariant length of the relativistic momentum vector . Examples of 4-vectors Displacement 4-vector: Otherwise known as the spacetime separation, this is or for infinitesimal separations, . 
Velocity 4-vector: This results when the displacement is divided by , where is the proper time between the two events that yield dt, dx, dy, and dz. The is tangent to the world line of a particle, and has a length equal to one unit of time in the frame of the particle. An accelerated particle does not have an inertial frame in which it is always at rest. However, an inertial frame can always be found that is momentarily comoving with the particle. This frame, the momentarily comoving reference frame (MCRF), enables application of special relativity to the analysis of accelerated particles. Since photons move on null lines, for a photon, and a cannot be defined. There is no frame in which a photon is at rest, and no MCRF can be established along a photon's path. Energy–momentum 4-vector: As indicated before, there are varying treatments for the energy–momentum so that one may also see it expressed as or . The first component is the total energy (including mass) of the particle (or system of particles) in a given frame, while the remaining components are its spatial momentum. The energy–momentum is a conserved quantity. Acceleration 4-vector: This results from taking the derivative of the velocity with respect to . Force 4-vector: This is the derivative of the momentum with respect to As expected, the final components of the above are all standard corresponding to spatial , etc. 4-vectors and physical law The first postulate of special relativity declares the equivalency of all inertial frames. A physical law holding in one frame must apply in all frames, since otherwise it would be possible to differentiate between frames. Newtonian momenta fail to behave properly under Lorentzian transformation, and Einstein preferred to change the definition of momentum to one involving rather than give up on conservation of momentum. Physical laws must be based on constructs that are frame independent. This means that physical laws may take the form of equations connecting scalars, which are always frame independent. However, equations involving require the use of tensors with appropriate rank, which themselves can be thought of as being built up from . Acceleration It is a common misconception that special relativity is applicable only to inertial frames, and that it is unable to handle accelerating objects or accelerating reference frames. Actually, accelerating objects can generally be analyzed without needing to deal with accelerating frames at all. It is only when gravitation is significant that general relativity is required. Properly handling accelerating frames does require some care, however. The difference between special and general relativity is that (1) In special relativity, all velocities are relative, but acceleration is absolute. (2) In general relativity, all motion is relative, whether inertial, accelerating, or rotating. To accommodate this difference, general relativity uses curved spacetime. In this section, we analyze several scenarios involving accelerated reference frames. Dewan–Beran–Bell spaceship paradox The Dewan–Beran–Bell spaceship paradox (Bell's spaceship paradox) is a good example of a problem where intuitive reasoning unassisted by the geometric insight of the spacetime approach can lead to issues. In Fig. 7-4, two identical spaceships float in space and are at rest relative to each other. They are connected by a string that is capable of only a limited amount of stretching before breaking. 
At a given instant in our frame, the observer frame, both spaceships accelerate in the same direction along the line between them with the same constant proper acceleration. Will the string break? When the paradox was new and relatively unknown, even professional physicists had difficulty working out the solution. Two lines of reasoning lead to opposite conclusions. Both arguments, which are presented below, are flawed even though one of them yields the correct answer. To observers in the rest frame, the spaceships start a distance L apart and remain the same distance apart during acceleration. During acceleration, L is a length contracted distance of the distance in the frame of the accelerating spaceships. After a sufficiently long time, γ will increase to a sufficiently large factor that the string must break. Let A and B be the rear and front spaceships. In the frame of the spaceships, each spaceship sees the other spaceship doing the same thing that it is doing. A says that B has the same acceleration that he has, and B sees that A matches her every move. So the spaceships stay the same distance apart, and the string does not break. The problem with the first argument is that there is no "frame of the spaceships." There cannot be, because the two spaceships measure a growing distance between the two. Because there is no common frame of the spaceships, the length of the string is ill-defined. Nevertheless, the conclusion is correct, and the argument is mostly right. The second argument, however, completely ignores the relativity of simultaneity. A spacetime diagram (Fig. 7-5) makes the correct solution to this paradox almost immediately evident. Two observers in Minkowski spacetime accelerate with constant magnitude acceleration for proper time (acceleration and elapsed time measured by the observers themselves, not some inertial observer). They are comoving and inertial before and after this phase. In Minkowski geometry, the length along the line of simultaneity turns out to be greater than the length along the line of simultaneity . The length increase can be calculated with the help of the Lorentz transformation. If, as illustrated in Fig. 7-5, the acceleration is finished, the ships will remain at a constant offset in some frame . If and are the ships' positions in , the positions in frame are: The "paradox", as it were, comes from the way that Bell constructed his example. In the usual discussion of Lorentz contraction, the rest length is fixed and the moving length shortens as measured in frame . As shown in Fig. 7-5, Bell's example asserts the moving lengths and measured in frame to be fixed, thereby forcing the rest frame length in frame to increase. Accelerated observer with horizon Certain special relativity problem setups can lead to insight about phenomena normally associated with general relativity, such as event horizons. In the text accompanying Section "Invariant hyperbola" of the article Spacetime, the magenta hyperbolae represented actual paths that are tracked by a constantly accelerating traveler in spacetime. During periods of positive acceleration, the traveler's velocity just approaches the speed of light, while, measured in our frame, the traveler's acceleration constantly decreases. Fig. 7-6 details various features of the traveler's motions with more specificity. At any given moment, her space axis is formed by a line passing through the origin and her current position on the hyperbola, while her time axis is the tangent to the hyperbola at her position. 
The velocity parameter approaches a limit of one as increases. Likewise, approaches infinity. The shape of the invariant hyperbola corresponds to a path of constant proper acceleration. This is demonstrable as follows: We remember that . Since , we conclude that . From the relativistic force law, . Substituting from step 2 and the expression for from step 3 yields , which is a constant expression. Fig. 7-6 illustrates a specific calculated scenario. Terence (A) and Stella (B) initially stand together 100 light hours from the origin. Stella lifts off at time 0, her spacecraft accelerating at 0.01 c per hour. Every twenty hours, Terence radios updates to Stella about the situation at home (solid green lines). Stella receives these regular transmissions, but the increasing distance (offset in part by time dilation) causes her to receive Terence's communications later and later as measured on her clock, and she never receives any communications from Terence after 100 hours on his clock (dashed green lines). After 100 hours according to Terence's clock, Stella enters a dark region. She has traveled outside Terence's timelike future. On the other hand, Terence can continue to receive Stella's messages to him indefinitely. He just has to wait long enough. Spacetime has been divided into distinct regions separated by an apparent event horizon. So long as Stella continues to accelerate, she can never know what takes place behind this horizon. Relativity and unifying electromagnetism Theoretical investigation in classical electromagnetism led to the discovery of wave propagation. Equations generalizing the electromagnetic effects found that finite propagation speed of the E and B fields required certain behaviors on charged particles. The general study of moving charges forms the Liénard–Wiechert potential, which is a step towards special relativity. The Lorentz transformation of the electric field of a moving charge into a non-moving observer's reference frame results in the appearance of a mathematical term commonly called the magnetic field. Conversely, the magnetic field generated by a moving charge disappears and becomes a purely electrostatic field in a comoving frame of reference. Maxwell's equations are thus simply an empirical fit to special relativistic effects in a classical model of the Universe. As electric and magnetic fields are reference frame dependent and thus intertwined, one speaks of electromagnetic fields. Special relativity provides the transformation rules for how an electromagnetic field in one inertial frame appears in another inertial frame. Maxwell's equations in the 3D form are already consistent with the physical content of special relativity, although they are easier to manipulate in a manifestly covariant form, that is, in the language of tensor calculus. Theories of relativity and quantum mechanics Special relativity can be combined with quantum mechanics to form relativistic quantum mechanics and quantum electrodynamics. How general relativity and quantum mechanics can be unified is one of the unsolved problems in physics; quantum gravity and a "theory of everything", which require a unification including general relativity too, are active and ongoing areas in theoretical research. The early Bohr–Sommerfeld atomic model explained the fine structure of alkali metal atoms using both special relativity and the preliminary knowledge on quantum mechanics of the time. 
In 1928, Paul Dirac constructed an influential relativistic wave equation, now known as the Dirac equation in his honour, that is fully compatible both with special relativity and with the final version of quantum theory existing after 1926. This equation not only described the intrinsic angular momentum of the electrons called spin, it also led to the prediction of the antiparticle of the electron (the positron), and fine structure could only be fully explained with special relativity. It was the first foundation of relativistic quantum mechanics. On the other hand, the existence of antiparticles leads to the conclusion that relativistic quantum mechanics is not enough for a more accurate and complete theory of particle interactions. Instead, a theory of particles interpreted as quantized fields, called quantum field theory, becomes necessary; in which particles can be created and destroyed throughout space and time. Status Special relativity in its Minkowski spacetime is accurate only when the absolute value of the gravitational potential is much less than c2 in the region of interest. In a strong gravitational field, one must use general relativity. General relativity becomes special relativity at the limit of a weak field. At very small scales, such as at the Planck length and below, quantum effects must be taken into consideration resulting in quantum gravity. But at macroscopic scales and in the absence of strong gravitational fields, special relativity is experimentally tested to extremely high degree of accuracy (10−20) and thus accepted by the physics community. Experimental results that appear to contradict it are not reproducible and are thus widely believed to be due to experimental errors. Special relativity is mathematically self-consistent, and it is an organic part of all modern physical theories, most notably quantum field theory, string theory, and general relativity (in the limiting case of negligible gravitational fields). Newtonian mechanics mathematically follows from special relativity at small velocities (compared to the speed of light) – thus Newtonian mechanics can be considered as a special relativity of slow moving bodies. See Classical mechanics for a more detailed discussion. Several experiments predating Einstein's 1905 paper are now interpreted as evidence for relativity. Of these it is known Einstein was aware of the Fizeau experiment before 1905, and historians have concluded that Einstein was at least aware of the Michelson–Morley experiment as early as 1899 despite claims he made in his later years that it played no role in his development of the theory. The Fizeau experiment (1851, repeated by Michelson and Morley in 1886) measured the speed of light in moving media, with results that are consistent with relativistic addition of colinear velocities. The famous Michelson–Morley experiment (1881, 1887) gave further support to the postulate that detecting an absolute reference velocity was not achievable. It should be stated here that, contrary to many alternative claims, it said little about the invariance of the speed of light with respect to the source and observer's velocity, as both source and observer were travelling together at the same velocity at all times. The Trouton–Noble experiment (1903) showed that the torque on a capacitor is independent of position and inertial reference frame. 
The Experiments of Rayleigh and Brace (1902, 1904) showed that length contraction does not lead to birefringence for a co-moving observer, in accordance with the relativity principle. Particle accelerators accelerate and measure the properties of particles moving at near the speed of light, where their behavior is consistent with relativity theory and inconsistent with the earlier Newtonian mechanics. These machines would simply not work if they were not engineered according to relativistic principles. In addition, a considerable number of modern experiments have been conducted to test special relativity. Some examples: Tests of relativistic energy and momentum – testing the limiting speed of particles Ives–Stilwell experiment – testing relativistic Doppler effect and time dilation Experimental testing of time dilation – relativistic effects on a fast-moving particle's half-life Kennedy–Thorndike experiment – time dilation in accordance with Lorentz transformations Hughes–Drever experiment – testing isotropy of space and mass Modern searches for Lorentz violation – various modern tests Experiments to test emission theory demonstrated that the speed of light is independent of the speed of the emitter. Experiments to test the aether drag hypothesis – no "aether flow obstruction". Technical discussion of spacetime Geometry of spacetime Comparison between flat Euclidean space and Minkowski space Special relativity uses a "flat" 4-dimensional Minkowski space – an example of a spacetime. Minkowski spacetime appears to be very similar to the standard 3-dimensional Euclidean space, but there is a crucial difference with respect to time. In 3D space, the differential of distance (line element) ds is defined by where are the differentials of the three spatial dimensions. In Minkowski geometry, there is an extra dimension with coordinate X0 derived from time, such that the distance differential fulfills where are the differentials of the four spacetime dimensions. This suggests a deep theoretical insight: special relativity is simply a rotational symmetry of our spacetime, analogous to the rotational symmetry of Euclidean space (see Fig. 10-1). Just as Euclidean space uses a Euclidean metric, so spacetime uses a Minkowski metric. Basically, special relativity can be stated as the invariance of any spacetime interval (that is the 4D distance between any two events) when viewed from any inertial reference frame. All equations and effects of special relativity can be derived from this rotational symmetry (the Poincaré group) of Minkowski spacetime. The actual form of ds above depends on the metric and on the choices for the X0 coordinate. To make the time coordinate look like the space coordinates, it can be treated as imaginary: (this is called a Wick rotation). According to Misner, Thorne and Wheeler (1971, §2.3), ultimately the deeper understanding of both special and general relativity will come from the study of the Minkowski metric (described below) and to take , rather than a "disguised" Euclidean metric using ict as the time coordinate. Some authors use , with factors of c elsewhere to compensate; for instance, spatial coordinates are divided by c or factors of c±2 are included in the metric tensor. These numerous conventions can be superseded by using natural units where . Then space and time have equivalent units, and no factors of c appear anywhere. 
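A small sketch of the contrast just described, written in natural units with c = 1 as suggested above; the (+, −, −, −) signature is one common choice (conventions differ), and the sample separations are arbitrary. The Euclidean line element is always non-negative, whereas the Minkowski interval can take either sign, which is what separates events into timelike, spacelike, and lightlike classes.

```python
# Natural units: c = 1, so time is measured in the same units as distance.
def euclidean_ds2(dx, dy, dz):
    """Ordinary 3D line element: always non-negative."""
    return dx**2 + dy**2 + dz**2

def minkowski_ds2(dt, dx, dy, dz):
    """Spacetime line element in the (+, -, -, -) convention (sign conventions vary)."""
    return dt**2 - (dx**2 + dy**2 + dz**2)

def classify(dt, dx, dy, dz):
    s2 = minkowski_ds2(dt, dx, dy, dz)
    return "timelike" if s2 > 0 else "spacelike" if s2 < 0 else "lightlike"

print(euclidean_ds2(1.0, 2.0, 0.0))    # 5.0 : Euclidean distance squared, always positive
print(classify(2.0, 1.0, 0.0, 0.0))    # timelike : reachable by a slower-than-light signal
print(classify(1.0, 2.0, 0.0, 0.0))    # spacelike: no causal connection possible
print(classify(1.0, 1.0, 0.0, 0.0))    # lightlike: lies on the light cone
```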
3D spacetime If we reduce the spatial dimensions to 2, so that we can represent the physics in a 3D space we see that the null geodesics lie along a dual-cone (see Fig. 10-2) defined by the equation; or simply which is the equation of a circle of radius c dt. 4D spacetime If we extend this to three spatial dimensions, the null geodesics are the 4-dimensional cone: so As illustrated in Fig. 10-3, the null geodesics can be visualized as a set of continuous concentric spheres with radii = c dt. This null dual-cone represents the "line of sight" of a point in space. That is, when we look at the stars and say "The light from that star that I am receiving is X years old", we are looking down this line of sight: a null geodesic. We are looking at an event a distance away and a time d/c in the past. For this reason the null dual cone is also known as the "light cone". (The point in the lower left of the Fig. 10-2 represents the star, the origin represents the observer, and the line represents the null geodesic "line of sight".) The cone in the −t region is the information that the point is "receiving", while the cone in the +t section is the information that the point is "sending". The geometry of Minkowski space can be depicted using Minkowski diagrams, which are useful also in understanding many of the thought experiments in special relativity. Physics in spacetime Transformations of physical quantities between reference frames Above, the Lorentz transformation for the time coordinate and three space coordinates illustrates that they are intertwined. This is true more generally: certain pairs of "timelike" and "spacelike" quantities naturally combine on equal footing under the same Lorentz transformation. The Lorentz transformation in standard configuration above, that is, for a boost in the x-direction, can be recast into matrix form as follows: In Newtonian mechanics, quantities that have magnitude and direction are mathematically described as 3d vectors in Euclidean space, and in general they are parametrized by time. In special relativity, this notion is extended by adding the appropriate timelike quantity to a spacelike vector quantity, and we have 4d vectors, or "four-vectors", in Minkowski spacetime. The components of vectors are written using tensor index notation, as this has numerous advantages. The notation makes it clear the equations are manifestly covariant under the Poincaré group, thus bypassing the tedious calculations to check this fact. In constructing such equations, we often find that equations previously thought to be unrelated are, in fact, closely connected being part of the same tensor equation. Recognizing other physical quantities as tensors simplifies their transformation laws. Throughout, upper indices (superscripts) are contravariant indices rather than exponents except when they indicate a square (this should be clear from the context), and lower indices (subscripts) are covariant indices. For simplicity and consistency with the earlier equations, Cartesian coordinates will be used. The simplest example of a four-vector is the position of an event in spacetime, which constitutes a timelike component ct and spacelike component , in a contravariant position four-vector with components: where we define so that the time coordinate has the same dimension of distance as the other spatial dimensions; so that space and time are treated equally. 
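The null-cone equations described in the 3D and 4D spacetime passages above take the following standard form, using the same assumed sign convention as before.

```latex
% Null geodesics with two spatial dimensions (the dual cone):
dx^2 + dy^2 = c^2\,dt^2
% Null geodesics with three spatial dimensions (concentric spheres of radius c\,dt):
dx^2 + dy^2 + dz^2 = c^2\,dt^2
```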
Now the transformation of the contravariant components of the position 4-vector can be compactly written as: where there is an implied summation on from 0 to 3, and is a matrix. More generally, all contravariant components of a four-vector transform from one frame to another frame by a Lorentz transformation: Examples of other 4-vectors include the four-velocity , defined as the derivative of the position 4-vector with respect to proper time: where the Lorentz factor is: The relativistic energy and relativistic momentum of an object are respectively the timelike and spacelike components of a contravariant four-momentum vector: where m is the invariant mass. The four-acceleration is the proper time derivative of 4-velocity: The transformation rules for three-dimensional velocities and accelerations are very awkward; even above in standard configuration the velocity equations are quite complicated owing to their non-linearity. On the other hand, the transformation of four-velocity and four-acceleration are simpler by means of the Lorentz transformation matrix. The four-gradient of a scalar field φ transforms covariantly rather than contravariantly: which is the transpose of: only in Cartesian coordinates. It is the covariant derivative that transforms in manifest covariance, in Cartesian coordinates this happens to reduce to the partial derivatives, but not in other coordinates. More generally, the covariant components of a 4-vector transform according to the inverse Lorentz transformation: where is the reciprocal matrix of . The postulates of special relativity constrain the exact form the Lorentz transformation matrices take. More generally, most physical quantities are best described as (components of) tensors. So to transform from one frame to another, we use the well-known tensor transformation law where is the reciprocal matrix of . All tensors transform by this rule. An example of a four-dimensional second order antisymmetric tensor is the relativistic angular momentum, which has six components: three are the classical angular momentum, and the other three are related to the boost of the center of mass of the system. The derivative of the relativistic angular momentum with respect to proper time is the relativistic torque, also second order antisymmetric tensor. The electromagnetic field tensor is another second order antisymmetric tensor field, with six components: three for the electric field and another three for the magnetic field. There is also the stress–energy tensor for the electromagnetic field, namely the electromagnetic stress–energy tensor. Metric The metric tensor allows one to define the inner product of two vectors, which in turn allows one to assign a magnitude to the vector. Given the four-dimensional nature of spacetime the Minkowski metric η has components (valid with suitably chosen coordinates), which can be arranged in a matrix: which is equal to its reciprocal, , in those frames. Throughout we use the signs as above, different authors use different conventions – see Minkowski metric alternative signs. The Poincaré group is the most general group of transformations that preserves the Minkowski metric: and this is the physical symmetry underlying special relativity. The metric can be used for raising and lowering indices on vectors and tensors. 
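As a concrete illustration of the boost matrix and the transformation rule for contravariant components discussed above, here is a minimal numerical sketch. The boost speed, the sample event, and the (−,+,+,+) metric signature are assumptions made for the example, not values from the text.

```python
# Minimal sketch: a standard-configuration boost along x applied to a
# contravariant position four-vector (ct, x, y, z), checking that the
# spacetime interval is unchanged.  beta, the event, and the signature
# (-,+,+,+) are assumed example choices.
import numpy as np

beta = 0.6                                    # v/c of the boost (assumed)
gamma = 1.0 / np.sqrt(1.0 - beta**2)

Lam = np.array([[ gamma,      -gamma*beta, 0.0, 0.0],
                [-gamma*beta,  gamma,      0.0, 0.0],
                [ 0.0,         0.0,        1.0, 0.0],
                [ 0.0,         0.0,        0.0, 1.0]])   # boost matrix Lambda

X = np.array([2.0, 1.0, 0.5, 0.0])            # event (ct, x, y, z), arbitrary units
Xp = Lam @ X                                  # X'^mu = Lambda^mu_nu X^nu

eta = np.diag([-1.0, 1.0, 1.0, 1.0])          # Minkowski metric
print(X @ eta @ X)                            # -2.75
print(Xp @ eta @ Xp)                          # -2.75: the interval is invariant
```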
Invariants can be constructed using the metric, the inner product of a 4-vector T with another 4-vector S is: Invariant means that it takes the same value in all inertial frames, because it is a scalar (0 rank tensor), and so no appears in its trivial transformation. The magnitude of the 4-vector T is the positive square root of the inner product with itself: One can extend this idea to tensors of higher order, for a second order tensor we can form the invariants: similarly for higher order tensors. Invariant expressions, particularly inner products of 4-vectors with themselves, provide equations that are useful for calculations, because one does not need to perform Lorentz transformations to determine the invariants. Relativistic kinematics and invariance The coordinate differentials transform also contravariantly: so the squared length of the differential of the position four-vector dXμ constructed using is an invariant. Notice that when the line element dX2 is negative that is the differential of proper time, while when dX2 is positive, is differential of the proper distance. The 4-velocity Uμ has an invariant form: which means all velocity four-vectors have a magnitude of c. This is an expression of the fact that there is no such thing as being at coordinate rest in relativity: at the least, you are always moving forward through time. Differentiating the above equation by τ produces: So in special relativity, the acceleration four-vector and the velocity four-vector are orthogonal. Relativistic dynamics and invariance The invariant magnitude of the momentum 4-vector generates the energy–momentum relation: We can work out what this invariant is by first arguing that, since it is a scalar, it does not matter in which reference frame we calculate it, and then by transforming to a frame where the total momentum is zero. We see that the rest energy is an independent invariant. A rest energy can be calculated even for particles and systems in motion, by translating to a frame in which momentum is zero. The rest energy is related to the mass according to the celebrated equation discussed above: The mass of systems measured in their center of momentum frame (where total momentum is zero) is given by the total energy of the system in this frame. It may not be equal to the sum of individual system masses measured in other frames. To use Newton's third law of motion, both forces must be defined as the rate of change of momentum with respect to the same time coordinate. That is, it requires the 3D force defined above. Unfortunately, there is no tensor in 4D that contains the components of the 3D force vector among its components. If a particle is not traveling at c, one can transform the 3D force from the particle's co-moving reference frame into the observer's reference frame. This yields a 4-vector called the four-force. It is the rate of change of the above energy momentum four-vector with respect to proper time. The covariant version of the four-force is: In the rest frame of the object, the time component of the four-force is zero unless the "invariant mass" of the object is changing (this requires a non-closed system in which energy/mass is being directly added or removed from the object) in which case it is the negative of that rate of change of mass, times c. 
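The invariant magnitude of the four-momentum discussed above leads to the familiar energy–momentum relation. The following writes it out in one common convention; the signature and placement of factors of c are choices of this presentation rather than taken from the text.

```latex
% Four-momentum P^mu = (E/c, p_x, p_y, p_z); with assumed signature (-,+,+,+):
P^\mu P_\mu = -\left(\tfrac{E}{c}\right)^2 + |\vec p\,|^2 = -(mc)^2
% Rearranged, the energy-momentum relation:
E^2 = (pc)^2 + (mc^2)^2
% In the centre-of-momentum frame (\vec p = 0) this reduces to the rest energy:
E_0 = mc^2
```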
In general, though, the components of the four-force are not equal to the components of the three-force, because the three force is defined by the rate of change of momentum with respect to coordinate time, that is, dp/dt while the four-force is defined by the rate of change of momentum with respect to proper time, that is, dp/dτ. In a continuous medium, the 3D density of force combines with the density of power to form a covariant 4-vector. The spatial part is the result of dividing the force on a small cell (in 3-space) by the volume of that cell. The time component is −1/c times the power transferred to that cell divided by the volume of the cell. This will be used below in the section on electromagnetism. See also People Max Planck Hermann Minkowski Max von Laue Arnold Sommerfeld Max Born Mileva Marić Relativity History of special relativity Doubly special relativity Bondi k-calculus Einstein synchronisation Rietdijk–Putnam argument Special relativity (alternative formulations) Relativity priority dispute Physics Einstein's thought experiments physical cosmology Relativistic Euler equations Lorentz ether theory Moving magnet and conductor problem Shape waves Relativistic heat conduction Relativistic disk Born rigidity Born coordinates Mathematics Lorentz group Relativity in the APS formalism Philosophy actualism conventionalism Paradoxes Ehrenfest paradox Bell's spaceship paradox Velocity composition paradox Lighthouse paradox Notes Primary sources References Further reading Texts by Einstein and text about history of special relativity Einstein, Albert (1920). Relativity: The Special and General Theory. Einstein, Albert (1996). The Meaning of Relativity. Fine Communications. Logunov, Anatoly A. (2005). Henri Poincaré and the Relativity Theory (transl. from Russian by G. Pontocorvo and V. O. Soloviev, edited by V. A. Petrov). Nauka, Moscow. Textbooks Charles Misner, Kip Thorne, and John Archibald Wheeler (1971) Gravitation. W. H. Freeman & Co. Post, E.J., 1997 (1962) Formal Structure of Electromagnetics: General Covariance and Electromagnetics. Dover Publications. Wolfgang Rindler (1991). Introduction to Special Relativity (2nd ed.), Oxford University Press. ; Harvey R. Brown (2005). Physical relativity: space–time structure from a dynamical perspective, Oxford University Press, ; Silberstein, Ludwik (1914). The Theory of Relativity. Taylor, Edwin, and John Archibald Wheeler (1992). Spacetime Physics (2nd ed.). W. H. Freeman & Co. . Tipler, Paul, and Llewellyn, Ralph (2002). Modern Physics (4th ed.). W. H. Freeman & Co. . Journal articles Special Relativity Scholarpedia External links Original works Zur Elektrodynamik bewegter Körper Einstein's original work in German, Annalen der Physik, Bern 1905 On the Electrodynamics of Moving Bodies English Translation as published in the 1923 book The Principle of Relativity. Special relativity for a general audience (no mathematical knowledge required) Einstein Light An award-winning, non-technical introduction (film clips and demonstrations) supported by dozens of pages of further explanations and animations, at levels with or without mathematics. Einstein Online Introduction to relativity theory, from the Max Planck Institute for Gravitational Physics. Audio: Cain/Gay (2006) – Astronomy Cast. Einstein's Theory of Special Relativity Special relativity explained (using simple or more advanced mathematics) Bondi K-Calculus – A simple introduction to the special theory of relativity. Greg Egan's Foundations . 
The Hogg Notes on Special Relativity A good introduction to special relativity at the undergraduate level, using calculus. Relativity Calculator: Special Relativity – An algebraic and integral calculus derivation for . MathPages – Reflections on Relativity A complete online book on relativity with an extensive bibliography. Special Relativity An introduction to special relativity at the undergraduate level. , by Albert Einstein Special Relativity Lecture Notes is a standard introduction to special relativity containing illustrative explanations based on drawings and spacetime diagrams from Virginia Polytechnic Institute and State University. Understanding Special Relativity The theory of special relativity in an easily understandable way. An Introduction to the Special Theory of Relativity (1964) by Robert Katz, "an introduction ... that is accessible to any student who has had an introduction to general physics and some slight acquaintance with the calculus" (130 pp; pdf format). Lecture Notes on Special Relativity by J D Cresser Department of Physics Macquarie University. SpecialRelativity.net – An overview with visualizations and minimal mathematics. Relativity 4-ever? The problem of superluminal motion is discussed in an entertaining manner. Visualization Raytracing Special Relativity Software visualizing several scenarios under the influence of special relativity. Real Time Relativity The Australian National University. Relativistic visual effects experienced through an interactive program. Spacetime travel A variety of visualizations of relativistic effects, from relativistic motion to black holes. Through Einstein's Eyes The Australian National University. Relativistic visual effects explained with movies and images. Warp Special Relativity Simulator A computer program to show the effects of traveling close to the speed of light. visualizing the Lorentz transformation. Original interactive FLASH Animations from John de Pillis illustrating Lorentz and Galilean frames, Train and Tunnel Paradox, the Twin Paradox, Wave Propagation, Clock Synchronization, etc. lightspeed An OpenGL-based program developed to illustrate the effects of special relativity on the appearance of moving objects. Animation showing the stars near Earth, as seen from a spacecraft accelerating rapidly to light speed. Albert Einstein
Special relativity
[ "Physics" ]
20,692
[ "Special relativity", "Theory of relativity" ]
26,985
https://en.wikipedia.org/wiki/Salinity
Salinity () is the saltiness or amount of salt dissolved in a body of water, called saline water (see also soil salinity). It is usually measured in g/L or g/kg (grams of salt per liter/kilogram of water; the latter is dimensionless and equal to ‰). Salinity is an important factor in determining many aspects of the chemistry of natural waters and of biological processes within it, and is a thermodynamic state variable that, along with temperature and pressure, governs physical characteristics like the density and heat capacity of the water. A contour line of constant salinity is called an isohaline, or sometimes isohale. Definitions Salinity in rivers, lakes, and the ocean is conceptually simple, but technically challenging to define and measure precisely. Conceptually the salinity is the quantity of dissolved salt content of the water. Salts are compounds like sodium chloride, magnesium sulfate, potassium nitrate, and sodium bicarbonate which dissolve into ions. The concentration of dissolved chloride ions is sometimes referred to as chlorinity. Operationally, dissolved matter is defined as that which can pass through a very fine filter (historically a filter with a pore size of 0.45 μm, but later usually 0.2 μm). Salinity can be expressed in the form of a mass fraction, i.e. the mass of the dissolved material in a unit mass of solution. Seawater typically has a mass salinity of around 35 g/kg, although lower values are typical near coasts where rivers enter the ocean. Rivers and lakes can have a wide range of salinities, from less than 0.01 g/kg to a few g/kg, although there are many places where higher salinities are found. The Dead Sea has a salinity of more than 200 g/kg. Precipitation typically has a TDS of 20 mg/kg or less. Whatever pore size is used in the definition, the resulting salinity value of a given sample of natural water will not vary by more than a few percent (%). Physical oceanographers working in the abyssal ocean, however, are often concerned with precision and intercomparability of measurements by different researchers, at different times, to almost five significant digits. A bottled seawater product known as IAPSO Standard Seawater is used by oceanographers to standardize their measurements with enough precision to meet this requirement. Composition Measurement and definition difficulties arise because natural waters contain a complex mixture of many different elements from different sources (not all from dissolved salts) in different molecular forms. The chemical properties of some of these forms depend on temperature and pressure. Many of these forms are difficult to measure with high accuracy, and in any case complete chemical analysis is not practical when analyzing multiple samples. Different practical definitions of salinity result from different attempts to account for these problems, to different levels of precision, while still remaining reasonably easy to use. For practical reasons salinity is usually related to the sum of masses of a subset of these dissolved chemical constituents (so-called solution salinity), rather than to the unknown mass of salts that gave rise to this composition (an exception is when artificial seawater is created). For many purposes this sum can be limited to a set of eight major ions in natural waters, although for seawater at highest precision an additional seven minor ions are also included. The major ions dominate the inorganic composition of most (but by no means all) natural waters. 
Exceptions include some pit lakes and waters from some hydrothermal springs. The concentrations of dissolved gases like oxygen and nitrogen are not usually included in descriptions of salinity. However, carbon dioxide gas, which when dissolved is partially converted into carbonates and bicarbonates, is often included. Silicon in the form of silicic acid, which usually appears as a neutral molecule in the pH range of most natural waters, may also be included for some purposes (e.g., when salinity/density relationships are being investigated). Seawater The term 'salinity' is, for oceanographers, usually associated with one of a set of specific measurement techniques. As the dominant techniques evolve, so do different descriptions of salinity. Salinities were largely measured using titration-based techniques before the 1980s. Titration with silver nitrate could be used to determine the concentration of halide ions (mainly chlorine and bromine) to give a chlorinity. The chlorinity was then multiplied by a factor to account for all other constituents. The resulting 'Knudsen salinities' are expressed in units of parts per thousand (ppt or ‰). The use of electrical conductivity measurements to estimate the ionic content of seawater led to the development of the scale called the practical salinity scale 1978 (PSS-78). Salinities measured using PSS-78 do not have units. The suffix psu or PSU (denoting practical salinity unit) is sometimes added to PSS-78 measurement values. The addition of PSU as a unit after the value is "formally incorrect and strongly discouraged". In 2010 a new standard for the properties of seawater called the thermodynamic equation of seawater 2010 (TEOS-10) was introduced, advocating absolute salinity as a replacement for practical salinity, and conservative temperature as a replacement for potential temperature. This standard includes a new scale called the reference composition salinity scale. Absolute salinities on this scale are expressed as a mass fraction, in grams per kilogram of solution. Salinities on this scale are determined by combining electrical conductivity measurements with other information that can account for regional changes in the composition of seawater. They can also be determined by making direct density measurements. A sample of seawater from most locations with a chlorinity of 19.37 ppt will have a Knudsen salinity of 35.00 ppt, a PSS-78 practical salinity of about 35.0, and a TEOS-10 absolute salinity of about 35.2 g/kg. The electrical conductivity of this water at a temperature of 15 °C is 42.9 mS/cm. On the global scale, it is extremely likely that human-caused climate change has contributed to observed surface and subsurface salinity changes since the 1950s, and projections of surface salinity changes throughout the 21st century indicate that fresh ocean regions will continue to get fresher and salty regions will continue to get saltier. Salinity is serving as a tracer of different masses. Surface water is pulled in to replace the sinking water, which in turn eventually becomes cold and salty enough to sink. Salinity distribution contributes to shape the oceanic circulation. Lakes and rivers Limnologists and chemists often define salinity in terms of mass of salt per unit volume, expressed in units of mg/L or g/L. It is implied, although often not stated, that this value applies accurately only at some reference temperature because solution volume varies with temperature. Values presented in this way are typically accurate to the order of 1%. 
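As a rough numerical illustration of the chlorinity-based Knudsen salinity described in the seawater section above: the chlorinity is multiplied by a fixed factor, for which 1.80655 is the commonly used value. The factor is stated here as an assumption, since the text itself does not quote the number.

```python
# Rough illustration of the chlorinity-to-salinity conversion described above.
# The 1.80655 multiplier is the commonly used factor and is an assumption here,
# since the text itself does not quote a number.
chlorinity_ppt = 19.37                        # example chlorinity from the text
knudsen_salinity_ppt = 1.80655 * chlorinity_ppt
print(round(knudsen_salinity_ppt, 1))         # 35.0, matching the quoted example
```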
Limnologists also use electrical conductivity, or "reference conductivity", as a proxy for salinity. This measurement may be corrected for temperature effects, and is usually expressed in units of μS/cm. A river or lake water with a salinity of around 70 mg/L will typically have a specific conductivity at 25 °C of between 80 and 130 μS/cm. The actual ratio depends on the ions present. The actual conductivity usually changes by about 2% per degree Celsius, so the measured conductivity at 5 °C might only be in the range of 50–80 μS/cm. Direct density measurements are also used to estimate salinities, particularly in highly saline lakes. Sometimes density at a specific temperature is used as a proxy for salinity. At other times an empirical salinity/density relationship developed for a particular body of water is used to estimate the salinity of samples from a measured density. Classification of water bodies based upon salinity Marine waters are those of the ocean, another term for which is euhaline seas. The salinity of euhaline seas is 30 to 35 ‰. Brackish seas or waters have salinity in the range of 0.5 to 29 ‰ and metahaline seas from 36 to 40 ‰. These waters are all regarded as thalassic because their salinity is derived from the ocean and defined as homoiohaline if salinity does not vary much over time (essentially constant). The table on the right, modified from Por (1972), follows the "Venice system" (1959). In contrast to homoiohaline environments are certain poikilohaline environments (which may also be thalassic) in which the salinity variation is biologically significant. Poikilohaline water salinities may range anywhere from 0.5 to greater than 300 ‰. The important characteristic is that these waters tend to vary in salinity over some biologically meaningful range seasonally or on some other roughly comparable time scale. Put simply, these are bodies of water with quite variable salinity. Highly saline water, from which salts crystallize (or are about to), is referred to as brine. Environmental considerations Salinity is an ecological factor of considerable importance, influencing the types of organisms that live in a body of water. As well, salinity influences the kinds of plants that will grow either in a water body, or on land fed by a water (or by a groundwater). A plant adapted to saline conditions is called a halophyte. A halophyte which is tolerant to residual sodium carbonate salinity are called glasswort or saltwort or barilla plants. Organisms (mostly bacteria) that can live in very salty conditions are classified as extremophiles, or halophiles specifically. An organism that can withstand a wide range of salinities is euryhaline. Salts are expensive to remove from water, and salt content is an important factor in water use, factoring into potability and suitability for irrigation. Increases in salinity have been observed in lakes and rivers in the United States, due to common road salt and other salt de-icers in runoff. The degree of salinity in oceans is a driver of the world's ocean circulation, where density changes due to both salinity changes and temperature changes at the surface of the ocean produce changes in buoyancy, which cause the sinking and rising of water masses. Changes in the salinity of the oceans are thought to contribute to global changes in carbon dioxide as more saline waters are less soluble to carbon dioxide. 
In addition, during glacial periods, the hydrography is such that a possible cause of reduced circulation is the production of stratified oceans. In such cases, it is more difficult to subduct water through the thermohaline circulation. Not only is salinity a driver of ocean circulation, but changes in ocean circulation also affect salinity, particularly in the subpolar North Atlantic where from 1990 to 2010 increased contributions of Greenland meltwater were counteracted by increased northward transport of salty Atlantic waters. However, North Atlantic waters have become fresher since the mid-2010s due to increased Greenland meltwater flux. See also Desalination for economic purposes Desalination of water Desalination of soil: soil salinity control Sodium adsorption ratio Measuring salinity Salinometer Salinity by biologic context In organisms generally, with particular emphasis on human health Electrolytes Fluid balance Hypernatremia Hyponatremia Salt poisoning In plants Arabidopsis thaliana responses to salinity In fish Stenohaline fish Euryhaline fish Salinity by geologic context Fresh water Seawater Soil salinity Thermohaline circulation Paleosalinity CORA dataset data on salinity of global oceans General cases of solute concentration Osmotic concentration Tonicity References Further reading External links (dead link) Chemical oceanography Aquatic ecology Oceanography Coastal geography Water quality indicators Articles containing video clips Salts
Salinity
[ "Physics", "Chemistry", "Biology", "Environmental_science" ]
2,532
[ "Hydrology", "Applied and interdisciplinary physics", "Oceanography", "Water pollution", "Chemical oceanography", "Salts", "Water quality indicators", "Ecosystems", "Aquatic ecology" ]
18,071,097
https://en.wikipedia.org/wiki/Scoop%20wheel
A scoop wheel or scoopwheel is a pump, usually used for land drainage. A scoop wheel pump is similar in construction to a water wheel, but works in the opposite manner: a waterwheel is water-powered and used to drive machinery, a scoop wheel is engine-driven and is used to lift water from one level to another. Principally used for land drainage, early scoop wheels were wind-driven but later steam-powered beam engines were used. It can be regarded as a form of pump. A scoop wheel produces a lot of spray. They were frequently encased in a brick building. To maintain efficiency when the river into which the water was discharged was of variable level, or tidal, a 'rising breast' was used, a sort of inclined sluice. The basic construction is, of necessity, similar to an undershot water wheel. The individual blades were frequently called ladles. Scoop wheels have been used in land drainage in Northern Germany, in the Netherlands, and in the UK, and occasionally elsewhere in the world. They began to be replaced in the mid 19th century by centrifugal pumps. The East and West Fens to the north of Boston, Lincolnshire were drained by such pumps in 1867, but although they were smaller and more economical to install, a Mr. Lunn was still arguing that scoop wheels were a better solution if the initial cost did not rule them out, they were employed in situations where the water did not need to be raised by more than , and where the water levels of the input and output did not vary much. An interesting comparison between the two types of pumps is available, because a vertical spindle centrifugal pump was installed at Prickwillow on the River Lark in Cambridgeshire, alongside an existing scoop wheel. A series of tests were carried out in 1880, to check their efficiency. The scoop wheel lifted 71.45 tons per minute through , with the engine indicating that it was developing , while the newer installation was developing , and raised 75.93 tons per minute through . Efficiency was calculated as 46 per cent for the scoop wheel and 52.79 per cent for the centrifugal pump. The most significant difference was the coal consumption, which was reduced from per hour to per hour for the newer system. See also Noria Sakia Dredger Pumping stations employing a scoop wheel Dogdyke Engine, Lincolnshire Pinchbeck Engine, Lincolnshire Pode Hole, Lincolnshire (scoop wheel no longer present) Stretham Old Engine, Cambridgeshire Westonzoyland Pumping Station Museum, Somerset (scoop wheel no longer present) References Bibliography External links Berney Arms windmill, preserved by English Heritage Summary of scoopwheel history An american example, sadly without pictures of the wheel Drainage Industrial archaeology Pumps
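The 1880 comparison described above is essentially a hydraulic-efficiency calculation: useful power delivered to the water divided by the engine's indicated power. The sketch below shows the arithmetic only; the lift height and indicated horsepower are hypothetical placeholders, not the figures from the Prickwillow tests.

```python
# Sketch of the efficiency comparison described above.  The lift height and
# indicated horsepower are hypothetical placeholders, not the article's figures.
TON_KG = 1016.05      # kilograms per long ton (assumed interpretation of "tons")
HP_W = 745.7          # watts per horsepower
G = 9.81              # m/s^2

def pump_efficiency(tons_per_minute, lift_m, indicated_hp):
    """Useful hydraulic power divided by the engine's indicated power."""
    mass_flow = tons_per_minute * TON_KG / 60.0       # kg/s
    water_power = mass_flow * G * lift_m              # watts delivered to the water
    return water_power / (indicated_hp * HP_W)

# Hypothetical inputs chosen only to show the calculation; they happen to give
# a figure of the same order as the 46 per cent quoted in the text.
print(pump_efficiency(71.45, lift_m=1.8, indicated_hp=62.0))   # about 0.46
```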
Scoop wheel
[ "Physics", "Chemistry" ]
553
[ "Physical systems", "Hydraulics", "Turbomachinery", "Pumps" ]
18,075,735
https://en.wikipedia.org/wiki/Ziegler%E2%80%93Nichols%20method
The Ziegler–Nichols tuning method is a heuristic method of tuning a PID controller. It was developed by John G. Ziegler and Nathaniel B. Nichols. It is performed by setting the I (integral) and D (derivative) gains to zero. The P (proportional) gain is then increased from zero until it reaches the ultimate gain Ku, at which the output of the control loop has stable and consistent oscillations. Ku and the oscillation period Tu are then used to set the P, I, and D gains, depending on the type of controller used and the behaviour desired. The ultimate gain Ku is defined as 1/M, where M is the amplitude ratio. These three parameters are used to establish the correction u(t) from the error e(t) via the standard PID control law u(t) = Kp [ e(t) + (1/Ti) ∫ e dt + Td de(t)/dt ], which has the transfer function relationship C(s) = Kp (1 + 1/(Ti s) + Td s) between error and controller output. Evaluation The Ziegler–Nichols tuning (the 'Classic PID' rule) creates a "quarter wave decay". This is an acceptable result for some purposes, but not optimal for all applications. This tuning rule is meant to give PID loops the best disturbance rejection. It yields an aggressive gain and overshoot – some applications wish instead to minimize or eliminate overshoot, and for these this method is inappropriate. In this case, the 'no overshoot' variant of the rule can be used to compute appropriate controller gains. References Bequette, B. Wayne. Process Control: Modeling, Design, and Simulation. Prentice Hall PTR, 2010. External links https://web.archive.org/web/20080616062648/http://controls.engin.umich.edu:80/wiki/index.php/PIDTuningClassical#Ziegler-Nichols_Method Control devices
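The tuning values themselves are commonly tabulated; the sketch below gives the widely quoted 'classic PID' and 'no overshoot' rules as functions of the ultimate gain Ku and oscillation period Tu. The coefficients are quoted from standard references rather than from the text above, so they should be checked before use.

```python
# Widely quoted Ziegler-Nichols tuning rules, expressed as functions of the
# ultimate gain Ku and the oscillation period Tu.  Coefficients follow the
# usual "classic PID" and "no overshoot" rows of the published table.
def zn_classic_pid(Ku, Tu):
    Kp = 0.6 * Ku
    Ti = Tu / 2.0
    Td = Tu / 8.0
    return Kp, Kp / Ti, Kp * Td          # Kp, Ki, Kd

def zn_no_overshoot_pid(Ku, Tu):
    Kp = 0.2 * Ku
    Ti = Tu / 2.0
    Td = Tu / 3.0
    return Kp, Kp / Ti, Kp * Td

print(zn_classic_pid(Ku=4.0, Tu=1.5))    # example ultimate gain/period (assumed)
```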
Ziegler–Nichols method
[ "Engineering" ]
377
[ "Control devices", "Control engineering" ]
16,871,767
https://en.wikipedia.org/wiki/Camassa%E2%80%93Holm%20equation
In fluid dynamics, the Camassa–Holm equation is the integrable, dimensionless and non-linear partial differential equation The equation was introduced by Roberto Camassa and Darryl Holm as a bi-Hamiltonian model for waves in shallow water, and in this context the parameter κ is positive and the solitary wave solutions are smooth solitons. In the special case that κ is equal to zero, the Camassa–Holm equation has peakon solutions: solitons with a sharp peak, so with a discontinuity at the peak in the wave slope. Relation to waves in shallow water The Camassa–Holm equation can be written as the system of equations: with p the (dimensionless) pressure or surface elevation. This shows that the Camassa–Holm equation is a model for shallow water waves with non-hydrostatic pressure and a water layer on a horizontal bed. The linear dispersion characteristics of the Camassa–Holm equation are: with ω the angular frequency and k the wavenumber. Not surprisingly, this is of similar form as the one for the Korteweg–de Vries equation, provided κ is non-zero. For κ equal to zero, the Camassa–Holm equation has no frequency dispersion — moreover, the linear phase speed is zero for this case. As a result, κ is the phase speed for the long-wave limit of k approaching zero, and the Camassa–Holm equation is (if κ is non-zero) a model for one-directional wave propagation like the Korteweg–de Vries equation. Hamiltonian structure Introducing the momentum m as then two compatible Hamiltonian descriptions of the Camassa–Holm equation are: Integrability The Camassa–Holm equation is an integrable system. Integrability means that there is a change of variables (action-angle variables) such that the evolution equation in the new variables is equivalent to a linear flow at constant speed. This change of variables is achieved by studying an associated isospectral/scattering problem, and is reminiscent of the fact that integrable classical Hamiltonian systems are equivalent to linear flows at constant speed on tori. The Camassa–Holm equation is integrable provided that the momentum is positive — see and for a detailed description of the spectrum associated to the isospectral problem, for the inverse spectral problem in the case of spatially periodic smooth solutions, and for the inverse scattering approach in the case of smooth solutions that decay at infinity. Exact solutions Traveling waves are solutions of the form representing waves of permanent shape f that propagate at constant speed c. These waves are called solitary waves if they are localized disturbances, that is, if the wave profile f decays at infinity. If the solitary waves retain their shape and speed after interacting with other waves of the same type, we say that the solitary waves are solitons. There is a close connection between integrability and solitons. In the limiting case when κ = 0 the solitons become peaked (shaped like the graph of the function f(x) = e−|x|), and they are then called peakons. It is possible to provide explicit formulas for the peakon interactions, visualizing thus the fact that they are solitons. For the smooth solitons the soliton interactions are less elegant. This is due in part to the fact that, unlike the peakons, the smooth solitons are relatively easy to describe qualitatively — they are smooth, decaying exponentially fast at infinity, symmetric with respect to the crest, and with two inflection points — but explicit formulas are not available. Notice also that the solitary waves are orbitally stable i.e. 
their shape is stable under small perturbations, both for the smooth solitons and for the peakons. Wave breaking The Camassa–Holm equation models breaking waves: a smooth initial profile with sufficient decay at infinity develops into either a wave that exists for all times or into a breaking wave (wave breaking being characterized by the fact that the solution remains bounded but its slope becomes unbounded in finite time). The fact that the equations admits solutions of this type was discovered by Camassa and Holm and these considerations were subsequently put on a firm mathematical basis. It is known that the only way singularities can occur in solutions is in the form of breaking waves. Moreover, from the knowledge of a smooth initial profile it is possible to predict (via a necessary and sufficient condition) whether wave breaking occurs or not. As for the continuation of solutions after wave breaking, two scenarios are possible: the conservative case and the dissipative case (with the first characterized by conservation of the energy, while the dissipative scenario accounts for loss of energy due to breaking). Long-time asymptotics It can be shown that for sufficiently fast decaying smooth initial conditions with positive momentum splits into a finite number and solitons plus a decaying dispersive part. More precisely, one can show the following for : Abbreviate . In the soliton region the solutions splits into a finite linear combination solitons. In the region the solution is asymptotically given by a modulated sine function whose amplitude decays like . In the region the solution is asymptotically given by a sum of two modulated sine function as in the previous case. In the region the solution decays rapidly. In the case the solution splits into an infinite linear combination of peakons (as previously conjectured). Geometric formulation In the spatially periodic case, the Camassa–Holm equation can be given the following geometric interpretation. The group of diffeomorphisms of the unit circle is an infinite-dimensional Lie group whose Lie algebra consists of smooth vector fields on . The inner product on , induces a right-invariant Riemannian metric on . Here is the standard coordinate on . Let be a time-dependent vector field on , and let be the flow of , i.e. the solution to Then is a solution to the Camassa–Holm equation with , if and only if the path is a geodesic on with respect to the right-invariant metric. For general , the Camassa–Holm equation corresponds to the geodesic equation of a similar right-invariant metric on the universal central extension of , the Virasoro group. See also Degasperis–Procesi equation Hunter–Saxton equation Notes References Further reading Peakon solutions Water wave theory Existence, uniqueness, wellposedness, stability, propagation speed, etc. Travelling waves Integrability structure (symmetries, hierarchy of soliton equations, conservations laws) and differential-geometric formulation Partial differential equations Equations of fluid dynamics Integrable systems Solitons
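For reference, the Camassa–Holm equation discussed above and its peakon solution can be written explicitly. The following reconstruction uses the standard form in which the equation is usually quoted, taken from standard presentations rather than from the text itself.

```latex
% Standard form of the Camassa-Holm equation:
u_t + 2\kappa u_x - u_{xxt} + 3 u u_x = 2 u_x u_{xx} + u\, u_{xxx}
% For \kappa = 0, the single peakon travelling at speed c:
u(x,t) = c\, e^{-|x - ct|}
```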
Camassa–Holm equation
[ "Physics", "Chemistry" ]
1,412
[ "Equations of fluid dynamics", "Equations of physics", "Integrable systems", "Theoretical physics", "Fluid dynamics" ]
16,874,116
https://en.wikipedia.org/wiki/Ultramicroelectrode
An ultramicroelectrode (UME) is a working electrode with a low surface area primarily used in voltammetry experiments. The small size of UMEs limits mass transfer, which give them large diffusion layers and small overall currents at typical electrochemical potentials. These features allow UMEs to achieve useful cyclic steady-state conditions at fast scan rates (V/s) with limited current distortion. UMEs were developed independently by Wightman and Fleischmann around 1980. UMEs enable electrochemical measurements in electrolytes with high solution resistance, such as organic solvents. The low current at an UME limits the Ohmic (or iR) drop, which conventional electrodes do not limit. Furthermore, the low Ohmic drop at UMEs lead to low voltage distortions at the electrode-electrolyte interface, allowing for the use of two electrodes in a voltammetric experiment instead of the conventional three electrodes. Design Ultramicroelectrodes are often defined as electrodes which are smaller than the diffusion layer achieved in a readily accessed experiment. A working definition is an electrode that has at least one dimension (the critical dimension) smaller than 25 μm. Platinum electrodes with a radius of 5 μm are commercially available and electrodes with critical dimension of 0.1 μm have been made. Electrodes with even smaller critical dimension have been reported in the literature, but exist mostly as proofs of concept. The most common UME is a disk shaped electrode created by embedding a thin wire in glass, resin, or plastic. The resin is cut and polished to expose a cross section of the wire. Other shapes, such as wires and rectangles, have also been reported. Carbon-fiber microelectrodes are fabricated with conductive carbon fibers sealed in glass capillaries with exposed tips. These electrodes are frequently used with in vivo voltammetry. Theory Linear region Every electrode has a range of scan rates called the linear region. The response to a reversible redox couple in the linear region is a "diffusion controlled peak" which can be modeled with the Cottrell equation. The upper limit of the useful linear region is bound by an excess of charging current combined with distortions created from large peak currents and associated resistance. The charging current scales linearly with scan rate while the peak current, which contains the useful information, scales with the square root of scan rate. As scan rates increase, the relative peak response diminishes. Some of the charge current can be mitigated with RC compensation and/or mathematically removed after the experiment. However, the distortions resulting from increased current and the associated resistance cannot be subtracted. These distortions ultimately limit the scan rate for which an electrode is useful. For example, a working electrode with a radius of 1.0 mm is not useful for experiments much greater than 500 mV/s. Moving to an UME drops the currents being passed and thus greatly increases the useful sweep rate up to 106 V/s. These faster scan rates allow the investigation of electrochemical reaction mechanisms with much higher rates than can be explored with regular working electrodes. The linear region of an UME only exists at fast scan rates, which is helpful when studying faster electrochemical processes. By adjusting the size of the working electrode an enormous range of speeds can be studied. Steady-state region Scan rates slower than the linear region are mathematically complex to model and rarely investigated. 
At even slower scan rates there is the steady-state region. In the steady-state region, linear voltammograms display reversible redox couples as steps rather than peaks. These steps can be modeled to gather useful electrochemical information. To access the steady-state region, the scan rate must be lowered. However, as scan rates are slowed, the current also drops, which can reduce the reliability of the measurement. The low ratio of diffusion layer volume to electrode surface area means that regular working electrodes can yield unreliable current measurements at low scan rates. In contrast, the ratio of diffusion layer volume to electrode surface area is much higher for UMEs. When the scan rate of a UME is lowered, it quickly enters the steady-state regime at useful scan rates. Although UMEs have small total currents, their steady-state currents are high compared to regular working electrodes. Rg Value The Rg value is defined as R/r, the ratio between the radius of the insulating sheath (R) and the radius of the conductive material (r or a). The Rg value is used to evaluate the quality of a UME: a smaller Rg value means less interference with diffusion towards the conductive material, and hence a better, more sensitive electrode. The Rg value is obtained either by rough estimation from a microscope image (as long as the electrode was fabricated from a homogeneous wire of known diameter) or by direct calculation from the steady-state current (iss) obtained from a cyclic voltammogram, based on the following equation: iss = k n F a D C*, where k is a geometric constant (disk, k = 4; hemispherical, k = 2π), n is the number of electrons involved in the reaction, F is the Faraday constant (96,485 C eq⁻¹), a is the radius of the electroactive surface, D is the diffusion coefficient of the redox species (7.8 × 10⁻⁶ cm² s⁻¹ for ferrocene methanol; 8.7 × 10⁻⁶ cm² s⁻¹ for ruthenium hexamine) and C* is the concentration of the dissolved redox species. See also Bioelectronics Multielectrode array Scanning electrochemical microscopy Fast-scan cyclic voltammetry References Electroanalytical chemistry devices Electrodes
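A worked example of the steady-state current expression iss = k n F a D C* given above, for a disk-shaped UME; the radius and bulk concentration are assumed example values.

```python
# Worked example of i_ss = k * n * F * a * D * C* for a disk UME.
# The radius and concentration are assumed example values.
F = 96485.0      # Faraday constant, C/mol
k = 4            # geometric constant for a disk
n = 1            # electrons transferred per molecule
a = 5e-4         # electroactive radius in cm (5 micrometres, assumed)
D = 7.8e-6       # diffusion coefficient of ferrocene methanol, cm^2/s (from the text)
C = 1e-6         # bulk concentration in mol/cm^3 (1 mM, assumed)

i_ss = k * n * F * a * D * C
print(i_ss)      # about 1.5e-9 A, i.e. a steady-state current of roughly 1.5 nA
```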
Ultramicroelectrode
[ "Chemistry" ]
1,189
[ "Electroanalytical chemistry devices", "Electrochemistry", "Electrodes", "Electroanalytical chemistry" ]
16,877,732
https://en.wikipedia.org/wiki/Beaverboard
Beaverboard (also beaver board) is a fiberboard building material formed of wood fibre compressed into sheets. It was originally a trademark for a lumber product built up from the fibre of clean white spruce, made from 1906 until 1928 by the Beaver Manufacturing Company at its plant in Beaver Falls and marketed from its headquarters on Beaver Road in Buffalo, New York. Beaverboard has occasionally been used as a canvas by artists; the painting American Gothic (1930) by Grant Wood was painted on a beaverboard panel. See also Paperboard References Building materials
Beaverboard
[ "Physics", "Engineering" ]
110
[ "Building engineering", "Materials stubs", "Construction", "Materials", "Building materials", "Matter", "Architecture" ]
16,879,665
https://en.wikipedia.org/wiki/Circle%20packing
In geometry, circle packing is the study of the arrangement of circles (of equal or varying sizes) on a given surface such that no overlapping occurs and so that no circle can be enlarged without creating an overlap. The associated packing density, , of an arrangement is the proportion of the surface covered by the circles. Generalisations can be made to higher dimensions – this is called sphere packing, which usually deals only with identical spheres. The branch of mathematics generally known as "circle packing" is concerned with the geometry and combinatorics of packings of arbitrarily-sized circles: these give rise to discrete analogs of conformal mapping, Riemann surfaces and the like. Densest packing In the two-dimensional Euclidean plane, Joseph Louis Lagrange proved in 1773 that the highest-density lattice packing of circles is the hexagonal packing arrangement, in which the centres of the circles are arranged in a hexagonal lattice (staggered rows, like a honeycomb), and each circle is surrounded by six other circles. For circles of diameter and hexagons of side length , the hexagon area and the circle area are, respectively: The area covered within each hexagon by circles is: Finally, the packing density is: In 1890, Axel Thue published a proof that this same density is optimal among all packings, not just lattice packings, but his proof was considered by some to be incomplete. The first rigorous proof is attributed to László Fejes Tóth in 1942. While the circle has a relatively low maximum packing density, it does not have the lowest possible, even among centrally-symmetric convex shapes: the smoothed octagon has a packing density of about 0.902414, the smallest known for centrally-symmetric convex shapes and conjectured to be the smallest possible. (Packing densities of concave shapes such as star polygons can be arbitrarily small.) Other packings At the other extreme, Böröczky demonstrated that arbitrarily low density arrangements of rigidly packed circles exist. There are eleven circle packings based on the eleven uniform tilings of the plane. In these packings, every circle can be mapped to every other circle by reflections and rotations. The hexagonal gaps can be filled by one circle and the dodecagonal gaps can be filled with seven circles, creating 3-uniform packings. The truncated trihexagonal tiling with both types of gaps can be filled as a 4-uniform packing. The snub hexagonal tiling has two mirror-image forms. On the sphere A related problem is to determine the lowest-energy arrangement of identically interacting points that are constrained to lie within a given surface. The Thomson problem deals with the lowest energy distribution of identical electric charges on the surface of a sphere. The Tammes problem is a generalisation of this, dealing with maximising the minimum distance between circles on sphere. This is analogous to distributing non-point charges on a sphere. In bounded areas Packing circles in simple bounded shapes is a common type of problem in recreational mathematics. The influence of the container walls is important, and hexagonal packing is generally not optimal for small numbers of circles. Specific problems of this type that have been studied include: Circle packing in a circle Circle packing in a square Circle packing in a rectangle Circle packing in an equilateral triangle Circle packing in an isosceles right triangle See the linked articles for details. 
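The hexagonal-packing computation outlined in the densest-packing section above can be written out as follows; this is a reconstruction assuming the usual choice of a regular hexagonal Voronoi cell of inradius d/2 around each circle of diameter d.

```latex
% Each circle of diameter d sits in a regular hexagon of inradius d/2,
% i.e. side length s = d/\sqrt{3}:
A_{\mathrm{hexagon}} = \frac{3\sqrt{3}}{2}\, s^2 = \frac{\sqrt{3}}{2}\, d^2,
\qquad
A_{\mathrm{circle}} = \frac{\pi}{4}\, d^2
% Packing density:
\eta = \frac{A_{\mathrm{circle}}}{A_{\mathrm{hexagon}}}
     = \frac{\pi}{2\sqrt{3}} = \frac{\pi}{\sqrt{12}} \approx 0.9069
```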
Unequal circles There are also a range of problems which permit the sizes of the circles to be non-uniform. One such extension is to find the maximum possible density of a system with two specific sizes of circle (a binary system). Only nine particular radius ratios permit compact packing, which is when every pair of circles in contact is in mutual contact with two other circles (when line segments are drawn from contacting circle-center to circle-center, they triangulate the surface). For all these radius ratios a compact packing is known that achieves the maximum possible packing fraction (above that of uniformly-sized discs) for mixtures of discs with that radius ratio. All nine have ratio-specific packings denser than the uniform hexagonal packing, as do some radius ratios without compact packings. It is also known that if the radius ratio is above 0.742, a binary mixture cannot pack better than uniformly-sized discs. Upper bounds for the density that can be obtained in such binary packings at smaller ratios have also been obtained. Applications Quadrature amplitude modulation is based on packing circles into circles within a phase-amplitude space. A modem transmits data as a series of points in a two-dimensional phase-amplitude plane. The spacing between the points determines the noise tolerance of the transmission, while the circumscribing circle diameter determines the transmitter power required. Performance is maximized when the constellation of code points are at the centres of an efficient circle packing. In practice, suboptimal rectangular packings are often used to simplify decoding. Circle packing has become an essential tool in origami design, as each appendage on an origami figure requires a circle of paper. Robert J. Lang has used the mathematics of circle packing to develop computer programs that aid in the design of complex origami figures. See also Apollonian gasket Circle packing in a rectangle Circle packing in a square Circle packing in a circle Inversive distance Kepler conjecture Malfatti circles Packing problem References Bibliography
Circle packing
[ "Mathematics" ]
1,126
[ "Circle packing", "Mathematical problems", "Packing problems", "Geometry problems" ]
16,880,962
https://en.wikipedia.org/wiki/River%20mile
A river mile is a measure of distance in miles along a river from its mouth. River mile numbers begin at zero and increase further upstream. The corresponding metric unit using kilometers is the river kilometer. They are analogous to vehicle roadway mile markers, except that river miles are rarely marked on the physical river; instead they are marked on navigation charts, and topographic maps. Riverfront properties are sometimes partially legally described by their river mile. The river mile is not the same as the length of the river, rather it is a means of locating any feature along the river relative to its distance from the mouth, when measured along the course (or navigable channel) of the river. River mile zero may not be exactly at the mouth. For example, the Willamette River (which discharges into the Columbia River) has its river mile zero at the edge of the navigable channel in the Columbia, some beyond the mouth. Also, the river mile zero for the Lower Mississippi River is located at Head of Passes, where the main stem of the Mississippi splits into three major branches before flowing into the Gulf of Mexico. Mileages are indicated as AHP (Above Head of Passes) or BHP (Below Head of Passes). Uses in the United States River miles are used in a variety of ways. The Commonwealth of Pennsylvania, in its 2001 Pennsylvania Gazetteer of Streams, lists every named stream and every unnamed stream in a named geographic feature in the state, and gives the drainage basin area, mouth coordinates, and river mile, specifically the distance from the mouth of the tributary to the mouth of its parent stream. Some islands are named for their river mile distance, for example the Allegheny River in Pennsylvania has Six Mile Island, Nine Mile Island, Twelve Mile Island, and Fourteen Mile Island. (The last two islands form Allegheny Islands State Park, although Fourteen Mile Island was split into two parts by a dam). The state of Ohio uses the "River Mile System of Ohio", which is "a method to reference locations on streams and rivers of Ohio". This work began by hand measurements on paper maps between 1972 and 1975 and has since been converted to a computer-based electronic version, which now covers the state in 787 river mile maps. Locations of facilities such as wastewater treatment plants and water quality measurement sites are referenced via river miles. Ohio uses one of two systems. The simplest is just the name of the river and the location in river miles. In cases where there is ambiguity, for example when more than one stream has the same name, it uses a series of river mile strings referring to the distance to the ocean along either the Ohio River (and Mississippi River) or through Lake Erie (and the Saint Lawrence Seaway). Another example of a River Mile System is utilized by the U.S. Bureau of Reclamation, in New Mexico, on the Rio Grande. The river miles in Central New Mexico are measured from Caballo Dam upstream to near Embudo, New Mexico. For example, a river mile sign in the Albuquerque Bosque (part of Albuquerque's Open Space Park) is River Mile 184, approximately 184 miles above Caballo Dam. As mentioned earlier in this system the further you go up stream the higher the river mile number. This system is measured in 10ths of a mile. The U.S. Army Corps of Engineers uses river miles for its navigation maps. References External links Waterway Mile Marker Database Delaware River Mileage System Rivers Hydrology Topography Distance
River mile
[ "Physics", "Chemistry", "Mathematics", "Engineering", "Environmental_science" ]
700
[ "Hydrology", "Distance", "Physical quantities", "Quantity", "Size", "Space", "Environmental engineering", "Spacetime", "Wikipedia categories named after physical quantities" ]
16,881,256
https://en.wikipedia.org/wiki/Lindemann%20index
The Lindemann index is a simple measure of thermally driven disorder in atoms or molecules. Definition The local Lindemann index of atom i is defined as q_i = (1/(N−1)) Σ_{j≠i} sqrt(⟨r_ij²⟩ − ⟨r_ij⟩²) / ⟨r_ij⟩, where r_ij is the distance between atoms i and j, N is the number of atoms, and the angle brackets indicate a time average. The global Lindemann index is a system average of this quantity. In condensed matter physics, a departure from linearity in the behaviour of the global Lindemann index, or an increase above a threshold value related to the spacing between atoms (or micelles, particles, globules, etc.), is often taken as the indication that a solid–liquid phase transition has taken place. See Lindemann melting criterion. Biomolecules often possess separate regions with different order characteristics. In order to quantify or illustrate local disorder, the local Lindemann index can be used. Factors when using the Lindemann index Care must be taken if the molecule possesses globally defined dynamics, such as motion about a hinge or pivot, because these motions will obscure the local motions which the Lindemann index is designed to quantify. An appropriate tactic in this circumstance is to sum the r_ij only over a small number of neighbouring atoms to arrive at each q_i. A variety of further modifications to the Lindemann index are available, with different merits, e.g. for the study of glassy versus crystalline materials. References Molecular physics Condensed matter physics Dimensionless numbers of physics
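A minimal computational sketch of the definition above, for a trajectory of particle positions. The implementation details (array layout, test data) are assumptions made for the example, not taken from the text.

```python
# Minimal sketch of the global Lindemann index for a trajectory of positions.
# traj has shape (n_frames, n_atoms, 3); the test data below are assumed.
import numpy as np

def lindemann_index(traj):
    n_frames, n_atoms, _ = traj.shape
    diff = traj[:, :, None, :] - traj[:, None, :, :]
    r = np.linalg.norm(diff, axis=-1)        # pairwise distances per frame
    mean_r = r.mean(axis=0)                  # time average <r_ij>
    mean_r2 = (r ** 2).mean(axis=0)          # time average <r_ij^2>
    i, j = np.triu_indices(n_atoms, k=1)
    pair_terms = np.sqrt(mean_r2[i, j] - mean_r[i, j] ** 2) / mean_r[i, j]
    # Averaging over all pairs is equivalent to averaging the local q_i over atoms.
    return pair_terms.mean()

rng = np.random.default_rng(0)
base = rng.normal(size=(10, 3))                          # a small cluster of 10 atoms
traj = base + 0.02 * rng.normal(size=(100, 10, 3))       # thermal jitter over 100 frames
print(lindemann_index(traj))                             # small value: "solid-like"
```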
Lindemann index
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
280
[ "Molecular physics", "Phases of matter", "Materials science", " molecular", "Condensed matter physics", "nan", "Atomic", "Matter", " and optical physics" ]
16,884,572
https://en.wikipedia.org/wiki/Laser%20absorption%20spectrometry
Laser absorption spectrometry (LAS) refers to techniques that use lasers to assess the concentration or amount of a species in gas phase by absorption spectrometry (AS). Optical spectroscopic techniques in general, and laser-based techniques in particular, have a great potential for detection and monitoring of constituents in gas phase. They combine a number of important properties, e.g. a high sensitivity and a high selectivity with non-intrusive and remote sensing capabilities. Laser absorption spectrometry has become the foremost technique for quantitative assessments of atoms and molecules in gas phase. It is also a widely used technique for a variety of other applications, e.g. within the field of optical frequency metrology or in studies of light–matter interactions. The most common technique is tunable diode laser absorption spectroscopy (TDLAS), which has been commercialized and is used for a variety of applications. Direct laser absorption spectrometry The most appealing advantage of LAS is its ability to provide absolute quantitative assessments of species. Its biggest disadvantage is that it relies on a measurement of a small change in power from a high level; any noise introduced by the light source or the transmission through the optical system will deteriorate the sensitivity of the technique. Direct laser absorption spectrometric (DLAS) techniques are therefore often limited to detection of absorbance ~10−3, which is far away from the theoretical shot noise level, which for a single pass DAS technique is in the 10−7 – 10−8 range. This detection limit is insufficient for many types of applications. The detection limit can be improved by (1) reducing the noise, (2) using transitions with larger transition strengths, or (3) increasing the effective path length. The first can be achieved by the use of a modulation technique, the second by using transitions in unconventional wavelength regions, and the third by using external cavities. Modulated techniques Modulation techniques make use of the fact that technical noise usually decreases with increasing frequency (often referred to as 1/f noise) and improve the signal contrast by encoding and detecting the absorption signal at a high frequency, where the noise level is low. The most common modulation techniques, wavelength modulation spectroscopy (WMS) and frequency modulation spectroscopy (FMS), achieve this by rapidly scanning the frequency of the light across the absorbing transition. Both techniques have the advantage that the demodulated signal is low in the absence of absorbers, but they are also limited by residual amplitude modulation, either from the laser or from multiple reflections in the optical system (etalon effects). The most frequently used laser-based technique for environmental investigations and process control applications is based upon diode lasers and WMS (typically referred to as TDLAS). The typical sensitivity of WMS and FMS techniques is in the 10−5 range. Due to their good tunability and long lifetime (> 10,000 hours), most practical laser-based absorption spectroscopy is performed today by distributed feedback diode lasers emitting in the 760 nm – 16 μm range. This gives rise to systems that can run unattended for thousands of hours with minimum maintenance. 
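As a concrete illustration of the small-signal problem described above, the sketch below models a single-pass direct-absorption measurement with the Beer–Lambert law, I = I0·exp(−αL), and shows why an absorbance near 10−3 sits at the edge of what typical intensity noise allows. The cell length, absorption coefficient and noise level are illustrative assumptions, not values taken from the text.

import numpy as np

def transmitted(I0, alpha, L):
    # Beer-Lambert law: transmitted intensity after a single pass of length L
    return I0 * np.exp(-alpha * L)

I0 = 1.0          # normalized incident power
L = 1.0           # path length in metres (assumed)
alpha = 1e-3      # absorption coefficient in 1/m (assumed)

dip = I0 - transmitted(I0, alpha, L)
print(f"fractional dip in transmitted power: {dip / I0:.2e}")   # ~1e-3

# If the laser and optics contribute ~1e-3 rms relative intensity noise, an
# absorbance of ~1e-3 is barely detectable (the DLAS limit quoted above),
# while the single-pass shot-noise limit (~1e-7 to 1e-8) lies far below it.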
Laser absorption spectrometry using fundamental vibrational or electronic transitions The second way of improving the detection limit of LAS is to employ transitions with larger line strength, either in the fundamental vibrational band or in electronic transitions. The former, which normally reside at ~5 μm, have line strengths that are ~2–3 orders of magnitude higher than those of typical overtone transitions. On the other hand, electronic transitions often have line strengths that are yet another 1–2 orders of magnitude larger. The transition strengths for the electronic transitions of NO, which are located in the UV range (at ~227 nm), are ~2 orders of magnitude larger than those in the MIR region. The recent development of quantum cascade (QC) lasers working in the MIR region has opened up new possibilities for sensitive detection of molecular species on their fundamental vibrational bands. It is more difficult to generate stable cw light addressing electronic transitions, since these often lie in the UV region. Cavity enhanced absorption spectrometry The third way of improving the sensitivity of LAS is to increase the path length. This can be obtained by placing the species inside a cavity in which the light bounces back and forth many times, whereby the interaction length can be increased considerably. This has led to a group of techniques denoted as cavity enhanced AS (CEAS). The cavity can either be placed inside the laser, giving rise to intracavity AS, or outside, when it is referred to as an external cavity. Although the former technique can provide a high sensitivity, its practical applicability is limited by non-linear processes. External cavities can either be of multi-pass type, i.e. Herriott or White cells, or be of resonant type, most often working as a Fabry–Pérot (FP) etalon. Whereas the multi-pass cells typically can provide an enhanced interaction length of up to ~2 orders of magnitude, the resonant cavities can provide a much larger path length enhancement, on the order of the finesse of the cavity, F, which for a balanced cavity with highly reflecting mirrors with reflectivities of ~99.99–99.999% can be ~10,000 to 100,000. A problem with resonant cavities is that a high finesse cavity has narrow cavity modes, often in the low kHz range. Since cw lasers often have free-running line-widths in the MHz range, and pulsed lasers even larger, it is difficult to couple laser light effectively into a high finesse cavity. However, there are a few ways this can be achieved. One such method is Vernier spectroscopy, which employs a frequency comb laser to excite many cavity modes simultaneously and allows for a highly parallel measurement of trace gases. Cavity ring-down spectroscopy In cavity ring-down spectroscopy (CRDS) the mode-matching condition is circumvented by injecting a short light pulse into the cavity. The absorbance is assessed by comparing the cavity decay times of the pulse as it "leaks out" of the cavity on and off resonance, respectively. While independent of laser amplitude noise, this technique is often limited by drifts in the system between two consecutive measurements and by a low transmission through the cavity. Despite this, sensitivities in the ~10−7 range can routinely be obtained (although the most complex setups can reach below this, to ~10−9). CRDS has therefore started to become a standard technique for sensitive trace gas analysis under a variety of conditions. In addition, CRDS is now also an effective method for sensing various physical parameters, such as temperature, pressure, and strain. 
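The ring-down comparison described above has a standard quantitative form: with an off-resonance (empty-cavity) decay time tau0 and an on-resonance decay time tau, the sample's absorption coefficient is alpha = (1/c)(1/tau − 1/tau0). The sketch below just applies that relation; the decay times are made-up example values, roughly what a ~50 cm cavity with 99.99% mirrors would give.

C = 299_792_458.0   # speed of light in m/s

def absorption_from_ringdown(tau_on, tau_off):
    """CRDS relation: alpha = (1/c) * (1/tau_on - 1/tau_off), returned in 1/m."""
    return (1.0 / tau_on - 1.0 / tau_off) / C

tau_off = 16.7e-6   # s, decay time of the empty cavity (assumed)
tau_on = 16.2e-6    # s, decay time with the laser tuned onto an absorption line (assumed)

alpha = absorption_from_ringdown(tau_on, tau_off)
print(f"absorption coefficient: {alpha:.2e} m^-1")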
Integrated cavity output spectroscopy Integrated cavity output spectroscopy (ICOS), sometimes called cavity-enhanced absorption spectroscopy (CEAS), records the integrated intensity behind one of the cavity mirrors while the laser is repeatedly swept across one or several cavity modes. However, for high finesse cavities the ratio of time spent "on" and "off" a cavity mode is small, given by the inverse of the finesse, whereby the transmission as well as the integrated absorption becomes small. Off-axis ICOS (OA-ICOS) improves on this by coupling the laser light into the cavity from an angle with respect to the main axis so as to not interact with a high density of transverse modes. Although intensity fluctuations are lower than for direct on-axis ICOS, the technique is still limited by a low transmission and by intensity fluctuations due to partial excitation of high-order transverse modes, and can again typically reach sensitivities of ~10−7. Continuous wave cavity enhanced absorption spectrometry The group of CEAS techniques that has the largest potential to improve is that based on a continuous coupling of laser light into the cavity. This, however, requires an active locking of the laser to one of the cavity modes. There are two ways in which this can be done, either by optical or by electronic feedback. Optical feedback (OF) locking, originally developed by Romanini et al. for cw-CRDS, uses the optical feedback from the cavity to lock the laser to the cavity while the laser is slowly scanned across the profile (OF-CEAS). In this case, the cavity needs to have a V-shape in order to avoid OF from the incoupling mirror. OF-CEAS is capable of reaching sensitivities in the ~10−8 range, limited by a fluctuating feedback efficiency. Electronic locking is usually realized with the Pound–Drever–Hall (PDH) technique, and is nowadays a well established technique, although it can be difficult to achieve for some types of lasers. It has been shown that electronically locked CEAS can also be used for sensitive AS on overtone lines. Noise-immune cavity-enhanced optical-heterodyne molecular spectroscopy However, all attempts to directly combine CEAS with a locking approach (DCEAS) have one thing in common: they do not manage to use the full power of the cavity, i.e. to reach LODs close to the (multi-pass) shot-noise level, which is roughly 2F/π times below that of DAS and can be down to ~10−13. The reason is twofold: (i) any remaining frequency noise of the laser relative to the cavity mode will, due to the narrow cavity mode, be directly converted to amplitude noise in the transmitted light, thereby impairing the sensitivity; and (ii) none of these techniques makes use of any modulation technique, so they still suffer from the 1/f noise in the system. There is, however, one technique that so far has succeeded in making full use of the cavity by combining locked CEAS with FMS so as to circumvent both of these problems: noise-immune cavity-enhanced optical heterodyne molecular spectroscopy (NICE-OHMS). The first and so far ultimate realization of this technique, performed for frequency standard applications, reached an astonishing LOD of 5•10−13 (1•10−14 cm−1). It is clear that this technique, correctly developed, has a larger potential than any other technique for trace gas analysis. References External links Zeller, W.; Naehle, L.; Fuchs, P.; Gerschuetz, F.; Hildebrandt, L.; Koeth, J. DFB Lasers Between 760 nm and 16 μm for Sensing Applications. Sensors 2010, 10, 2492–2510. MDPI Absorption spectroscopy
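The cavity numbers quoted in these sections follow directly from the mirror reflectivity: for a symmetric two-mirror cavity the finesse is commonly written F = π·sqrt(R)/(1 − R), and the "2F/π" factor mentioned above is the corresponding enhancement of the effective interaction length over a single pass. The sketch below evaluates both expressions for the reflectivities given in the text; the formula choice (identical lossless mirrors) is an assumption of this illustration.

import math

def finesse(R):
    # Finesse of a lossless symmetric cavity with mirror (power) reflectivity R
    return math.pi * math.sqrt(R) / (1.0 - R)

for R in (0.9999, 0.99999):              # ~99.99% and ~99.999%, as quoted above
    F = finesse(R)
    enhancement = 2.0 * F / math.pi      # effective path-length gain over a single pass
    print(f"R = {R:.5f}: finesse ~ {F:.3g}, path enhancement ~ {enhancement:.3g}")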
Laser absorption spectrometry
[ "Physics", "Chemistry" ]
2,172
[ "Spectroscopy", "Spectrum (physical sciences)", "Absorption spectroscopy" ]
694,843
https://en.wikipedia.org/wiki/Alexander%E2%80%93Spanier%20cohomology
In mathematics, particularly in algebraic topology, Alexander–Spanier cohomology is a cohomology theory for topological spaces. History It was introduced by for the special case of compact metric spaces, and by for all topological spaces, based on a suggestion of Alexander D. Wallace. Definition If X is a topological space and G is an R-module, where R is a ring with unity, then there is a cochain complex C whose p-th term is the set of all functions from to G with differential given by The defined cochain complex does not rely on the topology of . In fact, if is a nonempty space, where is a graded module whose only nontrivial module is at degree 0. An element is said to be locally zero if there is a covering of by open sets such that vanishes on any -tuple of which lies in some element of (i.e. vanishes on ). The subset of consisting of locally zero functions is a submodule, denoted by . is a cochain subcomplex of so we define a quotient cochain complex . The Alexander–Spanier cohomology groups are defined to be the cohomology groups of . Induced homomorphism Given a function which is not necessarily continuous, there is an induced cochain map defined by If is continuous, there is an induced cochain map Relative cohomology module If is a subspace of and is an inclusion map, then there is an induced epimorphism . The kernel of is a cochain subcomplex of which is denoted by . If denotes the subcomplex of of functions that are locally zero on , then . The relative module is defined to be the cohomology module of . is called the Alexander cohomology module of of degree with coefficients and this module satisfies all cohomology axioms. The resulting cohomology theory is called the Alexander (or Alexander–Spanier) cohomology theory. Cohomology theory axioms (Dimension axiom) If is a one-point space, (Exactness axiom) If is a topological pair with inclusion maps and , there is an exact sequence (Excision axiom) For a topological pair , if is an open subset of such that , then . (Homotopy axiom) If are homotopic, then Alexander cohomology with compact supports A subset is said to be cobounded if is bounded, i.e. its closure is compact. Similar to the definition of the Alexander cohomology module, one can define the Alexander cohomology module with compact supports of a pair by adding the property that is locally zero on some cobounded subset of . Formally, one can define as follows: For a given topological pair , the submodule of consists of such that is locally zero on some cobounded subset of . Similar to the Alexander cohomology module, one can get a cochain complex and a cochain complex . The cohomology module induced from the cochain complex is called the Alexander cohomology of with compact supports and denoted by . The induced homomorphism of this cohomology is defined as in the Alexander cohomology theory. Under this definition, we can modify the homotopy axiom for cohomology to a proper homotopy axiom if we define a coboundary homomorphism only when is a closed subset. Similarly, the excision axiom can be modified to a proper excision axiom, i.e. the excision map is a proper map. Property One of the most important properties of this Alexander cohomology module with compact supports is the following theorem: If is a locally compact Hausdorff space and is the one-point compactification of , then there is an isomorphism Example as . Hence if , and are not of the same proper homotopy type. 
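For reference, the differential that the definition above appeals to (it was stripped from this plain-text extract) has a standard form, reproduced below from the usual textbook treatment rather than quoted from the article.

% Alexander–Spanier cochains: C^p(X; G) is the module of all functions
% \varphi : X^{p+1} \to G, and the coboundary d : C^p \to C^{p+1} is
\[
  (d\varphi)(x_0, x_1, \dots, x_{p+1})
    = \sum_{i=0}^{p+1} (-1)^{i}\,\varphi(x_0, \dots, \widehat{x_i}, \dots, x_{p+1}),
\]
% where \widehat{x_i} indicates that the argument x_i is omitted; one checks d \circ d = 0.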
Relation with tautness From the fact that a closed subspace of a paracompact Hausdorff space is a taut subspace relative to the Alexander cohomology theory and the first basic property of tautness, if where is a paracompact Hausdorff space and and are closed subspaces of , then is a taut pair in relative to the Alexander cohomology theory. Using this tautness property, one can show the following two facts: (Strong excision property) Let and be pairs with and paracompact Hausdorff and and closed. Let be a closed continuous map such that induces a one-to-one map of onto . Then for all and all , (Weak continuity property) Let be a family of compact Hausdorff pairs in some space, directed downward by inclusion, and let . The inclusion maps induce an isomorphism . Difference from singular cohomology theory Recall that the singular cohomology module of a space is the direct product of the singular cohomology modules of its path components. A nonempty space is connected if and only if . Hence for any connected space which is not path connected, singular cohomology and Alexander cohomology differ in degree 0. If is an open covering of by pairwise disjoint sets, then there is a natural isomorphism . In particular, if is the collection of components of a locally connected space , there is a natural isomorphism . Variants It is also possible to define Alexander–Spanier homology and Alexander–Spanier cohomology with compact supports. Connection to other cohomologies The Alexander–Spanier cohomology groups coincide with Čech cohomology groups for compact Hausdorff spaces, and coincide with singular cohomology groups for locally finite complexes. References Bibliography Cohomology theories Duality theories
Alexander–Spanier cohomology
[ "Mathematics" ]
1,157
[ "Mathematical structures", "Category theory", "Duality theories", "Geometry" ]
695,215
https://en.wikipedia.org/wiki/Horizon%20problem
The horizon problem (also known as the homogeneity problem) is a cosmological fine-tuning problem within the Big Bang model of the universe. It arises due to the difficulty in explaining the observed homogeneity of causally disconnected regions of space in the absence of a mechanism that sets the same initial conditions everywhere. It was first pointed out by Wolfgang Rindler in 1956. The most commonly accepted solution is cosmic inflation. Different solutions propose a cyclic universe or a variable speed of light. Background Astronomical distances and particle horizons The distances of observable objects in the night sky correspond to times in the past. We use the light-year (the distance light can travel in the time of one Earth year) to describe these cosmological distances. A galaxy measured at ten billion light-years appears to us as it was ten billion years ago, because the light has taken that long to travel to the observer. If one were to look at a galaxy ten billion light-years away in one direction and another in the opposite direction, the total distance between them is twenty billion light-years. This means that the light from the first has not yet reached the second because the universe is only about 13.8 billion years old. In a more general sense, there are portions of the universe that are visible to us, but invisible to each other, outside each other's respective particle horizons. Causal information propagation In accepted relativistic physical theories, no information can travel faster than the speed of light. In this context, "information" means "any sort of physical interaction". For instance, heat will naturally flow from a hotter area to a cooler one, and in physics terms, this is one example of information exchange. Given the example above, the two galaxies in question cannot have shared any sort of information; they are not in causal contact. In the absence of common initial conditions, one would expect, then, that their physical properties would be different, and more generally, that the universe as a whole would have varying properties in causally disconnected regions. Horizon problem Contrary to this expectation, the observations of the cosmic microwave background (CMB) and galaxy surveys show that the observable universe is nearly isotropic, which, through the Copernican principle, also implies homogeneity. CMB sky surveys show that the temperatures of the CMB are coordinated to a level of where is the difference between the observed temperature in a region of the sky and the average temperature of the sky . This coordination implies that the entire sky, and thus the entire observable universe, must have been causally connected long enough for the universe to come into thermal equilibrium. According to the Big Bang model, as the density of the expanding universe dropped, it eventually reached a temperature where photons fell out of thermal equilibrium with matter; they decoupled from the electron-proton plasma and began free-streaming across the universe. This moment in time is referred to as the epoch of Recombination, when electrons and protons became bound to form electrically neutral hydrogen; without free electrons to scatter the photons, the photons began free-streaming. This epoch is observed through the CMB. Since we observe the CMB as a background to objects at a smaller redshift, we describe this epoch as the transition of the universe from opaque to transparent. 
The CMB physically describes the ‘surface of last scattering’ as it appears to us as a surface, or a background, as shown in the figure below. Note that we use conformal time in the following diagrams. Conformal time describes the amount of time it would take a photon to travel from the location of the observer to the farthest observable distance (if the universe stopped expanding right now). The decoupling, or the last scattering, is thought to have occurred about 300,000 years after the Big Bang, or at a redshift of about 1100. We can determine both the approximate angular diameter of the universe and the physical size of the particle horizon that had existed at this time. The angular diameter distance, in terms of redshift , is described by . If we assume a flat cosmology then, The epoch of recombination occurred during a matter-dominated era of the universe, so we can approximate as . Putting these together, we see that the angular diameter distance, or the size of the observable universe for a redshift is Since , we can approximate the above equation as Substituting this into our definition of angular diameter distance, we obtain From this formula, we obtain the angular diameter distance of the cosmic microwave background as . The particle horizon describes the maximum distance light particles could have traveled to the observer given the age of the universe. We can determine the comoving distance for the age of the universe at the time of recombination using from earlier, To get the physical size of the particle horizon , We would expect any region of the CMB within 2 degrees of angular separation to have been in causal contact, but at any scale larger than 2° there should have been no exchange of information. CMB regions that are separated by more than 2° lie outside one another's particle horizons and are causally disconnected. The horizon problem describes the fact that we see isotropy in the CMB temperature across the entire sky, despite the entire sky not being in causal contact to establish thermal equilibrium. Refer to the spacetime diagram to the right for a visualization of this problem. If the universe started with even slightly different temperatures in different places, the CMB should not be isotropic unless there is a mechanism that evens out the temperature by the time of decoupling. In reality, the CMB has the same temperature in the entire sky, about 2.725 K. Inflationary model The theory of cosmic inflation has attempted to address the problem by positing a very brief period of exponential expansion (on the order of 10−32 seconds) within the first second of the history of the universe, due to a scalar field interaction. According to the inflationary model, the universe increased in size by an enormous factor, roughly 26 orders of magnitude, from a small and causally connected region in near equilibrium. Inflation then expanded the universe rapidly, isolating nearby regions of spacetime by growing them beyond the limits of causal contact, effectively "locking in" the uniformity at large distances. Essentially, the inflationary model suggests that the universe was entirely in causal contact in the very early universe. Inflation then expands this universe by approximately 60 e-foldings (the scale factor increases by a factor of exp(60), about 26 orders of magnitude). We observe the CMB after inflation has occurred at a very large scale. It maintained thermal equilibrium to this large size because of the rapid expansion from inflation. One consequence of cosmic inflation is that the anisotropies in the Big Bang due to quantum fluctuations are reduced but not eliminated. 
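The ~2 degree figure can be reproduced with a few lines of arithmetic under the same simplifying assumptions the derivation above uses (a flat, matter-dominated universe between recombination and today). In that limit the comoving distance out to redshift z is (2c/H0)(1 − 1/sqrt(1+z)) and the comoving particle horizon at z is (2c/H0)/sqrt(1+z); their ratio is the angular size of a causally connected patch. The sketch below is only this back-of-the-envelope estimate, with H0 and the recombination redshift taken as standard round numbers rather than values from the article.

import math

c = 299_792.458      # km/s
H0 = 70.0            # km/s/Mpc (assumed round value)
z_rec = 1100.0       # redshift of recombination (assumed round value)

hubble_distance = c / H0   # Mpc

# Flat, matter-dominated approximations, as in the derivation above
comoving_distance_to_rec = 2 * hubble_distance * (1 - 1 / math.sqrt(1 + z_rec))
comoving_horizon_at_rec = 2 * hubble_distance / math.sqrt(1 + z_rec)

theta = comoving_horizon_at_rec / comoving_distance_to_rec   # radians (small-angle)
print(f"causally connected patch at recombination ~ {math.degrees(theta):.1f} degrees")
# prints roughly 1.8 degrees, i.e. the ~2 degrees quoted above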
Differences in the temperature of the cosmic background are smoothed by cosmic inflation, but they still exist. The theory predicts a spectrum for the anisotropies in the microwave background which is mostly consistent with observations from WMAP and COBE. However, gravity alone may be sufficient to explain this homogeneity. Variable-speed-of-light theories Cosmological models employing a variable speed of light have been proposed to resolve the horizon problem and to provide an alternative to cosmic inflation. In the VSL models, the fundamental constant c, denoting the speed of light in vacuum, is greater in the early universe than its present value, effectively increasing the particle horizon at the time of decoupling sufficiently to account for the observed isotropy of the CMB. See also Flatness problem Magnetic monopole References External links Different Horizons in Cosmology Inflation (cosmology) Physical cosmological concepts
Horizon problem
[ "Physics" ]
1,568
[ "Physical cosmological concepts", "Concepts in astrophysics" ]
695,241
https://en.wikipedia.org/wiki/Scale%20invariance
In physics, mathematics and statistics, scale invariance is a feature of objects or laws that do not change if scales of length, energy, or other variables, are multiplied by a common factor, and thus represent a universality. The technical term for this transformation is a dilatation (also known as dilation). Dilatations can form part of a larger conformal symmetry. In mathematics, scale invariance usually refers to an invariance of individual functions or curves. A closely related concept is self-similarity, where a function or curve is invariant under a discrete subset of the dilations. It is also possible for the probability distributions of random processes to display this kind of scale invariance or self-similarity. In classical field theory, scale invariance most commonly applies to the invariance of a whole theory under dilatations. Such theories typically describe classical physical processes with no characteristic length scale. In quantum field theory, scale invariance has an interpretation in terms of particle physics. In a scale-invariant theory, the strength of particle interactions does not depend on the energy of the particles involved. In statistical mechanics, scale invariance is a feature of phase transitions. The key observation is that near a phase transition or critical point, fluctuations occur at all length scales, and thus one should look for an explicitly scale-invariant theory to describe the phenomena. Such theories are scale-invariant statistical field theories, and are formally very similar to scale-invariant quantum field theories. Universality is the observation that widely different microscopic systems can display the same behaviour at a phase transition. Thus phase transitions in many different systems may be described by the same underlying scale-invariant theory. In general, dimensionless quantities are scale-invariant. The analogous concept in statistics are standardized moments, which are scale-invariant statistics of a variable, while the unstandardized moments are not. Scale-invariant curves and self-similarity In mathematics, one can consider the scaling properties of a function or curve under rescalings of the variable . That is, one is interested in the shape of for some scale factor , which can be taken to be a length or size rescaling. The requirement for to be invariant under all rescalings is usually taken to be for some choice of exponent Δ, and for all dilations . This is equivalent to   being a homogeneous function of degree Δ. Examples of scale-invariant functions are the monomials , for which , in that clearly An example of a scale-invariant curve is the logarithmic spiral, a kind of curve that often appears in nature. In polar coordinates , the spiral can be written as Allowing for rotations of the curve, it is invariant under all rescalings ; that is, is identical to a rotated version of . Projective geometry The idea of scale invariance of a monomial generalizes in higher dimensions to the idea of a homogeneous polynomial, and more generally to a homogeneous function. Homogeneous functions are the natural denizens of projective space, and homogeneous polynomials are studied as projective varieties in projective geometry. Projective geometry is a particularly rich field of mathematics; in its most abstract forms, the geometry of schemes, it has connections to various topics in string theory. Fractals It is sometimes said that fractals are scale-invariant, although more precisely, one should say that they are self-similar. 
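Both examples above are easy to check numerically: a monomial f(x) = x^n satisfies f(λx) = λ^n·f(x) exactly, and for the logarithmic spiral r(θ) = a·exp(bθ) rescaling r by a factor λ is the same as rotating the curve by ln(λ)/b. The Python sketch below verifies both statements; the particular values of n, λ, a and b are arbitrary illustrative choices.

import numpy as np

# 1) Monomial f(x) = x**n: scaling the argument rescales the function by lam**n
n, lam = 3, 2.5
x = np.linspace(0.1, 10.0, 50)
print(np.allclose((lam * x) ** n, lam ** n * x ** n))        # True

# 2) Logarithmic spiral r(theta) = a * exp(b * theta):
#    scaling r by lam is equivalent to rotating theta by ln(lam) / b
a, b = 1.0, 0.2
theta = np.linspace(0.0, 4 * np.pi, 200)
r = a * np.exp(b * theta)
r_rotated = a * np.exp(b * (theta + np.log(lam) / b))
print(np.allclose(lam * r, r_rotated))                       # True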
A fractal is equal to itself typically for only a discrete set of values , and even then a translation and rotation may have to be applied to match the fractal up to itself. Thus, for example, the Koch curve scales with , but the scaling holds only for values of for integer . In addition, the Koch curve scales not only at the origin, but, in a certain sense, "everywhere": miniature copies of itself can be found all along the curve. Some fractals may have multiple scaling factors at play at once; such scaling is studied with multi-fractal analysis. Periodic external and internal rays are invariant curves . Scale invariance in stochastic processes If is the average, expected power at frequency , then noise scales as with Δ = 0 for white noise, Δ = −1 for pink noise, and Δ = −2 for Brownian noise (and more generally, Brownian motion). More precisely, scaling in stochastic systems concerns itself with the likelihood of choosing a particular configuration out of the set of all possible random configurations. This likelihood is given by the probability distribution. Examples of scale-invariant distributions are the Pareto distribution and the Zipfian distribution. Scale-invariant Tweedie distributions Tweedie distributions are a special case of exponential dispersion models, a class of statistical models used to describe error distributions for the generalized linear model and characterized by closure under additive and reproductive convolution as well as under scale transformation. These include a number of common distributions: the normal distribution, Poisson distribution and gamma distribution, as well as more unusual distributions like the compound Poisson-gamma distribution, positive stable distributions, and extreme stable distributions. Consequent to their inherent scale invariance Tweedie random variables Y demonstrate a variance var(Y) to mean E(Y) power law: , where a and p are positive constants. This variance to mean power law is known in the physics literature as fluctuation scaling, and in the ecology literature as Taylor's law. Random sequences, governed by the Tweedie distributions and evaluated by the method of expanding bins exhibit a biconditional relationship between the variance to mean power law and power law autocorrelations. The Wiener–Khinchin theorem further implies that for any sequence that exhibits a variance to mean power law under these conditions will also manifest 1/f noise. The Tweedie convergence theorem provides a hypothetical explanation for the wide manifestation of fluctuation scaling and 1/f noise. It requires, in essence, that any exponential dispersion model that asymptotically manifests a variance to mean power law will be required express a variance function that comes within the domain of attraction of a Tweedie model. Almost all distribution functions with finite cumulant generating functions qualify as exponential dispersion models and most exponential dispersion models manifest variance functions of this form. Hence many probability distributions have variance functions that express this asymptotic behavior, and the Tweedie distributions become foci of convergence for a wide range of data types. Much as the central limit theorem requires certain kinds of random variables to have as a focus of convergence the Gaussian distribution and express white noise, the Tweedie convergence theorem requires certain non-Gaussian random variables to express 1/f noise and fluctuation scaling. 
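The power-law spectra mentioned at the start of this section (Δ = 0 for white noise and Δ = −2 for Brownian noise) can be illustrated numerically by fitting the slope of a periodogram on log-log axes. The sketch below does this for synthetic white noise and its running sum; the slope estimates are rough, since a raw periodogram is noisy, and the sequence length is an arbitrary choice.

import numpy as np

rng = np.random.default_rng(1)
N = 2 ** 16
white = rng.normal(size=N)
brown = np.cumsum(white)            # Brownian noise as the running sum of white noise

def spectral_slope(x):
    """Least-squares fit of the exponent in S(f) ~ f**slope from the periodogram."""
    f = np.fft.rfftfreq(len(x))[1:]             # drop the f = 0 bin
    S = np.abs(np.fft.rfft(x))[1:] ** 2
    slope, _ = np.polyfit(np.log(f), np.log(S), 1)
    return slope

print(f"white noise slope    ~ {spectral_slope(white):+.2f}   (expected about  0)")
print(f"Brownian noise slope ~ {spectral_slope(brown):+.2f}   (expected about -2)")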
Cosmology In physical cosmology, the power spectrum of the spatial distribution of the cosmic microwave background is near to being a scale-invariant function. Although in mathematics this means that the spectrum is a power-law, in cosmology the term "scale-invariant" indicates that the amplitude, , of primordial fluctuations as a function of wave number, , is approximately constant, i.e. a flat spectrum. This pattern is consistent with the proposal of cosmic inflation. Scale invariance in classical field theory Classical field theory is generically described by a field, or set of fields, φ, that depend on coordinates, x. Valid field configurations are then determined by solving differential equations for φ, and these equations are known as field equations. For a theory to be scale-invariant, its field equations should be invariant under a rescaling of the coordinates, combined with some specified rescaling of the fields, The parameter Δ is known as the scaling dimension of the field, and its value depends on the theory under consideration. Scale invariance will typically hold provided that no fixed length scale appears in the theory. Conversely, the presence of a fixed length scale indicates that a theory is not scale-invariant. A consequence of scale invariance is that given a solution of a scale-invariant field equation, we can automatically find other solutions by rescaling both the coordinates and the fields appropriately. In technical terms, given a solution, φ(x), one always has other solutions of the form Scale invariance of field configurations For a particular field configuration, φ(x), to be scale-invariant, we require that where Δ is, again, the scaling dimension of the field. We note that this condition is rather restrictive. In general, solutions even of scale-invariant field equations will not be scale-invariant, and in such cases the symmetry is said to be spontaneously broken. Classical electromagnetism An example of a scale-invariant classical field theory is electromagnetism with no charges or currents. The fields are the electric and magnetic fields, E(x,t) and B(x,t), while their field equations are Maxwell's equations. With no charges or currents, these field equations take the form of wave equations where c is the speed of light. These field equations are invariant under the transformation Moreover, given solutions of Maxwell's equations, E(x, t) and B(x, t), it holds that E(λx, λt) and B(λx, λt) are also solutions. Massless scalar field theory Another example of a scale-invariant classical field theory is the massless scalar field (note that the name scalar is unrelated to scale invariance). The scalar field, is a function of a set of spatial variables, x, and a time variable, . Consider first the linear theory. Like the electromagnetic field equations above, the equation of motion for this theory is also a wave equation, and is invariant under the transformation The name massless refers to the absence of a term in the field equation. Such a term is often referred to as a `mass' term, and would break the invariance under the above transformation. In relativistic field theories, a mass-scale, is physically equivalent to a fixed length scale through and so it should not be surprising that massive scalar field theory is not scale-invariant. φ4 theory The field equations in the examples above are all linear in the fields, which has meant that the scaling dimension, Δ, has not been so important. 
However, one usually requires that the scalar field action is dimensionless, and this fixes the scaling dimension of . In particular, where is the combined number of spatial and time dimensions. Given this scaling dimension for , there are certain nonlinear modifications of massless scalar field theory which are also scale-invariant. One example is massless φ4 theory for  = 4. The field equation is (Note that the name 4 derives from the form of the Lagrangian, which contains the fourth power of .) When  = 4 (e.g. three spatial dimensions and one time dimension), the scalar field scaling dimension is Δ = 1. The field equation is then invariant under the transformation The key point is that the parameter must be dimensionless, otherwise one introduces a fixed length scale into the theory: For 4 theory, this is only the case in  = 4. Note that under these transformations the argument of the function is unchanged. Scale invariance in quantum field theory The scale-dependence of a quantum field theory (QFT) is characterised by the way its coupling parameters depend on the energy-scale of a given physical process. This energy dependence is described by the renormalization group, and is encoded in the beta-functions of the theory. For a QFT to be scale-invariant, its coupling parameters must be independent of the energy-scale, and this is indicated by the vanishing of the beta-functions of the theory. Such theories are also known as fixed points of the corresponding renormalization group flow. Quantum electrodynamics A simple example of a scale-invariant QFT is the quantized electromagnetic field without charged particles. This theory actually has no coupling parameters (since photons are massless and non-interacting) and is therefore scale-invariant, much like the classical theory. However, in nature the electromagnetic field is coupled to charged particles, such as electrons. The QFT describing the interactions of photons and charged particles is quantum electrodynamics (QED), and this theory is not scale-invariant. We can see this from the QED beta-function. This tells us that the electric charge (which is the coupling parameter in the theory) increases with increasing energy. Therefore, while the quantized electromagnetic field without charged particles is scale-invariant, QED is not scale-invariant. Massless scalar field theory Free, massless quantized scalar field theory has no coupling parameters. Therefore, like the classical version, it is scale-invariant. In the language of the renormalization group, this theory is known as the Gaussian fixed point. However, even though the classical massless φ4 theory is scale-invariant in D = 4, the quantized version is not scale-invariant. We can see this from the beta-function for the coupling parameter, g. Even though the quantized massless φ4 is not scale-invariant, there do exist scale-invariant quantized scalar field theories other than the Gaussian fixed point. One example is the Wilson–Fisher fixed point, below. Conformal field theory Scale-invariant QFTs are almost always invariant under the full conformal symmetry, and the study of such QFTs is conformal field theory (CFT). Operators in a CFT have a well-defined scaling dimension, analogous to the scaling dimension, ∆, of a classical field discussed above. However, the scaling dimensions of operators in a CFT typically differ from those of the fields in the corresponding classical theory. The additional contributions appearing in the CFT are known as anomalous scaling dimensions. 
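To make the classical φ4 discussion above concrete (the equations were stripped from this extract), the field equation and the scaling behaviour can be written out explicitly; the sign convention for the coupling below is a choice, but the scaling argument is unaffected by it.

% Massless \varphi^4 theory in D = 4 spacetime dimensions: the field equation is
\[
  \partial^{\mu}\partial_{\mu}\varphi = -g\,\varphi^{3},
\]
% and if \varphi(x) is a solution, then so is the rescaled configuration
\[
  \varphi_{\lambda}(x) = \lambda^{\Delta}\,\varphi(\lambda x), \qquad \Delta = 1,
\]
% since both sides of the field equation then acquire the same overall factor
% \lambda^{\Delta + 2} = \lambda^{3\Delta} = \lambda^{3}; the coupling g must be
% dimensionless for this to work, which singles out D = 4 as stated above.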
Scale and conformal anomalies The φ4 theory example above demonstrates that the coupling parameters of a quantum field theory can be scale-dependent even if the corresponding classical field theory is scale-invariant (or conformally invariant). If this is the case, the classical scale (or conformal) invariance is said to be anomalous. A classically scale-invariant field theory, where scale invariance is broken by quantum effects, provides an explication of the nearly exponential expansion of the early universe called cosmic inflation, as long as the theory can be studied through perturbation theory. Phase transitions In statistical mechanics, as a system undergoes a phase transition, its fluctuations are described by a scale-invariant statistical field theory. For a system in equilibrium (i.e. time-independent) in spatial dimensions, the corresponding statistical field theory is formally similar to a -dimensional CFT. The scaling dimensions in such problems are usually referred to as critical exponents, and one can in principle compute these exponents in the appropriate CFT. The Ising model An example that links together many of the ideas in this article is the phase transition of the Ising model, a simple model of ferromagnetic substances. This is a statistical mechanics model, which also has a description in terms of conformal field theory. The system consists of an array of lattice sites, which form a -dimensional periodic lattice. Associated with each lattice site is a magnetic moment, or spin, and this spin can take either the value +1 or −1. (These states are also called up and down, respectively.) The key point is that the Ising model has a spin-spin interaction, making it energetically favourable for two adjacent spins to be aligned. On the other hand, thermal fluctuations typically introduce a randomness into the alignment of spins. At some critical temperature, , spontaneous magnetization is said to occur. This means that below the spin-spin interaction will begin to dominate, and there is some net alignment of spins in one of the two directions. An example of the kind of physical quantities one would like to calculate at this critical temperature is the correlation between spins separated by a distance . This has the generic behaviour: for some particular value of , which is an example of a critical exponent. CFT description The fluctuations at temperature are scale-invariant, and so the Ising model at this phase transition is expected to be described by a scale-invariant statistical field theory. In fact, this theory is the Wilson–Fisher fixed point, a particular scale-invariant scalar field theory. In this context, is understood as a correlation function of scalar fields, Now we can fit together a number of the ideas seen already. From the above, one sees that the critical exponent, , for this phase transition, is also an anomalous dimension. This is because the classical dimension of the scalar field, is modified to become where is the number of dimensions of the Ising model lattice. So this anomalous dimension in the conformal field theory is the same as a particular critical exponent of the Ising model phase transition. Note that for dimension , can be calculated approximately, using the epsilon expansion, and one finds that . In the physically interesting case of three spatial dimensions, we have =1, and so this expansion is not strictly reliable. However, a semi-quantitative prediction is that is numerically small in three dimensions. 
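The Ising behaviour described above, spontaneous magnetization below a critical temperature driven by the competition between the spin-spin interaction and thermal fluctuations, can be reproduced with a short Metropolis Monte Carlo simulation. The sketch below is a minimal illustration of that physics, not a tool for extracting critical exponents: the lattice size, sweep counts and temperatures are small, arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(spins, beta):
    """One Metropolis sweep of the 2D Ising model (coupling J = 1, no field)."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        # Sum of the four nearest neighbours with periodic boundaries
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
              spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * nb                    # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

def mean_abs_magnetization(T, L=16, sweeps=400, burn_in=200):
    spins = rng.choice([-1, 1], size=(L, L))
    samples = []
    for s in range(sweeps):
        metropolis_sweep(spins, 1.0 / T)
        if s >= burn_in:
            samples.append(abs(spins.mean()))
    return np.mean(samples)

# The exact 2D critical temperature is Tc = 2 / ln(1 + sqrt(2)) ~ 2.269 (J = kB = 1)
for T in (1.5, 2.0, 2.27, 3.0):
    print(f"T = {T:4.2f}   <|m|> ~ {mean_abs_magnetization(T):.2f}")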
On the other hand, in the two-dimensional case the Ising model is exactly soluble. In particular, it is equivalent to one of the minimal models, a family of well-understood CFTs, and it is possible to compute (and the other critical exponents) exactly, . Schramm–Loewner evolution The anomalous dimensions in certain two-dimensional CFTs can be related to the typical fractal dimensions of random walks, where the random walks are defined via Schramm–Loewner evolution (SLE). As we have seen above, CFTs describe the physics of phase transitions, and so one can relate the critical exponents of certain phase transitions to these fractal dimensions. Examples include the 2d critical Ising model and the more general 2d critical Potts model. Relating other 2d CFTs to SLE is an active area of research. Universality A phenomenon known as universality is seen in a large variety of physical systems. It expresses the idea that different microscopic physics can give rise to the same scaling behaviour at a phase transition. A canonical example of universality involves the following two systems: The Ising model phase transition, described above. The liquid-vapour transition in classical fluids. Even though the microscopic physics of these two systems is completely different, their critical exponents turn out to be the same. Moreover, one can calculate these exponents using the same statistical field theory. The key observation is that at a phase transition or critical point, fluctuations occur at all length scales, and thus one should look for a scale-invariant statistical field theory to describe the phenomena. In a sense, universality is the observation that there are relatively few such scale-invariant theories. The set of different microscopic theories described by the same scale-invariant theory is known as a universality class. Other examples of systems which belong to a universality class are: Avalanches in piles of sand. The likelihood of an avalanche is in power-law proportion to the size of the avalanche, and avalanches are seen to occur at all size scales. The frequency of network outages on the Internet, as a function of size and duration. The frequency of citations of journal articles, considered in the network of all citations amongst all papers, as a function of the number of citations in a given paper. The formation and propagation of cracks and tears in materials ranging from steel to rock to paper. The variations of the direction of the tear, or the roughness of a fractured surface, are in power-law proportion to the size scale. The electrical breakdown of dielectrics, which resemble cracks and tears. The percolation of fluids through disordered media, such as petroleum through fractured rock beds, or water through filter paper, such as in chromatography. Power-law scaling connects the rate of flow to the distribution of fractures. The diffusion of molecules in solution, and the phenomenon of diffusion-limited aggregation. The distribution of rocks of different sizes in an aggregate mixture that is being shaken (with gravity acting on the rocks). The key observation is that, for all of these different systems, the behaviour resembles a phase transition, and that the language of statistical mechanics and scale-invariant statistical field theory may be applied to describe them. Other examples of scale invariance Newtonian fluid mechanics with no applied forces Under certain circumstances, fluid mechanics is a scale-invariant classical field theory. 
The fields are the velocity of the fluid flow, , the fluid density, , and the fluid pressure, . These fields must satisfy both the Navier–Stokes equation and the continuity equation. For a Newtonian fluid these take the respective forms where is the dynamic viscosity. In order to deduce the scale invariance of these equations we specify an equation of state, relating the fluid pressure to the fluid density. The equation of state depends on the type of fluid and the conditions to which it is subjected. For example, we consider the isothermal ideal gas, which satisfies where is the speed of sound in the fluid. Given this equation of state, Navier–Stokes and the continuity equation are invariant under the transformations Given the solutions and , we automatically have that and are also solutions. Computer vision In computer vision and biological vision, scaling transformations arise because of the perspective image mapping and because of objects having different physical size in the world. In these areas, scale invariance refers to local image descriptors or visual representations of the image data that remain invariant when the local scale in the image domain is changed. Detecting local maxima over scales of normalized derivative responses provides a general framework for obtaining scale invariance from image data. Examples of applications include blob detection, corner detection, ridge detection, and object recognition via the scale-invariant feature transform. See also Invariant (mathematics) Inverse square potential Power law Scale-free network References Further reading Extensive discussion of scale invariance in quantum and statistical field theories, applications to critical phenomena and the epsilon expansion and related topics. Symmetry Scaling symmetries Conformal field theory Critical phenomena
Scale invariance
[ "Physics", "Materials_science", "Mathematics" ]
4,562
[ "Symmetry", "Physical phenomena", "Critical phenomena", "Scale-invariant systems", "Condensed matter physics", "Geometry", "Statistical mechanics", "Scaling symmetries", "Dynamical systems" ]
695,319
https://en.wikipedia.org/wiki/Supermultiplet
In theoretical physics, a supermultiplet is a representation of a supersymmetry algebra, possibly with extended supersymmetry. Then a superfield is a field on superspace which is valued in such a representation. Naïvely, or when considering flat superspace, a superfield can simply be viewed as a function on superspace. Formally, it is a section of an associated supermultiplet bundle. Phenomenologically, superfields are used to describe particles. It is a feature of supersymmetric field theories that particles form pairs, called superpartners where bosons are paired with fermions. These supersymmetric fields are used to build supersymmetric quantum field theories, where the fields are promoted to operators. History Superfields were introduced by Abdus Salam and J. A. Strathdee in a 1974 article. Operations on superfields and a partial classification were presented a few months later by Sergio Ferrara, Julius Wess and Bruno Zumino. Naming and classification The most commonly used supermultiplets are vector multiplets, chiral multiplets (in supersymmetry for example), hypermultiplets (in supersymmetry for example), tensor multiplets and gravity multiplets. The highest component of a vector multiplet is a gauge boson, the highest component of a chiral or hypermultiplet is a spinor, the highest component of a gravity multiplet is a graviton. The names are defined so as to be invariant under dimensional reduction, although the organization of the fields as representations of the Lorentz group changes. The use of these names for the different multiplets can vary in literature. A chiral multiplet (whose highest component is a spinor) may sometimes be referred to as a scalar multiplet, and in SUSY, a vector multiplet (whose highest component is a vector) can sometimes be referred to as a chiral multiplet. Superfields in d = 4, N = 1 supersymmetry Conventions in this section follow the notes by . A general complex superfield in supersymmetry can be expanded as , where are different complex fields. This is not an irreducible supermultiplet, and so different constraints are needed to isolate irreducible representations. Chiral superfield A (anti-)chiral superfield is a supermultiplet of supersymmetry. In four dimensions, the minimal supersymmetry may be written using the notion of superspace. Superspace contains the usual space-time coordinates , , and four extra fermionic coordinates with , transforming as a two-component (Weyl) spinor and its conjugate. In supersymmetry, a chiral superfield is a function over chiral superspace. There exists a projection from the (full) superspace to chiral superspace. So, a function over chiral superspace can be pulled back to the full superspace. Such a function satisfies the covariant constraint , where is the covariant derivative, given in index notation as A chiral superfield can then be expanded as where . The superfield is independent of the 'conjugate spin coordinates' in the sense that it depends on only through . It can be checked that The expansion has the interpretation that is a complex scalar field, is a Weyl spinor. There is also the auxiliary complex scalar field , named by convention: this is the F-term which plays an important role in some theories. The field can then be expressed in terms of the original coordinates by substituting the expression for : Antichiral superfields Similarly, there is also antichiral superspace, which is the complex conjugate of chiral superspace, and antichiral superfields. 
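Since the explicit formulas are missing from this extract, the chiral-superfield expansion described above is reproduced here in one common convention (that of Wess and Bagger); the signs in the definition of y depend on the convention chosen, so treat this as a representative form rather than the article's own.

% With y^\mu = x^\mu + i\theta\sigma^\mu\bar\theta, a chiral superfield satisfying
% \bar D_{\dot\alpha}\Phi = 0 has the expansion
\[
  \Phi(y, \theta) = \phi(y) + \sqrt{2}\,\theta\psi(y) + \theta\theta\,F(y),
\]
% where \phi is a complex scalar, \psi a Weyl spinor and F the auxiliary field
% (the F-term); expanding y about x reproduces the component form quoted in the text.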
An antichiral superfield satisfies where An antichiral superfield can be constructed as the complex conjugate of a chiral superfield. Actions from chiral superfields For an action which can be defined from a single chiral superfield, see Wess–Zumino model. Vector superfield The vector superfield is a supermultiplet of supersymmetry. A vector superfield (also known as a real superfield) is a function which satisfies the reality condition . Such a field admits the expansion The constituent fields are Two real scalar fields and A complex scalar field Two Weyl spinor fields and A real vector field (gauge field) Their transformation properties and uses are further discussed in supersymmetric gauge theory. Using gauge transformations, the fields and can be set to zero. This is known as Wess–Zumino gauge. In this gauge, the expansion takes on the much simpler form Then is the superpartner of , while is an auxiliary scalar field. It is conventionally called , and is known as the D-term. Scalars A scalar is never the highest component of a superfield; whether it appears in a superfield at all depends on the dimension of the spacetime. For example, in a 10-dimensional N=1 theory the vector multiplet contains only a vector and a Majorana–Weyl spinor, while its dimensional reduction on a d-dimensional torus is a vector multiplet containing d real scalars. Similarly, in an 11-dimensional theory there is only one supermultiplet with a finite number of fields, the gravity multiplet, and it contains no scalars. However again its dimensional reduction on a d-torus to a maximal gravity multiplet does contain scalars. Hypermultiplet A hypermultiplet is a type of representation of an extended supersymmetry algebra, in particular the matter multiplet of supersymmetry in 4 dimensions, containing two complex scalars Ai, a Dirac spinor ψ, and two further auxiliary complex scalars Fi. The name "hypermultiplet" comes from old term "hypersymmetry" for N=2 supersymmetry used by ; this term has been abandoned, but the name "hypermultiplet" for some of its representations is still used. Extended supersymmetry (N > 1) This section records some commonly used irreducible supermultiplets in extended supersymmetry in the case. These are constructed by a highest-weight representation construction in the sense that there is a vacuum vector annihilated by the supercharges . The irreps have dimension . For supermultiplets representing massless particles, on physical grounds the maximum allowed is , while for renormalizability, the maximum allowed is . N = 2 The vector or chiral multiplet contains a gauge field , two Weyl fermions , and a scalar (which also transform in the adjoint representation of a gauge group). These can also be organised into a pair of multiplets, an vector multiplet and chiral multiplet . Such a multiplet can be used to define Seiberg–Witten theory concisely. The hypermultiplet or scalar multiplet consists of two Weyl fermions and two complex scalars, or two chiral multiplets. N = 4 The vector multiplet contains one gauge field, four Weyl fermions, six scalars, and CPT conjugates. This appears in N = 4 supersymmetric Yang–Mills theory. See also Supersymmetric gauge theory D-term F-term References Stephen P. Martin. A Supersymmetry Primer, arXiv:hep-ph/9709356 . Yuji Tachikawa. N=2 supersymmetric dynamics for pedestrians, arXiv:1312.2684. Supersymmetry
Supermultiplet
[ "Physics" ]
1,616
[ "Symmetry", "Unsolved problems in physics", "Physics beyond the Standard Model", "Supersymmetry" ]
695,523
https://en.wikipedia.org/wiki/Kelvin%E2%80%93Helmholtz%20mechanism
The Kelvin–Helmholtz mechanism is an astronomical process that occurs when the surface of a star or a planet cools. The cooling causes the internal pressure to drop, and the star or planet shrinks as a result. This compression, in turn, heats the core of the star/planet. This mechanism is evident on Jupiter and Saturn and on brown dwarfs whose central temperatures are not high enough to undergo hydrogen fusion. It is estimated that Jupiter radiates more energy through this mechanism than it receives from the Sun, but Saturn might not. Jupiter has been estimated to shrink at a rate of approximately 1 mm/year by this process, corresponding to an internal flux of 7.485 W/m2. The mechanism was originally proposed by Kelvin and Helmholtz in the late nineteenth century to explain the source of energy of the Sun. By the mid-nineteenth century, conservation of energy had been accepted, and one consequence of this law of physics is that the Sun must have some energy source to continue to shine. Because nuclear reactions were unknown, the main candidate for the source of solar energy was gravitational contraction. However, it soon was recognized by Sir Arthur Eddington and others that the total amount of energy available through this mechanism only allowed the Sun to shine for millions of years rather than the billions of years that the geological and biological evidence suggested for the age of the Earth. (Kelvin himself had argued that the Earth was millions, not billions, of years old.) The true source of the Sun's energy remained uncertain until the 1930s, when it was shown by Hans Bethe to be nuclear fusion. Power generated by a Kelvin–Helmholtz contraction It was theorised that the gravitational potential energy from the contraction of the Sun could be its source of power. To calculate the total amount of energy that would be released by the Sun in such a mechanism (assuming uniform density), it was approximated to a perfect sphere made up of concentric shells. The gravitational potential energy could then be found as the integral over all the shells from the centre to its outer radius. Gravitational potential energy from Newtonian mechanics is defined as: where G is the gravitational constant, and the two masses in this case are that of the thin shells of width dr, and the contained mass within radius r as one integrates between zero and the radius of the total sphere. This gives: where R is the outer radius of the sphere, and m(r) is the mass contained within the radius r. Changing m(r) into a product of volume and density to satisfy the integral, Recasting in terms of the mass of the sphere gives the total gravitational potential energy as According to the Virial Theorem, the total energy for gravitationally bound systems in equilibrium is one half of the time-averaged potential energy, While uniform density is not correct, one can get a rough order of magnitude estimate of the expected age of our star by inserting known values for the mass and radius of the Sun, and then dividing by the known luminosity of the Sun (note that this will involve another approximation, as the power output of the Sun has not always been constant): where is the luminosity of the Sun. While giving enough power for considerably longer than many other physical methods, such as chemical energy, this value was clearly still not long enough due to geological and biological evidence that the Earth was billions of years old. 
It was eventually discovered that thermonuclear energy was responsible for the power output and long lifetimes of stars. The flux of internal heat for Jupiter is given by the time derivative of the total energy. For a given rate of shrinking, one obtains the rate at which gravitational energy is released; dividing by the surface area of Jupiter, 4πR², gives the specific flux. Of course, one usually calculates this equation in the other direction: the experimental figure for the specific flux of internal heat, 7.485 W/m2, was obtained from direct measurements made by the Cassini probe during its flyby on 30 December 2000, and from it one gets the amount of shrinking, ~1 mm/year, a minute figure below the boundaries of practical measurement. References Concepts in astrophysics Effects of gravity Mechanism Stellar evolution William Thomson, 1st Baron Kelvin
Kelvin–Helmholtz mechanism
[ "Physics" ]
854
[ "Concepts in astrophysics", "Astrophysics", "Stellar evolution" ]
695,742
https://en.wikipedia.org/wiki/Cordwood%20construction
Cordwood construction (also called cordwood masonry or cordwood building, alternatively stackwall or stovewood particularly in Canada) is a term used for a natural building method in which short logs are piled crosswise to build a wall, using mortar or cob to permanently secure them. This technique can be made to use a wide variety of locally available materials at minimal financial cost, and is a classic example of trading a higher raw labor requirement for technical ease and cost-efficiency of building (a common feature in back-to-the-land alternative/traditional building methods). Construction Walls are usually constructed so that the log ends protrude (are proud) from the mortar by a small amount (an inch or less). Walls typically range between 8 and 24 inches thick, though in northern Canada, some walls are as much as 36 inches thick. Cordwood homes are attractive for their visual appeal, economy of resources, and ease of construction. Wood usually accounts for about 40-60% of the wall system, the remaining portion consisting of a mortar mix and insulating fill. Cordwood construction can be sustainable depending on design and process. There are two main types of cordwood construction: throughwall and M-I-M (mortar-insulation-mortar). In throughwall, the mortar mix itself contains an insulative material, usually sawdust, chopped newsprint, or paper sludge, sometimes in very high percentages by mass (80% paper sludge/20% mortar). In the more common M-I-M, and unlike brick or throughwall masonry, the mortar does not continue throughout the wall. Instead, three- or four-inch (sometimes more) beads of mortar on each side of the wall provide stability and support, with a separate insulation between them. Cordwood walls can be load-bearing (using built-up corners or a curved wall design) or laid within a post and beam framework, which provides structural reinforcement and is suitable for earthquake-prone areas. As a load-bearing wall, the compressive strength of wood and mortar allows for roofing to be tied directly into the wall. Different mortar mixtures and insulation fill materials affect the wall's overall R-value, or resistance to heat flow, and, conversely, its inherent thermal mass, or heat/cool storage capacity. History Remains of cordwood structures still standing date back as far as one thousand years in eastern Germany. However, more contemporary versions could be found in Europe, Asia, and the Americas. There is no detailed information about the origins of cordwood construction, but it is plausible that forest dwellers eventually erected a basic shelter between a fire and a stacked wood pile. William Tischler of the University of Wisconsin states that "current" cordwood probably started in the late 1800s in Quebec, Wisconsin, and Sweden, and believes that the technique started in these areas around the same time. Wood Cordwood construction is an economical use of log ends or fallen trees in heavily timbered areas. Other common sources for wood include sawmills, split firewood, utility poles (without creosote), split rail fence posts, and logging slash. It is more sustainable and often economical to use recycled materials for the walls. Regardless of the source, all wood must be debarked before the construction begins. While many different types of wood can be used, the most desirable rot-resistant woods are Pacific yew, bald cypress (new growth), cedars, and juniper. 
Acceptable woods also include Douglas fir, western larch, Eastern White Pine, and Spruce Pine. Less dense and more airy woods are superior because they shrink and expand in lower proportions than dense hardwoods. Most wood can be used in a wall if it is dried properly and stabilized to the external climate's relative humidity. Furthermore, while log ends of different species can be mixed in a wall, log-ends of identical species and source limit expansion/contraction variables. Mortar Various experts advise different recipes for mortar mix. One recipe which has proven to be successful since 1981 is 9 parts sand, 3 sawdust, 3 builder's lime (non-agricultural), 2 Portland cement by volume. Builder's lime makes the wall more flexible, breathable, and self-healing because it takes longer to completely set than cement. Portland cement chemically binds the mortar and should be either Type I or II. Another recipe uses 3 parts sand, 2 soaked sawdust, 1 Portland Cement and 1 Hydrated Lime; intended to have the advantage of curing slower and displaying less cracking. Thermal mass and insulation Depending on a variety of factors (wall thickness, type of wood, particular mortar recipe), the insulative value of a cordwood wall, as expressed in R-value is generally less than that of a high-efficiency stud wall. Cordwood walls have greater thermal mass than stud frame but less than common brick and mortar. This is because the specific heat capacity of clay brick is higher (0.84 versus wood's 0.42), and is denser than airy woods like cedar, cypress, or pine. However, the insulated mortar matrix utilized in most cordwood walls places useful thermal mass on both sides of the insulated internal cavity, helping to store heat in winter and warm slowly in summer. Thermal mass makes it easier for a building to maintain median interior temperatures while going through daily hot and cold phases. In climates like the desert with broad daily temperature swings thermal mass will absorb and then slowly release the midday heat and nighttime cool in sequence, moderating temperature fluctuations. Thermal mass does not replace the function of insulation material, but is used in conjunction with it. The longer the logs (and thicker the wall), the better the insulation qualities. A common 16” cordwood wall for moderate climates comprises of perlite or vermiculite insulation between mortar joints. Another insulation option, used for over 40 years by Rob Roy and other cordwood builders is dry sawdust, passed through a half-inch screen, and treated with builder's (Type S) lime at the ratio of 12 parts sawdust to 1 part lime. With light airy sawdusts, this insulation is similar in its R-value to manufactured loose-fill insulations, at a fraction of the cost. However, wood is an anisotropic material with respect to heat flow. That means its thermal resistance depends on the direction of heat flow relative to the wood grain. While wood has a commonly quoted R-value of about 1.25 per inch (depending on the species and moisture content), that only applies if the heat flow is perpendicular to the grain, such as occurs in common wood-frame construction. With cordwood/stackwall construction, the direction of heat flow is parallel to the grain. For this configuration, the R-value is only about 40% of that perpendicular to the grain. Thus, the actual R-value of wood, when used in cordwood/stackwall construction is closer to about 0.50 per inch. 
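To make the wood-versus-insulation trade-off concrete, here is a simplified parallel-path sketch, in Python, of how a whole-wall figure might be estimated from per-inch values. Every input below (area fraction, per-inch values for the mortar and sawdust/lime fill, bead depth) is an illustrative assumption of this sketch, not a figure taken from the studies cited in this article.

```python
# Simplified parallel-path (area-weighted) estimate of a cordwood wall's R-value.
# All inputs are illustrative assumptions, not measured values from the studies cited here.
thickness = 24.0            # wall thickness, inches
wood_area_fraction = 0.50   # share of the wall face occupied by log ends
r_per_in_wood = 0.50        # wood, heat flow parallel to the grain (per the text above)
r_per_in_mortar = 0.20      # assumed value for the mortar beads
r_per_in_sawdust = 3.0      # assumed value for sawdust/lime loose fill
mortar_depth = 8.0          # two 4-inch beads (M-I-M style), inches
insul_depth = thickness - mortar_depth

r_wood_path = thickness * r_per_in_wood
r_mortar_path = mortar_depth * r_per_in_mortar + insul_depth * r_per_in_sawdust

# Parallel heat-flow paths combine through their U-values (reciprocals of R).
u_total = wood_area_fraction / r_wood_path + (1 - wood_area_fraction) / r_mortar_path
print(f"area-weighted R ~ {1 / u_total:.1f}")   # roughly R-19 for these assumptions
```

For these particular assumptions the steady-state estimate comes out near R-19, close to the R-20.5 simulation figure but well below the R-35 measured at the University of Manitoba, both quoted below; the spread mostly reflects how sensitive such estimates are to the assumed inputs and to effects, such as thermal mass, that a steady-state calculation ignores.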
But the R-value of a cordwood masonry wall must take into consideration both the wooden portion and the insulated mortar portion as a combined system. The only authoritative testing on the R-value of cordwood masonry was conducted by Dr. Kris J. Dick (PE) and Luke Chaput during the winter of 2004–2005, based on thermal sensors placed within a 24-inch thick wall at the University of Manitoba. A paper reporting on their findings appears in Cordwood and the Code: A Building Permit Guide. The authors' summary says, in part: "Based on approximately three months of mid-winter temperature data, the wall was determined to have an RSI value of 6.23 (m²K/W), R-35 for a 24-inch wall system." A thermal performance analysis in 1998 using "HOT 2000" computer software showed the relationship between domestic wall types and their insulating values. The simulation revealed an R-value of 20.5 for the sample cordwood wall. Compare this to the basic 2 x 4 wooden stud wall and the 2 x 6 foam-insulated and sheathed wall, with R-values of 15.8 and 25.7, respectively. Cordwood walls are not the best natural insulators but can be built to thermally efficient standards. The R-value of a cordwood wall is directly related to its ratio of wood to mortar and insulation medium. However, R-value in cordwood construction is not as significant as it is in stick-frame building, because the high thermal mass yields a significantly higher "effective R-value." Builders tailor their design and ratios to the existing climate. R-value testing was completed at the University of Manitoba in the winter of 2005. The findings, compiled by the Engineering Department, showed that each inch of cordwood wall (mortar, log end, and sawdust/lime insulation) yielded an R-value of 1.47. Costs A cordwood home can be constructed for significantly less initial out-of-pocket cost than a standard stick-frame house of comparable size, since labor is sometimes done primarily by the owner or volunteers. Properly built cordwood walls tend to have fewer maintenance needs than standard stick-frame, because there are fewer manufactured components (e.g., fiberglass insulation, nailings, sidings, flashings, etc.). Some maintenance will still be required, as there is wood and concrete exposed to the elements on the exterior side of the wall. A cordwood house that is poorly built, without sufficient insulation, can result in higher heating costs than a traditional stud-frame house. In a 1998 comparative economic analysis of stud frame, cordwood, straw bale, and cob, cordwood appeared to be an economically viable alternative. A two-story cordwood house in Cherokee, North Carolina, outfitted with "high quality tile, tongue and groove pine, Russian Woodstove, live earth roof, hand shaped cedar trim, raised panel cabinets, and a handmade pine door," cost the owner an estimated $52,000. With the owner providing 99% of the labor, the house cost him $20.70 per sq. ft. A comparably sized and furnished stick-frame house in 1998 would have cost between $75,000 and $120,000 with zero owner labor. The 1997 residential cost data shows an "average" trim level 1,000- house costing $64.48-$81.76 per sq. ft. Both the acquisition of materials and the source of labor play major roles in the initial cost of building a cordwood house. Process In certain jurisdictions construction plans are subject to the building inspector's approval. Before building, soil conditions on the site must be verified as able to support heavy cordwood masonry walls. 
With felled timber, bark should be removed in the spring with a shovel, chisel, or bark spud. The sap is still running in spring time and provides a lubricating layer of cambium between the bark and wood, making separation an easier task than if left until the fall when the two layers are well-bonded together. Once debarked, the logs should sit to dry for at least three summers to limit splitting and checking. It is important to cut the logs, once debarked to the chosen building length. Richard Flatau, Cordwood Construction: Best Practices (2012) suggest splitting 70% of the wood for better drying and seasoning. After drying, the logs must be cut to the desired length (usually 8, 12, 16, 18, or 24 in.). In this case a metal handsaw is preferable to a chainsaw because its finer cut helps to ward moisture and pest penetration. Actually a "cut off " saw or "buzz saw" will make quick work of cutting cordwood into chosen lengths. For especially furry ends like on cedar, rasps can be used for smoothing. The wood then needs to be transported to the building site. It is convenient to have the source of cordwood and construction site nearby. Once a proper foundation has been poured which rises 12-24 inches above ground level with a splash guard, construction of the walls can begin. Temporary shelters can be used to cover the worksite and cordwood from rain. A post and beam frame supplies this shelter for subsequent cordwood mortaring. Inexperienced homebuilders should experiment with a number of practice walls. This will ultimately expedite the building process and provide more satisfying results. When experimenting with M-I-M, (the more common form), two parallel 3 to 4 inch beads of mortar are laid down along the foundation, followed by a middle filling of insulation material. Then logs are laid on top with consistent mortar gaps, protruding no more than 1 inch on the inside and outside of the wall. Actual placement will depend on the size and shapes of the logs. Another layer of mortar is spread, then insulation poured in between, more logs follow and so on. When experimenting with Throughwall, a thin, even layer of insulative mortar is laid along the foundation, then the logs are seated firmly in the mortar bed, in an even fashion, leaving only enough space between them to "point" the mortar. The mortar gaps are filled to make a relatively flat top surface, then another thin layer of mortar is added and the process repeats. The shape and exterior orientation of logs is important only for appearance. Pre-split “firewood style” logs check less when in the wall and are easier to point or smooth and press evenly around than round pieces because the mortar gaps are generally smaller. Rob and Jaki Roy, co-directors of Earthwood Building School in West Chazy, NY for 36 years, take a different point of view. They used to use mostly split wood, but now use mostly white cedar (or equal) rounds. The shrinkage is exactly the same in splits and rounds, and the Roys have found the wood easier to lay up because it more readily holds its shape from one side of the wall to the other. Further, the rounds are easier to point, because of the ragged edge that results on the bottom side of a split log. Finally, the greater amount of mortar using rounds is actually a plus because the mortared portion of the wall performs better, thermally, than the wooden portion. If constructing a house with corners, each course of cordwood should be cross hatched for strength. 
Near the end, small filler slats of wood may be required to finish the joining or tops of walls. Windows and doors are framed with standard window boxes and wooden lintels. Glass bottles can be inserted for a creative stained glass effect. (Plumbing and electrical wiring are issues to consider but will not be elaborated on in this article.) A cordwood house should have deep overhanging eaves of at least 12–16 inches to keep the log ends dry and prevent fungal growth. If the ends are kept dry and well aerated, they will age without problem. Some owners have coated their ends with linseed oil, or set the outside log ends flush with the mortar for further weatherproofing. Over time, some checking is normal, and can be remedied with periodic mortar or caulking maintenance. Sustainability Although cordwood homes have been tested in -40 °F locations like Alberta, their thermal efficiency in any climate is below that of a purely cob house of comparable dimensions. In frigid areas it is appropriate either to build a thicker 24–36 inch wall, or two separate super-insulated walls. In predominantly wet areas, the outside walls can be plastered, sealing the cordwood ends off from air and moisture, but this hides cordwood's attractive log ends and the logs will rot. The quantity of labor required to reach a specific R-value is higher for cordwood than for straw bale and stick frame construction. Funds saved in construction may need to be allocated for heating costs or long-term exterior maintenance. An organic, mortar-like cob creates less of an environmental impact because of the use of readily available mud and straw, whereas toxins emitted during the production of Portland cement are very harmful, albeit less tangible in the final product. Like many alternative building styles, the sustainability of cordwood construction is dependent upon materials and construction variables. Following the Cordwood Conference in 2005 at Merrill, Wisconsin, a document was published to address best practices in cordwood construction and building code compliance. The document, entitled Cordwood and the Code: A Building Permit Guide, assists cordwood builders in getting the necessary code permits. See also Building construction Cob (material) Wintergreen Studios References Further reading Roy, Rob (2018). Essential Cordwood Building: The Complete Step-by-Step Guide. New Society Publishers: Gabriola Island, BC, Canada. Roy, Rob (2016). Cordwood Building: A Comprehensive Guide to the State of the Art. New Society Publishers: Gabriola Island, BC, Canada. Flatau, Richard (2012). Cordwood Construction: Best Practice. Pierquet, P., Bowyer, J., Huelman, P. (1998). Thermal performance and embodied energy of cold climate wall systems. Forest Products Journal, Vol. 48, Issue 6, pp. 53–60. External links Earthwood Building School Daycreek Resource Site Cordwood Construction Website Building Structural system
Cordwood construction
[ "Technology", "Engineering" ]
3,534
[ "Structural engineering", "Building", "Building engineering", "Structural system", "Construction" ]
696,317
https://en.wikipedia.org/wiki/Buchberger%27s%20algorithm
In the theory of multivariate polynomials, Buchberger's algorithm is a method for transforming a given set of polynomials into a Gröbner basis, which is another set of polynomials that have the same common zeros and are more convenient for extracting information on these common zeros. It was introduced by Bruno Buchberger simultaneously with the definition of Gröbner bases. Euclidean algorithm for polynomial greatest common divisor computation and Gaussian elimination of linear systems are special cases of Buchberger's algorithm when the number of variables or the degrees of the polynomials are respectively equal to one. For other Gröbner basis algorithms, see . Algorithm A crude version of this algorithm to find a basis for an ideal of a polynomial ring R proceeds as follows: Input A set of polynomials F that generates Output A Gröbner basis G for G := F For every fi, fj in G, denote by gi the leading term of fi with respect to the given monomial ordering, and by aij the least common multiple of gi and gj. Choose two polynomials in G and let (Note that the leading terms here will cancel by construction). Reduce Sij, with the multivariate division algorithm relative to the set G until the result is not further reducible. If the result is non-zero, add it to G. Repeat steps 2-4 until all possible pairs are considered, including those involving the new polynomials added in step 4. Output G The polynomial Sij is commonly referred to as the S-polynomial, where S refers to subtraction (Buchberger) or syzygy (others). The pair of polynomials with which it is associated is commonly referred to as critical pair. There are numerous ways to improve this algorithm beyond what has been stated above. For example, one could reduce all the new elements of F relative to each other before adding them. If the leading terms of fi and fj share no variables in common, then Sij will always reduce to 0 (if we use only and for reduction), so we needn't calculate it at all. The algorithm terminates because it is consistently increasing the size of the monomial ideal generated by the leading terms of our set F, and Dickson's lemma (or the Hilbert basis theorem) guarantees that any such ascending chain must eventually become constant. Complexity The computational complexity of Buchberger's algorithm is very difficult to estimate, because of the number of choices that may dramatically change the computation time. Nevertheless, T. W. Dubé has proved that the degrees of the elements of a reduced Gröbner basis are always bounded by , where is the number of variables, and the maximal total degree of the input polynomials. This allows, in theory, to use linear algebra over the vector space of the polynomials of degree bounded by this value, for getting an algorithm of complexity . On the other hand, there are examples where the Gröbner basis contains elements of degree , and the above upper bound of complexity is optimal. Nevertheless, such examples are extremely rare. Since its discovery, many variants of Buchberger's have been introduced to improve its efficiency. Faugère's F4 and F5 algorithms are presently the most efficient algorithms for computing Gröbner bases, and allow to compute routinely Gröbner bases consisting of several hundreds of polynomials, having each several hundreds of terms and coefficients of several hundreds of digits. 
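The crude loop above is easy to express directly in a computer algebra system. The following Python sketch uses SymPy with the default lex ordering; it is a naive transcription of steps 2–5 with none of the improvements mentioned above, and the helper name buchberger is mine. It returns a (generally non-reduced) Gröbner basis, which can be compared against SymPy's built-in groebner routine.

```python
from itertools import combinations
from sympy import symbols, LT, lcm, expand, reduced, groebner

def buchberger(F, gens):
    """Crude Buchberger loop (lex order), following the article's steps with no optimizations."""
    G = [expand(f) for f in F]
    pairs = list(combinations(range(len(G)), 2))
    while pairs:
        i, j = pairs.pop()
        lt_i, lt_j = LT(G[i], *gens), LT(G[j], *gens)   # leading terms (lex order)
        a = lcm(lt_i, lt_j)                             # a_ij: lcm of the leading terms
        s = expand(a / lt_i * G[i] - a / lt_j * G[j])   # S-polynomial: leading terms cancel
        _, r = reduced(s, G, *gens)                     # multivariate division by all of G
        if r != 0:                                      # a new basis element was found
            G.append(r)
            pairs += [(k, len(G) - 1) for k in range(len(G) - 1)]
    return G

x, y = symbols('x y')
F = [x**2 + y, x*y - 1]
print(buchberger(F, (x, y)))
print(groebner(F, x, y, order='lex'))   # SymPy's built-in (reduced) basis, for comparison
```

Termination is guaranteed for the reason given above: each appended remainder strictly enlarges the monomial ideal generated by the leading terms, and by Dickson's lemma that can happen only finitely many times. The output of the naive loop typically contains redundant generators, which is why the built-in routine's reduced basis is shorter.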
See also Knuth–Bendix completion algorithm Quine–McCluskey algorithm – analogous algorithm for Boolean algebra References Further reading David Cox, John Little, and Donald O'Shea (1997). Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra, Springer. . Vladimir P. Gerdt, Yuri A. Blinkov (1998). Involutive Bases of Polynomial Ideals, Mathematics and Computers in Simulation, 45:519ff External links Buchberger's algorithm on Scholarpedia Computer algebra Rewriting systems Algebraic geometry Commutative algebra
Buchberger's algorithm
[ "Mathematics", "Technology" ]
821
[ "Computer algebra", "Computational mathematics", "Fields of abstract algebra", "Computer science", "Algebraic geometry", "Commutative algebra", "Algebra" ]
696,449
https://en.wikipedia.org/wiki/Granular%20material
A granular material is a conglomeration of discrete solid, macroscopic particles characterized by a loss of energy whenever the particles interact (the most common example would be friction when grains collide). The constituents that compose granular material are large enough such that they are not subject to thermal motion fluctuations. Thus, the lower size limit for grains in granular material is about 1 μm. On the upper size limit, the physics of granular materials may be applied to ice floes where the individual grains are icebergs and to asteroid belts of the Solar System with individual grains being asteroids. Some examples of granular materials are snow, nuts, coal, sand, rice, coffee, corn flakes, salt, and bearing balls. Research into granular materials is thus directly applicable and goes back at least to Charles-Augustin de Coulomb, whose law of friction was originally stated for granular materials. Granular materials are commercially important in applications as diverse as the pharmaceutical industry, agriculture, and energy production. Powders are a special class of granular material due to their small particle size, which makes them more cohesive and more easily suspended in a gas. The soldier/physicist Brigadier Ralph Alger Bagnold was an early pioneer of the physics of granular matter, and his book The Physics of Blown Sand and Desert Dunes remains an important reference to this day. According to material scientist Patrick Richard, "Granular materials are ubiquitous in nature and are the second-most manipulated material in industry (the first one is water)". In some sense, granular materials do not constitute a single phase of matter but have characteristics reminiscent of solids, liquids, or gases depending on the average energy per grain. However, in each of these states, granular materials also exhibit properties that are unique. Granular materials also exhibit a wide range of pattern-forming behaviors when excited (e.g. vibrated or allowed to flow). As such, granular materials under excitation can be thought of as an example of a complex system. They also display fluid-based instabilities and phenomena such as the Magnus effect. Definitions Granular matter is a system composed of many macroscopic particles. Microscopic particles (atoms or molecules) are described (in classical mechanics) by all of the degrees of freedom (DOF) of the system. Macroscopic particles, by contrast, are described only by the DOF of their motion as rigid bodies, while each particle also contains many internal DOF. In an inelastic collision between two particles, energy associated with the rigid-body motion is transferred to the microscopic internal DOF, giving dissipation, that is, irreversible heat generation. The result is that, without external driving, all particles eventually stop moving. For macroscopic particles, thermal fluctuations are irrelevant. When the matter is dilute and dynamic (driven), it is called a granular gas and the dissipation phenomenon dominates. When the matter is dense and static, it is called a granular solid and the jamming phenomenon dominates. When the density is intermediate, it is called a granular liquid. Static behaviors Coulomb friction law Coulomb regarded the internal forces between granular particles as a friction process and proposed the friction law: the force of friction between solid particles is proportional to the normal pressure between them, and the static friction coefficient is greater than the kinetic friction coefficient. 
He studied the collapse of piles of sand and found empirically two critical angles: the maximal stable angle and the minimum angle of repose . When the sandpile slope reaches the maximum stable angle, the sand particles on the surface of the pile begin to fall. The process stops when the surface inclination angle is equal to the angle of repose. The difference between these two angles, , is the Bagnold angle, which is a measure of the hysteresis of granular materials. This phenomenon is due to the force chains: stress in a granular solid is not distributed uniformly but is conducted away along so-called force chains which are networks of grains resting on one another. Between these chains are regions of low stress whose grains are shielded for the effects of the grains above by vaulting and arching. When the shear stress reaches a certain value, the force chains can break and the particles at the end of the chains on the surface begin to slide. Then, new force chains form until the shear stress is less than the critical value, and so the sandpile maintains a constant angle of repose. Janssen Effect In 1895, H. A. Janssen discovered that in a vertical cylinder filled with particles, the pressure measured at the base of the cylinder does not depend on the height of the filling, unlike Newtonian fluids at rest which follow Stevin's law. Janssen suggested a simplified model with the following assumptions: 1) The vertical pressure, , is constant in the horizontal plane; 2) The horizontal pressure, , is proportional to the vertical pressure , where is constant in space; 3) The wall friction static coefficient sustains the vertical load at the contact with the wall; 4) The density of the material is constant over all depths. The pressure in the granular material is then described in a different law, which accounts for saturation: , where and is the radius of the cylinder, and at the top of the silo . The given pressure equation does not account for boundary conditions, such as the ratio between the particle size to the radius of the silo. Since the internal stress of the material cannot be measured, Janssen's speculations have not been verified by any direct experiment. Rowe Stress - Dilatancy Relation In the early 1960s, Rowe studied dilatancy effect on shear strength in shear tests and proposed a relation between them. The mechanical properties of assembly of mono-dispersed particles in 2D can be analyzed based on the representative elementary volume, with typical lengths, , in vertical and horizontal directions respectively. The geometric characteristics of the system is described by and the variable , which describes the angle when the contact points begin the process of sliding. Denote by the vertical direction, which is the direction of the major principal stress, and by the horizontal direction, which is the direction of the minor principal stress. Then stress on the boundary can be expressed as the concentrated force borne by individual particles. Under biaxial loading with uniform stress and therefore . At equilibrium state: , where , the friction angle, is the angle between the contact force and the contact normal direction. , which describes the angle that if the tangential force falls within the friction cone the particles would still remain steady. It is determined by the coefficient of friction , so . Once stress is applied to the system then gradually increases while remains unchanged. 
When then the particles will begin sliding, resulting in changing the structure of the system and creating new force chains. , the horizontal and vertical displacements respectively satisfies . Granular gases If the granular material is driven harder such that contacts between the grains become highly infrequent, the material enters a gaseous state. Correspondingly, one can define a granular temperature equal to the root mean square of grain velocity fluctuations that is analogous to thermodynamic temperature. Unlike conventional gases, granular materials will tend to cluster and clump due to the dissipative nature of the collisions between grains. This clustering has some interesting consequences. For example, if a partially partitioned box of granular materials is vigorously shaken then grains will over time tend to collect in one of the partitions rather than spread evenly into both partitions as would happen in a conventional gas. This effect, known as the granular Maxwell's demon, does not violate any thermodynamics principles since energy is constantly being lost from the system in the process. Ulam Model Consider particles, particle having energy . At some constant rate per unit time, randomly choose two particles with energies and compute the sum . Now, randomly distribute the total energy between the two particles: choose randomly so that the first particle, after the collision, has energy , and the second . The stochastic evolution equation: , where is the collision rate, is randomly picked from (uniform distribution) and j is an index also randomly chosen from a uniform distribution. The average energy per particle: . The second moment: . Now the time derivative of the second moment: . In steady state: . Solving the differential equation for the second moment: . However, instead of characterizing the moments, we can analytically solve the energy distribution, from the moment generating function. Consider the Laplace transform: , where , and . the n derivative: , now: . Solving for with change of variables : . We will show that (Boltzmann Distribution) by taking its Laplace transform and calculate the generating function: . Jamming transition Granular systems are known to exhibit jamming and undergo a jamming transition which is thought of as a thermodynamic phase transition to a jammed state. The transition is from fluid-like phase to a solid-like phase and it is controlled by temperature, , volume fraction, , and shear stress, . The normal phase diagram of glass transition is in the plane and it is divided into a jammed state region and unjammed liquid state by a transition line. The phase diagram for granular matter lies in the plane, and the critical stress curve divides the state phase to jammed\unjammed region, which corresponds to granular solids\liquids respectively. For isotropically jammed granular system, when is reduced around a certain point, , the bulk and shear moduli approach 0. The point corresponds to the critical volume fraction . Define the distance to point , the critical volume fraction, . The behavior of granular systems near the point was empirically found to resemble second-order transition: the bulk modulus shows a power law scaling with and there are some divergent characteristics lengths when approaches zero. While is constant for an infinite system, for a finite system boundary effects result in a distribution of over some range. The Lubachevsky-Stillinger algorithm of jamming allows one to produce simulated jammed granular configurations. 
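Two of the results sketched above lend themselves to short numerical illustrations. First, Janssen's saturation law: under the four assumptions listed earlier, the vertical pressure in a silo grows linearly near the free surface and saturates at depth. The closed form used below, p(z) = p_sat (1 − exp(−z/λ)) with λ = R/(2Kμ) for a cylindrical silo, is the standard textbook solution of Janssen's model; the numerical values of density, radius, K and μ are illustrative choices of this sketch.

```python
import math

# Janssen's saturation law for the vertical pressure in a cylindrical silo (textbook form,
# under the four assumptions listed above; the numerical values are illustrative).
rho = 1500.0    # bulk density, kg/m^3
g = 9.81        # m/s^2
R = 0.5         # silo radius, m
K = 0.6         # ratio of horizontal to vertical pressure (assumed constant)
mu = 0.4        # wall friction coefficient

lam = R / (2 * K * mu)          # characteristic depth over which the pressure saturates
p_sat = rho * g * lam           # asymptotic (saturation) pressure

for z in (0.5, 1.0, 2.0, 5.0, 10.0):             # depth below the free surface, m
    p_janssen = p_sat * (1 - math.exp(-z / lam))
    p_hydro = rho * g * z                        # what a Newtonian fluid at rest would give
    print(f"z = {z:5.1f} m:  Janssen {p_janssen:8.0f} Pa   hydrostatic {p_hydro:8.0f} Pa")
```

Second, the Ulam redistribution model: a direct Monte Carlo simulation of the collision rule described above (pool the energy of two randomly chosen particles and split it with a uniform random fraction) relaxes to an exponential, Boltzmann-like energy distribution. The particle number, step count and the tail check below are arbitrary choices for this sketch.

```python
import random

# Monte Carlo version of the Ulam collision model: pick two grains, pool their energies
# and redistribute the sum with a uniform random fraction. The stationary energy
# distribution approaches an exponential (Boltzmann-like) form with the same mean.
N, steps = 10_000, 1_000_000
E = [1.0] * N                           # start with every particle at unit energy

for _ in range(steps):
    i, j = random.randrange(N), random.randrange(N)
    if i == j:
        continue
    z = random.random()                 # uniform split of the pooled energy
    total = E[i] + E[j]
    E[i], E[j] = z * total, (1 - z) * total

mean = sum(E) / N                       # conserved by the rule: stays at 1.0
frac_above = [sum(e > k * mean for e in E) / N for k in (1, 2, 3)]
print("mean energy:", mean)
print("fraction above 1, 2, 3 times the mean:", frac_above)
```

For an exponential distribution the printed tail fractions should come out close to e⁻¹, e⁻² and e⁻³ (about 0.37, 0.14 and 0.05), which is what the simulation produces after a modest number of collisions per particle.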
Pattern formation Excited granular matter is a rich pattern-forming system. Some of the pattern-forming behaviours seen in granular materials are: The un-mixing or segregation of unlike grains under vibration and flow. An example of this is the so-called Brazil nut effect where Brazil nuts rise to the top of a packet of mixed nuts when shaken. The cause of this effect is that when shaken, granular (and some other) materials move in a circular pattern. Some larger materials (Brazil nuts) get stuck while going down the circle and therefore stay on the top. The formation of structured surface or bulk patterns in vibrated granular layers. These patterns include but are not limited to stripes, squares and hexagons. These patterns are thought to be formed by fundamental excitations of the surface known as oscillons. The formation of ordered volumetric structures in granular materials is known as Granular Crystallisation, and involves a transition from a random packing of particles to an ordered packing such as hexagonal close-packed or body-centred cubic. This is most commonly observed in granular materials with narrow size distributions and uniform grain morphology. The formation of sand ripples, dunes, and sandsheets Some of the pattern-forming behaviours have been possible to reproduce in computer simulations. There are two main computational approaches to such simulations, time-stepped and event-driven, the former being the most efficient for a higher density of the material and the motions of a lower intensity, and the latter for a lower density of the material and the motions of a higher intensity. Acoustic effects Some beach sands, such as those of the aptly named Squeaky Beach, exhibit squeaking when walked upon. Some desert dunes are known to exhibit booming during avalanching or when their surface is otherwise disturbed. Granular materials discharged from silos produce loud acoustic emissions in a process known as silo honking. Granulation Granulation is the act or process in which primary powder particles are made to adhere to form larger, multiparticle entities called granules. Crystallization When water or other liquids are cooled sufficiently slowly, randomly positioned molecules rearrange and solid crystals emerge and grow. A similar crystallisation process may occur in randomly packed granular materials. Unlike removing energy by cooling, crystallization in granular material is achieved by external driving. Ordering, or crystallization of granular materials has been observed to occur in periodically sheared as well as vibrated granular matter. In contrast to molecular systems, the positions of the individual particles can be tracked in the experiment. Computer simulations for a system of spherical grains reveal that homogeneous crystallization emerges at a volume fraction . The computer simulations identify the minimal ingredients necessary for granular crystallization. In particular, gravity and friction are not necessary. Computational modeling of granular materials Several methods are available for modeling of granular materials. Most of these methods consist of the statistical methods by which various statistical properties, derived from either point data or an image, are extracted and used to generate stochastic models of the granular medium. A recent and comprehensive review of such methods is available in Tahmasebi and other (2017). 
Another alternative for building a pack of granular particles that recently has been presented is based on the level-set algorithm by which the real shape of the particle can be captured and reproduced through the extracted statistics for particles' morphology. See also Aggregate (composite) Fragile matter Random close pack Soil liquefaction Metal powder Particulates Paste (rheology) μ(I) rheology: one model of the rheology of a granular flow. Dilatancy (granular material) References External links Fundamentals of Particle Technology – free book Mester, L., The new physical-mechanical theory of granular materials. 2009, Homonnai, Pareschi, L., Russo, G., Toscani, G., Modelling and Numerics of Kinetic Dissipative Systems, Nova Science Publishers, New York, 2006. Discrete-phase flow
Granular material
[ "Physics", "Chemistry" ]
2,866
[ "Discrete-phase flow", "Materials", "Particle technology", "Granularity of materials", "Matter", "Fluid dynamics" ]
697,387
https://en.wikipedia.org/wiki/Granular%20convection
Granular convection is a phenomenon where granular material subjected to shaking or vibration will exhibit circulation patterns similar to types of fluid convection. It is sometimes called the Brazil nut effect, when the largest of irregularly shaped particles end up on the surface of a granular material containing a mixture of variously sized objects. This name derives from the example of a typical container of mixed nuts, in which the largest will be Brazil nuts. The phenomenon is also known as the muesli effect since it is seen in packets of breakfast cereal containing particles of different sizes but similar density, such as muesli mix. Under experimental conditions, granular convection of variously sized particles has been observed forming convection cells similar to fluid motion. Explanation It may be counterintuitive to find that the largest and (presumably) heaviest particles rise to the top, but several explanations are possible: When the objects are irregularly shaped, random motion causes some oblong items to occasionally turn in a vertical orientation. The vertical orientation allows smaller items to fall beneath the larger item. If subsequent motion causes the larger item to re-orient horizontally, then it will remain at the top of the mixture. The center of mass of the whole system (containing the mixed nuts) in an arbitrary state is not optimally low; it has the tendency to be higher due to there being more empty space around the larger Brazil nuts than around smaller nuts. When the nuts are shaken, the system has the tendency to move to a lower energy state, which means moving the center of mass down by moving the smaller nuts down and thereby the Brazil nuts up. Including the effects of air in spaces between particles, larger particles may become buoyant or sink. Smaller particles can fall into the spaces underneath a larger particle after each shake. Over time, the larger particle rises in the mixture. (According to Heinrich Jaeger, "[this] explanation for size separation might work in situations in which there is no granular convection, for example for containers with completely frictionless side walls or deep below the surface of tall containers (where convection is strongly suppressed). On the other hand, when friction with the side walls or other mechanisms set up a convection roll pattern inside the vibrated container, we found that the convective motion immediately takes over as the dominant mechanism for size separation.") The same explanation without buoyancy or center of mass arguments: As a larger particle moves upward, any motion of smaller particles into the spaces underneath blocks the larger particle from settling back in its previous position. Repetitive motion results in more smaller particles slipping beneath larger particles. A greater density of the larger particles has no effect on this process. Shaking is not necessary; any process which raises particles and then lets them settle would have this effect. The process of raising the particles imparts potential energy into the system. The result of all the particles settling in a different order may be an increase in the potential energy—a raising of the center of mass. When shaken, the particles move in vibration-induced convection flow; individual particles move up through the middle, across the surface, and down the sides. If a large particle is involved, it will be moved up to the top by convection flow. 
Once at the top, the large particle will stay there because the convection currents are too narrow to sweep it down along the wall. The pore size distribution of a random packing of hard spheres with various sizes makes that smaller spheres have larger probability to move downwards by gravitation than larger spheres. The phenomenon is related to Parrondo's paradox in as much as the Brazil nuts move to the top of the mixed nuts against the gravitational gradient when subjected to random shaking. Study techniques Granular convection has been probed by the use of magnetic resonance imaging (MRI), where convection rolls similar to those in fluids (Bénard cells) can be visualized. Other studies have used time-lapse CT scans, refractive index matched fluids, and positron emission tracing. On the lower-tech end of the scale, researchers have also used thin, clear plastic boxes, so that the motion of some objects is directly visible. The effect has been observed in even tiny particles driven only by brownian motion with no external energy input. Applications Manufacturing The effect is of interest to food manufacturing and similar operations. Once a homogeneous mixture of granular materials has been produced, it is usually undesirable for the different particle types to segregate. Several factors determine the severity of the Brazil nut effect, including the sizes and densities of the particles, the pressure of any gas between the particles, and the shape of the container. A rectangular box (such as a box of breakfast cereal) or cylinder (such as a can of nuts) works well to favour the effect, while a container with outwardly slanting walls (such as in a conical or spherical geometry) results in what is known as the reverse Brazil nut effect. Astronomy In astronomy, it is common in low density, or rubble pile asteroids, for example the asteroid 25143 Itokawa and 101955 Bennu. Geology In geology, the effect is common in formerly glaciated areas such as New England and areas in regions of permafrost where the landscape is shaped into hummocks by frost heave — new stones appear in the fields every year from deeper underground. Horace Greeley noted "Picking stones is a never-ending labor on one of those New England farms. Pick as closely as you may, the next plowing turns up a fresh eruption of boulders and pebbles, from the size of a hickory nut to that of a tea-kettle." A hint to the cause appears in his further description that "this work is mainly to be done in March or April, when the earth is saturated with ice-cold water". Underground water freezes, lifting all particles above it. As the water starts to melt, smaller particles can settle into the opening spaces while larger particles are still raised. By the time ice no longer supports the larger rocks, they are at least partially supported by the smaller particles that slipped below them. Repeated freeze-thaw cycles in a single year speeds up the process. This phenomenon is one of the causes of inverse grading which can be observed in many situations including soil liquefaction during earthquakes or mudslides. Liquefaction is a general phenomenon where a mixture of fluid and granular material subjected to vibration ultimately leads to circulation patterns similar to both fluid convection and granular convection. Indeed, liquefaction is fluid-granular convection with circulation patterns which are known as sand boils or sand volcanoes in the study of soil liquefaction. 
Granular convection is also exemplified by debris flow, which is a fast moving, liquefied landslide of unconsolidated, saturated debris that looks like flowing concrete. These flows can carry material ranging in size from clay to boulders, including woody debris such as logs and tree stumps. Flows can be triggered by intense rainfall, glacial melt, or a combination of the two. See also Cheerios effect Popcorn effect on high-frequency vibrating screens References External links The Brazil Nut Effect on PhysicsWeb The Brazil Nut Effect: Numerical Simulation Example of a numerical simulation of the Brazil Nut Effect. "Why brazils always end up on top", BBC News, 15 November 2001 "Why does shaking a can of coffee cause the larger grains to move to the surface?", Scientific American, 9 May 2005 "Of airbags, Avalungs and avalanche safety", Toronto Star, 13 January 2008 Granularity of materials Convection
Granular convection
[ "Physics", "Chemistry" ]
1,536
[ "Transport phenomena", "Physical phenomena", "Convection", "Materials", "Thermodynamics", "Particle technology", "Granularity of materials", "Matter" ]
697,531
https://en.wikipedia.org/wiki/Geometric%E2%80%93harmonic%20mean
In mathematics, the geometric–harmonic mean M(x, y) of two positive real numbers x and y is defined as follows: we form the geometric mean of g0 = x and h0 = y and call it g1, i.e. g1 is the square root of xy. We also form the harmonic mean of x and y and call it h1, i.e. h1 is the reciprocal of the arithmetic mean of the reciprocals of x and y. These may be done sequentially (in any order) or simultaneously. Now we can iterate this operation with g1 taking the place of x and h1 taking the place of y. In this way, two interdependent sequences (gn) and (hn) are defined: and Both of these sequences converge to the same number, which we call the geometric–harmonic mean M(x, y) of x and y. The geometric–harmonic mean is also designated as the harmonic–geometric mean. (cf. Wolfram MathWorld below.) The existence of the limit can be proved by the means of Bolzano–Weierstrass theorem in a manner almost identical to the proof of existence of arithmetic–geometric mean. Properties M(x, y) is a number between the geometric and harmonic mean of x and y; in particular it is between x and y. M(x, y) is also homogeneous, i.e. if r > 0, then M(rx, ry) = r M(x, y). If AG(x, y) is the arithmetic–geometric mean, then we also have Inequalities We have the following inequality involving the Pythagorean means {H, G, A} and iterated Pythagorean means {HG, HA, GA}: where the iterated Pythagorean means have been identified with their parts {H, G, A} in progressing order: H(x, y) is the harmonic mean, HG(x, y) is the harmonic–geometric mean, G(x, y) = HA(x, y) is the geometric mean (which is also the harmonic–arithmetic mean), GA(x, y) is the geometric–arithmetic mean, A(x, y) is the arithmetic mean. See also Arithmetic–geometric mean Arithmetic–harmonic mean Mean External links Means
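A direct numerical implementation of the two coupled sequences makes the rapid convergence easy to see. The Python sketch below is not from the article (the tolerance, test values and the cross-check at the end are my additions); the cross-check uses the reciprocal relation M(x, y) = 1/AG(1/x, 1/y), which appears to be the identity alluded to above, since the reciprocals of gₙ and hₙ follow exactly the arithmetic–geometric iteration.

```python
# Direct implementation of the coupled iteration defined above (a sketch; tolerance,
# test values and the AGM cross-check are additions, not taken from the article).
def geometric_harmonic_mean(x, y, tol=1e-14):
    g, h = x, y
    while abs(g - h) > tol * max(g, h):
        g, h = (g * h) ** 0.5, 2.0 / (1.0 / g + 1.0 / h)   # geometric and harmonic means
    return 0.5 * (g + h)

def arithmetic_geometric_mean(a, b, tol=1e-14):
    while abs(a - b) > tol * max(a, b):
        a, b = 0.5 * (a + b), (a * b) ** 0.5
    return 0.5 * (a + b)

x, y = 2.0, 8.0
print(geometric_harmonic_mean(x, y))              # lies between H(2, 8) = 3.2 and G(2, 8) = 4
print(1.0 / arithmetic_geometric_mean(1 / x, 1 / y))  # reciprocal AGM relation gives the same value
```

Because both update rules use only the previous pair (gₙ, hₙ), the simultaneous update in the tuple assignment matches the definition; as noted above, updating sequentially converges to the same limit.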
Geometric–harmonic mean
[ "Physics", "Mathematics" ]
498
[ "Means", "Mathematical analysis", "Point (geometry)", "Geometric centers", "Symmetry" ]
698,759
https://en.wikipedia.org/wiki/Propagator
In quantum mechanics and quantum field theory, the propagator is a function that specifies the probability amplitude for a particle to travel from one place to another in a given period of time, or to travel with a certain energy and momentum. In Feynman diagrams, which serve to calculate the rate of collisions in quantum field theory, virtual particles contribute their propagator to the rate of the scattering event described by the respective diagram. Propagators may also be viewed as the inverse of the wave operator appropriate to the particle, and are, therefore, often called (causal) Green's functions (called "causal" to distinguish it from the elliptic Laplacian Green's function). Non-relativistic propagators In non-relativistic quantum mechanics, the propagator gives the probability amplitude for a particle to travel from one spatial point (x') at one time (t') to another spatial point (x) at a later time (t). The Green's function G for the Schrödinger equation is a function satisfying where denotes the Hamiltonian, denotes the Dirac delta-function and is the Heaviside step function. The kernel of the above Schrödinger differential operator in the big parentheses is denoted by and called the propagator. This propagator may also be written as the transition amplitude where is the unitary time-evolution operator for the system taking states at time to states at time . Note the initial condition enforced by The propagator may also be found by using a path integral: where denotes the Lagrangian and the boundary conditions are given by . The paths that are summed over move only forwards in time and are integrated with the differential following the path in time. The propagator lets one find the wave function of a system, given an initial wave function and a time interval. The new wave function is given by If only depends on the difference , this is a convolution of the initial wave function and the propagator. Examples For a time-translationally invariant system, the propagator only depends on the time difference , so it may be rewritten as The propagator of a one-dimensional free particle, obtainable from, e.g., the path integral, is then Similarly, the propagator of a one-dimensional quantum harmonic oscillator is the Mehler kernel, The latter may be obtained from the previous free-particle result upon making use of van Kortryk's SU(1,1) Lie-group identity, valid for operators and satisfying the Heisenberg relation . For the -dimensional case, the propagator can be simply obtained by the product Relativistic propagators In relativistic quantum mechanics and quantum field theory the propagators are Lorentz-invariant. They give the amplitude for a particle to travel between two spacetime events. Scalar propagator In quantum field theory, the theory of a free (or non-interacting) scalar field is a useful and simple example which serves to illustrate the concepts needed for more complicated theories. It describes spin-zero particles. There are a number of possible propagators for free scalar field theory. We now describe the most common ones. Position space The position space propagators are Green's functions for the Klein–Gordon equation. This means that they are functions satisfying where are two points in Minkowski spacetime, is the d'Alembertian operator acting on the coordinates, is the Dirac delta function. (As typical in relativistic quantum field theory calculations, we use units where the speed of light and the reduced Planck constant are set to unity.) 
We shall restrict attention to 4-dimensional Minkowski spacetime. We can perform a Fourier transform of the equation for the propagator, obtaining This equation can be inverted in the sense of distributions, noting that the equation has the solution (see Sokhotski–Plemelj theorem) with implying the limit to zero. Below, we discuss the right choice of the sign arising from causality requirements. The solution is where is the 4-vector inner product. The different choices for how to deform the integration contour in the above expression lead to various forms for the propagator. The choice of contour is usually phrased in terms of the integral. The integrand then has two poles at so different choices of how to avoid these lead to different propagators. Causal propagators Retarded propagator A contour going clockwise over both poles gives the causal retarded propagator. This is zero if is spacelike or is to the future of , so it is zero if . This choice of contour is equivalent to calculating the limit, Here is the Heaviside step function, is the proper time from to , and is a Bessel function of the first kind. The propagator is non-zero only if , i.e., causally precedes , which, for Minkowski spacetime, means and This expression can be related to the vacuum expectation value of the commutator of the free scalar field operator, where Advanced propagator A contour going anti-clockwise under both poles gives the causal advanced propagator. This is zero if is spacelike or if is to the past of , so it is zero if . This choice of contour is equivalent to calculating the limit This expression can also be expressed in terms of the vacuum expectation value of the commutator of the free scalar field. In this case, Feynman propagator A contour going under the left pole and over the right pole gives the Feynman propagator, introduced by Richard Feynman in 1948. This choice of contour is equivalent to calculating the limit Here, is a Hankel function and is a modified Bessel function. This expression can be derived directly from the field theory as the vacuum expectation value of the time-ordered product of the free scalar field, that is, the product always taken such that the time ordering of the spacetime points is the same, This expression is Lorentz invariant, as long as the field operators commute with one another when the points and are separated by a spacelike interval. The usual derivation is to insert a complete set of single-particle momentum states between the fields with Lorentz covariant normalization, and then to show that the functions providing the causal time ordering may be obtained by a contour integral along the energy axis, if the integrand is as above (hence the infinitesimal imaginary part), to move the pole off the real line. The propagator may also be derived using the path integral formulation of quantum theory. Dirac propagator Introduced by Paul Dirac in 1938. Momentum space propagator The Fourier transform of the position space propagators can be thought of as propagators in momentum space. These take a much simpler form than the position space propagators. They are often written with an explicit term although this is understood to be a reminder about which integration contour is appropriate (see above). This term is included to incorporate boundary conditions and causality (see below). 
For a 4-momentum the causal and Feynman propagators in momentum space are: For purposes of Feynman diagram calculations, it is usually convenient to write these with an additional overall factor of (conventions vary). Faster than light? The Feynman propagator has some properties that seem baffling at first. In particular, unlike the commutator, the propagator is nonzero outside of the light cone, though it falls off rapidly for spacelike intervals. Interpreted as an amplitude for particle motion, this translates to the virtual particle travelling faster than light. It is not immediately obvious how this can be reconciled with causality: can we use faster-than-light virtual particles to send faster-than-light messages? The answer is no: while in classical mechanics the intervals along which particles and causal effects can travel are the same, this is no longer true in quantum field theory, where it is commutators that determine which operators can affect one another. So what does the spacelike part of the propagator represent? In QFT the vacuum is an active participant, and particle numbers and field values are related by an uncertainty principle; field values are uncertain even for particle number zero. There is a nonzero probability amplitude to find a significant fluctuation in the vacuum value of the field if one measures it locally (or, to be more precise, if one measures an operator obtained by averaging the field over a small region). Furthermore, the dynamics of the fields tend to favor spatially correlated fluctuations to some extent. The nonzero time-ordered product for spacelike-separated fields then just measures the amplitude for a nonlocal correlation in these vacuum fluctuations, analogous to an EPR correlation. Indeed, the propagator is often called a two-point correlation function for the free field. Since, by the postulates of quantum field theory, all observable operators commute with each other at spacelike separation, messages can no more be sent through these correlations than they can through any other EPR correlations; the correlations are in random variables. Regarding virtual particles, the propagator at spacelike separation can be thought of as a means of calculating the amplitude for creating a virtual particle-antiparticle pair that eventually disappears into the vacuum, or for detecting a virtual pair emerging from the vacuum. In Feynman's language, such creation and annihilation processes are equivalent to a virtual particle wandering backward and forward through time, which can take it outside of the light cone. However, no signaling back in time is allowed. Explanation using limits This can be made clearer by writing the propagator in the following form for a massless particle: This is the usual definition but normalised by a factor of . Then the rule is that one only takes the limit at the end of a calculation. One sees that and Hence this means that a single massless particle will always stay on the light cone. It is also shown that the total probability for a photon at any time must be normalised by the reciprocal of the following factor: We see that the parts outside the light cone usually are zero in the limit and only are important in Feynman diagrams. Propagators in Feynman diagrams The most common use of the propagator is in calculating probability amplitudes for particle interactions using Feynman diagrams. These calculations are usually carried out in momentum space. 
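The explicit formulas in this article did not survive text extraction, so for reference the block below restates, in LaTeX, the standard momentum-space forms that the prose refers to: the scalar Feynman propagator whose iε term fixes the contour, and, for the sections that follow, the spin-1/2 (Dirac) propagator and the photon propagator in the Feynman ('t Hooft) gauge. The metric signature (+,-,-,-) and the overall signs and factors of i are conventions assumed here; textbooks differ on them.

```latex
% Reconstructed standard forms (conventions assumed: signature (+,-,-,-), hbar = c = 1;
% overall signs and factors of i vary between textbooks). Requires \usepackage{slashed}.
\begin{align}
  G_F(p) &= \frac{1}{p^2 - m^2 + i\varepsilon}
      && \text{scalar field (Feynman contour; the $i\varepsilon$ shifts the poles at } p^0 = \pm E_{\mathbf p}\text{)} \\[4pt]
  S_F(p) &= \frac{1}{\slashed{p} - m + i\varepsilon}
          = \frac{\slashed{p} + m}{p^2 - m^2 + i\varepsilon}
      && \text{spin-}\tfrac12\text{ (Dirac) field} \\[4pt]
  D_{\mu\nu}(p) &= \frac{-\,\eta_{\mu\nu}}{p^2 + i\varepsilon}
      && \text{photon, Feynman ('t Hooft) gauge}
\end{align}
```

With these conventions, the retarded and advanced propagators correspond to shifting both poles below or above the real p⁰ axis instead, matching the contour discussion earlier in the article.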
In general, the amplitude gets a factor of the propagator for every internal line, that is, every line that does not represent an incoming or outgoing particle in the initial or final state. It will also get a factor proportional to, and similar in form to, an interaction term in the theory's Lagrangian for every internal vertex where lines meet. These prescriptions are known as Feynman rules. Internal lines correspond to virtual particles. Since the propagator does not vanish for combinations of energy and momentum disallowed by the classical equations of motion, we say that the virtual particles are allowed to be off shell. In fact, since the propagator is obtained by inverting the wave equation, in general, it will have singularities on shell. The energy carried by the particle in the propagator can even be negative. This can be interpreted simply as the case in which, instead of a particle going one way, its antiparticle is going the other way, and therefore carrying an opposing flow of positive energy. The propagator encompasses both possibilities. It does mean that one has to be careful about minus signs for the case of fermions, whose propagators are not even functions in the energy and momentum (see below). Virtual particles conserve energy and momentum. However, since they can be off shell, wherever the diagram contains a closed loop, the energies and momenta of the virtual particles participating in the loop will be partly unconstrained, since a change in a quantity for one particle in the loop can be balanced by an equal and opposite change in another. Therefore, every loop in a Feynman diagram requires an integral over a continuum of possible energies and momenta. In general, these integrals of products of propagators can diverge, a situation that must be handled by the process of renormalization. Other theories Spin If the particle possesses spin then its propagator is in general somewhat more complicated, as it will involve the particle's spin or polarization indices. The differential equation satisfied by the propagator for a spin particle is given by where is the unit matrix in four dimensions, and employing the Feynman slash notation. This is the Dirac equation for a delta function source in spacetime. Using the momentum representation, the equation becomes where on the right-hand side an integral representation of the four-dimensional delta function is used. Thus By multiplying from the left with (dropping unit matrices from the notation) and using properties of the gamma matrices, the momentum-space propagator used in Feynman diagrams for a Dirac field representing the electron in quantum electrodynamics is found to have form The downstairs is a prescription for how to handle the poles in the complex -plane. It automatically yields the Feynman contour of integration by shifting the poles appropriately. It is sometimes written for short. It should be remembered that this expression is just shorthand notation for . "One over matrix" is otherwise nonsensical. In position space one has This is related to the Feynman propagator by where . Spin 1 The propagator for a gauge boson in a gauge theory depends on the choice of convention to fix the gauge. For the gauge used by Feynman and Stueckelberg, the propagator for a photon is The general form with gauge parameter , up to overall sign and the factor of , reads The propagator for a massive vector field can be derived from the Stueckelberg Lagrangian. 
The general form with gauge parameter , up to overall sign and the factor of , reads With these general forms one obtains the propagators in unitary gauge for , the propagator in Feynman or 't Hooft gauge for and in Landau or Lorenz gauge for . There are also other notations where the gauge parameter is the inverse of , usually denoted (see gauges). The name of the propagator, however, refers to its final form and not necessarily to the value of the gauge parameter. Unitary gauge: Feynman ('t Hooft) gauge: Landau (Lorenz) gauge: Graviton propagator The graviton propagator for Minkowski space in general relativity is where is the number of spacetime dimensions, is the transverse and traceless spin-2 projection operator and is a spin-0 scalar multiplet. The graviton propagator for (Anti) de Sitter space is where is the Hubble constant. Note that upon taking the limit and , the AdS propagator reduces to the Minkowski propagator. Related singular functions The scalar propagators are Green's functions for the Klein–Gordon equation. There are related singular functions which are important in quantum field theory. These functions are most simply defined in terms of the vacuum expectation value of products of field operators. Solutions to the Klein–Gordon equation Pauli–Jordan function The commutator of two scalar field operators defines the Pauli–Jordan function by with This satisfies and is zero if . Positive and negative frequency parts (cut propagators) We can define the positive and negative frequency parts of , sometimes called cut propagators, in a relativistically invariant way. This allows us to define the positive frequency part: and the negative frequency part: These satisfy and Auxiliary function The anti-commutator of two scalar field operators defines function by with This satisfies Green's functions for the Klein–Gordon equation The retarded, advanced and Feynman propagators defined above are all Green's functions for the Klein–Gordon equation. They are related to the singular functions by where is the sign of . See also Source field LSZ reduction formula Notes References (Appendix C.) (Especially pp. 136–156 and Appendix A) (section Dynamical Theory of Groups & Fields, Especially pp. 615–624) (Has useful appendices of Feynman diagram rules, including propagators, in the back.) Scharf, G. (1995). Finite Quantum Electrodynamics, The Causal Approach. Springer. . External links Three Methods for Computing the Feynman Propagator Quantum mechanics Quantum field theory Mathematical physics
Propagator
[ "Physics", "Mathematics" ]
3,465
[ "Quantum field theory", "Applied mathematics", "Theoretical physics", "Quantum mechanics", "Mathematical physics" ]
5,914,620
https://en.wikipedia.org/wiki/Society%20of%20Tribologists%20and%20Lubrication%20Engineers
The Society of Tribologists and Lubrication Engineers (STLE) is an American non-profit technical society for the tribology and lubrication engineering sectors worldwide. Its offices are in Park Ridge, Illinois. Established in 1944 as the American Society of Lubrication Engineers (ASLE), the STLE is now one of the world's largest associations solely dedicated to the advancement of the field of tribology. The STLE currently has over 13,000 members. An official STLE journal, Tribology Transactions, is published by Taylor and Francis, and the society is also affiliated with Tribology Letters, published by Springer. The STLE also publishes a monthly magazine, Tribology and Lubrication Technology. References Engineering societies based in the United States Mechanical engineering organizations International professional associations Non-profit organizations based in Chicago Tribology
Society of Tribologists and Lubrication Engineers
[ "Chemistry", "Materials_science", "Engineering" ]
177
[ "Tribology", "Materials science", "Mechanical engineering organizations", "Surface science", "Mechanical engineering" ]
5,920,338
https://en.wikipedia.org/wiki/Atomistix
Atomistix A/S was a software company developing tools for atomic-scale modelling. It was headquartered in Copenhagen, Denmark, with a subsidiary for Asia Pacific in Singapore and for the Americas in California. In September 2008 Atomistix A/S went bankrupt, but in December 2008 the newly founded company QuantumWise announced that it had acquired all assets from the Atomistix estate and would continue the development and marketing of the products Atomistix ToolKit and Atomistix Virtual NanoLab. QuantumWise was then acquired by Synopsys in 2017. History The company was founded in October 2003 by Dr. Kurt Stokbro, Dr. Jeremy Taylor and Dr. Thomas Magnussen. Dr. Stokbro and Dr. Taylor are co-authors of the article introducing the electron transport method and program TranSIESTA (based on the SIESTA program) for academic research. This method, together with methods used in Dr. Taylor's Ph.D. research, was the starting point for Atomistix's first product, TranSIESTA-C. The C refers to the program being written in the C programming language, as opposed to Fortran, in which TranSIESTA was written. This code was completely reengineered and further developed into the commercial products later marketed by the company. From the very beginning the company worked in close collaboration with the Nano-Science Center at the Niels Bohr Institute of Copenhagen University to enhance product development, and established cooperation with leading nanotechnology centers, experts and private companies around the world. Business The management team consisted of: Børge Witthøft, President and Chief Executive Officer (CEO). Dr. Jeremy Taylor, co-founder and Chief Technical Officer (CTO). Klaus Melchior, Chief Operating Officer (COO) and Chief Financial Officer (CFO). Niels Nielsen, Vice President (VP) Sales and Marketing. Products Atomistix A/S provided the following products: Atomistix ToolKit (ATK) Atomistix Virtual NanoLab (VNL) Legacy Products: IView TranSIESTA-C These products represented a package of integrated software modules for quantum chemistry modelling, providing user-friendly graphical access to complex computational methods. From the usability point of view, the setup of a computation is done through Atomistix Virtual NanoLab, a metaphoric interface that mimics in silico the approach of an experiment in a real laboratory. The underlying computational engine is the Atomistix ToolKit (ATK), the next generation of TranSIESTA-C. ATK also has a Python-based interface, NanoLanguage, and a text-file interface. The methods used in the software products are based primarily on density functional theory (DFT) and non-equilibrium Green's function (NEGF) techniques, together with the underlying quantum mechanics. See also Semiconductor Nanotechnology Molecular modelling Software for molecular modeling References Sources External links QuantumWise website Danish companies established in 2003 Nanotechnology companies Software companies based in Copenhagen 2008 disestablishments in Denmark Software companies established in 2003 Software companies disestablished in 2008 2008 mergers and acquisitions
Atomistix
[ "Materials_science" ]
644
[ "Nanotechnology", "Nanotechnology companies" ]
5,920,421
https://en.wikipedia.org/wiki/Pur%20%28brand%29
Pur (styled PŪR; pronounced as "pure") is a division of Helen of Troy Limited that produces Pur Water products. Pur's products include water filter faucet mounts, pitchers, side taps, dispensers, coolers, and filtration systems for Kenmore refrigerators of Sears Holdings Corporation. History The Pur brand was created and products invented by Minneapolis-Based Recovery Engineering, Inc., which was sold to Procter & Gamble in 1999 for approximately US$213 million. Outdoor products under the Pur brand were sold to Katadyn USA, and the Minneapolis manufacturing plant for all Pur products was closed in 2004. P&G sold Pur to Helen of Troy in January 2012 for an undisclosed amount. P&G maintained ownership of the products sold in various countries as a form of humanitarian aid under the Children's Safe Drinking Water program. These no longer use the Pur brand name. See also Chamberland filter References External links Official Website RO Filter System UV Water Sterilizer Water filters Helen of Troy Limited American brands
Pur (brand)
[ "Chemistry" ]
220
[ "Water treatment", "Water filters", "Filters" ]
19,097,104
https://en.wikipedia.org/wiki/NEFA%20%28drug%29
NEFA is a moderate affinity NMDA antagonist (IC50 = 0.51 μM). It is a structural analog of phencyclidine. It was first synthesized by a team at Parke-Davis in the late 1950s. References Dissociative drugs NMDA receptor antagonists Fluorenes Amines
NEFA (drug)
[ "Chemistry" ]
67
[ "Amines", "Bases (chemistry)", "Functional groups" ]
19,099,507
https://en.wikipedia.org/wiki/Anachronisms%20in%20the%20Book%20of%20Mormon
There are a number of anachronistic words and phrases in the Book of Mormon—their existence in the text contradicts known linguistic patterns or archaeological findings. Each of the anachronisms is a word, phrase, artifact, or other concept that did not exist in the Americas during the time period in which the Book of Mormon claims to have been written. Background According to Joseph Smith, the Book of Mormon was originally engraved on golden plates, which he received in 1827 from an angel named Moroni, whom Smith identified as a resurrected former inhabitant of the American continent. Smith claimed to translate the original text of the plates into English; the book says that a portion of the text was written on the plates in "reformed Egyptian". The Book of Mormon is said to have taken place somewhere in the Americas from c. 2500 BC to 420 AD, thus placing its events within the pre-Columbian era. Mainstream scholarly consensus is that the book was created in the 19th century by Smith with the resources available to him, including the standard English translation of the Bible at the time, the King James Version (KJV). No manuscripts in the claimed original language of the Book of Mormon exist. No manuscripts or plates containing text similar to Egyptian or Hebrew have ever been discovered. There is a wide consensus that the archaeological record does not support the historicity of the Book of Mormon, and rather directly contradicts it. Smith stated that "the Book of Mormon is the most correct of any book on Earth", a claim repeated in modern introductions to the book. Modern apologists affirm that "when Joseph Smith referred to the Book of Mormon as the 'most correct book' on earth, he was referring to the principles that it teaches, not the accuracy of its textual structure", and therefore readers should not expect it to be "without any errors in grammar, spelling, punctuation, clarity of phrasing, [or] other such ways." Indeed, the original title page of the Book of Mormon claims that "if there are faults [in the book] they are the mistakes of men". Latter Day Saint scholars and apologists have dealt with these in multiple ways. Depending on the anachronism in question, apologists attempt to: establish parallels to currently known ancient cultures, technologies, plants or animals; reframe the usage of individual words in question; question assumptions that may lead to an apparent anachronism; or point out that it is not known exactly where the Book of Mormon actually took place (and so supporting evidence simply remains to be found - see Limited geography model). Historical anachronisms Quoting Isaiah Book of Mormon prophets quote chapters 48 through 54 of the Book of Isaiah after having left Jerusalem around 600 BC. Since Isaiah died around 698 BC, under traditional biblical belief, there would be no conflict. However, the evidence indicates that these chapters were not written by Isaiah, but rather by one or more other people during the Babylonian captivity, sometime between 586 and 538 BC (between 14 and 82 years after it could have been known to the Book of Mormon prophets). The Mormons who know about this fact and still believe the Book of Mormon has ancient provenance necessarily rely on the hold-out conservative Biblical scholars that still assert, contrary to the evidence, that Isaiah authored the entire book. Baptism Baptism is mentioned as a ritual that is taught and performed among the Nephites, with its first mention being taught by Nephi, son of Lehi between 559 and 545 BC. 
Research by Everett Ferguson (2009) has concluded that "the date for the origin of proselyte baptism cannot be determined." The Babylonian captivity occurred after the departure of the Lehites recounted in the Book of Mormon. Both Christian and Rabbinic baptism is rooted in the washings in the Book of Leviticus, which traditional Biblical timelines date to approximately 1445 BC although current texts are considered to date from the Achaemenid Empire, which began about 539 BC. A practice similar to baptism is known to have been practiced by the Essenes between the 2nd century BC and the 1st century AD. The Jewish Encyclopedia of 1906 compares Christian baptism to ancient Jewish ritual purification and initiation rites involving immersion in water and states that "Baptism was practised in ancient (Hasidean or Essene) Judaism". Dating of known historical events The Book of Mormon chronology accounts for 600 years from the time that Lehi "came out" of Jerusalem to the birth of Jesus Christ, reflecting an 1820s view of the timeline but which contradicts the timing of known historical events. Lehi is said to have left Jerusalem in the first year of the reign of Zedekiah, which occurred in 597 BC. The date of birth of Jesus was no later than 4 BC, based on the Bible stating that it occurred during the reign of Herod the Great, who died in 4 BC. Flora and fauna anachronisms Horses There are several instances where horses are mentioned in the Book of Mormon, and are portrayed as being in the forest upon first arrival of the Nephites, "raise(d)", "fed", "prepared" (in conjunction with chariots), used for food, and being "useful unto man". There is no evidence that horses existed on the American continent during the time frame of the Book of Mormon. While there were horses in North America during the Pleistocene, and modern horses partly evolved in the Americas, fossil records show that they became extinct on the American continent approximately 10,000 years ago. Horses did not reappear in the Americas until the Spaniards brought them from Europe. They were brought to the Caribbean by Christopher Columbus in 1493, and to the American continent by Hernán Cortés in 1519. At this point then there is no convincing evidence that the horse survived until the period of the Mesoamerican civilizations. Others, such as John L. Sorenson, believe that the word "horse" in the Book of Mormon does not refer to members of the genus Equus but instead to other animals such as deer or tapirs. Elephants Elephants are mentioned twice in a single verse in the Book of Ether and are indicated to be at least semi-domesticated. Mastodons and mammoths lived in the New World during the Pleistocene and the very early Holocene with a disappearance of the Mastodon from North America about 10,500 years ago where recent eDNA research of sediments indicates mammoths survived in north central Siberia at least as late as 2000 BC, in continental northeast Siberia until at least 5300 BC, and until at least 6600 BC in North America. The fossil record indicates that they became extinct along with most of the megafauna towards the end of the last glacial period. The source of this extinction, known as the Holocene extinction is speculated to be the result of human predation, a significant climate change, or a combination of both factors. 
It is known that a small population of mammoths survived on Saint Paul Island, Alaska, up until 5725 BP (3705 BC), but this date is more than 1000 years before the Jaredite record in the Book of Mormon begins. The main point of contention is how late these animals were present in the Americas before becoming extinct, with Mormon authors asserting that a population island of these animals continued to exist into Jaredite times. Cattle and cows There are four separate instances of "cows" or "cattle" in the New World in the Book of Mormon, including verbiage that they were "raise(d)" and were "for the use of man" or "useful for the food of man." There is no evidence that Old World cattle (members of the genus Bos) inhabited the New World prior to European contact in the 16th century AD. Goats There are four mentions of the existence of goats in the Book of Mormon. The Jaredites noted goats "were useful for the food of man" (approximately 2300 BC), the Nephites did "find" "the goat and the wild goat" upon arrival (approximately 589 BC) and later "raise(d)" "goats and wild goats" (approximately 500 BC), and the goat was mentioned allegorically (approximately 80 BC). Domesticated goats are not native to the Americas, having been domesticated in prehistoric times on the Eurasian continent. Domesticated goats are believed to have been introduced on the American continent upon the arrival of the Europeans in the 15th century, 1000 years after the conclusion of the Book of Mormon, and nearly 2000 years after they are last mentioned in the Book of Mormon. The mountain goat is indigenous to North America and has been hunted, and the fleece used for clothing. However it has never been domesticated, and is known for being aggressive towards humans. Swine "Swine" are referred to twice in the Book of Mormon, and states that the swine were "useful for the food of man" among the Jaredites. There have not been any remains, references, artwork, tools, or any other evidence suggesting that swine were ever present in the pre-Columbian New World. Barley and wheat Grains are mentioned 28 times in the Book of Mormon, including "barley" and "wheat". The introduction of domesticated modern barley and wheat to the New World was made by Europeans sometime after 1492, many centuries after the time in which the Book of Mormon is set. FARMS scholar Robert Bennett suggests either that the terms barley and wheat were plants given Old World designations, or the terms may refer to genuine New World varieties of the plants. He also postulates that references to "barley" could refer to Hordeum pusillum, also known as "little barley", a species of grass with edible seeds native to the Americas. Evidence exists that this plant was cultivated in North America in the Woodland periods contemporary with mound-builder societies (early centuries AD) and has been carbon-dated to 2,500 years ago, although it is questionable whether it was ever domesticated. Technology anachronisms Chariots The Book of Mormon mentions the presence of "chariots" in three instances, in two instances (both around 90 BC at the same location) inferring them as a mode of transportation. There is no archaeological evidence to support the use of wheeled vehicles in pre-Columbian Mesoamerica. Many parts of ancient Mesoamerica were not suitable for wheeled transport. Clark Wissler, the Curator of Ethnography at the American Museum of Natural History in New York City, noted: "we see that the prevailing mode of land transport in the New World was by human carrier. 
The wheel was unknown in pre-Columbian times." Wheels were used in a limited context in Mesoamerica for what were probably ritual objects, "small clay animal effigies mounted on wheels." Richard Diehl and Margaret Mandeville have documented the archaeological discovery of wheeled toys in Teotihuacan, Tres Zapotes, Veracruz, and Panuco in Mesoamerica. Some of these wheeled toys were referred to by Smithsonian archaeologist William Henry Holmes and archaeologist Désiré Charnay as "chariots". While these items establish that the concept of the wheel was known in ancient Mesoamerica, lack of suitable draft animals and a terrain unsuitable for wheeled traffic are the probable reasons that wheeled transport was never developed." A comparison of the South American Inca civilization to Mesoamerican civilizations shows the same lack of wheeled vehicles. Although the Incas used a vast network of paved roads, these roads are so rough, steep, and narrow that they were likely unsuitable for wheeled use. Bridges that the Inca people built, and even continue to use and maintain today in some remote areas, are straw-rope bridges so narrow (about 2–3 feet wide) that no wheeled vehicle can fit. Inca roads were used mainly by chaski message runners and llama caravans. Mayan paved roads at Yucatan had characteristics which could allow the use of wheeled vehicles, but there is no evidence that those highways were used other than by people on foot and nobles who were borne on litters. Referencing the discovery of wheeled chariot "toys" in Mayan funerary settings, Mormon scholar William J. Hamblin has suggested that the "chariots" mentioned in the Book of Mormon might refer to mythic or cultic wheeled vehicles. Mormon author Brant Gardner suggested that "chariot" may be a palanquin or litter vehicle, since the Book of Mormon makes no reference to the specific use of the wheel. Compass The Book of Mormon also states that a "compass" or "Liahona" was used by Nephi in the 6th-century BC. The compass is widely recognized to have been invented in China around 1100 AD, and remains of a compass have never been found in America. In the Book of Alma, Alma explains to his son that "our fathers called it Liahona, which is, being interpreted, a compass". Windows The Book of Mormon describes that the Jaredite people were familiar with the concept of "windows" near the time of the biblical Tower of Babel, and that they specifically avoided crafting windows for lighting in their covered seagoing vessels, because of fears that "they would be dashed in pieces" during the ocean voyage. Uses of metal The Book of Mormon mentions a number of metals, and the use of metal. "Dross" The word "dross" appears twice in the Book of Alma, dross being a byproduct of the refining of metals. In the Americas, pre-Inca civilizations of the central Andes in Peru had mastered the smelting of copper and silver at least six centuries before the first Europeans arrived in the 16th century, while never mastering the smelting of metals such as iron for use with weapon-craft. Ice core studies in Bolivia suggest copper smelting may have begun as early as 2000 BCE. Steel and iron Three instances of "steel" in the New World are mentioned in the Book of Mormon, one early amongst the Jaredites after their arrival around 2400 BC, one immediately after the Lehi party's arrival in the New World discussing Nephi's knowledge of steel at approximately 580 BC, and one occurrence amongst the Nephites around 400 BC. 
Four instances of "iron" in the New World are mentioned in the Book of Mormon, one amongst the Jaredites around 1000 BC, one immediately after the Lehi party's arrival in the New World discussing Nephi's knowledge of iron at approximately 580 BC, and two of occurrence amongst the Nephites, one around 400 BC and the other around 160 BC. Between 2004 and 2007, a Purdue University archaeologist, Kevin J. Vaughn, discovered a 2000-year-old iron ore mine near Nazca, Peru; however there is no evidence of smelting, and the hematite was apparently used to make pigments. Metal swords The Book of Mormon makes numerous references to "swords" and their use in battle. What the swords are made of is mostly ambiguous except for two instances involving the Jaredites. The first was an early battle (around 2400 BC) involving the king Shule which used "steel" swords. When the remnants of the Jaredite's abandoned cities were discovered (around 120 BC), the Book of Mormon narrative states that some swords were brought back "the hilts thereof have perished, and the blades thereof were cankered with rust", suggesting that these swords had metal blades. Some studies have shown that metallurgy did exist in a primitive state in Mesoamerica during the Preclassic/Formative and Classic periods (which corresponds to the time period in the Book of Mormon). These metals include brass, iron ore, copper, silver, and gold. However, the metals were never used to make swords. The closest evidence to a pre-Columbian metal blade on Mesoamerica comes from the Maya, but those artifacts were not swords, but small copper axes used as tools. Cimeters "Cimeters" are mentioned in eight instances in the Book of Mormon stretching from approximately 500 BC to 51 BC. Critics argue this existed hundreds of years before the term "scimitar" was coined. The word "cimiter" is considered an anachronism since the word was never used by the Hebrews (from which some of the Book of Mormon peoples came) or any other civilization prior to 450 AD and because metal swords are not found in the Americas in the Book of Mormon timeframe. The word 'cimeterre' is found in the 1661 English dictionary Glossographia and is defined as "a crooked sword" and was part of the English language at the time that the Book of Mormon was translated. In the 7th century, scimitars generally first appeared among the Turko-Mongol nomads of Central Asia. Apologists, including Michael R. Ash, and William Hamblin of FAIR, note that the Book of Mormon does not mention the materials that the "cimiters" were made out of, and postulate that the word was chosen by Joseph Smith as the closest workable English word for the weapon used by the Nephites that was not made of metal, and was short and curved like various weapons found in Mesoamerica. System of exchange based on measures of grain using precious metals as a standard The Book of Mormon details a system of measures used by the Nephite society described therein. However, the overall use of metal in ancient America seems to have been extremely limited. A more common exchange medium in Mesoamerica were cacao beans. Linguistic anachronisms Knowledge of a modified Hebrew and reformed Egyptian languages The Book of Mormon account refers to various groups of literate peoples, at least one of which is described as using a language and writing system with roots in Hebrew and Egyptian. Fifteen examples of distinct scripts have been identified in pre-Columbian Mesoamerica, many from a single inscription. 
Archaeological dating methods make it difficult to establish which was earliest (and hence the forebear from which the others developed) and a significant portion of the documented scripts have not been deciphered. None of the documented Mesoamerican language scripts have any relation to Hebrew or Egyptian. The Book of Mormon describes another literate culture, the Jaredites, but does not identify the language or writing system by name. The text that describes the Jaredites (Book of Ether) refers only to a language used prior to the alleged confounding of languages at the great tower, presumably a reference to the Tower of Babel. Linguistic studies on the evolution of the spoken languages of the Americas agree with the widely held model that the initial colonization of the Americas by Homo sapiens occurred over 10,000 years ago. "Christ" and "Messiah" The words "Christ" and "Messiah" are used several hundred times throughout the Book of Mormon. The first instance of the word "Christ" in the Book of Mormon dates to between 559 and 545 BC. The first instance of the word "Messiah" dates to about 600 BC. "Christ" is the English transliteration of the Greek word (transliterated precisely as Christós); it is relatively synonymous with the Hebrew word rendered "Messiah" (). Both words have the meaning of "anointed", and are used in the Bible to refer to "the Anointed One". In Greek translations of the Old Testament (including the Septuagint), the word "Christ" is used for the Hebrew "Messiah", and in Hebrew translations of the New Testament, the word "Messiah" is used for the Greek "Christ". Any usage in the Bible of the word "Christ" can be alternately translated as "Messiah" with no change in meaning (e.g. ). The word "Christ" is found in English dictionaries at the time of the translation of the plates so was not considered an exclusively Greek word at that time. The Book of Mormon uses both terms throughout the book. In the vast majority of cases, it uses the terms in an identical manner as the Bible, where it does not matter which word is used: And now, my sons, remember, remember that it is upon the rock of our Redeemer, who is Christ, the Son of God, that ye must build your foundation; that when the devil shall send forth his mighty winds, yea, his shafts in the whirlwind, yea, when all his hail and his mighty storm shall beat upon you, it shall have no power over you to drag you down to the gulf of misery and endless wo, because of the rock upon which ye are built, which is a sure foundation, a foundation whereon if men build they cannot fall. () And after he had baptized the Messiah with water, he should behold and bear record that he had baptized the Lamb of God, who should take away the sins of the world. () The Book of Mormon occasionally uses the word "Christ" in a way that is not interchangeable with "Messiah". For example, in , the Book of Mormon prophet Jacob says an angel informed him that the name of the Messiah would be Christ:Wherefore, as I said unto you, it must needs be expedient that Christ—for in the last night the angel spake unto me that this should be his name—should come among the Jews () The word "Messiah" is used in the text before this point, but from this point on the word "Christ" is used almost exclusively. 
Richard Packham argues that the Greek word "Christ" in the Book of Mormon challenges the authenticity of the work since Joseph Smith clearly stated that, "There was no Greek or Latin upon the plates from which I, through the grace of the Lord, translated the Book of Mormon." Greek names Joseph Smith stated in a letter to the editor of Times and Seasons, "There was no Greek or Latin upon the plates from which I, through the grace of the Lord, translated the Book of Mormon." The Book of Mormon contains some names which appear to be Greek (e.g. Timothy), some of which are Hellenizations of Hebrew names (e.g. Antipas, Archeantus, Esrom, Ezias, Jonas, Judea, Lachoneus, and Zenos). "Church" and "synagogue" The word "church" first occurs in 1 Nephi 4:26, where a prophet named Nephi disguises himself as Laban, a prominent man in Jerusalem whom Nephi had slain:And he [Laban's servant], supposing that I spake of the brethren of the church, and that I was truly that Laban whom I had slain, wherefore he did follow me (). According to the Book of Mormon, this exchange happened in Jerusalem, around 600 BC. The meaning of the word "church" in the Book of Mormon is more comparable to usage in the KJV than modern English. Aside from its extensive use throughout the New Testament, the sense of a convocation of believers can be attached to certain wordings in the Old Testament For instance, Psalms speaks of praising the Lord "in the congregation of the saints"; the Septuagint contains the Greek word "ecclesia" for "congregation", which is also translated as "church" in the New Testament. A similar question regards the word "synagogue", found in Alma 16:13:And Alma and Amulek went forth preaching repentance to the people in their temples, and in their sanctuaries, and also in their synagogues, which were built after the manner of the Jews (). Scholars note that synagogues did not exist in their modern form before the destruction of the temple and the Babylonian captivity. The oldest known synagogue is located in Delos, Greece, and has been dated to 150 BC. References to synagogues have been found in Egypt as early as the 3rd Century BC. The name "Sam" as an anachronism Critics Jerald and Sandra Tanner and Marvin W. Cowan contend that certain linguistic properties of the Book of Mormon provide evidence that the book was fabricated by Joseph Smith. These critics cite as a linguistic anachronism the Americanized name "Sam" (1 Nephi 2:5,17). The name "Isabel" as an anachronism The name Isabel appears in the Book of Mormon at Alma 39:3. According to the Book of Mormon, Isabel lived about 74 BC. Isabel is a female name of Spanish origin. It originates as the medieval Spanish form of Elisabeth (ultimately Hebrew Elisheva). The name arose in the 12th century AD well after the Isabel in the Book of Mormon. King James's translation A significant portion of the Book of Mormon quotes from the brass plates, which purport to be another source of Old Testament writings mirroring those of the Bible. In many cases, the biblical quotations in the English-language Book of Mormon, are close, or identical to the equivalent sections of the KJV. Critics consider several Book of Mormon anachronisms to originate in the KJV. "All the ships of the sea, and upon all the ships of Tarshish" Isaiah 2:16 is quoted in the Book of Mormon 2 Nephi 12:16, but includes a mistranslated line from the Septuagint, where the word Tarshish was mistaken for a similar Greek word for "sea" (THARSES and THALASSES). 
Furthermore, the added line in the Book of Mormon disrupts the synonymous parallelisms in the poetic structure of the section. As the error appeared in Septuagint the 3rd century BCE this is anachronistic to the 6th century BCE setting of 2 Nephi. The Septuagint version of the verse was discussed in numerous readily available Bible commentaries in the 1820s, including ones by Adam Clarke and John Wesley. "Satyr" In 2 Nephi 23:21, the Book of Mormon quotes Isaiah 13:21, which mentions a "satyr". Satyrs are creatures from Greek mythology, which are half-man, half-goat. The KJV translates Isaiah 34:14 thus: The wild beasts of the desert shall also meet with the wild beasts of the island, and the satyr shall cry to his fellow; the screech owl also shall rest there, and find for herself a place of rest. ("וְרָבְצוּ־שָׁם צִיִּים וּמָלְאוּ בָתֵּיהֶם אֹחִים וְשָׁכְנוּ שָׁם בְּנֹות יַֽעֲנָה וּשְׂעִירִים יְרַקְּדוּ־") Other English-language versions of the Bible, including the New International Version, translate the word (sa`iyr) as "wild goat"; other translations include "monkey" and "dancing devil". New Testament anachronisms The Book of Mormon has 441 phrases that are seven words or longer that appear in the King James Version of the New Testament demonstrating that the Book of Mormon postdates the 1611 King James Translation of the Bible. This is problematic both because the authors of the New Testament and Book of Mormon were geographically separate, and in instances where the portions of the New Testament were quoted hundreds of years earlier. Extended quoted sections include portions of Mark 16, Acts 3, 1 Corinthians 12-13, and 1 John 3. Specific derivative sections include: Moroni's discourse on faith (Ether 12) is derived from the Epistle to the Hebrews (Hebrews 11). Alma chapter 7 and 13 discussion on Melchizedek shows reliance on Hebrews 7. The longer ending of Mark is almost universally rejected by scholars as not being original to the text, but is quoted in the Book of Mormon (Ether 4:18, Mormon 9:22-24). Doctrinal anachronisms Anti-Universalist rhetoric Universalism, or the doctrine that all humanity would be saved, was a prominent theology that peaked in popularity in the northeastern United States in the 1820s and 1830s. The Book of Mormon contains a number of sermons and passages that use anti-Universalist religious arguments common to that time and place, not known to have occurred in any ancient American setting. The existence of 19th century anti-Universalist arguments and rhetoric in the Book of Mormon has been pointed out as anachronistic by various scholars, including Fawn M. Brodie and Dan Vogel. Satisfaction theory of atonement The satisfaction theory of atonement was a medieval theological development, created to explain how God could be both merciful and just through an infinite atonement, and is not known to have appeared in any ancient American setting. See also Archaeology and the Book of Mormon Columbian Exchange Dené–Yeniseian languages Genetics and the Book of Mormon Historicity of the Book of Mormon Linguistics and the Book of Mormon List of pre-Columbian engineering projects in the Americas Pre-Columbian trans-oceanic contact References Sources . Criticism of Mormonism Anachronisms Anachronism
Anachronisms in the Book of Mormon
[ "Physics" ]
5,957
[ "Spacetime", "Physical quantities", "Anachronism", "Time" ]
19,099,981
https://en.wikipedia.org/wiki/TANGO
The TANGO control system is a free, open-source, device-oriented controls toolkit for controlling any kind of hardware or software and building SCADA systems. It is used for controlling synchrotrons, lasers, and physics experiments at over 20 sites. It is being actively developed by a consortium of research institutes. TANGO is a distributed control system. It runs on a single machine as well as on hundreds of machines. TANGO uses two network protocols: the omniORB implementation of CORBA, and ZeroMQ. The basic communication model is the client-server model. Communication between clients and servers can be synchronous, asynchronous or event-driven. CORBA is used for synchronous and asynchronous communication, and ZeroMQ is used for event-driven communication (since version 8 of TANGO). TANGO is based on the concept of Devices. Devices implement object-oriented and service-oriented approaches to software architecture. The Device model in TANGO implements commands (methods), attributes (data fields) and properties for configuring Devices. In TANGO all control objects are Devices. Device Servers TANGO is software for building control systems which need to provide network access to hardware. Hardware can range from single bits of digital input/output up to sophisticated detector systems or entire plant control systems (SCADAs). Hardware access is managed in a process called a Device Server. The Device Server contains Devices belonging to different Device Classes which implement the hardware access. At Device Server startup time, Devices (instances of Device Classes) are created which then represent logical instances of hardware in the control system. Clients "import" the Devices via a database and send requests to the devices using TANGO. Devices can permanently store configuration and setup values in a MySQL database. Hundreds of Device Classes have been written by the community. TANGO manages complexity using hierarchies. Bindings TANGO supports bindings to the following languages: C, C++, Java, Python, MATLAB, LabVIEW and IGOR Pro. Licensing TANGO is distributed under two licenses. The libraries are licensed under the GNU Lesser General Public License (LGPLv3). Tools and device servers are (unless otherwise stated) under the GNU General Public License (GPLv3). The LGPL license allows the TANGO libraries to be used in products which are not GNU GPL. Projects using TANGO Some of the projects using TANGO (in addition to the consortium): the diagnostics of the Laser Mégajoule. Consortium The consortium is a group of institutes that are actively developing TANGO. To join the consortium, an institute has to sign the Memorandum of Understanding and actively commit resources to the development of TANGO. The consortium currently consists of the following institutes: ESRF - European Synchrotron Radiation Facility, Grenoble, France SOLEIL - Soleil Synchrotron, Paris, France ELETTRA - Elettra Synchrotron, Trieste, Italy ALBA - Alba Synchrotron, Barcelona, Spain DESY - PETRA III Synchrotron, Hamburg, Germany MAXIV - MAXIV Synchrotron, Lund, Sweden FRMII - FRMII neutron source, Munich, Germany SOLARIS - National Synchrotron Radiation Centre SOLARIS, Kraków, Poland ANKA - ANKA Synchrotron, Karlsruhe, Germany INAF - Istituto Nazionale di Astrofisica, Italy The goal of the consortium is to guarantee the development of TANGO.
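To make the Device model described above concrete, here is a minimal client-side sketch using the Python binding (PyTango). It is an illustration only: the device name sys/tg_test/1 refers to the standard TangoTest demo device, and the attribute and command names used are the ones that device normally exposes; all of these names are assumptions for the example and should be adapted to the devices registered in your own TANGO database.

```python
# Minimal TANGO client sketch using the Python binding (PyTango).
# Assumes a running TANGO database and the demo device "sys/tg_test/1"
# (the standard TangoTest server); all names are illustrative only.
import tango

def main():
    # A DeviceProxy is the client-side handle to a Device exported by some
    # Device Server; the TANGO database resolves the name to a network address.
    proxy = tango.DeviceProxy("sys/tg_test/1")

    # Read the device state and one attribute (a data field of the device).
    print("state:", proxy.state())
    reading = proxy.read_attribute("double_scalar")
    print("double_scalar =", reading.value)

    # Write an attribute and execute a command (a method of the device).
    proxy.write_attribute("double_scalar_w", 3.14)
    print("DevDouble echo:", proxy.command_inout("DevDouble", 1.5))

if __name__ == "__main__":
    main()
```

The same proxy object also offers asynchronous calls and event subscription, matching the communication modes listed earlier; the synchronous calls shown here are simply the easiest starting point.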
See also EPICS—Experimental Physics and Industrial Control System SCADA—Supervisory Control And Data Acquisition References Refer to the following publications on TANGO for more information : TANGO - an object oriented control system based on CORBA, ICALEPCS 1999, Trieste (Italy) TANGO a CORBA based Control System, ICALEPCS 2003, Gyeongju (Korea) Ubiquitous TANGO, ICALEPCS 2007, Knoxville (USA) Future of TANGO, ICALEPCS 2007, Knoxville (USA) TANGO papers presented at ICALEPCS 2009, Kobe (Japan) TANGO papers presented at ICALEPCS 2011. Grenoble (France) Cross-platform free software Industrial automation
TANGO
[ "Engineering" ]
832
[ "Industrial automation", "Industrial engineering", "Automation" ]
19,101,222
https://en.wikipedia.org/wiki/Easington%20Gas%20Terminal
The Easington Gas Terminal is one of six main gas terminals in the UK, and is situated on the North Sea coast at Easington, East Riding of Yorkshire and Dimlington. The other main gas terminals are at St Fergus, Aberdeenshire; Bacton, Norfolk; Teesside; Theddlethorpe, Lincolnshire and Rampside gas terminal, Barrow, Cumbria. The whole site consists of four plants: two run by Perenco, one by Centrica and one by Gassco. The Easington Gas Terminals are protected by Ministry of Defence Police officers and are provided with resources by the Centre for the Protection of National Infrastructure. History BP Easington Terminal opened in March 1967. This was the first time that North Sea Gas had been brought ashore in the UK from the West Sole field. In 1980 British Gas purchased the field Rough and in 1983 began conversion to a storage field. BP Dimlington opened in October 1988. BP's Ravenspurn North field was added in 1990 and the Johnston field was added in 1994. The Easington Catchment Area was added in 2000, and the Juno development in 2003. Up to 20% of the winter peak demand for gas is exported from Easington via Feeder 9 through the Humber Gas Tunnel. Discovery of gas in the North Sea Britain's first oil rig, the Sea Gem, first discovered gas in the North Sea on 20 August 1965. It was not a large enough field, but at the time it was not even known that there was a large amount of gas under the North Sea. Unfortunately the rig sank in December later that year, when it capsized. The Forties and Brent oilfields were discovered later in 1970 and 1971 respectively. Langeled pipeline Since October 2006, gas has been brought into the UK direct from the Norwegian Sleipner gas field via the Langeled pipeline, the world's longest subsea pipeline before the completion of the Nord Stream pipeline, owned by Gassco which itself is owned by the Kingdom of Norway. Operation The sites are run by and gas is produced by Perenco (after BP sold its operations to them in 2012), Gassco and Centrica Storage Ltd. Gas can be transferred to and from the Centrica Storage plant at Easington dependent on grid demand. The control of the Perenco sites takes place at the Dimlington site, and conditioning of the gas also takes place there. The function that is at the Perenco Easington site is the connection to the National Transmission System. Gas flows from the Easington terminal via a 24-inch diameter, pipeline known as Feeder No 1 across the Humber to Totley near Sheffield. Perenco Easington used to compress gas as well, but from 2007–9, the construction of the £125 million Onshore Compression and Terminal Integration Project (OCTIP) situated all compression and processing from the gas fields at the Dimlington site. As part of the facility, two RB211-GT61 gas turbines, built by Rolls-Royce Energy Systems in Mount Vernon, Ohio, were installed in a £12.7 million contract. Centrica Rough Terminal The Rough (facility) is a partially depleted offshore gas field that was converted for storage by British Gas. It is currently operated by Centrica Storage Ltd (a subsidiary of Centrica). The Rough Terminal also processes gas for the newly developed York field. The Rough Terminal used to receive gas from the Amethyst gasfield which was until 1988 owned by Britoil but this is now processed by Perenco. Since 2013 The Rough Terminal has also processed gas from the York field on behalf of Centrica Energy. 
Langeled Receiving Facilities The Langeled pipeline, which is controlled at the UK end by Gassco (Centrica Storage Ltd before 2011), can transfer up to 2,500 m cubic feet of gas per day from Nyhamna in Norway. Perenco Easington The gas is collected from the Hyde, Hoton, Newsham and West Sole natural gas fields. It can process up to 300 m cubic feet of gas per day. A gas turbine power generator is used to compress the gas. Perenco Dimlington Dimlington is the larger site of the four. The natural gas condensate is transferred to the Dimlington terminal. Dimlington also processes dry gas from the (former) Cleeton, Ravenspurn South, Ravenspurn North, Johnston, the Easington Catchment Area (Neptune and Mercury), and the Juno development (Whittle, Wollaston, Minerva and Apollo) gas fields. The Dimlington site has the control room for all of Perenco's gas fields that ship gas to the Easington site. Dimlington can handle up to 950m cubic feet of gas per day. Fire risk All sites are a considerable fire hazard, so have large water reservoirs for fire fighting containing about one million and three million litres of water each. Dimlington gas fields Cleeton Cleeton and Ravenspurn South form part of the Villages Complex. Both were discovered in 1976. Gas production began in April 1987. Production stopped in 1999. Now used as a hub for the Easington Catchment Area. Named after the scientist, Claud E. Cleeton. Ravenspurn South Discovered in April 1983, off the East Riding of Yorkshire coast. Gas production began in October 1989. Gas via Cleeton to Dimlington. Named after Ravenspurn, the former coastal town. Owned and operated by Perenco. Ravenspurn North Discovered in October 1984 and developed in April 1988 by Hamilton Brothers. First gas produced in October 1989, and BP took over the operatorship of the field from BHP on 12 January 1998. Gas via Cleeton to Dimlington. Operated by Perenco and owned mostly by them, with smaller parts owned by Centrica Resources Ltd and E.ON Ruhrgas UK EU Ltd. Johnston Operated by E.ON Ruhrgas, and previously to them, Caledonia EU, and also by Consort EU Ltd. Discovered in April 1990. Gas first produced in October 1994. Pipeline to Dimlington via Ravenspurn North and Cleeton. Owned 50% by Dana Petroleum (E&P) Ltd and E.ON Ruhrgas UK EU Ltd. Babbage Discovered in 1989 with the first gas being brought ashore in August 2010. Gas will be transported via West Sole to Dimlington. Owned 40% by Dana Petroleum (E&P) Ltd, 47% by E.ON Ruhrgas UK EU Ltd and 13% by Centrica Resources Ltd. Named after the mathematician, Charles Babbage. Easington Catchment Area Consists of Neptune and Mercury fields. Operated by BG Group. Transported to Dimlington via BP's Cleeton. Mercury discovered in February 1983 and production started in November 1999. Named after the planet Mercury. 73% owned by BG Group. Neptune discovered in November 1985 and production started in November 1999. Named after the planet Neptune. 79% owned by BG Group. Juno development These are the most recent of the Dimlington gas fields. Named after Juno, the Roman goddess. BG Group operates the Minerva, Apollo and Artemis fields, and owns 65% of these fields. Production started in 2003. Artemis was discovered in August 1974, and named after Artemis the Greek hunter goddess. Apollo was discovered in July 1987, named after Apollo the Greek sungod, brother of Artemis. Minerva was discovered in January 1969, named after the Roman goddess Minerva. BP operates the Whittle and Wollaston fields. They are 30% owned by BG Group. 
Production started in 2002. Wollaston was discovered in April 1989, and named after William Hyde Wollaston, the Norfolk chemist. Whittle was discovered in July 1990, and named after Frank Whittle. Easington gas fields These fields are around off the East Riding of Yorkshire coast. These fields are connected to the national grid by BP and Rough Terminals. Some of these were one of the 'Villages' gas fields; named after villages lost to the sea along the Holderness coast. These villages include: Cleeton, Dimlington, Hoton, Hyde, Newsham and Ravenspurn. West Sole Discovered in December 1965, east of the Humber. It is a faulted dome whose maximum dimensions are about wide, lying at a depth of . The reservoir comprises about of Permian Rotliegendes sandstone, and the gas has a high methane content and low nitrogen (1.3%). Gas first produced in March 1967. It had initial recoverable reserves of 61 billion m3. Owned and operated by BP until 2012. Acquired by Perenco 2012 Hyde Discovered in May 1982. Gas first produced in August 1993. Was owned 55% by BP and 45% by Statoil. BP took control in January 1997, in exchange for its Jupiter gas field. Newsham Discovered in October 1989. Production began March 1996. Enters the West Sole pipeline. Owned and operated by BP. Hoton Discovered in February 1977. Gas first produced in December 2001. Owned and operated by BP. Named after Hoton, one of the East Riding of Yorkshire lost villages that fell into the sea due to coastal erosion. Amethyst East and West Amethyst East discovered in October 1972 and Amethyst West in April 1970. Owned 59.5% by BP, 24% by BG Group, 9% by Centrica, and 7.5% by Murphy. Amethyst East began in October 1990 and Amethyst West in July 1992. Control of the platform is entirely from Dimlington and therefore operated by BP. Comprises the Amethyst gasfield. Acquired by Perenco 2012 Rough Discovered in May 1968. It had initial recoverable reserves of 14 billion m3. Gas production began in 1975, and it was bought by British Gas in 1980. In 1983, they decided to convert it into gas storage. The gas storage started February 1985. As a depleted gas field, it is used as a storage facility, for essentially the whole of the UK, giving four days worth of gas. Originally owned by BG Storage Ltd (BGSL), who were bought by Dynegy Europe Ltd in November 2001 for £421 million. BGSL became known as Dynegy Storage Ltd, based in Solihull. This company was bought by Centrica on 14 November 2002 for £304 million. Centrica was essentially buying the Easington plant. To operate the field Centrica has to comply with a set of undertakings laid down by DECC and Ofgem due to its unique position in the UK gas market. York Owned and operated by Centrica. Gas back to Centrica Rough Terminal via new pipeline. Helvellyn Discovered in February 1985 with the first gas coming on stream in 2004. Operated by ATP Oil and Gas. Owned 50% by ATP Oil & Gas (UK) Ltd and First Oil Expro Ltd. Gas back to Easington via the Amethyst field. Named after Helvellyn in Cumbria. Rose Discovered in March 1998. Owned and operated by Centrica with the gas pumped back to Easington via the Amethyst field. The operation started in 2004 and was plugged and abandoned in 2015. 
See also List of oil and gas fields of the North Sea Oil fields operated by BP St Fergus Gas Terminal Bacton Gas Terminal Energy in the United Kingdom References External links Centrica Storage Centrica's purchase of the plant at Ofgem World War Two bomb found in March 2008 1967 establishments in England Buildings and structures in the East Riding of Yorkshire Economy of the East Riding of Yorkshire Energy infrastructure completed in 1967 Holderness Natural gas infrastructure in the United Kingdom Natural gas plants Natural gas terminals North Sea energy Science and technology in the East Riding of Yorkshire
Easington Gas Terminal
[ "Chemistry" ]
2,375
[ "Natural gas technology", "Natural gas plants" ]
19,103,227
https://en.wikipedia.org/wiki/Cerium%28III%29%20bromide
Cerium(III) bromide is an inorganic compound with the formula CeBr3. This white hygroscopic solid is of interest as a component of scintillation counters. Preparation and basic properties The compound has been known since at least 1899, when Muthman and Stützel reported its preparation from cerium sulfide and gaseous HBr. Aqueous solutions of CeBr3 can be prepared from the reaction of Ce2(CO3)3·H2O with HBr. The product, CeBr3·H2O, can be dehydrated by heating with NH4Br, followed by sublimation of the residual NH4Br. CeBr3 can be distilled at reduced pressure (~0.1 Pa) in a quartz ampoule at 875–880 °C. Like the related salt CeCl3, the bromide absorbs water on exposure to moist air. The compound melts congruently at 722 °C, and well-ordered single crystals may be produced using standard crystal growth methods such as Bridgman or Czochralski. CeBr3 adopts the hexagonal, UCl3-type crystal structure with the P63/m space group. The cerium ions are 9-coordinate and adopt a tricapped trigonal prismatic geometry. The cerium–bromine bond lengths are 3.11 Å and 3.16 Å. Applications CeBr3-doped lanthanum bromide single crystals are known to exhibit superior scintillation properties for applications in security, medical imaging, and geophysics detectors. Undoped single crystals of CeBr3 have shown promise as γ-ray scintillation detectors in nuclear non-proliferation testing, medical imaging, environmental remediation, and oil exploration. Suppliers Sigma-Aldrich References Cerium(III) compounds Bromides Lanthanide halides
Cerium(III) bromide
[ "Chemistry" ]
387
[ "Bromides", "Salts" ]
19,103,379
https://en.wikipedia.org/wiki/Carmichael%27s%20totient%20function%20conjecture
In mathematics, Carmichael's totient function conjecture concerns the multiplicity of values of Euler's totient function φ(n), which counts the number of integers less than and coprime to n. It states that, for every n there is at least one other integer m ≠ n such that φ(m) = φ(n). Robert Carmichael first stated this conjecture in 1907, but as a theorem rather than as a conjecture. However, his proof was faulty, and in 1922, he retracted his claim and stated the conjecture as an open problem. Examples The totient function φ(n) is equal to 2 when n is one of the three values 3, 4, and 6. Thus, if we take any one of these three values as n, then either of the other two values can be used as the m for which φ(m) = φ(n). Similarly, the totient is equal to 4 when n is one of the four values 5, 8, 10, and 12, and it is equal to 6 when n is one of the four values 7, 9, 14, and 18. In each case, there is more than one value of n having the same value of φ(n). The conjecture states that this phenomenon of repeated values holds for every n. Lower bounds There are very high lower bounds for Carmichael's conjecture that are relatively easy to determine. Carmichael himself proved that any counterexample to his conjecture (that is, a value n such that φ(n) is different from the totients of all other numbers) must be at least 1037, and Victor Klee extended this result to 10400. A lower bound of was given by Schlafly and Wagon, and a lower bound of was determined by Kevin Ford in 1998. The computational technique underlying these lower bounds depends on some key results of Klee that make it possible to show that the smallest counterexample must be divisible by squares of the primes dividing its totient value. Klee's results imply that 8 and Fermat primes (primes of the form 2k + 1) excluding 3 do not divide the smallest counterexample. Consequently, proving the conjecture is equivalent to proving that the conjecture holds for all integers congruent to 4 (mod 8). Other results Ford also proved that if there exists a counterexample to the conjecture, then a positive proportion (in the sense of asymptotic density) of the integers are likewise counterexamples. Although the conjecture is widely believed, Carl Pomerance gave a sufficient condition for an integer n to be a counterexample to the conjecture . According to this condition, n is a counterexample if for every prime p such that p − 1 divides φ(n), p2 divides n. However Pomerance showed that the existence of such an integer is highly improbable. Essentially, one can show that if the first k primes p congruent to 1 (mod q) (where q is a prime) are all less than qk+1, then such an integer will be divisible by every prime and thus cannot exist. In any case, proving that Pomerance's counterexample does not exist is far from proving Carmichael's conjecture. However if it exists then infinitely many counterexamples exist as asserted by Ford. Another way of stating Carmichael's conjecture is that, if A(f) denotes the number of positive integers n for which φ(n) = f, then A(f) can never equal 1. Relatedly, Wacław Sierpiński conjectured that every positive integer other than 1 occurs as a value of A(f), a conjecture that was proven in 1999 by Kevin Ford. Notes References . . . . . . . . External links Multiplicative functions Conjectures Unsolved problems in number theory
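To see the repeated-values phenomenon the conjecture describes, one can tabulate φ over a finite range and count how many n produce each totient value; the brute-force Python sketch below (an illustration, not part of the article) does exactly that. A finite search can only illustrate the conjecture, never prove it, since a value that appears only once below the bound may simply have its other preimages beyond it.

```python
from collections import defaultdict

def totient(n):
    """Euler's totient phi(n) via trial-division factorization."""
    result, p, m = n, 2, n
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def check_carmichael(limit):
    """Group n <= limit by phi(n) and report totient values hit exactly once.

    Caveat: a value may look unique only because its other preimages lie
    beyond `limit`, so the 'suspects' are not genuine counterexamples.
    """
    preimages = defaultdict(list)
    for n in range(1, limit + 1):
        preimages[totient(n)].append(n)
    suspects = {f: ns for f, ns in preimages.items() if len(ns) == 1}
    return preimages, suspects

if __name__ == "__main__":
    preimages, suspects = check_carmichael(10_000)
    print("phi value 2 is attained by:", preimages[2])   # [3, 4, 6]
    print("phi value 4 is attained by:", preimages[4])   # [5, 8, 10, 12]
    print("values attained only once below the bound:", sorted(suspects))
```

For the values quoted in the article, the output confirms that φ = 2 is attained by 3, 4 and 6, and φ = 4 by 5, 8, 10 and 12.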
Carmichael's totient function conjecture
[ "Mathematics" ]
816
[ "Unsolved problems in mathematics", "Multiplicative functions", "Unsolved problems in number theory", "Conjectures", "Mathematical problems", "Number theory" ]
19,103,773
https://en.wikipedia.org/wiki/Circuit%20topology%20%28electrical%29
The circuit topology of an electronic circuit is the form taken by the network of interconnections of the circuit components. Different specific values or ratings of the components are regarded as being the same topology. Topology is not concerned with the physical layout of components in a circuit, nor with their positions on a circuit diagram; similarly to the mathematical concept of topology, it is only concerned with what connections exist between the components. Numerous physical layouts and circuit diagrams may all amount to the same topology. Strictly speaking, replacing a component with one of an entirely different type is still the same topology. In some contexts, however, these can loosely be described as different topologies. For instance, interchanging inductors and capacitors in a low-pass filter results in a high-pass filter. These might be described as high-pass and low-pass topologies even though the network topology is identical. A more correct term for these classes of object (that is, a network where the type of component is specified but not the absolute value) is prototype network. Electronic network topology is related to mathematical topology. In particular, for networks which contain only two-terminal devices, circuit topology can be viewed as an application of graph theory. In a network analysis of such a circuit from a topological point of view, the network nodes are the vertices of graph theory, and the network branches are the edges of graph theory. Standard graph theory can be extended to deal with active components and multi-terminal devices such as integrated circuits. Graphs can also be used in the analysis of infinite networks. Circuit diagrams The circuit diagrams in this article follow the usual conventions in electronics; lines represent conductors, filled small circles represent junctions of conductors, and open small circles represent terminals for connection to the outside world. In most cases, impedances are represented by rectangles. A practical circuit diagram would use the specific symbols for resistors, inductors, capacitors etc., but topology is not concerned with the type of component in the network, so the symbol for a general impedance has been used instead. The Graph theory section of this article gives an alternative method of representing networks. Topology names Many topology names relate to their appearance when drawn diagrammatically. Most circuits can be drawn in a variety of ways and consequently have a variety of names. For instance, the three circuits shown in Figure 1.1 all look different but have identical topologies. This example also demonstrates a common convention of naming topologies after a letter of the alphabet to which they have a resemblance. Greek alphabet letters can also be used in this way, for example Π (pi) topology and Δ (delta) topology. Series and parallel topologies A network with two components or branches has only two possible topologies: series and parallel. Even for these simplest of topologies, the circuit can be presented in varying ways. A network with three branches has four possible topologies. Note that the parallel-series topology is another representation of the Delta topology discussed later. Series and parallel topologies can continue to be constructed with greater and greater numbers of branches ad infinitum. The number of unique topologies that can be obtained from series or parallel branches is 1, 2, 4, 10, 24, 66, 180, 522, 1532, 4624, and so on.
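Series and parallel combinations reduce mechanically to a single equivalent impedance. The following Python sketch is not from the article; the nested-tuple representation, the function name, and the example values are assumptions made only for illustration.

```python
def equivalent_impedance(network):
    """Reduce a nested series/parallel description to one equivalent impedance.
    A network is either a plain number (a single branch impedance) or a tuple
    ('s', [parts]) / ('p', [parts]) for a series or parallel combination."""
    if isinstance(network, (int, float, complex)):
        return network
    kind, parts = network
    values = [equivalent_impedance(part) for part in parts]
    if kind == 's':
        return sum(values)
    if kind == 'p':
        return 1 / sum(1 / v for v in values)
    raise ValueError("combination kind must be 's' or 'p'")

# A three-branch parallel-series network with hypothetical values: a 2-ohm branch
# in parallel with the series combination of two 1-ohm branches.
print(equivalent_impedance(('p', [2.0, ('s', [1.0, 1.0])])))   # 1.0
```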
Y and Δ topologies Y and Δ are important topologies in linear network analysis due to these being the simplest possible three-terminal networks. A Y-Δ transform is available for linear circuits. This transform is important because some networks cannot be analysed in terms of series and parallel combinations. These networks arise often in 3-phase power circuits as they are the two most common topologies for 3-phase motor or transformer windings. An example of this is the network of figure 1.6, consisting of a Y network connected in parallel with a Δ network. Say it is desired to calculate the impedance between two nodes of the network. In many networks this can be done by successive applications of the rules for combination of series or parallel impedances. This is not, however, possible in this case where the Y-Δ transform is needed in addition to the series and parallel rules. The Y topology is also called star topology. However, star topology may also refer to the more general case of many branches connected to the same node rather than just three. Simple filter topologies The topologies shown in figure 1.7 are commonly used for filter and attenuator designs. The L-section is identical topology to the potential divider topology. The T-section is identical topology to the Y topology. The Π-section is identical topology to the Δ topology. All these topologies can be viewed as a short section of a ladder topology. Longer sections would normally be described as ladder topology. These kinds of circuits are commonly analysed and characterised in terms of a two-port network. Bridge topology Bridge topology is an important topology with many uses in both linear and non-linear applications, including, amongst many others, the bridge rectifier, the Wheatstone bridge and the lattice phase equaliser. Bridge topology is rendered in circuit diagrams in several ways. The first rendering in figure 1.8 is the traditional depiction of a bridge circuit. The second rendering clearly shows the equivalence between the bridge topology and a topology derived by series and parallel combinations. The third rendering is more commonly known as lattice topology. It is not so obvious that this is topologically equivalent. It can be seen that this is indeed so by visualising the top left node moved to the right of the top right node. It is normal to call a network bridge topology only if it is being used as a two-port network with the input and output ports each consisting of a pair of diagonally opposite nodes. The box topology in figure 1.7 can be seen to be identical to bridge topology but in the case of the filter the input and output ports are each a pair of adjacent nodes. Sometimes the loading (or null indication) component on the output port of the bridge will be included in the bridge topology as shown in figure 1.9. Bridged T and twin-T topologies Bridged T topology is derived from bridge topology in a way explained in the Zobel network article. Many derivative topologies are also discussed in the same article. There is also a twin-T topology, which has practical applications where it is desirable to have the input and output share a common (ground) terminal. This may be, for instance, because the input and output connections are made with co-axial topology. Connecting an input and output terminal is not allowable with normal bridge topology, so Twin-T is used where a bridge would otherwise be used for balance or null measurement applications. 
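For reference, here is a minimal sketch of the Y-Δ transform discussed at the start of this section, using the standard impedance-conversion formulas; the article itself does not give the formulas, and the node-labelling convention and function names below are assumptions of the sketch.

```python
def delta_to_wye(za, zb, zc):
    """Delta-to-Y transform.  za joins nodes 2-3, zb joins 1-3, zc joins 1-2;
    returns (z1, z2, z3), the Y arms attached to nodes 1, 2 and 3."""
    s = za + zb + zc
    return zb * zc / s, za * zc / s, za * zb / s

def wye_to_delta(z1, z2, z3):
    """Y-to-delta transform, the inverse of the above."""
    p = z1 * z2 + z2 * z3 + z3 * z1
    return p / z1, p / z2, p / z3

# A balanced resistive Y of 10-ohm arms is equivalent to a 30-ohm delta, and the
# two transforms undo each other.
print(wye_to_delta(10.0, 10.0, 10.0))   # (30.0, 30.0, 30.0)
print(delta_to_wye(30.0, 30.0, 30.0))   # (10.0, 10.0, 10.0)
```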
The topology is also used in the twin-T oscillator as a sine-wave generator. The lower part of figure 1.11 shows twin-T topology redrawn to emphasise the connection with bridge topology. Infinite topologies Ladder topology can be extended without limit and is much used in filter designs. There are many variations on ladder topology, some of which are discussed in the Electronic filter topology and Composite image filter articles. The balanced form of ladder topology can be viewed as being the graph of the side of a prism of arbitrary order. The side of an antiprism forms a topology which, in this sense, is an anti-ladder. Anti-ladder topology finds an application in voltage multiplier circuits, in particular the Cockcroft-Walton generator. There is also a full-wave version of the Cockcroft-Walton generator which uses a double anti-ladder topology. Infinite topologies can also be formed by cascading multiple sections of some other simple topology, such as lattice or bridge-T sections. Such infinite chains of lattice sections occur in the theoretical analysis and artificial simulation of transmission lines, but are rarely used as a practical circuit implementation. Components with more than two terminals Circuits containing components with three or more terminals greatly increase the number of possible topologies. Conversely, the number of different circuits represented by a topology diminishes and in many cases the circuit is easily recognisable from the topology even when specific components are not identified. With more complex circuits the description may proceed by specification of a transfer function between the ports of the network rather than the topology of the components. Graph theory Graph theory is the branch of mathematics dealing with graphs. In network analysis, graphs are used extensively to represent a network being analysed. The graph of a network captures only certain aspects of a network: those aspects related to its connectivity, or, in other words, its topology. This can be a useful representation and generalisation of a network because many network equations are invariant across networks with the same topology. This includes equations derived from Kirchhoff's laws and Tellegen's theorem. History Graph theory has been used in the network analysis of linear, passive networks almost from the moment that Kirchhoff's laws were formulated. Gustav Kirchhoff himself, in 1847, used graphs as an abstract representation of a network in his loop analysis of resistive circuits. This approach was later generalised to RLC circuits, replacing resistances with impedances. In 1873 James Clerk Maxwell provided the dual of this analysis with node analysis. Maxwell is also responsible for the topological theorem that the determinant of the node-admittance matrix is equal to the sum of all the tree admittance products. In 1900 Henri Poincaré introduced the idea of representing a graph by its incidence matrix, hence founding the field of algebraic topology. In 1916 Oswald Veblen applied the algebraic topology of Poincaré to Kirchhoff's analysis. Veblen is also responsible for the introduction of the spanning tree to aid choosing a compatible set of network variables. Comprehensive cataloguing of network graphs as they apply to electrical circuits began with Percy MacMahon in 1891 (with an engineer-friendly article in The Electrician in 1892) who limited his survey to series and parallel combinations. MacMahon called these graphs yoke-chains. Ronald M. 
Foster in 1932 categorised graphs by their nullity or rank and provided charts of all those with a small number of nodes. This work grew out of an earlier survey by Foster while collaborating with George Campbell in 1920 on 4-port telephone repeaters and produced 83,539 distinct graphs. For a long time topology in electrical circuit theory remained concerned only with linear passive networks. The more recent developments of semiconductor devices and circuits have required new tools in topology to deal with them. Enormous increases in circuit complexity have led to the use of combinatorics in graph theory to improve the efficiency of computer calculation. Graphs and circuit diagrams Networks are commonly classified by the kind of electrical elements making them up. In a circuit diagram these element-kinds are specifically drawn, each with its own unique symbol. Resistive networks are one-element-kind networks, consisting only of R elements. Likewise capacitive or inductive networks are one-element-kind. The RC, RL and LC circuits are simple two-element-kind networks. The RLC circuit is the simplest three-element-kind network. The LC ladder network commonly used for low-pass filters can have many elements but is another example of a two-element-kind network. Conversely, topology is concerned only with the geometric relationship between the elements of a network, not with the kind of elements themselves. The heart of a topological representation of a network is the graph of the network. Elements are represented as the edges of the graph. An edge is drawn as a line, terminating on dots or small circles from which other edges (elements) may emanate. In circuit analysis, the edges of the graph are called branches. The dots are called the vertices of the graph and represent the nodes of the network. Node and vertex are terms that can be used interchangeably when discussing graphs of networks. Figure 2.2 shows a graph representation of the circuit in figure 2.1. Graphs used in network analysis are usually, in addition, both directed graphs, to capture the direction of current flow and voltage, and labelled graphs, to capture the uniqueness of the branches and nodes. For instance, a graph consisting of a square of branches would still be the same topological graph if two branches were interchanged unless the branches were uniquely labelled. In directed graphs, the two nodes that a branch connects to are designated the source and target nodes. Typically, these will be indicated by an arrow drawn on the branch. Incidence Incidence is one of the basic properties of a graph. An edge that is connected to a vertex is said to be incident on that vertex. The incidence of a graph can be captured in matrix format with a matrix called an incidence matrix. In fact, the incidence matrix is an alternative mathematical representation of the graph which dispenses with the need for any kind of drawing. Matrix rows correspond to nodes and matrix columns correspond to branches. The elements of the matrix are either zero, for no incidence, or one, for incidence between the node and branch. Direction in directed graphs is indicated by the sign of the element. Equivalence Graphs are equivalent if one can be transformed into the other by deformation. Deformation can include the operations of translation, rotation and reflection; bending and stretching the branches; and crossing or knotting the branches. Two graphs which are equivalent through deformation are said to be congruent. 
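The incidence matrix described above can be built directly from a branch list. The sketch below assumes the row/column convention given in the text (rows are nodes, columns are branches) and uses +1 for the source node and -1 for the target node of each directed branch; that sign assignment is one common convention, and the example branch list is hypothetical rather than the network of figure 2.1.

```python
import numpy as np

def incidence_matrix(num_nodes, branches):
    """Incidence matrix of a directed network graph: rows are nodes, columns are
    branches; +1 marks the source node of a branch, -1 its target, 0 elsewhere."""
    a = np.zeros((num_nodes, len(branches)), dtype=int)
    for col, (source, target) in enumerate(branches):
        a[source, col] = 1
        a[target, col] = -1
    return a

# Hypothetical example: a square of four branches plus one diagonal branch,
# with nodes numbered 0-3 and each branch given as (source, target).
branches = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(incidence_matrix(4, branches))
```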
In the field of electrical networks, two additional transforms are considered to result in equivalent graphs which do not produce congruent graphs. The first of these is the interchange of series-connected branches. This is the dual of interchange of parallel-connected branches which can be achieved by deformation without the need for a special rule. The second is concerned with graphs divided into two or more separate parts, that is, a graph with two sets of nodes which have no branches incident to a node in each set. Two such separate parts are considered an equivalent graph to one where the parts are joined by combining a node from each into a single node. Likewise, a graph that can be split into two separate parts by splitting a node in two is also considered equivalent. Trees and links A tree is a graph in which all the nodes are connected, either directly or indirectly, by branches, but without forming any closed loops. Since there are no closed loops, there are no currents in a tree. In network analysis, we are interested in spanning trees, that is, trees that connect every node in the graph of the network. In this article, an unqualified tree means a spanning tree unless otherwise stated. A given network graph can contain a number of different trees. The branches removed from a graph in order to form a tree are called links; the branches remaining in the tree are called twigs. For a graph with n nodes, the number of branches in each tree must be t = n − 1. An important relationship for circuit analysis is b = ℓ + t, where b is the number of branches in the graph and ℓ is the number of links removed to form the tree. Tie sets and cut sets The goal of circuit analysis is to determine all the branch currents and voltages in the network. These network variables are not all independent. The branch voltages are related to the branch currents by the transfer function of the elements of which they are composed. A complete solution of the network can therefore be either in terms of branch currents or branch voltages only. Nor are all the branch currents independent from each other. The minimum number of branch currents required for a complete solution is ℓ. This is a consequence of the fact that the tree is formed by removing ℓ links and there can be no currents in a tree: if the link currents are set to zero, the remaining tree branches carry no current at all, so the tree-branch currents cannot be independent of the link currents. The branch currents chosen as a set of independent variables must be a set associated with the links of a tree: one cannot choose any ℓ branches arbitrarily. In terms of branch voltages, a complete solution of the network can be obtained with t branch voltages. This is a consequence of the fact that short-circuiting all the branches of a tree results in the voltage being zero everywhere. The link voltages cannot, therefore, be independent of the tree branch voltages. A common analysis approach is to solve for loop currents rather than branch currents. The branch currents are then found in terms of the loop currents. Again, the set of loop currents cannot be chosen arbitrarily. To guarantee a set of independent variables the loop currents must be those associated with a certain set of loops. This set of loops consists of those loops formed by restoring a single link of a given tree of the graph of the circuit to be analysed. Since restoring a single link to a tree forms exactly one unique loop, the number of loop currents so defined is equal to ℓ. The term loop in this context is not the same as the usual meaning of loop in graph theory.
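The tree-and-link bookkeeping just described can be checked with a short sketch. The union-find construction, function names, and example branch list below are illustrative assumptions; the only point is that a connected graph's tree contains t = n − 1 twigs, and the remaining ℓ = b − t links fix the number of independent loop currents.

```python
def spanning_tree(num_nodes, branches):
    """Grow a spanning tree branch by branch (union-find), returning (twigs, links)."""
    parent = list(range(num_nodes))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    twigs, links = [], []
    for branch in branches:
        ra, rb = find(branch[0]), find(branch[1])
        if ra == rb:
            links.append(branch)    # adding this branch to the tree would close a loop
        else:
            parent[ra] = rb
            twigs.append(branch)    # branch joins the tree as a twig
    return twigs, links

branches = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # hypothetical 4-node, 5-branch graph
twigs, links = spanning_tree(4, branches)
print(len(twigs), len(links))   # 3 twigs (t = n - 1) and 2 links (independent loop currents)
```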
The set of branches forming a given loop is called a tie set. The set of network equations is formed by equating the loop currents to the algebraic sum of the tie set branch currents. It is possible to choose a set of independent loop currents without reference to the trees and tie sets. A sufficient, but not necessary, condition for choosing a set of independent loops is to ensure that each chosen loop includes at least one branch that was not previously included by loops already chosen. A particularly straightforward choice is that used in mesh analysis, in which the loops are all chosen to be meshes. Mesh analysis can only be applied if it is possible to map the graph onto a plane or a sphere without any of the branches crossing over. Such graphs are called planar graphs. The abilities to map onto a plane and onto a sphere are equivalent conditions. Any finite graph mapped onto a plane can be shrunk until it will map onto a small region of a sphere. Conversely, a mesh of any graph mapped onto a sphere can be stretched until the space inside it occupies nearly all of the sphere. The entire graph then occupies only a small region of the sphere. This is the same as the first case, hence the graph will also map onto a plane. There is an approach to choosing network variables with voltages which is analogous and dual to the loop current method. Here the voltages associated with pairs of nodes are the primary variables and the branch voltages are found in terms of them. In this method also, a particular tree of the graph must be chosen in order to ensure that all the variables are independent. The dual of the tie set is the cut set. A tie set is formed by allowing all but one of the graph links to be open circuit. A cut set is formed by allowing all but one of the tree branches to be short circuit. The cut set consists of the tree branch which was not short-circuited and any of the links which are not short-circuited by the other tree branches. A cut set of a graph produces two disjoint subgraphs, that is, it cuts the graph into two parts, and is the minimum set of branches needed to do so. The set of network equations is formed by equating the node pair voltages to the algebraic sum of the cut set branch voltages. The dual of the special case of mesh analysis is nodal analysis. Nullity and rank The nullity, N, of a graph with n nodes, s separate parts, and b branches is defined by N = b − n + s. The nullity of a graph represents the number of degrees of freedom of its set of network equations. For a planar graph, the nullity is equal to the number of meshes in the graph. The rank, R, of a graph is defined by R = n − s. Rank plays the same role in nodal analysis as nullity plays in mesh analysis. That is, it gives the number of node voltage equations required. Rank and nullity are dual concepts and are related by R + N = b. Solving the network variables Once a set of geometrically independent variables has been chosen the state of the network is expressed in terms of these. The result is a set of independent linear equations which need to be solved simultaneously in order to find the values of the network variables. This set of equations can be expressed in a matrix format which leads to a characteristic parameter matrix for the network. Parameter matrices take the form of an impedance matrix if the equations have been formed on a loop-analysis basis, or of an admittance matrix if the equations have been formed on a node-analysis basis. These equations can be solved in a number of well-known ways.
One method is the systematic elimination of variables. Another method involves the use of determinants. This is known as Cramer's rule and provides a direct expression for the unknown variable in terms of determinants. This is useful in that it provides a compact expression for the solution. However, for anything more than the most trivial networks, a greater calculation effort is required for this method when working manually. Duality Two graphs are dual when the relationship between branches and node pairs in one is the same as the relationship between branches and loops in the other. The dual of a graph can be found entirely by a graphical method. The dual of a graph is another graph. For a given tree in a graph, the complementary set of branches (i.e., the branches not in the tree) form a tree in the dual graph. The set of current loop equations associated with the tie sets of the original graph and tree is identical to the set of voltage node-pair equations associated with the cut sets of the dual graph. The following table lists dual concepts in topology related to circuit theory. The dual of a tree is sometimes called a maze. It consists of spaces connected by links in the same way that the tree consists of nodes connected by tree branches. Duals cannot be formed for every graph. Duality requires that every tie set has a dual cut set in the dual graph. This condition is met if and only if the graph is mappable on to a sphere with no branches crossing. To see this, note that a tie set is required to "tie off" a graph into two portions and its dual, the cut set, is required to cut a graph into two portions. The graph of a finite network which will not map on to a sphere will require an n-fold torus. A tie set that passes through a hole in a torus will fail to tie the graph into two parts. Consequently, the dual graph will not be cut into two parts and will not contain the required cut set. Consequently, only planar graphs have duals. Duals also cannot be formed for networks containing mutual inductances since there is no corresponding capacitive element. Equivalent circuits can be developed which do have duals, but the dual cannot be formed of a mutual inductance directly. Node and mesh elimination Operations on a set of network equations have a topological meaning which can aid visualisation of what is happening. Elimination of a node voltage from a set of network equations corresponds topologically to the elimination of that node from the graph. For a node connected to three other nodes, this corresponds to the well known Y-Δ transform. The transform can be extended to greater numbers of connected nodes and is then known as the star-mesh transform. The inverse of this transform is the Δ-Y transform which analytically corresponds to the elimination of a mesh current and topologically corresponds to the elimination of a mesh. However, elimination of a mesh current whose mesh has branches in common with an arbitrary number of other meshes will not, in general, result in a realisable graph. This is because the graph of the transform of the general star is a graph which will not map on to a sphere (it contains star polygons and hence multiple crossovers). The dual of such a graph cannot exist, but is the graph required to represent a generalised mesh elimination. 
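As a concrete illustration of solving the loop equations discussed under "Solving the network variables" above, the sketch below builds a small, hypothetical loop-impedance matrix and solves Z·I = V both by Cramer's rule and by a direct linear solve; the component values are invented for the example and numpy is assumed to be available.

```python
import numpy as np

# Hypothetical loop-impedance matrix for a two-mesh resistive network: mesh
# self-resistances of 3 and 4 ohms with a 1-ohm branch shared between the meshes,
# and a 5 V source driving the first mesh only.
Z = np.array([[3.0, -1.0],
              [-1.0, 4.0]])
V = np.array([5.0, 0.0])

# Cramer's rule: replace one column of Z at a time with V and take determinant ratios.
det_Z = np.linalg.det(Z)
I_cramer = np.array([
    np.linalg.det(np.column_stack((V, Z[:, 1]))) / det_Z,
    np.linalg.det(np.column_stack((Z[:, 0], V))) / det_Z,
])

# Systematic elimination of variables, done numerically by a direct linear solve.
I_solve = np.linalg.solve(Z, V)

print(I_cramer)   # [1.8181...  0.4545...] loop currents in amperes
print(I_solve)    # same result
```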
Mutual coupling In conventional graph representation of circuits, there is no means of explicitly representing mutual inductive couplings, such as occurs in a transformer, and such components may result in a disconnected graph with more than one separate part. For convenience of analysis, a graph with multiple parts can be combined into a single graph by unifying one node in each part into a single node. This makes no difference to the theoretical behaviour of the circuit, so analysis carried out on it is still valid. It would, however, make a practical difference if a circuit were to be implemented this way in that it would destroy the isolation between the parts. An example would be a transformer earthed on both the primary and secondary side. The transformer still functions as a transformer with the same voltage ratio but can now no longer be used as an isolation transformer. More recent techniques in graph theory are able to deal with active components, which are also problematic in conventional theory. These new techniques are also able to deal with mutual couplings. Active components There are two basic approaches available for dealing with mutual couplings and active components. In the first of these, Samuel Jefferson Mason in 1953 introduced signal-flow graphs. Signal-flow graphs are weighted, directed graphs. He used these to analyse circuits containing mutual couplings and active networks. The weight of a directed edge in these graphs represents a gain, such as possessed by an amplifier. In general, signal-flow graphs, unlike the regular directed graphs described above, do not correspond to the topology of the physical arrangement of components. The second approach is to extend the classical method so that it includes mutual couplings and active components. Several methods have been proposed for achieving this. In one of these, two graphs are constructed, one representing the currents in the circuit and the other representing the voltages. Passive components will have identical branches in both trees but active components may not. The method relies on identifying spanning trees that are common to both graphs. An alternative method of extending the classical approach which requires only one graph was proposed by Chen in 1965. Chen's method is based on a rooted tree. Hypergraphs Another way of extending classical graph theory for active components is through the use of hypergraphs. Some electronic components are not represented naturally using graphs. The transistor has three connection points, but a normal graph branch may only connect to two nodes. Modern integrated circuits have many more connections than this. This problem can be overcome by using hypergraphs instead of regular graphs. In a conventional representation components are represented by edges, each of which connects to two nodes. In a hypergraph, components are represented by hyperedges which can connect to an arbitrary number of nodes. Hyperedges have tentacles which connect the hyperedge to the nodes. The graphical representation of a hyperedge may be a box (compared to the edge which is a line) and the representations of its tentacles are lines from the box to the connected nodes. In a directed hypergraph, the tentacles carry labels which are determined by the hyperedge's label. A conventional directed graph can be thought of as a hypergraph with hyperedges each of which has two tentacles. These two tentacles are labelled source and target and usually indicated by an arrow. 
In a general hypergraph with more tentacles, more complex labelling will be required. Hypergraphs can be characterised by their incidence matrices. A regular graph containing only two-terminal components will have exactly two non-zero entries in each row. Any incidence matrix with more than two non-zero entries in any row is a representation of a hypergraph. The number of non-zero entries in a row is the rank of the corresponding branch, and the highest branch rank is the rank of the incidence matrix. Non-homogeneous variables Classical network analysis develops a set of network equations whose network variables are homogeneous in either current (loop analysis) or voltage (node analysis). The set of network variables so found is not necessarily the minimum necessary to form a set of independent equations. There may be a difference between the number of variables in a loop analysis to a node analysis. In some cases the minimum number possible may be less than either of these if the requirement for homogeneity is relaxed and a mix of current and voltage variables allowed. A result from Kishi and Katajini in 1967 is that the absolute minimum number of variables required to describe the behaviour of the network is given by the maximum distance between any two spanning forests of the network graph. Network synthesis Graph theory can be applied to network synthesis. Classical network synthesis realises the required network in one of a number of canonical forms. Examples of canonical forms are the realisation of a driving-point impedance by Cauer's canonical ladder network or Foster's canonical form or Brune's realisation of an immittance from his positive-real functions. Topological methods, on the other hand, do not start from a given canonical form. Rather, the form is a result of the mathematical representation. Some canonical forms require mutual inductances for their realisation. A major aim of topological methods of network synthesis has been to eliminate the need for these mutual inductances. One theorem to come out of topology is that a realisation of a driving-point impedance without mutual couplings is minimal if and only if there are no all-inductor or all-capacitor loops. Graph theory is at its most powerful in network synthesis when the elements of the network can be represented by real numbers (one-element-kind networks such as resistive networks) or binary states (such as switching networks). Infinite networks Perhaps the earliest network with an infinite graph to be studied was the ladder network used to represent transmission lines developed, in its final form, by Oliver Heaviside in 1881. Certainly all early studies of infinite networks were limited to periodic structures such as ladders or grids with the same elements repeated over and over. It was not until the late 20th century that tools for analysing infinite networks with an arbitrary topology became available. Infinite networks are largely of only theoretical interest and are the plaything of mathematicians. Infinite networks that are not constrained by real-world restrictions can have some very unphysical properties. For instance Kirchhoff's laws can fail in some cases and infinite resistor ladders can be defined which have a driving-point impedance which depends on the termination at infinity. Another unphysical property of theoretical infinite networks is that, in general, they will dissipate infinite power unless constraints are placed on them in addition to the usual network laws such as Ohm's and Kirchhoff's laws. 
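The ladder recursion behind such driving-point impedance questions is easy to state in code. The sketch below, with function names and values that are purely illustrative, folds an n-section ladder back from its termination; for the well-behaved all-resistor ladder shown, the input impedance converges to the same limit whatever the termination, in contrast to the pathological infinite ladders mentioned above whose impedance retains a dependence on the termination at infinity.

```python
import math

def ladder_impedance(n_sections, z_series, z_shunt, z_term):
    """Driving-point impedance of an n-section ladder (series arm then shunt arm),
    terminated in z_term, computed by folding the sections back from the far end."""
    def parallel(a, b):
        if math.isinf(a):
            return b
        if math.isinf(b):
            return a
        return a * b / (a + b)

    z = z_term
    for _ in range(n_sections):
        z = z_series + parallel(z_shunt, z)
    return z

# A ladder of 1-ohm series and shunt resistors: whatever the far-end termination,
# the input resistance approaches (1 + sqrt(5))/2, about 1.618 ohms, as sections
# are added, so this particular ladder forgets its termination.
for z_term in (0.0, 1.0, math.inf):
    print([round(ladder_impedance(n, 1.0, 1.0, z_term), 4) for n in (1, 3, 10, 25)])
```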
There are, however, some real-world applications. The transmission line example is one of a class of practical problems that can be modelled by infinitesimal elements (the distributed-element model). Other examples are launching waves into a continuous medium, fringing field problems, and measurement of resistance between points of a substrate or down a borehole. Transfinite networks extend the idea of infinite networks even further. A node at an extremity of an infinite network can have another branch connected to it leading to another network. This new network can itself be infinite. Thus, topologies can be constructed which have pairs of nodes with no finite path between them. Such networks of infinite networks are called transfinite networks. Notes See also Symbolic circuit analysis Network topology       Topological quantum computer References Bibliography Brittain, James E., The introduction of the loading coil: George A. Campbell and Michael I. Pupin", Technology and Culture, vol. 11, no. 1, pp. 36–57, The Johns Hopkins University Press, January 1970 . Campbell, G. A., "Physical theory of the electric wave-filter", Bell System Technical Journal, November 1922, vol. 1, no. 2, pp. 1–32. Cederbaum, I., "Some applications of graph theory to network analysis and synthesis", IEEE Transactions on Circuits and Systems, vol.31, iss.1, pp. 64–68, January 1984. Farago, P. S., An Introduction to Linear Network Analysis, The English Universities Press Ltd, 1961. Foster, Ronald M., "Geometrical circuits of electrical networks", Transactions of the American Institute of Electrical Engineers, vol.51, iss.2, pp. 309–317, June 1932. Foster, Ronald M.; Campbell, George A., "Maximum output networks for telephone substation and repeater circuits", Transactions of the American Institute of Electrical Engineers, vol.39, iss.1, pp. 230–290, January 1920. Guillemin, Ernst A., Introductory Circuit Theory, New York: John Wiley & Sons, 1953 Kind, Dieter; Feser, Kurt, High-voltage Test Techniques, translator Y. Narayana Rao, Newnes, 2001 . Kishi, Genya; Kajitani, Yoji, "Maximally distant trees and principal partition of a linear graph", IEEE Transactions on Circuit Theory, vol.16, iss.3, pp. 323–330, August 1969. MacMahon, Percy A., "Yoke-chains and multipartite compositions in connexion with the analytical forms called “Trees”", Proceedings of the London Mathematical Society, vol.22 (1891), pp.330–346 . MacMahon, Percy A., "Combinations of resistances", The Electrician, vol.28, pp. 601–602, 8 April 1892.Reprinted in Discrete Applied Mathematics, vol.54, iss.Iss.2–3, pp. 225–228, 17 October 1994 . Minas, M., "Creating semantic representations of diagrams", Applications of Graph Transformations with Industrial Relevance: international workshop, AGTIVE'99, Kerkrade, The Netherlands, September 1–3, 1999: proceedings, pp. 209–224, Springer, 2000 . Redifon Radio Diary, 1970, William Collins Sons & Co, 1969. Skiena, Steven S., The Algorithm Design Manual, Springer, 2008, . Suresh, Kumar K. S., "Introduction to network topology" chapter 11 in Electric Circuits And Networks, Pearson Education India, 2010 . Tooley, Mike, BTEC First Engineering: Mandatory and Selected Optional Units for BTEC Firsts in Engineering, Routledge, 2010 . Wildes, Karl L.; Lindgren, Nilo A., "Network analysis and synthesis: Ernst A. Guillemin", A Century of Electrical Engineering and Computer Science at MIT, 1882–1982, pp. 154–159, MIT Press, 1985 . Zemanian, Armen H., Infinite Electrical Networks, Cambridge University Press, 1991 . 
Electrical engineering Electronic engineering
Circuit topology (electrical)
[ "Technology", "Engineering" ]
7,051
[ "Electrical engineering", "Electronic engineering", "Computer engineering" ]
19,104,225
https://en.wikipedia.org/wiki/Mond%20gas
Mond gas is a cheap coal gas that was used for industrial heating purposes. Coal gases are made by decomposing coal through heating it to a high temperature. Coal gases were the primary source of gas fuel during the 1940s and 1950s until the adoption of natural gas. They were used for lighting, heating, and cooking, typically being supplied to households through pipe distribution systems. The gas was named after its discoverer, Ludwig Mond. Discovery In 1889, Ludwig Mond discovered that the combustion of coal with air and steam produced ammonia along with an extra gas, which was named the Mond gas. He discovered this while looking for a process to form ammonium sulfate, which was useful in agriculture. The process involved reacting low-quality coal with superheated steam, which produced the Mond gas. The gas was then passed through dilute sulfuric acid spray, which ultimately removed the ammonia, forming ammonium sulfate. Mond modified the gasification process by restricting the air supply and filling the air with steam, providing a low working temperature. This temperature was below ammonia's point of dissociation, maximizing the amount of ammonia that could be produced from the nitrogen, a product from superheating coal. Gas production The Mond gas process was designed to convert cheap coal into flammable gas, made up mainly of hydrogen, while recovering ammonium sulfate. The gas produced was rich in hydrogen and poor in carbon monoxide. Although it could be used for some industrial purposes and power generation, the gas was of limited use for heating or lighting. In 1897, the first Mond gas plant began operation at Brunner Mond & Company in Northwich, Cheshire. Mond plants which recovered ammonia needed to be large in order to be profitable, using at least 182 tons of coal per week. Reaction The predominant reaction in the Mond gas process is: C + 2 H2O = CO2 + 2 H2 The Mond gas was composed of roughly 12% CO (carbon monoxide), 28% H2 (hydrogen), 2.2% CH4 (methane), 16% CO2 (carbon dioxide), and 42% N2 (nitrogen). Uses Mond gas could be produced and used more efficiently than other gases in the late 19th and early 20th century. The gas was used as fuel for street lighting and basic residential uses that required gas such as ovens, kilns, furnaces, and boilers. Advantages The Mond gas could be produced very cheaply since it required only low-quality coal, offering large savings for many processes. The production of Mond gas did not require much labor. Mond gas became popular for industrial power generation at the beginning of the 20th century, since industries were very interested in a source of low-cost energy. The Mond gas provided a boost to the gas engine industry in particular. For example, a large gas engine that used Mond gas was 5–6 times more efficient than a standard steam engine. This is primarily because Mond gas was produced from the lowest cost coal rather than steam coal, resulting in cheaper electricity at about 1/20 of the normal price. Modern use The Mond gas was used primarily during the early 20th century, and its process was further developed by the Power Gas Corporation as the Lymn system; however, the gas has been widely forgotten. The use of coal gases has become far less popular due to the adoption of natural gas in the 1960s. Natural gas was better for the environment because it burned more cleanly than other fuels such as coal and oil and could also be transported more safely and efficiently by sea.
References Fuel gas Fuels Chemical mixtures Industrial gases Synthetic fuel technologies
Mond gas
[ "Chemistry" ]
746
[ "Chemical energy sources", "Petroleum technology", "Chemical mixtures", "Industrial gases", "Synthetic fuel technologies", "nan", "Chemical process engineering", "Fuels" ]
19,106,129
https://en.wikipedia.org/wiki/Lanthanum%28III%29%20bromide
Lanthanum(III) bromide (LaBr3) is an inorganic halide salt of lanthanum. When pure, it is a colorless white powder. Single crystals of LaBr3 are hexagonal, with a melting point of 783 °C. It is highly hygroscopic and water-soluble. Several hydrates of the salt, LaBr3·xH2O, are also known. It is often used as a source of lanthanum in chemical synthesis and as a scintillation material in certain applications. Lanthanum bromide scintillation detector The scintillator material cerium-activated lanthanum bromide (LaBr3:Ce) was first produced in 2001. LaBr3:Ce-based radiation detectors offer improved energy resolution, fast emission and excellent temperature and linearity characteristics. Typical energy resolution at 662 keV is 3%, as compared to 7% for sodium iodide detectors. The improved resolution is due to a photoelectron yield that is 160% greater than is achieved with sodium iodide. Another advantage of LaBr3:Ce is the nearly flat photo emission over a 70 °C temperature range (~1% change in light output). Today LaBr3 detectors are offered with bialkali photomultiplier tubes (PMTs) that can be two inches in diameter and 10 or more inches long. However, miniature packaging can be obtained by the use of a silicon drift detector (SDD) or a silicon photomultiplier (SiPM). These UV-enhanced diodes provide excellent wavelength matching to the 380 nm emission of LaBr3. The SDD is not as sensitive to temperature and bias drift as a PMT. The reported spectroscopy performance of the SDD configuration resulted in a 2.8% energy resolution at 662 keV for the detector sizes considered. LaBr3 introduces an enhanced set of capabilities to a range of gamma spectroscopy radioisotope detection and identification systems used in the homeland security market. Isotope identification utilizes several techniques (known as algorithms) which rely on the detector's ability to discriminate peaks. The improvements in resolution allow more accurate peak discrimination in ranges where isotopes often have many overlapping peaks. This leads to better isotope classification. Screening of all types (pedestrians, cargo, conveyor belts, shipping containers, vehicles, etc.) often requires accurate isotopic identification to differentiate concerning materials from non-concerning materials (medical isotopes in patients, naturally occurring radioactive materials, etc.). Heavy R&D and deployment of instruments utilizing LaBr3 is expected in the upcoming years. References Lanthanum compounds Bromides Phosphors and scintillators Lanthanide halides
Lanthanum(III) bromide
[ "Chemistry", "Technology", "Engineering" ]
560
[ "Luminescence", "Radioactive contamination", "Measuring instruments", "Salts", "Bromides", "Ionising radiation detectors", "Phosphors and scintillators" ]
10,038,696
https://en.wikipedia.org/wiki/Polyvinyl%20siloxane
Polyvinyl siloxane (PVS), also called poly-vinyl siloxane, vinyl polysiloxane (VPS), or vinylpolysiloxane, is an addition-reaction silicone elastomer (an addition silicone). It is a viscous liquid that cures (solidifies) quickly into a rubber-like solid, taking the shape of whatever surface it was lying against while curing. As with two-part epoxy, its package keeps its two component liquids in separate tubes until the moment they are mixed and applied, because once mixed, they cure (harden) rapidly. Polyvinyl siloxane is widely used in dentistry as an impression material. It is also used in other contexts where an impression similar to a dental impression is needed, such as in audiology (to take ear impressions for fitting custom hearing protection or hearing aids) or in industrial applications (such as to aid in the inspection of interior features of machined parts, for example, internal grooves inside bores). Polyvinyl siloxane was commercially introduced in the 1970s. To create the material, the user simply mixes a colored putty (often blue or pink) with a white putty, and the chemical reaction begins. PVS with a wide variety of working and setting times is available commercially. Final set is noted when the product rebounds upon touching with a blunt or sharp instrument. This reaction also gives off hydrogen gas and it is therefore advisable to wait up to an hour before pouring the ensuing cast. In dentistry, this material is commonly referred to as having light or heavy body depending on specific usage. See also Dental impression Dentures References Dental materials Polymers Impression material
Polyvinyl siloxane
[ "Physics", "Chemistry", "Materials_science" ]
353
[ "Dental materials", "Materials", "Polymer chemistry", "Polymers", "Matter" ]
10,039,881
https://en.wikipedia.org/wiki/Girih%20tiles
Girih tiles are a set of five tiles that were used in the creation of Islamic geometric patterns using strapwork (girih) for decoration of buildings in Islamic architecture. They have been used since about the year 1200, and their arrangements improved significantly starting with the Darb-i Imam shrine in Isfahan, Iran, built in 1453. Five tiles The five shapes of the tiles are a regular decagon, an elongated (irregular convex) hexagon, a bow tie (non-convex hexagon), a rhombus, and a regular pentagon; each also has a traditional Persian name. All sides of these figures have the same length, and all their angles are multiples of 36° (π/5 radians). All of them except the pentagon have bilateral (reflection) symmetry through two perpendicular lines. Some have additional symmetries. Specifically, the decagon has tenfold rotational symmetry (rotation by 36°); and the pentagon has fivefold rotational symmetry (rotation by 72°). The emergence of girih tiles By the 11th century, craftsmen in the Islamic world had discovered a new way to construct the “tile mosaic”, the girih tiles, aided by developments in arithmetic calculation and geometry. Girih Girih are lines (strapwork) that decorate the tiles. The tiles are used to form girih patterns, from the Persian word meaning "knot". In most cases, only the girih (and other minor decorations like flowers) are visible rather than the boundaries of the tiles themselves. The girih are piecewise straight lines that cross the boundaries of the tiles at the center of an edge at 54° (3π/10 radians) to the edge. Two intersecting girih cross each edge of a tile. Most tiles have a unique pattern of girih inside the tile that are continuous and follow the symmetry of the tile. However, the decagon has two possible girih patterns, one of which has only fivefold rather than tenfold rotational symmetry. Mathematics of girih tilings In 2007, the physicists Peter J. Lu and Paul J. Steinhardt suggested that girih tilings possess properties consistent with self-similar fractal quasicrystalline tilings such as Penrose tilings, predating them by five centuries. This finding was supported both by analysis of patterns on surviving structures, and by examination of 15th-century Persian scrolls. There is no indication of how much more the architects may have known about the mathematics involved. It is generally believed that such designs were constructed by drafting zigzag outlines with only a straightedge and a compass. Templates found on scrolls such as the 97-foot (29.5 metres) long Topkapi Scroll may have been consulted. Found in the Topkapi Palace in Istanbul, the administrative center of the Ottoman Empire, and believed to date from the late 15th century, the scroll shows a succession of two- and three-dimensional geometric patterns. There is no text, but there is a grid pattern and color-coding used to highlight symmetries and distinguish three-dimensional projections. Drawings such as those shown on this scroll would have served as pattern-books for the artisans who fabricated the tiles, and the shapes of the girih tiles dictated how they could be combined into large patterns. In this way, craftsmen could make highly complex designs without resorting to mathematics and without necessarily understanding their underlying principles. This use of repeating patterns created from a limited number of geometric shapes available to craftsmen of the day is similar to the practice of contemporary European Gothic artisans. Designers of both styles were concerned with using their inventories of geometrical shapes to create the maximum diversity of forms. This demanded a skill and practice very different from mathematics.
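The geometry described above (equal edge lengths, angles in multiples of 36°, and girih strands crossing each edge at its midpoint at 54°) can be sketched numerically. The following Python sketch is a modern illustration only, not a reconstruction of any historical drafting method; the function names, the inward-pointing convention, and the choice of the decagonal tile are assumptions of the sketch.

```python
import math

def regular_polygon(n_sides, side=1.0):
    """Vertices (counterclockwise) of a regular polygon with the given side length."""
    r = side / (2 * math.sin(math.pi / n_sides))       # circumradius
    return [(r * math.cos(2 * math.pi * k / n_sides),
             r * math.sin(2 * math.pi * k / n_sides)) for k in range(n_sides)]

def girih_strands(vertices, angle_deg=54.0):
    """For each edge, return (midpoint, direction_a, direction_b): unit vectors of
    the two girih strands leaving the edge midpoint at +/-54 degrees to the edge.
    With counterclockwise vertices both strands point into the tile."""
    a = math.radians(angle_deg)
    out = []
    n = len(vertices)
    for k in range(n):
        (x1, y1), (x2, y2) = vertices[k], vertices[(k + 1) % n]
        mid = ((x1 + x2) / 2, (y1 + y2) / 2)
        edge = math.atan2(y2 - y1, x2 - x1)
        out.append((mid,
                    (math.cos(edge + a), math.sin(edge + a)),
                    (math.cos(edge + math.pi - a), math.sin(edge + math.pi - a))))
    return out

decagon = regular_polygon(10)              # the decagonal girih tile, unit edges
for mid, d1, d2 in girih_strands(decagon)[:2]:
    print(mid, d1, d2)
```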
Geometric construction of an interlocking decagram-polygon mosaic design First, divide the right angle A into five congruent angles by creating four rays that start from A. Find an arbitrary point C on the second ray and drop perpendiculars from C to the sides of angle A counterclockwise. This step creates the rectangle ABCD along with four segments that each have an endpoint at A; the other endpoints are the intersections of the four rays with the two sides BC and DC of rectangle ABCD. Then, find E, the midpoint of the fourth segment created from the fourth ray. Construct an arc with center A and radius AE to intersect AB at point F and the second ray at point G. The second segment is now a part of the rectangle's diagonal. Make a line parallel to AD and passing through point G that intersects the first ray at point H and the third ray at point I. The line HF passes through point E and intersects the third ray at L and line AD at J. Construct a line passing through J that is parallel to the third ray. Also construct line EI and find M, the intersection of this line with AD. From the point F, make a line parallel to the third ray to meet the first ray at K. Construct segments GK, GL, and EM. Find the point N such that GI = IN by constructing a circle with center I and radius IG. Construct the line DN which is parallel to GK to intersect the line emanating from J, and find P to complete the regular pentagon EINPJ. Line DN meets the perpendicular bisector of AB at Q. From Q construct a line parallel to FK to intersect ray MI at R. As shown in the figure, using O, the center of the rectangle ABCD, as a center of rotation through 180°, one can make the fundamental region for the tiling. Geometric construction of a tessellation from Mirza Akbar architectural scrolls First, divide the right angle into five congruent angles. An arbitrary point P is selected on the first ray counterclockwise. For the radius of the circle inscribed in the decagram, one half of the segment created from the third ray, segment AM, is selected. The following figure illustrates a step-by-step compass-straightedge visual solution to the problem by the author. Note that the way to divide a right angle into five congruent angles is not a part of the instructions provided, because it is considered an elementary step for designers. Examples Girih has been widely applied in architecture. Girih on Persian geometric windows meet the requirements of Persian architecture. The specific types of embellishments utilized in orosi typically linked the windows to the patron's social and political eminence. The more ornate a window, the higher the social and economic status its owner was likely to have. A good example of this is Azad Koliji, a Dowlatabad Garden in Iran. The girih patterns on its window successfully demonstrate multiple layers. The first layer would be the actual garden, of which people can have a glimpse when they open the window. Then there is the first girih pattern on the outside of the window, the carved pattern. Another artificial layer is represented by the colorful glass of the window, whose multicolored layers create the sense of a mass of flowers. This abstract layer forms a clear contradiction with the real layer outside the window, and leaves space for the imagination.
See also Aperiodic tiling Moorish architecture Penrose tiling Tadelakt Topkapı Scroll Zellij References External links Patterns in Arabic Architecture Islamic architectural elements Islamic architecture Tessellation Islamic art Girih Architecture in Iran Iranian art
Girih tiles
[ "Physics", "Mathematics" ]
1,509
[ "Girih", "Tessellation", "Euclidean plane geometry", "Planes (geometry)", "Symmetry" ]
10,040,229
https://en.wikipedia.org/wiki/Green%20infrastructure
Green infrastructure or blue-green infrastructure refers to a network that provides the “ingredients” for solving urban and climatic challenges by building with nature. The main components of this approach include stormwater management, climate adaptation, the reduction of heat stress, increasing biodiversity, food production, better air quality, sustainable energy production, clean water, and healthy soils, as well as more human-centered functions, such as increased quality of life through recreation and the provision of shade and shelter in and around towns and cities. Green infrastructure also serves to provide an ecological framework for the social, economic, and environmental health of the surroundings. More recently, scholars and activists have also called for green infrastructure that promotes social inclusion and equity rather than reinforcing pre-existing structures of unequal access to nature-based services. Green infrastructure is considered a subset of "Sustainable and Resilient Infrastructure", which is defined in standards such as SuRe, the Standard for Sustainable and Resilient Infrastructure. However, green infrastructure can also mean "low-carbon infrastructure" such as renewable energy infrastructure and public transportation systems (see "low-carbon infrastructure"). Blue-green infrastructure can also be a component of "sustainable drainage systems" or "sustainable urban drainage systems" (SuDS or SUDS) designed to manage water quantity and quality, while providing improvements to biodiversity and amenity. Introduction Green infrastructure Nature can be used to provide important services for communities by protecting them against flooding or excessive heat, or helping to improve air, soil and water quality. When nature is harnessed by people and used as an infrastructural system it is called “green infrastructure”. Many such efforts take as their model prairies, where absorbent soil prevents runoff and vegetation filters out pollutants. Green infrastructure occurs at all scales. It is most often associated with green stormwater management systems, which are smart and cost-effective. However, green infrastructure acts as a supplemental component to other related concepts, and ultimately provides an ecological framework for the social, economic, and environmental health of the surroundings. Blue infrastructure "Blue infrastructure" refers to urban infrastructure relating to water. Blue infrastructure is commonly associated with green infrastructure in urban environments and may be referred to as "blue-green infrastructure" when the two are viewed in combination. Rivers, streams, ponds, and lakes may exist as natural features within cities, or be added to an urban environment as an aspect of its design. Coastal urban developments may also utilize pre-existing features of the coastline specifically employed in their design. Harbours, quays, piers, and other extensions of the urban environment are also often added to capture benefits associated with the marine environment. Blue infrastructure can support unique aquatic biodiversity in urban areas, including aquatic insects, amphibians, and water birds. There may be considerable co-benefits to the health and wellbeing of populations with access to blue spaces in the urban context. Accessible blue infrastructure in urban areas is also referred to as blue spaces. Terminology Ideas for green urban structures began in the 1870s with concepts of urban farming and garden allotments.
Alternative terminology includes stormwater best management practices, source controls, and low impact development (LID) practices. Green infrastructure concepts originated in mid-1980s proposals for best management practices that would achieve more holistic stormwater quantity management goals for runoff volume reduction, erosion prevention, and aquifer recharge. In 1987, amendments to the U.S. Clean Water Act introduced new provisions for management of diffuse pollutant sources from urban land uses, establishing the regulatory need for practices that unlike conventional drainage infrastructure managed runoff "at source." The U.S. Environmental Protection Agency (EPA) published its initial regulations for municipal separate storm sewer systems ("MS4") in 1990, requiring large MS4s to develop stormwater pollution prevention plans and implement "source control practices". EPA's 1993 handbook, Urban Runoff Pollution Prevention and Control Planning, identified best management practices to consider in such plans, including vegetative controls, filtration practices and infiltration practices (trenches, porous pavement). Regulations covering smaller municipalities were published in 1999. MS4s serve over 80% of the US population and provide drainage for 4% of the land area. Green infrastructure is a concept that highlights the importance of the natural environment in decisions about land-use planning. However, the term does not have a widely recognized definition. Also known as “blue-green infrastructure”, or “green-blue urban grids” the terms are used by many design-, conservation- and planning-related disciplines and commonly feature stormwater management, climate adaptation and multifunctional green space. The term "green infrastructure" is sometimes expanded to "multifunctional" green infrastructure. Multifunctionality in this context refers to the integration and interaction of different functions or activities on the same piece of land. The EPA extended the concept of “green infrastructure” to apply to the management of stormwater runoff at the local level through the use of natural systems, or engineered systems that mimic natural systems, to treat polluted runoff. This use of the term "green infrastructure" to refer to urban "green" best management practices contributes to the overall health of natural ecosystems, even though it is not central to the larger concept. However, it is apparent that the term “blue-green infrastructure” is applied in an urban context and places a greater emphasis on the management of stormwater as an integral part of creating a sustainable, multifunctional urban environment. At the building level, the term "blue-green architecture" is used, which implements the same principles on a smaller scale. The focus here is on building greening with water management from alternative water resources such as grey water and rainwater. History Green Infrastructure as a term did not appear until the early 1990s, although ideas of Green Infrastructure had been used long before that. The first coined use of the term was seen in a 1994 report by Buddy MacKay, chair of the Florida Greenways Commission, to Florida governor Lawton Chiles about a Green Infrastructure project undertaken in 1991: Florida Greenways Project. MacKay states, "Just as we carefully plan the infrastructure our communities need to support the people who live there—the roads, water and electricity—so must we begin to plan and manage Florida’s green infrastructure”. 
Ancient China Chinese literary gardens are an example of a sustainable lawn that showcased natural beauty in suburban areas. These gardens, dating back to the Shang Dynasty (1600–1046 BC), were designed to allow native plant species to thrive in their natural conditions and appear untouched by humans. This created ecological havens within the city. 8th Century BC - 1st Century BC Greece was an early adopter of the concept of green Infrastructure with the invention of Greek agora. Agoras were meeting spaces that were built for social conversations and allowed Greeks to converse in public. Many were built across Greece, and some incorporated nature as a design aspect, giving nature a space among the public. 5th century - 15th century A common urban habitat, the lawn, consists of short grass and sometimes herbaceous plants. While modern artificial lawns have been connected to a negative environmental impact, lawns in the past have been more sustainable, and they promoted biodiversity and the growth of native plants. These historical lawns are impacting lawn design today to create more sustainable ‘alternative lawns’. In Medieval Europe, lawns rich with flowers and herbaceous plants known as ‘flower meads’ are a good example of a more sustainable lawn. Since then, this idea has been used. In the Edwardian Era, lawns full of thyme, whose flowers attracted insects and pollinators, created biodiversity. A 20th century take on this lawn, the ‘enamelled mead’, has been used in England, and has the purpose of both aesthetics and for stormwater management. During the height of the Renaissance, public areas became more common in new cities and infrastructure. These areas were carefully selected and would often be urban parks and gardens for the public to converse and relax at. Other than social uses, urban parks and gardens were used to improve the aesthetic of the urban environment they were present in. Urban spaces had environmental uses for the implementation of fresh air and reduced urban heating. 17th Century – 18th Century Green Infrastructure can be traced as far back as the 17th century in European society beginning in France. France used the presence of nature to provide social and spatial organization to their towns. Originally, nature in cities was used to provide social areas to interact, and plants were grown in these spaces to provide food in close proximity to the inhabitants. In this period, Large open spaces were used to provide a calm setting that could give "sites of power with sites of sanctity" across France. These sites were used by the French elites to bring rural country town house beauty to their new urban houses in a showcase of power and elaborate display of wealth. The French implemented many different types of infrastructure throughout the 17th century that involved incorporating nature in some shape or form. Another example would be the use of promenades that were used by the French elites to flee the unhealthy living conditions of the cities and to avoid the filthy public areas available to the common folks. These areas were lush gardens that had a wide variety of vegetation and foliage that kept the air clean for the wealthy while allowing them to relax away from the poorer members of French society. 
Again, Mathis goes on to state, "The first cours [or promenades] were established in the capital at the instigation of Marie de Medici: the Mail de l'Arsenal (1604) and above all the Allée du Cours-la-Reine (1616), 1300 mètres long and lined with elms, running along the Seine, from the Tuileries Garden to the high ground of Chaillot," establishing the use of nature as a symbol of power and achievement amongst French royalty and the common people at the time. Keeping and making cities green were at the forefront for city planners in France. They often incorporated design elements blending urbanism and nature, forming a relationship that showcased how the French grew alongside nature and often made it a key aspect of their expansion. In 18th century France, citizens were able to request to have old and battered city walls destroyed to make room for new gardens, vegetation sites, and green walkways. This opened up new areas to the city landscape and incorporated greenery into the new areas where the walls were torn down. Along with this, the town hall as well as the city center were elaborately decorated with different types of vegetation and trees, especially rare and unique species that had been brought from other countries. Mathis goes on to state, "A French-style garden is linked to the town hall to make the view of it more sublime", showing the use of foliage as a way to impress and beautify French cities. 19th Century In 1847, a speech by George Perkins Marsh called attention to negative human impacts such as deforestation. Marsh later wrote Man and Nature in 1864 based on his idea for conserving forests. Around the same time, Henry David Thoreau's 1854 book Walden discussed preservation of nature and applied these ideas to urban planning saying, “I think every town should have a park,” and stated the “importance of preserving some portions of nature herself unimpaired.” Frederick Law Olmsted, a landscape architect, agreed with these ideas and planned many parks, areas of preserved land, and scenic roads, and in 1887, the Emerald Necklace of Boston. The Emerald Necklace is a system of public parks linked by parkways that serves as a home to diverse wildlife and provides environmental benefits such as flood protection and water storage. In Europe, Ebenezer Howard led the garden city movement to balance development with nature. He planned agricultural greenbelts and wide, radiating boulevards surrounded by trees and shrubbery for Victoria, England. One of Howard's concepts was of the "marriage of town and country" to promote sustainable relationships between human society and nature through the planning of garden cities. The US government became more involved in conservation and land preservation in the late 1800s. This was seen in the 1864 legislation to preserve the Yosemite Valley as a California public park, and 8 years later, the United States’ first national park. 20th Century Many industrial leaders in the 19th century had the goal of increasing worker's quality of life through quality sanitation and outdoor activity, which would in turn create increased productivity in the workforce. These ideas carried into the 20th century where efforts in green infrastructure were seen in industrial parks, integrated landscaping, and suburban gardens. The Anaconda Copper Mining Company was responsible for environmental damage in Montana, but a refinery in Great Falls saw this impact and used the surrounding land to create a green open space that was also used for recreation. 
This natural haven included a golf course, flower beds, picnic areas, a lily pond, and pedestrian paths. The role of water: blue spaces and blue infrastructure Proximity and access to water have been key factors in human settlement through history. Water, along with the spaces around it, create a potential for transport, trade, and power generation. They also provide the human population with resources like recreation and tourism in addition to drinking water and food. Many of the world's largest cities are located near water sources, and networks of urban "blue infrastructure", such as canals, harbors and so forth, have been constructed to capture the benefits and minimize risks. Globally, cities are facing severe water uncertainties such as floods, droughts, and upstream activities on trans-boundary rivers. The increasing pressure, intensity, and speed of urbanization has led to the disappearance of any visible form of water infrastructure in most cities. Urban coastal populations are growing, and many cities have seen an extensive post-industrial transformation of canals, riversides, docks, etc. following changes in global trading patterns. The potential implications of such waterside regeneration in terms of public health have only recently been scientifically investigated. A systematic review conducted in 2017 found consistent evidence of positive associations between exposure of people to blue space and mental health and physical activity. One-fifth of the world's population, 1.2 billion people, live in areas of water scarcity. Climate change and water-related disasters will place increasing demands on urban systems and will result in increased migration to urban areas. Cities require a very large input of freshwater and in turn have a huge impact on freshwater systems. Urban and industrial water use is projected to double by 2050. In 2010 the United Nations declared that access to clean water and sanitation is a human right. New solutions for improving the sustainability of cities are being explored. Good urban water management is complex and requires not only water and wastewater infrastructure, but also pollution control and flood prevention. It requires coordination across many sectors, and between different local authorities and changes in governance, that lead to more sustainable and equitable use of urban water resources. Types of green infrastructure Urban forests Urban forests are forests located in cities. They are an important component of urban green infrastructure systems. Urban forests use appropriate tree and vegetation species, instead of noxious and invasive kinds, which reduce the need of maintenance and irrigation. In addition, native species also provide aesthetic value while reducing cost. Diversity of plant species should also be considered in design of urban forests to avoid monocultures; this makes the urban forests more durable and resilient to pests and other harms. Benefits Energy use: According to a study conducted by the Lawrence Berkeley National Laboratory and Sacramento Municipal Utility District, it was found that strategically located shade trees planted around houses can provide up to 47% energy savings for heating and cooling. Urban heat island mitigation: Maximum air temperature for tree groves were found to be lower than that of open areas without trees. 
This cooling is driven principally by evaporative cooling from transpiration, interception of radiation by shading canopies, and increased urban surface roughness, which enhances convective cooling efficiency. Water management: Urban forests help with city water management by diverting stormwater away from water channels. Trees intercept a large amount of the rainfall that falls on them. Property values: In response to demand from residents for more urban greenery, increasing vegetation such as tree cover within urban areas can raise the value of surrounding real estate. Public health: Urban greenery can also improve mental health and well-being. Creating urban forests affects public health in many ways. Urban heat islands are created by the build-up of heat in the materials and infrastructure used in metropolitan areas, which can negatively impact human health. Urban forests provide natural shading structures at a fraction of the cost of artificial shading structures and counter the negative health impacts of rising global temperatures. Beyond countering the negative impacts of man-made infrastructure, green infrastructure has the potential to enhance existing ecosystems and make them more stable, as has historically been done in traditional Japanese agriculture. Green infrastructure in an urbanized area can help restore and enhance the resiliency of an ecosystem to the natural disturbances and disasters that disrupt the lives of residents. Building new urban forests in an existing metropolitan area creates new labor jobs that do not require a high level of education, which can decrease unemployment among the working class and benefit society. Furthermore, green infrastructure helps states implement the principles of the 1992 Rio Declaration on Environment and Development, which was designed to alleviate the social and economic consequences of environmental degradation. Constructed wetlands Constructed wetlands are manmade wetlands, which work as a bio-filtration system. They contain wetland vegetation and are mostly built on uplands and floodplains. Constructed wetlands are built this way to avoid connection with, or damage to, natural wetlands and other aquatic resources. There are two main categories of constructed wetlands: subsurface flow systems and free water surface systems. Proper planning and operation can help avoid harm to the wetlands caused by alteration of the natural hydrology and the introduction of invasive species. Benefits Water efficiency: Constructed wetlands try to replicate natural wetland ecosystems. They are built to improve water efficiency and water quality. They also create wildlife habitats by using the natural processes of plants, soils, and associated microorganisms. In these types of wetlands, vegetation can trap part of the suspended solids and slow down water flow; the microorganisms that live there process other pollutants. Cost-effective: Wetlands have low operating and maintenance costs. They can also help with fluctuating water levels. Aesthetically, constructed wetlands add greenery to their surrounding environment. They also help to reduce the unpleasant odors of wastewater. Green and blue roofs Green roofs improve air and water quality while reducing energy costs. The implementation of green roofs in some regions has correlated with increased albedo, providing slightly cooler temperatures and thus lower energy consumption.
The plants and soil provide more green space and insulation on roofs. Green and blue roofs also help reduce city runoff by retaining rainfall, providing a potential solution for stormwater management in densely built urban areas. A social benefit of green roofs is rooftop agriculture for residents. Green roofs also sequester rainwater and carbon pollution. Forty to eighty percent of the total volume of rain that falls on green roofs can be retained. The water released from the roofs flows at a slow pace, reducing the amount of runoff entering the watershed at any one time. Blue roofs, while not technically green infrastructure, collect and store rainfall, reducing the inrush of runoff water into sewer systems. Blue roofs use detention ponds, or detention basins, to collect rainfall before it is drained into waterways and sewers at a controlled rate. As well as saving energy by reducing cooling expenses, blue roofs reduce the urban heat island effect when coupled with reflective roofing material. Rain gardens Rain gardens are a form of stormwater management using water capture. Rain gardens are shallow depressed areas in the landscape, planted with shrubs and plants, that collect rainwater from roofs or pavement and allow the stormwater to infiltrate slowly into the ground. Ubiquitous lawn grass is not a solution for controlling runoff, so an alternative is required to reduce urban and suburban first-flush (highly toxic) runoff and to slow the water down for infiltration. In residential applications, water runoff can be reduced by 30% with the use of rain gardens in the homeowner's yard. A minimum size of 150 sq. ft. up to a range of 300 sq. ft. is the usual size considered for a private residence. The cost per square foot is about $5–$25, depending on the type of plants used and the slope of the property. Native trees, shrubs, and herbaceous perennials of wetland and riparian zones are the most useful for runoff detoxification. Downspout disconnection Downspout disconnection is a form of green infrastructure that separates roof downspouts from the sewer system and redirects roof water runoff onto permeable surfaces. It can be used to store stormwater or to allow the water to penetrate the ground. Downspout disconnection is especially beneficial in cities with combined sewer systems. With high volumes of rain, downspouts on buildings can send 12 gallons of water a minute into the sewer system, which increases the risk of basement backups and sewer overflows. To reduce the amount of rainwater entering combined sewer systems, agencies such as the Milwaukee Metropolitan Sewerage District have amended regulations to require downspout disconnection in residential areas. Bioswales Bioswales are stormwater runoff systems that provide an alternative to traditional storm sewers. Much like rain gardens, bioswales are vegetated or mulched channels commonly placed in long, narrow spaces in urban areas. They absorb flows or carry stormwater runoff from heavy rains into sewer channels or directly to surface waters. Vegetated bioswales infiltrate, slow down, and filter stormwater flows and are most beneficial along streets and parking lots.
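As a rough illustration of the figures quoted above for rain gardens and downspouts (a roughly 30% residential runoff reduction, typical sizes of 150–300 sq. ft., costs of about $5–$25 per square foot, and downspouts delivering around 12 gallons per minute in heavy rain), the back-of-the-envelope arithmetic might be sketched as follows; the function names and example inputs are hypothetical, not a design standard.

```python
# Hypothetical sizing/cost arithmetic using the ballpark figures quoted above.
# These are illustrative estimates, not engineering guidance.

def rain_garden_cost_range(area_sqft: float,
                           low_cost: float = 5.0,
                           high_cost: float = 25.0) -> tuple[float, float]:
    """Return the (low, high) installation cost for a rain garden of a given area."""
    return area_sqft * low_cost, area_sqft * high_cost

def annual_runoff_after_rain_garden(annual_runoff_gal: float,
                                    reduction: float = 0.30) -> float:
    """Apply the ~30% residential runoff reduction cited above."""
    return annual_runoff_gal * (1.0 - reduction)

def storm_inflow_gallons(minutes_of_heavy_rain: float,
                         downspouts: int,
                         gal_per_min: float = 12.0) -> float:
    """Water a building's downspouts would send to the sewer during a storm."""
    return minutes_of_heavy_rain * downspouts * gal_per_min

if __name__ == "__main__":
    low, high = rain_garden_cost_range(200)  # a 200 sq ft garden
    print(f"Installation cost: ${low:,.0f}-${high:,.0f}")
    print(f"Runoff after rain garden: {annual_runoff_after_rain_garden(10_000):,.0f} gal/yr")
    print(f"Sewer inflow in a 30-min storm (2 downspouts): {storm_inflow_gallons(30, 2):,.0f} gal")
```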
Green alleys The Trust for Public Land is working in partnership with the City of Los Angeles' Community Redevelopment Agency, Bureau of Sanitation, the University of Southern California's Center for Sustainable Cities, and Jefferson High School by converting the existing 900 miles of alleys in the city to green alleys. The concept is to re-engineer existing alleyways to reflect more light to mitigate heat island effect, capture storm water, and make the space beautiful and usable by the neighboring communities. The first alley, completed in 2015, saved more than 750,000 gallons in its first year. The Green alleys will provide open space on top of these ecological benefits, converting spaces which used to feel unsafe, or used for dumping into a playground, and walking/biking corridor. Green school yards The Trust for Public Land has completed 183 green school yards across the 5 boroughs in New York. Existing asphalt school yards are converted to a more vibrant and exciting place while also incorporating infrastructure to capture and store rainwater: rain garden, rain barrel, tree groves with pervious pavers, and an artificial field with a turf base. The children are engaged in the design process, lending to a sense of ownership and encourages children to take better care of their school yard. Success in New York has allowed other cities like Philadelphia and Oakland to also convert to green school yards. Low-impact development Low-impact development (also referred to as green stormwater infrastructure) are systems and practices that use or mimic natural processes that result in the infiltration, evapotranspiration or use of stormwater in order to protect water quality and associated aquatic habitat. LID practices aim to preserve, restore and create green space using soils, vegetation, and rainwater harvest techniques. It is an approach to land development (or re-development) that works with nature to manage stormwater as close to its source as possible. Many low impact development tools integrate vegetation or the existing soil to reduce runoff and let rainfall enter the natural water cycle. Planning approach The Green Infrastructure approach analyses the natural environment in a way that highlights its function and subsequently seeks to put in place, through regulatory or planning policy, mechanisms that safeguard critical natural areas. Where life support functions are found to be lacking, plans may propose how these can be put in place through landscaped and/or engineered improvements. Within an urban context, this can be applied to re-introducing natural waterways and making a city self-sustaining particularly with regard to water, for example, to harvest water locally, recycle it, re-use it and integrate stormwater management into everyday infrastructure. The multi-functionality of this approach is key to the efficient and sustainable use of land, especially in a compact and bustling country such as England where pressures on land are particularly acute. An example might be an urban edge river floodplain which provides a repository for flood waters, acts as a nature reserve, provides a recreational green space and could also be productively farmed (probably through grazing). There is growing evidence that the natural environment also has a positive effect on human health. 
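The low-impact development and planning passages above describe reducing runoff by limiting impervious cover and managing stormwater near its source. As a purely illustrative sketch of that effect, the following uses the rational method (Q = C·i·A), a standard hydrology approximation not taken from this article; the runoff coefficients, design storm, and site area are assumed values.

```python
# A minimal sketch of how reduced imperviousness lowers peak runoff, using the
# standard rational method (Q = C * i * A). The coefficients and storm below are
# common textbook-style assumptions, not figures from this article.

def peak_runoff_cfs(c: float, intensity_in_per_hr: float, area_acres: float) -> float:
    """Rational method: approximate peak discharge in cubic feet per second."""
    return c * intensity_in_per_hr * area_acres

conventional_c = 0.85   # assumed composite runoff coefficient for a mostly paved site
lid_c = 0.45            # assumed coefficient after permeable paving, bioswales, trees
storm_intensity = 1.0   # inches per hour, an assumed design storm
area = 10.0             # acres, an assumed site size

q_before = peak_runoff_cfs(conventional_c, storm_intensity, area)
q_after = peak_runoff_cfs(lid_c, storm_intensity, area)
print(f"Peak runoff before LID: {q_before:.1f} cfs, after LID: {q_after:.1f} cfs")
```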
United Kingdom In the United Kingdom, Green Infrastructure planning is increasingly recognised as a valuable approach for spatial planning and is now seen in national, regional and local planning and policy documents and strategies, for example in the Milton Keynes and South Midlands Growth area. In 2009, guidance on green infrastructure planning was published by Natural England. This guidance promotes the importance of green infrastructure in 'place-making', i.e. in recognizing and maintaining the character of a particular location, especially where new developments are planned. In North West England the former Regional Spatial Strategy had a specific Green Infrastructure Policy (EM3 – Green Infrastructure) as well as other references to the concept in other land use development policies (e.g. DP6). The policy was supported by the North West Green Infrastructure Guide. The Green Infrastructure Think Tank (GrITT) provides the support for policy development in the region and manages the web site that acts as a repository for information on Green Infrastructure. The Natural Economy Northwest programme has supported a number of projects, commissioned by The Mersey Forest to develop the evidence base for green infrastructure in the region. In particular work has been undertaken to look at the economic value of green infrastructure, the linkage between grey and green infrastructure and also to identify areas where green infrastructure may play critical role in helping to overcome issues such as risks of flood or poor air quality. In March 2011, a prototype Green Infrastructure Valuation Toolkit was launched. The Toolkit is available under a Creative Commons license, and provides a range of tools that provide economic valuation of green infrastructure interventions. The toolkit has been trialled in a number of areas and strategies, including the Liverpool Green Infrastructure Strategy. In 2012, the Greater London Authority published the All London Green Grid Supplementary Planning Guidance (ALGG SPG) which proposes an integrated network of green and open spaces together with the Blue Ribbon Network of rivers and waterways. The ALGG SPG aims to promote the concept of green infrastructure, and increase its delivery by boroughs, developers, and communities, to benefit areas such as sustainable travel, flood management, healthy living and the economic and social uplift these support. Green Infrastructure is being promoted as an effective and efficient response to projected climate change. Green Infrastructure may include geodiversity objectives. United States Green infrastructure programs managed by EPA and partner organizations are intended to improve water quality generally through more extensive management of stormwater runoff. The practices are expected to reduce stress on traditional water drainage infrastructure--storm sewers and combined sewers—which are typically extensive networks of underground pipes and/or surface water channels in U.S. cities, towns and suburban areas. Improved stormwater management is expected to reduce the frequency of combined sewer overflows and sanitary sewer overflows, reduce the impacts of urban flooding, and provide other environmental benefits. Though green infrastructure is yet to become a mainstream practice, many US cities have initiated its implementation to comply with their MS4 permit requirements. For example, the City of Philadelphia has installed or supported a variety of retrofit projects in neighborhoods throughout the city. 
Installed improvements include permeable pavements in parks, basketball courts, and parking lots; rain gardens and bioretention systems at schools and other public facilities; and constructed wetlands for the management of stormwater runoff. Some of these facilities reduce the volume of runoff entering the city's aging combined sewer system, and thereby reduce the extent of system overflows during rainstorms. Another U.S. example is the State of Maryland's promotion of a program called "GreenPrint." GreenPrint Maryland is the first web-enabled map in the nation that shows the relative ecological importance of every parcel of land in the state. Combining color-coded maps, information layers, and aerial photography with public openness and transparency, GreenPrint Maryland applies the best environmental science and Geographic Information Systems (GIS) to the urgent work of preserving and protecting environmentally critical lands. It is a valuable tool not only for making land conservation decisions today, but also for building a broader and better informed public consensus for sustainable growth and land preservation decisions into the future. The program was established in 2001 with the objective to "preserve an extensive intertwined network of lands vital to the long-term protection of the State's natural resources, in concert with other Smart Growth initiatives." In April 2011, EPA announced the Strategic Agenda to Protect Waters and Build More Livable Communities through Green Infrastructure and the selection of the first ten communities to be green infrastructure partners. The communities selected were: Austin, Texas; Chelsea, Massachusetts; the Northeast Ohio Regional Sewer District (Cleveland, Ohio); the City and County of Denver, Colorado; Jacksonville, Florida; Kansas City, Missouri; Los Angeles, California; Puyallup, Washington; Onondaga County and the City of Syracuse, New York; and Washington, D.C. The Federal Emergency Management Agency (FEMA) is also promoting green infrastructure as a means of managing urban flooding (also known as localized flooding). Singapore Since 2009, two editions of the ABC (Active, Beautiful, Clean) Waters Design Guidelines have been published by the Public Utilities Board, Singapore. The latest version (2011) contains planning and design considerations for the holistic integration of drains, canals and reservoirs with the surrounding environment. The Public Utilities Board encourages stakeholders such as landowners and private developers to incorporate ABC Waters design features into their developments, and encourages the community to embrace these infrastructures for recreational and educational purposes. The main benefits outlined in the ABC Waters Concept include: treating stormwater runoff closer to the source naturally, using plants and soil media rather than chemicals, so that cleaner water is discharged into waterways and eventually into the reservoirs; enhancing biodiversity and site aesthetics; and bringing people closer to water by creating new recreational and community spaces for people to enjoy. Other states A 2012 paper by the Overseas Development Institute reviewed evidence of the economic impacts of green infrastructure in fragile states. Upfront construction costs for GI were up to 8% higher than for non-green infrastructure projects. Climate finance was not adequately captured by fragile states for GI investments, and governance issues may further hinder their capability to take full advantage.
GI Investments needed strong government participation as well as institutional capacities and capabilities that fragile states may not possess. Potential poverty reduction includes improved agricultural yields and higher rural electrification rates, benefits that can be transmitted to other sectors of the economy not directly linked to the GI investment. Whilst there are examples of GI investments creating new jobs in a number of sectors, it is unclear what the employment opportunities advantages are in respect to traditional infrastructure investments. The correct market conditions (i.e. labour regulations or energy demand) are also required in order to maximise employment creation opportunities. Such factors that may not be fully exploited by fragile state governments lacking the capacity to do so. GI investments have a number of co-benefits including increased energy security and improved health outcomes, whilst a potential reduction of a country's vulnerability to the negative effects of climate change being arguably the most important co-benefit for such investments in a fragile state context. There is some evidence that GI options are taken into consideration during project appraisal. Engagement mostly occurs in projects specifically designed with green goals, hence there is no data showing decision making that leads to a shift towards any green alternative. Comparisons of costs, co-benefits, poverty reduction benefits or employment creation benefits between the two typologies are also not evident. Currently, an international standard for green infrastructure is developed: SuRe – The Standard for Sustainable and Resilient Infrastructure is a global voluntary standard which integrates key criteria of sustainability and resilience into infrastructure development and upgrade. SuRe is developed by the Swiss Global Infrastructure Basel Foundation and the French bank Natixis as part of a multi-stakeholder process and will be compliant with ISEAL guidelines. The foundation has also developed the SuRe SmartScan, a simplified version of the SuRe Standard which serves as a self-assessment tool for infrastructure project developers. It provides them with a comprehensive and time-efficient analysis of the various themes covered by the SuRe Standard, offering a solid foundation for projects that are planning to become certified by the SuRe Standard in the future. Upon completion of the SmartScan, project developers receive a spider diagram evaluation, which indicates their project's performance in the different themes and benchmarks the performances with other SmartScan assessed projects. Examples Beijing, China A good example of green infrastructure principles being applied at landscape scale is the Beijing Olympic site. First developed for the 2008 Summer Olympics but used also for the 2022 Winter Olympics, the Beijing Olympic site covers a large area of brownfield redevelopment in the northern sector of the city between the 4th and 5th ring roads. The central green infrastructure feature of the Olympic site is the "Dragon-shaped river" – a complex of retention basins and wetlands covering more than a half million square metres configured to look from the air like a traditional Chinese dragon. In addition to referencing Chinese culture, the system is capable of significantly reducing nutrient loads from influent waters, which are provided by a nearby wastewater recycling facility. 
Surrey, British Columbia Farmers claimed that flooding of their farmlands was caused by suburban development upstream. The flooding was a result of runoff funneled into storm drains by impervious cover, which ran unmitigated and unabsorbed into their farmlands downstream. The farmers were awarded an undisclosed amount of money in the tens of millions as compensation. Low-density and highly paved residential communities redirect stormwater from impervious surfaces and pipes to streams at velocities much greater than predevelopment rates. Not only are these practices environmentally damaging, they can be costly and inefficient to maintain. In response, the city of Surrey opted to employ a green infrastructure strategy and chose a 250-hectare site called East Clayton as a demonstration project. The approach reduced the stormwater flowing downstream and allows rainwater to infiltrate closer to, if not at, its point of origin. As a result, the stormwater system at East Clayton can hold one inch of rainfall per day, which accounts for 90% of the annual rainfall. The incorporation of green infrastructure in Surrey, British Columbia, created a sustainable environment that diminishes runoff and saved around $12,000 per household. Nya Krokslätt, Sweden The site of the former Nya Krokslätt factory is situated between a mountain and a stream. The Danish engineering firm Ramboll designed a concept for slowing down and guiding stormwater in the area with methods such as vegetation combined with ponds, streams and soak-away pits, as well as glazed green-blue climate zones surrounding the buildings, which delay and clean roof water and greywater. The design concept provides for a multifunctional, rich urban environment, which includes not only technical solutions for energy-efficient buildings but also encompasses the implementation of blue-green infrastructure and ecosystem services in an urban area. Zürich, Switzerland Since 1991, the city of Zürich has had a law stating that all flat roofs (unless used as terraces) must be greened. The main advantages of this policy include increased biodiversity, rainwater storage and outflow delay, and micro-climatic compensation (temperature extremes, radiation balance, evaporation and filtration efficiency). Roof biotopes are stepping stones which, together with the earthbound green areas and the seeds distributed by wind and birds, make an important contribution to the urban green infrastructure. Duisburg-Nord, Germany In the old industrial area of the Ruhr District in Germany, Duisburg-Nord is a landscape park which incorporates former industrial structures and natural biodiversity. The architects Latz + Partner developed the water park, which now consists of the old River Emscher subdivided into five main sections: the Klarwasserkanal (Clear Water Canal), the Emschergraben (Dyke), the Emscherrinne (Channel), the Emscherschlucht (Gorge) and the Emscherbach (Stream). The open wastewater canal of the "Old Emscher" river is now fed gradually by rainwater collected through a series of barrages and water shoots. This gradual supply means that, even in lengthy dry spells, water can be supplied to the Old Emscher to replenish the oxygen levels. This has allowed the canalised river bed to become a valley with possibilities for nature development and recreation.
As a key part of the ecological objectives, much of the overgrown areas of the property were included in the plan as they were found to contain a wide diversity of flora and fauna, including threatened species from the red list. Another important theme in the development of the plan was to make the water system visible, in order to stimulate a relationship between visitors and the water. New York Sun Works Center, US The Greenhouse Project was started in 2008 by a small group of public school parents and educators to facilitate hands-on learning, not only to teach about food and nutrition, but also to help children make educated choices regarding their impact on the environment. The laboratory is typically built as a traditional greenhouse on school rooftops and accommodates a hydroponic urban farm and environmental science laboratory. It includes solar panels, hydroponic growing systems, a rainwater catchment system, a weather station and a vermi composting station. Main topics of education include nutrition, water resource management, efficient land use, climate change, biodiversity, conservation, contamination, pollution, waste management, and sustainable development. Students learn the relationship between humans and the environment and gain a greater appreciation of sustainable development and its direct relationship to cultural diversity. Hammarby Sjöstad, Stockholm, Sweden In the early 1990s, Hammarby Sjöstad had a reputation for being a run-down, polluted and unsafe industrial and residential area. Now, it is a new district in Stockholm where the city has imposed tough environmental requirements on buildings, technical installations and the traffic environment. An ‘eco-cycle’ solution named the Hammarby Model, developed by Fortum, Stockholm Water Company and the Stockholm Waste Management Administration, is an integral energy, waste and water system for both housing and offices. The goal is to create a residential environment based on sustainable resource usage. Examples include waste heat from the treated wastewater being used for heating up the water in the district heating system, rainwater runoff is returned to the natural cycle through infiltration in green roofs and treatment pools, sludge from the local wastewater treatment is recycled as fertiliser for farming and forestry. This sustainable model has been a source of inspiration to many urban development projects including the Toronto (Canada) Waterfront, London's New Wembley, and a number of cities/city areas in China. Emeryville, California, US EPA supported the city of Emeryville, California in the development of "Stormwater Guidelines for Green, Dense Redevelopment." Emeryville, which is a suburb of San Francisco, began in the 1990s reclaiming, remediating and redeveloping the many brownfields within its borders. These efforts sparked a successful economic rebound. The city did not stop there, and decided in the 2000s to harness the redevelopment progress for even better environmental outcomes, in particular that related to stormwater runoff, by requiring in 2005 the use of on-site GI practices in all new private development projects. The city faced several challenges, including a high water table, tidal flows, clay soils, contaminated soil and water, and few absorbent natural areas among the primarily impervious, paved parcels of existing and redeveloped industrial sites. The guidelines, and an accompanying spreadsheet model, were developed to make as much use of redevelopment sites as possible for handling stormwater. 
The main strategies fell into several categories: Reducing the need, space and stormwater impact of motor vehicle parking by way of increased densities, height limits and floor area ratios; shared, stacked, indoor and unbundled automobile parking; making the best use of on-street parking and pricing strategies; car-sharing; free citywide mass transit; requiring one secure indoor bicycle parking space per bedroom and better bicycle and pedestrian roadway infrastructure. Sustainable landscape design features, such as tree preservation and minimum rootable soil volumes for new tree planting, use of structural soils, suspended paving systems, bioretention and biofiltration strategies and requiring the use of the holistic practices of Bay-Friendly Landscaping. Water storage and harvesting through cisterns and rooftop containers. Other strategies to handle or infiltrate water on development and redevelopment sites. Gowanus Canal Sponge Park, New York, US The Gowanus Canal, in Brooklyn, New York, is bounded by several communities including Park Slope, Cobble Hill, Carroll Gardens, and Red Hook. The canal empties into New York Harbor. Completed in 1869, the canal was once a major transportation route for the then separate cities of Brooklyn and New York City. Manufactured gas plants, mills, tanneries, and chemical plants are among the many facilities that operated along the canal. As a result of years of discharges, storm water runoff, sewer outflows, and industrial pollutants, the canal has become one of the nation's most extensively contaminated water bodies. Contaminants include PCBs, coal tar wastes, heavy metals, and volatile organics. On March 2, 2010, EPA added the canal to its Superfund National Priorities List (NPL). Placing the canal on the list allows the agency to further investigate contamination at the site and develop an approach to address the contamination. After the NPL designation, several firms tried to redesign the area surrounding the canal to meet EPA's principles. One of the proposals was the Gowanus Canal Sponge Park, suggested by Susannah Drake of DLANDstudio, an architecture and landscape architecture firm based in Brooklyn. The firm designed a public open space system that slows, absorbs, and filters surface water runoff with the goal of remediating contaminated water, activating the private canal waterfront, and revitalizing the neighborhood. The unique feature of the park is its character as a working landscape that means the ability to improve the environment of the canal over time while simultaneously supporting public engagement with the canal ecosystem. The park was cited in a professional award by the American Society of Landscape Architects (ASLA), in the Analysis and Planning category, in 2010. Lafitte Greenway, New Orleans, Louisiana, US The Lafitte Greenway in New Orleans, Louisiana, is a post-Hurricane Katrina revitalization effort that utilizes green infrastructure to improve water quality as well as support wildlife habitat. The site was previously an industrial corridor that connected the French Quarter to Bayou St. John and Lake Pontchartrain. Part of the revitalization plan was to incorporate green infrastructure for environmental sustainability. One strategy to mitigate localized flooding was to create recreation fields that are carved out to hold water during times of heavy rains. Another strategy was to restore the native ecology of the corridor, giving special attention to the ecotones that bisect the site. 
The design proposed retrofitting historic buildings with stormwater management techniques, such as rainwater collection systems, allowing the historic buildings to be preserved. This project received the Award of Excellence from the ASLA in 2013. Geographic information system applications A geographic information system (GIS) is a computer system that allows users to capture, store, display, and analyze all kinds of spatial data on Earth. GIS can gather multiple layers of information on a single map, covering streets, buildings, soil types, vegetation, and more. Planners can combine or calculate useful information, such as the impervious area percentage or vegetation coverage of a specific region, to design or analyze the use of green infrastructure (a minimal illustrative calculation is sketched below). The continued development of geographic information systems and their increasing level of use are particularly important in the development of Green Infrastructure plans. The plans are frequently based on GIS analysis of many layers of geographic information. Green Infrastructure Master Plan According to the "Green Infrastructure Master Plan" developed by Hawkins Partners, civil engineers use GIS to model impervious surfaces with historical Nashville rainfall data within the CSS (combined sewer system) to find the current rates of runoff. GIS can help planning teams analyze potential volume reductions in a specific region for green infrastructure, including water harvesting, green roofs, urban trees, and structural control measures. Implementation Barriers Lack of funding is consistently cited as a barrier to the implementation of green infrastructure. One advantage that green infrastructure projects offer, however, is that they generate so many benefits that they can compete for a variety of diverse funding sources. Some tax incentive programs administered by federal agencies can be used to attract financing to green infrastructure projects. Here are two examples of programs whose missions are broad enough to support green infrastructure projects: The U.S. Department of Energy administers a range of energy efficiency tax incentives, and green infrastructure could be integrated into project design to claim the incentive. An example of how this might work is found in Oregon's Energy Efficiency Construction Credits. In Eugene, Oregon, a new biofuel station built on an abandoned gas station site included a green roof, bioswales and rain gardens. In this case, nearly $250,000 worth of tax credits reduced income and sales tax for the private company that built and operated the project. The U.S. Department of the Treasury administers the multibillion-dollar New Markets Tax Credit Program, which encourages private investment in a range of project types (typically real estate or business development projects) in distressed areas. Awards are allocated to non-profit and private entities based on their proposals for distributing these tax benefits. Benefits Some people might expect that green spaces are extravagant and excessively difficult to maintain, but high-performing green spaces can provide tangible economic, ecological, and social benefits. For example: Urban forestry in an urban environment can supplement stormwater management and reduce associated energy usage costs and runoff. Bioretention systems can be applied to the creation of a green transportation system.
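As a minimal illustration of the GIS overlay calculation described in the geographic information system section above (impervious area percentage and vegetation coverage for a region of interest), the following sketch uses an invented land-cover grid and class codes; a real workflow would read these layers from GIS data such as GeoTIFFs.

```python
# Hypothetical land-cover raster; the class codes are invented for illustration:
# 1 = pavement/roof (impervious), 2 = vegetation, 3 = water, 0 = other.
import numpy as np

land_cover = np.array([
    [1, 1, 2, 2, 3],
    [1, 1, 2, 2, 2],
    [1, 2, 2, 0, 2],
    [1, 1, 0, 2, 2],
])

# Boolean mask marking the planner's region of interest within the grid.
region = np.ones_like(land_cover, dtype=bool)

cells_in_region = region.sum()
impervious_pct = 100.0 * ((land_cover == 1) & region).sum() / cells_in_region
vegetation_pct = 100.0 * ((land_cover == 2) & region).sum() / cells_in_region

print(f"Impervious area: {impervious_pct:.1f}%")   # 35.0% for this toy grid
print(f"Vegetation cover: {vegetation_pct:.1f}%")  # 50.0% for this toy grid
```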
Rain gardens planted with native wetland and riparian species, as described above, can cut residential runoff and detoxify first-flush flows. As a result, high-performing green spaces work to create a balance between built and natural environments. A higher abundance of green space in communities or neighbourhoods, for example, has been observed to promote participation in physical activities among elderly men, while more green space around one's house is associated with improved mental health. In addition to these benefits, recent studies have shown that residents highly value the experiential aspects of green infrastructure, emphasizing the importance of aesthetics, wellbeing, and a sense of place. This focus on cultural ecosystem services suggests that the design and implementation of green infrastructure should prioritize these elements, as they significantly contribute to the community's perception of value and overall quality of life. Economic effects A 2012 study of 479 green infrastructure projects across the United States found that 44% of the projects reduced costs, compared to 31% that increased costs. The most notable cost savings were due to reduced stormwater runoff and decreased heating and cooling costs. Green infrastructure is often cheaper than conventional water management strategies. The city of Philadelphia, for example, determined that a comprehensive green infrastructure plan would cost about $1.2 billion over 25 years, compared with more than $6 billion for a "grey" infrastructure plan (concrete tunnels built to move water). Under the new green infrastructure plan it is expected that: 250 people will be employed annually in green jobs; up to 1.5 billion pounds of carbon dioxide emissions will be avoided or absorbed through green infrastructure each year (the equivalent of removing close to 3,400 vehicles from roadways); air quality will improve due to the new trees, green roofs, and parks; communities will benefit socially and in terms of health, with about 20 deaths due to asthma avoided and 250 fewer work or school days missed; and deaths due to excessive urban heat could be cut by 250 over 20 years. The new greenery will also increase property values by $390 million over 45 years, boosting the property taxes the city takes in. A green infrastructure plan in New York City is expected to cost $1.5 billion less than a comparable grey infrastructure approach. The green stormwater management systems alone will save $1 billion, at a cost of about $0.15 less per gallon. The sustainability benefits in New York City range from $139–418 million over the 20-year life of the project.
This green plan estimates that "every fully vegetated acre of green infrastructure would provide total annual benefits of $8,522 in reduced energy demand, $166 in reduced CO2 emissions, $1,044 in improved air quality, and $4,725 in increased property value," a total of roughly $14,500 per vegetated acre per year. In addition to infrastructure plans offering economic and health benefits, a 2016 study in the United Kingdom analyzed residents' "willingness to pay" for green infrastructure. Its findings concluded that "investment in urban [green infrastructure] that is visibly greener, that facilitates access to [green infrastructure] and other amenities, and that is perceived to promote multiple functions and benefits on a single site (i.e. multi-functionality) generate higher [willingness-to-pay] values." The willingness-to-pay effect reflects the idea that living spaces that combine functionality and aesthetics are more likely to attract social and economic capital. By incentivising residents to invest in green infrastructure within their own communities, increased revenue can be generated and used to fund further green infrastructure, ultimately increasing the economic viability of future projects. Environmental Justice Impacts In cities such as Chicago, green infrastructure projects are aimed at enhancing the environment through sustainability and livability, but they often create social justice concerns such as gentrification. This frequently happens when urban green spaces added in lower-income communities attract wealthier residents, which causes property values to increase and displaces the existing residents of those communities. The impacts of gentrification vary depending on the community and on the type, size, and location of the infrastructure implemented, such as green spaces and transportation corridors, which reshape the demographic and economic landscape of the community. The challenge of incorporating more green infrastructure in a way that advances social justice often stems from how governments fund and deliver projects. Many of the projects are managed by nonprofits, so social equity is not the focus and the necessary skills are not acquired, which creates larger social justice problems such as a decrease in affordable housing. This produces a focus on environmental and recreational improvements and neglects the socioeconomic dimensions of sustainability. The planning process for infrastructure should consider environmental outcomes while also integrating social equity considerations. The impacts of green gentrification on local communities can ultimately contradict the benefits initially brought by sustainable and green infrastructure. Green infrastructure such as increased green space or walkability in cities can potentially improve the well-being of individuals living within the communities, but often at the expense of displacing homeless populations or residents with limited access to housing who live in the areas slated for urban improvement. In order to combat the negative effects of gentrification occurring as a byproduct of haphazard implementation of green infrastructure, various "critical barriers" that prevent affordable housing must be addressed.
Five major barriers that need to be addressed in future policies and legislation for communities are, "green retrofit-related; land market-related; incentive-related; housing market-related and infrastructural-related barriers." The success of implementing green infrastructure within communities that have experienced environmental injustice, like excess exposure to pollution or affordable housing, is dependent on the interaction and collaboration of project managers overseeing green infrastructure sites alongside community residents. The most prominent concerns raised by residents in a community in New Jersey cited concerns regarding the maintenance and upkeep of future green stormwater infrastructure (GSI), the necessity for future GSI projects to be multifaceted rather than universal amongst communities, and advocacy for environmental justice to be implemented within project outlines, as "GSI projects, as part of broader community greening initiatives, do not automatically guarantee EJ and health equity, which may be absent in many shrinking cities." It is important to comprehend the environmental and economical capabilities that green infrastructure can provide, but the environmental inequity in respect to being able to access these spaces must be considered in application of green infrastructure within communities. The imperative need to focus on communities with less accessibility to ecosystem services and green infrastructure is a major part of ensuring all communities and residents feel the benefits and effects of implementation. Initiatives One program that has integrated green infrastructure into construction projects worldwide is the Leadership in Energy and Environmental Design (LEED) certification. This system offers a benchmark rating for green buildings and neighborhoods, credibly quantifying a project's environmental responsibility. The LEED program incentivizes development that uses resources efficiently. For example, it offers specific credits for reducing indoor and outdoor water use, optimizing energy performance, producing renewable energy, and minimizing or recycling project waste. Two LEED initiatives that directly promote the use of green infrastructure include the rainwater management and heat island reduction credits. An example of a successfully LEED-certified neighborhood development is the 9th and Berks Street transit-oriented development (TOD) in Philadelphia, Pennsylvania, which achieved a Platinum level rating on October 12, 2017. Another approach to implementing green infrastructure has been developed by the International Living Future Institute. Their Living Community Challenge assesses a community or city in twenty different aspects of sustainability. Notably, the Challenge considers whether the development achieves net positive water and energy uses and utilizes replenishable materials. See also European green infrastructure Green belt Land recycling Permaculture Recycling infrastructure Street reclamation Sustainable architecture Sustainable engineering Baubotanik (building method – Germany) Urban vitality Artificialization References Notes Further reading . Design manuals published by state and local agencies. Natural Resources Defense Council (2011). Rooftops to Rivers II: Green Strategies for Controlling Stormwater and Combined Sewer Overflows City of Philadelphia (2009). "Green City Clean Waters" City of Nashville & Davidson County (2009). "Green Infrastructure Design" City of Chicago (2010). 
"Green Alley Handbook" Southeast Tennessee Development District (2011). "Green Infrastructure Handbook" Center for Green Infrastructure Design (2011). "Why is Green Infrastructure Important?" Center for Green Infrastructure Design (2011). "The Benefits of Green Infrastructure" External links PaveShare – Permeable Paver Design Green Infrastructure Case Studies The Conservation Fund Save the Rain – Onondaga County, NY Maryland's Green Infrastructure- Maryland Department of Natural Resources Sonoran Desert Conservation Plan – Pima County, Arizona The Center for Green Infrastructure Design – The Center for Green Infrastructure Design Green Infrastructure Wiki Low Impact Development – The Low Impact Development Center (US) Green Infrastructure Resource Center – American Society of Landscape Architects Gowanus Sponge Park – ASLA award winner project Global Infrastructure Basel Foundation (GIB) Environmental engineering Hydrology and urban planning Landscape Sustainable urban planning Water pollution Biodiversity Sustainable technologies
Green infrastructure
[ "Chemistry", "Engineering", "Biology", "Environmental_science" ]
11,561
[ "Hydrology", "Chemical engineering", "Water pollution", "Civil engineering", "Hydrology and urban planning", "Biodiversity", "Environmental engineering" ]
10,043,708
https://en.wikipedia.org/wiki/Glycoprotein%20Ib
Glycoprotein Ib (GPIb), also known as CD42, is a component of the GPIb-V-IX complex on platelets. The GPIb-V-IX complex binds von Willebrand factor, allowing platelet adhesion and platelet plug formation at sites of vascular injury. Glycoprotein Ibα (GPIbα) is the major ligand-binding subunit of the GPIb-V-IX complex. GPIbα is heavily glycosylated. It is deficient in the Bernard–Soulier syndrome. A gain-of-function mutation causes platelet-type von Willebrand disease. Autoantibodies against Ib/IX can be produced in immune thrombocytopenic purpura. Components include GP1BA and GP1BB. It complexes with Glycoprotein IX. References External links Glycoproteins
Glycoprotein Ib
[ "Chemistry" ]
189
[ "Glycoproteins", "Glycobiology", "Protein stubs", "Biochemistry stubs" ]
12,424,551
https://en.wikipedia.org/wiki/Fluorescence%20cross-correlation%20spectroscopy
Fluorescence cross-correlation spectroscopy (FCCS) is a spectroscopic technique that examines the interactions of fluorescent particles of different colours as they randomly diffuse through a microscopic detection volume over time, under steady conditions. Discovery Eigen and Rigler first introduced the fluorescence cross-correlation spectroscopy (FCCS) method in 1994. Later, in 1997, Schwille experimentally implemented this method. Theory FCCS is an extension of the fluorescence correlation spectroscopy (FCS) method that uses two fluorescent labels of different colours instead of one. The technique measures coincident green and red intensity fluctuations of distinct molecules that correlate if green and red labelled particles move together through a predefined confocal volume. FCCS utilizes two species that are independently labeled with two different fluorescent probes of different colours. These fluorescent probes are excited and detected by two different laser light sources and detectors typically labeled as "green" and "red". By combining FCCS with a confocal microscope, the technique's capabilities are highlighted, as it becomes possible to detect fluorescent molecules in femtoliter volumes within the nanomolar range, with a high signal-to-noise ratio, and at a microsecond time scale. The normalized cross-correlation function is defined for two fluorescent species, G and R, which are the independent green and red channels, respectively: GGR(τ) = ⟨δFG(t)·δFR(t + τ)⟩ / (⟨FG(t)⟩⟨FR(t)⟩), in which the differential fluorescent signal δFG at a specific time t is correlated with the differential fluorescent signal δFR at a delay time τ later. In the absence of spectral bleed-through – when the fluorescence signal from an adjacent channel is visible in the channel being observed – the cross-correlation function is zero for non-interacting particles. In contrast to FCS, the cross-correlation function increases with increasing numbers of interacting particles. FCCS is mainly used to study bio-molecular interactions both in living cells and in vitro. It allows for measuring simple molecular stoichiometries and binding constants. It is one of the few techniques that can provide information about protein–protein interactions at a specific time and location within a living cell. Unlike fluorescence resonance energy transfer, FCCS does not have a distance limit for interactions, making it suitable for probing large complexes. However, FCCS requires active diffusion of the complexes through the microscope focus on a relatively short time scale, typically seconds. Modeling The mathematical function used to model cross-correlation curves in FCCS is slightly more complex compared to that used in FCS. One of the primary differences is the effective superimposed observation volume, denoted Veff,GR, in which the G and R channels form a single observation volume: Veff,GR = (π/2)^(3/2)·(ω²xy,G + ω²xy,R)·(ω²z,G + ω²z,R)^(1/2), where ωxy,G and ωxy,R are the radial parameters and ωz,G and ωz,R are the axial parameters for the G and R channels respectively. The diffusion time, τD,GR, for a doubly (G and R) fluorescent species is therefore described as follows: τD,GR = (ω²xy,G + ω²xy,R) / (8·DGR), where DGR is the diffusion coefficient of the doubly fluorescent particle. The cross-correlation curve generated from diffusing doubly labelled fluorescent particles can be modelled from the correlation curves of the separate channels. In the ideal case, the amplitude of the cross-correlation function is proportional to the concentration of the doubly labeled fluorescent complex: the cross-correlation amplitude is directly proportional to the concentration of double-labeled (red and green) species.
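As an illustration of the cross-correlation analysis described above, the sketch below computes a normalized cross-correlation curve from two simulated intensity traces. It is a minimal example, not the software of any particular FCCS instrument; the trace length, bin time and lag range are arbitrary assumptions, and the synthetic "common" component stands in for doubly labelled particles.

# Minimal sketch: normalized cross-correlation of two intensity traces,
# G_GR(tau) = <dF_G(t) dF_R(t+tau)> / (<F_G> <F_R>), using synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                              # number of time bins (arbitrary)
common = rng.poisson(2.0, n)             # photons from doubly labelled particles
f_green = common + rng.poisson(1.0, n)   # green channel = common + independent part
f_red   = common + rng.poisson(1.0, n)   # red channel   = common + independent part

def cross_correlation(f_g, f_r, max_lag=50):
    """Return G_GR(tau) for tau = 0..max_lag (in units of the bin time)."""
    dg = f_g - f_g.mean()
    dr = f_r - f_r.mean()
    norm = f_g.mean() * f_r.mean()
    g = np.empty(max_lag + 1)
    for tau in range(max_lag + 1):
        # average of dF_G(t) * dF_R(t + tau) over all available pairs
        g[tau] = np.mean(dg[: len(dg) - tau] * dr[tau:]) / norm
    return g

g_gr = cross_correlation(f_green, f_red)
print(g_gr[:5])   # positive amplitude because the two channels share "common" particles

If the two traces are statistically independent, the computed amplitude fluctuates around zero, consistent with the statement above that the cross-correlation vanishes for non-interacting particles.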
Experimental method FCCS measures the coincident green and red intensity fluctuations of distinct molecules that correlate if green and red labeled particles move together through a predefined confocal volume. To perform fluorescence cross-correlation spectroscopy (FCCS), samples of interest are first labeled with fluorescent probes of different colours. The FCCS setup typically includes a confocal microscope, two laser sources, and two detectors. The confocal microscope is used to focus the laser beams and collect the fluorescence signals. The signals from the detectors are then collected and recorded over time. Data analysis involves cross-correlating the signals to determine the degree of correlation between the two fluorescent probes. This information can be used to extract data on the stoichiometry and binding constants of molecular complexes, as well as the timing and location of interactions within living cells. Applications Fluorescence cross-correlation spectroscopy (FCCS) has several applications in the field of biophysics and biochemistry. Fluorescence cross-correlation spectroscopy (FCCS) is a powerful technique that enables the investigation of interactions between various types of biomolecules, including proteins, nucleic acids, and lipids. FCCS is one of the few techniques that can provide information about protein-protein interactions at a specific time and location within a living cell. FCCS can be used to study the dynamics of biomolecules in living cells, including their diffusion rates and localization. This can provide insights into the function and regulation of cellular processes. Unlike Förster resonance energy transfer, FCCS does not have a distance limit for interactions making it suitable for probing large complexes. However, FCCS requires active diffusion of the complexes through the microscope focus on a relatively short time scale, typically seconds. FCCS allows for measuring simple molecular stoichiometries and binding constants. See also Diffusion coefficient Dynamic light scattering Fluorescence spectroscopy References External links Fluorescence Cross Correlation (FCCS) (Becker & Hickl GmbH, web page) Spectroscopy Physical chemistry Fluorescence techniques Biochemistry methods
Fluorescence cross-correlation spectroscopy
[ "Physics", "Chemistry", "Biology" ]
1,070
[ "Biochemistry methods", "Applied and interdisciplinary physics", "Spectrum (physical sciences)", "Molecular physics", "Instrumental analysis", "Spectroscopy", "nan", "Biochemistry", "Physical chemistry", "Fluorescence techniques" ]
15,149,776
https://en.wikipedia.org/wiki/Configuration%20entropy
In statistical mechanics, configuration entropy is the portion of a system's entropy that is related to discrete representative positions of its constituent particles. For example, it may refer to the number of ways that atoms or molecules pack together in a mixture, alloy or glass, the number of conformations of a molecule, or the number of spin configurations in a magnet. The name might suggest that it relates to all possible configurations or particle positions of a system, excluding the entropy of their velocity or momentum, but that usage rarely occurs. Calculation If the configurations all have the same weighting, or energy, the configurational entropy is given by Boltzmann's entropy formula S = kB ln W, where kB is the Boltzmann constant and W is the number of possible configurations. In a more general formulation, if a system can be in states n with probabilities Pn, the configurational entropy of the system is given by S = −kB Σn Pn ln Pn, which in the perfect disorder limit (all Pn = 1/W) leads to Boltzmann's formula, while in the opposite limit (one configuration with probability 1), the entropy vanishes. This formulation is called the Gibbs entropy formula and is analogous to that of Shannon's information entropy. The mathematical field of combinatorics, and in particular the mathematics of combinations and permutations, is highly important in the calculation of configurational entropy. In particular, this field of mathematics offers formalized approaches for calculating the number of ways of choosing or arranging discrete objects; in this case, atoms or molecules. However, it is important to note that the positions of molecules are not strictly speaking discrete above the quantum level. Thus a variety of approximations may be used in discretizing a system to allow for a purely combinatorial approach. Alternatively, integral methods may be used in some cases to work directly with continuous position functions, usually denoted as a configurational integral. See also Conformational entropy Combinatorics Entropic force Entropy of mixing High entropy oxide Nanomechanics Notes References Statistical mechanics Thermodynamic entropy Philosophy of thermal and statistical physics Entropy
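To make the combinatorial calculation concrete, the sketch below evaluates the configurational entropy of placing NA atoms of one kind on N lattice sites, using W = N!/(NA!(N−NA)!) and S = kB ln W, and compares it with the Gibbs-formula (ideal mixing) estimate. The lattice size and composition are arbitrary assumptions chosen only for illustration.

# Configurational entropy of a binary lattice: S = k_B ln W with
# W = N! / (N_A! (N - N_A)!), compared with the Stirling / ideal-mixing estimate.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def config_entropy_exact(n_sites, n_a):
    ln_w = (math.lgamma(n_sites + 1)
            - math.lgamma(n_a + 1)
            - math.lgamma(n_sites - n_a + 1))  # ln of the binomial coefficient
    return K_B * ln_w

def config_entropy_stirling(n_sites, n_a):
    x = n_a / n_sites
    # Gibbs formula with probabilities x and 1 - x per site:
    # S = -N k_B [x ln x + (1 - x) ln(1 - x)]
    return -n_sites * K_B * (x * math.log(x) + (1 - x) * math.log(1 - x))

print(config_entropy_exact(1000, 300))     # exact count of configurations
print(config_entropy_stirling(1000, 300))  # Stirling approximation, close for large N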
Configuration entropy
[ "Physics", "Chemistry", "Mathematics" ]
419
[ "Thermodynamics stubs", "Statistical mechanics stubs", "Physical phenomena", "Physical chemistry stubs", "Philosophy of thermal and statistical physics", "Physical quantities", "Quantity", "Thermodynamic entropy", "Entropy", "Thermodynamics", "Statistical mechanics", "Physical properties" ]
15,152,001
https://en.wikipedia.org/wiki/Anomalous%20X-ray%20scattering
Anomalous X-ray scattering (AXRS or XRAS) is a non-destructive determination technique within X-ray diffraction that makes use of the anomalous dispersion that occurs when a wavelength is selected that is in the vicinity of an absorption edge of one of the constituent elements of the sample. It is used in materials research to study nanometer-sized differences in structure. Atomic scattering factors In X-ray diffraction the scattering factor f for an atom is roughly proportional to the number of electrons that it possesses. However, for wavelengths that approximate those for which the atom strongly absorbs radiation the scattering factor undergoes a change due to anomalous dispersion. The dispersion not only affects the magnitude of the factor but also imparts a phase shift in the elastic collision of the photon. The scattering factor can therefore best be described as a complex number, f = f0 + Δf′ + i·Δf″. Contrast variation The anomalous aspects of X-ray scattering have become the focus of considerable interest in the scientific community because of the availability of synchrotron radiation. In contrast to desktop X-ray sources that work at a limited set of fixed wavelengths, synchrotron radiation is generated by accelerating electrons and using an undulator (a device of periodically placed dipole magnets) to "wiggle" the electrons in their path, to generate the desired wavelength of X-rays. This allows scientists to vary the wavelength, which in turn makes it possible to vary the scattering factor for one particular element in the sample under investigation. Thus a particular element can be highlighted. This is known as contrast variation. In addition to this effect the anomalous scatter is more sensitive to any deviation from sphericity of the electron cloud around the atom. This can lead to resonant effects involving transitions in the outer shell of the atom: resonant anomalous X-ray scattering. Protein crystallography In protein crystallography, anomalous scattering refers to a change in a diffracting X-ray's phase that is unique from the rest of the atoms in a crystal due to strong X-ray absorbance. The amount of energy that individual atoms absorb depends on their atomic number. The relatively light atoms found in proteins such as carbon, nitrogen, and oxygen do not contribute to anomalous scattering at normal X-ray wavelengths used for X-ray crystallography. Thus, in order to observe anomalous scattering, a heavy atom must be native to the protein or a heavy atom derivative should be made. In addition, the X-ray's wavelength should be close to the heavy atom's absorption edge. List of methods Multi-wavelength anomalous diffraction (MAD) Single-wavelength anomalous diffraction (SAD) Diffraction anomalous fine structure (DAFS) combines the use of anomalous diffraction with X-ray absorption fine structure (XAFS). References External links X-ray Anomalous Scattering at skuld.bmsc.washington.edu. A resource mainly aimed at crystallographers. PHENIX glossary, describes the techniques supported by the commonly-used PHENIX refining program, including MAD & SAD. Scientific techniques X-ray crystallography
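A small numerical sketch of the contrast-variation idea follows: it compares the magnitude of a complex scattering factor far from and near an absorption edge. The numbers are purely illustrative placeholders, not tabulated Δf′ and Δf″ values for any real element.

# Illustrative contrast variation: |f|^2 for an atom far from and near an
# absorption edge, with f = f0 + df1 + 1j*df2 (placeholder values, not real data).
f0 = 28.0                                # "normal" scattering factor (number of electrons)

far_from_edge = f0 + (-1.0) + 1j * 0.5   # small anomalous corrections
near_edge     = f0 + (-8.0) + 1j * 3.5   # corrections grow near the edge

for label, f in [("far from edge", far_from_edge), ("near edge", near_edge)]:
    print(f"{label}: |f|^2 = {abs(f)**2:.1f}")
# The drop in |f|^2 near the edge is what lets a chosen element be
# highlighted (or suppressed) by tuning the synchrotron wavelength.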
Anomalous X-ray scattering
[ "Chemistry", "Materials_science" ]
678
[ "X-ray crystallography", "Crystallography" ]
15,152,367
https://en.wikipedia.org/wiki/Low-angle%20laser%20light%20scattering
Low-angle laser light scattering or LALLS is an application of light scattering that is particularly useful in conjunction with the technique of size exclusion chromatography (SEC), one of the most powerful and widely used techniques to study the molecular mass distribution of a polymer. Typically the eluent of the SEC column is allowed to pass through both a refractive index detector (which gives a measure of the concentration in the solution as a function of time) and a laser scattering cell. The scattered intensity is measured as a function of time at a small angle with respect to the laser beam. The low-angle light scattering data can be analyzed if one assumes that the low-angle data is the same as the scattering at zero angle. For the relevant equations, see the article on static light scattering. Under these conditions the laser signal together with the concentration data can be translated into a curve that yields both Mn and Mw, the number-averaged and weight-averaged molar masses respectively. The combination of these two quantities gives information on the linearity of the polymer. The technique is sometimes complemented or combined with viscometry, and polystyrene standards are available for validation of the results. References Polymer physics Scattering
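As a sketch of how the two detector signals can be combined, the code below computes Mn and Mw from per-slice concentrations and molar masses. The conversion of the raw LALLS signal into a molar mass (via the static light scattering relations mentioned above) is assumed to have been done already through a calibration constant, and all numerical values are made up for illustration.

# Sketch: number- and weight-average molar masses from SEC slices.
# c_i comes from the refractive index detector, M_i from the LALLS signal
# (assumed here to be already converted to molar mass; values are illustrative).
concentrations = [0.1, 0.4, 0.8, 0.5, 0.2]             # g/L per elution slice
molar_masses   = [5.0e5, 3.0e5, 1.5e5, 8.0e4, 4.0e4]   # g/mol per slice

def averages(c, m):
    mn = sum(c) / sum(ci / mi for ci, mi in zip(c, m))  # number average
    mw = sum(ci * mi for ci, mi in zip(c, m)) / sum(c)  # weight average
    return mn, mw

mn, mw = averages(concentrations, molar_masses)
print(f"Mn = {mn:.3g} g/mol, Mw = {mw:.3g} g/mol, dispersity = {mw/mn:.2f}")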
Low-angle laser light scattering
[ "Physics", "Chemistry", "Materials_science" ]
243
[ "Polymer physics", "Polymer stubs", "Scattering stubs", "Scattering", "Condensed matter physics", "Particle physics", "Nuclear physics", "Polymer chemistry", "Organic chemistry stubs" ]
15,152,836
https://en.wikipedia.org/wiki/Positron%20annihilation%20spectroscopy
Positron annihilation spectroscopy (PAS) or sometimes specifically referred to as positron annihilation lifetime spectroscopy (PALS) is a non-destructive spectroscopy technique to study voids and defects in solids. Theory The technique operates on the principle that a positron or positronium will annihilate through interaction with electrons. This annihilation releases gamma rays that can be detected; the time between emission of positrons from a radioactive source and detection of gamma rays due to annihilation corresponds to the lifetime of positron or positronium. When positrons are injected into a solid body, they interact in some manner with the electrons in that species. For solids containing free electrons (such as metals or semiconductors), the implanted positrons annihilate rapidly unless voids such as vacancy defects are present. If voids are available, positrons will reside in them and annihilate less rapidly than in the bulk of the material, on time scales up to ~1 ns. For insulators such as polymers or zeolites, implanted positrons interact with electrons in the material to form positronium. Positronium is a metastable hydrogen-like bound state of an electron and a positron which can exist in two spin states. Para-positronium, p-Ps, is a singlet state (the positron and electron spins are anti-parallel) with a characteristic self-annihilation lifetime of 125 ps in vacuum. Ortho-positronium, o-Ps, is a triplet state (the positron and electron spins are parallel) with a characteristic self-annihilation lifetime of 142 ns in vacuum. In molecular materials, the lifetime of o-Ps is environment dependent and it delivers information pertaining to the size of the void in which it resides. Ps can pick up a molecular electron with an opposite spin to that of the positron, leading to a reduction of the o-Ps lifetime from 142 ns to 1-4 ns (depending on the size of the free volume in which it resides). The size of the molecular free volume can be derived from the o-Ps lifetime via the semi-empirical Tao-Eldrup model. While the PALS is successful in examining local free volumes, it still needs to employ data from combined methods in order to yield free volume fractions. Even approaches to obtain fractional free volume from the PALS data that claim to be independent on other experiments, such as PVT measurements, they still do employ theoretical considerations, such as iso-free-volume amount from Simha-Boyer theory. A convenient emerging method for obtaining free volume amounts in an independent manner are computer simulations; these can be combined with the PALS measurements and help to interpret the PALS measurements. Pore structure in insulators can be determined using the quantum mechanical Tao-Eldrup model and extensions thereof. By changing the temperature at which a sample is analyzed, the pore structure can be fit to a model where positronium is confined in one, two, or three dimensions. However, interconnected pores result in averaged lifetimes that cannot distinguish between smooth channels or channels having smaller, open, peripheral pores due to energetically favored positronium diffusion from small to larger pores. The behavior of positrons in molecules or condensed matter is nontrivial due to the strong correlation between electrons and positrons. Even the simplest case, that of a single positron immersed in a homogeneous gas of electrons, has proved to be a significant challenge for theory. 
The positron attracts electrons to it, increasing the contact density and hence enhancing the annihilation rate. Furthermore, the momentum density of annihilating electron-positron pairs is enhanced near the Fermi surface. Theoretical approaches used to study this problem have included the Tamm-Dancoff approximation, Fermi and perturbed hypernetted chain approximations, density functional theory methods and quantum Monte Carlo. Implementation The experiment itself involves having a radioactive positron source (often 22Na) situated near the analyte. Positrons are emitted near-simultaneously with gamma rays. These gamma rays are detected by a nearby scintillator. References Spectroscopy Semiconductor analysis
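The semi-empirical Tao–Eldrup relation mentioned above connects the ortho-positronium lifetime to the radius of the free-volume hole in which it annihilates. The sketch below evaluates the commonly quoted form of that relation; the constants (an electron-layer thickness ΔR of 1.66 Å and the 0.5 ns spin-averaged annihilation time) are the standard parameterization, taken here as assumptions rather than values stated in this article.

# Tao-Eldrup model (common parameterization): o-Ps lifetime vs. hole radius R,
# tau = 0.5 ns * [1 - R/(R + dR) + sin(2*pi*R/(R + dR)) / (2*pi)]**(-1)
import math

DELTA_R_ANGSTROM = 1.66  # empirical electron-layer thickness (assumption)

def ops_lifetime_ns(radius_angstrom):
    x = radius_angstrom / (radius_angstrom + DELTA_R_ANGSTROM)
    pickoff_overlap = 1.0 - x + math.sin(2.0 * math.pi * x) / (2.0 * math.pi)
    return 0.5 / pickoff_overlap   # 0.5 ns = inverse spin-averaged annihilation rate

for r in (1.0, 2.0, 3.0, 4.0):     # hole radii in angstroms
    print(f"R = {r:.1f} A  ->  tau(o-Ps) ~ {ops_lifetime_ns(r):.2f} ns")

The resulting lifetimes fall in the 1-4 ns range quoted above for positronium in molecular free volumes.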
Positron annihilation spectroscopy
[ "Physics", "Chemistry" ]
888
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
14,065,904
https://en.wikipedia.org/wiki/TCF3
Transcription factor 3 (E2A immunoglobulin enhancer-binding factors E12/E47), also known as TCF3, is a protein that in humans is encoded by the TCF3 gene. TCF3 has been shown to directly enhance Hes1 (a well-known target of Notch signaling) expression. Function This gene encodes a member of the E protein (class I) family of helix-loop-helix transcription factors. The 9aaTAD transactivation domains of E proteins and MLL are very similar and both bind to the KIX domain of general transcriptional mediator CBP. E proteins activate transcription by binding to regulatory E-box sequences on target genes as heterodimers or homodimers, and are inhibited by heterodimerization with inhibitor of DNA-binding (class IV) helix-loop-helix proteins. E proteins play a critical role in lymphopoiesis, and the encoded protein is required for the B and T lymphocyte development. This gene regulates many developmental patterning processes such as lymphocyte and central nervous system (CNS) development. E proteins are involved in the development of lymphocytes. They initiate transcription by binding to regulatory E-box sequences on target genes. Clinical significance Deletion of this gene or diminished activity of the encoded protein may play a role in lymphoid malignancies. This gene is also involved in several chromosomal translocations that are associated with lymphoid malignancies including pre-B-cell acute lymphoblastic leukemia (t(1;19), with PBX1 and t(17;19), with HLF), childhood leukemia (t(19;19), with TFPT) and acute leukemia (t(12;19), with ZNF384). Interactions TCF3 has been shown to interact with: CBFA2T3, CREBBP, ELK3, EP300, ID3, LDB1, LMX1A, LYL1, MAPKAPK3, MyoD, Myogenin, PCAF, TAL1 TWIST1, and UBE2I. References Further reading Transcription factors
TCF3
[ "Chemistry", "Biology" ]
472
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,065,909
https://en.wikipedia.org/wiki/ZNF35
Zinc finger protein 35 is a protein that in humans is encoded by the ZNF35 gene. See also Zinc finger References External links Transcription factors
ZNF35
[ "Chemistry", "Biology" ]
31
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,066,008
https://en.wikipedia.org/wiki/Peroxisome%20proliferator-activated%20receptor%20alpha
Peroxisome proliferator-activated receptor alpha (PPAR-α), also known as NR1C1 (nuclear receptor subfamily 1, group C, member 1), is a nuclear receptor protein functioning as a transcription factor that in humans is encoded by the PPARA gene. Together with peroxisome proliferator-activated receptor delta and peroxisome proliferator-activated receptor gamma, PPAR-alpha is part of the subfamily of peroxisome proliferator-activated receptors. It was the first member of the PPAR family to be cloned in 1990 by Stephen Green and has been identified as the nuclear receptor for a diverse class of rodent hepatocarcinogens that causes proliferation of peroxisomes. Expression PPAR-α is primarily activated through ligand binding. Endogenous ligands include fatty acids such as arachidonic acid as well as other polyunsaturated fatty acids and various fatty acid-derived compounds such as certain members of the 15-hydroxyeicosatetraenoic acid family of arachidonic acid metabolites, e.g. 15(S)-HETE, 15(R)-HETE, and 15(S)-HpETE and 13-hydroxyoctadecadienoic acid, a linoleic acid metabolite. Synthetic ligands include the fibrate drugs, which are used to treat hyperlipidemia, and a diverse set of insecticides, herbicides, plasticizers, and organic solvents collectively referred to as peroxisome proliferators. Function PPAR-α is a transcription factor regulated by free fatty acids, and is a major regulator of lipid metabolism in the liver. PPAR-alpha is activated under conditions of energy deprivation and is necessary for the process of ketogenesis, a key adaptive response to prolonged fasting. Activation of PPAR-alpha promotes uptake, utilization, and catabolism of fatty acids by upregulation of genes involved in fatty acid transport, fatty acid binding and activation, and peroxisomal and mitochondrial fatty acid β-oxidation. Activation of fatty acid oxidation is facilitated by increased expression of CPT1 (which brings long-chain lipids into mitochondria) by PPAR-α. PPAR-α also inhibits glycolysis, while promoting liver gluconeogenesis and glycogen synthesis. In macrophages, PPAR-α inhibits the uptake of glycated low-density lipoprotein (LDL cholesterol), inhibits foam cell (atherosclerosis) formation, and inhibits pro-inflammatory cytokines. Tissue distribution Expression of PPAR-α is highest in tissues that oxidize fatty acids at a rapid rate. In rodents, highest mRNA expression levels of PPAR-alpha are found in liver and brown adipose tissue, followed by heart and kidney. Lower PPAR-alpha expression levels are found in small and large intestine, skeletal muscle and adrenal gland. Human PPAR-alpha seems to be expressed more equally among various tissues, with high expression in liver, intestine, heart, and kidney. Knockout studies Studies using mice lacking functional PPAR-alpha indicate that PPAR-α is essential for induction of peroxisome proliferation by a diverse set of synthetic compounds referred to as peroxisome proliferators. Mice lacking PPAR-alpha also have an impaired response to fasting, characterized by major metabolic perturbations including low plasma levels of ketone bodies, hypoglycemia, and fatty liver. Pharmacology PPAR-α is the pharmaceutical target of fibrates, a class of drugs used in the treatment of dyslipidemia. Fibrates effectively lower serum triglycerides and raises serum HDL-cholesterol levels. 
Although clinical benefits of fibrate treatment have been observed, the overall results are mixed and have led to reservations about the broad application of fibrates for the treatment of coronary heart disease, in contrast to statins. PPAR-α agonists may carry therapeutic value for the treatment of non-alcoholic fatty liver disease. PPAR-alpha may also be a site of action of certain anticonvulsants. An endogenous compound, 7(S)-hydroxydocosahexaenoic acid (7(S)-HDHA), a docosanoid derivative of the omega-3 fatty acid DHA, was isolated as an endogenous high-affinity ligand for PPAR-alpha in the rat and mouse brain. The 7(S) enantiomer bound to PPAR-alpha with micromolar affinity, about 10-fold higher than that of the (R) enantiomer, and could trigger dendritic activation. Previous evidence for the compound's function had been speculative, based on its structure and on studies of its chemical synthesis. Both high-sugar and low-protein diets elevate the circulating liver hormone FGF21 in humans by means of PPAR-α, although this effect can be accompanied by FGF21 resistance. Target genes PPAR-α governs biological processes by altering the expression of a large number of target genes. Accordingly, the functional role of PPAR-alpha is directly related to the biological function of its target genes. Gene expression profiling studies have indicated that PPAR-alpha target genes number in the hundreds. Classical target genes of PPAR-alpha include PDK4, ACOX1, and CPT1. Low- and high-throughput gene expression analyses have allowed the creation of comprehensive maps illustrating the role of PPAR-alpha as a master regulator of lipid metabolism via regulation of numerous genes involved in various aspects of lipid metabolism. These maps, constructed for mouse liver and human liver, put PPAR-alpha at the center of a regulatory hub impacting fatty acid uptake and intracellular binding, mitochondrial β-oxidation and peroxisomal fatty acid oxidation, ketogenesis, triglyceride turnover, gluconeogenesis, and bile synthesis/secretion. Interactions PPAR-α has been shown to interact with: AIP, EP300, HSP90AA1, NCOA1, and NCOR1. Palmitoylethanolamide (PEA) Oleoylethanolamide (OEA) Anandamide (AEA) 7(S)-Hydroxydocosahexaenoic acid (7-HDoHE) PFAS See also Peroxisome proliferator-activated receptor Fibrate Endocannabinoid system References Further reading Intracellular receptors Transcription factors
Peroxisome proliferator-activated receptor alpha
[ "Chemistry", "Biology" ]
1,378
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,066,275
https://en.wikipedia.org/wiki/Inversion%20temperature
The inversion temperature in thermodynamics and cryogenics is the critical temperature below which a non-ideal gas (all gases in reality) that is expanding at constant enthalpy will experience a temperature decrease, and above which will experience a temperature increase. This temperature change is known as the Joule–Thomson effect, and is exploited in the liquefaction of gases. Inversion temperature depends on the nature of the gas. For a van der Waals gas we can calculate the enthalpy using statistical mechanics as, to first order in the density, H ≈ (5/2)N kB T + (N²/V)(b kB T − 2a), where N is the number of molecules, V is volume, T is temperature (in the Kelvin scale), kB is the Boltzmann constant, and a and b are constants depending on intermolecular forces and molecular volume, respectively. From this equation, if enthalpy is kept constant and there is an increase of volume, temperature must change depending on the sign of (b kB T − 2a). Therefore, the inversion temperature is given where this term changes sign, Tinv = 2a/(kB b), or Tinv = (27/4) Tc, where Tc is the critical temperature of the substance. So for T > Tinv, an expansion at constant enthalpy increases temperature as the work done by the repulsive interactions of the gas is dominant, and so the change in kinetic energy is positive. But for T < Tinv, expansion causes temperature to decrease because the work of attractive intermolecular forces dominates, giving a negative change in average molecular speed, and therefore kinetic energy. See also Critical point (thermodynamics) Phase transition Joule–Thomson effect References External links Thermodynamic Concepts and Processes (Chapter 2) (part of the Statistical and Thermal Physics (STP) Curriculum Development Project at Clark University) Temperature Thermodynamic properties Engineering thermodynamics Industrial gases Gases
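A quick numerical check of the van der Waals result follows, written in molar form (Tinv = 2a/(Rb)). The constants used for nitrogen are commonly tabulated values taken here as assumptions, and the comparison with the measured maximum inversion temperature (roughly 620 K) is included only to illustrate how rough the van der Waals estimate is.

# Van der Waals estimate of the inversion temperature, molar form T_inv = 2a/(R b),
# checked against T_inv = (27/4) T_c. N2 constants are commonly tabulated values
# (treated here as assumptions).
R = 8.314                 # J/(mol K)
a = 0.1370                # Pa m^6 / mol^2  (= 1.370 L^2 bar / mol^2) for N2
b = 3.87e-5               # m^3 / mol for N2
t_c = 126.2               # K, critical temperature of N2

t_inv_from_ab = 2 * a / (R * b)
t_inv_from_tc = 27.0 / 4.0 * t_c

print(f"T_inv from a, b      : {t_inv_from_ab:.0f} K")
print(f"T_inv from (27/4) T_c: {t_inv_from_tc:.0f} K")
# Both give ~850 K; the experimental maximum inversion temperature of nitrogen
# is around 620 K, showing the limits of the van der Waals description.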
Inversion temperature
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
345
[ "Scalar physical quantities", "Thermodynamic properties", "Temperature", "Gases", "Physical quantities", "Engineering thermodynamics", "Intensive quantities", "Phases of matter", "SI base quantities", "Quantity", "Industrial gases", "Thermodynamics", "Mechanical engineering", "Chemical pro...
14,068,005
https://en.wikipedia.org/wiki/Prolactin-releasing%20peptide%20receptor
The prolactin-releasing peptide receptor (PrRPR) also known as G-protein coupled receptor 10 (GPR10) is a protein that in humans is encoded by the PRLHR gene. PrRPR is a G-protein coupled receptor that binds the prolactin-releasing peptide (PRLH). Function PrRPR is a 7-transmembrane domain receptor for prolactin-releasing peptide that is highly expressed in the anterior pituitary. References Further reading External links G protein-coupled receptors
Prolactin-releasing peptide receptor
[ "Chemistry" ]
114
[ "G protein-coupled receptors", "Signal transduction" ]
14,069,368
https://en.wikipedia.org/wiki/Deep-sub-voltage%20nanoelectronics
Deep-sub-voltage nanoelectronics are integrated circuits (ICs) operating near theoretical limits of energy consumption per unit of processing. These devices are intended to address the needs of applications such as wireless sensor networks, which have dramatically different requirements from traditional electronics. For example, performance is the primary metric of interest for microprocessors, but for some new devices energy per instruction has become a more sensible metric. An important case for the fundamental, ultimate limit of a logic operation is reversible computing. Tiny autonomous devices (for example, smartdust or autonomous microelectromechanical systems) are based on deep-sub-voltage nanoelectronics. References Meindl J. Low power microelectronics: retrospect and prospect. Proc. IEEE 1995. V.83. NO.4. P. 619-635. Frank M.P. Reversible computing and truly adiabatic circuits: The next great challenge for digital engineering. Powerpoint slideshow Meindl J., Davis J. The fundamental limit on binary switching energy for terascale integration (TSI). IEEE Journal of Solid-State Circuits, 2000. V.35. NO.10. P. 1515-1516. Itoh K. Ultra-low voltage nano-scale memories. Springer. 2007. Silvester D. IC design Strategies at ultra-low voltages Cavin R. K., Zhirnov V. V., Herr D. J. C., Avila A., Hutchby J. Research directions and challenges in nanoelectronics. Journal of Nanoparticle Research, 2006 V.8. P. 841–858. Hanson S., Zhai B., Bernstein K., Blaauw D., Bryant A., Chang L., Das K. K., Haensch W., Nowak E. J., Sylvester D. M. Ultra-low-voltage, minimum-energy CMOS. IBM J. RES. & DEV. 2006. V. 50. NO. 4/5. P. 469-490. Alexander Despotuli, Alexandra Andreeva. High-capacity capacitors for 0.5 voltage nanoelectronics of the future. Modern Electronics № 7, 2007, P. 24-29 Alexander Despotuli, Alexandra Andreeva. A short review on deep-sub-voltage nanoelectronics and related technologies. International Journal of Nanoscience, 2009. V.8. NO.4-5. P. 389-402. Nanoelectronics
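To give a sense of the "theoretical limits of energy consumption" referred to above, the sketch below evaluates the kB·T·ln 2 bound on the energy of an irreversible binary switching event (the kind of limit discussed in the Meindl and Davis reference) and compares it with an assumed energy-per-instruction figure. The device numbers are illustrative assumptions, not measurements.

# Fundamental binary-switching energy k_B * T * ln(2) at room temperature,
# compared with an assumed (illustrative) energy-per-instruction budget.
import math

K_B = 1.380649e-23        # J/K
T = 300.0                 # K

e_switch_min = K_B * T * math.log(2)   # ~ 2.9e-21 J per bit operation
energy_per_instruction = 1e-12         # J, assumed figure for a low-power core

print(f"k_B T ln2 at 300 K: {e_switch_min:.2e} J")
print(f"assumed energy/instruction: {energy_per_instruction:.0e} J "
      f"(~{energy_per_instruction / e_switch_min:.1e} x the limit)")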
Deep-sub-voltage nanoelectronics
[ "Materials_science" ]
546
[ "Nanotechnology", "Nanoelectronics" ]
14,071,313
https://en.wikipedia.org/wiki/CEBPB
CCAAT/enhancer-binding protein beta is a protein that in humans is encoded by the CEBPB gene. Function The protein encoded by this intronless gene is a bZIP transcription factor that can bind as a homodimer to certain DNA regulatory regions. It can also form heterodimers with the related proteins CEBP-alpha, CEBP-delta, and CEBP-gamma. The encoded protein is important in the regulation of genes involved in immune and inflammatory responses and has been shown to bind to the IL-1 response element in the IL-6 gene, as well as to regulatory regions of several acute-phase and cytokine genes. In addition, the encoded protein can bind the promoter and upstream element and stimulate the expression of the collagen type I gene. CEBP-beta is critical for normal functioning of macrophages, an important immune cell sub-type; mice unable to express CEBP-beta have macrophages that cannot differentiate (specialize) and thus are unable to perform all their biological functions—including macrophage-mediated muscle repair. Observational work has shown that expression of CEBP-beta in blood leukocytes is positively associated with muscle strength in humans, emphasizing the importance of the immune system, and particularly macrophages, in the maintenance of muscle function. The function of the CEBPB gene can be effectively examined by siRNA knockdown based on independent validation. Upon further investigation, it was noted that CEBPB has close to 8,600 reported correlations with biological manipulations ranging from molecules to proteins and microRNAs. The protein is found in blood and is upregulated in diseases such as acute myeloid leukemia, glioma, and prostate cancer. It is an intracellular protein, localized primarily to the nucleoplasm. Target genes CEBPB is capable of increasing the expression of several target genes. Among them, some have specific roles in the nervous system, such as the preprotachykinin-1 gene, which gives rise to substance P and neurokinin A, and the choline acetyltransferase gene, responsible for the biosynthesis of the important neurotransmitter acetylcholine. Other targets include genes coding for cytokines such as IL-6, IL-4, IL-5, and TNF-alpha. Genes coding for transporter proteins that confer multidrug resistance to the cells have also been found to be activated by CEBPB. Such genes include ABCC2 and ABCB1. Enhancer Binding-Protein The CEBPB gene encodes a transcription factor that, as noted above, contains a leucine zipper (bZIP) domain and functions as a homodimer; it can also form heterodimers with the related enhancer-binding proteins alpha, delta, and gamma. The activity of this protein is important in regulating genes involved in the immune and inflammatory responses, among other processes. Translation can initiate at several "AUG" start codons, resulting in multiple protein isoforms, each of which has a different biological function, allowing for proliferation, inhibition, or survival. The gene is thus a vital part of both proliferation and differentiation. As a transcription factor it regulates the expression of genes involved in the immune and inflammatory response, including the gluconeogenic pathway and liver recovery, and it has a proliferative effect on many cell types, such as hepatocytes and adipocytes. However, it exerts differential effects on T cells by inhibiting MYC expression and promoting differentiation of the T helper lineage.
It binds to the regulatory regions of several acute-phase and cytokine genes. Cancer CEBPB is one member of the CEBP family of transcription factors. Through integrated analysis of single-cell and bulk RNA-sequencing datasets, CEBPB expression has been noted in macrophages in skin cutaneous melanoma (SKCM), where it is associated with a favorable prognosis in metastatic disease and may serve as a biomarker for patient stratification. Since CEBPB is a transcription factor that regulates gene expression, patients with metastatic melanoma may benefit in the long term from blocking proteins such as CTLA-4, or from targeting other pathways of immune activation, such as CEBPB itself. It is widely expressed in several different cancers. Interactions CEBPB has been shown to interact with: CREB1, CRSP3, DNA damage-inducible transcript 3, EP300, Estrogen receptor alpha, Glucocorticoid receptor, HMGA1, HSF1, Nucleolar phosphoprotein p130, RELA, Serum response factor, SMARCA2, Sp1 transcription factor, TRIM28, and Zif268. See also Ccaat-enhancer-binding proteins References External links Transcription factors
CEBPB
[ "Chemistry", "Biology" ]
1,032
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,071,714
https://en.wikipedia.org/wiki/WHAT%20IF%20software
WHAT IF is a computer program used in a wide variety of computational (in silico) macromolecular structure research fields. The software provides a flexible environment to display, manipulate, and analyze small and large molecules, proteins, nucleic acids, and their interactions. History The first version of the WHAT IF software was developed by Gert Vriend in 1987 at the University of Groningen, Groningen, Netherlands. Most of its development occurred during 1989–2000 at the European Molecular Biology Laboratory (EMBL) in Heidelberg, Germany. Other contributors include Chris Sander, and Wolfgang Kabsch. In 2000, maintenance of the software moved to the Dutch Center for Molecular and Biomolecular Informatics (CMBI) in Nijmegen, Netherlands. It is available for in-house use, or as a web-based resource. , the original paper describing WHAT IF has been cited more than 4,000 times. Software WHAT IF provides a flexible environment to display, manipulate, and analyze small molecules, proteins, nucleic acids, and their interactions. One notable use was detecting many millions of errors (often small, but sometimes catastrophic) in Protein Data Bank (PDB) files. WHAT IF also provides an environment for: homology modeling of protein tertiary structures and quaternary structures; validating protein structures, notably those deposited in the PDB; correcting protein structures; visualising macromolecules and their interaction partners (for example, lipids, drugs, ions, and water), and manipulating macromolecules interactively. WHAT IF is compatible with several other bioinformatics software packages, including YASARA and Jmol. See also List of molecular graphics systems Molecule editor External links References Molecular modelling software Bioinformatics software Protein structure
WHAT IF software
[ "Chemistry", "Biology" ]
364
[ "Molecular modelling software", "Computational chemistry software", "Bioinformatics software", "Bioinformatics", "Molecular modelling", "Structural biology", "Protein structure" ]
14,073,967
https://en.wikipedia.org/wiki/Enthalpy%20of%20mixing
In thermodynamics, the enthalpy of mixing (also heat of mixing and excess enthalpy) is the enthalpy liberated or absorbed from a substance upon mixing. When a substance or compound is combined with any other substance or compound, the enthalpy of mixing is the consequence of the new interactions between the two substances or compounds. This enthalpy, if released exothermically, can in an extreme case cause an explosion. Enthalpy of mixing can often be ignored in calculations for mixtures where other heat terms exist, or in cases where the mixture is ideal. The sign convention is the same as for enthalpy of reaction: when the enthalpy of mixing is positive, mixing is endothermic, while negative enthalpy of mixing signifies exothermic mixing. In ideal mixtures, the enthalpy of mixing is null. In non-ideal mixtures, the thermodynamic activity of each component is different from its concentration by multiplying with the activity coefficient. One approximation for calculating the heat of mixing is Flory–Huggins solution theory for polymer solutions. Formal definition For a liquid, enthalpy of mixing can be defined as follows Where: H(mixture) is the total enthalpy of the system after mixing ΔHmix is the enthalpy of mixing xi is the mole fraction of component i in the system Hi is the enthalpy of the pure solution i Enthalpy of mixing can also be defined using Gibbs free energy of mixing However, Gibbs free energy of mixing and entropy of mixing tend to be more difficult to determine experimentally. As such, enthalpy of mixing tends to be determined experimentally in order to calculate entropy of mixing, rather than the reverse. Enthalpy of mixing is defined exclusively for the continuum regime, which excludes molecular-scale effects (However, first-principles calculations have been made for some metal-alloy systems such as Al-Co-Cr or β-Ti). When two substances are mixed the resulting enthalpy is not an addition of the pure component enthalpies, unless the substances form an ideal mixture. The interactions between each set of molecules determines the final change in enthalpy. For example, when compound “x” has a strong attractive interaction with compound “y” the resulting enthalpy is exothermic. In the case of alcohol and its interactions with a hydrocarbon, the alcohol molecule participates in hydrogen bonding with other alcohol molecules, and these hydrogen bonding interactions are much stronger than alcohol-hydrocarbon interactions, which results in an endothermic heat of mixing. Calculations Enthalpy of mixing is often calculated experimentally using calorimetry methods. A bomb calorimeter is created to be an isolated system with an insulated frame and a reaction chamber, and is used to transfer the heat of mixing into surrounding water for which the temperature is measured. A typical solution would use the equation (derived from the definition above) in conjunction experimentally determined total-mixture enthalpies and tabulated pure species enthalpies, the difference being equal to enthalpy of mixing. More complex models, such as the Flory-Huggins and UNIFAC models, allow prediction of enthalpies of mixing. Flory-Huggins is useful in calculating enthalpies of mixing for polymeric mixtures and considers a system from a multiplicity perspective. 
Calculations of organic enthalpies of mixing can be made by modifying UNIFAC using the equations Where: = liquid mole fraction of i = partial molar excess enthalpy of i = number of groups of type k in i = excess enthalpy of group k = excess enthalpy of group k in pure i = area parameter of group k = area fraction of group m = mole fraction of group m in the mixture = Temperature dependent coordination number It can be seen that prediction of enthalpy of mixing is incredibly complex and requires a plethora of system variables to be known. This explains why enthalpy of mixing is typically experimentally determined. Relation to the Gibbs free energy of mixing The excess Gibbs free energy of mixing can be related to the enthalpy of mixing by the use of the Gibbs-Helmholtz equation: or equivalently In these equations, the excess and total enthalpies of mixing are equal because the ideal enthalpy of mixing is zero. This is not true for the corresponding Gibbs free energies however. Ideal and regular mixtures An ideal mixture is any in which the arithmetic mean (with respect to mole fraction) of the two pure substances is the same as that of the final mixture. Among other important thermodynamic simplifications, this means that enthalpy of mixing is zero: . Any gas that follows the ideal gas law can be assumed to mix ideally, as can hydrocarbons and liquids with similar molecular interactions and properties. A regular solution or mixture has a non-zero enthalpy of mixing with an ideal entropy of mixing. Under this assumption, scales linearly with , and is equivalent to the excess internal energy. Mixing binary mixtures to form ternary mixtures The enthalpy of mixing for a ternary mixture can be expressed in terms of the enthalpies of mixing of the corresponding binary mixtures: Where: is the mole fraction of species i in the ternary mixture is the molar enthalpy of mixing of the binary mixture consisting of species i and j This method requires that the interactions between two species are unaffected by the addition of the third species. is then evaluated for a binary concentration ratio equal to the concentration ratio of species i to j in the ternary mixture (). Intermolecular forces Intermolecular forces are the main constituent of changes in the enthalpy of a mixture. Stronger attractive forces between the mixed molecules, such as hydrogen-bonding, induced-dipole, and dipole-dipole interactions result in a lower enthalpy of the mixture and a release of heat. If strong interactions only exist between like-molecules, such as H-bonds between water in a water-hexane solution, the mixture will have a higher total enthalpy and absorb heat. See also Apparent molar property Enthalpy Enthalpy change of solution Excess molar quantity Entropy of mixing Calorimetry Miedema's Model References External links Can. J. Chem. Eng. Duran Kaliaguine Enthalpy
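A minimal numerical sketch of the definitions above follows: it evaluates the mixture enthalpy H(mixture) = Σ xi·Hi + ΔHmix for a binary regular solution, using the standard regular-solution form ΔHmix = Ω·x1·x2 (zero for an ideal mixture). The interaction parameter Ω and the pure-component enthalpies are arbitrary assumptions, not data from the article.

# Binary regular-solution sketch: H(mixture) = x1*H1 + x2*H2 + dH_mix,
# with dH_mix = omega * x1 * x2 (ideal mixture: omega = 0, so dH_mix = 0).
# All numerical values are illustrative assumptions.
def mixing_enthalpy(x1, omega):
    x2 = 1.0 - x1
    return omega * x1 * x2              # J/mol; sign of omega sets endo/exothermic

def mixture_enthalpy(x1, h1, h2, omega):
    x2 = 1.0 - x1
    return x1 * h1 + x2 * h2 + mixing_enthalpy(x1, omega)

H1, H2 = -2.0e4, -1.5e4                 # pure-component molar enthalpies (assumed), J/mol
for x1 in (0.0, 0.25, 0.5, 0.75, 1.0):
    dh = mixing_enthalpy(x1, omega=4.0e3)      # positive omega -> endothermic mixing
    print(f"x1 = {x1:.2f}: dH_mix = {dh:7.1f} J/mol, "
          f"H(mixture) = {mixture_enthalpy(x1, H1, H2, 4.0e3):9.1f} J/mol")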
Enthalpy of mixing
[ "Physics", "Chemistry", "Mathematics" ]
1,316
[ "Enthalpy", "Quantity", "Physical quantities", "Thermodynamic properties" ]
14,075,504
https://en.wikipedia.org/wiki/VEGFR1
Vascular endothelial growth factor receptor 1 is a protein that in humans is encoded by the FLT1 gene. Function FLT1 is a member of VEGF receptor gene family. It encodes a receptor tyrosine kinase which is activated by VEGF-A, VEGF-B, and placental growth factor. The sequence structure of the FLT1 gene resembles that of the FMS (now CSF1R) gene; hence, Yoshida et al. (1987) proposed the name FLT as an acronym for FMS-like tyrosine kinase. The ablation of VEGFR1 by chemical and genetic means has also recently been found to augment the conversion of white adipose tissue to brown adipose tissue as well as increase brown adipose angiogenesis in mice. Functional genetic variation in FLT1 (rs9582036) has been found to affect non-small cell lung cancer survival. Interactions FLT1 has been shown to interact with PLCG1 and vascular endothelial growth factor B (VEGF-B). See also VEGF receptors References Further reading Tyrosine kinase receptors
VEGFR1
[ "Chemistry" ]
234
[ "Tyrosine kinase receptors", "Signal transduction" ]